I’m new to Linux device management, and filesystems generally, so the answer to this question might be “No, of course not. What were you thinking?” or “Yes, of course. Isn’t it obvious?” All the same I’m hoping a little expertise might clear my confusion.
I recently found a legacy script that creates and attaches volumes to EC2 instances. Suppose I have four EBS volumes, sdf1 through sdf4. The relevant portion looks something like this:
# Create a RAID0 array
pvcreate /dev/xvdf1 /dev/xvdf2 /dev/xvdf3 /dev/xvdf4
vgcreate myvg /dev/xvdf1 /dev/xvdf2 /dev/xvdf3 /dev/xvdf4
lvcreate --stripes 4 --stripesize 256 --extents 100%VG --name mylv myvg
mkfs.xfs /dev/myvg/mylv
That’s the entirety of the filesystem creation. In contrast, every single tutorial I can find on the web (and these are but a sampling) uses mdadm first, then manipulates the resulting device, usually something like this:
mdadm --verbose --create /dev/md0 --level=0 --chunk=256 --raid-devices=4 /dev/sdf1 /dev/sdf2 /dev/sdf3 /dev/sdf4
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
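The manipulation afterwards is usually just formatting and mounting the array device directly, along these lines (the mount point is my own example):

```shell
# Format the assembled array device and mount it
mkfs.xfs /dev/md0
mkdir -p /mnt/data
mount /dev/md0 /mnt/data
```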
I sort of understand, albeit loosely and without grasping the details, that mdadm creates software RAID arrays, and that these are distinct from hardware RAID, but I can’t figure out what, if anything, the script snippet above creates. Is it RAID? Is it not? Is it something else entirely?
“No, of course not. What were you thinking?”
The mdadm RAID 0 array stripes data across all four volumes in small (here, 256 KiB) chunks, giving you the performance improvement you expect from RAID 0.
The LVM commands you listed perform the same striping (via --stripes 4 --stripesize 256), making the result functionally equivalent to RAID 0. This is not LVM’s default behavior: without --stripes, lvcreate lays the volumes out linearly (concatenated), which gives you the combined capacity but no striping.
You could use either approach, but the LVM approach is actually more limiting here, since you won’t be able to add volumes to the striped logical volume later without completely recreating it.
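If you want to confirm what either stack is actually doing, both report their stripe layout; using the device and volume names from the snippets above:

```shell
# LVM: show the stripe count and stripe size of the logical volume
lvs -o lv_name,stripes,stripe_size myvg

# mdadm: show the RAID level and chunk size of the array
mdadm --detail /dev/md0
```

For the LVM script, lvs should report 4 stripes with a 256 KiB stripe size, which is exactly the geometry the mdadm command requests with --raid-devices=4 --chunk=256.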
This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License.