I have an Iomega IX2-200 which came with 2TB (1.8TB usable) of space.
It has two disks set up as RAID1.
I am trying to upgrade this to 4TB disks.
So far this is the process I have followed:
Removed the second disk from the IX2 and replaced it with a 4TB disk.
The IX2 automatically started to resync/mirror disk 1 (2TB) onto the new 4TB disk (progress can be checked as shown after these steps).
After several hours, the second disk showed as 1.8TB.
Replaced the first disk with another 4TB drive and restarted.
The IX2 again started mirroring disk 2 onto disk 1.
Several hours later I had two 4TB disks in the IX2, but only 1.8TB showing as available.
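If SSH access to the IX2 is enabled, the rebuild can be watched from a shell. A minimal check, assuming the data array is /dev/md1 (the same device used in the commands further down):

cat /proc/mdstat           # shows the resync progress as a percentage
mdadm --detail /dev/md1    # reports the array state and rebuild status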
The IX2 does not have gdisk installed, so I removed the disks, connected them to a Linux server as USB drives, and ran gdisk.
This enabled me to extend the data partition (type 0700, Microsoft basic data).
Repeated with the other disk.
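For anyone repeating this, the gdisk steps are roughly as follows (a sketch rather than my exact session; it assumes the data partition is partition 2, matching the sda2/sdb2 entries in the config file further down, and the recreated partition must start at the same sector as the old one):

gdisk /dev/sdX          # the 4TB disk, attached via USB
p                       # print the table and note partition 2's start sector
d                       # delete the data partition
2
n                       # recreate it with the same start sector...
2
<same start sector>
<Enter>                 # ...and the default (maximum) end sector
t                       # set the type code back to 0700 (Microsoft basic data)
2
0700
w                       # write the new table to disk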
Then I put the disks back into the IX2 and rebooted.
Grow the RAID array to its maximum size:
mdadm --grow /dev/md1 --size=max
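One aside, based on how LVM normally behaves rather than anything shown here: after mdadm --grow, the LVM physical volume on /dev/md1 usually has to be resized before the volume group can see the extra space. Judging by the vgdisplay output below, the volume group here already sees it, but otherwise the step would be:

pvresize /dev/md1       # let LVM pick up the full size of the grown array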
Check the results:
vgdisplay

  --- Volume group ---
  VG Name               5244dd0f_vg
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  6
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                1
  Open LV               0
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               3.62 TB
  PE Size               4.00 MB
  Total PE              948739
  Alloc PE / Size       471809 / 1.80 TB
  Free  PE / Size       476930 / 1.82 TB
  VG UUID               FB2tzp-8Gr2-6Dlj-9Dck-Tyc4-Gxx5-HHIsBD

  --- Volume group ---
  VG Name               md0_vg
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  3
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                2
  Open LV               2
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               20.01 GB
  PE Size               4.00 MB
  Total PE              5122
  Alloc PE / Size       5122 / 20.01 GB
  Free  PE / Size       0 / 0
  VG UUID               EA3tJR-nVdm-0Dcf-YtBE-t1Qj-peHc-Sh0zXe
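The important line is the Free PE / Size entry for 5244dd0f_vg: 476930 free extents × 4 MB per extent ≈ 1.82 TB of unallocated space in the volume group, which is the capacity the new disks added.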
Result: the data volume still shows as 1.8TB:
df -h
Filesystem                          Size  Used Avail Use% Mounted on
rootfs                               50M  2.5M   48M   5% /
/dev/root.old                       6.5M  2.1M  4.4M  33% /initrd
none                                 50M  2.5M   48M   5% /
/dev/md0_vg/BFDlv                   4.0G  607M  3.2G  16% /boot
/dev/loop0                          576M  569M  6.8M  99% /mnt/apps
/dev/loop1                          4.9M  2.2M  2.5M  47% /etc
/dev/loop2                          212K  212K     0 100% /oem
tmpfs                               122M     0  122M   0% /mnt/apps/lib/init/rw
tmpfs                               122M     0  122M   0% /dev/shm
/dev/mapper/md0_vg-vol1              16G  1.2G   15G   8% /mnt/system
/dev/mapper/5244dd0f_vg-lv58141b0d  1.8T  1.7T  152G  92% /mnt/pools/A/A0
I spotted a couple of config files containing volume sizes, so I edited the Size values for Ident 2 and 3, as shown below:
<Partitions>
    <Partition Ident="0" Drive="0" Size="21484429312" Device="sda1" SysPartition="1"></Partition>
    <Partition Ident="1" Drive="1" Size="21484429312" Device="sdb1" SysPartition="1"></Partition>
    <Partition Ident="2" Drive="0" Size="3979300000000" Device="sda2" SysPartition="0"></Partition>
    <Partition Ident="3" Drive="1" Size="3979300000000" Device="sdb2" SysPartition="0"></Partition>
</Partitions>
Rebooted, but still only 1.8TB is usable.
You did everything except the last two steps:
Resizing the logical volume. You have 1.82TB of free space showing in your vgdisplay, so you've done everything up to this point correctly. Now you just need to resize the LV. For example:
lvresize -l +100%FREE /dev/mapper/5244dd0f_vg-lv58141b0d
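Once that has run, lvs (or lvdisplay) should report the logical volume at roughly 3.6TB rather than 1.8TB:

lvs 5244dd0f_vg       # the LSize column should now show ~3.6T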
Finally, resizing the filesystem within the logical volume. How to do that varies depending on which filesystem you used, but that information is not available in your post.
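As an illustration only, since the filesystem type here is an assumption on my part: if the pool is ext3/ext4 it can usually be grown online with resize2fs, whereas XFS is grown through its mount point with xfs_growfs:

resize2fs /dev/mapper/5244dd0f_vg-lv58141b0d    # ext3/ext4: grow to fill the LV
xfs_growfs /mnt/pools/A/A0                      # XFS: grow the mounted filesystem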