Access clean RAID5 on new machine using 2 of 3 disks

PeptideChain asked:

A server of mine is dead. On the server I had three disks with a software RAID on the second partition of each drive.

Now I have inserted two of the three disks into a PC (no room for the third!). Here is what I have tried so far:

# cat /proc/mdstat 
Personalities : [raid1] 
md125 : inactive sdc2[5](S) sdd2[4](S)
      5859503624 blocks super 1.1

md126 : active raid1 sda[1] sdb[0]
      488383488 blocks super external:/md127/0 [2/2] [UU]

md127 : inactive sda[1](S) sdb[0](S)
      6192 blocks super external:imsm

unused devices: <none>

Only md125 is relevant here; md127 is some old garbage (no idea, not relevant).
I stopped md125 with mdadm --stop /dev/md125 and now:

# cat /proc/mdstat 
Personalities : [raid1] 
md126 : active raid1 sda[1] sdb[0]
      488383488 blocks super external:/md127/0 [2/2] [UU]

md127 : inactive sda[1](S) sdb[0](S)
      6192 blocks super external:imsm

unused devices: <none>

Further:

#  mdadm --examine /dev/sd[c-d]2
/dev/sdc2:
          Magic : a92b4efc
        Version : 1.1
    Feature Map : 0x1
     Array UUID : f94898fc:8310c296:adb8b51e:74344af4
           Name : socrates:0
  Creation Time : Fri Aug  3 21:55:59 2012
     Raid Level : raid5
   Raid Devices : 3

 Avail Dev Size : 5859503120 (2794.03 GiB 3000.07 GB)
     Array Size : 5859503104 (5588.06 GiB 6000.13 GB)
  Used Dev Size : 5859503104 (2794.03 GiB 3000.07 GB)
    Data Offset : 2048 sectors
   Super Offset : 0 sectors
   Unused Space : before=1976 sectors, after=16 sectors
          State : active
    Device UUID : d136f9e1:9971b337:52b603e1:6c711fd0

Internal Bitmap : 8 sectors from superblock
    Update Time : Sat Aug 24 12:33:37 2019
       Checksum : 67b6a3c3 - correct
         Events : 5440945

         Layout : left-symmetric
     Chunk Size : 512K

   Device Role : Active device 1
   Array State : AAA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdd2:
          Magic : a92b4efc
        Version : 1.1
    Feature Map : 0x1
     Array UUID : f94898fc:8310c296:adb8b51e:74344af4
           Name : socrates:0
  Creation Time : Fri Aug  3 21:55:59 2012
     Raid Level : raid5
   Raid Devices : 3

 Avail Dev Size : 5859504128 (2794.03 GiB 3000.07 GB)
     Array Size : 5859503104 (5588.06 GiB 6000.13 GB)
  Used Dev Size : 5859503104 (2794.03 GiB 3000.07 GB)
    Data Offset : 2048 sectors
   Super Offset : 0 sectors
   Unused Space : before=1976 sectors, after=1024 sectors
          State : active
    Device UUID : 7960d3f8:10353972:2cdd25bc:681bb674

Internal Bitmap : 8 sectors from superblock
    Update Time : Sat Aug 24 12:33:37 2019
       Checksum : 38cf65af - correct
         Events : 5440945

         Layout : left-symmetric
     Chunk Size : 512K

   Device Role : Active device 0
   Array State : AAA ('A' == active, '.' == missing, 'R' == replacing)
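Before attempting any assembly it is worth confirming that the two superblocks actually belong together: the members must share the same Array UUID and the same Events count, otherwise one of them is stale. A minimal sketch of that check, run here against the values copied from the --examine output above rather than against the live devices:

```shell
# The two fields that must match across members, copied verbatim from
# the question's --examine output (one pair per member disk).
cat > examine-extract.txt <<'EOF'
Array UUID : f94898fc:8310c296:adb8b51e:74344af4
Events : 5440945
Array UUID : f94898fc:8310c296:adb8b51e:74344af4
Events : 5440945
EOF

# Each distinct line should appear exactly twice (once per member).
# A differing Events counter would mean one superblock is out of date.
sort examine-extract.txt | uniq -c
```

On the real machine the equivalent check would be mdadm --examine /dev/sd[c-d]2 | grep -E 'Array UUID|Events'. Here both fields agree, which is why a degraded assembly is plausible at all.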

I would like to get it active and read-only.
Piecing together different web sites, the command should be:

mdadm --assemble --verbose --readonly --scan --run

--run because the array is incomplete.

This is what I get (omitting the other arrays):

mdadm: /dev/sdc2 has wrong uuid.
mdadm: no recogniseable superblock on /dev/sdc1
mdadm: Cannot assemble mbr metadata on /dev/sdc
mdadm: /dev/sdd2 has wrong uuid.
mdadm: no recogniseable superblock on /dev/sdd1
mdadm: Cannot assemble mbr metadata on /dev/sdd

I followed the question here.

I also have the following information:

# mdadm --examine --scan
ARRAY metadata=imsm UUID=407fbb06:df3d3717:dd6d0115:5bfe417b
ARRAY /dev/md/Volume1 container=407fbb06:df3d3717:dd6d0115:5bfe417b member=0 UUID=1cb761a5:8dcdc9fd:37cddbc1:b04cf067
ARRAY /dev/md/0  metadata=1.1 UUID=f94898fc:8310c296:adb8b51e:74344af4 name=socrates:0

The last line is the important one: socrates is the hostname of the dead machine.
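That last ARRAY line is exactly what belongs in /etc/mdadm.conf. Rather than pasting it by hand, a common shortcut (run as root) is to append the whole scan output and then prune what you don't want:

```shell
# Append all detected ARRAY lines to the config, then edit the file and
# delete the imsm container entries, keeping only the socrates:0 line.
mdadm --examine --scan >> /etc/mdadm.conf
```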

Incredibly, trying again got me further. I added the line

ARRAY /dev/md/0  metadata=1.1 UUID=f94898fc:8310c296:adb8b51e:74344af4 name=socrates:0

to /etc/mdadm.conf,

and running mdadm --assemble --verbose --readonly --scan --run now gives:

mdadm: /dev/sdc2 is identified as a member of /dev/md/0, slot 1.
mdadm: /dev/sdd2 is identified as a member of /dev/md/0, slot 0.
mdadm: added /dev/sdc2 to /dev/md/0 as 1
mdadm: no uptodate device for slot 2 of /dev/md/0
mdadm: added /dev/sdd2 to /dev/md/0 as 0
mdadm: failed to RUN_ARRAY /dev/md/0: Input/output error
mdadm: Not enough devices to start the array while not clean - consider --force.

My question is now: what exactly happens if I add --force to the command?

My answer:


The man page states what will happen if you use --force:

       -f, --force
              Assemble  the array even if the metadata on some devices appears
              to be out-of-date.  If mdadm cannot find enough working  devices
              to  start the array, but can find some devices that are recorded
              as having failed, then it will mark those devices as working  so
              that  the array can be started.  An array which requires --force
              to be started may contain data corruption.  Use it carefully.

You should make every effort to connect the third disk, if you actually have it. It seems quite unlikely that you have no access to any PC hardware where all three disks can be plugged in; disconnect an existing disk if necessary. Aren't you trying to recover the data? Without all three disks, a degraded RAID5 has no redundancy: if either of the two remaining drives fails, you lose everything. You also run the risk that one of the drives already holds corrupt data. With all three disks connected, such corruption could be repaired; with only two, you may lose data.
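Should you decide to proceed with two disks anyway, a cautious degraded assembly might look like the following. This is a sketch, not a recipe: the device names are taken from the question, and /dev/md0 is an assumed target name.

```shell
# Name only the two known members explicitly, so the PC's own imsm
# arrays (md126/md127) are left alone. --readonly prevents any resync
# or bitmap update; --run --force starts the array despite the missing
# third member, with the data-corruption caveat quoted above.
mdadm --assemble --verbose --readonly --run --force \
    /dev/md0 /dev/sdc2 /dev/sdd2

# Mount the contained filesystem read-only as well, so nothing at all
# is written back to the surviving disks during recovery.
mount -o ro /dev/md0 /mnt
```

Copy the data off immediately; a degraded two-of-three RAID5 is one drive failure away from total loss.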


View the full question and any other answers on Server Fault.

Creative Commons License
This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License.