Failed Drive Replacement

Re: Failed Drive Replacement

Postby hvymetal86 » Mon Jun 12, 2017 11:51 am

Code: Select all
root@HvyMtlNAS:/ # mdadm --assemble /dev/md0 --run
root@HvyMtlNAS:/ # cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
unused devices: <none>
root@HvyMtlNAS:/ # mdadm --detail /dev/md0
/dev/md0:
        Version :
     Raid Level : raid0
  Total Devices : 0

          State : inactive

    Number   Major   Minor   RaidDevice

Re: Failed Drive Replacement

Postby Jocko » Mon Jun 12, 2017 2:02 pm

So try
Code: Select all
mdadm --assemble /dev/md0 --uuid=2640f25a:4ed24787:e0e498eb:d7731140
check it
Code: Select all
cat /proc/mdstat
mdadm --detail /dev/md0


And if it fails again, try with
Code: Select all
mdadm --assemble --scan
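(If --scan also does nothing, a read-only way to see which arrays mdadm can actually find in the on-disk superblocks is the following; this is only a suggestion, it just prints ARRAY lines and changes nothing:)
Code: Select all
mdadm --examine --scan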

Re: Failed Drive Replacement

Postby hvymetal86 » Mon Jun 12, 2017 2:51 pm

First assemble command failed:
Code: Select all
root@HvyMtlNAS:/ # mdadm --assemble /dev/md0 --uuid=2640f25a:4ed24787:e0e498eb:d7731140
mdadm: failed to get exclusive lock on mapfile - continue anyway...
mdadm: WARNING /dev/sde8 and /dev/sde appear to have very similar superblocks.
      If they are really different, please --zero the superblock on one
      If they are the same or overlap, please remove one from the
      DEVICE list in mdadm.conf.

Ran the next two commands, which didn't return much because of the failure:
Code: Select all
root@HvyMtlNAS:/ # cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
unused devices: <none>

root@HvyMtlNAS:/ # mdadm --detail /dev/md0
mdadm: cannot open /dev/md0: No such file or directory

At first I thought the last command was standalone and ran it by itself, but then I realized the extra --scan flag probably should have been added to the first command. I tried that too, but it produced no output:
Code: Select all
root@HvyMtlNAS:/ # mdadm --assemble --scan
mdadm: failed to get exclusive lock on mapfile
root@HvyMtlNAS:/ # mdadm --assemble --scan /dev/md0 --uuid=2640f25a:4ed24787:e0e498eb:d7731140
root@HvyMtlNAS:/ #

Re: Failed Drive Replacement

Postby Jocko » Mon Jun 12, 2017 3:03 pm

Ok I see why mdadm fails to assemble md0:

Currently you have a missing component (sdc8) and it rejects sde8 with this warning: "/dev/sde8 and /dev/sde appear to have very similar superblocks".
So there are not enough components to build the raid.

! Please do not try to remove the superblock as suggested ("If they are really different, please --zero the superblock on one"): a member with the same problem deleted the superblock on sde, and that also wiped the superblock on sde8.
In your case you would lose your raid and data :mrgreen:
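(A read-only way to compare the two superblocks mdadm is complaining about; just a suggestion, --examine only reads and does not modify anything:)
Code: Select all
mdadm --examine /dev/sde
mdadm --examine /dev/sde8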


Please post the output of
Code: Select all
gdisk -l /dev/sdd
gdisk -l /dev/sde
(I think there is an issue with the last sector used by sde8)

In mdadm.conf, change the line
Code: Select all
DEVICE /dev/sd* /dev/se*
to
DEVICE /dev/sd[abcde]8 /dev/se*
and try first
Code: Select all
mdadm --assemble /dev/md0 --run
then, if it fails, the other assemble commands previously posted
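(Just a sanity check before re-running assemble, to confirm which device nodes the new DEVICE glob matches; purely informational:)
Code: Select all
ls -l /dev/sd[abcde]8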

Re: Failed Drive Replacement

Postby hvymetal86 » Mon Jun 12, 2017 3:49 pm

Note: I have not done anything with the superblock.

Partition info commands:
Code: Select all
root@HvyMtlNAS:/ # gdisk -l /dev/sdd
GPT fdisk (gdisk) version 0.8.5

Partition table scan:
  MBR: protective
  BSD: not present
  APM: not present
  GPT: present

Found valid GPT with protective MBR; using GPT.
Disk /dev/sdd: 3907029168 sectors, 1.8 TiB
Logical sector size: 512 bytes
Disk identifier (GUID): F3E49E81-3213-436B-BF9F-1302D44C4CD0
Partition table holds up to 128 entries
First usable sector is 34, last usable sector is 3907029134
Partitions will be aligned on 2048-sector boundaries
Total free space is 5803998 sectors (2.8 GiB)

Number  Start (sector)    End (sector)  Size       Code  Name
   8         5804032      3907029134   1.8 TiB     FD00  Linux RAID
Code: Select all
root@HvyMtlNAS:/ # gdisk -l /dev/sde
GPT fdisk (gdisk) version 0.8.5

Partition table scan:
  MBR: protective
  BSD: not present
  APM: not present
  GPT: present

Found valid GPT with protective MBR; using GPT.
Disk /dev/sde: 3907029168 sectors, 1.8 TiB
Logical sector size: 512 bytes
Disk identifier (GUID): AC689F36-3A3A-4758-A70A-C3FF4D1428E9
Partition table holds up to 128 entries
First usable sector is 34, last usable sector is 3907029134
Partitions will be aligned on 2048-sector boundaries
Total free space is 5803998 sectors (2.8 GiB)

Number  Start (sector)    End (sector)  Size       Code  Name
   8         5804032      3907029134   1.8 TiB     FD00  Linux RAID
Edited the conf file again:
Code: Select all
root@HvyMtlNAS:/ # vi /rw_fs/etc/mdadm.conf
root@HvyMtlNAS:/ # cat /rw_fs/etc/mdadm.conf
CREATE owner=root group=root mode=0666 auto=yes metadata=1.0
PROGRAM /usr/bin/mdadm-events
DEVICE /dev/sd[abcde]8 /dev/se*
ARRAY /dev/md0 metadata=0.9 level=raid5 num-devices=5 UUID=2640f25a:4ed24787:e0e498eb:d7731140

MAILADDR [REDACTED]@gmail.com
Assemble command completed with no output:
Code: Select all
root@HvyMtlNAS:/ # mdadm --assemble /dev/md0
root@HvyMtlNAS:/ #
Ran the commands given earlier to check the array after assembling:
Code: Select all
root@HvyMtlNAS:/ # cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
unused devices: <none>
root@HvyMtlNAS:/ # mdadm --detail /dev/md0
mdadm: cannot open /dev/md0: No such file or directory
Tried other previously given assemble commands:
Code: Select all
root@HvyMtlNAS:/ # mdadm --assemble /dev/md0 --run
root@HvyMtlNAS:/ # cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
unused devices: <none>
root@HvyMtlNAS:/ # mdadm --detail /dev/md0
mdadm: cannot open /dev/md0: No such file or directory
The one with the UUID appears to have worked:
Code: Select all
root@HvyMtlNAS:/ # mdadm --assemble /dev/md0 --uuid=2640f25a:4ed24787:e0e498eb:d7731140
mdadm: failed to get exclusive lock on mapfile - continue anyway...
mdadm: /dev/md0 has been started with 4 drives (out of 5).
root@HvyMtlNAS:/ # cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
md0 : active raid5 sda8[0] sde8[4] sdd8[3] sdb8[1]
      7802449920 blocks level 5, 64k chunk, algorithm 2 [5/4] [UU_UU]

unused devices: <none>
root@HvyMtlNAS:/ # mdadm --detail /dev/md0
/dev/md0:
        Version : 0.90
  Creation Time : Thu Jun 18 23:12:56 2015
     Raid Level : raid5
     Array Size : 7802449920 (7441.00 GiB 7989.71 GB)
  Used Dev Size : 1950612480 (1860.25 GiB 1997.43 GB)
   Raid Devices : 5
  Total Devices : 4
Preferred Minor : 0
    Persistence : Superblock is persistent

    Update Time : Sun Jun 11 09:23:01 2017
          State : clean, degraded
 Active Devices : 4
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 64K

           UUID : 2640f25a:4ed24787:e0e498eb:d7731140
         Events : 0.17504

    Number   Major   Minor   RaidDevice State
       0       8       24        0      active sync   /dev/sda8
       1       8        8        1      active sync   /dev/sdb8
       4       0        0        4      removed
       3       8       56        3      active sync   /dev/sdd8
       4       8       40        4      active sync   /dev/sde8
Checked for folders in /share and there are none:
Code: Select all
root@HvyMtlNAS:/ # ls -lha /share/1100
ls: /share/1100: No such file or directory
root@HvyMtlNAS:/ # ls -lha
total 120K
drwxr-xr-x   23 root   root   4.0K 2017-06-12 01:30 .
drwxr-xr-x   23 root   root   4.0K 2017-06-12 01:30 ..
-rw-------    1 root   root   1.2K 2017-06-12 11:45 .ash_history
drwxr-xr-x    2 root   root   4.0K 2017-05-30 17:12 bin
drwxr-xr-x    2 root   root   4.0K 2017-05-30 17:12 bin_cab
drwxr-xr-x    2 root   root   4.0K 2013-09-25 16:49 boot
-rw-r--r--    1 root   root   6.7K 2017-06-11 20:35 boot.log
drwxrwxrwx    2 root   root   4.0K 2017-05-30 17:12 clunc
drwxr-xr-x    6 root   root    12K 2017-06-12 11:43 dev
lrwxrwxrwx    1 root   root      7 2017-06-12 01:30 direct-usb -> /share/
drwxrwxrwx   18 root   root   4.0K 2017-06-12 10:50 etc
drwxrwxrwx    2 root   root   4.0K 2017-06-11 09:21 lacie-boot
drwxr-xr-x    4 root   root   4.0K 2017-02-13 15:32 lib
lrwxrwxrwx    1 root   root     11 2017-05-30 17:12 linuxrc -> bin/busybox
drwx------    2 root   root    16K 2008-01-03 16:21 lost+found
drwxr-xr-x    3 root   root   4.0K 2017-05-30 17:12 mail
lrwxrwxrwx    1 root   root     20 2017-05-30 17:12 .mldonkey -> /share/1000/mldonkey
lrwxrwxrwx    1 root   root     14 2017-06-11 20:34 mnt -> /rw_fs/tmp/mnt
drwxr-xr-x    2 root   root   4.0K 2008-08-27 17:46 nowhere
drwxr-xr-x    7 root   root   4.0K 2017-05-30 17:12 opt
-rw-r--r--    1 root   root    368 2017-06-11 09:25 postupgrade.log
dr-xr-xr-x  120 root   root      0 1969-12-31 19:00 proc
drwxr-xr-x    2 root   root   4.0K 2008-01-03 15:06 root
drwxrwxrwx    8 nobody nobody 4.0K 2017-06-11 20:34 rw_fs
drwxr-xr-x    2 root   root   4.0K 2017-06-11 20:34 sbin
drwxr-xr-x    2 root   root   4.0K 2017-06-11 09:24 sda7-tmp
drwxrwxrwx    2 root   root   4.0K 2011-03-05 08:38 share
lrwxrwxrwx    1 root   root     16 2017-06-11 20:34 .ssh -> ../../rw_fs/.ssh
dr-xr-xr-x   14 root   root      0 2017-06-11 20:34 sys
lrwxrwxrwx    1 root   root     10 2017-06-11 20:34 tmp -> /rw_fs/tmp
drwxr-xr-x   21 root   root   4.0K 2017-06-12 01:30 usr
drwxr-xr-x    2 root   root   4.0K 2017-06-11 20:34 var
root@HvyMtlNAS:/ # cd share
root@HvyMtlNAS:/share # ls -lha
total 8.0K
drwxrwxrwx   2 root root 4.0K 2011-03-05 08:38 .
drwxr-xr-x  23 root root 4.0K 2017-06-12 01:30 ..
root@HvyMtlNAS:/share #
I looked in the GUI and it does not recognize the array. My guess is that it needs a reboot to see it, but I have not rebooted since assembling the array.

Re: Failed Drive Replacement

Postby Jocko » Mon Jun 12, 2017 3:59 pm

Be careful, the issue is not yet solved. (Only once it succeeds in assembling with the standard command can you reboot the NAS; the raid volume will then be registered in the NAS database.)

So do
Code: Select all
 mdadm --detail --brief /dev/md0
and copy this line into mdadm.conf (remove the previous ARRAY line)

Now try again
Code: Select all
mdadm -S /dev/md0

mdadm --assemble /dev/md0 --run
and check whether it succeeds in assembling md0 this time
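(One way to do that copy, as a sketch only, assuming the conf path /rw_fs/etc/mdadm.conf shown earlier in this thread; you still have to delete the old ARRAY line by hand:)
Code: Select all
mdadm --detail --brief /dev/md0 >> /rw_fs/etc/mdadm.conf
vi /rw_fs/etc/mdadm.conf    # remove the old ARRAY line, keep the new one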

Re: Failed Drive Replacement

Postby Jocko » Mon Jun 12, 2017 4:04 pm

I will be away for the next hour now.

Re: Failed Drive Replacement

Postby hvymetal86 » Mon Jun 12, 2017 4:10 pm

First command returned the new line as expected:
Code: Select all
root@HvyMtlNAS:/ # mdadm --detail --brief /dev/md0
ARRAY /dev/md0 metadata=0.90 UUID=2640f25a:4ed24787:e0e498eb:d7731140
Edited the conf file with it:
Code: Select all
root@HvyMtlNAS:/ # vi /rw_fs/etc/mdadm.conf
root@HvyMtlNAS:/ # cat /rw_fs/etc/mdadm.conf
CREATE owner=root group=root mode=0666 auto=yes metadata=1.0
PROGRAM /usr/bin/mdadm-events
DEVICE /dev/sd[abcde]8 /dev/se*
ARRAY /dev/md0 metadata=0.90 UUID=2640f25a:4ed24787:e0e498eb:d7731140

MAILADDR [REDACTED]@gmail.com
Then ran the stop and assemble commands and they appeared to work:
Code: Select all
root@HvyMtlNAS:/ # mdadm -S /dev/md0
mdadm: stopped /dev/md0
root@HvyMtlNAS:/ # mdadm --assemble /dev/md0 --run
mdadm: failed to get exclusive lock on mapfile - continue anyway...
mdadm: /dev/md0 has been started with 4 drives (out of 5).
root@HvyMtlNAS:/ #
/share still appears to be empty, but nothing was said either way about it at this point in the process, so I'm not reading anything into it:
Code: Select all
root@HvyMtlNAS:/ # ls -lha /share
total 8.0K
drwxrwxrwx   2 root root 4.0K 2011-03-05 08:38 .
drwxr-xr-x  23 root root 4.0K 2017-06-12 01:30 ..


No problem about not being here. I REALLY appreciate all your help this time and previous times on the forum!

Re: Failed Drive Replacement

Postby Jocko » Mon Jun 12, 2017 6:02 pm

So now the assemble command works. :thumbup

Reboot the NAS, and md0 should be assembled and then mounted on /share/1100.

If that is the case, you can recreate your shares and later repair the raid.
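(When you get to repairing the raid later, re-adding the replacement disk's RAID partition would look roughly like this; a sketch only, /dev/sdc8 is an assumption based on the missing component mentioned earlier, so check the real device and partition number first:)
Code: Select all
gdisk -l /dev/sdc                          # confirm the replacement disk has a partition 8, type FD00
mdadm --manage /dev/md0 --add /dev/sdc8    # start the rebuild onto the new partition
cat /proc/mdstat                           # watch the resync progress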

Re: Failed Drive Replacement

Postby hvymetal86 » Mon Jun 12, 2017 6:30 pm

Rebooted, and the RAID shows up in the web GUI and the share folder has its contents again:
Code: Select all
root@HvyMtlNAS:/ # ls -lha /share/1100
total 76K
drwxrwxrwx   12 root root 4.0K 2016-06-29 00:59 .
drwxrwxrwx    3 root root 4.0K 2017-06-12 14:21 ..
drwxrwxrwx  139 root root  20K 2017-05-22 19:09 BTDownloads
drwxrwxrwx    9 root root 4.0K 2017-03-25 13:16 Documents
drwxrwxrwx    5 root root 4.0K 2017-06-10 22:40 fvdw
drwx------    2 root root  16K 2015-06-19 23:03 lost+found
drwxrwxrwx    8 root root 4.0K 2016-10-08 22:28 Media
drwxrwxrwx   39 root root 4.0K 2015-08-12 06:31 PRNSYS
drwxrwxrwx    7 root root 4.0K 2015-08-12 17:25 Programs
drwxrwxrwx    5 root root 4.0K 2017-06-12 14:22 tr-daemon
drwxrwxrwx    2 root root 4.0K 2015-06-25 04:22 tr-downloads
drwxrwxrwx    2 root root 4.0K 2016-06-29 00:59 WriteAccess
When recreating the shares, do I need to do anything other than make sure I give them the exact same names? No need to provide the full file path or anything?
