Failed Drive Replacement

Re: Failed Drive Replacement

Postby hvymetal86 » Sun Jun 11, 2017 9:56 pm

This is the output of the exact command you requested:
Code: Select all
root@HvyMtlNAS:/ # mdadm --examine /dev/sd[abce]8
/dev/sda8:
          Magic : a92b4efc
        Version : 0.90.00
           UUID : 2640f25a:4ed24787:e0e498eb:d7731140
  Creation Time : Thu Jun 18 23:12:56 2015
     Raid Level : raid5
  Used Dev Size : 1950612480 (1860.25 GiB 1997.43 GB)
     Array Size : 7802449920 (7441.00 GiB 7989.71 GB)
   Raid Devices : 5
  Total Devices : 4
Preferred Minor : 0

    Update Time : Sun Jun 11 09:23:01 2017
          State : clean
 Active Devices : 4
Working Devices : 4
 Failed Devices : 1
  Spare Devices : 0
       Checksum : f99c8627 - correct
         Events : 17504

         Layout : left-symmetric
     Chunk Size : 64K

      Number   Major   Minor   RaidDevice State
this     0       8       24        0      active sync   /dev/sda8

   0     0       8       24        0      active sync   /dev/sda8
   1     1       8        8        1      active sync   /dev/sdb8
   2     2       0        0        2      faulty removed
   3     3       8       56        3      active sync   /dev/sdd8
   4     4       8       40        4      active sync   /dev/sde8
/dev/sdb8:
          Magic : a92b4efc
        Version : 0.90.00
           UUID : 2640f25a:4ed24787:e0e498eb:d7731140
  Creation Time : Thu Jun 18 23:12:56 2015
     Raid Level : raid5
  Used Dev Size : 1950612480 (1860.25 GiB 1997.43 GB)
     Array Size : 7802449920 (7441.00 GiB 7989.71 GB)
   Raid Devices : 5
  Total Devices : 4
Preferred Minor : 0

    Update Time : Sun Jun 11 09:23:01 2017
          State : clean
 Active Devices : 4
Working Devices : 4
 Failed Devices : 1
  Spare Devices : 0
       Checksum : f99c8619 - correct
         Events : 17504

         Layout : left-symmetric
     Chunk Size : 64K

      Number   Major   Minor   RaidDevice State
this     1       8        8        1      active sync   /dev/sdb8

   0     0       8       24        0      active sync   /dev/sda8
   1     1       8        8        1      active sync   /dev/sdb8
   2     2       0        0        2      faulty removed
   3     3       8       56        3      active sync   /dev/sdd8
   4     4       8       40        4      active sync   /dev/sde8
/dev/sde8:
          Magic : a92b4efc
        Version : 0.90.00
           UUID : 2640f25a:4ed24787:e0e498eb:d7731140
  Creation Time : Thu Jun 18 23:12:56 2015
     Raid Level : raid5
  Used Dev Size : 1950612480 (1860.25 GiB 1997.43 GB)
     Array Size : 7802449920 (7441.00 GiB 7989.71 GB)
   Raid Devices : 5
  Total Devices : 4
Preferred Minor : 0

    Update Time : Sun Jun 11 09:23:01 2017
          State : clean
 Active Devices : 4
Working Devices : 4
 Failed Devices : 1
  Spare Devices : 0
       Checksum : f99c863f - correct
         Events : 17504

         Layout : left-symmetric
     Chunk Size : 64K

      Number   Major   Minor   RaidDevice State
this     4       8       40        4      active sync   /dev/sde8

   0     0       8       24        0      active sync   /dev/sda8
   1     1       8        8        1      active sync   /dev/sdb8
   2     2       0        0        2      faulty removed
   3     3       8       56        3      active sync   /dev/sdd8
   4     4       8       40        4      active sync   /dev/sde8

I wasn't sure whether it was supposed to be this instead, though, given that sdc is the "failed" drive:
Code: Select all
mdadm --examine /dev/sd[abde]8
so I ran both:
Code: Select all
root@HvyMtlNAS:/ # mdadm --examine /dev/sd[abde]8
/dev/sda8:
          Magic : a92b4efc
        Version : 0.90.00
           UUID : 2640f25a:4ed24787:e0e498eb:d7731140
  Creation Time : Thu Jun 18 23:12:56 2015
     Raid Level : raid5
  Used Dev Size : 1950612480 (1860.25 GiB 1997.43 GB)
     Array Size : 7802449920 (7441.00 GiB 7989.71 GB)
   Raid Devices : 5
  Total Devices : 4
Preferred Minor : 0

    Update Time : Sun Jun 11 09:23:01 2017
          State : clean
 Active Devices : 4
Working Devices : 4
 Failed Devices : 1
  Spare Devices : 0
       Checksum : f99c8627 - correct
         Events : 17504

         Layout : left-symmetric
     Chunk Size : 64K

      Number   Major   Minor   RaidDevice State
this     0       8       24        0      active sync   /dev/sda8

   0     0       8       24        0      active sync   /dev/sda8
   1     1       8        8        1      active sync   /dev/sdb8
   2     2       0        0        2      faulty removed
   3     3       8       56        3      active sync   /dev/sdd8
   4     4       8       40        4      active sync   /dev/sde8
/dev/sdb8:
          Magic : a92b4efc
        Version : 0.90.00
           UUID : 2640f25a:4ed24787:e0e498eb:d7731140
  Creation Time : Thu Jun 18 23:12:56 2015
     Raid Level : raid5
  Used Dev Size : 1950612480 (1860.25 GiB 1997.43 GB)
     Array Size : 7802449920 (7441.00 GiB 7989.71 GB)
   Raid Devices : 5
  Total Devices : 4
Preferred Minor : 0

    Update Time : Sun Jun 11 09:23:01 2017
          State : clean
 Active Devices : 4
Working Devices : 4
 Failed Devices : 1
  Spare Devices : 0
       Checksum : f99c8619 - correct
         Events : 17504

         Layout : left-symmetric
     Chunk Size : 64K

      Number   Major   Minor   RaidDevice State
this     1       8        8        1      active sync   /dev/sdb8

   0     0       8       24        0      active sync   /dev/sda8
   1     1       8        8        1      active sync   /dev/sdb8
   2     2       0        0        2      faulty removed
   3     3       8       56        3      active sync   /dev/sdd8
   4     4       8       40        4      active sync   /dev/sde8
/dev/sdd8:
          Magic : a92b4efc
        Version : 0.90.00
           UUID : 2640f25a:4ed24787:e0e498eb:d7731140
  Creation Time : Thu Jun 18 23:12:56 2015
     Raid Level : raid5
  Used Dev Size : 1950612480 (1860.25 GiB 1997.43 GB)
     Array Size : 7802449920 (7441.00 GiB 7989.71 GB)
   Raid Devices : 5
  Total Devices : 4
Preferred Minor : 0

    Update Time : Sun Jun 11 09:23:01 2017
          State : clean
 Active Devices : 4
Working Devices : 4
 Failed Devices : 1
  Spare Devices : 0
       Checksum : f99c864d - correct
         Events : 17504

         Layout : left-symmetric
     Chunk Size : 64K

      Number   Major   Minor   RaidDevice State
this     3       8       56        3      active sync   /dev/sdd8

   0     0       8       24        0      active sync   /dev/sda8
   1     1       8        8        1      active sync   /dev/sdb8
   2     2       0        0        2      faulty removed
   3     3       8       56        3      active sync   /dev/sdd8
   4     4       8       40        4      active sync   /dev/sde8
/dev/sde8:
          Magic : a92b4efc
        Version : 0.90.00
           UUID : 2640f25a:4ed24787:e0e498eb:d7731140
  Creation Time : Thu Jun 18 23:12:56 2015
     Raid Level : raid5
  Used Dev Size : 1950612480 (1860.25 GiB 1997.43 GB)
     Array Size : 7802449920 (7441.00 GiB 7989.71 GB)
   Raid Devices : 5
  Total Devices : 4
Preferred Minor : 0

    Update Time : Sun Jun 11 09:23:01 2017
          State : clean
 Active Devices : 4
Working Devices : 4
 Failed Devices : 1
  Spare Devices : 0
       Checksum : f99c863f - correct
         Events : 17504

         Layout : left-symmetric
     Chunk Size : 64K

      Number   Major   Minor   RaidDevice State
this     4       8       40        4      active sync   /dev/sde8

   0     0       8       24        0      active sync   /dev/sda8
   1     1       8        8        1      active sync   /dev/sdb8
   2     2       0        0        2      faulty removed
   3     3       8       56        3      active sync   /dev/sdd8
   4     4       8       40        4      active sync   /dev/sde8
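
(Side note: the bracket set is plain shell globbing, so mdadm never sees the pattern itself; the shell expands it to whichever matching device nodes exist, which is easy to check with echo:)
Code: Select all
# the shell expands the bracket set before mdadm runs,
# keeping only the device nodes that actually exist
echo /dev/sd[abde]8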

I also ran:
Code: Select all
mdadm --examine /dev/sd[abcde]8
to cover every drive, failed or not:
Code: Select all
root@HvyMtlNAS:/ # mdadm --examine /dev/sd[abcde]8
/dev/sda8:
          Magic : a92b4efc
        Version : 0.90.00
           UUID : 2640f25a:4ed24787:e0e498eb:d7731140
  Creation Time : Thu Jun 18 23:12:56 2015
     Raid Level : raid5
  Used Dev Size : 1950612480 (1860.25 GiB 1997.43 GB)
     Array Size : 7802449920 (7441.00 GiB 7989.71 GB)
   Raid Devices : 5
  Total Devices : 4
Preferred Minor : 0

    Update Time : Sun Jun 11 09:23:01 2017
          State : clean
 Active Devices : 4
Working Devices : 4
 Failed Devices : 1
  Spare Devices : 0
       Checksum : f99c8627 - correct
         Events : 17504

         Layout : left-symmetric
     Chunk Size : 64K

      Number   Major   Minor   RaidDevice State
this     0       8       24        0      active sync   /dev/sda8

   0     0       8       24        0      active sync   /dev/sda8
   1     1       8        8        1      active sync   /dev/sdb8
   2     2       0        0        2      faulty removed
   3     3       8       56        3      active sync   /dev/sdd8
   4     4       8       40        4      active sync   /dev/sde8
/dev/sdb8:
          Magic : a92b4efc
        Version : 0.90.00
           UUID : 2640f25a:4ed24787:e0e498eb:d7731140
  Creation Time : Thu Jun 18 23:12:56 2015
     Raid Level : raid5
  Used Dev Size : 1950612480 (1860.25 GiB 1997.43 GB)
     Array Size : 7802449920 (7441.00 GiB 7989.71 GB)
   Raid Devices : 5
  Total Devices : 4
Preferred Minor : 0

    Update Time : Sun Jun 11 09:23:01 2017
          State : clean
 Active Devices : 4
Working Devices : 4
 Failed Devices : 1
  Spare Devices : 0
       Checksum : f99c8619 - correct
         Events : 17504

         Layout : left-symmetric
     Chunk Size : 64K

      Number   Major   Minor   RaidDevice State
this     1       8        8        1      active sync   /dev/sdb8

   0     0       8       24        0      active sync   /dev/sda8
   1     1       8        8        1      active sync   /dev/sdb8
   2     2       0        0        2      faulty removed
   3     3       8       56        3      active sync   /dev/sdd8
   4     4       8       40        4      active sync   /dev/sde8
/dev/sdd8:
          Magic : a92b4efc
        Version : 0.90.00
           UUID : 2640f25a:4ed24787:e0e498eb:d7731140
  Creation Time : Thu Jun 18 23:12:56 2015
     Raid Level : raid5
  Used Dev Size : 1950612480 (1860.25 GiB 1997.43 GB)
     Array Size : 7802449920 (7441.00 GiB 7989.71 GB)
   Raid Devices : 5
  Total Devices : 4
Preferred Minor : 0

    Update Time : Sun Jun 11 09:23:01 2017
          State : clean
 Active Devices : 4
Working Devices : 4
 Failed Devices : 1
  Spare Devices : 0
       Checksum : f99c864d - correct
         Events : 17504

         Layout : left-symmetric
     Chunk Size : 64K

      Number   Major   Minor   RaidDevice State
this     3       8       56        3      active sync   /dev/sdd8

   0     0       8       24        0      active sync   /dev/sda8
   1     1       8        8        1      active sync   /dev/sdb8
   2     2       0        0        2      faulty removed
   3     3       8       56        3      active sync   /dev/sdd8
   4     4       8       40        4      active sync   /dev/sde8
/dev/sde8:
          Magic : a92b4efc
        Version : 0.90.00
           UUID : 2640f25a:4ed24787:e0e498eb:d7731140
  Creation Time : Thu Jun 18 23:12:56 2015
     Raid Level : raid5
  Used Dev Size : 1950612480 (1860.25 GiB 1997.43 GB)
     Array Size : 7802449920 (7441.00 GiB 7989.71 GB)
   Raid Devices : 5
  Total Devices : 4
Preferred Minor : 0

    Update Time : Sun Jun 11 09:23:01 2017
          State : clean
 Active Devices : 4
Working Devices : 4
 Failed Devices : 1
  Spare Devices : 0
       Checksum : f99c863f - correct
         Events : 17504

         Layout : left-symmetric
     Chunk Size : 64K

      Number   Major   Minor   RaidDevice State
this     4       8       40        4      active sync   /dev/sde8

   0     0       8       24        0      active sync   /dev/sda8
   1     1       8        8        1      active sync   /dev/sdb8
   2     2       0        0        2      faulty removed
   3     3       8       56        3      active sync   /dev/sdd8
   4     4       8       40        4      active sync   /dev/sde8

Since sdc passed the Seagate long diagnostic test, is there any merit in reinserting it?

Re: Failed Drive Replacement

Postby Jocko » Sun Jun 11, 2017 10:29 pm

What is the output of
Code: Select all
mdadm --assemble /dev/md0 --run
That is the command which failed during the reboot.

Note: for now, keep the disk (sdc) out.
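
For context, the --run flag is the important part here: it tells mdadm to start the array even though fewer members are present than the last time it was active (4 of 5 in your case). The same command, commented:
Code: Select all
# --run: start md0 even though it is degraded
# (only 4 of the 5 members are present)
mdadm --assemble /dev/md0 --run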

Re: Failed Drive Replacement

Postby Jocko » Sun Jun 11, 2017 10:47 pm

I understand what happened, but I do not understand how you managed to do it...

In the examine output, the RAID UUID is 2640f25a:4ed24787:e0e498eb:d7731140, but in your mdadm.conf we have:
Code: Select all
CREATE owner=root group=root mode=0666 auto=yes metadata=1.0
PROGRAM /usr/bin/mdadm-events
DEVICE /dev/sd* /dev/se*
ARRAY /dev/md0 level=raid5 num-devices=5 spares=1 UUID=c6b0476f:07a34c67:9b30df64:77fdfc96
=> a raid5 configuration with UUID c6b0476f:07a34c67:9b30df64:77fdfc96
So why do you have these settings?

The correct line must be:
Code: Select all
ARRAY /dev/md0 metadata=0.9 level=raid5 num-devices=5 UUID=2640f25a:4ed24787:e0e498eb:d7731140


That is why the boot script failed to assemble md0.

So edit mdadm.conf and reboot the NAS. You should then have the md0 device available and mounted on /share/1100.

You will need to re-create your shares in accordance with the folders in /share/1100.
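
For reference, mdadm can print the needed ARRAY line itself from the on-disk superblocks, which is a safe way to cross-check the UUID before editing (standard mdadm, nothing firmware-specific):
Code: Select all
# Derive ARRAY lines from the superblocks mdadm finds;
# the UUID printed here is what mdadm.conf must contain
mdadm --examine --scan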

Re: Failed Drive Replacement

Postby hvymetal86 » Sun Jun 11, 2017 10:49 pm

I tried it twice and got no visible output either time:
Code: Select all
root@HvyMtlNAS:/ # mdadm --assemble /dev/md0 --run
root@HvyMtlNAS:/ # mdadm --assemble /dev/md0 --run
root@HvyMtlNAS:/ #

Re: Failed Drive Replacement

Postby Jocko » Sun Jun 11, 2017 10:52 pm

hvymetal86 wrote:Since sdc passed the Seagate long diagnostic test, is there any merit in reinserting it?
I do not think so.

Indeed, a disk gets a faulty state when too many bad blocks are detected. Please note that this threshold is stricter for a RAID member than for standard usage, so the disk may appear clean to the Seagate tools but not in a RAID.
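
If you want to see the raw counters behind that, and assuming smartmontools is available on this firmware (it may not be), something like this would show the relevant SMART attributes:
Code: Select all
# Non-zero reallocated/pending sector counts are the kind of
# defect that gets a member kicked out of a raid
smartctl -A /dev/sdc | egrep 'Reallocated|Pending'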

Re: Failed Drive Replacement

Postby Jocko » Sun Jun 11, 2017 10:53 pm

hvymetal86 wrote:I tried it twice and got no visible output either time:
Code: Select all
root@HvyMtlNAS:/ # mdadm --assemble /dev/md0 --run
root@HvyMtlNAS:/ # mdadm --assemble /dev/md0 --run
root@HvyMtlNAS:/ #
See my previous post; this is the expected behaviour, as you have a bad mdadm.conf file.

Re: Failed Drive Replacement

Postby hvymetal86 » Sun Jun 11, 2017 11:06 pm

Sorry, I missed those two newer replies. Probably didn't refresh the page.

In the earlier commands, two paths were given for mdadm.conf:
/etc/mdadm.conf
/rw_fs/etc/mdadm.conf

Which file do I edit?

Re: Failed Drive Replacement

Postby Jocko » Sun Jun 11, 2017 11:13 pm

It is the same file; one is a symlink to the other.

But I'd rather you edit the file /rw_fs/etc/mdadm.conf directly.

Note:
The UUID is set when a RAID is created and never changes until you destroy the RAID. That is why I do not understand the content of your mdadm.conf file :scratch
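
You can see the link for yourself with a plain ls:
Code: Select all
# show that one path is a symlink to the other
ls -l /etc/mdadm.conf /rw_fs/etc/mdadm.conf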

Re: Failed Drive Replacement

Postby hvymetal86 » Mon Jun 12, 2017 12:48 am

I used vi to edit the conf file, then used cat to show the edited content:
Code: Select all
root@HvyMtlNAS:/ # vi /rw_fs/etc/mdadm.conf
root@HvyMtlNAS:/ # cat /rw_fs/etc/mdadm.conf
CREATE owner=root group=root mode=0666 auto=yes metadata=1.0
PROGRAM /usr/bin/mdadm-events
DEVICE /dev/sd* /dev/se*
ARRAY /dev/md0 metadata=0.9 level=raid5 num-devices=5 UUID=2640f25a:4ed24787:e0e498eb:d7731140

MAILADDR [REDACTED]@gmail.com
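
For anyone following along, the two UUIDs can be compared directly to confirm the edit took (plain grep, nothing special):
Code: Select all
# the UUID in the conf must match the one on the superblocks
grep ^ARRAY /rw_fs/etc/mdadm.conf
mdadm --examine /dev/sda8 | grep UUID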

Then I re-ran the commands you initially said returned incorrect values, and they still did:
Code: Select all
root@HvyMtlNAS:/ # file -bs /dev/sda8
Linux rev 1.0 ext3 filesystem data, UUID=bd535352-1c00-4e7c-a4dd-7b2ca7731449 (needs journal recovery) (large files)
root@HvyMtlNAS:/ # file -bs /dev/sdb8
data
root@HvyMtlNAS:/ # file -bs /dev/sdd8
data
root@HvyMtlNAS:/ # file -bs /dev/sde8
Linux rev 268435463.0 ext4 filesystem data, UUID=bfd3825d-1c00-4e6c-a4dd-7b2ca7731449 (needs journal recovery) (large files)

Then I ran cat on the boot.log file again; it shows the RAID restore failing again, but with different info:
Code: Select all
root@HvyMtlNAS:/ # cat /boot.log
start boot log
detect and set platform
5big2
kirkwood
UIMAGE-466-KIRKWOOD-7
Current kernel: 4.6.6 #7 PREEMPT Sun May 7 11:50:52 CEST 2017
5big2
 mount dev/pts
update dev nodes
booting using sda2 root file system...
make dev node for buttons...
make dev node for tun device...
enable IP forwarding...
start buttons control daemon
device = 5big2
source = buttons-nwsp2
buttons-nwsp2 daemon started
Sun Jun 11 20:34:02 EDT 2017
create temporary passwd file
run udevstart to update dev nodes when necessary
inserting kernel modules:
modprobe: module 'iscsi_trgt' not found
create temporary group file
configure loopback network interface
setting reboot and standby
5big2
rebootd-nwsp2
5big2
standbyd-nwsp2
start fan
starting php based setup routines step 1
 * Initialize the volatile db file...    [ OK ]
 * Starting udevd...                                     [ OK ]
 * Starting RAID monitor:                                [ OK ]
 * Starting restore RAID devices...
   - Assembling device /dev/md0:  [ Fail ]

Warning: file_get_contents(/sys/block/md0/md/sync_action): failed to open stream: No such file or directory in /etc/finc/dm_restore_md.finc on line 52
 * Finishing restore RAID devices...     [ Fail ]
 * Found database configuration file...  [ OK ]
 * Updating Disks database...
Warning: Illegal string offset 'vol' in /etc/finc/dm_update_single_db.finc on line 215

Warning: Invalid argument supplied for foreach() in /etc/finc/dm_update_single_db.finc on line 215
                         [ OK ]
 * Starting mount of volumes...
 * Finishing mount of volumes...                 [ OK ]
 * Generating Hosts File...                      [ OK ]
 * Configuring System Hostname...                [ OK ]
 * Configuring LAN interface...                  [ OK ]
 * Initializing Timezone...                      [ OK ]
 * Starting web server...                                [ OK ]
 * Starting mount of internal USB ...

Warning: Illegal string offset 'vol' in /etc/finc/dm_mount_internal_USB.finc on line 21
 * Finishing mount internal USB ...      [ OK ]
 * Configuring Disks...                                  [ OK ]
 * Configuring Samba...                                  [ OK ]
 * Configuring System Users...
        Root password: use default password
        Set users, linux and samba accounts      [ OK ]
 * Starting Fvdw-sl Discovery Daemon...  [ OK ]
start rpcbind service
starting php based setup routines step 2
 * Starting dropbear...                                  [ OK ]
 * Kill temporary dropbear...                    [ OK ]
starting php based setup routines step 3
 * Starting daemon update Hosts File...  [ OK ]
 * Starting Disk Temperature Guard...    [ OK ]
 * Starting mount of remote shares...
 * Finishing mount shares...                     [ OK ]
 * Starting NTP client...                                [ OK ]

Warning: Illegal string offset 'vol' in /etc/finc/dm_get_volume_addons.finc on line 31

Warning: Illegal string offset 'vol' in /etc/finc/dm_get_volume_addons.finc on line 32

Warning: Illegal string offset 'volid' in /etc/finc/dm_get_volume_addons.finc on line 32

Warning: Illegal string offset 'vol' in /etc/finc/dm_get_volume_addons.finc on line 31

Warning: Illegal string offset 'vol' in /etc/finc/dm_get_volume_addons.finc on line 32

Warning: Illegal string offset 'volid' in /etc/finc/dm_get_volume_addons.finc on line 32

Warning: Illegal string offset 'vol' in /etc/finc/dm_get_volume_addons.finc on line 31

Warning: Illegal string offset 'vol' in /etc/finc/dm_get_volume_addons.finc on line 32

Warning: Illegal string offset 'volid' in /etc/finc/dm_get_volume_addons.finc on line 32

Warning: Illegal string offset 'vol' in /etc/finc/dm_get_volume_addons.finc on line 31

Warning: Illegal string offset 'vol' in /etc/finc/dm_get_volume_addons.finc on line 32

Warning: Illegal string offset 'volid' in /etc/finc/dm_get_volume_addons.finc on line 32

Warning: Illegal string offset 'vol' in /etc/finc/dm_get_volume_addons.finc on line 31

Warning: Illegal string offset 'vol' in /etc/finc/dm_get_volume_addons.finc on line 32

Warning: Illegal string offset 'volid' in /etc/finc/dm_get_volume_addons.finc on line 32

Warning: Illegal string offset 'vol' in /etc/finc/dm_get_volume_addons.finc on line 31

Warning: Illegal string offset 'vol' in /etc/finc/dm_get_volume_addons.finc on line 32

Warning: Illegal string offset 'volid' in /etc/finc/dm_get_volume_addons.finc on line 32

Warning: Illegal string offset 'vol' in /etc/finc/dm_get_volume_addons.finc on line 31

Warning: Illegal string offset 'vol' in /etc/finc/dm_get_volume_addons.finc on line 32

Warning: Illegal string offset 'volid' in /etc/finc/dm_get_volume_addons.finc on line 32

Warning: Illegal string offset 'vol' in /etc/finc/dm_get_volume_addons.finc on line 31

Warning: Illegal string offset 'vol' in /etc/finc/dm_get_volume_addons.finc on line 32

Warning: Illegal string offset 'volid' in /etc/finc/dm_get_volume_addons.finc on line 32

Warning: Illegal string offset 'vol' in /etc/finc/dm_get_volume_addons.finc on line 31

Warning: Illegal string offset 'vol' in /etc/finc/dm_get_volume_addons.finc on line 32

Warning: Illegal string offset 'volid' in /etc/finc/dm_get_volume_addons.finc on line 32
 * Starting Transmission Client...
Warning: Illegal string offset 'vol' in /etc/finc/dm_get_volume_addons.finc on line 31

Warning: Illegal string offset 'vol' in /etc/finc/dm_get_volume_addons.finc on line 32

Warning: Illegal string offset 'volid' in /etc/finc/dm_get_volume_addons.finc on line 32

Warning: Illegal string offset 'vol' in /etc/finc/dm_get_volume_addons.finc on line 31

Warning: Illegal string offset 'vol' in /etc/finc/dm_get_volume_addons.finc on line 32

Warning: Illegal string offset 'volid' in /etc/finc/dm_get_volume_addons.finc on line 32
                 [ OK ]

Warning: Illegal string offset 'vol' in /etc/finc/dm_get_volume_addons.finc on line 31

Warning: Illegal string offset 'vol' in /etc/finc/dm_get_volume_addons.finc on line 32

Warning: Illegal string offset 'volid' in /etc/finc/dm_get_volume_addons.finc on line 32

Warning: Illegal string offset 'vol' in /etc/finc/dm_get_volume_addons.finc on line 31

Warning: Illegal string offset 'vol' in /etc/finc/dm_get_volume_addons.finc on line 32

Warning: Illegal string offset 'volid' in /etc/finc/dm_get_volume_addons.finc on line 32

Warning: Illegal string offset 'vol' in /etc/finc/dm_get_volume_addons.finc on line 31

Warning: Illegal string offset 'vol' in /etc/finc/dm_get_volume_addons.finc on line 32

Warning: Illegal string offset 'volid' in /etc/finc/dm_get_volume_addons.finc on line 32
starting php based setup routines step banner


*** fvdw-sl NAS firmware
    This is fvdw-sl version: fvdw-sl 17.0
    built on: May 28 2017

    LAN IP address: 192.168.1.128 (DHCP)

    Port configuration:

    LAN   -> eth0
php based setup routines finished
web permission on resolv.conf
set device tuning for dms performance
move smbd en nmbd db files away from ram disk to prevent hanging samba server
LED settings

Send a boot mail notification to [REDACTED]@gmail.com
else loop1 finished
rcS finished

Lastly, I ran the mdadm examine command again too:
Code: Select all
root@HvyMtlNAS:/ # mdadm --examine /dev/sd[abde]8
/dev/sda8:
          Magic : a92b4efc
        Version : 0.90.00
           UUID : 2640f25a:4ed24787:e0e498eb:d7731140
  Creation Time : Thu Jun 18 23:12:56 2015
     Raid Level : raid5
  Used Dev Size : 1950612480 (1860.25 GiB 1997.43 GB)
     Array Size : 7802449920 (7441.00 GiB 7989.71 GB)
   Raid Devices : 5
  Total Devices : 4
Preferred Minor : 0

    Update Time : Sun Jun 11 09:23:01 2017
          State : clean
 Active Devices : 4
Working Devices : 4
 Failed Devices : 1
  Spare Devices : 0
       Checksum : f99c8627 - correct
         Events : 17504

         Layout : left-symmetric
     Chunk Size : 64K

      Number   Major   Minor   RaidDevice State
this     0       8       24        0      active sync   /dev/sda8

   0     0       8       24        0      active sync   /dev/sda8
   1     1       8        8        1      active sync   /dev/sdb8
   2     2       0        0        2      faulty removed
   3     3       8       56        3      active sync   /dev/sdd8
   4     4       8       40        4      active sync   /dev/sde8
/dev/sdb8:
          Magic : a92b4efc
        Version : 0.90.00
           UUID : 2640f25a:4ed24787:e0e498eb:d7731140
  Creation Time : Thu Jun 18 23:12:56 2015
     Raid Level : raid5
  Used Dev Size : 1950612480 (1860.25 GiB 1997.43 GB)
     Array Size : 7802449920 (7441.00 GiB 7989.71 GB)
   Raid Devices : 5
  Total Devices : 4
Preferred Minor : 0

    Update Time : Sun Jun 11 09:23:01 2017
          State : clean
 Active Devices : 4
Working Devices : 4
 Failed Devices : 1
  Spare Devices : 0
       Checksum : f99c8619 - correct
         Events : 17504

         Layout : left-symmetric
     Chunk Size : 64K

      Number   Major   Minor   RaidDevice State
this     1       8        8        1      active sync   /dev/sdb8

   0     0       8       24        0      active sync   /dev/sda8
   1     1       8        8        1      active sync   /dev/sdb8
   2     2       0        0        2      faulty removed
   3     3       8       56        3      active sync   /dev/sdd8
   4     4       8       40        4      active sync   /dev/sde8
/dev/sdd8:
          Magic : a92b4efc
        Version : 0.90.00
           UUID : 2640f25a:4ed24787:e0e498eb:d7731140
  Creation Time : Thu Jun 18 23:12:56 2015
     Raid Level : raid5
  Used Dev Size : 1950612480 (1860.25 GiB 1997.43 GB)
     Array Size : 7802449920 (7441.00 GiB 7989.71 GB)
   Raid Devices : 5
  Total Devices : 4
Preferred Minor : 0

    Update Time : Sun Jun 11 09:23:01 2017
          State : clean
 Active Devices : 4
Working Devices : 4
 Failed Devices : 1
  Spare Devices : 0
       Checksum : f99c864d - correct
         Events : 17504

         Layout : left-symmetric
     Chunk Size : 64K

      Number   Major   Minor   RaidDevice State
this     3       8       56        3      active sync   /dev/sdd8

   0     0       8       24        0      active sync   /dev/sda8
   1     1       8        8        1      active sync   /dev/sdb8
   2     2       0        0        2      faulty removed
   3     3       8       56        3      active sync   /dev/sdd8
   4     4       8       40        4      active sync   /dev/sde8
/dev/sde8:
          Magic : a92b4efc
        Version : 0.90.00
           UUID : 2640f25a:4ed24787:e0e498eb:d7731140
  Creation Time : Thu Jun 18 23:12:56 2015
     Raid Level : raid5
  Used Dev Size : 1950612480 (1860.25 GiB 1997.43 GB)
     Array Size : 7802449920 (7441.00 GiB 7989.71 GB)
   Raid Devices : 5
  Total Devices : 4
Preferred Minor : 0

    Update Time : Sun Jun 11 09:23:01 2017
          State : clean
 Active Devices : 4
Working Devices : 4
 Failed Devices : 1
  Spare Devices : 0
       Checksum : f99c863f - correct
         Events : 17504

         Layout : left-symmetric
     Chunk Size : 64K

      Number   Major   Minor   RaidDevice State
this     4       8       40        4      active sync   /dev/sde8

   0     0       8       24        0      active sync   /dev/sda8
   1     1       8        8        1      active sync   /dev/sdb8
   2     2       0        0        2      faulty removed
   3     3       8       56        3      active sync   /dev/sdd8
   4     4       8       40        4      active sync   /dev/sde8

Re: Failed Drive Replacement

Postby Jocko » Mon Jun 12, 2017 8:30 am

OK, so the boot script failed at the same step.

I would have preferred that you had not rebooted the NAS first. Run this command and see what happens:
Code: Select all
mdadm --assemble /dev/md0 --run

and post
Code: Select all
cat /proc/mdstat
mdadm --detail /dev/md0

Please post these outputs.
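
If the plain assemble still prints nothing, a variant that names the four healthy members explicitly would bypass mdadm.conf entirely (same array, just listed by hand):
Code: Select all
# Assemble directly from the four working members,
# without relying on the ARRAY line in mdadm.conf
mdadm --assemble /dev/md0 /dev/sda8 /dev/sdb8 /dev/sdd8 /dev/sde8 --run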

Note: the file commands are now useless, and so are the examine commands.
