All Shares Gone - Lacie 5Big Network 2

Re: All Shares Gone - Lacie 5Big Network 2

Postby matt-white » Mon Mar 20, 2017 9:46 am

Hi,
I have a similar issue with my LaCie 5Big Network2 device. The shares have disappeared.
I have tried most of the suggestions in this thread, but I am either not understanding them or not getting the same results.
I am able to connect to the drive via telnet using fvdw-sl-console-6-16-1-9feb2016-32bits and the UIMAGE-3142-KIRKWOOD-150-standalone firmware, and have copied over the mdadm and mount binaries. When I run the mount command, it hangs. Below are the results of my checks:

Code: Select all
root@fvdw-sta-kirkwood:/ # mdadm --detail /dev/md4
/dev/md4:
        Version : 1.0
  Creation Time : Tue Nov 12 13:13:41 2013
     Raid Level : raid6
     Array Size : 5854467072 (5583.26 GiB 5994.97 GB)
  Used Dev Size : 1951489024 (1861.09 GiB 1998.32 GB)
   Raid Devices : 5
  Total Devices : 5
    Persistence : Superblock is persistent

    Update Time : Mon Mar 20 08:12:12 2017
          State : clean
 Active Devices : 5
Working Devices : 5
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 512K

           Name : LaCie-5big:4
           UUID : b488ac4b:a0485b89:9ee56137:36c06d50
         Events : 4286187

    Number   Major   Minor   RaidDevice State
       0       8       34        0      active sync   /dev/sda2
       1       8       18        1      active sync   /dev/sdb2
       2       8        2        2      active sync   /dev/sdc2
       5       8       66        3      active sync   /dev/sdd2
       4       8       50        4      active sync   /dev/sde2
root@fvdw-sta-kirkwood:/ #


I think this means the raid is OK.
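As a cross-check (assuming the standalone kernel exposes the usual md interface), the kernel's own view of the array can be read from /proc/mdstat; a healthy five-disk raid6 should show all members up, e.g. [5/5] [UUUUU]:
Code: Select all
cat /proc/mdstat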

When I tried xfs_repair (read-only, with -n), I got the following:

Code: Select all
root@fvdw-sta-kirkwood:/ # xfs_repair -n /dev/md4
Phase 1 - find and verify superblock...

fatal error -- couldn't allocate block map, size = 8388608
root@fvdw-sta-kirkwood:/ #


While the mount was running, I opened a new connection and ran 'dmesg'; the following line (error 5 is EIO, the kernel's generic I/O error) was repeated many times:
Code: Select all
[  275.544128] XFS (md4): xfs_log_force: error 5 returned.



Other things I have tried while troubleshooting:
Code: Select all
root@fvdw-sta-kirkwood:/ # cat /proc/mounts
rootfs / rootfs rw,size=254924k,nr_inodes=63731 0 0
none /proc proc rw,relatime 0 0
none /sys sysfs rw,relatime 0 0
none /dev/pts devpts rw,relatime,mode=600 0 0
root@fvdw-sta-kirkwood:/ #


Code: Select all
root@fvdw-sta-kirkwood:/ # mdadm --examine /dev/sd[abcde]2
/dev/sda2:
          Magic : a92b4efc
        Version : 1.0
    Feature Map : 0x0
     Array UUID : b488ac4b:a0485b89:9ee56137:36c06d50
           Name : LaCie-5big:4
  Creation Time : Tue Nov 12 13:13:41 2013
     Raid Level : raid6
   Raid Devices : 5

 Avail Dev Size : 3902978048 (1861.09 GiB 1998.32 GB)
     Array Size : 5854467072 (5583.26 GiB 5994.97 GB)
   Super Offset : 3902978304 sectors
          State : clean
    Device UUID : 1e4c4d6f:9198b11e:f471f205:192edf66

    Update Time : Mon Mar 20 11:09:58 2017
       Checksum : 6546b7e1 - correct
         Events : 4286205

         Layout : left-symmetric
     Chunk Size : 512K

   Device Role : Active device 0
   Array State : AAAAA ('A' == active, '.' == missing)
/dev/sdb2:
          Magic : a92b4efc
        Version : 1.0
    Feature Map : 0x0
     Array UUID : b488ac4b:a0485b89:9ee56137:36c06d50
           Name : LaCie-5big:4
  Creation Time : Tue Nov 12 13:13:41 2013
     Raid Level : raid6
   Raid Devices : 5

 Avail Dev Size : 3902978048 (1861.09 GiB 1998.32 GB)
     Array Size : 5854467072 (5583.26 GiB 5994.97 GB)
   Super Offset : 3902978304 sectors
          State : clean
    Device UUID : 0f069e49:82149a50:4979aa3c:683e2e1a

    Update Time : Mon Mar 20 11:09:58 2017
       Checksum : 5b870568 - correct
         Events : 4286205

         Layout : left-symmetric
     Chunk Size : 512K

   Device Role : Active device 1
   Array State : AAAAA ('A' == active, '.' == missing)
/dev/sdc2:
          Magic : a92b4efc
        Version : 1.0
    Feature Map : 0x0
     Array UUID : b488ac4b:a0485b89:9ee56137:36c06d50
           Name : LaCie-5big:4
  Creation Time : Tue Nov 12 13:13:41 2013
     Raid Level : raid6
   Raid Devices : 5

 Avail Dev Size : 3902978048 (1861.09 GiB 1998.32 GB)
     Array Size : 5854467072 (5583.26 GiB 5994.97 GB)
   Super Offset : 3902978304 sectors
          State : clean
    Device UUID : d011edb2:253a15e0:078be0f5:60f62ba4

    Update Time : Mon Mar 20 11:09:58 2017
       Checksum : 97850085 - correct
         Events : 4286205

         Layout : left-symmetric
     Chunk Size : 512K

   Device Role : Active device 2
   Array State : AAAAA ('A' == active, '.' == missing)
/dev/sdd2:
          Magic : a92b4efc
        Version : 1.0
    Feature Map : 0x0
     Array UUID : b488ac4b:a0485b89:9ee56137:36c06d50
           Name : LaCie-5big:4
  Creation Time : Tue Nov 12 13:13:41 2013
     Raid Level : raid6
   Raid Devices : 5

 Avail Dev Size : 3902978048 (1861.09 GiB 1998.32 GB)
     Array Size : 5854467072 (5583.26 GiB 5994.97 GB)
   Super Offset : 3902978304 sectors
          State : clean
    Device UUID : 99aa3168:2ffeee9e:345cbb80:588819e4

    Update Time : Mon Mar 20 11:09:58 2017
       Checksum : d66bc07f - correct
         Events : 4286205

         Layout : left-symmetric
     Chunk Size : 512K

   Device Role : Active device 3
   Array State : AAAAA ('A' == active, '.' == missing)
/dev/sde2:
          Magic : a92b4efc
        Version : 1.0
    Feature Map : 0x0
     Array UUID : b488ac4b:a0485b89:9ee56137:36c06d50
           Name : LaCie-5big:4
  Creation Time : Tue Nov 12 13:13:41 2013
     Raid Level : raid6
   Raid Devices : 5

 Avail Dev Size : 3902978048 (1861.09 GiB 1998.32 GB)
     Array Size : 5854467072 (5583.26 GiB 5994.97 GB)
   Super Offset : 3902978304 sectors
          State : clean
    Device UUID : c8def4d9:2c67fc66:78d1d50a:4dd4bc75

    Update Time : Mon Mar 20 11:09:58 2017
       Checksum : 2bfa1ee3 - correct
         Events : 4286205

         Layout : left-symmetric
     Chunk Size : 512K

   Device Role : Active device 4
   Array State : AAAAA ('A' == active, '.' == missing)


Code: Select all
root@fvdw-sta-kirkwood:/ # mdadm --detail /dev/md[01234]
mdadm: md device /dev/md0 does not appear to be active.
mdadm: md device /dev/md1 does not appear to be active.
mdadm: md device /dev/md2 does not appear to be active.
mdadm: md device /dev/md3 does not appear to be active.
/dev/md4:
        Version : 1.0
  Creation Time : Tue Nov 12 13:13:41 2013
     Raid Level : raid6
     Array Size : 5854467072 (5583.26 GiB 5994.97 GB)
  Used Dev Size : 1951489024 (1861.09 GiB 1998.32 GB)
   Raid Devices : 5
  Total Devices : 5
    Persistence : Superblock is persistent

    Update Time : Mon Mar 20 11:09:58 2017
          State : clean
 Active Devices : 5
Working Devices : 5
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 512K

           Name : LaCie-5big:4
           UUID : b488ac4b:a0485b89:9ee56137:36c06d50
         Events : 4286205

    Number   Major   Minor   RaidDevice State
       0       8       34        0      active sync   /dev/sda2
       1       8       18        1      active sync   /dev/sdb2
       2       8        2        2      active sync   /dev/sdc2
       5       8       66        3      active sync   /dev/sdd2
       4       8       50        4      active sync   /dev/sde2


Can you please help?

Thanks,
Matt
matt-white
 
Posts: 7
Joined: Fri Mar 17, 2017 3:38 pm

Re: All Shares Gone - Lacie 5Big Network 2

Postby Jocko » Mon Mar 20, 2017 5:49 pm

Hi matt-white,

There is no issue with your raid, but there surely is one with xfs.

You need a swap partition so that xfs_repair can run properly on the fvdw-sl console.

You can use the swap raid1 set up by the LaCie firmware.

See this post viewtopic.php?f=26&p=24191#p24191
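In short, the idea is (a sketch, assuming the swap raid1 lives on the sd[abcde]5 partitions as on a standard 5big layout; see the linked post for details):
Code: Select all
mdadm --assemble /dev/md3 /dev/sd[abcde]5
swapon /dev/md3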
Jocko
Site Admin - expert
 
Posts: 11367
Joined: Tue Apr 12, 2011 4:48 pm
Location: Orleans, France

Re: All Shares Gone - Lacie 5Big Network 2

Postby matt-white » Tue Mar 21, 2017 7:54 am

Thank you for your help Jocko.

I have tried it as you suggested, but xfs_repair seems to have stopped part-way through:

Code: Select all
fvdw-sta-kirkwood login: root
Password:
root@fvdw-sta-kirkwood:/ # tftp -l /sbin/mdadm -r mdadm -g 10.10.7.148
mdadm                100% |************************************************************|  1100k  0:00:00 ETA
root@fvdw-sta-kirkwood:/ # chmod 755 /sbin/mdadm
root@fvdw-sta-kirkwood:/ # mdadm --assemble /dev/md4 /dev/sd[abcde]2
mdadm: /dev/md4 has been started with 5 drives.
root@fvdw-sta-kirkwood:/ # mdadm --assemble /dev/md3 /dev/sd[abcde]5
mdadm: /dev/md3 has been started with 5 drives.
root@fvdw-sta-kirkwood:/ # swapon /dev/md3
root@fvdw-sta-kirkwood:/ # tftp -l /sbin/xfs_repair -r xfs_repair -g 10.10.7.148
xfs_repair           100% |************************************************************|   622k  0:00:00 ETA
root@fvdw-sta-kirkwood:/ # chmod 755 /sbin/xfs_repair
root@fvdw-sta-kirkwood:/ # tftp -l glibc-mini-mkfs.xfs-25feb14.tar -r glibc-mini-mkfs.xfs-25feb14.tar -g 10.10.7.148
glibc-mini-mkfs.xfs- 100% |************************************************************|  2519k  0:00:00 ETA
root@fvdw-sta-kirkwood:/ # tar -xvf glibc-mini-mkfs.xfs-25feb14.tar -C /
./
./usr/
./usr/lib/
./usr/lib/libuuid.so
./usr/lib/libuuid.so.1.3.0
./usr/lib/libgcc_s.so.1
./usr/lib/libuuid.so.1
./bin/
./sbin/
./sbin/mkfs.xfs
./lib/
./lib/libutil.so.1
./lib/libc-2.17.so
./lib/librt.so.1
./lib/librt-2.17.so
./lib/libc.so.6
./lib/libm-2.17.so
./lib/libpthread.so.0
./lib/ld-linux.so.3
./lib/ld-2.17.so
./lib/libutil-2.17.so
./lib/libpthread-2.17.so
./lib/libm.so.6
root@fvdw-sta-kirkwood:/ # xfs_repair -n /dev/md4
Phase 1 - find and verify superblock...
Phase 2 - using internal log
        - scan filesystem freespace and inode maps...
        - found root inode chunk
Phase 3 - for each AG...
        - scan (but don't clear) agi unlinked lists...
        - process known inodes and perform inode discovery...
        - agno = 0
Killed
root@fvdw-sta-kirkwood:/ #



If I try without the -n option, I get the following warning:
Code: Select all
root@fvdw-sta-kirkwood:/ # xfs_repair /dev/md4
Phase 1 - find and verify superblock...
Phase 2 - using internal log
        - zero log...
ERROR: The filesystem has valuable metadata changes in a log which needs to
be replayed.  Mount the filesystem to replay the log, and unmount it before
re-running xfs_repair.  If you are unable to mount the filesystem, then use
the -L option to destroy the log and attempt a repair.
Note that destroying the log may cause corruption -- please attempt a mount
of the filesystem before doing this.
root@fvdw-sta-kirkwood:/ #

Should I proceed? What would you do?

Thanks,
Matt
matt-white
 
Posts: 7
Joined: Fri Mar 17, 2017 3:38 pm

Re: All Shares Gone - Lacie 5Big Network 2

Postby Jocko » Tue Mar 21, 2017 11:23 am

So something is wrong.

First you should try to mount /dev/md4, but as you have an xfs partition you must use a static mount binary to do it:
See the related fvdw's post: viewtopic.php?f=26&t=1574&start=40#p19163

Then try to mount md4 using the full path to the mount command:
Code: Select all
mkdir /md4
/usr/sbin/mount /dev/md4 /md4


then unmount it:
Code: Select all
umount /md4
and try xfs_repair again. If you still get the warning about the corrupted log, you will have to use the -L option.
Jocko
Site Admin - expert
 
Posts: 11367
Joined: Tue Apr 12, 2011 4:48 pm
Location: Orleans, France

Re: All Shares Gone - Lacie 5Big Network 2

Postby matt-white » Tue Mar 21, 2017 11:57 am

Thanks Jocko.

I tried that mount command again, but it still hangs at the same point.

I also tried the xfs_repair command, but it still got killed part-way through.

Code: Select all
root@fvdw-sta-kirkwood:/ # xfs_repair /dev/md4
Phase 1 - find and verify superblock...
Phase 2 - using internal log
        - zero log...
ERROR: The filesystem has valuable metadata changes in a log which needs to
be replayed.  Mount the filesystem to replay the log, and unmount it before
re-running xfs_repair.  If you are unable to mount the filesystem, then use
the -L option to destroy the log and attempt a repair.
Note that destroying the log may cause corruption -- please attempt a mount
of the filesystem before doing this.

root@fvdw-sta-kirkwood:/ # xfs_repair -L /dev/md4
Phase 1 - find and verify superblock...
Phase 2 - using internal log
        - zero log...
ALERT: The filesystem has valuable metadata changes in a log which is being
destroyed because the -L option was used.
        - scan filesystem freespace and inode maps...
        - found root inode chunk
Phase 3 - for each AG...
        - scan and clear agi unlinked lists...
        - process known inodes and perform inode discovery...
        - agno = 0
Killed
root@fvdw-sta-kirkwood:/ #


Is there anything I can try, or should I just send it to a recovery company?

Thanks for your help,
Matt
matt-white
 
Posts: 7
Joined: Fri Mar 17, 2017 3:38 pm

Re: All Shares Gone - Lacie 5Big Network 2

Postby Jocko » Tue Mar 21, 2017 12:50 pm

So I suggest repeating the xfs_repair command without the -L option but with -v, to get verbose output. Maybe there will be some additional detail about why it aborts...

Before starting the command, check the swap usage:
Code: Select all
swapon -s
Jocko
Site Admin - expert
 
Posts: 11367
Joined: Tue Apr 12, 2011 4:48 pm
Location: Orleans, France

Re: All Shares Gone - Lacie 5Big Network 2

Postby matt-white » Tue Mar 21, 2017 12:58 pm

Jocko,

The swapon command does not support the -s option, and the verbose repair doesn't seem to have revealed much:

Code: Select all
root@fvdw-sta-kirkwood:/ # swapon -s
swapon: invalid option -- s
BusyBox v1.21.0 (2013-02-04 10:48:06 GMT+1) multi-call binary.

Usage: swapon [-a] [-p PRI] [DEVICE]

Start swapping on DEVICE

        -a      Start swapping on all swap devices
        -p PRI  Set swap device priority

root@fvdw-sta-kirkwood:/ # xfs_repair -v /dev/md4
Phase 1 - find and verify superblock...
        - block cache size set to 512 entries
Phase 2 - using internal log
        - zero log...
zero_log: head block 8 tail block 8
        - scan filesystem freespace and inode maps...
        - found root inode chunk
Phase 3 - for each AG...
        - scan and clear agi unlinked lists...
        - process known inodes and perform inode discovery...
        - agno = 0
Killed
root@fvdw-sta-kirkwood:/ #
matt-white
 
Posts: 7
Joined: Fri Mar 17, 2017 3:38 pm

Re: All Shares Gone - Lacie 5Big Network 2

Postby Jocko » Tue Mar 21, 2017 1:52 pm

Ok

So there is not much more we can try as a fix... unless the swap size is still not enough, which would explain this behaviour.

The busybox swapon is useless here, so can you check the swap size (and how much is used) with the top command:
Code: Select all
root@Acrab:/ # top

Mem:    510004k total,   473220k used,    36784k free,     6624k buffers
Swap:   524284k total,    33336k used,   490948k free,   358804k cached
...
(type q to end the top command)

Also, you did not report whether the mount command succeeded. Can you browse the folder tree on /md4?
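For instance:
Code: Select all
cat /proc/mounts    # /dev/md4 should be listed here if the mount succeeded
ls /md4             # the share folders should then be visible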
Jocko
Site Admin - expert
 
Posts: 11367
Joined: Tue Apr 12, 2011 4:48 pm
Location: Orleans, France

Re: All Shares Gone - Lacie 5Big Network 2

Postby matt-white » Tue Mar 21, 2017 2:02 pm

Hi,

There is no mention of swap in the top output:

Code: Select all
Mem: 12644K used, 499136K free, 0K shrd, 1162824K buff, 1162824K cached
CPU:  0.0% usr  0.1% sys  0.0% nic 99.8% idle  0.0% io  0.0% irq  0.0% sirq
Load average: 0.00 0.01 0.05 1/52 2137


I have just run the mount command again, and for the first time it didn't hang!
I can now see my files.
:woohoo
The only difference I can think of is that this time I have deleted the filesystem log.

Within the Shares folder, there are four numbered folders that used to be the share points. How do I restore these?
matt-white
 
Posts: 7
Joined: Fri Mar 17, 2017 3:38 pm

Re: All Shares Gone - Lacie 5Big Network 2

Postby Jocko » Tue Mar 21, 2017 2:25 pm

matt-white wrote:There is no mention of swap in the top output
If you did not reboot the nas after the last swapon /dev/md3, that would mean I forgot a step before adding the raid swap.
So the steps should be:
Code: Select all
mdadm --assemble /dev/md3 /dev/sd[abcde]5
mkswap /dev/md3
swapon /dev/md3
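To verify afterwards that the swap is really active (a quick alternative to the missing swapon -s on busybox), /proc/swaps should list /dev/md3:
Code: Select all
cat /proc/swaps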


Anyhow, if you can mount the data raid, you should now try to reboot the nas to see if the LaCie firmware succeeds in starting. In the telnet window, do:
Code: Select all
reboot -f
Jocko
Site Admin - expert
 
Posts: 11367
Joined: Tue Apr 12, 2011 4:48 pm
Location: Orleans, France
