All Shares Gone - Lacie 5Big Network 2

Re: All Shares Gone - Lacie 5Big Network 2

Postby cwherbert » Mon Jun 22, 2015 8:26 pm

hi fvdw,

Did as you said, but it's been "blinking" for over 5 minutes now :crazy :crazy :crazy :sob :sob :sob :sob

root@fvdw-sta-kirkwood:/ # cd /
root@fvdw-sta-kirkwood:/ # pwd
/
root@fvdw-sta-kirkwood:/ # tftp -l mount.tar -r mount.tar -g 192.168.192.55
mount.tar 100% |****************************************************************| 199k 0:00:00 ETA
root@fvdw-sta-kirkwood:/ # tar -xvf mount.tar
usr/
usr/sbin/
usr/sbin/mount
usr/lib/
usr/lib/libblkid.so.1
usr/lib/libblkid.so.1.1.0
usr/lib/libblkid.so
root@fvdw-sta-kirkwood:/ # /usr/sbin/mount -o ro /dev/md4 /mountpoint
cwherbert
 
Posts: 24
Joined: Sun Jun 14, 2015 7:08 pm

Re: All Shares Gone - Lacie 5Big Network 2

Postby cwherbert » Mon Jun 22, 2015 8:28 pm

I can hear some noises, reading/writing or something from the NAS box ..... :pound
cwherbert
 
Posts: 24
Joined: Sun Jun 14, 2015 7:08 pm

Re: All Shares Gone - Lacie 5Big Network 2

Postby fvdw » Mon Jun 22, 2015 8:39 pm

keep your fingers crossed :whistle
fvdw
Site Admin - expert
 
Posts: 13471
Joined: Tue Apr 12, 2011 2:30 pm
Location: Netherlands

Re: All Shares Gone - Lacie 5Big Network 2

Postby fvdw » Mon Jun 22, 2015 8:48 pm

If mount still fails, the one thing I can think of is that the file system is damaged.
If it is an XFS file system, then using xfs_repair might be worth a try.
But let's wait with this until we have advice from the experts.

@Jocko, what do you think of this option?
fvdw
Site Admin - expert
 
Posts: 13471
Joined: Tue Apr 12, 2011 2:30 pm
Location: Netherlands

Re: All Shares Gone - Lacie 5Big Network 2

Postby Jocko » Mon Jun 22, 2015 8:51 pm

Just a remark: if the raid can be assembled without issue, your nas should be able to start normally :thinking

But indeed, if the fs on it is corrupted, that could explain your issue.

If I remember well, another user succeeded in restoring his data with xfs_repair
Jocko
Site Admin - expert
 
Posts: 11529
Joined: Tue Apr 12, 2011 4:48 pm
Location: Orleans, France

Re: All Shares Gone - Lacie 5Big Network 2

Postby fvdw » Mon Jun 22, 2015 8:53 pm

We have xfs_repair available...
viewtopic.php?f=7&t=1271#p18801
fvdw
Site Admin - expert
 
Posts: 13471
Joined: Tue Apr 12, 2011 2:30 pm
Location: Netherlands

Re: All Shares Gone - Lacie 5Big Network 2

Postby fvdw » Mon Jun 22, 2015 9:06 pm

Another thing you can do is connect with a second telnet window and give the command
Code: Select all
dmesg

Maybe we can see why mount fails or doesn't complete.
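
If the log is long, something like this (just a sketch, assuming grep is available in the standalone busybox shell) would narrow it down to the md4/XFS messages:
Code: Select all
dmesg | grep -i -E 'md4|xfs'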
fvdw
Site Admin - expert
 
Posts: 13471
Joined: Tue Apr 12, 2011 2:30 pm
Location: Netherlands

Re: All Shares Gone - Lacie 5Big Network 2

Postby cwherbert » Mon Jun 22, 2015 9:33 pm

hi guys,

Will give dmesg in another session a try tomorrow and paste the output here.

Question with regard to xfs_repair ….. will that wipe my data? I do not have a copy of the data backed up anywhere else, so I am very cautious :oops:

Btw, I can see the drives through the Lacie web frontend, but no share information - does that still fit with the xfs_repair theory? Would I be able to see anything in the Lacie web frontend if the filesystem is corrupted?

thank you guys so much for sticking with this and helping me out - very much appreciated!!!!! :punk :applause :applause :applause
cwherbert
 
Posts: 24
Joined: Sun Jun 14, 2015 7:08 pm

Re: All Shares Gone - Lacie 5Big Network 2

Postby fvdw » Tue Jun 23, 2015 10:48 am

I can see the drives through the lacie web frontend, but no share information


Does this mean the Lacie firmware is running?
In that case you could try to enable ssh access (there seem to be hacks for that) and do the repair from there.

Normally xfs_repair should not cause data loss, unless the damage was already done before the file system got corrupted. There should also be a program xfs_check; I need to look if I have that available in a version that will run with the standalone kernel. That will be this evening when I return home.

Also, a command like
Code: Select all
cat /proc/mdstat

could give you some info on what the nas is doing with the array.

---edit--- See here how to use xfs_repair: xfs-repair info
Running it with the option -n will only do a check.

PS: you will also need to install the package glibc-mini-mkfs.xfs-25feb14 to be able to use xfs-repair.
Upload the tar archive in the glibc-mini...zip file to the 5big2 via the telnet terminal and extract it.
Then upload the file xfs-repair from the xfs_repair.zip file, place it in the folder /usr/sbin and make it executable.
Now you should be able to use xfs-repair.
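
As a rough sketch only (the tftp server IP and the exact archive/file names are examples here, adjust them to what you actually downloaded, and make sure /dev/md4 is not mounted), the manual steps could look like this:
Code: Select all
cd /
tftp -l glibc-mini-mkfs.xfs-25feb14.tar -r glibc-mini-mkfs.xfs-25feb14.tar -g 192.168.192.55
tar -xvf glibc-mini-mkfs.xfs-25feb14.tar
tftp -l xfs-repair -r xfs-repair -g 192.168.192.55
mv xfs-repair /usr/sbin/xfs-repair
chmod +x /usr/sbin/xfs-repair
# check-only run first: -n writes nothing to the disk
/usr/sbin/xfs-repair -n /dev/md4

Only when the check-only run looks sane would you run it again without -n to actually repair.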

---edit---
With the latest version 6.0 there is no need to upload these tools manually. Use the new option in the fvdw-sl-programs menu:
* run the command fvdw-sl-programs in the telnet window
(a menu will then appear)
* select "Upload and extract glibc mini and tools"
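
For example, from the telnet prompt of the standalone kernel it would look roughly like this:
Code: Select all
root@fvdw-sta-kirkwood:/ # fvdw-sl-programs
# a menu appears; choose "Upload and extract glibc mini and tools"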

Jocko
fvdw
Site Admin - expert
 
Posts: 13471
Joined: Tue Apr 12, 2011 2:30 pm
Location: Netherlands

Re: All Shares Gone - Lacie 5Big Network 2

Postby cwherbert » Sun Aug 09, 2015 10:46 am

Hi Guys,

sorry for the long pause.

I am back now and have gone through loading the standalone kernel, everything up to mount, and nothing again..... but this time, as mentioned last time, I got the following output from dmesg:

[ 188.311072] random: nonblocking pool is initialized
[ 562.635924] md: md4 stopped.
[ 562.855243] md: bind<sdb2>
[ 562.858449] md: bind<sdc2>
[ 562.861756] md: bind<sdd2>
[ 562.864959] md: bind<sde2>
[ 562.868319] md: bind<sda2>
[ 562.874136] md/raid:md4: device sda2 operational as raid disk 0
[ 562.880068] md/raid:md4: device sde2 operational as raid disk 4
[ 562.886044] md/raid:md4: device sdd2 operational as raid disk 3
[ 562.892005] md/raid:md4: device sdc2 operational as raid disk 2
[ 562.897910] md/raid:md4: device sdb2 operational as raid disk 1
[ 562.905333] md/raid:md4: allocated 0kB
[ 562.909148] md/raid:md4: raid level 5 active with 5 out of 5 devices, algorithm 2
[ 562.916668] RAID conf printout:
[ 562.916678] --- level:5 rd:5 wd:5
[ 562.916688] disk 0, o:1, dev:sda2
[ 562.916696] disk 1, o:1, dev:sdb2
[ 562.916704] disk 2, o:1, dev:sdc2
[ 562.916712] disk 3, o:1, dev:sdd2
[ 562.916720] disk 4, o:1, dev:sde2
[ 562.916922] md4: detected capacity change from 0 to 7993301139456
[ 612.973572] md4: unknown partition table
[ 613.014046] XFS (md4): Mounting Filesystem
[ 613.567628] XFS (md4): Starting recovery (logdev: internal)
[ 613.798038] XFS (md4): resetting quota flags
[ 613.802920] XFS: Internal error XFS_WANT_CORRUPTED_GOTO at line 1629 of file fs/xfs/xfs_alloc.c. Caller 0xc02292d0
[ 613.813403] CPU: 0 PID: 2058 Comm: mount Not tainted 3.14.2 #24
[ 613.819334] [<c0018b5c>] (unwind_backtrace) from [<c0015950>] (show_stack+0x10/0x14)
[ 613.827094] [<c0015950>] (show_stack) from [<c022815c>] (xfs_free_ag_extent+0x14c/0x5b0)
[ 613.835204] [<c022815c>] (xfs_free_ag_extent) from [<c02292d0>] (xfs_free_extent+0xd8/0x10c)
[ 613.843665] [<c02292d0>] (xfs_free_extent) from [<c02554e4>] (xlog_recover_process_efi+0x18c/0x1fc)
[ 613.852731] [<c02554e4>] (xlog_recover_process_efi) from [<c02555e0>] (xlog_recover_process_efis+0x8c/0x104)
[ 613.862572] [<c02555e0>] (xlog_recover_process_efis) from [<c02598f4>] (xlog_recover_finish+0x18/0x90)
[ 613.871895] [<c02598f4>] (xlog_recover_finish) from [<c025d6b0>] (xfs_log_mount_finish+0x34/0x4c)
[ 613.880768] [<c025d6b0>] (xfs_log_mount_finish) from [<c0220b54>] (xfs_mountfs+0x4d8/0x694)
[ 613.889139] [<c0220b54>] (xfs_mountfs) from [<c02231d0>] (xfs_fs_fill_super+0x1d8/0x2a0)
[ 613.897241] [<c02231d0>] (xfs_fs_fill_super) from [<c00a0d90>] (mount_bdev+0x120/0x184)
[ 613.905247] [<c00a0d90>] (mount_bdev) from [<c02216a8>] (xfs_fs_mount+0x10/0x1c)
[ 613.912654] [<c02216a8>] (xfs_fs_mount) from [<c00a1760>] (mount_fs+0x10/0xc0)
[ 613.919866] [<c00a1760>] (mount_fs) from [<c00b8e5c>] (vfs_kern_mount+0x48/0x10c)
[ 613.927360] [<c00b8e5c>] (vfs_kern_mount) from [<c00bb758>] (do_mount+0x7d8/0x93c)
[ 613.934936] [<c00bb758>] (do_mount) from [<c00bbaf4>] (SyS_mount+0x84/0xb8)
[ 613.941912] [<c00bbaf4>] (SyS_mount) from [<c00127c0>] (ret_fast_syscall+0x0/0x2c)
[ 613.949482] XFS (md4): Failed to recover EFIs
[ 613.953857] XFS (md4): log mount finish failed
root@fvdw-sta-kirkwood:/ #


Do I read this correctly - the mount fails because the filesystem is corrupted?

Also, can you confirm that the filesystem is indeed XFS, so that I can start trying xfs_repair? Do you think that would fix it?

I greatly, greatly, greatly appreciate the time and effort you guys have already invested in my problem.

:applause :applause :applause :applause :applause
cwherbert
 
Posts: 24
Joined: Sun Jun 14, 2015 7:08 pm
