
adding ssh to standalone firmware instance

PostPosted: Sun Oct 04, 2015 10:23 am
by AlexStanica
I'm trying to transfer data from one Lacie 5Big Network vs2 to another identical one.
The first one has a crashed factory firmware, with 5 x 2 TB disks in RAID5, one of which is faulty.
I am perfectly able to run a standalone kernel (using the console), copy mdadm over, assemble the array, see the files and even transfer files via tftp to the machine I'm using to boot from.

However, tftp is slow and I'd like to use the second Lacie 5Big as the destination.
I am not proficient with the Unix terminal, but I found that tar piped over ssh could be a fast and clean way to accomplish my task:

tar cpf - /some/important/data | ssh user@destination-machine "tar xpf - -C /some/directory/"


However, and this is where my question comes in, the standalone instance lacks ssh, so I cannot connect to the second Lacie 5Big.
So how do I add an ssh daemon to the standalone instance? I presume it would be much like what I did for mdadm, but I wasn't able to find how to do it on the forum.
Could you guys please steer me in the right direction?
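For reference, this is roughly how I got mdadm onto the standalone kernel (the full session log is further down in this thread). I imagine a statically linked ssh or scp client could be pulled in the same way, if such a binary exists for this platform; the last two commands below are just a guess on my part:

tftp -l /sbin/mdadm -r mdadm -g 192.168.202.3
chmod 755 /sbin/mdadm
# hypothetical: same pattern for a static ssh client, if one is available
tftp -l /usr/bin/ssh -r ssh -g 192.168.202.3
chmod 755 /usr/bin/ssh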

PS:
The second Lacie 5Big has the fvdw-sl firmware.
Thinking about it, I see that I have 3 options:
1. copy from the standalone kernel boot to the second Lacie using tftp, tar, rsync or scp, but first I need a way to access the second Lacie from the first
2. mount an NFS share from the second Lacie on the first and use cp or tar to do the copying (a rough sketch is below)
3. start a service on the standalone kernel (like samba, nfs, ftp) and access it from the second Lacie to do the copying.
I chose 1, as it seemed simpler. Any other input is really appreciated.
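For what it's worth, option 2 might look something like this, assuming the standalone kernel has NFS client support and the second Lacie exports a share (the IP address and paths are placeholders I made up):

# mount the NFS export from the second Lacie on the standalone machine
mkdir -p /nfs-dest
mount -t nfs 192.168.1.20:/share /nfs-dest
# copy everything from the mounted raid, preserving permissions and ownership
cp -a /mnt/. /nfs-dest/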

Many thanks

Re: adding ssh to standalone firmware instance

PostPosted: Sun Oct 04, 2015 10:53 am
by Jocko
If you use fvdw-sl console version 5.5, the easiest way is to use a USB disk on the standalone 5big2.

So, after getting telnet access:
- plug in your USB disk (fvdw can confirm whether other filesystems are supported, but the fvdw-sl console supports at least ext3 and fat32 on a USB disk)
- run udevstart to update the USB device nodes
- then you should be able to mount your USB partition and back up your data onto it (see the sketch below).

Note: since you use fvdw-sl on your other 5big2, you can also unplug one of its disks sd[bcde] if it is not yet included in a raid. Then you just need to mount its partition 8 (the data partition) to back up onto it directly.
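A minimal sketch of those steps, assuming the USB disk shows up as /dev/sdf with a single ext3 or fat32 partition (the device name and mount points are only examples):

# refresh the device nodes after plugging in the USB disk
udevstart
# mount the USB partition (assumed device name) and copy the data off the mounted raid
mkdir -p /usb
mount /dev/sdf1 /usb
tar cpf - -C /mnt . | tar xpf - -C /usb
# if you use a data disk from the other 5big2 instead, mount its partition 8 the same way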

Re: adding ssh to standalone firmware instance

PostPosted: Mon Oct 05, 2015 7:43 am
by AlexStanica
Hi Jocko and thanks for your quick reply. Your suggestion is great!

I tried your idea of inserting a drive from the second Lacie, replacing the faulty RAID member from the first Lacie.
I get to assemble the array but when I try to mount it I get:
mount: mounting /dev/md4 on /mnt/ failed: Input/output error


I reverted to the initial drives, that is, the faulty one back in its slot, and unfortunately I keep getting the same error.

I did things methodically: I identified the faulty one by starting the Lacie without disk 4 (sdd2 was faulty) and running mdadm --examine to confirm all remaining drives were OK. I then turned it off, inserted the other HDD, powered it on again, etc.

What am I doing wrong? Should I be worried that I cannot mount the RAID anymore?
Thanks for your time!

Re: adding ssh to standalone firmware instance

PostPosted: Mon Oct 05, 2015 8:10 am
by Jocko
Hi AlexStanica
AlexStanica wrote: I tried your idea of inserting a drive from the second Lacie, replacing the faulty RAID member from the first Lacie.
I get to assemble the array but when I try to mount it I get:
Be careful, I never said that.

I suggested using a USB disk to back up the data, not including it in the raid.

Re: adding ssh to standalone firmware instance

PostPosted: Mon Oct 05, 2015 8:29 am
by AlexStanica
When I said replace, I meant only the slot in the first Lacie.
I did not fiddle with the array itself, just examined and assembled it.

Is it possible that some kind of corruption occurred while taking out the HDD and inserting the other one?
Should I run an xfs_repair -n?

Re: adding ssh to standalone firmware instance

PostPosted: Mon Oct 05, 2015 8:34 am
by AlexStanica
Just leaving a bit of info about the RAID here:

/dev/sda2:
Magic : a92b4efc
Version : 1.0
Feature Map : 0x0
Array UUID : 7ac712a5:93fde66d:36524ba9:86519b6c
Name : NAS5:4
Creation Time : Wed Sep 17 09:28:15 2014
Raid Level : raid5
Raid Devices : 5

Avail Dev Size : 3902978048 (1861.09 GiB 1998.32 GB)
Array Size : 7805956096 (7444.34 GiB 7993.30 GB)
Super Offset : 3902978304 sectors
State : clean
Device UUID : e7ce5e12:dd9d0bbb:b3a1a642:1f253af0

Update Time : Tue Dec 23 10:57:08 2014
Checksum : 698a290a - correct
Events : 325716

Layout : left-symmetric
Chunk Size : 512K

Device Role : Active device 0
Array State : AAA.A ('A' == active, '.' == missing)
/dev/sdb2:
Magic : a92b4efc
Version : 1.0
Feature Map : 0x0
Array UUID : 7ac712a5:93fde66d:36524ba9:86519b6c
Name : NAS5:4
Creation Time : Wed Sep 17 09:28:15 2014
Raid Level : raid5
Raid Devices : 5

Avail Dev Size : 3902978048 (1861.09 GiB 1998.32 GB)
Array Size : 7805956096 (7444.34 GiB 7993.30 GB)
Super Offset : 3902978304 sectors
State : clean
Device UUID : 244ca798:f2558d6e:5ed78944:aab49d48

Update Time : Tue Dec 23 10:57:08 2014
Checksum : fd9b2052 - correct
Events : 325716

Layout : left-symmetric
Chunk Size : 512K

Device Role : Active device 1
Array State : AAA.A ('A' == active, '.' == missing)
/dev/sdc2:
Magic : a92b4efc
Version : 1.0
Feature Map : 0x0
Array UUID : 7ac712a5:93fde66d:36524ba9:86519b6c
Name : NAS5:4
Creation Time : Wed Sep 17 09:28:15 2014
Raid Level : raid5
Raid Devices : 5

Avail Dev Size : 3902978048 (1861.09 GiB 1998.32 GB)
Array Size : 7805956096 (7444.34 GiB 7993.30 GB)
Super Offset : 3902978304 sectors
State : clean
Device UUID : 43d6d20f:3d02ab48:417803ae:8fdec494

Update Time : Tue Dec 23 10:57:08 2014
Checksum : 4852186 - correct
Events : 325716

Layout : left-symmetric
Chunk Size : 512K

Device Role : Active device 2
Array State : AAA.A ('A' == active, '.' == missing)
mdadm: No md superblock detected on /dev/sdd2.
/dev/sde2:
Magic : a92b4efc
Version : 1.0
Feature Map : 0x0
Array UUID : 7ac712a5:93fde66d:36524ba9:86519b6c
Name : NAS5:4
Creation Time : Wed Sep 17 09:28:15 2014
Raid Level : raid5
Raid Devices : 5

Avail Dev Size : 3902978048 (1861.09 GiB 1998.32 GB)
Array Size : 7805956096 (7444.34 GiB 7993.30 GB)
Super Offset : 3902978304 sectors
State : clean
Device UUID : 2e4a9986:40aa385a:640bb0c0:9e422729

Update Time : Tue Dec 23 10:57:08 2014
Checksum : 33e834a8 - correct
Events : 325716

Layout : left-symmetric
Chunk Size : 512K

Device Role : Active device 4
Array State : AAA.A ('A' == active, '.' == missing)


I tftp'd xfs_repair over, but when I run it I get "xfs_repair: error while loading shared libraries: librt.so.1: cannot open shared object file: No such file or directory"

Re: adding ssh to standalone firmware instance

PostPosted: Mon Oct 05, 2015 3:42 pm
by Jocko
I need to understand what you want to do and what your issue is.

First, to use xfs_repair, you also need to upload the tar file for xfs support and install it.
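Roughly the same way as you uploaded mdadm, something like this (the archive name below is only an example, use the actual xfs support tar for your console version):

# pull the xfs support archive over tftp and unpack it at the root of the standalone system
tftp -l /xfs-support.tar -r xfs-support.tar -g 192.168.202.3
tar xf /xfs-support.tar -C /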

Did you succeed in re-assembling the data raid (md4) with sd[abcde]2?

I note you have a faulty disk. (In fact I assume sdd2 is not included because it is the disk from your second 5big2?)

xfs_repair is completely useless for fixing that: it should be used only to repair the file system on the raid, and it does nothing about the raid structure!

So explain what you want to do.

Re: adding ssh to standalone firmware instance

PostPosted: Mon Oct 05, 2015 3:52 pm
by AlexStanica
I am trying to save all data from a device with RAID5, formerly running on the official firmware.
One of the drives failed, but I was successful in booting with the console, examining and assembling the RAID.
I could also mount it and see the files. I wanted to transfer the files to another Lacie running fvdw-sl firmware, but really did not know how.
I took your advice and inserted a disk from the second Lacie into the first Lacie, in the slot of the faulty drive. After this I was no longer able to mount the RAID. I made no changes to the array.

The commands I ran are:
root@fvdw-sta-kirkwood:/ # su
~ # tftp -l /sbin/mdadm -r mdadm -g 192.168.202.3
mdadm 100% |***************************************************************************************| 1100k 0:00:00 ETA
~ # chmod 755 /sbin/mdadm
~ # mdadm --assemble /dev/md4 /dev/sd[abce]2
mdadm: /dev/md4 has been started with 4 drives (out of 5).
~ # mount -o ro /dev/md4 /mnt
mount: mounting /dev/md4 on /mnt failed: Input/output error
~ # cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
md4 : active raid5 sda2[0] sde2[4] sdc2[2] sdb2[1]
7805956096 blocks super 1.0 level 5, 512k chunk, algorithm 2 [5/4] [UUU_U]

unused devices: <none>
~ # fdisk -l

Disk /dev/mtdblock1: 16 MB, 16777216 bytes
255 heads, 63 sectors/track, 2 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/mtdblock1 doesn't contain a valid partition table

Disk /dev/mtdblock2: 250 MB, 250609664 bytes
255 heads, 63 sectors/track, 30 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/mtdblock2 doesn't contain a valid partition table

Disk /dev/sdc: 2000.3 GB, 2000398934016 bytes
255 heads, 63 sectors/track, 243201 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot Start End Blocks Id System
/dev/sdc1 1 243202 1953514583+ ee EFI GPT

Disk /dev/sdb: 2000.3 GB, 2000398934016 bytes
255 heads, 63 sectors/track, 243201 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot Start End Blocks Id System
/dev/sdb1 1 243202 1953514583+ ee EFI GPT

Disk /dev/sda: 2000.3 GB, 2000398934016 bytes
255 heads, 63 sectors/track, 243201 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot Start End Blocks Id System
/dev/sda1 1 243202 1953514583+ ee EFI GPT

Disk /dev/sde: 2000.3 GB, 2000398934016 bytes
255 heads, 63 sectors/track, 243201 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot Start End Blocks Id System
/dev/sde1 1 243202 1953514583+ ee EFI GPT

Disk /dev/sdd: 2000.3 GB, 2000398934016 bytes
255 heads, 63 sectors/track, 243201 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot Start End Blocks Id System
/dev/sdd1 1 243202 1953514583+ ee EFI GPT
fdisk: device has more than 2^32 sectors, can't use all of them

Disk /dev/md4: 2199.0 GB, 2199023255040 bytes
2 heads, 4 sectors/track, 536870911 cylinders
Units = cylinders of 8 * 512 = 4096 bytes

Disk /dev/md4 doesn't contain a valid partition table

~ # mdadm -Ss
mdadm: stopped /dev/md4
~ # poweroff -f


Later edit: Yes, I assembled the array without sdd2. sdd2 has no superblock, and I'd like to use its slot for another 2 TB drive from the second Lacie (as that would yield the best transfer speed).

~ # mdadm --assemble /dev/md4 /dev/sd[abcde]2
mdadm: no RAID superblock on /dev/sdd2
mdadm: /dev/sdd2 has no superblock - assembly aborted

Re: adding ssh to standalone firmware instance

PostPosted: Mon Oct 05, 2015 4:35 pm
by Jocko
To assemble your raid I believe you should not include sdd2 in the mdadm command.

So try this:
mdadm --assemble /dev/md4 /dev/sda2  /dev/sdb2  /dev/sdc2  /dev/sde2


I don't know if version 5.5 contains it (@fvdw can confirm this point); you can try to use the binary available in this post: http://plugout.net/viewtopic.php?f=26&t=1574&start=40#p19163

If you fail again to mount it, that means the fs on md4 is now corrupted and you should use xfs_repair.

But first, try to mount md4 with this tool.

Re: adding ssh to standalone firmware instance

PostPosted: Mon Oct 05, 2015 4:50 pm
by AlexStanica
~ # usr/sbin/mount -o ro /dev/md4 /mnt
mount: /dev/md4: can't read superblock


I went ahead and ran dmesg in a new telnet window:

[ 196.245538] md: bind<sdb2>
[ 196.248689] md: bind<sdc2>
[ 196.251902] md: bind<sde2>
[ 196.255011] md: bind<sda2>
[ 196.260929] md/raid:md4: device sda2 operational as raid disk 0
[ 196.266859] md/raid:md4: device sde2 operational as raid disk 4
[ 196.272945] md/raid:md4: device sdc2 operational as raid disk 2
[ 196.278859] md/raid:md4: device sdb2 operational as raid disk 1
[ 196.286277] md/raid:md4: allocated 0kB
[ 196.290147] md/raid:md4: raid level 5 active with 4 out of 5 devices, algorithm 2
[ 196.297610] RAID conf printout:
[ 196.297620] --- level:5 rd:5 wd:4
[ 196.297629] disk 0, o:1, dev:sda2
[ 196.297637] disk 1, o:1, dev:sdb2
[ 196.297646] disk 2, o:1, dev:sdc2
[ 196.297654] disk 4, o:1, dev:sde2
[ 196.297828] md4: detected capacity change from 0 to 7993299042304
[ 240.029462] md4: unknown partition table
[ 240.166442] XFS (md4): Mounting Filesystem
[ 240.596579] XFS (md4): Starting recovery (logdev: internal)
[ 240.604211] XFS (md4): xlog_recover_process_data: bad clientid 0x0
[ 240.610442] XFS (md4): log mount/recovery failed: error 5
[ 240.616025] XFS (md4): log mount failed


I guess I should try an xfs_repair. Any advice on good practices so I don't break things even more? Thanks
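From what I have read so far, the usual precautions seem to be: keep the array read-only, run the no-modify check first, and only zero the log as a last resort, roughly like this (please correct me if I got this wrong):

# dry run: -n reports problems without writing anything
xfs_repair -n /dev/md4
# xfs_repair refuses to run on a filesystem with a dirty log;
# -L zeroes the log and discards unreplayed changes, so it is a last resort
xfs_repair -L /dev/md4
# afterwards, mount read-only and copy the data off
mount -o ro /dev/md4 /mnt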