pacco wrote: checked the old "defective" disk. Works like a charm!
How do you check that? Do you use e2fsck?
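In case it helps, a non-destructive check from the fvdw-sl console could look roughly like this (the device and partition names are only examples, adapt them to your own disk):

e2fsck -n -f /dev/sda2    # read-only check of the data partition, reports errors but writes nothing
smartctl -H /dev/sda      # overall SMART health of the disk, if smartctl is available on the console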
pacco wrote: So I think what caused my problems was this; the system believes sda is in slot 1, sdb in 2, sdc in 3, sde in 4 and sdd in 5. Using the standalone kernel and using udevstart corrected that problem and I was able to mount the degraded RAID. Now back in the Lacie firmware it probably expects sdd in 5 and that one is the wiped disk.
You can forget udevstart if you use kernel version #17 instead of #10:
http://plugout.net/viewtopic.php?f=26&t=2210&start=30#p18824

pacco wrote: So I think what caused my problems was this; the system believes sda is in slot 1, sdb in 2, sdc in 3, sde in 4 and sdd in 5. Using the standalone kernel and using udevstart corrected that problem and I was able to mount the degraded RAID. Now back in the Lacie firmware it probably expects sdd in 5 and that one is the wiped disk.
I don't think that. When you use the fvdw-sl console, you work in a temporary environment, and udevstart only fixes the /dev naming issue inside that environment.
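If you want to verify which raid slot each disk really claims (independently of the sda/sdb/... naming), you can read the md superblocks directly; the partition number 2 below is only an assumption, use whatever partition is the raid member on your disks:

cat /proc/mdstat              # arrays and members the kernel currently knows about
mdadm --examine /dev/sda2     # the "Device Role" (or "this") line shows the slot this member thinks it occupies
mdadm --examine /dev/sdb2     # repeat for sdc2, sdd2, sde2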
pacco wrote: Wiped its partitions and just booted the Lacie. Dashboard comes on and it says: The volume cannot be found.
I think you should start the 5big2 without the new disk, because the disk replacement process relies on hot-swapping. You should then get a degraded RAID, and when you hot-plug the new disk:
- if you use the auto mode, the firmware should detect it as a new disk and add it to the RAID to rebuild the array
- if you use the manual mode, you have to select it yourself to add it to the RAID (a rough command-line equivalent is sketched below)
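For completeness, if you ever have to add the disk by hand from the fvdw-sl console instead of the dashboard, the usual mdadm sequence looks roughly like this; /dev/md0 and /dev/sdd2 are only examples, and the new disk must already be partitioned like the other members:

cat /proc/mdstat                 # confirm the array is running in degraded mode
mdadm --add /dev/md0 /dev/sdd2   # add the matching partition of the new disk; the rebuild starts automatically
cat /proc/mdstat                 # follow the rebuild progress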