[SOLVED] After a firmware upgrade, blinking blue, lost 5Big2

Re: After a firmware upgrade, blinking blue and 5Big2 is lost

Postby fvdw » Sun Jun 05, 2016 6:08 pm

I don't follow the last part of your post. :scratch:

Did you follow the instructions and keep the NAS switched off until the uboot window said it was waiting for uboot?

The NAS must receive a signal from the fvdw-sl console to interrupt boot and switch uboot to netconsole mode, which enables uploading the standalone kernel. If the uboot window keeps saying "waiting for uboot" when you switch on the NAS, then the console's signal is not reaching the NAS, for example due to firewall issues, a bad wireless connection, more than one network interface, or a router not transmitting the broadcast signal. In a VMware environment I am not sure whether the console has access to your network interface on all ports.
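[Editor's note: as an illustrative sketch, not part of fvdw's instructions — the console and the NAS must sit in the same broadcast domain for that signal to arrive. On a typical home LAN with a 255.255.255.0 netmask, that means the first three octets of their IP addresses must match. The IPs below are hypothetical examples:]

```shell
#!/bin/sh
# Illustrative check: do two hosts share the same /24 subnet?
# (assumes a 255.255.255.0 netmask, as on most home LANs)
same_subnet24() {
    a=${1%.*}   # strip the last octet of the first IP
    b=${2%.*}   # strip the last octet of the second IP
    [ "$a" = "$b" ]
}

if same_subnet24 10.211.55.35 10.211.55.10; then
    echo "same /24 - a broadcast from the console can reach the NAS"
else
    echo "different /24 - a router in between may drop the broadcast"
fi
```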
fvdw
Site Admin - expert
 
Posts: 13242
Joined: Tue Apr 12, 2011 2:30 pm
Location: Netherlands

Re: After a firmware upgrade, blinking blue and 5Big2 is lost

Postby Glaven Clattuck » Sun Jun 05, 2016 6:50 pm

fvdw wrote:Did you follow the instruction and kept the nas switched until the uboot window mentioned waiting for uboot? [...]


I don't know if I did something wrong, but these are the steps I took:

1) power off the Lacie 5Big2N
2) unplug all devices from my Asus router, except the Mac Pro and the Lacie
3) change the LAN network from 192.168.2.x to 10.211.55.x
4) disable the firewall on the router
5) start the Win 7 VM (Parallels or VMware, it's the same)
6) under Windows 7, start the fvdw-sl-console
7) from the Action menu choose the stand-alone kernel
8) choose the kirkwood image
9) a DOS window appears with the phrase: waiting for u-boot
10) turn on the Lacie
11) .......nothing more.......

Is that correct?
Glaven Clattuck
Donator VIP
 
Posts: 152
Joined: Sat May 21, 2016 3:21 pm
Location: Urbe Immortalis

Re: After a firmware upgrade, blinking blue and 5Big2 is lost

Postby fvdw » Sun Jun 05, 2016 7:18 pm

Yes, that is the correct procedure. If nothing happens, then there is a network issue preventing the console from contacting the NAS.
fvdw
Site Admin - expert
 
Posts: 13242
Joined: Tue Apr 12, 2011 2:30 pm
Location: Netherlands

Re: After a firmware upgrade, blinking blue and 5Big2 is lost

Postby Glaven Clattuck » Mon Jun 06, 2016 2:24 pm

Ok, as always you are right.
The console does not work via a VM.
So I connected my corporate PC, and the telnet session started.
I followed a thread from you and another user on NAS-Central and ran the mdadm command.
This is the result:
Code: Select all
root@fvdw-sta-kirkwood:/ # tftp -r mdadm -l /sbin/mdadm -g 10.211.55.35
mdadm                100% |***********************************************************************************************************************************************|  1100k  0:00:00 ETA
root@fvdw-sta-kirkwood:/ # chmod 777 /sbin/mdadm
root@fvdw-sta-kirkwood:/ # mdadm --examine /dev/sd[abcde]2
/dev/sda2:
          Magic : a92b4efc
        Version : 1.0
    Feature Map : 0x0
     Array UUID : ad468f58:53571e53:74558f1d:48545ec5
           Name : (none):4
  Creation Time : Tue Oct 28 20:06:48 2014
     Raid Level : raid5
   Raid Devices : 5

 Avail Dev Size : 7809984512 (3724.09 GiB 3998.71 GB)
     Array Size : 15619969024 (14896.36 GiB 15994.85 GB)
   Super Offset : 7809984768 sectors
          State : clean
    Device UUID : dc7d0611:5aa89423:0493d8ff:d0875d7d

    Update Time : Thu Jun  2 18:42:58 2016
       Checksum : f271efc6 - correct
         Events : 716095

         Layout : left-symmetric
     Chunk Size : 64K

   Device Role : Active device 0
   Array State : AAAAA ('A' == active, '.' == missing)
/dev/sdb2:
          Magic : a92b4efc
        Version : 1.0
    Feature Map : 0x0
     Array UUID : ad468f58:53571e53:74558f1d:48545ec5
           Name : (none):4
  Creation Time : Tue Oct 28 20:06:48 2014
     Raid Level : raid5
   Raid Devices : 5

 Avail Dev Size : 7809984512 (3724.09 GiB 3998.71 GB)
     Array Size : 15619969024 (14896.36 GiB 15994.85 GB)
   Super Offset : 7809984768 sectors
          State : clean
    Device UUID : 2e0f0acd:a2afcae8:f521a5c4:8fdcdfc5

    Update Time : Thu Jun  2 18:42:58 2016
       Checksum : 80fa6c14 - correct
         Events : 716095

         Layout : left-symmetric
     Chunk Size : 64K

   Device Role : Active device 3
   Array State : AAAAA ('A' == active, '.' == missing)
/dev/sdc2:
          Magic : a92b4efc
        Version : 1.0
    Feature Map : 0x0
     Array UUID : ad468f58:53571e53:74558f1d:48545ec5
           Name : (none):4
  Creation Time : Tue Oct 28 20:06:48 2014
     Raid Level : raid5
   Raid Devices : 5

 Avail Dev Size : 7809984512 (3724.09 GiB 3998.71 GB)
     Array Size : 15619969024 (14896.36 GiB 15994.85 GB)
   Super Offset : 7809984768 sectors
          State : clean
    Device UUID : 5ea5fe06:16321d30:eb67cd7b:4f4e46cf

    Update Time : Thu Jun  2 18:42:58 2016
       Checksum : c2d03c6b - correct
         Events : 716095

         Layout : left-symmetric
     Chunk Size : 64K

   Device Role : Active device 1
   Array State : AAAAA ('A' == active, '.' == missing)
/dev/sdd2:
          Magic : a92b4efc
        Version : 1.0
    Feature Map : 0x0
     Array UUID : ad468f58:53571e53:74558f1d:48545ec5
           Name : (none):4
  Creation Time : Tue Oct 28 20:06:48 2014
     Raid Level : raid5
   Raid Devices : 5

 Avail Dev Size : 7809984512 (3724.09 GiB 3998.71 GB)
     Array Size : 15619969024 (14896.36 GiB 15994.85 GB)
   Super Offset : 7809984768 sectors
          State : clean
    Device UUID : e581f3d3:2fd2786e:54bb1fc5:c90f671e

    Update Time : Thu Jun  2 18:42:58 2016
       Checksum : 6693cdf1 - correct
         Events : 716095

         Layout : left-symmetric
     Chunk Size : 64K

   Device Role : Active device 2
   Array State : AAAAA ('A' == active, '.' == missing)
/dev/sde2:
          Magic : a92b4efc
        Version : 1.0
    Feature Map : 0x0
     Array UUID : ad468f58:53571e53:74558f1d:48545ec5
           Name : (none):4
  Creation Time : Tue Oct 28 20:06:48 2014
     Raid Level : raid5
   Raid Devices : 5

 Avail Dev Size : 7809984512 (3724.09 GiB 3998.71 GB)
     Array Size : 15619969024 (14896.36 GiB 15994.85 GB)
   Super Offset : 7809984768 sectors
          State : clean
    Device UUID : e53e7f17:7119ea16:ccb7f6d1:32e5f5c8

    Update Time : Thu Jun  2 18:42:58 2016
       Checksum : 9f6a415 - correct
         Events : 716095

         Layout : left-symmetric
     Chunk Size : 64K

   Device Role : Active device 4
   Array State : AAAAA ('A' == active, '.' == missing)
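[Editor's note: the key fields to compare across the five members in output like the above are the Array UUID, the Events counter, and the Update Time — here they are identical on all members, so the metadata is consistent. A small sketch of that check, using a hypothetical file `examine.txt` standing in for saved `mdadm --examine` output:]

```shell
#!/bin/sh
# Sketch: verify all raid members report the same Events count.
# 'examine.txt' is a hypothetical capture of 'mdadm --examine' output.
cat > examine.txt <<'EOF'
         Events : 716095
         Events : 716095
         Events : 716095
         Events : 716095
         Events : 716095
EOF

# Count distinct Events values; 1 means the members are in sync.
distinct=$(awk '/Events/ {print $3}' examine.txt | sort -u | wc -l)
if [ "$distinct" -eq 1 ]; then
    echo "members in sync"
else
    echo "event counters differ - members have diverged"
fi
```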


Hope you and @jocko can help me again
Glaven Clattuck
Donator VIP
 
Posts: 152
Joined: Sat May 21, 2016 3:21 pm
Location: Urbe Immortalis

Re: After a firmware upgrade, blinking blue and 5Big2 is lost

Postby Jocko » Mon Jun 06, 2016 5:07 pm

Hi Glaven Clattuck

After the update failure, I do not think it is a raid issue but rather some inconsistent data on the boot partition.

On this point, fvdw may be able to help you if the 5big2 uses the same mechanism as the cloudbox (he helped a cloudbox user with a similar issue).

Anyhow you can easily check your raid
Code: Select all
mdadm --assemble /dev/md[01234]
and then try to mount them
Code: Select all
mkdir /md0 /md1 /md2 /md3 /md4
/usr/sbin/mount /dev/mdx /mdx
(replace x with 0,1,..,4)

If all raid devices can be mounted, then there is no issue on this side.
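[Editor's note: the mkdir/mount steps above can be compressed into a loop. A dry-run sketch that prints each command instead of executing it, since the /dev/mdX devices exist only on the NAS:]

```shell
#!/bin/sh
# Dry-run sketch of the mount procedure: print each command
# instead of running it (the /dev/mdX devices exist only on the NAS).
for x in 0 1 2 3 4; do
    echo "mkdir /md$x"
    echo "/usr/sbin/mount /dev/md$x /md$x"
done
```

Dropping the `echo`s would run the real commands on the NAS itself.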
Jocko
Site Admin - expert
 
Posts: 11367
Joined: Tue Apr 12, 2011 4:48 pm
Location: Orleans, France

Re: After a firmware upgrade, blinking blue and 5Big2 is lost

Postby Glaven Clattuck » Mon Jun 06, 2016 5:10 pm

Jocko wrote:Anyhow you can easily check your raid [...] and then try to mount them [...]


This is the response:
Code: Select all
root@fvdw-sta-kirkwood:/ # mdadm --assemble /dev/md[01234]
mdadm: no correct container type: /dev/md1
mdadm: /dev/md1 has no superblock - assembly aborted
root@fvdw-sta-kirkwood:/ #
Glaven Clattuck
Donator VIP
 
Posts: 152
Joined: Sat May 21, 2016 3:21 pm
Location: Urbe Immortalis

Re: After a firmware upgrade, blinking blue and 5Big2 is lost

Postby Jocko » Mon Jun 06, 2016 5:20 pm

Check if md1, ... exist in /dev
Code: Select all
ls -l /dev/md1
(replace 1 with 2,..,4)

otherwise create them
Code: Select all
mknod /dev/mdx b 9 x
(replace x with 1,2,..,4)
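[Editor's note: in `mknod /dev/mdx b 9 x`, `b` creates a block device node, `9` is the Linux kernel's major number for md (software raid) devices, and the minor number matches the array index. A dry-run sketch of the loop, echoing the commands rather than creating nodes:]

```shell
#!/bin/sh
# Dry-run sketch: md devices are block ('b') nodes with major number 9;
# the minor number matches the array index.
for x in 1 2 3 4; do
    echo "mknod /dev/md$x b 9 $x"
done
```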
Jocko
Site Admin - expert
 
Posts: 11367
Joined: Tue Apr 12, 2011 4:48 pm
Location: Orleans, France

Re: After a firmware upgrade, blinking blue and 5Big2 is lost

Postby Glaven Clattuck » Mon Jun 06, 2016 5:23 pm

Jocko wrote:check if md1,... exists in /dev [...] otherwise create them [...]


So, here we go:

Code: Select all
root@fvdw-sta-kirkwood:/ # ls -l /dev/md1
brw-r-----    1 root     root        9,   1 Feb  3 22:56 /dev/md1
root@fvdw-sta-kirkwood:/ # ls -l /dev/md2
brw-r-----    1 root     root        9,   2 Feb  3 22:56 /dev/md2
root@fvdw-sta-kirkwood:/ # ls -l /dev/md3
brw-r-----    1 root     root        9,   3 Feb  3 22:56 /dev/md3
root@fvdw-sta-kirkwood:/ # ls -l /dev/md4
brw-r-----    1 root     root        9,   4 Feb  3 22:56 /dev/md4
Glaven Clattuck
Donator VIP
 
Posts: 152
Joined: Sat May 21, 2016 3:21 pm
Location: Urbe Immortalis

Re: After a firmware upgrade, blinking blue and 5Big2 is lost

Postby Jocko » Mon Jun 06, 2016 5:31 pm

Then do it manually for each md device
Code: Select all
mdadm --assemble /dev/md0 /dev/sd[abcde]7
mdadm --assemble /dev/md1 /dev/sd[abcde]8
mdadm --assemble /dev/md2 /dev/sd[abcde]9
mdadm --assemble /dev/md3 /dev/sd[abcde]5
mdadm --assemble /dev/md4 /dev/sd[abcde]2


then post
Code: Select all
cat /proc/mdstat
and try to mount md0, md1, md2 and md4 (md3 is not mountable)
Jocko
Site Admin - expert
 
Posts: 11367
Joined: Tue Apr 12, 2011 4:48 pm
Location: Orleans, France

Re: After a firmware upgrade, blinking blue and 5Big2 is lost

Postby Glaven Clattuck » Mon Jun 06, 2016 5:36 pm

Jocko wrote:So do manually for each md device [...] then post cat /proc/mdstat [...]


For the first part:
Code: Select all
root@fvdw-sta-kirkwood:/ # mdadm --assemble /dev/md0 /dev/sd[abcde]7
mdadm: /dev/md0 has been started with 5 drives.
root@fvdw-sta-kirkwood:/ # mdadm --assemble /dev/md1 /dev/sd[abcde]8
mdadm: /dev/md1 has been started with 5 drives.
root@fvdw-sta-kirkwood:/ # mdadm --assemble /dev/md2 /dev/sd[abcde]9
mdadm: /dev/md2 has been started with 5 drives.
root@fvdw-sta-kirkwood:/ # mdadm --assemble /dev/md3 /dev/sd[abcde]5
mdadm: /dev/md3 has been started with 5 drives.
root@fvdw-sta-kirkwood:/ # mdadm --assemble /dev/md4 /dev/sd[abcde]2
mdadm: /dev/sda2 is busy - skipping
mdadm: /dev/sdb2 is busy - skipping
mdadm: /dev/sdc2 is busy - skipping
mdadm: /dev/sdd2 is busy - skipping
mdadm: /dev/sde2 is busy - skipping


For the mdstat command:
Code: Select all
root@fvdw-sta-kirkwood:/ # cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
md3 : active raid1 sda5[0] sde5[4] sdb5[3] sdd5[2] sdc5[1]
      255936 blocks [5/5] [UUUUU]
     
md2 : active raid1 sda9[0] sde9[4] sdb9[3] sdd9[2] sdc9[1]
      875456 blocks [5/5] [UUUUU]
     
md1 : active raid1 sda8[0] sde8[4] sdb8[3] sdd8[2] sdc8[1]
      843328 blocks [5/5] [UUUUU]
     
md0 : active raid1 sda7[0] sdd7[4] sde7[3] sdc7[2] sdb7[1]
      16000 blocks [5/5] [UUUUU]
     
md4 : active raid5 sda2[0] sde2[4] sdb2[2] sdd2[3] sdc2[1]
      15619969024 blocks super 1.0 level 5, 64k chunk, algorithm 2 [5/5] [UUUUU]
     
unused devices: <none>


I'm waiting for your input before mounting them as suggested.
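[Editor's note: that mdstat output already looks healthy — every array shows [5/5] [UUUUU], meaning all five members are up; a failed member would appear as an underscore, e.g. [4/5] [UUU_U]. A sketch of that check against a hypothetical saved copy of /proc/mdstat, since the real one lives on the NAS:]

```shell
#!/bin/sh
# Sketch: flag any md array whose member status shows a missing disk.
# 'mdstat.txt' stands in for /proc/mdstat, which only exists on the NAS.
cat > mdstat.txt <<'EOF'
md3 : active raid1 sda5[0] sde5[4] sdb5[3] sdd5[2] sdc5[1]
      255936 blocks [5/5] [UUUUU]
md4 : active raid5 sda2[0] sde2[4] sdb2[2] sdd2[3] sdc2[1]
      15619969024 blocks super 1.0 level 5, 64k chunk, algorithm 2 [5/5] [UUUUU]
EOF

# An '_' in the [UUUUU] status string marks a failed/missing member.
if grep -q '\[U*_' mdstat.txt; then
    echo "degraded array found"
else
    echo "all arrays healthy"
fi
```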
Glaven Clattuck
Donator VIP
 
Posts: 152
Joined: Sat May 21, 2016 3:21 pm
Location: Urbe Immortalis
