Dashboard opens with blank only

Re: Dashboard opens with blank only

Postby Jocko » Wed Mar 10, 2021 4:39 pm

Yes, but if it takes more time than the previous attempts, it may be good news :dry
Jocko
Site Admin - expert
 
Posts: 11529
Joined: Tue Apr 12, 2011 4:48 pm
Location: Orleans, France

Re: Dashboard opens with blank only

Postby fariaslcfs » Wed Mar 10, 2021 4:43 pm

Jocko wrote:Yes, but if it takes more time than the previous attempts, it may be good news :dry

fingers crossed...
fariaslcfs
Donator VIP
 
Posts: 114
Joined: Thu Feb 18, 2021 4:55 pm

Re: Dashboard opens with blank only

Postby Jocko » Fri Mar 12, 2021 7:57 am

Hi

Is the xfs_repair command still running?

Re: Dashboard opens with blank only

Postby fariaslcfs » Fri Mar 12, 2021 11:12 am

Jocko wrote:Hi

Is the xfs_repair command still running?


I'll post in a few minutes (arriving at the place soon).

Re: Dashboard opens with blank only

Postby fariaslcfs » Fri Mar 12, 2021 11:46 am

It appears that xfs_repair has finished.
(We had an electric power cut yesterday afternoon.
However, the LaCie's UPS sustained its power supply until the power returned.
The telnet window, on the other hand, was closed, so I could not check the end of the repair process.)
What command should I enter in order to verify the repair?

(Some commands issued earlier follow.)
Code:
root@(fvdw-kirkwood):/ # cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
md3 : active raid1 sda5[0] sdb5[4] sdc5[3] sdd5[2] sde5[1]
      256896 blocks [5/5] [UUUUU]
     
md4 : active raid5 sda2[0] sde2[5] sdd2[3] sdc2[2] sdb2[1]
      7805490688 blocks super 1.2 level 5, 64k chunk, algorithm 2 [5/5] [UUUUU]     
unused devices: <none>

Code:
root@(fvdw-kirkwood):/ # dmesg | tail
[ 9393.652893] md: bind<sdd5>
[ 9393.656112] md: bind<sdc5>
[ 9393.659275] md: bind<sdb5>
[ 9393.662377] md: bind<sda5>
[ 9393.666450] md/raid1:md3: active with 5 out of 5 mirrors
[ 9393.671862] md3: detected capacity change from 0 to 263061504
[ 9406.275974]  md3: unknown partition table
[ 9419.429825] Adding 256892k swap on /dev/md3.  Priority:-1 extents:1 across:256892k
[108851.157456] mv643xx_eth_port mv643xx_eth_port.0 eth0: link down
[109559.517419] mv643xx_eth_port mv643xx_eth_port.0 eth0: link up, 100 Mb/s, full duplex, flow control disabled

Code:
root@(fvdw-kirkwood):/ # mdadm --detail /dev/md4
/dev/md4:
        Version : 1.2
  Creation Time : Thu Jan  1 01:15:34 1970
     Raid Level : raid5
     Array Size : 7805490688 (7443.90 GiB 7992.82 GB)
  Used Dev Size : 1951372672 (1860.97 GiB 1998.21 GB)
   Raid Devices : 5
  Total Devices : 5
    Persistence : Superblock is persistent

    Update Time : Thu Jan  1 20:55:56 1970
          State : clean
 Active Devices : 5
Working Devices : 5
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 64K

           Name : fvdw-sta-kirkwood:0
           UUID : 1126e27a:134ee56d:e944b67e:ecaecaf8
         Events : 262

    Number   Major   Minor   RaidDevice State
       0       8       34        0      active sync   /dev/sda2
       1       8       18        1      active sync   /dev/sdb2
       2       8        2        2      active sync   /dev/sdc2
       3       8       66        3      active sync   /dev/sdd2
       5       8       50        4      active sync   /dev/sde2
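As an editor's aside: one simple way to check whether a completed xfs_repair run succeeded (an assumption here, not advice given in this thread) is its exit status, which is 0 when the filesystem was left consistent. A minimal sketch, with `true` standing in for the long-running `xfs_repair -v /dev/md4`:

```shell
# 'true' stands in for: xfs_repair -v /dev/md4
# xfs_repair exits with status 0 when it leaves the filesystem consistent.
if true; then
    echo "xfs_repair reported success"
else
    echo "xfs_repair failed; check its output"
fi
```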

Re: Dashboard opens with blank only

Postby fariaslcfs » Fri Mar 12, 2021 12:01 pm

Tried:
Code:
root@(fvdw-kirkwood):/ # mount -t xfs /dev/md4 /md4
mount: wrong fs type, bad option, bad superblock on /dev/md4,
       missing codepage or helper program, or other error
       In some cases useful info is found in syslog - try
       dmesg | tail  or so

Diagnostics:
Code:
root@(fvdw-kirkwood):/ # dmesg | tail
[ 9393.659275] md: bind<sdb5>
[ 9393.662377] md: bind<sda5>
[ 9393.666450] md/raid1:md3: active with 5 out of 5 mirrors
[ 9393.671862] md3: detected capacity change from 0 to 263061504
[ 9406.275974]  md3: unknown partition table
[ 9419.429825] Adding 256892k swap on /dev/md3.  Priority:-1 extents:1 across:256892k
[108851.157456] mv643xx_eth_port mv643xx_eth_port.0 eth0: link down
[109559.517419] mv643xx_eth_port mv643xx_eth_port.0 eth0: link up, 100 Mb/s, full duplex, flow control disabled
[166491.482723] XFS (md4): bad magic number
[166491.486722] XFS (md4): SB validate failed
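For what it's worth, the "bad magic number" line means the superblock does not begin with the XFS magic, the ASCII string XFSB. A minimal sketch of inspecting those first bytes (simulated against a scratch file here; on the NAS the input would be /dev/md4):

```shell
# Simulate a healthy superblock: every XFS filesystem begins with "XFSB".
printf 'XFSB' > sb.img
# Read the first 4 bytes; on the NAS: dd if=/dev/md4 bs=4 count=1
dd if=sb.img bs=4 count=1 2>/dev/null
# A healthy device prints XFSB; anything else causes "SB validate failed".
```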

Re: Dashboard opens with blank only

Postby Jocko » Fri Mar 12, 2021 12:30 pm

Just a question: did you use the '-n' option with your last xfs_repair command? If so, the fs is not repaired; the command only reports what it would need to do to repair it.

Re: Dashboard opens with blank only

Postby fariaslcfs » Fri Mar 12, 2021 12:34 pm

Jocko wrote:Just a question: did you use the '-n' option with your last xfs_repair command? If so, the fs is not repaired; the command only reports what it would need to do to repair it.

I used:
Code:
root@(fvdw-kirkwood):/ # xfs_repair -nv /dev/md4

as asked in a previous post. Should I repeat the command without the -n option, like this?
Code:
root@(fvdw-kirkwood):/ # xfs_repair -v /dev/md4

Re: Dashboard opens with blank only

Postby Jocko » Fri Mar 12, 2021 12:41 pm

Yes, you have to run the command
Code:
xfs_repair -v /dev/md4
to really repair the fs.

Do you remember what information you got from the previous command?

Re: Dashboard opens with blank only

Postby fariaslcfs » Fri Mar 12, 2021 12:44 pm

Jocko wrote:Yes, you have to run the command
Code:
xfs_repair -v /dev/md4
to really repair the fs.

Do you remember what information you got from the previous command?

I posted it here. Is that enough?
(Or do you mean the previous xfs_repair -v on md4, before the md3 swap was added? In that case, would you like to see the earlier part of the dmesg output?)
Anyway, a longer segment of dmesg follows:
Code:
[ 7603.508218] md/raid:md4: device sda2 operational as raid disk 0
[ 7603.514147] md/raid:md4: device sde2 operational as raid disk 4
[ 7603.520105] md/raid:md4: device sdd2 operational as raid disk 3
[ 7603.526046] md/raid:md4: device sdc2 operational as raid disk 2
[ 7603.531962] md/raid:md4: device sdb2 operational as raid disk 1
[ 7603.539162] md/raid:md4: allocated 5282kB
[ 7603.543251] md/raid:md4: raid level 5 active with 5 out of 5 devices, algorithm 2
[ 7603.550762] RAID conf printout:
[ 7603.550773]  --- level:5 rd:5 wd:5
[ 7603.550782]  disk 0, o:1, dev:sda2
[ 7603.550790]  disk 1, o:1, dev:sdb2
[ 7603.550798]  disk 2, o:1, dev:sdc2
[ 7603.550806]  disk 3, o:1, dev:sdd2
[ 7603.550814]  disk 4, o:1, dev:sde2
[ 7603.550969] md4: detected capacity change from 0 to 7992822464512
[ 7865.124222]  md4: unknown partition table
[ 7865.890128] REISERFS warning (device md4): sh-2021 reiserfs_fill_super: can not find reiserfs on md4
[ 7865.900045] EXT3-fs (md4): error: can't find ext3 filesystem on dev md4.
[ 7865.907134] EXT2-fs (md4): error: can't find an ext2 filesystem on dev md4.
[ 7865.914369] EXT4-fs (md4): VFS: Can't find ext4 filesystem
[ 7865.920667] FAT-fs (md4): bogus number of reserved sectors
[ 7865.926193] FAT-fs (md4): Can't find a valid FAT filesystem
[ 7865.932100] FAT-fs (md4): bogus number of reserved sectors
[ 7865.937624] FAT-fs (md4): Can't find a valid FAT filesystem
[ 7865.943483] NTFS-fs error (device md4): read_ntfs_boot_sector(): Primary boot sector is invalid.
[ 7865.952302] NTFS-fs error (device md4): read_ntfs_boot_sector(): Mount option errors=recover not used. Aborting without trying to recover.
[ 7865.964723] NTFS-fs error (device md4): ntfs_fill_super(): Not an NTFS volume.
[ 7865.973397] XFS (md4): bad magic number
[ 7865.977292] XFS (md4): SB validate failed
[ 8262.628060] XFS (md4): bad magic number
[ 8262.631931] XFS (md4): SB validate failed
[ 9393.455276] md: md3 stopped.
[ 9393.649706] md: bind<sde5>
[ 9393.652893] md: bind<sdd5>
[ 9393.656112] md: bind<sdc5>
[ 9393.659275] md: bind<sdb5>
[ 9393.662377] md: bind<sda5>
[ 9393.666450] md/raid1:md3: active with 5 out of 5 mirrors
[ 9393.671862] md3: detected capacity change from 0 to 263061504
[ 9406.275974]  md3: unknown partition table
[ 9419.429825] Adding 256892k swap on /dev/md3.  Priority:-1 extents:1 across:256892k
[108851.157456] mv643xx_eth_port mv643xx_eth_port.0 eth0: link down
[109559.517419] mv643xx_eth_port mv643xx_eth_port.0 eth0: link up, 100 Mb/s, full duplex, flow control disabled
[166491.482723] XFS (md4): bad magic number
[166491.486722] XFS (md4): SB validate failed
[167946.954332] XFS (md4): bad magic number
[167946.958334] XFS (md4): SB validate failed

I have also started the xfs_repair command with the -v option only, so it is doing the actual repair (if possible) this time.
It will again take a long time, so the results will probably be available on Monday morning.
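Since the earlier run's output was lost when the telnet window closed, a detached run that logs to a file would survive a dropped session. A minimal sketch (an editor's assumption, with `echo` standing in for `xfs_repair -v /dev/md4`):

```shell
# 'echo' stands in for: xfs_repair -v /dev/md4
# nohup keeps the job running if the telnet session drops;
# all output is captured in repair.log for later inspection.
nohup sh -c 'echo "repair finished"' > repair.log 2>&1 &
wait             # in a real session you would just log out here instead
cat repair.log   # later, from any session, check the progress/result
```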


Return to Lacie 5big Network vs2
