Fail to update to version 18.1

Re: Fail to update to version 18.1

Postby Jocko » Fri Jan 24, 2020 12:47 pm

So that we can better understand what happened:
- either the dev node has been deleted!
- or partition 1 was never created (or was deleted???)

So we need fresh output; please post:
Code: Select all
 ls -l /dev/sda[0-9]
cat /proc/partitions
gdisk -l /dev/sda

Jocko
Site Admin - expert
 
Posts: 11367
Joined: Tue Apr 12, 2011 4:48 pm
Location: Orleans, France

Re: Fail to update to version 18.1

Postby maxdo » Fri Jan 24, 2020 1:51 pm

here you are:

Code: Select all
root@fvdwsl-base:/ # ls -l /dev/sda[0-9]
brw-rw----  1 root root 8, 1 2008-01-03 21:06 /dev/sda1
brw-rw----  1 root root 8, 2 2008-01-03 21:06 /dev/sda2
brw-rw----  1 root root 8, 3 2011-11-16 12:00 /dev/sda3
brw-rw----  1 root root 8, 4 2008-01-03 21:06 /dev/sda4
brw-rw----  1 root root 8, 5 2008-01-03 21:06 /dev/sda5
brw-rw----  1 root root 8, 6 2012-03-11 11:10 /dev/sda6
brw-rw----  1 root root 8, 7 2008-01-03 21:06 /dev/sda7
brw-rw----  1 root root 8, 8 2011-11-16 12:18 /dev/sda8
brw-rw-rw-  1 root root 8, 9 2013-08-09 21:22 /dev/sda9
root@fvdwsl-base:/ # cat /proc/partitions
major minor  #blocks  name

   1        0       4096 ram0
   1        1       4096 ram1
   1        2       4096 ram2
   1        3       4096 ram3
   1        4       4096 ram4
   1        5       4096 ram5
   1        6       4096 ram6
   1        7       4096 ram7
   1        8       4096 ram8
   1        9       4096 ram9
   1       10       4096 ram10
   1       11       4096 ram11
   1       12       4096 ram12
   1       13       4096 ram13
   1       14       4096 ram14
   1       15       4096 ram15
  31        0        500 mtdblock0
  31        1          4 mtdblock1
   8        0  976762584 sda
   8        4          1 sda4
   8        5     835380 sda5
   8        6      64260 sda6
   8        7     514080 sda7
   8        8  973474740 sda8
   8       16  976762584 sdb
   8       20          1 sdb4
   8       21     835380 sdb5
   8       22      64260 sdb6
   8       23     514080 sdb7
   8       24  973474740 sdb8
   9        0  973474560 md0
root@fvdwsl-base:/ # gdisk -l /dev/sda
GPT fdisk (gdisk) version 0.8.5

Partition table scan:
  MBR: MBR only
  BSD: not present
  APM: not present
  GPT: not present


***************************************************************
Found invalid GPT and valid MBR; converting MBR to GPT format.
***************************************************************

Disk /dev/sda: 1953525168 sectors, 931.5 GiB
Logical sector size: 512 bytes
Disk identifier (GUID): 71EF4C53-A737-4BF1-AE8F-174512F59803
Partition table holds up to 128 entries
First usable sector is 34, last usable sector is 1953525134
Partitions will be aligned on 8-sector boundaries
Total free space is 3748181 sectors (1.8 GiB)

Number  Start (sector)    End (sector)  Size       Code  Name
   5         3341520         5012279   815.8 MiB   8300  Linux filesystem
   6         5140800         5269319   62.8 MiB    8300  Linux filesystem
   7         5397840         6425999   502.0 MiB   8300  Linux filesystem
   8         6554520      1953503999   928.4 GiB   FD00  Linux RAID
root@fvdwsl-base:/ #


thanks
maxdo
Donator VIP
 
Posts: 53
Joined: Sun Dec 09, 2018 12:44 pm

Re: Fail to update to version 18.1

Postby Jocko » Fri Jan 24, 2020 2:09 pm

:shocked :scratch

OK, now I understand: a bad partition table

How did you set up the disk? Your table does not match any fvdw-sl partition table (not even on the additional disk sdb) :pound
sda6 is too big (~8x), sda5 is too big, ...

Note: it seems your nas runs without a swap partition :whistle.

Now we need to find a way to restore the sda configuration without losing your data :thinking

What raid type do you have on the nas? (raid0, linear or raid1)
And post:
Code: Select all
mdadm --detail /dev/md0
mdadm --examine /dev/sd[ab]8
cat /etc/mdadm.conf
free
gdisk -l /dev/sdb



Note: please use the BBCode "code" tag when you paste your outputs (they are verbose and will be much more readable....)
as already asked here: https://plugout.net/viewtopic.php?f=27&t=2950&start=0#p28490

Please note that in that topic you had a correct partition table on sda (after swapping the 2 disks) :scratch

Re: Fail to update to version 18.1

Postby maxdo » Fri Jan 24, 2020 2:43 pm

Code: Select all
login as: root
root@192.168.4.252's password:
root@fvdwsl-base:/ # mdadm --detail /dev/md0
/dev/md0:
        Version : 1.0
  Creation Time : Wed Nov 16 12:18:24 2011
     Raid Level : raid1
     Array Size : 973474560 (928.38 GiB 996.84 GB)
  Used Dev Size : 973474560 (928.38 GiB 996.84 GB)
   Raid Devices : 2
  Total Devices : 2
    Persistence : Superblock is persistent

  Intent Bitmap : Internal

    Update Time : Thu Jan 11 15:05:48 2018
          State : clean
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

           Name : fvdwsl-base.local:0  (local to host fvdwsl-base.local)
           UUID : e329d3c0:6044131d:cf0e2c74:ac85918e
         Events : 2

    Number   Major   Minor   RaidDevice State
       0       8        8        0      active sync   /dev/sda8
       1       8       24        1      active sync   /dev/sdb8
root@fvdwsl-base:/ # mdadm --examine /dev/sd[ab]8
/dev/sda8:
          Magic : a92b4efc
        Version : 1.0
    Feature Map : 0x1
     Array UUID : e329d3c0:6044131d:cf0e2c74:ac85918e
           Name : fvdwsl-base.local:0  (local to host fvdwsl-base.local)
  Creation Time : Wed Nov 16 12:18:24 2011
     Raid Level : raid1
   Raid Devices : 2

 Avail Dev Size : 1946949200 (928.38 GiB 996.84 GB)
     Array Size : 973474560 (928.38 GiB 996.84 GB)
  Used Dev Size : 1946949120 (928.38 GiB 996.84 GB)
   Super Offset : 1946949464 sectors
   Unused Space : before=0 sectors, after=328 sectors
          State : clean
    Device UUID : 73457de6:8c7acdc9:99500896:869ceb19

Internal Bitmap : -16 sectors from superblock
    Update Time : Thu Jan 11 15:05:48 2018
  Bad Block Log : 512 entries available at offset -8 sectors
       Checksum : 58a24e08 - correct
         Events : 2


   Device Role : Active device 0
   Array State : AA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdb8:
          Magic : a92b4efc
        Version : 1.0
    Feature Map : 0x1
     Array UUID : e329d3c0:6044131d:cf0e2c74:ac85918e
           Name : fvdwsl-base.local:0  (local to host fvdwsl-base.local)
  Creation Time : Wed Nov 16 12:18:24 2011
     Raid Level : raid1
   Raid Devices : 2

 Avail Dev Size : 1946949200 (928.38 GiB 996.84 GB)
     Array Size : 973474560 (928.38 GiB 996.84 GB)
  Used Dev Size : 1946949120 (928.38 GiB 996.84 GB)
   Super Offset : 1946949464 sectors
   Unused Space : before=0 sectors, after=328 sectors
          State : clean
    Device UUID : 97b470a3:31b6b432:88e0aae7:a221de02

Internal Bitmap : -16 sectors from superblock
    Update Time : Thu Jan 11 15:05:48 2018
  Bad Block Log : 512 entries available at offset -8 sectors
       Checksum : b9120ddc - correct
         Events : 2


   Device Role : Active device 1
   Array State : AA ('A' == active, '.' == missing, 'R' == replacing)
root@fvdwsl-base:/ # cat /etc/mdadm.conf
CREATE owner=root group=root mode=0666 auto=yes metadata=1.0
PROGRAM /usr/bin/mdadm-events
DEVICE /dev/sd* /dev/se*

MAILADDR supporto@sintel.com

ARRAY /dev/md0 metadata=1.0 level=raid1 num-devices=2 UUID=e329d3c0:6044131d:cf0e2c74:ac85918e
root@fvdwsl-base:/ # free
             total         used         free       shared      buffers
Mem:        249988       145232       104756         6008        61984
-/+ buffers:              83248       166740
Swap:            0            0            0
root@fvdwsl-base:/ # gdisk -l /dev/sdb
GPT fdisk (gdisk) version 0.8.5

Partition table scan:
  MBR: MBR only
  BSD: not present
  APM: not present
  GPT: not present


***************************************************************
Found invalid GPT and valid MBR; converting MBR to GPT format.
***************************************************************

Disk /dev/sdb: 1953525168 sectors, 931.5 GiB
Logical sector size: 512 bytes
Disk identifier (GUID): 78FD4B70-D391-4B77-801B-AA37EA1BB013
Partition table holds up to 128 entries
First usable sector is 34, last usable sector is 1953525134
Partitions will be aligned on 8-sector boundaries
Total free space is 3748181 sectors (1.8 GiB)

Number  Start (sector)    End (sector)  Size       Code  Name
   5         3341520         5012279   815.8 MiB   8300  Linux filesystem
   6         5140800         5269319   62.8 MiB    8300  Linux filesystem
   7         5397840         6425999   502.0 MiB   8300  Linux filesystem
   8         6554520      1953503999   928.4 GiB   FD00  Linux RAID
root@fvdwsl-base:/ #

here you are.
thanks

Re: Fail to update to version 18.1

Postby maxdo » Fri Jan 24, 2020 2:52 pm

sorry, I forgot to say: RAID 1
ciao

Re: Fail to update to version 18.1

Postby Jocko » Fri Jan 24, 2020 2:57 pm

I rechecked your partition tables.
The original table on sda was also wrong (sda6 and sda5 too big), and I think you simply deleted partitions 1, 2 and 3 :corner

I confirm your nas cannot use swap (needed when there is not enough memory)

Your raid is clean :thumbup

What I suggest you do:
- back up the sda5 partition (nas database, raid configuration...)
- fully reset the sda disk with fvdw-sl, as in a fresh first install
- restore the sda5 content

Then, after rebooting, you will have a degraded raid1 (missing sda8) and an empty data volume Vol-A.
So, via a setup job, re-add sda8 to your raid1.
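For reference only, the degraded state can also be watched from the shell. This is a hedged sketch, not the supported path (the setup job in the web UI is): the parsing below runs on a sample /proc/mdstat status line so it can be tried anywhere, and the commented mdadm command is what the re-add amounts to.

```shell
# Sample /proc/mdstat status line: [2/1] means only 1 of 2 raid1 members is present.
line='973474560 blocks super 1.0 [2/1] [_U]'
case "$line" in
    *'[2/2]'*) echo "raid1 complete" ;;
    *'[2/1]'*) echo "raid1 degraded - one member missing" ;;
esac
# On the nas, re-adding the reset partition would amount to:
#   mdadm --manage /dev/md0 --add /dev/sda8
```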

Do you agree with this?
Note: if you have valuable data on your raid, I advise you to save it to another medium first

Note: all these steps take time and I'd rather we both be available on the board the whole time.
So today is OK; if not, it will be next week

Re: Fail to update to version 18.1

Postby maxdo » Fri Jan 24, 2020 3:35 pm

Hi Jocko,
you have had much patience with me,
so thanks.
But one last request: I will need a procedure to follow in order to reinstall everything from scratch,

that is, deleting everything and doing a fresh installation with the latest firmware available.
I have a second NAS where I could do a full backup of this one, so it is not necessary to back up the sda partition.

Of course, when you have time.

Explain to me what I need to do to reinstall the whole firmware from scratch with the latest version 18.x.

Thanks for your time.

NOTE: I have made a donation too... :beer:
:hail

Re: Fail to update to version 18.1

Postby Jocko » Fri Jan 24, 2020 7:01 pm

Ok

The first step is to back up some of the sda5 content.
So, from a shell window, run:
Code: Select all
cd /rw_fs
tar -czf sda5-etc-24jan.tgz etc .ssh
(check that the current folder really is /rw_fs !!! when you run the tar command, and copy it exactly)
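Before moving on, it is worth confirming the archive actually contains both directories. The sketch below rehearses that check with throwaway /tmp paths (all names here are illustrative, not the real /rw_fs):

```shell
# Rehearsal with throwaway paths; on the nas you would list the real archive in /rw_fs.
mkdir -p /tmp/rw_fs-demo/etc /tmp/rw_fs-demo/.ssh
echo "dummy" > /tmp/rw_fs-demo/etc/sample.conf
echo "dummy" > /tmp/rw_fs-demo/.ssh/authorized_keys
cd /tmp/rw_fs-demo
tar -czf sda5-etc-24jan.tgz etc .ssh
tar -tzf sda5-etc-24jan.tgz    # both etc/ and .ssh/ entries should be listed
```

On the nas itself, the equivalent check is simply `tar -tzf /rw_fs/sda5-etc-24jan.tgz`.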

- if it is not already the case, install the latest fvdw-sl console on your windows laptop
(https://plugout.net/viewtopic.php?f=7&t=3086)

- now try to get standalone telnet access
stop the nas and remove sdb (the disk in the right slot), see: https://plugout.net/download/file.php?id=5524
start the fvdw-sl console on your laptop
you have to run the action "Load standalone kernel" to get telnet access to the nas.
Please read this topic if you have connection issues: https://plugout.net/viewtopic.php?f=7&t=2645. You must select, as the standalone kernel, the file UIMAGE-3142-KIRKWOOD-171-standalone

- once you are logged in, save sda5-etc-24jan.tgz on your laptop
Code: Select all
mkdir /sda5
mount /dev/sda5 /sda5
cd /sda5
ls     (<=== you should see the file sda5-etc-24jan.tgz)
tftp -l sda5-etc-24jan.tgz -r sda5-etc-24jan.tgz -p ip-pc
where ip-pc is the IP address of your laptop
if you did not change the default settings of the fvdw-sl console, you should see the file sda5-etc-24jan.tgz in the tftp folder on your laptop

- reset sda
(do not do this step if you did not find sda5-etc-24jan.tgz in tftp!)
unmount sda5
Code: Select all
cd /
umount /sda5
mount    (<== check there are no longer any lines beginning with "/dev/sda5 on")
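To make that check scriptable instead of eyeballing the mount list, here is a hedged sketch (the grep pattern assumes the usual "device on mountpoint" output format):

```shell
# Prints a warning if /dev/sda5 is still mounted anywhere.
if mount | grep -q '^/dev/sda5 on '; then
    echo "sda5 is still mounted - do NOT reset the disk yet"
else
    echo "sda5 is not mounted - safe to continue"
fi
```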
run fvdw-sl-programs
Code: Select all
./fvdw-sl-programs
and select "Install fvdw-sl firmware on a hard disk" (item 1)

Do not reboot the nas!
- restore sda5 content
upload sda5-etc-24jan.tgz
Code: Select all
cd /
tftp -l sda5-etc-24jan.tgz -r sda5-etc-24jan.tgz -g ip-pc           (<-------------- this time we use the option '-g' (<=> get) instead of the option '-p' (<=> put))
mkdir /sda5  (a warning may occur as /sda5 should already exist)
mount /dev/sda5 /sda5
tar -xzf /sda5-etc-24jan.tgz -C /sda5   
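The -C flag makes tar extract into the given directory rather than the current one. A quick rehearsal with throwaway /tmp paths (illustrative names only, not the real nas paths):

```shell
# Rehearse restoring into a mountpoint-like directory with throwaway paths.
mkdir -p /tmp/demo-src/etc /tmp/demo-sda5
echo "raid config" > /tmp/demo-src/etc/raidtab      # stand-in for real sda5 content
tar -czf /tmp/demo.tgz -C /tmp/demo-src etc
tar -xzf /tmp/demo.tgz -C /tmp/demo-sda5            # like: tar -xzf /sda5-etc-24jan.tgz -C /sda5
ls /tmp/demo-sda5/etc                               # raidtab should be back in place
```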


- stop the nas (button), plug sdb back in, then reboot the nas
(do not use the command 'reboot -f' as you need to plug sdb back in before restarting)

if all is done as expected you should see two volumes, 'Vol-A' and 'RD-1' with a degraded warning, in the disk setup menu
So click on the raid volume name (this loads the edit volume menu) and add disk-A

wait until the setup job is done; after rebooting you should again have one volume with all your previous shares.

That's all! :whistle

Return to Lacie 2Big Network vs2