Bitcoin Forum
Author Topic:    (Read 575 times)
BobLawblaw (OP)
Legendary
*
Offline Offline

Activity: 1822
Merit: 5551


Neighborhood Shenanigans Dispenser


View Profile
January 15, 2020, 06:06:07 PM
Last edit: July 17, 2020, 03:26:26 PM by BobLawblaw
 #1

jojo69
Legendary
*
Offline Offline

Activity: 3150
Merit: 4309


diamond-handed zealot


View Profile
January 15, 2020, 06:17:50 PM
 #2

oh man, I'm sorry Bob, I loathe crap like this

I've got no help for you; I spent a whole day editing fstab to get my legacy volumes mounted recently, but I'm going to follow along and try to learn something.

I will say this: RAID, particularly software RAID, has been more trouble than it's worth in my personal experience; it's supposed to protect you from physical disk failure, but it introduces whole other layers of failure modes.

This is not some pseudoeconomic post-modern Libertarian cult, it's an un-led, crowd-sourced mega startup organized around mutual self-interest where problems, whether of the theoretical or purely practical variety, are treated as temporary and, ultimately, solvable.
Censorship of e-gold was easy. Censorship of Bitcoin will be… entertaining.
makrospex
Sr. Member
****
Offline Offline

Activity: 728
Merit: 317


nothing to see here


View Profile
January 15, 2020, 06:30:44 PM
Merited by BobLawblaw (20)
 #3

uh, oh...

have you tried
Quote
mdadm --create --assume-clean ...
, too?
And don't be a fool: dd those hdd blocks off to a file on a backup medium first (if you haven't already done so).

EDIT: imo, as long as the disks are physically ok, you should have a chance of lossless recovery. I had more problems with journaling filesystems than with ext2 or vfat; the only exception was ReiserFS, but since Hans went to jail, I stopped using it.
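The dd advice above can be sketched as follows. The member device names (sdb/sdc/sdd, taken from later posts in the thread) and the backup path are assumptions, and the commands are echo-printed as a dry run rather than executed:

```shell
# Hedged sketch: image every RAID member before any risky mdadm/fsck step,
# so each later experiment stays reversible. BACKUP_DIR is hypothetical and
# must have room for all member disks. Remove the echo to actually run dd.
BACKUP_DIR=/mnt/backup
for dev in sdb sdc sdd; do
    echo dd if=/dev/$dev of=$BACKUP_DIR/$dev.img bs=4M conv=noerror,sync status=progress
done
```

conv=noerror,sync keeps dd going over unreadable sectors, padding them with zeros, which matters if a member disk is marginal.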
psycodad
Legendary
*
Offline Offline

Activity: 1604
Merit: 1564


精神分析的爸


View Profile
January 15, 2020, 07:12:42 PM
Merited by BobLawblaw (20)
 #4

May I ask what has led to this situation? (power loss, disk crash etc.)

Your array is obviously perfectly fine, don't panic!

Did you by chance have LVM on top of that RAID ?

If you can, post your fstab and the output of 'lsblk -f' or 'blkid'.

psycodad
Legendary
*
Offline Offline

Activity: 1604
Merit: 1564


精神分析的爸


View Profile
January 15, 2020, 07:32:53 PM
 #5

Thanks.

Again, your array is fine according to the /proc/mdstat you posted; no need to recreate it and risk something.

Please also post output from
Code:
cat /etc/fstab


vapourminer
Legendary
*
Offline Offline

Activity: 4312
Merit: 3506


what is this "brake pedal" you speak of?


View Profile
January 15, 2020, 07:51:25 PM
 #6

bob sorry im of no help in this, just wanted to comment ive dropped my Z1 (3-drive RAID5) arrays and switched to either Z2 (5+ drives, any 2 can fail) for important stuff or straight-up mirrors for less important. i love mirrors cuz you can read the remaining drive on most anything.

best of luck, im lurking as i hope to learn something here too.

btw im talking about the zfs file system on my nas. for desktops i always mirror.
psycodad
Legendary
*
Offline Offline

Activity: 1604
Merit: 1564


精神分析的爸


View Profile
January 15, 2020, 08:14:32 PM
 #7

Thanks, so we know there was no LVM on top.
Please try fsck again, but on the device not the mountpoint as you did above:

Code:
fsck.ext4 /dev/md0

Edit: If the above starts, it will (depending on your hardware) take ages (or 2 minutes longer) to complete - do not interrupt it!
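A cautious preliminary worth mentioning (my addition, not something the thread ran): e2fsck's -n flag opens the filesystem read-only and answers "no" to every prompt, so you can preview what fsck would change before letting it write anything. Printed as a dry run here:

```shell
# -n: read-only check, answers "no" to all repair questions (safe to run).
# Drop the echo to execute against the real device.
echo fsck.ext4 -n /dev/md0
```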

makrospex
Sr. Member
****
Offline Offline

Activity: 728
Merit: 317


nothing to see here


View Profile
January 15, 2020, 08:15:19 PM
 #8


EDIT: forget that, didn't make sense.
psycodad
Legendary
*
Offline Offline

Activity: 1604
Merit: 1564


精神分析的爸


View Profile
January 15, 2020, 08:21:57 PM
 #9

So you know the filesystem on md0 is ext4, right?
try giving mount /dev/sdb1 (and sdc1, sdd1) a hint via
Quote
-t ext4
since it can't read this info from /etc/fstab.
Mount READ ONLY if you don't have a backup and see if you can directly read from the partitions on the physical raid disk(s).

EDIT: Or was it /dev/sdb0 (...)
My daily unix practice aged well.


There is no point in trying to mount a linux-raid5 partition directly.
It is RAID5, not RAID1 (in which case - as vapourminer mentions - you can mount one part of the mirror only).
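The point about RAID5 members not being individually mountable can be illustrated with a toy chunk map. The rotation below assumes md's default left-symmetric layout on 3 disks (an illustrative assumption), but the takeaway holds for any RAID5 layout: consecutive data chunks alternate across disks, so no single member carries a contiguous filesystem.

```shell
# Toy map: which disk holds each data chunk in a 3-disk left-symmetric RAID5
# (2 data chunks + 1 parity chunk per stripe). Illustrative assumption only.
for c in 0 1 2 3 4 5; do
    stripe=$(( c / 2 ))                        # 2 data chunks per stripe
    parity=$(( (2 - stripe % 3 + 3) % 3 ))     # parity disk rotates each stripe
    disk=$(( (parity + 1 + c % 2) % 3 ))       # data fills the other two disks
    echo "data chunk $c -> disk $disk (parity on disk $parity)"
done
```

In a RAID1 mirror, by contrast, every member is a complete copy, which is why a lone mirror disk can be read on almost anything.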

makrospex
Sr. Member
****
Offline Offline

Activity: 728
Merit: 317


nothing to see here


View Profile
January 15, 2020, 08:23:00 PM
 #10

So you know the filesystem on md0 is ext4, right?
try giving mount /dev/sdb1 (and sdc1, sdd1) a hint via
Quote
-t ext4
since it can't read this info from /etc/fstab.
Mount READ ONLY if you don't have a backup and see if you can directly read from the partitions on the physical raid disk(s).

EDIT: Or was it /dev/sdb0 (...)
My daily unix practice aged well.


There is no point in trying to mount a linux-raid5 partition directly.
It is RAID5, not RAID1 (in which case - as vapourminer mentions - you can mount one part of the mirror only).


Came to my mind soon after writing. Edited out, thanks.
makrospex
Sr. Member
****
Offline Offline

Activity: 728
Merit: 317


nothing to see here


View Profile
January 15, 2020, 08:34:00 PM
Merited by Mitchell (10), ABCbits (1)
 #11

So you might find out the location of a superblock copy with

dumpe2fs

and use that for fsck?
I had a similar situation on solaris 8, where i had to replace the superblock to make the volume readable again. It was a single volume, but it's a filesystem anyway.

EDIT: ran a quick google and the first result pretty much hit what i was searching for.
https://www.linuxquestions.org/questions/linux-software-2/mount-unknown-filesystem-type-%27linux_raid_member%27-4175491589/
It's a raid level 5, too. See last post in that thread.
psycodad
Legendary
*
Offline Offline

Activity: 1604
Merit: 1564


精神分析的爸


View Profile
January 15, 2020, 08:36:08 PM
 #12

@Bob: Does it say anything after that (probably not)?

Please try with the superblock backups listed, i.e.

Code:
fsck.ext4 -b 8193 /dev/md0

Agree with makrospex^, fs is fs no matter on which block device it sits (at least from the perspective of the user and the tools trying to fix a filesystem).
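The right -b value depends on the filesystem's block size (8193 assumes 1 KiB blocks). A sketch of where the first backup superblock sits for the common block sizes, assuming the default layout where each block group spans 8 * block_size blocks:

```shell
# First backup superblock sits at the first block-group boundary; for a
# 1 KiB-block filesystem the primary superblock occupies block 1, which
# shifts the first backup to 8193.
for bs in 1024 2048 4096; do
    bpg=$(( 8 * bs ))              # blocks per group = 8 * block size in bytes
    first=$bpg
    if [ "$bs" -eq 1024 ]; then first=$(( bpg + 1 )); fi
    echo "block size $bs: fsck.ext4 -b $first /dev/md0"
done
```

A 7.3 TiB array was almost certainly formatted with 4 KiB blocks, so 32768 is the more promising candidate here.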

psycodad
Legendary
*
Offline Offline

Activity: 1604
Merit: 1564


精神分析的爸


View Profile
January 15, 2020, 09:06:01 PM
Last edit: January 15, 2020, 09:21:11 PM by psycodad
Merited by Mitchell (10)
 #13

 Grin
Seriously:
Again, don't panic: a power failure can't fsck up your array that badly; only the operator is capable of that (I know *exactly* what I am talking about here - I lost TBs of data just from getting panicky and impatient). If you didn't forget to tell us about re-formatting the fs or similar nasties, you are still in a good position. Just annoying problems so far.

Regarding the problem at hand: Did the RAID ever rebuild/resync during your tries or after the failure?
I mean hours of disk activity while cat /proc/mdstat shows minimal progress?

If not, you have probably forcefully assembled a damaged RAID with the --assume-clean from above. The next thing I would recommend is actually resyncing that RAID, either by simply rebooting or by stopping md with
Code:
mdadm --stop /dev/md0
mdadm --assemble --scan

Code:
cat /proc/mdstat
should then show it is syncing.

Edit:
Try
Code:
dumpe2fs /dev/md0|grep super
as makrospex suggested, to verify that the superblocks suggested by fsck.ext4 are correct (I assumed so, but I could be wrong).

psycodad
Legendary
*
Offline Offline

Activity: 1604
Merit: 1564


精神分析的爸


View Profile
January 15, 2020, 09:21:54 PM
 #14

Apologies, didn't check back and just typed it off the top of my head, my fault.
Edited in above post.

Quote
Quote
Regarding the problem at hand: Did the RAID ever rebuild/resync during your tries or after the failure?
I mean hours of disk activity while cat /proc/mdstat shows minimal progress?

Took about 12 hours to rebuild/resync when I tried.


Hmm, okay, then my assumption was wrong and the array should be clean.

psycodad
Legendary
*
Offline Offline

Activity: 1604
Merit: 1564


精神分析的爸


View Profile
January 15, 2020, 09:51:40 PM
 #15

Ok, checked again through what you posted above and would suggest the following:

Code:
swapon -a
mount /dev/md0p1 /mnt

Didn't really question you trying to mount md0, even though your output above says you partitioned md0...


psycodad
Legendary
*
Offline Offline

Activity: 1604
Merit: 1564


精神分析的爸


View Profile
January 15, 2020, 09:59:48 PM
 #16

Code:
root@bitcorn:/dev# swapon -a
root@bitcorn:/dev# mount /dev/md0p1 /mnt
mount: /mnt: wrong fs type, bad option, bad superblock on /dev/md0p1, missing codepage or helper program, or other error.
root@bitcorn:/dev# mount /dev/md0p1 /mnt/md0
mount: /mnt/md0: wrong fs type, bad option, bad superblock on /dev/md0p1, missing codepage or helper program, or other error.

Try fsck'ing it first:

Code:
fsck.ext4 /dev/md0p1


psycodad
Legendary
*
Offline Offline

Activity: 1604
Merit: 1564


精神分析的爸


View Profile
January 15, 2020, 10:20:49 PM
 #17

Hmmm... that would have been too easy anyway  Cheesy

You could try again to find backup superblocks with dumpe2fs:

Code:
dumpe2fs /dev/md0p1|grep -i super


makrospex
Sr. Member
****
Offline Offline

Activity: 728
Merit: 317


nothing to see here


View Profile
January 15, 2020, 10:54:16 PM
Last edit: January 15, 2020, 11:24:15 PM by makrospex
 #18

Hmmm... that would have been too easy anyway  Cheesy

You could try again to find backup superblocks with dumpe2fs:

Code:
dumpe2fs /dev/md0p1|grep -i super



sorry for the late reply.

one may also poke for potential superblock copies by using fsck -b with the popular backup-superblock locations (8193, 16384, 32768, ...)
in case you don't know the block size of the filesystem.


I also see that GPT message...
i would try to read that out with gdisk or sgdisk

sgdisk -p /dev/md0

since md0 is reported as active.
if that works, i would use the information to copy the data blocks from /md0p? to a new filesystem (with a valid superblock).
mounting the device/partition is failing, but the data should still be on there, so i'd build the strategy around getting at that data.
There is a bit more detail to this, but first i'd like to see the output of sgdisk -p.

I also didn't find much about this /dev/md127 on the net. Any idea, psycodad?

EDIT: seems like a complicated situation there. Don't give up.
The main thing that kept up my motivation with these errors, was seeking to experience that certain feeling after i got one of them finally resolved.

Off to sleep now.

psycodad
Legendary
*
Offline Offline

Activity: 1604
Merit: 1564


精神分析的爸


View Profile
January 15, 2020, 11:43:16 PM
 #19

<snip>

Good points/ideas! Will be interesting to see the output of sgdisk and if it confirms my current theory.

I also didn't find much about /dev/md127 on the net. any idea psycodad?

Never seen that md127 before, would check /etc/mdadm/mdadm.conf, but that's only a wild guess.

After revisiting the below, I noticed that md0p1 has no recognized filesystem. So trying to rebuild the journal with tune2fs is probably of no help. But it indeed hints at a broken superblock, methinks.

re: lsblk -f, etc.: It crashed due to power failure, for the sake of argument. Can't be sure about LVM. I'm a bit of a tard when it comes to Linux.

Code:
root@bitcorn:/# lsblk -f
root@bitcorn:/# blkid
/dev/sda1: UUID="BA8D-865F" TYPE="vfat" PARTLABEL="EFI System Partition" PARTUUID="080f6638-f09f-4518-a785-6212e1b305b8"
/dev/sda2: UUID="42a6267e-28ba-4c93-8554-3dea69f5eb2e" TYPE="ext4" PARTUUID="f25d3ecd-9158-42b4-b502-4853b96bba78"
/dev/sda3: UUID="9bdd83b3-0bb0-474e-a83e-8394a1fb2e8b" TYPE="swap" PARTUUID="75ae0f41-c850-4153-af77-b2085f465a4d"
/dev/sdb: UUID="b7b2edc8-cd74-ad69-e5e4-81961dcc8509" UUID_SUB="8757b0af-bb26-b0e6-f0e1-eb5672344df3" LABEL="bitcorn:0" TYPE="linux_raid_member"
/dev/sdc: UUID="b7b2edc8-cd74-ad69-e5e4-81961dcc8509" UUID_SUB="600cedf1-06f6-f8a1-5c27-528f7588eada" LABEL="bitcorn:0" TYPE="linux_raid_member"
/dev/sdd: UUID="b7b2edc8-cd74-ad69-e5e4-81961dcc8509" UUID_SUB="9c08f657-41d9-1a24-33b9-0f993e7f0d6f" LABEL="bitcorn:0" TYPE="linux_raid_member"
/dev/md0: PTTYPE="gpt"
/dev/md0p1: PARTUUID="11d10ea4-e100-3949-b9d2-c30072ef7648"
/dev/md0p2: UUID="3df11d9c-51f2-48d8-9ab0-49b6dcd47753" TYPE="swap" PARTUUID="53d67f15-7962-ed43-840a-881c288fc86a"
root@bitcorn:/#


It should definitely read TYPE="ext4" on the md0p1 line, but no filesystem type is detected at all.

Based on this the best recommendation I can come up with at this point is trying to restore a backup superblock on md0p1.

If dumpe2fs finds no superblock backups, there is the risky way of using mke2fs in simulation mode: by basically 'dry'-reformatting your partition, the computer will come up with the same superblock addresses as the first time it formatted it.

RL calls, will check back tomorrow.

jojo69
Legendary
*
Offline Offline

Activity: 3150
Merit: 4309


diamond-handed zealot


View Profile
January 16, 2020, 12:15:07 AM
 #20


Hurr... fsck on mountpoint... Durr... I'm special.


you and me both, guess who had to learn that you can't mount multiple file systems on a single mount point...

makrospex
Sr. Member
****
Offline Offline

Activity: 728
Merit: 317


nothing to see here


View Profile
January 16, 2020, 12:19:33 AM
Merited by vapourminer (2)
 #21

No sleep yet  Roll Eyes

Good points/ideas! Will be interesting to see the output of sgdisk and if it confirms my current theory.

I think we are on a similar pathway.

Never seen that md127 before, would check /etc/mdadm/mdadm.conf, but that's only a wild guess.

Seems to appear automatically in some recovery situations, according to some other forum threads.
(possibly also created during reorgs/rebuilds?)

After revisiting the below, I noticed that md0p1 has no recognized filesystem. So trying to rebuild the journal with tune2fs is probably of no help. But it indeed hints at a broken superblock, methinks.

...

It should definitely read TYPE="ext4" on the md0p1 line, but no filesystem type is detected at all.

Based on this the best recommendation I can come up with at this point is trying to restore a backup superblock on md0p1.

If dumpe2fs finds no superblock backups, there is the risky way of using mke2fs in simulation mode: by basically 'dry'-reformatting your partition, the computer will come up with the same superblock addresses as the first time it formatted it.

RL calls, will check back tomorrow.


Maybe try
Code:
fsck -t ext4 /dev/md0p1

first?
With a forced repair afterwards - after taking a backup, if one doesn't already exist.


Hurr... fsck on mountpoint... Durr... I'm special.


you and me both, guess who had to learn that you can't mount multiple file systems on a single mount point...

That's the way it goes.
Like Zen, only linux-wise  Grin
makrospex
Sr. Member
****
Offline Offline

Activity: 728
Merit: 317


nothing to see here


View Profile
January 16, 2020, 08:13:44 AM
Last edit: January 16, 2020, 08:26:01 AM by makrospex
Merited by ABCbits (1)
 #22

Good morning Smiley

So the drive has a physical sector size of 4096 bytes = 512 bytes * 8, and we know the starting sector number.

Quote
Number  Start (sector)    End (sector)  Size       Code  Name
   1            5118     15611104253   7.3 TiB     FD00  

Psycodad's opinion on recreating the filesystem with mke2fs seems a valid option to me there.
But first, i want you to try to mount partition 1 directly, but without trying to read the bad superblock, which (iirc) should work this way:

Code:
mount -t ext4 -o ro,offset=((5118*4096)) /dev/md0 /path/to/mountpoint 

or

Code:
mount -t ext4 -o ro,offset=((5118*512)) /dev/md0 /path/to/mountpoint 

I don't know for certain which size is right for calculating the offset, but GPT start sectors are counted in logical sectors (512 bytes here), so imho the second command is the one that should work.
If either mount command succeeds, check if your files are there in /path/to/mountpoint.
Partition 1 (which is /dev/md0p1 in linux) will be mounted in read-only mode.

BUT: The filesystem could also have errors; that's why it is mounted in read-only mode. At least you'd be able to access the data for backup, but the final goal should be recreation of a healthy superblock, so fsck can do its work before you remount in read-write mode.

If none of the above commands work, i'd try mke2fs to create a new superblock and handle the drive conventionally.
Wait, probing for the first superblock backup location by multiples of the sector size with
Code:
fsck -b ...
would be a valid option before that, but i want to read Psycodad's opinion on this first.

For future filesystem creation(s):
Optimally, logical sector size would match physical sector size and the start of any partition should be a multiple of that, unless you have to store many very small files (below physical sector size).
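The alignment rule in the last paragraph can be checked arithmetically: a partition start is 4 KiB-aligned when start_sector * logical_sector_size is divisible by 4096. Partition 1's start sector of 5118 (from the sgdisk output) fails that check, while the common modern default of 2048 passes:

```shell
# Alignment check with 512-byte logical and 4096-byte physical sectors.
lss=512
for start in 5118 2048; do
    if [ $(( start * lss % 4096 )) -eq 0 ]; then
        echo "start sector $start: aligned"
    else
        echo "start sector $start: misaligned"
    fi
done
```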
makrospex
Sr. Member
****
Offline Offline

Activity: 728
Merit: 317


nothing to see here


View Profile
January 16, 2020, 08:28:58 AM
 #23

Code:
mount -t ext4 -o ro,offset=((5118*4096)) /dev/md0 /path/to/mountpoint 
or
Code:
mount -t ext4 -o ro,offset=((5118*512)) /dev/md0 /path/to/mountpoint 

Code:
root@bitcorn:/dev# mount -t ext4 -o ro,offset=((5118*4096)) /dev/md0 /mnt/md0
bash: syntax error near unexpected token `('
root@bitcorn:/dev# mount -t ext4 -o ro,offset=((5118*512)) /dev/md0 /mnt/md0
bash: syntax error near unexpected token `('

Poop.

Arrr :/
Please calculate the value manually and omit the parentheses.

mount -t ext4 -o ro,offset=20963328 /dev/md0 /mnt/md0

or

mount -t ext4 -o ro,offset=2620416 /dev/md0 /mnt/md0
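The arithmetic bash rejected works once it is expanded up front with $(( )). Worth noting (an inference, not stated in the thread): sgdisk prints start sectors in logical 512-byte sectors, so the 512-based value is the likelier offset; the 4096-based value is computed only because both were floated above.

```shell
# Expand the arithmetic before building the mount option string.
start=5118
echo "offset=$(( start * 512 ))"    # logical sectors  -> offset=2620416
echo "offset=$(( start * 4096 ))"   # physical sectors -> offset=20963328
```

mount -t ext4 -o ro,offset=$(( 5118 * 512 )) /dev/md0 /mnt/md0 would also have worked directly, since the shell expands $(( )) before mount ever sees the option.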

makrospex
Sr. Member
****
Offline Offline

Activity: 728
Merit: 317


nothing to see here


View Profile
January 16, 2020, 08:40:16 AM
 #24

Please calculate the value manually and omit the quotes.

Yeah, figured I'd just enter the calculated values. No freaking idea what /dev/loop0 has to do with anything.

Linux really makes me sad sometimes Sad

EDIT: Sleep time. Back in a few hours. Thanks again !

you're welcome.
The "loop block device" lets you use a file as if it were a block device (a virtual hard drive, for example). mount's offset= option implicitly sets one up over the source device, which is why /dev/loop0 shows up in your error messages.

https://en.wikipedia.org/wiki/Loop_device

Windows made me even more sad, tbh.
But that's a longer story, and way off topic  Cheesy
psycodad
Legendary
*
Offline Offline

Activity: 1604
Merit: 1564


精神分析的爸


View Profile
January 16, 2020, 09:05:18 AM
 #25

Moinmoin,

Thanks makrospex for picking up where I left off :-)

Anyway, there is one thing that strikes me as very odd, but it may be too early or I may be simply too stupid (or both), but...:


sgdisk -p /dev/md0

Code:
root@bitcorn:/dev# sgdisk -p /dev/md0
Disk /dev/md0: 15627544576 sectors, 7.3 TiB
Sector size (logical/physical): 512/4096 bytes
Disk identifier (GUID): 00000000-0000-0000-0000-000000000000
Partition table holds up to 128 entries
Main partition table begins at sector 2 and ends at sector 33
First usable sector is 34, last usable sector is 15627544542
Partitions will be aligned on 8-sector boundaries
Total free space is 8141 sectors (4.0 MiB)

Number  Start (sector)    End (sector)  Size       Code  Name
   1            5118     15611104253   7.3 TiB     FD00 
   2     15611104256     15627541487   7.8 GiB     8200 


While p2 correctly shows as type "Linux swap" (8200), p1 shows as "Linux RAID" (FD00).
IMHO it should show as 8300 (Linux filesystem) for an ext4 partition. Looking for confirmation (or to be shown wrong) from makrospex here.

@Bob: Can you show us /etc/mdadm/mdadm.conf and when it was last modified (so we know it was not recreated recently):
Code:
cat /etc/mdadm/mdadm.conf; stat /etc/mdadm/mdadm.conf

It might give us a new idea..

* psycodad needs much more coffee..



makrospex
Sr. Member
****
Offline Offline

Activity: 728
Merit: 317


nothing to see here


View Profile
January 16, 2020, 09:11:12 AM
 #26

Moinmoin,

Thanks makrospex for picking up where I left off :-)

Gern geschehen  Grin

Quote
Anyway, there is one thing that strikes me as very odd, but it may be too early or I may be simply too stupid (or both), but...:


sgdisk -p /dev/md0

Code:
root@bitcorn:/dev# sgdisk -p /dev/md0
Disk /dev/md0: 15627544576 sectors, 7.3 TiB
Sector size (logical/physical): 512/4096 bytes
Disk identifier (GUID): 00000000-0000-0000-0000-000000000000
Partition table holds up to 128 entries
Main partition table begins at sector 2 and ends at sector 33
First usable sector is 34, last usable sector is 15627544542
Partitions will be aligned on 8-sector boundaries
Total free space is 8141 sectors (4.0 MiB)

Number  Start (sector)    End (sector)  Size       Code  Name
   1            5118     15611104253   7.3 TiB     FD00  
   2     15611104256     15627541487   7.8 GiB     8200  


While p2 correctly shows as type "Linux swap" (8200), p1 shows as "Linux RAID" (FD00).
IMHO it should show as 8300 (Linux filesystem) for an ext4 partition. Looking for confirmation (or to be shown wrong) from makrospex here.

@Bob: Can you show us /etc/mdadm/mdadm.conf and when it was last modified (so we know it was not recreated recently):
Code:
cat /etc/mdadm/mdadm.conf; stat /etc/mdadm/mdadm.conf

It might give us a new idea..

* psycodad needs much more coffee..

There can't be enough coffee, ever.
I realize that i need one too  Shocked

I was also wondering, looks like *cough* linuxnoob partitioning to me.

Now i'll paste what i was about to post before reading your reply:

I read into the mke2fs man page and found the -n switch:

Quote
-n
    Causes mke2fs to not actually create a filesystem, but display what it would do if it were to create a filesystem. This can be used to determine the location of the backup superblocks for a particular filesystem, so long as the mke2fs parameters that were passed when the filesystem was originally created are used again. (With the -n option added, of course!)

I suspect the default values ("auto" at setup) were used on filesystem creation, so

Code:
mke2fs -n /dev/md0p1

should output the same backup block numbers for use with
Code:
fsck -b ...


Waddaya think?

EDIT: knowing the mdadm.conf contents would surely help, but BobLawblaw has to wake up first. Damned time zones.
psycodad
Legendary
*
Offline Offline

Activity: 1604
Merit: 1564


精神分析的爸


View Profile
January 16, 2020, 09:28:38 AM
 #27

<snip>

Quote
-n
    Causes mke2fs to not actually create a filesystem, but display what it would do if it were to create a filesystem. This can be used to determine the location of the backup superblocks for a particular filesystem, so long as the mke2fs parameters that were passed when the filesystem was originally created are used again. (With the -n option added, of course!)

I suspect the default values ("auto" at setup) were used on filesystem creation, so

Code:
mke2fs -n /dev/md0p1

should output the same backup block numbers for use with
Code:
fsck -b ...


Waddaya think?

Not trying to be a smartass at all (ok, maybe a tiny lil bit  Grin), that was what I meant by:

<snip>

If dumpe2fs finds no superblock backups, there is the risky way of using mke2fs in simulation mode: by basically 'dry'-reformatting your partition, the computer will come up with the same superblock addresses as the first time it formatted it.

RL calls, will check back tomorrow.


It seems dangerous: it will ask you to answer y as if to reformat your partition (even though with -n it doesn't). I tried it on a USB stick and it does indeed not reformat (even the wording of the confirmation is confusing, as the author of mke2fs once admitted).

But yes, that would be one of the next steps I'd also propose.

After that I only have testdisk in mind, but that's rather dangerous too; it's easy to do something wrong.

makrospex
Sr. Member
****
Offline Offline

Activity: 728
Merit: 317


nothing to see here


View Profile
January 16, 2020, 09:46:34 AM
 #28


Not trying to be a smartass at all (ok, maybe a tiny lil bit  Grin), that was what I meant by:

<snip>

If dumpe2fs finds no superblock backups, there is the risky way of using mke2fs in simulation mode: by basically 'dry'-reformatting your partition, the computer will come up with the same superblock addresses as the first time it formatted it.

RL calls, will check back tomorrow.


It seems dangerous: it will ask you to answer y as if to reformat your partition (even though with -n it doesn't). I tried it on a USB stick and it does indeed not reformat (even the wording of the confirmation is confusing, as the author of mke2fs once admitted).

But yes, that would be one of the next steps I'd also propose.

After that I only have testdisk in mind, but that's rather dangerous too; it's easy to do something wrong.

Right, testdisk would also not be suitable for guided use; i would want my own hands on that.
So the next steps are clear now. I'd like to know if the offset mount of /dev/md0 was successful, but i'll have to wait for the OP to show up again.
Also mdadm.conf contents...

Since the source of the errors was a power outage and the filesystem wasn't in sync, i'm positive that the problem will finally be solved.
bad blocks would be worse.
Also, i seem to be in luck this week. Two days ago i spent almost six hours shadow-copying an old laptop's hdd, which was making occasional noises of horror, into an 18-gig .vhd file for use in VirtualBox. Roughly fifteen minutes after completion, the laptop rebooted and got stuck at the bios splash screen. So i removed the drive, connected it via a usb-ide cable and copied the .vhd file over to the virtualbox host. The file was vital, and the drive died just before i got around to disconnecting it (i was lazy).
With so much luck, i'm confident  Grin
jojo69
Legendary
*
Offline Offline

Activity: 3150
Merit: 4309


diamond-handed zealot


View Profile
January 16, 2020, 02:23:45 PM
 #29



Windows made me even more sad, tbh.


precisely

psycodad
Legendary
*
Offline Offline

Activity: 1604
Merit: 1564


精神分析的爸


View Profile
January 16, 2020, 04:05:28 PM
Merited by vapourminer (2)
 #30

@Bob: Can you show us /etc/mdadm/mdadm.conf and when it was last modified (so we know it was not recreated recently):
Code:
cat /etc/mdadm/mdadm.conf; stat /etc/mdadm/mdadm.conf
It might gives us a new idea..
* psycodad needs much more coffee..

I commented out the mdadm.conf ARRAY stuff. I figured it looked fucked up enough that it was effectively useless. I have no idea what I'm doing.

Code:
#ARRAY /dev/md0 metadata=1.2 spares=1 name=xserver:0 UUID=e0104263:8aef7ef8:2d66045f:26a69f31
#ARRAY /dev/md0 metadata=1.2 spares=1 name=xserver:0 UUID=6314fb8d:1466fbbd:5aa88188:793c3019
#ARRAY /dev/md0 metadata=1.2 spares=1 name=bitcorn:0 UUID=4b7f5741:476a4550:ba1184b2:76efd22e
#ARRAY /dev/md/0  metadata=1.2 UUID=4b7f5741:476a4550:ba1184b2:76efd22e name=bitcorn:0
#ARRAY /dev/md/bitcorn:0 metadata=1.2 name=bitcorn:0 UUID=4196079a:155fab25:eb7fa044:37adb2ce

Okay, that looks pretty weird, I guess that system has seen quite a few changes already.

I am still circling around this partition type FD00 on md0p1. Just for fun and giggles, could you try to see if there is another raid device on top of md0p1 when you scan now?

Code:
mdadm --assemble --scan


makrospex
Sr. Member
****
Offline Offline

Activity: 728
Merit: 317


nothing to see here


View Profile
January 16, 2020, 04:08:13 PM
 #31

@Bob: Can you show us /etc/mdadm/mdadm.conf and when it was last modified (so we know it was not recreated recently):
Code:
cat /etc/mdadm/mdadm.conf; stat /etc/mdadm/mdadm.conf
It might gives us a new idea..
* psycodad needs much more coffee..

I commented out the mdadm.conf ARRAY stuff. I figured it looked fucked up enough that it was effectively useless. I have no idea what I'm doing.

Code:
#ARRAY /dev/md0 metadata=1.2 spares=1 name=xserver:0 UUID=e0104263:8aef7ef8:2d66045f:26a69f31
#ARRAY /dev/md0 metadata=1.2 spares=1 name=xserver:0 UUID=6314fb8d:1466fbbd:5aa88188:793c3019
#ARRAY /dev/md0 metadata=1.2 spares=1 name=bitcorn:0 UUID=4b7f5741:476a4550:ba1184b2:76efd22e
#ARRAY /dev/md/0  metadata=1.2 UUID=4b7f5741:476a4550:ba1184b2:76efd22e name=bitcorn:0
#ARRAY /dev/md/bitcorn:0 metadata=1.2 name=bitcorn:0 UUID=4196079a:155fab25:eb7fa044:37adb2ce

Hmm, that was unexpected  Huh
I hope you made a backup of the file, which you can move back in place?
Or roll back any changes?


Let's continue with the "script":

First a question: Did one of the
Code:
mount -t ext4 -o ro,offset=xxx /dev/md0 /mnt/md0
succeed?
So
Code:
ls -la /mnt/md0
is showing the contents of the raid?

If yes, copy away what you need and then:


Code:
umount /dev/md0
mke2fs -t ext4 -n /dev/md0p1

and paste the output into the reply.

Don't give up yet; you seem frustrated, but it's too early for that, man  Smiley

EDIT: First please follow Psycodad's guidance, i crossposted but was definitely later than him.
makrospex
Sr. Member
****
Offline Offline

Activity: 728
Merit: 317


nothing to see here


View Profile
January 16, 2020, 04:22:54 PM
 #32

First a question: Did one of the
Code:
mount -t ext4 -o ro,offset=xxx /dev/md0 /mnt/md0
succeed?
So
Code:
ls -la /mnt/md0
is showing the contents of the raid?

Was not able to get it to mount at all using the offset calculations.

doh
then it's working out the backup superblock addresses via
Code:
mke2fs -n
and using these to find an intact copy to use.
My fucking spacebar is making me mad, something sticks under it, so i have to punch it after every word to get a space in the text  Angry
psycodad
Legendary
*
Offline Offline

Activity: 1604
Merit: 1564


精神分析的爸


View Profile
January 16, 2020, 04:36:25 PM
 #33


doh
then it's working out the backup superblock addresses via
Code:
mke2fs -n
and using these to find an intact copy to use.
My fucking spacebar is making me mad, something sticks under it, so i have to punch it after every word to get a space in the text  Angry

Agree with makrospex, the next try would be actually finding and restoring a good backup of the superblock of md0p1; I think he meant to say:

Code:
mke2fs -n /dev/md0p1

to find the superblocks.
Triple-check it has the -n before you answer 'y' to the following prompt!


Regarding the mounts to try from makrospex, please try these again with /dev/md0p1 instead of /dev/md0, i.e.:

Code:
mount -t ext4 -o ro,offset=xxx /dev/md0p1 /mnt/md0

Maybe also check that /mnt/md0 is empty, otherwise the mount might fail with a slightly different error message that you could overlook easily.


psycodad
Legendary
*
Offline Offline

Activity: 1604
Merit: 1564


精神分析的爸


View Profile
January 16, 2020, 05:17:05 PM
Last edit: January 16, 2020, 07:06:44 PM by psycodad
 #34

At least there are valid superblock backups, that's a start I guess.
^The software just tells us where these backup superblocks would be stored on a partition of the given size.

Please try the mounts with the device I suggested above and not /dev/md0:

i.e.
Code:
mount -t ext4 -o ro,offset=32768 /dev/md0p1 /mnt/md0

psycodad
Legendary
*
Offline Offline

Activity: 1604
Merit: 1564


精神分析的爸


View Profile
January 16, 2020, 05:26:18 PM
 #35

At least there are valid superblock backups, that's a start I guess.
Please try the mounts with the device I suggested above and not /dev/md0:
i.e.
Code:
mount -t ext4 -o ro,offset=32768 /dev/md0p1 /mnt/md0

Code:
root@bitcorn:/mnt/md0# mount -t ext4 -o ro,offset=32768 /dev/md0p1 /mnt/md0
mount: /mnt/md0: wrong fs type, bad option, bad superblock on /dev/loop0, missing codepage or helper program, or other error.

from mke2fs: /dev/md0p1 alignment is offset by 1024 bytes.
This may result in very poor performance, (re)-partitioning suggested.

Might that have something to do with it? Add 1024 to whatever the offsets say?

Hmm, not sure about the offset; you could try 1024, and try adding 1024 to the calculated values. You could also try to fsck it with the backup superblock specified:
Code:
fsck.ext4 -b 32768 /dev/md0p1

Otherwise I am a bit confused about the loop part still: did you by chance have, for example, an encrypted filesystem on top of that raid device?


psycodad
Legendary
*
Offline Offline

Activity: 1604
Merit: 1564


精神分析的爸


View Profile
January 16, 2020, 05:48:50 PM
 #36

You sure there is an ext4 filesystem to be expected? (Just asking, no offence meant)

Otherwise I'd be keen to hear any ideas from makrospex; I am getting a bit lost, and the loop device part still won't let go of me.

Just as some therapeutic exercise, could you post the output of
Code:
dumpe2fs -h /dev/md0p1

(I am really just fishing for new ideas here)

At this point I need to ask: did this contain a wallet that you didn't have backed up elsewhere, or any other unique and valuable data, or just a synced btc node that you want to get back online without syncing from genesis?


GoMaD
Member
**
Offline Offline

Activity: 74
Merit: 15


View Profile
January 16, 2020, 05:54:30 PM
 #37

can you please paste the output of

 
Code:
parted --list
GoMaD
Member
**
Offline Offline

Activity: 74
Merit: 15


View Profile
January 16, 2020, 06:49:29 PM
 #38

please try

Code:
fsck -fCV /dev/md0p1

and paste the output

EDIT: and the output from
Code:
dmesg | grep md0
please
makrospex
Sr. Member
****
Offline Offline

Activity: 728
Merit: 317


nothing to see here


View Profile
January 16, 2020, 06:52:58 PM
 #39

Sorry, had to put the kids to sleep.
will backread now and answer asap.
makrospex
Sr. Member
****
Offline Offline

Activity: 728
Merit: 317


nothing to see here


View Profile
January 16, 2020, 08:05:57 PM
Last edit: January 16, 2020, 08:21:05 PM by makrospex
 #40

This took a little longer, since I had to back- and cross-read and type a reply in a text editor so as not to lose the overview (because of my impaired working memory, which y'all might know about already). Here we go:

Quote
/dev/md0p1 alignment is offset by 1024 bytes.

This means the start of md0p1 is not at the start of a physical sector (multiple of 4096 bytes).
It's not a cause for failure.

compare to this:
Quote

root@bitcorn:/dev# sgdisk -p /dev/md0
Disk /dev/md0: 15627544576 sectors, 7.3 TiB
Sector size (logical/physical): 512/4096 bytes
Disk identifier (GUID): 00000000-0000-0000-0000-000000000000
Partition table holds up to 128 entries
Main partition table begins at sector 2 and ends at sector 33
First usable sector is 34, last usable sector is 15627544542
Partitions will be aligned on 8-sector boundaries
Total free space is 8141 sectors (4.0 MiB)

Number  Start (sector)    End (sector)  Size       Code  Name
   1            5118     15611104253   7.3 TiB     FD00  
   2     15611104256     15627541487   7.8 GiB     8200  

The partition should start at a sector number that is a multiple of 8, but starts at 34, because the partition table takes up 33 sectors.
So it's 2 sectors (of 512 bytes each) = 1024 bytes "behind" the optimum (or 6 sectors in front of it). It would have been better to start at sector 40, for example.
There are unused sectors at the end of the disk, meaning this is a GPT partition table (in contrast to classic MBR).
Strangely, the GUID of the drive is zero (00000000-0000-0000-0000-000000000000), but that might not be important either.
Also strange is the fact that the table lists 5118 as the starting sector of md0p1, instead of 34.
Partition 2 (md0p2) starts at 15611104256, which is a multiple of 8, thus PERFECT.
When calculating the length of md0p1, it's 2 sectors short of a multiple of 8.
Conclusion: Like
Code:
parted --list
suggests,
Quote
Error: The primary GPT table is corrupt, but the backup appears OK, so that will
be used.
the partition table is corrupted. I'll read up on how to handle this after this post.

The superblock backup positions from
Code:
mke2fs -n ...
are for
Code:
fsck -t ext4 -b 
, not for mount.
Since the master partition table seems to be invalid, we can't calculate a valid offset for mount, because we don't know the correct starting sector yet.
The solution to these might be found in the backup partition table too.

An ext4 superblock has a size of 1024 bytes. Coincidence?

Please try mount -t ext4 -o ro,offset=... /dev/md0p1 /mnt/md0 with offset values of 34*512 (and 35*512).
Make sure you are outside /mnt/md0 when mounting. Check contents of /mnt/md0 after mounting trials.
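In concrete numbers, that works out like this (a sketch; the offsets are only meaningful if sector 34 or 35 really is the start of the filesystem, which the corrupt table makes uncertain):

```shell
# Byte offsets for mount's offset= option: candidate start sector * 512.
offset34=$((34 * 512))
offset35=$((35 * 512))
echo "$offset34 $offset35"   # 17408 17920

# The mounts to try with these values (read-only, nothing written to the array):
#   mount -t ext4 -o ro,offset=$offset34 /dev/md0p1 /mnt/md0
#   mount -t ext4 -o ro,offset=$offset35 /dev/md0p1 /mnt/md0
```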

Then issue a

e2fsck -b some_backup_block_numbers_from_mke2fs /dev/md0p1

preferably using the higher backup superblock numbers (-b takes a single block number, so try them one at a time).
The block size would be good to know exactly; it defaults to 4096.
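If any superblock becomes readable at some point, the exact block size can be pulled straight out of the dumpe2fs header. A sketch — the heredoc below is a hypothetical stand-in for `dumpe2fs -h /dev/md0p1` output, with made-up numbers:

```shell
# Parse the "Block size:" field from dumpe2fs-style header output.
# The heredoc is a hypothetical stand-in for: dumpe2fs -h /dev/md0p1
sample=$(cat <<'EOF'
Filesystem volume name:   <none>
Block count:              1952943036
Block size:               4096
Fragment size:            4096
EOF
)
block_size=$(printf '%s\n' "$sample" | awk -F': *' '/^Block size:/ {print $2}')
echo "$block_size"   # 4096
```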

The next shot would be using "testdisk" in non-destructive mode, to find out more details about the partitions of md0 and values like the real start, end and block size of md0p1,
or even to restore it all completely, manually, from the testdisk logs.

If you want to put up a thread in an expert forum (the community support forum of your linux distribution, i'd suggest) please link it here, so we can follow and give further assistance, raise our cups when solved, or whatever Smiley

EDIT:

Advice: I have avoided raid arrays most of the time, because if you get trouble with them beyond anything a rebuild/resync can handle, you're going to face problems in the higher PITA levels. Either the disks/partitions are not readable after the raid controller died and you have no backup hardware at hand, or, in the case of software raid, you can't just mount the raid sub-partitions and read them easily. I went to single drives (ssd for performance) and frequent backups precisely NOT to rely on raid whenever possible.

Backup is uncomfortable and takes a lot of storage, but it's also somehow inevitable. Sorry for the pain. I hope you have a key/seed backup for the wallet, and that the blockchain sync does not take forever if you set up a new node.

If you haven't already, DISABLE the f**ing green power saving functions of any hard drive you use. These lead to parking the heads much more often and spinning down drives too soon, which puts stress on the hardware and kills drives sooner (which adds to the electronic waste problem, thus anything but "green"). I disabled this on all my "green" WDC 2TB drives in my NAS after I saw one failing after half a year because of it.
makrospex
Sr. Member
****
Offline Offline

Activity: 728
Merit: 317


nothing to see here


View Profile
January 16, 2020, 08:32:41 PM
 #41

Please try mount -t ext4 -o ro,offset=... /dev/md0p1 /mnt/md0 with offset values of 34*512 (and 35*512).
Make sure you are outside /mnt/md0 when mounting. Check contents of /mnt/md0 after mounting trials.

Code:
root@bitcorn:/# mount -t ext4 -o ro,offset=17408 /dev/md0p1 /mnt/md0
mount: /mnt/md0: wrong fs type, bad option, bad superblock on /dev/loop0, missing codepage or helper program, or other error.

root@bitcorn:/# mount -t ext4 -o ro,offset=17920 /dev/md0p1 /mnt/md0
mount: /mnt/md0: wrong fs type, bad option, bad superblock on /dev/loop0, missing codepage or helper program, or other error.

Would rather deal with you fine folks, here on these forums, than opening another thread on the Ubuntu forums. I intensely dislike creating more accounts than I have to.

For example, some dude from some registrar I don't use, wants to buy one of my domains from NetSol, and I'm all like "Fuck off, man. I don't want to create an account on your website. Rejecting your offer. Not for sale."

I'm a strange dude.

 Cheesy

Being a "normal" dude would be way worse, imo  Grin

I misread your last post, in that support-forum manner.

Please see if you have "testdisk" installed, or just apt-get install it. I'll point you to a quick guide so you can familiarize yourself with what it does for you (as I will have to read up some pages too). You can examine disks/partitions to get to know the program; just don't change/write anything yet.


makrospex
Sr. Member
****
Offline Offline

Activity: 728
Merit: 317


nothing to see here


View Profile
January 16, 2020, 08:48:22 PM
 #42

Interesting.

Found this, same problem, different configuration:

https://forum.cgsecurity.org/phpBB3/viewtopic.php?t=6446

All superblock backups failed for this user.
I'm reading further into it now...
makrospex
Sr. Member
****
Offline Offline

Activity: 728
Merit: 317


nothing to see here


View Profile
January 16, 2020, 08:56:34 PM
 #43

Strategy:

Find valid superblocks using testdisk, for use with e2fsck. This works in most cases, according to the internet.
In some cases, data can be read out directly with testdisk for backup.
But ultimately, lossless recovery is the goal.

EDIT:

https://forum.cgsecurity.org/phpBB3/viewtopic.php?t=34

Short version:

Code:
fsck.ext4 -y /dev/md127

Did the trick there. But please read the whole thing, since -y answers yes to all fsck prompts, including confirmations of destructive changes.
makrospex
Sr. Member
****
Offline Offline

Activity: 728
Merit: 317


nothing to see here


View Profile
January 16, 2020, 09:14:51 PM
 #44

It's

/dev/md127

not

/dev/md0

this time.

EDIT: When the kernel fails to assemble a raid under its configured name /dev/md0, it falls back to assembling it at /dev/md127.
This results from a failed assembly, not uncommonly caused by errors in mdadm.conf.

If the fsck of md127 does not help getting it to mount, we will have to go over the mdadm.conf and identify the right ARRAY line to uncomment.
Most of the time it's a mismatch between the UUID of the array and the config file.
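For that mdadm.conf round, the usual approach is to regenerate the ARRAY line from the running array instead of hand-editing UUIDs. A sketch, with Debian/Ubuntu default paths and a placeholder UUID/name (not values from this box):

```shell
# Print the canonical ARRAY line for whatever is currently assembled:
#   mdadm --detail --scan
# which emits a line of this shape (UUID and name here are placeholders):
#   ARRAY /dev/md0 metadata=1.2 name=bitcorn:0 UUID=aaaaaaaa:bbbbbbbb:cccccccc:dddddddd
#
# After commenting out the stale ARRAY line in /etc/mdadm/mdadm.conf:
#   mdadm --detail --scan >> /etc/mdadm/mdadm.conf
#   update-initramfs -u    # so the array assembles as md0 (not md127) at early boot
```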
makrospex
Sr. Member
****
Offline Offline

Activity: 728
Merit: 317


nothing to see here


View Profile
January 16, 2020, 09:41:22 PM
Last edit: January 16, 2020, 09:52:08 PM by makrospex
 #45

More bleh.

Came up as md127 upon a reboot.

Code:
root@bitcorn:/dev# mdadm --stop /dev/md127
mdadm: stopped /dev/md127
root@bitcorn:/dev# mdadm --assemble --scan
mdadm: /dev/md/0 has been started with 3 drives.
root@bitcorn:/dev# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4] [linear] [multipath] [raid0] [raid1] [raid10]
md0 : active raid5 sdb[0] sdd[2] sdc[1]
      7813772288 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/3] [UUU]
      bitmap: 0/30 pages [0KB], 65536KB chunk

unused devices: <none>
root@bitcorn:/dev# dumpe2fs /dev/md0 | grep super
dumpe2fs 1.44.1 (24-Mar-2018)
dumpe2fs: Bad magic number in super-block while trying to open /dev/md0
Found a gpt partition table in /dev/md0
Couldn't find valid filesystem superblock.

Plan:
1. Reboot to make /dev/md127 reappear.
2. fsck -y /dev/md127
3. mount -t ext4 -o ro /dev/md127 /mnt/md0
4. check contents of /mnt/md0 directory

If it works, it either mounts as md0 at the next reboot, or we have to go into mdadm.conf checking/editing

EDIT: We don't lose hope. We lose things like (car)keys, memories and all our corn because of boating accidents  Grin
makrospex
Sr. Member
****
Offline Offline

Activity: 728
Merit: 317


nothing to see here


View Profile
January 17, 2020, 06:33:43 AM
 #46

Plan:
1. Reboot to make /dev/md127 reappear.
2. fsck -y /dev/md127
3. mount -t ext4 -o ro /dev/md127 /mnt/md0
4. check contents of /mnt/md0 directory
If it works, it either mounts as md0 at the next reboot, or we have to go into mdadm.conf checking/editing
EDIT: We don't lose hope. We lose things like (car)keys, memories and all our corn because of boating accidents  Grin

1. Rebooted. Came back as /dev/md0
2.
Code:
root@bitcorn:/# fsck -y /dev/md0
fsck from util-linux 2.31.1
e2fsck 1.44.1 (24-Mar-2018)
ext2fs_open2: Bad magic number in super-block
fsck.ext4: Superblock invalid, trying backup blocks...
fsck.ext4: Bad magic number in super-block while trying to open /dev/md0

The superblock could not be read or does not describe a valid ext2/ext3/ext4
filesystem.  If the device is valid and it really contains an ext2/ext3/ext4
filesystem (and not swap or ufs or something else), then the superblock
is corrupt, and you might try running e2fsck with an alternate superblock:
    e2fsck -b 8193 <device>
 or
    e2fsck -b 32768 <device>

Found a gpt partition table in /dev/md0

3. Sheeeeeeeeeeeeeeeeeeeeeeeeeeeit.

Indeed  Undecided

Please paste the output of

Code:
mdadm --detail /dev/md0
makrospex
Sr. Member
****
Offline Offline

Activity: 728
Merit: 317


nothing to see here


View Profile
January 17, 2020, 07:15:18 AM
 #47

Shits fucked up reel gud now. No md arrays are showing up on reboot now.

Re-issuing mdadm --create --verbose /dev/md0 --level=5 --raid-devices=3 /dev/sdb /dev/sdc /dev/sdd and going to bed. Will check again in the morning, but I'm about ready to just say "fuck the array", lose mdadm altogether, and just have four separate disks in the system.

Bleh. Such bullshit.

 Undecided

sucks