Bitcoin Forum
Author Topic:    (Read 575 times)
BobLawblaw (OP)
Legendary
Activity: 1822 | Merit: 5551
Neighborhood Shenanigans Dispenser
January 15, 2020, 06:06:07 PM
Last edit: July 17, 2020, 03:26:26 PM by BobLawblaw
#1

"The nature of Bitcoin is such that once version 0.1 was released, the core design was set in stone for the rest of its lifetime." -- Satoshi
jojo69
Legendary
Activity: 3150 | Merit: 4309
diamond-handed zealot
January 15, 2020, 06:17:50 PM
#2

oh man, I'm sorry Bob, I loathe crap like this

I've got no help for you; I spent a whole day recently editing fstab just to get my legacy volumes mounted, but I'm going to follow along and try to learn something.

I will say this: RAID, particularly software RAID, has been more trouble than good in my personal experience. It's supposed to protect you from physical disk failure, but it introduces whole other layers of failure modes.

This is not some pseudoeconomic post-modern Libertarian cult, it's an un-led, crowd-sourced mega startup organized around mutual self-interest where problems, whether of the theoretical or purely practical variety, are treated as temporary and, ultimately, solvable.
Censorship of e-gold was easy. Censorship of Bitcoin will be… entertaining.
makrospex
Sr. Member
Activity: 728 | Merit: 317
nothing to see here
January 15, 2020, 06:30:44 PM
Merited by BobLawblaw (20)
#3

uh, oh...

have you also tried
Code:
mdadm --create --assume-clean ...
?
Don't be a fool: dd those HDD blocks off to a file on a backup medium first (if you haven't already done so).
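(A minimal sketch of that backup step, assuming a big enough volume mounted at /mnt/backup and the member-disk names that show up later in this thread:)
Code:
dd if=/dev/sdb of=/mnt/backup/sdb.img bs=4M conv=noerror,sync status=progress
# repeat for /dev/sdc and /dev/sdd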

EDIT: imo, as long as the disks are physically OK, you should have a chance of lossless recovery. I've had more problems with journaling filesystems than with ext2 or vfat; the only exception was reiser, but since Hans went to jail, I stopped using it.
psycodad
Legendary
Activity: 1604 | Merit: 1564
精神分析的爸
January 15, 2020, 07:12:42 PM
Merited by BobLawblaw (20)
#4

May I ask what led to this situation? (power loss, disk crash, etc.)

Your array is obviously perfectly fine, don't panic!

Did you by chance have LVM on top of that RAID?

If you can, post your fstab and the output of 'lsblk -f' or 'blkid'.

psycodad
Legendary
*
Offline Offline

Activity: 1604
Merit: 1564


精神分析的爸


View Profile
January 15, 2020, 07:32:53 PM
 #5

Thanks.

Again, your array is fine according to the /proc/mdstat you posted; no need to recreate it and risk anything.

Please also post output from
Code:
cat /etc/fstab


vapourminer
Legendary
Activity: 4312 | Merit: 3507
what is this "brake pedal" you speak of?
January 15, 2020, 07:51:25 PM
#6

Bob, sorry I'm of no help in this, just wanted to comment that I've dropped my Z1 (3-drive RAID 5) arrays and switched to either Z2 (5+ drives, any 2 can fail) for important stuff or straight-up mirrors for less important. I love mirrors cuz you can read the remaining drive on most anything.

Best of luck, I'm lurking as I hope to learn something here too.

BTW, talking about the ZFS filesystem on my NAS. For desktops I always mirror.
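(For reference, a rough sketch of the two ZFS layouts described above; the pool name and disk names are made up:)
Code:
zpool create tank raidz2 sdb sdc sdd sde sdf   # raidz2: any two of the five disks can fail
zpool create tank mirror sdb sdc               # plain two-way mirror, each half readable on any ZFS-capable system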
psycodad
Legendary
Activity: 1604 | Merit: 1564
精神分析的爸
January 15, 2020, 08:14:32 PM
#7

Thanks, so we know there was no LVM on top.
Please try fsck again, but on the device, not on the mountpoint as you did above:

Code:
fsck.ext4 /dev/md0

Edit: If the above starts, it will (depending on your hardware) take ages (or 2 minutes longer) to complete - do not interrupt it!
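(If you'd like a dry run first, e2fsck has a read-only mode - a sketch:)
Code:
fsck.ext4 -n /dev/md0   # -n: open read-only and answer "no" to all repair prompts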

makrospex
Sr. Member
Activity: 728 | Merit: 317
nothing to see here
January 15, 2020, 08:15:19 PM
#8


EDIT: forget that, didn't make sense.
psycodad
Legendary
Activity: 1604 | Merit: 1564
精神分析的爸
January 15, 2020, 08:21:57 PM
#9

Quote from: makrospex
So you know the filesystem on md0 is ext4, right?
Try giving mount /dev/sdb1 (and sdc1, sdd1) a hint via -t ext4, since it can't read this info from /etc/fstab.
Mount READ ONLY if you don't have a backup, and see if you can directly read from the partitions on the physical RAID disk(s).

EDIT: Or was it /dev/sdb0 (...)
My daily unix practice aged well.

There is no point in trying to mount a linux-raid5 partition directly.
It is RAID5, not RAID1 (in which case - as vapourminer mentions - you could mount one half of the mirror on its own).

makrospex
Sr. Member
Activity: 728 | Merit: 317
nothing to see here
January 15, 2020, 08:23:00 PM
#10

Quote from: psycodad
Quote from: makrospex
So you know the filesystem on md0 is ext4, right?
Try giving mount /dev/sdb1 (and sdc1, sdd1) a hint via -t ext4, since it can't read this info from /etc/fstab.
Mount READ ONLY if you don't have a backup, and see if you can directly read from the partitions on the physical RAID disk(s).

EDIT: Or was it /dev/sdb0 (...)
My daily unix practice aged well.

There is no point in trying to mount a linux-raid5 partition directly.
It is RAID5, not RAID1 (in which case - as vapourminer mentions - you could mount one half of the mirror on its own).

Came to my mind soon after writing. Edited out, thanks.
makrospex
Sr. Member
Activity: 728 | Merit: 317
nothing to see here
January 15, 2020, 08:34:00 PM
Merited by Mitchell (10), ABCbits (1)
#11

So you might find out the location of a superblock copy with

Code:
dumpe2fs

and use that for fsck?
I had a similar situation on Solaris 8, where I had to replace the superblock to make the volume readable again. It was a single volume, but it's a filesystem anyway.
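(A sketch of that flow against the array from this thread; the backup location shown is just an example, use one that dumpe2fs actually reports:)
Code:
dumpe2fs /dev/md0 | grep -i superblock   # list primary and backup superblock locations
fsck.ext4 -b 32768 /dev/md0              # retry fsck from one of the listed backups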

EDIT: ran a quick Google, and the first result pretty much hit what I searched for:
https://www.linuxquestions.org/questions/linux-software-2/mount-unknown-filesystem-type-%27linux_raid_member%27-4175491589/
It's a RAID level 5, too. See the last post in that thread.
psycodad
Legendary
Activity: 1604 | Merit: 1564
精神分析的爸
January 15, 2020, 08:36:08 PM
#12

@Bob: Does it say anything after that (probably not)?

Please try with the superblock backups listed, i.e.

Code:
fsck.ext4 -b 8193 /dev/md0

Agree with makrospex^: a fs is a fs, no matter which block device it sits on (at least from the perspective of the user and the tools trying to fix a filesystem).

psycodad
Legendary
Activity: 1604 | Merit: 1564
精神分析的爸
January 15, 2020, 09:06:01 PM
Last edit: January 15, 2020, 09:21:11 PM by psycodad
Merited by Mitchell (10)
#13

Grin
Seriously:
Again, don't panic: a power failure can't fsck up your array that badly, only the operator is capable of that (I know *exactly* what I am talking about here! - lost TBs of data purely from getting panicky and impatient). If you didn't forget to tell us about re-formatting the fs or similar nasties, you are still in a good position. Just annoying problems so far.

Regarding the problem at hand: Did the RAID ever rebuild/resync during your tries or after the failure?
I mean hours of disk activity while cat /proc/mdstat shows minimal progress?

If not, you have probably forcefully assembled a damaged RAID with the --assume-clean from above. The next thing I would recommend is actually resyncing that RAID, either by simply rebooting or by stopping and reassembling md with
Code:
mdadm --stop /dev/md0
mdadm --assemble --scan

Code:
cat /proc/mdstat
should then show it is syncing.
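(To keep an eye on the resync without retyping, a small sketch:)
Code:
watch -n 5 cat /proc/mdstat   # refreshes the mdstat output every 5 seconds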

Edit:
Try
Code:
dumpe2fs /dev/md0|grep super
as makrospex suggested, to verify that the superblocks suggested by fsck.ext4 are correct (I assumed so, but I could be wrong).

psycodad
Legendary
Activity: 1604 | Merit: 1564
精神分析的爸
January 15, 2020, 09:21:54 PM
#14

Apologies, didn't check back and just typed from the top of my head, my fault.
Edited it in the post above.

Quote
Quote
Regarding the problem at hand: Did the RAID ever rebuild/resync during your tries or after the failure?
I mean hours of disk activity while cat /proc/mdstat shows minimal progress?

Took about 12 hours to rebuild/resync when I tried.

Hmm, okay, then my assumption was wrong and the array should be clean.

psycodad
Legendary
Activity: 1604 | Merit: 1564
精神分析的爸
January 15, 2020, 09:51:40 PM
#15

Ok, I checked again through what you posted above and would suggest the following:

Code:
swapon -a
mount /dev/md0p1 /mnt

I didn't really question you trying to mount md0, even though your output above says you partitioned md0...

psycodad
Legendary
Activity: 1604 | Merit: 1564
精神分析的爸
January 15, 2020, 09:59:48 PM
#16

Code:
root@bitcorn:/dev# swapon -a
root@bitcorn:/dev# mount /dev/md0p1 /mnt
mount: /mnt: wrong fs type, bad option, bad superblock on /dev/md0p1, missing codepage or helper program, or other error.
root@bitcorn:/dev# mount /dev/md0p1 /mnt/md0
mount: /mnt/md0: wrong fs type, bad option, bad superblock on /dev/md0p1, missing codepage or helper program, or other error.

Try fsck'ing it first:

Code:
fsck.ext4 /dev/md0p1


psycodad
Legendary
Activity: 1604 | Merit: 1564
精神分析的爸
January 15, 2020, 10:20:49 PM
#17

Hmmm... that would have been too easy anyway  Cheesy

You could try again to find backup superblocks with dumpe2fs:

Code:
dumpe2fs /dev/md0p1|grep -i super


makrospex
Sr. Member
Activity: 728 | Merit: 317
nothing to see here
January 15, 2020, 10:54:16 PM
Last edit: January 15, 2020, 11:24:15 PM by makrospex
#18

Quote from: psycodad
Hmmm... that would have been too easy anyway  Cheesy

You could try again to find backup superblocks with dumpe2fs:

Code:
dumpe2fs /dev/md0p1|grep -i super

Sorry for the late reply.

One may also poke for potential superblock copies by using fsck -b with the popular backup-superblock locations (8193, 16384, 32768, ...) in case you don't know the blocksize of the filesystem.
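(A sketch of poking those locations read-only first, so a wrong guess can't make things worse; md0p1 is taken from the posts above:)
Code:
for sb in 8193 16384 32768; do
    fsck.ext4 -n -b $sb /dev/md0p1 && break   # -n keeps each attempt read-only
done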


I also see that GPT message...
I would try to read that out with gdisk or sgdisk:

Code:
sgdisk -p /dev/md0

since md0 is reported as active.
If that works, I would try to use that information to copy the data blocks from /dev/md0p? to a new filesystem (one with a valid superblock). Mounting the device/partition is failing, but the data should still be on there, so I'd work around the failing mount instead of fighting it directly.
There is a bit more detail to this, but first I'd like to see the output of sgdisk -p. (One possible approach is sketched below.)
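(One cautious way to do that, sketched: image the partition and experiment on the copy, never on the array itself.)
Code:
dd if=/dev/md0p1 of=md0p1.img bs=4M conv=noerror,sync   # copy the partition out
losetup -f --show md0p1.img                             # attach the image, prints e.g. /dev/loop0
fsck.ext4 /dev/loop0                                    # repair attempts now only touch the copy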

I also didn't find much about this /dev/md127 on the net. Any idea, psycodad?

EDIT: seems like a complicated situation there. Don't give up.
The main thing that kept my motivation up with these errors was chasing that certain feeling you get when one of them is finally resolved.

Off to sleep now.

psycodad
Legendary
Activity: 1604 | Merit: 1564
精神分析的爸
January 15, 2020, 11:43:16 PM
#19

<snip>

Good points/ideas! Will be interesting to see the output of sgdisk, and whether it confirms my current theory.

Quote from: makrospex
I also didn't find much about /dev/md127 on the net. any idea psycodad?

Never seen that md127 before; I would check /etc/mdadm/mdadm.conf, but that's only a wild guess.
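(For what it's worth, md127 is typically the fallback name the kernel uses when an array is auto-assembled without a matching ARRAY line in mdadm.conf. A sketch of pinning the name, assuming Debian-style paths:)
Code:
mdadm --detail --scan   # prints an ARRAY line for each assembled array
# append that line to /etc/mdadm/mdadm.conf, then rebuild the initramfs:
update-initramfs -u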

After revisiting the output below, I noticed that md0p1 has no recognized filesystem, so trying to rebuild the journal with tune2fs is probably of no help. But it does hint at a broken superblock, methinks.

Quote from: BobLawblaw
re: lsblk -f, etc. re: It crashed due to power failure, for the sake of argument. Can't be sure of LVM. I'm a bit of a tard when it comes to Linux.

Code:
root@bitcorn:/# lsblk -f
root@bitcorn:/# blkid
/dev/sda1: UUID="BA8D-865F" TYPE="vfat" PARTLABEL="EFI System Partition" PARTUUID="080f6638-f09f-4518-a785-6212e1b305b8"
/dev/sda2: UUID="42a6267e-28ba-4c93-8554-3dea69f5eb2e" TYPE="ext4" PARTUUID="f25d3ecd-9158-42b4-b502-4853b96bba78"
/dev/sda3: UUID="9bdd83b3-0bb0-474e-a83e-8394a1fb2e8b" TYPE="swap" PARTUUID="75ae0f41-c850-4153-af77-b2085f465a4d"
/dev/sdb: UUID="b7b2edc8-cd74-ad69-e5e4-81961dcc8509" UUID_SUB="8757b0af-bb26-b0e6-f0e1-eb5672344df3" LABEL="bitcorn:0" TYPE="linux_raid_member"
/dev/sdc: UUID="b7b2edc8-cd74-ad69-e5e4-81961dcc8509" UUID_SUB="600cedf1-06f6-f8a1-5c27-528f7588eada" LABEL="bitcorn:0" TYPE="linux_raid_member"
/dev/sdd: UUID="b7b2edc8-cd74-ad69-e5e4-81961dcc8509" UUID_SUB="9c08f657-41d9-1a24-33b9-0f993e7f0d6f" LABEL="bitcorn:0" TYPE="linux_raid_member"
/dev/md0: PTTYPE="gpt"
/dev/md0p1: PARTUUID="11d10ea4-e100-3949-b9d2-c30072ef7648"
/dev/md0p2: UUID="3df11d9c-51f2-48d8-9ab0-49b6dcd47753" TYPE="swap" PARTUUID="53d67f15-7962-ed43-840a-881c288fc86a"
root@bitcorn:/#


It definitely should read TYPE="ext4" on the md0p1 line, but no filesystem type is detected there at all.

Based on this, the best recommendation I can come up with at this point is trying to restore a backup superblock on md0p1.

If dumpe2fs finds no superblock backups, there is the riskier route of using mke2fs in simulation mode: by 'dry'-reformatting your partition, it will come up with the same superblock addresses as the first time it was formatted (provided the same parameters are used).
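(A sketch of that simulation; the printed addresses are only trustworthy if mke2fs runs with the same parameters as the original format:)
Code:
mke2fs -n /dev/md0p1   # -n: dry run, prints backup superblock locations without writing anything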

RL calls, will check back tomorrow.

jojo69
Legendary
Activity: 3150 | Merit: 4309
diamond-handed zealot
January 16, 2020, 12:15:07 AM
#20


Quote
Hurr... fsck on mountpoint... Durr... I'm special.

you and me both; guess who had to learn that you can't mount multiple filesystems on a single mount point...

This is not some pseudoeconomic post-modern Libertarian cult, it's an un-led, crowd-sourced mega startup organized around mutual self-interest where problems, whether of the theoretical or purely practical variety, are treated as temporary and, ultimately, solvable.
Censorship of e-gold was easy. Censorship of Bitcoin will be… entertaining.