BobLawblaw (OP)
Legendary
Offline
Activity: 1861
Merit: 5670
Neighborhood Shenanigans Dispenser
January 15, 2020, 06:06:07 PM Last edit: July 17, 2020, 03:26:26 PM by BobLawblaw |
jojo69
Legendary
Offline
Activity: 3290
Merit: 4534
diamond-handed zealot
January 15, 2020, 06:17:50 PM |
oh man, I'm sorry Bob, I loathe crap like this.
I've got no help for you; I spent a whole day recently editing fstab just to get my legacy volumes mounted. But I'm going to follow along and try to learn something.
I will say this: RAID, particularly software RAID, has been more trouble than good in my personal experience. It's supposed to protect you from physical disk failure, but it introduces whole other layers of failure modes.
This is not some pseudoeconomic post-modern Libertarian cult, it's an un-led, crowd-sourced mega startup organized around mutual self-interest where problems, whether of the theoretical or purely practical variety, are treated as temporary and, ultimately, solvable. Censorship of e-gold was easy. Censorship of Bitcoin will be… entertaining.
makrospex
Sr. Member
Offline
Activity: 728
Merit: 317
nothing to see here
January 15, 2020, 06:30:44 PM Merited by BobLawblaw (20) |
Uh oh... have you tried mdadm --create --assume-clean ... too? Don't be a fool: dd those HDD blocks off to a file on a backup medium first (if you haven't already done so). EDIT: IMO, as long as the disks are physically OK, you should have a chance of lossless recovery. I've had more problems with journaling filesystems than with ext2 or vfat; the only exception was ReiserFS, but since Hans went to jail, I stopped using it.
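The imaging step makrospex recommends could look like this. The `/dev/sdb` and `/backup/sdb.img` names are placeholders; the runnable part below uses a scratch file instead of a real disk so nothing is at risk.

```shell
# Imaging a RAID member before risky mdadm commands (device/paths are examples):
#   dd if=/dev/sdb of=/backup/sdb.img bs=1M conv=noerror,sync status=progress
# conv=noerror,sync keeps going past read errors and pads unreadable blocks.

# Safe demonstration on a scratch file standing in for a member disk:
dd if=/dev/urandom of=/tmp/member.img bs=1M count=4 2>/dev/null
dd if=/tmp/member.img of=/tmp/member.backup bs=1M conv=noerror,sync 2>/dev/null
cmp -s /tmp/member.img /tmp/member.backup && echo "backup verified"
```

With the images safe, any later mdadm experiment can be undone by dd'ing them back.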
psycodad
Legendary
Offline
Activity: 1636
Merit: 1744
精神分析的爸
January 15, 2020, 07:12:42 PM Merited by BobLawblaw (20) |
May I ask what led to this situation? (power loss, disk crash, etc.)
Your array is obviously perfectly fine, don't panic!
Did you by chance have LVM on top of that RAID?
If you can, post your fstab and the output of 'lsblk -f' or 'blkid'.
psycodad
Legendary
Offline
Activity: 1636
Merit: 1744
精神分析的爸
January 15, 2020, 07:32:53 PM |
Thanks. Again, your array is fine according to the /proc/mdstat you posted; no need to recreate it and risk anything. Please also post output from
vapourminer
Legendary
Offline
Activity: 4466
Merit: 3983
what is this "brake pedal" you speak of?
January 15, 2020, 07:51:25 PM |
bob, sorry I'm of no help in this, just wanted to comment that I've dropped my Z1 (3-drive, RAID5-style) arrays and switched to either Z2 (5+ drives, any 2 can fail) for important stuff or straight-up mirrors for the less important. I love mirrors because you can read the remaining drive on most anything.
best of luck, I'm lurking as I hope to learn something here too.
btw, I'm talking about the ZFS file system on my NAS. for desktops I always mirror.
psycodad
Legendary
Offline
Activity: 1636
Merit: 1744
精神分析的爸
January 15, 2020, 08:14:32 PM |
Thanks, so we know there was no LVM on top. Please try fsck again, but on the device, not the mountpoint as you did above. Edit: If the above starts, it will (depending on your hardware) take ages (or 2 minutes longer) to complete; do not interrupt it!
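psycodad's point, sketched on a throwaway file-backed image so no real array is needed (the image path is made up): fsck wants the block device, not the mountpoint.

```shell
# Build a small ext4 image to stand in for /dev/md0 (needs e2fsprogs, no root):
dd if=/dev/zero of=/tmp/demo.ext4 bs=1M count=8 2>/dev/null
mke2fs -q -F -t ext4 /tmp/demo.ext4

# Correct: fsck the device (here, the image file). -n = read-only check, -f = force.
fsck.ext4 -f -n /tmp/demo.ext4

# Wrong (what was tried earlier in the thread): fsck.ext4 /mnt/md0
# -- that names the mountpoint, not the device.
# On the real array the call would be: fsck.ext4 /dev/md0
```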
makrospex
Sr. Member
Offline
Activity: 728
Merit: 317
nothing to see here
January 15, 2020, 08:15:19 PM |
EDIT: forget that, didn't make sense.
psycodad
Legendary
Offline
Activity: 1636
Merit: 1744
精神分析的爸
January 15, 2020, 08:21:57 PM |
So you know the filesystem on md0 is ext4, right? Try giving mount /dev/sdb1 (and sdc1, sdd1) a hint via -t ext4, since it can't read this info from /etc/fstab. Mount READ ONLY if you don't have a backup, and see if you can directly read from the partitions on the physical RAID disk(s). EDIT: Or was it /dev/sdb0 (...)? My daily unix practice aged well. There is no point in trying to mount a linux-raid5 partition directly. It is RAID5, not RAID1 (in which case, as vapourminer mentions, you can mount one part of the mirror on its own).
makrospex
Sr. Member
Offline
Activity: 728
Merit: 317
nothing to see here
January 15, 2020, 08:23:00 PM |
Quote from: psycodad
<snip> There is no point in trying to mount a linux-raid5 partition directly. It is RAID5, not RAID1 (in which case, as vapourminer mentions, you can mount one part of the mirror only).

Came to my mind soon after writing. Edited out, thanks.
psycodad
Legendary
Offline
Activity: 1636
Merit: 1744
精神分析的爸
January 15, 2020, 08:36:08 PM |
@Bob: Does it say anything after that (probably not)? Please try with the superblock backups listed, i.e. fsck.ext4 -b 8193 /dev/md0. Agree with makrospex^: a fs is a fs no matter which block device it sits on (at least from the perspective of the user and of the tools trying to fix a filesystem).
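The backup-superblock numbers that fsck.ext4 -b needs can be listed first with dumpe2fs. A sketch on a scratch image standing in for /dev/md0; a 1 KiB blocksize is forced here so that the first backup lands at 8193, matching the block number used in the thread.

```shell
dd if=/dev/zero of=/tmp/md0_demo.img bs=1M count=16 2>/dev/null
mke2fs -q -F -t ext4 -b 1024 /tmp/md0_demo.img

# List primary and backup superblock locations:
dumpe2fs /tmp/md0_demo.img 2>/dev/null | grep -i 'superblock at'

# With 1 KiB blocks the first backup sits at 8193, so on the real device:
#   fsck.ext4 -b 8193 -B 1024 /dev/md0
```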
psycodad
Legendary
Offline
Activity: 1636
Merit: 1744
精神分析的爸
January 15, 2020, 09:06:01 PM Last edit: January 15, 2020, 09:21:11 PM by psycodad |
Seriously: again, don't panic. A power failure can't fsck up your array that badly; only the operator is capable of that (I know *exactly* what I am talking about here! Lost TBs of data purely through getting panicky and impatient). If you didn't forget to tell us about re-formatting the fs or similar nasties, you are still in a good position. Just annoying problems so far.

Regarding the problem at hand: did the RAID ever rebuild/resync during your tries resp. after the failure? I mean hours of disk activity while cat /proc/mdstat shows minimal progress? If not, you have probably forcefully assembled a damaged RAID via --assume-clean from above.

The next thing I would recommend is actually resyncing that RAID, either by simply rebooting or by stopping md with mdadm --stop /dev/md0; mdadm --assemble --scan should then show it is syncing.

Edit: Try dumpe2fs /dev/md0|grep super as makrospex suggested, to verify that the superblocks suggested by fsck.ext4 are correct (I assumed so, but I could be wrong).
psycodad
Legendary
Offline
Activity: 1636
Merit: 1744
精神分析的爸
January 15, 2020, 09:21:54 PM |
Apologies, didn't check back and just typed from the top of my head, my fault. Edited in above post.

Quote from: psycodad
Regarding the problem at hand: Did the RAID ever rebuild/resync during your tries resp. after the failure? I mean hours of disk activity while cat /proc/mdstat shows minimal progress?
Quote from: BobLawblaw
Took about 12 hours to rebuild/resync when I tried.

Hmm, okay, then my assumption was wrong; the array should be clean.
psycodad
Legendary
Offline
Activity: 1636
Merit: 1744
精神分析的爸
January 15, 2020, 09:51:40 PM |
Ok, checked again through what you posted above and would suggest the following:

swapon -a
mount /dev/md0p1 /mnt

I didn't really question you trying to mount md0, even though your output above says you partitioned md0...
psycodad
Legendary
Offline
Activity: 1636
Merit: 1744
精神分析的爸
January 15, 2020, 09:59:48 PM |
Quote from: BobLawblaw
root@bitcorn:/dev# swapon -a
root@bitcorn:/dev# mount /dev/md0p1 /mnt
mount: /mnt: wrong fs type, bad option, bad superblock on /dev/md0p1, missing codepage or helper program, or other error.
root@bitcorn:/dev# mount /dev/md0p1 /mnt/md0
mount: /mnt/md0: wrong fs type, bad option, bad superblock on /dev/md0p1, missing codepage or helper program, or other error.

Try fsck'ing it first:
psycodad
Legendary
Offline
Activity: 1636
Merit: 1744
精神分析的爸
January 15, 2020, 10:20:49 PM |
Hmmm... that would have been too easy anyway. You could try again to find backup superblocks with dumpe2fs:

dumpe2fs /dev/md0p1|grep -i super
makrospex
Sr. Member
Offline
Activity: 728
Merit: 317
nothing to see here
January 15, 2020, 10:54:16 PM Last edit: January 15, 2020, 11:24:15 PM by makrospex |
Quote from: psycodad
Hmmm... that would have been too easy anyway. You could try again to find backup superblocks with dumpe2fs: dumpe2fs /dev/md0p1|grep -i super

Sorry for the late reply.

One can also poke for potential superblock copies by using fsck -b with the popular backup superblock locations (8193, 16384, 32768, ...) in case you don't know the blocksize of the filesystem.

I also see that GPT message... I would try to read that out with gdisk or sgdisk: sgdisk -p /dev/md0, since md0 is reported as active. If that works, I would try to use the information to copy the data blocks from /md0p? to a new filesystem (with a valid superblock). Mounting the device/partition is failing, but the data should still be on there, so I'd try to work around that to get at the filesystem. There is a bit more detail to this, but first I'd like to see the output of sgdisk -p.

I also didn't find much about this /dev/md127 on the net. Any idea psycodad?

EDIT: seems like a complicated situation there. Don't give up. The main thing that kept up my motivation with these errors was chasing that certain feeling after finally getting one of them resolved. Off to sleep now.
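The blocksize-poking idea can be automated: each common ext blocksize implies a first-backup-superblock location (1 KiB at 8193, 2 KiB at 16384, 4 KiB at 32768), and dumpe2fs can test each candidate read-only. A sketch on a scratch image standing in for /dev/md0p1; the image path and its 1 KiB format are assumptions for the demo.

```shell
dd if=/dev/zero of=/tmp/probe.img bs=1M count=16 2>/dev/null
mke2fs -q -F -t ext4 -b 1024 /tmp/probe.img   # the "unknown" fs we want to probe

# Try each (blocksize, first-backup-superblock) pair. dumpe2fs -h only reads
# the superblock, so this is safe even on a damaged device.
for pair in "1024 8193" "2048 16384" "4096 32768"; do
    set -- $pair
    if dumpe2fs -o blocksize=$1 -o superblock=$2 -h /tmp/probe.img >/dev/null 2>&1; then
        echo "usable superblock: blocksize=$1 at block $2"
    fi
done
```

A blocksize/superblock pair that dumpe2fs accepts is then worth feeding to fsck.ext4 via -B and -b.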
psycodad
Legendary
Offline
Activity: 1636
Merit: 1744
精神分析的爸
January 15, 2020, 11:43:16 PM |
<snip>
Good points/ideas! Will be interesting to see the output of sgdisk and whether it confirms my current theory.

Quote from: makrospex
I also didn't find much about this /dev/md127 on the net. Any idea psycodad?

Never seen that md127 before; I would check /etc/mdadm/mdadm.conf, but that's only a wild guess.

After revisiting the output below, I noticed that md0p1 has no recognized filesystem. So trying to rebuild the journal with tune2fs is probably of no help, but it does hint at a broken superblock, methinks.

Quote from: BobLawblaw
It crashed due to power failure, for the sake of argument. Can't be sure of LVM. I'm a bit of a tard when it comes to Linux.
root@bitcorn:/# lsblk -f
root@bitcorn:/# blkid
/dev/sda1: UUID="BA8D-865F" TYPE="vfat" PARTLABEL="EFI System Partition" PARTUUID="080f6638-f09f-4518-a785-6212e1b305b8"
/dev/sda2: UUID="42a6267e-28ba-4c93-8554-3dea69f5eb2e" TYPE="ext4" PARTUUID="f25d3ecd-9158-42b4-b502-4853b96bba78"
/dev/sda3: UUID="9bdd83b3-0bb0-474e-a83e-8394a1fb2e8b" TYPE="swap" PARTUUID="75ae0f41-c850-4153-af77-b2085f465a4d"
/dev/sdb: UUID="b7b2edc8-cd74-ad69-e5e4-81961dcc8509" UUID_SUB="8757b0af-bb26-b0e6-f0e1-eb5672344df3" LABEL="bitcorn:0" TYPE="linux_raid_member"
/dev/sdc: UUID="b7b2edc8-cd74-ad69-e5e4-81961dcc8509" UUID_SUB="600cedf1-06f6-f8a1-5c27-528f7588eada" LABEL="bitcorn:0" TYPE="linux_raid_member"
/dev/sdd: UUID="b7b2edc8-cd74-ad69-e5e4-81961dcc8509" UUID_SUB="9c08f657-41d9-1a24-33b9-0f993e7f0d6f" LABEL="bitcorn:0" TYPE="linux_raid_member"
/dev/md0: PTTYPE="gpt"
/dev/md0p1: PARTUUID="11d10ea4-e100-3949-b9d2-c30072ef7648"
/dev/md0p2: UUID="3df11d9c-51f2-48d8-9ab0-49b6dcd47753" TYPE="swap" PARTUUID="53d67f15-7962-ed43-840a-881c288fc86a"
root@bitcorn:/#

It should definitely read TYPE="ext4" on the md0p1 line, but no filesystem type is detected at all. Based on this, the best recommendation I can come up with at this point is trying to restore a backup superblock on md0p1. If dumpe2fs finds no superblock backups, there is the risky way of using mke2fs in simulation mode: by basically 'dry'-reformatting your partition, the computer will come up with the same superblock addresses as the first time it formatted it. RL calls, will check back tomorrow.
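The "risky way" relies on mke2fs -n, which only simulates the format and prints where the superblocks would have been written. Demonstrated on a scratch image; the real-world caveats are that you must pass the same options as the original format did, and never, ever forget the -n.

```shell
dd if=/dev/zero of=/tmp/dryrun.img bs=1M count=16 2>/dev/null
mke2fs -q -F -t ext4 -b 1024 /tmp/dryrun.img      # stands in for the original format

# -n = simulate only, nothing is written; it reports the backup superblock blocks:
mke2fs -n -F -t ext4 -b 1024 /tmp/dryrun.img | grep -A1 -i 'superblock backups'

# On the real partition this would be: mke2fs -n /dev/md0p1   (NEVER without -n!)
```

The block numbers it prints can then be tried one by one with fsck.ext4 -b.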
jojo69
Legendary
Offline
Activity: 3290
Merit: 4534
diamond-handed zealot
January 16, 2020, 12:15:07 AM |
Quote from: BobLawblaw
Hurr... fsck on mountpoint... Durr... I'm special.
you and me both, guess who had to learn that you can't mount multiple file systems on a single mount point...