This took a little longer, since I had to back-/cross-read and type the reply in a text editor so as not to lose the overview (because of my impaired working memory, which y'all might know about already). Here we go:
/dev/md0p1 alignment is offset by 1024 bytes.
This means the start of md0p1 is not at the start of a physical sector (a multiple of 4096 bytes).
It's not a cause for failure.
Compare to this:
root@bitcorn:/dev# sgdisk -p /dev/md0
Disk /dev/md0: 15627544576 sectors, 7.3 TiB
Sector size (logical/physical): 512/4096 bytes
Disk identifier (GUID): 00000000-0000-0000-0000-000000000000
Partition table holds up to 128 entries
Main partition table begins at sector 2 and ends at sector 33
First usable sector is 34, last usable sector is 15627544542
Partitions will be aligned on 8-sector boundaries
Total free space is 8141 sectors (4.0 MiB)
Number  Start (sector)    End (sector)  Size      Code  Name
   1            5118     15611104253   7.3 TiB   FD00
   2     15611104256     15627541487   7.8 GiB   8200
The partition should start at a sector that is a multiple of 8, but starts at 34, because the protective MBR, GPT header and partition table occupy sectors 0-33.
So it's 2 sectors (of 512 bytes each) = 1024 bytes "behind" the optimum (or 6 sectors in front of the next boundary). It would have been better to start at sector 40, for example.
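A quick sanity check of that arithmetic (pure shell arithmetic, nothing touches the disk):

echo $(( (34 * 512) % 4096 ))    # byte offset of sector 34 within a 4096-byte physical sector
1024

So a filesystem starting at sector 34 sits exactly 1024 bytes past a physical-sector boundary, which matches the reported misalignment.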
There are unused sectors at the end of the disk (the last usable sector is 33 sectors before the disk end), meaning this is a GPT partition table, which keeps its backup copy there (in contrast to classic MBR).
Strangely, the GUID of the drive is zero (00000000-0000-0000-0000-000000000000), but that might not be important either.
Also strange is the fact that the table lists 5118 as the starting sector of md0p1, instead of 34.
Partition 2 (md0p2) starts at 15611104256, which is a multiple of 8, thus PERFECT.
And md0p1 ends 2 sectors short of an 8-sector boundary: the first sector after it is 15611104254, while md0p2 starts at the boundary 15611104256.
Conclusion: as this error message suggests,
Error: The primary GPT table is corrupt, but the backup appears OK, so that will be used.
the partition table is corrupted. I'll read up on how to handle this after this post.
The superblock backup positions from mke2fs are meant for e2fsck (its -b option), not for mount.
Since the primary partition table seems to be invalid, we can't calculate a valid offset for mount, because we don't know the correct starting sector yet.
The solution to this might be found in the backup partition table, too.
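If you want to look at (or restore from) the backup table, gdisk's recovery menu is the usual tool. A sketch (the backup file name is just an example; gdisk writes nothing unless you confirm with 'w'):

sgdisk -b /root/md0-gpt.bak /dev/md0   # save the current partition data to a file first
gdisk /dev/md0
# in gdisk: r = recovery menu, b = use backup GPT header, c = load backup partition table,
# p = print the result for checking, q = quit WITHOUT writing (only 'w' writes)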
An ext4 superblock is 1024 bytes long (and the primary superblock sits at byte offset 1024 into the partition). Coincidence?
Please try mount -t ext4 -o ro,loop,offset=... /dev/md0 /mnt/md0 with offset values of 34*512 (and 35*512). Note the device is /dev/md0, not md0p1: the offsets count from the start of md0, and the offset option needs a loop mount.
Make sure you are outside /mnt/md0 when mounting. Check the contents of /mnt/md0 after each mounting trial.
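Spelled out, the trials would look like this (read-only, so nothing gets written):

cd /                                                 # make sure we're not inside /mnt/md0
mount -t ext4 -o ro,loop,offset=$((34*512)) /dev/md0 /mnt/md0
ls -la /mnt/md0                                      # anything readable in there?
umount /mnt/md0
mount -t ext4 -o ro,loop,offset=$((35*512)) /dev/md0 /mnt/md0
ls -la /mnt/md0
umount /mnt/md0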
Then issue an
e2fsck -b some_backup_block_numbers_from_mke2fs /dev/md0p1
preferably using the higher backup superblock numbers.
It would be good to know the block size exactly; for a filesystem this size it defaults to 4096.
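If the original mke2fs output is long gone, the backup superblock positions can be listed without writing anything (a sketch, assuming the default 4096-byte block size; -n makes mke2fs simulate only):

mke2fs -n -b 4096 /dev/md0p1         # -n: do NOT create a filesystem, just print "Superblock backups stored on blocks: ..."
e2fsck -b 32768 -B 4096 /dev/md0p1   # then try e.g. the first backup, which sits at block 32768 with 4k blocks

Triple-check the -n is really there before hitting enter; without it, mke2fs would create a fresh filesystem over your data.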
Next shot would be using "testdisk" in non-destructive mode, to find out more details about the partitions of md0 and values like the real start, end and block size of md0p1.
You could even restore it all manually afterwards, guided by the testdisk logs.
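Such a run could look like this (testdisk is menu-driven and only writes when you explicitly confirm; /log keeps a record in testdisk.log -- the menu names below are from memory):

testdisk /log /dev/md0
# roughly: [Proceed] -> partition table type [EFI GPT] -> [Analyse] -> [Quick Search],
# then [Deeper Search] if the quick one comes up empty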
If you want to put up a thread in an expert forum (the community support forum of your Linux distribution, I'd suggest), please link it here, so we can follow along and give further assistance, raise our cups when it's solved, or whatever.
EDIT:
Advice: I was avoiding RAID arrays most of the time, because if you get trouble with them beyond anything a rebuild/resync can handle, you're going to face problems at the higher PITA levels. Either the disks/partitions are not readable after the RAID controller died and you have no backup hardware at hand, or, in the case of software RAID, you can't just mount the RAID sub-partitions and read them easily. I went to single drives (SSD for performance) and frequent backups precisely NOT to rely on RAID whenever possible.
Backup is uncomfortable and takes a lot of storage, but it's also somehow inevitable. Sorry for the pain. I hope you have a key/seed backup for the wallet and that the blockchain sync doesn't take forever if you set up a new node. If you didn't already, DISABLE the f**ing green power saving functions of any hard drive you use: they lead to parking the heads much more often and spinning down drives too soon, which puts stress on the hardware and kills drives sooner (adding to the electronic waste problem, thus anything but "green"). I disabled this on all my "green" WDC 2TB drives in my NAS after I saw one failing within half a year because of it.
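For reference, this is how I'd switch those functions off (a sketch; hdparm covers the generic APM/standby settings, while the WD Greens' aggressive head parking is an "idle3" timer that idle3ctl from the idle3-tools package can disable -- replace sdX with the actual drive):

hdparm -B 255 /dev/sdX   # disable Advanced Power Management (stops aggressive head parking)
hdparm -S 0 /dev/sdX     # disable the standby (spin-down) timeout
idle3ctl -d /dev/sdX     # WD-specific: disable the idle3 head-parking timer (takes effect after a power cycle)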