Bitcoin Forum
Author Topic: ISAWHIM on how bitcoin should work  (Read 445 times)
Hero Member
Offline

Activity: 504
Merit: 500

May 22, 2013, 02:23:05 AM

Just a thought...

Since you guys refuse to simply "only download the portions of the chain related to THIS new wallet"... (or the portions related to an older one, if restored.)
EDIT: Also, don't create a KEY until we tell you to... you create keys for no reason. We may be trying to restore a wallet, and that key is now wasted; it has no use, and could have been someone else's key. If we use it, and THEN choose to encrypt, you make it sound as if any coins in the wallet, on that key, will now be lost... Why? That is bass-ackwards. Tell us to create a password before giving us a key/address, and only IF we tell you we want to MAKE a wallet, not RESTORE or ADD a wallet that we are trying to fix.

Can you then implement some simple standard compression for the chain? Only uncompressing, and "adding to OUR database", the transactions related to us.

Here is the kicker...

The compression of historic block segments... compress them AFTER a two-part process.
1: Split the block data into EVEN and ODD bytes. (0A0e0P020E0r) = [000000]&[AeP2Er]
- That allows the common RLE/zip/gzip/arj/rar etc... to compress better. "0x" is a common occurrence in ALL files, as is "xZ"...
2: After initial compression, apply another compression on the original, with a randomly selected method...
- bit-shifting all data x-bits
- a CRC or SHA or MD5 key (OR, AND, XOR, NOT, NAND...) altering the values in a reversible way, with that key...
- Then try the available compression again. The smaller of the two files remains, and is broadcast back out. The larger file is not saved, and never again becomes propagated.
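The two-part process above could be sketched in Python roughly as follows. This is only an illustration of the idea: zlib stands in for the "RLE/zip/gzip/arj/rar" family, a repeating XOR key stands in for the "randomly selected method" (XOR is one of the listed operations that is actually reversible), and the key value is hypothetical.

```python
import zlib
from itertools import cycle

def split_even_odd(data: bytes) -> tuple[bytes, bytes]:
    """Step 1: split a block's bytes into even-indexed and odd-indexed halves."""
    return data[0::2], data[1::2]

def join_even_odd(even: bytes, odd: bytes) -> bytes:
    """Reverse of the split: re-interleave the two halves."""
    out = bytearray(len(even) + len(odd))
    out[0::2] = even
    out[1::2] = odd
    return bytes(out)

def xor_transform(data: bytes, key: bytes) -> bytes:
    """Step 2: reversibly alter the data with a repeating XOR key;
    applying the same key a second time restores the original."""
    return bytes(b ^ k for b, k in zip(data, cycle(key)))

def compress_block(data: bytes, key: bytes) -> tuple[str, bytes]:
    """Compress plainly and after the XOR transform; keep whichever
    result is smaller, tagged with the method that produced it."""
    plain = zlib.compress(data)
    keyed = zlib.compress(xor_transform(data, key))
    return ("xor", keyed) if len(keyed) < len(plain) else ("plain", plain)

# The interleaved example from step 1: "0A0e0P020E0r".
even, odd = split_even_odd(b"0A0e0P020E0r")
assert even == b"000000" and odd == b"AeP2Er"

# Round trip on a stand-in block segment, with a made-up key.
block = bytes(range(256)) * 8
key = b"dkfs9d7f9s"          # hypothetical per-attempt key
method, payload = compress_block(block, key)
restored = zlib.decompress(payload)
if method == "xor":
    restored = xor_transform(restored, key)
assert restored == block
```

Note that of the operations listed, only XOR and NOT are reversible without keeping extra information; plain OR/AND/NAND discard bits and cannot be undone from the result alone.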

Over time, the entire database would at least be compressed one level, followed by repeated random attempts to "alter the data in a restorable method", yielding smaller and smaller chunks the more it is used/loaded/saved/shared.

This would NOT be done to every chunk, by every user. Simply do one chunk at start-up, one upon receipt of a new block, and optionally (instead of a "fee", or in addition to a fee) one with every "free transaction" sent out. (Not compressing the transaction itself, just attempting to compress an uncompressed or "large" block, chosen randomly from the hard-drive archive of blocks.)

(Note, you would be surprised how many zipped files can be compressed down to 50%, or 10%, after simply doing the even-odd split and a simple bit-shift. Files that normally become larger once zipped, due to having no apparent pattern within to compress... like images and databases.)

The resulting block compressor would record the method (CRC/SHA/MD5), the key used {dkfs9d7f9sSTD8sd5}, and the RATIO to ORIGINAL-UNCOMPRESSED, then the actual data to be manipulated.
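As a rough illustration, such a per-chunk record might carry fields like these (the field names and layout are my own guesses, not something the proposal specifies):

```python
import zlib

def make_record(method: str, key: bytes, original: bytes, payload: bytes) -> dict:
    """Header for a compressed chunk: the transform method used, the key
    it was applied with, the ratio of compressed to original-uncompressed
    size, and finally the data itself."""
    return {
        "method": method,                       # e.g. "xor", "shift", "plain"
        "key": key.hex(),                       # empty if no transform applied
        "ratio": len(payload) / len(original),  # RATIO to ORIGINAL-UNCOMPRESSED
        "data": payload,
    }

# Example: a highly repetitive stand-in block, compressed with no transform.
block = b"\x00" * 1000
rec = make_record("plain", b"", block, zlib.compress(block))
assert zlib.decompress(rec["data"]) == block
assert rec["ratio"] < 1.0
```

Storing the ratio lets a node pick the "large" (poorly compressed) chunks as candidates for another random transform attempt, as described above.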

Reversing the process should be rather fast, and would not need to be done on the whole archive. Just on the parts that pertain to the wallet's first index, if that is known. (Could be gotten from any block-index explorer.) Provided that segments record the "continue-from" key, which is the block-segment separator. (The "result" from having previously processed from the original gen block, up to the last processed block-segment separator.)