Bitcoin Forum
Author Topic: Merkle Tree Proof evaluation - Itsuku algorithm proposal  (Read 669 times)
hidetoshi (OP)
Newbie
Activity: 28
Merit: 0
October 11, 2017, 12:19:04 AM
Last edit: June 08, 2019, 10:47:19 AM by hidetoshi
#1
Good tidings friends,

I'm posting this Proof-of-Work evaluation draft on behalf of my colleague Fabien Coelho. My motive is to inspire further improvements to the design, in the hope that it will materialize into a new standard for egalitarian computing. We plan on implementing a proof of concept for the new hash in a future research project codenamed Moneda.

http://docdro.id/GYEk2YB

The concept originated in 2007 with his work "An (Almost) Constant-Effort Solution-Verification Proof-of-Work Protocol Based on Merkle Trees" (http://www.hashcash.org/papers/merkle-proof.pdf), and was reignited by Biryukov et al., of Equihash and Argon2 fame, with their recent work "Egalitarian computing - Merkle Tree Proof" (https://www.usenix.org/system/files/conference/usenixsecurity16/sec16_paper_biryukov.pdf).

The report started out as an evaluation of the latter and then morphed into the "Itsuku" proposal outlined in section 4.3. The beauty of MTP is that the miner allocates memory but the verifier does not. The trade-off, however, is the size of the proof. Our hope is that further peer review will result in smaller, more efficient proofs.
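As a rough illustration (a toy sketch, not the Itsuku or MTP construction itself), the Merkle authentication path at the heart of these schemes can be written in a few lines; each opened leaf costs log2(n) sibling hashes, which is why proofs that must open many leaves grow large:

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root_and_proof(leaves, index):
    """Build a Merkle tree over `leaves` and return (root, authentication
    path for leaves[index]); the path holds one sibling hash per level."""
    level = [h(x) for x in leaves]
    proof = []
    while len(level) > 1:
        proof.append(level[index ^ 1])  # sibling at this level
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return level[0], proof

def verify(root, leaf, index, proof):
    node = h(leaf)
    for sibling in proof:
        node = h(node + sibling) if index % 2 == 0 else h(sibling + node)
        index //= 2
    return node == root

leaves = [bytes([i]) * 32 for i in range(8)]   # 8 toy memory blocks
root, proof = merkle_root_and_proof(leaves, 5)
print(verify(root, leaves[5], 5, proof))       # True; proof is log2(8) = 3 hashes
```

In MTP-style schemes the prover commits to a large memory array this way and must open dozens of leaves (plus their Argon2-style antecedents), so the total proof runs to kilobytes rather than the few hashes shown here.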

To my knowledge, mrb is the only other bitcointalk member who has contributed to the concept. We're hoping to expand this pool to the rest of the technical community here.
http://blog.zorinaq.com/attacks-on-mtp/

We're open to any and all criticism.

Thanks in advance!
Hydrogen
Legendary
Activity: 2562
Merit: 1441
October 12, 2017, 10:14:46 PM
#2

Quote
The beauty of MTP is that the miner allocates memory but not the verifier. However, the trade-off for that is the size of the proof.

Very cool and very interesting! I've always wondered why it never appears feasible to use a compression algorithm to shrink the size of the proof in this case, or to compress the data inside blocks in the case of BTC.

I think data can often be shrunk via compression to about a third of its original size. From an amateur perspective, being able to fit roughly three times the data in the same space would seem a worthwhile approach, especially as compression isn't particularly resource-intensive, at least not in comparison to mining cryptographic functions. Adding a compression abstraction layer wouldn't appear to bottleneck the process. But I would guess I'm missing some important point here, as crypto engineers never seem to bother with compressing data?
tromp
Legendary
Activity: 974
Merit: 1076
October 13, 2017, 07:29:34 AM
Last edit: October 13, 2017, 07:44:54 AM by tromp
#3

Quote
The beauty of MTP is that the miner allocates memory but not the verifier. However, the trade-off for that is the size of the proof.

Memory use by miners only, and not by verifiers, is common among asymmetric algorithms. Earlier ones, like primecoin, Cuckoo Cycle, and Equihash, do not require the big proofs that MTP requires, but only a few hundred bytes.

Quote
Very cool and very interesting! I've always wondered why it never appears feasible to use a compression algorithm to shrink the size of the proof in this case, or to compress the data inside blocks in the case of BTC.

I think data can often be shrunk via compression to about a third of its original size. From an amateur perspective, being able to fit roughly three times the data in the same space would seem a worthwhile approach, especially as compression isn't particularly resource-intensive, at least not in comparison to mining cryptographic functions. Adding a compression abstraction layer wouldn't appear to bottleneck the process. But I would guess I'm missing some important point here, as crypto engineers never seem to bother with compressing data?

Cryptographic data, such as hash outputs and elliptic curve points, look completely random and thus do not compress at all. These dominate the blockchain data, leaving only a small amount of compressibility in script bytecodes and counters and such.
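tromp's point is easy to check empirically. A quick sketch (not from the thread, using Python's standard zlib): feed a general-purpose compressor a run of SHA-256 outputs and an equally sized run of English text, and only the text shrinks.

```python
import hashlib
import zlib

# 1000 SHA-256 digests = 32,000 bytes that look statistically random.
hashes = b"".join(hashlib.sha256(i.to_bytes(4, "big")).digest() for i in range(1000))
# Roughly the same amount of ordinary text for comparison.
english = b"the quick brown fox jumps over the lazy dog " * 728

print(len(hashes), len(zlib.compress(hashes, 9)))    # no gain: output >= input
print(len(english), len(zlib.compress(english, 9)))  # text shrinks dramatically
```

DEFLATE even falls back to "stored" blocks for the hash data, adding a few bytes of framing overhead instead of saving anything, which is exactly the behavior tromp describes.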
matthewcampbell
Newbie
Activity: 15
Merit: 1
October 13, 2017, 09:42:00 AM
#4

Quote
Good tidings friends,

I'm posting this Proof-of-Work evaluation draft on behalf of my colleague Fabien Coelho. [...]

We're open to any and all criticism.


Looks really interesting. A random side question: how are you doing your LaTeX? Do you write it directly, or do you have some tool that generates the LaTeX format?
Hydrogen
Legendary
Activity: 2562
Merit: 1441
October 14, 2017, 10:16:54 PM
#5

Quote
Cryptographic data, such as hash outputs and elliptic curve points, look completely random and thus do not compress at all. These dominate the blockchain data, leaving only a small amount of compressibility in script bytecodes and counters and such.

Many thanks for the answer. I've searched for answers to why portions of the blockchain might not be compressible, and yours is the best I've come across so far.

I don't want to waste your time, but if it's not too much trouble: is it really the randomness of the data that determines whether hash strings can be compressed, or rather the size of the character set relative to the size of its container? The few examples I've seen of data inside 1 MB blocks suggest it could be compressed. Maybe not at as high a compression ratio as normal text, with its high frequency of vowels and somewhat predictable patterns, but an upper- and lower-case alphanumeric range of characters could be deemed small enough for a useful degree of compression to occur? Unless the elliptic curve points you're referring to are represented as some form of machine code, which tends to have too large a character set for useful compression?

Anyway, like I said, I don't want to waste your time explaining things that are probably very basic. I'm sure I'll figure it out eventually.
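The character-set intuition above is partly right, and easy to check (a sketch not from the thread, using Python's zlib): random bytes rendered as hex text use only 16 symbols in 8-bit slots, so the text form does compress, but only back toward the size of the underlying raw bytes, never below it. The information content is unchanged by the encoding.

```python
import hashlib
import zlib

# 32,000 random-looking bytes (1000 SHA-256 digests).
raw = b"".join(hashlib.sha256(i.to_bytes(4, "big")).digest() for i in range(1000))
# The same data as hex text: 64,000 characters from a 16-symbol alphabet.
hex_text = raw.hex().encode()

print(len(zlib.compress(raw, 9)))       # about 32,000: no gain over the raw bytes
print(len(zlib.compress(hex_text, 9)))  # roughly halves, back toward 32,000
```

So a textual dump of a block would indeed compress, but a node that stores the raw binary data already has everything that compression could recover.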
WUWARAOLUOXIANG
Member
Activity: 302
Merit: 10
January 02, 2018, 01:36:18 PM
#6

Thanks for your extraordinary work. It will be a unique revolution. XDB has a lot of potential because of you!