Bitcoin Forum
Pages: « 1 2 3 4 5 6 7 [8]  All
Author Topic: Rule 30 automaton as hash function  (Read 10636 times)
DeathAndTaxes (Gerald Davis; Donator, Legendary; Activity: 1218, Merit: 1079)
August 13, 2014, 03:19:28 PM  #141

Quote from: Anders
I learned that Bitcoin has a big problem with what is called variance. Transaction times are on average 10 minutes, yet they can vary from a few minutes to an hour. That indicates that the search space for the Bitcoin miners is non-randomly distributed.

That is a false conclusion.  Flip four coins at once and check if they are all heads.  What is the probability of that happening?  1:16 (2^4) right?  Will you get a set of all heads exactly every sixteen attempts?  Does that mean the outcome of a fair coin flip is not randomly distributed?  Of course not.

The expected time between blocks will be 10 minutes (if the current hashrate exactly matches the estimated hashrate), but the actual times between blocks follow an exponential distribution: block discoveries form a Poisson process. This is true even if the hash outputs are perfectly randomly distributed. Intervals from 1/5th to 5x the expected time are not that rare for an exponential distribution. Mathematically prove, to some level of confidence, that Bitcoin block times don't follow this distribution and you might be able to say that the hash outputs are not randomly distributed. All the evidence to date suggests they are.
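A toy simulation illustrates this. The parameters below (one hash trial per second, a 1-in-600 success chance per trial, so a 10-minute expected interval) are made up for illustration and are not real network figures:

```python
import random

# Toy mining model: each "second" is an independent trial that succeeds
# with probability p. The number of trials until success is geometric,
# the discrete analogue of the exponential inter-arrival times of a
# Poisson process.
random.seed(42)
p = 1.0 / 600
intervals = []
for _ in range(2000):
    t = 0
    while True:
        t += 1
        if random.random() < p:
            break
    intervals.append(t)

mean = sum(intervals) / len(intervals)
frac_short = sum(1 for t in intervals if t < 120) / len(intervals)   # under 2 min
frac_long = sum(1 for t in intervals if t > 3000) / len(intervals)   # over 50 min
print(f"mean: {mean:.0f}s  <2min: {frac_short:.1%}  >50min: {frac_long:.1%}")
```

Even with perfectly random trials, roughly one interval in five comes in under a fifth of the target time, and intervals beyond five times the target still occur, which matches the "1/5th to 5x is not that rare" point above.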
Anders (OP) (Full Member; Activity: 126, Merit: 100)
August 13, 2014, 04:09:18 PM  #142

Quote from: DeathAndTaxes
I learned that Bitcoin has a big problem with what is called variance. Transaction times are on average 10 minutes, yet they can vary from a few minutes to an hour. That indicates that the search space for the Bitcoin miners is non-randomly distributed.

That is a false conclusion.  Flip four coins at once and check if they are all heads.  What is the probability of that happening?  1:16 (2^4) right?  Will you get a set of all heads exactly every sixteen attempts?  Does that mean the outcome of a fair coin flip is not randomly distributed?  Of course not.

The expected time between blocks will be 10 minutes (if the current hashrate exactly matches the estimated hashrate), but the actual times between blocks follow an exponential distribution: block discoveries form a Poisson process. This is true even if the hash outputs are perfectly randomly distributed. Intervals from 1/5th to 5x the expected time are not that rare for an exponential distribution. Mathematically prove, to some level of confidence, that Bitcoin block times don't follow this distribution and you might be able to say that the hash outputs are not randomly distributed. All the evidence to date suggests they are.

But with four coins, isn't the expected number of attempts to get all heads 16? Oh, wait, I see now: there is no upper limit on how many attempts may be needed. Scary! But of course in practice the probability of very long intervals is very small.

Yes, it seems to be a classic Poisson process:

"In probability theory and statistics, the Poisson distribution (French pronunciation [pwasɔ̃]; in English usually /ˈpwɑːsɒn/), named after French mathematician Siméon Denis Poisson, is a discrete probability distribution that expresses the probability of a given number of events occurring in a fixed interval of time and/or space if these events occur with a known average rate and independently of the time since the last event.[1]" -- http://en.wikipedia.org/wiki/Poisson_distribution
Anders (OP) (Full Member; Activity: 126, Merit: 100)
August 13, 2014, 04:16:12 PM  #143

For fun I did a boolean algebra calculation for Rule 30:

abc
000 - 0
001 - 1
010 - 1
011 - 1
100 - 1
101 - 0
110 - 0
111 - 0

a'b'c + a'bc' + a'bc + ab'c' = a'(b'c + bc' + bc) + ab'c' =
a'(b'c + b(c' + c)) + ab'c' = a'(b'c + b) + ab'c' =
a'(c + b) + ab'c' = a'(c + b) + a(c + b)' =
a ⊕ (b + c)

Or, with a = a_(i-1), b = a_i, c = a_(i+1):

a_(i-1) XOR (a_i OR a_(i+1))

The same as in: Cryptography with Cellular Automata -- http://www.stephenwolfram.com/publications/academic/cryptography-cellular-automata.pdf
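The simplification can be verified exhaustively. A small sketch, using Wolfram's convention that bit (4a + 2b + c) of the rule number 30 gives the output for neighbourhood (a, b, c), checks all eight cases:

```python
# Rule 30's truth table is encoded in the rule number 30 = 00011110b:
# the output for neighbourhood (a, b, c) is bit (4a + 2b + c).
for a in (0, 1):
    for b in (0, 1):
        for c in (0, 1):
            table_bit = (30 >> (4 * a + 2 * b + c)) & 1
            assert table_bit == a ^ (b | c)   # a XOR (b OR c)
print("a XOR (b OR c) reproduces Rule 30 on all 8 neighbourhoods")
```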
Peter R (Legendary; Activity: 1162, Merit: 1007)
August 13, 2014, 08:46:03 PM  #144

Quote from: Anders
I learned that Bitcoin has a big problem with what is called variance. Transaction times are on average 10 minutes, yet they can vary from a few minutes to an hour.

It's not a problem.  It's a requirement. 

The variance in the block times is a mathematical consequence of the fact that the hash of any blockheader/nonce combo is equally likely to meet the difficulty target.  This property is critical for decentralized mining--the probability that a miner solves a block is linearly proportional to the amount of work that miner did.  If Miner A has 10 times more hashpower than Miner B, he should solve exactly 10 times more blocks on average.  Any scheme that reduces block-time variance will not have this property, instead favouring big miners disproportionately to small miners.
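This proportionality is easy to check in a toy lottery. The parameters here are invented for illustration (they do not model the real difficulty target), but the memoryless structure is the point:

```python
import random

# Toy lottery (hypothetical parameters): each round, Miner A makes 10
# independent trials and Miner B makes 1; each trial "solves" a block
# with probability p. Because every trial is an independent memoryless
# draw, wins should split roughly 10:1, in proportion to work done.
random.seed(1)
p = 0.01
wins = {"A": 0, "B": 0}
while wins["A"] + wins["B"] < 5000:
    a = any(random.random() < p for _ in range(10))
    b = random.random() < p
    if a and b:                       # rare tie: both solve in one round
        wins[random.choice("AB")] += 1
    elif a:
        wins["A"] += 1
    elif b:
        wins["B"] += 1

print(round(wins["A"] / wins["B"], 1))   # close to 10
```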

Run Bitcoin Unlimited (www.bitcoinunlimited.info)
Anders (OP) (Full Member; Activity: 126, Merit: 100)
August 13, 2014, 09:05:35 PM  #145

Quote from: Peter R
I learned that Bitcoin has a big problem with what is called variance. Transaction times are on average 10 minutes, yet they can vary from a few minutes to an hour.

It's not a problem.  It's a requirement.

The variance in the block times is a mathematical consequence of the fact that the hash of any blockheader/nonce combo is equally likely to meet the difficulty target.  This property is critical for decentralized mining--the probability that a miner solves a block is linearly proportional to the amount of work that miner did.  If Miner A has 10 times more hashpower than Miner B, he should solve exactly 10 times more blocks on average.  Any scheme that reduces block-time variance will not have this property, instead favouring big miners disproportionately to small miners.

Not to go too much off topic, but after 10 minutes the difficulty could be reduced in real-time. Then the long transaction times would be reduced. I don't know if that would be possible to implement in practice though. Cheesy
leotheminer (Newbie; Activity: 56, Merit: 0)
August 22, 2014, 02:11:18 PM  #146

I've read this whole thread with interest and even dug out my copy of A New Kind of Science for it. Anders, have you learned anything else in the last few weeks that would make you think Rule 30 would be a good or not good PoW function?
Anders (OP) (Full Member; Activity: 126, Merit: 100)
August 24, 2014, 05:06:00 PM  #147

Quote from: leotheminer
I've read this whole thread with interest and even dug out my copy of A New Kind of Science for it. Anders, have you learned anything else in the last few weeks that would make you think Rule 30 would be a good or not good PoW function?

I was thinking about whether Rule 30 could make ASIC implementations difficult, but I guess ASICs could easily do the same thing as CPUs/GPUs. One idea is to make the algorithm require large amounts of memory, but perhaps even that could be done with ASICs.
Anders (OP) (Full Member; Activity: 126, Merit: 100)
September 16, 2014, 05:50:46 AM (last edit: September 16, 2014, 07:29:24 PM by Anders)  #148

Oh, now I understand why people here have talked about using the Rule 30 cellular automaton for Bitcoin proof of work. I read this paper: ASICs and Decentralization FAQ -- https://download.wpsoftware.net/bitcoin/asic-faq.pdf

Here is my implementation of Rule 30 as a hash function, called R30: https://github.com/AndersLindman/R30-hash-function

R30 matches the desired requirements for a proof-of-work algorithm, such as:

"In order that all actors can reach consensus, the proof-of-work must require enough time that previously proven transaction history can propagate and be verified before the history is extended." -- R30 can, just like SHA-256, be used with a target that can be adjusted to compensate for increased computing power.

"... the proof-of-work must be completely parallelizable and require no state. That is, one actor with 2N hashing power should have the same return as two actors each with N hashing power." -- R30 is completely parallelizable and has no state to preserve.

"... the computation of proof-of-work must be progress free, that is, the proof-of-work calculation at time T should not depend on any part of a calculation at time T' < T." -- Each hash value in R30 is calculated independently from each other.

"... poisson process ... the probability of a proof-of-work being calculated in a time interval [t0,t1] is proportional to (t1 - t0) but independent of t0 and t1." -- Assuming that R30 is a close approximation of a random oracle, block discovery will follow a Poisson process.

"The proof-of-work must be optimization free; that is, there should not be any algorithmic speedups which would give an advantage over the standard algorithm." -- R30 directly uses Rule 30 which is a minimal one-dimensional cellular automaton that is considered to be computationally irreducible.

"The proof-of-work must be approximation free; that is, there should not be a defective variant of the proof-of-work which achieves a speedup in excess of its defect rate." -- R30 directly calculates a certain number of generations of Rule 30 without any possibility for approximations.

"... ASIC’s are still inevitable; all ASIC resistance does is increase the startup capital required and therefore increase centralization of manufacturing." -- R30 is a simple algorithm with low memory requirements that is easy to implement in ASICs.

"... increasing the complexity of proof generation often means also increasing the complexity of proof validation, often disproportionately slow." -- Rule 30 is a one-dimensional binary cellular automaton, which makes for a very simple algorithm and fast proof validation.

"Memory hardness has the effect of increasing ASIC board footprint, weakening the heat-dissipation decentralization provided by the thermodynamic limit." -- R30 has low memory requirements: the cellular automaton is calculated in a one-dimensional array, and together with buffers the total state is less than a kilobyte.
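As a sketch of that last point (my own illustrative code, not the implementation from the linked repository), one Rule 30 update step over a fixed circular row shows how small the whole state is:

```python
# Sketch, not the actual R30 implementation: one Rule 30 step over a
# circular 256-cell row. The entire evolving state is this one row,
# which packs into 32 bytes, far below the kilobyte bound stated above.
def rule30_step(row):
    n = len(row)
    return [row[(i - 1) % n] ^ (row[i] | row[(i + 1) % n]) for i in range(n)]

row = [0] * 256
row[128] = 1               # seed a single live cell
for _ in range(100):
    row = rule30_step(row)
print(sum(row), "live cells after 100 steps")
```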
jtimon (Legendary; Activity: 1372, Merit: 1002)
September 16, 2014, 09:59:22 PM  #149

Quote from: Anders
I was thinking about whether Rule 30 could make ASIC implementations difficult, but I guess ASICs could easily do the same thing as CPUs/GPUs. One idea is to make the algorithm require large amounts of memory, but perhaps even that could be done with ASICs.

Of course, anything can be done in an ASIC; ASICs are just specialized hardware. Anyone talking about "anti-ASIC" is either a fraudster or doesn't know what he's talking about (cough, cough, intheoreum).
Many people still think that Artforz already had a GPGPU implementation when he launched "GPU resistant" scrypt, and that he used it to mine litecoins while everybody else was basically burning money (yet litecoin was supposed to offer "fair distribution").

That's why I liked this proposal: you weren't talking about that kind of stupid thing (and, as said, I also love cellular automata).
If some properties could be proven mathematically for R30 that cannot be proven for sha256d, it would make it a great candidate for a potential PoW hardfork, in case mining companies are stupid enough to consistently control, say, more than 30% of the hashrate, thus forcing the community to deploy a hardfork that makes their huge investments worthless.
Otherwise I think some slight modification of sha256d, or some other well-known and well-tested hash function, would be preferred for that case.

But yeah, R30 seems to have the same desirable properties as sha256d; it's just that it hasn't been reviewed by many cryptanalysts, and it would be kind of scary to deploy it on Bitcoin.

2 different forms of free-money: Freicoin (free of basic interest because it's perishable), Mutual credit (no interest because it's abundant)
Anders (OP) (Full Member; Activity: 126, Merit: 100)
September 17, 2014, 04:25:05 AM  #150

Quote from: jtimon
... it would make it a great candidate for a potential PoW hardfork, in case mining companies are stupid enough to consistently control, say, more than 30% of the hashrate, thus forcing the community to deploy a hardfork that makes their huge investments worthless.

The same problem will remain with R30. A hardfork like that would only buy some time. Soon the same situation can emerge for R30 as for SHA-256. I was thinking of R30 as a proof of work option in case a switch is needed. Just one candidate among many.

Or do you mean that SHA-256 has some particular property that makes it prone to centralization of mining power?
Anders (OP) (Full Member; Activity: 126, Merit: 100)
September 17, 2014, 05:46:20 AM  #151

Can R30 be calculated fast with ASICs? Yes, very fast.

"A High Performance ASIC for Cellular Automata (CA) Applications

CA are useful tools in modeling and simulation. However, the more complex a CA is, the longer it takes to run in typical environments. A dedicated CA machine solves this problem by allowing all cells to be computed in parallel. In this paper we present simple yet useful hardware that can perform the required computations in constant time complexity, O(1)." -- http://ieeexplore.ieee.org/xpl/articleDetails.jsp?tp=&arnumber=4273215&url=http%3A%2F%2Fieeexplore.ieee.org%2Fxpls%2Fabs_all.jsp%3Farnumber%3D4273215

Cheesy
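The constant-time idea shows up even in software. Assuming the row is packed into a single 64-bit integer (a toy sketch of mine, not code from the cited paper), one full Rule 30 step is just a few shifts and bitwise operations applied to every cell at once, which is exactly what dedicated hardware does across arbitrarily wide rows:

```python
# Sketch: a whole Rule 30 step on a 64-cell circular row packed into one
# integer. Bit i holds cell i; its left neighbour is bit i+1. All cells
# update simultaneously, mirroring an ASIC computing one row per clock.
WIDTH = 64
MASK = (1 << WIDTH) - 1

def rule30_word(row):
    left = ((row >> 1) | (row << (WIDTH - 1))) & MASK    # left neighbours
    right = ((row << 1) | (row >> (WIDTH - 1))) & MASK   # right neighbours
    return left ^ (row | right)                          # a XOR (b OR c)

out = rule30_word(1 << 32)
print(bin(out))   # three adjacent live cells around bit 32
```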
jtimon (Legendary; Activity: 1372, Merit: 1002)
September 17, 2014, 07:02:10 PM  #152

Quote from: Anders
The same problem will remain with R30. A hardfork like that would only buy some time. Soon the same situation can emerge for R30 as for SHA-256.

Well, the hope is that after losing millions, or seeing another company lose millions, these people will realize that holding more than 30% of the hashing hardware is stupid, and they will stop doing it. I think they do it because they believe such a hardfork will never happen.

Quote from: Anders
I was thinking of R30 as a proof of work option in case a switch is needed. Just one candidate among many.

Sure, agreed.

Quote from: Anders
Or do you mean that SHA-256 has some particular property that makes it prone to centralization of mining power?

No; PoWs that are easy to make ASICs for, like sha256d and R30, are GOOD for preventing manufacturing centralization, because the barrier to entry is lower. Memory-hard functions will have fewer ASIC manufacturers.

2 different forms of free-money: Freicoin (free of basic interest because it's perishable), Mutual credit (no interest because it's abundant)
Anders (OP) (Full Member; Activity: 126, Merit: 100)
September 17, 2014, 07:37:09 PM  #153

Quote from: jtimon
The same problem will remain with R30. A hardfork like that would only buy some time. Soon the same situation can emerge for R30 as for SHA-256.

Well, the hope is that after losing millions, or seeing another company lose millions, these people will realize that holding more than 30% of the hashing hardware is stupid, and they will stop doing it. I think they do it because they believe such a hardfork will never happen.

I was thinking of R30 as a proof of work option in case a switch is needed. Just one candidate among many.

Sure, agreed.

Or do you mean that SHA-256 has some particular property that makes it prone to centralization of mining power?

No; PoWs that are easy to make ASICs for, like sha256d and R30, are GOOD for preventing manufacturing centralization, because the barrier to entry is lower. Memory-hard functions will have fewer ASIC manufacturers.

OK, the switch from SHA-256 to another PoW algorithm would teach the big mining pools a lesson and prevent future mining operations from becoming too dominant. Could be, although it's no guarantee. And yes, good point that ASICs with lots of memory will lead to fewer manufacturers.
vinda (Newbie; Activity: 32, Merit: 0)
September 18, 2014, 08:19:50 AM  #154

Quote from: Anders
"Rule 30 is a one-dimensional binary cellular automaton rule introduced by Stephen Wolfram in 1983.[2] Using Wolfram's classification scheme, Rule 30 is a Class III rule, displaying aperiodic, chaotic behaviour." -- http://en.wikipedia.org/wiki/Rule_30

I read somewhere that when taking every other center bit of the Rule 30 cellular automaton, it's very difficult to determine from what initial condition the resulting bit string was generated. That can be used as a cryptographic hash function.

Indeed I found that this has already been done, such as:

A New Cryptographic Hash Function Based on Cellular Automata Rules 30, 134 and Omega-Flip Network -- http://www.ipcsit.com/vol27/32-ICICN2012-N10006.pdf

A new cryptographic hash function based on the Cellular Automaton Rule 30 -- http://www.zimuel.it/talks/Nks2006.pdf

I haven't read much about it yet, but my idea is to start with the bits of the message as the initial condition for the Rule 30 cellular automaton, and then run the automaton a fixed number of steps so that the cells have highly random values. After that, every second bit is taken from the middle column of the automaton over 2n iterations, where n is the number of bits of the hash (digest) value.

Interesting idea; looking forward to its completion.
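The scheme described in the quoted post could be sketched like this. This is a hypothetical toy (the function name, row width, and warm-up count are illustrative choices of this sketch, not the actual R30 code from the repository linked earlier):

```python
# Hypothetical sketch of the quoted scheme: seed a circular Rule 30 row
# with the message bits, mix for a fixed warm-up, then take every second
# centre-column bit over 2n further steps to form an n-bit digest.
def r30_hash(message: bytes, n: int = 256, warmup: int = 256) -> int:
    width = max(2 * n, 8 * len(message) + 1)
    row = [0] * width
    for i in range(8 * len(message)):          # message bits as initial state
        row[i] = (message[i // 8] >> (7 - i % 8)) & 1
    center = width // 2

    def step(r):
        w = len(r)
        return [r[(j - 1) % w] ^ (r[j] | r[(j + 1) % w]) for j in range(w)]

    for _ in range(warmup):                    # mix before sampling
        row = step(row)

    digest = 0
    for k in range(2 * n):                     # 2n steps, sample every other
        row = step(row)
        if k % 2 == 0:
            digest = (digest << 1) | row[center]
    return digest
```

Flipping one message bit scrambles the sampled centre column; whether this kind of construction has real cryptographic strength is exactly the open question of this thread.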
Anders (OP) (Full Member; Activity: 126, Merit: 100)
September 18, 2014, 03:04:56 PM  #155

Will PoW with R30 consume a lot of electricity? Yes, about the same as for SHA-256 I'm afraid. But it may actually be a justified use of energy. Even if a Proof of Stake (PoS) system could be safe enough and consume much less energy, the idea of PoS makes me think of concentration of power in the hands of a small number of people. That seems like a step backwards in human history to me. We should aim at decentralization, not only in terms of technology but also in terms of preventing power being concentrated into a small group of elite people.