Bitcoin Forum
Author Topic: [ANSWERED] Why is bitcoin proof of work parallelizable ?  (Read 4584 times)
Forp (OP)
Full Member
***
Offline Offline

Activity: 195
Merit: 100


View Profile
October 05, 2011, 10:56:00 AM
 #21

How would you prevent the fastest miner from winning every block?

This is a very important question and trying to answer this I am understanding the mechanism I want to design much better. Thank you for asking it!

The PoW system currently used in Bitcoin has the same problem. It solves this problem by offering an extremely large search space and a situation (hashing) where nothing efficient or systematic can be said about how to find a good solution. Thus, the solution method is random parallelization of a brute-force search. This solution method introduces a random element into the situation (which was not there from the beginning: the PoW task is not random).

So, parallelization introduces a random element here. If we now prevent parallelization completely and produce a PoW system which makes the solver work through a sequence of deterministic steps, the fastest single-core processing unit will win every block.

Thus, we must reintroduce a random element into the PoW. Possibly, there are two construction elements for building a PoW: two tasks could be chained one after the other (leading to non-parallelizable work) or combined (leading to parallelizable work). Currently Bitcoin uses only ONE of these construction elements. Using ONLY the other one is a clear fail (as FreeTrade just pointed out). Using BOTH mechanisms could lead to a new, different PoW.

The question is: Would there be advantages when using a different PoW?

In addition to the ones outlined in my above posts, I see one more: Currently the time for solving a PoW is distributed according to a Poisson distribution (Satoshi describes the consequences of this in his paper). We have a parameter (difficulty) where we can tune the mean of this distribution, but we cannot independently tune the variance of the distribution (with Poisson it will always be equal to the mean). With a different PoW system we will be able to obtain different distribution shapes (possibly with a smaller variance than Poisson). This could make the entire system more stable. Certainly it will impact the Bitcoin convergence behaviour. For the end user the impact might be a higher trust in a block with smaller waiting times.
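A minimal simulation sketch of this point (the per-hash success probability p and the hash rate below are made-up stand-ins for difficulty and miner speed): difficulty tunes the mean waiting time, but the spread stays tied to it - as noted further down in the thread, the waiting time is exponential, so its variance is the square of the mean.

Code:
# Sketch with assumed parameters: each hash attempt succeeds independently
# with probability p, so the waiting time for a block is roughly exponential.
import random

def time_to_block(hashes_per_second, p_success):
    attempts = 0
    while True:
        attempts += 1
        if random.random() < p_success:
            return attempts / hashes_per_second  # seconds until a block is found

p = 1e-3       # assumed per-hash success probability (set by difficulty)
rate = 10.0    # assumed hash rate
samples = [time_to_block(rate, p) for _ in range(2000)]
mean = sum(samples) / len(samples)
var = sum((t - mean) ** 2 for t in samples) / len(samples)
print(mean, var, mean ** 2)  # var comes out close to mean**2 - they cannot be tuned apart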
Forp (OP)
Full Member
***
Offline Offline

Activity: 195
Merit: 100


View Profile
October 05, 2011, 11:01:20 AM
 #22

A non-parallelizable proof of work scheme has the consequence that nobody can become stronger than a, say, 4.5 GHz overclocked single core pentium. This is what we want.

And bitcoin gets taken over by a single botnet.

To my current understanding this is just the other way round.

In Bitcoin, a single botnet can obtain so much hashing power as to take over the system, since it can parallelize the PoW.

In a completely non-parallelizable PoW (as FreeTrade pointed out recently and I just commented on), the fastest single processor takes over the system - but we have a perfect protection against a botnet take-over (because parallelization does not help).

In the concept I am thinking of right now, we should be able to combine the advantages of both worlds, depending on how we build the PoW (described in a bit more detail in my recent reply to FreeTrade).
Meni Rosenfeld
Donator
Legendary
*
Offline Offline

Activity: 2058
Merit: 1054



View Profile WWW
October 05, 2011, 11:10:11 AM
Last edit: October 05, 2011, 11:20:53 AM by Meni Rosenfeld
 #23

I think you're confused about how the so-called "Bitcoin lottery" works. You seem to think that if I have some system and you have a parallel system with x100 the power, then you will find all the blocks and I will find none, because you'll always beat me to the punch. But no, these are independent Poisson processes (tied only via occasional difficulty adjustments) with different rates, meaning that you will simply find 100 times the blocks I will. So over a period where 1010 blocks were found between us, about 1000 will be yours and 10 will be mine.

In other words, it scales linearly - the amount you get out is exactly proportional to what you put in.
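A minimal sketch of this proportionality, assuming the two miners are modelled as independent Poisson processes with rates 1 and 100 (so each block goes to the slow miner with probability 1/101):

Code:
import random

def winner(rate_a, rate_b):
    # Each block goes to A with probability rate_a / (rate_a + rate_b).
    return 'A' if random.random() < rate_a / (rate_a + rate_b) else 'B'

wins = {'A': 0, 'B': 0}
for _ in range(1010):
    wins[winner(1.0, 100.0)] += 1
print(wins)  # roughly {'A': 10, 'B': 1000} - proportional, not winner-take-all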

If that's all you're after, mission is already accomplished.

But if you think your "non-parallelizable PoW" system should behave differently, let's say that in this system a person with a computer finds one block per month. Then four people with a computer each should find a total of 4 blocks per month, right? So a person with 4 computers also finds 4 blocks per month, because the system can't know who the computers belong to (and if it can, then it's not at all about a different computational problem, but about using non-computational cues in distributing blocks). So a person with a special 4-CPU system also finds 4 blocks, as does a person with a quad-core CPU.


And, once more - pools are not a security threat if implemented correctly. There's no reason the pooling mediator also has to generate the work. And, there are already peer-to-peer pools such as p2pool.


Edit: Parallelism means that an at-home miner can plug in his computer and contribute to security/receive rewards exactly in proportion to what he put in. Non-parallelism means his effect will depend in complicated ways on what others are doing and usually leave the poor person at a significant disadvantage (since others are using faster computers), which is the opposite of what you want.

In addition to the ones outlined in my above posts, I see one more: Currently the time for solving a PoW is distributed according to a Poisson distribution (Satoshi describes the consequences of this in his paper). We have a parameter (difficulty) where we can tune the mean of this distribution, but we cannot independently tune the variance of the distribution (with Poisson it will always be equal to the mean). With a different PoW system we will be able to obtain different distribution shapes (possibly with a smaller variance than Poisson). This could make the entire system more stable. Certainly it will impact the Bitcoin convergence behaviour. For the end user the impact might be a higher trust in a block with smaller waiting times.
Block finding follows a Poisson process, which means that the time to find a block follows the exponential distribution (where the variance is the square of the mean). The variance is high, but that's an inevitable consequence of the fair linearly scaling process.

If it pleases you, the variance of block finding times will probably be less in the transaction fees era.

1EofoZNBhWQ3kxfKnvWkhtMns4AivZArhr   |   Who am I?   |   bitcoin-otc WoT
Bitcoil - Exchange bitcoins for ILS (thread)   |   Israel Bitcoin community homepage (thread)
Analysis of Bitcoin Pooled Mining Reward Systems (thread, summary)  |   PureMining - Infinite-term, deterministic mining bond
Forp (OP)
Full Member
***
Offline Offline

Activity: 195
Merit: 100


View Profile
October 05, 2011, 11:11:12 AM
 #24

Another tiny problem, all nonparallelizable puzzle schemes proposed so far require the puzzle creator to keep a secret from the puzzle solver. How exactly do you do that in a decentralized system?

There are standard techniques to solve the problem you pose in a decentralized system. The buzzwords here are secret splitting (the puzzle is not created by a single person but by many persons, none of whom knows the entire secret); you might want to Google "coin flipping over the phone" or "how to play any mental game".

I do not know yet if these standard techniques are practically feasible in the Bitcoin setting. They might. They might not. That's the thrill of research. :-)

I am not sure whether there are non-parallelizable puzzle schemes where this requirement can be relaxed. Thus, there may be another way out.
Forp (OP)
Full Member
***
Offline Offline

Activity: 195
Merit: 100


View Profile
October 05, 2011, 11:53:46 AM
 #25

Thanx for your interesting reply.

You seem to think that if ...

No. I do not think that.

these are independent Poisson processes (tied only via occasional difficulty adjustments) with different rates, meaning that you will simply find 100 times the blocks I will. So over a period where 1010 blocks were found between us, about 1000 will be yours and 10 will be mine.

I completely agree.

Now suppose it is you and me and some 40 other guys with the same hash performance as you have in your example. Suppose I want to claim a 100 BTC bounty for every block instead of the standard 50 BTC. Chances are next to 100% that I will manage. Since, on the average, I am faster than you (and all the other guys combined), I will dominate the longest chain in the long run.

If that's all you're after, mission is already accomplished.

No. It is not my mission.

let's say that in this system a person with a computer finds one block per month. Then four people with a computer each should find a total of 4 blocks per month, right?

Why?

The perspective I am looking at is not the single block but the development of the block chain.

As soon as one of the four people finds a block, this person broadcasts this block and the puzzles the other three had been working on become obsolete (at least that's my understanding of what the reference implementation does). Only a cheater would be interested in continuing to work on "his" version of the block; however, having lost the block in question, chances are getting higher that he will not manage to push "his" version of the next block.

Four people with a computer would rather find a total of 4 blocks in FOUR months - and these blocks would be the four blocks chained next to each other, i.e. a block chain of length 4.

And, once more - pools are not a security threat ...

How do you prevent a pool from pooling more than 50% of the hashing power and then imposing its own understanding of Bitcoin upon the remaining nodes?

Edit: Parallelism means that an at-home miner can plug in his computer and contribute to security/receive rewards exactly in proportion to what he put in. Non-parallelism means his effect will depend in complicated ways on what others are doing and usually leave the poor person at a significant disadvantage (since others are using faster computers), which is the opposite of what you want.

I agree with the interpretation of the parallel PoW situation. I disagree with the interpretation of the non-parallelism situation - there is not yet a final proposal for a non-parallelizable PoW, so we do not know yet if this is a necessary consequence. However, I am grateful that you are pointing out this argument, since it is a possible problem. I will take this into consideration in my future work on this - it is a helpful objection.

Block finding follows a Poisson process, which means that the time to find a block follows the exponential distribution (where the variance is the square of the mean). The variance is high, but that's an inevitable consequence of the fair linearly scaling process.

Again you are raising an important aspect. The task, then, is to see whether two goals can be balanced: linear scaling and small variance.

I agree that the Poisson process is a very natural solution here and prominently unique due to a number of its characteristic features, such as independence, memorylessness and statelessness, etc. A non-parallelizable PoW will certainly lose the stateless property. If we drop this part, how will the linear scaling (effort to expected gain) and the variance change? We will not have all properties of Poisson, but we might keep most of the others. The question sounds quite interesting to me.
DeathAndTaxes
Donator
Legendary
*
Offline Offline

Activity: 1218
Merit: 1079


Gerald Davis


View Profile
October 05, 2011, 12:29:20 PM
 #26

The original 1996 paper by Rivest, Shamir and Wagner (see above) provides some nice real-world examples here.

1 woman needs 9 months to give birth to a baby. So 2 women will have twice the power and will need 4.5 months to produce a baby.

A bus full of passengers stands in the desert. They ran out of gasoline. The driver discovers that the next gas station is 50 miles away. This is no problem, since there are 50 passengers on the bus. They will just split the task and every passenger will walk his share of 1 mile.

The examples show that there are tasks where you do not get twice the output by putting in twice the resources. In my above posting there are references to mathematical examples in the literature.

Come on, man, you are ignoring the entire concept of a reward.

A pool doesn't "get more"; it isn't trying to get 2x. It still gets x. It simply gets x more consistently.

A better example would be a (real-world) lottery pool. In a lottery (let's ignore the lesser prizes) you either win nothing or you win a HUGE prize. However, the odds of you winning each draw are so low that even if you played every day for the rest of your life you may never win. Rather similar to the bitcoin lottery, right?

So you and 19 friends get a great idea. Instead of playing individually, you pool your 20 tickets, and if any ticket wins you SPLIT THE REWARD 20 ways.

Has the lottery pool gained 2x?  No.  Do bitcoin pools gain 2x?  No.  In the long run (say 100 years of solid 24/7 mining) a solo miner and a pool miner (assuming same hardware, downtime, and fees) will earn the same amount.  They both earn X.  The only advantage of a bitcoin pool is reduced volatility. You reach the "long run" (where expected value and actual value converge) much quicker.
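A minimal sketch of that point, with assumed per-draw odds and prize (P_WIN and PRIZE are made-up numbers): the pooled player and the solo player have the same expected winnings; the pooled figure is just far less volatile from run to run.

Code:
import random

P_WIN = 0.001   # assumed per-ticket win probability per draw
PRIZE = 50.0    # assumed prize per winning ticket
DRAWS = 10000

def solo(draws):
    # One ticket per draw, keep whatever it wins.
    return sum(PRIZE for _ in range(draws) if random.random() < P_WIN)

def pooled(draws, members=20):
    # Every member plays; all winnings are split 'members' ways.
    total = sum(PRIZE for _ in range(draws * members) if random.random() < P_WIN)
    return total / members

print(solo(DRAWS), pooled(DRAWS))  # same expectation (about 500), pooled varies much less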

Any problem, no matter how non-parallelizable, can be pooled. Each "miner" would work completely independently, and if/when he "wins" he shares the reward with the rest of the pool. That is unavoidable. If you think the biggest risk to bitcoin is pools, then your proposal does nothing about that (and creates a large number of new problems).
Meni Rosenfeld
Donator
Legendary
*
Offline Offline

Activity: 2058
Merit: 1054



View Profile WWW
October 05, 2011, 01:39:07 PM
 #27

Now suppose it is you and me and some 40 other guys with the same hash performance as you have in your example. Suppose I want to claim a 100 BTC bounty for every block instead of the standard 50 BTC. Chances are next to 100% that I will manage. Since, on the average, I am faster than you (and all the other guys combined), I will dominate the longest chain in the long run.
Ok, you're definitely confused about the capabilities of someone with >50% of the hashing power. He cannot do things like put a 100BTC generation transaction per block. Such blocks are invalid and will be rejected by the network (particularly the nodes that actually accept bitcoins for goods and services). In other words, these will not be Bitcoin blocks - the rest of the network will happily continue to build the Bitcoin chain, while he enjoys his own isolated make-believe chain.

let's say that in this system a person with a computer finds one block per month. Then four people with a computer each should find a total of 4 blocks per month, right?

Why?

The perspective I am looking at is not the single block but the development of the block chain.

As soon as one of the four people finds a block, this person broadcasts this block and the puzzles the other three had been working on become obsolete (at least that's my understanding of what the reference implementation does). Only a cheater would be interested in continuing to work on "his" version of the block; however, having lost the block in question, chances are getting higher that he will not manage to push "his" version of the next block.

Four people with a computer would rather find a total of 4 blocks in FOUR months - and these blocks would be the four blocks chained next to each other, i.e. a block chain of length 4.
Does your system maintain the notion that each given block is found by some specific individual? If so, if 4 people find 4 blocks in 4 months, it means each person finds 1 block in 4 months, contrary to the premise that each person finds 1 block per month...

If it wasn't clear, in this example the intention was that the 4 people aren't all there is, there are 4000 more similar people each finding 1 block per month, for a total of 4000 blocks per month. So again, if 4 people find 1 block per month each, then between them they find 4 blocks per month.

And, once more - pools are not a security threat ...
How do you prevent a pool from pooling more than 50% of the hashing power and then imposing its own understanding of Bitcoin upon the remaining nodes?
Because the pool shouldn't be the one deciding what goes in a block. As was explained, a pool is essentially just an agreement to share rewards. Even in centralized pools (and like I said there are decentralized ones), all the operator needs is to verify that miners intend to share rewards, by checking that they find shares which credit the pool in the generation transaction. But everything else can be chosen by the miner.

This is a future fix, however - currently centralized pools do tell miners what to include in the block. But miners can still verify that they're building on the latest block, so they can detect pools attempting a double-spend attack (which is the main thing you can do with >50%).

Block finding follows a Poisson process, which means that the time to find a block follows the exponential distribution (where the variance is the square of the mean). The variance is high, but that's an inevitable consequence of the fair linearly scaling process.

Again you are raising an important aspect. The task, then, is to see whether two goals can be balanced: linear scaling and small variance.
Variance in block finding times is unwanted, but I think most will agree it pales in comparison to the other issues involved. Especially since there are basically two relevant timescales - "instant" (0 confirmations) and "not instant". The time for 10 confirmations follows Erlang(10) distribution which has less variance.
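In standard notation (a side note, not specific to any proposal in this thread): if single-block times $T_i$ are i.i.d. exponential with mean $\mu$, the time to $n$ confirmations is Erlang-distributed,
$$T_{(n)} = \sum_{i=1}^{n} T_i, \qquad \mathbb{E}[T_{(n)}] = n\mu, \qquad \operatorname{Var}(T_{(n)}) = n\mu^2,$$
so the relative standard deviation falls from $1$ for a single block to $1/\sqrt{n}$, about $0.32$ for $n = 10$.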

I agree that the Poisson process is a very natural solution here and prominently unique due to a number of its characteristic features, such as independence, memorylessness and statelessness, etc. A non-parallelizable PoW will certainly lose the stateless property. If we drop this part, how will the linear scaling (effort to expected gain) and the variance change? We will not have all properties of Poisson, but we might keep most of the others. The question sounds quite interesting to me.
By all means you should pursue whatever research question interests you, but I expect you'll be disappointed both in finding a solution satisfying your requirements, and in its potential usefulness.

Forp (OP)
Full Member
***
Offline Offline

Activity: 195
Merit: 100


View Profile
October 05, 2011, 06:07:48 PM
 #28

Ok, you're definitely confused about the capabilities of someone with >50% of the hashing power. He cannot do things like put a 100BTC generation transaction per block. Such blocks are invalid and will be rejected by the network (particularly the nodes that actually accept bitcoins for goods and services). In other words, these will not be Bitcoin blocks - the rest of the network will happily continue to build the Bitcoin chain, while he enjoys his own isolated make-believe chain.

My example is wrong, since an incorrect bounty is something a node can check on its own. If you replace the setting by a double spend, it should work.

let's say that in this system a person with a computer finds one block per month. Then four people with a computer each should find a total of 4 blocks per month, right?

Why?

If it wasn't clear, in this example the intention was that the 4 people aren't all there is, there are 4000 more similar people each finding 1 block per month, for a total of 4000 blocks per month. So again, if 4 people find 1 block per month each, then between them they find 4 blocks per month.

Why?

It is characteristic of non-parallelizable PoWs that they do not scale in the way you describe. I believe we have a misunderstanding here.

Because the pool shouldn't be the one deciding what goes in a block. As was explained, a pool is essentially just an agreement to share rewards.

Ok. Forget the pool as part of the argument here but think of parallel computing. The pool is a parallel computer.

The line of reasoning is about parallel computation and scalability of the PoWs.

With parallelizable PoWs, Bill Gates can buy as much computing power as he wants. He then changes a transaction in block 5 to his favour. Thanx to his computing power he can easily redo the entire block chain history since then. If the PoWs are, as I suggest, non-parallelizable, he simply cannot do better by buying more computers. The only thing he can do is increase the clocking. By this, he can speed up his computation maybe by a factor of 5 or 10 - as opposed to buying more computers, where only money is his limit. So, non-parallelizable PoWs are an effective solution against this kind of attack.

(Yes, I know that the hashes of some 6 or so intermediate blocks are hardcoded in the bitcoin program and hence the attack will not work out exactly the way I described it - but this does not damage the line of reasoning in principle.)

Variance in block finding times is unwanted, but I think most will agree it pales in comparison to the other issues involved. Especially since there are basically two relevant timescales - "instant" (0 confirmations) and "not instant". The time for 10 confirmations follows Erlang(10) distribution which has less variance.

I do not think that the "variance in block finding times" is the essential advantage; it is rather convergence speed to the "longest chain" (I have no hard results on this but am currently simulating this a bit) and better resistance against attacks which involve pools or parallel computers.

By all means you should pursue whatever research question interests you, but I expect you'll be disappointed both in finding a solution satisfying your requirements, and in its potential usefulness.

Trying to understand the argument. Do you think there is no PoW matching all the requirements? Care to give a hint why?

As to potential usefulness: the concept is by no means "finished", but until now the discussion on the board has proved very fruitful and helps to improve the system I am working on. This is for a different kind of block-chain application, so I am not expecting an impact for Bitcoin. Bitcoin is widely disseminated, so I do not expect significant protocol changes to occur any time soon, especially by suggestions from outside the core team.

Gandlaf
Newbie
*
Offline Offline

Activity: 59
Merit: 0


View Profile
October 05, 2011, 06:41:18 PM
 #29

let's say that in this system a person with a computer finds one block per month. Then four people with a computer each should find a total of 4 blocks per month, right?

Why?

If it wasn't clear, in this example the intention was that the 4 people aren't all there is, there are 4000 more similar people each finding 1 block per month, for a total of 4000 blocks per month. So again, if 4 people find 1 block per month each, then between them they find 4 blocks per month.

Why?

It is characteristic of non-parallelizable PoWs that they do not scale in the way you describe. I believe we have a misunderstanding here.


Let's make it a bit easier: assume, as in Meni's example, that 4000 blocks are found per month by individual participants, under the assumption that your non-parallelizable PoW is in operation. Assume further that all of these people just meet up and decide to share the revenue equally to smooth out their income stream. According to what you have been arguing, the total number of blocks they find would go down to 1 purely due to the fact that they are colluding in terms of revenue-sharing.
By definition, 4000 blocks will be reduced to 1 by your formula magically divining social contracts?
Good luck with that line of argumentation...
Meni Rosenfeld
Donator
Legendary
*
Offline Offline

Activity: 2058
Merit: 1054



View Profile WWW
October 05, 2011, 07:17:03 PM
Last edit: October 06, 2011, 07:46:29 AM by Meni Rosenfeld
 #30

let's say that in this system a person with a computer finds one block per month. Then four people with a computer each should find a total of 4 blocks per month, right?
Why?
If it wasn't clear, in this example the intention was that the 4 people aren't all there is, there are 4000 more similar people each finding 1 block per month, for a total of 4000 blocks per month. So again, if 4 people find 1 block per month each, then between them they find 4 blocks per month.
Why?
It is characteristic of non-parallelizable PoWs that they do not scale in the way you describe. I believe we have a misunderstanding here.
This isn't about parallelizable vs. non-parallelizable computations. Performance in serial computations doesn't scale linearly with more computing cores, but this is irrelevant. This is about the process of block finding, which is why I asked if your system diverges fundamentally in the notion that blocks are something found once in a while by people on the network. If not, then it's really "if Billy and Sally each have an apple, then that's two apples" math - if in a given scenario (not in distinct scenarios) two people find 1 block each, then both of them together find 2 blocks. If a network of 4000 people finds 4000 blocks per month, each finds on average 1 block per month. This isn't enough data to know the distribution (it's possible one person finds all 4000), but the best scenario is when each finds close to 1.

It also means that if in a given situation 4000 people find 4000 blocks, each finding about 1, then if I join in it would only be fair if I also find about 1 (or, more precisely, that each will now find 4000/4001).

Because the pool shouldn't be the one deciding what goes in a block. As was explained, a pool is essentially just an agreement to share rewards.

Ok. Forget the pool as part of the argument here but think of parallel computing. The pool is a parallel computer.

The line of reasoning is about parallel computation and scalability of the PoWs.

With parallelizable PoWs, Bill Gates can buy as much computing power as he wants. He then changes a transaction in block 5 to his favour. Thanx to his computing power he can easily redo the entire block chain history since then. If the PoWs are, as I suggest, non-parallelizable, he simply cannot do better by buying more computers. The only thing he can do is increase the clocking. By this, he can speed up his computation maybe by a factor of 5 or 10 - as opposed to buying more computers, where only money is his limit. So, non-parallelizable PoWs are an effective solution against this kind of attack.

(Yes, I know that the hashes of some 6 or so intermediate blocks are hardcoded in the bitcoin program and hence the attack will not work out exactly the way I described it - but this does not damage the line of reasoning in principle.)
Yes, with parallelizable PoW you can overwhelm the network given enough time and money. My contention is that non-parallelizable makes the problem worse, not better. With fully serial, only the fastest one will do anything, so no one else will be incentivized to contribute his resources. So this one person can do the attack, and even if he's honest, it's only his resources that stand against a potential attacker (rather than the resources of many interested parties).

And there's no indication that some hybrid middle ground gives better results - to me it seems more like a linear utility function where fully parallel is best and it gets worse the closer you make it to fully serial.

Also, I hold the position that security can be significantly improved using some form of proof-of-stake (basically a more methodical version of the hardcoded hashes).

Variance in block finding times is unwanted, but I think most will agree it pales in comparison to the other issues involved. Especially since there are basically two relevant timescales - "instant" (0 confirmations) and "not instant". The time for 10 confirmations follows Erlang(10) distribution which has less variance.
I do not think that the "variance in block finding times" is the essential advantage; it is rather convergence speed to the "longest chain" (I have no hard results on this but am currently simulating this a bit) and better resistance against attacks which involve pools or parallel computers.
See above. I think you're going the wrong way.

By all means you should pursue whatever research question interests you, but I expect you'll be disappointed both in finding a solution satisfying your requirements, and in its potential usefulness.
Trying to understand the argument. Do you think there is no PoW matching all the requirements? Care to give a hint why?
I'm still not completely sure what the requirements are, this whole discussion has been confusing to me. But yes, to me it seems that from a "back to basics" viewpoint a serial computation only makes it easier for one entity to dominate the blockchain, making the "better security" requirement impossible. Again, if multiple computers don't give more power over the network, it means the attacker doesn't have to compete against multiple computers, only against one.

As to potential usefulness: the concept is by no means "finished", but until now the discussion on the board has proved very fruitful and helps to improve the system I am working on. This is for a different kind of block-chain application, so I am not expecting an impact for Bitcoin. Bitcoin is widely disseminated, so I do not expect significant protocol changes to occur any time soon, especially by suggestions from outside the core team.
You mean an alternative Bitcoin-like currency, or something that doesn't look anything like it? If the former I doubt this will be applicable, if the latter I can only speculate unless you give more details about the application.

The Bitcoin code progresses slowly, probably mostly because of the sophistication of the code, but I trust that all sufficiently good ideas will make it in eventually.

Forp (OP)
Full Member
***
Offline Offline

Activity: 195
Merit: 100


View Profile
October 05, 2011, 07:47:36 PM
 #31

Thanx, Gandlaf, for your help. I plead guilty of having caused the misunderstanding.

But I still do not get it and would like to work it out.

Let's make it a bit easier: assume, as in Meni's example, that 4000 blocks are found per month by individual participants, under the assumption that your non-parallelizable PoW is in operation. Assume further that all of these people just meet up and decide to share the revenue equally to smooth out their income stream.

Fine. Still with you. Assuming this.

According to what you have been arguing, the total number of blocks they find would go down to 1 purely due to the fact that they are colluding in terms of revenue-sharing.

No. Why should the number of blocks go down? I do not claim that it goes down.

Maybe the misunderstanding is earlier. A non-parallelizable PoW means that the participants CANNOT collude on the PoW. Of course, they still can share their revenues, that is a completely different issue.

In the current parallelizable PoW, all 4000 participants work by looking for a nonce with a specific property. For this goal, they test large numbers of candidate nonces. Every test has a certain success probability (determined by difficulty). Individual tests are, of course, independent of each other. Hence the PoW can be brute-forced. This can be done in parallel. A block is found sooner or later, according to the Poisson process in place. So, the time to find a block follows an exponential distribution.

Now let us consider a strictly sequential, deterministic PoW (not as a suggestion for Bitcoin, but to see the difference). Here, a specific computational result must be obtained. To obtain this result, a large number of arithmetic operations must be performed in strict sequence. The number is adapted to the average speed of a single-core CPU. The participant who reaches the result first wins the block. This cannot be done in parallel. However, it is always the participant with the fastest single-core CPU who wins the block. This is boring and not what we need. This is time-lock cryptography and not exactly useful for Bitcoin.

Now let us consider a non-parallelizable PoW. Here, every participant must make a large number of sequential steps to reach a goal. However, contrary to the sequential, deterministic PoW, there are still random aspects, branching points in the computation. So which participant wins the block still depends on their random choices (which is what we need). Of course, participants can still pool. Two participants will still get twice as many blocks in the long run. The block rate does not go down magically.

However, now comes the crucial difference. Assume I have 2^256 participants, numbered 0, 1, 2, 3, ... How long will they need for the first block? In the current (parallelizable) PoW used in Bitcoin they need a few microseconds: every participant uses his own number as the nonce in the first round... and most likely one of them will produce a hash which is smaller than the current target value. In the non-parallelizable PoW I am thinking of, they will still need more or less 10 minutes, as they should, since this corresponds more or less to the number of operations they have to do before they get a realistic chance of reaching the goal. However, since there is some variability, a slower CPU with better random choices also gets a chance.
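A rough sketch of the contrast being described (the actual scheme is not specified in the thread; the function names, step count and extra target check are illustrative assumptions only): the nonce search can be split across any number of machines, while the chained variant forces every attempt through a long sequence of dependent hashes, with a random seed as the "branching point".

Code:
import hashlib, os

def parallelizable_pow(header: bytes, target: int) -> int:
    # Brute-force nonce search: every attempt is independent, so any number
    # of machines can each scan a disjoint nonce range.
    nonce = 0
    while int.from_bytes(hashlib.sha256(header + nonce.to_bytes(8, 'big')).digest(), 'big') >= target:
        nonce += 1
    return nonce

def chained_pow(header: bytes, steps: int, target: int):
    # Hypothetical sequential variant: each hash depends on the previous one,
    # so a single attempt cannot be split up; the random seed is the branching
    # point that lets a slower CPU still win occasionally, and the final target
    # check keeps the winner probabilistic rather than "fastest CPU always wins".
    state = hashlib.sha256(header + os.urandom(32)).digest()
    for _ in range(steps):
        state = hashlib.sha256(state).digest()
    return state if int.from_bytes(state, 'big') < target else None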

Forp (OP)
Full Member
***
Offline Offline

Activity: 195
Merit: 100


View Profile
October 05, 2011, 08:13:24 PM
 #32

To begin with, the discussion and the reference to the proof-of-stake thread are very helpful to me. Thank you.

This isn't about parallelizable vs. non-parallelizable computations. Performance in serial computations doesn't scale linearly with more computing cores, but this is irrelevant. This is about the process of block finding, which is why I asked if your system diverges fundamentally in the notion that blocks are something found once in a while by people on the network. If not, then it's really "if Billy and Sally each have an apple, then that's two apples" math - if in a given scenario (not in distinct scenarios) two people find 1 block each, then both of them together find 2 blocks. If a network of 4000 people finds 4000 blocks per month, each finds on average 1 block per month. This isn't enough data to know the distribution (it's possible one person finds all 4000), but the best scenario is when each finds close to 1.

It also means that if in a given situation 4000 people find 4000 blocks, each finding about 1, then if I join in it would only be fair if I also find about 1 (or, more precisely, that each will now find 4000/4001).

Agreed. I really guess we somehow got stuck in a misunderstanding, which I might have caused.

Yes, with parallelizable PoW you can overwhelm the network given enough time and money. My contention is that non-parallelizable makes the problem worse, not better. With fully serial, only the fastest one will do anything, so no one else will be incentivized to contribute his resources. So this one person can do the attack, and even if he's honest, it's only his resources that stand against a potential attacker (rather than the resources of many interested parties).

Agreed. A sequential deterministic PoW does not do the job for the obvious reasons you are giving. We need both (randomization of block winners AND non-parallelizability) and I am curious how this can be done.

The issue I take is this: a very high number of CPUs has a different effect on parallelizable PoWs than on non-parallelizable PoWs (the "Bill Gates" attack).

I'm still not completely sure what the requirements are, this whole discussion has been confusing to me. But yes, to me it seems that from a "back to basics" viewpoint a serial computation only makes it easier for one entity to dominate the blockchain, making the "better security" requirement impossible. Again, if multiple computers don't give more power over the network, it means the attacker doesn't have to compete against multiple computers, only against one.

We are reaching common ground. In my model, multiple computers do give more power to the network - but not due to the effect that a single PoW is solved faster in time. When we add computers to the network, the PoWs I am thinking of must be adapted, as in normal Bitcoin. However, the effect is that the probabilities for finding a solution are rearranged, not the overall time for solving a block.

You mean an alternative Bitcoin-like currency, or something that doesn't look anything like it? If the former I doubt this will be applicable, if the latter I can only speculate unless you give more details about the application.

The Bitcoin code progresses slowly, probably mostly because of the sophistication of the code, but I trust that all sufficiently good ideas will make it in eventually.

I am thinking not of a currency-like application but of a replicated directory-service kind of application. Since this is written from scratch, there is the chance to try different PoW systems without having to break the old algorithm (or the mind sets of developers).

Thanx again for challenging my thoughts in the discussion. This is very fruitful.
Meni Rosenfeld
Donator
Legendary
*
Offline Offline

Activity: 2058
Merit: 1054



View Profile WWW
October 05, 2011, 08:19:26 PM
Last edit: October 05, 2011, 08:30:54 PM by Meni Rosenfeld
 #33

However, now comes the crucial difference. Assume I have 2^256 participants, numbered 0, 1, 2, 3, ... How long will they need for the first block? In the current (parallelizable) PoW used in Bitcoin they need a few microseconds: every participant uses his own number as the nonce in the first round... and most likely one of them will produce a hash which is smaller than the current target value. In the non-parallelizable PoW I am thinking of, they will still need more or less 10 minutes, as they should, since this corresponds more or less to the number of operations they have to do before they get a realistic chance of reaching the goal. However, since there is some variability, a slower CPU with better random choices also gets a chance.
I think I now understand what you're talking about. This is basically making the computation more granular, significantly increasing the time it takes to test one value (from a microsecond to 10 minutes).

I think you'll find that still, an entity with enough resources wins, and more easily than with the current system.

Thanx again for challenging my thoughts in the discussion. This is very fruitful.
You're welcome, glad to help.

phillipsjk
Legendary
*
Offline Offline

Activity: 1008
Merit: 1001

Let the chips fall where they may.


View Profile WWW
October 06, 2011, 07:11:56 AM
Last edit: October 06, 2011, 07:51:05 AM by phillipsjk
 #34


In a non-parallelizable PoW, we will have, say, 1,000,000 processing units all competing individually for a block. Some are faster, some are slower, but they do not differ so widely in performance. Every processing unit corresponds to a single processor (no parallelization advantage for a GPU or a multi-core; however, a single person might own several competing processing units, which might sit on a single die or a single GPU or several racks).


You lost me here: as others have said, you can parallelize by buying more computers. If you want to use a lot of memory and branching to remove the advantage of GPUs, I can still parallelize if I have more money than anybody else.

Oracle quotes me just over $44,000 USD for a fully loaded SPARC T4-1 Server (PDF) with 256GB of RAM, supporting 64 simultaneous compute threads. It can consolidate 64 virtual machines (with 4GB of RAM each) in 2U, while drawing under 800 Watts. Granted, the 4MB L3 cache becomes 64kB after being split 64 ways (the 128kB L2 becomes 16kB split 8 ways, across 8 cores), but each "machine" is only costing you about $687.50 USD.

If you had money to burn you could probably put 20 in a rack, for 1280 virtual machines per rack. How is anybody doing this not taking advantage of parallelism? Edit: assuming your "racks" of 4 GPUs are 4U each, your example for the "parallel" case has only 8 times the density (256 threads per U vs 32).

James' OpenPGP public key fingerprint: EB14 9E5B F80C 1F2D 3EBE  0A2F B3DE 81FF 7B9D 5160
cbeast
Donator
Legendary
*
Offline Offline

Activity: 1736
Merit: 1006

Let's talk governance, lipstick, and pigs.


View Profile
October 06, 2011, 07:28:44 AM
 #35

The botnets seemed to have come to the conclusion that it is better to join the bitcoin network rather than sabotage it. What would be the point of an attack even if it could be briefly successful before being discovered and blocked?

Any significantly advanced cryptocurrency is indistinguishable from Ponzi Tulips.
Meni Rosenfeld
Donator
Legendary
*
Offline Offline

Activity: 2058
Merit: 1054



View Profile WWW
October 06, 2011, 07:44:15 AM
 #36

The botnets seemed to have come to the conclusion that it is better to join the bitcoin network rather than sabotage it.
This depends on who runs the botnet.

What would be the point of an attack even if it could be briefly successful before being discovered and blocked?
How do you block an attack? Reject blocks that have the "evil" bit set? (Actually there are ways, but they require a fundamental change in how branches are selected)

Forp (OP)
Full Member
***
Offline Offline

Activity: 195
Merit: 100


View Profile
October 06, 2011, 09:07:55 AM
 #37

Let me see if I understood correctly where I lost you.

you can parallelize by buying more computers.

Your ability to parallelize depends on the number of computers you have and on the type of problem you want to solve.

If the problem you want to solve is finding the correct nonce for a bitcoin block to meet its target, you can parallelize very nicely. Generally speaking, every problem which consists of brute-force attacks is nicely parallelizable. Also, multi-dimensional problems can be parallelized very nicely, for example matrix multiplication. Let us assume we multiply a 10x10 matrix by another 10x10 matrix. When I have 10 computers instead of 1 I will be (nearly) 10 times as fast. Even having 100 computers may help (one for every element of the result matrix). What about 200 computers? Still good, since I now could split the calculation of each sum into two halves. What about 1 billion computers? Well, probably there is a limit to the degree to which matrix multiplication is parallelizable.

This observation may motivate a search for problems, which cannot be parallelized very well.

For example, assume you have a number x and want to calculate sha(x). Fine, we have an algorithm for that. But now let us calculate sha(sha(x)). How would we parallelize this? Actually, we FIRST have to calculate sha(x) and THEN we have to evaluate the hash function again. It does not help me to have an additional computer. We know of no shortcut for parallelization. (There are functions where we know shortcuts - many invocations of an addition can be expressed as a multiplication, for example - but for sha no such shortcut is known.)
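A minimal sketch of exactly that chaining (the starting value and the iteration count are arbitrary): each SHA-256 call needs the previous output as its input, so a second computer cannot start on step k+1 before step k is finished.

Code:
import hashlib

def iterate_sha256(x: bytes, n: int) -> bytes:
    h = x
    for _ in range(n):               # inherently sequential: no known shortcut
        h = hashlib.sha256(h).digest()
    return h

print(iterate_sha256(b'x', 1_000_000).hex())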

So the idea was to replace the proof-of-work problem in Bitcoin (which currently is highly parallelizable) by a problem which is not parallelizable at all. (As outlined, this is only part of the concept, because a completely serialized proof-of-work would not lead to the probabilistic behaviour we want).

Hope I got the point where I lost you.
 
Forp (OP)
Full Member
***
Offline Offline

Activity: 195
Merit: 100


View Profile
October 06, 2011, 09:16:24 AM
 #38

The botnets seemed to have come to the conclusion that it is better to join the bitcoin network rather than sabotage it.
This depends on who runs the botnet.

And in different applications it depends on the motivation of the attacker. In a monetary application (Bitcoin) the attackers may be botnets which want to make some $$$ (or, rather, BBB); unless you are the FED or Visa, of course. If you look at directory applications, an attacker is not interested in double spending but might be interested in preventing a certain result or document from being stored in the system, or in disrupting the operation of the system. Here, the motivation of an attacker is different, as well as the means he can use for an attack.

What would be the point of an attack even if it could be briefly successful before being discovered and blocked?
How do you block an attack? Reject blocks that have the "evil" bit set? (Actually there are ways, but they require a fundamental change in how branches are selected)

@Meni Care to give a hint about what you are thinking of? Since I am working on a new application type, a "fundamental change" in how branches are selected is a real option. Therefore I would be very interested in learning about this, even if it is "just" a "raw" idea and not a source code patch.
Meni Rosenfeld
Donator
Legendary
*
Offline Offline

Activity: 2058
Merit: 1054



View Profile WWW
October 06, 2011, 09:25:00 AM
 #39

@Meni Care to give a hint about what you are thinking of? Since I am working on a new application type, a "fundamental change" in how branches are selected is a real option. Therefore I would be very interested in learning about this, even if it is "just" a "raw" idea and not a source code patch.
I'm mostly referring to ideas I've expressed in the previously linked thread, to augment proof of work with proof-of-stake and circulation (possibly quantified as bitcoin days destroyed) in branch selection.
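For reference, "bitcoin days destroyed" is usually computed as the amount spent times the age of the coins being spent; the sketch below only illustrates that definition, not any particular branch-selection rule.

Code:
def days_destroyed(spent_inputs):
    # spent_inputs: list of (amount_btc, days_since_those_coins_last_moved)
    return sum(amount * age_days for amount, age_days in spent_inputs)

# Spending a 10 BTC output that sat unspent for 30 days destroys 300 bitcoin-days.
print(days_destroyed([(10.0, 30.0)]))  # 300.0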

Forp (OP)
Full Member
***
Offline Offline

Activity: 195
Merit: 100


View Profile
October 06, 2011, 09:48:07 AM
 #40

to augment proof of work with proof-of-stake and circulation

Ah. That's a concept which might fit nicely into my application. Actually, by reading the other thread, I realize that it is much easier to express what I want, and probably also easy to implement: as part of the proof-of-work and branch selection concept!

I favor a technique of "those who have more documents and key-value pairs stored, have been active in the system for a longer time, and are trusted/interconnected to more users have a stronger say on branch selection" over "those who have more $$$ for GPUs have a stronger say".
