Author Topic: [ANSWERED] Why is Bitcoin's proof of work parallelizable? (Read 4575 times)
Forp (OP) | Full Member | Activity: 195 | Merit: 100
October 03, 2011, 10:42:16 PM (last edit: October 12, 2011, 11:02:56 PM by Forp)
#1


I am trying to understand why Bitcoin has a parallelizable proof of work.

The task of the proof of work in Bitcoin is to make it increasingly difficult for an attacker to "change" the "past" and to ensure some kind of convergence in the concept of the "longest chain" in the block structure. Also, the block bonus provides some incentive to the miner, which keeps the system running.

The currently employed proof of work schemes are parallelizable.
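(To make "parallelizable" concrete, here is a minimal Python sketch of the idea, not of Bitcoin's actual header format: every nonce trial is an independent double-SHA-256 check against a target, so k workers scanning disjoint nonce ranges finish roughly k times faster. The function names and the 4-byte nonce encoding are simplifications of my own.)

Code:
import hashlib

def meets_target(header: bytes, nonce: int, target: int) -> bool:
    # One independent trial: double SHA-256 of header+nonce, compared to the target.
    data = header + nonce.to_bytes(4, "little")
    digest = hashlib.sha256(hashlib.sha256(data).digest()).digest()
    return int.from_bytes(digest, "big") < target

def search(header: bytes, target: int, start: int, step: int, limit: int):
    # Worker `start` of `step` workers scans nonces start, start+step, start+2*step, ...
    # Because every trial is independent, N workers cover the nonce space N times faster.
    for nonce in range(start, limit, step):
        if meets_target(header, nonce, target):
            return nonce
    return None

# Example: four workers splitting the same search.
# winners = [search(b"example header", 2**240, k, 4, 2**20) for k in range(4)]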

As a result of this, pooled mining is faster than GPU mining (250 cores), which is faster than Core i7 mining (8 cores), which is faster than Celeron mining (1 core). In my opinion, this is a disadvantage, for a number of reasons:

* Larger pools have a significant share of the total hash capacity. This increases their chances of a 50% attack.
* The block chain "converges" (or, in the case of a fork, "heals") faster when many competitors with small computing power are in the race than when there are fewer competitors, or competitors with more (aggregated) computing power.
* Bitcoin was meant to be peer-to-peer; pooling contradicts this. The pools are the banks of tomorrow.

A non-parallelizable proof of work would still serve the same goals as a parallelizable proof of work but would solve these problems.

Of course, at the current difficulty, the chances for a single miner to "win" a block would be small. This issue, however, can be solved easily:

* By increasing the block rate and appropriately reducing the bounty, the economics could be kept the same in the long run, while the granularity of the block bounty would be adapted.
* If we want to keep the block rate and bounty, miners could still operate several computers or GPUs, but every core would work on its own block variant. This would still increase the miner's return on investment linearly, but it would not exhibit the problems outlined above.

Hence my question: why do we have a parallelizable proof of work?

Is there a good reason that I do not see, or is it merely a historical accident (possibly waiting to be cured)?






DeathAndTaxes (Gerald Davis) | Donator, Legendary | Activity: 1218 | Merit: 1063
October 03, 2011, 11:43:56 PM (last edit: October 04, 2011, 03:34:20 AM by DeathAndTaxes)
#2

Provide any example of
Quote
"A non-parallelizable proof of work would still serve the same goals as a parallelizable proof of work but would solve these problems."

Very few things can't be parallelized; a mining pool is itself an example of parallelization at work. Provide an example of a problem that an individual can solve but a group of individuals can't solve more quickly.

Remember, multiple GPUs (or multiple people in a pool) aren't working together; they are working independently and simply sharing the reward. You could design a work unit that requires so many resources that one computer can only work on one solution at a time, but that simply shifts the bottleneck from computational power to memory or disk access. Even if you did so, you wouldn't eliminate pools.

If one computer can work on a problem, then two computers can work independently and share the reward. That is all pools are doing. They aren't working together. Shares are worthless and don't help the pool; they are merely a metric to avoid cheating. The only hash that is worth anything is the solution, and it is worth 50 BTC (plus fees). So every share produced is worthless except the one which is the solution, and that one is worth everything. A pool is simply an agreement that everyone works and then you split the 50 BTC. Shares, RPC and long polling are simply mechanisms to enforce that sharing.
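(A toy sketch of that share mechanism, with made-up target values and helper names of my own: a "share" is just a hash that clears a far easier target than the network's, so it proves the miner is really hashing but is worth nothing to the chain.)

Code:
import hashlib

# Illustrative targets only; real network and pool share targets differ.
NETWORK_TARGET = 2 ** 224   # very hard: the only hash that earns the block reward
SHARE_TARGET   = 2 ** 248   # much easier: merely proves the miner is really hashing

def hash_value(header: bytes, nonce: int) -> int:
    data = header + nonce.to_bytes(4, "little")
    return int.from_bytes(hashlib.sha256(hashlib.sha256(data).digest()).digest(), "big")

def classify(header: bytes, nonce: int) -> str:
    h = hash_value(header, nonce)
    if h < NETWORK_TARGET:
        return "block solution"   # worth 50 BTC plus fees
    if h < SHARE_TARGET:
        return "share"            # worthless to the chain, but evidence of honest work
    return "nothing"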

How exactly would you outlaw sharing?
hashcoin | Full Member | Activity: 372 | Merit: 101
October 04, 2011, 03:21:33 AM
#3

The aim is to distribute coins via lottery -- every nonce you test is a lottery ticket you bought.

If you used a non-parallelizable test, the same person would win every time: the person with the fastest single serial processor.  That would be an even dumber distribution than the existing one.
Gabi | Legendary | Activity: 1148 | Merit: 1008
"If you want to walk on water, get out of the boat"
October 04, 2011, 05:24:18 AM
#4

Quote
* Larger pools have a significant share of the total hash capacity. This increases their chances of a 50% attack.
* The block chain "converges" (or, in the case of a fork, "heals") faster when many competitors with small computing power are in the race than when there are fewer competitors, or competitors with more (aggregated) computing power.
* Bitcoin was meant to be peer-to-peer; pooling contradicts this. The pools are the banks of tomorrow.
How is this whine about pools related to parallelization? With 1 CPU or 1 GPU nothing would change; we would still have pools of CPUs, because otherwise finding a block solo would take years.

And the "50% attack from a pool" problem can easily be solved: there can still be pools, but organized in a way such that they CANNOT control the work.

phillipsjk | Legendary | Activity: 1008 | Merit: 1001
"Let the chips fall where they may."
October 04, 2011, 06:15:16 AM
#5

A parallelizable problem also makes it easy to tune the difficulty. If you were able to claim a block reward by finding large primes, the block interval would be a lot less predictable.

Also, the chosen method (cryptographic hashing with a result below a target value) is a problem that is difficult to solve, but easy to verify. The hash serves a dual purpose: proof-of-work and integrity verification. Combining the two makes pre-computing the work ahead of time nearly impossible (exception: block-chain fork).
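(A simplified sketch of why hash-below-target is easy to tune: the expected number of trials per block is 2^256 / target, so rescaling the target rescales the work smoothly. The retargeting rule below is Bitcoin's in spirit, every 2016 blocks aiming at 600-second intervals, but it omits the clamping of the adjustment factor and other details.)

Code:
# Bitcoin-style retargeting, greatly simplified: every 2016 blocks, scale the
# target by (time actually taken / time expected), so the block interval drifts
# back toward 10 minutes. The real rule also clamps the factor to [1/4, 4].
EXPECTED_SECONDS = 2016 * 600   # two weeks at one block per ten minutes

def expected_hashes(target: int) -> float:
    # A uniformly random 256-bit hash falls below `target` with probability
    # target / 2**256, so on average this many trials are needed per block.
    return 2 ** 256 / target

def retarget(old_target: int, actual_seconds: int) -> int:
    # Blocks came too fast -> shrink the target (harder); too slow -> grow it.
    return old_target * actual_seconds // EXPECTED_SECONDS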

teknohog | Sr. Member | Activity: 518 | Merit: 252
October 04, 2011, 06:37:20 AM
#6

IMHO, the short answer is that Bitcoin is a distributed network. If you want lots of people around the world to work on the same problem, you need to parallelize it one way or another.

This question has already been asked in many ways. If you can limit mining to a single network node (CPU), dedicated miners will just set up multiple nodes for themselves. If you want to limit mining power per person, you've just thrown away the whole idea of an anonymous distributed network. Also, do you want to ban people from working together towards a common goal?

Moreover, limiting generation per person throws away all incentives for developing the network. In its current form, the network is stronger against attacks, because people are rewarded for spending computational power on it. In other words, who is going to do any work, if you just give everyone a big chunk of money?

Forp (OP) | Full Member | Activity: 195 | Merit: 100
October 04, 2011, 06:13:48 PM
#7

Provide any example of
Quote
"A non-parallelizable proof of work would still serve the same goals as a parallelizable proof of work but would solve these problems."

Very few things can't be parallelized; a mining pool is itself an example of parallelization at work. Provide an example of a problem that an individual can solve but a group of individuals can't solve more quickly.

The problem to solve must, of course, be inherently serial. Searching (guessing) a nonce for a single hash condition is not. Thus we should think more along the lines of hash chains.

There is some literature on this although it is not so widely known.

Colin Boyd presented a talk at the CANS 2007 conference on "Toward Non-Parallelizable Client Puzzles".

The classical reference is Rivest, Shamir, and Wagner: "Time-lock puzzles and timed-release crypto".

Green, Juen, Fatemieh, Shankesi et al. discuss the GPU issue in "Reconstructing Hash Reversal based Proof of Work Schemes".

All in all, there are some 20 papers on this, and the math is straightforward.
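(For the curious, a toy sketch of the Rivest-Shamir-Wagner construction: the puzzle is to compute 2^(2^t) mod n. Whoever knows the factorization of n can shortcut the exponent via phi(n); everyone else has to perform t squarings one after another, and a second CPU does not help. The primes and the value of t below are tiny and purely illustrative.)

Code:
# Toy Rivest-Shamir-Wagner time-lock puzzle: compute 2^(2^t) mod n.
p, q = 10007, 10009              # the puzzle creator's secret primes (toy-sized)
n = p * q
phi = (p - 1) * (q - 1)
t = 100_000                      # number of strictly sequential squarings

def solve_with_trapdoor() -> int:
    # The creator reduces the exponent modulo phi(n) and is done almost instantly.
    e = pow(2, t, phi)
    return pow(2, e, n)

def solve_sequentially() -> int:
    # The solver's only known route: t squarings, each depending on the last.
    x = 2
    for _ in range(t):
        x = (x * x) % n
    return x

assert solve_with_trapdoor() == solve_sequentially()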
Forp (OP) | Full Member | Activity: 195 | Merit: 100
October 04, 2011, 06:19:27 PM
#8

If you used a non-parallelizable test, the same person would win every time: the person with the fastest single serial processor.  That would be an even dumber distribution than the existing one.

No. This is not specific to non-parallelizable schemes.

The current Bitcoin proof of work leads to a stochastic situation, since the solution algorithm is stochastic in nature and every miner (or rather: every pool) works on a different task.

Both aspects must, of course, be built into a non-parallelizable proof of work. This can be done, for example, by extending the algorithm in the Rivest-Shamir-Wagner paper (see above) with nonce aspects. I have not yet worked out the details or implemented this, but I assume it is straightforward.
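(Purely to illustrate the direction I have in mind, and emphatically not a worked-out or secure scheme: each miner's candidate block, which already differs from miner to miner, could determine a random amount of strictly sequential work, so finishing times become stochastic even though no single puzzle can be parallelized. Fast verification and grinding the header for a short chain are among the details this toy ignores; all names and constants are made up.)

Code:
import hashlib

MEAN_STEPS = 50_000   # illustrative average chain length

def required_steps(block_header: bytes) -> int:
    # Derive a per-candidate amount of sequential work from the header itself.
    h = hashlib.sha256(block_header).digest()
    return 1 + int.from_bytes(h[:4], "big") % (2 * MEAN_STEPS)

def sequential_proof(block_header: bytes) -> bytes:
    # Each step depends on the previous output, so extra cores do not help;
    # different candidate blocks simply take different (random) amounts of time.
    x = hashlib.sha256(block_header).digest()
    for _ in range(required_steps(block_header)):
        x = hashlib.sha256(x).digest()
    return x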
 
Forp (OP) | Full Member | Activity: 195 | Merit: 100
October 04, 2011, 06:21:48 PM
#9

How is this whine about pools related to parallelization? With 1 CPU or 1 GPU nothing would change; we would still have pools of CPUs, because otherwise finding a block solo would take years.

The incentive aspect can be solved by adapting difficulty as written in my original post.
Forp (OP) | Full Member | Activity: 195 | Merit: 100
October 04, 2011, 09:12:26 PM
#10

A parallelizable problem also makes it easy to tune the difficulty.

Why would parallelizability and tunability be related to each other? In the Rivest-Shamir-Wagner paper cited above there is a nice non-parallelizable puzzle which explicitly contains the "number of computational seconds" as a tuning parameter.

If you were able to claim a block reward by finding large primes, the block interval would be a lot less predictable.

Finding large primes would be a very bad non-parallelizable proof-of-work, because most prime-finding algorithms parallelize very nicely - especially the probabilistic ones. Thus, non-parallelizable proofs of work are usually not based on finding large primes.

Also, the chosen method (cryptographic hashing with a result below a target value) is a problem that is difficult to solve, but easy to verify. The hash serves a dual purpose: proof-of-work and integrity verification. Combining the two makes pre-computing the work ahead of time nearly impossible (exception: block-chain fork).

Which is true for all proof-of-work concepts by definition, and also for the non-parallelizable ones we find in the literature.
Forp (OP) | Full Member | Activity: 195 | Merit: 100
October 04, 2011, 09:20:14 PM
#11

IMHO, the short answer is that Bitcoin is a distributed network. If you want lots of people around the world to work on the same problem, you need to parallelize it one way or another.

You are aware that they are not working on the same problem? Every miner is working on a different problem (with a different bounty transaction, a different timestamp, etc.). This cannot be the reason.

This question has already been asked in many ways. If you can limit mining to a single network node (CPU), dedicated miners will just set up multiple nodes for themselves.

Exactly. This improves the probabilistic convergence speed of the algorithm. That is exactly my claim. Hopefully I will get around to producing a more formal proof of this in the next few days.

Moreover, limiting generation per person throws away all incentives for developing the network. In its current form, the network is stronger against attacks, because people are rewarded for spending computational power on it. In other words, who is going to do any work, if you just give everyone a big chunk of money?

The point is: The current proof of work scheme makes it possible to parallelize and have pools. A pool could thus become a very strong adversary which is not what we want - right? A non-parallelizable proof of work scheme has the consequence that nobody can become stronger than a, say, 4.5 GHz overclocked single core pentium. This is what we want.
FreeMoney | Legendary | Activity: 1246 | Merit: 1014
"Strength in numbers"
October 04, 2011, 09:28:08 PM
#12


The point is: The current proof of work scheme makes it possible to parallelize and have pools. A pool could thus become a very strong adversary which is not what we want - right? A non-parallelizable proof of work scheme has the consequence that nobody can become stronger than a, say, 4.5 GHz overclocked single core pentium. This is what we want.


If I can get X power by doing Y, how am I not going to be able to get 2X power by doing Y twice? How could you possibly tell the difference between two people doing Y and one person doing Y twice? Not to mention (again) that the pool is actually just a bunch of people separately doing Y and sharing results.

FreeTrade | Legendary | Activity: 1428 | Merit: 1030
October 04, 2011, 09:39:23 PM
#13


The point is: The current proof of work scheme makes it possible to parallelize and have pools. A pool could thus become a very strong adversary which is not what we want - right? A non-parallelizable proof of work scheme has the consequence that nobody can become stronger than a, say, 4.5 GHz overclocked single core pentium. This is what we want.


It is possible to design a proof-of-work scheme where being part of a pool doesn't provide any advantage (or only a negligible one), aside from the benefit of having a more predictable revenue stream. Perhaps that's what the OP is wondering about.

FreeMoney | Legendary | Activity: 1246 | Merit: 1014
"Strength in numbers"
October 04, 2011, 09:57:46 PM
#14


It is possible to design a proof-of-work scheme where being part of a pool doesn't provide any advantage (or only a negligible one), aside from the benefit of having a more predictable revenue stream. Perhaps that's what the OP is wondering about.

That is all that a pool currently does.

FreeTrade | Legendary | Activity: 1428 | Merit: 1030
October 04, 2011, 10:18:45 PM
#15

A non-parallelizable proof of work scheme has the consequence that nobody can become stronger than a, say, 4.5 GHz overclocked single core pentium. This is what we want.

How would you prevent the fastest miner from winning every block?

Pieter Wuille | Legendary | Activity: 1072 | Merit: 1170
October 04, 2011, 10:21:03 PM
#16

A non-parallelizable proof of work scheme has the consequence that nobody can become stronger than a, say, 4.5 GHz overclocked single core pentium. This is what we want.

And Bitcoin gets taken over by a single botnet.

DeathAndTaxes (Gerald Davis) | Donator, Legendary | Activity: 1218 | Merit: 1063
October 04, 2011, 10:51:19 PM
#17

It is possible to design a proof-of-work scheme where being part of a pool doesn't provide any advantage (or only a negligible one), aside from the benefit of having a more predictable revenue stream. Perhaps that's what the OP is wondering about.

THAT IS THE ONLY ADVANTAGE POOLS HAVE NOW.

Each miner in the pool is working independently. A pool with 1,000 miners has absolutely no advantage over a pool with 10, or over a solo miner (1). At its most basic level a pool is simply an agreement to "share" the reward. No work gets shared, only the reward. If a pool of 10,000 miners mines 5 million shares and then one miner finds the solution, essentially the other 9,999 miners did ABSOLUTELY nothing. One miner solved the entire problem ALL BY HIMSELF.

However, there is huge volatility in mining by yourself (due to the high reward and low chance of "winning"). A pool normalizes the results (smaller reward, higher chance of winning), but the expected value is the same.

The protocols of the pool (i.e. checking low-difficulty shares, RPC, etc.) merely exist to keep people honest. The simplest pool requires no protocols, just trust.

Say four friends and I (all with exactly the same hardware) hash independently. If one of us "wins", we split the prize 5 ways. The expected value is exactly the same, but the volatility has decreased. Now, as a pool gets larger, the risk of cheating gets higher, so pools implement protocols and procedures to ensure "fairness".
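(A quick Monte Carlo sanity check of that claim, with made-up numbers for the reward and the per-round win probability: the pooled miner's expected income per round matches the solo miner's; only the variance shrinks.)

Code:
import random

REWARD = 50.0        # BTC per block
P_BLOCK = 1e-4       # chance that one miner finds a block in a given round (made up)
ROUNDS = 200_000
POOL_SIZE = 5

def solo(rng):
    # Income per round when mining alone.
    return [REWARD if rng.random() < P_BLOCK else 0.0 for _ in range(ROUNDS)]

def pooled(rng):
    # Five identical miners hash independently and split whatever any of them wins.
    payouts = []
    for _ in range(ROUNDS):
        wins = sum(rng.random() < P_BLOCK for _ in range(POOL_SIZE))
        payouts.append(wins * REWARD / POOL_SIZE)
    return payouts

rng = random.Random(1)
for name, xs in (("solo", solo(rng)), ("pooled", pooled(rng))):
    mean = sum(xs) / len(xs)
    var = sum((x - mean) ** 2 for x in xs) / len(xs)
    print(f"{name:6s} mean = {mean:.4f} BTC/round   variance = {var:.4f}")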

So back to the OP's point: please provide an example where the reward can't be shared by a pool. You can't. Even if the work is completely non-parallelizable, the reward certainly can be shared by INDEPENDENT actors via mutual agreement.

So what exactly would a non-parallelizable scheme solve? And of course, the fastest miner would always win with a non-parallelizable puzzle.
ArtForz | Sr. Member | Activity: 406 | Merit: 257
October 05, 2011, 01:44:46 AM
#18

Another tiny problem: all non-parallelizable puzzle schemes proposed so far require the puzzle creator to keep a secret from the puzzle solver. How exactly do you do that in a decentralized system?

Forp (OP) | Full Member | Activity: 195 | Merit: 100
October 05, 2011, 10:18:43 AM
#19

If I can get X power by doing Y, how am I not going to be able to get 2X power by doing Y twice? How could you possibly tell the difference between two people doing Y and one person doing Y twice?

The original 1996 paper by Rivest, Shamir, and Wagner (see above) provides some nice real-world examples here.

One woman needs 9 months to give birth to a baby. So 2 women will have twice the power and will need only 4.5 months to produce a baby.

A bus full of passengers is stranded in the desert; they have run out of gasoline. The driver discovers that the next gas station is 50 miles away. No problem: there are 50 passengers on the bus, so they will just split the task, and every passenger will walk his share of 1 mile.

These (deliberately absurd) examples show that there are tasks where you do not get 2X power by doing Y twice. My posting above gives references to mathematical examples in the literature.
Forp (OP) | Full Member | Activity: 195 | Merit: 100
October 05, 2011, 10:37:46 AM
#20

It is possible to design a proof-of-work scheme where being part of a pool doesn't provide any advantage (or only a negligible one), aside from the benefit of having a more predictable revenue stream. Perhaps that's what the OP is wondering about.
Well, exactly. Joining a Bitcoin mining pool doesn't increase your revenue; it just provides you with a more predictable revenue stream.

From the discussion I realize that I did not properly describe my core goal. It is not about mining, or increasing revenues, or making them more predictable. It is rather about understanding the implications of the current choice for the stability and speed of statistical convergence of the longest chain. The discussion thus far has been very helpful and enables me to rephrase my thoughts, hopefully in a better way.

In a non-parallelizable PoW we will have, say, 1,000,000 processing units all competing individually for a block. Some are faster, some are slower, but they do not differ all that widely in performance. Every processing unit corresponds to a single processor (there is no parallelization advantage for a GPU or a multi-core CPU; however, a single person might own several competing processing units, which might sit on a single die, a single GPU, or several racks).

In a parallelizable PoW we will have a much smaller number of processing units competing for a block. Many of the former units will cooperate by the mere fact that they sit inside a GPU and parallelize their work. Other units are even stronger, since they form pools. Moreover, we see a big variance in performance (ranging from the Celeron miner to the miner with 100 racks x 4 high-end GPUs x 256 GPU cores).

Now, how do these two worlds compare?

1) In the parallelizable world it is POSSIBLE to form a processing unit which has more than 50% of the hashing power. You simply buy as many GPUs as necessary, and that's it. In the non-parallelizable world this is not possible. The best you can do is to overclock your single core or build a dedicated SHA processor.

2) As a result of (1), the distributions of processing power in the two cases are very different, and thus the two algorithms will definitely behave differently.

Current state: I am trying to understand this difference. My (as yet unproven) mathematical gut feeling is that convergence in the non-parallelizable setting is better (I am referring to the thoughts described very informally on pages 7 and 8 of Satoshi's white paper) and that the system is more robust against some forms of attack. I do not have a completely thought-through concept yet.
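(One way to start exploring that difference numerically, with made-up power distributions: simulate block races under the two worlds and record what fraction of blocks the single largest participant wins. This proves nothing about convergence; it only frames the comparison I am after.)

Code:
import random

BLOCKS = 2_000
rng = random.Random(7)

def winner(powers):
    # With memoryless (exponential) solving times, participant i wins a given
    # block with probability powers[i] / sum(powers); sample that directly.
    r = rng.random() * sum(powers)
    acc = 0.0
    for i, p in enumerate(powers):
        acc += p
        if r < acc:
            return i
    return len(powers) - 1

def largest_share(powers):
    biggest = max(range(len(powers)), key=lambda i: powers[i])
    return sum(winner(powers) == biggest for _ in range(BLOCKS)) / BLOCKS

# Parallelizable world: a few big pools plus a tail of small miners.
pooled_world = [40, 25, 15, 10] + [0.1] * 100
# Non-parallelizable world: many single cores of broadly similar speed.
serial_world = [rng.uniform(0.8, 1.2) for _ in range(1000)]

print("largest participant's share of blocks, pooled world:", largest_share(pooled_world))
print("largest participant's share of blocks, serial world:", largest_share(serial_world))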

The discussion here is very helpful for this. Thank you!

