Bitcoin Forum

Topic: Are GPU's Satoshi's mistake?  (Read 8135 times)
FreeTrade (OP) | Legendary | Activity: 1428 | Merit: 1030
October 01, 2011, 03:23:12 PM  #1

We all know that ordinary CPUs are useless for mining and that only GPUs running specialized programs give a reasonable return.

Did Satoshi mean it to be this way? I can't imagine that he did. Everything else is so well thought out that I imagine his vision was that anybody with access to a computer would be able to effectively turn electricity into bitcoins at a fairly reasonable return. If that were the case, it would certainly be easier to distribute BTC.

Did Satoshi not foresee that GPUs would be much more efficient, thus reducing mining to the GPU-equipped few?

error | Hero Member | Activity: 588 | Merit: 500
October 01, 2011, 03:30:25 PM  #2

I suspect he never foresaw either GPUs or mining pools.

Serge | Legendary | Activity: 1050 | Merit: 1000
October 01, 2011, 03:36:10 PM  #3

At the same time, every computer-literate person realizes that computing power is only going to increase with time.
memvola | Hero Member | Activity: 938 | Merit: 1002
October 01, 2011, 03:50:45 PM  #4

Did Satoshi not foresee that GPUs would be much more efficient, thus reducing mining to the GPU-equipped few?

He didn't have to foresee anything that specific. It was obvious that mining would come to be dominated by specialized people and specialized hardware. If there were no GPUs to compete, FPGAs would be very competitive, since they consume far less power than CPUs. If not FPGAs, we'd expect ASIC solutions sooner.

So it's not a flaw, it's just inevitable. I'm guessing you're inspired by the new proof-of-concept currencies that are CPU-friendly. Keep in mind that if they reach a critical size, it's also inevitable for them to be dominated by specialized hardware.
FreeTrade (OP) | Legendary | Activity: 1428 | Merit: 1030
October 01, 2011, 04:27:43 PM  #5


So it's not a flaw, it's just inevitable. I'm guessing you're inspired by the new proof-of-concept currencies that are CPU-friendly. Keep in mind that if they reach a critical size, it's also inevitable for them to be dominated by specialized hardware.

I agree it is inevitable - but it is still a flaw.

The flaw is not that it is possible for specialized equipment to outperform general-purpose computers. As you say, that is inevitable - it is the factor by which they do.

GPUs might outperform CPUs by 10:1, so I must spend $50 in electricity for every $5 in BTC.

However, if GPUs outperformed CPUs by only 1.1:1, I'd spend $5.50 for every $5. You'd still have specialized hardware, but you'd have general-purpose computer participants too.

The flaw, then, is the ratio by which specialized hardware outperforms, and not having designed to minimize that ratio.
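To make the arithmetic concrete, here is a rough Python sketch of the ratio argument; the $5 of BTC and the 10:1 / 1.1:1 ratios are the hypothetical figures from the posts above:

Code:
# Breakeven sketch: if specialized miners mine at ~breakeven (electricity
# cost ~= reward), a CPU miner who is `ratio` times less efficient pays
# `ratio` times as much in electricity for the same expected BTC.

def cpu_cost_per_btc(specialist_cost: float, ratio: float) -> float:
    return specialist_cost * ratio

for ratio in (10.0, 1.1):
    print(f"ratio {ratio}: CPU miner pays ${cpu_cost_per_btc(5.0, ratio):.2f} "
          f"in electricity per $5.00 of BTC")
# ratio 10.0: $50.00 per $5.00 of BTC
# ratio 1.1:  $5.50 per $5.00 of BTC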


 

Serge | Legendary | Activity: 1050 | Merit: 1000
October 01, 2011, 04:34:29 PM  #6

It doesn't matter what you call it - Pentium, Xeon, GPU, FPGA, ASIC, quantum computing -

it's all computing power, which is going to increase over time. I don't think Satoshi expected people to be mining bitcoins in 20 years on the same Pentiums it was developed to run on at the time.
Meni Rosenfeld | Donator | Legendary | Activity: 2058 | Merit: 1054
October 01, 2011, 07:42:27 PM (last edit: October 01, 2011, 09:04:31 PM by Meni Rosenfeld)  #7

The jury is still out on whether this is a problem. Having ordinary PCs viable for mining causes a significant botnet risk, and while the idea of everyone being able to mine is appealing, it's not essential.

If this is agreed to be a problem, the hashing function can be changed to something for which ordinary PCs are more suitable, and it's possible this was Satoshi's plan.


I suspect he never foresaw either GPUs
I doubt that; GPGPU isn't exactly news. He might not have known a priori exactly how good GPUs are for SHA-256, but the idea would certainly have occurred to him, and he could have investigated if he thought it was a relevant problem.

or mining pools.
I doubt that; someone who understands probability (which Satoshi does) and expects that many people will mine (which I believe he did) can't think that solo mining will remain feasible.


At the same time, every computer-literate person realizes that computing power is only going to increase with time.
This has nothing to do with it. This isn't about computing in general becoming faster; it's about specialized hardware for this specific purpose putting people using commodity machines at a disadvantage. "Computing power increasing with time" benefits pros and amateurs equally.


Did Satoshi not foresee that GPUs would be much more efficient, thus reducing mining to the GPU-equipped few?
So it's not a flaw, it's just inevitable. I'm guessing you're inspired by the new proof-of-concept currencies that are CPU-friendly. Keep in mind that if they reach a critical size, it's also inevitable for them to be dominated by specialized hardware.
You're ignoring capital expenditures. At-home miners leverage existing hardware, giving them a huge advantage over specialized businesses. For mining to specialize, the advantage of the specialized hardware must be equally huge. This is the case with SHA-256, but not necessarily with a function chosen to require resources similar to the average PC.

k9quaint | Legendary | Activity: 1190 | Merit: 1000
October 01, 2011, 07:46:02 PM  #8


I agree it is inevitable - but it is still a flaw.

The flaw is not that it is possible for specialized equipment to outperform general-purpose computers. As you say, that is inevitable - it is the factor by which they do.

GPUs might outperform CPUs by 10:1, so I must spend $50 in electricity for every $5 in BTC.

However, if GPUs outperformed CPUs by only 1.1:1, I'd spend $5.50 for every $5. You'd still have specialized hardware, but you'd have general-purpose computer participants too.

The flaw, then, is the ratio by which specialized hardware outperforms, and not having designed to minimize that ratio.


You have it backwards. GPUs are far more efficient than CPUs in electricity use.

Meni Rosenfeld | Donator | Legendary | Activity: 2058 | Merit: 1054
October 01, 2011, 08:02:16 PM  #9


I agree it is inevitable - but it is still a flaw.

The flaw is not that it is possible for specialized equipment to outperform general-purpose computers. As you say, that is inevitable - it is the factor by which they do.

GPUs might outperform CPUs by 10:1, so I must spend $50 in electricity for every $5 in BTC.

However, if GPUs outperformed CPUs by only 1.1:1, I'd spend $5.50 for every $5. You'd still have specialized hardware, but you'd have general-purpose computer participants too.

The flaw, then, is the ratio by which specialized hardware outperforms, and not having designed to minimize that ratio.


You have it backwards. GPUs are far more efficient than CPUs in electricity use.
No, he means that specialized GPU miners in equilibrium will be close to breakeven and so spend ~$5 in electricity for every $5 in BTC (which ignores all other costs, but whatever), while he as a CPU miner would have to spend $50.

mrb | Legendary | Activity: 1512 | Merit: 1027
October 01, 2011, 08:07:50 PM  #10

Not only did Satoshi plan for an increasing amount of power being thrown at mining, he also knew that GPUs would excel at it, well before any GPU miner even existed - in December 2009:

We should have a gentleman's agreement to postpone the GPU arms race as long as we can for the good of the network.  It's much easier to get new users up to speed if they don't have to worry about GPU drivers and compatibility.  It's nice how anyone with just a CPU can compete fairly equally right now.
tvbcof | Legendary | Activity: 4592 | Merit: 1276
October 01, 2011, 08:22:59 PM  #11

At the risk of offending miners...

Seems to me that mining is something of a sideshow - a system-engineering problem whose solution does not matter very much as long as it works. But I would suspect that Satoshi was aware of the possibilities for solving the chosen proof-of-work problem and put at least some consideration into it. The evolution from CPU -> GPU -> ASIC, overlaid with the rate of inflation, seems to me yet another impressive (and probably non-accidental) happenstance.

I do wonder what will happen as ASICs come online, and how, but I don't understand things well enough to decide whether to be concerned. It does seem to me that a lot of the problems that could crop up, should attackers gain the upper hand at such a juncture, could be addressed by the 'sledgehammer' approach of handling them in code.

Meni Rosenfeld | Donator | Legendary | Activity: 2058 | Merit: 1054
October 01, 2011, 08:44:15 PM  #12

At the risk of offending miners...

Seems to me that mining is something of a sideshow - a system-engineering problem whose solution does not matter very much as long as it works.
None taken, and I agree. And I say this as someone who mines, has been fascinated for years with the concept of monetizing distributed computing, was initially drawn to Bitcoin in large part because it enables this, and whose primary skilled contribution to Bitcoin is the analysis of mining pool reward systems.

But I think people are vastly underestimating the magnitude of the problems Bitcoin will face in the future related to the economics of mining. Maybe "mining for the masses" could play a part in the solution.

FreeTrade (OP) | Legendary | Activity: 1428 | Merit: 1030
October 01, 2011, 09:00:28 PM  #13

You're ignoring capital expenditures. At-home miners leverage existing hardware, giving them a huge advantage over specialized businesses.

Thanks - yes, I had forgotten this entirely. Factoring it in, I'm guessing that a sufficiently well-designed CPU-friendly currency could make commercial mining entirely unviable.

FreeTrade (OP) | Legendary | Activity: 1428 | Merit: 1030
October 01, 2011, 09:09:45 PM  #14

Not only did Satoshi plan for an increasing amount of power being thrown at mining, he also knew that GPUs would excel at it, well before any GPU miner even existed - in December 2009:

We should have a gentleman's agreement to postpone the GPU arms race as long as we can for the good of the network.  It's much easier to get new users up to speed if they don't have to worry about GPU drivers and compatibility.  It's nice how anyone with just a CPU can compete fairly equally right now.

Thanks - very revealing quote there. My reading is that he became aware of the GPU problem after the horse had already bolted, and was thus appealing for a "gentleman's agreement" over what he might otherwise have prevented in the design from the outset.

P4man | Hero Member | Activity: 518 | Merit: 500
October 01, 2011, 09:14:41 PM  #15

Bitcoin-specific ASICs seem a bit of a stretch at this point, but I wonder whether something like this could be used:
http://www.cast-inc.com/ip-cores/encryption/sha-256/cast_sha256-a.pdf

Anyone have an idea how to interpret the performance numbers and how they compare to our GPUs?

FreeTrade (OP) | Legendary | Activity: 1428 | Merit: 1030
October 01, 2011, 09:22:18 PM  #16

The jury is still out on whether this is a problem. Having ordinary PCs viable for mining causes a significant botnet risk, and while the idea of everyone being able to mine is appealing, it's not essential.

Agreed, it is not essential - it's an aspect of the Bitcoin protocol. I'm not sure if I'm on the jury or not, but if I were, I'd have to agree with Satoshi and say CPU-friendly mining is a desirable feature to help drive adoption.

I'm interested in the new developments with Fairbrix and Tenebrix - but clearly Bitcoin has all the momentum.

FreeTrade (OP) | Legendary | Activity: 1428 | Merit: 1030
October 01, 2011, 09:29:28 PM  #17

If this is agreed to be a problem, the hashing function can be changed to something for which ordinary PCs are more suitable, and it's possible this was Satoshi's plan.

My understanding is that this kind of change would require the agreement of the majority of the computing power of the BTC network, and so is next to impossible? Am I mistaken that the Bitcoin protocol was built explicitly so that these things are difficult to change, not easy?

DeathAndTaxes (Gerald Davis) | Donator | Legendary | Activity: 1218 | Merit: 1079
October 01, 2011, 09:32:38 PM  #18

Bitcoin-specific ASICs seem a bit of a stretch at this point, but I wonder whether something like this could be used:
http://www.cast-inc.com/ip-cores/encryption/sha-256/cast_sha256-a.pdf

Anyone have an idea how to interpret the performance numbers and how they compare to our GPUs?

Bad.  Very bad, actually.  The throughput is in Mbps, but SHA-256 operates on 512-bit message blocks and a bitcoin solution takes two hashes, so Mbps/1024 ~= MH/s.

For their example FPGA that works out to the ballpark of ~1 MH/s.  The FPGA developers on this forum have achieved far more.
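A quick sanity check of that rule of thumb in Python (the 1024 figure is the thread's approximation: two 512-bit SHA-256 block compressions per nonce attempted; real miners also exploit midstate caching, which this ignores):

Code:
# Convert a SHA-256 core's datasheet throughput (Mbps) into an
# approximate Bitcoin hash rate (MH/s).

BITS_PER_ATTEMPT = 512 * 2   # two SHA-256 compressions per nonce tried

def mbps_to_mhs(throughput_mbps: float) -> float:
    return throughput_mbps / BITS_PER_ATTEMPT

print(mbps_to_mhs(1000.0))   # a ~1 Gbps core gives roughly 0.98 MH/s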

As for ASICs being a stretch... it all depends on how big Bitcoin gets.  If it gets large enough (say, 10% of PayPal's transaction volume) then someone will develop ASICs.  Likely, though, they won't be for sale and will be retained as a competitive advantage.  Imagine someone like Amazon installing a bitcoin-hashing card (based on a custom-built ASIC) into 100K computers in their cloud.  They could still run virtualized instances on demand, but the same host would also mine 24/7 on highly efficient hardware.

DeathAndTaxes (Gerald Davis) | Donator | Legendary | Activity: 1218 | Merit: 1079
October 01, 2011, 09:34:51 PM  #19

My understanding is that this kind of change would require the agreement of the majority of the computing power of the BTC network, and so is next to impossible? Am I mistaken that the Bitcoin protocol was built explicitly so that these things are difficult to change, not easy?

Exactly.  What makes it even less likely than other changes is that GPUs hold the majority of hashing power.  Thus GPU miners would in essence have to vote against their own interests.
kjj | Legendary | Activity: 1302 | Merit: 1025
October 01, 2011, 10:13:59 PM  #20

Bitcoin-specific ASICs seem a bit of a stretch at this point, but I wonder whether something like this could be used:
http://www.cast-inc.com/ip-cores/encryption/sha-256/cast_sha256-a.pdf

Anyone have an idea how to interpret the performance numbers and how they compare to our GPUs?

Bad.  Very bad, actually.  The throughput is in Mbps, but SHA-256 operates on 512-bit message blocks and a bitcoin solution takes two hashes, so Mbps/1024 ~= MH/s.

For their example FPGA that works out to the ballpark of ~1 MH/s.  The FPGA developers on this forum have achieved far more.

It is even worse than you think.  All commercial SHA coprocessors are built for streaming, which doesn't work for bitcoin.

sharky112065 | Sr. Member | Activity: 383 | Merit: 250
October 01, 2011, 11:37:50 PM  #21

I do not know whether he had thought about GPUs or FPGAs for mining, but he did put in difficulty adjustments, so I'm pretty sure he thought about better hardware coming out in the future.

mrb | Legendary | Activity: 1512 | Merit: 1027
October 02, 2011, 01:20:29 AM (last edit: October 02, 2011, 07:42:50 PM by mrb)  #22

Thanks - very revealing quote there. My reading is that he became aware of the GPU problem after the horse had already bolted, and was thus appealing for a "gentleman's agreement" over what he might otherwise have prevented in the design from the outset.

Unlikely. You may have discovered the power of GPUs for the first time with Bitcoin, but when Bitcoin was being designed, the cryptographic community was already very well aware of their computing power for ALU-bound workloads like the SHA-256 that mining is based on.

I don't understand why you consider GPUs a "problem" - presumably because the average user without a GPU farm cannot compete. The average user, by definition, owns an average PC that will run workloads at average performance. It is inevitable that specialized hardware will appear and significantly outperform him. Even if mining were done with an algorithm designed to be costly to implement in hardware (like Tenebrix, which uses scrypt), miners would still evolve toward specialized hardware such as many-core server farms.
worldinacoin | Hero Member | Activity: 756 | Merit: 500
October 02, 2011, 01:22:40 AM  #23

I don't think he made a mistake; hindsight is always 20/20.
d.james | Sr. Member | Activity: 280 | Merit: 250
October 02, 2011, 01:42:16 AM  #24

GPU mining helped the BTC price go way up.

memvola | Hero Member | Activity: 938 | Merit: 1002
October 02, 2011, 01:49:11 AM  #25

You're ignoring capital expenditures. At-home miners leverage existing hardware, giving them a huge advantage over specialized businesses.

Thanks - yes, I had forgotten this entirely. Factoring it in, I'm guessing that a sufficiently well-designed CPU-friendly currency could make commercial mining entirely unviable.

The thing is, I just don't see how such a design is possible. Eventually the need for power efficiency will cross the barrier and investments will be made. With GPUs, we had something ordinary computer users could use to get in with little investment; arguably that's no longer the case now. With GPU-hostile algorithms you actually raise the investment required to gain a huge advantage, but once that barrier is crossed, no hobbyist will be able to mine for profit. If you can assume there is an algorithm that is most efficient only on ordinary PCs, then I agree, but that's a very bold assertion.

My take on the matter is that by the point the casual Internet user begins using Bitcoin for payments, mining - and even running a node - will be out of the question for the ordinary PC, as long as you keep the proof-of-work and blockchain concepts. We could include an additional, alternative CPU-friendly proof-of-work protocol in Bitcoin, but I don't think it would help in the long run. At least the current developers were aware of this, judging from the wiki entries on future obstacles, and I'd assume Satoshi was too. Bitcoin will scale, but the ordinary user will connect to the network with lighter protocols and obtain bitcoins by conventional means.

The PC is dying anyway... Wink

(Edit: Also, arguably, mining is something never to be mentioned in introductions to Bitcoin. IMHO it has always had a bad effect on the overall publicity.)
Meni Rosenfeld | Donator | Legendary | Activity: 2058 | Merit: 1054
October 02, 2011, 05:40:53 AM  #26

If this is agreed to be a problem, the hashing function can be changed to something for which ordinary PCs are more suitable, and it's possible this was Satoshi's plan.
My understanding is that this kind of change would require the agreement of the majority of the computing power of the BTC network, and so is next to impossible?
This is a common misconception. Miners are the slaves, not the masters. Protocol changes primarily require the agreement of the majority of the economic power of the network. In theory, if everyone who actually uses Bitcoin - exchanges, eWallets, merchants, end users - switched one day, miners could keep mining on the old protocol all they want, but their blocks would be rejected by the network and be worthless.

In practice, having a majority of current miners leave would create a shock in the generation rate, and the difficulty-adjustment algorithm doesn't handle such things well. But changing the hashing algorithm is actually easier in this regard, because you'd need to hardcode a new difficulty value anyway. There could be a brief period of vulnerability while people adjust to the idea of being able to mine on their CPUs.

Am I mistaken that the Bitcoin protocol was built explicitly so that these things are difficult to change, not easy?
Oh, I didn't say it would be easy. But the protocol was designed so that things that should be changed can be changed, if there is a good enough reason and someone makes a case for them and achieves consensus. The only thing which is fundamental and must never be changed is the coin-generation schedule. And that's not because of anything technically special about that part; it's just that nobody in their right mind would agree to change it.

Exactly.  What makes it even less likely than other changes is that GPUs hold the majority of hashing power.  Thus GPU miners would in essence have to vote against their own interests.
Like I said, miners aren't all-powerful. They do have some bargaining power, which is more reason to make any necessary changes sooner rather than later. The change can be gradual, so that people who are already invested won't lose their investment entirely.

Meni Rosenfeld | Donator | Legendary | Activity: 2058 | Merit: 1054
October 02, 2011, 05:48:31 AM (last edit: October 02, 2011, 07:19:00 AM by Meni Rosenfeld)  #27

Not only did Satoshi plan for an increasing amount of power being thrown at mining, he also knew that GPUs would excel at it, well before any GPU miner even existed - in December 2009:

We should have a gentleman's agreement to postpone the GPU arms race as long as we can for the good of the network.  It's much easier to get new users up to speed if they don't have to worry about GPU drivers and compatibility.  It's nice how anyone with just a CPU can compete fairly equally right now.

Thanks - very revealing quote there. My reading is that he became aware of the GPU problem after the horse had already bolted, and was thus appealing for a "gentleman's agreement" over what he might otherwise have prevented in the design from the outset.
Did you read the quote in context? (The quote header is a link.) The immediately preceding comment says

Suggestion :
Since the coins are generated faster on fast machines, many people will want to use their GPU power to do this, too.
So, my suggestion is to implement a GPU-computing support using ATI Stream and Nvidia CUDA.
So this means GPU mining still hadn't been implemented (and I think this remained the case for another year). And, as I and others said, GPGPU is old news (and was probably foreseen well before the popular implementations).

Meni Rosenfeld | Donator | Legendary | Activity: 2058 | Merit: 1054
October 02, 2011, 05:58:21 AM  #28

The thing is, I just don't see how such a design is possible.
Did you read the scrypt paper (linked by the alt currency Tenebrix)? It sounds promising. In the end it's a numbers game, and I don't know what the numbers will be. But I think that if the Bitcoin economy as a whole decides it doesn't want specialized mining, it can be done - for example, by deciding to change the hashing function every year, so that no one will want to design chips that will expire by the time they are out.

memvola | Hero Member | Activity: 938 | Merit: 1002
October 02, 2011, 06:36:23 AM  #29

Did you read the scrypt paper (linked by the alt currency Tenebrix)? It sounds promising. In the end it's a numbers game, and I don't know what the numbers will be. But I think that if the Bitcoin economy as a whole decides it doesn't want specialized mining, it can be done - for example, by deciding to change the hashing function every year, so that no one will want to design chips that will expire by the time they are out.

Yes, this would really eliminate specialized mining. I guess you could also have two or more hashing functions running concurrently. You'd still need to update all clients, but the switch would not be so dramatic from the miners' perspective (e.g. GPUs would still generate, but CPU mining would now be more profitable).

I had tested scrypt, but I guess I'll need to look at the paper more closely. Thanks for the link.
Meni Rosenfeld | Donator | Legendary | Activity: 2058 | Merit: 1054
October 02, 2011, 06:51:00 AM  #30

I guess you could also have two or more hashing functions running concurrently.
Yes, this is what I alluded to when I said that "the change can be gradual", but it can only be done for a transitional period, because you need to hardcode how the difficulties of the two hashing functions relate. Either you have a shared difficulty and hardcode the ratio of the targets, or you have separate difficulties and hardcode the difficulty-adjustment algorithm to target a certain percentage of the total blocks coming from each method. A "gradual" change would be starting with 100% old method, 0% new method, and slowly shifting to 0% old, 100% new.

But this is only required when we make changes that obsolete existing investments. It will be necessary when we first adopt a "change every year" strategy, but once that's adopted there's no need for a gradual transition each year, because there will be no investment in hardware specific to the function of the year.
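A minimal sketch of the separate-difficulties variant described above (an assumed design for illustration, not an actual Bitcoin proposal): each function is retargeted so that its observed share of recent blocks tracks the scheduled share.

Code:
# Retarget one proof-of-work function's difficulty so its share of the
# last window's blocks tracks a scheduled share, clamped to a 4x move
# like Bitcoin's own retarget rule. Illustrative only.

def retarget(difficulty: float, observed_share: float,
             target_share: float) -> float:
    factor = observed_share / target_share
    return difficulty * min(max(factor, 0.25), 4.0)

# Midway through a transition: the old function is scheduled for 40% of
# blocks but is still finding 70% of them, so its difficulty rises.
print(retarget(1_000_000.0, observed_share=0.70, target_share=0.40))
# -> 1750000.0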

Meni Rosenfeld | Donator | Legendary | Activity: 2058 | Merit: 1054
October 02, 2011, 07:14:18 AM  #31

I don't understand why you consider GPUs a "problem" - presumably because the average user without a GPU farm cannot compete.
If the vision is that Bitcoin will be completely decentralized and that everyone will mine (and I'm not saying this has to be the vision), then this is a problem, regardless of whether it is an inevitable one.

The average user, by definition, owns an average PC that will run workloads at average performance. It is inevitable that specialized hardware will appear and significantly outperform him. Even if mining were done with an algorithm designed to be costly to implement in hardware (like Tenebrix, which uses scrypt), miners would still evolve toward specialized hardware such as many-core server farms.
At-home miners use hardware they already purchased for other purposes; they have a place for their PC, with presumably sufficient cooling; and they can mine only when their PC is on anyway, so they pay only for the additional electricity of the CPU load, not to keep the computer running. Commercial miners need to pay for all of this plus other expenses; they take the risk that Bitcoin's value will decrease, that competition will increase, or that the hashing function will change and obsolete their investment; and they still need it to be profitable enough to be a viable business. For this, their hardware advantage needs to be huge.

How huge will it be? The idea with scrypt is that every core needs a certain amount of RAM, the exact amount being configurable. So those many-core servers will need lots of RAM to go along with them, which gets pretty expensive.
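For readers who haven't seen scrypt: a toy memory-hard proof-of-work in Python, in the spirit of the discussion above. The parameters and target are invented for illustration (they are not Tenebrix's actual ones); the point is that each attempt forces roughly 128*r*n bytes of RAM (~16 MiB here), so throughput is bounded by memory rather than by ALUs.

Code:
import hashlib

N, R, P = 2**14, 8, 1          # ~128*R*N = 16 MiB of RAM per hash
TARGET = 2**252                # toy target: ~1 in 16 attempts succeeds

def pow_ok(header: bytes, nonce: int) -> bool:
    digest = hashlib.scrypt(header + nonce.to_bytes(8, "little"),
                            salt=b"pow", n=N, r=R, p=P,
                            maxmem=64 * 1024 * 1024, dklen=32)
    return int.from_bytes(digest, "big") < TARGET

nonce = 0
while not pow_ok(b"toy block header", nonce):
    nonce += 1
print("found nonce", nonce)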

P4man | Hero Member | Activity: 518 | Merit: 500
October 02, 2011, 07:58:33 AM  #32

It is even worse than you think.  All commercial SHA coprocessors are built for streaming, which doesn't work for bitcoin.

Care to elaborate? What is streaming in this context?

Gladiator | Newbie | Activity: 42 | Merit: 0
October 02, 2011, 08:41:59 AM  #33

GPU mining is the only thing that saved Bitcoin from being overrun by botnets, and it is one of the big reasons for Bitcoin's growth.
GPU mining is much more scalable than CPU mining, and it attracts many more "investors".
So it's neither a mistake nor a bad thing.
FreeTrade (OP) | Legendary | Activity: 1428 | Merit: 1030
October 02, 2011, 09:25:07 AM  #34

So this means GPU mining still hadn't been implemented (and I think this remained the case for another year). And, as I and others said, GPGPU is old news (and was probably foreseen well before the popular implementations).

My reading is that he realised the error after it was too late to fix easily (a big block chain already existing), but before any GPU miners had come online - thus the call for a gentleman's agreement.

finway | Hero Member | Activity: 714 | Merit: 500
October 02, 2011, 09:41:53 AM  #35

Like Gavin said, the problem is message relaying.

teknohog | Sr. Member | Activity: 519 | Merit: 252
October 02, 2011, 09:59:32 AM  #36

All commercial SHA coprocessors are built for streaming, which doesn't work for bitcoin.
Care to elaborate? What is streaming in this context?

I believe it means they are built for hashing huge sets of data. Latency is not a problem, since you have to wait for the entire incoming data anyway. However, Bitcoin miners hash lots of small packets, and the I/O latency becomes important. In FPGA miners this is solved by generating new datasets inside the chip (by incrementing the nonce), and only getting external data in (from bitcoin getwork) every few seconds.
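The nonce-scanning loop being described, as a small Python sketch (toy difficulty check; real Bitcoin compares the double-SHA-256 result against a 256-bit target, and the header prefix comes from getwork):

Code:
import hashlib

def scan_nonces(header76: bytes, zero_bytes: int = 2) -> int:
    """Keep the 76-byte header prefix fixed and scan the 4-byte nonce
    locally, double-SHA-256ing each candidate, so no per-attempt I/O
    is needed."""
    prefix = b"\x00" * zero_bytes
    for nonce in range(2**32):
        header = header76 + nonce.to_bytes(4, "little")
        digest = hashlib.sha256(hashlib.sha256(header).digest()).digest()
        if digest[:zero_bytes] == prefix:   # toy stand-in for the target
            return nonce
    raise RuntimeError("nonce space exhausted; fetch new work")

print(scan_nonces(b"\x00" * 76))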

Prattler | Full Member | Activity: 192 | Merit: 100
October 02, 2011, 10:16:51 AM  #37

GPU mining is the only thing that saved Bitcoin from being overrun by botnets, and it is one of the big reasons for Bitcoin's growth.
GPU mining is much more scalable than CPU mining, and it attracts many more "investors".
So it's neither a mistake nor a bad thing.

I agree. We're talking money! It doesn't have to be fair, it has to be secure! GPU mining only increases security, and this will not change in the near future. If anything, there should be anti-FPGA and anti-ASIC protocols, but GPU is more than fine!
teknohog | Sr. Member | Activity: 519 | Merit: 252
October 02, 2011, 10:42:57 AM  #38

It doesn't have to be fair, it has to be secure! GPU mining only increases security, and this will not change in the near future.

Agreed!

Quote
If anything, there should be anti-FPGA and anti-ASIC protocols, but GPU is more than fine!

Not agreeing with this one, but perhaps that's just because I mine with FPGAs Wink

First of all, FPGAs are more power-efficient than GPUs: GPUs get about 2 Mhash/J, but good FPGAs get about 20. Getting the same work done with less wasted energy (and environmental impact) should be the goal of any sane person. Of course, the initial cost of FPGAs is a problem, and GPUs are better in the short run.

Secondly, if the algorithm really only works on a CPU, then I'll download a CPU design onto the FPGA and keep on mining Wink Though the point of reconfigurable hardware is that you'll almost always find a more efficient implementation than copying the original hardware.

As for ASICs, I think we should be open to changing the algorithm anyway - for example, if SHA-2 is broken - so fixed-function chips would not make sense in the long run.
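Those Mhash/J figures translate into running costs like so (Python; the electricity price and the 400 MH/s rig size are assumptions for illustration):

Code:
# Daily electricity cost of a fixed hash rate at a given efficiency.

PRICE_PER_KWH = 0.15   # USD, hypothetical

def usd_per_day(hashrate_mhs: float, mhash_per_joule: float) -> float:
    watts = hashrate_mhs / mhash_per_joule       # J/s drawn
    return watts * 24 / 1000.0 * PRICE_PER_KWH   # kWh/day * price

for name, eff in (("GPU", 2.0), ("FPGA", 20.0)):
    print(f"{name}: ${usd_per_day(400.0, eff):.2f}/day at 400 MH/s")
# GPU: $0.72/day    FPGA: $0.07/day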

Etlase2 | Hero Member | Activity: 798 | Merit: 1000
October 02, 2011, 01:59:10 PM  #39

First of all, FPGAs are more power-efficient than GPUs: GPUs get about 2 Mhash/J, but good FPGAs get about 20. Getting the same work done with less wasted energy (and environmental impact) should be the goal of any sane person. Of course, the initial cost of FPGAs is a problem, and GPUs are better in the short run.

You say this but don't understand the real-world relevance. Bitcoins have most certainly gone up in price due to GPU mining and its lack of energy efficiency. The more energy put into the system, the more the coins cost to make, and the higher the bitcoin price is. If less energy is wasted, more people mine because it is profitable - for now, of course. But the more demand is created, the more tempted coin hoarders are to sell and waste your effort. Such a self-sustaining system.

DeathAndTaxes (Gerald Davis) | Donator | Legendary | Activity: 1218 | Merit: 1079
October 02, 2011, 02:33:27 PM  #40

You say this but don't understand the real-world relevance. Bitcoins have most certainly gone up in price due to GPU mining and its lack of energy efficiency. The more energy put into the system, the more the coins cost to make, and the higher the bitcoin price is. If less energy is wasted, more people mine because it is profitable - for now, of course. But the more demand is created, the more tempted coin hoarders are to sell and waste your effort. Such a self-sustaining system.

Who cares?

I mean, it might hurt you personally, but bitcoin doesn't need to have any set price.  If the price were stable it wouldn't matter whether BTC was $10,000 per coin or $0.01 per coin.  GPU miners would be crushed if the price of bitcoin (relative to difficulty) fell 90%, but the economy would work at any exchange rate.
Meni Rosenfeld | Donator | Legendary | Activity: 2058 | Merit: 1054
October 02, 2011, 03:08:35 PM  #41

Bitcoins have most certainly gone up in price due to GPU mining and its lack of energy efficiency. The more energy put into the system, the more the coins cost to make, and the higher the bitcoin price is.
No. The causal chain is Market => Price => Total mining reward => Incentive to mine => Number of miners => Difficulty => Cost to mine. The other direction - the direct influence of the specifics of mining on the coin price - is negligible. If the cost per hash were lower, there would simply be more hashes and a larger difficulty, maintaining market equilibrium (though there are indirect effects via network security, popularity, etc.).
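A toy model of that causal chain in Python (all numbers invented): the price fixes the total daily reward, and hashpower enters until marginal cost eats it, so efficiency gains are absorbed by difficulty rather than by the price.

Code:
BLOCKS_PER_DAY = 144
REWARD_BTC = 50          # per block, 2011-era subsidy

def equilibrium_hashrate(price_usd: float, cost_per_ghs_day: float) -> float:
    """GH/s at which total daily mining cost equals total daily reward."""
    daily_reward_usd = BLOCKS_PER_DAY * REWARD_BTC * price_usd
    return daily_reward_usd / cost_per_ghs_day

# Halving the cost of running 1 GH/s for a day doubles the equilibrium
# hashrate (and hence difficulty); the price input is untouched.
print(equilibrium_hashrate(5.0, 2.0))   # 18000.0 GH/s
print(equilibrium_hashrate(5.0, 1.0))   # 36000.0 GH/s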

Dirt Rider | Member | Activity: 111 | Merit: 10
October 02, 2011, 05:53:01 PM  #42

Maybe I am looking at this wrong, but let's say for a minute that it were somehow possible to forever limit mining to CPUs (which I don't believe it is, but that's another discussion)...

Wouldn't eventually pretty much the same thing happen? It would become profitable only for those with the most (and fastest) CPUs and the resources needed to support them (electricity, etc.) at the lowest costs. So the average person with one or two average computers in their house would eventually not be able to profit?

And even if there isn't specialized hardware available to those with the know-how and resources to use it, there would still be specialized setups/datacenters (like mine in my basement, lol), and enough of them that the same end result would happen as we have now with GPUs - and will have with some other technology in the future that we might not even have considered yet.

Am I wrong in thinking that as long as there is profit to be made, there will likely be many people competing for a piece of it, which will always and automatically lead to those with the skills and resources getting the profits, regardless of any limitations on what technology can be used?
P4man | Hero Member | Activity: 518 | Merit: 500
October 02, 2011, 05:58:16 PM  #43

Maybe I am looking at this wrong, but let's say for a minute that it were somehow possible to forever limit mining to CPUs (which I don't believe it is, but that's another discussion)...

Wouldn't eventually pretty much the same thing happen? It would become profitable only for those with the most (and fastest) CPUs and the resources needed to support them (electricity, etc.) at the lowest costs. So the average person with one or two average computers in their house would eventually not be able to profit?

And even if there isn't specialized hardware available to those with the know-how and resources to use it, there would still be specialized setups/datacenters (like mine in my basement, lol), and enough of them that the same end result would happen as we have now with GPUs - and will have with some other technology in the future that we might not even have considered yet.

Am I wrong in thinking that as long as there is profit to be made, there will likely be many people competing for a piece of it, which will always and automatically lead to those with the skills and resources getting the profits, regardless of any limitations on what technology can be used?

That's pretty much correct, although there are some nuances. For instance, an individual who already owns a (gaming) PC may not have to factor in the cost of hardware; a "professional" will have to. The same applies to electricity: an individual only has to account for the marginal electricity consumption caused by mining (for the time his machine would be powered on anyway), while a pro has to account for the entire power consumption.

But yes, by and large, eventually mining will only be (marginally) profitable for those who can achieve the lowest cost per MH.

pekv2 | Hero Member | Activity: 770 | Merit: 502
October 02, 2011, 08:04:50 PM  #44

Probably not - just like F@H, it was all CPU until they found a way to fold with GPUs, which was much faster, just like bitcoin mining.
Gabi | Legendary | Activity: 1148 | Merit: 1008
October 02, 2011, 08:55:26 PM  #45

Yes, but F@H uses computing power to do something, while for Bitcoin it doesn't matter whether mining is CPU-only or GPU-only, since the difficulty will simply adjust to the increased computing power.

DeathAndTaxes (Gerald Davis) | Donator | Legendary | Activity: 1218 | Merit: 1079
October 02, 2011, 09:33:53 PM  #46

Making a cryptocurrency GPU-immune is a stupid and naive goal.

1) It still won't be "fair".  Sure, if you can only use CPUs, the traditional mining farm goes kaput. It still doesn't give the average user an "equal share". What about IT department managers who have access to thousands of CPUs? They dwarf the returns an "average" user could ever make. You simply substitute one king of the hill for another.

2) It makes the currency very, very vulnerable to botnets.  The largest botnet (Storm Trojan) has roughly 230,000 computers under its control. It could instantaneously fork/control/double-spend any such cryptocurrency. There are far fewer computers with high-end GPUs, they are more detectable when compromised, and they tend on average to be owned by more computer-savvy users, making an equally powerful GPU botnet a much harder thing to assemble.

3) If GPUs were dead, FPGAs would simply reign supreme.  CPUs are still very inefficient because they are a jack of all trades; that versatility means they don't excel at anything. If bitcoin or some other cryptocurrency were GPU-immune, large professional miners would simply use FPGAs and drive the price down below the electrical cost of CPU-based nodes. The bad news is that this would make the network even smaller and even more vulnerable to a botnet (whose owner doesn't really care about electrical costs).

4) Technology is always changing.  GPUs are becoming more and more "general purpose". It is entirely possible that code which runs inefficiently on today's GPUs will run much more efficiently on the next generation. So what are we going to do - scrap the block chain and start over every time there is an architectural change?

5) CPUs will become much more GPU-like in the future.  The idea of using multiple fully independent, redundant cores is highly inefficient (current Phenom and Core i designs). To maintain Moore's law, expect more designs like the AMD APU, which blends CPU and GPU elements. Another example is the PS3's Cell processor, with a single general-purpose core and 8 "simple" number-crunching cores. As time goes on these hybrid designs will become the rule rather than the exception. It would be very silly if a cryptocurrency were less efficient on future CPU designs than on current ones out of some naive goal of making it "GPU-proof".
Etlase2 | Hero Member | Activity: 798 | Merit: 1000
October 03, 2011, 02:47:27 AM  #47

No. The causal chain is Market => Price => Total mining reward => Incentive to mine => Number of miners => Difficulty => Cost to mine. The other direction - the direct influence of the specifics of mining on the coin price - is negligible. If the cost per hash were lower, there would simply be more hashes and a larger difficulty, maintaining market equilibrium (though there are indirect effects via network security, popularity, etc.).
The causal chain is Amount hoarded => Scarcity => Price => How much can I sell while leaving miners profitable enough to keep securing the network, so I can sell more later => Cost to mine.

Meni Rosenfeld | Donator | Legendary | Activity: 2058 | Merit: 1054
October 03, 2011, 05:08:43 AM  #48

profitable only for those with the most (and fastest) CPUs and the resources needed to support them (electricity, etc.)
It's not about quantity. Someone with 1000 CPUs will make x1000 the revenue, but with x1000 the cost. It's about efficiency - cost per bitcoin generated, where all costs are considered: electricity, hardware, maintenance... If I have just 1 CPU with the same efficiency, I can also profit.

those with the skills and resources getting the profits
Skills, resources and opportunities. Someone who has a computer he bought for other purposes, which happens to be able to mine, has an opportunity to profit. At-home miners have several other big advantages over dedicated businesses. If there's really no specialized hardware, all a business has is some extra technical knowledge and slightly better negotiated power prices, and it simply can't compete.


1) It still won't be "fair".  Sure, if you can only use CPUs, the traditional mining farm goes kaput. It still doesn't give the average user an "equal share". What about IT department managers who have access to thousands of CPUs? They dwarf the returns an "average" user could ever make. You simply substitute one king of the hill for another.
Nobody in this thread said it should be "fair". It's about making Bitcoin decentralized per the vision, and making it more secure (by making it more difficult for an attacker to build a dedicated cluster).

2) It makes the currency very, very vulnerable to botnets.  The largest botnet (Storm Trojan) has roughly 230,000 computers under its control. It could instantaneously fork/control/double-spend any such cryptocurrency. There are far fewer computers with high-end GPUs, they are more detectable when compromised, and they tend on average to be owned by more computer-savvy users, making an equally powerful GPU botnet a much harder thing to assemble.
Then solve that problem. Botnets are a potential problem now, but they will become less so as Bitcoin grows. In any case they seem like a challenge to overcome rather than a fatal flaw in CPU mining.

3) If GPUs were dead, FPGAs would simply reign supreme.  CPUs are still very inefficient because they are a jack of all trades; that versatility means they don't excel at anything. If bitcoin or some other cryptocurrency were GPU-immune, large professional miners would simply use FPGAs and drive the price down below the electrical cost of CPU-based nodes. The bad news is that this would make the network even smaller and even more vulnerable to a botnet (whose owner doesn't really care about electrical costs).
The point of CPU-friendly functions is RAM. With a given amount of RAM you can only run so many instances, so you're bound by your sequential speed. Unless FPGAs can achieve a big advantage over CPUs in this regard, they will simply be too expensive to be worthwhile.

4) Technology is always changing.  GPUs are becoming more and more "general purpose". It is entirely possible that code which runs inefficiently on today's GPUs will run much more efficiently on the next generation. So what are we going to do - scrap the block chain and start over every time there is an architectural change?
Who said anything about scrapping the block chain? We'd use the same block chain but decide that starting with block X a different hash function is used. And yes, having a policy for updating the hash function is good for this and other reasons.

5) CPUs will become much more GPU-like in the future.  The idea of using multiple fully independent, redundant cores is highly inefficient (current Phenom and Core i designs). To maintain Moore's law, expect more designs like the AMD APU, which blends CPU and GPU elements. Another example is the PS3's Cell processor, with a single general-purpose core and 8 "simple" number-crunching cores. As time goes on these hybrid designs will become the rule rather than the exception. It would be very silly if a cryptocurrency were less efficient on future CPU designs than on current ones out of some naive goal of making it "GPU-proof".
Again, RAM. You can choose a hash function which requires 2GB of RAM per instance. Then the amortized cost of the CPU time will be negligible, and your computing rate is determined strictly by your available RAM and sequential speed.

TiagoTiago | Hero Member | Activity: 616 | Merit: 500
October 03, 2011, 06:37:37 AM  #49

Wouldn't trying to keep changing things all the time result in fragmentation of the network, as bunches of people get too lazy - or are simply not computer-savvy enough to feel comfortable constantly upgrading - and stay with their old stuff?

Meni Rosenfeld | Donator | Legendary | Activity: 2058 | Merit: 1054
October 03, 2011, 08:22:43 AM (last edit: October 03, 2011, 11:57:55 AM by Meni Rosenfeld)  #50

Wouldn't trying to keep changing things all the time result in fragmentation of the network, as bunches of people get too lazy - or are simply not computer-savvy enough to feel comfortable constantly upgrading - and stay with their old stuff?
Upgrading the client every so often is good practice anyway. If the big players agree to the change, everyone else will just have to follow. Those who can't be bothered to keep up are better off using an eWallet rather than a client. It's not essential that we actually change the hashing function frequently, only that we are prepared for the contingency. The "change every year" plan was just an example of how we could prevent specialization if we really wanted to and all else failed.

DeathAndTaxes (Gerald Davis) | Donator | Legendary | Activity: 1218 | Merit: 1079
October 03, 2011, 12:46:39 PM  #51

Basing it on RAM is even more foolish.

While most consumer-grade hardware only supports ~16GB per system, and the average computer likely has ~4GB, there already exist specialized motherboards which support up to 16TB per system. That would give a commercial miner 4000x the hashing power of an average node. A commercial miner is always going to be able to pick the right hardware to maximize yield; limiting the hashing algorithm by RAM wouldn't change that.

BTW, 2GB would be a poor choice, as many GPUs now have 2GB, so the entire solution set could fit in video RAM, and GDDR5 is many times faster than DDR3 (desktop RAM).

There is no need for everyone to be hashing.  As long as the nodes are sufficiently decentralized, there is no need for them to be completely decentralized.

Also, it is unlikely you are going to achieve that level of decentralization anyway. Currently hashing is worth ~$60,000 per day. If you have 1,000 nodes, the average node has a gross revenue of $60 per day. With 100K nodes it is $0.60 per day.

Given that botnets have up to 230K zombie CPUs, to defeat botnets by numerical superiority you would need 230K+ nodes, making the average yield ~$0.26 per day before electrical costs. Most people aren't going to hash for pennies per day while paying massive electrical costs.
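The division behind those figures (Python; the ~$60,000/day total is the post's own estimate, and the split assumes equal nodes and ignores electricity):

Code:
DAILY_REWARD_USD = 60_000   # estimated total daily mining revenue

for nodes in (1_000, 100_000, 230_000):
    print(f"{nodes:>7} nodes -> ${DAILY_REWARD_USD / nodes:.2f}/day each")
#   1000 nodes -> $60.00/day each
# 100000 nodes -> $0.60/day each
# 230000 nodes -> $0.26/day each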

This idea that wide acceptance of hashing is a requirement for wide acceptance of usage is flawed.  How many people run a VISA or PayPal processing node?  What is the ratio of end users to processors?

Sure, we don't want a monopoly, but as long as no entity achieves a critical mass, we don't need 200K+ nodes either. If you are worried about the strength of the network, a better change would be one which has gradually decreasing efficiency as a pool gets larger, i.e. a non-linear relationship between hashing power and pool size. This would cause pools to stabilize at a sweet spot that minimizes variance and minimizes the effect of the non-linear hashing relationship. Rather than deepbit having 50%, the next 10 pools having 45%, and everyone else making up 5%, you would likely see the top 20 pools having on average 4% of network capacity.

I am not saying we need to or should do that, but it would attack the real problem, not the flawed belief that GPUs make the network weaker. GPUs make the network very botnet-resistant.  Bitcoin would likely have been destroyed by botnets already (out of spite, or simply to see if they could) had it not been for the rise of specialized (i.e. GPU) mining.
Meni Rosenfeld | Donator | Legendary | Activity: 2058 | Merit: 1054
October 03, 2011, 01:08:15 PM  #52

Basing it on RAM is even more foolish.

While most consumer grade hardware only supports ~16GB per system and the average computer has likely ~4GB there already exists specialized motherboards which support up to 16TB per system.  This would give commercial miner 4000x the hashing power of average node.  A commercial miner is always going to be able to pick the right hardware to maximize yield.  Limiting the hashing algorithm by RAM wouldn't change that.
And they get this 16TB of RAM for free? RAM is expensive, and the kind of RAM usually used on servers is more expensive than consumer RAM. And again, even if they manage to make it a bit more efficient it's not close to competing with already having a computer.

BTW 2GB would be a poor choice as many GPU now have 2GB thus the entire solution set could fit in videoram and GDDR5 is many magnitudes faster than DDR3 (desktop ram).
You need 2GB per instance. You can't parallelize within this 2GB to bring all the GPU's ALUs to bear. GPU computation and GPU memory are very parallel but not serially fast; this takes away their advantage.
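
To make the serial-dependence point concrete, here is a toy Python sketch (mine - not scrypt or any real memory-hard function, and with the table shrunk for illustration where a real design would size it at ~2GB):

Code:
import hashlib

def memory_bound_hash(seed: bytes, mem_words: int = 1 << 16) -> bytes:
    """Toy memory-bound PoW: fill a table, then walk it in a
    data-dependent order.  Each step depends on the previous hash,
    so the work cannot be spread across a GPU's thousands of ALUs
    without giving every instance its own full-size table."""
    h = hashlib.sha256(seed).digest()
    table = []
    for _ in range(mem_words):                    # fill phase
        h = hashlib.sha256(h).digest()
        table.append(h)
    for _ in range(mem_words):                    # data-dependent walk
        idx = int.from_bytes(h[:8], 'big') % mem_words
        h = hashlib.sha256(h + table[idx]).digest()
    return h

print(memory_bound_hash(b'example nonce').hex())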

Sure we don't want a monopoly, but as long as no entity achieves a critical mass we also don't need 200K+ nodes.  If you are worried about the strength of the network, a better change would be one which has a gradually decreasing efficiency as a pool gets larger, i.e. a non-linear relationship between hashing power and pool reward.  This would cause pools to stabilize at a sweet spot that balances reduced variance against the efficiency penalty.  Rather than deepbit having 50%, the next 10 pools having 45%, and everyone else making up 5%, you would likely see the top 20 pools having on average 4% of network capacity.
There's no need for that, the "deepbit security problem" is only because of an implementation detail. Currently the pool both handles payments and generates getwork, but there's no need for this to be the case. In theory miners can generate work themselves or get it from another node and still mine for the pool. Also, things like p2pool (as a substrate for proxy pools) can do away with the need for giant centralized pools to reduce variance.

1EofoZNBhWQ3kxfKnvWkhtMns4AivZArhr   |   Who am I?   |   bitcoin-otc WoT
Bitcoil - Exchange bitcoins for ILS (thread)   |   Israel Bitcoin community homepage (thread)
Analysis of Bitcoin Pooled Mining Reward Systems (thread, summary)  |   PureMining - Infinite-term, deterministic mining bond
FreeTrade (OP)
Legendary
*
Offline Offline

Activity: 1428
Merit: 1030



View Profile
October 03, 2011, 01:20:07 PM
 #53

GPUs make the network very botnet resistant.  Bitcoin likely would have been destroyed by botnets already (whether out of spite or simply to see if they could) had it not been for the rise of specialized (i.e. GPU) mining.

This idea that specialized mining may defend the Bitcoin network from botnets might have merit.

I wonder if it might be possible to have the best of both worlds - where specialist mining makes commercial sense and casual CPU miners can also convert electricity to cryptocurrency at a rate that isn't prohibitive.

I'd guess you'd need to see a ratio of approximately 1.5:1 (efficiency on specialist hardware : general CPU) for both to co-exist.

Membercoin - Layer 1 Coin used for the member.cash decentralized social network.
10% Interest On All Balances. Browser and Solo Mining. 100% Distributed to Users and Developers.
DeathAndTaxes
Donator
Legendary
*
Offline Offline

Activity: 1218
Merit: 1079


Gerald Davis


View Profile
October 03, 2011, 01:35:47 PM
Last edit: October 03, 2011, 01:56:30 PM by DeathAndTaxes
 #54

RAM is actually incredibly cheap.  2GB costs ~$10 and that price will halve in 12 months.  While server RAM is more expensive, it is much cheaper than multiple complete computer systems.  For example, 1TB of FB-RDIMM runs ~$30K.  While that is some serious coin, it is enough for 500 instances.  $30K is far cheaper than 500 computers.  The per-instance cost would only be $60.  As a % of overall computer cost, RAM has been falling for the last two decades.

As a commercial miner I would love your "solution".  I could replace my entire jury-rigged GPU farm with one rack of high-density servers and put them in a co-location cage.  The largest risk for me wouldn't be legitimate nodes, it would be botnets.  It is hard to compete with $0 cost.

Still I don't see what "problem" a CPU-only work unit solves.  If anything it makes the network MORE vulnerable to botnets.  Most users will simply not hash if the reward is ~$2 per year.  So you put a limit on how decentralized the network will become.  Those users will likely join pools, so pools, which are already the largest threat to decentralization, still remain an issue.  The network will never be decentralized enough to be immune to botnets.  My prediction is that within a year the current CPU-only alt cryptocurrency (can't remember the name) will be dead or forked due to the ease of taking over the network.

So looking at CPU-only vs. an open network (CPU, GPU, FPGA, ASIC, etc):
1) it is more decentralized - dubious value, see the next points

2) highly vulnerable to botnets.  Even 100K "valid nodes" would easily be crushed by the Storm trojan (230K controlled nodes on average). A smaller network could be crushed within days.  Even if there wasn't a financial incentive, someone could hash the difficulty up to 1000x the current level and then leave, letting the network fail.

3) unlikely to become "super decentralized" due to lack of financial incentive (most users won't hash 24/7 to earn $2 per year).   There are many more potential users of bitcoin than potential hashers.  Most people just want something with low fees they can use to safely buy and sell stuff.  They have no interest in becoming a payment processing node.

4) Still possible for commercial miners to game the system by exploiting whatever combination of CPU/memory gives them the highest return

5) Pools still remain the largest vulnerability. 

So what exactly does "CPU only" hashing achieve other than decentralization for its own sake?  Sure, I grant you a CPU-only network would be more decentralized than an open network, however I argue that the amount of decentralization you would gain solves nothing and makes the entire network more vulnerable.
Meni Rosenfeld
Donator
Legendary
*
Offline Offline

Activity: 2058
Merit: 1054



View Profile WWW
October 03, 2011, 01:36:53 PM
Last edit: October 03, 2011, 01:50:02 PM by Meni Rosenfeld
 #55

GPUs make the network very botnet resistant.  Bitcoin likely would have been destroyed by botnets already (whether out of spite or simply to see if they could) had it not been for the rise of specialized (i.e. GPU) mining.

This idea that specialized mining may defend the Bitcoin network from botnets might have merit.

I wonder if it might be possible to have the best of both worlds - where specialist mining makes commercial sense and casual CPU miners can also convert electricity to cryptocurrency at a rate that isn't prohibitive.

I'd guess you'd need to see a ratio of approximately 1.5:1 (efficiency on specialist hardware : general CPU) for both to co-exist.

You could do something like that. Have two kinds of blocks, call them "CPU blocks" and "GPU blocks", each using a hashing function with corresponding friendliness. They're mostly equivalent for purposes of confirming transactions and they exist on the same chain (a GPU block can reference a CPU block, etc.). Each type will have its own difficulty, where the difficulties are targeted so that 50% of blocks are GPU and 50% are CPU. There is an additional rule that a block is invalid if the last 8 blocks were the same type as it. So a botnet dominating the CPU blocks or a cluster dominating the GPU blocks can't do much because it can't generate a branch longer than 8 blocks (if someone manages to do both there's still a problem). You can change the recommended waiting period from 6 to 10 blocks.
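
In Python (my sketch; it assumes each block carries a type tag) the extra validity rule would be something like:

Code:
def type_rule_ok(prior_types, new_type, max_run=8):
    """Reject a block if the previous max_run blocks were all of the
    same type as it, so neither CPU nor GPU miners can extend a
    one-sided branch indefinitely."""
    recent = prior_types[-max_run:]
    return not (len(recent) == max_run and all(t == new_type for t in recent))

print(type_rule_ok(['CPU'] * 8, 'CPU'))   # False - the run must break
print(type_rule_ok(['CPU'] * 8, 'GPU'))   # True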


Those users will likely join pools, so pools, which are already the largest threat to decentralization, still remain an issue.
I already explained that pools are not a threat to decentralization going forward.

1EofoZNBhWQ3kxfKnvWkhtMns4AivZArhr   |   Who am I?   |   bitcoin-otc WoT
Bitcoil - Exchange bitcoins for ILS (thread)   |   Israel Bitcoin community homepage (thread)
Analysis of Bitcoin Pooled Mining Reward Systems (thread, summary)  |   PureMining - Infinite-term, deterministic mining bond
relmeas
Full Member
***
Offline Offline

Activity: 124
Merit: 100


View Profile
October 03, 2011, 06:03:41 PM
 #56

The flaw of the Bitcoin design is not the GPUs, it is the mining pools. They completely invalidate the initial assumption that every Bitcoin participant is contributing to the network's security. With the pools, only pool owners do. Currently the end miners don't need a Bitcoin client and can't even know for sure which network they are mining for...
Meni Rosenfeld
Donator
Legendary
*
Offline Offline

Activity: 2058
Merit: 1054



View Profile WWW
October 03, 2011, 06:19:30 PM
 #57

The flaw of the Bitcoin design is not the GPUs, it is the mining pools. They completely invalidate the initial assumption that every Bitcoin participant is contributing to the network's security. With the pools, only pool owners do. Currently the end miners don't need a Bitcoin client and can't even know for sure which network they are mining for...
I guess you need to use bold and all caps to be heard around here. So, for the third time,

POOLS ARE ONLY A SECURITY THREAT DUE TO AN IMPLEMENTATION DETAIL THAT CAN BE EASILY FIXED. IT IS NOT A FUNDAMENTAL PROBLEM WITH THE DESIGN.

1EofoZNBhWQ3kxfKnvWkhtMns4AivZArhr   |   Who am I?   |   bitcoin-otc WoT
Bitcoil - Exchange bitcoins for ILS (thread)   |   Israel Bitcoin community homepage (thread)
Analysis of Bitcoin Pooled Mining Reward Systems (thread, summary)  |   PureMining - Infinite-term, deterministic mining bond
memvola
Hero Member
*****
Offline Offline

Activity: 938
Merit: 1002


View Profile
October 03, 2011, 07:07:44 PM
 #58

The flaw of the Bitcoin design is not the GPUs, it is the mining pools. They completely invalidate the initial assumption that every Bitcoin participant is contributing to the network's security. With the pools, only pool owners do. Currently the end miners don't need a Bitcoin client and can't even know for sure which network they are mining for...

There's no need for that, the "deepbit security problem" is only because of an implementation detail. Currently the pool both handles payments and generates getwork, but there's no need for this to be the case. In theory miners can generate work themselves or get it from another node and still mine for the pool. Also, things like p2pool (as a substrate for proxy pools) can do away with the need for giant centralized pools to reduce variance.

Also, to make it easier, we could separate hashing pools from work servers. Pools get signed work units from work servers and pass work from a random source to each miner. Ordinary mining tools can be used, but to make sure the pool operator is honest, mining software can support requesting specific channels (round-robin style) or keep a list of signatures to verify received work units. This has other advantages, like allowing work servers to add a small fee of their own, which would motivate running persistent nodes once running a node becomes a professional business. Most work servers would be operated by respectable Bitcoin businesses (banks, etc.) that would benefit from the additional promotion, so fees would probably be close to 0 anyway.
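
As a toy of the verification step (mine; an HMAC stands in for the public-key signature a real work server would publish, so everything here is hypothetical):

Code:
import hashlib, hmac

# Stand-in only: a deployed scheme would use ECDSA with published
# work-server public keys, not a shared secret.
def sign_work(server_key: bytes, header: bytes) -> bytes:
    return hmac.new(server_key, header, hashlib.sha256).digest()

def miner_accepts(trusted_keys, header: bytes, sig: bytes) -> bool:
    """Miner verifies a received work unit against its list of
    trusted work servers before hashing it."""
    return any(hmac.compare_digest(sign_work(k, header), sig)
               for k in trusted_keys)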

You could do something like that. Have two kinds of blocks, call them "CPU blocks" and "GPU blocks", each using a hashing function with corresponding friendliness. They're mostly equivalent for purposes of confirming transactions and they exist on the same chain (a GPU block can reference a CPU block, etc.). Each type will have its own difficulty, where the difficulties are targeted so that 50% of blocks are GPU and 50% are CPU. There is an additional rule that a block is invalid if the last 8 blocks were the same type as it. So a botnet dominating the CPU blocks or a cluster dominating the GPU blocks can't do much because it can't generate a branch longer than 8 blocks (if someone manages to do both there's still a problem). You can change the recommended waiting period from 6 to 10 blocks.

I think you shouldn't artificially balance targets. They would both converge to a point where both are at the same level of profitability, regardless of the advances in technology. Requiring an alternate every n blocks makes sense though...
Meni Rosenfeld
Donator
Legendary
*
Offline Offline

Activity: 2058
Merit: 1054



View Profile WWW
October 03, 2011, 07:36:13 PM
 #59

You could do something like that. Have two kinds of blocks, call them "CPU blocks" and "GPU blocks", each using a hashing function with corresponding friendliness. They're mostly equivalent for purposes of confirming transactions and they exist on the same chain (a GPU block can reference a CPU block, etc.). Each type will have its own difficulty, where the difficulties are targeted so that 50% of blocks are GPU and 50% are CPU. There is an additional rule that a block is invalid if the last 8 blocks were the same type as it. So a botnet dominating the CPU blocks or a cluster dominating the GPU blocks can't do much because it can't generate a branch longer than 8 blocks (if someone manages to do both there's still a problem). You can change the recommended waiting period from 6 to 10 blocks.
I think you shouldn't artificially balance targets. They would both converge to a point where both are at the same level of profitability, regardless of the advances in technology. Requiring an alternate every n blocks makes sense though...
If you have blocks corresponding to two different hash functions, you have two options:
1. Have a single difficulty value, and hardcode a ratio between the hash targets. Then if you choose a wrong ratio one type will be more profitable than the other and be solely used.
2. Have two separate difficulty values, each computed by the time it took to find X blocks of a type compared to the desired time. To know what the desired time is you have to set what % of the blocks you want to be of this type. It needn't be 50/50 but that gives the most security for this application. Then the respective difficulties of the two types will converge to a point where they are equally profitable.
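
A sketch of option 2 in Python (mine), mirroring Bitcoin's retarget rule but scaling the desired time by the type's target share of blocks:

Code:
def retarget(old_difficulty, actual_seconds, blocks_found=2016,
             block_interval=600, type_share=0.5):
    """Per-type difficulty update: this type should contribute
    type_share of all blocks, i.e. one of its blocks every
    block_interval / type_share seconds on average."""
    desired_seconds = blocks_found * block_interval / type_share
    new_difficulty = old_difficulty * desired_seconds / actual_seconds
    # keep Bitcoin's 4x clamp per adjustment period
    return max(old_difficulty / 4, min(old_difficulty * 4, new_difficulty))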

1EofoZNBhWQ3kxfKnvWkhtMns4AivZArhr   |   Who am I?   |   bitcoin-otc WoT
Bitcoil - Exchange bitcoins for ILS (thread)   |   Israel Bitcoin community homepage (thread)
Analysis of Bitcoin Pooled Mining Reward Systems (thread, summary)  |   PureMining - Infinite-term, deterministic mining bond
DeathAndTaxes
Donator
Legendary
*
Offline Offline

Activity: 1218
Merit: 1079


Gerald Davis


View Profile
October 03, 2011, 07:53:59 PM
 #60

If you have blocks corresponding to two different hash functions, you have two options:
1. Have a single difficulty value, and hardcode a ratio between the hash targets. Then if you choose a wrong ratio one type will be more profitable than the other and be solely used.
2. Have two separate difficulty values, each computed by the time it took to find X blocks of a type compared to the desired time. To know what the desired time is you have to set what % of the blocks you want to be of this type. It needn't be 50/50 but that gives the most security for this application. Then the respective difficulties of the two types will converge to a point where they are equally profitable.

Or simply make both halves independent.  Currently the bitcoin block is signed by a single hash (well, technically a hash of the hash).  There is no reason some alternate design couldn't require 2 hashes.  A valid block is only valid if signed by both hashing algorithms and each hash is below its required difficulty.  In essence, a double key to lock each block.  Each algorithm would be completely independent in terms of difficulty and target, and each would retarget on its own.

Even if you compromised one of the two algorithms you still wouldn't have control over the block chain.  If the algorithms were different enough that no single piece of hardware was efficient at both, you would see hashing split into two distinct camps, each developing independent "economies".
Meni Rosenfeld
Donator
Legendary
*
Offline Offline

Activity: 2058
Merit: 1054



View Profile WWW
October 03, 2011, 08:12:58 PM
 #61

If you have blocks corresponding to two different hash functions, you have two options:
1. Have a single difficulty value, and hardcode a ratio between the hash targets. Then if you choose a wrong ratio one type will be more profitable than the other and be solely used.
2. Have two separate difficulty values, each computed by the time it took to find X blocks of a type compared to the desired time. To know what the desired time is you have to set what % of the blocks you want to be of this type. It needn't be 50/50 but that gives the most security for this application. Then the respective difficulties of the two types will converge to a point where they are equally profitable.

Or simply make both halves independent.  Currently the bitcoin block is signed by a single hash (well, technically a hash of the hash).  There is no reason some alternate design couldn't require 2 hashes.  A valid block is only valid if signed by both hashing algorithms and each hash is below its required difficulty.  In essence, a double key to lock each block.  Each algorithm would be completely independent in terms of difficulty and target, and each would retarget on its own.

Even if you compromised one of the two algorithms you still wouldn't have control over the block chain.  If the algorithms were different enough that no single piece of hardware was efficient at both, you would see hashing split into two distinct camps, each developing independent "economies".
That's the opposite of independence. It means that the same party needs to do both CPU and GPU to validate their block. So end users can't mine because they don't have GPUs. And there's no way to adjust difficulty separately since you don't have separate blocks to count.

It will create very weird dynamics where you first try to find one key, and if you do, you power down that search and start on the other key. If you can find both keys before anyone else you get the block. And I think this means it's enough to dominate one type, because then you "only" need to find the key for the other to win a block.

1EofoZNBhWQ3kxfKnvWkhtMns4AivZArhr   |   Who am I?   |   bitcoin-otc WoT
Bitcoil - Exchange bitcoins for ILS (thread)   |   Israel Bitcoin community homepage (thread)
Analysis of Bitcoin Pooled Mining Reward Systems (thread, summary)  |   PureMining - Infinite-term, deterministic mining bond
DeathAndTaxes
Donator
Legendary
*
Offline Offline

Activity: 1218
Merit: 1079


Gerald Davis


View Profile
October 03, 2011, 08:24:34 PM
Last edit: October 03, 2011, 08:45:33 PM by DeathAndTaxes
 #62

That's the opposite of independence. It means that the same party needs to do both CPU and GPU to validate their block. So end users can't mine because they don't have GPUs. And there's no way to adjust difficulty separately since you don't have separate blocks to count.

No it wouldn't.  It would simply be a public double signing.

Two algorithms, let's call them C & G (for obvious reasons).

A pool of G miners finds a hash below their target, signs the block, and publishes it to all other nodes in the network.  The block is now half signed.
A pool of C miners then takes the half-signed block and looks for a hash that meets their independent target.  The block is now fully signed.

Simply adjust the rules for a valid block so that each half-signer generates a reward of half the subsidy plus half the transaction fees.  So the G miner (or pool) who half signs the block gets 25 BTC + 1/2 the transaction fees, and the C miners who complete the half-signed block get the other 25 BTC + 1/2 the transaction fees.

A block isn't considered confirmed until both halves of the hash pair are complete and published.  If you want block signing to take 10 minutes on average, adjust the difficulty for each half so that the average solution takes 5 minutes per half.

While I doubt any dual-algorithm solution is needed, it makes more sense to require both keys; otherwise bitcoin becomes vulnerable to the weaker of the two algorithms (which is worse than having a single algorithm).
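
Structurally (my sketch, with the actual hash computations elided) the rule is just:

Code:
def block_confirmed(hash_g: int, hash_c: int,
                    target_g: int, target_c: int) -> bool:
    """A block only confirms once both proofs of work are published:
    the G hash under its target, and the C hash over the half-signed
    block under its own independent target."""
    return hash_g < target_g and hash_c < target_c

def reward_split(subsidy=50.0, fees=0.0):
    """Each half-signer gets half the subsidy plus half the fees."""
    half = subsidy / 2 + fees / 2
    return {'G_signer': half, 'C_signer': half}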

EhVedadoOAnonimato
Hero Member
*****
Offline Offline

Activity: 630
Merit: 500



View Profile
October 03, 2011, 08:35:25 PM
 #63

like Gavin said, the problem is message relaying

Good point. It doesn't really matter if the mining algorithm is CPU-friendly. If bitcoin usage grows significantly, other resources - mainly bandwidth - required by the mining process will probably rule out the "average guy".

Mining will probably become a specialized business regardless of the mining algorithm. So, better to keep the algorithm which doesn't make us vulnerable to botnets.
Meni Rosenfeld
Donator
Legendary
*
Offline Offline

Activity: 2058
Merit: 1054



View Profile WWW
October 03, 2011, 08:40:04 PM
 #64

like Gavin said, the problem is message relaying
Good point. It doesn't really matter if the mining algorithm is CPU-friendly. If bitcoin usage grows significantly, other resources - mainly bandwidth - required by the mining process will probably rule out the "average guy".

Mining will probably become a specialized business regardless of the mining algorithm. So, better to keep the algorithm which doesn't make us vulnerable to botnets.
Mining pools. The miner only needs the block headers.

1EofoZNBhWQ3kxfKnvWkhtMns4AivZArhr   |   Who am I?   |   bitcoin-otc WoT
Bitcoil - Exchange bitcoins for ILS (thread)   |   Israel Bitcoin community homepage (thread)
Analysis of Bitcoin Pooled Mining Reward Systems (thread, summary)  |   PureMining - Infinite-term, deterministic mining bond
EhVedadoOAnonimato
Hero Member
*****
Offline Offline

Activity: 630
Merit: 500



View Profile
October 03, 2011, 09:48:48 PM
 #65

The proof of work is not a hash of the entire block then?

Or is it a hash of something in the header which is itself an indirect hash of all transactions? Even if it's that, wouldn't such a header have to be retransmitted each time a new transaction is propagated?
DeathAndTaxes
Donator
Legendary
*
Offline Offline

Activity: 1218
Merit: 1079


Gerald Davis


View Profile
October 03, 2011, 10:35:23 PM
Last edit: October 03, 2011, 11:06:05 PM by DeathAndTaxes
 #66

The proof of work is not a hash of the entire block then?

Or is it a hash of something in the header which is itself an indirect hash of all transactions? Even if it's that, wouldn't such a header have to be retransmitted each time a new transaction is propagated?

It is a hash of the header, which contains the merkle root of all transactions in the block plus the hash of the previous block.  That is how the "chain" is efficiently created.  If you know a previous block is valid, and each block contains the hash of the prior block, then you know the current block is valid by following the chain from the genesis block. Every transaction in the current block is confirmed because the merkle root commits to the hashes of all the transactions in the block.  If an extra transaction were added or one taken away, the merkle root hash would be invalid.

https://en.bitcoin.it/wiki/Block_hashing_algorithm

All together the only thing that is hashed is the header, which is 80 bytes.  The nonce is determined by the miner, so the pool actually only transmits 76 bytes.  The miner then tries all nonces from 0 to 2^32 - 1 (roughly 4 billion attempted hashes).  A share takes on average 2^32 hashes to find, so 1 share ~= 1 header transmitted.

Since the nonce only has 2^32 possibilities, the pool server needs to provide a new header (containing a new extraNonce in the coinbase) after every 4 billion hashes.
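
To make that concrete, a minimal Python sketch (mine; toy target, and real miners do this on GPU hardware) of what a miner does with one 76-byte work unit:

Code:
import hashlib, struct

def dsha256(data: bytes) -> bytes:
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def scan_nonces(header76: bytes, target: int, start=0, count=1_000_000):
    """Append each 32-bit nonce to the 76-byte header and double-SHA256.
    Returns a winning nonce, or None once the range is exhausted and a
    fresh header must be requested from the pool."""
    for nonce in range(start, start + count):
        h = dsha256(header76 + struct.pack('<I', nonce))
        if int.from_bytes(h[::-1], 'big') < target:
            return nonce
    return None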

Thus the bandwidth requirement (without any overhead) is ~76 bytes per 4GH of hashing power that the miner has (the same header can be used for 4 billion hashes).  Even a 40GH miner wouldn't require very much bandwidth.  It would require ~10 headers per second and would produce ~10 shares (lower-difficulty solutions) per second.  The headers would require ~800 bytes/s inbound and the outgoing shares ~2 kB/s outbound.

Now for the server the bandwidth requirement can be larger as it will have significantly more aggregate traffic.  A 5TH/s mining pool would need to issue ~1,164 headers per second, but even that is well under 1 Mbps of raw header data.

Still, bandwidth is really a non-issue.  As difficulty rises and the pool gets larger, the computational load on the server becomes the larger bottleneck.  Every 2^32 hashes a miner will need a new header, which requires the pool to change the generation (coinbase) transaction, rehash it, and recompute the merkle root.

If it ever became a problem where pools simply couldn't handle the load, changing the size of the nonce could make the problem more manageable.  The nonce is only 32-bit and is the only element a pool miner changes, thus every pool miner needs a new header every 4 billion hashes. A 100MH/s miner needs a new header every ~40 seconds.  A 4GH/s miner needs a new header every second.  If the nonce value were larger, more hashes could be attempted without changing the header.  For example, with a 64-bit nonce a 4GH miner would essentially never exhaust the nonce range (2^64 hashes is over a century of work at that speed) instead of doing so every second.  Most miners would never change headers except when a block is found.  The nonce-exhaustion load on the pool server would essentially vanish.
kjj
Legendary
*
Offline Offline

Activity: 1302
Merit: 1025



View Profile
October 03, 2011, 11:53:33 PM
 #67

That's the opposite of independence. It means that the same party needs to do both CPU and GPU to validate their block. So end users can't mine because they don't have GPUs. And there's no way to adjust difficulty separately since you don't have separate blocks to count.

No it wouldn't.  It would simply be a public double signing.

Two algorithms, let's call them C & G (for obvious reasons).

A pool of G miners finds a hash below their target, signs the block, and publishes it to all other nodes in the network.  The block is now half signed.
A pool of C miners then takes the half-signed block and looks for a hash that meets their independent target.  The block is now fully signed.

Simply adjust the rules for a valid block so that each half-signer generates a reward of half the subsidy plus half the transaction fees.  So the G miner (or pool) who half signs the block gets 25 BTC + 1/2 the transaction fees, and the C miners who complete the half-signed block get the other 25 BTC + 1/2 the transaction fees.

A block isn't considered confirmed until both halves of the hash pair are complete and published.  If you want block signing to take 10 minutes on average, adjust the difficulty for each half so that the average solution takes 5 minutes per half.

While I doubt any dual-algorithm solution is needed, it makes more sense to require both keys; otherwise bitcoin becomes vulnerable to the weaker of the two algorithms (which is worse than having a single algorithm).

There are subtle, er, issues with this idea.  I think they are actually problems, but I haven't worked through all the details yet, so I'm not confident enough to use that label.  Think carefully about the coinbase transactions and how they are included (or not) in the half signatures.  I'm pretty sure this system ends up being no better than just the second-half system, though it could be modified to be as good as whichever system is slower at the moment.

17Np17BSrpnHCZ2pgtiMNnhjnsWJ2TMqq8
I routinely ignore posters with paid advertising in their sigs.  You should too.
kjj
Legendary
*
Offline Offline

Activity: 1302
Merit: 1025



View Profile
October 03, 2011, 11:59:06 PM
 #68

The proof of work is not a hash of the entire block then?

Or is it a hash of something in the header which is itself an indirect hash of all transactions? Even if it's that, wouldn't such a header have to be retransmitted each time a new transaction is propagated?

It is a hash of the header, which contains the merkle root of all transactions in the block plus the hash of the previous block.  That is how the "chain" is efficiently created.  If you know a previous block is valid, and each block contains the hash of the prior block, then you know the current block is valid by following the chain from the genesis block. Every transaction in the current block is confirmed because the merkle root commits to the hashes of all the transactions in the block.  If an extra transaction were added or one taken away, the merkle root hash would be invalid.

https://en.bitcoin.it/wiki/Block_hashing_algorithm

All together the only thing that is hashed is the header, which is 80 bytes.  The nonce is determined by the miner, so the pool actually only transmits 76 bytes.  The miner then tries all nonces from 0 to 2^32 - 1 (roughly 4 billion attempted hashes).  A share takes on average 2^32 hashes to find, so 1 share ~= 1 header transmitted.

Since the nonce only has 2^32 possibilities, the pool server needs to provide a new header (containing a new extraNonce in the coinbase) after every 4 billion hashes.

Thus the bandwidth requirement (without any overhead) is ~76 bytes per 4GH of hashing power that the miner has (the same header can be used for 4 billion hashes).  Even a 40GH miner wouldn't require very much bandwidth.  It would require ~10 headers per second and would produce ~10 shares (lower-difficulty solutions) per second.  The headers would require ~800 bytes/s inbound and the outgoing shares ~2 kB/s outbound.

Now for the server the bandwidth requirement can be larger as it will have significantly more aggregate traffic.  A 5TH/s mining pool would need to issue ~1,164 headers per second, but even that is well under 1 Mbps of raw header data.

Still, bandwidth is really a non-issue.  As difficulty rises and the pool gets larger, the computational load on the server becomes the larger bottleneck.  Every 2^32 hashes a miner will need a new header, which requires the pool to change the generation (coinbase) transaction, rehash it, and recompute the merkle root.

If it ever became a problem where pools simply couldn't handle the load, changing the size of the nonce could make the problem more manageable.  The nonce is only 32-bit and is the only element a pool miner changes, thus every pool miner needs a new header every 4 billion hashes. A 100MH/s miner needs a new header every ~40 seconds.  A 4GH/s miner needs a new header every second.  If the nonce value were larger, more hashes could be attempted without changing the header.  For example, with a 64-bit nonce a 4GH miner would essentially never exhaust the nonce range (2^64 hashes is over a century of work at that speed) instead of doing so every second.  Most miners would never change headers except when a block is found.  The nonce-exhaustion load on the pool server would essentially vanish.

This would also slow transactions, and/or not decrease traffic by nearly as much as you expect.  The mining pool node is constantly updating its Merkle tree, so a new getwork request includes not just a different coinbase with a new extranonce, but also a different set of transactions, some new.  A 64 bit nonce would roughly triple the average transaction confirmation time, unless the node trips the long polling system, which makes extra traffic.

17Np17BSrpnHCZ2pgtiMNnhjnsWJ2TMqq8
I routinely ignore posters with paid advertising in their sigs.  You should too.
DeathAndTaxes
Donator
Legendary
*
Offline Offline

Activity: 1218
Merit: 1079


Gerald Davis


View Profile
October 04, 2011, 12:12:52 AM
Last edit: October 04, 2011, 12:28:35 AM by DeathAndTaxes
 #69

This would also slow transactions, and/or not decrease traffic by nearly as much as you expect.  The mining pool node is constantly updating its Merkle tree, so a new getwork request includes not just a different coinbase with a new extranonce, but also a different set of transactions, some new.  A 64 bit nonce would roughly triple the average transaction confirmation time, unless the node trips the long polling system, which makes extra traffic.

How?  Regardless of the nonce size, a block will be found on average every 10 minutes. In the worst case, transactions can always be included in the next block.

Are transactions arriving after the start of a block included in the current block (as opposed to the next block)?  If so, then on average never updating the transaction list would add 5 minutes to the first confirmation and nothing to subsequent confirmations.  If not, then confirmations are no slower.

Still, even with a 64-bit nonce there is no reason you couldn't update the merkle tree between blocks. Look at it this way.  Take a hypothetical 1TH/s pool.  On average it needs to compute and issue ~14,000 headers per minute for its pool members.  Looking at a block explorer, the last 24 hours had 6,407 total transactions.  That is on average ~4.5 per minute.  If the pool had 1,000 members, using a 64-bit nonce and only changing headers on new transactions would cut that down to ~4,500 headers per minute.  If the pool only updated headers once per minute (on average adding a few seconds to each transaction's confirmation time) it would be only 1,000 headers per minute.
kjj
Legendary
*
Offline Offline

Activity: 1302
Merit: 1025



View Profile
October 04, 2011, 12:24:45 AM
 #70

This would also slow transactions, and/or not decrease traffic by nearly as much as you expect.  The mining pool node is constantly updating its Merkle tree, so a new getwork request includes not just a different coinbase with a new extranonce, but also a different set of transactions, some new.  A 64 bit nonce would roughly triple the average transaction confirmation time, unless the node trips the long polling system, which makes extra traffic.

How?  Regardless of the nonce size, a block will be found on average every 10 minutes. In the worst case, transactions can always be included in the next block.

Are transactions arriving after the start of a block included in the current block (as opposed to the next block)?  If so, then on average never updating the transaction list would add 5 minutes to the first confirmation and nothing to subsequent confirmations.  If not, then confirmations are no slower.

If we assume that the average transaction happens about 5 minutes before the next block is found, the current system makes it very, very likely that the transaction will be included in the current block.  This means that the expected waiting time for a transaction is just a bit over 5 minutes.

With a 64 bit nonce, all mining clients will only update their work every 10 minutes (on average), when a new longpoll hits.  So, the average transaction will wait 5 minutes before anyone even starts working on a block that includes it, and then 10 minutes more (on average, of course) for that block to be found.  So, the total wait time is then 15 minutes, instead of 5.  The worst case, sending a new transaction just moments after all the pools update their miners, goes from 20 minutes to 30.

17Np17BSrpnHCZ2pgtiMNnhjnsWJ2TMqq8
I routinely ignore posters with paid advertising in their sigs.  You should too.
DeathAndTaxes
Donator
Legendary
*
Offline Offline

Activity: 1218
Merit: 1079


Gerald Davis


View Profile
October 04, 2011, 12:37:21 AM
Last edit: October 04, 2011, 01:25:25 AM by DeathAndTaxes
 #71

I see what you are saying now.  I was updating my post (above) while you were responding.

Most of the need for header changes comes from nonce exhaustion.  Let's look at the times a miner needs to change headers:
a) block change - once per 600 seconds
b) new transaction - once per ~13 seconds (at current transaction volume)
c) nonce exhaustion - once per ~4300/(MH/s) seconds

For most miners nonce exhaustion creates the majority of the header changes.  For example, a 2GH miner needs a new header every ~2 seconds.  For every block change it exhausts its nonce range ~300 times.  For every transaction on the network it exhausts its nonce range ~7 times.

Now transaction volume will grow, but it is unlikely to grow long-term faster than Moore's law.  Average hashing power will increase at a rate equal to Moore's law.  That is a doubling every 24 months, or 2^5 = 32-fold every decade.  A decade from now that 2GH miner will be a 64GH miner.  That is ~16 header requests per second.

Even with real-time inclusion of transactions, nonce exhaustion makes up the majority of the load on a pool server, and that will only increase.  If a server were to delay including transactions, holding them until the next minute and then including them all in one header change (which would only slightly delay confirmations), then nonce exhaustion makes up an even greater % of server load.
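
The breakdown above in a few lines of Python (mine; all figures are the rough estimates from this post):

Code:
def header_changes_per_sec(hashrate_mhs, tx_per_sec=1/13, block_interval=600):
    """Split a miner's header-refresh rate into its three causes."""
    return {
        'nonce_exhaustion': hashrate_mhs * 1e6 / 2**32,  # ~1 per 4300/(MH/s) s
        'block_change':     1 / block_interval,
        'new_transaction':  tx_per_sec,
    }

# 2 GH/s today vs. 64 GH/s a decade out: nonce exhaustion dominates.
print(header_changes_per_sec(2_000))
print(header_changes_per_sec(64_000))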
Meni Rosenfeld
Donator
Legendary
*
Offline Offline

Activity: 2058
Merit: 1054



View Profile WWW
October 04, 2011, 05:11:43 AM
 #72

Or is it a hash of something in the header which is itself an indirect hash of all transactions? Even if it's that, wouldn't such a header have to be retransmitted each time a new transaction is propagated?
What DeathAndTaxes said, the Merkle root is the "executive summary" of the transactions. And, inclusion of transactions in the block is on a "best effort" basis - everyone chooses which transactions to include, and currently most miners/pools include all transactions they know. But it's ok if a miner is missing a few recent transactions, he'll get (a header corresponding to) them in the next getwork.

1EofoZNBhWQ3kxfKnvWkhtMns4AivZArhr   |   Who am I?   |   bitcoin-otc WoT
Bitcoil - Exchange bitcoins for ILS (thread)   |   Israel Bitcoin community homepage (thread)
Analysis of Bitcoin Pooled Mining Reward Systems (thread, summary)  |   PureMining - Infinite-term, deterministic mining bond
EhVedadoOAnonimato
Hero Member
*****
Offline Offline

Activity: 630
Merit: 500



View Profile
October 04, 2011, 07:32:39 AM
 #73

Thank you DeathAndTaxes for the full explanation.

So, currently, it's 76 bytes per 4GH of work. Easily manageable. And even if we reach "Visa levels" as described here, miners would only have to download, at peaks, 76 * 4,000 ≈ 304 kB/s if I got it right (a new header each time a new transaction arrives and changes the Merkle tree). I can download at that speed from my home connection today, so it probably wouldn't be a major problem for miners in the future. And even if it was, the Merkle tree doesn't really need to be updated at each new transaction; that can be done in bulk.
So, nothing that frightening. Only pool operators would need lots of bandwidth, but at this stage such operators could use local caches distributed in different parts of the world and other techniques to decrease their load.

Interesting.
Meni Rosenfeld
Donator
Legendary
*
Offline Offline

Activity: 2058
Merit: 1054



View Profile WWW
October 04, 2011, 07:37:32 AM
 #74

And even if we reach "Visa levels" as described here, miners would only have to download, at peaks, 76 * 4,000 ≈ 304 kB/s if I got it right (a new header each time a new transaction arrives and changes the Merkle tree).
No, as I explained, the miner doesn't need to get a new header when there's a new transaction. He just keeps mining on a header which doesn't include all the new transactions. When he finishes 4GH he gets a new header with all the recent transactions. That's how it's done right now, it's not a potential future optimization.

1EofoZNBhWQ3kxfKnvWkhtMns4AivZArhr   |   Who am I?   |   bitcoin-otc WoT
Bitcoil - Exchange bitcoins for ILS (thread)   |   Israel Bitcoin community homepage (thread)
Analysis of Bitcoin Pooled Mining Reward Systems (thread, summary)  |   PureMining - Infinite-term, deterministic mining bond
EhVedadoOAnonimato
Hero Member
*****
Offline Offline

Activity: 630
Merit: 500



View Profile
October 04, 2011, 08:29:06 AM
 #75

And even if we reach "Visa levels" as described here, miners would only have to download, at peaks, 76 * 4,000 ≈ 304 kB/s if I got it right (a new header each time a new transaction arrives and changes the Merkle tree).
No, as I explained, the miner doesn't need to get a new header when there's a new transaction. He just keeps mining on a header which doesn't include all the new transactions. When he finishes 4GH he gets a new header with all the recent transactions. That's how it's done right now, it's not a potential future optimization.

That's what I meant two sentences later:

And even if it was, the Merkle tree doesn't really need to be updated at each new transaction; that can be done in bulk.

So, yeah, as you said, miners definitely don't need lots of bandwidth, not even at "Visa levels". Only pool operators do.
ArtForz
Sr. Member
****
Offline Offline

Activity: 406
Merit: 257


View Profile
October 04, 2011, 08:55:27 AM
 #76

Currently 76 bytes? Not using the getwork protocol. A getwork response without http headers is already close to 600 bytes... to transfer 76 bytes of data.

But yes, optimally it'd be 76 bytes (+ a simple header).

There'd be ways to cut that down even more: version is constant, nBits only changes every 2016 blocks, hashPrevBlock only changes on a new block - why send those every time?
Another option: allow miners to update nTime themselves.
Work submits could be cut down in pretty much the same way, requiring only hMerkleRoot, nTime and nNonce. If there are too many, increasing the share difficulty would be trivial.
So, a simple more efficient protocol would have, per 4Ghps:
out: hMerkleRoot + nTime every 60 seconds (or whatever the tx update interval is), plus hPrevBlock every 10 minutes avg.
in: hMerkleRoot + nTime + nNonce roughly every second (one per share)

At 100% efficiency, difficulty-1 shares, and with some overhead, that comes out to around 1 byte/second average send and 45 bytes/second or so average receive for a pool server, per 4Ghps of miners.
Or about 24kbit/s send and 1Mbit/s receive for a pool the size of the whole current bitcoin network. Yeah.
If hashrates increase in the future, increase share difficulty by a few powers of 2 and you cut down the incoming rate accordingly...
So for the pool-miner interface, you can scale it up quite a few orders of magnitude before bandwidth becomes an issue.

For the network side, scaling to transaction volumes that are allowed by the current max network rule of 1MB/block, we need to receive and send the tx in that block and the block itself, that comes out to... 53kbit/s average.
The 1MB block size limit should be enough to fit about 4 tx/second average.
So... your average home DSL will become a problem when scaling up more than an order of magnitude above the current limits; we'd *need* some kind of hub-leaf setup beyond that, and assuming the hubs are decent servers you could easily get another 2-3 orders of magnitude... which would be roughly on par with Visa's peak tx capacity levels...
So doesn't look like bandwidth would become a major issue.
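
The arithmetic above, condensed into a Python sketch (mine; the constants are the rough figures from this post):

Code:
HEADER_BYTES = 76          # header minus the miner-chosen 4-byte nonce
SHARE_BYTES = 40           # hMerkleRoot + nTime + nNonce
HASHES_PER_SHARE = 2**32   # difficulty-1 share

def pool_link_bytes_per_sec(hashrate_hs, tx_update_interval=60):
    """Raw pool<->miner traffic for a given amount of miner hashpower."""
    send = HEADER_BYTES / tx_update_interval             # pool -> miners
    recv = SHARE_BYTES * hashrate_hs / HASHES_PER_SHARE  # shares back
    return send, recv

print(pool_link_bytes_per_sec(4e9))   # ~1.3 B/s out, ~37 B/s in per 4Ghps

# Network side: receive and send both the ~1MB of tx and the block
# itself every 10 minutes.
print(4 * 1_000_000 * 8 / 600 / 1000, "kbit/s")   # ~53 kbit/s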

bitcoin: 1Fb77Xq5ePFER8GtKRn2KDbDTVpJKfKmpz
i0coin: jNdvyvd6v6gV3kVJLD7HsB5ZwHyHwAkfdw
DeathAndTaxes
Donator
Legendary
*
Offline Offline

Activity: 1218
Merit: 1079


Gerald Davis


View Profile
October 04, 2011, 12:37:35 PM
 #77

What DeathAndTaxes said, the Merkle root is the "executive summary" of the transactions. And, inclusion of transactions in the block is on a "best effort" basis - everyone chooses which transactions to include, and currently most miners/pools include all transactions they know. But it's ok if a miner is missing a few recent transactions, he'll get (a header corresponding to) them in the next getwork.

Thanks, this is how I believed it worked but wasn't sure.  In that case the use of a larger nonce value (say 64-bit) would make pool servers even more efficient.  Then again, those who are opposed to the concept of pool mining likely don't want pools to become more efficient.  Grin
DeathAndTaxes
Donator
Legendary
*
Offline Offline

Activity: 1218
Merit: 1079


Gerald Davis


View Profile
October 04, 2011, 12:45:53 PM
Last edit: October 04, 2011, 04:07:49 PM by DeathAndTaxes
 #78

Currently 76 bytes? Not using the getwork protocol. A getwork response without http headers is already close to 600 bytes... to transfer 76 bytes of data.

Good to know.  I never looked inside a getwork request; I just knew it was 76 bytes of actual header information.  600 bytes for 76 seems kinda "fat", but then again, as you point out, even at 600 bytes per header the bandwidth is trivial, so likely there was no concern with making getwork more bandwidth efficient.

Nice analysis on the complete bandwidth economy.  It really shows bandwidth is a non-issue.  We will hit a wall on larger pools' computational power long before bandwidth even becomes a topic of discussion.  I think a 64-bit nonce solves the pool efficiency problem more elegantly, but the brute-force method is just to convert a pool server into an associated collection of independent pool servers (i.e. deepbit goes to a.deepbit, b.deepbit, c.deepbit ... z.deepbit), each processing a portion of pool requests.

Still, had Satoshi thought that just a few years into this experiment miners would be getting up to 3GH per machine (30,000x his original performance), he likely would have gone with a 64-bit nonce.  When he started, a high-end CPU got what, 100KH/s?  A 4.2 billion nonce range is good for 11.9 hours @ 100KH/s.  No real reason for a larger nonce, since the header changes more frequently than that due to block changes and transactions (even if queued into batches).  A 30,000x increase in performance suddenly shrank that nonce lifespan though.
btcbaby
Member
**
Offline Offline

Activity: 87
Merit: 10



View Profile WWW
October 04, 2011, 01:20:17 PM
 #79

Satoshi didn't see the pool miners coming, for sure.  But the algorithm still takes their collective power into account.  The most successful miners still pass on a fair amount of BTC.  I guess in the end it comes down to electricity.

http://www.btclog.com/uploads/FileUpload/e6/9cc97eb4c91db1ec5fb30ca35f0da8.png
Write an excellent post on btc::log and you just might win 1BTC in our daily giveaway.
btc::log is the professionally managed and community moderated Bitcoin Forum
Meni Rosenfeld
Donator
Legendary
*
Offline Offline

Activity: 2058
Merit: 1054



View Profile WWW
October 04, 2011, 03:14:15 PM
 #80

Satoshi didn't see the pool miners coming, for sure.
Satoshi understands probability, so he clearly expected pools to emerge. It's likely, though, that he didn't think they needed any special consideration in the design.

1EofoZNBhWQ3kxfKnvWkhtMns4AivZArhr   |   Who am I?   |   bitcoin-otc WoT
Bitcoil - Exchange bitcoins for ILS (thread)   |   Israel Bitcoin community homepage (thread)
Analysis of Bitcoin Pooled Mining Reward Systems (thread, summary)  |   PureMining - Infinite-term, deterministic mining bond
hmongotaku
Full Member
***
Offline Offline

Activity: 168
Merit: 100


View Profile WWW
October 04, 2011, 03:58:39 PM
 #81

Satoshi used google servers and Suupah Komputahz! chi chang!

ArtForz
Sr. Member
****
Offline Offline

Activity: 406
Merit: 257


View Profile
October 04, 2011, 04:21:37 PM
 #82

Currently 76 bytes? Not using the getwork protocol. A getwork response without http headers is already close to 600 bytes... to transfer 76 bytes of data.

Good to know.  I never looked inside a getwork request; I just knew it was 76 bytes of actual header information.  600 bytes for 76 seems kinda "fat", but then again, as you point out, even at 600 bytes per header the bandwidth is trivial, so likely there was no concern with making getwork more bandwidth efficient.

Nice analysis on the complete bandwidth economy.  It really shows bandwidth is a non-issue.  We will hit a wall on larger pools' computational power long before bandwidth even becomes a topic of discussion.  I think a 64-bit nonce solves the pool efficiency problem more elegantly, but the brute-force method is just to convert a pool server into an associated collection of independent pool servers (i.e. deepbit goes to a.deepbit, b.deepbit, c.deepbit ... z.deepbit), each processing a portion of pool requests.

Still, had Satoshi imagined that just a few years into this experiment miners would be getting up to 3GH per machine, he likely would have gone with a 64-bit nonce.  When he started, a high-end CPU got what, 100KH/s?  A 4 billion nonce range lasts a long time at sub-MH performance.
That also shouldn't become a major issue; it's merely the current implementation scaling badly there.
To increment the extraNonce in the coinbase transaction, we currently rehash every transaction in the block and rebuild the whole merkle tree, so the whole thing ends up scaling with (blocksize * requestrate). Adding the obvious optimization of storing the opposite side of the merkle branch and only rehashing the coinbase and its merkle branch, we need an additional (log2(#tx in block) * 32) bytes of memory but scale roughly with (log2(#tx in block) * requestrate) for getwork.
At something like a current average block (~10kB in ~20 transactions), that comes out to ~240 vs. ~8 sha256 operations per 4Ghps worth of work.
Scaling to Visa-level 10 kTX/sec (that'd be 3GB blocks containing ~6M transactions...), it's... about 50 sha256 operations per 4Ghps worth of work.
So for a pool roughly the size of the current network, that'd be ~24k sha256/sec at current tx volume vs. ~150k sha256/sec at 10k tx/s.
And using something like "miners increment block time themselves", this can be cut down by another factor of 60.
Scaling this for increasing hashrates due to Moore's law... well... that applies to both sides.
So for the getwork+PoW side, I just don't see any hard issues coming up.
I expect to see way bigger problems on the transaction-handling side of things when scaling to such massive levels: assuming every tx has 2 inputs + outputs on average, you'd be verifying about 20k ECDSA sigs/second, and on every block you're marking ~12M outputs as spent and storing ~12M new outputs in some kind of transactional fashion; just the list of current unspent outputs would probably be on the order of 10s of GB... ugh.
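
The merkle-branch optimization described above, sketched in Python (mine; dhash is Bitcoin's double SHA-256):

Code:
import hashlib

def dhash(b: bytes) -> bytes:
    return hashlib.sha256(hashlib.sha256(b).digest()).digest()

def coinbase_branch(tx_hashes):
    """One-time per block template: collect the sibling hashes along
    the leftmost (coinbase) branch of the merkle tree."""
    branch, level = [], list(tx_hashes)
    while len(level) > 1:
        if len(level) % 2:               # Bitcoin duplicates an odd tail
            level.append(level[-1])
        branch.append(level[1])          # sibling of the leftmost node
        level = [dhash(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return branch

def root_from_coinbase(coinbase_hash, branch):
    """Per extraNonce bump: rehash only the coinbase and its branch,
    log2(#tx) doublings instead of rebuilding the whole tree."""
    h = coinbase_hash
    for sibling in branch:
        h = dhash(h + sibling)
    return h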

bitcoin: 1Fb77Xq5ePFER8GtKRn2KDbDTVpJKfKmpz
i0coin: jNdvyvd6v6gV3kVJLD7HsB5ZwHyHwAkfdw
Mike Hearn
Legendary
*
Offline Offline

Activity: 1526
Merit: 1129


View Profile
October 05, 2011, 08:05:52 PM
 #83

Satoshi knew about GPUs back in at least April of 2009, which was only a few months after launch. So he was certainly aware of them long before GPU mining actually took hold.

Quote from: satoshi
Eventually, most nodes may be run by specialists with multiple GPU cards.  For now, it's nice that anyone with a PC can play without worrying about what video card they have, and hopefully it'll stay that way for a while.  More computers are shipping with fairly decent GPUs these days, so maybe later we'll transition to that.
worldinacoin
Hero Member
*****
Offline Offline

Activity: 756
Merit: 500



View Profile
October 10, 2011, 05:33:57 AM
 #84

I wonder if Satoshi will ever come out to the open.  Or is he already among us? Smiley
Gabi
Legendary
*
Offline Offline

Activity: 1148
Merit: 1008


If you want to walk on water, get out of the boat


View Profile
October 10, 2011, 08:32:18 AM
 #85

What if satoshi is a group of people?
