Bitcoin Forum
Pages: « 1 2 3 [4] 5 »  All
Author Topic: Are GPU's Satoshi's mistake?  (Read 8179 times)
Meni Rosenfeld
Donator
Legendary
*
Offline Offline

Activity: 2058
Merit: 1054



View Profile WWW
October 03, 2011, 08:12:58 PM
 #61

If you have blocks corresponding to two different hash functions, you have two options:
1. Have a single difficulty value, and hardcode a ratio between the hash targets. Then if you choose the wrong ratio, one type will be more profitable than the other and will be used exclusively.
2. Have two separate difficulty values, each computed by the time it took to find X blocks of a type compared to the desired time. To know what the desired time is you have to set what % of the blocks you want to be of this type. It needn't be 50/50 but that gives the most security for this application. Then the respective difficulties of the two types will converge to a point where they are equally profitable.

Or simply make both halves independent.  Currently the Bitcoin block is signed by a single hash (well, technically a hash of the hash).  There is no reason some alternate design couldn't require 2 hashes.  A valid block is only valid if signed by both hashing algorithms, with each hash below its required difficulty.  In essence, a double key to lock each block.  Each algorithm would be completely independent in terms of difficulty and target, and would retarget independently.

Even if you compromised one of the two algorithms you still wouldn't have control over the block chain.  If the algorithms were different enough that no single piece of hardware was efficient at both, you would see hashing split into two distinct camps, each developing independent "economies".
That's the opposite of independence. It means that the same party needs to do both CPU and GPU work to validate their block. So end users can't mine, because they don't have GPUs. And there's no way to adjust the difficulties separately, since you don't have separate blocks to count.

It will create very weird dynamics where you first try to find one key, and if you do, you power down that search and start on the other key. If you can find both keys before anyone else, you get the block. And I think this means that it's enough to dominate one type, because then you "only" need to find the key for the other to win a block.

1EofoZNBhWQ3kxfKnvWkhtMns4AivZArhr   |   Who am I?   |   bitcoin-otc WoT
Bitcoil - Exchange bitcoins for ILS (thread)   |   Israel Bitcoin community homepage (thread)
Analysis of Bitcoin Pooled Mining Reward Systems (thread, summary)  |   PureMining - Infinite-term, deterministic mining bond
DeathAndTaxes
Donator
Legendary
*
Offline Offline

Activity: 1218
Merit: 1079


Gerald Davis


View Profile
October 03, 2011, 08:24:34 PM
Last edit: October 03, 2011, 08:45:33 PM by DeathAndTaxes
 #62

That's the opposite of independence. It means that the same party needs to do both CPU and GPU to validate their block. So end users can't mine because they don't have GPUs. And there's no way to adjust difficulty separately since you don't have separate blocks to count.

No it wouldn't.  It would simply be a public double signing.

Two algorithms, let's call them C & G (for obvious reasons).

A pool of G miners finds a hash below their target, signs the block, and publishes it to all other nodes in the network.  The block is now half signed.
A pool of C miners then takes the half-signed block and looks for a hash that meets their independent target.  The block is now fully signed.

Simply adjust the rules for a valid block so that each half-signer can only claim a reward half the size plus half the transaction fees.  So the G miner (or pool) who half signs the block gets 25 BTC + 1/2 the transaction fees, and the C miners who complete the half-signed block get the other 25 BTC + 1/2 the transaction fees.

A block isn't considered confirmed until both halves of the hash pair are complete and published.  If you want block signing to take 10 minutes on average, adjust the difficulty of each half so that the average solution takes 5 minutes per half.

While I doubt any dual-algorithm solution is needed, it makes more sense to require both keys; otherwise Bitcoin becomes vulnerable to the weaker of the two algorithms (which is worse than having a single algorithm).
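A minimal sketch of this double-key rule in Python (SHA-256 and SHA-512 are illustrative stand-ins for the two hypothetical algorithms, and the targets are arbitrary example values, not real difficulty settings):

```python
import hashlib

# Illustrative targets: each gives roughly a 1-in-512 chance per attempt.
TARGET_C = 1 << 247   # independent difficulty target for algorithm "C"
TARGET_G = 1 << 503   # independent difficulty target for algorithm "G"

def hash_c(data: bytes) -> int:
    # Stand-in for the CPU-friendly algorithm.
    return int.from_bytes(hashlib.sha256(data).digest(), "big")

def hash_g(data: bytes) -> int:
    # Stand-in for the GPU-friendly algorithm.
    return int.from_bytes(hashlib.sha512(data).digest(), "big")

def block_valid(header: bytes, nonce_c: int, nonce_g: int) -> bool:
    # A block needs both "keys": each camp grinds its own nonce
    # against its own independent target.
    ok_c = hash_c(header + nonce_c.to_bytes(8, "big")) < TARGET_C
    ok_g = hash_g(header + nonce_g.to_bytes(8, "big")) < TARGET_G
    return ok_c and ok_g
```

Because each key is checked against its own target, each algorithm could retarget on its own schedule.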

EhVedadoOAnonimato
Hero Member
*****
Offline Offline

Activity: 630
Merit: 500



View Profile
October 03, 2011, 08:35:25 PM
 #63

like Gavin said, the problem is msg relaying

Good point. It doesn't really matter if the mining algorithm is CPU-friendly. If Bitcoin usage grows significantly, other resources required by the mining process - mainly bandwidth - will probably rule out the "average guy".

Mining will probably become a specialized business regardless of the mining algorithm. So, better to keep the algorithm which doesn't make us vulnerable to botnets.
Meni Rosenfeld
Donator
Legendary
*
Offline Offline

Activity: 2058
Merit: 1054



View Profile WWW
October 03, 2011, 08:40:04 PM
 #64

like Gavin said, the problem is msg relaying
Good point. It doesn't really matter if the mining algorithm is CPU-friendly. If Bitcoin usage grows significantly, other resources required by the mining process - mainly bandwidth - will probably rule out the "average guy".

Mining will probably become a specialized business regardless of the mining algorithm. So, better to keep the algorithm which doesn't make us vulnerable to botnets.
Mining pools. The miner only needs the block headers.

1EofoZNBhWQ3kxfKnvWkhtMns4AivZArhr   |   Who am I?   |   bitcoin-otc WoT
Bitcoil - Exchange bitcoins for ILS (thread)   |   Israel Bitcoin community homepage (thread)
Analysis of Bitcoin Pooled Mining Reward Systems (thread, summary)  |   PureMining - Infinite-term, deterministic mining bond
EhVedadoOAnonimato
Hero Member
*****
Offline Offline

Activity: 630
Merit: 500



View Profile
October 03, 2011, 09:48:48 PM
 #65

The proof of work is not a hash of the entire block, then?

Or is it an indirect hash of something in the header which is itself a hash of all transactions? Even if it's that, wouldn't such a header have to be retransmitted each time a new transaction is propagated?
DeathAndTaxes
Donator
Legendary
*
Offline Offline

Activity: 1218
Merit: 1079


Gerald Davis


View Profile
October 03, 2011, 10:35:23 PM
Last edit: October 03, 2011, 11:06:05 PM by DeathAndTaxes
 #66

The proof of work is not a hash of the entire block, then?

Or is it an indirect hash of something in the header which is itself a hash of all transactions? Even if it's that, wouldn't such a header have to be retransmitted each time a new transaction is propagated?

It is a hash of the header, which contains the Merkle root of all transactions in the block plus the hash of the previous block.  That is how the "chain" is efficiently created.  If you know a previous block is valid, and each block contains the hash of the prior block, then you know the current block is valid by following the chain from the genesis block.  Every transaction in the current block is confirmed because the Merkle root commits to the hashes of all the transactions in the block.  If an extra transaction were added or one taken away, the Merkle root hash would be invalid.

https://en.bitcoin.it/wiki/Block_hashing_algorithm

Altogether the only thing that is hashed is the header, which is 80 bytes.  The nonce is determined by the miner, so the pool actually only transmits 76 bytes.  The miner then tries all nonces from 0 to 2^32 - 1 (roughly 4 billion attempted hashes).  A share is 2^32 hashes, so 1 share ~= 1 header transmitted.

Since the nonce only has 2^32 possibilities, the pool server needs to provide a new header (containing a new extranonce) after every 4 billion hashes.

Thus the bandwidth requirement (without any overhead) is ~76 bytes every 4 GH of hashing power (the same header can be used for 4 billion hashes).  Even a 40 GH/s miner wouldn't require very much bandwidth: it would need 10 headers per second and would produce 10 shares (lower-difficulty solutions) per second.  The headers would require ~6 kbps inbound, and the outgoing shares roughly as much outbound.

Now for the server the bandwidth requirement is larger, as it sees significantly more aggregate traffic, but a 5 TH/s mining pool would only need to issue ~1,160 headers per second, and even that is under 1 Mbps.

Still, bandwidth is really a non-issue.  As difficulty rises and the pool gets larger, the computational load on the server becomes the larger bottleneck.  Every 2^32 hashes a miner needs a new header, which requires the pool to change the generation transaction, rehash it, and recompute the Merkle root.

If it ever became a problem where pools simply couldn't handle the load, changing the size of the nonce could make the problem more manageable.  The nonce is only 32 bits and is the only element a pool miner changes, so every pool miner needs a new header every 4 billion hashes.  A 100 MH/s miner needs a new header every ~40 seconds; a 4 GH/s miner needs a new header every second.  If the nonce value were larger, more hashes could be attempted without changing the header.  For example, with a 64-bit nonce a 4 GH/s miner would effectively never exhaust its nonce range (2^64 hashes at 4 GH/s takes over a century), instead of needing a new header every second.  Most miners would never change headers except when a block is found.  The load on pool servers could be cut by a factor of billions.
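The 80-byte header and the 76 bytes a pool actually sends can be sketched in Python (the field values here are made up, but the layout and the double SHA-256 are Bitcoin's real format):

```python
import hashlib
import struct

def block_header(version, prev_hash, merkle_root, timestamp, bits, nonce):
    # version/timestamp/bits/nonce are little-endian 4-byte ints;
    # prev_hash and merkle_root are 32-byte hashes: 4+32+32+4+4+4 = 80 bytes.
    return (struct.pack("<I", version) + prev_hash + merkle_root +
            struct.pack("<III", timestamp, bits, nonce))

def pow_hash(header: bytes) -> bytes:
    # The proof of work is a double SHA-256 of the header.
    return hashlib.sha256(hashlib.sha256(header).digest()).digest()

hdr = block_header(1, b"\x00" * 32, b"\x11" * 32, 1317700000, 0x1A0ABBCF, 0)
assert len(hdr) == 80        # the full header
assert len(hdr[:-4]) == 76   # everything but the 4-byte nonce the miner grinds
```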
kjj
Legendary
*
Offline Offline

Activity: 1302
Merit: 1026



View Profile
October 03, 2011, 11:53:33 PM
 #67

No it wouldn't.  It would simply be a public double signing. [...] A pool of G miners find a hash below their target and sign the block and publish it to all other nodes in network.  The block is now half signed.
A pool of C miners then take the half signed block and look for a hash that meets their independent target.  The block is now fully signed. [...] A block isn't considered confirmed until both halves of the hash pair are complete and published.

There are subtle, er, issues with this idea.  I think they are actually problems, but I haven't worked through all the details yet, so I'm not confident enough to use that label.  Think carefully about the coinbase transactions, and how those are included (or not) in the half signatures.  I'm pretty sure that this system ends up being no better than just the second-half system alone, though it could be modified to be as good as whichever system was slower at the moment.

17Np17BSrpnHCZ2pgtiMNnhjnsWJ2TMqq8
I routinely ignore posters with paid advertising in their sigs.  You should too.
kjj
Legendary
*
Offline Offline

Activity: 1302
Merit: 1026



View Profile
October 03, 2011, 11:59:06 PM
 #68

If it ever became a problem where pools simply couldn't handle the load, changing the size of the nonce could make the problem more manageable. [...] If the nonce value was larger more hashes could be attempted without changing the header. [...] Most miners would never change headers except when a block is found.  The load on pool server could be cut by a factor of billions.

This would also slow transactions, and/or not decrease traffic by nearly as much as you expect.  The mining pool node is constantly updating its Merkle tree, so a new getwork request includes not just a different coinbase with a new extranonce, but also a different set of transactions, some new.  A 64-bit nonce would roughly triple the average transaction confirmation time, unless the node trips the long-polling system, which creates extra traffic.

17Np17BSrpnHCZ2pgtiMNnhjnsWJ2TMqq8
I routinely ignore posters with paid advertising in their sigs.  You should too.
DeathAndTaxes
Donator
Legendary
*
Offline Offline

Activity: 1218
Merit: 1079


Gerald Davis


View Profile
October 04, 2011, 12:12:52 AM
Last edit: October 04, 2011, 12:28:35 AM by DeathAndTaxes
 #69

This would also slow transactions, and/or not decrease traffic by nearly as much as you expect.  The mining pool node is constantly updating its Merkle tree, so a new getwork request includes not just a different coinbase with a new extranonce, but also a different set of transactions, some new.  A 64 bit nonce would roughly triple the average transaction confirmation time, unless the node trips the long polling system, which makes extra traffic.

How?  Regardless of the nonce size, a block will be confirmed on average every 10 minutes.  In the worst case, transactions can always be included in the next block.

Are new transactions after the start of a block included in the current block (as opposed to the next block)?  If so, then on average never updating the transaction list would add 5 minutes to the first confirmation and nothing to subsequent confirmations.  If not, then confirmations are no slower.

Still, even with a 64-bit nonce there is no reason you couldn't update the Merkle tree between blocks.  Look at it this way: take a hypothetical 1 TH/s pool.  On average it needs to compute and issue ~14,000 headers per minute for its pool members.  Looking at Block Explorer, the last 24 hours had 6407 total transactions - on average 4 per minute.  If the pool had 1000 members, using a 64-bit nonce and only changing the header on new transactions would cut that down to 4,000 headers per minute.  If the pool only updated headers once per minute (on average adding a few seconds to each confirmation time) it would be only 1,000 headers per minute.
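That arithmetic can be checked directly (a quick sketch; the pool size, member count, and transaction rate are the hypotheticals from the post):

```python
# Header rate for a hypothetical 1 TH/s pool with a 32-bit nonce.
hashrate = 1e12                          # 1 TH/s
headers_per_min = hashrate / 2**32 * 60  # ~14,000 headers per minute

members = 1000
tx_per_min = 4                           # ~6407 tx/day ~= 4 per minute

# With a 64-bit nonce, a header changes only when the transaction set does:
headers_tx_driven = members * tx_per_min  # 4,000 headers per minute
# Batching transaction updates to once per minute:
headers_batched = members * 1             # 1,000 headers per minute
```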
kjj
Legendary
*
Offline Offline

Activity: 1302
Merit: 1026



View Profile
October 04, 2011, 12:24:45 AM
 #70

How?  Regardless of the nonce size, a block will be confirmed on average every 10 minutes.  In the worst case, transactions can always be included in the next block.

Are new transactions after the start of a block included in the current block (as opposed to the next block)?  If so, then on average never updating the transaction list would add 5 minutes to the first confirmation and nothing to subsequent confirmations.  If not, then confirmations are no slower.

If we assume that the average transaction happens about 5 minutes before the next block is found, the current system makes it very, very likely that the transaction will be included in the current block.  This means that the expected waiting time for a transaction is just a bit over 5 minutes.

With a 64-bit nonce, all mining clients would only update their work every 10 minutes (on average), when a new longpoll hits.  So the average transaction will wait 5 minutes before anyone even starts working on a block that includes it, and then 10 more minutes (on average, of course) for that block to be found.  The total wait time is then 15 minutes instead of 5.  The worst case - sending a new transaction just moments after all the pools update their miners - goes from 20 minutes to 30.

17Np17BSrpnHCZ2pgtiMNnhjnsWJ2TMqq8
I routinely ignore posters with paid advertising in their sigs.  You should too.
DeathAndTaxes
Donator
Legendary
*
Offline Offline

Activity: 1218
Merit: 1079


Gerald Davis


View Profile
October 04, 2011, 12:37:21 AM
Last edit: October 04, 2011, 01:25:25 AM by DeathAndTaxes
 #71

I see what you are saying now.  I was updating my post (above) while you were responding.

Most of the need for header changes comes from nonce exhaustion.  Let's look at the times a miner needs to change headers:
a) block change - once per 600 seconds
b) new transaction - once per ~13 seconds (at current transaction volume)
c) nonce exhaustion - once per ~4300/(MH/s) seconds

For most miners, nonce exhaustion creates the majority of the header changes.  For example, a 2 GH/s miner needs a new header every ~2 seconds.  For every block change it exhausts its nonce range ~300 times; for every transaction on the network it exhausts its nonce range ~6 times.

Now transaction volume will grow, but it is unlikely to grow long-term faster than Moore's law.  Average hashing power will increase at a rate equal to Moore's law - a doubling every 24 months, or 2^5 = 32-fold every decade.  A decade from now that 2 GH/s miner will be a 64 GH/s miner, making ~15 header requests per second.

Even with real-time inclusion of transactions, nonce exhaustion makes up the majority of the load on a pool server, and that will only increase.  If a server were to delay including transactions - holding all transactions until the next minute and then including them all in a header change (which would only slightly delay confirmations) - then nonce exhaustion would make up an even greater % of server load.
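The rates in (a)-(c) can be reproduced for the 2 GH/s example (the intervals are the 2011 figures from the post):

```python
BLOCK_INTERVAL = 600   # seconds between blocks, on average
TX_INTERVAL = 13       # seconds between transactions (2011 volume)

def nonce_exhaustion_interval(hashrate_hs: float) -> float:
    # Time to burn through the whole 32-bit nonce range.
    return 2**32 / hashrate_hs

t = nonce_exhaustion_interval(2e9)   # ~2.1 s for a 2 GH/s miner
per_block = BLOCK_INTERVAL / t       # ~280 exhaustions per block change
per_tx = TX_INTERVAL / t             # ~6 exhaustions per transaction
```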
Meni Rosenfeld
Donator
Legendary
*
Offline Offline

Activity: 2058
Merit: 1054



View Profile WWW
October 04, 2011, 05:11:43 AM
 #72

Or is it an indirect hash of something in the header which is itself a hash of all transactions? Even if it's that, wouldn't such a header have to be retransmitted each time a new transaction is propagated?
What DeathAndTaxes said - the Merkle root is the "executive summary" of the transactions. And inclusion of transactions in the block is on a "best effort" basis: everyone chooses which transactions to include, and currently most miners/pools include all the transactions they know about. But it's OK if a miner is missing a few recent transactions; he'll get (a header corresponding to) them in the next getwork.
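A minimal sketch of that "executive summary" in Python (like Bitcoin, it pairs double-SHA-256 hashes up to a single root and duplicates the last entry of an odd level; transaction serialization is omitted):

```python
import hashlib

def dsha(b: bytes) -> bytes:
    # Bitcoin's double SHA-256.
    return hashlib.sha256(hashlib.sha256(b).digest()).digest()

def merkle_root(txs):
    level = [dsha(tx) for tx in txs]      # leaves: hash of each transaction
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])       # duplicate the odd entry
        level = [dsha(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]

# Adding, removing, or swapping any transaction changes the root,
# so the 80-byte header commits to the exact transaction set.
assert merkle_root([b"tx1", b"tx2", b"tx3"]) != merkle_root([b"tx1", b"tx2"])
```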

1EofoZNBhWQ3kxfKnvWkhtMns4AivZArhr   |   Who am I?   |   bitcoin-otc WoT
Bitcoil - Exchange bitcoins for ILS (thread)   |   Israel Bitcoin community homepage (thread)
Analysis of Bitcoin Pooled Mining Reward Systems (thread, summary)  |   PureMining - Infinite-term, deterministic mining bond
EhVedadoOAnonimato
Hero Member
*****
Offline Offline

Activity: 630
Merit: 500



View Profile
October 04, 2011, 07:32:39 AM
 #73

Thank you DeathAndTaxes for the full explanation.

So, currently, it's 76 bytes for each 4 GH. Easily manageable. And even if we reach "Visa levels" as described here, miners would only have to download, at peak, 76 × 4,000 = 304 KB/s if I got it right (a new header each time a new transaction arrives and changes the Merkle tree). I can download at that speed from my home connection today, so it probably wouldn't be a major problem for miners in the future. And even if it were, the Merkle tree doesn't really need to be updated at each new transaction; that can be done in bulk.
So, nothing that frightening. Only pool operators would need lots of bandwidth, but at that stage such operators could use local caches distributed in different parts of the world and other techniques to decrease their load.

Interesting.
Meni Rosenfeld
Donator
Legendary
*
Offline Offline

Activity: 2058
Merit: 1054



View Profile WWW
October 04, 2011, 07:37:32 AM
 #74

And even if we reach "Visa levels" as described here, miners would only have to download, at peaks, 76*4.000 = 304KB/s if I got it right (a new header each time a new transaction arrives and changes the Merkle Tree).
No, as I explained, the miner doesn't need to get a new header when there's a new transaction. He just keeps mining on a header which doesn't include the newest transactions. When he finishes 4 GH he gets a new header with all the recent transactions. That's how it's done right now; it's not a potential future optimization.

1EofoZNBhWQ3kxfKnvWkhtMns4AivZArhr   |   Who am I?   |   bitcoin-otc WoT
Bitcoil - Exchange bitcoins for ILS (thread)   |   Israel Bitcoin community homepage (thread)
Analysis of Bitcoin Pooled Mining Reward Systems (thread, summary)  |   PureMining - Infinite-term, deterministic mining bond
EhVedadoOAnonimato
Hero Member
*****
Offline Offline

Activity: 630
Merit: 500



View Profile
October 04, 2011, 08:29:06 AM
 #75

And even if we reach "Visa levels" as described here, miners would only have to download, at peaks, 76*4.000 = 304KB/s if I got it right (a new header each time a new transaction arrives and changes the Merkle Tree).
No, as I explained, the miner doesn't need to get a new header when there's a new transaction. He just keeps mining on a header which doesn't include all the new transactions. When he finishes 4GH he gets a new header with all the recent transactions. That's how it's done right now, it's not a potential future optimization.

That's what I meant two sentences later:

And even if it were, the Merkle tree doesn't really need to be updated at each new transaction; that can be done in bulk.

So, yeah, as you said, miners definitely don't need lots of bandwidth, not even at "Visa levels". Only pool operators do.
ArtForz
Sr. Member
****
Offline Offline

Activity: 406
Merit: 257


View Profile
October 04, 2011, 08:55:27 AM
 #76

Currently 76 bytes? Not using the getwork protocol. A getwork response without HTTP headers is already close to 600 bytes... to transfer 76 bytes of data.

But yes, optimally it'd be 76 bytes (+ a simple header).

There'd be ways to cut that down even more: version is constant, nBits only changes every 2016 blocks, hashPrevBlock only changes on a new block - why send those every time?
Another option: allow miners to update nTime themselves.
Work submits could be cut down in pretty much the same way, requiring only hMerkleRoot, nTime and nNonce. If there are too many, increasing share difficulty would be trivial.
So, a simple, more efficient protocol would need, per 4 GH/s:
outbound: hMerkleRoot + nTime every 60 seconds (or whatever the tx update interval is), plus hashPrevBlock every 10 minutes on average
inbound: hMerkleRoot + nTime + nNonce every second

At 100% efficiency, difficulty-1 shares, and with some overhead, that comes out to around 1 byte/second average send and 45 bytes/second average receive for a pool server, per 4 GH/s of miners.
Or about 24 kbit/s send and 1 Mbit/s receive for a pool the size of the whole current Bitcoin network. Yeah.
If hashrates increase in the future, increase share difficulty by a few powers of 2 and you cut down the incoming rate accordingly...
So for the pool-miner interface, you can scale up quite a few orders of magnitude before bandwidth becomes an issue.

For the network side, scaling to the transaction volumes allowed by the current 1 MB/block network rule, we need to receive and send the transactions in that block plus the block itself; that comes out to... 53 kbit/s average.
The 1 MB block size limit should be enough to fit about 4 tx/second on average.
So... your average home DSL will become a problem when scaling up more than an order of magnitude above the current limits; we'd *need* some kind of hub-leaf setup beyond that, and assuming the hubs are decent servers you could easily get another 2-3 orders of magnitude... which would be roughly on par with Visa's peak tx capacity.
So it doesn't look like bandwidth will become a major issue.
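The per-4 GH/s figures above can be checked directly (a sketch using the field sizes ArtForz lists):

```python
# Miner -> pool: one difficulty-1 share per ~2^32 hashes.
share_bytes = 32 + 4 + 4            # hMerkleRoot + nTime + nNonce
shares_per_sec = 4e9 / 2**32        # a 4 GH/s miner: just under 1 share/s
recv_Bps = share_bytes * shares_per_sec   # ~37 B/s ("~45 B/s" with overhead)

# Pool -> miner: a new hMerkleRoot + nTime every 60 seconds.
send_Bps = (32 + 4) / 60                  # ~0.6 B/s ("around 1 byte/second")
```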

bitcoin: 1Fb77Xq5ePFER8GtKRn2KDbDTVpJKfKmpz
i0coin: jNdvyvd6v6gV3kVJLD7HsB5ZwHyHwAkfdw
DeathAndTaxes
Donator
Legendary
*
Offline Offline

Activity: 1218
Merit: 1079


Gerald Davis


View Profile
October 04, 2011, 12:37:35 PM
 #77

What DeathAndTaxes said, the Merkle root is the "executive summary" of the transactions. And, inclusion of transactions in the block is on a "best effort" basis - everyone chooses which transactions to include, and currently most miners/pools include all transactions they know. But it's ok if a miner is missing a few recent transactions, he'll get (a header corresponding to) them in the next getwork.

Thanks, this is how I believed it worked but wasn't sure.  In that case the use of a larger nonce value (say 64-bit) would make pool servers even more efficient.  Then again, those who are opposed to the concept of pool mining likely don't want pools to become more efficient.  Grin
DeathAndTaxes
Donator
Legendary
*
Offline Offline

Activity: 1218
Merit: 1079


Gerald Davis


View Profile
October 04, 2011, 12:45:53 PM
Last edit: October 04, 2011, 04:07:49 PM by DeathAndTaxes
 #78

Currently 76 bytes? Not using the getwork protocol. A getwork response without http headers is already close to 600 bytes... to transfer 76 bytes of data.

Good to know.  I never looked inside a getwork request; I just knew it was 76 bytes of actual header information.  600 bytes for 76 seems kinda "fat", but then again, as you point out, even at 600 bytes per header the bandwidth is trivial, so likely there was no concern with making getwork more bandwidth-efficient.

Nice analysis of the complete bandwidth economy.  It really shows bandwidth is a non-issue.  We will hit a wall on larger pools' computational power long before bandwidth even becomes a topic of discussion.  I think a 64-bit nonce solves the pool efficiency problem more elegantly, but the brute-force method is just to convert a pool server into an associated collection of independent pool servers (i.e. deepbit goes to a.deepbit, b.deepbit, c.deepbit ... z.deepbit), each processing a portion of pool requests.

Still, had Satoshi thought that just a few years into this experiment miners would be getting up to 3 GH/s per machine (30,000x his original performance), he likely would have gone with a 64-bit nonce.  When he started, a high-end CPU got what, 100 KH/s?  A 4.3 billion nonce range is good for 11.9 hours @ 100 KH/s.  No real reason for a larger nonce, since the header changed more frequently than that due to block changes and transactions (even if queued into batches).  A 30,000x increase in performance suddenly shrank that nonce lifespan, though.
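That lifespan arithmetic, as a quick sketch (hashrates as in the post):

```python
# 32-bit nonce lifespan = 2^32 / hashrate.
cpu_lifespan = 2**32 / 100e3   # ~100 KH/s CPU: ~43,000 s, about 11.9 hours
gpu_lifespan = 2**32 / 3e9     # ~3 GH/s rig: ~1.4 seconds
```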
btcbaby
Member
**
Offline Offline

Activity: 87
Merit: 10



View Profile WWW
October 04, 2011, 01:20:17 PM
 #79

Satoshi didn't see the pool miners coming, for sure.  But the algorithm still takes their collective power into account.  The most successful miners still pass on a fair amount of BTC.  I guess in the end it comes down to electricity.

Write an excellent post on btc::log and you just might win 1BTC in our daily giveaway.
btc::log is the professionally managed and community moderated Bitcoin Forum
Meni Rosenfeld
Donator
Legendary
*
Offline Offline

Activity: 2058
Merit: 1054



View Profile WWW
October 04, 2011, 03:14:15 PM
 #80

Satoshi didn't see the pool miners coming for sure.
Satoshi understands probability, so he clearly expected pools to emerge.  It's likely, though, that he didn't think they needed any special consideration in the design.

1EofoZNBhWQ3kxfKnvWkhtMns4AivZArhr   |   Who am I?   |   bitcoin-otc WoT
Bitcoil - Exchange bitcoins for ILS (thread)   |   Israel Bitcoin community homepage (thread)
Analysis of Bitcoin Pooled Mining Reward Systems (thread, summary)  |   PureMining - Infinite-term, deterministic mining bond