Bitcoin Forum
Author Topic: Handle much larger MH/s rigs : simply increase the nonce size  (Read 9845 times)
kano (Legendary)
June 23, 2012, 11:22:26 AM  #1

It seems there are comments flying around about pools not being able to handle the larger network load of 1-difficulty shares caused by devices that can hash 5/10/100 times faster, and that roll-n-time won't solve it.

So ... why not just add another 'nonce' 32 bit field after the current one?

Then pools and miners can use this as they see fit and I'm sure all the devs can understand how this will allow a single getwork to hash more shares or higher difficulty shares

The problem with roll-n-time is that each roll advances the block time into the future.
Instead, with a 2nd nonce field you can roll that field as many times as is valid and allowed by the pool (up to 4 billion times if hardware ever got that much faster)

Obviously getworks must still occur to add transactions into the work being processed. However, rigs that can process 100 times faster than your typical 400MH/s GPU will still only need the same number of getworks, not 100 times as many (with miner support, of course), and with higher-difficulty shares, less work will be returned as well.

So ... when's the fork gonna happen? Or will we wait until it's a critical problem ...
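As a sketch, the proposal amounts to appending a second 32-bit field to the 80-byte block header, giving 2^64 header variants per merkle root. This layout is purely hypothetical (it was never adopted), and the field values below are made up:

```python
import struct

def build_extended_header(version, prev_hash, merkle_root, ntime, nbits,
                          nonce, nonce2):
    """Serialize the standard 80-byte header plus a hypothetical 2nd nonce."""
    return (
        struct.pack("<I", version)
        + prev_hash                     # 32-byte previous block hash
        + merkle_root                   # 32-byte merkle root
        + struct.pack("<III", ntime, nbits, nonce)
        + struct.pack("<I", nonce2)     # the proposed extra 32-bit nonce
    )

hdr = build_extended_header(2, b"\x00" * 32, b"\x11" * 32,
                            1340450546, 0x1a05db8b, 0, 0)
print(len(hdr))  # 84 bytes: the real header is 80
```

Every existing parser expects exactly 80 bytes here, which is why the replies below treat this as a hard-fork change.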

Pool: https://kano.is Here on Bitcointalk: Forum BTC: 1KanoPb8cKYqNrswjaA8cRDk4FAS9eDMLU
FreeNode IRC: irc.freenode.net channel #kano.is Majority developer of the ckpool code
Help keep Bitcoin secure by mining on pools with full block verification on all blocks - and NO empty blocks!
DeepBit (Donator, Hero Member)
June 23, 2012, 11:25:36 AM  #2

We already have time rolling for this.
Adding another field is unacceptable since it will break ASIC miners.

Welcome to my bitcoin mining pool: https://deepbit.net ~ 3600 GH/s, Both payment schemes, instant payout, no invalid blocks !
Coming soon: ICBIT Trading platform
Pieter Wuille (Legendary)
June 23, 2012, 11:26:45 AM  #3

See BIP 22. It's a bit complex, but basically it allows moving the entire block-generation process to external processes, which only need to contact bitcoind to receive new transactions or blocks.

I don't think a hard fork is warranted for an efficiency problem for miners. When a fork comes, we can always consider extending the nonce of course...

aka sipa, core dev team

Tips and donations: 1KwDYMJMS4xq3ZEWYfdBRwYG2fHwhZsipa
kano (Legendary)
June 23, 2012, 11:33:30 AM  #4

We already have time rolling for this.
...
Um ... no.
https://bitcointalk.org/index.php?topic=89029.0

And when a device that hashes 100 times faster than a 400MH/s GPU comes out - you really think the time field is appropriate? Seriously?

...
Adding another field is unacceptable since it will break ASIC miners.
Name one ASIC miner ...

DeepBit (Donator, Hero Member)
June 23, 2012, 12:26:57 PM  #5

We already have time rolling for this.
...
Um ... no.
https://bitcointalk.org/index.php?topic=89029.0

And when a device that hashes 100 times faster than a 400MH/s GPU comes out - you really think the time field is appropriate? Seriously?
Yes. Also we have other options: fast pool protocols and client-side work generation.

Adding another field is unacceptable since it will break ASIC miners.
Name one ASIC miner ...
BFL SC, OpenASIC initiative, Vlad's something, Reclaimer.
Not to mention old Artforz's 350nm sASICs.

kano (Legendary)
June 23, 2012, 12:29:51 PM  #6

...
Adding another field is unacceptable since it will break ASIC miners.
Name one ASIC miner ...
BFL SC, OpenASIC initiative, Vlad's something, Reclaimer.
Not to mention old Artforz's 350nm ASICs.
Yeah I meant one that exists.

If bitcoin decisions are made based on unsubstantiated comments by companies like BFL - man are we in deep shit investing in BTC

Pieter Wuille (Legendary)
June 23, 2012, 12:36:54 PM  #7

Doing a hard fork requires an exceedingly high level of consensus, as it requires everyone in the network to upgrade (not just miners). Unless there is a security flaw in the protocol, I doubt we'll see one anytime soon.

The problem you are complaining about is an efficiency problem for miners. I doubt you'll get a large percentage of the Bitcoin community to even accept this is a problem. At least in my opinion, it is not, as there are much more scalable solutions already available, such as local work generation.

DeepBit (Donator, Hero Member)
June 23, 2012, 12:41:16 PM  #8

Yeah I meant one that exists.
If bitcoin decisions are made based on unsubstantiated comments by companies like BFL - man are we in deep shit investing in BTC
1. Do you understand that this change will break every existing bitcoin client and service ?

2. An ASIC may be "broken" even at the design or testing stage. It doesn't matter if this is BFL or not.

3. Breaking protocol (and possible ASICs, and possible hardware solutions like bitcoin smartcards and so on) without a reason is like waving a big sign "No serious business here" for most investors.

kano (Legendary)
June 23, 2012, 12:55:38 PM  #9

Doing a hard fork requires an exceedingly high level of consensus, as it requires everyone in the network to upgrade (not just miners). Unless there is a security flaw in the protocol, I doubt we'll see one anytime soon.

The problem you are complaining about is an efficiency problem for miners. I doubt you'll get a large percentage of the Bitcoin community to even accept this is a problem. At least in my opinion, it is not, as there are much more scalable solutions already available, such as local work generation.

Actually, it's a problem for pools, not miners.
If pools can't handle larger devices when large MH/s FPGA or ASIC devices show up - ok, we'll have pools failing around us.

It's not an issue of handling a few more higher-powered devices; it's the simple fact that if people can get these devices, then either everyone will be getting devices that are close to an order of magnitude faster, or most of the backbone of BTC (the miners) will be gone and BTC will die.
Anyone with a little understanding of today's BTC network can see there is no exaggeration in that.

Yes, there are other suggestions, but this one simply ties in with how things work now ...

Yes I do understand the issues with doing a hard fork - hell look at the mess caused by the way it was done back in April and that was a minor change ...

Ignoring Deepbit (especially since most of his miners don't understand or pay much attention to BTC), I'd actually expect most pools to be looking for a solution already, and this solution fits well and easily, code- and design-wise, into the current design.

It is also a long term solution
If someone thinks 4 billion times the current network isn't enough, then ok add two 32 bit nonce fields (for a total of 3) and I can't see that running out before BTC is completely redesigned some time in the far future.

kano (Legendary)
June 23, 2012, 01:00:31 PM  #10

Yeah I meant one that exists.
If bitcoin decisions are made based on unsubstantiated comments by companies like BFL - man are we in deep shit investing in BTC
1. Do you understand that this change will break every existing bitcoin client and service ?

2. An ASIC may be "broken" even at the design or testing stage. It doesn't matter if this is BFL or not.

3. Breaking protocol (and possible ASICs, and possible hardware solutions like bitcoin smartcards and so on) without a reason is like waving a big sign "No serious business here" for most investors.
No, it means that this hardware is being designed by short-sighted people who should not be designing hardware.

The US government mandated that SHA-1 no longer be used within the government after 2010.
Why? Because it was broken.
Is this something that people expect to never happen to SHA-2?
Why is there an SHA-3?
Seriously your argument is nonsense since, who knows, tomorrow SHA-2 could be broken.

Pieter Wuille (Legendary)
June 23, 2012, 01:20:45 PM  #11

Bitcoin is certainly a lot more than just mining, but that doesn't mean the mining business is not part of it. While mining remains profitable, economies of scale will always lead to research and development of more efficient (and specialized) hardware. You may argue against the potential for centralization this brings, but from an economic point of view it is inevitable. A hard fork that would render those investments void is a problem, and it would undermine trust in the Bitcoin system.

Yes, cryptographic primitives get broken, and better ones are being developed all the time. I've already said that a security flaw is a very good reason for a hard fork - presumably, few people with an interest in Bitcoin will object to a fix for a fatal security flaw.

However, you still haven't convinced me there is a problem. The current getwork()-based mining requires new work every 4 billion hashes, yes. But when combined with local work generation, or even direct mining on top of getmemorypool(), there is no performance problem at all. A normal CPU can generate work for several TH/s easily when implemented efficiently. I believe a few pools already use this.

Unless it becomes clear that there is an inherent problem with the current system that will limit mining operation in the future (as opposed to implementation issues because of the current state of operations), I see no reason at all for a hard fork.

Nachtwind (Hero Member)
June 23, 2012, 02:35:00 PM  #12

...
Adding another field is unacceptable since it will break ASIC miners.
Name one ASIC miner ...
BFL SC, OpenASIC initiative, Vlad's something, Reclaimer.
Not to mention old Artforz's 350nm ASICs.
Yeah I meant one that exists.

If bitcoin decisions are made based on unsubstantiated comments by companies like BFL - man are we in deep shit investing in BTC

Might as well announce an ASIC that can contain up to three nonces and is modular enough ... it would have the same backing as those of BFL and other companies.

Sorry ... turning down good ideas because some small FPGA assembler (we haven't seen an ASIC yet!) announces some wonder-hasher with unmeetable estimates is not a way to discuss the future of the whole project. What if someone had decided to change the bitcoin algo because GPU miners were invented? Should we end up like SolidCoin, where important decisions are made based upon childish ideologies?
2112 (Legendary)
June 23, 2012, 05:06:12 PM  #13

kano, here's the way to strengthen your arguments.

I believe you are a programmer and actual co-developer of the mining program written in C++. Please run a test showing how many block headers per second can be generated on a contemporary CPU when the additional "nonce" space is actually put in the "coinbase" field.

ASIC mining chips will probably stay the same as long as the position of the 32-bit nonce field doesn't change within the block header. It makes sense to implement only the innermost loop in the hardware, any outer loops should be implemented in software that's driving the hashing hardware.

Please comment, critique, criticize or ridicule BIP 2112: https://bitcointalk.org/index.php?topic=54382.0
Long-term mining prognosis: https://bitcointalk.org/index.php?topic=91101.0
kano (Legendary)
June 24, 2012, 01:26:44 AM  #14

...
However, you still haven't convinced me there is a problem. The current getwork()-based mining requires new work every 4 billion hashes, yes. But when combined with local work generation, or even direct mining on top of getmemorypool(), there is no performance problem at all. A normal CPU can generate work for several TH/s easily when implemented efficiently. I believe a few pools already use this.

Unless it becomes clear that there is an inherent problem with the current system that will limit mining operation in the future (as opposed to implementation issues because of the current state of operations), I see no reason at all for a hard fork.
Well the ASIC 'discussions' at the moment are suggesting 1TH/s devices ... these being the devices that are 10x what most people would normally use.

Looking at current technology, we have ~200 - ~800 MH/s devices in GPU and FPGA (of course there are lower also and a few slightly higher)
And looking around pools it is common to find users who have ~2 - ~5 GH/s spending a few thousand dollars on hardware.

gigavps received 4 FPGA rigs in the past couple of days that hash at around 25GH/s each - an order of magnitude in performance and cost

So if this ASIC performance change is even close to what is being suggested - an order of magnitude is expected, and people like gigavps will be running multiple single devices that are in the hundreds of GH/s

Now this already puts a question mark over the statement:
"A normal CPU can generate work for several TH/s easily when implemented efficiently"

However, if we are talking more than an order of magnitude - then this statement is very questionable.

The point in all this is not that some people will be running faster hardware (yeah, it may be an issue for them); it's that the whole network will be (at least) an order of magnitude faster in hashing, since those who do not take up the new hardware will be gone - with difficulty in the 10's of millions instead of the 1's of millions, the cost to mine with current hardware will be prohibitive compared to the return.

The other solutions to this say that bitcoind code should be moved to the miner - decisions about how to construct a block header and what information to put into it.
i.e. a performance issue is solved by moving code out of bitcoind and into the end mining software.
Currently pools already do this anyway (but not to the miner) to make their own decisions about what txn's to include and how to construct the coinbase, but I see the idea that the miner code itself should take this on as a very poor design solution ... almost a hack.

On top of all this is the network requirement increase on the current code base.
An order of magnitude higher is, I think, really not an option.

Pieter Wuille (Legendary)
June 24, 2012, 08:46:38 AM  #15

"A normal CPU can generate work for several TH/s easily when implemented efficiently"
However, if we are talking more than an order of magnitude - then this statement is very questionable.

That is one single CPU. If work generation becomes too heavy for a CPU, do it on two. If that becomes too much, do it on a GPU. By the time a GPU can't do work generation for the ASIC cluster behind it, it will be economically viable to move the work generation to an ASIC.

Mike Hearn (Legendary)
June 24, 2012, 11:57:22 AM  #16

If you look at what poolserverj is doing, it's clear that centralized pools can scale up more or less indefinitely. No protocol changes are needed and thus none will occur.
Inaba (Legendary)
June 25, 2012, 12:15:47 PM  #17

What is Poolserverj doing that allows it to scale? You mean the local work generation?

I don't think PSJ can scale all that well in its current form, but I could be wrong. From when I investigated it (and ultimately rejected it as a back end), it had some serious limitations. It may have improved since then, but it was my understanding that the developer had abandoned it and it has not progressed in a very long time.


If you're searching these lines for a point, you've probably missed it.  There was never anything there in the first place.
DrHaribo (Legendary)
June 25, 2012, 12:52:56 PM  #18

rollntime will do just fine. If you don't want to roll even 1 second into the future and you need 3 nonce ranges per second, just get 3 sets of rollable work data. Problem solved. Also, being a few seconds into the future (or the past) is ok.

The other option besides rollntime is generating work locally and using getmemorypool (BIP22 as Pieter Wuille already mentioned) instead of getwork.

There's no need to change the format of bitcoin blocks.
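The rollntime idea above can be sketched in a few lines. The roll window here (2 seconds, giving the 3 nonce ranges mentioned) is an assumed value; in practice the pool advertises what it allows via the X-Roll-NTime getwork extension:

```python
def ntime_variants(base_ntime, roll_window):
    """Yield each timestamp the miner may use without fetching new work.

    Each distinct ntime value gives a fresh 2^32 nonce range on the same
    merkle root, so one getwork yields roll_window + 1 ranges.
    """
    for offset in range(roll_window + 1):
        yield base_ntime + offset

work = list(ntime_variants(1340450546, 2))   # 3 nonce ranges, as in the post
print(work)  # [1340450546, 1340450547, 1340450548]
```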

▶▶▶ Bitminter.com - Your trusted mining pool since 2011.
kjj (Legendary)
June 25, 2012, 05:43:16 PM  #19

You could always add a new block version number to exist alongside the current block version.  There is no need for a hard fork, nor to break everything in one day.

17Np17BSrpnHCZ2pgtiMNnhjnsWJ2TMqq8
I routinely ignore posters with paid advertising in their sigs.  You should too.
Pieter Wuille (Legendary)
June 25, 2012, 05:52:36 PM  #20

You could always add a new block version number to exist alongside the current block version.  There is no need for a hard fork, nor to break everything in one day.

Doing that requires a hard fork, as it means some blocks will be valid to new nodes but not to old ones. The first block mined under the new system will kick every old node out permanently, onto a sidechain without new-version blocks.

kjj (Legendary)
June 25, 2012, 07:30:51 PM  #21

You could always add a new block version number to exist alongside the current block version.  There is no need for a hard fork, nor to break everything in one day.

Doing that requires a hard fork, as it means some blocks will be valid to new nodes but not to old. The first block mined in the new system will kick out every old node permanently on a sidechain without new-version blocks.

Ahh, duh.  My bad, it would require a fork.

But not a fork that would break everything.  Non-upgradable miners could keep making the blocks that they know how to make, while their control nodes would accept blocks from the network that were at the new version.

allten (Sr. Member)
June 25, 2012, 08:08:45 PM  #22

to the OP:
I think the idea is very reasonable.
For it to work, we need to have a transition period where blocks with a nonce size of 32 bits still work alongside blocks with a nonce size of, say, 64 bits or whatever. The 32-bit nonce could be phased out after 4 to 8 years - plenty of time for hardware to naturally phase out anyway. Seems doable. Hope you write a detailed BIP.
Pieter Wuille (Legendary)
June 25, 2012, 11:16:54 PM  #23

Let us look at this from a theoretical point of view, rather than from what the current infrastructure provides. The first pool that is still operational started in December 2010 - two years ago. By the time we could pull off a block format change, we are at least two years away anyway. At that time, I'm sure the Bitcoin infrastructure will be very different from now.

Assume blocks reach their maximum size: 1000000 bytes, all the time. The smallest typical transactions are 227 bytes (1 input from a compressed pubkey, 2 outputs to addresses). That means a maximum of 4405 transactions.

In a merkle tree with 4405 elements, the leaves are 13 levels deep. That means that to generate a new piece of work (worth 4 billion double-SHA256 operations of calculation), you increase the extranonce in the output of the first transaction (the coinbase), and hash your way up to the merkle root. This requires 13 double-SHA256 operations. If this is offloaded to the GPUs/FPGAs/ASICs/QuantumCPUs/... that are *already* doing precisely that hashing operation for the block header anyway, they get a 0.0000003% overhead. The only thing they'd require is an occasional update of the list of transactions (at 4405 transactions per 10 minutes, about 7 per second). A phone connection with a modem suffices for that kind of traffic.

In short: nothing to worry about. The only problem is with the current infrastructure, which will evolve.
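The arithmetic above is easy to re-run. A quick check under the post's own assumptions (1,000,000-byte blocks full of 227-byte transactions):

```python
import math

max_txs = 1_000_000 // 227                 # transactions per full block
depth = math.ceil(math.log2(max_txs))      # merkle tree depth to the leaves
overhead = depth / 2**32                   # extra double-SHA256 per work unit
tx_rate = max_txs / 600                    # transaction updates per second

print(max_txs, depth)       # 4405 13
print(f"{overhead:.2e}")    # ~3e-09, i.e. roughly a 0.0000003% overhead
print(round(tx_rate, 1))    # ~7.3 transactions per second
```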

kano (Legendary)
June 26, 2012, 12:17:59 AM  #24

Well ... 2-minute block times would also reduce block size almost 5 times on average ...
but that idea got chucked out when I brought that up last year Smiley
https://bitcointalk.org/index.php?topic=51504.0

Meanwhile ... why does the nonce exist?
From my understanding, this is the reason; however, it will be too small if the network grows due to ASICs.

Pieter Wuille (Legendary)
June 26, 2012, 12:28:55 AM  #25

Meanwhile ... why does the nonce exist?
From my understanding, this is the reason; however, it will be too small if the network grows due to ASICs.

Because of the nonce, we only need to recalculate the merkle root once every 4 billion hashes, instead of for every hash. In the current infrastructure, that merkle root is calculated on the server (typically) while the hashes are calculated by the miner. This means that there is an interaction between server and miner every 4 billion hashes. But the actual calculation per merkle root is nothing compared to the 4 billion hashes (see my post above) the miner already does. If the requesting of work becomes the bottleneck, the work generation can simply be moved to the miner.

No, this is not an issue. No, there is no need to increase the nonce size. Yes, a 64-bit nonce would have made things slightly easier for us, but all that is required is a slightly more complex mining infrastructure, and this inconvenience is nothing compared to doing a hard fork.
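A minimal sketch of the extranonce rolling described above. The coinbase bytes are fabricated for illustration (a real coinbase is a full serialized transaction), and the merkle fold uses Bitcoin's duplicate-the-odd-leaf rule:

```python
import hashlib

def dsha(data):
    """Bitcoin's double SHA-256."""
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def merkle_root(txids):
    """Fold a list of txids up to the merkle root."""
    level = list(txids)
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])      # duplicate the odd leaf
        level = [dsha(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]

def work_for_extranonce(extranonce, other_txids):
    """New extranonce -> new coinbase txid -> new merkle root -> a fresh
    2^32 nonce space, with no change to the block format."""
    coinbase = b"fake-coinbase-" + extranonce.to_bytes(4, "little")
    return merkle_root([dsha(coinbase)] + list(other_txids))

txids = [dsha(bytes([i])) for i in range(4)]
r0 = work_for_extranonce(0, txids)
r1 = work_for_extranonce(1, txids)
print(r0 != r1)  # True: each extranonce yields a distinct merkle root
```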

kano (Legendary)
June 26, 2012, 02:15:29 AM  #26

Meanwhile ... why does the nonce exist?
From my understanding, this is the reason; however, it will be too small if the network grows due to ASICs.

Because of the nonce, we only need to recalculate the merkle root once every 4 billion hashes, instead of for every hash. In the current infrastructure, that merkle root is calculated on the server (typically) while the hashes are calculated by the miner. This means that there is an interaction between server and miner every 4 billion hashes. But the actual calculation per merkle root is nothing compared to the 4 billion hashes (see my post above) the miner already does. If the requesting of work becomes the bottleneck, the work generation can simply be moved to the miner.

No, this is not an issue. No, there is no need to increase the nonce size. Yes, 64 bit nonces would have things slightly easier for us, but all is required is a slightly more complex mining infrastructure and this inconvenience is nothing compared to doing a hard fork.

i.e. the correct solution based on the bitcoin spec is indeed to increase the nonce size - and as I mentioned later, if it's increased it may as well be 3 x 32bits ... or even 4 for a nice round number ... though I doubt 3 would run out for at least many ... decades? Smiley

Again, the solution others are proposing is to move block generation and txn selection to the miner software - that does indeed seem like a hack to me.
A hack to solve a problem with a very clear and specific solution.

The issue is - a fork.
Well, the equivalent of a fork was done in April (and yeah, it wasn't done very well), and that was for a much lesser reason.

These ASICs don't exist yet, and seriously, the only real argument anyone seems to have against doing it properly is that BFL have announced their ASICs with a time frame.
Their last related announcement, on a similar subject last September, proved to be an over-spec announcement made using a simulation that didn't even exist, for a product they delivered more than 3 months late.

Hmm, who's in control here ...

btharper (Sr. Member)
June 26, 2012, 08:02:17 AM  #27

There's already an "extraNonce" field that's used in the coinbase transaction; incrementing it changes the merkle root and gives you a new batch of hashes to work on. In a recently generated coinbase, this value was 4294967295.

Maybe what we need is to update the way work is fetched to allow more efficient processing, since the other block header fields - version, prev hash, merkle root, timestamp (with the exception of rollntime), and target - don't change much. Why not just package getworks to include one copy of those fields and 4-100 merkle roots (depending on client speed)?

As far as generating the merkle root en masse, my laptop with very average performance could do ~1.8MH/s back when that mattered, which is about 4 million sha256() rounds in a second. I'm not intimately familiar with how the merkle roots are generated (n*log(n) hashes for n transactions?) but if we call it 400 transactions we get a bit south of 4000 hashes per merkle root and with heavy rounding and lots of assumptions every step of the way gets about one thousand merkle roots generated per second on an economy two-core laptop.
Checking slush's pool shows 430 getworks/s to handle 1.2GH/s of mining power, which could be handled twice over on much less hardware than a decent server (granted the server also has to do things like handle the miners connecting and running whatever backend is required). If you want to make sure you have enough capacity I'm sure you could use a GPU to do this and get easily another order of magnitude above a CPU.

For a dedicated miner running 1TH/s you would need to supply 250 merkle roots/s, but if you're running 1TH/s you could probably afford to mine solo (at least for current difficulty, and probably for a good while). Even on a pool, a condensed request like the one above would be under 10KByte/s (56Kbit dial-up is just slightly slower).
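Checking that figure: a 1 TH/s device exhausts one 2^32 nonce range in about 4.3 ms, so it needs fresh merkle roots at roughly the rate below (the post rounds it up to 250/s):

```python
hashrate = 10**12                    # 1 TH/s
roots_per_sec = hashrate / 2**32     # one merkle root per 2^32 hashes
ms_per_range = 2**32 / hashrate * 1000

print(round(roots_per_sec))          # ~233 merkle roots per second
print(round(ms_per_range, 1))        # ~4.3 ms per nonce range
```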

Will pools be affected by ever-climbing hash rates? Sure. Will it matter in the long run? I doubt it. Will it require a fork or protocol change? Almost certainly not (the logistics of changing something this low-level in the blockchain would be a nightmare).
Maybe you could ask one of the large pool operators - slush or giga both come to mind offhand (and I know giga is already thinking about upgrading to ASICs and the changes that requires).
Pieter Wuille (Legendary)
June 26, 2012, 08:13:08 AM  #28

Again, the solution others are saying is to move block generation and txn selection to the miner software - that seems indeed like a hack to me.
A hack to solve a problem with a very clear and specific solution.

It is only a hack when considered from the viewpoint of the current infrastructure. There is no reason why you'd leave the hashing of the merkle root on the server instead of the client, especially as it is exactly the same operation (double SHA256).

Quote
The issue is - a fork.
Well, the equivalent of a fork was done in April (and yeah wasn't done very well) and that was for a much lesser reason.

As far as I know, there has not been a single hard fork in Bitcoin's history. The changes for BIP16 and BIP30 were "soft forks", which only made backward-compatible changes (meaning only making some existing rules more strict). Even the much more severe bug fixes in July-August 2010 (see the CVEs) were in fact only soft forks. Soft forks are safe as soon as a majority of the mining power enforces them.

Changing the serialization format of blocks or transactions, or introducing a new version for these, however does require a hard fork. Other changes that require a hard fork are changing the maximum block size, changing the precision of transaction amounts, or changing the mining subsidy function. All these need a much much higher level of consensus, as these require basically the entire Bitcoin network to upgrade (not just a majority, and not just miners). Everyone using an old client after the switch will get stuck in a sidechain with old miners (even if that is just 1% of the hashing power). If we ever do a hard fork (and we may need to), it will have to be planned, implemented and agreed upon years in advance.

Quote
Hmm, who's in control here ...

You are. Bitcoin is based on consensus, but you can only reach consensus as long as you can convince enough people there is a problem, and I personally still see this as a minor implementation inconvenience rather than a problem that will limit our growth.

aka sipa, core dev team

Tips and donations: 1KwDYMJMS4xq3ZEWYfdBRwYG2fHwhZsipa
eleuthria
Legendary
*
Offline Offline

Activity: 1750
Merit: 1000



View Profile
June 26, 2012, 09:27:18 AM
 #29

I'm working with a few others on a draft to revise the way the mining protocol works.  The current outlook is very good, in that it should be able to support 256 TH/s -per device-, while eliminating almost all network traffic (the total "getwork" data to be downloaded by the miner is ~1 KB -total- between each longpoll).  Additionally, the 256 TH/s per device is a limit that can be readily increased in multiples of 2^8.

More information coming soon.  The protocol design will require pools to be redesigned if they want to adapt to the changing landscape that ASICs may bring (I still don't think we'll see BFL's claimed specs).  However, this protocol would "future proof" the pooled mining design.

For the miner to utilize the protocol, they would either need mining software with direct support, or a local proxy which interprets the new protocol and translates it for older miners.  In the coming weeks I will hopefully be able to post a complete spec for mining software developers to consider implementing it.  Hopefully a proof-of-concept pool server will be available in the next 2 months.

This will require -no- change to Bitcoin's current protocols.  It is purely a change in the way pools interact with miners.

RIP BTC Guild, April 2011 - June 2015
kano
Legendary
*
Online Online

Activity: 2464
Merit: 1040


Linux since 1997 RedHat 4


View Profile
June 26, 2012, 09:57:55 AM
 #30

Well I guess I'm gonna have to spend some time looking at these other options, coz I see all sorts of issues with having the miner generate blocks (which is why I call it a hack): having to pass merkle trees around; running a memory pool (which even bitcoind still doesn't get perfectly right); devising some protocol for reusing data already passed back and forth, and keeping track of that as well; increasing the network load (not decreasing it) by passing all this extra information; passing txns back and forth; and handling LPs and orphans - all sorts of issues that every miner program will now have to deal with, simply to get the miner to generate the coinbase txn, i.e. to do the work of the pool/bitcoind rather than the pool/bitcoind doing it (and rather than fixing the actual problem: that the nonce will be too small when ASICs truly land - next year possibly?)

Again, increasing the nonce size is quite simply elegant and also the actual correct solution based on the design.

This argument about a hard fork seems designed to find a way around the nonce solution, on the assumption that hard forks cannot be done ... before the problem occurs ... which is exactly why I bring this up now and not later, when it becomes a problem.

This is of course normal in any programming environment: to see things in advance on occasion, yet not bite the bullet and make the required change that some consider difficult, until there's no time left to do it and everyone can clearly see the problem because it has already presented itself. So instead we get a complex workaround that is open to all sorts of issues - the biggest one in this case being that every miner program must implement code from bitcoind at whatever level is required (non-trivial), plus more code to pass all this information around.

As for hard vs soft ... well in April there were multiple >3 block forks due to the issues of updating and multiple release candidates - so really no worse than a hard fork IMO in the number of people hashing on bad forks regularly back then.
That was of course caused by someone putting a poisoned transaction into the network to cause the exact problem that happened and as a result it was extremely similar to a hard fork.

Edit: heh, let's see what eleuthria's protocol has to say when he defines it ...

Edit2: while you're at it - kill that retarded design of LP - keeping an idle socket connection open, sometimes for more than an hour ... wow, deepbit/tycho must have had a bad hangover when they came up with that idea

Pool: https://kano.is Here on Bitcointalk: Forum BTC: 1KanoPb8cKYqNrswjaA8cRDk4FAS9eDMLU
FreeNode IRC: irc.freenode.net channel #kano.is Majority developer of the ckpool code
Help keep Bitcoin secure by mining on pools with full block verification on all blocks - and NO empty blocks!
eleuthria
Legendary
*
Offline Offline

Activity: 1750
Merit: 1000



View Profile
June 26, 2012, 10:27:33 AM
 #31

Edit: heh, let's see what eleuthria's protocol has to say when he defines it ...

Edit2: while you're at it - kill that retarded design of LP - keeping an idle socket connection open, sometimes for more than an hour ... wow, deepbit/tycho must have had a bad hangover when they came up with that idea

The new protocol will be based on a single TCP socket connection between the miner and the pool (or the proxy software and the pool).  All data is asynchronous.  Only one package of work will need to be sent from the pool to the miner between updates.  Updates would be either:  Traditional longpolls, or a list of new transactions.  It eliminates the current mess of a protocol where miners open new connections for work requests/submissions, and then hold a separate one open just to get notice of a new block.  Everything will use a single persistent connection.

I'm hoping to have this more formalized before I publish any kind of draft protocol for public comment/changes.  I didn't quite expect as much progress as we've had in the last few hours.  It's been an exciting couple of hours to say the least!

Pieter Wuille
Legendary
*
qt
Offline Offline

Activity: 1050
Merit: 1005


View Profile WWW
June 26, 2012, 10:35:37 AM
 #32

As for hard vs soft ... well in April there were multiple >3 block forks due to the issues of updating and multiple release candidates - so really no worse than a hard fork IMO in the number of people hashing on bad forks regularly back then.
That was of course caused by someone putting a poisoned transaction into the network to cause the exact problem that happened and as a result it was extremely similar to a hard fork.

A hard forking change is one that causes an infinite split between old and new nodes, and is in no way comparable to any change we've ever done.

Hawkix
Hero Member
*****
Offline Offline

Activity: 519
Merit: 500



View Profile WWW
July 01, 2012, 09:28:48 PM
 #33

I do not understand - shouldn't higher-than-1-difficulty shares handle all these problems?

Donations: 1Hawkix7GHym6SM98ii5vSHHShA3FUgpV6
http://btcportal.net/ - All about Bitcoin - coming soon!
btharper
Sr. Member
****
Offline Offline

Activity: 389
Merit: 250



View Profile
July 01, 2012, 09:37:40 PM
 #34

I do not understand - shouldn't higher-than-1-difficulty shares handle all these problems?
That may be part of the solution, but the other problem is that a 4GH/s device can run through an entire getwork (4 billion hashes) in about one second (an SC Jalapeno at current specs goes through a whole getwork in ~1.15 seconds). So even very small miners will need to request more work. Higher-difficulty shares just lower the number of found shares; a really fast machine like a 1TH/s device still powers through ~250 getworks per second, so more resources will be required to support these in the future.

Actually, come to think of it, higher-difficulty shares would also just increase variance.
kano
Legendary
*
Online Online

Activity: 2464
Merit: 1040


Linux since 1997 RedHat 4


View Profile
July 01, 2012, 10:47:50 PM
 #35

Yeah, with a 64-bit nonce, if the device is big enough, it could power through the equivalent of 4 billion of today's getworks ... all from a single getwork, with the change I'm suggesting
... but as I mentioned, a 128-bit nonce would be best, to future-proof it for ... a very long time.

High-difficulty shares will then also mean that fewer shares are returned.

The overall result of managing the two properly (a bigger nonce and higher-difficulty shares) will be that BTC can handle VERY large network growth for a VERY long time.

kano
Legendary
*
Online Online

Activity: 2464
Merit: 1040


Linux since 1997 RedHat 4


View Profile
October 05, 2012, 02:24:04 PM
 #36

So meanwhile - 3 months later - and no one has actually really done anything with any sort of future consideration about this.

Once these 1TH/s mining devices turn up, they will be mining 232 1-difficulty shares a second - or 4ms a share.
THEY CAN'T MINE HIGHER DIFFICULTY SHARES SINCE THE NONCE SIZE IS ONLY 32 BITS

Now I'm not sure who thought that was OK, but sending 60 bytes of work to a USB device over serial at 115200 baud takes ~5ms - so with a device able to hash a full nonce range at 1TH/s (~4.3ms) we are already well past a problem:
more than half your mining time (if it were a single device) would be spent doing ... nothing.

Of course 1TH/s rigs are only a few months away ........ though they probably won't hash a single nonce range that fast ...
Give it a year (if BTC hasn't died from people ignoring this sort of stuff) and the devices will simply be stunted by the poor nonce implementation in BTC (yeah, you all remember MSDOS ...)
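The timing claim checks out arithmetically (a sketch; assumes standard 8N1 serial framing, i.e. 10 bits on the wire per byte):

```python
NONCE_SPACE = 2 ** 32

# A 1 TH/s device exhausts one full nonce range in:
hash_time_ms = NONCE_SPACE / 1e12 * 1000
print(hash_time_ms)  # ~4.3 ms

# Sending 60 bytes of work over 115200-baud serial (8N1, 10 bits/byte):
serial_time_ms = 60 * 10 / 115200 * 1000
print(serial_time_ms)  # ~5.2 ms

# Delivery takes longer than hashing, so such a device would idle
# more than half of every cycle - the post's point exactly.
assert serial_time_ms > hash_time_ms
```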

Even implementing this is quite straightforward: allow for 2 types of blocks, each with a different version number, the second one (with a nonce size of 128 bits instead of 32) becoming valid on a future date.
Give it a 3 to 6 month time frame (well, we already wasted 3 months doing nothing about it)

The latest bunch of implementations aren't even related to a solution to this problem.
No one seems to care about it, so I guess we'll hit a brick wall some time in the not too distant future, and then the bitcoin devs will suddenly have to hack the shit out of bitcoin and implement a hard fork in a short time frame - and screw it up like they've done in the past with soft forks.

Or ... as I mentioned before ... they could show a little forward planning ... but 3 months thrown away so far ...

Gavin Andresen
Legendary
*
qt
Offline Offline

Activity: 1652
Merit: 1012


Chief Scientist


View Profile WWW
October 05, 2012, 02:30:54 PM
 #37

I thought the consensus was that the mining devices just need a little extra software onboard to increment extranonce and recompute the merkle root.

I don't know nuthin about hardware/firmware design, or the miner<->pool communication protocols, but it seems to me that should be pretty easy to accomplish (the device will need to know the full coinbase transaction, a pointer to where the extranonce is in that transaction, and a list of transaction hashes so it can recompute the merkle root).
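Gavin's description amounts to a small amount of host-side code. A sketch of the idea (Python; the coinbase layout, extranonce offset, and hashes here are made up for illustration, and real Bitcoin merkle hashing has byte-order details this ignores):

```python
import hashlib
import struct

def dsha256(b: bytes) -> bytes:
    return hashlib.sha256(hashlib.sha256(b).digest()).digest()

def merkle_root(hashes):
    """Fold transaction hashes to a root, duplicating the last entry when odd."""
    hashes = list(hashes)
    while len(hashes) > 1:
        if len(hashes) % 2:
            hashes.append(hashes[-1])
        hashes = [dsha256(hashes[i] + hashes[i + 1])
                  for i in range(0, len(hashes), 2)]
    return hashes[0]

def next_merkle_root(coinbase, extranonce_offset, extranonce, tx_hashes):
    """Splice a new extranonce into the coinbase, rehash, rebuild the root."""
    cb = (coinbase[:extranonce_offset]
          + struct.pack("<I", extranonce)
          + coinbase[extranonce_offset + 4:])
    return merkle_root([dsha256(cb)] + list(tx_hashes))

coinbase = b"\x01" * 50             # stand-in coinbase, extranonce at offset 20
txs = [b"\xab" * 32, b"\xcd" * 32]  # stand-in transaction hashes
r0 = next_merkle_root(coinbase, 20, 0, txs)
r1 = next_merkle_root(coinbase, 20, 1, txs)
assert r0 != r1  # each extranonce value opens a fresh 2^32 nonce space
```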

How often do you get the chance to work on a potentially world-changing project?
kano
Legendary
*
Online Online

Activity: 2464
Merit: 1040


Linux since 1997 RedHat 4


View Profile
October 05, 2012, 02:45:35 PM
 #38

The devices just sit there spinning a nonce range.
Nothing more, nothing less.

Implementing the bitcoin protocol of dealing with merkle trees and changing the coinbase is not something an ASIC company would consider doing, unless they were looking to spend a lot of money every time they had to rewrite it (when their current hardware becomes a door stop)

The devices are simple - hard wired into silicon - they do the hash process and cycle the nonce.

Gavin Andresen
Legendary
*
qt
Offline Offline

Activity: 1652
Merit: 1012


Chief Scientist


View Profile WWW
October 05, 2012, 02:56:01 PM
 #39

The devices are simple - hard wired into silicon - they do the hash process and cycle the nonce.

Okey doke.  I thought they had some firmware that knew how to talk over USB (or ethernet or whatever), too.

Mike Hearn
Legendary
*
expert
Offline Offline

Activity: 1526
Merit: 1005


View Profile
October 05, 2012, 03:03:33 PM
 #40

Yes, they do.

If USB1 is too slow to send new work to a 1TH ASIC fast enough then there's a simple solution - don't use USB1. If you're capable of running a rig of that speed you're capable of shoveling it work fast enough. This really isn't a problem the Bitcoin core devs need to solve, the ball is in the ASIC developers court.
kano
Legendary
*
Online Online

Activity: 2464
Merit: 1040


Linux since 1997 RedHat 4


View Profile
October 05, 2012, 03:17:22 PM
 #41

The devices are simple - hard wired into silicon - they do the hash process and cycle the nonce.

Okey doke.  I thought they had some firmware that knew how to talk over USB (or ethernet or whatever), too.

Of course - you couldn't talk to them without that - should I respond to what you are really saying, or just what you wrote :P
Yes, I do know what you are doing :P

Anyway: it is still simply a 2-stage hash of 80 bytes, with 4 of the bytes cycling from 0 to 0xffffffff and the 80 bytes being fed into the double SHA256 - be it 1, 2, 4, 8 or 256+ parallel double-SHA256 streams.
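That inner loop - the only thing the silicon does - can be sketched in a few lines (Python; the header contents and target here are dummies):

```python
import hashlib
import struct

def dsha256(b: bytes) -> bytes:
    return hashlib.sha256(hashlib.sha256(b).digest()).digest()

def scan_nonce_range(header_prefix: bytes, target: int, limit: int = 1000):
    """Cycle the final 4 bytes of the 80-byte header through nonce values,
    double-SHA256 each candidate, and stop when the hash meets the target."""
    assert len(header_prefix) == 76  # version, prev hash, merkle root, ntime, nbits
    for nonce in range(limit):
        digest = dsha256(header_prefix + struct.pack("<I", nonce))
        if int.from_bytes(digest, "little") <= target:
            return nonce
    return None  # range exhausted: the host must supply a new header

# With an impossible-to-miss target, the very first nonce qualifies:
assert scan_nonce_range(b"\x00" * 76, target=2 ** 256 - 1) == 0
```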

Yes, you could put a merkle processor into the silicon, and a coinbase-construction processor - but it would then have to match what the pools handle ... and with crap like GBT and Stratum spinning around, unable to decide what to do, what will be next?
That moving target is not worth risking hundreds of thousands of dollars to implement when it could change tomorrow - backward compatibility takes a back seat in bitcoin ...

Sergio_Demian_Lerner
Hero Member
*****
expert
Offline Offline

Activity: 540
Merit: 509


View Profile WWW
October 05, 2012, 04:12:32 PM
 #42

There is a possible FIX to add more nonce space while maintaining backwards compatibility:


We can take 16 bits from the block version field to be used as more nonce space:

4    version    uint32_t    Block version information, based upon the software version creating this block

split it as:

2    version    uint16_t    Block version information, based upon the software version creating this block
2    extra_nonce uint16_t    More nonce space


I think it will not break compatibility. Old nodes will only tell their users to upgrade.

What do you think?
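Sergio's proposed split is a plain bit-level reinterpretation of the existing 32-bit field; in code (a sketch of the proposed semantics, not an endorsement):

```python
def split_nversion(n_version: int):
    """Sergio's layout: low 16 bits stay the version, high 16 bits become nonce."""
    return n_version & 0xFFFF, (n_version >> 16) & 0xFFFF

def join_nversion(version: int, extra_nonce: int) -> int:
    return ((extra_nonce & 0xFFFF) << 16) | (version & 0xFFFF)

# Round-trips cleanly for clients that know the new layout:
assert split_nversion(join_nversion(2, 12345)) == (2, 12345)

# But an old client reading the whole field as one integer sees:
assert join_nversion(2, 12345) == 809041922  # a huge, version-like-only-in-name number
```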
Jutarul
Donator
Legendary
*
Offline Offline

Activity: 994
Merit: 1000



View Profile
October 05, 2012, 04:41:23 PM
 #43

No need to change the protocol. Let it be an engineering problem.

AFAIK a simple solution is to implement the capability into the mining rig to do merkle tree reorganization. Then if you have 10 transactions to play with you could create 10! = 3,628,800 permutations of the same block. If you have fewer than 10 transactions, the mining software could create placeholder transactions with unique hashes.
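The permutation count is just 10!, and reordering really does change the merkle root (a sketch; real Bitcoin keeps the coinbase first, so only the remaining transactions can be shuffled, and real merkle hashing has byte-order details this ignores):

```python
import hashlib
from itertools import permutations
from math import factorial

def dsha256(b: bytes) -> bytes:
    return hashlib.sha256(hashlib.sha256(b).digest()).digest()

def merkle_root(hashes):
    hashes = list(hashes)
    while len(hashes) > 1:
        if len(hashes) % 2:
            hashes.append(hashes[-1])
        hashes = [dsha256(hashes[i] + hashes[i + 1])
                  for i in range(0, len(hashes), 2)]
    return hashes[0]

assert factorial(10) == 3628800  # the figure quoted in the post

# Every ordering of even 4 dummy transactions yields a distinct root:
txs = [bytes([i]) * 32 for i in range(4)]
roots = {merkle_root(p) for p in permutations(txs)}
assert len(roots) == factorial(4)  # 24 orderings, 24 distinct roots
```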

The ASICMINER Project https://bitcointalk.org/index.php?topic=99497.0
"The way you solve things is by making it politically profitable for the wrong people to do the right thing.", Milton Friedman
btharper
Sr. Member
****
Offline Offline

Activity: 389
Merit: 250



View Profile
October 05, 2012, 04:52:15 PM
 #44

No need to change the protocol. Let it be an engineering problem.

AFAIK a simple solution is to implement the capability into the mining rig to do merkle tree reorganization. Then if you have 10 transactions to play with you could create 10!=3628800 permutations of the same block. If you have less than 10 transactions, the mining software could create spaceholder transactions with unique hashes.
I've seen blocks come out after a minute with dozens of transactions. As long as the transaction rate stays relatively high, I don't see this being a huge issue. Maybe the first few blocks after a find have fewer permutations to play with, but that's not a huge issue either.

As for mining hardware doing the merkle tree computations: sticking a simple microprocessor in near the upstream link would be trivial for anyone making these things, and hopefully it's something they've already thought of (at least the ability to adapt to small changes like the version number being incremented). The old system of fetching and reporting on difficulty-1 shares is pretty dead once ASICs come out; that's no shock. If I remember correctly, Eclipse was considering switching to difficulty-10 shares a few weeks ago. It may also simply be that the old getwork standard won't work anymore and something closer to the hardware has to manage the merkle root.
Sergio_Demian_Lerner
Hero Member
*****
expert
Offline Offline

Activity: 540
Merit: 509


View Profile WWW
October 05, 2012, 05:07:41 PM
 #45

No need to change the protocol. Let it be an engineering problem.

Well, it's not a change in the protocol implementation. It's a change in the protocol semantics.

Right now you can use part of the version field as nonce and nothing bad will happen AFAIK.
btharper
Sr. Member
****
Offline Offline

Activity: 389
Merit: 250



View Profile
October 05, 2012, 05:23:09 PM
 #46

No need to change the protocol. Let it be an engineering problem.
Well, it's not a change in the protocol implementation. It's a change in the protocol semantics.

Right now you can use part of the version field as nonce and nothing bad will happen AFAIK.
Even that is still a hard fork. What you have to worry about is "how will an old client see this?" - when the answer is "like something broke", you have a hard fork.

Messing with the version field (even resizing it) means that old clients will see a bucket full of fuck and have no idea what's going on. In particular, mixing in a fairly random field like a nonce means old clients start seeing random version numbers on blocks. Even leaving the version number the same and shifting the field over to make room in the future (keeping the two bytes meaningless until the switch) still means you're left-shifting the version bytes, and now old clients see version 65536. If you instead use the first (low) two bytes, there really isn't much visible change until the abrupt changeover, when the majority agrees to switch and starts using those extra two bytes for something they weren't used for before.
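The "version 65536" point is concrete byte layout: the header stores nVersion as four little-endian bytes, so moving a version of 1 into the upper two bytes makes an old client decode 65536 (a sketch of exactly that decoding):

```python
import struct

# Today: nVersion = 1, serialized little-endian in the header.
assert struct.unpack("<I", struct.pack("<I", 1))[0] == 1

# Proposal variant: version shifted into the high 2 bytes, low 2 bytes freed.
shifted = struct.pack("<HH", 0, 1)               # (freed bytes, version)
assert struct.unpack("<I", shifted)[0] == 65536  # old client: "version 65536"

# With a nonce in the freed bytes, old clients see essentially random versions:
noisy = struct.pack("<HH", 0xBEEF, 1)
assert struct.unpack("<I", noisy)[0] == 65536 + 0xBEEF
```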
Sergio_Demian_Lerner
Hero Member
*****
expert
Offline Offline

Activity: 540
Merit: 509


View Profile WWW
October 05, 2012, 05:41:50 PM
 #47

Messing with the version field (even resizing it) means that old clients will see a bucket full of fuck ...

I can't follow your line of reasoning.

If you force Block.nVersion >= 2, old clients will see "garbage" and will do nothing (or do the same as when a 0.6.x version sees a version-2 block: just process it).

Then you can keep the 2 least significant bytes for the version, and use the 2 most significant bytes for a nonce. Any miner has the power to do this, and nobody can stop them from doing it.

When Bitcoin version 7.1 is out, the dev team will need to change the code to reflect this change, splitting the nVersion field in two.
If they instead try to force the 2 most significant bytes of nVersion to zero, they will force a hard fork, because there will already be a block in the chain with nVersion >= 65536, and because old clients accept new blocks with nVersion >= 65536 they must follow the community and accept the new semantics.

In this regard, any miner has the power to change the protocol semantics.

Obviously, having consent from all parties involved is better.

best regards,
 Sergio.

Jutarul
Donator
Legendary
*
Offline Offline

Activity: 994
Merit: 1000



View Profile
October 05, 2012, 05:49:18 PM
 #48

In this regard, any miner has the power to change the protocol semantics.
Obviously, having consent from all parties involved is better.
If fields are not being used right now, they can obviously be hijacked for another purpose. That shouldn't break the protocol - it's a reinterpretation of the protocol. However, it's bad practice, because you never know how many independent projects use the same trick, and then you run into issues when you want to merge functionality.

Inaba
Legendary
*
Offline Offline

Activity: 1260
Merit: 1000



View Profile WWW
October 05, 2012, 05:55:11 PM
 #49

The BFL units have made provisions for a larger nonce range in our firmware already, so if/when this happens we should be able to accommodate it without any problems.  The rate we chew through the nonce has already caused some concern internally, so it's been a design feature almost from the beginning.  Personally, I would just like to see a 64-bit field for the nonce; that would give us a lot of expansion room, and it would take a truly monster machine to chew through it in a reasonable time frame.  I don't see monster machines like that happening anytime soon.

Maaku: The nonce range and difficulty are two entirely separate issues.  One is a problem for the hardware, the other for the software.  Both need to be addressed (and the software side has been addressed with variable difficulty and GBT/Stratum), but the hardware issue still remains as Kano has outlined, and I think his proposals are the best long-term solution so far; everything else is just a hack that is prone to problems and to being overrun as hardware gets faster.

If you're searching these lines for a point, you've probably missed it.  There was never anything there in the first place.
maaku
Legendary
*
expert
Offline Offline

Activity: 905
Merit: 1000


View Profile
October 05, 2012, 07:43:55 PM
 #50

Right now you can use part of the version field as nonce and nothing bad will happen AFAIK.
No, just.... no.

I'm an independent developer working on bitcoin-core, making my living off community donations.
If you like my work, please consider donating yourself: 13snZ4ZyCzaL7358SmgvHGC9AxskqumNxP
jgarzik
Legendary
*
qt
Offline Offline

Activity: 1526
Merit: 1001


View Profile
October 05, 2012, 07:45:15 PM
 #51

Block's nVersion policy is already specified, in the recently accepted BIP 34 - "Block v2, Height in Coinbase"

Please do not use nVersion as an additional nonce.

Stratum mining and getblocktemplate (BIP 22) already have provisions for ASIC miners, through the extraNonce field and other possibilities.  This provides unlimited nonce range.

Any change to the block header is a hard fork upgrade, and should be avoided at all costs.


Jeff Garzik, Bloq CEO, former bitcoin core dev team; opinions are my own.
Visit bloq.com / metronome.io
Donations / tip jar: 1BrufViLKnSWtuWGkryPsKsxonV2NQ7Tcj
Luke-Jr
Legendary
*
expert
Offline Offline

Activity: 2366
Merit: 1001



View Profile
October 05, 2012, 07:46:09 PM
 #52

Realistically, GBT (BIP 22/23) solved this on the pool/software end a long time ago, and stealing 8 bits from the ntime (about four minutes of rolling) is probably plenty for the hardware side for a while...
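The arithmetic behind those two numbers (a sketch; assumes one ntime step per second and the usual 2^32 nonce space):

```python
NONCE_SPACE = 2 ** 32
NTIME_BITS = 8

# Rolling 8 bits of ntime drifts the timestamp by 2^8 seconds:
roll_minutes = 2 ** NTIME_BITS / 60
print(roll_minutes)  # ~4.3 minutes of allowable drift

# Combined search space per merkle root: 2^(32+8) = 2^40 hashes,
# which keeps even a 1 TH/s device busy for about a second:
hashes_per_root = NONCE_SPACE * 2 ** NTIME_BITS
print(hashes_per_root / 1e12)  # ~1.1 s at 1 TH/s
```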

While using [part of] the block version as another nonce was at one time a possibility, starting with 0.7.0 any change there will cause the client to report that an upgrade is required. And again, it's not necessary.

slush
Legendary
*
Offline Offline

Activity: 1372
Merit: 1019



View Profile WWW
October 05, 2012, 08:28:44 PM
 #53

WTF I just read. Use block version as extra-extra-nonce?

* slush is checking today's date; no, today isn't April 1st

Sergio_Demian_Lerner
Hero Member
*****
expert
Offline Offline

Activity: 540
Merit: 509


View Profile WWW
October 05, 2012, 09:46:44 PM
 #54

WTF I just read. Use block version as extra-extra-nonce?

Not the whole field, just take a couple of bytes from it. 2 bytes is more than enough for version information... :)
btharper
Sr. Member
****
Offline Offline

Activity: 389
Merit: 250



View Profile
October 06, 2012, 02:13:37 AM
 #55

WTF I just read. Use block version as extra-extra-nonce?

* slush is checking today's date; no, today isn't April 1st
Yeah, that's been my reaction to a lot of these proposals. To me much of this thread reads like "Holy crap, ASICs mean the nonce won't be enough, bitcoin will break, my 56 billion bitcoins will be worthless because the network will break, and my 23 TH/s rig won't be able to request work fast enough" - followed by ways to radically alter the layout of the block to allow more play room in each one (doubling the nonce range (one of the saner requests), using two bytes of the version header as extra nonce space, making things up to add in as extra hashes).

Don't get me wrong: especially for larger rigs (BFL's 1.5TH/s for one), trying to use difficulty-1 getworks from a pool server isn't realistic at all.

What I'd like to see is a small device (think the size and power of a phone or home router) that can manage a few miners (say around 10 TH/s worth) over a local connection, which would help keep latency low among other things. On top of that, I'd be picky and say that everything should be connected via ethernet, possibly even using power over ethernet (up to ~25W / 25GH at BFL's current specs). Just imagine if setting up new mining gear involved plugging in one ethernet cable per miner (plus a little config setup - or perhaps they'd sync automatically to the mining server they're connected to).
gmaxwell
Moderator
Legendary
*
qt
Offline Offline

Activity: 2436
Merit: 1191



View Profile
October 06, 2012, 02:47:07 AM
 #56

Not the whole field, just take a couple of bytes from it. 2 bytes is more than enough for version information... Smiley
It's ridiculous, completely unnecessary, and craps on one of our very few backwards-compatible upgrade mechanisms. Backwards compatibility means we may need to put some data there in the future - since even incrementing the version can't change the layout without creating a hard fork. Perhaps worst of all, it's simply tasteless. :P The version field? Really. Yuck.

Even a fairly slow computer can do extra-nonce advancing for many TH/s, especially once you consider a few bits of ntime rolling - well, at least if someone doesn't insist on writing their SHA256 in javascript or some other ultra-slow interpreted language... And all those TH/s are not free. If someone can't manage a modest CPU or two per hundred thousand dollars in mining ASICs, they have a financial-management problem that no amount of protocol fiddling can fix. Certainly it's not healthy for the network to encourage miners so CPU-starved they can't even afford to update their coinbases fast enough to keep up with their hardware, even given the ~2^48 work reduction they get from nonce plus ntime rolling.
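Taking the post's ~2^48 figure at face value, the CPU-side workload is tiny (a sketch; the 100 TH/s farm is a made-up example):

```python
WORK_PER_COINBASE = 2 ** 48  # nonce space plus ntime rolling, per the post

def coinbase_updates_per_second(hashrate_hs: float) -> float:
    """Extra-nonce bumps (coinbase edit + merkle recompute) needed per second."""
    return hashrate_hs / WORK_PER_COINBASE

# Even a 100 TH/s farm needs only about one coinbase update every ~3 seconds:
print(coinbase_updates_per_second(100e12))  # ~0.36 updates/s
```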

As far as communications go, presumably people engineering products that cost as much as a nice car are smart enough not to put lawnmower wheels on something that needs tank treads. If they aren't, then I feel bad for their customers for all the other bits of the design they'll get wrong. But since it's totally unprecedented for a mining hardware maker to produce a horribly mismatched design that does something daft like shipping hardware with a fraction of the required power supply, we should have nothing to worry about…

Bitcoin will not be compromised
kano
Legendary
*
Online Online

Activity: 2464
Merit: 1040


Linux since 1997 RedHat 4


View Profile
October 06, 2012, 07:34:02 AM
 #57

Sigh - it does become rather silly if you look at it from this perspective:

People are suggesting ways to work around the problem that the nonce size is too small (including Stratum and other guff like that)

So ... what's the problem?

The nonce size is too small (as everyone agrees so far who has posted)

So ... make it bigger ... as I said in the first post 3 months ago ... though as I said later - 128 bits is a very long term solution.

... and as I said on the previous page ... create a version 2 block type that has the bigger nonce and set it to some time in the future
Yeah we've already lost 3 months ... though I guess in another 3, 6, or 9 months this won't be fixed and people will have the same excuse then that hard forks can't be done ...

btharper
Sr. Member
****
Offline Offline

Activity: 389
Merit: 250



View Profile
October 06, 2012, 09:10:41 AM
 #58

Sigh - it does become rather silly if you look at it from this perspective:

People are suggesting ways to work around the problem that the nonce size is too small (including Stratum and other guff like that)

So ... what's the problem?

The nonce size is too small (as everyone agrees so far who has posted)

So ... make it bigger ... as I said in the first post 3 months ago ... though as I said later - 128 bits is a very long term solution.

... and as I said on the previous page ... create a version 2 block type that has the bigger nonce and set it to some time in the future
Yeah we've already lost 3 months ... though I guess in another 3, 6, or 9 months this won't be fixed and people will have the same excuse then that hard forks can't be done ...
The nonce size is too small (as everyone agrees so far who has posted)

I would like to very explicitly disagree with this. I've never thought so, and I'd say I joined the thread over a month ago. There is no reason to go through the trouble of a hard fork to implement a solution in search of a problem.

This should be put up for a vote by miners (similar to BIP 16), where miners vote with their hashing power (though I do think that gives mining pool operators extra say, since they vote with their entire pool's hash rate), since they're the ones most affected and the ones who should care.
kano (Legendary)
October 06, 2012, 10:56:17 AM  #59

...
I would like to very explicitly disagree with this. I've never thought this and I joined the thread over a month ago I'd say. There is no reason to go through the trouble of a hard fork to implement a solution in search of a problem.
...
So if the nonce size isn't too small, why are we changing Bitcoin at all to support ASICs?

Why is this stupid BIP in 0.7.0 that's been forced on everyone, that also removed functionality from bitcoind, necessary?
Why have people called sticking more junk in the coinbase an 'extra' nonce? What? They gave it the wrong name?
The current nonce is big enough? It's not necessary?

Why is Stratum necessary?
Coz when you get a block header to hash, you can only hash 2^32 times before needing to change it.
Why does that restriction exist?

What was the stupid (short term) hack used up until now? Roll ntime?
Why? Certainly couldn't be related to the nonce size? Tongue

It's gonna work fine without any changes right?

Oh ... no, that's not correct Tongue

Where IS the restriction at the moment?

OK back to the topic subject now Tongue

slush (Legendary)
October 06, 2012, 11:28:03 AM  #60

Why is Stratum necessary?

"Solves small iteration space without need of protocol hard fork." Isn't that reason big enough?

Quote
Where IS the restriction at the moment?

The restriction is that changing the nonce size doesn't affect only miners, but all participants in the Bitcoin project. Breaking compatibility is always the last possible step, because it brings unnecessary instability.

Although I may agree that a bigger nonce size would be the cleanest solution, it is simply not feasible in the real world. It looks like some people are too focused on mining, but a protocol change is not just about mining. There already exist solutions which solve the issues for miners, so I'll definitely prefer those over breaking the whole network.

kano (Legendary)
October 06, 2012, 11:54:40 AM  #61

Although I thought it was obvious, I added the quote to my post above to make clear what that post was responding to ...

However, slush, even your post directly implies working around the problem rather than fixing it.

... and it won't be long before each of the software workarounds is no longer sufficient, and then a workaround will be necessary in the silicon as well.


... or ........ just fix the actual problem in the first place (yeah 3 months have already been wasted)

It is clearly a problem and there is a clear fix - a LONG term fix - increase the nonce size to 128 bits.

jevon (Jr. Member)
October 06, 2012, 11:57:08 AM  #62

Any change to the block header is a hard fork upgrade, and should be avoided at all costs.

His point is that stealing 2 bytes from the version shouldn't be a hard fork or any fork (unless 0.7.0 panics when it sees a new version number; I haven't read that code yet ... nVersion is a signed int, so you could stipulate that the high bit be set, which makes it read as a negative version number to <=0.7.0).

The version bytes were put there for extensibility. We need some now. We could use them now, or save them for some future need for bytes.
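That layout can be sketched roughly; this is a hypothetical illustration of the idea above, not any deployed protocol change, and the field split and constants are invented for the example:

```python
import struct

# Hypothetical sketch: borrow the top 2 bytes of the 4-byte nVersion
# field as a second nonce ("nonce2"). Illustrative only.

HEADER_FMT = "<I32s32sIII"   # nVersion, prev hash, merkle root, nTime, nBits, nNonce

def pack_header(version, prev_hash, merkle_root, ntime, nbits, nonce, nonce2=0):
    """Pack an 80-byte header with a 16-bit nonce2 folded into nVersion's
    high bytes. Requiring the top bit to be set makes nVersion read as a
    negative number to old clients that treat it as a signed int."""
    v = (version & 0xFFFF) | ((nonce2 & 0xFFFF) << 16)
    return struct.pack(HEADER_FMT, v, prev_hash, merkle_root, ntime, nbits, nonce)

header = pack_header(2, b"\x00" * 32, b"\xab" * 32, 1349500000, 0x1A05DB8B, 0, nonce2=0x8001)
assert len(header) == 80   # still exactly the standard header size
```

Each nonce2 increment gives the hasher a fresh 2^32 nonce range without touching the merkle root or nTime.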

OTOH, Stratum seems to solve this nicely. The CPU has to do about 10 hashes and send 32 bytes of merkle root for every 4 billion hashes the ASIC does. The ASIC is fast, but is it anywhere near 400,000,000 times faster than the CPU? A CPU manages ~4 MH/s, so the ASIC would have to be around 1.7 PH/s to outrun the CPU's part of the job.
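That ratio can be sanity-checked directly; the 10-hash and 4 MH/s figures below are the post's assumptions, not measurements:

```python
# Back-of-the-envelope check of the CPU-vs-ASIC ratio above.
# Both constants are assumptions taken from the discussion.

CPU_HASHES_PER_WORK_ITEM = 10   # host hashes to rebuild one merkle root
CPU_RATE = 4e6                  # ~4 MH/s of SHA-256 on a CPU

def host_limited_hashrate():
    """ASIC hashrate one CPU can keep fed, one 2^32 nonce range per root."""
    items_per_sec = CPU_RATE / CPU_HASHES_PER_WORK_ITEM
    return items_per_sec * 2**32

# Roughly 1.7e15 H/s (~1.7 PH/s) before the host becomes the bottleneck.
print(f"{host_limited_hashrate():.2e} H/s")
```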
kano (Legendary)
October 06, 2012, 12:06:10 PM  #63

Repeating and explaining ... since it seems to be necessary ...

If the time to talk to the device is of the same order of magnitude as how long the device takes to process the work, then there is a problem.

I deal with this stuff in e.g. the cgminer Icarus and BFL drivers: the issue of how much time is wasted talking to the device.
It is already noticeable in the last digit of the miner's performance display.
With a 10x performance increase (ASIC) it will be even more noticeable.

What x performance will we get from a device this year ... next year ... after that ... ?

Nothing to do with CPU speed.

Pieter Wuille (Legendary)
October 06, 2012, 12:17:39 PM  #64

Why is this stupid BIP in 0.7.0 that's been forced on everyone, that also removed functionality from bitcoind, necessary?

I think you're missing something here. 0.7.0 was most certainly not forced on anyone. It does implement BIP34 (a fully backward-compatible change), which only takes effect as soon as a majority of mining power participates in it. Therefore it's indeed preferable for miners and other infrastructure to upgrade to 0.7.0. But the key word here is backward compatible. Every change that has ever been done either had no protocol impact (like removing getmemorypool) or only a backward-compatible one (like BIP16 and BIP30).

What you are proposing is a completely incompatible upgrade. Blocks created by a new miner would simply be ignored by every single old node (not just old miners, everyone). The moment such a block gets created, and a majority of mining power is behind the change, there will instantly appear a fork in the block chain. One side maintained by the new nodes, one side by the old nodes. Every existing non-spent transaction output before the split would get to be spent once in each side. This would be a disaster. The only way such a "hard fork" as it is called (essentially a non-backward compatible change to the validity rules for blocks) is possible, is when it is very carefully planned in advance (let's say 1-2 years) and everyone agrees (not just a majority, there must be exceedingly high consensus about this).

I'm sorry, but no, a performance problem for miners is not worth a hard fork. Miners make money; I'm sure they'll find solutions on their own (like Stratum) which don't require the rest of the network to upgrade. If the hardware or the software can't deal with such high performance, switch to other hardware or software. Let the device do ntime rolling or calculate its own merkle root. Yes, I fully agree a 64-bit nonce would have made things easier, but there is simply no way of changing that right now. I don't mean hard - it's just impossible. Some people would refuse to have a protocol change forced on them, and that's enough to ruin Bitcoin in the face of a hard fork.


aka sipa, core dev team

Tips and donations: 1KwDYMJMS4xq3ZEWYfdBRwYG2fHwhZsipa
slush (Legendary)
October 06, 2012, 12:19:30 PM  #65

However, slush, even your post directly implies working around the problem rather than fixing it.

Sure. My post was about the fact that I prefer a "workaround" in miners to breaking compatibility of the whole Bitcoin network.

Quote
I deal with this stuff in e.g. the cgminer Icarus and BFL drivers.

USB 2.0 has a theoretical speed of 480 Mbit/s. A block header is ~200 bytes. I know I'm over-simplifying it now, but theoretically it is possible to transfer up to 300,000 block headers per second to the miner, even with prehistoric USB 2.0.

Maybe we should focus on fixing the protocols in these devices rather than fixing the protocol of the Bitcoin network? Why should a $30k device use serial-port communication designed in the middle of the 20th century?
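slush's arithmetic, spelled out. The 480 Mbit/s and ~200-byte figures are the ones from the post; real USB 2.0 bulk throughput is considerably lower, so treat this as an upper bound, not a measurement:

```python
# Upper-bound estimate of how much hashrate a USB 2.0 link could feed,
# at one ~200-byte work unit per 2^32-nonce range. Figures from the post.

LINK_BITS_PER_SEC = 480e6
WORK_UNIT_BYTES = 200          # header plus framing, per the post's estimate

units_per_sec = LINK_BITS_PER_SEC / 8 / WORK_UNIT_BYTES
device_hashrate_supported = units_per_sec * 2**32  # each unit covers 2^32 nonces

print(int(units_per_sec))                       # 300000 work units per second
print(f"{device_hashrate_supported:.2e} H/s")   # ~1.3e+15 H/s, i.e. ~1.3 PH/s
```

Even this crude ceiling sits far above any single device of the era, which is the point being argued.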

jevon (Jr. Member)
October 06, 2012, 12:22:27 PM  #66

If the time to talk to the device is of the same order of magnitude as how long the device takes to process the work, then there is a problem.

Your sig says 54 GH/s, which would consume about 12.57 merkle roots per second. You don't need to send the whole header; only the merkle root changes. That's 12.57 × 32 ≈ 402 bytes per second.

It would be a little strange for a device to be able to process 4 billion hashes faster than you can give it the 32 bytes to hash.

Edit:
Also, it only needs enough merkle roots to last for 1 second. After 1 second it can update nTime and cycle through its supply of merkle roots again. If you sent it the block header and 13 merkle roots, it would have all it needs.
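The buffering arithmetic above, as a small sketch (54 GH/s is the figure from the signature being discussed):

```python
import math

# At H hashes/s the device burns through H / 2^32 merkle roots per
# second; rolling nTime forward one second lets it reuse the same
# roots, so buffering one second's worth is enough.

def roots_per_second(hashrate):
    return hashrate / 2**32

rate = 54e9                          # the 54 GH/s rig mentioned above
buffer_needed = math.ceil(roots_per_second(rate))
print(buffer_needed)                 # 13 roots cover a full second
print(buffer_needed * 32)            # 416 bytes of merkle-root data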
kano (Legendary)
October 06, 2012, 12:26:43 PM  #67

Why is this stupid BIP in 0.7.0 that's been forced on everyone, that also removed functionality from bitcoind, necessary?

I think you're missing something here. 0.7.0 was most certainly not forced on anyone. It does implement BIP34 (a fully backward-compatible change), which only takes effect as soon as a majority of mining power participates in it. Therefore it's indeed preferable for miners and other infrastructure to upgrade to 0.7.0. But the key word here is backward compatible. Every change that has ever been done either had no protocol impact (like removing getmemorypool) or only a backward-compatible one (like BIP16 and BIP30).

What you are proposing is a completely incompatible upgrade. Blocks created by a new miner would simply be ignored by every single old node (not just old miners, everyone). The moment such a block gets created, and a majority of mining power is behind the change, there will instantly appear a fork in the block chain. One side maintained by the new nodes, one side by the old nodes. Every existing non-spent transaction output before the split would get to be spent once in each side. This would be a disaster. The only way such a "hard fork" as it is called (essentially a non-backward compatible change to the validity rules for blocks) is possible, is when it is very carefully planned in advance (let's say 1-2 years) and everyone agrees (not just a majority, there must be exceedingly high consensus about this).

I'm sorry, but no, a performance problem for miners is not worth a hard fork. Miners make money; I'm sure they'll find solutions on their own (like Stratum) which don't require the rest of the network to upgrade. If the hardware or the software can't deal with such high performance, switch to other hardware or software. Let the device do ntime rolling or calculate its own merkle root. Yes, I fully agree a 64-bit nonce would have made things easier, but there is simply no way of changing that right now. I don't mean hard - it's just impossible. Some people would refuse to have a protocol change forced on them, and that's enough to ruin Bitcoin in the face of a hard fork.


So, have you plotted out the timeline for the death of BTC yet?

Either of these will do:
1) When the number of transactions on the network is beyond what the current design can support
2) When sha2 IS broken (not an IF, a WHEN) and we have to move on to sha3

According to your argument, when either of these happen, BTC will die since a hard fork isn't possible.

Yes, I know the soft fork in April was done badly, but just because it was fucked up then doesn't mean soft or hard forks can't be done.

Luke-Jr (Legendary)
October 06, 2012, 12:29:29 PM  #68

GBT was necessary even before ASICs, because the real problem isn't that the nonce range is too small; it's that pooled mining today tends to centralize control too much. The key feature of GBT is decentralization; that it happens to also solve the ASIC problem came as a nice side-effect.

Pieter Wuille (Legendary)
October 06, 2012, 12:30:15 PM  #69

I'm sure that when the alternative is Bitcoin becoming useless (either through scaling issues or broken cryptography), getting consensus on the necessity of an upgrade won't be a problem (most likely still very hard, but doable).

Good luck getting the Bitcoin community convinced that it's necessary because of a performance problem for miners, who are already making money.

kano (Legendary)
October 06, 2012, 12:35:04 PM  #70

If the time to talk to the device is of the same order of magnitude as how long the device takes to process the work, then there is a problem.

Your sig says 54 GH/s, which would consume about 12.57 merkle roots per second. That's 12.57 × 32 ≈ 402 bytes per second.

It would be a little strange for a device to be able to process 4 billion hashes faster than you can give it the 32 bytes to hash.

No, I'm talking about the current silicon, without implementing a merkle processor + coinbase generator in silicon.
The workarounds.

Go ahead and tell everyone that the millions they are throwing at ASICs at the moment will be worthless soon Smiley

As I have already said, no company would be silly enough to do Stratum or the other thing in silicon, since there is the issue of meeting the random requirements of the 'next' hack solution to this problem, and thus no idea how long any device using this silicon could be used.
ROI ..........

kano (Legendary)
October 06, 2012, 12:37:47 PM  #71

GBT was necessary even before ASIC, because the real problem isn't the nonce range being too small, it's that pooled mining today tends to centralize control too much. The primary key feature of GBT is decentralization; that it happens to also solve the ASIC problem came as a nice side-effect.
A perceived problem that even MOST of the network disagrees with you on.

MOST of the network mines on pools - count the % of the network using P2Pool or solo mining and you will see your argument that GBT solves some control problem holds no water at all.

jevon (Jr. Member)
October 06, 2012, 12:51:51 PM  #72

If the time to talk to the device is of the same order of magnitude as how long the device takes to process the work, then there is a problem.

Your sig says 54 GH/s, which would consume about 12.57 merkle roots per second. That's 12.57 × 32 ≈ 402 bytes per second.

It would be a little strange for a device to be able to process 4 billion hashes faster than you can give it the 32 bytes to hash.

No, I'm talking about the current silicon without implementing a merkle processor + coinbase generator in silicon.
So was I.

I'm sorry, maybe I'm missing something. The CPU increments the coinbase extra nonce and recalculates the merkle branch to make new merkle roots to send to the ASIC. Only the merkle roots need to go over the wire.
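That recomputation is cheap: with the merkle branch cached, a new extranonce costs one coinbase hash plus one hash per tree level. A minimal sketch with toy data (real coinbase transactions and txids are serialized quite differently; these byte strings are illustrative only):

```python
import hashlib

def sha256d(b: bytes) -> bytes:
    """Bitcoin's double SHA-256."""
    return hashlib.sha256(hashlib.sha256(b).digest()).digest()

def merkle_root_from_branch(coinbase_tx: bytes, branch: list) -> bytes:
    """Recompute the merkle root after editing the coinbase: hash the
    coinbase, then fold in the cached branch nodes (one per tree level).
    The branch is fixed while the transaction set is fixed, so each new
    extranonce costs only len(branch) + 1 double-hashes."""
    h = sha256d(coinbase_tx)
    for node in branch:
        h = sha256d(h + node)
    return h

# Toy data: a fake coinbase ending in a 4-byte extranonce, and a
# made-up 3-level branch (as for 8 transactions).
branch = [bytes([i]) * 32 for i in (1, 2, 3)]
r1 = merkle_root_from_branch(b"coinbase" + (0).to_bytes(4, "little"), branch)
r2 = merkle_root_from_branch(b"coinbase" + (1).to_bytes(4, "little"), branch)
assert len(r1) == 32 and r1 != r2   # each extranonce yields a fresh root
```

Only the resulting 32-byte roots ever cross the wire to the hashing device.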
Luke-Jr (Legendary)
October 06, 2012, 12:52:31 PM  #73

GBT was necessary even before ASIC, because the real problem isn't the nonce range being too small, it's that pooled mining today tends to centralize control too much. The primary key feature of GBT is decentralization; that it happens to also solve the ASIC problem came as a nice side-effect.
A perceived problem that even MOST of the network disagrees with you on.

MOST of the network mines on pools - count the % of network using P2Pool or solomining and you will see your argument that GBT solves some control problem holds no water at all.
p2pool and solo, while they do require decentralization, are not unique in supporting it, and both come with their own problems.

slush (Legendary)
October 06, 2012, 01:17:17 PM  #74

It would be a little strange for a device to be able to process 4 billion hashes faster than you can give it the 32 bytes to hash.

+1

As I have already said, no company would be silly enough to do Stratum or the other thing in silicon

Stop spreading FUD. Nobody is talking about implementing Stratum/GBT directly in the chip.

How about the huge possible optimizations of the wire protocol between the computer and the device? Why not optimize that first? You said that current wire protocols have problems handling a few block headers per second. If that's true, then you can gain many orders of magnitude in performance just by optimizing it, without breaking anything in Bitcoin.

Oh, I forgot, it is because you need drama :-P.

gmaxwell (Moderator, Legendary)
October 06, 2012, 02:33:31 PM  #75

So if nonce size isn't too small - why are we changing bitcoin at all to support ASIC?

We are not and have not.

Quote
Why is this stupid BIP in 0.7.0 that's been forced on everyone, that also removed functionality from bitcoind, necessary?

Wtf are you talking about.

Quote
Why have people called sticking more junk in the coinbase an 'extra' nonce? What? They gave it the wrong name?
The current nonce is big enough? It's not necessary?

Extranonce has been part of the design since day one. It was the only way to make your coinbase transactions have unique IDs other than constantly changing to a new public key whenever you update the candidate block. It's fundamental and is required for things independent of the nonce size.

Quote
Coz when you get a block header to hash, you can only hash 2^32 times before needing to change it.
Why does that restriction exist?

A _4 billion to 1_ work factor is pretty reasonable. There are a couple of tradeoffs here. Header size is utterly critical for SPV nodes of various kinds, and even adding a single byte to the header increases its size by 1.25%. Having too large a workfactor ratio between header-only work and block-update work encourages miners to build devices which never process transactions: cheaper to just build a constant root with no transactions and increment the header until you solve a block. On a fresh design I may have given a bit more space to the nonce, sure. But there is no need to change it now.
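The two numbers in that trade-off, computed directly from the standard 80-byte header layout:

```python
# The header-size and work-factor figures cited above.

HEADER_BYTES = 80    # version 4 + prev 32 + merkle 32 + time 4 + bits 4 + nonce 4
NONCE_BITS = 32

growth_per_byte = 1 / HEADER_BYTES   # fractional growth per extra header byte
work_factor = 2**NONCE_BITS          # hashes available per header-only update

print(f"{growth_per_byte:.2%}")      # 1.25%
print(work_factor)                   # 4294967296
```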

Quote
It's gonna work fine without any changes right?
Oh ... no, that's not correct Tongue

Yes, in fact it will.

Quote
Yes I know the soft fork in April was done badly, but just because it was fucked up then, doesn't mean soft or hard forks can't be done.

Huh? The P2SH deployment went pretty smoothly. About the only bad thing that came out of it was that we discovered Bitcoin had a bug in handling large reorgs with lots of transactions. Timed less well, with large miners unable to reorg because of it, this could have been pretty fatal for the network. But since it only really impacted non-mining nodes (plus the tiny minority of non-updated miners), due to the fork being invalid P2SH, it was harmless. It's very fortunate, because I'm not sure we would have discovered that BDB issue even by now without it.

Quote
Where IS the restriction at the moment?

That we're not permitted to throw you into a volcano.


Bitcoin will not be compromised
kano (Legendary)
October 06, 2012, 02:41:19 PM  #76

It would be a little strange for a device to be able to process 4 billion hashes faster than you can give it the 32 bytes to hash.

+1
-2, keep reading

Quote
As I have already said, no company would be silly enough to do Stratum or the other thing in silicon

Stop spreading FUD. Nobody is talking about implementing Stratum/GBT directly in chip.

How about the huge possible optimizations of the wire protocol between the computer and the device? Why not optimize that first? You said that current wire protocols have problems handling a few block headers per second. If that's true, then you can gain many orders of magnitude in performance just by optimizing it, without breaking anything in Bitcoin.

Oh, I forgot, it is because you need drama :-P.
At least make sure you know what you are talking about before hitting reply Tongue

Firstly - show me where I said "few block headers per second" - at least quote correctly ...

If you are dealing with work at a nonce-range level, then for each nonce range you are passing work to the device.
If the time to transfer the ~60-80 bytes of hash data to the device is even within an order of magnitude of the time the device takes to process that data, then you have a problem.

FPGAs that use serial over USB have the beginnings of this issue - as per: BFL, Icarus, ModMiner, Cairnsmore.
... and the 2 companies that will be releasing their ASICs first are 2 of those, BFL and ModMiner, with Icarus not far behind.

The workarounds suggested imply implementing merkle-root processing and coinbase generation in silicon ... as I have already explained ... but since it was a little difficult for you to understand, I've repeated it. Maybe you are at the top of the first curve Tongue

Anyway, yet again, a workaround to resolve the problem that the nonce size is too small.
However, how generic is that workaround if it were implemented in silicon?
Would it 'always' work, or would it only work with current implementations and need changing in the not-too-distant future?
Quote me: "ROI"

All these problems and questions are actually resolved by simply increasing the size of the nonce field.
The cause of the problem, and the correct and simplest solution to the problem.
The only argument against it is a hard-fork - which can be planned as a future change to resolve the problem before it becomes a problem.
Yeah a little forward thinking goes a long way.

All this melodrama about hard forks being impossible - well, when the first one comes along that is required 'now', then clearly that will be the end of BTC, since everyone's argument is that it is impossible to do a hard fork ... which in this case would fix the problem at the root cause, solving all related issues with one change and a simple implementation everywhere.

As I'll repeat, as I've implied before, this really is a good example of people failing to learn from the past.
In this case a well-known one: MS-DOS and the early x86 architecture, where the design was based on an assumption that turned out to be an obvious underestimate not far down the track, and for years hacks were used to work around the problem because it was considered too difficult to fix properly ... until it was fixed properly by removing the original constraints.

kano (Legendary)
October 06, 2012, 03:02:05 PM  #77

So if nonce size isn't too small - why are we changing bitcoin at all to support ASIC?

We are not and have not.
Oh that's right, GBT related changes didn't happen in bitcoind - I forgot.
Quote
Quote
Why is this stupid BIP in 0.7.0 that's been forced on everyone, that also removed functionality from bitcoind, necessary?

Wtf are you talking about.
What?
There were that many BIPs added in 0.7.0 that you couldn't work out which one I was referring to?
Oh well.
Quote
Quote
Why have people called sticking more junk in the coinbase an 'extra' nonce? What? They gave it the wrong name?
The current nonce is big enough? It's not necessary?

Extranonce has been part of the design since day one. It is the only way to make your coinbase transactions have unique IDs other than constantly changing to a new public key whenever you update the candidate block. It's fundamental and is required for things independent of the nonce size.
Ah, glad you agree: yes, the nonce field is too small; the solutions using Stratum and GBT have had to extend it into the coinbase Smiley
So ... where is an extra nonce referred to in the original Bitcoin document by Satoshi? Oh, it isn't.
I'd guess you like the roll-ntime hack also?
Quote
Quote
Coz when you get a block header to hash, you can only hash 2^32 times before needing to change it.
Why does that restriction exist?

A _4 billion to 1_ work factor is pretty reasonable. There are a couple tradeoffs here. Header size is utterly critical for SPV nodes of various kinds, and even adding a single byte to the header increases the size by 1.25%. Having too much workfactor ratio between header only work and block update work encourages miners to build devices which never process transactions: Cheaper to just build a constant root with no transactions and increment the header until you solve a block. On a fresh design I may have given a bit more space to nonce, sure. But there is no need to change it now.
"you may have" Cheesy
Read previous post Tongue
Quote
Quote
It's gonna work fine without any changes right?
Oh ... no, that's not correct Tongue

Yes, in fact it will
Ah, OK, none of these Stratum or GBT changes were needed. OK.
Since they are hacks anyway, just throw them away then.
Quote
Quote
Where IS the restriction at the moment?

That we're not permitted to throw you in a volcano.

But you are permitted to run back to Luke's lap ... wan wan

Luke-Jr (Legendary)
October 06, 2012, 03:06:48 PM  #78

Kano, you're confused because you're only in Bitcoin for "free money". Someday, you'll need to realize "free money" is not really a goal for Bitcoin as a whole.

slush (Legendary)
October 06, 2012, 03:18:28 PM  #79

Quote from: slush
How about the huge possible optimizations of the wire protocol between the computer and the device? Why not optimize that first? You said that current wire protocols have problems handling a few block headers per second. If that's true, then you can gain many orders of magnitude in performance just by optimizing it, without breaking anything in Bitcoin.

Quote from: slush
USB 2.0 has a theoretical speed of 480 Mbit/s. A block header is ~200 bytes. I know I'm over-simplifying it now, but theoretically it is possible to transfer up to 300,000 block headers per second to the miner, even with prehistoric USB 2.0.

Maybe we should focus on fixing the protocols in these devices rather than fixing the protocol of the Bitcoin network? Why should a $30k device use serial-port communication designed in the middle of the 20th century?

Once again, bolded especially for kano. Please respond to this.

Quote
If you are dealing with work at a nonce range level, then for each nonce range you are passing work to the device.

How many iterations are possible for one nonce range? 2^32? So 80 bytes per 2^32 hashes? That's around 233 low-level "jobs" (nonce ranges) per terahash per second. And isn't the theoretical wire limit around 300,000 nonce ranges per second for USB 2.0?

Quote
FPGA's that use serial over USB have the beginnings of this issue - as per: BFL, Icarus, ModMiner, Cainsmore.
... and the 2 companies that will be releasing their ASIC first are 2 of those, BFL and ModMiner, with Icarus not far behind.

Serial over USB is that obsolete technology I was referring to in my previous post. It looks like *there* is the space for improvement, *not* in the Bitcoin protocol.

Quote
The workarounds suggested imply implementing merkle root processing and coinbase generation processing in silicon ...

Nobody is proposing this.

Quote
Would it 'always' work or would it only work with current implementations and need changing in the not too distant future?

Yes. There are already technologies for feeding petahashes per second to an ASIC device. However, device manufacturers will probably need to drop obsolete wire protocols and use something faster than serial over USB.

Quote
All these problems and questions are actually resolved by simply increasing the size of the nonce field.
The cause of the problem, and the correct and [s]simplest[/s] hardest solution to the problem.

Let me fix it for you.

Quote
The only argument against it is a hard-fork

It looks to me like you live in some alternate universe. A hard fork is definitely possible, and there can be scenarios where it becomes necessary, but ASIC miners are not that case.

*headdesk*

gmaxwell (Moderator, Legendary)
October 06, 2012, 04:16:00 PM  #80

Oh that's right, GBT related changes didn't happen in bitcoind - I forgot.
HUH? GBT is a rename of the poorly named getmemorypool which we've had for a long time— much longer than anyone was talking about 1TH/s mining devices— it's used by pool daemons and p2pool. Mostly for dynamic coinbase creation (e.g. to do fancy payouts) and because bitcoind coinbase creation is somewhat slow (it's synchronous, single threaded, and basically completely unoptimized).  The changes made in 0.7 beyond the rename were some extra fields to remove a potential DOS attack where some of the limits couldn't be properly enforced when the caller does dynamic transaction selection, support for the height in the coinbase, and some general cleanups to make it more 'designed' than accreted. BIP 22 makes no mention of 'asic', nor do the pull requests for the changes, and I don't believe asics _ever_ came up in discussion related to it around the reference client's development.

To the extent that GBT is useful for higher speed miners getmemorypool was absolutely as useful.

Quote
So ... where is extra nonce referred to in the original Bitcoin document by Satoshi? Oh it isn't.
The same place scripts are referenced as well as locktime and nbits.

Quote
There were that many BIPs added in 0.7.0 that you couldn't work out which one I was referring to?
It's awfully hard to work out what you're talking about when you haven't the foggiest clue about what you're talking about yourself.

DoomDumas (Legendary)
October 07, 2012, 06:30:32 AM  #81

Voting for fixing root cause instead of patching problems...
IMHO, Kano is arguing about something interesting: seeing much farther into the future than we are used to... all must learn from history.

Meanwhile, I doubt ASICs or "miner will" are a reason to rush things.
It must be well planned, if considered necessary... As I like to say:

What do we want ?
Evidence based change !
When do we want it ?
After peer review !

was my 2 satoshi !
kano
Legendary
*
Online Online

Activity: 2464
Merit: 1040


Linux since 1997 RedHat 4


View Profile
October 07, 2012, 12:50:42 PM
 #82

...
Quote
So ... where is extra nonce referred to in the original Bitcoin document by Satoshi? Oh it isn't.
The same place scripts are referenced as well as locktime and nbits.
Quote it - coz it isn't there.
Have you even read "Bitcoin: A Peer-to-Peer Electronic Cash System"?

Quote
Quote
There were that many BIPs added in 0.7.0 that you couldn't work out which one I was referring to?
It's awfully hard to work out what you're talking about when you haven't the foggiest clue about what you're talking about yourself.
Yes, I guess for someone like you it is difficult to associate only BIP 22 with the thread title.

I'm not sure how you could associate the thread title with BIP 34 or 35, but that confusion is (in your opinion) 'having a clue'
So I guess there's no point discussing this any further with someone as 'clued in' as you are :P

Pool: https://kano.is Here on Bitcointalk: Forum BTC: 1KanoPb8cKYqNrswjaA8cRDk4FAS9eDMLU
FreeNode IRC: irc.freenode.net channel #kano.is Majority developer of the ckpool code
Help keep Bitcoin secure by mining on pools with full block verification on all blocks - and NO empty blocks!
Pieter Wuille
Legendary
*
qt
Offline Offline

Activity: 1050
Merit: 1005


View Profile WWW
October 07, 2012, 01:06:10 PM
 #83

BIP 22 doesn't have anything at all to do with the block validity rules, which is the one thing this is all about. It's about a client-specific way of making mining software interact with the daemon (but it can, and hopefully will, be adopted by other software too). You could replace the BIP22 implementation in the daemon you're using with any equivalent, and nothing would break - nobody would even notice - as long as you patch the software that interacts with it as well.

BIP 34 however does concern block validity rules. It's not directly related to mining, so maybe it is harder to see why it is relevant in this discussion.

aka sipa, core dev team

Tips and donations: 1KwDYMJMS4xq3ZEWYfdBRwYG2fHwhZsipa
kano
Legendary
*
Online Online

Activity: 2464
Merit: 1040


Linux since 1997 RedHat 4


View Profile
October 07, 2012, 01:07:36 PM
 #84

Kano, you're confused because you're only in Bitcoin for "free money". Someday, you'll need to realize "free money" is not really a goal for Bitcoin as a whole.
... lulz reality please.
Your desire to control bitcoin and your self-righteous intolerance have long ago blinded you to the FUD you regularly spread.

btharper
Sr. Member
****
Offline Offline

Activity: 389
Merit: 250



View Profile
October 07, 2012, 02:57:43 PM
 #85

...
Quote
So ... where is extra nonce referred to in the original Bitcoin document by Satoshi? Oh it isn't.
The same place scripts are referenced as well as locktime and nbits.
Quote it - coz it isn't there.
Have you even read "Bitcoin: A Peer-to-Peer Electronic Cash System"?
From the wiki:
I don't know the code well enough to reference it, but here's a reference by Theymos on the wiki from December 2010, which at the very least predates anything fast on the network. Per http://bitcoin.sipa.be/, the network back then was under 100 GH/s.

So yes, people have been worrying about 2^32 not being enough for a long time; however, if you're able to spin the merkle root yourself, it's not a problem. Or if you're able to get a new one with relatively low latency every time (i.e. not from a remote web server such as a pool), it's okay and you can hash away on it.

Nothing about spinning new merkle roots has to be in silicon; a small microprocessor (CPUs on my ASICs?!?!?) would work fine. One probably exists there for other reasons already, although it may not have enough power to do anything as useful as generating new merkle roots.
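The "spin the merkle root yourself" idea can be sketched in a few lines of Python. The coinbase layout, extranonce placement, and byte values below are illustrative only, not any pool's actual format:

```python
import hashlib

def dsha256(b: bytes) -> bytes:
    """Bitcoin's double SHA-256."""
    return hashlib.sha256(hashlib.sha256(b).digest()).digest()

def merkle_root(txids: list) -> bytes:
    """Fold a list of txids up to the merkle root, duplicating the
    last entry when a level has an odd count (Bitcoin's rule)."""
    level = list(txids)
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        level = [dsha256(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]

def roll_extranonce(coinbase_prefix: bytes, coinbase_suffix: bytes,
                    other_txids: list, extranonce: int) -> bytes:
    """Rebuild the coinbase with a new extranonce and return the new
    merkle root -- each call opens a fresh 2^32 header-nonce space."""
    coinbase = coinbase_prefix + extranonce.to_bytes(4, "little") + coinbase_suffix
    return merkle_root([dsha256(coinbase)] + other_txids)

# Two extranonce values give two distinct roots over the same tx set:
txids = [dsha256(b"tx1"), dsha256(b"tx2")]
r0 = roll_extranonce(b"\x01" * 42, b"\x00" * 50, txids, 0)
r1 = roll_extranonce(b"\x01" * 42, b"\x00" * 50, txids, 1)
assert r0 != r1
```

Each extranonce value buys another full 2^32 header-nonce range, which is the off-chain alternative to widening the header itself.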
Pieter Wuille
Legendary
*
qt
Offline Offline

Activity: 1050
Merit: 1005


View Profile WWW
October 07, 2012, 03:04:55 PM
 #86

Generating a merkle root is exactly the same operation as the ASIC is specifically designed for already (double SHA256). No need to embed a CPU on the ASIC (an ASIC is a CPU, but one for a very specific purpose only).

That doesn't mean they should - it can be done at any stage (in the pool, in the computer controlling the asic, in some intermediate controller chip, ... I don't care). The point is that this is not hard; it's only different from how the current infrastructure works.

2112
Legendary
*
Offline Offline

Activity: 2100
Merit: 1025



View Profile
October 07, 2012, 07:59:59 PM
 #87

I have nothing to add, I'm just quoting this gem for the future reference.

Generating a merkle root is exactly the same operation as the ASIC is specifically designed for already (double SHA256). No need to embed a CPU on the ASIC (an ASIC is a CPU, but one for a very specific purpose only).

That doesn't mean they should - it can be done at any stage (in the pool, in the computer controlling the asic, in some intermediate controller chip, ... I don't care). The point is that this is not hard; it's only different from how the current infrastructure works.

Please comment, critique, criticize or ridicule BIP 2112: https://bitcointalk.org/index.php?topic=54382.0
Long-term mining prognosis: https://bitcointalk.org/index.php?topic=91101.0
btharper
Sr. Member
****
Offline Offline

Activity: 389
Merit: 250



View Profile
October 08, 2012, 02:35:56 AM
 #88

Generating a merkle root is exactly the same operation as the ASIC is specifically designed for already (double SHA256). No need to embed a CPU on the ASIC (an ASIC is a CPU, but one for a very specific purpose only).

That doesn't mean they should - it can be done at any stage (in the pool, in the computer controlling the asic, in some intermediate controller chip, ... I don't care). The point is that this is not hard; it's only different from how the current infrastructure works.
While the merkle root does use double SHA256, the optimizations are different. Where dsha(h) = SHA256(SHA256(h)), the merkle root is dsha(dsha(A+B) + dsha(C+C)) (for three transactions, one of which is the generation transaction). Block hashing, on the other hand, is heavily optimized (fewer operations mean quicker completion and higher speeds): the first of the two compressions of the inner SHA256 is precomputed once, only the second is run per nonce, and only the first ~94% of the outer iteration is needed (the last 4 rounds don't matter).

TL;DR on that - merkle roots are a normal hash: insert data, get answer. Block hashing wants a hash with a certain property and tries over and over with slightly different starting conditions to get a "special" end result. One cares about getting an answer for a specific input; the other wants a specific output from a generic form of the input.
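The midstate point can be demonstrated with plain hashlib. It can't expose midstates, so this only shows *why* the trick works: the nonce never touches the first 64-byte block of the header. All field values below are made up:

```python
import hashlib

def dsha256(b: bytes) -> bytes:
    return hashlib.sha256(hashlib.sha256(b).digest()).digest()

def header(version, prev_hash, merkle_root, ntime, nbits, nonce):
    """Serialize an 80-byte Bitcoin block header (integers little-endian)."""
    return (version.to_bytes(4, "little") + prev_hash + merkle_root +
            ntime.to_bytes(4, "little") + nbits.to_bytes(4, "little") +
            nonce.to_bytes(4, "little"))

h0 = header(2, b"\x00" * 32, b"\x11" * 32, 1349800000, 0x1a05db8b, 0)
h1 = header(2, b"\x00" * 32, b"\x11" * 32, 1349800000, 0x1a05db8b, 1)

# The nonce occupies bytes 76..79, so the first 64-byte SHA-256 block
# is identical for every nonce -- a miner compresses it once (the
# "midstate") and only re-runs the second block per attempt.
assert h0[:64] == h1[:64] and h0[76:] != h1[76:]
assert dsha256(h0) != dsha256(h1)
```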

Also, because of several factors, there is little point putting merkle root generation in silicon (I'm sure it can be done; there's just not much gain, since generating a root takes a few hundred hashes versus the few billion spent on the header).

Also, an ASIC is by definition application-specific. While you could make one to generate merkle roots via serial input, it'd be a lot of work for not a lot of gain. A CPU just does whatever you tell it to do. Generally, on an embedded device like a mining rig, a firmware update (more generally, a change in software) could completely change what the CPU is doing (for instance, change how the merkle root is generated based on a new BIP), but the ASIC is stuck doing the exact same thing it was originally hardwired to do until it breaks.

Again, the suggestion is that a mining rig with a fairly simple setup - plug in ethernet and power (and configure a pool) - could use a microprocessor to do all the work (talking to the network, deciding which transactions to include) EXCEPT crunching blocks, which would be left to the ASIC part of the chip.
gmaxwell
Moderator
Legendary
*
qt
Offline Offline

Activity: 2436
Merit: 1191



View Profile
October 09, 2012, 02:17:47 PM
 #89

Quote
So ... where is extra nonce referred to in the original Bitcoin document by Satoshi? Oh it isn't.
The same place scripts are referenced as well as locktime and nbits.
Quote it - coz it isn't there.
Have you even read "Bitcoin: A Peer-to-Peer Electronic Cash System"?
I embedded the quote steganographically in the whooshing sound you heard as you read my response.

… The paper was written something like a year before the publication of the system. It covers the core concepts but omits things such as the script system, which is easily the most complicated part of a full node implementation. Since the paper covers _very_ little of the actual implementation, including a number of critical design decisions (I provided some examples of uncovered things), it would have been surprising if it _had_ mentioned extranonce.

Quote
Yes I guess for someone like you it is difficult to only associate BIP 22 with the thread title.
I'm not sure how you could associate the thread title with BIP 34 or 35, but that confusion is (in your opinion) 'having a clue'
Because:
(1) None of them were about ASIC mining. So I had to assume you were confused, and given that you are confused all bets are off.
(2) BIP 22 primarily documents (and cleans up) an API we've had since long before 0.7; we merged it from forrestv in Septemberish 2011. So your talk about it being new in 0.7 more or less precluded it.

beekeeper
Sr. Member
****
Offline Offline

Activity: 406
Merit: 250


LTC


View Profile WWW
October 09, 2012, 03:04:38 PM
 #90

kano is right. I did encounter this bandwidth issue; it's much the same annoyance for hardware developers as diff-1 shares and multi-GH/s miners' network bandwidth consumption were for pool operators. Unfortunately, solving hardware bandwidth issues is not as simple as a workaround in software; most of the time it requires changing the design and adding extra cost.

25Khs at 5W Litecoin USB dongle (FPGA), 45kHs overclocked
https://bitcointalk.org/index.php?topic=310926
Litecoin FPGA shop -> http://ltcgear.com
gmaxwell
Moderator
Legendary
*
qt
Offline Offline

Activity: 2436
Merit: 1191



View Profile
October 09, 2012, 03:41:32 PM
 #91

kano is right. I did encounter this bandwidth issue; it's much the same annoyance for hardware developers as diff-1 shares and multi-GH/s miners' network bandwidth consumption were for pool operators. Unfortunately, solving hardware bandwidth issues is not as simple as a workaround in software; most of the time it requires changing the design and adding extra cost.
Then don't mis-design your hardware in the first place. Seriously, if you can't do the small bit of multiplication to figure out what your bandwidth requirements will be then you _have no business making mining hardware_, operating pools, etc... Nothing suggested avoids changing design and adding costs. At best the suggestions externalize cost on future bitcoin users.

kano
Legendary
*
Online Online

Activity: 2464
Merit: 1040


Linux since 1997 RedHat 4


View Profile
October 09, 2012, 11:28:48 PM
 #92

Firstly regarding the change.

The current hash is actually 3 passes through the 64-round hash function (2 x SHA256, but the 1st SHA256 covers 80 bytes so requires 2 x 64 rounds)
The first pass is unaffected by rolling the nonce value, or by any data added after it up to a certain size (it would also be unaffected by rolling the time value ... there you go hardware people - implement the Roll-NTime hack in there with a difficulty input also - but that would be risky ... and is short term)

Anyway, the size-increase change would be a version 2 (or 3 or whatever is next at the time) block with 12 extra bytes (96 bits) added to the nonce field after the current nonce (which currently sits at the end of the 80 bytes), making the block header 92 bytes.

The rolling would be one of:
1) anywhere in the full 128 bits that suited the hardware design best
OR
2) a subset given by a pool, so the pool avoids any major work beyond generating new merkle trees every time the transaction list needs updating (all the time :))
I'd also add that the pool should set the work difficulty high enough to appropriately minimise returned work (ask organofcorti), and the work lifetime short enough to reduce transaction delays

What this change would mean is that ANY device designed with this option could be given work (and a difficulty setting) and 'could' be given a time frame, then hash up to that time frame and stop, independent of the performance of the device (of course, as some of the hardware vendors know, returning valid nonces during hashing is the best way to give answers back)
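As a purely hypothetical sketch of the proposed layout (this format was never adopted; the names and the placeholder header are illustrative only):

```python
# Hypothetical sketch of the proposed 92-byte header: the standard
# 80 bytes plus 12 extra nonce bytes (96 bits) appended after the
# existing nonce.  This only illustrates the extra search space
# being argued for.
def extended_header(base80: bytes, nonce2: int) -> bytes:
    assert len(base80) == 80                  # classic header: 80 bytes
    return base80 + nonce2.to_bytes(12, "little")

base = bytes(80)                              # placeholder header
assert len(extended_header(base, 1)) == 92

total_space = 2 ** 32 * 2 ** 96               # nonce x nonce2 combinations
assert total_space == 2 ** 128
```

The pool could hand a device only a slice of the nonce2 range, which is option 2 above.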

Thus we also wouldn't have the other problem that the hacks cause:
Extra delays in Bitcoin transaction processing
since the time frames could be set to an appropriate value (seconds) to minimise this effect, and ALL devices could guarantee to work for that (short) time frame while the mining software spends that time processing results and setting up the next work ... as they all should already :P

---

Meanwhile, it is interesting to see people shift the problem around
e.g. slush saying it is the mining hardware's problem - the hardware should have a faster 'wire' to deal with it - but it could equally be argued that it is the pool's problem - they should have faster hardware and better networks as required ... both are valid

This solution solves both

I will also give a very good example of how such arguments about the hardware ignore obvious limitations:
Xiangfu (as anyone with an Icarus should know who he is) had 91 USB Icarus devices connected to a single computer running a single cgminer instance (with plenty of CPU to spare :)) ... so with such a setup, you already have to consider 2 orders of magnitude in performance ...

My solution is a long-term solution that (of course) is not going to happen today, but it seems the devs' fear of hard forks any time in the distant future (and the fact, mentioned by someone else, that this nonce issue was brought up 2 years ago) probably means it will never be fixed.

No idea if there is much else to say, but I guess if the discussion has no more merit (due to this last point) then this is as far as it will go.

---

Aside: there is a well known bug in the bitcoin difficulty calculation that gets it wrong (always), but since it's wrong by a small % and fixing it requires a hard fork, it has never been fixed. Yes, how much you get paid for your BTC mining work is actually always a fraction low :)
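Assuming this refers to the commonly cited retarget off-by-one (the code measures the time between the first and last of 2016 blocks, i.e. 2015 intervals, but compares it against 2016 target intervals), the size of the bias works out as:

```python
# Sketch of the commonly cited retarget off-by-one: only 2015 block
# intervals elapse inside a 2016-block window, so at a perfectly
# steady hashrate the difficulty settles slightly high.
TARGET_SPACING = 600                       # seconds per block
WINDOW = 2016                              # blocks per retarget

measured = (WINDOW - 1) * TARGET_SPACING   # 2015 intervals actually elapse
expected = WINDOW * TARGET_SPACING         # what the code compares against
bias = expected / measured                 # steady-state difficulty multiplier
print(f"difficulty bias: {(bias - 1) * 100:.3f}%")   # ~0.050% high
```

Whether this is exactly the bug kano means is my assumption, but it matches his description of an always-present, small-percentage error fixable only by a hard fork.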

I make this comment so as to shed more light on other comments about hard forks

slush
Legendary
*
Offline Offline

Activity: 1372
Merit: 1019



View Profile WWW
October 10, 2012, 12:07:15 AM
 #93

Slush is saying that it is a mining *software* problem. Even with future ASIC miners it will be feasible to prepare block headers on a decent computer. If there is any real problem, it is the use of obsolete protocols between the computer and the mining device.

btharper
Sr. Member
****
Offline Offline

Activity: 389
Merit: 250



View Profile
October 10, 2012, 04:39:58 AM
 #94

Firstly regarding the change.
...
I will also give a very good example of how such arguments about the hardware ignore obvious limitations:
Xiangfu (as anyone with an Icarus should know who he is) had 91 USB Icarus devices connected to a single computer running a single cgminer instance (with plenty of CPU to spare :)) ... so with such a setup, you already have to consider 2 orders of magnitude in performance ...
...
Aside: there is a well known bug in the bitcoin difficulty calculation that gets it wrong (always), but since it's wrong by a small % and fixing it requires a hard fork, it has never been fixed.
The example you give is probably the biggest reason this won't be put in there anytime soon. While allocating even another 300 or so bits (to keep things within three total rounds of SHA256) would be just fine by me, the difficulty lies in making such a change at very basic levels of the protocol. If we ever have to move away from SHA256, I think that would be a good time to do this, but right now the biggest problem is latency between pool servers and the other side of the USB cord. Getting things across the USB cord is becoming a problem and will need to be addressed by hardware manufacturers. A few implementation details (moving from getwork to GBT) will change, and things will keep on keeping on. Yeah, there's a bit of optimism in that, but I don't feel it's unwarranted.
kano
Legendary
*
Online Online

Activity: 2464
Merit: 1040


Linux since 1997 RedHat 4


View Profile
October 10, 2012, 05:23:04 AM
 #95

...
The example you give is probably the biggest reason this won't be put in there anytime soon. While allocating even another 300 or so bits (to keep things within three total rounds of SHA256) would be just fine by me, the difficulty lies in making such a change at very basic levels of the protocol. If we ever have to move away from SHA256, I think that would be a good time to do this, but right now the biggest problem is latency between pool servers and the other side of the USB cord. Getting things across the USB cord is becoming a problem and will need to be addressed by hardware manufacturers. A few implementation details (moving from getwork to GBT) will change, and things will keep on keeping on. Yeah, there's a bit of optimism in that, but I don't feel it's unwarranted.
Nope.

It's placing data in 96 bits of zeros ... as I said ... and it is still inside the 3rd round that already exists.
That's not a new round I'm adding - it's already there.
3 rounds is not a change; I'm simply pointing out what a lot of people don't even realise is already there.
... and ... that most of the data added in the 2nd round is 'zero'

If I am correct in understanding Inaba's comments about the BFL ASIC, they may even support this possibility already.
(i.e. if they do a proper full 64-round hash each time, with the replaceable firmware - not the silicon - deciding how that is processed)
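The padding arithmetic behind "still inside the 3rd round" can be checked directly; this sketch just counts SHA-256 compression calls:

```python
# SHA-256 pads a message with 0x80, zeros, and an 8-byte length, so an
# N-byte message needs ceil((N + 9) / 64) compression calls.  Both the
# 80-byte header and a 92-byte header fit in two, plus one call for the
# outer SHA-256 of the 32-byte digest: three compressions either way.
def sha256_blocks(n_bytes: int) -> int:
    return (n_bytes + 9 + 63) // 64       # ceil((n + 9) / 64)

assert sha256_blocks(80) == 2             # current header, inner hash
assert sha256_blocks(92) == 2             # proposed header, still only 2
assert sha256_blocks(32) == 1             # outer hash of the digest
```

In fact anything up to 119 bytes of header still fits in two inner compressions, which is where the "just over 300 bits" of slack comes from.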

---

Also, as I said, you already need to consider 2 orders of magnitude in dealing with the hardware ... before ASICs turn up.
ASICs are less than 2 months away, and that's just version 1 of anyone's ASIC hardware.
... which is another order of magnitude at the absolute least.

Considering a forward-planning change set to arrive in 2 years (or even one year) - who knows what the hashing performance will be by then.

Consider an old ATI 6950 graphics card - it has the equivalent of 1408 complex cores (integer shaders) in it
If version 1 of an ASIC has the equivalent of say only 100, and the complexity is well below that of an ATI core, then performance gains could easily be 128x in just the next generation of ASIC ... yeah, I chose that 128 number on purpose :)

---

Unless you have something new to bring to the discussion, I think this one has ended.

btharper
Sr. Member
****
Offline Offline

Activity: 389
Merit: 250



View Profile
October 10, 2012, 05:47:01 AM
 #96

Nope.

It's placing data in 96 bits of zeros ... as I said ... and it is still inside the 3rd round that already exists.
That's not a new round I'm adding - it's already there.
...
Unless you have something new to bring to the discussion, I think this one has ended.
What I was saying is that you could add just over 300 bits to the current data structure without changing the three total rounds performed (the same three we have now).

Right now the move to bigger and better is cost-prohibitive. While BFL and competitors can crank out massive machines doing in excess of 1 TH/s, getting MORE will cost more. Unless a giant like AMD gets into things, it won't be as cheap as a $200 graphics card.

Also, in two years we'll be halfway through the 25 BTC blocks. At a quarter of the current reward (in 4 years), mining will be much less profitable (assuming BTC value doesn't shoot up, and I don't think expecting it to double each time the reward halves is reasonable).

Things are getting faster. Much faster. So what? It just sounds like panic to me. I do think we've reached the limits of what we can argue about, though. If nothing else, we're not getting anywhere.

Any chance we can get more details on the flexibility of the new ASICs?
kano
Legendary
*
Online Online

Activity: 2464
Merit: 1040


Linux since 1997 RedHat 4


View Profile
October 10, 2012, 06:22:59 AM
 #97

Depends on the NDA when I get one of the first ones :)

beekeeper
Sr. Member
****
Offline Offline

Activity: 406
Merit: 250


LTC


View Profile WWW
October 10, 2012, 07:54:11 AM
 #98

kano is right. I did encounter this bandwidth issue, its somehow the same annoyance for hardware developers as it was diff1 and multi GHs miners network bandwidth consumption for pool operators. Unfortunately, solving hardware bandwidth issues is not as simple as a workaround in software, most of the time it requires changing design and adding extra costs.
Then don't mis-design your hardware in the first place. Seriously, if you can't do the small bit of multiplication to figure out what your bandwidth requirements will be then you _have no business making mining hardware_, operating pools, etc... Nothing suggested avoids changing design and adding costs. At best the suggestions externalize cost on future bitcoin users.
It's not misdesign; it's a trade-off. The 32-bit nonce is an old design - it probably looked great when only CPUs were mining, but right now it looks like a bottleneck.

gmaxwell
Moderator
Legendary
*
qt
Offline Offline

Activity: 2436
Merit: 1191



View Profile
October 10, 2012, 02:22:06 PM
 #99

It's not misdesign; it's a trade-off. The 32-bit nonce is an old design - it probably looked great when only CPUs were mining, but right now it looks like a bottleneck.
It doesn't look like a bottleneck. It provides a factor-of-four-billion reduction in whatever serial task exists outside of it. I have yet to see any evidence that it's a bottleneck.

beekeeper
Sr. Member
****
Offline Offline

Activity: 406
Merit: 250


LTC


View Profile WWW
October 10, 2012, 02:56:46 PM
 #100

It's not misdesign; it's a trade-off. The 32-bit nonce is an old design - it probably looked great when only CPUs were mining, but right now it looks like a bottleneck.
It doesn't look like a bottleneck. It provides a factor-of-four-billion reduction in whatever serial task exists outside of it. I have yet to see any evidence that it's a bottleneck.

My current rig could count through those 4 billion nonces in 30 ms or less. That's less than the USB handshake to transfer the new payload takes on average.

Gavin Andresen
Legendary
*
qt
Offline Offline

Activity: 1652
Merit: 1012


Chief Scientist


View Profile WWW
October 10, 2012, 03:43:56 PM
 #101

My current rig could count through those 4 billion nonces in 30 ms or less. That's less than the USB handshake to transfer the new payload takes on average.

... so transfer 1,000 headers with different extranonces in one handshake.  Or don't handshake every time. Or modify the firmware that speaks whatever mining protocol you're sending at it to do the increment-extranonce-and-recompute-the-merkle-root-thing itself.

All of this discussion is useless; even if you could convince us core developers that we need A HARD FORK RIGHT THIS VERY MINUTE! there is absolutely zero chance we could make that happen before the ASICs start shipping.

So: plan accordingly.
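Gavin's batching suggestion might look roughly like this on the device side; the payload format and the header-building callback are invented purely for illustration:

```python
# Sketch: instead of one handshake per 2^32-nonce work item, batch many
# headers (each with a different extranonce already baked into its
# merkle root) into a single transfer, and queue them on the device.
from collections import deque

def build_batch(make_header, count: int) -> bytes:
    """Concatenate `count` 80-byte headers into one payload.
    `make_header(i)` is a stand-in for 'header with extranonce i'."""
    return b"".join(make_header(i) for i in range(count))

# Placeholder header builder: 4 extranonce bytes repeated to 80 bytes.
payload = build_batch(lambda i: i.to_bytes(4, "little") * 20, 1000)

work_queue = deque()                       # device-side queue
for off in range(0, len(payload), 80):     # one transfer, 1000 work items
    work_queue.append(payload[off:off + 80])

assert len(work_queue) == 1000 and len(work_queue[0]) == 80
```

With 1,000 queued items, the per-handshake overhead beekeeper describes is amortized a thousandfold without touching the block format.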

How often do you get the chance to work on a potentially world-changing project?
kjj
Legendary
*
Offline Offline

Activity: 1302
Merit: 1000



View Profile
October 10, 2012, 03:48:11 PM
 #102

It's not misdesign; it's a trade-off. The 32-bit nonce is an old design - it probably looked great when only CPUs were mining, but right now it looks like a bottleneck.
It doesn't look like a bottleneck. It provides a factor-of-four-billion reduction in whatever serial task exists outside of it. I have yet to see any evidence that it's a bottleneck.

My current rig could count through those 4 billion nonces in 30 ms or less. That's less than the USB handshake to transfer the new payload takes on average.

You are getting 140+ Ghash/sec on a single device?  Today?  In reality?
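For reference, kjj's figure follows directly from beekeeper's numbers:

```python
# 2^32 nonces in 30 ms implies a hashrate far beyond any single
# device of the time:
nonces = 2 ** 32
seconds = 0.030
ghps = nonces / seconds / 1e9      # gigahashes per second
assert 140 < ghps < 145            # ~143 GH/s
```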

17Np17BSrpnHCZ2pgtiMNnhjnsWJ2TMqq8
I routinely ignore posters with paid advertising in their sigs.  You should too.
btharper
Sr. Member
****
Offline Offline

Activity: 389
Merit: 250



View Profile
October 10, 2012, 04:16:01 PM
 #103

It's not misdesign; it's a trade-off. The 32-bit nonce is an old design - it probably looked great when only CPUs were mining, but right now it looks like a bottleneck.
It doesn't look like a bottleneck. It provides a factor-of-four-billion reduction in whatever serial task exists outside of it. I have yet to see any evidence that it's a bottleneck.

My current rig could count through those 4 billion nonces in 30 ms or less. That's less than the USB handshake to transfer the new payload takes on average.

You are getting 140+ Ghash/sec on a single device?  Today?  In reality?
+1

Most devices don't run a single hash unit from nonce 0 to 4 billion at the full rated speed; they run several processes in parallel across several chips. So each of your 20 onboard chips is (relatively) slowly chewing through its own work and needs a new piece every several hundred milliseconds.

Also, as Gavin said, send more than one piece of work per handshake. Between longpoll and everything else that's set up, I don't see any reason not to have several pieces of work queued up and ready to go on the device side of the USB cord.
beekeeper
Sr. Member
****
Offline Offline

Activity: 406
Merit: 250


LTC


View Profile WWW
October 10, 2012, 04:37:52 PM
 #104

My current rig could count through those 4 billion nonces in 30 ms or less. That's less than the USB handshake to transfer the new payload will take on average.

... so transfer 1,000 headers with different extranonces in one handshake.  Or don't handshake every time. Or modify the firmware that speaks whatever mining protocol you're sending at it to do the increment-the-extranonce-and-recompute-the-merkle-root thing itself.

All of this discussion is useless; even if you could convince us core developers that we need A HARD FORK RIGHT THIS VERY MINUTE! there is absolutely zero chance we could make that happen before the ASICs start shipping.

So: plan accordingly.


Bulk transfers are for video streaming. Anything derived from financial transactions should have every byte handshaked. Cheesy
Of course it can be done like that, but then we're back where we started: I have to over-design the chip controllers to cover the extra bandwidth.
BTW: I never intended to start an argument; as I said, it's more an annoyance I wanted to talk about.

25Khs at 5W Litecoin USB dongle (FPGA), 45kHs overclocked
https://bitcointalk.org/index.php?topic=310926
Litecoin FPGA shop -> http://ltcgear.com
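Gavin's "increment the extranonce and recompute the merkle root" suggestion is mechanically simple. A minimal Python sketch (the coinbase layout and helper names here are illustrative, not any particular pool's wire format):

```python
import hashlib
import struct

def dsha256(data: bytes) -> bytes:
    """Bitcoin's double SHA-256."""
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def merkle_root(txids):
    """Merkle root over a list of 32-byte txids (coinbase first)."""
    layer = list(txids)
    while len(layer) > 1:
        if len(layer) % 2:                 # odd count: duplicate the last hash
            layer.append(layer[-1])
        layer = [dsha256(layer[i] + layer[i + 1])
                 for i in range(0, len(layer), 2)]
    return layer[0]

def roll_extranonce(cb_prefix, extranonce, cb_suffix, other_txids):
    """Rebuild the coinbase with a new extranonce and recompute the root.

    cb_prefix / cb_suffix are the coinbase bytes surrounding the
    extranonce (layout illustrative); other_txids are the remaining txids.
    """
    coinbase = cb_prefix + struct.pack('<I', extranonce) + cb_suffix
    return merkle_root([dsha256(coinbase)] + list(other_txids))

# Each increment of the extranonce yields a fresh merkle root, and hence
# a fresh 2^32 nonce range, without touching the block time at all.
r0 = roll_extranonce(b'\x01' * 40, 0, b'\x02' * 40, [])
r1 = roll_extranonce(b'\x01' * 40, 1, b'\x02' * 40, [])
assert r0 != r1
```

This is exactly why extranonce rolling sidesteps the roll-n-time objection from the opening post: the extra search space comes from the merkle root, not from advancing the timestamp.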
slush
Legendary
*
Offline Offline

Activity: 1372
Merit: 1019



View Profile WWW
October 10, 2012, 04:49:45 PM
 #105

Of course it can be done like that, but then we're back where we started: I have to over-design the chip controllers to cover the extra bandwidth.

It is nice to see you agree that the only problem is sub-optimal software in the miners.

a) Optimize wrongly designed mining software
or
b) Hard fork the whole Bitcoin network?

I vote for a)

Btw it would be quite comfortable for me to agree with you: with a bigger nonce range I wouldn't need to switch to the Stratum protocol, because the getwork protocol would be good enough for ages. But I still decided that there's a much easier solution than attempting a hard fork, so I just re-designed the protocol. Btw it isn't easy; I've been working on it almost two months full time.

Stop calling optimizations "work arounds" and let's go back to work.

kano
Legendary
*
Online Online

Activity: 2464
Merit: 1040


Linux since 1997 RedHat 4


View Profile
October 10, 2012, 09:04:17 PM
 #106

My current rig could count through those 4 billion nonces in 30 ms or less. That's less than the USB handshake to transfer the new payload will take on average.

... so transfer 1,000 headers with different extranonces in one handshake.  Or don't handshake every time. Or modify the firmware that speaks whatever mining protocol you're sending at it to do the increment-the-extranonce-and-recompute-the-merkle-root thing itself.

All of this discussion is useless; even if you could convince us core developers that we need A HARD FORK RIGHT THIS VERY MINUTE! there is absolutely zero chance we could make that happen before the ASICs start shipping.

So: plan accordingly.

Yes - plan ... that's the point of this thread ... but not of the "We can't ever do a hard fork" devs Tongue
... and your post clearly shows a lack of understanding of 'planning'.

Plan for a future hard fork to allow a 2nd version block header.
The issue is obvious, the cause is obvious, the path to ultimately fix it is obvious, but then you make this stupid statement
"A HARD FORK RIGHT THIS VERY MINUTE"
... who said that? (other than you)

Though - as has already been pointed out - this issue came up 2 years ago ... yeah, a lot of foresight back then, ignoring it Tongue
No doubt nothing was learned from that mistake.

Pool: https://kano.is Here on Bitcointalk: Forum BTC: 1KanoPb8cKYqNrswjaA8cRDk4FAS9eDMLU
FreeNode IRC: irc.freenode.net channel #kano.is Majority developer of the ckpool code
Help keep Bitcoin secure by mining on pools with full block verification on all blocks - and NO empty blocks!
tacotime
Legendary
*
Offline Offline

Activity: 1484
Merit: 1000



View Profile
December 14, 2013, 08:38:16 PM
 #107

So... has this issue been resolved?  Will it still break all ASICs if they get too fast?

Code:
XMR: 44GBHzv6ZyQdJkjqZje6KLZ3xSyN1hBSFAnLP6EAqJtCRVzMzZmeXTC2AHKDS9aEDTRKmo6a6o9r9j86pYfhCWDkKjbtcns
TierNolan
Legendary
*
Offline Offline

Activity: 1176
Merit: 1001


View Profile
December 14, 2013, 08:57:07 PM
 #108

So... has this issue been resolved?  Will it still break all ASICs if they get too fast?

You can send much more complex info to miners now.  The "getblocktemplate" rpc call allows the pool to give miners enough info to generate their own headers on the fly.

1LxbG5cKXzTwZg9mjL3gaRE835uNQEteWF
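For reference, the 80-byte header a miner assembles "on the fly" from template data has a fixed layout; a short sketch (the field values in the example are placeholders, not real chain data):

```python
import struct

def build_header(version, prev_hash_le, merkle_root_le, ntime, nbits, nonce):
    """Serialize an 80-byte Bitcoin block header.

    prev_hash_le / merkle_root_le are 32-byte little-endian hashes;
    the four integer fields are serialized as 32-bit little-endian.
    """
    return struct.pack('<I32s32sIII',
                       version, prev_hash_le, merkle_root_le,
                       ntime, nbits, nonce)

# With getblocktemplate the pool supplies the transactions; the miner can
# then vary the coinbase (and hence the merkle root) itself and rebuild
# this header locally for every fresh 2^32 nonce range.
header = build_header(2, b'\x00' * 32, b'\x11' * 32,
                      1355000000, 0x1a05db8b, 0)
assert len(header) == 80
```

Because the merkle root is one of the 80 bytes, anything that can rebuild this structure locally gets unlimited work without a protocol change.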
btharper
Sr. Member
****
Offline Offline

Activity: 389
Merit: 250



View Profile
December 16, 2013, 12:27:35 AM
 #109

So... has this issue been resolved?  Will it still break all ASICs if they get too fast?

You can send much more complex info to miners now.  The "getblocktemplate" rpc call allows the pool to give miners enough info to generate their own headers on the fly.
Seconded. Between Stratum and GBT there really isn't a problem any more; the predicted one simply never materialized. Everything's moving along nicely now. Probably an idea worth revisiting in the future if there's another, more important reason to do a hard fork (like switching to SHA-512 or an SHA-3 family function), which would break current ASICs anyway due to the massive change in the hashing algorithm.
kano
Legendary
*
Online Online

Activity: 2464
Merit: 1040


Linux since 1997 RedHat 4


View Profile
December 17, 2013, 05:17:27 AM
 #110

So... has this issue been resolved?  Will it still break all ASICs if they get too fast?

You can send much more complex info to miners now.  The "getblocktemplate" rpc call allows the pool to give miners enough info to generate their own headers on the fly.
Seconded. Between Stratum and GBT there really isn't a problem any more; the predicted one simply never materialized. Everything's moving along nicely now. Probably an idea worth revisiting in the future if there's another, more important reason to do a hard fork (like switching to SHA-512 or an SHA-3 family function), which would break current ASICs anyway due to the massive change in the hashing algorithm.
Stratum and GBT ("getblocktemplate") have not solved this in any way whatsoever.

As before, each work item sent to the device carries a single 32-bit nonce, allowing roughly 4 billion (~10^9) hash tests.

At the moment, no miners do any more than this.

The solution I presented a year ago would let the same information sent to the device produce ~10^9 times more results with just a 32-bit extension to the nonce: i.e. be future-proof by simply growing the search space by a factor of ~10^9 (32 extra bits), ~10^19 (64 bits) or even ~10^28 (96 bits) easily enough.

The solution you are implying (which doesn't exist) would be to code the Stratum protocol into the mining device itself, rather than relying on the extreme simplicity of a larger counter.

GBT is not even relevant here, since having to send up to a megabyte of data to the mining device at least every 30 s, and delaying work restart after an LP until that data is sent, is ridiculous.

Pool: https://kano.is Here on Bitcointalk: Forum BTC: 1KanoPb8cKYqNrswjaA8cRDk4FAS9eDMLU
FreeNode IRC: irc.freenode.net channel #kano.is Majority developer of the ckpool code
Help keep Bitcoin secure by mining on pools with full block verification on all blocks - and NO empty blocks!
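The orders of magnitude being argued about can be checked in a couple of lines (plain Python; the bit widths are the ones discussed in the thread):

```python
import math

# Extra search space gained by widening the nonce field by n bits:
# 2^32 ~ 4.3e9 (~10^9.6), 2^64 ~ 1.8e19 (~10^19.3), 2^96 ~ 7.9e28 (~10^28.9).
for extra_bits in (32, 64, 96):
    factor = 2 ** extra_bits
    print(f"+{extra_bits} bits: x{factor:.2e} (~10^{math.log10(factor):.1f})")
```

So each 32-bit widening of the nonce buys roughly nine to ten more decimal orders of magnitude of search space per work item.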