Author Topic: Handle much larger MH/s rigs : simply increase the nonce size  (Read 10043 times)
kjj
Legendary
*
Offline Offline

Activity: 1302
Merit: 1024



View Profile
June 25, 2012, 07:30:51 PM
 #21

Quote
You could always add a new block version number to exist alongside the current block version.  There is no need for a hard fork, nor to break everything in one day.

Quote
Doing that requires a hard fork, as it means some blocks will be valid to new nodes but not to old. The first block mined under the new rules would permanently kick every old node onto a side chain without new-version blocks.

Ahh, duh.  My bad, it would require a fork.

But not a fork that would break everything.  Non-upgradable miners could keep making the blocks they know how to make, while their control nodes would accept blocks from the network at the new version.

17Np17BSrpnHCZ2pgtiMNnhjnsWJ2TMqq8
I routinely ignore posters with paid advertising in their sigs.  You should too.
allten
Sr. Member
****
Offline Offline

Activity: 455
Merit: 250


You Don't Bitcoin 'till You Mint Coin


View Profile WWW
June 25, 2012, 08:08:45 PM
 #22

to the OP:
I think the idea is very reasonable.
For it to work, we need a transition period where blocks with a 32-bit nonce still work alongside blocks with a nonce of, say, 64 bits or whatever. The 32-bit nonce could be phased out after 4 to 8 years - plenty of time for hardware to phase out naturally anyway. Seems doable. Hope you write a detailed BIP.
Pieter Wuille
Legendary
*
qt
Offline Offline

Activity: 1072
Merit: 1170


View Profile WWW
June 25, 2012, 11:16:54 PM
 #23

Let us look at this from a theoretical point of view, rather than from what the current infrastructure provides. The first pool that is still operational started in December 2010 - about two years ago. Pulling off a block format change would put us at least two years out anyway, and by that time I'm sure the Bitcoin infrastructure will be very different from what it is now.

Assume blocks always reach their maximum size: 1,000,000 bytes. The smallest typical transaction is 227 bytes (1 input from a compressed pubkey, 2 outputs to addresses). That means a maximum of 4405 transactions per block.

In a merkle tree with 4405 elements, the leaves are 13 levels deep. That means that to generate a new piece of work (worth 4 billion double-SHA256 operations of calculation), you increment the extranonce in the input script of the first transaction (the coinbase) and hash your way back up to the merkle root. This requires 13 double-SHA256 operations. If this is offloaded to the GPUs/FPGAs/ASICs/QuantumCPUs/... that are *already* doing precisely that hashing operation for the block header anyway, they get a 0.0000003% overhead. The only thing they'd require is an occasional update of the list of transactions - at 4405 transactions per 10 minutes, about 7 times per second. A phone connection with a modem suffices for that kind of traffic.
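A quick sanity check of those numbers - a hypothetical back-of-the-envelope script (names and layout mine, not from the thread):

Code:
import math

MAX_BLOCK_BYTES = 1_000_000
MIN_TX_BYTES = 227              # 1 compressed-key input, 2 address outputs
NONCE_SPACE = 2 ** 32           # header hashes per merkle root

max_txs = MAX_BLOCK_BYTES // MIN_TX_BYTES    # 4405
depth = math.ceil(math.log2(max_txs))        # 13 levels
overhead = depth / NONCE_SPACE               # extra double-SHA256 per header hash

print(f"max transactions: {max_txs}")
print(f"merkle depth:     {depth}")
print(f"overhead:         {overhead * 100:.7f}%")           # 0.0000003%
print(f"tx-list updates:  {max_txs / 600:.1f} per second")  # ~7/s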

In short: nothing to worry about. The only problem is with the current infrastructure, which will evolve.

I do Bitcoin stuff.
kano (OP)
Legendary
*
Offline Offline

Activity: 4452
Merit: 1798


Linux since 1997 RedHat 4


View Profile
June 26, 2012, 12:17:59 AM
 #24

Well ... 2-minute block times would also reduce block size almost 5 times on average ...
but that idea got chucked out when I brought it up last year :)
https://bitcointalk.org/index.php?topic=51504.0

Meanwhile ... why does the nonce exist?
From my understanding, this is the reason; however, it will be too small if the network grows due to ASICs.

Pool: https://kano.is - low 0.5% fee PPLNS 3 Days - Most reliable Solo with ONLY 0.5% fee   Bitcointalk thread: Forum
Discord support invite at https://kano.is/ Majority developer of the ckpool code - k for kano
The ONLY active original developer of cgminer. Original master git: https://github.com/kanoi/cgminer
Pieter Wuille
Legendary
*
qt
Offline Offline

Activity: 1072
Merit: 1170


View Profile WWW
June 26, 2012, 12:28:55 AM
 #25

Quote
Meanwhile ... why does the nonce exist?
From my understanding, this is the reason; however, it will be too small if the network grows due to ASICs.

Because of the nonce, we only need to recalculate the merkle root once every 4 billion hashes, instead of for every hash. In the current infrastructure, that merkle root is (typically) calculated on the server, while the hashes are calculated by the miner. This means there is an interaction between server and miner every 4 billion hashes. But the actual calculation per merkle root is nothing compared to the 4 billion hashes the miner already does (see my post above). If requesting work becomes the bottleneck, work generation can simply be moved to the miner.

No, this is not an issue. No, there is no need to increase the nonce size. Yes, 64-bit nonces would make things slightly easier for us, but all that is required is a slightly more complex mining infrastructure, and this inconvenience is nothing compared to doing a hard fork.
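Concretely, the loop being described looks something like this - a minimal sketch with hypothetical helper names, not code from any actual miner or pool:

Code:
import hashlib
import struct

def dsha256(data: bytes) -> bytes:
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def merkle_root(txids: list[bytes]) -> bytes:
    layer = list(txids)
    while len(layer) > 1:
        if len(layer) % 2:          # Bitcoin duplicates the last hash on odd levels
            layer.append(layer[-1])
        layer = [dsha256(layer[i] + layer[i + 1])
                 for i in range(0, len(layer), 2)]
    return layer[0]

def mine(coinbase_prefix, coinbase_suffix, other_txids,
         version_prev,   # version (4 bytes) + previous block hash (32 bytes)
         time_bits,      # timestamp (4 bytes) + compact target (4 bytes)
         target: int):
    extranonce = 0
    while True:
        # New extranonce -> new coinbase txid -> new merkle root:
        # one cheap walk up the tree per 2^32 header hashes.
        coinbase = coinbase_prefix + struct.pack("<I", extranonce) + coinbase_suffix
        root = merkle_root([dsha256(coinbase)] + other_txids)
        for nonce in range(2 ** 32):
            header = version_prev + root + time_bits + struct.pack("<I", nonce)
            if int.from_bytes(dsha256(header), "little") < target:
                return extranonce, nonce
        extranonce += 1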

I do Bitcoin stuff.
kano (OP)
Legendary
*
Offline Offline

Activity: 4452
Merit: 1798


Linux since 1997 RedHat 4


View Profile
June 26, 2012, 02:15:29 AM
 #26

Quote
Meanwhile ... why does the nonce exist?
From my understanding, this is the reason; however, it will be too small if the network grows due to ASICs.

Quote
Because of the nonce, we only need to recalculate the merkle root once every 4 billion hashes, instead of for every hash. In the current infrastructure, that merkle root is (typically) calculated on the server, while the hashes are calculated by the miner. This means there is an interaction between server and miner every 4 billion hashes. But the actual calculation per merkle root is nothing compared to the 4 billion hashes the miner already does (see my post above). If requesting work becomes the bottleneck, work generation can simply be moved to the miner.

No, this is not an issue. No, there is no need to increase the nonce size. Yes, 64-bit nonces would make things slightly easier for us, but all that is required is a slightly more complex mining infrastructure, and this inconvenience is nothing compared to doing a hard fork.

i.e. the correct solution based on the bitcoin spec is indeed to increase the nonce size - and as I mentioned later, if it's increased it may as well be 3 x 32 bits ... or even 4, for a nice round number ... though I doubt 3 would run out for many ... decades? :)

Again, the solution others are proposing is to move block generation and txn selection to the miner software - and that does indeed seem like a hack to me.
A hack to solve a problem with a very clear and specific solution.

The issue is - a fork.
Well, the equivalent of a fork was done in April (and yeah, it wasn't done very well) and that was for a much lesser reason.

These ASICs don't exist yet, and seriously, the only real argument anyone seems to have against doing it properly is that BFL have announced their ASICs with a time frame.
Their last announcement on a similar subject was last September, and it proved to be an over-spec announcement made from a simulation that didn't even exist - and they delivered more than 3 months late.

Hmm, who's in control here ...

Pool: https://kano.is - low 0.5% fee PPLNS 3 Days - Most reliable Solo with ONLY 0.5% fee   Bitcointalk thread: Forum
Discord support invite at https://kano.is/ Majority developer of the ckpool code - k for kano
The ONLY active original developer of cgminer. Original master git: https://github.com/kanoi/cgminer
btharper
Sr. Member
****
Offline Offline

Activity: 389
Merit: 250



View Profile
June 26, 2012, 08:02:17 AM
 #27

There's already an "extraNonce" field used in the coinbase transaction; incrementing it changes the merkle root and gives you a new batch of hashes to work on. In a recently generated coinbase, this value was 4294967295.

Maybe what we need is to update the way work is fetched to allow more efficient processing. The other block header fields - version, prev hash, timestamp (with the exception of rollntime), and target - don't change much, so why not package getworks to include one copy of those fields and 4-100 merkle roots (depending on client speed)?
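Something like this, say - a purely hypothetical package format, sketched only to show the shape (no pool speaks this):

Code:
work_batch = {
    "version": 1,
    "prev_hash": "00000000000004c1...",  # shared by every unit in the batch
    "time": 1340668800,                  # miners may still roll this (rollntime)
    "bits": "1a05db8b",                  # compact target, shared
    "merkle_roots": [                    # one root per 2^32-hash work unit;
        "3b4e...", "9af1...", "c2d7...", # send 4-100 depending on miner speed
    ],
}
# Marginal cost per extra work unit: one 32-byte root, instead of a whole
# 80-byte header plus HTTP round-trip overhead.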

As far as generating merkle roots en masse: my laptop, with very average performance, could do ~1.8 MH/s back when that mattered, which is about 4 million sha256() rounds per second. I'm not intimately familiar with how merkle roots are generated (n*log(n) hashes for n transactions?), but if we call it 400 transactions we get a bit south of 4000 hashes per merkle root, and with heavy rounding and lots of assumptions every step of the way, that's about one thousand merkle roots generated per second on an economy two-core laptop.
Checking slush's pool shows 430 getworks/s to handle 1.2 GH/s of mining power, which could be handled twice over on much less hardware than a decent server (granted, the server also has to do things like handle miners connecting and run whatever backend is required). If you want to make sure you have enough capacity, I'm sure you could use a GPU for this and easily get another order of magnitude above a CPU.

For a dedicated miner running 1 TH/s you would need to supply roughly 250 merkle roots/s, but if you're running 1 TH/s you could probably afford to mine solo (at least at current difficulty, and probably for a good while). Even on a pool, a condensed request like the one above would be under 10 KByte/s (56 Kbit dial-up is just slightly slower).
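The arithmetic behind that estimate, assuming 32 bytes per merkle root and 2^32 hashes per work unit:

Code:
hashrate = 1e12                      # a 1 TH/s miner
roots_per_sec = hashrate / 2 ** 32   # ~233 work units consumed per second
bytes_per_sec = roots_per_sec * 32   # ~7.5 KB/s of fresh merkle roots
print(f"{roots_per_sec:.0f} roots/s -> {bytes_per_sec / 1000:.1f} KB/s")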

Will pools be affected by ever-climbing hash rates? Sure. Will it matter in the long run? I doubt it. Will it require a fork or protocol change? Almost certainly not (the logistics of changing something this low-level in the blockchain would be a nightmare).
Maybe you could ask one of the large pool operators - slush and giga both come to mind offhand (and I know giga is already thinking about upgrading to ASICs and the changes that requires).
Pieter Wuille
Legendary
*
qt
Offline Offline

Activity: 1072
Merit: 1170


View Profile WWW
June 26, 2012, 08:13:08 AM
 #28

Quote
Again, the solution others are proposing is to move block generation and txn selection to the miner software - and that does indeed seem like a hack to me.
A hack to solve a problem with a very clear and specific solution.

It is only a hack when considered from the viewpoint of the current infrastructure. There is no reason to leave the hashing of the merkle root on the server instead of the client, especially as it is exactly the same operation (double-SHA256).

Quote
The issue is - a fork.
Well, the equivalent of a fork was done in April (and yeah, it wasn't done very well) and that was for a much lesser reason.

As far as I know, there has not been a single hard fork in Bitcoin's history. The changes for BIP16 and BIP30 were "soft forks", which only made backward-compatible changes (meaning they only made some existing rules more strict). Even the much more severe bug fixes in July-August 2010 (see the CVEs) were in fact only soft forks. Soft forks are safe as soon as a majority of mining power enforces them.

Changing the serialization format of blocks or transactions, or introducing a new version of them, does however require a hard fork. Other changes that require a hard fork are changing the maximum block size, changing the precision of transaction amounts, or changing the mining subsidy function. All of these need a much, much higher level of consensus, as they require essentially the entire Bitcoin network to upgrade (not just a majority, and not just miners). Everyone using an old client after the switch would get stuck on a side chain with old miners (even if that is just 1% of the hashing power). If we ever do a hard fork (and we may need to), it will have to be planned, implemented and agreed upon years in advance.

Quote
Hmm, who's in control here ...

You are. Bitcoin is based on consensus, but you can only reach consensus as long as you can convince enough people there is a problem, and I personally still see this as a minor implementation inconvenience rather than a problem that will limit our growth.

I do Bitcoin stuff.
eleuthria
Legendary
*
Offline Offline

Activity: 1750
Merit: 1007



View Profile
June 26, 2012, 09:27:18 AM
 #29

I'm working with a few others on a draft to revise the way the mining protocol works. The current outlook is very good: it should be able to support 256 TH/s -per device-, while eliminating almost all network traffic (the total "getwork" data downloaded by the miner is ~1 KB -total- between each longpoll). Additionally, the 256 TH/s per-device limit can be readily increased in x2^8 increments.

More information coming soon. The protocol design will require pools to be redesigned if they want to adapt to the changing landscape that ASICs may bring (I still don't think we'll see BFL's claimed specs). However, this protocol would "future proof" the pooled mining design.

To utilize the protocol, miners will either need mining software with direct support, or a local proxy which interprets the new protocol and translates it for older miners. In the coming weeks I will hopefully be able to post a complete spec for mining software developers to consider implementing. Hopefully a proof-of-concept pool server will be available in the next 2 months.

This will require -no- change to Bitcoin's current protocols.  It is purely a change in the way pools interact with miners.

RIP BTC Guild, April 2011 - June 2015
kano (OP)
Legendary
*
Offline Offline

Activity: 4452
Merit: 1798


Linux since 1997 RedHat 4


View Profile
June 26, 2012, 09:57:55 AM
Last edit: June 26, 2012, 10:09:14 AM by kano
 #30

Well I guess I'm gonna have to spend some time looking at these other options, coz I see all sorts of issues with having the miner generate blocks (which is why I call it a hack): having to pass merkle trees around; running a memory pool (which even bitcoind still doesn't get perfectly right); some encrypted protocol about reuse of data already passed back and forth, and thus keeping track of that too; increasing the network load (not decreasing it) by passing all this extra information; passing txns back and forth and handling LPs and orphans; and all sorts of other issues that every miner program would now have to deal with. All of this simply to get the miner to generate the coinbase txn - i.e. to do the work of the pool/bitcoind rather than the pool/bitcoind doing it (and rather than fixing the actual problem: that the nonce will be too small when ASIC truly lands - next year, possibly?)

Again, increasing the nonce size is quite simply elegant and also the actual correct solution based on the design.

This argument about a hard fork seems to be a way to get around the nonce solution, based on some presumed notion that hard forks cannot be done ... before the problem occurs ... which is also why I bring this up now and not later, when it becomes a problem.

This is of course normal in any programming environment: to see things in advance on occasion, and yet not bite the bullet and make the required change that some consider difficult - until there's no time left to do it and everyone can clearly see the problem because it has already presented itself. So instead, a complex workaround gets made that is open to all sorts of issues - the biggest one in this case being that every miner program must implement code from bitcoind at whatever level required (non-trivial), plus more to deal with passing this information around.

As for hard vs soft ... well, in April there were multiple >3-block forks due to the issues of updating and multiple release candidates - so really no worse than a hard fork, IMO, in the number of people regularly hashing on bad forks back then.
That was of course caused by someone putting a poisoned transaction into the network to trigger the exact problem that happened, and as a result it was extremely similar to a hard fork.

Edit: heh, let's see what eleuthria's protocol has to say when he defines it ...

Edit2: while you're at it - kill that retarded design of LP - keeping an idle socket connection open, sometimes for more than an hour ... wow, deepbit/tycho must have had a bad hangover when they came up with that idea

Pool: https://kano.is - low 0.5% fee PPLNS 3 Days - Most reliable Solo with ONLY 0.5% fee   Bitcointalk thread: Forum
Discord support invite at https://kano.is/ Majority developer of the ckpool code - k for kano
The ONLY active original developer of cgminer. Original master git: https://github.com/kanoi/cgminer
eleuthria
Legendary
*
Offline Offline

Activity: 1750
Merit: 1007



View Profile
June 26, 2012, 10:27:33 AM
 #31

Quote
Edit: heh, let's see what eleuthria's protocol has to say when he defines it ...

Edit2: while you're at it - kill that retarded design of LP - keeping an idle socket connection open, sometimes for more than an hour ... wow, deepbit/tycho must have had a bad hangover when they came up with that idea

The new protocol will be based on a single TCP socket connection between the miner and the pool (or between the proxy software and the pool). All data is asynchronous. Only one package of work needs to be sent from the pool to the miner between updates, where an update is either a traditional longpoll or a list of new transactions. This eliminates the current mess of a protocol, where miners open new connections for work requests/submissions and then hold a separate one open just to get notice of a new block. Everything will use a single persistent connection.
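No draft has been published yet, so the following is purely an illustration of what such a protocol could look like on the wire - hypothetical message shapes, newline-delimited JSON over one persistent TCP connection:

Code:
import json

messages = [
    {"type": "subscribe", "agent": "cgminer/2.4"},         # miner -> pool, once
    {"type": "work", "job": 7,                             # pool -> miner
     "coinbase_prefix": "01000000...",   # miner appends its own extranonce
     "coinbase_suffix": "...00000000",
     "merkle_branch": ["4d8f...", "a01c..."],
     "version": 1, "prev_hash": "0000...",
     "time": 1340668800, "bits": "1a05db8b"},
    {"type": "new_block", "job": 8},                       # replaces a longpoll
    {"type": "submit", "job": 7, "extranonce": 42,         # miner -> pool
     "time": 1340668805, "nonce": 2883584222},
]
for m in messages:
    print(json.dumps(m))   # one message per line on the socket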

I'm hoping to have this more formalized before I publish any kind of draft protocol for public comment/changes.  I didn't quite expect as much progress as we've had in the last few hours.  It's been an exciting couple of hours to say the least!

RIP BTC Guild, April 2011 - June 2015
Pieter Wuille
Legendary
*
qt
Offline Offline

Activity: 1072
Merit: 1170


View Profile WWW
June 26, 2012, 10:35:37 AM
 #32

Quote
As for hard vs soft ... well, in April there were multiple >3-block forks due to the issues of updating and multiple release candidates - so really no worse than a hard fork, IMO, in the number of people regularly hashing on bad forks back then.
That was of course caused by someone putting a poisoned transaction into the network to trigger the exact problem that happened, and as a result it was extremely similar to a hard fork.

A hard-forking change is one that causes a permanent split between old and new nodes; it is in no way comparable to any change we've ever done.

I do Bitcoin stuff.
Hawkix
Hero Member
*****
Offline Offline

Activity: 531
Merit: 505



View Profile WWW
July 01, 2012, 09:28:48 PM
 #33

I do not understand - higher-than-1 difficulty shares should handle all these problems, shouldn't they?

Donations: 1Hawkix7GHym6SM98ii5vSHHShA3FUgpV6
http://btcportal.net/ - All about Bitcoin - coming soon!
btharper
Sr. Member
****
Offline Offline

Activity: 389
Merit: 250



View Profile
July 01, 2012, 09:37:40 PM
 #34

Quote
I do not understand - higher-than-1 difficulty shares should handle all these problems, shouldn't they?
That may be part of the solution, but the other problem is that a 4 GH/s device can run through an entire getwork (4 billion hashes) in one second (an SC Jalapeno at current specs goes through a whole getwork in ~1.15 seconds). So even very small miners will need to request much more work; higher-difficulty shares just lower the number of found shares. A really fast machine like the 1 TH/s device powers through ~250 getworks per second, so more resources will be required to support these in the future.

Actually, come to think of it, higher-difficulty shares would just increase variance.
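Spelling out the getwork arithmetic above (one getwork = a 32-bit nonce range; device speeds are assumptions for illustration):

Code:
for name, hashrate in [("GPU, 400 MH/s", 4e8),
                       ("SC Jalapeno, ~3.5 GH/s", 3.5e9),
                       ("ASIC rig, 1 TH/s", 1e12)]:
    seconds = 2 ** 32 / hashrate
    print(f"{name:24s} {seconds:8.3f} s/getwork  ({1 / seconds:7.2f} getworks/s)")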
kano (OP)
Legendary
*
Offline Offline

Activity: 4452
Merit: 1798


Linux since 1997 RedHat 4


View Profile
July 01, 2012, 10:47:50 PM
 #35

Yeah, with a 64-bit nonce, a device big enough to power through 4 billion of today's getworks a second could still do it from a single getwork, with the change I'm suggesting
... though as I mentioned, a 128-bit nonce would be best, to future-proof it for ... a very long time

Then higher-difficulty shares will also mean that fewer shares need to be returned.

The overall result of managing the two properly (bigger nonce and higher-difficulty shares) is that BTC could handle VERY large network growth for a VERY long time

Pool: https://kano.is - low 0.5% fee PPLNS 3 Days - Most reliable Solo with ONLY 0.5% fee   Bitcointalk thread: Forum
Discord support invite at https://kano.is/ Majority developer of the ckpool code - k for kano
The ONLY active original developer of cgminer. Original master git: https://github.com/kanoi/cgminer
kano (OP)
Legendary
*
Offline Offline

Activity: 4452
Merit: 1798


Linux since 1997 RedHat 4


View Profile
October 05, 2012, 02:24:04 PM
 #36

So meanwhile - 3 months later - no one has actually done anything with any sort of future consideration about this.

Once these 1 TH/s mining devices turn up, they will be mining ~232 difficulty-1 shares a second - or ~4 ms a share.
THEY CAN'T MINE HIGHER DIFFICULTY SHARES SINCE THE NONCE SIZE IS ONLY 32 BITS

Now I'm not sure who thought that was OK, but sending 60 bytes of work to a USB device over serial at 115200 baud takes ~5 ms - so with a device able to hash a full nonce range at 1 TH/s we are already well past a problem:
more than half your mining time (if it were a single device) would be spent doing ... nothing.
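Checking that arithmetic (at 115200 baud 8N1, each byte costs 10 bit-times: 8 data bits plus start and stop):

Code:
baud = 115200
work_bytes = 60
send_s = work_bytes * 10 / baud        # ~5.2 ms to deliver one work item
hash_s = 2 ** 32 / 1e12                # ~4.3 ms for a 1 TH/s chip to use it
idle = 1 - hash_s / (send_s + hash_s)  # assuming send and hash can't overlap
print(f"send {send_s * 1e3:.1f} ms, hash {hash_s * 1e3:.1f} ms, idle {idle:.0%}")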

Of course 1 TH/s rigs are only a few months away ........ though they probably won't hash a single nonce range that fast ...
Give it a year (if BTC hasn't died due to people ignoring this sort of stuff) and the devices will simply be stunted by the poor nonce implementation in BTC (yeah, you all remember MSDOS ...)

Even implementing this is quite straightforward: allow for 2 types of blocks, each with a different version number, the second becoming valid from a future date and carrying a nonce of 128 bits instead of 32.
Give it a 3 to 6 month time frame (well, we already wasted 3 months doing nothing about it).
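Purely as an illustration of the proposal (this is not any real or drafted serialization), the v2 header would just widen the nonce field:

Code:
import struct

def serialize_header_v2(version: int, prev_hash: bytes, merkle_root: bytes,
                        time: int, bits: int, nonce: int) -> bytes:
    # 4 + 32 + 32 + 4 + 4 + 16 = 92 bytes, versus today's 80-byte header
    return (struct.pack("<I", version) + prev_hash + merkle_root +
            struct.pack("<II", time, bits) + nonce.to_bytes(16, "little"))

# Even at 1 TH/s, sweeping a 128-bit nonce would take ~1e19 years:
print(f"{2 ** 128 / 1e12 / (3600 * 24 * 365):.1e} years")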

The latest bunch of implementations aren't even related to a solution to this problem.
No one seems to care about it, so I guess we'll hit a brick wall some time in the not too distant future, and then the bitcoin devs will suddenly have to hack the shit out of bitcoin and implement a hard fork in a short time frame - and screw it up like they've done in the past with soft forks.

Or ... as I mentioned before ... they could show a little forward planning ... but 3 months thrown away so far ...

Pool: https://kano.is - low 0.5% fee PPLNS 3 Days - Most reliable Solo with ONLY 0.5% fee   Bitcointalk thread: Forum
Discord support invite at https://kano.is/ Majority developer of the ckpool code - k for kano
The ONLY active original developer of cgminer. Original master git: https://github.com/kanoi/cgminer
Gavin Andresen
Legendary
*
qt
Offline Offline

Activity: 1652
Merit: 2164


Chief Scientist


View Profile WWW
October 05, 2012, 02:30:54 PM
 #37

I thought the consensus was that the mining devices just need a little extra software onboard to increment extranonce and recompute the merkle root.

I don't know nuthin about hardware/firmware design, or the miner<->pool communication protocols, but it seems to me that should be pretty easy to accomplish (the device will need to know the full coinbase transaction, a pointer to where the extranonce is in that transaction, and a list of transaction hashes so it can recompute the merkle root).
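A sketch of that onboard step, with one refinement: the device doesn't even need the full transaction list - a pre-folded "merkle branch" of ~log2(n) sibling hashes is enough to roll a new root from a fresh coinbase txid (helper names are mine, not from any firmware):

Code:
import hashlib

def dsha256(b: bytes) -> bytes:
    return hashlib.sha256(hashlib.sha256(b).digest()).digest()

def root_from_branch(coinbase_txid: bytes, branch: list[bytes]) -> bytes:
    # The coinbase is the leftmost leaf, so it pairs on the left at each level.
    h = coinbase_txid
    for sibling in branch:
        h = dsha256(h + sibling)
    return h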

How often do you get the chance to work on a potentially world-changing project?
kano (OP)
Legendary
*
Offline Offline

Activity: 4452
Merit: 1798


Linux since 1997 RedHat 4


View Profile
October 05, 2012, 02:45:35 PM
 #38

The devices just sit there spinning a nonce range.
Nothing more, nothing less.

Implementing the bitcoin protocol - dealing with merkle trees and changing the coinbase - is not something an ASIC company would consider doing unless they were looking to spend a lot of money every time they had to rewrite it (i.e. when their current hardware becomes door stops).

The devices are simple - hard-wired into silicon - they do the hash process and cycle the nonce.

Pool: https://kano.is - low 0.5% fee PPLNS 3 Days - Most reliable Solo with ONLY 0.5% fee   Bitcointalk thread: Forum
Discord support invite at https://kano.is/ Majority developer of the ckpool code - k for kano
The ONLY active original developer of cgminer. Original master git: https://github.com/kanoi/cgminer
Gavin Andresen
Legendary
*
qt
Offline Offline

Activity: 1652
Merit: 2164


Chief Scientist


View Profile WWW
October 05, 2012, 02:56:01 PM
 #39

Quote
The devices are simple - hard-wired into silicon - they do the hash process and cycle the nonce.

Okey doke.  I thought they had some firmware that knew how to talk over USB (or ethernet or whatever), too.

How often do you get the chance to work on a potentially world-changing project?
Mike Hearn
Legendary
*
expert
Offline Offline

Activity: 1526
Merit: 1128


View Profile
October 05, 2012, 03:03:33 PM
 #40

Yes, they do.

If USB 1 is too slow to send new work to a 1 TH/s ASIC fast enough, then there's a simple solution - don't use USB 1. If you're capable of running a rig of that speed, you're capable of shoveling work to it fast enough. This really isn't a problem the Bitcoin core devs need to solve; the ball is in the ASIC developers' court.