Author Topic: Handle much larger MH/s rigs : simply increase the nonce size  (Read 10044 times)
kano (OP)
Legendary
Activity: 4466
Merit: 1798
Linux since 1997 RedHat 4

June 23, 2012, 11:22:26 AM  #1

It seems there are comments flying around about pools not being able to handle a larger network load with 1-difficulty shares, caused by devices that can hash 5/10/100 times faster, and that roll-n-time won't solve it.

So ... why not just add another 32-bit 'nonce' field after the current one?

Then pools and miners can use it as they see fit, and I'm sure all the devs can understand how this would allow a single getwork to hash more shares, or higher-difficulty shares.

The problem with roll-n-time is that each roll advances the block time further into the future.
Instead, with a 2nd nonce field you can roll that field as many times as is valid and allowed by the pool (up to 4 billion times, if hardware ever gets that much faster).

Obviously getworks must still occur to add transactions into the work being processed. However, rigs that can process 100 times faster than your typical 400MH/s GPU will still only need the same number of getworks, not 100 times as many (with miner support, of course), and with higher-difficulty shares, less work will be returned as well.

So ... when's the fork gonna happen? Or will we wait until it's a critical problem ...
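
For reference, a sketch of the layout under discussion: the current 80-byte block header with its single 4-byte nonce at the end, and where a second nonce would go. This is illustrative Python only - the extended layout is hypothetical and is exactly the hard-fork change being proposed above.

Code:
import hashlib
import struct

def serialize_header(version, prev_hash, merkle_root, ntime, nbits, nonce):
    # Current 80-byte Bitcoin block header, all fields little-endian:
    #   version(4) | prev_hash(32) | merkle_root(32) | time(4) | bits(4) | nonce(4)
    # prev_hash and merkle_root are given in internal (little-endian) byte order.
    return (struct.pack("<I", version) + prev_hash + merkle_root +
            struct.pack("<III", ntime, nbits, nonce))

def serialize_header_two_nonces(version, prev_hash, merkle_root, ntime, nbits,
                                nonce, nonce2):
    # The proposal in this thread (hypothetical, needs a hard fork): append a
    # second 32-bit nonce, giving an 84-byte header and a 2^64 search space
    # per piece of work instead of 2^32.
    return (serialize_header(version, prev_hash, merkle_root, ntime, nbits, nonce) +
            struct.pack("<I", nonce2))

def block_hash(header):
    # Proof of work is double SHA-256 over the serialized header.
    return hashlib.sha256(hashlib.sha256(header).digest()).digest()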

Pool: https://kano.is - low 0.5% fee PPLNS 3 Days - Most reliable Solo with ONLY 0.5% fee   Bitcointalk thread: Forum
Discord support invite at https://kano.is/ Majority developer of the ckpool code - k for kano
The ONLY active original developer of cgminer. Original master git: https://github.com/kanoi/cgminer
DeepBit
Donator
Hero Member
Activity: 532
Merit: 501
We have cookies

June 23, 2012, 11:25:36 AM  #2

We already have time rolling for this.
Adding another field is unacceptable since it will break ASIC miners.

Welcome to my bitcoin mining pool: https://deepbit.net ~ 3600 GH/s, Both payment schemes, instant payout, no invalid blocks !
Coming soon: ICBIT Trading platform
Pieter Wuille
Legendary
Activity: 1072
Merit: 1174

June 23, 2012, 11:26:45 AM  #3

See BIP 22. It's a bit complex, but basically it allows moving the entire block-generation process to external processes, which only need to contact bitcoind to receive new transactions or blocks.

I don't think a hard fork is warranted for an efficiency problem for miners. When a fork comes, we can always consider extending the nonce of course...
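
Roughly what such an external process looks like - a minimal sketch assuming a local bitcoind with placeholder RPC credentials; everything beyond fetching the template (coinbase construction, merkle root, header assembly, nonce iteration) happens outside bitcoind:

Code:
import base64
import json
import urllib.request

def bitcoind_rpc(method, params=None, url="http://127.0.0.1:8332",
                 user="rpcuser", password="rpcpass"):
    # Minimal JSON-RPC call to a local bitcoind; credentials are placeholders.
    body = json.dumps({"id": 0, "method": method, "params": params or []}).encode()
    auth = base64.b64encode(f"{user}:{password}".encode()).decode()
    req = urllib.request.Request(url, body, {"Content-Type": "application/json",
                                             "Authorization": "Basic " + auth})
    return json.loads(urllib.request.urlopen(req).read())["result"]

# The external generator only needs to poll (or be notified) for a new template
# when transactions or a new block arrive.
template = bitcoind_rpc("getmemorypool")  # standardized as getblocktemplate in BIP 22
prev_hash = template["previousblockhash"]
transactions = template["transactions"]
# ...build the coinbase, compute the merkle root, assemble headers, hand them to miners...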

I do Bitcoin stuff.
kano (OP)
Legendary
Activity: 4466
Merit: 1798
Linux since 1997 RedHat 4

June 23, 2012, 11:33:30 AM  #4

We already have time rolling for this.
...
Um ... no.
https://bitcointalk.org/index.php?topic=89029.0

And when a device that hashes 100 times faster than a 400MH/s GPU comes out - do you really think the time field is appropriate? Seriously?

...
Adding another field is unacceptable since it will break ASIC miners.
Name one ASIC miner ...

DeepBit
Donator
Hero Member
Activity: 532
Merit: 501
We have cookies

June 23, 2012, 12:26:57 PM  #5

We already have time rolling for this.
...
Um ... no.
https://bitcointalk.org/index.php?topic=89029.0

And when a device that hashes 100 times faster than a 400MH/s GPU comes out - do you really think the time field is appropriate? Seriously?
Yes. Also we have other options: fast pool protocols and client-side work generation.

Adding another field is unacceptable since it will break ASIC miners.
Name one ASIC miner ...
BFL SC, OpenASIC initiative, Vlad's something, Reclaimer.
Not to mention old Artforz's 350nm sASICs.

kano (OP)
Legendary
Activity: 4466
Merit: 1798
Linux since 1997 RedHat 4

June 23, 2012, 12:29:51 PM  #6

...
Adding another field is unacceptable since it will break ASIC miners.
Name one ASIC miner ...
BFL SC, OpenASIC initiative, Vlad's something, Reclaimer.
Not to mention old Artforz's 350nm ASICs.
Yeah I meant one that exists.

If bitcoin decisions are made based on unsubstantiated comments by companies like BFL - man are we in deep shit investing in BTC

Pieter Wuille
Legendary
Activity: 1072
Merit: 1174

June 23, 2012, 12:36:54 PM  #7

Doing a hard fork requires an exceedingly high level of consensus, as it requires everyone in the network to upgrade (not just miners). Unless there is a security flaw in the protocol, I doubt we'll see one anytime soon.

The problem you are complaining about is an efficiency problem for miners. I doubt you'll get a large percentage of the Bitcoin community to even accept this is a problem. At least in my opinion, it is not, as there are much more scalable solutions already available, such as local work generation.

DeepBit
Donator
Hero Member
Activity: 532
Merit: 501
We have cookies

June 23, 2012, 12:41:16 PM  #8

Yeah I meant one that exists.
If bitcoin decisions are made based on unsubstantiated comments by companies like BFL - man are we in deep shit investing in BTC
1. Do you understand that this change will break every existing bitcoin client and service?

2. An ASIC may be "broken" even at the design or testing stage. It doesn't matter if this is BFL or not.

3. Breaking the protocol (and possibly ASICs, and possibly hardware solutions like bitcoin smartcards and so on) without a reason is like waving a big sign saying "No serious business here" at most investors.

kano (OP)
Legendary
Activity: 4466
Merit: 1798
Linux since 1997 RedHat 4

June 23, 2012, 12:55:38 PM  #9

Doing a hard fork requires an exceedingly high level of consensus, as it requires everyone in the network to upgrade (not just miners). Unless there is a security flaw in the protocol, I doubt we'll see one anytime soon.

The problem you are complaining about is an efficiency problem for miners. I doubt you'll get a large percentage of the Bitcoin community to even accept this is a problem. At least in my opinion, it is not, as there are much more scalable solutions already available, such as local work generation.

Actually, it's a problem for pools, not miners.
If pools can't handle larger devices when large MH/s FPGA or ASIC devices show up - OK, we'll have pools failing around us.

It's not an issue of handling a few more higher-powered devices; it's the simple fact that if people can get these devices, then either everyone will be getting devices that are close to an order of magnitude faster, or most of the backbone of BTC (the miners) will be gone and BTC will die.
Anyone with even a little understanding of today's BTC network can see there is no exaggeration in that.

Yes, there are other suggestions, but this one simply ties in with how things work now ...

Yes, I do understand the issues with doing a hard fork - hell, look at the mess caused by the way it was done back in April, and that was a minor change ...

Ignoring Deepbit (especially since most of his miners don't understand or pay much attention to BTC), I'd actually expect most pools to be looking for a solution already, and this one fits easily, code- and design-wise, into the current design.

It is also a long-term solution.
If someone thinks 4 billion times the current network isn't enough, then OK, add two 32-bit nonce fields (for a total of three) and I can't see that running out before BTC is completely redesigned some time in the far future.
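
The arithmetic behind those numbers, using the hashrates discussed in this thread (a 400 MH/s GPU and a hypothetical 1 TH/s ASIC):

Code:
# Search space of N 32-bit nonce fields and how long it takes to exhaust.
for nonces in (1, 2, 3):
    space = 2 ** (32 * nonces)
    for name, rate in (("400 MH/s GPU", 400e6), ("1 TH/s ASIC", 1e12)):
        print(f"{nonces} nonce field(s): 2^{32 * nonces} = {space:.3g} hashes; "
              f"{name} exhausts it in {space / rate:.3g} s")

One field lasts a 1 TH/s device only ~4 ms per work unit; two fields already last it months; three fields would not be exhausted in any realistic timeframe.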

kano (OP)
Legendary
Activity: 4466
Merit: 1798
Linux since 1997 RedHat 4

June 23, 2012, 01:00:31 PM  #10

Yeah I meant one that exists.
If bitcoin decisions are made based on unsubstantiated comments by companies like BFL - man are we in deep shit investing in BTC
1. Do you understand that this change will break every existing bitcoin client and service?

2. An ASIC may be "broken" even at the design or testing stage. It doesn't matter if this is BFL or not.

3. Breaking the protocol (and possibly ASICs, and possibly hardware solutions like bitcoin smartcards and so on) without a reason is like waving a big sign saying "No serious business here" at most investors.
No, it means that this hardware is being designed by short-sighted people who should not be designing hardware.

The US government mandated that SHA-1 no longer be used within the government after 2010.
Why? Because it was broken.
Is this something that people expect will never happen to SHA-2?
Why is there an SHA-3?
Seriously, your argument is nonsense since, who knows, tomorrow SHA-2 could be broken.

Pieter Wuille
Legendary
Activity: 1072
Merit: 1174

June 23, 2012, 01:20:45 PM  #11

Bitcoin is certainly a lot more than just mining, but that doesn't mean the mining business is not part of it. While mining is profitable, economies of scale will always lead to research and development of more efficient (and specialized) hardware. You may argue against the potential for centralization this brings, but from an economic point of view, it is inevitable. A hard fork that would render those investments void is a problem, and it would undermine trust in the Bitcoin system.

Yes, cryptographic primitives get broken and better ones are being developed all the time. I've already said that a security flaw is a very good reason for a hard fork - presumably, few people with an interest in Bitcoin would object to a fix for a fatal security flaw.

However, you still haven't convinced me there is a problem. The current getwork()-based mining requires new work every 4 billion hashes, yes. But when combined with local work generation, or even direct mining on top of getmemorypool(), there is no performance problem at all. A normal CPU can generate work for several TH/s easily when implemented efficiently. I believe a few pools already use this.

Unless it becomes clear that there is an inherent problem with the current system that will limit mining operation in the future (as opposed to implementation issues because of the current state of operations), I see no reason at all for a hard fork.
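
For a sense of the numbers behind "several TH/s": one piece of work covers a single 2^32 nonce range, so the rate at which new work must be produced is just hashrate divided by 2^32 (a back-of-envelope sketch, using figures mentioned in this thread):

Code:
# Work units (2^32 nonce ranges) needed per second at various hashrates.
for name, hashrate in (("400 MH/s GPU", 400e6),
                       ("25 GH/s FPGA rig", 25e9),
                       ("1 TH/s ASIC", 1e12),
                       ("10 TH/s farm", 10e12)):
    print(f"{name}: {hashrate / 2**32:.1f} units/s")
# Each unit costs roughly one coinbase rebuild plus a merkle-branch recompute
# (a handful of SHA-256 calls), which is why a single CPU core can keep up with
# TH/s-scale hardware when the generation code is written efficiently.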

Nachtwind
Hero Member
Activity: 700
Merit: 507

June 23, 2012, 02:35:00 PM  #12

...
Adding another field is unacceptable since it will break ASIC miners.
Name one ASIC miner ...
BFL SC, OpenASIC initiative, Vlad's something, Reclaimer.
Not to mention old Artforz's 350nm ASICs.
Yeah I meant one that exists.

If bitcoin decisions are made based on unsubstantiated comments by companies like BFL - man are we in deep shit investing in BTC

Might as well announce an ASIC that can contain up to three nonces and is modular enough ... it would have the same backing as those of BFL and other companies ...

Sorry ... turning down good ideas because some small FPGA assembler (haven't seen an ASIC yet!) announces some wonder-hasher with unmeetable estimates is not a way to discuss the future of the whole project. What if someone had decided to change the bitcoin algorithm because GPU miners were invented? Should we end up like SolidCoin, where important decisions are made based upon childish ideologies?
2112
Legendary
Activity: 2128
Merit: 1065

June 23, 2012, 05:06:12 PM  #13

kano, here's the way to strengthen your arguments.

I believe you are a programmer and actual co-developer of the mining program written in C++. Please run a test showing how many block headers per second can be generated on a contemporary CPU when the additional "nonce" space is actually put in the "coinbase" field.

ASIC mining chips will probably stay the same as long as the position of the 32-bit nonce field doesn't change within the block header. It makes sense to implement only the innermost loop in the hardware; any outer loops should be implemented in the software that's driving the hashing hardware.
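
A rough sketch of the kind of test being asked for, with dummy transactions standing in for real ones from getmemorypool: the extra nonce space goes into the coinbase, the coinbase's merkle branch is precomputed once, and we count how many fresh 80-byte headers one core can build per second - each header handing the hardware's inner loop a new 2^32 nonce range.

Code:
import hashlib
import struct
import time

def dsha256(b):
    return hashlib.sha256(hashlib.sha256(b).digest()).digest()

def merkle_branch(other_txids):
    # Hashes needed to connect a (changing) coinbase txid to the merkle root;
    # computed once per template, then reused for every extranonce value.
    steps, layer = [], [None] + list(other_txids)  # None marks the coinbase slot
    while len(layer) > 1:
        if len(layer) % 2:
            layer.append(layer[-1])
        steps.append(layer[1])
        layer = [None] + [dsha256(layer[i] + layer[i + 1])
                          for i in range(2, len(layer), 2)]
    return steps

def merkle_root(coinbase_txid, steps):
    root = coinbase_txid
    for step in steps:
        root = dsha256(root + step)
    return root

# Dummy stand-ins; a real test would take these from getmemorypool.
cb_prefix, cb_suffix = b"\x01" * 50, b"\x02" * 50
steps = merkle_branch([dsha256(bytes([i])) for i in range(200)])
prev_block = b"\x00" * 32

count = extranonce = 0
start = time.time()
while time.time() - start < 1.0:
    coinbase = cb_prefix + struct.pack("<Q", extranonce) + cb_suffix
    header = (struct.pack("<I", 2) + prev_block +
              merkle_root(dsha256(coinbase), steps) +
              struct.pack("<III", int(start), 0x1a05db8b, 0))
    count += 1
    extranonce += 1
print(count, "headers/second; each one opens a fresh 2^32 nonce range")

With the branch precomputed, each new extranonce costs one double-SHA256 of the coinbase plus roughly log2(number of transactions) more for the branch, which is exactly why the outer loops belong in the driving software rather than in the hashing hardware.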

Please comment, critique, criticize or ridicule BIP 2112: https://bitcointalk.org/index.php?topic=54382.0
Long-term mining prognosis: https://bitcointalk.org/index.php?topic=91101.0
kano (OP)
Legendary
Activity: 4466
Merit: 1798
Linux since 1997 RedHat 4

June 24, 2012, 01:26:44 AM  #14

...
However, you still haven't convinced me there is a problem. The current getwork()-based mining requires new work every 4 billion hashes, yes. But when combined with local work generation, or even direct mining on top of getmemorypool(), there is no performance problem at all. A normal CPU can generate work for several TH/s easily when implemented efficiently. I believe a few pools already use this.

Unless it becomes clear that there is an inherent problem with the current system that will limit mining operation in the future (as opposed to implementation issues because of the current state of operations), I see no reason at all for a hard fork.
Well, the ASIC 'discussions' at the moment are suggesting 1TH/s devices ... these being the devices that are 10x what most people would normally use.

Looking at current technology, we have ~200-800 MH/s devices in GPUs and FPGAs (of course there are lower ones also, and a few slightly higher).
And looking around pools, it is common to find users who have ~2-5 GH/s, having spent a few thousand dollars on hardware.

gigavps received 4 FPGA rigs in the past couple of days that hash at around 25GH/s each - an order of magnitude in performance and cost.

So if this ASIC performance reality is even close to what is being suggested - an order of magnitude is expected - people like gigavps will be running multiple single devices that are each in the hundreds of GH/s.

Now this already puts a question mark over the statement:
"A normal CPU can generate work for several TH/s easily when implemented efficiently"

However, if we are talking more than an order of magnitude - then this statement is very questionable.

The point in all this is not that some people will be running faster hardware, so yeah, it may be an issue for them; it's that the whole network will be (at least) an order of magnitude faster in hashing, since those who do not take up the new hardware will be gone: the difficulty will be in the tens of millions instead of the millions, and the cost to mine with current hardware will be prohibitive compared to the return.

The other solutions to this say that bitcoind code should be moved to the miner - decisions about how to construct a block header and what information to put into it.
i.e. a performance issue is solved by moving code out of bitcoind and into the end mining software.
Currently pools already do this anyway (but not to the miner) to make their own decisions about what txns to include and how to construct the coinbase, but I see the idea that the miner code itself should take this on as a very poor design solution ... almost a hack.

On top of all this is the network requirement increase on the current code base.
An order of magnitude higher is, I think, really not an option.

Pieter Wuille
Legendary
Activity: 1072
Merit: 1174

June 24, 2012, 08:46:38 AM  #15

"A normal CPU can generate work for several TH/s easily when implemented efficiently"
However, if we are talking more than an order of magnitude - then this statement is very questionable.

That is one single CPU. If work generation becomes too heavy for a CPU, do it on two. If that becomes too much, do it on a GPU. By the time a GPU can't do work generation for your ASIC cluster behind it, it will be economically viable to move the work generation to an ASIC.

Mike Hearn
Legendary
Activity: 1526
Merit: 1128

June 24, 2012, 11:57:22 AM  #16

If you look at what poolserverj is doing, it's clear that centralized pools can scale up more or less indefinitely. No protocol changes are needed and thus none will occur.
Inaba
Legendary
Activity: 1260
Merit: 1000

June 25, 2012, 12:15:47 PM  #17

What is Poolserverj doing that allows it to scale?  You mean the local work generation? 

I don't think PSJ can scale all that well in its current form, but I could be wrong. From when I investigated it (and ultimately rejected it as a back end), it had some serious limitations. It may have improved since then, but it was my understanding that the developer had abandoned it and it has not progressed in a very long time?


If you're searching these lines for a point, you've probably missed it.  There was never anything there in the first place.
DrHaribo
Legendary
Activity: 2730
Merit: 1034
Needs more jiggawatts

June 25, 2012, 12:52:56 PM  #18

rollntime will do just fine. If you don't want to roll even 1 second into the future and you need 3 nonce ranges per second, just get 3 sets of rollable work data. Problem solved. Also, being a few seconds into the future (or the past) is ok.

The other option besides rollntime is generating work locally and using getmemorypool (BIP 22, as Pieter Wuille already mentioned) instead of getwork.

There's no need to change the format of bitcoin blocks.
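
For concreteness, what "rolling" the time means - a sketch operating on the canonical 80-byte header (getwork's "data" field is a byte-swapped version of this), with the allowed roll distance being whatever the pool advertises, e.g. via the X-Roll-NTime extension:

Code:
import struct

def roll_ntime(header80, seconds):
    # The 32-bit little-endian timestamp sits at byte offset 68 of the 80-byte
    # header (after version, prev hash and merkle root). Bumping it yields a
    # fresh 2^32 nonce range without another getwork round trip.
    rolled = bytearray(header80)
    ntime = struct.unpack_from("<I", rolled, 68)[0]
    struct.pack_into("<I", rolled, 68, ntime + seconds)
    return bytes(rolled)

# A rig needing 3 nonce ranges per second (~13 GH/s) can hold 3 rollable work
# units and advance each by 1 second every second, staying within a second or
# two of real time - the "3 sets of rollable work data" suggested above.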

▶▶▶ bitminter.com 2011-2020 ▶▶▶ pool.xbtodigital.io 2023-
kjj
Legendary
Activity: 1302
Merit: 1024

June 25, 2012, 05:43:16 PM  #19

You could always add a new block version number to exist alongside the current block version.  There is no need for a hard fork, nor to break everything in one day.

17Np17BSrpnHCZ2pgtiMNnhjnsWJ2TMqq8
I routinely ignore posters with paid advertising in their sigs.  You should too.
Pieter Wuille
Legendary
Activity: 1072
Merit: 1174

June 25, 2012, 05:52:36 PM  #20

You could always add a new block version number to exist alongside the current block version.  There is no need for a hard fork, nor to break everything in one day.

Doing that requires a hard fork, as it means some blocks will be valid to new nodes but not to old ones. The first block mined under the new rules will permanently kick every old node onto a side chain that contains no new-version blocks.
