Bitcoin Forum
Author Topic: Difficulty > 1 share adoption suggestion for pool operators.  (Read 3103 times)
rjk
Sr. Member
****
Offline Offline

Activity: 420


1ngldh


View Profile
June 23, 2012, 05:18:19 PM
 #21

Processing a proof of work server-side is just like mining, isn't it? At least with regard to the hashing operation that takes place - it just processes the specific nonces that were submitted rather than random ones. I haven't really thought about it until now, but for a 1 TH/s pool, roughly how many nonces does it need to validate per second? I can see how a slow CPU could be a bottleneck, and I wonder if pool software could offload that processing to a video card - otherwise faster devices will trounce almost any pool.

I haven't run the numbers, but if we assume a dedicated server can hash at perhaps 10 MH/s in a single thread (and that might be a bit high), is that enough to deal with (perhaps several) 1 TH/s miners flooding it with nonces to process?
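For what it's worth, the per-share work on the server side is tiny - one double SHA-256 over the submitted 80-byte header and a compare against the share target. A minimal Python sketch of that check (illustrative only, not any pool's actual code):

Code:
import hashlib

# Minimal sketch (not any particular pool's code) of what checking one submitted
# nonce costs the server: a single double SHA-256 over the 80-byte block header,
# compared against the share target.
def share_meets_target(header80: bytes, target: int) -> bool:
    digest = hashlib.sha256(hashlib.sha256(header80).digest()).digest()
    # Bitcoin treats the hash as a little-endian 256-bit integer for the comparison.
    return int.from_bytes(digest, "little") <= target

# Difficulty-1 share target (the usual pool share target):
DIFF1_TARGET = 0xFFFF * 2 ** 208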

Mining Rig Extraordinaire - the Trenton BPX6806 18-slot PCIe backplane [PICS] Dead project is dead, all hail the coming of the mighty ASIC!
eleuthria
Legendary
*
Offline Offline

Activity: 1750


BTC Guild Owner


View Profile WWW
June 23, 2012, 05:54:33 PM
 #22

Processing a proof of work server-side is just like mining, isn't it? At least with regard to the hashing operation that takes place - it just processes the specific nonces that were submitted rather than random ones. I haven't really thought about it until now, but for a 1 TH/s pool, roughly how many nonces does it need to validate per second? I can see how a slow CPU could be a bottleneck, and I wonder if pool software could offload that processing to a video card - otherwise faster devices will trounce almost any pool.

I haven't run the numbers, but if we assume a dedicated server can hash at perhaps 10 MH/s in a single thread (and that might be a bit high), is that enough to deal with (perhaps several) 1 TH/s miners flooding it with nonces to process?

Yes, because a difficulty=1 share represents approximately 2^32 hashes of work, while verifying it only takes a single hash on the server. Pool software likely isn't using anywhere near the optimizations that mining software uses to evaluate hashes, so let's say it can only hash at ~10 KH/s. That still means it can verify 10,000 shares per second. This is a very rough number, since no pool has ever had to verify anywhere close to that.

BTC Guild currently processes ~380,000 shares in 20 minutes, or 316 shares per second.  If my 10,000/second number is accurate, that means BTC Guild could support ~31x the hash power it currently has, which is about 35 TH/s.
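Back-of-the-envelope, in code form (same figures as above):

Code:
# Back-of-the-envelope check of the figures above. The 10,000 verifications/second
# capacity is the rough guess from this post, not a measured number.
HASHES_PER_SHARE = 2 ** 32                  # expected work behind one difficulty-1 share

verify_capacity = 10_000                    # shares the server could check per second
current_rate = 380_000 / (20 * 60)          # ~317 shares/s currently being submitted
headroom = verify_capacity / current_rate   # ~31x before verification becomes the limit
print(f"{current_rate:.0f} shares/s now, roughly {headroom:.0f}x headroom")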

Now in this scenario, I'm starting to think sending out higher-difficulty shares would be beneficial. The likely bottleneck would be DB access time (with my current software setup, inserting ~100k rows every 10 seconds in batches).
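As a rough illustration of that batching pattern (the schema and field names here are invented, this isn't BTC Guild's actual code):

Code:
import sqlite3
import time

# Invented schema, purely to illustrate batched inserts vs. one INSERT per share.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE shares (worker TEXT, difficulty REAL, submitted_at REAL)")

# Buffer ~100k shares in memory, then write them in one batch every 10 seconds.
pending = [(f"worker{i % 500}", 1.0, time.time()) for i in range(100_000)]
conn.executemany("INSERT INTO shares VALUES (?, ?, ?)", pending)
conn.commit()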

Historically, the biggest problem BTC Guild has had is pushing out longpoll connections quickly. When you start having 4,000+ longpoll connections, things start to bog down on the server. Normally the bulk of those connections are from inefficient miners, so it's very difficult to say just where the bottleneck will lie when dealing with highly efficient and very fast mining hardware.

R.I.P. BTC Guild, 2011 - 2015.
BTC Guild Forum Thread
-ck
Moderator
Legendary
*
Offline Offline

Activity: 2002


Ruu \o/


View Profile WWW
June 24, 2012, 04:12:29 PM
 #23

Well I just committed a busload of changes to cgminer to help further reduce getwork load.

Primary developer/maintainer for cgminer and ckpool/ckproxy.
Pooled mine at kano.is, solo mine at solo.ckpool.org
Luke-Jr
Legendary
*
Offline Offline

Activity: 2086



View Profile
June 27, 2012, 11:57:44 PM
 #24

Would be nice if more miners implemented at least the X-Mining-Hashrate header...
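For anyone unfamiliar with it: the miner advertises its hash rate in the headers of its getwork requests so the pool can hand back appropriately high-difficulty work. A rough sketch of what that looks like from the miner side (the URL and the hash-rate figure are placeholders, and the hashes-per-second unit is my reading of the extension):

Code:
import json
import urllib.request

# Illustrative only: a getwork request carrying the X-Mining-Hashrate header
# (hash rate in hashes per second, as the miner measures it). The pool URL,
# credentials and the ~2 GH/s figure are placeholders.
req = urllib.request.Request(
    "http://pool.example.com:8332/",
    data=json.dumps({"id": 1, "method": "getwork", "params": []}).encode(),
    headers={
        "Content-Type": "application/json",
        "X-Mining-Hashrate": "2000000000",  # ~2 GH/s
    },
)
# urllib.request.urlopen(req)  # not executed here; there is no pool at that URL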

-ck
Moderator
Legendary
*
Offline Offline

Activity: 2002


Ruu \o/


View Profile WWW
June 27, 2012, 11:58:59 PM
 #25

Would be nice if more miners implemented at least the X-Mining-Hashrate header...
Is that even required? If the pool detects a high hashrate itself, wouldn't that suffice?

Primary developer/maintainer for cgminer and ckpool/ckproxy.
Pooled mine at kano.is, solo mine at solo.ckpool.org
Luke-Jr
Legendary
*
Offline Offline

Activity: 2086



View Profile
June 28, 2012, 02:46:11 AM
 #26

Would be nice if more miners implemented at least the X-Mining-Hashrate header...
Is that even required? If the pool detects a high hashrate itself, wouldn't that suffice?
With load-balancing miners, the pool might detect a much lower hashrate than the rate work is being requested for. Also, not all pools (if any) have a way for high-level statistics like hashrate to be communicated back to the core pool server.

kano
Legendary
*
Offline Offline

Activity: 1932


Linux since 1997 RedHat 4


View Profile
June 28, 2012, 04:49:45 AM
 #27

Would be nice if more miners implemented at least the X-Mining-Hashrate header...
Is that even required? If the pool detects a high hashrate itself, wouldn't that suffice?
With load-balancing miners, the pool might detect a much lower hashrate than the rate work is being requested for. Also, not all pools (if any) have a way for high-level statistics like hashrate to be communicated back to the core pool server.
But of course all pools do have that: quite simply, the overall share submission rate.
Since that is a rather large statistical sample, it is also far more accurate than any number reported by the miner software.

Having the miner program try to report the correct value is prone to all sorts of issues:
How long can the pool assume the miner keeps mining at the rate specified? That is purely a guess.
What percentage of its hash rate is the miner providing to this pool? Also a guess.
How accurately does the miner measure its own hash rate? Another guess.
Overall - inaccurate.

How accurate is the share submission rate? 100%.
How accurate is converting that to a hash rate for anything but a tiny pool? Extremely.
Even for a single miner, over a period of a day, converting 'U:' to a hash rate is very accurate.
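In code form the conversion is trivial (a sketch; 'U:' is cgminer's accepted-shares-per-minute figure):

Code:
HASHES_PER_SHARE = 2 ** 32   # expected hashes behind one difficulty-1 share

def hashrate_from_shares(shares, seconds, share_difficulty=1.0):
    """Hash rate (hashes/second) implied by `shares` accepted over `seconds`."""
    return shares / seconds * share_difficulty * HASHES_PER_SHARE

# e.g. a miner averaging U: 5.6 (shares per minute) works out to about 400 MH/s
print(round(hashrate_from_shares(5.6, 60) / 1e6), "MH/s")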

Pool: https://kano.is BTC: 1KanoiBupPiZfkwqB7rfLXAzPnoTshAVmb
CKPool and CGMiner developer, IRC FreeNode #ckpool and #cgminer kanoi
Help keep Bitcoin secure by mining on pools with Stratum, the best protocol to mine Bitcoins with ASIC hardware
Koooooj
Member
**
Offline Offline

Activity: 75



View Profile
June 28, 2012, 11:13:21 AM
 #28

This increase in difficulty seems interesting to me. I have a line of thought that may lead to a solution that vastly reduces the network and processing requirements of the pool servers. I'll try to lay it out here. I'm sure there are holes and/or inaccuracies in this, so please try not to dismiss it over a trivial problem.

As I understand it, the standard way that pooled mining works is this: the pool server looks at the present block chain, the pending transactions, and a generation transaction that sends the 50 coins to the server, and puts together the input to the Bitcoin hash function. Then miners request ranges of nonces to check. The miners compute those hashes and respond with the ones that hash below a certain value (much larger than whatever it takes to mine a block). The server then takes the claimed shares and cheaply recomputes those few hashes to give credit for the work. The pool knows the miner is working honestly because the shares check out OK.

What I propose is that the pool gives each miner an address to send the 50 coins to. Then each miner makes the input to the hash function be the block chain, the existing transactions, a generation transaction sending 50 to the MINER, and a transaction sending 50 to the pool. The pool can still check that the block is valid, and only gives credit to the miner if it is playing by those rules. With this setup, each miner has a different generation transaction, so there is no reason to partition off nonces through a centralized server. It may cause larger network demands since the full block must be communicated, not just the nonce, but I feel like it could also reduce stale shares. It also requires that the miner have access to the full block chain, which not all do.

Let me know what you all think. Sorry if it's a bit wordy.
Luke-Jr
Legendary
*
Offline Offline

Activity: 2086



View Profile
June 28, 2012, 02:07:43 PM
 #29

This increase in difficulty seems interesting to me. I have a line of thought that may lead to a solution that vastly reduces the network and processing requirements of the pool servers. I'll try to lay it out here. I'm sure there are holes and/or inaccuracies in this, so please try not to dismiss it over a trivial problem.

As I understand it, the standard way that pooled mining works is this: the pool server looks at the present block chain, the pending transactions, and a generation transaction that sends the 50 coins to the server, and puts together the input to the Bitcoin hash function. Then miners request ranges of nonces to check. The miners compute those hashes and respond with the ones that hash below a certain value (much larger than whatever it takes to mine a block). The server then takes the claimed shares and cheaply recomputes those few hashes to give credit for the work. The pool knows the miner is working honestly because the shares check out OK.

What I propose is that the pool gives each miner an address to send the 50 coins to. Then each miner makes the input to the hash function be the block chain, the existing transactions, a generation transaction sending 50 to the MINER, and a transaction sending 50 to the pool. The pool can still check that the block is valid, and only gives credit to the miner if it is playing by those rules. With this setup, each miner has a different generation transaction, so there is no reason to partition off nonces through a centralized server. It may cause larger network demands since the full block must be communicated, not just the nonce, but I feel like it could also reduce stale shares. It also requires that the miner have access to the full block chain, which not all do.

Let me know what you all think. Sorry if it's a bit wordy.
BIP 0022
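(For context: BIP 0022 is getblocktemplate, where the server hands out a block template and the miner assembles the block - including its own generation transaction - itself, which covers most of what's proposed above. A minimal JSON-RPC request of the kind the BIP describes, transport details omitted:)

Code:
import json

# A minimal getblocktemplate JSON-RPC request of the kind BIP 22 describes; which
# server it goes to and how it is authenticated is left out here.
request = {
    "id": 0,
    "method": "getblocktemplate",
    "params": [{"capabilities": ["coinbasetxn", "workid", "coinbase/append"]}],
}
print(json.dumps(request))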
