Bitcoin Forum
Author Topic: Mining inefficiency due to discarded work  (Read 10504 times)
OneFixt
Member
**
Offline Offline

Activity: 84


View Profile
February 01, 2011, 08:05:45 AM
 #41

Only 1 answer gets the block.  So when you say "You can skip an answer by going on to the next work", are you saying that the likelihood of that skipped getwork/answer being *the answer* is so small that it's just not worth finding it, and we should just move on to the next getwork?

I'm saying that there are 1.224 × 10^63 answers which get the block, so skipping one of them is inconsequential.

The answer which counts is simply the first one of those 1.224 × 10^63 valid answers which is found, but it is by no means the only answer.

We simply stop looking after we find it.  If we had wanted to, we could hash the same block for a month and find about 4,380 valid answers for it in that time.
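The numbers in this post can be sanity-checked in a few lines. A back-of-the-envelope sketch, assuming the early-2011 figures quoted above (~1.224 × 10^63 valid hash outputs out of 2^256, and the network's target pace of one block per ~10 minutes):

```python
# Back-of-the-envelope check of the figures above (early-2011 values,
# assumed from the thread): ~1.224e63 of the 2^256 possible hash outputs
# are valid block solutions at this difficulty.
valid_answers = 1.224e63
search_space = 2.0 ** 256

p_win_per_hash = valid_answers / search_space  # chance any one hash wins

# The network tunes difficulty so a block is found every ~10 minutes; at
# that pace, hashing the same block for a month keeps producing wins:
blocks_per_month = 30 * 24 * 6   # one per 10 min -> 6 per hour
print(blocks_per_month)          # 4320, close to the ~4,380 quoted above
```

The 30-day month is an assumption; a 30.4-day average month lands nearer the 4,380 quoted.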

163c6YtwNbfVSyVvMQCBcmNX9RdYQdRqqa
FairUser
Sr. Member
****
Offline Offline

Activity: 261


View Profile WWW
February 01, 2011, 08:16:44 AM
 #42

Only 1 answer gets the block.  So when you say "You can skip an answer by going on to the next work", are you saying that the likelihood of that skipped getwork/answer being *the answer* is so small that it's just not worth finding it, and we should just move on to the next getwork?

I'm saying that there are 1.224 × 10^63 answers which get the block, so skipping one of them is inconsequential.

The answer which counts is simply the first one of those 1.224 × 10^63 valid answers which is found, but it is by no means the only answer.

We simply stop looking after we find it.  If we had wanted to, we could hash the same block for a month and find about 4,380 valid answers for it in that time.

OK, my mind just fucking flipped.  Nobody but you has said there are 1.224 × 10^63 answers to a block.  I've been under the impression (due to information provided by others) that only 1 answer was the correct answer to find the block, and that the getwork:solved ratio was always 1:1.
We've been talking about multiple solutions to a getwork... but that now makes a bit more sense as to why I'm seeing more than 1 answer in a getwork.

So ANY one of the possible 1.224 × 10^63 answers would catch me 50 bitcoins?


SO....

1.1579 × 10^77 (2^256) / 1.224 × 10^63 = 9.46 × 10^13 hashes per block, on average

9.46 × 10^13 / 30,000,000,000 (pool speed, hashes/s) = 3,153 seconds
3,153 seconds / 60 = 52.6 minutes
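Re-deriving that arithmetic as a quick Python check, using the same assumed pool speed of 30 Ghash/s:

```python
# Re-deriving the arithmetic above with Python floats.
search_space = 2.0 ** 256
valid_answers = 1.224e63
pool_speed = 30e9  # assumed pool speed from the post: 30,000,000,000 hash/s

hashes_per_block = search_space / valid_answers  # ~9.46e13
seconds = hashes_per_block / pool_speed
print(seconds / 60)  # ~52.6 minutes, matching the figure above
```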

That sounds about right.  You're the first one to explain this in that level of detail.  Thank you for clarifying.  Sure wish someone else had said it like that.
So if there are 1.224e+63 answers, the miner just says "10 seconds have passed, so fuck it, move to the next getwork and hope we find something."  Something like that?
slush
Legendary
*
Offline Offline

Activity: 1358



View Profile WWW
February 01, 2011, 08:51:13 AM
 #43

only 1 answer was the correct answer to find the block, and that the getwork:solved ratio was always 1:1.

Man, everyone keeps telling you that finding a share in a getwork is random. There is no such thing as 'exactly one share hidden in every getwork'. There is one share per getwork ON AVERAGE, because difficulty 1 means one solution per every 2^32 attempts, and 2^32 is the size of the nonce range that miners iterate. Nothing more.
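slush's point, that a getwork holds one difficulty-1 share only on average, can be sketched with a scaled-down simulation (the 1-in-1,000 odds below are hypothetical, chosen so it runs instantly):

```python
import random

# A difficulty-1 share appears once per 2^32 hashes on average, and a
# getwork spans exactly 2^32 nonces, so the *expected* shares per fully
# searched getwork is 1 -- but any given getwork may hold 0, 1, or several.
print((1 / 2.0**32) * 2**32)  # 1.0

# Scaled-down simulation with the same 1-in-N structure:
random.seed(1)
N = 1_000  # hypothetical small odds standing in for 2^32
counts = [sum(random.random() < 1 / N for _ in range(N))
          for _ in range(10_000)]
print(min(counts), max(counts), sum(counts) / len(counts))  # avg near 1.0
```

Some simulated getworks contain zero shares, some several; only the average is 1.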

And no, the title of this topic should not be 'how is poclbm skipping blocks', because that isn't true.

FairUser
Sr. Member
****
Offline Offline

Activity: 261


View Profile WWW
February 01, 2011, 08:58:08 AM
 #44


And no, title of this topic should not be 'how is poclbm skipping blocks', because it isn't true.

You misquoted me.
I never said "how is poclbm skipping blocks". 
What I did say was "How Python OpenCL (poclbm) is mining inefficiently".
Can you read/see the difference?  If you're going to quote someone, get it right.

poclbm has found 10 blocks for your pool on my machines. 
It's the number of answers per getwork that I've been questioning, NOT BLOCKS.
OneFixt
Member
**
Offline Offline

Activity: 84


View Profile
February 01, 2011, 09:20:25 AM
 #45

That sounds about right.  You're the first one to explain this in that level of detail.  Thank you for clarifying.  Sure wish someone else had said it like that.
So if there are 1.224e+63 answers, the miner just says "10 seconds have passed, so fuck it, move to the next getwork and hope we find something."  Something like that?

Yep, you lose a negligible percentage of valid answers by skipping the rest of a getwork, but you would invalidate a full 5% of any correct answers you find by waiting 30 seconds between getworks.

In other words, you reduce your chances of solving a block by 1/600 for every second of working on a potentially stale getwork.

You'd have to skip 4.49 x 10^64 full getworks in order to skip 1/600th of all valid solutions.
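OneFixt's rule of thumb above translates directly into code. A minimal sketch, assuming the network's ~600-second average block interval:

```python
# Blocks arrive every ~600 s on average, so every second spent on a
# possibly-stale getwork risks ~1/600 of the shares you find being stale.
BLOCK_INTERVAL = 600.0  # seconds, network average

def stale_fraction(getwork_age):
    """Expected fraction of found shares that are stale at this age."""
    return getwork_age / BLOCK_INTERVAL

print(f"{stale_fraction(10):.2%}")  # 1.67% at a 10 s askrate
print(f"{stale_fraction(30):.2%}")  # 5.00% at a 30 s askrate
```

Those two outputs match the 1.67% and 5% figures quoted in this exchange.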

Cablesaurus
Sr. Member
****
Offline Offline

Activity: 302



View Profile WWW
February 07, 2011, 09:37:15 PM
 #46

That sounds about right.  You're the first one to explain this in that level of detail.  Thank you for clarifying.  Sure wish someone else had said it like that.
So if there are 1.224e+63 answers, the miner just says "10 seconds have passed, so fuck it, move to the next getwork and hope we find something."  Something like that?

Yep, you lose a negligible percentage of valid answers by skipping the rest of a getwork, but you would invalidate a full 5% of any correct answers you find by waiting 30 seconds between getworks.

In other words, you reduce your chances of solving a block by 1/600 for every second of working on a potentially stale getwork.

You'd have to skip 4.49 x 10^64 full getworks in order to skip 1/600th of all valid solutions.

If I read this right, you're confirming it's potentially more efficient to skip stale getworks at 10 s rather than wait longer and run through the whole getwork request, right?

OneFixt
Member
**
Offline Offline

Activity: 84


View Profile
February 09, 2011, 11:15:48 AM
 #47

That sounds about right.  You're the first one to explain this in that level of detail.  Thank you for clarifying.  Sure wish someone else had said it like that.
So if there are 1.224e+63 answers, the miner just says "10 seconds have passed, so fuck it, move to the next getwork and hope we find something."  Something like that?

Yep, you lose a negligible percentage of valid answers by skipping the rest of a getwork, but you would invalidate a full 5% of any correct answers you find by waiting 30 seconds between getworks.

In other words, you reduce your chances of solving a block by 1/600 for every second of working on a potentially stale getwork.

You'd have to skip 4.49 x 10^64 full getworks in order to skip 1/600th of all valid solutions.

If I read this right, you're confirming it's potentially more efficient to skip stale getworks at 10 s rather than wait longer and run through the whole getwork request, right?

Correct. You lose 1.67% efficiency with a 10 s getwork interval; you might want to lower it to 5 s.

geebus
Sr. Member
****
Offline Offline

Activity: 258



View Profile WWW
February 13, 2011, 01:58:15 PM
 #48

If I run with an askrate of 5 seconds on an HD4850, which averages about 55 Mhash/s, it will take me roughly 78 seconds to iterate the entire 2^32 nonce range, so I'm only covering about 6.4% of the getwork and assuming that an answer, if any, will turn up within that 5-second span each time. Correct? Theoretically, that's what is being said, right?

I can see that working for a 6950 at ~350 Mhash/s (12-13 seconds for a full 2^32), because I'd still be covering about 40% of the getwork, but for slower cards, or CPU miners, it seems ridiculous to assume you would find an answer in the first 5, 10, or even 20 seconds.

m0mchil's miner, as the code appears on GitHub, forces the askrate to be between 1 and 10 seconds, i.e. if I set my askrate to 20 seconds, the miner will automatically set it to 10.

It's inefficient to do so. Faster cards can iterate a getwork quickly enough for 5 (or 10) seconds to be a worthwhile interval; slower cards cannot.

Let's assume for a moment that I have two cards, one running at ~55 Mhash/s and one at 350 Mhash/s, and I process 1,000 getworks with each at a 5-second askrate.

350 Mhash/s card:
2^32 / 350,000,000 = 12.3 s per full getwork
40.7% of each getwork completed in 5 seconds.
The equivalent of 407 of 1,000 getworks searched entirely.

55 Mhash/s card:
2^32 / 55,000,000 = 78.1 s per full getwork
6.4% of each getwork completed in 5 seconds.
The equivalent of 64 of 1,000 getworks searched entirely.

The statistical probability of finding an answer in the first 6.4% of a getwork is so low that you may as well not bother mining with an askrate that low, or just switch to a CPU miner.
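The coverage figures in this post can be reproduced with a short helper (a sketch; the function name is mine, not poclbm's):

```python
# The slice of a getwork's 2^32 nonces a card gets through before a
# 5-second askrate discards it, for the hashrates cited in the thread.
NONCE_SPACE = 2 ** 32

def coverage(hashrate, askrate=5.0):
    """Fraction of one getwork searched before requesting new work."""
    return min(1.0, hashrate * askrate / NONCE_SPACE)

print(f"{coverage(350e6):.1%}")  # ~40% for the 350 Mhash/s card
print(f"{coverage(55e6):.1%}")   # ~6.4% for the 55 Mhash/s card
print(f"{coverage(5e6):.2%}")    # ~0.58% for a ~5 Mhash/s CPU miner
```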

Feel like donating to me? BTC Address: 14eUVSgBSzLpHXGAfbN9BojXTWvTb91SHJ
OneFixt
Member
**
Offline Offline

Activity: 84


View Profile
February 14, 2011, 06:04:06 AM
 #49

If I run with an askrate of 5 seconds on an HD4850, which averages about 55 Mhash/s, it will take me roughly 78 seconds to iterate the entire 2^32 nonce range, so I'm only covering about 6.4% of the getwork and assuming that an answer, if any, will turn up within that 5-second span each time. Correct? Theoretically, that's what is being said, right?

I can see that working for a 6950 at ~350 Mhash/s (12-13 seconds for a full 2^32), because I'd still be covering about 40% of the getwork, but for slower cards, or CPU miners, it seems ridiculous to assume you would find an answer in the first 5, 10, or even 20 seconds.

m0mchil's miner, as the code appears on GitHub, forces the askrate to be between 1 and 10 seconds, i.e. if I set my askrate to 20 seconds, the miner will automatically set it to 10.

It's inefficient to do so. Faster cards can iterate a getwork quickly enough for 5 (or 10) seconds to be a worthwhile interval; slower cards cannot.

Let's assume for a moment that I have two cards, one running at ~55 Mhash/s and one at 350 Mhash/s, and I process 1,000 getworks with each at a 5-second askrate.

350 Mhash/s card:
2^32 / 350,000,000 = 12.3 s per full getwork
40.7% of each getwork completed in 5 seconds.
The equivalent of 407 of 1,000 getworks searched entirely.

55 Mhash/s card:
2^32 / 55,000,000 = 78.1 s per full getwork
6.4% of each getwork completed in 5 seconds.
The equivalent of 64 of 1,000 getworks searched entirely.

The statistical probability of finding an answer in the first 6.4% of a getwork is so low that you may as well not bother mining with an askrate that low, or just switch to a CPU miner.

You are assuming that you can iterate through some significant percentage of the entire search space, which you cannot, even if you mined for the rest of your life on all the hardware on earth combined.

There is a fixed chance of finding a winning block on any single hash you do, like playing the lottery.  It does not matter in what order you choose the tickets out of a near-infinite bucket, one out of every X of which happens to be a winner; your chance of picking a winner is the same on every draw.

One 32-bit getwork covers roughly 3.7 × 10^-66 % of the entire search space (2^32 out of 2^256 hashes).
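The lottery analogy can be simulated. A sketch with scaled-down hypothetical odds (1 in 10,000 rather than the real difficulty) so it finishes instantly:

```python
import random

# The chance of a win is identical on every hash, so the expected number
# of attempts to a win is 1/p no matter how the search is chunked into
# getworks. Scaled-down hypothetical odds below.
P_WIN = 1 / 10_000
random.seed(42)

def attempts_until_win():
    """Draw hashes until one 'wins'; return how many it took."""
    n = 1
    while random.random() >= P_WIN:
        n += 1
    return n

trials = [attempts_until_win() for _ in range(2_000)]
print(sum(trials) / len(trials))  # close to 1/P_WIN = 10,000
```

The geometric distribution here is memoryless: abandoning a getwork partway and starting a fresh one changes nothing about the expected time to a win.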

geebus
Sr. Member
****
Offline Offline

Activity: 258



View Profile WWW
February 14, 2011, 11:53:08 AM
 #50

That is really only a concern if I'm looking to solve "the block" with an answer. I realize that the likelihood of me solving the block is extremely low. My concern lies in finding a "share" in a pool environment similar to slush's pool.

With an askrate of 5 seconds, the likelihood of a slower miner finding a share within a getwork in those 5 seconds is so incredibly low that you're basically nullifying its chances of doing so.

Both of my previous examples cited GPUs, one at a fairly decent speed and one much slower. That didn't even take CPU mining into consideration, where you may only be getting ~5 Mhash/s: it would take 859 seconds to process a single getwork, and 5 seconds would cover only 0.58% of it.

This causes significant traffic on the server due to constant, repetitive getwork requests that yield little to no gain for the pool. Even 10 seconds would be low for a lot of miners.

If you adjusted the askrate based on the speed of the card, and bumped anything slower than ~86 Mhash/s (50 seconds for a full 2^32 getwork) up to a reasonable interval (around 20-25 s, perhaps), you could significantly reduce traffic to the server, still yield roughly the same number of shares over the same period of time, and likely let slower clients discover results they would normally have missed or never reached (anything in the other 99.42% of the getwork).
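That adjustment could be sketched as follows (a hypothetical helper, not part of poclbm; the 25% target coverage and the floor/cap values are my assumptions, chosen to land in the 20-25 s range suggested above):

```python
# Scale the askrate to the card's speed so slow miners cover a sensible
# slice of each getwork without hammering the pool with requests.
NONCE_SPACE = 2 ** 32

def suggested_askrate(hashrate, target_coverage=0.25, floor=5.0, cap=25.0):
    """Seconds between getworks to cover ~target_coverage of the nonces."""
    ideal = target_coverage * NONCE_SPACE / hashrate
    return max(floor, min(cap, ideal))

print(suggested_askrate(350e6))  # fast GPU: pinned at the 5 s floor
print(suggested_askrate(55e6))   # slow GPU: stretched to ~19.5 s
print(suggested_askrate(5e6))    # CPU miner: capped at 25 s
```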

Likewise, since it's all a matter of luck whether any of those shares solves the block, you would have exactly the same chance of solving it.

I don't know about you, but if I were the one running a pool, having my bandwidth and server resources reduced by a significant amount would make me happy.

OneFixt
Member
**
Offline Offline

Activity: 84


View Profile
February 14, 2011, 03:34:59 PM
 #51

With an askrate of 5 seconds, the likelihood of a slower miner finding a share within a getwork in those 5 seconds is so incredibly low that you're basically nullifying its chances of doing so.

The chance of finding a share is exactly the same whether you spend 50 seconds on one getwork or 5 seconds each on 10 getworks in a row.
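That claim reduces to a one-line identity: expected shares depend only on total hashes done, not on how they are split across getworks. A sketch using the slow card from the earlier example:

```python
# Expected share counts for 50 s on one getwork vs 5 s each on ten.
P_SHARE = 1 / 2.0 ** 32   # difficulty-1 odds per hash
HASHRATE = 55e6           # the ~55 Mhash/s card from the earlier example

one_long = 50 * HASHRATE * P_SHARE         # 50 s on a single getwork
ten_short = 10 * (5 * HASHRATE * P_SHARE)  # 5 s each on 10 getworks
print(one_long, ten_short)  # identical expected share counts (~0.64)
```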

I don't know about you, but if I were the one running a pool, having my bandwidth and server resources reduced by a significant amount would make me happy.

I do run a pool, and I don't pay for stale shares since they are of no help to me in solving a block.  Slush does not pay for them for the same reason.

If your goal is to win, you should be playing today's lottery, not last week's.

If this still doesn't make sense, please re-read the technical specifications of bitcoin block generation and summarize them in a post, then I could correct any misunderstandings you may have about the process.
