FairUser (OP)
Sr. Member
Offline
Activity: 1344
Merit: 264
February 01, 2011, 03:10:31 AM
I wouldn't call it "more" hashing overhead, since it's the same number of kHash/s regardless of *what getwork* it's on. My kHash/s doesn't change just because I'm on a different getwork.
You can call it whatever you like, but with a long getwork period you are hashing worthless work for a large percentage of the time :-).
No, I get the same number of accepted shares as I do with the normal miner. You can't ever expect to see (or find) the entire puzzle (the block) when you choose to ignore any part (the skipped hashes in a getwork) of that puzzle.
Well, getwork is not a puzzle. It is a random walk where you hit a valid share from time to time. Nonces are just numbers. It's irrelevant whether you are trying to hash 0xaaaa or 0xffff. The probability that you hit a valid share is still the same.
But if I get to 0xcccc, find an answer, and stop looking, I *could be missing* more answers.
1) Not ignoring nonces of the getwork when a hash is found
Well, this is the only point which makes sense. Diablo already implemented this, and if it isn't in m0mchil's miner, it would be nice to implement it too. But that's m0mchil's decision, not ours. That's why I posted in his thread. Also, sorry for some impatient responses, but I'm answering these questions for pool users almost every day and it becomes a little boring. It isn't anything personal. To be honest, it wasn't long ago that I had very similar questions myself. But thanks to m0mchil, Diablo, and a few other people on IRC, I now know how wrong I was.
Maybe I'm totally wrong in thinking that ignored POSSIBLE answers COULD BE *THE* ANSWER for the block....since I've already found 10 blocks for the pool. If DiabloMiner does look through the entire 2^32 possible answers, then it is being 100% efficient. I'd like to see the same with m0mchil's miner, so I made the changes I wanted to see, and it bothered me when I realized it was ignoring possible answers.
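For reference, the change in point (1) amounts to something like the following sketch; check_nonce() and submit_share() are hypothetical stand-ins for illustration, not actual poclbm or DiabloMiner functions.

Code:
# Point (1) sketched: after a valid share is found, keep scanning the
# remaining nonces instead of abandoning the getwork early.
# check_nonce() and submit_share() are hypothetical stand-ins.
def scan_getwork(work, check_nonce, submit_share):
    for nonce in range(2 ** 32):   # the full nonce space of one getwork
        share = check_nonce(work, nonce)
        if share is not None:
            submit_share(share)
            # no early return: the rest of the range is still scanned,
            # so no later answer in this getwork is skipped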
FairUser (OP)
Sr. Member
Offline
Activity: 1344
Merit: 264
February 01, 2011, 03:12:15 AM
Say I get 50 million hashes a second on my GPU. 2^32 / 50,000,000 = 86 seconds to process an entire keyspace. If my askrate is set to 5 seconds, I'm only checking 5.82% of each keyspace before moving on and assuming the getwork holds no answers.
You're potentially ignoring 94.18% of possible answers. The numbers obviously vary with the speed of the GPU, but for a 5-second askrate to cover the keyspace of a single getwork you would need 859 MHash/s, and even then, the way m0mchil's code is written, once it finds the first answer it moves on to the next getwork anyway. This is flawed.
Exactly. The slower the GPU and the lower the askrate, the worse your efficiency will be, because more possible hashes are being ignored.
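As a back-of-the-envelope check of those percentages in Python (the 50 MHash/s and 859 MHash/s figures are the hypothetical ones from the posts above):

Code:
# Fraction of one getwork's 2**32 nonce space a miner scans before the
# askrate timer fires and it requests new work.
NONCE_SPACE = 2 ** 32

def coverage(hashrate, askrate_seconds):
    return min(1.0, hashrate * askrate_seconds / NONCE_SPACE)

print(coverage(50_000_000, 5))    # ~0.0582 -> only 5.82% of the keyspace
print(coverage(859_000_000, 5))   # ~1.0    -> the full keyspace in 5 s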
slush
Legendary
Offline
Activity: 1386
Merit: 1097
February 01, 2011, 03:22:08 AM
Already working on a mod to check my local bitcoind between "works" (32 of them) in the Python code for the current block.
Yes, this will work. More blocks == more pay for everyone
Irrelevant to this discussion. You are skipping some nonces, but you are crunching other nonces instead. No block is lost.
In your server stats, I want you to list: 1) The number of getwork requests for the CURRENT round
As the pool hashrate is roughly constant within one round, you can compute getwork/s × round time to get this.
2) The number of submitted hashes (both ACCEPTED and INVALID/STALE, listed separately) for the CURRENT round.
I don't calculate it now, because I simplified the code with the last update, but I have those numbers for ~5 million shares. Stale shares were somewhere around 2%.
If you wanted to increase the accuracy of this, separate the INVALID/STALE hashes based on the reason they were rejected, i.e. (WRONG BLOCK) or (INVALID/ALREADY SUBMITTED). Then take (# of getworks this round) / (# of accepted + invalid(already submitted)) * 100 and publish that number in real time. That's how you check the efficiency of the pool's ability to search all hashes for each getwork sent out. This will show whether you really get that 1:1 ratio of getworks to solved hashes.
I have those numbers, but I'm not interested in making a fancy GUI to provide this. I can publish a database dump if you're interested. I'm not interested because the pool does not earn bitcoins on getwork/share efficiency. Going for a 1:1 ratio is mostly irrelevant; it's only a game with numbers. I think you still don't understand this. What about slow CPU miners, which take 2-3 minutes to crunch the whole space? Should they crunch the whole nonce range just to show a fancy 1:1 ratio on the pool page? Of course, efficient network transfer is nice, and you can buy a stronger GPU and your getwork/submit efficiency will be higher. But that is not the point. The point is to consistently crunch valid blocks. That's all. Btw, I think we're slightly off-topic here.
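For concreteness, the metric FairUser is proposing looks roughly like this sketch; the counter names are made up for illustration, not slush's actual database fields:

Code:
# Getwork efficiency as proposed above: getworks handed out per
# submitted hash, as a percentage. Near 100% would mean roughly one
# submitted hash per getwork issued.
def getwork_efficiency(getworks_sent, accepted, already_submitted):
    submitted = accepted + already_submitted
    return 100.0 * getworks_sent / submitted

print(getwork_efficiency(getworks_sent=10_000,
                         accepted=9_500,
                         already_submitted=300))   # ~102%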
FairUser (OP)
Sr. Member
Offline
Activity: 1344
Merit: 264
February 01, 2011, 03:38:29 AM
I have those numbers, but I'm not interested in making a fancy GUI to provide this. I can publish a database dump if you're interested.
I would love to do some stats on a DB dump. PM me the link (or post it). Thank you.
Btw, I think we're slightly off-topic here.
Only slightly.
theymos
Administrator
Legendary
Offline
Activity: 5376
Merit: 13420
February 01, 2011, 03:57:14 AM
SHA-256 returns a random number that is impossible to know before actually doing the work. Since the number returned is random, doing the hashes for one work gives you the exact same chance of solving a block as doing the hashes for another work.
It's like having two boxes full of raffle tickets. Each contains 2 winning tickets. If you find a winning ticket in one box, it doesn't help you (or hurt you) to continue drawing from that box. Nor is it more "efficient" in any way.
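A toy way to see the "random number" point in Python: count hashes under an arbitrary 1-in-100 target over two disjoint nonce ranges. This is not real block hashing (mining double-SHA-256es an 80-byte header); it only shows that no input region is luckier than another.

Code:
import hashlib

TARGET = 2 ** 256 // 100   # toy target: ~1% of all hashes fall below it

def wins(start, count):
    found = 0
    for nonce in range(start, start + count):
        digest = hashlib.sha256(nonce.to_bytes(8, "little")).digest()
        if int.from_bytes(digest, "big") < TARGET:
            found += 1
    return found

print(wins(0, 100_000))          # ~1000 winners in a "low" nonce range
print(wins(3_000_000, 100_000))  # ~1000 winners in a "high" range too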
geebus
February 01, 2011, 04:14:29 AM
I'm not interested because the pool does not earn bitcoins on getwork/share efficiency. Going for a 1:1 ratio is mostly irrelevant; it's only a game with numbers. I think you still don't understand this. What about slow CPU miners, which take 2-3 minutes to crunch the whole space?
I think the question at hand is: WHY does it not affect the pool's likelihood of finding the answer to the block when all of these potential answers are ignored? I understand that the ignored hashes are "compensated" for to a degree by processing more getworks, which makes a user see that their share count is still going up, and that (number of shares a user submitted for the round) / (total # of shares in the round) determines their payout... for the round. What I'm asking is: are we less likely to find the answer for THE BLOCK by not submitting these hashes? ...and I think we're entirely on topic, considering we're discussing how m0mchil's miner is functioning, in the thread for m0mchil's miner.
SHA-256 returns a random number that is impossible to know before actually doing the work. Since the number returned is random, doing the hashes for one work gives you the exact same chance of solving a block as doing the hashes for another work.
It's like having two boxes full of raffle tickets. Each contains 2 winning tickets. If you find a winning ticket in one box, it doesn't help you (or hurt you) to continue drawing from that box. Nor is it more "efficient" in any way.
If it takes you the same amount of time to draw 15 tickets from 1 box as it does to draw 1 ticket each from 15 boxes, you still have the same 15 tickets, but your chances of having a winning ticket from 1 box are higher if you hold 15 tickets from that box. Let's say there are 100 tickets in a box, and 2 are winners. You have a 2% chance that the ticket you draw from that box will be a winner. If you draw 15 tickets from that box, you have a 15% chance. Now, if you have 15 boxes, each with 100 tickets and 2 winners, and you draw 1 ticket from each, you still only have a 2% chance that any one ticket will be a winner. If it takes the same amount of time either way, would you rather have a 2% chance, or a 15% chance?
FairUser (OP)
Sr. Member
Offline
Activity: 1344
Merit: 264
February 01, 2011, 04:17:41 AM
SHA-256 returns a random number that is impossible to know before actually doing the work. Since the number returned is random, doing the hashes for one work gives you the exact same chance of solving a block as doing the hashes for another work.
It's like having two boxes full of raffle tickets. Each contains 2 winning tickets. If you find a winning ticket in one box, it doesn't help you (or hurt you) to continue drawing from that box. Nor is it more "efficient" in any way.
So you have 4 winning tickets, and these tickets make you eligible to win the grand prize of 50 bitcoins, but only 1 of the 4 is the grand-prize winner. If I choose to quit looking in box 1 after finding just 1 ticket, and do the same for box 2, I only find half the tickets. The grand-prize ticket might have been left behind in the boxes, though I might equally get lucky and win the grand prize anyway. I don't know about you, but I'd like to find all 4 tickets, not half of them.
theymos
Administrator
Legendary
Offline
Activity: 5376
Merit: 13420
February 01, 2011, 04:42:53 AM
My metaphor was flawed due to my attempt at simplicity, so I will expand it:
There are an endless number of boxes. Each contains 100 tickets. The entire endless set of boxes as a whole has 1 winning ticket for every 99 non-winning tickets, though a single box may contain zero, one, or more winning tickets. The chance of drawing a winning ticket from any box is therefore 1 in 100. It does not matter whether you draw continuously from one box or draw only one ticket from each box: the odds are still 1 in 100.
Completing an entire work is like emptying each box in order. Getting new work after finding something is like moving to the next box after finding a winning ticket. You could also move to the next box every few minutes. There is no concept of efficiency, however, as the chance is always 1 in 100.
Likewise, there are an endless number of works. The entire set as a whole has an exact chance per hash. It doesn't matter how many hashes you do per work: the chance is always the pre-set amount.
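A minimal Monte Carlo sketch of the metaphor, assuming each ticket is independently a winner with probability 1/100 (so a box of 100 tickets may hold zero, one, or several winners):

Code:
import random

random.seed(1)
P_WIN, TOTAL_DRAWS, BOX_SIZE = 0.01, 1_000_000, 100

def ticket():
    return random.random() < P_WIN

def empty_whole_boxes():
    wins = drawn = 0
    while drawn < TOTAL_DRAWS:
        for _ in range(BOX_SIZE):      # drain each box completely
            wins += ticket()
            drawn += 1
    return wins

def one_ticket_per_box():
    # hop to a fresh box after every single draw
    return sum(ticket() for _ in range(TOTAL_DRAWS))

print(empty_whole_boxes())    # ~10,000 winners
print(one_ticket_per_box())   # ~10,000 winners: same rate either way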
geebus
February 01, 2011, 04:53:59 AM (Last edit: February 01, 2011, 05:08:57 AM by geebus)
My metaphor was flawed due to my attempt at simplicity, so I will expand it:
There are an endless number of boxes. Each contains 100 tickets. The entire endless set of boxes as a whole has 1 winning ticket for every 99 non-winning tickets, though a single box may contain zero, one, or more winning tickets. The chance of drawing a winning ticket from any box is therefore 1 in 100. It does not matter whether you draw continuously from one box or draw only one ticket from each box: the odds are still 1 in 100.
Completing an entire work is like emptying each box in order. Getting new work after finding something is like moving to the next box after finding a winning ticket. You could also move to the next box every few minutes. There is no concept of efficiency, however, as the chance is always 1 in 100.
Likewise, there are an endless number of works. The entire set as a whole has an exact chance per hash. It doesn't matter how many hashes you do per work: the chance is always the pre-set amount.
This assumes there is only one answer per getwork. In some instances there are none, whereas in others there are multiple (for the sake of explanation we'll say there could be 5). Also, you have a fixed number of boxes, since there are only a fixed number of 2^32 chunks in a whole block, so I'll ignore your "endless number" logic. So let's say our "fixed number" is 100 boxes, and we'll assume there are a total of 100 winning tickets (shares), of which only 1 is the grand prize (the block). We'll also assume that 50 of the boxes are empty. You take 1 ticket from each box and you have a total of 50 winning tickets, with each of those winning tickets having a 0.5% chance of winning the grand prize, and a 50% chance that you never drew the grand-prize ticket at all.
- OR -
You could grab EVERY winning ticket from every box, and have a 1% chance that each winning ticket could be the grand-prize winner, and a 100% chance that at least one of your tickets is the grand-prize winner.
Note: this also doesn't take into consideration that one of the first boxes you drew from could contain the winning ticket without it being the first one drawn. In those instances, one of the ignored tickets (the second or later ticket in the box) could have been the grand-prize winner. Using FairUser's fork of m0mchil's miner, the pool would get the block. Using m0mchil's miner, the pool would not.
theymos
Administrator
Legendary
Offline
Activity: 5376
Merit: 13420
February 01, 2011, 05:05:16 AM
This assumes there is only one answer per getwork. In some instances there are none, whereas in others there are multiple (for the sake of explanation we'll say there could be 5).
The boxes are getwork responses. I said: "a single box may contain zero, one, or more winning tickets."
Also, you have a fixed number of boxes, since there are only a fixed number of 2^32 chunks in a whole block, so I'll ignore your "endless number" logic.
There are an unlimited number of unique getwork responses (nearly).
geebus
February 01, 2011, 05:16:50 AM
There are an unlimited number of unique getwork responses (nearly).
No, there are a fixed number. If every getwork request is unique, and each gives out a portion of the total block as a 2^32 keyspace, it's a large number (4,294,967,296, to be exact), but not "unlimited". At a speed of 24 billion hashes/s (roughly the current speed of the pool), it would take ~45 minutes to iterate through an entire block, assuming the answer was not found until the last share of the last getwork. In either case, over the same 45-minute period, each miner working in the pool would see roughly the same number of shares contributed using either miner, so your end payout per round is not affected. The frequency at which rounds are solved, by actually submitting all of the possible answers instead of ignoring a large percentage of them, stands to be improved by iterating through each getwork in its entirety instead of using a hit-and-run method.
m0mchil
February 01, 2011, 05:44:29 AM (Last edit: February 01, 2011, 06:00:53 AM by m0mchil)
May I kindly ask that this discussion be moved to another thread, please? I had a private discussion with geebus already. I will try to explain one more time here.
So you have 4 winning tickets, and these tickets make you eligible to win the grand prize of 50 bitcoins, but only 1 of the 4 is the grand-prize winner. If I choose to quit looking in box 1 after finding just 1 ticket, and do the same for box 2, I only find half the tickets. The grand-prize ticket might have been left behind in the boxes, though I might equally get lucky and win the grand prize anyway.
I don't know about you, but I'd like to find all 4 tickets, not half of them.
With bitcoin you have many more "boxes" than "tickets". To be exact, the boxes and small-prize tickets (pool shares) number 2^224. Grand-prize tickets currently number ~2^209. Only one in 2^15 boxes contains a grand-prize ticket. Deciding to begin another box is a probabilistic win - see the Monty Hall problem. OK, not a win, but every box is equal - checking 100 tickets from a new box completely compensates for not checking 100 tickets from the previous box.
@FairUser - poclbm makes some assumptions which are counter-intuitive at first glance. Because it pulls jobs, there is an assumption that a job should live at most N seconds, because otherwise you risk working on an already-solved block. The probability of this is roughly N/600, but in practice always worse, because the network is growing. And because no single GPU is capable of exhausting the 2^32 nonces in 5 (or even 10) seconds, poclbm does not check for nonce overflow.
Again, I kindly ask to open another thread for discussions not related to poclbm.
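m0mchil's two rules of thumb, in numbers (the 300 MHash/s figure below is a hypothetical fast GPU of the time, not a measured rate):

Code:
# Blocks arrive on average every 600 seconds, so work held for N seconds
# is stale with probability roughly N/600; and one getwork holds 2**32
# nonces, so exhausting it takes 2**32 / hashrate seconds.
NONCE_SPACE = 2 ** 32
BLOCK_INTERVAL = 600.0   # average seconds between blocks

def stale_probability(askrate_seconds):
    return askrate_seconds / BLOCK_INTERVAL

def seconds_to_exhaust(hashrate):
    return NONCE_SPACE / hashrate

print(stale_probability(5))       # ~0.008 -> ~0.8% of work goes stale
print(seconds_to_exhaust(300e6))  # ~14.3 s -> 2**32 nonces outlive a
                                  # 5- or even 10-second askrate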
theymos
Administrator
Legendary
Offline
Activity: 5376
Merit: 13420
February 01, 2011, 06:14:48 AM
I split this topic from python OpenCL bitcoin miner because, as m0mchil pointed out, the posts were largely unrelated to poclbm. Is this title OK?
geebus
February 01, 2011, 06:22:59 AM
I split this topic from python OpenCL bitcoin miner because, as m0mchil pointed out, the posts were largely unrelated to poclbm. Is this title OK?
Again, I kindly ask to open another thread for discussions not related to poclbm.
Aside from the few times that direct questions were posed to slush, everything in this conversation has been directly related to the functionality of poclbm. Likewise, the topic title is actually pretty much as far off as you can get. We're not discussing duplicate work. At all. We're discussing how poclbm is skipping work. It would be better to name the topic "The inefficiencies of poclbm".
With bitcoin you have many more "boxes" than "tickets". To be exact, the boxes and small-prize tickets (pool shares) number 2^224. Grand-prize tickets currently number ~2^209. Only one in 2^15 boxes contains a grand-prize ticket. Deciding to begin another box is a probabilistic win - see the Monty Hall problem. OK, not a win, but every box is equal - checking 100 tickets from a new box completely compensates for not checking 100 tickets from the previous box.
The Monty Hall problem is not exactly what is going on here, though. You're saying that moving on to the next getwork gives you a statistically higher chance of solving the block by choosing a different answer, but the core basis of that argument does not hold. You can't really say "I have a higher chance of winning if I change my decision" if you're not looking at all the choices. To use the Monty Hall problem as an example: this would be as if you were presented with 3 doors, you chose the first door, and before the host could open a different door (thus changing your odds in favor of switching), you were presented with a completely new set of doors. Rinse and repeat, over and over. Yes, you may get "lucky" and pick the correct door the first time, but in this instance you have the option of opening all three doors, taking whatever prizes they may contain, and then moving on to the next set. Using that as a basis, I could just as easily say ((Total # of 2^32 keyspaces) / (collective pool hashrate)) / (number of active workers) = N, and then hash an entire block myself, processing only every Nth nonce, and be just as effective as the collective pool is when it skips to the next getwork on each found hash.
FairUser (OP)
Sr. Member
Offline
Activity: 1344
Merit: 264
February 01, 2011, 06:31:01 AM
I split this topic from python OpenCL bitcoin miner because, as m0mchil pointed out, the posts were largely unrelated to poclbm. Is this title OK?
Perhaps "How Python OpenCL (poclbm) is mining inefficiently"
Syke
Legendary
Offline
Activity: 3878
Merit: 1193
February 01, 2011, 06:32:35 AM
This is the key point right here: every box is equal - checking 100 tickets from a new box completely compensates for not checking 100 tickets from the previous box.
Because hashes are essentially random, it doesn't matter if you switch before completing a work unit.
geebus
February 01, 2011, 06:40:46 AM
Not every getwork is equal. Not every hash is equal. It's not random. There is a single answer per block. If that answer is never provided because you skipped it, it doesn't fucking matter how many wrong answers you provide to try and make up for it. They will still be wrong.
FairUser (OP)
Sr. Member
Offline
Activity: 1344
Merit: 264
February 01, 2011, 06:43:22 AM
This is the key point right here: every box is equal - checking 100 tickets from a new box completely compensates for not checking 100 tickets from the previous box.
Because hashes are essentially random, it doesn't matter if you switch before completing a work unit.
Yes, it does. If you don't complete your work, you *might* be skipping the correct answer/hash/nonce for the block. If every getwork is only issued once, and the nonce/answer is skipped because your miner quit looking after finding only 1 answer (when more than 1 is possible), then YOU COULD HAVE SKIPPED THE ANSWER FOR THE BLOCK.
OneFixt
Member
Offline
Activity: 84
Merit: 11
February 01, 2011, 07:53:50 AM (Last edit: February 01, 2011, 10:13:41 AM by OneFixt)
Not every getwork is equal. Not every hash is equal. It's not random. There is a single answer per block. If that answer is never provided because you skipped it, it doesn't fucking matter how many wrong answers you provide to try and make up for it. They will still be wrong.
The entire search space for a single block is 2^256 hashes. There are 2^224 answers per block when difficulty is 1. At the present difficulty of 22012.4941572, there are 2^224 / 22012.4941572 valid answers. That's 1.224 × 10^63 valid answers, out of 1.157 × 10^77 total hashes.
The pool can process about 30,000,000,000 hashes per second. That's 3.0 × 10^10 hashes per second. It would take 3.859 × 10^66 seconds for the pool to search through every possible hash in a single block. That's 6.432 × 10^64 minutes. That's 1.072 × 10^63 hours. That's 4.467 × 10^61 days. That's 1.223 × 10^59 years. That's 8.901 × 10^48 times the age of the universe.
We are searching through a possible 115 quattuorvigintillion, 792 trevigintillion, 89 duovigintillion, 237 unvigintillion, 316 vigintillion, 195 novemdecillion, 423 octodecillion, 570 septendecillion, 985 sexdecillion, 8 quindecillion, 687 quattuordecillion, 907 tredecillion, 853 duodecillion, 269 undecillion, 984 decillion, 665 nonillion, 640 octillion, 564 septillion, 39 sextillion, 457 quintillion, 584 quadrillion, 7 trillion, 913 billion, 129 million, 639 thousand and 936 hashes for every block. Trying to find, at the current difficulty, one of 1 vigintillion, 224 novemdecillion, 756 octodecillion, 562 septendecillion, 96 sexdecillion, 912 quindecillion, 245 quattuordecillion, 974 tredecillion, 145 duodecillion, 865 undecillion, 520 decillion, 4 nonillion, 272 octillion, 488 septillion, 786 sextillion, 20 quintillion, 128 quadrillion, 241 trillion, 774 billion, 877 million, 474 thousand and 816 valid answers.
You can skip a valid answer by skipping an entire getwork 22012 times. This will leave you with a possible 1.224 × 10^63 - 1 valid answers. That's 1.224 × 10^39 times more valid answers than there are stars in the universe.
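The arithmetic above, reproduced in Python with the hashrate and difficulty quoted in the post:

Code:
SEARCH_SPACE = 2 ** 256
SHARE_ANSWERS = 2 ** 224             # hashes below the difficulty-1 target
DIFFICULTY = 22012.4941572
POOL_RATE = 3.0e10                   # hashes per second

valid_answers = SHARE_ANSWERS / DIFFICULTY
print(f"{valid_answers:.3e} valid answers")        # ~1.224e+63

seconds = SEARCH_SPACE / POOL_RATE
years = seconds / (3600 * 24 * 365.25)
print(f"{seconds:.3e} s = {years:.3e} years")      # ~3.859e+66 s

AGE_OF_UNIVERSE_YEARS = 1.37e10      # rough estimate
print(f"{years / AGE_OF_UNIVERSE_YEARS:.3e} universe-ages")  # ~8.9e+48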
FairUser (OP)
Sr. Member
Offline
Activity: 1344
Merit: 264
February 01, 2011, 08:00:34 AM
Not every getwork is equal. Not every hash is equal. It's not random. There is a single answer per block. If that answer is never provided because you skipped it, it doesn't fucking matter how many wrong answers you provide to try and make up for it. They will still be wrong.
The entire search space for a single block is 2^256 hashes. [...] You can skip an answer by going on to the next work. This will leave you with a possible 1.224 × 10^63 - 1 valid answers. That's 1.224 × 10^39 times more valid answers than there are stars in the universe.
Only 1 answer gets the block. So when you say "You can skip an answer by going on to the next work", are you saying that the likelihood of that skipped getwork/answer being *the answer* is sooooooooo small that it's just not worth it to find it, and we should just move on to the next getwork?