Bitcoin Forum
  Show Posts
3061  Bitcoin / Pools / Re: Cooperative mining (>10000Mhash/s, join us!) on: February 01, 2011, 03:59:38 PM
What about the fees "earned" by a found block? Are these collected solely by the coordinator of the mining server, or are these sent to the miners, split according to their shares for this block?

http://bitcointalk.org/index.php?topic=1976.msg42122#msg42122
3062  Bitcoin / Bitcoin Discussion / MtGox account compromised on: February 01, 2011, 12:28:50 PM
Facebook is very, very unlikely to be hacked in the same way that MtGox or PlentyOfFish is hacked.

I'm not paranoid, but don't trust anyone's security just because they're a big player. Facebook logins can be hacked, too. Personally I also use Facebook login on some pages, but I'd think twice before using it for my bank account login (which MtGox effectively is)...
3063  Bitcoin / Mining software (miners) / Re: RPC Miners (CPU/4way/CUDA/OpenCL) on: February 01, 2011, 12:23:15 PM
Is it possible to use Tor with the miner?

It is possible, but network latency becomes a big problem then. It will slow down your miner a lot.
3064  Bitcoin / Mining / Re: Mining inefficiency due to discarded work on: February 01, 2011, 08:51:13 AM
only 1 answer was the correct answer to find the block, and that the getwork:solved ratio was always 1:1.

Man, everybody is telling you that finding a share within a getwork is random. There is no such thing as 'exactly one share hidden in every getwork'. There is one share per getwork ON AVERAGE, because difficulty 1 means one solution per 2^32 attempts on average, and 2^32 is the size of the nonce space which miners iterate. Nothing more.

And no, the title of this topic should not be 'how poclbm is skipping blocks', because that isn't true.
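The "one share per getwork on average" point can be made concrete with a quick sketch (plain Python, standard library only): at difficulty 1, each of the 2^32 nonces is a share with probability 1/2^32, so a fully scanned getwork contains one share on average, yet roughly 37% of getworks contain no share at all.

```python
import math

# At difficulty 1, each of the 2^32 nonces in a getwork is an
# (approximately independent) share with probability p = 1 / 2^32.
p = 1 / 2**32
n = 2**32

# Expected shares per fully scanned getwork: exactly 1 on average.
expected = n * p
print(expected)  # 1.0

# Probability that a full getwork contains NO share at all:
# (1 - p)^n, which converges to 1/e for large n.
p_empty = (1 - p) ** n
print(p_empty)       # ~0.3679, i.e. about 37% of getworks are empty
print(1 / math.e)    # the limiting value, ~0.3679
```

So "no share in this getwork" is the single most likely outcome, and it says nothing about the miner or the pool being broken.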
3065  Bitcoin / Mining / Re: Mining inefficiency due to discarded work on: February 01, 2011, 03:22:08 AM
Already working on a mod to check my local bitcoind between "work" (32 of them) in the python code for the current block. 

Yes, this will work.

Quote
More blocks == more pay for everyone

Irrelevant in this discussion. You are skipping some nonces, but you are crunching other nonces instead. No block is lost.

Quote
In your server stats, I want you to list:
1) The number of get requests for the CURRENT round

As the pool hashrate is roughly constant within one round, you can compute getworks per second times the round time to get this.

Quote
2) The number of submitted hashes (both ACCEPTED and INVALID/STALE listed separately) for the CURRENT round.

I don't calculate it now, because I simplified the code with the last update, but I have those numbers for ~5 million shares. Stale shares were around 2%.

Quote
If you wanted to increase the accuracy of this, separate the INVALID/STALE hashes based on the reason they were rejected, ie (WRONG BLOCK) or (INVALID/ALREADY SUBMITTED).
Then take (# of getwork this round)/(# of accepted/invalid(already submitted))*100 and publish that number in real time.
That's how you check the efficiency of the pool's ability to search all hashes for each getwork sent out. 
This will show if you are really get that 1:1 ratio of getwork/solved hashes.

I have those numbers, but I'm not interested in making a fancy GUI to present them. I can publish a database dump if you're interested.

I'm not interested because the pool does not earn bitcoins through getwork/share efficiency. Getting to a 1:1 ratio is mostly irrelevant; it's only playing with numbers. I think you still don't understand this :-). What about slow CPU miners, which take 2-3 minutes to crunch the whole space? Should they crunch the whole nonce range just to have a fancy 1:1 ratio on the pool page?

Of course, efficiency of network transfers is nice. You can buy a stronger GPU and your getwork/submit efficiency will be higher. But that is not the point. The point is to consistently crunch valid blocks. That's all.
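The argument above, that only total hashing matters and not which nonces get scanned, can be illustrated with a toy simulation (the probability, nonce-space size, and hash budget below are made-up toy numbers, not real mining parameters):

```python
import random

random.seed(42)
p = 1 / 1000          # toy share probability (real mining: 1 / 2**32)
space = 4000          # toy nonce space per getwork
total_hashes = 2_000_000

def shares_found(scan_fraction):
    """Hash until the budget is spent, scanning only the first
    scan_fraction of each getwork's nonce space before moving on."""
    scanned_per_getwork = int(space * scan_fraction)
    found = hashes = 0
    while hashes < total_hashes:
        for _nonce in range(scanned_per_getwork):
            if hashes == total_hashes:
                break
            if random.random() < p:   # each hash is an independent trial
                found += 1
            hashes += 1
    return found

full = shares_found(1.0)      # scans the whole nonce space of each getwork
partial = shares_found(0.25)  # "skips" 75% of every getwork
print(full, partial)          # both land close to total_hashes * p = 2000
```

Both miners do the same number of hashes and find the same number of shares to within statistical noise; abandoning a getwork early costs nothing except extra getwork requests.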

Btw, I think we're getting slightly off-topic here.
3066  Bitcoin / Mining / Re: Mining inefficiency due to discarded work on: February 01, 2011, 02:55:28 AM
I wouldn't call it "more" hashing overhead, since it's the same number of kHash/s regardless of *what getwork* it's on.  My kHash/s doesn't change just because I'm on a different getwork.

You can call it whatever you like, but with a long getwork period you are hashing garbage for a significant percentage of the time :-).

Quote
You can't ever expect to see (or find) the entire puzzle (the block) when you are choosing to ignore any part (the skipped hashes in a getwork) of that puzzle.

Well, a getwork is not a puzzle. It is a random search where you hit a valid share from time to time. Nonces are just numbers; it's irrelevant whether you try hashing 0xaaaa or 0xffff. The probability that you hit a valid share is still the same.

Quote
1) Not ignoring nonces of the getwork when a hash is found

Well, this is the only point that makes sense. Diablo already implemented this, and if it isn't in m0mchil's miner, it would be nice to implement it there too. But it's m0mchil's decision, not ours.

Also, sorry for some impatient responses, but I answer these questions from pool users almost every day and it becomes a little boring ;-). It isn't anything personal. To be honest, it wasn't long ago that I had very similar questions to the ones you have right now. But thanks to m0mchil, Diablo and a few other people on IRC, I now know how wrong I was ;-).
3067  Bitcoin / Mining / Re: Mining inefficiency due to discarded work on: February 01, 2011, 02:44:52 AM
Quote
This sample shows that 3 answers were accepted, 1 invalid, then 1 more accepted.  Notice the Invalid answer is the same as the 1st accepted answer.
31/01/2011 06:46:34, Getting new work.. [GW:291]
31/01/2011 06:46:38, 41b15022, accepted at 6.25% of getwork()
31/01/2011 06:46:53, c597d2b5, accepted at 31.25% of getwork()
31/01/2011 06:46:59, 9babdadf, accepted at 50.0% of getwork()
31/01/2011 06:47:07, 41b15022, invalid or stale at 75.0% of getwork()
31/01/2011 06:47:11, 1ba08127, accepted at 87.5% of getwork()

Did you notice that it is the same nonce? It is absolutely fine that the second attempt was rejected. As I wrote before, I don't know *why* there was a second attempt. Afaik, nonces should run in order from 0 to ffffffff, but maybe there is some simple explanation behind the mixing of nonces (multiple solving threads or whatever). Maybe it was only a re-upload after a lost connection. Maybe m0mchil will respond to this; I don't know the miner internals.

Quote
why was the 5th hash accepted?

Because, as I already wrote, a share can be rejected for several reasons. The reason for this rejection is that the miner uploaded the same nonce twice. It is not related to any bug in getwork, to skipping some nonce ranges, or to any other weird stuff you are arguing about here.

Quote
Likewise, can you explain to me exactly how ignoring a potentially large amount of hashes that could be the answer to the block doesn't effect the pool solving the block?

Because nothing bad happens when you skip some nonce range. In fact, you are always skipping zillions of existing nonces. How can you live with that? :-)

Quote
I think FairUser has shown quite plainly that multiple valid answers can be found within the same getwork.

And I said it is a nice, but not necessary, optimization of the miner. It only reduces network latency, because the miner asks less often; it does not improve the probability that a share will be found.
3068  Bitcoin / Mining / Re: Mining inefficiency due to discarded work on: February 01, 2011, 02:22:52 AM
If 4 getworks are requested without having valid answers to submit back, and then on the 5th, it finds one answer and submits it back, then moves on without checking the remaining keyspace for more answers, you have a 20% efficiency.

Maybe I'm wrong, but you are not paid for a higher getwork/submit efficiency; you are paid for finding valid blocks. So you are optimizing the wrong thing ;-). Maybe you can get a 100% getwork/submit ratio, but you would be crunching old jobs. But it is your choice and your hashing power.
3069  Bitcoin / Mining / Re: Mining inefficiency due to discarded work on: February 01, 2011, 02:10:55 AM
This brings up the question; If some getwork() simply do not have answers, is this due to a 2^32 keyspace not being an actual equal portion of the block, or is this due to overlapping 2^32 getworks?

There is no reason why there should be a valid share in every getwork.

Quote
Do we share an overlap of 2^16 (arbitrary figure for the sake of example) in our respective keyspaces?

There is no overlap; every getwork is unique. Read more about how getwork() works, especially the extranonce part.
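To illustrate the extranonce point, here is a heavily simplified sketch (the "header" layout below is a toy stand-in, not the real 80-byte Bitcoin header): the pool varies an extranonce inside the coinbase transaction of each getwork, which changes the merkle root, so two getworks never hash the same data even for identical nonce values.

```python
import hashlib

def dsha256(data: bytes) -> bytes:
    """Bitcoin's double SHA-256."""
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def toy_header(merkle_root: bytes, nonce: int) -> bytes:
    # Hugely simplified "header": a real header is 80 bytes with
    # version, prev-block hash, merkle root, time, bits and nonce.
    return merkle_root + nonce.to_bytes(4, "little")

# The pool puts a different extranonce into the coinbase transaction
# of each getwork, which changes the merkle root:
root_a = dsha256(b"coinbase-tx|extranonce=1")
root_b = dsha256(b"coinbase-tx|extranonce=2")

# Same nonce, different getwork => completely different header hashes,
# so the two 2^32 nonce spaces cannot overlap.
h1 = dsha256(toy_header(root_a, 12345))
h2 = dsha256(toy_header(root_b, 12345))
print(h1 != h2)  # True
```

This is why "overlapping keyspaces" cannot explain rejected shares: every worker's getwork describes a distinct block candidate.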

Quote
Meaning, am I getting invalid or stale because there are multiple people working on the same exact portions of keyspace? If so, isn't that an issue with the getwork patch?

No. It may be because another bitcoin block was found in the meantime, between getwork() and submit. A share from the old block cannot be a candidate for the new bitcoin block. Read my last posts in the pool thread. By the way, this is not pool/m0mchil-miner related; it is how bitcoin works.

Quote
I've also asked m0mchill about the askrate, and it seems his answer to why the client fixes the askrate is basically a "fuck it if it doesn't find it quick enough". Although, he speaks more eloquently than that.

And he was right. The longer the ask rate, the more hashing overhead.

Quote
He has also stated that, yes, we are ignoring large portions of the keyspace because we submit the first hash and ignore anything else in the keyspace, whether it's found in the first 10% of the keyspace, or the last 10% of the keyspace. He believes that this is trivial though, since you are moving on to another keyspace quick enough.

By skipping some nonce space, you don't cut your probability of finding a valid share/block. The probability of finding a share/block is the same when crunching any nonce.

Quote
So, we're not searching the entire keyspace in a provided getwork. What if one of those possible answers you're ignoring is the answer to the block? You just fucked the pool out of a block.

Definitely not. It is only a nice optimization for pool mining to continue hashing the current job; it saves some network round trips, but it basically does not affect the pool's success rate.
3070  Bitcoin / Mining / Re: Mining inefficiency due to discarded work on: February 01, 2011, 02:01:18 AM
Notice that this getwork got 5 answers, and reported all of them to slush's server and were accepted.
29/01/2011 01:41:42, Getting new work.. [GW:29]
29/01/2011 01:41:57, 9d5069e9, accepted at 40.625% of getwork()
29/01/2011 01:41:58, 09fc7161, accepted at 43.75% of getwork()
29/01/2011 01:41:59, 27af8f3c, accepted at 46.875% of getwork()
29/01/2011 01:42:02, ad105798, accepted at 56.25% of getwork()
29/01/2011 01:42:12, 0e920ae8, accepted at 75.0% of getwork()

I see no problem here.

Quote
This sample shows that 3 answers were accepted, 1 invalid, then 1 more accepted.  Notice the Invalid answer is the same as the 1st accepted answer.
31/01/2011 06:46:34, Getting new work.. [GW:291]
31/01/2011 06:46:38, 41b15022, accepted at 6.25% of getwork()
31/01/2011 06:46:53, c597d2b5, accepted at 31.25% of getwork()
31/01/2011 06:46:59, 9babdadf, accepted at 50.0% of getwork()
31/01/2011 06:47:07, 41b15022, invalid or stale at 75.0% of getwork()
31/01/2011 06:47:11, 1ba08127, accepted at 87.5% of getwork()

I don't know why the miner uploaded the same nonce twice, but it is fine that the second attempt wasn't counted. Btw, I expect the nonce to rise from zero, so it looks weird to me that the nonces are mixed.

Quote
This sample shows 4 getworks, all without a single answer, then just a single answer for 1 getwork.
31/01/2011 06:04:21, Getting new work.. [GW:212]
31/01/2011 06:04:54, Getting new work.. [GW:213]
31/01/2011 06:05:26, Getting new work.. [GW:214]
31/01/2011 06:05:59, Getting new work.. [GW:215]
31/01/2011 06:06:32, Getting new work.. [GW:216]
31/01/2011 06:06:43, d08c8f3d, accepted at 31.25% of getwork()
31/01/2011 06:07:04, Getting new work.. [GW:217]

That's fine, isn't it?
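Assuming each fully scanned difficulty-1 getwork is empty with probability about 1/e (see the 2^32 nonce-space argument earlier in the thread), streaks of empty getworks like the one in that log are expected, not a bug. A quick calculation:

```python
import math

# Probability that a fully scanned difficulty-1 getwork contains no
# share at all is about 1/e (~0.368):
p_empty = 1 / math.e

for streak in range(1, 6):
    # chance of this many empty getworks in a row
    print(streak, round(p_empty ** streak, 4))
# A run of 4 empty getworks has probability about e^-4, roughly 1.8%.
# At hundreds of getworks per hour, such runs show up all the time,
# and a miner that abandons work early (short askrate) sees even more
# empty getworks.
```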

Quote
Changes include:
- Does not stop when first answer/hash/nonce is found.  It continues searching until all possible hashes have been searched.

I thought the miner was already doing this. At least Diablo's doesn't stop when a 'block' is found. Don't forget that this is a tweak useful only for pool mining, because when crunching real-difficulty hashes, a second nonce could not be a valid bitcoin block in any case...

Quote
- Removes the hard coded limit of 10 seconds for the askrate.  (My card does a single getwork in 30 seconds, so I just set the askrate to 3600 and forget it)

You should put it back. Otherwise you are crunching stale work for a long time (~30 of every 600 seconds, so 5% of your time is wasted). Plus, the pool now rejects shares which cannot be valid bitcoin blocks; for more info, read the pool thread. So this isn't an improvement in any case.
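The 30/600 arithmetic can be written out explicitly (a back-of-envelope sketch, not an exact model of Poisson block arrivals):

```python
# If you hold one piece of work for T seconds and new blocks arrive
# roughly every B = 600 seconds, the chance that a block lands inside
# your T-second window is about T / B, and any hashing done after that
# arrival is wasted (the share can no longer become a valid block).
def stale_fraction(work_lifetime_s: float, block_interval_s: float = 600.0) -> float:
    return work_lifetime_s / block_interval_s

print(stale_fraction(30))  # 0.05   -> ~5% of work at risk, as in the post
print(stale_fraction(5))   # ~0.008 -> the ~1% figure for a 5 s askrate
```

This is why refreshing work every few seconds matters far more than squeezing every nonce out of one getwork.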

Quote
The advantages of these modifications are:
- Raises efficiency of getwork/answers to 100%.  Right now it's between 20-30%.
- Lowers the amount of requests and bandwidth used by pool servers.

How did you calculate 20-30%? It isn't correct.

Yes, asking for getwork less often saves some resources, but it will make mining much less effective.

Personally, I don't understand why you are posting this, because I already answered all these questions in PMs...

Quote
If I'm correct in my findings, how many more blocks could the pool be finding by searching all of the 2^32 hashes per getwork??  
How many blocks have we missed because we didn't search the entire getwork??

None, or no significant amount. Learn the basics of statistics and visit the pool stats page...
3071  Bitcoin / Development & Technical Discussion / Re: [RFC] Testnet reset on: February 01, 2011, 12:58:55 AM
A simple solution I can think of :
 - If no block has been generated for 24h, 48h or whatever, divide the difficulty by 2

...which doesn't solve the current problem with testnet. Testnet difficulty is not insanely high; it is still possible to mine a block with decent hardware in an hour or so. But it is too high for effective testing of software where you need many blocks to pass all the tests.

I see your point and partially agree. Some better algorithm for lowering difficulty should be adopted for both testnet and the main net. But for the current situation, the changes Gavin proposed are fine for me.
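The quoted emergency rule could be sketched roughly as follows (a hypothetical illustration of the proposal only; the function name and the rule itself are the poster's suggestion, not actual Bitcoin or testnet code):

```python
# Hypothetical sketch of "if no block for 24h, divide difficulty by 2".
# Repeated quiet windows stack, halving the difficulty once per window.
def adjust_testnet_difficulty(current_difficulty: float,
                              seconds_since_last_block: float,
                              window: float = 24 * 3600) -> float:
    halvings = int(seconds_since_last_block // window)
    return current_difficulty / (2 ** halvings)

print(adjust_testnet_difficulty(1000.0, 3600))       # 1000.0 (no change)
print(adjust_testnet_difficulty(1000.0, 25 * 3600))  # 500.0
print(adjust_testnet_difficulty(1000.0, 49 * 3600))  # 250.0
```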
3072  Bitcoin / Pools / Re: Cooperative mining (>10000Mhash/s, join us!) on: January 31, 2011, 06:55:11 PM
Looks okay.

Yes, it looks like we had a hard time in the previous 20 hours. Hope it is behind us :-).
3073  Bitcoin / Pools / Re: Cooperative mining (>10000Mhash/s, join us!) on: January 31, 2011, 05:18:18 PM
Have you introduced some problem with the distribution of work? The probability of finding so few blocks over the last 20 hours is about 10^-5 ...

It is weird, but I don't see any trouble on the pool side. We have already had days where only a few blocks were found.
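An estimate like the quoted "about 10^-5" can be reproduced with a Poisson model (the expected block count of 20 below is a hypothetical figure for illustration only; the real value depends on the pool hashrate and the difficulty at the time):

```python
import math

# Block finds at constant hashrate and difficulty are well modeled as
# a Poisson process, so the probability of seeing at most k blocks
# when lam were expected is the Poisson CDF:
def poisson_cdf(k: int, lam: float) -> float:
    """P(X <= k) for X ~ Poisson(lam)."""
    return math.exp(-lam) * sum(lam**i / math.factorial(i) for i in range(k + 1))

# E.g. if the pool expected ~20 blocks in 20 hours but found only 4:
print(poisson_cdf(4, 20.0))  # ~1.7e-5, unlikely but not impossible
```

Numbers that small do happen occasionally over months of operation, which is consistent with "we already had days where only a few blocks were found".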
3074  Economy / Speculation / Re: Bitcoin Technical Analysis on: January 31, 2011, 09:16:37 AM
What is next?

A correction. The price is almost at 0.5, so many people are beginning to withdraw their bitcoins because they don't believe it will cross over 0.5 easily. Same situation as you highlighted at the 0.3 level...
3075  Bitcoin / Pools / Re: Cooperative mining (>10000Mhash/s, join us!) on: January 31, 2011, 09:08:03 AM
Then most CPU's are going to be starving.  And I thought the pool would be the only way to go with a CPU......I guess CPU's will get the short end of the stick on this one.

I think you didn't get the point of the last update. This change does not affect the calculation of rewards in any way, because everybody has the same probability of hitting a 'stale' share. It only affects the absolute number of shares in a round; there will be ~1% fewer shares per round (globally and also per worker).
3076  Bitcoin / Pools / Re: Cooperative mining (>10000Mhash/s, join us!) on: January 31, 2011, 07:44:49 AM
That way anyone that submits an old work from the previous block doesn't get an invalid.

The worker doesn't get an 'invalid or stale', but a share from the previous block cannot be a valid Bitcoin block, so I don't see any reason to accept it.

Quote
I understand the desire to cut off those abusing the system, which is a good thing, but you should make sure it doesn't affect those playing by the rules.

Well, I'm sure. This change affected only the counting of shares, not the validation of hashes against bitcoind. Even if I made some strange mistake in this update (which I don't expect), the pool won't miss any block, because every share is still fully checked.
3077  Bitcoin / Mining / Re: Mining inefficiency due to discarded work on: January 30, 2011, 11:50:45 PM
Is every getwork unique?  If so, how did I get the same answer twice?

Hi FairUser,

The 'invalid or stale' for unique nonces can be explained by the pool update. The pool no longer accepts shares submitted after a new bitcoin block has come in (see the pool thread for more). It now works the same way as standalone mining.

About the identical getworks - it looks like network trouble, which is more and more common. I hope it will be solved by a protocol change which significantly reduces network communication. It will be introduced this week.
3078  Bitcoin / Pools / Re: Cooperative mining (>10000Mhash/s, join us!) on: January 30, 2011, 11:43:47 PM
Today's pool update introduced a small change in the counting of shares. Only submitted shares which are valid for the current Bitcoin block are counted. So if your miner asks for a job and submits a share, it isn't counted if a new Bitcoin block arrived in the meantime, since your share can no longer be a candidate for the next Bitcoin block. This does not affect the fairness of the pool when your miners are configured correctly. Please check that your miners do not use a custom getwork timeout. The default getwork period (typically 5 seconds) is the best setting. This way, you should hit 'stale' shares with only ~1% probability.

Please note that other miner settings can also affect the time between getwork() and share submission. For example, the "-f 1" parameter in the Diablo miner raised the latency between getwork and submit significantly (from <5 s to >10 s on an ATI 5970). I solved this with "-f 5".
3079  Bitcoin / Pools / Re: Cooperative mining (>10000Mhash/s, join us!) on: January 30, 2011, 11:28:22 PM
Thanks for the explanation; is this latency an exploitable vulnerability though? just wondering.

I don't think so, but I'm not an expert on this. You can ask anybody else in #bitcoin-dev, because this is not a pool-related question.
3080  Bitcoin / Pools / Re: Cooperative mining (>10000Mhash/s, join us!) on: January 30, 2011, 09:14:49 PM
I just restarted the pool server application. All of you who changed worker passwords on the website recently, please check that your workers are working correctly. I see some "Bad password" messages in the server log. This is because I still haven't fixed reloading worker credentials from the database into the running application, so if you changed your password a few days ago, it was applied only NOW O:-).