I'm not sure how the proof of work flow would go though. If the pool only gets the address and the branch, it cannot submit the block to the public chain. Are you expecting the miner to do that alone? Or are you suggesting different paths for solved block versus share found?
This will force pool managers to rely on the miners to work on useful branches, and it will mean that if a miner has a poor connection, he may be submitting useless shares for which he will get full credit.
This is true even now - miners can fail to submit the precious one true solution for many reasons, even on purpose. And yes, I expect the miner to submit his blocks. ... And avoiding recomputing the entire Merkle tree on each 'getwork' call will eliminate the biggest CPU sucker left in 'getwork' after my changes. Feel free to pull this into your code. My patch is tested against pushpoold and seems to work fine.
|
|
|
Even if it's too late in this thread, I'd like to make some comments, partly because I feel some guilt over the notorious 'getwork' RPC call. It was made with only one purpose - to experiment with mining outside of Satoshi's code. I never imagined it would feed heavily loaded servers the way it does now. Sadly, it made pooled mining possible and at the same time allowed de-democratization of the mining process. I believe this can be fixed.

Right now, classic 'getwork' re-processes all transactions whenever there are new ones and 60 seconds have passed, or when there is a new block (1). Worse, because each worker needs its own hash space, the Merkle tree is recalculated entirely with each request (2). When the pool of unconfirmed transactions gets really large, this becomes unfeasible. The episodes with significant numbers of spam transactions proved this.

Some months ago I made https://github.com/m0mchil/bitcoin/tree/poolmode

About (1): transaction processing was moved out of the RPC thread (to main.cpp, ProcessTransactions) to make 'getwork' always return as fast as possible. For (2), UpdateMerkleTree was introduced to allow only a specific branch to be rebuilt (specifically the first one). Bitcoind was creating a new thread handle for each accepted connection (I guess it was needed as a connection timeout guard) - this was removed (see rpc.cpp, around 'boost::thread api_caller') because with pools the server is always used locally, by trusted process(es). Even while still single threaded, 'getwork' performance improved drastically.

But it is time for a new scheme. I see Gavin's monitorX patch as a good candidate. We need something like 'monitorTransactionPool' to push whenever there is a change in the set of transactions currently ready to be included in a block. Also, pools should be changed to allow miners to just prove they included the pool's coinbase in the block they solve.
This is possible by sending the transaction with the pool's address in it and the next Merkle branch. Then miners will have complete control over which transactions to include and which block chain to build on. I intend to have this implemented soon.
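To make the "prove you included the pool's coinbase" idea concrete, here is a minimal sketch of how a Merkle branch proof works. This is not code from the patch - the function names are mine, and it assumes the standard Bitcoin convention: double SHA-256 pairing, with the coinbase as transaction 0 (always the left child at every level).

```python
import hashlib

def dsha256(data: bytes) -> bytes:
    """Bitcoin's double SHA-256."""
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def merkle_root_from_branch(coinbase_hash: bytes, branch: list) -> bytes:
    """Fold the coinbase hash up the tree using the supplied branch.
    Because the coinbase is transaction 0, the running hash is always
    the left child and each branch element is the right sibling."""
    h = coinbase_hash
    for sibling in branch:
        h = dsha256(h + sibling)
    return h
```

The pool only needs to send the coinbase transaction (carrying its address) plus the branch; the miner builds the rest of the tree himself, and the pool can later verify a solved block really pays it by recomputing the root from the coinbase and branch.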
|
|
|
...and how does the client inform the server that it supports this feature?
There is a version in the user agent; the server can detect the miner and act accordingly.
|
|
|
New version is up
Changes:
- support for server-provided failover hosts
- most JSON-RPC fields made optional (to reduce pool bandwidth)
- increased default primary server retry interval to 10 getworks
- moved job processing to the main thread
- improvements by gominoa, enolan @ github - options separation, server names, quiet output
|
|
|
m0mchil: Why'd you remove the retry-on-network-error code when you merged my branch? This greatly improves yield on many pools. Also, why remove GW/Efficiency? Useful data, that! Finally, when will phatk be optional?

Why would you need to retry submission? Because you either a) have connectivity problems or b) the pool is overloaded. Either way, the probability of the result being valid goes down with time.

As for GW/Efficiency - this will soon be irrelevant because of some new protocols being developed. Even now it assumes difficulty of 1 and will show (at 400 Mh/s) an efficiency of 50% at non-'time rolling' pools and anything above 100% (even 700%) at 'time rolling' ones (Eligius). Feel free to explain to users what this is and why. Not that it is wrong, it just doesn't make my life exactly easier.

Finally, what exactly is wrong with phatk? As far as I know, it's better on everything AMD 5xxx and up (the majority of users). Nvidia users should have their own optimized miner anyway.
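The 50% figure follows from simple arithmetic. A sketch, with names of my own choosing: at difficulty 1 a share is expected once per 2^32 hashes, and one getwork hands out exactly one 2^32 nonce range, so without nTime rolling "efficiency" is just the fraction of that range you scan before asking for fresh work.

```python
def expected_efficiency(hashrate_mhs: float, seconds_per_getwork: float) -> float:
    """Expected difficulty-1 shares per getwork: one share is expected
    every 2^32 hashes, and one getwork covers one 2^32 nonce range."""
    hashes_done = hashrate_mhs * 1e6 * seconds_per_getwork
    return hashes_done / 2**32
```

Assuming a ~5 second ask rate (an assumption, not a quoted value), 400 Mh/s scans only about 2e9 of the 2^32 nonces per getwork, giving roughly 0.47 - the ~50% reading. With 'time rolling' one getwork yields many nonce ranges, so the same ratio can exceed 100%.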
|
|
|
But... PayPal seems to tolerate other currencies and exchanges. See Virwox for example. What are the differences compared to CoinPal?
|
|
|
This means you have both Nvidia and AMD drivers. Just add --platform=0 or --platform=1 to choose one of them.
|
|
|
@kindle Yes, it is on by default if the hardware supports it.
Both miners are now the same in terms of hash checking. Phoenix has far better documented and structured code. It also cares about 'efficiency', something I'm tired of explaining actually doesn't matter.
Also, Phoenix seems to have a different way of load tuning (aggression), which on my particular setup results in more laggy behavior... and perhaps slightly better performance for dedicated mining rigs, not sure.
I hope I won't attract anger by blatantly copying their BFI_INT support. I tried something like this a month ago but didn't understand that there is an "elf within elf".
|
|
|
Grinder, it would be really nice if you mentioned what your setup consists of. Thanks
|
|
|
New version is up. Changes:
- BFI_INT (~10% performance improvement)
- TCP keep-alive
|
|
|
At least on linux you can try to export GPU_USE_SYNC_OBJECTS=1 environment variable.
|
|
|
I think I've found a bug in poclbm. If I'm right, it goes like this:
The OpenCL part is supposed to calculate the first 8 bytes of the hash.

No - the kernel computes only the first 4 bytes exactly. It's confusing because there is code in BitcoinMiner.cl, BelowOrEquals(), which checks 8 bytes; this produces better assembler for some reason, at least on my setup. It can be replaced with 'if (H == 0)' (but it was slower). That's exactly why the targets are hard coded to difficulty 1 (00000000 FFFF0000).

The first 4 bytes are calculated fine, but the next ones are not, so they end up being essentially random numbers. In specific situations this could lead to the miner ignoring a valid block.
Actually you are right, but in a different way - because of this I should use a hard coded kernel target of 00000000 FFFFFFFF in order not to lose a thousandth of a percent of all valid difficulty=1 candidates. I'll do this with the next release.

... I saw the target given to OpenCL is hardcoded. I changed 0xFFFF0000 to 0x80000000. That should cause the OpenCL part to return nonces that result in hashes whose first 8 bytes are lower than 0x0000000080000000, right?

Why do you check for 'lower' with a 'greater than' operator? Where did the '0x80000000' come from? Anyway, thank you for your comments, they are always welcome.
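The two-stage check being discussed can be modeled like this. This is an illustrative sketch, not poclbm's actual code: the top 8 bytes of the hash are treated as one integer, the GPU applies a coarse filter whose low word is all ones (since only the first 4 bytes are exact), and the host re-checks against the real difficulty-1 bound.

```python
KERNEL_TARGET = 0x00000000FFFFFFFF  # widened kernel target, low word all ones
DIFF1_TOP8    = 0x00000000FFFF0000  # difficulty-1 target, top 8 bytes

def gpu_candidate(hash_top8: int) -> bool:
    """Coarse on-GPU filter: the wide target never drops the rare
    valid candidate that the tighter 0xFFFF0000 bound would reject."""
    return hash_top8 <= KERNEL_TARGET

def cpu_valid_share(hash_top8: int) -> bool:
    """Exact host-side re-check against the real difficulty-1 bound."""
    return hash_top8 <= DIFF1_TOP8
```

A hash whose top 8 bytes are 0x00000000FFFF0001 passes the GPU filter but is correctly discarded on the CPU - the widened kernel target only costs a few spurious candidates, while a too-tight one would silently lose valid shares.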
|
|
|
I'm using the following flags : -v -w128
Looking at the hardware comparison table, I should be able to pull out some more, possibly up to 350. Do you think those kinds of results are only possible on a dedicated box running Linux? I'm not sure whether to downgrade the SDK to 2.1/2.2 to test it though. Also heard people were having problems with the GUI miner on 2.1, thoughts?
You can try with a lower -f, e.g. under 20. Or you can overclock your GPU a little (core clock; memory doesn't affect performance). I don't think downgrading to 2.2 will bring much more.
|
|
|
Thanks nelisky! I'll add this in next version.
|
|
|
Version 20110325 is a good 5%-10% slower on my 2nd and 3rd card compared to version 20110311.
There is a change in the way '-f' works. Simply put, just use a lower '-f' now to achieve the same performance/desktop lag as before. 5% seems too much though - what's your setup? Is the first card faster?
|
|
|
Working much better for me now. Before this update, I had the miner restart if it threw a RPCError in getwork. Now I see a bunch of "warning: job finished, miner is idle," but it keeps chugging along. I haven't looked at the new code yet, but would some pre-fetching help?
No. You need jobs as fresh as possible. Try to figure out why it fails to fetch a new one in a reasonable time. If it's not the pool - there is an issue currently which can make getwork() extremely slow, see http://bitcointalk.org/index.php?topic=4853.0
|
|
|
Here is a temporary workaround for pool operators: http://github.com/m0mchil/bitcoin/tree/poolmode

I renamed CreateNewBlock to ProcessTransactions and call it on every new block or incoming transaction. If less than 60 seconds have elapsed since the last execution, it does nothing. CreateNewBlock then simply provides the most recent snapshot of processed transactions. The bottleneck is moved from the RPC thread to the network thread. Of course, this will be obsolete when the actual problem is fixed. As I understand it, orphan transactions now get verified (ECDSA) when their parents become available.
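The throttling pattern described above can be sketched in a few lines. This is a stand-in model with hypothetical names, not the actual C++ from the branch: the expensive processing runs at most once per interval, while block template requests just read the latest snapshot.

```python
import time

class TxSnapshot:
    """Sketch of the poolmode throttle: heavy transaction processing
    runs at most once per interval; template requests stay cheap."""

    def __init__(self, interval: float = 60.0):
        self.interval = interval
        self.last_run = float("-inf")
        self.snapshot = []

    def process_transactions(self, mempool, now=None):
        """Called on every new block or incoming transaction; does
        nothing if the interval has not yet elapsed."""
        now = time.monotonic() if now is None else now
        if now - self.last_run < self.interval:
            return  # throttled: keep serving the old snapshot
        self.last_run = now
        self.snapshot = list(mempool)  # stand-in for real selection/validation

    def create_new_block(self):
        """Cheap: return the most recent processed snapshot."""
        return self.snapshot
```

This is why the bottleneck moves off the RPC thread: 'getwork' callers only ever hit create_new_block, which never recomputes anything.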
|
|
|
New version is up. All changes are pool related.
- long polling is slightly better at preventing stales from escaping to the server
- the miner now supports 'time rolling' whenever there is an 'X-Roll-NTime' header in the HTTP response
- improved check for end of current task
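For those wondering what 'time rolling' does mechanically, here is a minimal sketch (my own illustration, not the miner's code): when the pool advertises 'X-Roll-NTime', the miner may bump the nTime field of the 80-byte block header to mint extra work from a single getwork. The header layout is version(4) + prev_block(32) + merkle_root(32) + nTime(4) + nBits(4) + nonce(4); big-endian words are assumed here, and a real miner must match the byte order its kernel expects.

```python
import struct

NTIME_OFFSET = 68  # version(4) + prev_block(32) + merkle_root(32)

def roll_ntime(header: bytes, seconds: int) -> bytes:
    """Return a copy of the 80-byte header with nTime advanced by
    'seconds', giving the kernel a fresh 2^32 nonce space to scan."""
    ntime = struct.unpack_from(">I", header, NTIME_OFFSET)[0]
    rolled = bytearray(header)
    struct.pack_into(">I", rolled, NTIME_OFFSET, ntime + seconds)
    return bytes(rolled)
```

Each rolled header hashes to a completely different value, which is exactly why 'efficiency' can exceed 100% on rolling pools: one getwork yields many nonce ranges.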
|
|
|