cdhowie (OP)
|
|
April 06, 2011, 10:26:05 PM Last edit: May 04, 2011, 11:07:03 PM by cdhowie |
|
Edit 2011-05-04: The software has been released! (View post)

----------

Hey all, I'm trying to gauge interest for this. I've been hacking for about a week and a half on a mining proxy written in PHP, using MySQL for the data store, and I've been running my own miners against it with no problems for the last week.

The basic idea is that you can run multiple miners against multiple pools (the same or different ones, it doesn't matter), and miners can fail over to other pools if something happens to their preferred pool. Additionally, using the web interface, you can manage pool assignments from any physical location; your miners won't notice that anything has changed when you switch them between pools. The information on the dashboard can also be used to help determine when a miner goes AWOL.

Here's the more detailed list of how it all works:

- Multiple pools can be defined. Pools can be globally enabled/disabled for all workers.
- Multiple workers can be defined, each with their own credentials to be used against the proxy itself.
- Each worker can associate with as many pools as you have defined, and can have its own credentials to be used with that pool. (In other words, you can have worker A and worker B both working slush's pool, but each using their own worker accounts.)
- Worker-pool associations can be individually enabled/disabled without affecting other workers, or other pools associated with the worker.
- Worker-pool associations can be ranked by priority. The highest-priority association will be used -- unless that pool is down or not responding, in which case the next-highest will be tried.
- All getwork requests are partially logged in the database, and all work submissions are logged as well. This includes which worker sent the request, which pool ultimately handled the request, and (in the case of work submissions) whether the share was accepted or not.
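To make the failover rule concrete, here is a minimal sketch of the priority-based pool selection described above. This is Python for illustration only -- the actual proxy is PHP backed by MySQL -- and the data structure, function names, and the assumption that a lower number means higher priority are all invented for the example:

```python
def pick_pool(associations, is_pool_up):
    """Pick the highest-priority enabled pool that responds.

    `associations` mimics the worker-pool assignments described above;
    its shape here is invented for the example. A lower `priority`
    number is assumed to mean a higher priority.
    """
    for assoc in sorted(associations, key=lambda a: a["priority"]):
        if assoc["enabled"] and is_pool_up(assoc["pool"]):
            return assoc["pool"]
    return None  # every pool is disabled or unreachable


# The preferred pool is down, so the next-highest association is used.
assocs = [
    {"pool": "preferred", "priority": 0, "enabled": True},
    {"pool": "fallback", "priority": 1, "enabled": True},
]
print(pick_pool(assocs, lambda pool: pool != "preferred"))  # fallback
```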
All this is manageable from a web-based control panel.

Right now the project is not terribly polished -- not well enough for a release, anyway -- but the core seems to be working great. If there is any interest in such a project, I will probably release it under the AGPL. I'm interested in the views and perspectives of my fellow miners as to whether this project would have any value to the wider community.

Mandatory screenshot:
|
Tips are always welcome and can be sent to 1CZ8QgBWZSV3nLLqRk2BD3B4qDbpWAEDCZ. Thanks to ye, we have the final piece. PGP key fingerprint: 2B7A B280 8B12 21CC 260A DF65 6FCE 505A CF83 38F5 SerajewelKS @ #bitcoin-otc
|
|
|
xenon481
|
|
April 06, 2011, 10:32:39 PM |
|
I really like the idea of being able to do failovers, but the best part is that it is a great starting place for doing a really sophisticated implementation of Raulo's Pool Hopping Exploit!
|
Tips Appreciated: 171TQ2wJg7bxj2q68VNibU75YZB22b7ZDr
|
|
|
cdhowie (OP)
|
|
April 06, 2011, 10:40:59 PM |
|
> I really like the idea of being able to do failovers, but the best part is that it is a great starting place for doing a really sophisticated implementation of Raulo's Pool Hopping Exploit!

Just to be clear, I take no liability for any forks of the project.
|
|
|
|
slush
Legendary
Offline
Activity: 1386
Merit: 1097
|
|
April 06, 2011, 10:53:18 PM |
|
You did the first metapool. It was just a matter of time, but still -- congratz!

Edit: Does this solve long polling somehow?
|
|
|
|
jgarzik
Legendary
Offline
Activity: 1596
Merit: 1099
|
|
April 06, 2011, 10:55:33 PM |
|
IMO it makes much more sense to add multi-pool support to each mining client. That way it doesn't break long-polling, and you can more easily utilize pool-specific features as they appear (such as using BDP). A meta-pool is an additional point of failure.
|
Jeff Garzik, Bloq CEO, former bitcoin core dev team; opinions are my own. Visit bloq.com / metronome.io Donations / tip jar: 1BrufViLKnSWtuWGkryPsKsxonV2NQ7Tcj
|
|
|
cdhowie (OP)
|
|
April 06, 2011, 11:49:13 PM |
|
> You did the first metapool. It was just a matter of time, but still -- congratz!

Thanks.

> Edit: Does this solve long polling somehow?

The current revision does not, but this is going to be implemented pretty soon. Obviously it will only work on pools that support LP themselves; it will just proxy the LP request.

> IMO it makes much more sense to add multi-pool support to each mining client.

I agree. But I don't have the desire to hack on every client out there to implement something like this. Further, this approach also gives me other benefits, like the ability to manage miners remotely, retarget their pool assignments without restarting them, and so on. If someone implements a consistent multi-pool interface in all the mining clients, I will probably deprecate this project. But in the meantime it fills the gap, and it also gives you some other nifty features that client-side multi-pool support alone wouldn't.

> That way it doesn't break long-polling, and you can more easily utilize pool-specific features as they appear (such as using BDP).

LP is only "broken" in that I have not yet implemented it. There's no technical reason it can't be done; I've just been focused on other aspects of the proxy. A similar BDP proxy could be implemented alongside the existing HTTP-based proxy with minimal database schema changes. Once that is done, you could theoretically run HTTP-only miners against a BDP pool efficiently, by having the HTTP proxy get work from the local BDP hub process.

> A meta-pool is an additional point of failure.

To a degree, yes. I run my proxy on my LAN, so network failure is extremely unlikely. A software bug is the only thing I can think of that would cause a problem (other than the whole box going down, at which point I lose DNS too), and with (... checking DB ...) 82,000 getwork requests processed in the last 2.5 days, my confidence in the proxy code is very high at this point.
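For what it's worth, the LP proxying discussed above is conceptually simple: hold the miner's long-poll request open while a blocking request is held open against the upstream pool's LP URL, then relay the result. A rough sketch, in Python rather than the proxy's actual PHP, with the upstream fetch stubbed out as a parameter and all names invented for illustration:

```python
import json

def proxy_longpoll(fetch_upstream, pool_lp_url, worker_auth):
    """Relay one long-poll getwork request.

    Block on the upstream pool's LP URL and hand the resulting work
    straight back to the waiting miner. `fetch_upstream` performs the
    actual blocking HTTP request; it is passed in here only so the
    sketch stays self-contained.
    """
    body = fetch_upstream(pool_lp_url, worker_auth)  # blocks until a new block
    return json.loads(body)["result"]

# Stub upstream that "responds" immediately with fake work.
fake = lambda url, auth: json.dumps({"result": {"data": "cafebabe"}})
print(proxy_longpoll(fake, "/LP", ("user", "pass")))  # {'data': 'cafebabe'}
```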
|
|
|
|
cdhowie (OP)
|
|
April 12, 2011, 06:42:43 PM |
|
Long-polling proxying is now implemented. The only remaining feature on my list is connection pooling, to take advantage of HTTP/1.1 keep-alive connections, but I'm not sure how feasible this is in PHP without using some external connection-pooling daemon. I might make a release before this feature is implemented.
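The connection pooling mentioned above boils down to keeping one HTTP/1.1 connection open per pool and reusing it across getwork requests instead of reconnecting each time; the PHP difficulty is that each web request typically runs in a fresh process, so there is nowhere to keep the open socket without an external daemon. A tiny sketch of the reuse idea (in Python; the class and `connect` factory are invented for illustration, with the real socket opening stubbed out):

```python
class ConnectionPool:
    """Toy keep-alive pool: one open connection per host, handed back
    on every subsequent request instead of reconnecting."""

    def __init__(self, connect):
        self._connect = connect   # factory that opens a connection to `host`
        self._conns = {}

    def get(self, host):
        # Connect only the first time a host is seen; afterwards the
        # same live connection is reused.
        if host not in self._conns:
            self._conns[host] = self._connect(host)
        return self._conns[host]


made = []
pool = ConnectionPool(lambda host: made.append(host) or f"conn:{host}")
pool.get("pool-a.example")
pool.get("pool-a.example")
print(made)  # connected only once: ['pool-a.example']
```

In a long-lived daemon this is trivial; it is the per-request process model of PHP that pushes the pooling out into a separate process.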
|
|
|
|
mlg.odk
Newbie
Offline
Activity: 5
Merit: 0
|
|
April 18, 2011, 03:35:58 PM |
|
This project appears to be very interesting and is in fact exactly what I've been looking for to connect all the machines I have here to one single worker account. Is there any release date set already?
|
|
|
|
cdhowie (OP)
|
|
April 19, 2011, 05:09:16 PM |
|
> This project appears to be very interesting and is in fact exactly what I've been looking for to connect all the machines I have here to one single worker account.

If you are using e.g. slush's pool, you should still have a separate account for each worker. My proxy allows multiple miners to authenticate to it with separate credentials, and the proxy will then authenticate to pools using credentials stored for that worker. In other words, each worker-pool assignment has its own pool credentials.

> Is there any release date set already?

It's "as soon as I clean up the Git repo." I hope to get to that this week.
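The per-assignment credential scheme described above amounts to keying upstream credentials by the (worker, pool) pair. A trivial sketch, with invented names and Python standing in for the proxy's PHP/MySQL:

```python
# Hypothetical credential table: one upstream account per worker-pool
# assignment (the real proxy stores these in MySQL).
creds = {
    ("rig1", "slush"): ("slushaccount.rig1", "secret1"),
    ("rig2", "slush"): ("slushaccount.rig2", "secret2"),
}

def upstream_credentials(cred_map, worker, pool):
    """Return the pool-side (username, password) for this worker-pool
    assignment, or None if the worker has no account on that pool."""
    return cred_map.get((worker, pool))

print(upstream_credentials(creds, "rig1", "slush"))  # ('slushaccount.rig1', 'secret1')
```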
|
|
|
|
mlg.odk
Newbie
Offline
Activity: 5
Merit: 0
|
|
April 19, 2011, 08:27:53 PM |
|
> If you are using e.g. slush's pool, you should still have a separate account for each worker. My proxy allows multiple miners to authenticate to it with separate credentials, and the proxy will then authenticate to pools using credentials stored for that worker. In other words, each worker-pool assignment has its own pool credentials.

Generally this is how I do it. However, take as an example slush's pool and a GPU cluster that is not dedicated to generating Bitcoins and does so only when idle: the number of GPUs and machines available constantly changes. While there is a maximum number of machines, I'd still need software to dynamically associate a worker account with each machine. This is because (here's where slush's pool comes in) slush's reward-calculating formula includes the time at which the last share was submitted. Of course this is a great way to prevent cheating, but it also means that if one machine goes offline or switches to a different task, the reward it would usually have earned quickly shrinks to zero. Therefore a proxy tool like yours is more effective for such a cluster, and far easier to manage and automate.

> It's "as soon as I clean up the Git repo." I hope to get to that this week.

Great! I'll see about dropping you some coins when it's out.
|
|
|
|
cdhowie (OP)
|
|
April 20, 2011, 12:12:34 PM |
|
> Generally this is how I do it. However, take as an example slush's pool and a GPU cluster that is not dedicated to generating Bitcoins and does so only when idle: the number of GPUs and machines available constantly changes. While there is a maximum number of machines, I'd still need software to dynamically associate a worker account with each machine. This is because (here's where slush's pool comes in) slush's reward-calculating formula includes the time at which the last share was submitted. Of course this is a great way to prevent cheating, but it also means that if one machine goes offline or switches to a different task, the reward it would usually have earned quickly shrinks to zero. Therefore a proxy tool like yours is more effective for such a cluster, and far easier to manage and automate.

The way I understand the math, all of your recently-submitted shares are considered, no matter which worker they were submitted through -- so running several miners on one worker account or on several should make no difference to the reward you get when the block is solved. Even if one machine drops out: since the shares are considered collectively, it doesn't matter which account they were submitted through. They will age just like every other share you submit. (Correct me if I'm wrong, slush.) In other words, the older shares you submitted from the still-active machines have aged and become worthless too; you just can't tell, because those machines are submitting enough new shares to keep the reward up.

Now, having said that, my proxy doesn't "dynamically associate a worker account to each machine." You still need to set up worker accounts in my proxy script. The difference is that you can assign those worker accounts to more than one pool. (Although you could probably hack it to do what you want.)
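The aging argument can be illustrated with a toy model. This is not slush's actual scoring formula -- just an invented exponential decay making the same point: a share's weight depends only on when it was submitted, not on which account submitted it.

```python
def share_score(age_seconds, half_life=600.0):
    """Toy decay: a share's weight halves every `half_life` seconds.

    NOT slush's real formula; it only illustrates that the weight is
    a function of submission time alone, independent of the account.
    """
    return 0.5 ** (age_seconds / half_life)

# Two shares submitted at the same moment age identically, whether
# they came from one worker account or from two:
print(share_score(1200) == share_score(1200))  # True
```

So splitting the same hash rate across several accounts neither speeds up nor slows down the decay of any individual share.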
|
|
|
|
mlg.odk
Newbie
Offline
Activity: 5
Merit: 0
|
|
April 20, 2011, 01:40:25 PM |
|
> The way I understand the math, all of your recently-submitted shares are considered, no matter which worker they were submitted through -- so running several miners on one worker account or on several should make no difference to the reward you get when the block is solved. Even if one machine drops out: since the shares are considered collectively, it doesn't matter which account they were submitted through. They will age just like every other share you submit. (Correct me if I'm wrong, slush.) In other words, the older shares you submitted from the still-active machines have aged and become worthless too; you just can't tell, because those machines are submitting enough new shares to keep the reward up.

This is interesting; in that case there is still the clear advantage of being able to quickly switch to a different pool if the current one goes offline for whatever reason.

> Now, having said that, my proxy doesn't "dynamically associate a worker account to each machine." You still need to set up worker accounts in my proxy script. The difference is that you can assign those worker accounts to more than one pool. (Although you could probably hack it to do what you want.)

Ah, I think you misunderstood me there; sorry if I was not clear enough. I meant to say that the alternative to what I described was your software; I understood the purpose of your software correctly.
|
|
|
|
pwnyboy
|
|
April 23, 2011, 10:36:39 PM |
|
What is the status of this? I'd like to start testing and maybe contributing code if you're ready to release it into the wild.
|
|
|
|
BitLex
|
|
April 23, 2011, 11:30:33 PM |
|
I would also like to test this, so please keep us up to date.
|
|
|
|
anisoptera
Member
Offline
Activity: 308
Merit: 10
|
|
May 04, 2011, 06:39:21 PM |
|
+1 for wanting to test this please
|
|
|
|
pwnyboy
|
|
May 04, 2011, 08:10:12 PM |
|
After recent strangeness with Deepbit, I'm putting a 10 BTC bounty on this, contingent upon ultimate release of the working product and source code before I'm forced to write my own. Looking forward to getting up and running.
|
|
|
|
cdhowie (OP)
|
|
May 04, 2011, 11:04:44 PM |
|
Alright, I've cleaned up the repo and finished adding the final touches (for now...).

Grab the code: https://github.com/cdhowie/Bitcoin-mining-proxy

Setup directions are in the INSTALL file. Please read the whole thing before asking questions; I took a long time writing it and proofread it several times to make sure it's complete and correct. Let me know if anything is unclear or broken, and please feel free to use GitHub's issue tracking to report problems.

If you like the software and find it useful, there is a GPG-signed Bitcoin address on the about page where you can send donations. Any amount would be appreciated.

For those interested in reliability information: I've been running one miner against it consistently for a full month, and a few other miners against it sporadically. The getwork proxy has always worked 100% for me in my deployed copy, except during power failures.

@pwnyboy: Thanks for offering the bounty! I was already getting the proxy ready for release when you posted, so you don't have to send the bounty if you don't want to (it really wasn't responsible for motivating me; I just finally got off my lazy butt, since everyone has been waiting for it). But I won't turn it down either. If you still want to send it, feel free to use the donation address.
|
|
|
|
pwnyboy
|
|
May 06, 2011, 04:32:47 AM |
|
Thanks for the release! I'll dedicate some time to pulling and installing it tomorrow, and pass that money along once it's up. After all, a promise is a promise. Thanks again.
|
|
|
|
cdhowie (OP)
|
|
May 06, 2011, 04:36:29 AM |
|
> Thanks for the release! I'll dedicate some time to pulling and installing it tomorrow, and pass that money along once it's up. After all, a promise is a promise. Thanks again.

Cool, thanks! Hope the software works for you. If you run into any trouble, let me know. I intend to support the software I publish. (That goes for everyone.)
|
|
|
|
aahzmundus
|
|
May 06, 2011, 04:40:52 AM |
|
> If you run into any trouble, let me know. I intend to support the software I publish. (That goes for everyone.)

He is serious about this; he worked with me on and off for a good hour, helping me figure stuff out step by step. Already made a small donation -- well worth it.
|
|
|
|
|