TheSeven (OP)
|
|
February 10, 2012, 04:45:23 PM |
|
The priority values just define how hashrate will be distributed between the pools in the long term. If one pool has two times the priority of another one, it will get twice the hashrate compared to the other one on average. The absolute values don't matter, just the relative differences do. Assuming you have one pool with priority 1000 and another one with 2000 that's the same as if the first one had priority 1 and the second one 2.
This should not make job requests fail, but if a pool has issues, these values do influence MPBM's failover behavior. It is recommended to have at least two pools with similar priority levels (say, no more than a factor of 2 or 3 apart from each other); otherwise MPBM might try to fetch work from your primary pool too aggressively during an outage, causing the work buffer to run empty before it attempts to fetch work from another pool. This is especially important if you use the bias values I posted above to distribute hashrate more evenly; the default values should be a bit more tolerant here (by "punishing" pools a lot harder for failed requests or stales). I'd recommend setting up two (or more) pools with the same priority for load balancing and failover, plus, if you want to donate some hashrate, the demo pool entries at a lower priority.
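The proportional split described above can be sketched in a few lines of Python. This is a hypothetical illustration of the math, not MPBM's actual scheduling code:

```python
# Hypothetical illustration of how relative pool priorities map to
# long-term hashrate shares (not MPBM's actual scheduler).

def expected_shares(priorities):
    """Return each pool's expected fraction of total hashrate."""
    total = sum(priorities.values())
    return {pool: p / total for pool, p in priorities.items()}

# The absolute numbers don't matter, only the ratios:
print(expected_shares({"primary": 1000, "backup": 2000}))
print(expected_shares({"primary": 1, "backup": 2}))
```

Both calls print the same 1/3 vs. 2/3 split, since only the relative differences between priorities are significant.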
|
My tip jar: 13kwqR7B4WcSAJCYJH1eXQcxG5vVUwKAqY
|
|
|
allinvain
Legendary
Offline
Activity: 3080
Merit: 1080
|
|
February 10, 2012, 05:01:14 PM |
|
Excellent reply. Thanks. I intend to set up more than one, but first I'm waiting for Mr. ngzhang to deliver another FPGA board. Right now I have 1 Icarus board mining with your miner. I must admit I'm a big sucker for colored terminals, and I find watching the miner's output the equivalent of "seeing into the matrix".
|
|
|
|
|
TheSeven (OP)
|
|
February 10, 2012, 06:17:57 PM |
|
You can only install 64-bit drivers on a 64-bit OS, and Win7 x64 has mandatory driver signing. You have 4 options:
- Build a WinUSB driver inf file for the device (I might do that at some point if nobody else wants to bother with it) and install a recent libusb dll that can use WinUSB.
- Find a signed libusb driver version and build a driver inf file based on that.
- Disable driver signing (BCD patch) and use an arbitrary libusb device or filter driver.
- Keep the D2XX driver and set "useftd2xx": True in the worker parameters section (where the serial number is).
|
My tip jar: 13kwqR7B4WcSAJCYJH1eXQcxG5vVUwKAqY
|
|
|
nbtcminer
|
|
February 10, 2012, 06:22:37 PM Last edit: February 10, 2012, 06:35:20 PM by nbtcminer |
|
@TheSeven: Thanks for the quick reply! I might try building my own WinUSB driver inf file (but we'll see how that goes). For now I think the easiest thing to do is to just use the D2XX driver (I want to test your miner over the weekend).
Edit: If I wanted to add the FTDI D2xx line, would it go in config.py (under the workers section) or \workers\fpgaming\x6500.py? Thanks in advance for the help! Cheers!
Quote from: TheSeven
You can only install 64-bit drivers on a 64-bit OS, and Win7 x64 has mandatory driver signing. You have 4 options: [...]
|
|
|
|
TheSeven (OP)
|
|
February 10, 2012, 06:46:49 PM |
|
In config.py, like this:
    { \
        "type": worker.fpgamining.x6500.X6500Worker, \
        "deviceid": "ABCDEFGH", \
        "useftd2xx": True, \
    }, \
See x6500.py for more available config options to be put there.
|
My tip jar: 13kwqR7B4WcSAJCYJH1eXQcxG5vVUwKAqY
|
|
|
nbtcminer
|
|
February 10, 2012, 11:36:30 PM |
|
@TheSeven: Awesome! I've got it up and running with the D2XX drivers and things look OK so far! However, I'm getting an error that I used to get with the old X6500 as well: Error while requesting job from "pool" timed out. Any idea on that one?
Quote from: TheSeven
In config.py, like this: [...]
|
|
|
|
freshzive
|
|
February 11, 2012, 03:22:02 AM |
|
Quote from: freshzive
So what's the proper way to specify a backup pool? I set my primary pool to a priority of '1000' and the backup to '1', yet my backup pool seems to be getting lots of shares still for some reason... Thanks for the help coblee, got it running successfully with all 5 x6500s in one instance
Quote from: TheSeven
If your main pool behaves normally, this should only happen shortly after starting the miner (or if the backup pool is incredibly lucky). I might have to adjust some of the bias values in the default configuration to improve the startup behavior, in order to not confuse people... You might want to try something like these:
getworkbias = -3000
longpollkillbias = 3000
getworkfailbias = -5000
jobstartbias = 0
jobfinishbias = 3000
sharebias = 1000
uploadfailbias = 0
stalebias = -10000
biasdecay = 0.9995
That should make the hashrate distribution more predictable and less dependent on pool luck. Use that with priority 10-20 for the main pool and 1 for the backup pool.
So I tried this, and it actually seemed to be sending MORE shares to my backup pool than it had previously (this was with priority set to 20 for main and 1 for backup). With the default settings and priorities of 1000 (main) and 1 (backup), it was sending about 100 MH/s of 2000 MH/s total to my backup pool over the course of ~24 hours. Any other ideas? Thanks.
|
|
|
|
TheSeven (OP)
|
|
February 11, 2012, 04:29:38 AM |
|
Quote from: nbtcminer
@TheSeven: Awesome! I've got it up and running with the D2XX drivers and things look OK so far! However, I'm getting an error that I used to get with the old X6500 as well: Error while requesting job from "pool" timed out. Any idea on that one?
Sounds like your pool is having issues (or you didn't configure it correctly)?
|
My tip jar: 13kwqR7B4WcSAJCYJH1eXQcxG5vVUwKAqY
|
|
|
TheSeven (OP)
|
|
February 11, 2012, 04:31:30 AM |
|
Quote from: freshzive
So I tried this, and it actually seemed to be sending MORE shares to my backup pool than it had previously (this was with priority set to 20 for main and 1 for backup).
With the default settings and priorities of 1000 (main) and 1 (backup), it was sending about 100 MH/s of 2000 MH/s total to my backup pool over the course of ~24 hours.
Any other ideas? Thanks.
Interesting. It shouldn't be doing that unless your main pool is having severe trouble. Can you send me a screenshot of the full statistics and your config file (with passwords removed if necessary)?
|
My tip jar: 13kwqR7B4WcSAJCYJH1eXQcxG5vVUwKAqY
|
|
|
cypherdoc
Legendary
Offline
Activity: 1764
Merit: 1002
|
|
February 11, 2012, 03:06:18 PM |
|
To op,
will there ever be a version of this great software for non-coders?
|
|
|
|
freshzive
|
|
February 11, 2012, 04:10:24 PM |
|
Sent you a BTC for development of this great software
|
|
|
|
allinvain
Legendary
Offline
Activity: 3080
Merit: 1080
|
|
February 11, 2012, 04:16:25 PM |
|
TheSeven, I'm wondering if you can tell me if the "failed job requests", "Share upload retries" and "Efficiency" stats are within normal ranges. I know the failed job requests and share upload retries vary from person to person, but shouldn't the efficiency be higher? For a while it was at 98%. What does the efficiency level refer to?
|
|
|
|
nbtcminer
|
|
February 11, 2012, 04:30:55 PM |
|
@TheSeven: It's BTCGuild; since they swapped servers yesterday, things have been a bit weird. Cheers, nbtcminer
Quote from: TheSeven
Sounds like your pool is having issues (or you didn't configure it correctly)?
|
|
|
|
allinvain
Legendary
Offline
Activity: 3080
Merit: 1080
|
|
February 11, 2012, 04:36:02 PM |
|
It seems it's sporadic. It sometimes gets pretty bad... check out all the red lol:
|
|
|
|
TheSeven (OP)
|
|
February 11, 2012, 05:45:37 PM |
|
Quote from: allinvain
TheSeven, I'm wondering if you can tell me if the "failed job requests", "Share upload retries" and "Efficiency" stats are within normal ranges. I know the failed job requests and share upload retries vary from person to person, but shouldn't the efficiency be higher? For a while it was at 98%. What does the efficiency level refer to?
The failed job request, upload retry and stale share values look like there's connectivity trouble: either a very slow (or rather high-latency) internet connection, a DDoS affecting the pool, or something similar. The efficiency value is basically your luck minus invalids/stales; it should usually be around 85-110% after some time (it needs some minutes to stabilize). However, in your case something seems to have gone wrong with the hashrate measurement at some point (look at your average hashrate), which will of course skew the efficiency calculations. If you account for that, your effective efficiency is around 98%, which is pretty much normal.
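As a rough sketch of what such an efficiency figure measures (an assumed formula for illustration, not necessarily MPBM's exact calculation): accepted shares compared against the number of shares the measured average hashrate should statistically have produced, at one difficulty-1 share per 2^32 hashes on average.

```python
# Hypothetical sketch of an efficiency metric (assumed formula, not
# necessarily MPBM's exact calculation): accepted shares divided by
# the share count the measured hashrate should statistically produce.

def efficiency(accepted_shares, avg_mhps, seconds):
    """Accepted shares vs. statistically expected shares at this hashrate."""
    # On average, one difficulty-1 share is found per 2^32 hashes.
    expected = avg_mhps * 1e6 * seconds / 2**32
    return accepted_shares / expected

# A broken hashrate measurement inflates the expected-share count and
# makes the reported efficiency collapse, even if shares come in fine:
print(efficiency(2000, 380.0, 6 * 3600))     # plausible measurement
print(efficiency(2000, 130000.0, 6 * 3600))  # runaway measurement
```

This also shows why a skewed average hashrate wrecks the displayed percentage: the share count in the numerator stays correct, but the denominator explodes.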
|
My tip jar: 13kwqR7B4WcSAJCYJH1eXQcxG5vVUwKAqY
|
|
|
TheSeven (OP)
|
|
February 11, 2012, 05:53:56 PM |
|
To op,
will there ever be a version of this great software for non coders?
Well, you don't really need coding skills to use this software. It isn't documented well yet (this project was launched just days ago), and some general programming experience will of course make installation and configuration easier, because programmers usually grasp faster how this kind of thing works. However, if you look at the configuration file with some common sense, you'll probably recognize the patterns in how the definitions in there look, which should enable you to adapt them to fit your needs. It really isn't all that complicated. Just ask for help if you don't understand something.
I'm planning to remove the MPBM configuration file one day and add a web interface through which it can be configured instead. This will require some major changes and thus take quite some time. Let's see how much spare time I'll have during the next months...
|
My tip jar: 13kwqR7B4WcSAJCYJH1eXQcxG5vVUwKAqY
|
|
|
TheSeven (OP)
|
|
February 11, 2012, 05:55:25 PM |
|
Quote from: freshzive
Sent you a BTC for development of this great software
Thanks a lot, I really appreciate that
|
My tip jar: 13kwqR7B4WcSAJCYJH1eXQcxG5vVUwKAqY
|
|
|
allinvain
Legendary
Offline
Activity: 3080
Merit: 1080
|
|
February 11, 2012, 06:33:17 PM |
|
Quote from: allinvain
TheSeven, I'm wondering if you can tell me if the "failed job requests", "Share upload retries" and "Efficiency" stats are within normal ranges. [...]
Quote from: TheSeven
The failed job request, upload retry and stale share values look like there's connectivity trouble. [...] If you account for that, your effective efficiency is around 98%, which is pretty much normal.
I think it was the pool, as it has now stabilized. The efficiency is slowly climbing back up too. Average hashrate is also dropping to a more realistic figure. As for latency, it's kind of funny because I chose to use BTC Guild specifically due to the low ping and low number of hops from me to their server. I'll keep an eye on it and see how it goes. Props to you for coding a neat miner.
|
|
|
|
coblee
Donator
Legendary
Offline
Activity: 1654
Merit: 1351
Creator of Litecoin. Cryptocurrency enthusiast.
|
|
February 12, 2012, 11:26:40 AM |
|
I have 2 Icarus boards and after mining for a while, the stats start to look weird:

Worker name                          │Accepted jobs│Accepted shares│Stale shares (rejected)│Invalid shares (K not zero)│Current MHash/s│Average MHash/s│Temperature (degrees C)│Efficiency│Current pool
Icarus board on /dev/tty.usbserial   │         4389│           2195│              23 (1.0%)│                  2 (0.1%)│         379.70│         373.31│Unknown                │     94.7%│Eclipse
Icarus board on /dev/tty.usbserial20 │         4459│           2272│              23 (1.0%)│                  7 (0.3%)│         379.36│      129609.80│Unknown                │      0.3%│Eclipse

The accepted shares are about the same, but the 2nd board is showing a wrong average MHash/s and efficiency. In the logs I do notice that the 2nd board had timed out a few times. TheSeven, any ideas?
|
|
|
|
|