ngzhang
|
|
February 09, 2012, 04:10:29 PM |
|
I must say, it works and has very high efficiency, but the display looks a bit strange on my machine. Python 2.7.2 and curses 2.2 (AMD64).
|
|
|
|
freshzive
|
|
February 09, 2012, 04:25:29 PM |
|
So does the hotplug not work with multiple x6500s? Or am I doing something wrong? I have 5; it seems like they all connect, but they never start hashing and I keep getting lots of Errno 19 errors when they disconnect.
|
|
|
|
freshzive
|
|
February 09, 2012, 04:30:59 PM |
|
Yeah, weird. When I specify them each by serial # it works perfectly.
|
|
|
|
thirdlight
|
|
February 09, 2012, 04:31:55 PM |
|
Quote from: freshzive
So does the hotplug not work with multiple x6500s?
I have it working with multiple X6500s - on Windows, though.
|
|
|
|
freshzive
|
|
February 09, 2012, 04:34:51 PM |
|
Quote from: thirdlight
So does the hotplug not work with multiple x6500s? I have it working with multiple X6500s - on Windows, though.
Yeah, maybe it's my Mac? Also, specifying the serial # in the config does nothing. When using the single x6500 miner, it automatically connects to device 0.
|
|
|
|
TheSeven (OP)
|
|
February 09, 2012, 05:50:30 PM |
|
Are you sure that there's a bitstream running on the FPGAs? If they just get validation job timeouts after 3 minutes, that might simply be an FPGA that wasn't booted. MPBM won't upload bitstreams unless you instruct it to in the config file.
|
My tip jar: 13kwqR7B4WcSAJCYJH1eXQcxG5vVUwKAqY
|
|
|
freshzive
|
|
February 09, 2012, 06:59:29 PM |
|
Very sure. I programmed them using the x6500 software. I have all 5 up and running using 5 instances of your software with single x6500....but the hotplug won't work for whatever reason. Any suggestions?
|
|
|
|
coblee
Donator
Legendary
Offline
Activity: 1654
Merit: 1350
Creator of Litecoin. Cryptocurrency enthusiast.
|
|
February 09, 2012, 07:01:18 PM |
|
Quote from: freshzive
Very sure. I programmed them using the x6500 software. I have all 5 up and running using 5 instances of your software with single x6500... but the hotplug won't work for whatever reason. Any suggestions?
I never got the hotplug working on my Mac either. I got the same error message. So I gave up and just specified the serial, and that's been working fine.
|
|
|
|
TheSeven (OP)
|
|
February 09, 2012, 08:13:09 PM |
|
Quote from: freshzive
Very sure. I programmed them using the x6500 software. I have all 5 up and running using 5 instances of your software with single x6500... but the hotplug won't work for whatever reason. Any suggestions?
No idea how hotplug can fail when single workers work; that's really weird. Both modes enumerate all devices on the bus: the single worker just skips every device whose serial doesn't match, while hotplug spawns a worker for every serial it finds on the bus. But you should be able to just copy the worker definition in the config file for all 5 boards instead of running 5 MPBM instances. That's basically what hotplug would do anyway.
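The enumeration difference described above can be sketched like this (a minimal illustration, NOT MPBM's actual code; the serials are just the examples from this thread):

```python
# Hedged sketch of the two device-selection modes described above.
# Both scan the same bus; only the filtering differs.

def single_worker_devices(bus, serial):
    # Single worker: keep only the board whose serial matches.
    return [dev for dev in bus if dev == serial]

def hotplug_devices(bus):
    # Hotplug: one worker is spawned per serial found on the bus.
    return list(bus)

bus = ["AH00WOWD", "AH00WI18"]  # example serials from coblee's config
print(single_worker_devices(bus, "AH00WI18"))  # ['AH00WI18']
print(hotplug_devices(bus))  # ['AH00WOWD', 'AH00WI18']
```

If both modes really walk the same bus, a hotplug failure while single workers succeed would point at something outside the filtering itself, which is why it looked so strange in this thread.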
|
My tip jar: 13kwqR7B4WcSAJCYJH1eXQcxG5vVUwKAqY
|
|
|
coblee
Donator
Legendary
Offline
Activity: 1654
Merit: 1350
Creator of Litecoin. Cryptocurrency enthusiast.
|
|
February 09, 2012, 08:17:59 PM |
|
Quote from: freshzive
Very sure. I programmed them using the x6500 software. I have all 5 up and running using 5 instances of your software with single x6500... but the hotplug won't work for whatever reason. Any suggestions?
Quote from: TheSeven
But you should be able to just copy the worker definition in the config file for all 5 boards instead of running 5 MPBM instances. That's basically what hotplug would do anyway.
I have 2 x6500 workers in a single instance of MPBM on my Mac.
|
|
|
|
freshzive
|
|
February 09, 2012, 09:02:36 PM |
|
Weird, specifying them by serial # doesn't seem to work on my Mac. No matter what serial # I put, it seems to connect to device #0 first. The second instance then connects to device #1, and so on. Coblee, can I see what your config file looks like? Would love to have only 1 instance running. Thanks for all the help.
|
|
|
|
coblee
Donator
Legendary
Offline
Activity: 1654
Merit: 1350
Creator of Litecoin. Cryptocurrency enthusiast.
|
|
February 09, 2012, 11:31:20 PM |
|
Here's the relevant part:

# Single X6500 worker
{ \
    "type": worker.fpgamining.x6500.X6500Worker, \
    "deviceid": "AH00WOWD", \
    "firmware": "worker/fpgamining/firmware/ztexmerge_200mhz.bit", \
    "takeover": True, \
#    "uploadfirmware": True, \
}, \

# Single X6500 worker
{ \
    "type": worker.fpgamining.x6500.X6500Worker, \
    "deviceid": "AH00WI18", \
    "firmware": "worker/fpgamining/firmware/ztexmerge_200mhz.bit", \
    "takeover": True, \
#    "uploadfirmware": True, \
}, \
|
|
|
|
coblee
Donator
Legendary
Offline
Activity: 1654
Merit: 1350
Creator of Litecoin. Cryptocurrency enthusiast.
|
|
February 09, 2012, 11:33:14 PM |
|
When using the icarus worker, if I stop the miner and restart it, the icarus worker will fail to start with this message: Icarus board on /dev/tty.usbserial: Timeout waiting for validation job to finish
It seems like I have to disconnect and reconnect them in order for it to work again. Any ideas?
|
|
|
|
freshzive
|
|
February 10, 2012, 06:06:02 AM |
|
So what's the proper way to specify a backup pool? I set my primary pool to a priority of '1000' and the backup to '1', yet my backup pool still seems to be getting lots of shares for some reason... Thanks for the help coblee, got it running successfully with all 5 x6500s in one instance.
|
|
|
|
finway
|
|
February 10, 2012, 07:56:09 AM |
|
Mark.
|
|
|
|
TheSeven (OP)
|
|
February 10, 2012, 08:55:55 AM |
|
Quote from: freshzive
So what's the proper way to specify a backup pool? I set my primary pool to a priority of '1000' and the backup to '1', yet my backup pool still seems to be getting lots of shares for some reason...
If your main pool behaves normally, this should only happen shortly after starting the miner (or if the backup pool is incredibly lucky). I might have to adjust some of the bias values in the default configuration to improve the startup behavior, in order to not confuse people... You might want to try something like these:

getworkbias = -3000
longpollkillbias = 3000
getworkfailbias = -5000
jobstartbias = 0
jobfinishbias = 3000
sharebias = 1000
uploadfailbias = 0
stalebias = -10000
biasdecay = 0.9995

That should make the hashrate distribution more predictable and less dependent on pool luck. Use that with priority 10-20 for the main pool and 1 for the backup pool.

If you want it to just distribute hashrate according to the priority/hashrate settings and completely ignore the pools' behavior, use this:

getworkbias = -3000
longpollkillbias = 3000
getworkfailbias = 3000
jobstartbias = 0
jobfinishbias = 3000
sharebias = 0
uploadfailbias = 0
stalebias = 0
biasdecay = 0.9995

This will make it flood the primary pool with requests if it's down, though, and it might then be unable to fetch work quickly enough, so this should only be used with multiple load-balanced pools at similar priority levels.
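As a rough illustration of how these settings might interact (a model inferred from the setting names alone, not MPBM's actual scoring code): each pool accumulates a score from event biases, and a biasdecay below 1 shrinks that score on every step, so penalties like a stale share fade over time instead of sticking forever.

```python
# Hypothetical pool-scoring model, inferred from the setting names
# above; the real MPBM implementation may differ.
BIASDECAY = 0.9995
STALEBIAS = -10000

score = 0.0
# A stale share hits the pool's score hard...
score = score * BIASDECAY + STALEBIAS
# ...but the decay erodes the penalty over the following steps.
for _ in range(10000):
    score *= BIASDECAY
print(round(score, 2))  # the -10000 penalty has mostly decayed away
```

Under this model, setting stalebias = 0 and sharebias = 0 (as in the second preset) removes luck-dependent feedback entirely, which matches the description of distributing hashrate purely by priority.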
|
My tip jar: 13kwqR7B4WcSAJCYJH1eXQcxG5vVUwKAqY
|
|
|
nbtcminer
|
|
February 10, 2012, 01:55:00 PM |
|
For those with Win x64, where are you getting libusb from? I'm basically stuck at the x6500 driver; no backend available
|
|
|
|
TheSeven (OP)
|
|
February 10, 2012, 02:04:06 PM |
|
For those with Win x64, where are you getting libusb from? I'm basically stuck at the x6500 driver; no backend available
In theory, building a WinUSB driver for the FTDI chip should work; libusb can nowadays use WinUSB as a backend. I haven't gotten around to building such a driver yet, though.
|
My tip jar: 13kwqR7B4WcSAJCYJH1eXQcxG5vVUwKAqY
|
|
|
allinvain
Legendary
Offline
Activity: 3080
Merit: 1080
|
|
February 10, 2012, 03:49:29 PM |
|
Woohoo, got it working. Just a note for those on Win 7 x64: first download and install the curses Python module from here: http://www.lfd.uci.edu/~gohlke/pythonlibs/#curses
On a different note, what percent stale shares are you folks seeing?
Edit: Whoops, replied to the wrong thread. But still relevant. I meant to reply to the Icarus thread.
|
|
|
|
allinvain
Legendary
Offline
Activity: 3080
Merit: 1080
|
|
February 10, 2012, 04:00:34 PM |
|
In regards to this:
"priority": 30, \
I take it the higher the number, the higher the priority. If so, what is a recommended priority setting? I'm testing with BTC Guild, for example, and I just kept the stock config's priority of 30. Might that be too high? Could this explain failed job requests?
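Going by TheSeven's earlier reply, priorities act as relative weights between pools rather than absolute thresholds. A quick sketch of what that would mean (the proportional split is an assumption, but it is consistent with "distribute hashrate according to the priority/hashrate settings"; the pool names are just examples):

```python
# Hypothetical: if hashrate is split proportionally to priority, a
# priority-30 pool next to a priority-1 backup gets nearly all the work.
priorities = {"btcguild": 30, "backup": 1}
total = sum(priorities.values())
shares = {name: p / total for name, p in priorities.items()}
for name, frac in shares.items():
    print(f"{name}: {frac:.1%}")
```

Under that reading, 30 is not "too high" in itself; only the ratio between pools matters, so failed job requests would have some other cause.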
|
|
|
|
|