Author Topic: hashkill - testing bitcoin miner plugin  (Read 90940 times)
Clipse
Hero Member
June 11, 2011, 06:48:55 PM
#281

Not sure why you have such poor efficiency, since I'm at 100-104% the whole time, though I'm confused about how I can get more than 100% efficiency.

xanatos
Newbie
June 13, 2011, 05:05:15 AM
#282

Quote
@xanatos, nice Smiley

Anyway, why did you remove that one?

Quote
__attribute__((reqd_work_group_size(64, 1, 1)))

I understand that workgroup size is configurable in poclbm. However, in most cases 64 should be the best choice. Also, hardcoding the required workgroup size helps the OpenCL compiler do a better job of register allocation, since it "knows" the workgroup size at compile time and does not have to make worst-case assumptions. You are losing performance because of this.

Another thing (I don't know if that's possible with pyopencl): don't use clEnqueueReadBuffer() (or whatever its pyopencl equivalent is). Use clEnqueueMapBuffer() instead. It's noticeably faster. Hm, I've really started wondering about modifying some Python miner to incorporate that kernel; it looks like a quick way to make it portable to Windows. Besides, there are obvious problems with the non-OCL part, which are due to code immaturity.

Tried the first... no measurable difference (technically I first tried it with 64, but it was much slower; then I changed it to 256, and there was no difference from the version without the attribute). Tried the second... Hmm... perhaps there was a difference, but it was very small. I even tried changing the size of the output buffer: you use a one-element array, poclbm uses a 256-element array, but no difference. The memory bandwidth of the graphics adapter is probably great enough that 1 KB of data every second or so isn't really measurable.
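
For reference, a minimal pyopencl sketch of both suggestions above (the hardcoded reqd_work_group_size attribute and mapping instead of reading); the trivial kernel and the 256-element output buffer are placeholders for illustration, not the actual miner kernel:

Code:
import numpy as np
import pyopencl as cl

ctx = cl.create_some_context()
queue = cl.CommandQueue(ctx)

# Hardcoding the workgroup size lets the compiler allocate
# registers for a known size instead of the worst case.
src = """
__attribute__((reqd_work_group_size(64, 1, 1)))
__kernel void search(__global uint *output)
{
    if (get_global_id(0) == 0)
        output[0] = 0x12345678u;   /* placeholder result */
}
"""
prg = cl.Program(ctx, src).build()

out_buf = cl.Buffer(ctx, cl.mem_flags.READ_WRITE, size=256 * 4)
# The local size must match the attribute: (64, 1, 1).
prg.search(queue, (4096,), (64,), out_buf)

# pyopencl's equivalent of clEnqueueMapBuffer(): map the buffer
# into host memory instead of copying it out with a read.
mapped, _event = cl.enqueue_map_buffer(
    queue, out_buf, cl.map_flags.READ, 0, (256,), np.uint32)
print(hex(mapped[0]))
mapped.base.release()  # unmap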

Tomorrow I'll modify the poclbm kernel to work with your frontend. It shouldn't be too difficult.
gat3way (OP)
Sr. Member
June 14, 2011, 06:46:30 AM
#283

There shouldn't be much of a difference (though device-host transfers would be slower with larger buffers, of course). BTW, is mapping/unmapping device memory possible with pyopencl?
Dusty
Hero Member
June 14, 2011, 07:18:16 AM
#284

Quote from: Clipse
Not sure why you have such poor efficiency, since I'm at 100-104% the whole time, though I'm confused about how I can get more than 100% efficiency.

gat3way, I have the same question: can you please explain how this can happen?

Also, a follow-up to my test: with a single device the server-side statistics report more shares than hashkill does, but with multiple cards the result is better for hashkill.

I guess there are a lot of details to check... BTW, I still get low efficiency, around 86%.

gat3way (OP)
Sr. Member
June 14, 2011, 07:22:20 AM
#285

It is possible, and it is a matter of luck. Efficiency is calculated as the number of shares divided by the number of getworks received.

For a single getwork, you may have zero, one, or more submitted shares; it's down to luck. If you requested 3 getworks and found 4 shares while processing them, then yes, efficiency would be about 133%. In the ideal scenario, efficiency gets close to 100% after hashkill has been running long enough.
Clipse
Hero Member
June 14, 2011, 11:26:58 AM
#286

Quote from: Dusty
Quote from: Clipse
Not sure why you have such poor efficiency, since I'm at 100-104% the whole time, though I'm confused about how I can get more than 100% efficiency.

gat3way, I have the same question: can you please explain how this can happen?

Also, a follow-up to my test: with a single device the server-side statistics report more shares than hashkill does, but with multiple cards the result is better for hashkill.

I guess there are a lot of details to check... BTW, I still get low efficiency, around 86%.

Not sure why you are unhappy with 86% efficiency; if your pool doesn't have an issue with efficiency, then the ratio shouldn't matter that much to the end user.

What you should care about is the number of stale/invalid shares compared to your submitted shares.

Dusty
Hero Member
June 14, 2011, 11:56:51 AM
#287

Thank you, gat3way, for the explanations. I still have a lot of gaps in my understanding of the whole thing, but I'm beginning to get it.

Quote from: Clipse
Not sure why you are unhappy with 86% efficiency; if your pool doesn't have an issue with efficiency, then the ratio shouldn't matter that much to the end user.
What you should care about is the number of stale/invalid shares compared to your submitted shares.

OK, these are my stats:
Quote
Speed: 1469 MHash/sec [proc: 5452] [subm: 4755] [stale: 64] [eff: 87%]
What I can't understand is why I have so many processed shares when not all of them are being submitted.

How come a share is processed but not submitted?
In which scenarios can this happen?

gat3way (OP)
Sr. Member
June 14, 2011, 12:28:18 PM
#288

There seems to be a misunderstanding here. Shares are not "processed"; getworks are. Each of the 'procd' getworks results in zero, one or more shares. Your stats indicate that:

* You've requested a new getwork 5452 times since the program was started.
* Working on those 5452 getworks, you found 4755 shares and submitted them.
* You have 64 stale (or invalid) shares - you submitted them, but the pool rejected them.

Now, since hashkill requests getworks in advance, even if you by chance had exactly one share per getwork (not zero and not more than one), you would still not have processed = submitted. That's because a queue is being filled "in advance".

Multiple "short" blocks in a row are likely to bring that "efficiency" down: on a new block, all the getworks in the queue that have already been counted as "processed" are discarded. Efficiency is calculated over the whole program run, not just the current block.

Connection failures (e.g. being unable to connect to the pool to send a share) obviously drop efficiency as well.
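
As a sanity check, that efficiency figure can be reproduced directly from the counters in the stats line quoted above (a sketch, using Dusty's numbers):

Code:
# Efficiency as defined above: submitted shares / processed getworks.
processed_getworks = 5452
submitted_shares = 4755
stale_shares = 64

efficiency = 100.0 * submitted_shares / processed_getworks
stale_rate = 100.0 * stale_shares / submitted_shares

print("eff: %.0f%%" % efficiency)    # -> eff: 87%
print("stale: %.1f%%" % stale_rate)  # -> stale: 1.3%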
Dusty
Hero Member
June 14, 2011, 02:07:10 PM
#289

Fine, thanks for all the details.

antz123
Newbie
June 14, 2011, 03:53:14 PM
#290

Anyone know why running hashkill from a batch file produces this error?

hashkill-gpu: error while loading shared libraries: libOpenCL.so: cannot open shared object file: No such file or directory

It works fine if I run it from a terminal window, but it produces this error if I run my batch file independently of the terminal window.

This causes problems with automatic startup of the miner after logging in...
AngelusWebDesign
Sr. Member
June 14, 2011, 06:35:38 PM
#291

I just noticed today that Hashkill doesn't work nearly as well with Slush's pool as it does with Deepbit.

I'm getting 10-11% stale shares now with Slush.

Doesn't Hashkill work with Slush's long polling?
gat3way (OP)
Sr. Member
June 14, 2011, 07:12:58 PM
#292

Slush's pool does not support long polling.

Quote from: antz123
Anyone know why running hashkill from a batch file produces this error?

hashkill-gpu: error while loading shared libraries: libOpenCL.so: cannot open shared object file: No such file or directory

It works fine if I run it from a terminal window, but it produces this error if I run my batch file independently of the terminal window.

This causes problems with automatic startup of the miner after logging in...

Put the export LD_LIBRARY_PATH=... line in your script.
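
For example, a wrapper script along these lines should work; the SDK path below is only an assumption (a typical AMD APP SDK location), so substitute the directory that actually contains libOpenCL.so on your system:

Code:
#!/bin/sh
# Startup wrapper for hashkill-gpu. Both paths are assumptions --
# point them at your actual OpenCL runtime and install directory.
export LD_LIBRARY_PATH="/opt/AMDAPP/lib/x86_64:$LD_LIBRARY_PATH"
cd /path/to/hashkill || exit 1   # hypothetical install directory
exec ./hashkill-gpu "$@"         # pass through your usual arguments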
AngelusWebDesign
Sr. Member
June 14, 2011, 07:27:38 PM
#293

So if I use, say, poclbm, it won't tell me about the stale shares, even though I'm probably getting just as many?

Do you think hashkill is more likely to get stales, the way it's designed? It might need long polling more than other miners do.
gat3way (OP)
Sr. Member
June 14, 2011, 07:42:14 PM
#294

Doesn't poclbm display stales?

Some pools do report them (deepbit for sure). As for whether hashkill is more likely to get them... it depends on the pool and your "luck", mostly. Hashkill does flush the queues, but it does not immediately cancel the current getwork, so if you find a share in the current NDRange it will still be submitted, and it will get displayed as stale. I could of course skip submitting it, but it would not matter... users would feel happier about it, but they would not benefit in any way (other than seeing fewer stales indicated).
antz123
Newbie
June 15, 2011, 08:47:51 AM
#295

Quote from: gat3way
Quote
Anyone know why running hashkill from a batch file produces this error?

hashkill-gpu: error while loading shared libraries: libOpenCL.so: cannot open shared object file: No such file or directory

It works fine if I run it from a terminal window, but it produces this error if I run my batch file independently of the terminal window.

This causes problems with automatic startup of the miner after logging in...

Put the export LD_LIBRARY_PATH=... line in your script.

Thanks very much for that - my problem is fixed. A really awesome miner; I like the user interface too.
kripz
Full Member
June 16, 2011, 12:05:38 PM
#296

One thing I've noticed is that it generates fewer shares than the others but has a higher hash rate?

gat3way (OP)
Sr. Member
June 16, 2011, 03:00:49 PM
#297

Some bad news: multi-pool support (failover/load-balancing) will be delayed. I've had some problems making it work correctly (especially as far as the LP stuff is concerned). Also, I will be quite busy for the next week or two and won't have time to work on it.

Regarding the recent DDoS attacks on the slush/deepbit pools: this is not good. Nevertheless, there is a tip that may help, and it's simple: just run two or more instances against different pools. Since hashkill utilizes all GPUs, the GPU load will be balanced nicely. Once a pool is DDoS'd, connections to it will fail and the other instances will utilize more GPU power. To make this clearer, here is an example:

You have 2x5870 cards running at 400 MH/s each, 800 MH/s overall. You run two instances - the first against slush's pool, the second against deepbit. GPU load is balanced - you'd spend roughly 400 MH/s mining for bitcoin.cz and 400 MH/s mining for deepbit.net. Now imagine deepbit.net gets attacked and your connections to it fail. Instance #2 would wait until it can successfully reconnect. No GPU power would be wasted, though: the GPUs would now be fully utilized by instance #1 running against bitcoin.cz at 800 MH/s. Once deepbit.net comes back online, GPU utilization would balance itself out again.

This is a very quick and dirty load-balancing scheme that is otherwise hard to achieve with multi-GPU configurations and one miner per GPU. I am using it myself and it works nicely.
sniper_sniperson
Full Member
June 16, 2011, 03:50:04 PM
#298

What format of pool list will you implement for that? Some sort of [pool address] - [include in failover=1/0] - [workername] - [password]?
gat3way (OP)
Sr. Member
June 16, 2011, 04:10:42 PM
#299

It is going to be the same format as the command line: a text file, one pool per line.
backburn
Member
June 17, 2011, 09:57:17 PM
#300

Quote
* Integrated support for getting stats from pools (currently only bitcoinpool.com, deepbit.net and mining.bitcoin.cz)

Keep up the good work, love your changes!

Mind adding support for BitClockers.com Bitcoin Mining Pool stats?

JSON API:
Pool Stats: http://bitclockers.com/api/
User Stats: http://bitclockers.com/api/APIKEYHERE/

If you need any more information I'd love to help.
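
For what it's worth, polling that API is straightforward; here is a rough sketch (only the two URLs are taken from this thread - the response structure isn't shown here, so the script just dumps whatever JSON comes back):

Code:
import json
import urllib.request

def bitclockers_stats(api_key=None):
    """Fetch pool stats, or user stats if an API key is given."""
    url = "http://bitclockers.com/api/"
    if api_key:
        url += api_key + "/"   # user stats: /api/APIKEYHERE/
    with urllib.request.urlopen(url, timeout=10) as resp:
        return json.load(resp)

print(json.dumps(bitclockers_stats(), indent=2))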