Author Topic: DiabloMiner GPU Miner  (Read 866201 times)
DiabloD3 (OP)
Legendary
Offline

Activity: 1162
Merit: 1000

DiabloMiner author

June 27, 2011, 05:56:48 AM
#781

Slush pool says to avoid the new DiabloMiner. Luckily I had a backup of the old DiabloMiner and am using it now.

http://forum.bitcoin.org/index.php?topic=1976.msg285661#msg285661
I even started a thread on this subject weeks ago and just got a lot of BS replies from people who couldn't believe it. The effective hash rate on my pool, which equals my payout, didn't lie. It was actually slower.

I can't comment on later versions of phoenix, but I log all my shares with phatk + phoenix svn r64 (which is what I'm mostly running), and the hash meter's expected shares agree quite closely with the actual ones. ::shrugs::


Slush pool says to avoid the new DiabloMiner. Luckily I had a backup of the old DiabloMiner and am using it now.

http://forum.bitcoin.org/index.php?topic=1976.msg285661#msg285661

This is a very good point. I haven't tried the new build yet, since the build I pulled down 2 weeks ago is working so well on my 6-GPU system, but this is a concern. The async work is great, so hopefully it can be sorted.

IIRC, the async work is what is incompatible with slush; it has nothing to do with the new kernel at all. Basically, when the miner would otherwise be idle, it increments the ntime anyway, hoping that the pool will still accept those shares. On slush it won't, so that work is wasted, but it would have been wasted regardless. The solution to that is to use a better pool, IMO.

It's also broken with the flexible mining proxy. Does anyone know how to get back to pre-async?

Perhaps an option to specify no ntime increments.

Pools and proxies that can't handle ntime being incremented by one or two need to be fixed.
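
To make the mechanism concrete: "incrementing ntime" means bumping the timestamp field of the block header, which gives the miner a fresh 2^32 nonce space for the same getwork. A minimal sketch in C, assuming a raw 80-byte header with nTime serialized little-endian at byte offset 68 (version 4 + prev hash 32 + merkle root 32); DiabloMiner itself is Java, so this is purely illustrative:

Code:
/* Roll ntime: when the nonce space for a work unit is exhausted and no
 * fresh getwork has arrived, bump the header timestamp by one second
 * and keep hashing. Some pools reject shares rolled too far ahead. */
#include <stdint.h>

#define NTIME_OFFSET 68  /* byte offset of nTime in the 80-byte header */

static void roll_ntime(uint8_t header[80])
{
    /* Read the little-endian 32-bit timestamp. */
    uint32_t ntime = (uint32_t)header[NTIME_OFFSET]
                   | (uint32_t)header[NTIME_OFFSET + 1] << 8
                   | (uint32_t)header[NTIME_OFFSET + 2] << 16
                   | (uint32_t)header[NTIME_OFFSET + 3] << 24;

    ntime += 1;  /* one second forward */

    /* Write it back in place. */
    header[NTIME_OFFSET]     = (uint8_t)ntime;
    header[NTIME_OFFSET + 1] = (uint8_t)(ntime >> 8);
    header[NTIME_OFFSET + 2] = (uint8_t)(ntime >> 16);
    header[NTIME_OFFSET + 3] = (uint8_t)(ntime >> 24);
}

A pool that rejects a share whose ntime is one or two seconds ahead of the work it handed out is exactly the "can't handle ntime being incremented" case described above.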

"Bitcoin: mining our own business since 2009" -- Pieter Wuille
DiabloD3 (OP)
Legendary
Offline

Activity: 1162
Merit: 1000

DiabloMiner author

June 27, 2011, 06:01:24 AM
#782

Update: Added bitless's hack.

My 5850 @ 918 on SDK 2.1 went from 369 to 378 MH/s, a 2.4% increase.

dishwara
Legendary
Offline

Activity: 1855
Merit: 1016

June 27, 2011, 07:14:38 AM
#783

Update: Added bitless's hack.

My 5850 @ 918 on SDK 2.1 went from 369 to 378 MH/s, a 2.4% increase.
Please, it would be very helpful if you added a version number, or any number at all, to the zip file itself, so it would be easy to know which DiabloMiner is which. All your miner releases come under the single file name DiabloMiner.zip, which makes it very hard to know which is new and which is old.
Thanks for making a great miner.
Druas
Member
Offline

Activity: 78
Merit: 10

June 27, 2011, 08:21:43 AM
#784

Update: Added bitless's hack.

My 5850 @ 918 on SDK 2.1 went from 369 to 378 MH/s, a 2.4% increase.
Damn, yeah, that is a rather noticeable increase. Overall, since the kernel has "mutated", I am seeing a 3.6% increase.
N4rk0
Newbie
Offline

Activity: 27
Merit: 0

June 27, 2011, 06:11:16 PM
#785

In the part where you enqueue the work to the GPU and then read the output buffer, why do you alternate two buffers? (buffer and output are both arrays of two buffers)
N4rk0
Newbie
Offline

Activity: 27
Merit: 0

June 27, 2011, 09:13:24 PM
#786

I think I got it: you use two queues so you don't have to call clFinish to wait for the current work to end before reading the buffer.
DiabloD3 (OP)
Legendary
Offline

Activity: 1162
Merit: 1000

DiabloMiner author

June 28, 2011, 12:54:06 AM
#787

In the part where you enqueue the work to the GPU and then read the output buffer, why do you alternate two buffers? (buffer and output are both arrays of two buffers)

Because it's faster on some setups.

wasabi
Newbie
Offline

Activity: 39
Merit: 0

June 29, 2011, 08:59:18 PM
#788

He alternates the buffers so the kernel can be executing on one buffer at the same time as the read on the other. They can both execute out of order; if it happens to run out of order, anyway.
DiabloD3 (OP)
Legendary
Offline

Activity: 1162
Merit: 1000

DiabloMiner author

June 29, 2011, 09:42:40 PM
#789

He alternates the buffers so the kernel can be executing on one buffer at the same time as the read on the other. They can both execute out of order; if it happens to run out of order, anyway.

Oddly, no. I do that by using two (formerly three) threads, each thread having its own queue.

So, that would be 4 (formerly 6) buffers.

The problem is that the Radeon driver sometimes does not finish copying the buffer quickly due to system IO load (since SDK 2.1 does not support DMA, nor does 2.4 on Linux). Alternating buffers (which costs me nothing) means I can schedule the kernel execution without having to wait for the previously used buffer to unlock.

So, it's executing in parallel, just strictly out of order.
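
To make that concrete, here is a minimal sketch of the alternating-buffer scheme against the standard OpenCL host API, in C (DiabloMiner itself is Java; mine_loop and OUTPUT_INTS are illustrative names, and DiabloMiner additionally runs one such loop per thread, each thread with its own queue, so the streams overlap):

Code:
#include <stddef.h>
#include <CL/cl.h>

#define OUTPUT_INTS 16  /* illustrative size of the result buffer */

/* Alternates between two output buffers so the next kernel launch can
 * be enqueued without waiting for the previous buffer's read to land
 * (useful when the driver lacks DMA and copies stall under IO load). */
void mine_loop(cl_command_queue queue, cl_kernel kernel,
               cl_mem output[2], cl_uint results[2][OUTPUT_INTS],
               size_t global, size_t local)
{
    cl_event read_done[2] = { NULL, NULL };
    int cur = 0;

    for (;;) {
        /* Before reusing a buffer, make sure its earlier read landed. */
        if (read_done[cur]) {
            clWaitForEvents(1, &read_done[cur]);
            clReleaseEvent(read_done[cur]);
            /* ... scan results[cur] for found nonces here ... */
        }

        /* Point the kernel at this pass's buffer and enqueue it. */
        clSetKernelArg(kernel, 0, sizeof(cl_mem), &output[cur]);
        clEnqueueNDRangeKernel(queue, kernel, 1, NULL, &global, &local,
                               0, NULL, NULL);

        /* Non-blocking read: no clFinish before the next launch. */
        clEnqueueReadBuffer(queue, output[cur], CL_FALSE, 0,
                            sizeof(cl_uint) * OUTPUT_INTS, results[cur],
                            0, NULL, &read_done[cur]);

        clFlush(queue);  /* kick the driver without stalling the host */
        cur ^= 1;        /* alternate buffers for the next pass */
    }
}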

DareC
Member
Offline

Activity: 83
Merit: 10

June 29, 2011, 11:50:30 PM
#790

Anyone who lost speed with the new kernel should try the latest version. It is now much faster for me.
iopq
Hero Member
Offline

Activity: 658
Merit: 500

June 30, 2011, 03:28:51 AM
Last edit: June 30, 2011, 04:04:17 AM by iopq
#791

The latest DiabloMiner is 2 MH/s faster than poclbm on my card! I even patched the maj function.
edit: but I'm getting a third of my shares rejected, lolwat
Man From The Future
Sr. Member
Offline

Activity: 371
Merit: 250

June 30, 2011, 01:32:09 PM
#792

Is it possible to make it so that if the connection is lost, instead of losing any found blocks, it stores them and tries to submit them when the connection is restored? They may end up being rejected if it was a long period of no connectivity, but it's more likely that they'll be good?

DiabloD3 (OP)
Legendary
Offline

Activity: 1162
Merit: 1000

DiabloMiner author

June 30, 2011, 01:48:08 PM
#793

The latest DiabloMiner is 2 MH/s faster than poclbm on my card! I even patched the maj function.
edit: but I'm getting a third of my shares rejected, lolwat

Because it was already patched.

DiabloD3 (OP)
Legendary
Offline

Activity: 1162
Merit: 1000

DiabloMiner author

June 30, 2011, 01:48:51 PM
#794

Is it possible to make it so that if the connection is lost, instead of losing any found blocks, it stores them and tries to submit them when the connection is restored? They may end up being rejected if it was a long period of no connectivity, but it's more likely that they'll be good?

Yes, it is possible, seeing as DiabloMiner already does this. DiabloMiner will submit a share no matter what; mere network problems will not stop it.
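
A minimal sketch of that store-and-resubmit behavior in C (illustrative, not DiabloMiner's actual Java; submit_share() is a hypothetical stand-in for the real HTTP getwork submission):

Code:
#include <stdbool.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

struct share {
    unsigned char header[80];  /* solved block header for this share */
    struct share *next;
};

static struct share *pending = NULL;

/* Hypothetical: POST the share to the pool; false while unreachable. */
extern bool submit_share(const unsigned char header[80]);

/* Called by the GPU threads whenever a share is found. */
void enqueue_share(const unsigned char header[80])
{
    struct share *s = malloc(sizeof *s);
    memcpy(s->header, header, sizeof s->header);
    s->next = pending;
    pending = s;
}

/* Networking side: retry until every queued share is submitted. A dead
 * connection delays shares; it never discards them. */
void flush_pending(void)
{
    while (pending) {
        if (!submit_share(pending->header)) {
            sleep(1);  /* connection down; back off and retry */
            continue;
        }
        struct share *done = pending;
        pending = pending->next;
        free(done);
    }
}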

padrino
Legendary
Offline

Activity: 1428
Merit: 1000

https://www.bitworks.io

July 01, 2011, 01:52:06 AM
#795

I didn't see this coming, but since bitcoins.lc patched their bitcoind to add support for socket reuse, my primary rig running Diablo started getting 15% rejects. My miners running at 400 MH/s or so are alright, but my main rig running at 2.3 GH/s is having a really hard time; other pools are no problem, but bitcoins.lc is hating that speed on one worker. The backend is pushpool.

How would I go about running an instance per GPU or something similar? I hate the idea of doing it, but it looks like that might be the only option.

1CPi7VRihoF396gyYYcs2AdTEF8KQG2BCR
https://www.bitworks.io
DiabloD3 (OP)
Legendary
Offline

Activity: 1162
Merit: 1000

DiabloMiner author

July 01, 2011, 02:01:25 AM
#796

I didn't see this coming, but since bitcoins.lc patched their bitcoind to add support for socket reuse, my primary rig running Diablo started getting 15% rejects. My miners running at 400 MH/s or so are alright, but my main rig running at 2.3 GH/s is having a really hard time; other pools are no problem, but bitcoins.lc is hating that speed on one worker. The backend is pushpool.

How would I go about running an instance per GPU or something similar? I hate the idea of doing it, but it looks like that might be the only option.

I think you mean keepalive support. DiabloMiner supports keepalive, and the use of it does not affect rejections either way.

It sounds like bitcoins.lc screwed something up.

padrino
Legendary
Offline

Activity: 1428
Merit: 1000

https://www.bitworks.io

July 01, 2011, 02:13:39 AM
#797

I think you mean keepalive support. DiabloMiner supports keepalive, and the use of it does not affect rejections either way.

It sounds like bitcoins.lc screwed something up.

Running with debug now, and seeing "Forcing getwork update due to nonce saturation" in bursts of 8-10 at once. Does that provide any hints?

DiabloD3 (OP)
Legendary
Offline

Activity: 1162
Merit: 1000

DiabloMiner author

July 01, 2011, 02:27:26 AM
#798

I think you mean keepalive support. DiabloMiner supports keepalive, and the use of it does not affect rejections either way.

It sounds like bitcoins.lc screwed something up.

Running with debug now, and seeing "Forcing getwork update due to nonce saturation" in bursts of 8-10 at once. Does that provide any hints?

Nope, only that you seem to have 4 or 5 GPUs.

padrino
Legendary
Offline

Activity: 1428
Merit: 1000

https://www.bitworks.io

July 01, 2011, 02:35:32 AM
#799

I think you mean keepalive support. DiabloMiner supports keepalive, and the use of it does not affect rejections either way.

It sounds like bitcoins.lc screwed something up.

Running with debug now, and seeing "Forcing getwork update due to nonce saturation" in bursts of 8-10 at once. Does that provide any hints?

Nope, only that you seem to have 4 or 5 GPUs.

6 GPUs on this one; holding off on turning on the other 2 GPUs until I get this under control.... bitcoins.lc is looking at it to see what they can figure out. Thanks for your help. It seems it's not an issue if I run a worker per GPU, keeping the per-worker hash rate lower.

While I'm posting, a quick note on another topic: I remember seeing a post stating -f 0 was not a good idea. I always run that on my poclbm miners; any insight on what it should be for dedicated cards on a headless system?

DiabloD3 (OP)
Legendary
Offline

Activity: 1162
Merit: 1000

DiabloMiner author

July 01, 2011, 02:38:40 AM
#800

I think you mean keepalive support. DiabloMiner supports keepalive, and the use of it does not affect rejections either way.

It sounds like bitcoins.lc screwed something up.

Running with debug now, and seeing "Forcing getwork update due to nonce saturation" in bursts of 8-10 at once. Does that provide any hints?

Nope, only that you seem to have 4 or 5 GPUs.

6 GPUs on this one; holding off on turning on the other 2 GPUs until I get this under control.... bitcoins.lc is looking at it to see what they can figure out. Thanks for your help. It seems it's not an issue if I run a worker per GPU, keeping the per-worker hash rate lower.

While I'm posting, a quick note on another topic: I remember seeing a post stating -f 0 was not a good idea. I always run that on my poclbm miners; any insight on what it should be for dedicated cards on a headless system?

If it is 6, you should be seeing about 12 in a burst.

-f 1 is recommended; it'll push it up significantly.

As for a miner per GPU, this won't fix your problem. It sounds like their patch just doesn't work right. I use a single networking thread to process all async getworks, and a single thread to process all async sendworks (i.e., two threads total). If it is trying to pair work to TCP sessions, then it is 100% broken, and the guy who wrote the patch doesn't understand how HTTP works either.
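
A minimal sketch of that two-thread networking model in C with pthreads (illustrative, not DiabloMiner's actual Java; do_getwork() and do_sendwork() are hypothetical stand-ins for the real JSON-RPC calls):

Code:
#include <pthread.h>

/* Hypothetical: fetch one work unit into a shared queue, and submit
 * one queued share, respectively. */
extern void do_getwork(void);
extern void do_sendwork(void);

static void *getwork_loop(void *arg)
{
    (void)arg;
    for (;;)
        do_getwork();   /* keep the shared work queue topped up */
    return NULL;
}

static void *sendwork_loop(void *arg)
{
    (void)arg;
    for (;;)
        do_sendwork();  /* drain the shared share queue */
    return NULL;
}

/* Two networking threads total, no matter how many GPUs are mining, so
 * the pool never sees per-GPU TCP sessions to pair work against. */
void start_network_threads(void)
{
    pthread_t getter, sender;
    pthread_create(&getter, NULL, getwork_loop, NULL);
    pthread_create(&sender, NULL, sendwork_loop, NULL);
}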
