Bitcoin Forum

Bitcoin => Mining software (miners) => Topic started by: FalconFour on September 03, 2011, 12:02:17 AM



Title: Look... all these GPU miners waste 100% CPU time for nothing... WHY?!
Post by: FalconFour on September 03, 2011, 12:02:17 AM
Can someone answer me this?

All these miners - Phoenix, ufasoft, cgminer, Diablo... every one I've tried so far, with both AMD and nVidia GPUs, always seem to run in a blind "game loop", consuming as much CPU power as is allocated to them. They don't use any intelligent control schemes to loop with less CPU, they just go 100% full-time.

That is HUGE. Power consumption is the #1 problem with Bitcoin hashing; for many people that don't consider it, they actually waste money by mining, getting a few bucks (and a warm fuzzy feeling), but then getting slapped with a huge power bill that eats it all away. I'm running my rig outside on the porch (2nd floor) to offset the 2:3 consumption:A/C cooling ratio problem (for about every 2 watts consumed by electronics, it takes 3 watts to remove the heat produced via air-conditioning). And even with that, it consumes 200 watts at 200MHash/sec on a 6770 and an underclocked (1.2GHz/200MHz/0.95v) Core 2 Quad. The meter reports I've spent $4.50 in electricity to produce ~0.57 Bitcoin over the past week (yuck!). That's not taking into account the losses in refining my mining methods, and it's due to improve, but that's a VERY tiny profit to be made from the amount of environmental resources consumed to get there...
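
A quick sanity check of those numbers (the electricity rate is derived from the figures given, not stated in the post):

```python
# Back-of-envelope check of the figures above.
watts = 200                  # measured wall draw of the rig
kwh_per_week = watts * 24 * 7 / 1000.0       # energy used in a week
cost_per_week = 4.50         # what the meter reported, USD
rate = cost_per_week / kwh_per_week          # implied electricity price, $/kWh

# The 2:3 consumption:cooling ratio: ~3 W of A/C per 2 W of electronics,
# so the same rig running indoors would effectively draw:
indoor_watts = watts * (1 + 3 / 2)

print(round(kwh_per_week, 1), round(rate, 3), indoor_watts)  # 33.6 0.134 500.0
```

So the implied rate is about 13.4 cents/kWh, and keeping the rig off the A/C avoids an effective 500 W indoor draw.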

One of the bigger oversights of GPU mining is the CPU factor. Having a miner eat up 100% of a single core makes the PC think there's an important process running that needs additional power to accelerate the process. That's completely untrue in Bitcoin mining - it just checks/updates the GPU's progress more often. Having a higher clock speed does NOTHING to increase the speed of the GPU process! Maybe a 1-2% change at worst, but going from full clock to minimal clock reduces the CPU's power consumption by more than half!
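
The "game loop" versus a proper blocking wait can be sketched in plain Python - a toy model, not miner code; `work_done` is a hypothetical stand-in for the OpenCL runtime's "kernel finished" signal:

```python
import threading

work_done = threading.Event()  # hypothetical stand-in for a GPU completion signal

def busy_poll():
    """What the spinning runtimes effectively do: re-check in a tight loop."""
    spins = 0
    while not work_done.is_set():
        spins += 1  # burns a full core re-checking the flag
    return spins

def blocking_wait():
    """What a sync-object wait does: park the thread until signalled."""
    work_done.wait()  # ~0% CPU while waiting
    return True

# Simulate the GPU finishing a work unit after 50 ms:
threading.Timer(0.05, work_done.set).start()
print(blocking_wait())  # True, with essentially no CPU burned while waiting
```

A runtime that spins behaves like `busy_poll`; one that waits on driver sync objects behaves like `blocking_wait`, and that choice, not the miner's hash rate, is what decides the CPU load.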

I brought this up once in the Phoenix thread, but got pretty much ignored with just one reply suggesting it was my nVidia drivers at fault. Now that I'm using ATI and seeing exactly the same behavior (and also with an nVidia Quadro driver, which is completely different), I know it's not the driver. It's the miner's loop. I've just got to wonder.... has anyone taken steps to address this in their setups? And are the developers of these miners able to do anything about the "game loop" problem?


Title: Re: Look... all these GPU miners waste 100% CPU time for nothing... WHY?!
Post by: simonk83 on September 03, 2011, 12:09:53 AM
http://forums.amd.com/devforum/messageview.cfm?catid=390&threadid=153211&enterthread=y


Title: Re: Look... all these GPU miners waste 100% CPU time for nothing... WHY?!
Post by: Vladimir on September 03, 2011, 12:15:19 AM
fix your drivers

what ArtForz said here might be helpful:

https://bitcoin.org.uk/forums/topic/82-getting-full-performance-out-of-hd6990/


Title: Re: Look... all these GPU miners waste 100% CPU time for nothing... WHY?!
Post by: simonk83 on September 03, 2011, 12:20:06 AM
Thanks for reporting this, this is a known issue with multi-gpu configurations and we are working on a solution.

-------------------------
Micah Villmow
Advanced Micro Devices Inc.
-------------------------

From the AMD forum link above.


Title: Re: Look... all these GPU miners waste 100% CPU time for nothing... WHY?!
Post by: FalconFour on September 03, 2011, 01:38:04 AM
None of that has anything to do with what I said... I never even mentioned multi-GPU systems and I even said it originally occurred with nVidia but also happens with AMD...  ???

I would just love to see one single screenshot of anyone running Phoenix without it consuming 100% of a core (i.e. 25% on a quad, 50% on a dual, etc). Anyone? I honestly don't know of a single configuration where any miner - any GPU miner at all - can operate without eating up a full CPU core...

edit: Here's my "list of shame" - everything I've run a miner on, hence everything that runs with 100% CPU - from the thread at https://bitcointalk.org/index.php?topic=6458.msg467406#msg467406 :
  • Atom D510 with nVidia Ion (with clock tweaks, runs stable at 4.66 Mhash/sec)
  • Core 2 Quad Q6600 with nVidia GeForce 8800GTS (tweaked, runs stable at ~22.5 Mhash/sec)
  • Core 2 Duo with Quadro NVS 290 (stock, ~3Mhash/sec)
  • Core i5 M430 with nVidia GeForce GT 325M (Optimus, tweaked, stable at ~12Mhash/sec)
  • Core 2 Quad Q6600 with Sapphire HD6770
All running Windows 7 SP1 x64, all the latest drivers from nVidia or AMD. Even when the AMD OpenCL package is installed (by itself) with an nVidia GPU - which also allows the system to crunch OpenCL using the CPU - the effect is the same: 100% usage of one core.


Title: Re: Look... all these GPU miners waste 100% CPU time for nothing... WHY?!
Post by: Coolhwip on September 03, 2011, 01:46:32 AM
AMD 11.6 drivers using Phoenix with one of the earlier revisions of Diapolo's modified phatk kernel (i7-920 with an HD6970 rig). I've upgraded to newer kernels, drivers and dual GPUs on that rig now, but my brother's rig still has that same combo. I'll take a screenshot of it later tonight.


Title: Re: Look... all these GPU miners waste 100% CPU time for nothing... WHY?!
Post by: bcforum on September 03, 2011, 04:00:12 AM
I would just love to see one single screenshot of anyone running Phoenix without it consuming 100% of a core (i.e. 25% on a quad, 50% on a dual, etc). Anyone? I honestly don't know of a single configuration where any miner - any GPU miner at all - can operate without eating up a full CPU core...

I'm running phoenix-r112, Cat 11.6, Ubuntu 11.04, SDK 2.4 and my CPU usage is:

<drumroll>

0%

Seriously, the NX client uses more CPU to update the remote display than Phoenix does.

Too lazy to post a screenshot though....


Title: Re: Look... all these GPU miners waste 100% CPU time for nothing... WHY?!
Post by: johnj on September 03, 2011, 04:04:50 AM
I would just love to see one single screenshot

https://i.imgur.com/s0eCU.png


Title: Re: Look... all these GPU miners waste 100% CPU time for nothing... WHY?!
Post by: FalconFour on September 03, 2011, 04:22:23 AM
... whuuut the fiiiduzzle. o_O That seems to be breaking the laws of physics.

How the hell am I a geek with 14 years of analyzing low-level software/hardware interactions, breaking apart and studying Windows, doing all the things that "can't be done" and "not recommended" and proving them wrong, writing interface code for microcontrollers to help repair PC problems, manually editing an offline registry to install non-default boot drivers for a new RAID controller that BSOD's 0x7b on startup, disassembling and editing assembly code of utilities to hack around undesirable blocks and error conditions, correcting major thermal/electrical/software flaws in every single homebuilt PC I'd ever seen...
And yet... SOMEHOW... of all the PCs I'd done all the latest and updated things to set up... I have never seen a single one of those systems do anything but eat up 100% of a CPU core during crunching?

...

Now that I know it's possible, the new task for tonight: figure out what the hell I've been doing wrong =P

edit: Hmm, could it be that I've got Catalyst 11.8? I was kinda wondering why the new Vision Engine control panel was missing and it was still called Catalyst Control Center instead... but AMD branding screws with my head, maybe it's the same thing. Well, if anyone could save me the hassle, I'm going to go research why the people above are saying 11.6 ;)


Title: Re: Look... all these GPU miners waste 100% CPU time for nothing... WHY?!
Post by: johnj on September 03, 2011, 04:29:14 AM
I had 100% on SDK 2.4.  I rolled back to 2.1, runnin smooth.


Title: Re: Look... all these GPU miners waste 100% CPU time for nothing... WHY?!
Post by: kripz on September 03, 2011, 05:00:16 AM
... whuuut the fiiiduzzle. o_O That seems to be breaking the laws of physics.

How the hell am I a geek with 14 years of analyzing low-level software/hardware interactions, breaking apart and studying Windows, doing all the things that "can't be done" and "not recommended" and proving them wrong, writing interface code for microcontrollers to help repair PC problems, manually editing an offline registry to install non-default boot drivers for a new RAID controller that BSOD's 0x7b on startup, disassembling and editing assembly code of utilities to hack around undesirable blocks and error conditions, correcting major thermal/electrical/software flaws in every single homebuilt PC I'd ever seen...
And yet... SOMEHOW... of all the PCs I'd done all the latest and updated things to set up... I have never seen a single one of those systems do anything but eat up 100% of a CPU core during crunching?

...

Now that I know it's possible, the new task for tonight: figure out what the hell I've been doing wrong =P

edit: Hmm, could it be that I've got Catalyst 11.8? I was kinda wondering why the new Vision Engine control panel was missing and it was still called Catalyst Control Center instead... but AMD branding screws with my head, maybe it's the same thing. Well, if anyone could save me the hassle, I'm going to go research why the people above are saying 11.6 ;)

Please, re-read the first 4 posts...


Title: Re: Look... all these GPU miners waste 100% CPU time for nothing... WHY?!
Post by: FalconFour on September 03, 2011, 05:11:13 AM
Please, re-read the first 4 posts...
Why, what, did they change? *scrolls up*...

First reply (2nd "post" if you're counting my own, which of course I re-read after posting for accuracy): irrelevant forum post.
Second reply: Another irrelevant forum post referring to an unrelated issue.
Third reply: A quote from an irrelevant forum post.
Fourth reply: my reply saying that these were irrelevant to the issue.

... And this, now the 11th reply, is me saying "what part of this don't you understand, to the point where you think you can talk down to me like 'please re-read'?". In fact:

What part of this don't you understand, to the point where you think you can talk down to me like "please re-read"?

I found a post describing difference in behavior between 11.6, 11.7, and 11.8 (https://bitcointalk.org/index.php?topic=33611.0), which brought me to uninstalling, cleaning (manually, via registry & file system - don't feel like installing software to uninstall software), and installing 11.6. Now we'll see how that runs... and I'll owe someone a beer that can actually make a visible post about this very most recent issue regarding 11.8 (the current release) of Catalyst. Believe me... I looked for an "AMD tips and info" thread. Didn't find one. Hell, maybe I'll throw a 0.02bcn donation to the first guy that can actually reply in a civilized manner, like the experienced tech I am, not some kind of computer-newbie...


Title: Re: Look... all these GPU miners waste 100% CPU time for nothing... WHY?!
Post by: FalconFour on September 03, 2011, 05:56:55 AM
Wow. HUGE difference going from 11.8 to 11.6. The CPU usage was, at first, lower, but as I changed settings around to make it better, it actually got worse, and back to 25% again (quad core - but U/C+U/V'd to 1.2GHz). Performance was roughly the same the whole time I was changing things. I finally found a combination of options that results in ~3% CPU (remember, 1.2GHz), and a ~25 watt power reduction with the new drivers and lower CPU usage! I'm now using Phoenix 1.6.2 with the rollntime mod (https://bitcointalk.org/index.php?topic=38629.0) and modded phatk (https://bitcointalk.org/index.php?topic=25860.0) with the options VECTORS2 DEVICE=0 WORKSIZE=256 FASTLOOP=true - it was actually removing "aggression=(tried 10~15)" and turning FASTLOOP back on that caused the dramatic reduction in CPU. It now runs at 160MH/s, but is returning results at the same blistering rate as I'm used to; I've also found that hash rate is an inaccurate measure of performance, as different settings vary wildly in their result-return rate, so I'm happy with 160 until I find something better (also not O/Cing or U/Cing the GPU since I haven't yet found how to get OverDrive to U/C the RAM).
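
For reference, those settings as a complete (hypothetical) Phoenix command line - the pool URL and worker credentials are placeholders, not from this thread:

```shell
# Phoenix 1.6.2 + modded phatk, the low-CPU option set described above.
# Pool URL and worker credentials below are placeholders only.
python phoenix.py -u http://worker:password@pool.example.com:8332/ \
    -k phatk VECTORS2 DEVICE=0 WORKSIZE=256 FASTLOOP=true
```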

That said, I've gotta say thanks for showing that a properly configured system *is* indeed capable of using less process time than 100% (even if not 0%, which is understandable). johnj, since you were the guy that posted what I actually semi-rhetorically asked for (and was the key to understanding the situation), I owe 'ya an e-beer. Drop an address and I'll cough up 0.02 (out of the 0.7 I've mined in the last 2 weeks, ugh), just as thanks for not posting crap like "hurp multi-gpu known issue noob" (when I'm not even dual-GPUing) ;)


Title: Re: Look... all these GPU miners waste 100% CPU time for nothing... WHY?!
Post by: johnj on September 03, 2011, 06:28:24 AM
I'm on a 5770, and I *think* its the same as the 6770.  I'm runnin VECTORS AGGRESSION=7 WORKSIZE=256 and getting 230mh/s with 1000/300 clocks.

I was running 11.6 on SDK 2.4 getting 226mh/s on the same settings.

Keep the .02... the transaction fee would eat half of that :o

Edit:  https://i.imgur.com/OPEHt.png


Title: Re: Look... all these GPU miners waste 100% CPU time for nothing... WHY?!
Post by: Vladimir on September 03, 2011, 08:20:50 AM
Please, re-read the first 4 posts...

LOL, indeed. Noobs tend to ignore one line answers to their one page questions.


Title: Re: Look... all these GPU miners waste 100% CPU time for nothing... WHY?!
Post by: Gabi on September 03, 2011, 11:31:12 AM
I'm mining with 0% cpu usage, is my computer extraterrestrial???


Title: Re: Look... all these GPU miners waste 100% CPU time for nothing... WHY?!
Post by: MiningBuddy on September 03, 2011, 11:59:31 AM
I've always wondered why so many people struggle with this.
Between my 5 5870's and 2 6990's I've never once experienced this issue, I run various driver versions from 11.1 -> 11.8, 2 linux boxes & a Windows machine, varied SDK versions 2.1 -> 2.4 and all using phoenix.
Maybe I'm just lucky  :-\


Title: Re: Look... all these GPU miners waste 100% CPU time for nothing... WHY?!
Post by: bcforum on September 03, 2011, 01:33:43 PM
I've always wondered why so many people struggle with this.
Between my 5 5870's and 2 6990's I've never once experienced this issue, I run various driver versions from 11.1 -> 11.8, 2 linux boxes & a Windows machine, varied SDK versions 2.1 -> 2.4 and all using phoenix.
Maybe I'm just lucky  :-\

Maybe you read the instructions before you started randomly plugging things together?


Title: Re: Look... all these GPU miners waste 100% CPU time for nothing... WHY?!
Post by: sveetsnelda on September 03, 2011, 06:09:16 PM
In case anyone is still confused, this is only a *Windows* issue on later versions of AMD's OpenCL SDK.  Earlier versions do not have this issue.  However, as far as I know, none of those earlier versions supported the 6000 series cards (but I could be wrong here).

On Linux, the GPU_USE_SYNC_OBJECTS environment variable can be used to change this behavior.  The Windows version of the SDK doesn't seem to have a way to change this.  I ran amdocl.dll through IDA (a disassembler) and I see GPU_USE_SYNC_OBJECTS as a compile-time variable, but it doesn't look like it checks for a registry key/environment variable/etc at runtime.  I'm sure that there is a way to alter the DLL to make it happen, but without having the debug symbols for the DLL, it could take a really long time to find.
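
In practice that means exporting the variable in the shell that launches the miner (the miner command itself is shown only as a placeholder):

```shell
# Tell AMD's OpenCL runtime to wait on sync objects instead of spinning.
export GPU_USE_SYNC_OBJECTS=1
# Launch the miner from this same shell so it inherits the variable, e.g.:
# python phoenix.py -u http://worker:password@pool.example.com:8332/ -k phatk ...
echo "$GPU_USE_SYNC_OBJECTS"
```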


Title: Re: Look... all these GPU miners waste 100% CPU time for nothing... WHY?!
Post by: BkkCoins on September 04, 2011, 01:23:50 AM
On Linux, the GPU_USE_SYNC_OBJECTS environment variable can be used to change this behavior. 
I had 100% cpu problems on Linux before rolling back to the 11.6 driver. So you're saying with this environment variable set I could probably move up to newer drivers without the cpu going bananas?

Is there a good reason on Linux for wanting to use newer drivers? I should probably resist the temptation to muck up a system running smoothly now.


Title: Re: Look... all these GPU miners waste 100% CPU time for nothing... WHY?!
Post by: iopq on September 04, 2011, 12:07:14 PM
tl;dr use 11.6 with 2.1 SDK
http://developer.amd.com/sdks/AMDAPPSDK/downloads/pages/AMDAPPSDKDownloadArchive.aspx


Title: Re: Look... all these GPU miners waste 100% CPU time for nothing... WHY?!
Post by: sveetsnelda on September 05, 2011, 12:01:54 AM
I had 100% cpu problems on Linux before rolling back to the 11.6 driver. So you're saying with this environment variable set I could probably move up to newer drivers without the cpu going bananas?
Correct.

Is there a good reason on Linux for wanting to use newer drivers? I should probably resist the temptation to muck up a system running smoothly now.
Good question.  I'm actually not sure.  I doubt it though.


Title: Re: Look... all these GPU miners waste 100% CPU time for nothing... WHY?!
Post by: -ck on September 07, 2011, 01:00:32 AM
On Linux, 11.6 driver with ANY sdk is fine. 2.1, 2.4 and 2.5 do not chew up CPU. It's all in the driver. 11.7 and 11.8 drivers use 100% CPU no matter how many cards you use, nor if you set that environment variable or not.

A fix for that would be most appreciated.


Title: Re: Look... all these GPU miners waste 100% CPU time for nothing... WHY?!
Post by: ssateneth on September 07, 2011, 11:40:07 PM
tl;dr use 11.6 with 2.1 SDK
http://developer.amd.com/sdks/AMDAPPSDK/downloads/pages/AMDAPPSDKDownloadArchive.aspx

I tried 11.6 with the 2.1 SDK and Phoenix 1.6.2 with Diapolo's phatk kernel (phatk 2.2 doesn't support SDK 2.1 for some reason), still had 100% CPU with 1 graphics card. Would've tried GUIMiner but it needs SDK 2.4 or higher.

sapphire 5830, win7 x64 ultimate, above stuff.


Title: Re: Look... all these GPU miners waste 100% CPU time for nothing... WHY?!
Post by: FalconFour on September 08, 2011, 12:00:48 AM
I tried 11.6 with 2.1 sdk and phoenix 1.6.2 with diapolo's phatk kernel (phatk 2.2 doesnt support sdk 2.1 for some reason), still had 100% cpu with 1 graphics card. wouldve tried guiminer but it needs 2.4 sdk or higher.

sapphire 5830, win7 x64 ultimate, above stuff.
"durr, hurr, this is noobie stuff, hurr, you're doing it wrong, hurr hurr read the note on your forehead, hurr."

- quoth the beginning of this topic.
(to: everyone else ITT, not at quoted:)
Guess it's not such an easy problem to solve after all, huh. Maybe if people would look past the low post count and actually read what people (e.g. I) actually wrote in their posts, more stuff like this would come to light and get resolved. Are we done durr-hurring that everyone except the little falcon-brain has their stuff working 100% properly?

BTW, I found that the CPU-usage problem occurs any time the client tries to make the GPU work harder than it wants to work, with the "golden" 11.6 drivers all set up properly. Feed too high an "aggression" setting at it, and it'll slow the desktop responsiveness down, but send CPU usage through the roof. Pretty easy to screw up a setting like that, considering there are no warnings against doing so... doesn't sound so "durr hurr" to me.


Title: Re: Look... all these GPU miners waste 100% CPU time for nothing... WHY?!
Post by: iopq on September 08, 2011, 12:59:08 AM
tl;dr use 11.6 with 2.1 SDK
http://developer.amd.com/sdks/AMDAPPSDK/downloads/pages/AMDAPPSDKDownloadArchive.aspx

I tried 11.6 with the 2.1 SDK and Phoenix 1.6.2 with Diapolo's phatk kernel (phatk 2.2 doesn't support SDK 2.1 for some reason), still had 100% CPU with 1 graphics card. Would've tried GUIMiner but it needs SDK 2.4 or higher.

sapphire 5830, win7 x64 ultimate, above stuff.
what? I've used phatk2.2 with 2.1 SDK


Title: Re: Look... all these GPU miners waste 100% CPU time for nothing... WHY?!
Post by: ssateneth on September 08, 2011, 07:19:12 PM
phatk 2.2 was saying that it doesn't support my graphics cards when I had 11.6 and 2.1 SDK installed.

Regardless, I tried Diapolo's phatk and the CPU bug was there anyway. Too much of a headache, so I went back to 11.8 / SDK 2.5.


Title: Re: Look... all these GPU miners waste 100% CPU time for nothing... WHY?!
Post by: ArtForz on September 08, 2011, 07:34:05 PM
Just tested, cgminer happily mining at 1% CPU on a win7 box on dual 6970s, cat 11.6 (driver only) + sdk 2.4


Title: Re: Look... all these GPU miners waste 100% CPU time for nothing... WHY?!
Post by: PcChip on September 09, 2011, 05:10:42 AM
All five of my Win7 boxes have always had Phoenix1.5 and CGminer eat up 100% of the CPUs with various catalyst versions and always SDK2.4+


To the OP - For changing ram clocks, try MSI Afterburner Beta, TriXX, and BarelyClocked.  I use combinations of those utilities throughout my Win7 boxes on my LAN with TightVNC and have every card at 300 MHz RAM (except the one my 3 year old uses to play Starcraft2, obviously)


Title: Re: Look... all these GPU miners waste 100% CPU time for nothing... WHY?!
Post by: iopq on September 09, 2011, 09:12:02 AM
All five of my Win7 boxes have always had Phoenix1.5 and CGminer eat up 100% of the CPUs with various catalyst versions and always SDK2.4+


To the OP - For changing ram clocks, try MSI Afterburner Beta, TriXX, and BarelyClocked.  I use combinations of those utilities throughout my Win7 boxes on my LAN with TightVNC and have every card at 300 MHz RAM (except the one my 3 year old uses to play Starcraft2, obviously)
sc2 runs fine for me, even while mining
even while underclocking ram


Title: Re: Look... all these GPU miners waste 100% CPU time for nothing... WHY?!
Post by: PrEzi on September 11, 2011, 09:38:43 AM
New leaked version of Catalyst drivers 11.9

Build Info:
 DriverVer=08/10/2011, 8.890.0.0000
 8.89-110810a2-124125C

It is free of the 100% CPU usage during OpenCL computations!
You can get it from Guru3D forums
http://forums.guru3d.com/showthread.php?t=350638


Title: Re: Look... all these GPU miners waste 100% CPU time for nothing... WHY?!
Post by: CosicMiner on September 11, 2011, 07:18:39 PM
Trying out 11.9 and still experiencing 100% CPU.
Win7 X64 Dual 6990 rig.


Title: Re: Look... all these GPU miners waste 100% CPU time for nothing... WHY?!
Post by: Grinder on September 12, 2011, 12:23:54 PM
Just tested, cgminer happily mining at 1% CPU on a win7 box on dual 6970s, cat 11.6 (driver only) + sdk 2.4
Can you specify what you mean by "driver only"? I have W7 64 bit, 2x6950, cat 11.6, sdk 2.4 and cgminer, and it still uses 100% cpu.


Title: Re: Look... all these GPU miners waste 100% CPU time for nothing... WHY?!
Post by: Starcraftman on September 12, 2011, 12:41:26 PM
New leaked version of Catalyst drivers 11.9

Build Info:
 DriverVer=08/10/2011, 8.890.0.0000
 8.89-110810a2-124125C

It is free of the 100% CPU usage during OpenCL computations!
You can get it from Guru3D forums
http://forums.guru3d.com/showthread.php?t=350638

This mostly worked: CPU usage reduced to 2-3 percent with Phoenix 1.6.2, WORKSIZE=128 AGGRESSION=8. Oddly, CPU usage goes up with any aggression level higher or lower than 8 for me.


Title: Re: Look... all these GPU miners waste 100% CPU time for nothing... WHY?!
Post by: gogusrl on September 13, 2011, 03:58:17 PM
https://i.imgur.com/hXPNWl.jpg (http://imgur.com/hXPNW)

Sempron 140 with 3 x 6950 modded to 6970. Cata 11.6. Hashing speed drops when I connect with teamviewer on the computer and ~10% cpu is teamviewer alone.


Title: Re: Look... all these GPU miners waste 100% CPU time for nothing... WHY?!
Post by: flower1024 on September 13, 2011, 04:14:52 PM
Hell, maybe I'll throw a 0.02bcn donation to the first guy that can actually reply in a civilized manner, like the experienced tech I am, not some kind of computer-newbie...

Did you try cgminer, and did it have the same issue?

If yes: driver fault. If no: you're right and it could be Phoenix.

Sorry, can't help further.

Did I qualify for the 0.02 BTC? :D


Title: Re: Look... all these GPU miners waste 100% CPU time for nothing... WHY?!
Post by: Grinder on September 13, 2011, 07:32:05 PM
Perhaps it's because my Windows machine has an Intel CPU? Anyone with Intel able to avoid 100% CPU usage with more than one AMD 69xx GPU?


Title: Re: Look... all these GPU miners waste 100% CPU time for nothing... WHY?!
Post by: Xenomorph on September 13, 2011, 08:12:36 PM
The first time I experienced the 100% CPU bug was with an NVidia card (GeForce GT 330M). When I started mining I tried that and other NVidia cards (including a 9800 GX2 and 8800 GT).
Things were mostly fine when I switched to ATI cards. Two 5830s and a 5750. Catalyst 11.6 drivers, 0% CPU.

When I finally upgraded to Catalyst 11.8, every single system hit 100% CPU. Setting the priority to "Below Normal" prevents it from slowing things down, but my CPUs still generate heat at 100% load, 24/7.
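
Rather than changing the priority by hand in Task Manager each time, it can be set at launch (the miner path and arguments here are placeholders):

```shell
:: Windows cmd sketch: start the miner already at Below Normal priority,
:: so the spinning thread at least yields to interactive programs.
start "miner" /belownormal /b C:\miners\phoenix.exe -k phatk DEVICE=0 WORKSIZE=128
```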

I use GUIMiner, and it doesn't recognize my ATI cards when I use older drivers and SDK.


Title: Re: Look... all these GPU miners waste 100% CPU time for nothing... WHY?!
Post by: ssateneth on September 14, 2011, 09:57:00 PM
https://i.imgur.com/hXPNWl.jpg (http://imgur.com/hXPNW)

Sempron 140 with 3 x 6950 modded to 6970. Cata 11.6. Hashing speed drops when I connect with teamviewer on the computer and ~10% cpu is teamviewer alone.

which sdk? 2.1, 2.4, or 2.5?


Title: Re: Look... all these GPU miners waste 100% CPU time for nothing... WHY?!
Post by: Grinder on September 15, 2011, 08:23:07 AM
which sdk? 2.1, 2.4, or 2.5?
2.1 never gives 100% cpu, but unfortunately that doesn't support AMD 6xxx GPUs.


Title: Re: Look... all these GPU miners waste 100% CPU time for nothing... WHY?!
Post by: bcforum on September 15, 2011, 03:55:39 PM
which sdk? 2.1, 2.4, or 2.5?

I got a very slight increase (0.1%) in performance going from 2.4 to 2.5


Title: Re: Look... all these GPU miners waste 100% CPU time for nothing... WHY?!
Post by: Mousepotato on September 15, 2011, 04:24:31 PM
I never really experienced the 100% CPU issue with 11.5/2.1.  However I just did the full upgrade bundle with 11.8 a few days ago and sure enough my CPU meter is pegged at 100%.  So then I completely uninstalled the drivers and re-installed 11.5/2.1... aaaaand my CPU is still at 100% during mining.  FML


Title: Re: Look... all these GPU miners waste 100% CPU time for nothing... WHY?!
Post by: hitndahedfred on June 18, 2013, 02:13:52 PM
Can someone answer me this?

All these miners - Phoenix, ufasoft, cgminer, Diablo... every one I've tried so far, with both AMD and nVidia GPUs, always seem to run in a blind "game loop", consuming as much CPU power as is allocated to them. They don't use any intelligent control schemes to loop with less CPU, they just go 100% full-time.

That is HUGE. Power consumption is the #1 problem with Bitcoin hashing; for many people that don't consider it, they actually waste money by mining, getting a few bucks (and a warm fuzzy feeling), but then getting slapped with a huge power bill that eats it all away. I'm running my rig outside on the porch (2nd floor) to offset the 2:3 consumption:A/C cooling ratio problem (for about every 2 watts consumed by electronics, it takes 3 watts to remove the heat produced via air-conditioning). And even with that, it consumes 200 watts at 200MHash/sec on a 6770 and an underclocked (1.2GHz/200MHz/0.95v) Core 2 Quad. The meter reports I've spent $4.50 in electricity to produce ~0.57 Bitcoin over the past week (yuck!). That's not taking into account the losses in refining my mining methods, and it's due to improve, but that's a VERY tiny profit to be made from the amount of environmental resources consumed to get there...

One of the bigger oversights of GPU mining is the CPU factor. Having a miner eat up 100% of a single core makes the PC think there's an important process running that needs additional power to accelerate the process. That's completely untrue in Bitcoin mining - it just checks/updates the GPU's progress more often. Having a higher clock speed does NOTHING to increase the speed of the GPU process! Maybe a 1-2% change at worst, but going from full clock to minimal clock reduces the CPU's power consumption by more than half!

I brought this up once in the Phoenix thread, but got pretty much ignored with just one reply suggesting it was my nVidia drivers at fault. Now that I'm using ATI and seeing exactly the same behavior (and also with an nVidia Quadro driver, which is completely different), I know it's not the driver. It's the miner's loop. I've just got to wonder.... has anyone taken steps to address this in their setups? And are the developers of these miners able to do anything about the "game loop" problem?
===========================================================================================================

I have found that the ATI drivers written after version 11.6 are buggy: they automatically use 100% of a CPU core. Even though some of the "newer" drivers appear to perform "better", that is not necessarily the case.

I have read that it can be remedied by setting this environment variable in your mining batch file:

setx GPU_USE_SYNC_OBJECTS 1

Hope this helps.


Title: Re: Look... all these GPU miners waste 100% CPU time for nothing... WHY?!
Post by: crazyates on June 18, 2013, 09:48:51 PM
Herpy Derp
http://i34.photobucket.com/albums/d138/trebs8/mccoyDeadThread.jpg