FalconFour (OP)
September 03, 2011, 12:02:17 AM
Can someone answer me this?
All these miners - Phoenix, ufasoft, cgminer, Diablo... every one I've tried so far, with both AMD and nVidia GPUs, seems to run in a blind "game loop", consuming as much CPU time as it can get. They don't use any intelligent control scheme to loop with less CPU; they just go 100% full-time.
That is HUGE. Power consumption is the #1 problem with Bitcoin hashing; many people who don't consider it actually lose money by mining, earning a few bucks (and a warm fuzzy feeling) but then getting slapped with a power bill that eats it all away. I'm running my rig outside on the porch (2nd floor) to offset the 2:3 consumption-to-A/C-cooling ratio problem (for about every 2 watts consumed by electronics, it takes 3 watts to remove the heat produced via air-conditioning). Even with that, it consumes 200 watts at 200 MHash/sec on a 6770 and an underclocked (1.2GHz/200MHz/0.95v) Core 2 Quad. The meter reports I've spent $4.50 in electricity to produce ~0.57 Bitcoin over the past week (yuck!). That doesn't count the losses from refining my mining methods, and it's due to improve, but that's a VERY tiny profit for the amount of environmental resources consumed to get there...
One of the bigger oversights of GPU mining is the CPU factor. A miner eating 100% of a single core makes the PC's power management think there's an important process running that needs the clocks ramped up. That's completely pointless in Bitcoin mining - the loop just checks/updates the GPU's progress more often. A higher CPU clock does NOTHING to increase the speed of the GPU work - maybe a 1-2% change at most - but going from full clock to minimal clock cuts the CPU's power consumption by more than half!
I brought this up once in the Phoenix thread, but got pretty much ignored with just one reply suggesting it was my nVidia drivers at fault. Now that I'm using ATI and seeing exactly the same behavior (and also with an nVidia Quadro driver, which is completely different), I know it's not the driver. It's the miner's loop. I've just got to wonder.... has anyone taken steps to address this in their setups? And are the developers of these miners able to do anything about the "game loop" problem?
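To make the complaint concrete, here is a minimal sketch of the two ways a host program can wait for a GPU kernel, written with pyopencl. This is illustration only - it is not code from Phoenix or any of the miners above, and the kernel, names and sizes are throwaway placeholders. The first wait is the blind "game loop" described above and pins a core at 100%; the second hands the wait to the driver. Whether that blocking wait actually sleeps instead of spinning internally is up to the OpenCL runtime.
Code:
import numpy as np
import pyopencl as cl

# Illustration only - a throwaway kernel, not a miner.
ctx = cl.create_some_context()
queue = cl.CommandQueue(ctx)

src = """
__kernel void busywork(__global uint *out) {
    uint x = get_global_id(0);
    for (int i = 0; i < 100000; i++) x = x * 1103515245u + 12345u;
    out[get_global_id(0)] = x;
}
"""
prg = cl.Program(ctx, src).build()
n = 1 << 20
out = cl.Buffer(ctx, cl.mem_flags.WRITE_ONLY, n * 4)

# Style 1: the "game loop" - poll the event status in a tight loop.
# The CPU computes nothing useful here, but one core sits at 100% the whole time.
evt = prg.busywork(queue, (n,), None, out)
queue.flush()
while evt.get_info(cl.event_info.COMMAND_EXECUTION_STATUS) != cl.command_execution_status.COMPLETE:
    pass

# Style 2: block until the driver signals completion. Ideally this puts the
# thread to sleep; whether it really does (or spins internally) is up to the driver/SDK.
evt = prg.busywork(queue, (n,), None, out)
evt.wait()

result = np.empty(n, dtype=np.uint32)
cl.enqueue_copy(queue, result, out)
The point is that the host-side wait strategy, not the CPU's clock speed, decides how much CPU the miner appears to use.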
feed the bird: 187CXEVzakbzcANsyhpAAoF2k6KJsc55P1 (BTC) / LiRzzXnwamFCHoNnWqEkZk9HknRmjNT7nU (LTC)
simonk83
September 03, 2011, 12:09:53 AM
Vladimir
September 03, 2011, 12:15:19 AM
-
simonk83
September 03, 2011, 12:20:06 AM Last edit: September 03, 2011, 12:32:05 AM by simonk83
Thanks for reporting this, this is a known issue with multi-gpu configurations and we are working on a solution.
- Micah Villmow, Advanced Micro Devices Inc.
From the AMD forum link above.
FalconFour (OP)
September 03, 2011, 01:38:04 AM Last edit: September 03, 2011, 01:52:05 AM by FalconFour
None of that has anything to do with what I said... I never even mentioned multi-GPU systems, and I even said it originally occurred with nVidia but also happens with AMD... I would just love to see one single screenshot of anyone running Phoenix without it consuming 100% of a core (i.e. 25% on a quad, 50% on a dual, etc). Anyone? I honestly don't know of a single configuration where any miner - any GPU miner at all - can operate without eating up a full CPU core...
edit: Here's my "list of shame" - everything I've run a miner on, hence everything that runs with 100% CPU - from the thread at https://bitcointalk.org/index.php?topic=6458.msg467406#msg467406 :
- Atom D510 with nVidia Ion (with clock tweaks, runs stable at 4.66 Mhash/sec)
- Core 2 Quad Q6600 with nVidia GeForce 8800GTS (tweaked, runs stable at ~22.5 Mhash/sec)
- Core 2 Duo with Quadro NVS 290 (stock, ~3Mhash/sec)
- Core i5 M430 with nVidia GeForce GT 325M (Optimus, tweaked, stable at ~12Mhash/sec)
- Core 2 Quad Q6600 with Sapphire HD6770
All running Windows 7 SP1 x64, all the latest drivers from nVidia or AMD. Even when the AMD OpenCL package is installed (by itself) with an nVidia GPU - which also allows the system to crunch OpenCL using the CPU - the effect is the same: 100% usage of one core.
feed the bird: 187CXEVzakbzcANsyhpAAoF2k6KJsc55P1 (BTC) / LiRzzXnwamFCHoNnWqEkZk9HknRmjNT7nU (LTC)
Coolhwip
September 03, 2011, 01:46:32 AM
AMD 11.6 drivers using Phoenix with one of the earlier revisions of Diapolo's modified phatk kernel (i7-920 with an HD6970 rig). I've upgraded to newer kernels, drivers and dual GPUs on that rig now, but my brother's rig still has that same combo. I'll take a screenshot of it later tonight.
bcforum
September 03, 2011, 04:00:12 AM
I would just love to see one single screenshot of anyone running Phoenix without it consuming 100% of a core (i.e. 25% on a quad, 50% on a dual, etc). Anyone? I honestly don't know of a single configuration where any miner - any GPU miner at all - can operate without eating up a full CPU core...
I'm running phoenix-r112, Cat 11.6, Ubuntu 11.04, SDK 2.4, and my CPU usage is: <drumroll> 0%. Seriously, the NX client uses more CPU to update the remote display than Phoenix does. Too lazy to post a screenshot though...
If you found this post useful, feel free to share the wealth: 1E35gTBmJzPNJ3v72DX4wu4YtvHTWqNRbM
johnj
September 03, 2011, 04:04:50 AM
I would just love to see one single screenshot
1AeW7QK59HvEJwiyMztFH1ubWPSLLKx5ym TradeHill Referral TH-R120549
FalconFour (OP)
September 03, 2011, 04:22:23 AM
... whuuut the fiiiduzzle. o_O That seems to be breaking the laws of physics. How the hell am I a geek with 14 years of analyzing low-level software/hardware interactions, breaking apart and studying Windows, doing all the things that "can't be done" and "not recommended" and proving them wrong, writing interface code for microcontrollers to help repair PC problems, manually editing an offline registry to install non-default boot drivers for a new RAID controller that BSODs 0x7B on startup, disassembling and editing assembly code of utilities to hack around undesirable blocks and error conditions, correcting major thermal/electrical/software flaws in every single homebuilt PC I'd ever seen... and yet, SOMEHOW, on every PC I've set up with all the latest drivers and updates, I have never seen a single one of those systems do anything but eat up 100% of a CPU core while crunching?
... Now that I know it's possible, the new task for tonight: figure out what the hell I've been doing wrong =P
edit: Hmm, could it be that I've got Catalyst 11.8? I was kinda wondering why the new Vision Engine control panel was missing and it was still called Catalyst Control Center instead... but AMD branding screws with my head, maybe it's the same thing. Well, if anyone could save me the hassle, I'm going to go research why the people above are saying 11.6.
feed the bird: 187CXEVzakbzcANsyhpAAoF2k6KJsc55P1 (BTC) / LiRzzXnwamFCHoNnWqEkZk9HknRmjNT7nU (LTC)
johnj
September 03, 2011, 04:29:14 AM
I had 100% on SDK 2.4. I rolled back to 2.1, runnin smooth.
1AeW7QK59HvEJwiyMztFH1ubWPSLLKx5ym TradeHill Referral TH-R120549
kripz
September 03, 2011, 05:00:16 AM
... whuuut the fiiiduzzle. o_O That seems to be breaking the laws of physics. [...] Now that I know it's possible, the new task for tonight: figure out what the hell I've been doing wrong =P [...]
Please, re-read the first 4 posts...
FalconFour (OP)
September 03, 2011, 05:11:13 AM
Please, re-read the first 4 posts...
Why, what, did they change? *scrolls up*... First reply (2nd "post" if you're counting my own, which of course I re-read after posting for accuracy): irrelevant forum post. Second reply: another irrelevant forum post referring to an unrelated issue. Third reply: a quote from an irrelevant forum post. Fourth reply: my reply saying that these were irrelevant to the issue.
... And this, now the 11th reply, is me saying "what part of this don't you understand, to the point where you think you can talk down to me like 'please re-read'?" In fact: what part of this don't you understand, to the point where you think you can talk down to me like "please re-read"?
I found a post describing differences in behavior between 11.6, 11.7, and 11.8, which brought me to uninstalling, cleaning (manually, via registry & file system - don't feel like installing software to uninstall software), and installing 11.6. Now we'll see how that runs... and I'll owe a beer to anyone who can actually make a visible post about this very recent issue with 11.8 (the current release) of Catalyst. Believe me... I looked for an "AMD tips and info" thread. Didn't find one. Hell, maybe I'll throw a 0.02 BTC donation to the first guy that can actually reply in a civilized manner, treating me like the experienced tech I am, not some kind of computer newbie...
feed the bird: 187CXEVzakbzcANsyhpAAoF2k6KJsc55P1 (BTC) / LiRzzXnwamFCHoNnWqEkZk9HknRmjNT7nU (LTC)
FalconFour (OP)
September 03, 2011, 05:56:55 AM
Wow. HUGE difference going from 11.8 to 11.6. The CPU usage was lower at first, but as I changed settings around to make it better it actually got worse, back up to 25% again (quad core - but U/C+U/V'd to 1.2GHz). Performance was roughly the same the whole time I was changing things. I finally found a combination of options that results in ~3% CPU (remember, 1.2GHz) - and a ~25 watt power reduction with the new drivers and lower CPU usage!
I'm now using Phoenix 1.6.2 with the rollntime mod and the modded phatk kernel, with the options: VECTORS2 DEVICE=0 WORKSIZE=256 FASTLOOP=true - it was actually removing "aggression=(tried 10~15)" and turning fastloop back on that caused the dramatic reduction in CPU. It now runs at 160 MH/s, but it's returning results at the same blistering rate I'm used to; I've also found that hash rate is an inaccurate measure of performance, as different settings vary wildly in their result-return rate, so I'm happy with 160 until I find something better (also not O/Cing or U/Cing the GPU, since I haven't yet found how to get OverDrive to U/C the RAM).
That said, I've gotta say thanks for showing that a properly configured system *is* indeed capable of using less process time than 100% (even if not 0%, which is understandable). johnj, since you were the guy that posted what I actually semi-rhetorically asked for (and it was the key to understanding the situation), I owe 'ya an e-beer. Drop an address and I'll cough up 0.02 (out of the 0.7 I've mined in the last 2 weeks, ugh), just as thanks for not posting crap like "hurp multi-gpu known issue noob" (when I'm not even dual-GPUing).
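For reference, here's roughly how that invocation could be scripted. The pool URL, the phoenix.py path and the -u/-k switches are placeholder assumptions for illustration; only the kernel options are the ones I listed above.
Code:
import subprocess

# Sketch only: pool URL, phoenix.py location and the -u/-k switches are
# placeholder assumptions - the kernel options are the ones quoted above.
cmd = [
    "python", "phoenix.py",
    "-u", "http://worker:password@pool.example.com:8332/",    # your pool / work source
    "-k", "phatk",                                            # modded phatk kernel
    "VECTORS2", "DEVICE=0", "WORKSIZE=256", "FASTLOOP=true",  # note: no AGGRESSION= here
]

miner = subprocess.Popen(cmd)
try:
    miner.wait()
except KeyboardInterrupt:
    miner.terminate()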
feed the bird: 187CXEVzakbzcANsyhpAAoF2k6KJsc55P1 (BTC) / LiRzzXnwamFCHoNnWqEkZk9HknRmjNT7nU (LTC)
johnj
September 03, 2011, 06:28:24 AM
I'm on a 5770, and I *think* it's the same as the 6770. I'm running VECTORS AGGRESSION=7 WORKSIZE=256 and getting 230 MH/s with 1000/300 clocks. I was running 11.6 on SDK 2.4 getting 226 MH/s on the same settings. Keep the .02... the transaction fee would eat half of that.
1AeW7QK59HvEJwiyMztFH1ubWPSLLKx5ym TradeHill Referral TH-R120549
Vladimir
September 03, 2011, 08:20:50 AM
Please, re-read the first 4 posts...
LOL, indeed. Noobs tend to ignore one-line answers to their one-page questions.
-
Gabi
September 03, 2011, 11:31:12 AM
I'm mining with 0% CPU usage, is my computer extraterrestrial???
MiningBuddy
September 03, 2011, 11:59:31 AM
I've always wondered why so many people struggle with this. Between my five 5870s and two 6990s I've never once experienced this issue. I run various driver versions from 11.1 -> 11.8 across two Linux boxes and a Windows machine, with varied SDK versions from 2.1 -> 2.4, all using Phoenix. Maybe I'm just lucky.
bcforum
September 03, 2011, 01:33:43 PM
I've always wondered why so many people struggle with this. [...] Maybe I'm just lucky
Maybe you read the instructions before you started randomly plugging things together?
If you found this post useful, feel free to share the wealth: 1E35gTBmJzPNJ3v72DX4wu4YtvHTWqNRbM
sveetsnelda
September 03, 2011, 06:09:16 PM
In case anyone is still confused, this is only a *Windows* issue on later versions of AMD's OpenCL SDK. Earlier versions do not have this issue. However, as far as I know, none of those earlier versions supported the 6000 series cards (but I could be wrong here).
On Linux, the GPU_USE_SYNC_OBJECTS environment variable can be used to change this behavior. The Windows version of the SDK doesn't seem to have a way to change this. I ran amdocl.dll through IDA (a disassembler) and I see GPU_USE_SYNC_OBJECTS as a compile-time variable, but it doesn't look like it checks for a registry key/environment variable/etc at runtime. I'm sure that there is a way to alter the DLL to make it happen, but without having the debug symbols for the DLL, it could take a really long time to find.
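As a rough illustration (assuming a Linux box and an SDK that honours the variable, per the above; the value "1" and the miner command line are my assumptions), the variable just has to be present in the miner process's environment before the OpenCL runtime is loaded:
Code:
import os
import subprocess

# Sketch only: the value "1" and the miner invocation below are placeholder assumptions.
env = os.environ.copy()
env["GPU_USE_SYNC_OBJECTS"] = "1"   # ask the runtime to wait on sync objects instead of spinning

miner_cmd = ["python", "phoenix.py",
             "-u", "http://worker:password@pool.example.com:8332/",
             "-k", "phatk", "DEVICE=0", "WORKSIZE=256"]
subprocess.call(miner_cmd, env=env)
Exporting the variable in the shell that launches the miner accomplishes the same thing.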
14u2rp4AqFtN5jkwK944nn741FnfF714m7
BkkCoins
September 04, 2011, 01:23:50 AM
On Linux, the GPU_USE_SYNC_OBJECTS environment variable can be used to change this behavior.
I had 100% CPU problems on Linux before rolling back to the 11.6 driver. So you're saying that with this environment variable set, I could probably move up to newer drivers without the CPU going bananas? Is there a good reason on Linux to want newer drivers? I should probably resist the temptation to muck up a system that's running smoothly now.