Bitcoin Forum
Author Topic: [SOLVED] increase GPU hashing power by stressing CPU  (Read 4025 times)
zefir (OP)
Donator
Hero Member
*
Offline Offline

Activity: 919
Merit: 1000



View Profile
April 22, 2012, 01:03:23 PM
Last edit: April 23, 2012, 08:47:43 PM by zefir
 #1

Edit: Mystery solved.
P4man had the right idea: as long as CPU frequency management is enabled, stressing the CPU will force it to run at max speed that is also beneficial to miner SW. Additionally, DeathAndTaxes clarified the reliability of mid-term hash-rate measures that need to be considered to estimate the potential improvement.



Hello miners,

I observed something I fail to fully understand and post here for you to confirm and maybe take advantage of.

Recently I had to generate some personalized addresses with vanitygen. I first ran it on my laptop but soon found that unacceptable, since it would take days and generate heat and noise in my office. The only thing keeping me from offloading the task to the miner in my basement was the worry that it might hurt the mining performance.

Tl;dr: it did not; stressing the CPU increased my GPU hashing power by ~1%.

Details:
Mining with Ubuntu and latest cgminer, my long term average hash-rate is about 1732MH/s (2*7970+1*6950). The CPU (Sempron 145) is mostly idling, system draws 750W at wall (not optimized: uses HD for cgminer development, not down-clocked CPU and GPU-mem, etc.).

After about an hour of running vanitygen (100% CPU load: 97% for vanitygen, 2% for cgminer), the short-term average increased by 1% to 1750MH/s. The power draw was then at 760W.

To eliminate mid-term fluctuations caused by pool issues or local temperature extremes, I repeated the tests several times, either:
  • start and run without vanitygen for ~2h, then start vanitygen and watch the average hash rate settle at +1%
  • start and run with vanitygen running for ~2h, then stop vanitygen and watch the hashing power drop by 1%

I suspect this could be an effect of CPU power-saving features that are active when the CPU is idling and disabled when it is fully loaded. It might also be that the Linux scheduler is more responsive when it does not enter the idle task, or something cgminer-related that is specific to my setup.
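One way to check whether frequency scaling is the culprit is to look at the kernel's cpufreq interface. This is a sketch for Linux, assuming the standard cpufreq sysfs layout (paths and available entries vary by kernel and distro): if the governor is "ondemand" or "powersave" while cgminer leaves the CPU idle, the core clock will sit far below its maximum.

```python
from pathlib import Path

def read_cpufreq(cpu: int = 0) -> dict:
    """Read a core's cpufreq sysfs entries, if the kernel exposes them."""
    base = Path(f"/sys/devices/system/cpu/cpu{cpu}/cpufreq")
    out = {}
    for name in ("scaling_governor", "scaling_cur_freq", "scaling_max_freq"):
        node = base / name
        if node.exists():
            out[name] = node.read_text().strip()
    return out

info = read_cpufreq()
if info:
    # An "ondemand" governor with a low scaling_cur_freq would hint at
    # the effect described in this thread.
    print(info)
else:
    print("cpufreq interface not exposed on this system")
```

Pinning the governor to "performance" (e.g. with `cpufreq-set`, or by writing to `scaling_governor` as root) should then give the same effect as stressing the CPU, without wasting cycles.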

Anyway, YMMV; but if you can reproduce this, please share your results. With my small-scale setup the increased power cost roughly matches the performance gain; in larger setups (e.g. 3*5970/6990) the gain might outweigh the cost. Plus, you can stress the CPU with CPU mining instead of vanitygen and earn some extra coins Smiley


Good Luck!

phorensic
Hero Member
*****
Offline Offline

Activity: 630
Merit: 500



View Profile
April 22, 2012, 10:50:56 PM
 #2

A 1% increase is so small you can blame it on normal variance.
Clipse
Hero Member
*****
Offline Offline

Activity: 504
Merit: 502


View Profile
April 23, 2012, 02:45:37 AM
 #3

The extra wattage is an overall loss cost-wise compared to the 1% GPU gain, if it in fact did give the 1% gain.

Also, 100% CPU usage on a Sempron 145 drawing only 10 watts more than idle? That doesn't sound right.

...In the land of the stale, the man with one share is king... >> Clipse

We pay miners at 130% PPS | Signup here : Bonus PPS Pool (Please read OP to understand the current process)
zefir (OP)
Donator
Hero Member
*
Offline Offline

Activity: 919
Merit: 1000



View Profile
April 23, 2012, 08:02:02 AM
 #4

A 1% increase is so small you can blame it on normal variance.
True for one shot measures.

But as said, I based the results on my long-term hash rate from running the miner continuously for more than a week. Plus, repeating the mid-term measures (2h+) gives perfect correlation.

As for the 1%: given that GPU mining software is nearly maxed out (the OpenCL guys are fighting for the last 0.x% gains), 1% is huge.

Seriously, I wouldn't buy this if I heard it from someone else. But it is easy to test: just give it a try and report your results.

zefir (OP)
Donator
Hero Member
*
Offline Offline

Activity: 919
Merit: 1000



View Profile
April 23, 2012, 08:19:47 AM
 #5

The extra wattage usage is a overall loss cost wise compared to the 1% gpu gain, if this in fact did give the 1% gain.

Also, 100% cpu usage on a sempron 145 only using 10Watt more than idle? That doesnt sound right.
In my setup I buy the additional 18 MH/s for 10W, which is slightly worse than my average efficiency of ~2.3 MH/J. This might turn positive with a larger setup such as 3*5970/6990.

As for the 10W delta, that is what I measure. I also expected a more significant jump, but it seems that with cgminer running the CPU never enters the low-power states where it is said to draw only 9W. I'd have guessed the delta would be between 28 and 39W.

BTW, since my miner is pointing to your pool, if you had long-term stats we could even check if there is a difference in effective hashrate.
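The efficiency comparison above can be sanity-checked with quick arithmetic. A sketch using the figures from this thread (1732 MH/s at 750W baseline, +1% hashrate for +10W at the wall):

```python
baseline_mhs = 1732.0   # long-term average hashrate, MH/s
baseline_w = 750.0      # wall power, W

gain_mhs = 0.01 * baseline_mhs   # ~17.3 MH/s gained by stressing the CPU
extra_w = 10.0                   # measured power delta, W

avg_eff = baseline_mhs / baseline_w   # ~2.31 MH/J for the rig overall
marginal_eff = gain_mhs / extra_w     # ~1.73 MH/J for the extra hashpower

print(f"average: {avg_eff:.2f} MH/J, marginal: {marginal_eff:.2f} MH/J")
```

So the extra hashes are indeed bought at a lower efficiency than the rig's average, matching the "slightly worse" assessment; a rig with more GPUs per CPU would shift the ratio the other way.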

AzN1337c0d3r
Full Member
***
Offline Offline

Activity: 238
Merit: 100



View Profile
April 23, 2012, 04:39:21 PM
 #6

As for the 10W delta, that is what I measure. I also expected a more significant jump, but it seems that running cgminer the CPU never enters those low power states where it is said to use only 9W. I'd guess the delta is between 28 and 39W.

If this were true, then you shouldn't be gaining performance due to high-latency CPU state transitions.

zefir (OP)
Donator
Hero Member
*
Offline Offline

Activity: 919
Merit: 1000



View Profile
April 23, 2012, 05:56:52 PM
 #7

I'm with you folks: can't believe what I'm saying, but the measures still hold.

RFT here stands for 'request for test', so if you're in the mood just repeat the test (it's really trivial) and report your measures.

Other speculations on why this can't be are useless, since we all agree that this theoretically should not work.

Thanks, zefir

P4man
Hero Member
*****
Offline Offline

Activity: 518
Merit: 500



View Profile
April 23, 2012, 06:02:37 PM
 #8

It's probably the CPU downclocking to 800MHz or whatever without load. You could try disabling C&Q (Cool'n'Quiet); I suspect you will get the same 1% boost without stressing it.

Fiyasko
Legendary
*
Offline Offline

Activity: 1428
Merit: 1001


Okey Dokey Lokey


View Profile
April 23, 2012, 06:03:29 PM
 #9

I lose a good 10 MH/s per GPU when my CPU is loaded at 100%, so I always limit it to two cores (cgminer).

http://bitcoin-otc.com/viewratingdetail.php?nick=DingoRabiit&sign=ANY&type=RECV <-My Ratings
https://bitcointalk.org/index.php?topic=857670.0 GAWminers and associated things are not to be trusted, Especially the "mineral" exchange
Clipse
Hero Member
*****
Offline Offline

Activity: 504
Merit: 502


View Profile
April 23, 2012, 06:44:32 PM
 #10

I'm with you folks: can't believe what I'm saying, but the measures still hold.

RFT here stands for 'request for test', so if you're in the mood just repeat the test (it's really trivial) and report your measures.

Other speculations on why this can't be are useless, since we all agree that this theoretically should not work.

Thanks, zefir

Will have selectable periods with the next major update to webstats; it's almost ready.

DeathAndTaxes
Donator
Legendary
*
Offline Offline

Activity: 1218
Merit: 1079


Gerald Davis


View Profile
April 23, 2012, 06:56:36 PM
Last edit: April 23, 2012, 08:26:23 PM by DeathAndTaxes
 #11

I'm with you folks: can't believe what I'm saying, but the measures still hold.

As stated above, 1% is well inside the natural variance of a 2-hour test. It is even within normal variance in a 12-hour (~2GH/s) test. Run your test and control for 72+ hours and you might have something. I mean, you could have not farted, "tested", farted, "tested" again, and concluded that farting before starting a run improves GPU performance by 1%.

That isn't to say that there isn't a real change but 2 hours doesn't prove anything.

Mining is a Poisson process. At 1.732 GH/s your expected # of shares per 120 minutes is 2404. A 1% deviation would be 2433 actual.

Expected: 2404
Actual: 2433
The odds of getting 2433 (or more) shares when 2404 are expected is ~30%.
So there is only 70% confidence that your observation is the result of an actual change. That sounds good, but in statistics 70% confidence is worthless.

Now over 72 hours the expected # of shares is:
Expected: 104526
1% over expected: 105571
The odds of getting 105,571 or more shares in 72 hours would be <1%, so there is 99% confidence that the observed difference is real.
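The 72-hour confidence figure can be reproduced with a normal approximation to the Poisson tail. A sketch, assuming difficulty-1 shares (one expected share per 2^32 hashes):

```python
import math

def expected_shares(hashrate_hs: float, seconds: float) -> float:
    # One difficulty-1 share is expected per 2^32 hashes
    return hashrate_hs * seconds / 2**32

def upper_tail_prob(observed: float, mean: float) -> float:
    # P(X >= observed) for X ~ Poisson(mean), via the normal approximation
    # (fine here, since the mean is in the tens of thousands)
    z = (observed - mean) / math.sqrt(mean)
    return 0.5 * math.erfc(z / math.sqrt(2))

mean_72h = expected_shares(1.732e9, 72 * 3600)   # ~104526 shares expected
p = upper_tail_prob(1.01 * mean_72h, mean_72h)   # chance of a +1% run by luck
print(f"expected: {mean_72h:.0f}, P(>= +1%) = {p:.2%}")
```

The tail probability comes out well under 1%, so a sustained +1% over 72 hours would indeed be significant at the 99% level.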


zefir (OP)
Donator
Hero Member
*
Offline Offline

Activity: 919
Merit: 1000



View Profile
April 23, 2012, 08:24:04 PM
 #12

A rock solid analysis as always, DaT.

I admit I failed to record occurrences of farts or similar potential influences on the results, so your point stands.

Alas, let's put Poisson aside for a short while (advanced stochastics is no friend of mine  Embarrassed). At a lower level of understanding: if I run 12 measures (6 with and 6 without the stimulus, i.e. CPU stress) over 120 minutes each, how probable is it through variance alone that all 6 stressed runs show a 1% higher average hashrate than the control runs? Isn't that (1/2)^6 = 1/64, corresponding to ~98% confidence (if we treat the results as binary)?
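That paired-comparison intuition is a sign test. A sketch: under the null hypothesis of no effect, each with/without pair is equally likely to go either way, so all six favoring the stressed runs is unlikely by chance.

```python
n_pairs = 6  # six stressed runs, each compared against a control run

# Probability that all six comparisons favor the stressed runs by luck alone
p_all_by_chance = 0.5 ** n_pairs   # (1/2)^6 = 1/64
confidence = 1 - p_all_by_chance

print(f"p = 1/{2 ** n_pairs} = {p_all_by_chance:.4f}, "
      f"confidence = {confidence:.1%}")
```

Note this only uses the direction of each difference, not its size, which is why it is much weaker than the Poisson analysis of a single long run.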

Thanks for the insight into the validity of those mid-term hash-rate estimates. Though in this case I guess P4man is on the right track:
Its probably the CPU downclocking to 800 or whatever MHz without load. You could try disabling C&Q, I suspect you will get the same 1% boost without stressing it.
Bingo! I set the CPU clock to a fixed 800MHz with the 'CPU Frequency Scaling Monitor' when I installed the rig and did not notice (headless system) that it gets adjusted to 2.8GHz under full load. You're fully right: if I disable Cool'n'Quiet in the BIOS and the CPU runs at full speed, I get the same numbers without stressing it.

Mystery solved. Thanks!
