Dunno which part; it just wouldn't boot properly, stopping at GPU initialisation. The cards would sometimes power on and often wouldn't. Even after taking them all out and putting a basic card in, it still wouldn't reliably start up.
|
|
|
Dynamic mode in cgminer will allow you to mine and game at the same time as well. If you don't specify an intensity, it uses dynamic mode.
|
|
|
For my sanity, I'll be taking an extended break from coding on cgminer shortly, since most things are stable at the moment.
This begins now, and I have disabled all notifications from the forum and github, so do not be surprised when I don't respond for many days. Email me if it's urgent, but please try to use the forums, as there are heaps of helpful people here. Thanks everyone for your understanding.
|
|
|
Trying to get familiar with some of the params I never use:

--scan-time|-s <arg>    Upper bound on time spent scanning current work, in seconds (default: 60)
--expiry|-E <arg>       Upper bound on how many seconds after getting work we consider a share from it stale (default: 120)

Does scan-time or expiry have any effect if the pool is using LP? Would there be any advantage to setting them shorter when using p2pool, for example (avg LP interval ~10 sec)?

--retry-pause|-R <arg>  Number of seconds to pause between retries (default: 5)

I am assuming this refers to pool <-> miner communication, not miner <-> API client communication. Or is it not used for pool mining and only used for solo/bitcoind mining?

Scan time is set high intentionally with longpoll, since longpoll tells the miner when to get new work. Setting it less than the longpoll time will only make you throw out good work. Expiry is irrelevant when you have submit stale enabled or the pool asks for submitold (as p2pool does). Retry pause is the pause between miner and pool after each communication failure. Failures really shouldn't happen at all when talking to a p2pool node running on the same machine, but they might when talking to a node elsewhere.
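For reference, here is a hypothetical command line spelling out the three options at their stated defaults (the pool URL and credentials are placeholders, not anything from this thread):

```
cgminer -o http://pool.example:9332 -u username -p password \
        --scan-time 60 --expiry 120 --retry-pause 5
```

With a longpoll pool the defaults are already sensible; as the reply above says, shortening scan-time below the longpoll interval only throws away good work.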
|
|
|
I had this sort of weird "almost working" behaviour when my motherboard died. The workshop only believed me when they couldn't get it to boot.
|
|
|
There are so many optimizations possible with the new sdk it sucks everyone is staying away from it

That is absolutely not true. Both Diablo and I have spent an awful lot of time optimising for this SDK and still are doing so.
|
|
|
Some of my 5970's are not accepting any commands from cgminer. I had to use afterburner on about half my rigs. This is to be expected?
That would be precisely what 3 posts in a row just said.
|
|
|
Can cgminer set GPU/MEM frequencies out of BIOS ranges? For example, something like MSI Afterburner in Windows can do?
The answer is no; it can only set what the ADL / the driver allows it to, which is more limited than what AfterBurner can offer. Dia

The answer is yes, it will allow you to send commands outside the bios range, but the GPU can happily ignore them. It can do more than the bios range, but less than windows-specific tools that bypass the driver and poke it directly.
|
|
|
If you're using linux + git, try the current git tree please to see if it fixes it.
Worked like a champ. Thanks very much! Sent a couple coins your way.

Thanks for testing. I'm considering wrapping up the few minor changes since 2.3.1 and releasing 2.3.2 before I take my break. Likely the only people who would notice a difference are those with twin GPUs going from 2.3.1-2 to 2.3.2. I'm not making any changes that might break something at this moment in time. What do people think? Should I bother?
|
|
|
I tried this, but the engine clock on the first GPU gets stuck at 850. That's the stock speed for the card; the other clocks get changed correctly. If I change it from within cgminer it works.
"auto-gpu": true, "auto-fan" : true, "gpu-engine" : "600-850,800-800,800-800,800-800,800-800", "gpu-fan" : "100-100,0-85,0-85,0-85,0-85", "gpu-memclock" : "150,150,150,150,150", "temp-target" : "80,70,70,75,75",
You're telling it the upper limit for gpu engine is 850 or 800 depending on the card, so yeah, it's listening to you. gpu 0 should only drop its clock speed below 850 if the temp goes over 80.

How would you have written it?

Your settings are fine for that. The threshold where clock speed goes down is 3 degrees over the setting (see temp hysteresis), to prevent the clock speed being changed too readily when it takes a few seconds for the fans to catch up.
|
|
|
I tried this, but the engine clock on the first GPU gets stuck at 850. That's the stock speed for the card; the other clocks get changed correctly. If I change it from within cgminer it works.
"auto-gpu": true, "auto-fan" : true, "gpu-engine" : "600-850,800-800,800-800,800-800,800-800", "gpu-fan" : "100-100,0-85,0-85,0-85,0-85", "gpu-memclock" : "150,150,150,150,150", "temp-target" : "80,70,70,75,75",
You're telling it the upper limit for gpu engine is 850 or 800 depending on the card, so yeah it's listening to you.
|
|
|
Yes, I've audited the code a million times and can't find the bug. For some reason, on some dual GPU cards the auto-fan control isn't doing anything and it's sitting at 85% at all times.
I think I figured it out: ga->lasttemp seems to always contain the temperature of the first GPU on the twin GPU card, while "temp" always contains the temperature of whichever of the two GPUs is hotter. Therefore, if the 2nd GPU on the card is hotter (and it almost always is on 5970s), it's never going to adjust. ga->lasttemp needs to be modified to contain the temperature of the hotter of the two GPUs so that they're comparing "apples to apples".

The only reason my one rig is working and the other two aren't is because its 2nd GPU happens to stay a little cooler than the first. The other two have hotter 2nd GPUs (as they should, since the hot air from the first GPU blows across it). I'd make the change myself, but you really don't want to see my coding "skills". I'm a good reverse-engineer, but I'm a shit coder (as far as keeping things clean goes).

If you're using linux + git, try the current git tree please to see if it fixes it.
|
|
|
Yes, I've audited the code a million times and can't find the bug. For some reason, on some dual GPU cards the auto-fan control isn't doing anything and it's sitting at 85% at all times.
I think I figured it out: ga->lasttemp seems to always contain the temperature of the first GPU on the twin GPU card, while "temp" always contains the temperature of whichever of the two GPUs is hotter. Therefore, if the 2nd GPU on the card is hotter (and it almost always is on 5970s), it's never going to adjust. ga->lasttemp needs to be modified to contain the temperature of the hotter of the two GPUs so that they're comparing "apples to apples".

The only reason my one rig is working and the other two aren't is because its 2nd GPU happens to stay a little cooler than the first. The other two have hotter 2nd GPUs (as they should, since the hot air from the first GPU blows across it). I'd make the change myself, but you really don't want to see my coding "skills". I'm a good reverse-engineer, but I'm a shit coder (as far as keeping things clean goes).

Well spotted, thanks. For my sanity, I'll be taking an extended break from coding on cgminer shortly, since most things are stable at the moment.
|
|
|
Did some more testing:

bool result = W[117].x & W[117].y;      // ~2400 false positives/second @ 400MH/s | GPU ISA: w: AND_INT R0.w, R0.x, PV1350.y
bool result = W[117].x * W[117].y;      // ~2 false positives/second (yes, only 2!) @ 400MH/s | GPU ISA: t: MULLO_INT R0.w, R0.x, PV1350.y
bool result = min(W[117].x, W[117].y);  // no false positives | GPU ISA: w: MIN_UINT R0.w, PV1350.y, PV1350.x
Food for thought!

Nice. Might be good for the older-SDK-suited kernels that suck with any().
|
|
|
Anyone have any recommended settings for cgminer? I have been getting a lot of rejects, about 10%, even at a low intensity. Tried dynamic and it's still at 10%. Anyone got a magic number?
See cgminer README.
|
|
|
If you're on SDK 2.6 with the current cgminer, you can use -k diablo for good performance. Though it still isn't any *better* than 2.1 with the default phatk kernel that cgminer uses.
|
|
|
Hi, I don't know if this is the correct forum to ask my question, but since it is cgminer related I'll start from here. I've got a five 5870 rig I've just set up; it runs xubuntu 11.10 with catalyst 11.8 (the one that installs using xubuntu's proprietary drivers applet), and I've installed AMD SDK 2.4. If I don't use GPU_USE_SYNC_OBJECTS=1, CPU usage goes to 90%; if I use it, cgminer uses from 20 to 35% of CPU. The CPU is a Sempron 2.8 GHz.

Dud catalyst driver. Use 11.6 or 11.11+ on linux.
|
|
|
I'm currently playing around with p2pool too ... so no need to add --submit-stale, as this is forced when needed via SUBMITOLD, right? The LPs occur quite often with p2pool, so what would you suggest as a good intensity, perhaps in relation to MH/s? Could it be an idea to let cgminer compute the best value for -I with p2pool (and I'm not talking about the normal -I d switch)?
Yes to the first question, README to the second.
|
|
|
New release: Version 2.2.7 - February 20, 2012 ......

Reject ratio is higher for me, about 3% instead of 0.5% on 2.1.2.

My conf: p2pool 462b252 multi merged mining, bitcoind 0.6, atiumdag 8.920.0.0 (Catalyst 11.12) / Win7 64, OpenCL 1.1 AMD-APP-SDK-v2.5 (793.1)

Version 2.3.1 - February 24, 2012

On 2.3.1-2 the reject ratio is still about 3% for me...

cgminer supports the SUBMITOLD extension now and p2pool is telling cgminer to submit the stale shares. So yep, it's working.
|
|
|
|