dlasher
|
|
September 09, 2011, 04:23:40 AM |
|
Well, I honestly am completely baffled, since the build has -ldl added... unless the LDFLAGS are not being passed at all. God I hate autofoo tools. Anyone with a clue out there?
PM me, happy to give you temporary access to a box to take a peek if you want. I realize I haven't said this enough, but I _LOVE_ cgminer....
|
|
|
|
iopq
|
|
September 09, 2011, 04:25:59 AM |
|
Changing -s will not significantly change much of anything, I'm afraid. Your GPU will probably finish a work item in significantly less than 1 minute and then get more work. The 1 minute cutoff is used to decide the latest possible time a solution would still be allowed to be submitted. So if you find a share in, say, 30 seconds, you have up to 30 more seconds to submit it (bad network conditions will eat into that). cgminer only gets work as often as it needs to keep your GPU busy, and GPUs should never run out and hit the 1 minute cutoff unless you significantly increase the number of threads per GPU. The other thing is that cgminer can use existing work to generate more work (it's called rolltime) to keep getting useful shares out of the same work for up to 1 minute. If you decrease -s to less than how long it takes your GPU to find a share, you are more likely to throw work away unnecessarily. All in all, don't bother changing it, but it would be ideal to get at least 1/m Utility (i.e. 1 accepted share per minute) per thread, which would be 2 per device.
So if I get about 1/m per thread utility, wouldn't increasing that value to, say, 90 seconds be beneficial? Because when a share is taking slightly longer, I don't want to have to get new work as often.
|
|
|
|
-ck (OP)
Legendary
Offline
Activity: 4242
Merit: 1644
Ruu \o/
|
|
September 09, 2011, 04:27:59 AM |
|
Changing -s will not significantly change much of anything, I'm afraid. Your GPU will probably finish a work item in significantly less than 1 minute and then get more work. The 1 minute cutoff is used to decide the latest possible time a solution would still be allowed to be submitted. So if you find a share in, say, 30 seconds, you have up to 30 more seconds to submit it (bad network conditions will eat into that). cgminer only gets work as often as it needs to keep your GPU busy, and GPUs should never run out and hit the 1 minute cutoff unless you significantly increase the number of threads per GPU. The other thing is that cgminer can use existing work to generate more work (it's called rolltime) to keep getting useful shares out of the same work for up to 1 minute. If you decrease -s to less than how long it takes your GPU to find a share, you are more likely to throw work away unnecessarily. All in all, don't bother changing it, but it would be ideal to get at least 1/m Utility (i.e. 1 accepted share per minute) per thread, which would be 2 per device.
So if I get about 1/m per thread utility, wouldn't increasing that value to, say, 90 seconds be beneficial? Because when a share is taking slightly longer, I don't want to have to get new work as often.

Pools don't accept shares older than a certain age. Depending on the pool, it's somewhere between 60 and 120 seconds, so I have to default to the safe lower level.
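The timing argument above can be sketched in a few lines of shell. This is purely illustrative (not cgminer's actual code); the 60-second cutoff and the 30-second share are the numbers from the post:

```shell
#!/bin/sh
# Illustrative sketch of the -s (scan time) cutoff described above.
# A share found FOUND_AT seconds into a work item can still be
# submitted for up to SCANTIME - FOUND_AT more seconds.
SCANTIME=60    # the default 1 minute cutoff
FOUND_AT=30    # the post's example: share found after 30 seconds
LEFT=$((SCANTIME - FOUND_AT))
echo "share found at ${FOUND_AT}s: up to ${LEFT}s left to submit"
```

With the defaults this leaves a comfortable margin; the point of the post is that shrinking SCANTIME below the typical time-to-share just discards usable work.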
|
Developer/maintainer for cgminer, ckpool/ckproxy, and the -ck kernel 2% Fee Solo mining at solo.ckpool.org -ck
|
|
|
bitlane
Internet detective
Sr. Member
Offline
Activity: 462
Merit: 250
I heart thebaron
|
|
September 09, 2011, 04:34:42 AM |
|
Changing -s will not significantly change much of anything, I'm afraid. Your GPU will probably finish a work item in significantly less than 1 minute and then get more work. The 1 minute cutoff is used to decide the latest possible time a solution would still be allowed to be submitted. So if you find a share in, say, 30 seconds, you have up to 30 more seconds to submit it (bad network conditions will eat into that). cgminer only gets work as often as it needs to keep your GPU busy, and GPUs should never run out and hit the 1 minute cutoff unless you significantly increase the number of threads per GPU. The other thing is that cgminer can use existing work to generate more work (it's called rolltime) to keep getting useful shares out of the same work for up to 1 minute. If you decrease -s to less than how long it takes your GPU to find a share, you are more likely to throw work away unnecessarily. All in all, don't bother changing it, but it would be ideal to get at least 1/m Utility (i.e. 1 accepted share per minute) per thread, which would be 2 per device.
So -s can't be used as an easy-share cherry picker? Low -s, perhaps -s 15, so that your GPU gets 15 seconds to solve, otherwise it moves on. This combined with a large queue should theoretically help with a higher output, no? A sort of 'PPS cheat'... LOL, just pick off the easy ones and dump the rest... lol
|
|
|
|
-ck (OP)
Legendary
Offline
Activity: 4242
Merit: 1644
Ruu \o/
|
|
September 09, 2011, 05:13:52 AM |
|
no
|
Developer/maintainer for cgminer, ckpool/ckproxy, and the -ck kernel 2% Fee Solo mining at solo.ckpool.org -ck
|
|
|
freakfantom
Newbie
Offline
Activity: 73
Merit: 0
|
|
September 09, 2011, 06:01:19 AM |
|
Well... now I see this, though cgminer 2.0 with your modified exe worked just fine. I'm on Windows 7 x64 with 2x6990, if it matters. https://i.imgur.com/XfKtz.png
|
|
|
|
-ck (OP)
Legendary
Offline
Activity: 4242
Merit: 1644
Ruu \o/
|
|
September 09, 2011, 06:05:05 AM |
|
Well that's correct: there is no fan control on the second GPU of a 6990. What will look odd, though, is the order of the cards. That's just how they happen to be enumerated by the driver and I have no control over it. It looks like GPUs 0 and 2 are one dual-core card and 1 and 3 are the other card.
|
Developer/maintainer for cgminer, ckpool/ckproxy, and the -ck kernel 2% Fee Solo mining at solo.ckpool.org -ck
|
|
|
zaytsev
Newbie
Offline
Activity: 59
Merit: 0
|
|
September 09, 2011, 09:10:13 AM |
|
ck, did you know that you are like crazy cool, man? I am running 2.0.1 and, well, I don't know how it managed to achieve it, but it down-clocked my memory to 300 MHz, which was not possible using pplib commands on my old FirePro V5700 (sic!). This saves me like 8C or so. It doesn't seem to be able to read RPMs though: Fan Speed: 65% (-1 RPM). But well, it's already something!

Also, I am sick of the curses bug, so here is how to compile cgminer statically against the latest curses:
1) Go to http://ftp.gnu.org/gnu/ncurses/ and download the latest version
2) Unpack, then do ./configure --prefix=$HOME/opt/ncurses --enable-static --disable-shared && make && make install
3) In the cgminer tree, do CFLAGS="-O2 -Wall -march=native" CPPFLAGS="-I$HOME/opt/ncurses/include" LDFLAGS="-L$HOME/opt/ncurses/lib" ./configure --prefix=/opt/cgminer
4) Enjoy!

Might be worth adding to the readme.
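The recipe above can be collected into one script. This is a sketch of zaytsev's steps, not an official build script; the prefix paths are his examples, and the download/build steps are shown as comments since they need network access:

```shell
#!/bin/sh
# Sketch of the static-ncurses recipe from the post above.
NCURSES_PREFIX="$HOME/opt/ncurses"

# 1) In an unpacked ncurses release from http://ftp.gnu.org/gnu/ncurses/,
#    build a static-only copy into a private prefix:
#      ./configure --prefix="$NCURSES_PREFIX" --enable-static --disable-shared
#      make && make install
# 2) Then, in the cgminer tree, point configure at that copy.
#    The command line that would be run:
CGMINER_CONFIGURE="CFLAGS=\"-O2 -Wall -march=native\" \
CPPFLAGS=\"-I$NCURSES_PREFIX/include\" \
LDFLAGS=\"-L$NCURSES_PREFIX/lib\" ./configure --prefix=/opt/cgminer"
echo "$CGMINER_CONFIGURE"
```

The key idea is that CPPFLAGS and LDFLAGS make the private static ncurses win over the system's shared one at both compile and link time.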
|
|
|
|
mmortal03
Legendary
Offline
Activity: 1762
Merit: 1011
|
|
September 09, 2011, 09:16:23 AM |
|
MASSIVE UPGRADE VERSION 2.0 - Links in top post
Major feature upgrade - GPU monitoring, (over)clocking and fan control for ATI GPUs.

Wow, this will be great if it works on my non-reference Gigabyte Super Overclock 5870 in a ViDock/netbook setup. The Gigabyte OC Guru tool doesn't work in that scenario (it doesn't see the card), and the Catalyst Control Center isn't available for it either, so the OverDrive features are out. Because OC Guru doesn't work in this setup, I've so far had to use reference cards with OverDrive if I want to overclock.

I just tested it on my desktop, and it overclocks the card just fine, so at least I know in advance that it doesn't matter whether it's a non-reference card or not. The real test is going to be whether it also detects the card and can overclock it when plugged into my ViDock setup over at a friend's place. I'll try to test this in the next few days. Thanks for your work, I'll definitely be sending a donation if it works!

Okay, bummer: I just tested using cgminer to overclock my Gigabyte Super Overclock 5870 when running it through my x1 ViDock, and it won't take the overclocking commands. In comparison, I've had no problem overclocking a reference 5870 over the same x1 ViDock, and no problem overclocking the non-reference Gigabyte 5870 with cgminer when it is plugged into a regular x16 PCIe slot on a rig. So, might anyone know why it won't let me overclock it when plugged into the ViDock? The one thing I haven't tried is whether the card will still overclock when plugged directly into an x1 PCIe slot on a rig. I mention this because the only thing I can think of is that it might need the higher pins on the x16 slot for the overclocking commands to actually be sent, but I'm not knowledgeable enough to know if that is even plausible.
Keep in mind that GPU-Z detects the card properly when running it through the ViDock, and cgminer can detect the card just fine for mining, so it's not like the card isn't being detected at all.
|
|
|
|
The00Dustin
|
|
September 09, 2011, 10:00:02 AM |
|
Doh, -lpthread, not -pthread is also an issue it seems?
Anyway, try pulling the git tree again please.

Same problem my friend... unless I add -ldl I get:

/usr/bin/ld: cgminer-adl.o: undefined reference to symbol 'dlopen@@GLIBC_2.1'
/usr/bin/ld: note: 'dlopen@@GLIBC_2.1' is defined in DSO /lib/libdl.so.2 so try adding it to the linker command line
/lib/libdl.so.2: could not read symbols: Invalid operation
collect2: ld returned 1 exit status
make[2]: *** [cgminer] Error 1
make[2]: Leaving directory `/RAM/new/cgminer'
make[1]: *** [all-recursive] Error 1
make[1]: Leaving directory `/RAM/new/cgminer'
make: *** [all] Error 2

Well I honestly am completely baffled since the build has -ldl added... Unless the LDFLAGS are not being passed at all. God I hate autofoo tools. Anyone with a clue out there?

Your error is different from mine, but on the off chance that it is caused by differing versions of something, see my post: https://bitcointalk.org/index.php?topic=28402.msg512233#msg512233

Don't forget that when you manually specify CFLAGS or LDFLAGS (or whatever else) on the command line, your specification overrides anything the script would do. As such, the -ldl must be added manually because the automatic stuff isn't applying. This may not be true when you export, and may not be true as a general rule of thumb, but it has been true in my experience on Fedora 15.
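The override behaviour described above can be modelled in a couple of lines. This is a toy illustration of the semantics (the paths are made up), not what autoconf literally executes:

```shell
#!/bin/sh
# Toy model of the point above: a value the user passes on the
# configure command line REPLACES what the script would pick;
# it is not merged with it, so -ldl must be restated by hand.
SCRIPT_DEFAULT="-lpthread -ldl"            # what configure would choose
USER_SUPPLIED="-L$HOME/opt/ncurses/lib"    # user's LDFLAGS, -ldl forgotten
# Override semantics: when the user value is set, it wins outright.
EFFECTIVE="${USER_SUPPLIED:-$SCRIPT_DEFAULT}"
echo "effective LDFLAGS: $EFFECTIVE"       # note: -ldl is gone
```

The fix, per the post, is simply to include -ldl in the user-supplied value, e.g. LDFLAGS="-L$HOME/opt/ncurses/lib -ldl".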
|
|
|
|
-ck (OP)
Legendary
Offline
Activity: 4242
Merit: 1644
Ruu \o/
|
|
September 09, 2011, 10:35:35 AM |
|
Hmm, I doubt it. The final Makefile ends up with:

cgminer_LINK = $(CCLD) $(AM_CFLAGS) $(CFLAGS) $(cgminer_LDFLAGS) $(LDFLAGS) -o $@

Makefile.am has:

cgminer_LDFLAGS = $(PTHREAD_FLAGS) $(DLOPEN_FLAGS)

and configure ends up with:

echo " LDFLAGS..............: $LDFLAGS $PTHREAD_FLAGS $DLOPEN_FLAGS"

which prints:

LDFLAGS..............: -lpthread -ldl

So... somebody is lying.
|
Developer/maintainer for cgminer, ckpool/ckproxy, and the -ck kernel 2% Fee Solo mining at solo.ckpool.org -ck
|
|
|
vapourminer
Legendary
Offline
Activity: 4480
Merit: 4028
what is this "brake pedal" you speak of?
|
|
September 09, 2011, 10:38:14 AM |
|
new version is sweet!
I like the cleaner output. The GPU and mem clock ranges rock. And the emergency temp cutout. And performance is better than any other miner I've run before by an easy 6+ percent. And I could go on...
I see I'm gonna have to redo my sig with updated figures, heh
|
|
|
|
-ck (OP)
Legendary
Offline
Activity: 4242
Merit: 1644
Ruu \o/
|
|
September 09, 2011, 10:39:46 AM |
|
new version is sweet!
I like the cleaner output. The GPU and mem clock ranges rock. And the emergency temp cutout. And performance is better than any other miner I've run before by an easy 6+ percent. And I could go on...
I see I'm gonna have to redo my sig with updated figures, heh
|
Developer/maintainer for cgminer, ckpool/ckproxy, and the -ck kernel 2% Fee Solo mining at solo.ckpool.org -ck
|
|
|
Departure
|
|
September 09, 2011, 12:22:52 PM |
|
Just recently changed to Linux for mining. Is there any reason why the activity on all my GPUs is only sitting between 85% and 90%, which is producing lower hash rates? I am using the same cgminer settings as I did on Windows, and on Windows all GPUs were at 99% activity and of course a higher hash rate.
|
|
|
|
-ck (OP)
Legendary
Offline
Activity: 4242
Merit: 1644
Ruu \o/
|
|
September 09, 2011, 12:28:27 PM |
|
Just recently changed to Linux for mining. Is there any reason why the activity on all my GPUs is only sitting between 85% and 90%, which is producing lower hash rates? I am using the same cgminer settings as I did on Windows, and on Windows all GPUs were at 99% activity and of course a higher hash rate.
Did you set intensity?
|
Developer/maintainer for cgminer, ckpool/ckproxy, and the -ck kernel 2% Fee Solo mining at solo.ckpool.org -ck
|
|
|
Departure
|
|
September 09, 2011, 12:32:08 PM |
|
Just recently changed to Linux for mining. Is there any reason why the activity on all my GPUs is only sitting between 85% and 90%, which is producing lower hash rates? I am using the same cgminer settings as I did on Windows, and on Windows all GPUs were at 99% activity and of course a higher hash rate.
Did you set intensity?

Yes, I set -I 9.
|
|
|
|
-ck (OP)
Legendary
Offline
Activity: 4242
Merit: 1644
Ruu \o/
|
|
September 09, 2011, 12:34:19 PM |
|
In that case, no, there is no reason.
|
Developer/maintainer for cgminer, ckpool/ckproxy, and the -ck kernel 2% Fee Solo mining at solo.ckpool.org -ck
|
|
|
Departure
|
|
September 09, 2011, 12:35:50 PM |
|
K, thanks for the reply. I must have stuffed something up while setting up Xubuntu for mining... looks like I might go back to Windows.
|
|
|
|
os2sam
Legendary
Offline
Activity: 3583
Merit: 1094
Think for yourself
|
|
September 09, 2011, 01:49:59 PM |
|
Just tried 2.0.1 and get the error "Failed to init GPU thread 0, disabling device 0".
After pressing enter it shows both GPUs disabled, and trying to enable them causes a crash.

WinXP SP3, Cat 11.6
Sam
|
A: Because it messes up the order in which people normally read text. Q: Why is top-posting such a bad thing? A: Top-posting. Q: What is the most annoying thing on usenet and in e-mail?
|
|
|
gnar1ta$
Donator
Hero Member
Offline
Activity: 798
Merit: 500
|
|
September 09, 2011, 03:41:09 PM |
|
[P]ool management [G]PU management [S]ettings [D]isplay options [Q]uit
GPU0: [875/750MHz 69.5C 57%] [359.5/130.4 Mh/s] [Q:57 A:52 R:1 HW:0 E:91% U:1.74/m]
GPU1: [875/750MHz 75.5C 85%] [359.5/138.0 Mh/s] [Q:53 A:53 R:1 HW:0 E:100% U:1.77/m]
GPU2: [875/750MHz 69.5C 51%] [359.5/137.3 Mh/s] [Q:53 A:49 R:1 HW:0 E:92% U:1.64/m]
GPU3: [875/750MHz 73.0C 85%] [359.4/137.2 Mh/s] [Q:66 A:54 R:1 HW:0 E:82% U:1.80/m]
GPU4: [875/750MHz 69.5C 49%] [360.4/137.5 Mh/s] [Q:57 A:46 R:0 HW:0 E:81% U:1.54/m]

This is amazing, no monitoring scripts or separate windows needed. Having trouble with the auto-gpu option though: can anyone tell me how to set it properly so the fan will jump to 100% before the engine clock is lowered?
|
Losing hundreds of Bitcoins with the best scammers in the business - BFL, Avalon, KNC, HashFast.
|
|
|
|