Just recently changed to Linux for mining. Is there any reason why the activity on all my GPUs is only sitting between 85% and 90%, producing lower hash rates? I am using the same cgminer settings as I did on Windows, and on Windows all GPUs were at 99% activity and of course a higher hash rate.
Did you set intensity?
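If not, that's the first thing to check: as far as I recall cgminer defaults to dynamic intensity, which deliberately backs the GPU off to keep the desktop responsive, and that looks exactly like 85-90% activity. On a dedicated rig you'd pin it, e.g. (pool URL and credentials here are placeholders):

./cgminer -o http://pool:8332 -u worker -p pass -I 9

Higher -I means more GPU load and hash rate at the cost of desktop responsiveness.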
|
|
|
new version is sweet!
I like the cleaner output. The GPU and mem clock ranges rock, and so does the emergency temp cutout. And performance is better than any other miner I've run before by an easy 6+ percent. And I could go on..
I see I'm gonna have to redo my sig with updated figures, heh
|
|
|
Hmm, I doubt it. The final Makefile ends up with:

cgminer_LINK = $(CCLD) $(AM_CFLAGS) $(CFLAGS) $(cgminer_LDFLAGS) $(LDFLAGS) -o $@

and Makefile.am is:

cgminer_LDFLAGS = $(PTHREAD_FLAGS) $(DLOPEN_FLAGS)

and configure ends up with:

echo " LDFLAGS..............: $LDFLAGS $PTHREAD_FLAGS $DLOPEN_FLAGS"

which is:

LDFLAGS..............: -lpthread -ldl

So... somebody is lying.
|
|
|
Well that's correct: there is no fan control on the 2nd GPU of a 6990. What will look odd, though, is the order of the cards. That's just how they happen to be enumerated by the driver and I have no control over it. It looks like GPUs 0 and 2 are one dual-core card and 1 and 3 are the other card.
|
|
|
Changing -s will not significantly change much of anything at all, I'm afraid. Your GPU will probably finish a work item in significantly less than 1 minute and then get more work. The 1 minute cutoff is used to decide the latest possible time a solution would be allowed to be submitted. So if you find a share in, say, 30 seconds, you have up to 30 more seconds to submit it (bad network conditions will eat into that). cgminer only gets work as often as it needs to keep your GPU busy, and GPUs should never run out and hit the 1 minute cutoff unless you significantly increase the number of threads per GPU. The other thing is that cgminer can use existing work to generate more work (it's called rolltime) to keep getting useful shares out of the same work for up to 1 minute. If you significantly decrease -s below how long it takes your GPU to find a share, you are more likely to throw work away unnecessarily. All in all, don't bother changing it, but it would be ideal to get at least 1/m utility (i.e. 1 accepted share per minute) per thread, which would be 2 per device.
So if I get about 1/m per thread utility, wouldn't increasing that value to, say, 90 seconds be beneficial? When a share is taking slightly longer, I don't want to have to get new work so often.

Pools don't accept shares older than a certain age. Depending on the pool, it's somewhere between 60 and 120 seconds, so I have to default to the safe lower level.
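To put numbers on that 1/m-per-thread target (assuming ordinary difficulty-1 shares, i.e. 2^32 hashes per share on average):

shares/min = hashrate[MH/s] * 60 / 4295
e.g. 300 MH/s -> 300 * 60 / 4295 ≈ 4.2 shares/min

So any card doing roughly 150 MH/s or better should comfortably manage 2 accepted shares per minute across its two threads, and there's nothing to gain from touching -s.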
|
|
|
Doh, -lpthread, not -pthread is also an issue it seems?
Anyway, try pulling the git tree again please.
Same problem, my friend. Unless I add -ldl I get:

/usr/bin/ld: cgminer-adl.o: undefined reference to symbol 'dlopen@@GLIBC_2.1'
/usr/bin/ld: note: 'dlopen@@GLIBC_2.1' is defined in DSO /lib/libdl.so.2 so try adding it to the linker command line
/lib/libdl.so.2: could not read symbols: Invalid operation
collect2: ld returned 1 exit status
make[2]: *** [cgminer] Error 1
make[2]: Leaving directory `/RAM/new/cgminer'
make[1]: *** [all-recursive] Error 1
make[1]: Leaving directory `/RAM/new/cgminer'
make: *** [all] Error 2
Well, I honestly am completely baffled, since the build has -ldl added... unless the LDFLAGS are not being passed at all. God I hate autofoo tools. Anyone with a clue out there?
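One guess, going by standard automake semantics rather than anything specific to this tree: libraries passed via LDFLAGS land *before* the object files on the final link line (the cgminer_LINK rule quoted above shows LDFLAGS ahead of -o), and the linker can discard a library it has seen no undefined references for yet. automake emits *_LDADD after the objects, which is where -ldl actually needs to sit. A sketch, with variable names taken from the Makefile.am fragment quoted above:

# Makefile.am sketch: keep flags in LDFLAGS, move libraries to LDADD
cgminer_LDFLAGS = $(PTHREAD_FLAGS)
cgminer_LDADD = $(DLOPEN_FLAGS)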
|
|
|
I don't understand how this is possible.
CGMiner is the FASTEST miner I have ever used and accomplishes equally impressive results on 3 different generations/sub-generations of cards that I mine with.
--snip--
To say I am happy would be an understatement. I will definitely support and donate to this project.
Thanks for the excellent feedback
|
|
|
New version 2.0.1 - links in top post.
Executive summary of new features: fan speeds and GPU engine speeds now accept ranges, e.g.: --auto-gpu --gpu-engine 750-950,945,700-930,960
Temperature targets are per-device, eg: --temp-cutoff 95,105
New option to disable ADL: --no-adl
Should detect more cards that it can monitor/clock now, even if only with partial support, including 2nd cores in dual core cards.
Temps and fanspeeds in status line
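Putting those together, a complete invocation might look like this (pool details and the exact numbers are placeholders; --auto-fan, --gpu-fan and --temp-target are the companion per-device options):

./cgminer -o http://pool:8332 -u worker -p pass --auto-gpu --gpu-engine 750-950,945 --auto-fan --gpu-fan 40-85,50 --temp-target 75,75 --temp-cutoff 95,105

Ranges and comma-separated per-device values combine, so each card gets its own window.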
Full changelog:
- Hopefully fix building on 32bit glibc with dlopen with -lpthread and -ldl
- ByteReverse is not used and the bswap opcode breaks big endian builds. Remove it.
- Ignore whether the display is active or not since only display enabled devices work this way, and we skip over repeat entries anyway.
- Only reset values on exiting if we've ever modified them.
- Flag adl as active if any card is successfully activated.
- Add a thermal cutoff option as well and set it to 95 degrees by default.
- Change the fan speed by only 5% if it's over the target temperature but less than the hysteresis value to minimise overshoot down in temperature.
- Add a --no-adl option to disable ADL monitoring and GPU settings.
- Only show longpoll received delayed message at verbose level.
- Allow temperatures greater than 100 degrees.
- We should be passing a float for the remainder of the vddc values.
- Implement accepting a range of engine speeds as well to allow a lower limit to be specified on the command line.
- Allow per-device fan ranges to be set and use them in auto-fan mode.
- Display which GPU has overheated in warning message.
- Allow temperature targets to be set on a per-card basis on the command line.
- Display fan range in autofan status.
- Setting the hysteresis is unlikely to be useful on the fly and doesn't belong in the per-gpu submenu.
- With many cards, the GPU summaries can be quite long so use a terse output line when showing them all.
- Use a terser device status line to show fan RPM as well when available.
- Define max gpudevices in one macro.
- Allow adapterid 0 cards to enumerate as a device as they will be non-AMD cards, and enable ADL on any AMD card.
- Do away with the increasingly confusing and irrelevant total queued and efficiency measures per device.
- Only display values in the log if they're supported and standardise device log line printing.
|
|
|
[...] Win32 does not use dlopen so link in -ldl only when not on win32 and display what ldflags are being passed on ./configure. [...]
FreeBSD doesn't have a separate -ldl either, since dlopen() and dlclose() are part of the FreeBSD libc library. The same might be true for MacOSX and any other BSD OS.

Thanks, but there's no ADL support on any BSD OS so you won't be using dlopen anyway.
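For what it's worth, the usual portable autoconf idiom here (a sketch; not necessarily how this tree does it) is to probe for dlopen instead of hard-coding -ldl:

AC_SEARCH_LIBS([dlopen], [dl], [],
               [AC_MSG_ERROR([could not find dlopen])])

AC_SEARCH_LIBS only prepends -ldl to LIBS when libc alone doesn't already provide dlopen, so glibc, FreeBSD and MacOSX all come out right without per-OS conditionals.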
|
|
|
Aha! Well, that's why it thinks the device doesn't support ADL: because it doesn't support the full feature set. I'm pretty sure I can fix that. Is there only one fan setting for both GPUs, then?
|
|
|
Interesting. So some call must be failing on the 2nd core and cgminer is disabling ADL on that card inappropriately. I think I'll just enable ADL on any AMD recognised card.
Do you get anything useful with ADL in the messages if you start with -D -T?
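Something like this should capture it (if I have the flags right, -D turns on debug output and -T drops the curses interface so everything goes to the console, hence the redirect):

./cgminer -D -T 2>&1 | tee cgminer-debug.log

Then grep the log for "ADL" to see which call fails on the second core.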
|
|
|
1) Dual GPU cards don't get adjusted/disabled correctly. For example, a 5970 will only disable the one GPU that has the temp sensor:

GPU 3: [80.5 C] [DISABLED /78.5 Mh/s] [Q:2 A:11 R:1 HW:0 E:550% U:1.81/m]
GPU 4: [327.3/324.5 Mh/s] [Q:25 A:23 R:1 HW:0 E:92% U:3.78/m]

Out of curiosity, what does AMDOverdriveCtrl -h return as a list of adapters on machines with dual core GPUs like this one?

INF: Nr. of Adapters: 13
INF: Adapter index: 0, active, ID:9879600, ATI Radeon HD 5700 Series
INF: Adapter index: 1, inact., ID:9879600, ATI Radeon HD 5700 Series
INF: Adapter index: 2, inact., ID:9879600, ATI Radeon HD 5700 Series
INF: Adapter index: 3, active, ID:10442944, ATI Radeon HD 5700 Series
INF: Adapter index: 4, inact., ID:10442944, ATI Radeon HD 5700 Series
INF: Adapter index: 5, inact., ID:10442944, ATI Radeon HD 5700 Series
INF: Adapter index: 6, active, ID:11009344, ATI Radeon HD 5800 Series
INF: Adapter index: 7, inact., ID:11009344, ATI Radeon HD 5800 Series
INF: Adapter index: 8, inact., ID:11009344, ATI Radeon HD 5800 Series
INF: Adapter index: 9, active, ID:11573824, ATI Radeon HD 5900 Series
INF: Adapter index: 10, inact., ID:11573824, ATI Radeon HD 5900 Series
INF: Adapter index: 11, inact., ID:11573824, ATI Radeon HD 5900 Series
INF: Adapter index: 12, active, ID:12191920, ATI Radeon HD 5900 Series

$ lspci | grep VGA
07:00.0 VGA compatible controller: ATI Technologies Inc Hemlock [ATI Radeon HD 5900 Series]
08:00.0 VGA compatible controller: ATI Technologies Inc Radeon HD 5800 Series (Cypress LE)
0b:00.0 VGA compatible controller: ATI Technologies Inc Juniper [Radeon HD 5700 Series]
0c:00.0 VGA compatible controller: ATI Technologies Inc Juniper [Radeon HD 5700 Series]

Interesting. Perhaps it supports only a few commands but not others, since it does show up as a discrete device. What can you set manually with other software? Are GPU speeds linked, for example?
|
|
|
No, there's no cross platform portable code for those. I'm having enough trouble reading anything off the second cores as well.
OK, there may not be platform-portable code... however, you could implement 2 versions and wrap them in:

#ifdef WIN32
// win32 code version
#else
// linux version
#endif // WIN32

Or I could not. It was a boatload of work importing the ADL SDK and sponsored code.
|
|
|
No, there's no cross platform portable code for those. I'm having enough trouble reading anything off the second cores as well.
How is the new overclocking feature implemented, anyway? Maybe I can figure out for myself why it doesn't work for me...

Using the ATI Display Library (ADL) SDK: http://developer.amd.com/gpu/adlsdk/Pages/default.aspx
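Concretely, it binds ADL at runtime with dlopen() (which is where all the -ldl fun above comes from). A rough sketch of that pattern using two entry points from the public ADL SDK; this is illustrative only, not cgminer's actual code, with error handling trimmed:

/* adl_probe.c - probe ADL via dlopen(); build with: gcc adl_probe.c -ldl */
#include <dlfcn.h>
#include <stdio.h>
#include <stdlib.h>

/* Signatures per the ADL SDK headers */
typedef void *(*ADL_MAIN_MALLOC_CALLBACK)(int);
typedef int (*ADL_MAIN_CONTROL_CREATE)(ADL_MAIN_MALLOC_CALLBACK, int);
typedef int (*ADL_ADAPTER_NUMBEROFADAPTERS_GET)(int *);

static void *adl_malloc(int size) { return malloc(size); }

int main(void)
{
    void *hdl = dlopen("libatiadlxx.so", RTLD_LAZY | RTLD_GLOBAL);
    if (!hdl) {
        fprintf(stderr, "no ADL library: %s\n", dlerror());
        return 1;
    }
    ADL_MAIN_CONTROL_CREATE create =
        (ADL_MAIN_CONTROL_CREATE)dlsym(hdl, "ADL_Main_Control_Create");
    ADL_ADAPTER_NUMBEROFADAPTERS_GET num_get =
        (ADL_ADAPTER_NUMBEROFADAPTERS_GET)dlsym(hdl, "ADL_Adapter_NumberOfAdapters_Get");
    /* second arg 1 = enumerate only connected adapters; ADL_OK == 0 */
    if (create && num_get && create(adl_malloc, 1) == 0) {
        int n = 0;
        num_get(&n);
        printf("ADL reports %d adapters\n", n);
    }
    dlclose(hdl);
    return 0;
}

If it fails for you, the dlopen() or one of the dlsym() lookups is the first place to look: fglrx ships libatiadlxx.so, so a missing or mismatched driver install would explain it.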
|
|
|
Queued requests per card and efficiency are increasingly irrelevant values and poorly understood. I figure I'll just remove them.
|
|
|
Look harder. From the README, the output line shows the following:

[(5s):204.4 (avg):203.1 Mh/s] [Q:56 A:51 R:4 HW:0 E:91% U:2.47/m]

Each column is as follows:
- A 5 second exponentially decaying average hash rate
- An all time average hash rate
- The number of requested work items
- The number of accepted shares
- The number of rejected shares
- The number of hardware errors
- The efficiency, defined as accepted shares / requested work (can be >100%)
- The utility, defined as the number of shares / minute
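Working through the sample line: E = A/Q = 51/56 ≈ 91%, which is the E:91% shown. E can exceed 100% because rolled work (see the rolltime discussion above) can yield several accepted shares from a single requested work item, as in the E:550% figure reported earlier for a mostly-disabled GPU.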
|
|
|
1) Dual GPU cards don't get adjusted/disabled correctly. For example, a 5970 will only disable the one GPU that has the temp sensor:

GPU 3: [80.5 C] [DISABLED /78.5 Mh/s] [Q:2 A:11 R:1 HW:0 E:550% U:1.81/m]
GPU 4: [327.3/324.5 Mh/s] [Q:25 A:23 R:1 HW:0 E:92% U:3.78/m]

Out of curiosity, what does AMDOverdriveCtrl -h return as a list of adapters on machines with dual core GPUs like this one?

The output should be something like:

INF: Nr. of Adapters: 16
INF: Adapter index: 0, active, ID:29775168, AMD Radeon HD 6900 Series
INF: Adapter index: 1, inact., ID:29775168, AMD Radeon HD 6900 Series
INF: Adapter index: 2, inact., ID:29775168, AMD Radeon HD 6900 Series
INF: Adapter index: 3, inact., ID:29775168, AMD Radeon HD 6900 Series
INF: Adapter index: 4, active, ID:31979904, AMD Radeon HD 6900 Series
INF: Adapter index: 5, inact., ID:31979904, AMD Radeon HD 6900 Series
INF: Adapter index: 6, inact., ID:31979904, AMD Radeon HD 6900 Series
INF: Adapter index: 7, inact., ID:31979904, AMD Radeon HD 6900 Series
INF: Adapter index: 8, active, ID:34184080, AMD Radeon HD 6900 Series
INF: Adapter index: 9, inact., ID:34184080, AMD Radeon HD 6900 Series
INF: Adapter index: 10, inact., ID:34184080, AMD Radeon HD 6900 Series
INF: Adapter index: 11, inact., ID:34184080, AMD Radeon HD 6900 Series
INF: Adapter index: 12, active, ID:36388256, AMD Radeon HD 6900 Series
INF: Adapter index: 13, inact., ID:36388256, AMD Radeon HD 6900 Series
INF: Adapter index: 14, inact., ID:36388256, AMD Radeon HD 6900 Series
INF: Adapter index: 15, inact., ID:36388256, AMD Radeon HD 6900 Series
Anyone with a 6990, even one on Linux, willing to try this? "Out of curiosity, what does AMDOverdriveCtrl -h return as a list of adapters on machines with dual core GPUs like this one?"
|
|
|