PLaci1982
Full Member
Offline
Activity: 168
Merit: 100
Live long and prosper. \\//,
|
|
September 07, 2011, 10:49:56 AM |
|
2.0.0 works really well on Win 7, but not on Win XP. I have an XP box with an HD 5770, and everything works well until I try to close the program by pressing Q. After hitting Q, I get a system message saying that cgminer.exe encountered an error, but the program still runs afterwards. Clicking X to close the window closes the program, but the OC clocks don't change back to the original values, and the memory clock stays the same even after a restart.
|
Hardware Expert / WinXP, Win7 Expert
1J5oPkyGVdb4mv44KGZQYsHS2ch6e1t4rc
|
|
|
Shevek
|
|
September 07, 2011, 11:29:06 AM |
|
As per the README: grab AMD's ADL, unzip and copy the header (*.h) files from the "include" directory to the "cgminer/ADL_SDK" directory. After that, configure should pick up card control support and it'll be available in the compiled binary.
Any other link or torrent to download ADL_SDK? I hate filling out forms for AMD. TIA
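For illustration, the README steps quoted above amount to roughly this (archive name and paths are placeholders for wherever the ADL SDK zip and the cgminer source were unpacked):
  unzip ADL_SDK_*.zip -d adl_sdk            # unpack AMD's ADL SDK
  cp adl_sdk/include/*.h cgminer/ADL_SDK/   # copy the headers into cgminer's ADL_SDK directory
  cd cgminer && ./configure                 # configure should now report GPU control support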
|
Proposals for improving bitcoin are like asses: everybody has one 1SheveKuPHpzpLqSvPSavik9wnC51voBa
|
|
|
cablepair
|
|
September 07, 2011, 12:27:41 PM |
|
Hi Gents
Just want to report a possible bug here of some kind.
Here is what I am running.
Windows 7 x64, cgminer 2.0, AMD 11.8 drivers.
(I have been running cgminer since 1.5.something and I am a HUGE fan.) CGMiner 2.0 has the same great mining performance as 1.6.2 if I clock the cards with my own util (Sapphire Trixx or MSI Afterburner).
But if I use CGMiner to clock my cards, I am getting significantly less performance like 20 Mhash per card difference (total of 3 cards in machine)
I am 100% positive I am using the correct syntax to clock the cards at the command line, and it's the same if I go into the program and use CGMiner to clock them manually.
It does not matter whether I have the auto-tune features on or off. I know the developer is a Linux guy, but has anyone run into this issue yet? I would love to be able to use cgminer to clock my cards and start mining with a nice batch file (more time for Madden 2012). Any ideas?
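For the batch file idea, a minimal sketch is something like the following (pool URL, worker credentials, and clock values are placeholders; the flag names are the ones listed in the 2.0 README and should be double-checked against cgminer --help):
  @echo off
  rem placeholder pool/worker details and example clock values only
  cgminer.exe -o http://pool.example.com:8332 -u worker -p pass ^
    --gpu-engine 950 --gpu-memclock 300 --auto-fan --temp-target 75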
|
|
|
|
os2sam
Legendary
Offline
Activity: 3586
Merit: 1098
Think for yourself
|
|
September 07, 2011, 01:33:54 PM |
|
2.0.0 works really well on Win 7, but not on Win XP. I have an XP box with an HD 5770, and everything works well until I try to close the program by pressing Q. After hitting Q, I get a system message saying that cgminer.exe encountered an error, but the program still runs afterwards. Clicking X to close the window closes the program, but the OC clocks don't change back to the original values, and the memory clock stays the same even after a restart.
I get the same behavior on my WinXP system. But the program seems to work fine.
|
A: Because it messes up the order in which people normally read text. Q: Why is top-posting such a bad thing? A: Top-posting. Q: What is the most annoying thing on usenet and in e-mail?
|
|
|
os2sam
Legendary
Offline
Activity: 3586
Merit: 1098
Think for yourself
|
|
September 07, 2011, 01:34:51 PM |
|
As per the README: grab AMD's ADL, unzip and copy the header (*.h) files from the "include" directory to the "cgminer/ADL_SDK" directory. After that, configure should pick up card control support and it'll be available in the compiled binary.
Any other link or torrent to download ADL_SDK? I hate filling out forms for AMD. TIA
You have the option to bypass the registration and go straight to the download. Sam
|
A: Because it messes up the order in which people normally read text. Q: Why is top-posting such a bad thing? A: Top-posting. Q: What is the most annoying thing on usenet and in e-mail?
|
|
|
os2sam
Legendary
Offline
Activity: 3586
Merit: 1098
Think for yourself
|
|
September 07, 2011, 01:38:49 PM |
|
The ADL include files are only needed for compiling it. I have compiled ADL support into the Windows binaries myself. I tried very hard to make the code work on both platforms, and even I'm surprised the Windows version works as well as the Linux one. There is no ADL support for any other operating systems though.
Thanks for the clarification. After updating to Catalyst 11.6 everything is working fine for me on WinXP. Thanks again for your work, Sam
|
A: Because it messes up the order in which people normally read text. Q: Why is top-posting such a bad thing? A: Top-posting. Q: What is the most annoying thing on usenet and in e-mail?
|
|
|
Shevek
|
|
September 07, 2011, 01:50:00 PM |
|
As per the README: grab AMD's ADL, unzip and copy the header (*.h) files from the "include" directory to the "cgminer/ADL_SDK" directory. After that, configure should pick up card control support and it'll be available in the compiled binary.
Any other link or torrent to download ADL_SDK? I hate filling out forms for AMD. TIA
You have the option to bypass the registration and go straight to the download.
Oops! Thanks a lot!
|
Proposals for improving bitcoin are like asses: everybody has one 1SheveKuPHpzpLqSvPSavik9wnC51voBa
|
|
|
MadHacker
|
|
September 07, 2011, 02:18:26 PM |
|
Now if this $%&##^ 100% CPU bug can be squashed I'll be laughing. Bloody AMD *shakes fist*.
Just out of curiosity... couldn't a Sleep(1); be added to each thread? Wouldn't that fix the 100% CPU bug? 1 ms shouldn't affect the mining at any significant level.
|
|
|
|
-ck (OP)
Legendary
Offline
Activity: 4284
Merit: 1645
Ruu \o/
|
|
September 07, 2011, 03:28:22 PM |
|
Now if this $%&##^ 100% CPU bug can be squashed I'll be laughing. Bloody AMD *shakes fist*.
Just out of curiosity... couldn't a Sleep(1); be added to each thread? Wouldn't that fix the 100% CPU bug? 1 ms shouldn't affect the mining at any significant level.
It's while the GPU code is executing that the CPU usage is high, due to the driver consuming useless cycles. Sleeping when it comes back to the CPU will do nothing for that.
|
Developer/maintainer for cgminer, ckpool/ckproxy, and the -ck kernel 2% Fee Solo mining at solo.ckpool.org -ck
|
|
|
sharky112065
|
|
September 07, 2011, 03:56:52 PM |
|
Hi Gents
Just want to report a possible bug here of some kind.
---Snip---
But if I use CGMiner to clock my cards, I am getting significantly less performance like 20 Mhash per card difference (total of 3 cards in machine)
Cgminer cannot downclock the memory as much as MSI Afterburner or Trixx can on some cards. So if that is happening and you do not have enough power, you may be exceeding the power you need to get a steady Mhash out of your cards. That is what happened to me anyway. The developer tells me it is because he uses the ATI stuff to change settings. MSI Afterburner and Trixx bypass the ATI stuff and change some settings on their own directly.
|
Donations welcome: 12KaKtrK52iQjPdtsJq7fJ7smC32tXWbWr
|
|
|
os2sam
Legendary
Offline
Activity: 3586
Merit: 1098
Think for yourself
|
|
September 07, 2011, 04:16:05 PM |
|
Hi Gents
Just want to report a possible bug here of some kind.
---Snip---
But if I use CGMiner to clock my cards, I am getting significantly less performance like 20 Mhash per card difference (total of 3 cards in machine)
I'm using WinXP with 11.6 and am using it to overclock a Radeon 5770 and 5830 to 950MHz and underclock the memory to 300MHz, and I verified that the settings took with GPU Shark. I could not do that with the ATI CCC. And my hash rate has improved by 60 to 70Mh/s for the pair of GPUs. Sam
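For reference, the kind of command line that produces those settings looks roughly like this (per the README, comma-separated values apply to the GPUs in order; the pool details are placeholders and the exact flags should be checked against cgminer --help):
  cgminer.exe -o http://pool.example.com:8332 -u worker -p pass ^
    --gpu-engine 950,950 --gpu-memclock 300,300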
|
A: Because it messes up the order in which people normally read text. Q: Why is top-posting such a bad thing? A: Top-posting. Q: What is the most annoying thing on usenet and in e-mail?
|
|
|
cablepair
|
|
September 07, 2011, 04:36:25 PM |
|
Hi Gents
Just want to report a possible bug here of some kind.
---Snip---
But if I use CGMiner to clock my cards, I am getting significantly less performance like 20 Mhash per card difference (total of 3 cards in machine)
I'm using WinXP with 11.6 and am using it to overclock a Radeon 5770 and 5830 to 950MHz and underclock the memory to 300MHz, and I verified that the settings took with GPU Shark. I could not do that with the ATI CCC. And my hash rate has improved by 60 to 70Mh/s for the pair of GPUs. Sam
Well, for one thing, ATI CCC is completely useless IMO anyway. But:
Hi Gents
Just want to report a possible bug here of some kind.
---Snip---
But if I use CGMiner to clock my cards, I am getting significantly less performance like 20 Mhash per card difference (total of 3 cards in machine)
Cgminer cannot downclock the memory as much as MSI Afterburner or Trixx can on some cards. So if that is happening and you do not have enough power, you may be exceeding the power you need to get a steady Mhash out of your cards. That is what happened to me anyway. The developer tells me it is because he uses the ATI stuff to change settings. MSI Afterburner and Trixx bypass the ATI stuff and change some settings on their own directly.
You may have an interesting point about cgminer not being able to downclock the memory; is it because it can't downclock past 300? You could be right. I was trying to downclock my memory to 180, which is the ideal spot for the cards I was testing this on. I don't think it has anything to do with power, though; this rig has a Corsair 1200 watt PSU. But I bet you're right about the downclock thing.
ckolivas: can you verify that cgminer can't downclock memory past 300? This would be a feature I would very much like. I know the mainstream thought is that 300 is the sweet spot for memory, and I think this myth exists because it is the lowest point you could downclock to in some early versions of software. In any case, I have 9 GPUs and have been mining since BTC was worth 0.85 USD. My point being: I have thoroughly tested these cards every which way possible, and although one of my cards prefers 300 for memclock (my XFX 5830, which I hate BTW), all my 6870s and 5870s LOVE 180, plus I am getting some energy savings there (even if it's not a lot, it adds up).
|
|
|
|
The00Dustin
|
|
September 07, 2011, 04:40:23 PM Last edit: September 09, 2011, 09:53:40 AM by The00Dustin |
|
When I build 2.0.0 (with the original or the newer -1 source), it works fine as long as I don't add the ADL header files to the ADL_SDK folder. When I do that, I get this at the end of the make step:
/usr/bin/ld: cgminer-adl.o: undefined reference to symbol 'dlclose@@GLIBC_2.2.5'
/usr/bin/ld: note: 'dlclose@@GLIBC_2.2.5' is defined in DSO /lib64/libdl.so.2 so try adding it to the linker command line
/lib64/libdl.so.2: could not read symbols: Invalid operation
collect2: ld returned 1 exit status
make[2]: *** [cgminer] Error 1
make[2]: Leaving directory `/usr/src/cgminer-2.0.0'
make[1]: *** [all-recursive] Error 1
make[1]: Leaving directory `/usr/src/cgminer-2.0.0'
make: *** [all] Error 2
This is running Fedora 15 with GLIBC 2.14-4. I'm guessing that is the problem, and I'm guessing it is due to the AMD header files and outside of ck's control, but I wanted to report it just in case. I may look for some repo with a newer GLIBC at some point if I can find time, but in the meantime, FYI: it would appear that you have to have a pretty recent version of GLIBC to compile the GPU monitoring support.
EDIT: I actually resolved this issue by defining LDFLAGS to point at ..../ati-stream-sdk-v2.1-lnx64/lib/x86_64/ so I guess that means my guess was wrong. For further clarification: to get this running on Fedora 15 x86_64, I am running ./configure, gathering the CFLAGS and LDFLAGS settings from the Makefile, and re-running ./configure with CFLAGS using those settings plus a -I..../ati-stream-sdk-v2.1-lnx64/..../includes/ setting and LDFLAGS using those settings plus a -L..../ati-stream-sdk-v2.1-lnx64/lib/x86_64/ setting (where the .... sections indicate paths that may vary per machine, which I don't remember and can't see at this very moment). Finally, because Fedora 15's JSON library is too old, I am editing the Makefile to use the source's included JSON instead of my installed JSON (I haven't tried setting JSON_INCLUDES for ./configure yet).
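For anyone hitting the same libdl error, the workaround above boils down to a configure invocation roughly like this (the SDK path is a placeholder for wherever the ATI Stream SDK lives; adding -ldl explicitly to LDFLAGS is the other common fix for this class of linker error, though that part is an assumption, not something tested here):
  ./configure CFLAGS="-O2 -I/path/to/ati-stream-sdk-v2.1-lnx64/include" \
              LDFLAGS="-L/path/to/ati-stream-sdk-v2.1-lnx64/lib/x86_64 -ldl"
  make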
|
|
|
|
Sekioh
|
|
September 07, 2011, 04:49:56 PM |
|
I know the main stream thought is 300 is the sweet spot for mem, and I think this myth exists because it is the lowest point you can downclock in some early versions of software.
Can we not test this with some clever CUDA/OpenCL code? Run a light GPU loop that's heavy on memory operations for X amount of seconds, count ops/sec and write the result back from the GPU to the CPU to write to disk, then throttle the memory down further and repeat; you should be able to see a bottleneck at some point.
(In a slightly off-topic but related issue: I can't clock my card. I have the drivers installed and can mine, but EVERY clocking app {cgminer, overdrive, CCC (won't even start), clock tool} either doesn't run or has all sliders grayed out... nobody's helping in the technical forums D:> I tried different drivers and CCC versions 11.5 through 11.8, and it can't be messed-up installs; I got frustrated and even reformatted and reinstalled Windows and fresh-installed the drivers for two of the versions, .6 and .
|
|
|
|
toasty
Member
Offline
Activity: 90
Merit: 12
|
|
September 07, 2011, 05:09:52 PM |
|
The new GPU features are awesome! A few suggestions/requests:
1) Dual GPU cards don't get adjusted/disabled correctly. For example, a 5970 will only disable the one GPU that has the temp sensor:
GPU 3: [80.5 C] [DISABLED /78.5 Mh/s] [Q:2 A:11 R:1 HW:0 E:550% U:1.81/m]
GPU 4: [327.3/324.5 Mh/s] [Q:25 A:23 R:1 HW:0 E:92% U:3.78/m]
2) It'd be awesome if cgminer would record the current clocks on startup, and restore them on exit if the auto tuning changed them at all.
3) Pressing "G" when you have more than a couple of cards makes it impossible to read the output, because the window the output is displayed to is too small unless you have a HUGE screen.
4) I'd love a way to specify different temperature thresholds per GPU on the command line. If I have different model cards in there, they have different points where they're happy. 5770s get crashy above 90-95, where 5970 and 6990 cards idle near there at times.
5) My ideal dream would be a way of somehow saying "Any 5970 cards you see, set the temperature thresholds to X/Y, the voltage to Z, etc. Any 5770 cards, the temperature threshold is..." so that I don't have to look up which cards are in which system, just to pass that along to cgminer.
6) Temperatures >100C should be allowed, no matter how bad of an idea that sounds. We have some cards that go up to 105-107C without issue.
7) Specifying an overclock/underclock range that cgminer is allowed to adjust the clock in would be handy. One step further, having it attempt to determine (maybe even saved into a local file) how high the clock was able to go without problems, and self-tuning the max clock rate while under the threshold temperature.
|
|
|
|
exahash
|
|
September 07, 2011, 05:26:30 PM |
|
One step further, having it attempt to determine (maybe even saved into a local file) how high the clock was able to go without problems, and self-tuning the max clock rate while under the threshold temperature.
^^^ That's the ticket. Plus keep track of how long it was able to run at that clock rate and use that info to drive the adjustments. I've got a bunch of cards that don't like to run for more than 20-30 mins at elevated clocks, and it can take a couple of days to dial them in.
|
|
|
|
toasty
Member
Offline
Activity: 90
Merit: 12
|
|
September 07, 2011, 06:38:25 PM |
|
[2011-09-07 13:36:44] Overheat detected, increasing fan to 100%
[2011-09-07 13:36:46] Overheat detected, increasing fan to 100%
[2011-09-07 13:36:49] Overheat detected, increasing fan to 100%
[2011-09-07 13:36:51] Overheat detected, increasing fan to 100%
[2011-09-07 13:36:53] Overheat detected, increasing fan to 100%
[2011-09-07 13:36:55] Overheat detected, increasing fan to 100%
This should probably identify which GPU it's talking about, and maybe have some kind of throttling added to it, if a card's temperature is wiggling around the threshold.
|
|
|
|
os2sam
Legendary
Offline
Activity: 3586
Merit: 1098
Think for yourself
|
|
September 07, 2011, 06:59:40 PM |
|
I know the main stream thought is 300 is the sweet spot for mem, and I think this myth exists because it is the lowest point you can downclock in some early versions of software.
(In a slightly off-topic but related issue: I can't clock my card. I have the drivers installed and can mine, but EVERY clocking app {cgminer, overdrive, CCC (won't even start), clock tool} either doesn't run or has all sliders grayed out... nobody's helping in the technical forums D:> I tried different drivers and CCC versions 11.5 through 11.8, and it can't be messed-up installs; I got frustrated and even reformatted and reinstalled Windows and fresh-installed the drivers for two of the versions, .6 and .
Sorry if I'm stating the overly obvious, but in the ATI CCC did you unlock the overclocking page? When I first started messing with this stuff I looked at the overclock page a bunch of times before I realized that the lock was actually a button. I was really irritated about it being grayed out too. Sam
|
A: Because it messes up the order in which people normally read text. Q: Why is top-posting such a bad thing? A: Top-posting. Q: What is the most annoying thing on usenet and in e-mail?
|
|
|
The00Dustin
|
|
September 07, 2011, 07:03:40 PM |
|
2) It'd be awesome if cgminer would record the current clocks on startup, and restore them on exit if the auto tuning changed them at all.
Are you having trouble in that department, or did you just miss this?
STARTUP / SHUTDOWN: When cgminer starts up, it tries to read off the current profile information for clock and fan speeds and stores these values. When quitting cgminer, it will then try to restore the original values. Changing settings outside of cgminer while it's running may be reset to the startup cgminer values when cgminer shuts down because of this.
|
|
|
|
os2sam
Legendary
Offline
Activity: 3586
Merit: 1098
Think for yourself
|
|
September 07, 2011, 07:08:54 PM |
|
The new GPU features are awesome! A few suggestions/requests:
3) Pressing "G" when you have more than a couple of cards makes it impossible to read the output, because the window the output is displayed to is too small unless you have a HUGE screen.
In Windoze I created a shortcut and changed the layout properties to 55 rows (window height) instead of the default 25. Now I can see the info for both of my GPUs at once. You can also modify the pixels per character under the Fonts tab in conjunction with the layout window height to get more window real estate. Sam
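If editing shortcut properties is a pain, a standard cmd.exe command in the batch file that launches cgminer does much the same thing by resizing the console before cgminer starts (untested with cgminer's display specifically):
  mode con cols=80 lines=55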
|
A: Because it messes up the order in which people normally read text. Q: Why is top-posting such a bad thing? A: Top-posting. Q: What is the most annoying thing on usenet and in e-mail?
|
|
|
|