flound1129
|
 |
July 26, 2013, 04:17:06 AM |
|
I have finally gotten some mining going after a lot of fooling around with drivers. I'm running XP 32-bit, Radeon 6670s, SDK 2.5 and Catalyst 11.7. I have two machines that I believe are almost identical, and one is running fine at 94 khash/sec.
The big problem is the other machine, which seems to be getting a lot of hardware errors (HW). Anyone have any ideas what can cause this? I'm just running it standard with -I 15 and have tried fooling around with the core clock and memory. Thanks for the help; I just don't understand what can cause HW errors.
Btw this is scrypt mining, coincidentally at flound's pool, multipool.in. And yes, this stratum issue is really annoying; it hits both my miners and causes frequent disconnects, which I can only assume are slowing my hashing. I've never gotten good results with I > 13 on any of my miners.
|
Multipool - Always mine the most profitable coin - Scrypt, X11 or SHA-256!
|
|
|
kano
Legendary
Offline
Activity: 4676
Merit: 1858
Linux since 1997 RedHat 4
|
 |
July 26, 2013, 05:50:33 AM |
|
Why does this happen? https://i.imgur.com/BPqpMjU.png In cgminer I have queue="1"; why is cgminer continuing to submit difficulty 16 shares after the pool diff changes to 32? This is in 3.3.1.
Regardless of the value of queue, it is not ideal for cgminer to throw away work it is doing (or has just done) when the diff changes. Tell the pool to fix their code. ... It's an issue with stratum that was swept under the carpet with a hack.
I am the pool, and this code isn't live yet. I'm trying to patch stratum to do it the correct way. What do you suggest? Obviously you can't reject valid work you sent to the miner.
How long will a miner take to mine work? Ignoring crappy slow hardware below 100 MH/s ... 100 MH/s will take 43 seconds. The stratum flaw is that difficulty is not part of the work. Read around here: https://bitcointalk.org/index.php?topic=108533.msg1287015#msg1287015 . It should be part of that. The hack is to send them together ... though of course that doesn't guarantee they will arrive together ... Either way, the point is that when you get work and have a difficulty, any work that reaches that difficulty should be accepted by the pool. As mentioned in that link, ignoring those shares, e.g. on a PPS pool, is simply the pool ripping off the miners.
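The 43-second figure is just the time to sweep one full nonce range: a unit of work covers 2^32 nonces, so at 100 MH/s it is exhausted in about 43 seconds. Plain arithmetic, no cgminer internals assumed:

```python
# Time for one device to sweep a full unit of work.
NONCE_SPACE = 2 ** 32   # a work item covers the whole 32-bit nonce range
hashrate = 100e6        # 100 MH/s

seconds = NONCE_SPACE / hashrate
print(round(seconds))   # -> 43
```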
|
|
|
|
San1ty
|
 |
July 26, 2013, 08:03:47 AM |
|
Hi All,
I'm currently running cgminer on two 7950s (NO CROSSFIRE) with the following settings:
--scrypt --api-listen --api-network --api-port 4028 -I 20,20 -g 1 --thread-concurrency 22400 --lookup-gap 2 --temp-target 75,75 --temp-overheat 80,80 --temp-cutoff 85,85 --gpu-memclock 1250,1250 --gpu-engine 300-1100,300-1100 --auto-fan --auto-gpu
I'm getting extremely frustrated with cgminer results as they make no sense to me, for example:
GPU: 1, Rate: 310 kh/s, Temp: 75 °C, Fan Percent: 37%, GPU Clock: 1100, Mem Clock: 1250, Intensity: 20, HW Errors: 0
GPU: 2, Rate: 660 kh/s, Temp: 76 °C, Fan Percent: 100%, GPU Clock: 500, Mem Clock: 1250, Intensity: 20, HW Errors: 0
How the hell is it possible that GPU 2 has a higher hashrate with the lower GPU clock? Why is the fan on GPU 2 at 100%? Why isn't the fan percent going up on GPU 1 when the temps are about equal? Why isn't the GPU clock of GPU 1 throttling down?
This is driving me nuts; I have gone through the documentation and numerous forum posts with no help.
Please, Anyone, Help me out!
Cheers,
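Since the command line above already enables cgminer's API (--api-listen on port 4028), per-GPU numbers like these can be polled instead of eyeballed. The sketch below parses the pipe-and-comma text format of an API reply; the sample string is shaped like a `devs` reply but is made up, not captured from a real rig:

```python
def parse_api_reply(reply):
    """Split a cgminer text-format API reply (records separated by '|',
    fields by ',') into a list of field dicts."""
    records = []
    for chunk in reply.strip("|").split("|"):
        fields = {}
        for pair in chunk.split(","):
            key, _, value = pair.partition("=")
            fields[key] = value
        records.append(fields)
    return records

# Illustrative sample only, not captured from a real rig.
sample = ("STATUS=S,Code=9,Msg=2 GPU(s)|"
          "GPU=0,Temperature=75.00,Fan Percent=37,GPU Clock=1100|"
          "GPU=1,Temperature=76.00,Fan Percent=100,GPU Clock=500|")
gpus = [r for r in parse_api_reply(sample) if "GPU" in r]
print(gpus[1]["Fan Percent"])   # -> 100
```

In practice the reply would come from a TCP socket connected to 127.0.0.1:4028 rather than a literal string.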
|
Found my posts helpful? Consider buying me a beer :-)!: BTC - 1San1tyUGhfWRNPYBF4b6Vaurq5SjFYWk NXT - 17063113680221230777
|
|
|
Aurum
|
 |
July 26, 2013, 08:44:39 AM |
|
Hi All,
I'm currently running cgminer on two 7950s (NO CROSSFIRE) with the following settings:
--scrypt --api-listen --api-network --api-port 4028 -I 20,20 -g 1 --thread-concurrency 22400 --lookup-gap 2 --temp-target 75,75 --temp-overheat 80,80 --temp-cutoff 85,85 --gpu-memclock 1250,1250 --gpu-engine 300-1100,300-1100 --auto-fan --auto-gpu
I'm getting extremely frustrated with cgminer results as they make no sense to me, for example:
GPU: 1, Rate: 310 kh/s, Temp: 75 °C, Fan Percent: 37%, GPU Clock: 1100, Mem Clock: 1250, Intensity: 20, HW Errors: 0
GPU: 2, Rate: 660 kh/s, Temp: 76 °C, Fan Percent: 100%, GPU Clock: 500, Mem Clock: 1250, Intensity: 20, HW Errors: 0
How the hell is it possible that GPU 2 has a higher hashrate with the lower GPU clock? Why is the fan on GPU 2 at 100%? Why isn't the fan percent going up on GPU 1 when the temps are about equal? Why isn't the GPU clock of GPU 1 throttling down?
This is driving me nuts; I have gone through the documentation and numerous forum posts with no help.
Please, Anyone, Help me out! Cheers,
I don't know the answer to your questions, but I never had a lick of luck with auto-gpu. All my 7970s run great on this configuration:
"scrypt" : true,
"intensity" : "13,13",
"vectors" : "1",
"worksize" : "256",
"lookup-gap" : "2",
"thread-concurrency" : "10240,10240",
"shaders" : "2048,2048",
"gpu-engine" : "1050,1050",
"gpu-fan" : "0-85",
"gpu-memclock" : "1650,1650",
"gpu-vddc" : "0,0",
"temp-cutoff" : "95",
"temp-overheat" : "88",
"temp-target" : "75",
"temp-hysteresis" : "9",
"api-port" : "4028",
"auto-fan" : true,
"expiry" : "120",
"scan-time" : "111",
"queue" : "1",
"failover-only" : true,
"gpu-dyninterval" : "77",
"gpu-threads" : "1",
"hotplug" : "0",
"log" : "5",
"shares" : "0",
"kernel-path" : "/usr/local/bin"
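A settings blob like the one above is just the body of cgminer's JSON .conf file, so it can be sanity-checked with any JSON parser before restarting the miner. A minimal sketch (only a few of the keys are shown):

```python
import json

# cgminer .conf files are plain JSON; wrap the pasted key/value lines
# in braces and parse them to catch typos before launching the miner.
conf_text = """{
  "scrypt": true,
  "intensity": "13,13",
  "thread-concurrency": "10240,10240",
  "gpu-engine": "1050,1050",
  "gpu-memclock": "1650,1650",
  "auto-fan": true
}"""

conf = json.loads(conf_text)            # raises ValueError if malformed
per_gpu = conf["intensity"].split(",")  # per-GPU options are comma-separated
print(per_gpu)                          # -> ['13', '13']
```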
|
ghghghfgh
|
|
|
Aurum
|
 |
July 26, 2013, 09:02:39 AM |
|
--scrypt --api-listen --api-network --api-port 4028 -I 20,20 -g 1 --thread-concurrency 22400 --lookup-gap 2 --temp-target 75,75 --temp-overheat 80,80 --temp-cutoff 85,85 --gpu-memclock 1250,1250 --gpu-engine 300-1100,300-1100 --auto-fan --auto-gpu
Another thing: I see you're putting all this on a command line. Better to minimize your batch file and edit your conf file. Here's my batch:
timeout /t 20
cgminer.exe -c MultiPool_Miner_Aurum.conf
pause
exit
I make a shortcut and put it in my Startup folder. I set my BIOS so that if the power goes out it restarts. The timeout is to allow my miner to discover a network connection and finish loading stuff before launching cgminer. The pause gives me a chance to read the run summary before closing the cmd window.
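For the same start-at-boot idea, the blind 20-second timeout can be replaced with an actual connectivity check before launching cgminer. A sketch in Python; the pool host/port and the `probe` hook are illustrative assumptions, not anything cgminer provides:

```python
import socket
import time

def wait_for_network(host="multipool.in", port=3333, timeout=60, probe=None):
    """Poll until a TCP connection to the pool succeeds or `timeout` expires.
    Plays the role of the fixed delay in the batch file, but checks real
    connectivity. `probe` is a test hook; host/port are placeholders."""
    if probe is None:
        def probe():
            try:
                socket.create_connection((host, port), timeout=2).close()
                return True
            except OSError:
                return False
    deadline = time.time() + timeout
    while time.time() < deadline:
        if probe():
            return True
        time.sleep(1)
    return False

# Once this returns True, launch cgminer (e.g. via subprocess).
print(wait_for_network(probe=lambda: True))   # -> True
```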
|
|
|
|
Aurum
|
 |
July 26, 2013, 09:26:21 AM |
|
Every time I go looking for answers I see no end of posts saying you must put these lines in your batch file:
setx GPU_MAX_ALLOC_PERCENT 100
setx GPU_USE_SYNC_OBJECTS 1
For Win7 users I think this is nonsense. I deleted them from my batch file and from the Windows Environment Variables, since they stay there forever until deleted. If I open Windows Resource Monitor, I use significantly less CPU on a multi-core CPU than if I set GPU_USE_SYNC_OBJECTS 1, which as best as I can tell forces the use of a single core. Aren't CPUs designed to direct traffic? Why not let them do their job? Most of my miners use a low-power AMD Sempron 145 CPU, which only has one core anyway. Why tell Windows to use only one core when there is only one core?
As for GPU_MAX_ALLOC_PERCENT 100, it seems to do nothing. Try running for 24 hours with it set to 100, then 40, then deleting it. I don't see any difference.
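A middle ground in the setx debate: set the two OpenCL variables only for the miner process, so nothing is written permanently into the user's environment the way `setx` does. A sketch (the launch line is commented out; the cgminer path and conf name are placeholders):

```python
import os

# Build a per-process environment instead of using `setx`, which
# persists the variables until they are manually deleted.
env = dict(os.environ)
env["GPU_MAX_ALLOC_PERCENT"] = "100"
env["GPU_USE_SYNC_OBJECTS"] = "1"

# import subprocess
# subprocess.run(["cgminer.exe", "-c", "miner.conf"], env=env)
print(env["GPU_USE_SYNC_OBJECTS"])   # -> 1
```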
|
|
|
|
San1ty
|
 |
July 26, 2013, 09:33:39 AM |
|
Every time I go looking for answers I see no end of posts saying you must put these lines in your batch file:
setx GPU_MAX_ALLOC_PERCENT 100
setx GPU_USE_SYNC_OBJECTS 1
For Win7 users I think this is nonsense. I deleted them from my batch file and from the Windows Environment Variables, since they stay there forever until deleted. If I open Windows Resource Monitor, I use significantly less CPU on a multi-core CPU than if I set GPU_USE_SYNC_OBJECTS 1, which as best as I can tell forces the use of a single core. Aren't CPUs designed to direct traffic? Why not let them do their job? Most of my miners use a low-power AMD Sempron 145 CPU, which only has one core anyway. Why tell Windows to use only one core when there is only one core?
As for GPU_MAX_ALLOC_PERCENT 100, it seems to do nothing. Try running for 24 hours with it set to 100, then 40, then deleting it. I don't see any difference.
Thank you very much for your advice; I'll try excluding --auto-gpu (although I think it's a long shot). I have set those two user variables in Windows 8, but like you said, I'm not sure they're doing much. Also, what I didn't mention yet is my OS spec: Windows 8 64-bit, Catalyst drivers 13.1. Anything wrong with that?
|
|
|
|
Aurum
|
 |
July 26, 2013, 09:44:52 AM |
|
Every time I go looking for answers I see no end of posts saying you must put these lines in your batch file:
setx GPU_MAX_ALLOC_PERCENT 100
setx GPU_USE_SYNC_OBJECTS 1
For Win7 users I think this is nonsense. I deleted them from my batch file and from the Windows Environment Variables, since they stay there forever until deleted. If I open Windows Resource Monitor, I use significantly less CPU on a multi-core CPU than if I set GPU_USE_SYNC_OBJECTS 1, which as best as I can tell forces the use of a single core. Aren't CPUs designed to direct traffic? Why not let them do their job? Most of my miners use a low-power AMD Sempron 145 CPU, which only has one core anyway. Why tell Windows to use only one core when there is only one core?
As for GPU_MAX_ALLOC_PERCENT 100, it seems to do nothing. Try running for 24 hours with it set to 100, then 40, then deleting it. I don't see any difference.
Thank you very much for your advice; I'll try excluding --auto-gpu (although I think it's a long shot). I have set those two user variables in Windows 8, but like you said, I'm not sure they're doing much. Also, what I didn't mention yet is my OS spec: Windows 8 64-bit, Catalyst drivers 13.1. Anything wrong with that?
Hi, your main problem is twofold: intensity is way too high for 7xxx GPUs, and thread concurrency is too high. I have 32-bit and 64-bit Windows 7 PCs and have never tried Win8. For 7XXXs I'd recommend uninstalling Catalyst 13.1, running AMD's cleanup utility, running Guru3D Driver Sweeper, and then installing Catalyst 13.4. I never tried 13.1 and it may work just fine, but 7XXXs run great with the latest drivers, so I just stay current. Just my two cents' worth.
|
|
|
|
HellDiverUK
|
 |
July 26, 2013, 09:45:57 AM |
|
Every time I go looking for answers I see no end of posts saying you must put these lines in your batch file:
setx GPU_MAX_ALLOC_PERCENT 100
setx GPU_USE_SYNC_OBJECTS 1
For Win7 users I think this is nonsense.
As for GPU_MAX_ALLOC_PERCENT 100, it seems to do nothing. Try running for 24 hours with it set to 100, then 40, then deleting it. I don't see any difference.
I hear what you're saying, but you're wrong. You may have found that, but I couldn't mine on a machine with 4 GB RAM and 2x7950 until I added the setx commands. Yes, it should be possible to just do it once, but Windows being Windows does like to change stuff (be it some other application, Windows Update, or even the ATI driver). It does no harm to leave the commands there. Really, people wouldn't be recommending it if it didn't work.
|
|
|
|
HellDiverUK
|
 |
July 26, 2013, 09:47:58 AM |
|
For 7XXXs I'd recommend uninstalling Catalyst 13.1, then installing Catalyst 13.4. 7XXXs run great with the latest drivers, so I just stay current.
Disagree. 12.8 is still the most consistently stable version for me. They literally just work. Tested on Windows 7 machines: one with 2x7950, another with a 6970, and another with a 7850 and a 7770.
|
|
|
|
San1ty
|
 |
July 26, 2013, 09:55:29 AM |
|
For 7XXXs I'd recommend uninstalling Catalyst 13.1, then installing Catalyst 13.4. 7XXXs run great with the latest drivers, so I just stay current.
Disagree. 12.8 is still the most consistently stable version for me. They literally just work. Tested on Windows 7 machines: one with 2x7950, another with a 6970, and another with a 7850 and a 7770.
Do you need connected monitors or resistors when using 12.8? What's the download link for 12.8 with the SDK included?
|
|
|
|
rav3n_pl
Legendary
Offline
Activity: 1361
Merit: 1003
Don`t panic! Organize!
|
 |
July 26, 2013, 10:00:07 AM |
|
Win7 x64, drivers 13.4: a 7970 gives me 714 kh/s at 1050/1500 settings on cgminer 3.2.1.
|
|
|
|
HellDiverUK
|
 |
July 26, 2013, 10:11:29 AM |
|
Do you need to have connected monitors or resistors when using 12.8? What's the download link for 12.8 with included SDK?
No, I've NEVER needed to run dummy plugs or any of that crap. My GPU rigs all use the Intel iGPU for display; the AMD cards do mining only. There's a "Previous Versions" link on the right-hand side of AMD's driver download page; all older versions are available there.
|
|
|
|
Jcw188
|
 |
July 26, 2013, 12:36:22 PM |
|
I have finally gotten some mining going after a lot of fooling around with drivers. I'm running XP 32-bit, Radeon 6670s, SDK 2.5 and Catalyst 11.7. I have two machines that I believe are almost identical, and one is running fine at 94 khash/sec.
The big problem is the other machine, which seems to be getting a lot of hardware errors (HW). Anyone have any ideas what can cause this? I'm just running it standard with -I 15 and have tried fooling around with the core clock and memory. Thanks for the help; I just don't understand what can cause HW errors.
Btw this is scrypt mining, coincidentally at flound's pool, multipool.in. And yes, this stratum issue is really annoying; it hits both my miners and causes frequent disconnects, which I can only assume are slowing my hashing. I've never gotten good results with I > 13 on any of my miners.
I tried it with lower intensity and was still getting HW errors. What can cause HW errors? I was reading about fooling around with thread concurrency or shaders to try to fix this. It's just that this machine is almost the same as the other one, and the other one works great, so I don't know what could be different on this one causing HW errors.
|
|
|
|
kano
Legendary
Offline
Activity: 4676
Merit: 1858
Linux since 1997 RedHat 4
|
 |
July 26, 2013, 01:41:25 PM |
|
I have finally gotten some mining going after a lot of fooling around with drivers. I'm running XP 32-bit, Radeon 6670s, SDK 2.5 and Catalyst 11.7. I have two machines that I believe are almost identical, and one is running fine at 94 khash/sec.
The big problem is the other machine, which seems to be getting a lot of hardware errors (HW). Anyone have any ideas what can cause this? I'm just running it standard with -I 15 and have tried fooling around with the core clock and memory. Thanks for the help; I just don't understand what can cause HW errors.
Btw this is scrypt mining, coincidentally at flound's pool, multipool.in. And yes, this stratum issue is really annoying; it hits both my miners and causes frequent disconnects, which I can only assume are slowing my hashing. I've never gotten good results with I > 13 on any of my miners.
I tried it with lower intensity and was still getting HW errors. What can cause HW errors? I was reading about fooling around with thread concurrency or shaders to try to fix this. It's just that this machine is almost the same as the other one, and the other one works great, so I don't know what could be different on this one causing HW errors.
It's not fooling around with settings; it's doing what you are supposed to do, as the SCRYPT-README tells you to. Settings and performance on scrypt are affected by many things: how much available RAM the computer has, how much RAM the GPU has, how many GPUs you are trying to run, the PCI-e bandwidth of the motherboard, and of course the driver versions, which also depend on which GPUs you are using ...
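One concrete way GPU RAM enters the picture: assuming the standard Litecoin scrypt parameters (N=1024, r=1), each concurrent thread needs a 128 KiB scratchpad, reduced by lookup-gap, so thread-concurrency largely fixes the GPU buffer cgminer must allocate. The arithmetic below is under that assumption, not a value read out of cgminer:

```python
KIB = 1024
SCRATCHPAD = 128 * KIB   # per-thread scratchpad for scrypt N=1024, r=1

def gpu_buffer_bytes(thread_concurrency, lookup_gap=2):
    """Rough GPU buffer needed for scrypt at a given thread-concurrency;
    lookup-gap trades memory for extra recomputation."""
    return thread_concurrency * SCRATCHPAD // lookup_gap

print(gpu_buffer_bytes(22400) / KIB ** 3)   # -> 1.3671875 GiB (TC 22400)
print(gpu_buffer_bytes(10240) // KIB ** 2)  # -> 640 MiB (TC 10240)
```

Under this assumption the TC 22400 settings earlier in the thread ask for roughly 1.4 GiB per GPU, which helps explain why tuning TC up or down changes stability so much.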
|
|
|
|
Aurum
|
 |
July 26, 2013, 03:18:34 PM |
|
Is "gpu-dyninterval" : "7" only used when using "intensity" : "d"?
|
|
|
|
Aurum
|
 |
July 26, 2013, 03:23:28 PM |
|
Win7 x64 drivers 13.4 7970 gives me 714kh/s on 1050/1500 settings on cgminer 3.2.1
What TC, I, and temp-target are you using, and are you changing the GPU Vddc? 1050/1500 won't even run on my 7970s.
|
|
|
|
Jcw188
|
 |
July 26, 2013, 05:33:44 PM |
|
I have finally gotten some mining going after a lot of fooling around with drivers. I'm running XP 32-bit, Radeon 6670s, SDK 2.5 and Catalyst 11.7. I have two machines that I believe are almost identical, and one is running fine at 94 khash/sec.
The big problem is the other machine, which seems to be getting a lot of hardware errors (HW). Anyone have any ideas what can cause this? I'm just running it standard with -I 15 and have tried fooling around with the core clock and memory. Thanks for the help; I just don't understand what can cause HW errors.
Btw this is scrypt mining, coincidentally at flound's pool, multipool.in. And yes, this stratum issue is really annoying; it hits both my miners and causes frequent disconnects, which I can only assume are slowing my hashing. I've never gotten good results with I > 13 on any of my miners.
I tried it with lower intensity and was still getting HW errors. What can cause HW errors? I was reading about fooling around with thread concurrency or shaders to try to fix this. It's just that this machine is almost the same as the other one, and the other one works great, so I don't know what could be different on this one causing HW errors.
It's not fooling around with settings; it's doing what you are supposed to do, as the SCRYPT-README tells you to. Settings and performance on scrypt are affected by many things: how much available RAM the computer has, how much RAM the GPU has, how many GPUs you are trying to run, the PCI-e bandwidth of the motherboard, and of course the driver versions, which also depend on which GPUs you are using ...
Ok, thanks. The readme says you must use SDK 2.6 or later. I have 2.5 running scrypt on a similar machine, so I figured it would work on this machine too. I think I will try the --shaders command and lowering parameters like intensity, and then try SDK 2.6, though that wasn't working at all for me last time I tried.
|
|
|
|
SkyNet
Member

Offline
Activity: 80
Merit: 10
|
 |
July 26, 2013, 07:06:23 PM |
|
I am getting a quite strange error message when trying to scrypt mine with 3x 7950s in an older Core2Duo machine. Without setting ANY parameters (not even intensity), just pool data, I am getting the following error:
Error -5: Enqueueing kernel onto command queue. (clEnqueueNDRangeKernel)
All 3 GPUs are set to OFF and there is no mining whatsoever. Drivers ver 12.8 and SDK 2.7. Any idea what that could be? Could it be that there is an issue with the machine having only 2 GB of RAM?
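Error -5 is an OpenCL status code: in the OpenCL headers, -5 is CL_OUT_OF_RESOURCES, which fits the suspicion that the buffers for three 7950s don't fit on this machine. A small lookup table for decoding cgminer's "Error -N" lines (a few codes copied from cl.h):

```python
# A few OpenCL status codes from cl.h, for decoding cgminer's "Error -N" lines.
CL_ERRORS = {
    -1: "CL_DEVICE_NOT_FOUND",
    -4: "CL_MEM_OBJECT_ALLOCATION_FAILURE",
    -5: "CL_OUT_OF_RESOURCES",
    -6: "CL_OUT_OF_HOST_MEMORY",
}

print(CL_ERRORS[-5])   # -> CL_OUT_OF_RESOURCES
```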
|
Tips: 1JmQ78JprWePM3EapnacPFfAtTrob8ofmU
|
|
|
Aurum
|
 |
July 26, 2013, 07:41:17 PM |
|
I am getting a quite strange error message when trying to scrypt mine with 3x 7950s in an older Core2Duo machine. Without setting ANY parameters (not even intensity), just pool data, I am getting the following error:
Error -5: Enqueueing kernel onto command queue. (clEnqueueNDRangeKernel)
All 3 GPUs are set to OFF and there is no mining whatsoever. Drivers ver 12.8 and SDK 2.7. Any idea what that could be? Could it be that there is an issue with the machine having only 2 GB of RAM?
"Without setting ANY parameters..." So you don't have any *.conf in your cgminer folder? "All 3 GPUs are set to OFF..." How did you set the GPUs to off if you didn't set any parameters? As for memory, I keep hearing how scrypt mining is very RAM-intensive; I've heard you need more RAM than you have video memory. I don't see that with my miners, e.g.:
Dedicated 4x7970 miner has 12 GB GDDR5 plus 10 GB DDR3, and I still have 8.69 GB available.
Dedicated 3x7970 miner has 9 GB GDDR5 plus 4 GB DDR3, and I still have 3.02 GB available.
2x7970 on my workstation, with several other programs running, has 9 GB GDDR5 plus 8 GB DDR3, and I still have 5.36 GB available.
You might want to go to 4 GB, but I suspect your problem is elsewhere.
|
|
|
|
|