Bitcoin Forum
Topic: OFFICIAL CGMINER mining software thread for linux/win/osx/mips/arm/r-pi 4.9.2  (Read 4821704 times)
Page 223 of 830
kano (Legendary, Activity: 1932, "Linux since 1997 RedHat 4")
March 04, 2012, 04:52:51 AM  #4441

...
Just a theoretical question: how many FPGA boards can cgminer run? If I had a dozen or two USB hubs (each taking a USB port + ID) and, theoretically, 100 ZTEX or Icarus boards, would that still work? (I am going to cancel my vacation, sell the car and get a loan on the house, lol, just kidding.) I am only asking because with GPUs cgminer was always limited by the fact that you can only install a few in a computer, but with FPGAs a larger number should be possible (new cgminer user speaking).
Current code says ... 32.
It's just a #define in miner.h, so it may be possible to just increase it and have it all work.
I'd expect that to be the case.

However, the API has a TODO I put in it: there could be issues with the amount of data returned by "devs".
I've not bothered to work out the limit or intercept it before it causes havoc yet, since:
the API limit is certainly above 32, so with the current code limit it's fine.

Edit: though I'm not sure what would happen if you had 32 devices and didn't use '-T' ...
The curses display may fail, since there's not enough space on a standard 24-line terminal.
At the very least, make your terminal bigger (taller) before trying 32.
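The cap described above is a single compile-time constant in miner.h. Conceptually it behaves like this sketch (cgminer itself is C, not Python, and the name MAX_DEVICES and the value 32 are assumptions for illustration):

```python
# Sketch of the compile-time device cap described above. In cgminer it
# is a #define in miner.h; the constant name and value here are
# assumptions, not the real source.
MAX_DEVICES = 32

def can_add_device(current_count):
    """Return True if one more FPGA/GPU device fits under the cap."""
    return current_count < MAX_DEVICES

print(can_add_device(31))   # a 32nd device still fits
print(can_add_device(32))   # over the cap: raise the constant and rebuild
```

Raising the constant means editing the #define and recompiling; nothing at runtime can lift it.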

Pool: https://kano.is BTC: 1KanoiBupPiZfkwqB7rfLXAzPnoTshAVmb
CKPool and CGMiner developer, IRC FreeNode #ckpool and #cgminer kanoi
Help keep Bitcoin secure by mining on pools with Stratum, the best protocol to mine Bitcoins with ASIC hardware
this time (Jr. Member, Activity: 55)
March 04, 2012, 05:17:03 PM  #4442

Is there a step I'm missing to save settings? I use S, W, Enter from the menu to save the configuration as cgminer.conf, but it does not appear to save my settings in the cgminer.conf file; it just looks like the file below. Also, when I enter values and save, they do not take effect when I start up with cgminer -c cgminer.conf

{
"pools" : [
   {
      "url" : "x",
      "user" : "x",
      "pass" : "x"
   },
   {
      "url" : "x",
      "user" : "x",
      "pass" : "x"
   },
   {
      "url" : "x",
      "user" : "x",
      "pass" : "x"
   },
   {
      "url" : "x",
      "user" : "x",
      "pass" : "x"
   }
],

"intensity" : "d",
"vectors" : "2",
"worksize" : "128",
"kernel" : "phatk",
"gpu-engine" : "0-0",
"gpu-fan" : "0-85",
"gpu-memclock" : "0",
"gpu-memdiff" : "0",
"gpu-powertune" : "0",
"gpu-vddc" : "0.000",
"temp-cutoff" : "95",
"temp-overheat" : "85",
"temp-target" : "75",
"api-port" : "4028",
"expiry" : "120",
"gpu-dyninterval" : "7",
"gpu-platform" : "0",
"gpu-threads" : "2",
"log" : "5",
"queue" : "0",
"retry-pause" : "5",
"scan-time" : "60",
"temp-hysteresis" : "3",
"shares" : "0",
"kernel-path" : "/usr/local/bin"
}
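Since cgminer reads this file back as JSON, one quick check when settings seem to vanish is to make sure the conf parses at all; a single typo such as an unclosed quote can break the whole file. A minimal sketch (the inline text stands in for reading your real cgminer.conf):

```python
import json

# Validate a cgminer.conf-style document before pointing the miner at
# it; unparsable JSON means none of the settings in it will be seen.
conf_text = """
{
  "pools": [
    {"url": "x", "user": "x", "pass": "x"}
  ],
  "intensity": "d",
  "gpu-threads": "2"
}
"""

try:
    conf = json.loads(conf_text)
    print("conf OK, keys:", sorted(conf))
except json.JSONDecodeError as exc:
    print("broken conf:", exc)
```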

kano (Legendary, Activity: 1932, "Linux since 1997 RedHat 4")
March 04, 2012, 09:42:44 PM  #4443

S/W/Enter will save it in the default cgminer.conf file
(e.g. on Linux ~/.cgminer/cgminer.conf).

It will load that as well as the one you specify with -c
(which will be the same file twice if -c points to the same file - which it doesn't in your case).

So you don't want to also specify -c on the command line if you are saving to the default one.

this time (Jr. Member, Activity: 55)
March 04, 2012, 10:48:51 PM  #4444

I should have stated that I set the values engine=775 mem=300 volts=.95 fan=40 prior to saving. The only setting actually being saved is my user/pass/pool info; nothing else shows as saved when I open cgminer.conf in Notepad. If I close and run cgminer from the command line again (with no switch), it remembers everything but my fan setting. When I restart the computer, the only info it remembers is the user/pass/pool info, which makes sense because that's all that's being written to the cgminer.conf file.


My question is what I'm doing wrong such that only the user/pass/pool info is saved and nothing else. I've also tried editing the other settings into cgminer.conf in Notepad, but that seems to have no effect; it just remembers the pool info. Originally I started it up with a .bat file with the user name and pass, if that makes a difference.
os2sam (Legendary, Activity: 1918, "Think for yourself")
March 05, 2012, 02:23:06 AM  #4445

sam: install 11.9 if you're using Win7 with 5xxx-series cards - it does not get better than that, and believe me, I have done the testing

Well, I'll run it up the flagpole and see what happens.
Thanks,
Sam

I just ran it up the flag pole.  Catalyst 11.9, which has SDK 2.5, still has the 100% CPU utilization bug, which is what I suspected.  I'll do some benchmarking to compare with what I was getting with 11.6 and 100% CPU utilization.
Sam

A: Because it messes up the order in which people normally read text.
Q: Why is top-posting such a bad thing?
A: Top-posting.
Q: What is the most annoying thing on usenet and in e-mail?
Vbs (Hero Member, Activity: 504)
March 05, 2012, 10:50:20 AM  #4446

I just ran it up the flag pole.  Catalyst 11.9, which has SDK 2.5, still has the 100% CPU utilization bug, which is what I suspected.  I'll do some benchmarking to compare with what I was getting with 11.6 and 100% CPU utilization.
Sam

Since you are doing some benchies, also try this:
1) Run the Catalyst 12.1 installer -> Custom install -> Unselect All button -> select GPU Driver only -> Next... until it's installed
2) Run the Catalyst 11.11 installer -> Custom install -> Unselect All button -> select SDK Runtime only -> Next... until it's installed
3) Done!

That should give you a driver without the 100% CPU bug and the last version of the SDK 2.5 runtime.
DeathAndTaxes (Donator, Legendary, Activity: 1218, "Gerald Davis")
March 05, 2012, 01:31:15 PM  #4447

I just ran it up the flag pole.  Catalyst 11.9, which has SDK 2.5, still has the 100% CPU utilization bug, which is what I suspected.  I'll do some benchmarking to compare with what I was getting with 11.6 and 100% CPU utilization.
Sam

Since you are doing some benchies, also try this:
1) Run the Catalyst 12.1 installer -> Custom install -> Unselect All button -> select GPU Driver only -> Next... until it's installed
2) Run the Catalyst 11.11 installer -> Custom install -> Unselect All button -> select SDK Runtime only -> Next... until it's installed
3) Done!

That should give you a driver without the 100% CPU bug and the last version of the SDK 2.5 runtime.

This.  Can we make it a sticky?  I don't know how many times we have had posts about this.  "12.1 made my mining slow."  Well, no, the 12.1 driver didn't.  The fact that you hit express install and it dumped the worthless SDK 2.6 did. :)
Panda Mouse (Member, Activity: 88, "Gliding...")
March 05, 2012, 01:41:03 PM  #4448

This is the official thread for support and development of cgminer, the combined GPU, bitforce and cpu miner written in c, cross platform for windows, linux and osx, with overclocking, monitoring, fanspeed control and remote interface capabilities....


I found an interesting problem:

I have a Gigabyte motherboard with one PCIe x16 slot and 3 x1 slots, so I used risers.
I used 4x VTX 7970.
A Sempron 145 processor does not have enough power to drive all 4 at their 670 MH/s.
When I use 2 cards there is no problem at 1190/1250.

I think something is wrong with cgminer (with the diablo kernel). Processor usage even with 2 cards is 100%.
Pure DiabloMiner in this configuration uses 7-15% of the processor.

Is this only my problem?
Would a better 4-core AMD FX-4100 solve this?
DeathAndTaxes (Donator, Legendary, Activity: 1218, "Gerald Davis")
March 05, 2012, 01:50:04 PM  #4449

I found an interesting problem:

I have a Gigabyte motherboard with one PCIe x16 slot and 3 x1 slots, so I used risers.
I used 4x VTX 7970.
A Sempron 145 processor does not have enough power to drive all 4 at their 670 MH/s.
When I use 2 cards there is no problem at 1190/1250.

I think something is wrong with cgminer (with the diablo kernel). Processor usage even with 2 cards is 100%.
Pure DiabloMiner in this configuration uses 7-15% of the processor.

Is this only my problem?
Would a better 4-core AMD FX-4100 solve this?

First.  COME ON MAN.  Copying 1500 lines just to add "Mac OS X?"  Really?  Please be respectful and edit down your asininely long quote.

As far as CPU.  There is something else going on.  I run 4x heavily overclocked 5970s (3.3 GH/s total) off the same Sempron 145 which I have underclocked and undervolted down  to 1.2GHz.  cgminer uses about 10% cpu time.  I switched to diablo kernel to test it and it didn't change. 

Which OS, which driver, which SDK?
Panda Mouse (Member, Activity: 88, "Gliding...")
March 05, 2012, 02:04:59 PM  #4450

I found an interesting problem:

I have a Gigabyte motherboard with one PCIe x16 slot and 3 x1 slots, so I used risers.
I used 4x VTX 7970.
A Sempron 145 processor does not have enough power to drive all 4 at their 670 MH/s.
When I use 2 cards there is no problem at 1190/1250.

I think something is wrong with cgminer (with the diablo kernel). Processor usage even with 2 cards is 100%.
Pure DiabloMiner in this configuration uses 7-15% of the processor.

Is this only my problem?
Would a better 4-core AMD FX-4100 solve this?

First.  COME ON MAN.  Copying 1500 lines just to add "Mac OS X?"  Really?  Please be respectful and edit down your asininely long quote.

As far as CPU.  There is something else going on.  I run 4x heavily overclocked 5970s (3.3 GH/s total) off the same Sempron 145 which I have underclocked and undervolted down  to 1.2GHz.  cgminer uses about 10% cpu time.  I switched to diablo kernel to test it and it didn't change. 

Which OS, which driver, which SDK?

First - I'm very sorry, never again :-) (the 1500-line quote).
Second - Windows 7, AMD Catalyst 12.2, SDK 2.6.
I've heard that starting from 12.1, Catalyst has a bug and uses 100% of the processor.
Unfortunately with my 7970 I have to use 12.2 :-(
kano (Legendary, Activity: 1932, "Linux since 1997 RedHat 4")
March 05, 2012, 02:31:42 PM  #4451

...
PLEASE, as D&T suggested, edit that post of yours and REMOVE/DELETE the ENTIRE quote.

DeathAndTaxes (Donator, Legendary, Activity: 1218, "Gerald Davis")
March 05, 2012, 02:40:49 PM  #4452

What is your intensity setting? I bet it is more than 7.

Intensity is the most critical setting for stability. Set the intensity low, then play with the rest.

No reason intensity needs to be less than 7 on a 7970.  I run 8 on a 5970, and that is only because I am using p2pool.  With a conventional pool, intensity 9 is more appropriate.  conman, I believe, found optimal intensity at 10 or 11 on a 7970.
The00Dustin (Hero Member, Activity: 806)
March 05, 2012, 02:51:09 PM  #4453

conman, I believe, found optimal intensity at 10 or 11 on a 7970.
However, on Windows intensity needs to be 9 or lower due to CPU usage.  With the 7970 I'm not sure there are a lot of options, but PandaMouse may want to read the last 10 pages (or more) to see, as this stuff (7970s and the Windows CPU-bug driver versions, not necessarily together) was discussed pretty heavily at some point.
Panda Mouse (Member, Activity: 88, "Gliding...")
March 05, 2012, 03:05:53 PM  #4454

I found an interesting problem:

I have a Gigabyte motherboard with one PCIe x16 slot and 3 x1 slots, so I used risers.
I used 4x VTX 7970.
A Sempron 145 processor does not have enough power to drive all 4 at their 670 MH/s.
When I use 2 cards there is no problem at 1190/1250.

I think something is wrong with cgminer (with the diablo kernel). Processor usage even with 2 cards is 100%.
Pure DiabloMiner in this configuration uses 7-15% of the processor.

Is this only my problem?
Would a better 4-core AMD FX-4100 solve this?

First.  COME ON MAN.  Copying 1500 lines just to add "Mac OS X?"  Really?  Please be respectful and edit down your asininely long quote.

As far as CPU.  There is something else going on.  I run 4x heavily overclocked 5970s (3.3 GH/s total) off the same Sempron 145 which I have underclocked and undervolted down  to 1.2GHz.  cgminer uses about 10% cpu time.  I switched to diablo kernel to test it and it didn't change. 

Which OS, which driver, which SDK?

First - I'm very sorry, never again :-) (the 1500-line quote).
Second - Windows 7, AMD Catalyst 12.2, SDK 2.6.
I've heard that starting from 12.1, Catalyst has a bug and uses 100% of the processor.
Unfortunately with my 7970 I have to use 12.2 :-(

What is your intensity setting? I bet it is more than 7.

Intensity is the most critical setting for stability. Set the intensity low, then play with the rest.
I run a Sempron downclocked with the MSI "Cooling" profile, with 3 x 7970; CPU rarely gets above 30% with intensity set to 7 for all three GPUs.
I get a steady ~660 MH/s per card.  No more CPU spikes or "idle for 60 seconds" errors.

BTW, I run 5 threads per GPU as it gives me the right, steady load in MY setup.


Yes, you are right, many thanks. But The00Dustin is right too. When I increase intensity above 9, the processor always reaches 100%.
I use 1960/1350, two threads per GPU (there is no difference when I use 5), powertune 0, 4x680 MH/s, stable (till now :-()
os2sam (Legendary, Activity: 1918, "Think for yourself")
March 05, 2012, 03:38:19 PM  #4455

I just ran it up the flag pole.  Catalyst 11.9, which has SDK 2.5, still has the 100% CPU utilization bug, which is what I suspected.  I'll do some benchmarking to compare with what I was getting with 11.6 and 100% CPU utilization.
Sam

Since you are doing some benchies, also try this:
1) Run the Catalyst 12.1 installer -> Custom install -> Unselect All button -> select GPU Driver only -> Next... until it's installed
2) Run the Catalyst 11.11 installer -> Custom install -> Unselect All button -> select SDK Runtime only -> Next... until it's installed
3) Done!

That should give you a driver without the 100% CPU bug and the last version of the SDK 2.5 runtime.

That's kind of my plan.  I'm tinkering with 11.9 now and will then go to 11.11 and 12.1 per your and others' suggestions.  I want to do some testing of my own before I settle on the combination you suggest.

Is the SDK 2.5 in 11.11 different/better than what shipped with 11.9?

I'm still perplexed that 11.6 has 100% utilization on Win7 but not on WinXP.  I guess I'll have to let that one go :)
Thanks,
Sam

DeathAndTaxes (Donator, Legendary, Activity: 1218, "Gerald Davis")
March 05, 2012, 03:45:24 PM  #4456

You might be lucky with your "-I 9" for a while, until the system crashes in the middle of the night and you lose 10 hours of run time :-(
Running with more than 7 creates more problems than it solves.  The hash rate does not improve by much, as the GPUs are at 99% already, so what is the point?  To crash cgminer?

A GPU being at 99% has nothing to do with how much of that is USEFUL WORK (i.e. shares).
I have a Windows workstation w/ 3x 5970, water cooled, which has been running for 7 months now.

If you are driving a display, it is a good idea to set that GPU (AND ONLY THAT GPU) to d intensity.  Too high an intensity on the GPU the OS uses to drive the display can cause system instabilities.  I set 1 core to d and the rest to 9, and the rig runs fast and efficient without crashes.  This is heavily overclocked and overvolted.  Lowering the intensity of all GPUs, even those never used for anything other than mining, is simply wasteful.
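As a sketch of that setup in cgminer.conf terms (assuming your cgminer build accepts per-device comma-separated intensity values; check the README shipped with your version before relying on the exact syntax), with device 0 driving the display and two dedicated miners:

```
"intensity" : "d,9,9",
```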

5 threads per GPU?  What exactly do you think that accomplishes?
Panda Mouse (Member, Activity: 88, "Gliding...")
March 05, 2012, 03:54:14 PM  #4457

What is your intensity setting? I bet it is more than 7.

Intensity is the most critical setting for stability. Set the intensity low, then play with the rest.

No reason intensity needs to be less than 7 on a 7970.  I run 8 on a 5970, and that is only because I am using p2pool.  With a conventional pool, intensity 9 is more appropriate.  conman, I believe, found optimal intensity at 10 or 11 on a 7970.

That was Linux.  My Windows 7/GD70 experience is different.  I was running 9, 10, 11 and tried various other settings, but I was constantly crashing cgminer.  I thought it was my core/mem/vddc settings, but by accident I found out that on WINDOWS, intensity is the most critical setting.

7 is optimal on Windows with multiple overclocked cards.  Less than 7 runs well, but the hash rate suffers.
More than 7, CPU and GPU overloads are likely, depending on what else is installed and running.

On Windows 7, overclocked cards at 9+ sooner or later get "idle for 60 seconds", "too busy" GPU event-log errors, etc.  That is on a clean, mean and well-tuned system.  It might take an hour, it might take a few hours, but within one day cgminer crashes when run with -I 9/10/11 and overclocked core/memory.

Now, with -I 7 and 5 threads, my GPU loads are constant at 99%.

BTW, cgminer gets a C5 exception (werfault.exe) when the GPU gets overloaded and restarted by the BIOS.

You might be lucky with your "-I 9" for a while, until the system crashes in the middle of the night and you lose 10 hours of run time :-(
Running with more than 7 creates more problems than it solves.  The hash rate does not improve by much, as the GPUs are at 99% already, so what is the point?  To crash cgminer?


Thanks a lot, I'll use your experience at once :-) Works great (Win 7, 1170/1350, -I 7).
Vbs (Hero Member, Activity: 504)
March 05, 2012, 04:17:39 PM  #4458

Is the SDK 2.5 in 11.11 different/better than what shipped with 11.9?

They are all different between SDK packs or Catalyst versions. As for "better", that's more difficult to answer... :P

A while ago I checked some of them:
Code:
Package - Version Number
------------------------
SDK2.4  - SDK 2.4.595.10
11.6    - SDK 2.4.650.9   <- Newest 2.4
11.7    - SDK 2.5.684.213
SDK2.5  - SDK 2.5.684.213
11.8    - SDK 2.5.709.2
11.9    - SDK 2.5.732.1
11.10   - SDK 2.5.775.2
11.11   - SDK 2.5.793.1   <- Newest 2.5
SDK2.6  - SDK 2.6.831.4
11.12   - SDK 2.6 (10.0.831.4)
12.1    - SDK 2.6 (10.0.851.4)

DeathAndTaxes (Donator, Legendary, Activity: 1218, "Gerald Davis")
March 05, 2012, 04:26:58 PM  #4459

So you go and try it on WINDOWS with multiple 7970s and post your findings.

Why would I buy 7970s just to test your flawed theory?

Try setting the SINGLE GPU connected to a display to dynamic intensity, as suggested in the README, and you can increase the intensity to where it should be without issue.  As far as 5 threads being optimal?  LOLZ.
kano (Legendary, Activity: 1932, "Linux since 1997 RedHat 4")
March 05, 2012, 10:41:29 PM  #4460

...
Try it before you laugh.
The point he has already stated is that if you are using more than one GPU in your computer, you are wasting the others.

The stability issue is the display GPU.
Set the other GPUs to a higher intensity.

You are reducing the performance of the other GPUs by using, on all of them, the same settings you have determined are needed to keep the display GPU stable.
