kcobra
Member
Offline
Activity: 87
Merit: 10
|
|
April 07, 2013, 01:05:07 AM |
|
With cgminer, is there any issue with setting one scrypt-based pool/coin as the primary and a different scrypt-based pool/coin as the failover? Say LTC as the primary and NVC as the failover. Thanks.
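As far as I know this is exactly what cgminer's default multipool strategy does: pools are used as failovers in the order listed. A minimal cgminer.conf sketch (the pool URLs and worker credentials below are placeholders, not real accounts):

```json
{
  "pools" : [
    {
      "url" : "stratum+tcp://ltc-pool.example.com:3333",
      "user" : "ltc_worker",
      "pass" : "x"
    },
    {
      "url" : "stratum+tcp://nvc-pool.example.com:3333",
      "user" : "nvc_worker",
      "pass" : "x"
    }
  ],
  "scrypt" : true
}
```

Both pools must be scrypt here, since "scrypt" applies globally; the second pool is only used while the first is unreachable.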
|
|
|
|
-ck (OP)
Legendary
Offline
Activity: 4242
Merit: 1644
Ruu \o/
|
|
April 07, 2013, 01:50:13 AM |
|
With cgminer, is there any issue with setting one scrypt-based pool/coin as the primary and a different scrypt-based pool/coin as the failover? Say LTC as the primary and NVC as the failover. Thanks.
Wow you didn't even make it 3 lines into the README?
|
Developer/maintainer for cgminer, ckpool/ckproxy, and the -ck kernel 2% Fee Solo mining at solo.ckpool.org -ck
|
|
|
ssateneth
Legendary
Offline
Activity: 1344
Merit: 1004
|
|
April 07, 2013, 05:36:05 AM |
|
ckolivas, I found a bug. When you start cgminer with a conf file and you change, say, the GPU engine speed of one of the devices, and then Write Config file from within cgminer, it doesn't save the updated engine speed. It saves the speed that it originally loaded.
|
|
|
|
-ck (OP)
Legendary
Offline
Activity: 4242
Merit: 1644
Ruu \o/
|
|
April 07, 2013, 05:39:10 AM |
|
ckolivas, I found a bug. When you start cgminer with a conf file and you change, say, the GPU engine speed of one of the devices, and then Write Config file from within cgminer, it doesn't save the updated engine speed. It saves the speed that it originally loaded.
That's intentional, because otherwise it would dynamically write whatever the GPU speed happened to be at the time, which is far more dangerous.
|
Developer/maintainer for cgminer, ckpool/ckproxy, and the -ck kernel 2% Fee Solo mining at solo.ckpool.org -ck
|
|
|
roy7
|
|
April 07, 2013, 05:46:52 AM |
|
I'd like to request a feature: if you launch with -c filename.conf, then use filename.conf as the default filename when going to Write Settings.
|
|
|
|
kano
Legendary
Offline
Activity: 4592
Merit: 1851
Linux since 1997 RedHat 4
|
|
April 07, 2013, 06:22:21 AM Last edit: April 07, 2013, 10:26:58 AM by kano |
|
USB does, yes. But not the devices in question.
If that's the case, then I suppose I see no real benefit to using libusb either. cgminer already implements functionality that uses the advantages of libusb ... ... Another is the usb API stats All devices have statistics recorded about all I/O to them, including the initial control transfers that the serial-USB code doesn't even know about Yes, this is made possible using libusb. Too bad it's completely useless.Just thought I'd point out this little gem of utter stupidity by the retard Luke-Jr So he's been hashing away on a BFL SC for how many days now? Edit: 6 or 7 days! And ... the latest problem has been a supposed 500ms latency issue with I/O Pity they chose the crappy miner code to test all this on - coz cgminer (my USB statistics) already reports all this through the API on a per command level. Min, Max, Total, Count ... (average = Total/Count of course) So I guess they would have known about this how many days ago if they were testing with cgminer instead of the 30 year old crap code used in the crap clone?
|
|
|
|
sephtin
Newbie
Offline
Activity: 47
Merit: 0
|
|
April 07, 2013, 07:42:25 AM |
|
Anyone else noticing cgminer not correctly controlling and/or reporting vddc on 79xx cards? Example on 7950:
Afterburner - set GPU to 1100 core / 1600 mem @ 1100 mV
cgminer - using conf file with GPU set to 1100 core / 1600 mem @ 1100 mV:
...
"thread-concurrency" : "24000,24000,24000,24000",
"shaders" : "0,0,0,0",
"gpu-engine" : "850-1100,850-1100,850-1100,850-1100",
"gpu-fan" : "0-100,0-100,0-100,0-100",
"gpu-memclock" : "1600,1600,1600,1600",
"gpu-memdiff" : "0,0,0,0",
"gpu-powertune" : "20,20,20,20",
"gpu-vddc" : "1.100,1.100,1.100,1.100",
...
Start CGMiner, and it reports 1.250V under GPU Management: GPU 0: 683.0 / 682.7 Kh/s | A:165 R:3 HW:0 U:2.62/m I:20 67.0 C F: 100% (4416 RPM) E: 1100 MHz M: 1600 Mhz V: 1.250V A: 99% P: 20% Last initialised: [2013-04-07 01:28:34] Intensity: 20 Thread 0: 679.2 Kh/s Enabled ALIVE
GPU 1: 678.7 / 671.3 Kh/s | A:165 R:1 HW:0 U:2.62/m I:20 58.0 C F: 100% (3983 RPM) E: 1100 MHz M: 1600 Mhz V: 1.250V A: 99% P: 20% Last initialised: [2013-04-07 01:28:34] Intensity: 20 Thread 1: 675.6 Kh/s Enabled ALIVE
GPU 2: 661.7 / 656.1 Kh/s | A:190 R:2 HW:0 U:3.02/m I:20 58.0 C F: 100% (4025 RPM) E: 1100 MHz M: 1600 Mhz V: 1.250V A: 99% P: 20% Last initialised: [2013-04-07 01:28:34] Intensity: 20 Thread 2: 660.1 Kh/s Enabled ALIVE
GPU 3: 671.6 / 671.1 Kh/s | A:172 R:1 HW:0 U:2.73/m I:20 55.0 C F: 100% (4358 RPM) E: 1100 MHz M: 1600 Mhz V: 1.250V A: 99% P: 20% Last initialised: [2013-04-07 01:28:34] Intensity: 20 Thread 3: 667.7 Kh/s Enabled ALIVE
[E]nable [D]isable [I]ntensity [R]estart GPU [C]hange settings Or press any other key to continue
Afterburner and TriXX report that the voltage is set to 1100 mV. GPU-Z never shows it above 1.080 V. Setting it to 1.250 in Afterburner shows 1.250 in Afterburner and TriXX, and GPU-Z shows roughly 1.180 V under load. There's also a HUGE difference when measured at the wall... so I'm convinced that it's cgminer that's not correctly reporting the voltage. I'm on the latest beta drivers currently (13.3B3), but have tried stable (13.1); same issue. I'm mining an altcoin, but I'd be surprised if that was related (??). If there's any additional info I can provide, let me know.
Edit (more detail): On Win7 x64, using cgminer-2.11.4-windows from apps/cgminer on github. (Same issue on 2.11.3.) Confirmed with one other user with 7950s, so it isn't just me.
|
|
|
|
-ck (OP)
Legendary
Offline
Activity: 4242
Merit: 1644
Ruu \o/
|
|
April 07, 2013, 08:19:23 AM |
|
Anyone else noticing cgminer not correctly controlling and/or reporting vddc correctly on 79xx cards?
[SNIP]
cgminer simply reports back whatever the ATI Display Library (ADL) says the voltage is, and can only modify whatever the ADL allows.
|
Developer/maintainer for cgminer, ckpool/ckproxy, and the -ck kernel 2% Fee Solo mining at solo.ckpool.org -ck
|
|
|
kano
Legendary
Offline
Activity: 4592
Merit: 1851
Linux since 1997 RedHat 4
|
|
April 07, 2013, 08:22:18 AM |
|
VERY old, long-running known issue ... The ATI ADL library, written by ATI of course, decides what is allowed. If you send a change to a value it doesn't like, it will say OK, then change it back to the default. Go discuss that with ATI ... if you think it's a problem.
|
|
|
|
.m.
|
|
April 07, 2013, 08:37:22 AM |
|
|
|
|
|
ShadesOfMarble
Donator
Hero Member
Offline
Activity: 543
Merit: 500
|
|
April 07, 2013, 11:10:14 AM |
|
How can I put the "--device|-d <arg>" parameter in a config file? Say I only want to use devices 2 and 3 for mining.
I tried "device" : "2,3" but that didn't work.
|
|
|
|
Crazy rider89
|
|
April 07, 2013, 12:33:38 PM Last edit: April 07, 2013, 12:57:08 PM by Crazy rider89 |
|
Hi guys, I'm trying to mine LTC using 2x5870 with this line: cgminer -o stratum+tcp://coinotron.com:3334 -u xxxx -p x --scrypt
The "--scrypt" option isn't accepted by cgminer!
The output is: [2013-04-07 12:51:40] cgminer: --scrypt: unrecognized option
Then I tried cgminer --enable-scrypt, but the output is:
cgminer: --enable-scrypt: unrecognized option
Even though the "--enable-scrypt" option is in the README file!
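For anyone else hitting this: "--enable-scrypt" is a compile-time flag for ./configure, not a runtime cgminer option, so it has to be passed when building from source. A rough sketch of the build steps (assuming a source checkout with the usual autotools layout):

```
# From the cgminer source directory.
# autogen.sh is only present in git checkouts; release tarballs ship configure.
./autogen.sh
./configure --enable-scrypt
make
# The runtime --scrypt flag should be recognized after this rebuild.
```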
|
|
|
|
pekv2
|
|
April 07, 2013, 12:47:59 PM |
|
Have you tried using cgminer.conf?
|
|
|
|
Crazy rider89
|
|
April 07, 2013, 12:56:04 PM |
|
Thanks, I solved it (I had to add "--enable-scrypt" to the "./configure" command),
but now with this line: cgminer -o stratum+tcp://coinotron.com:3334 -u xxxx -p x --scrypt I get a very low hash speed for 2x5870: (5s):58.75K (avg):58.34Kh/s | A:2 R:0 HW:0 U:1.1/m WU:144.2/m
It seems that cgminer is using the CPU rather than the GPUs. How can I fix this?
|
|
|
|
pekv2
|
|
April 07, 2013, 01:05:12 PM |
|
Thanks, I solved it (I had to add "--enable-scrypt" to the "./configure" command),
but now with this line: cgminer -o stratum+tcp://coinotron.com:3334 -u xxxx -p x --scrypt I get a very low hash speed for 2x5870: (5s):58.75K (avg):58.34Kh/s | A:2 R:0 HW:0 U:1.1/m WU:144.2/m
It seems that cgminer is using the CPU rather than the GPUs. How can I fix this?
You need to find which platform your GPUs are on, e.g. 0, 1, 2 or 3, and set it with the gpu-platform option in cgminer.conf. Example: Consolidated Litecoin Mining Guide for 5xxx, 6xxx, and 7xxx GPUs
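If the GPUs turn out to be on an OpenCL platform other than the default, a minimal config sketch looks like this (the platform number here is only an example — take the real one from the cgminer -n output):

```json
{
  "gpu-platform" : "1",
  "scrypt" : true
}
```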
|
|
|
|
Crazy rider89
|
|
April 07, 2013, 01:15:19 PM |
|
If I write cgminer -n, I get:
[2013-04-07 15:14:09] CL Platform 0 vendor: Advanced Micro Devices, Inc.
[2013-04-07 15:14:09] CL Platform 0 name: AMD Accelerated Parallel Processing
[2013-04-07 15:14:09] CL Platform 0 version: OpenCL 1.1 AMD-APP-SDK-v2.4 (595.10)
[2013-04-07 15:14:09] Platform 0 devices: 2
[2013-04-07 15:14:09] 0 Cypress
[2013-04-07 15:14:09] 1 Cypress
[2013-04-07 15:14:09] 2 GPU devices max detected
I think the GPUs are correctly configured.
|
|
|
|
pekv2
|
|
April 07, 2013, 01:23:25 PM |
|
I have no idea then; probably wait for someone else with more knowledge. Sorry.
|
|
|
|
sephtin
Newbie
Offline
Activity: 47
Merit: 0
|
|
April 07, 2013, 03:32:49 PM |
|
Anyone else noticing cgminer not correctly controlling and/or reporting vddc correctly on 79xx cards?
[SNIP]
cgminer simply reports back whatever the ATI Display Library (ADL) says the voltage is, and can only modify whatever the ADL allows.
VERY old, long-running known issue ... The ATI ADL library, written by ATI of course, decides what is allowed. If you send a change to a value it doesn't like, it will say OK, then change it back to the default. Go discuss that with ATI ... if you think it's a problem.
This might sound crazy... but is there a way to know if you're getting invalid information? Or known drivers/GPUs/?? that don't report back correct info? If so, reporting unknown rather than incorrect information might be cleaner. I was unaware of the issue (did I miss it in the "No one but sephtin ever reads me"?). Doesn't sound like a major overhaul... a feature request, perhaps? Edit: s
|
|
|
|
sephtin
Newbie
Offline
Activity: 47
Merit: 0
|
|
April 07, 2013, 03:43:21 PM |
|
Both of them appear to fail on *nix, which I'd prefer to use... I actually completely gave up on Linux because cgminer incorrectly reported 1.250, and I thought the tools weren't appropriately setting voltages. I then switched to Windows, and because Afterburner adjusts and shows voltages correctly... I thought I was OK, until I started seeing cgminer reporting back 1.250... so I gave up completely and just started setting configs for 1.250 and overclocking (to get as much hashing as possible to compensate), as it seemed undervolting wasn't possible with the cards I had. That isn't the case; the voltage is simply being reported incorrectly by cgminer. At least I know now... 1.250 vs. 1.100 has been a significant cost on the utility bill. Anyway, now that I know, there's a very good chance I'll get back to Linux (yay!). I'm good, and glad to know that this is a known issue.
|
|
|
|
Uest3
Newbie
Offline
Activity: 30
Merit: 0
|
|
April 07, 2013, 04:13:51 PM |
|
I need some help with my cgminer: https://bitcointalk.org/index.php?topic=169290.0 , thanks.
./cgminer -n result:
CL Platform 0 vendor: Advanced Micro Devices, Inc.
CL Platform 0 name: AMD Accelerated Parallel Processing
CL Platform 0 version: OpenCL 1.2 AMD-APP (1016.4)
Error -1: Getting Device IDs (num)
clDevicesNum returned error, no GPUs usable
0 GPU devices max detected
When I try to start cgminer:
Started cgminer 2.11.4
Error -1: Getting Device IDs (num)
clDevicesNum returned error, no GPUs usable
All devices disabled, cannot mine!
|
|
|
|
|