Sapphire Radeon 7970 on air, cgminer 2.4.1, default cgminer settings (which work out to -v 1 -w 64 -k poclbm):

GPU 0: 74.0C 3027RPM | 734.9/735.6Mh/s | A:445 R:3 HW:0 U:10.51/m I:11
73.0 C F: 52% (2929 RPM) E: 1230 MHz M: 1080 MHz V: 1.170V A: 99% P: 5%

Ambient temp ~17 C. AMD driver 8.921, AMD OpenCL SDK 2.6 on Debian Linux stable 64-bit. (Note: fan speed looks slightly different because the GPU status line was taken a few seconds later.)
|
|
|
Thanks! And thanks for all the new versions lately. Really appreciate all the effort you are making! Makes life much easier! (Sent some coins)

And I received them, much appreciated, thanks.
|
|
|
Hi ckolivas, I'm running 2.4.1 on all my miners. All have the same primary pool, and all run without --no-pool-disable, as that is no longer necessary if I understand correctly. One of the miners keeps my primary pool (Clipse) as Rejecting even as the others are mining on his pool:

cgminer version 2.4.1 - Started: [2012-05-06 16:00:06]
--------------------------------------------------------------------------------
(5s):406.3 (avg):323.6 Mh/s | Q:10820 A:5465 R:123 HW:0 E:51% U:4.5/m
TQ: 1 ST: 4 SS: 11 DW: 1425 NB: 139 LW: 8011 GF: 9 RF: 18
Connected to http://pool.ABCPool.co:8332 with LP as user XXXXX
Block: 000005d5bcd29c2c9a7c10e7c64c7ea1... Started: [12:20:52]
--------------------------------------------------------------------------------
[P]ool management [G]PU management [S]ettings [D]isplay options [Q]uit
GPU 0: 76.5C 3462RPM | 324.9/323.6Mh/s | A:5465 R:123 HW:0 U:4.47/m I: 5
--------------------------------------------------------------------------------
0: Rejecting Alive Priority 0: http://pool.bonuspool.co.cc:80 User:XXXX
1: Enabled Alive Priority 1: http://pool.ABCPool.co:8332 User:XXXXX
2: Enabled Alive Priority 2: http://pit.deepbit.net:8332 User:XXXX
3: Enabled Alive Priority 3: http://mine2.btcguild.com:8332 User:XXXXX
Current pool management strategy: Failover
[A]dd pool [R]emove pool [D]isable pool [E]nable pool
[C]hange management strategy [S]witch pool [I]nformation
Or press any other key to continue
I am sure LPs from Bonuspool are working again, so I don't see why it should act like this. Thanks, Brat

It has to actually *accept* a share from the work generated by a longpoll, which only happens once every 10 minutes on average, and you may not actually get a share if you're unlucky, especially with only one device. Unfortunately the code has also only been tested in artificial situations, so I can't say with 100% certainty it will work in all situations either. In this scenario, it appears not to have.
|
|
|
Gonna be interesting what effect this has on singles ...
Likely: Nil.
|
|
|
No, in fact that is totally unexpected, though I don't have 14GH in one machine to kick around so I'm not sure how much RAM that would require. Unless you have some kind of low limit on the number of threads, pid_max, some ulimit set, some cgroup limitation or otherwise, I've tried hard so far to keep resource usage low in cgminer.
It's actually a stock BAMT install. I haven't dug into the configuration of BAMT itself; I had assumed it was stock Debian limits as far as system variables go.

Defaults on Debian are not that restrictive. Maybe something is different in the setup. Check the output of all the following commands:
cat /proc/sys/kernel/pid_max
cat /proc/sys/kernel/threads-max
cat /proc/sys/vm/max_map_count
ulimit

Limits all look normal to me. threads-max is low, but shouldn't have caused a problem:
root@bfl-1:~# cat /proc/sys/kernel/pid_max
32768
root@bfl-1:~# cat /proc/sys/kernel/threads-max
28097
root@bfl-1:~# cat /proc/sys/vm/max_map_count
65530
root@bfl-1:~# ulimit
unlimited

None of those are likely to be hitting the limits then. So unless you're running some (possibly Java) software concurrently that is woeful in resource usage, I really have no idea why you're having this issue.
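For convenience, the checks above can be bundled into one small script (a sketch using the standard Linux procfs paths quoted above):

```shell
#!/bin/sh
# Print the kernel limits relevant to cgminer's thread spawning in one go.
for f in /proc/sys/kernel/pid_max \
         /proc/sys/kernel/threads-max \
         /proc/sys/vm/max_map_count; do
    printf '%s = %s\n' "$f" "$(cat "$f")"
done
printf 'ulimit = %s\n' "$(ulimit)"
```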
|
|
|
You deserve more, but this is what I have now... Thank you ckolivas!!
$ bitcoind sendtoaddress 148KkS2vgVi4VzUi4JcKzM2PMaMVPi3nnq 2.0 85063f3ec9d5a0836cfd7dedeff0c283f6c27c2c532593e4794dfb076f4b3b20
Best! Thiago
Thank you kind sir!
|
|
|
There's a new CGMiner version out that deals with this:
if a pool can't connect, or has had nothing but rejects for 3 minutes, it will be disabled. On the next LP, CGMiner will check whether the disabled pool is back up; if so it will re-enable it, and if you have your pool management on failover it will go back to mining on the primary pool
so there's no more need for --no-pool-disable
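The rule described above can be sketched in shell as a toy decision function (this is only an illustration of the behaviour, not cgminer's actual C code; the 180-second window and the REJECTING state name come from the 2.4.1 changelog):

```shell
#!/bin/sh
# Toy sketch: a pool becomes REJECTING only after 180s of continuous
# rejects, and an accepted share from a longpoll's work re-enables it.
pool_state() {
    # $1 = seconds of continuous rejected shares
    # $2 = 1 if a share from the last longpoll's work was accepted
    if [ "$2" -eq 1 ]; then
        echo ENABLED          # accepted share: pool is (re-)enabled
    elif [ "$1" -ge 180 ]; then
        echo REJECTING        # 3+ minutes of rejects: temporarily disabled
    else
        echo ENABLED
    fi
}

pool_state 200 0   # REJECTING
pool_state 200 1   # ENABLED
```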
Whoops, even I can't keep up with the cgminer development. Just trying to keep everyone happy...
|
|
|
NEW VERSION - 2.4.1, MAY 6 2012
Human readable changelog:
- --benchmark won't crash
- A pool will only be disabled if it rejects shares for at least 3 minutes in a row. Then it will be checked every longpoll to see if it is accepting shares, and if so, will be re-enabled.
- Should work when Icarus is combined with BFL
- More accurate hashrate on Icarus
- Ztex quad 1.15y support
- Extra device stats in RPC API
Full changelog
- In the unlikely event of finding a block, display the block solved count with the pool it came from for auditing.
- Display the device summary on exit even if a device has been disabled.
- Use correct pool enabled enums in api.c.
- Import Debian packaging configs
- Ensure we test for a pool recovering from idle so long as it's not set to disabled.
- Fix pool number display.
- Give cgminer -T message only if curses is in use.
- Reinit_adl is no longer used.
- API 'stats' allow devices to add their own stats also for testing/debug
- API add getwork stats to cgminer - accessible from API 'stats'
- Don't initialise variables to zero when in global scope since they're already initialised.
- Get rid of uninitialised variable warning when it's false.
- Move a pool to POOL_REJECTING to be disabled only after 3 minutes of continuous rejected shares.
- Some tweaks to reporting and logging.
- Change FPGA detection order since BFL hangs on an ICA
- API support new pool status
- Add a temporarily disabled state for enabled pools called POOL_REJECTING and use the work from each longpoll to help determine when a rejecting pool has started working again. Switch pools based on the multipool strategy once a pool is re-enabled.
- Removing extra debug
- Fix the benchmark feature by bypassing the new networking code.
- Reset sequential reject counter after a pool is disabled for when it is re-enabled.
- Icarus - correct MH/s and U: with work restart set at 8 seconds
- ztex updateFreq was always reporting on fpga 0
- Trying harder to get 1.15y working
- Specifying threads on multi fpga boards extra cgpu
- Missing the add cgpu per extra fpga on 1.15y boards
- API add last share time to each pool
- Don't try to reap curls if benchmarking is enabled.
|
|
|
It's actually a stock BAMT install. I haven't dug into the configuration of BAMT itself; I had assumed it was stock Debian limits as far as system variables go.
I have 14 Singles and 2 5870s on one computer. This is a BAMT install. Here are the results of free -m:
             total       used       free     shared    buffers     cached
Mem:          2021        798       1223          0         93        414
-/+ buffers/cache:         291       1730
Swap:            0          0          0
That's about the sort of resource usage I would have expected. Unless something else is also running on the box, it seems unlikely that it would run out of resources.
|
|
|
No, in fact that is totally unexpected, though I don't have 14GH in one machine to kick around so I'm not sure how much RAM that would require. Unless you have some kind of low limit on the number of threads, pid_max, some ulimit set, some cgroup limitation or otherwise, I've tried hard so far to keep resource usage low in cgminer.
It's actually a stock BAMT install. I haven't dug into the configuration of BAMT itself; I had assumed it was stock Debian limits as far as system variables go.

Defaults on Debian are not that restrictive. Maybe something is different in the setup. Check the output of all the following commands:
cat /proc/sys/kernel/pid_max
cat /proc/sys/kernel/threads-max
cat /proc/sys/vm/max_map_count
ulimit
|
|
|
CGminer keeps exiting with this error: [2012-05-04 19:12:37] Failed to tq_push work in submit_work_sync
I have to restart it manually at this point.
Usually this would imply something drastically wrong, like running out of resources to spawn a new thread, such as out of memory or hitting some thread limit. Given that people have successfully run cgminer on OpenWrt routers, I can't really envision what sort of setup would hit those limits unless you had massive hashrates, lots of pools set up, and minimal memory on the machine.

edit: Certainly in older versions cgminer would spawn lots and lots of communication threads, but that shouldn't be the case in 2.4.0+.

It's version 2.4. 8 pools, the hashrate of that machine is about 14 GH/s, and it's got 2 gigs of RAM. Am I seriously coming up against a wall at only 14 GH/s? I had planned on adding another 20 - 30 GH/s to that machine.

No, in fact that is totally unexpected, though I don't have 14GH in one machine to kick around so I'm not sure how much RAM that would require. Unless you have some kind of low limit on the number of threads, pid_max, some ulimit set, some cgroup limitation or otherwise, I've tried hard so far to keep resource usage low in cgminer.
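One quick way to see whether a thread limit is plausibly in play is to count the threads of the running miner (a sketch; it falls back to the current shell's PID for demonstration when no cgminer process exists):

```shell
#!/bin/sh
# Count the threads (nlwp) of a running cgminer process.
pid=$(pgrep -o cgminer) || pid=$$
ps -o nlwp= -p "$pid"
```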
|
|
|
I upgraded to 2.4.0 and had issues with not being able to alter the engine and mem clocks, they were using the old config file. After deleting the config file in the .cgminer directory, engine and mem clock changing functionality returned.
Is cgminer smart enough to know not to use the config file if I use command line arguments?
No, it adds any command line arguments to any config file it loads. It will display which config file was loaded though, so you know it was used.

There appears to be some conflict in the logic of how the config file and command line arguments work together, from what I could tell. The 2.3.2 config file had:
"gpu-engine" : "850,"
"gpu-memclock" : "300,"
When I unpacked and ran the newly built 2.4.0, I used the command line arguments: --gpu-engine 900,850 --gpu-memclock 250. Once it started I hit 'G' to see what the clocks were set at, and they were engine 850, memory 300, not what I used on the command line. Stopping and restarting had no effect on being able to alter the clocks. The only solution was to delete the config file.

It adds command line arguments after it has loaded the config file. So the config file was setting the clock speed, as you can see, and your command line arguments were trying to set devices that don't exist.
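Since the command line is applied after the config file loads, one way to avoid the conflict is to put the intended clocks in the config file itself rather than overriding them. A hypothetical fragment of the config file in the .cgminer directory, using the values from the command line above:

```json
{
  "gpu-engine" : "900,850",
  "gpu-memclock" : "250"
}
```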
|
|
|
I upgraded to 2.4.0 and had issues with not being able to alter the engine and mem clocks, they were using the old config file. After deleting the config file in the .cgminer directory, engine and mem clock changing functionality returned.
Is cgminer smart enough to know not to use the config file if I use command line arguments?
No, it adds any command line arguments to any config file it loads. It will display which config file was loaded though, so you know it was used.
|
|
|
CGminer keeps exiting with this error: [2012-05-04 19:12:37] Failed to tq_push work in submit_work_sync
I have to restart it manually at this point.
Usually this would imply something drastically wrong, like running out of resources to spawn a new thread, such as out of memory or hitting some thread limit. Given that people have successfully run cgminer on OpenWrt routers, I can't really envision what sort of setup would hit those limits unless you had massive hashrates, lots of pools set up, and minimal memory on the machine.

edit: Certainly in older versions cgminer would spawn lots and lots of communication threads, but that shouldn't be the case in 2.4.0+.
|
|
|
What I'd love is a command to have cgminer reload the config file without restarting.
That way you could change or edit the files and just trigger a restart via the API.
Just restart it from the menu or the API, and it will reload the config file.
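For the API route, a restart can be triggered from a script along these lines (a sketch: it assumes cgminer was started with --api-listen and is using the default API port 4028, and that netcat is available):

```shell
#!/bin/sh
# Sketch: send a command to cgminer's RPC API socket.
send_api() {
    # $1 = API command, e.g. "restart"
    printf '%s' "$1" | nc -w 2 127.0.0.1 4028
}
# send_api restart   # run on a machine with cgminer's API enabled
```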
|
|
|
Can the "--no-pool-disable" option be set via the .conf file?
"no-pool-disable" : true,
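In context, that boolean sits alongside the pool definitions. A minimal sketch of a full config file (the pool details here are placeholders taken from elsewhere in the thread, not a recommendation):

```json
{
  "pools" : [
    {
      "url" : "http://pool.ABCPool.co:8332",
      "user" : "XXXXX",
      "pass" : "XXXXX"
    }
  ],
  "no-pool-disable" : true
}
```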
|
|
|
Hashing since yesterday thanks to nelisky: Quad Ztex and Singles. I take it this means his quad code is working? Nelisky, do you mind pushing a git pull for your updated code?
|
|
|
Would it be too aggressive towards pools to request more work than you actually work on? My layman's approach would be: I know I have e.g. 4GH of power and will need new work once a second. As soon as work is given to a device, immediately ask the primary pool for more (instead of waiting for a device to do so), and if that one does not respond within 600ms, ask the backup pool(s). The cost of such a pro-active approach would be that you ask for more work than you can handle, but you would not have to wait that long to see that some pool is dead. As for my cgminer version: you're just too fast/active; it is from 24h ago and includes the pool scheduling improvements you added for 2.4.0. Can't catch up pulling with the speed of improvements you add.

Well, we already ask for work from backup pools when the primary pool can't keep up. But there's a difference between asking for extra work because you can't keep your queue full, and running out entirely. As I said, the distinction needs to be made or you can't flag the primary pool as "officially fucked".
|
|
|
Not to mention issues with how your shares are potentially used without your consent on rEligius.
|
|
|
By the way, your longpoll error message is pre-2.4.0. "Longpoll failed for XXX, retrying every 30s" is the current 2.4.0 message, and it should only appear once until it starts working again, so I'm not convinced you're running the latest git.
|
|
|
|