cgminer-2.3.6 + OpenCL from AMD Catalyst 12.5 Beta = 462 Mh/s; cgminer-2.4.0 + OpenCL from AMD Catalyst 12.5 Beta = 418 Mh/s
Considering none of the opencl, GPU or kernel code was changed between 2.3.6 and 2.4.0, I find that more than a little unlikely.
|
|
|
Here is my best guess: 2. Intel HD Graphics = 3-10 Mhash
That seems a bit optimistic IMO. I'm guessing it would be around 500 khash. LOL, you may be the first person to estimate even less than I would have.
|
|
|
I use cgminer-2.3.6 on my HD 5870 and get 460-462 Mh/s. With the new version, cgminer-2.4.0, it was just 418 Mh/s. System: Win7 x64 + OpenCL from AMD Catalyst 12.5 Beta http://www.ngohq.com/home.php?page=Files&go=giveme&dwn_id=1624 Maybe you should pay attention to the new drivers and optimize the new version of CGMINER.
Wrong, you should downgrade your driver and read the FAQ.
|
|
|
We're not there yet I'm guessing, even with the new version of CGMiner

cgminer version 2.4.0 - Started: [2012-05-03 10:31:14]
--------------------------------------------------------------------------------
 (5s):314.4 (avg):296.8 Mh/s | Q:573  A:285  R:18  HW:0  E:50%  U:3.7/m
 TQ: 2  ST: 9  SS: 5  DW: 158  NB: 9  LW: 731  GF: 7  RF: 6
 Connected to http://pool.bonuspool.co.cc:80 with LP as user DutchBrat
 Block: 00000a50b14e181ee2eddb662905d6da...  Started: [11:40:04]
--------------------------------------------------------------------------------
 [P]ool management [G]PU management [S]ettings [D]isplay options [Q]uit
 GPU 0:  73.0C 2436RPM | 312.3/296.8Mh/s | A:285 R:18 HW:0 U: 3.73/m I: 5
--------------------------------------------------------------------------------
[2012-05-03 11:45:07] Accepted cd1d2d67.dbb1ffa3 GPU 0 pool 0
[2012-05-03 11:45:10] Accepted aecd4b84.ee3321f3 GPU 0 pool 0
[2012-05-03 11:45:14] Accepted 5ac588ed.f944086e GPU 0 pool 0
[2012-05-03 11:45:39] Accepted 4ba3b78c.43439465 GPU 0 pool 0
[2012-05-03 11:45:46] Accepted 033d897c.54c84c35 GPU 0 pool 0
[2012-05-03 11:46:16] Pool 0 communication failure, caching submissions
[2012-05-03 11:46:37] Pool 0 communication resumed, submitting work
[2012-05-03 11:46:37] Rejected bc157d83.3ba10608 GPU 0 pool 0
[2012-05-03 11:46:53] Rejected dcc02b84.454b2080 GPU 0 pool 0
[2012-05-03 11:46:59] Rejected 5396d980.319d0554 GPU 0 pool 0
[2012-05-03 11:47:06] Rejected b659cd6f.299f9fcd GPU 0 pool 0
[2012-05-03 11:47:26] Accepted 17cc28c1.95018954 GPU 0 pool 0
[2012-05-03 11:47:32] Accepted 708e48de.01d0c49c GPU 0 pool 0
That's the code being polite... you should see what it would have done before.
|
|
|
The previous code definitely was not polite; it had a "take no prisoners" approach, bombarding the pool by default for the miner's benefit, and if anything went wrong at the pool end, the pool would get hammered.
|
|
|
Bumping this up to stress again that all users of cgminer should immediately upgrade to the latest version! You will definitely see a performance improvement since the longpoll handling has been fixed to not cause occasional problems with PoolServerJ-based pools. https://bitcointalk.org/index.php?topic=28402.0 - Get it now! Hopefully the pool itself should also notice a significant decrease in load as people migrate to this version.
|
|
|
Cool, seems CGMINER 2.4 just got released, hopefully no surprises for me
I definitely had services like yours in mind when I developed it, so I hope not
|
|
|
NEW VERSION: 2.4.0 - May 3, 2012
This version has a fairly significant upgrade to the way networking is done, so there is a minor version number update instead of a micro version, but it has already been heavily tested.
Human readable changelog: A whole networking scheduler of sorts was written for this version, designed to scale to any sized workload with the fastest networking possible, while minimising the number of connections in use and reusing them as much as possible.
- A restart feature was added to the API to restart cgminer remotely.
- If you're connected to a pool that starts rejecting every single share, cgminer will now automatically disable it unless you add the --no-pool-disable option.
- Once a pool stops responding, cgminer won't keep trying to open a flood of extra connections.
- A failing BFL won't cause cgminer to stop; it'll just disable the device, and an attempt may be made to re-enable it.
- Hashrates on FPGAs may be more accurate (though still not ideal).
- Longpoll messages won't keep going indefinitely while a pool is down.
Full changelog:
- Only show longpoll warning once when it has failed.
- Convert hashes to an unsigned long long as well.
- Detect pools that have issues represented by endless rejected shares and disable them, with a parameter to optionally disable this feature.
- Bugfix: Use a 64-bit type for hashes_done (miner_thread) since it can overflow 32-bit on some FPGAs
- Implement an older header fix for a label existing before the pthread_cleanup macro.
- Limit the number of curls we recruit on communication failures and with delaynet enabled to 5 by maintaining a per-pool curl count, and using a pthread conditional that wakes up when one is returned to the ring buffer.
- Generalise add_pool() functions since they're repeated in add_pool_details.
- Bugfix: Return failure, rather than quit, if BFwrite fails
- Disable failing devices such that the user can attempt to re-enable them
- Bugfix: thread_shutdown shouldn't try to free the device, since it's needed afterward
- API bools and 1TBS fixes
- Icarus - minimise code delays and name timer variables
- api.c V1.9 add 'restart' + redesign 'quit' so thread exits cleanly
- api.c bug - remove extra ']'s in notify command
- Increase pool watch interval to 30 seconds.
- Reap curls that are unused for over a minute. This allows connections to be closed, thereby allowing the number of curl handles to always be the minimum necessary to not delay networking.
- Use the ringbuffer of curls from the same pool for submit as well as getwork threads. Since the curl handles were already connected to the same pool and are immediately available, share submission will not be delayed by getworks.
- Implement a scaleable networking framework designed to cope with any sized network requirements, yet minimise the number of connections being reopened. Do this by creating a ring buffer linked list of curl handles to be used by getwork, recruiting extra handles when none is immediately available.
- There is no need for the submit and getwork curls to be tied to the pool struct.
- Do not recruit extra connection threads if there have been connection errors to the pool in question.
- We should not retry submitting shares indefinitely or we may end up with a huge backlog during network outages, so discard stale shares if we failed to submit them and they've become stale in the interim.
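The per-pool curl ring buffer described in the changelog can be sketched as follows. This is a hypothetical illustration, not cgminer's actual code: `handle_t` stands in for libcurl's CURL* so the sketch stays self-contained, and the names (`get_handle`, `put_handle`, `max_curls`) are made up for the example.

```c
#include <assert.h>
#include <pthread.h>
#include <stdlib.h>

/* "handle_t" is a stand-in for a CURL* wrapper; only the list linkage matters here. */
typedef struct handle { struct handle *next; } handle_t;

typedef struct pool {
    handle_t *free_list;     /* ring buffer of idle handles for this pool */
    int curls;               /* handles recruited so far */
    int max_curls;           /* recruitment cap (5 in the changelog) */
    pthread_mutex_t lock;
    pthread_cond_t returned; /* signalled when a handle is returned */
} pool_t;

handle_t *get_handle(pool_t *p)
{
    handle_t *h;

    pthread_mutex_lock(&p->lock);
    /* At the cap with nothing idle: sleep until a handle comes back. */
    while (!p->free_list && p->curls >= p->max_curls)
        pthread_cond_wait(&p->returned, &p->lock);
    if (p->free_list) {
        h = p->free_list;           /* reuse an already-connected handle */
        p->free_list = h->next;
    } else {
        h = calloc(1, sizeof(*h));  /* recruit a new one, still under the cap */
        p->curls++;
    }
    pthread_mutex_unlock(&p->lock);
    return h;
}

void put_handle(pool_t *p, handle_t *h)
{
    pthread_mutex_lock(&p->lock);
    h->next = p->free_list;         /* push back onto the ring buffer */
    p->free_list = h;
    pthread_cond_signal(&p->returned); /* wake one waiter, if any */
    pthread_mutex_unlock(&p->lock);
}
```

Because both submit and getwork draw from the same per-pool list, a returned handle (already connected to that pool) is reused immediately instead of opening a fresh connection.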
|
|
|
As reported in another thread, the latest GIT updates improved cgminer performance with Clipse's bonuspool noticeably. With 2.3.6 my router had hundreds of active connections to the pools, now there are max. 20. If you're mining with bonuspool, you should try and build cgminer from latest GIT sources. Thanks for the feedback. This update has been significant enough to release a new version so I'm planning on releasing 2.4.0 soon.
|
|
|
Additionally, I noticed that the total mining power at TripleMining averages about 80 GH/s, as opposed to Deepbit's total mining power at over 3 TH/s, which led me to assume that all the major/super-dedicated miners mine there. It's almost certainly the opposite. Major super miners care about profits, and profits are at their lowest on a hoppable prop pool with 3% fees or non-hoppable but heart-stopping 10%-fee PPS. If anything's keeping people at deepbit, it's inertia, perceived advantage of mining with the biggest, language issues, and botnets. I work in concert with some of the biggest miners that use cgminer when developing it, and none of them use deepbit except as a backup pool.
|
|
|
127 is a "bug" value. It means the driver has crashed and is reporting nonsense. It does not actually mean the temperature is 127.
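A trivial guard for this quirk might look like the sketch below; `temp_valid` is a hypothetical helper written for this example, not a cgminer function, and the cutoff comes from the sentinel value described above.

```c
#include <assert.h>

/* Treat the driver's 127 sentinel (and anything out of range) as
 * "driver reporting nonsense", not a real temperature reading. */
int temp_valid(int temp_c)
{
    return temp_c > 0 && temp_c < 127;
}
```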
|
|
|
Massive update to git tree:
I've basically implemented a kind of "network scheduler" into cgminer. It is designed to increase the number of connections to cope with virtually any sized hashing setup (see the 91 device example above with 33GH on one machine), yet minimise the number of open connections in use at any time, and reuse connections as much as possible.
This is the relevant part of the changelog (reverse order), though there are other commits to the master branch as well:
- Limit the number of curls we recruit on communication failures and with delaynet enabled to 5 by maintaining a per-pool curl count, and using a pthread conditional that wakes up when one is returned to the ring buffer.
- Reap curls that are unused for over a minute. This allows connections to be closed, thereby allowing the number of curl handles to always be the minimum necessary to not delay networking.
- Use the ringbuffer of curls from the same pool for submit as well as getwork threads. Since the curl handles were already connected to the same pool and are immediately available, share submission will not be delayed by getworks.
- Implement a scaleable networking framework designed to cope with any sized network requirements, yet minimise the number of connections being reopened. Do this by creating a ring buffer linked list of curl handles to be used by getwork, recruiting extra handles when none is immediately available.
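The "reap curls unused for over a minute" step can be sketched like this. A hypothetical illustration only: `handle_t` stands in for a CURL* wrapper, and `reap_idle` is an invented name; real code would also call curl_easy_cleanup() on each reaped handle.

```c
#include <assert.h>
#include <stdlib.h>
#include <time.h>

/* Each handle records when it was last returned to the idle list. */
typedef struct handle {
    struct handle *next;
    time_t ts;              /* time this handle last went idle */
} handle_t;

#define REAP_AGE 60         /* seconds of idleness before a handle is reaped */

/* Unlink and free every handle idle longer than REAP_AGE;
 * return the list of survivors. */
handle_t *reap_idle(handle_t *list, time_t now)
{
    handle_t **pp = &list;

    while (*pp) {
        handle_t *h = *pp;
        if (now - h->ts > REAP_AGE) {
            *pp = h->next;  /* unlink the stale handle and close it */
            free(h);
        } else {
            pp = &h->next;  /* keep recently used handles for reuse */
        }
    }
    return list;
}
```

Run periodically, this keeps the set of open connections at the minimum needed for the current workload, which is why the router's connection count drops from hundreds to a handful.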
|
|
|
Okay, so here is my 2 cents. There is something funky about this file. I'm on CentOS 5.X and getting this error. To fix, add a semicolon on the line after die: so it looks like this:

die: ;
	pthread_cleanup_pop(true);
and then all is well. It shouldn't be needed, but it is... I have no idea why...
That's very interesting and likely an issue with the difference between headers and the implementation of pthread_cleanup_pop in that distribution/gcc versus all the more modern ones we're compiling on. The reason is that the pthread_cleanup_* functions are implemented as macros, so you can't see just why this is an issue unless you spit out the pre-processor output. Either way it should be easy to implement something like that as a fix, thanks.
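To see why the bare label fails, here's a self-contained sketch with dummy macros shaped like glibc's pthread_cleanup_push/pop pair (push opens a block, pop closes it and maybe runs the handler). The expansion shown is illustrative, not glibc's real one: when the pop macro's expansion begins with `}`, a label placed directly before it would label a closing brace, which isn't a statement; the `;` gives the label an empty statement to attach to.

```c
#include <assert.h>

/* Dummy stand-ins shaped like pthread_cleanup_push/pop: a macro pair
 * that opens and closes a block. Illustrative expansion only. */
#define cleanup_push(fn, arg) { void (*_fn)(void *) = (fn); void *_arg = (arg); {
#define cleanup_pop(execute)  } if (execute) _fn(_arg); }

static int cleaned;
static void cleaner(void *arg) { cleaned = *(int *)arg; }

int run(void)
{
    int val = 42;
    cleanup_push(cleaner, &val)
    goto die;
die: ;  /* without the ';' the label would precede pop's '}' and not compile */
    cleanup_pop(1)
    return cleaned;
}
```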
|
|
|
One of the reasons I limited a previous version to only displaying 8 devices. Sigh. The interface was never designed with so many devices in mind and with that many you should really be using a different front end via the API instead.
|
|
|
Hi guys! Updated CGminer to 2.3.6 and Catalyst to 12.4 and the hashrate has dropped on my 3x5870 GPUs; the 6950 is OK!
I cleaned the OS of all drivers and the 2.6 SDK, installed the 2.5 SDK and 12.4 drivers, and I cannot get the performance back.
The 3x5870 are below 400 MH/s; I'm used to seeing them work at ~430-450 MH/s.
Please tell me how to configure CGminer or OS to get MAXIMUM performance!
THANX
My CGminer config:
cgminer -o *** -u *** -p *** -I d,10,10,10 --gpu-fan 80-100 --gpu-engine 850,970-990,920,960 --gpu-memclock 300
(the per-GPU values map, in order, to the 6950, 5870, 5870, 5870)
As per the readme... delete the .bin files when you change SDKs if you want the new SDK to "take".
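A minimal sketch of that cleanup, assuming you run it from the directory where cgminer cached its compiled kernel .bin files:

```shell
# Remove cached kernel binaries so cgminer rebuilds them against the new SDK
rm -f *.bin
```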
|
|
|
I guess reading a value off is not likely to cause a problem, whereas updating it concurrently would. There is no locking around calls to ADL since it just hooks into the functions directly in the driver.
|
|
|
The main bitcoin client does not support longpoll. Using pool software that can perform a longpoll for you will decrease your stale shares, and therefore increase your chance of finding a block. That is a very good reason imho
No point setting up pool software for this. If you use cgminer, set your primary pool to your local bitcoind and set just about any other actual pool as your backup; it will use the longpoll from the backup pools to help out your bitcoind.
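A hypothetical invocation of that setup: the host, port and credentials below are placeholders, with the local bitcoind RPC port as pool 0 and a public pool as backup whose longpoll cgminer can borrow.

```shell
# Pool 0: local bitcoind (no longpoll of its own)
# Pool 1: any real pool, used as backup and as the longpoll source
cgminer -o http://127.0.0.1:8332 -u rpcuser -p rpcpass \
        -o http://pool.example.com:8332 -u worker -p workerpass
```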
|
|
|
OK, then if it already works how do we pass options to cgminer to fire it up?
Try: cgminer -n
It should report any SDKs installed and any OpenCL compatible GPUs detected. Presumably the GPUs need drivers that advertise their OpenCL capability.
|
|
|
ckolivas, any chance you can take a look at the possibility of churning some code out to take advantage of the Intel HD GPUs integrated into Sandy Bridge CPUs (and now Ivy Bridge)? Apparently they just released an OpenCL SDK to allow access to the GPU portion of the CPU instead of just the CPU itself. See post #22 here - https://bitcointalk.org/index.php?topic=71366.msg870723#msg870723
If it's OpenCL then they should already work. Performance, on the other hand, is likely to be shit. Not because of my opencl code, but because they are pissweak.
|
|
|