Massive update to git tree:
I've basically implemented a kind of "network scheduler" in cgminer. It is designed to scale the number of connections to cope with a hashing setup of virtually any size (see the 91 device example above with 33GH on one machine), yet minimise the number of open connections at any time and reuse connections as much as possible.
This is the relevant part of the changelog (in reverse order); there are other commits to the master branch as well:
- Limit the number of curls we recruit on communication failures, and with delaynet enabled, to 5 by maintaining a per-pool curl count and using a pthread conditional that wakes up when one is returned to the ring buffer.
- Reap curls that are unused for over a minute. This allows connections to be closed, keeping the number of curl handles at the minimum necessary to not delay networking.
- Use the ring buffer of curls from the same pool for submit as well as getwork threads. Since the curl handles were already connected to the same pool and are immediately available, share submission will not be delayed by getworks.
- Implement a scaleable networking framework designed to cope with any sized network requirements, yet minimise the number of connections being reopened. Do this by creating a ring buffer linked list of curl handles to be used by getwork, recruiting extra handles when none is immediately available.
|
|
|
Okay, so here are my 2 cents. There is something funky about this file. I'm on CentOS 5.X and getting this error. To fix it, add a semicolon on the line after die: so it looks like this: die: ; pthread_cleanup_pop(true);
and then all is well. It shouldn't be needed, but it is... I have no idea why....
That's very interesting, and likely an issue with the difference between the headers and implementation of pthread_cleanup_pop in that distribution/gcc versus all the more modern ones we're compiling on. The reason you can't see why this is an issue is that the pthread_cleanup_* functions are implemented as macros, so you'd have to spit out the pre-processor output to see it. Either way it should be easy to implement something like that as a fix, thanks.
|
|
|
One of the reasons I limited a previous version to only displaying 8 devices. Sigh. The interface was never designed with so many devices in mind and with that many you should really be using a different front end via the API instead.
|
|
|
Hi guys! I updated cgminer to 2.3.6 and Catalyst to 12.4, and the hashrate has dropped on my 3x5870 GPUs; the 6950 is OK!
I cleaned the OS of all drivers and the 2.6 SDK, installed the 2.5 SDK and 12.4 drivers, and I cannot get the performance back.
The 3x5870 are below 400 MH/s; I am used to seeing them working at ~430-450 MH/s.
Please tell me how to configure cgminer or the OS to get MAXIMUM performance!
THANX
My CGminer config:
cgminer -o *** -u *** -p *** -I d,10,10,10 --gpu-fan 80-100 --gpu-engine 850,970-990,920,960 --gpu-memclock 300
(the four --gpu-engine values map, in order, to the 6950, 5870, 5870, 5870)
As per the readme... delete the .bin files when you change SDKs if you want the new SDK to "take".
|
|
|
I guess reading a value off is not likely to cause a problem, whereas updating it concurrently would. There is no locking around calls to ADL since it just hooks into the functions directly in the driver.
|
|
|
The main bitcoin client does not support longpoll. Using pool software that can perform a longpoll for you will decrease your stale shares, and therefore increase your chance of finding a block. That is a very good reason imho ![Smiley](https://bitcointalk.org/Smileys/default/smiley.gif)
No point setting up pool software for this. If you use cgminer, set your primary pool to your local bitcoind and set just about any other actual pool as your backup; it will use the longpoll from the backup pools to help out your bitcoind.
|
|
|
OK, then if it already works how do we pass options to cgminer to fire it up?
Try: cgminer -n
It should report any SDKs installed and any OpenCL compatible GPUs detected. Presumably the GPUs need drivers that advertise their OpenCL capability.
|
|
|
ckolivas, any chance you can take a look at the possibility of churning out some code to take advantage of the Intel HD GPUs integrated into Sandy Bridge CPUs (and now Ivy Bridge)? Apparently they just released an OpenCL SDK to allow access to the GPU portion of the CPU instead of just the CPU itself. See post #22 here - https://bitcointalk.org/index.php?topic=71366.msg870723#msg870723
If it's OpenCL then they should already work. Performance, on the other hand, is likely to be shit. Not because of my OpenCL code, but because they are pissweak.
|
|
|
I'll be moving akbash away from the cgminer/bfgminer APIs and poking ADL/the driver directly to sense miner activity. Crashes will be detected through debug events in addition to polling and enumeration of processes.
Little heads up: poking ADL concurrently from two applications can and will lead to crashes.
|
|
|
- Use longpolls from backup pools with failover-only enabled just to check for block changes, but don't use them as work.
I don't have enough data for it to be definitive yet, but I'm wondering if this didn't slightly lower my utility. I am mining on a merged pool that sometimes uses submitold, but have a backup pool that doesn't. Both use merged mining, and I am using --failover-only. So I'm wondering: when a pool other than 0 sends an LP before pool 0, is pool 0's work discarded even though it might still be good? It seems to me like it would be, and then for the time between the LP on the backup pool and the LP on the primary pool, work wouldn't be done. This may not be true, because it may immediately request new work that may then also be discarded when my pool does its LP (I have seen this take 20 seconds with no share submitted, but sometimes it takes significantly longer than that to find a share at ~318 MH/s). That having been said, I am asking because my U is at 4.35 (which rounds to 4.4 in the main stats, over only 6125 shares) where it was at 4.41 before (over tens of thousands of shares over weeks of work). Obviously we are only talking about a difference of .06 in my U, which may be statistically insignificant, but it is still potentially 1.5% fewer shares being submitted over >12 hours. That difference is surely small enough to be accounted for by variance, which in U is usually +/-10%, but the full discussion of its effects is here: https://bitcointalk.org/index.php?topic=28402.msg873742#msg873742
Theoretically you might be losing a *tiny* bit of work across longpolls with --failover-only, but in my experience it is less than 2 seconds' worth of work every 10 minutes.
|
|
|
FPGAs will not take over once I release my Zero-Point Energy Generator. The energy it produces will be free, but the device is gonna cost you. ![Cheesy](https://bitcointalk.org/Smileys/default/cheesy.gif)
Someone suggested using phatk2 with 5800 and 5900 cards, but when I designate '-k phatk2' cgminer said 'you can't do that'. Using search on this thread I noticed ckolivas states cgminer automatically chooses which phatk kernel to use, because there are several available I guess. Also, when I check the config file after creating it while running, it isn't obvious which phatk is being used; it just lists 'phatk'. Do I use '--verbose -T' to check which phatk kernel is in use? When starting from the command line (not -c), can I specify which specific phatk kernel to use instead of letting automatic choice decide? Does it matter which SDK is in use to get the benefits suggested for phatk2? I'd like to reduce stales/DOA for p2pool, rejected shares and longpolling (hardware errors are zero). Is this more a combination of engine/mem, gpu threads, intensity, vectors, and worksize rather than a specific kernel?
There is only one phatk in cgminer and it's called -k phatk, and it's actually phatk2.2 as you would know it. If you do not specify anything, it is chosen by default on pretty much all 5X and 6X cards with any SDK before 2.6. If you have 5X cards, STICK TO AN OLDER SDK, 2.1, 2.4 or 2.5, and let it choose phatk. To do the best with p2pool, read the readme.
|
|
|
When you run connected to a pool/server that doesn't support longpoll, cgminer will find a longpoll from another server and use that to help it detect block changes (as kinlo said). Also, some work will always go to the backup pools (which is why you still get accepted shares from p2pool while connected to your bitcoind) unless you enable the --failover-only option. The large number of longpolls is perfectly normal on p2pool.
|
|
|
the post you linked to indicates that autodetect for bfl only works in linux ... not windows .... as I indicated, I am running windows
Oh, I'll shut up then ![Embarrassed](https://bitcointalk.org/Smileys/default/embarrassed.gif) If there's no autodetect in the windows code it's because it hasn't been done yet.
|
|
|
I don't know what to tell you ... it didn't work before, now it works with gpu disabled. Is there a reason why bfl's are not auto detected like they are in ufasoft's miner?
Darn, now I got mixed up. BFL should be autodetected; it's Icarus that isn't, sorry. See Kano's post. Edit, here: https://bitcointalk.org/index.php?topic=28402.msg874680#msg874680
|
|
|
epoch indicated in his post that i needed to disable the gpu functionality then manually select the com port of the bfl ... that worked and i'm now able to run one instance of cgminer for the gpu and a separate instance for the bfl with gpu disabled
No, it is not necessary to disable the GPU functionality, but you do need to specify the com ports.
|
|
|
got it ... thank you very much ... didn't realize you had to disable the gpu to get the bfl to work ... guess i'll have to run two instances ...thanks again
Wait where did you hear that? That was never said, and you can run GPUs with BFL. Gigavps runs 14 BFL with 2 GPUs getting 13GH on one machine.
|
|
|
Is there a windows binary with bitforce support turned on?
Pretty sure it should be in all BitFORCE-capable versions (since like 2.2.0); if not, I know it is for ... Yes all binaries have it. Peddle not ye wares here.
|
|
|
The 7970 can run the memory clock as low as the engine clock minus 150.
1050 Eng = 900 Mem
1125 Eng = 975 Mem
etc...
Hitting "G" in cgminer (which will be running in screen -- 'screen -r' will attach you) will show you all the details.
Which you can do at startup with the memdiff option: --gpu-memdiff -150
|
|
|
Melbourne, my town? Sure why not... though mentioning a venue and time would be good...
|
|
|
[2012-04-29 13:59:37] LONGPOLL from pool 3 detected new block
[2012-04-29 13:59:39] LONGPOLL from pool 0 requested work restart
You now have a way of knowing which pool is LIKELY (though not surely) to have found that block if you have most of the major pools in your setup. This means you now have a way to *cough* choose who to hop to, if you're so inclined... I'd donate a BTC or two for a new management strategy that acted on that info. Sam
It wouldn't be hard to do now, but realistically for hopping to be worthwhile you need to only do it on prop pools for the maximum benefit, know their hashrate, and do it for the magic percentage of the duration. Then you have to factor in different pay schemes, how long to stay there, where to hop back to and so on... I had no intention of developing and maintaining such a database, which would be very fluid and change day by day. On the other hand, if all you wanted was one that would hop to the pool that found the latest block each time, and you plugged in what pools you wanted it to work on, that would be very easy to do.
|
|
|