Lower -f number = higher aggression = typically more CPU. It means the GPU works a larger set of nonces before finishing and asking for more work. If poclbm's -f argument is anything like DiabloMiner's -f, it essentially tries to complete an entire kernel run -f times per second. Assuming a 1 Ghash/sec GPU, -f 1 would attempt 1 billion hashes in a single kernel run before asking for more nonces to hash, i.e. about once every second. -f 10 would attempt 100 million hashes per kernel run, requiring new nonce ranges 10 times per second. -f 100 would do 10 million hashes per kernel run and ask for more nonce ranges 100 times per second. Requiring new work more often generally leaves the GPU more open to other requests, such as rendering the desktop, or games, or whatever, which means less lag. Some drivers had a bug that caused high CPU usage at higher aggressions. Lastly, people use the 2.1 SDK by installing the standalone SDK here https://dl.dropbox.com/u/9768004/ATIStreamSDK_dev.msi or locating it on AMD's website (the .msi I linked is part of the package AMD offers for the 2.1 SDK, but it is the only thing you need to enable the 2.1 SDK; everything else is useless as far as mining goes). It is an entirely different platform from AMD APP (2.4+), so both can be installed at the same time. Your miner needs to be able to select a different platform, though. Some miners don't let you do this, but instead list the same device under a different platform as a different device, so you would mine under the "new" device. Say you had two 5870's. Generally, your newer SDK comes up first (AMD APP), giving you device 0 and 1, and maybe device 2 if the miner shows CPU devices as well. SDK 2.1 would be listed after that and show device 3 (CPU), then device 4 and 5 as the 5870's. For some reason, the 2.1 SDK lists the CPU as the first device; AMD APP lists the CPU as the last device.
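The -f arithmetic above can be sketched in a few lines of Python. This is just an illustration of the relationship described (nonces per run = hashrate / runs per second), not actual miner code; the function name is made up:

```python
def nonces_per_kernel_run(hashrate_hps, f_value):
    """Approximate nonce-range size per kernel run.

    -f is (roughly) the number of kernel runs the miner targets per
    second, so each run covers hashrate / f nonces.  Smaller -f =>
    bigger runs => more aggression, more lag.
    """
    return hashrate_hps // f_value

# The 1 Ghash/sec GPU from the example above:
gpu = 1_000_000_000
print(nonces_per_kernel_run(gpu, 1))    # 1_000_000_000 nonces, one run/sec
print(nonces_per_kernel_run(gpu, 10))   # 100_000_000 nonces, ten runs/sec
print(nonces_per_kernel_run(gpu, 100))  # 10_000_000 nonces, 100 runs/sec
```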
Oh, TY for donation too. OK, I got the standalone installed. I see the extra devices under GUIMiner. It's mining away with 12.1 CCC & 2.1 SDK. From what I can tell between 2.1 & 2.5, either they both mine at the same speed, or 2.1 is slightly more stable in mhash/s, or 2.1 has lost 1 or 2 mhash/s. Very hard to tell if there's a difference. And actually, my desktop is laggy when 2.1 is used, which I believe you had mentioned previously. Good to know I got this working, though, with 2.1, and that I learned about the extra devices. So, from your experience, what difference can you tell between 2.1 and 2.5? 2.1 is generally a more aggressive SDK. You will notice more desktop lag and even mouse lag at lower aggressions on 2.1 compared to 2.4/2.5. It's great for dedicated 5xxx/60xx-68xx cards. You may need slightly different memory speeds or miner/kernel settings, but for me, it's a couple mhash more per card using phoenix + phatk2.
|
|
|
Small donation sent to you, jackrabbit & ssateneth.
Oh, TY for donation too
|
|
|
What works: Sempron 145 (UCC-unlocked to 2 cores @ 3GHz), ASRock 970 Extreme4, 2GB RAM + 250GB 2.5" HDD, 6950(70)+5870+5850+5770 = 1440 MH/s avg ... want to build 6970+5870+3x5850 (or maybe 6970+2x5870+2x5850) ... no money ... ))) PSU: Chieftec BPS-1200C, open rig
Win 7 x64 SP1
I have the same board, and it does work with 4 GPUs. I am actually running 5 on one right now (same CPU too). BUT, the last two I have bought have a bunk PCIe x16 slot! Both of the last two had a bad PCIE4 slot, that is, the middle x16 slot. I have a third coming from Newegg right now... I hope it works, because I like this board otherwise! Are you plugging cards directly into the slots or using risers? Also, some boards are really picky and insist certain cards go in certain slots, complaining that you need to populate certain slots first to enable CrossFire/SLI, and refusing to boot further unless you do what they tell you. Anyway, if you are using risers, you will probably need to short the PCI-E presence pins. Also make sure your risers aren't defective or broken. Bending them a lot or plugging/unplugging them repeatedly is very stressful on risers and can break the delicate solder joints for the wires, unless both ends are 1x physical width, which makes them effortless to plug in and unplug.
|
|
|
Was gonna say, it sounds like your cards are still running at stock speeds, so it would be an issue with whatever you're using to OC. I recommend Sapphire Trixx for software OC, or a BIOS flash for headless or set-and-forget rigs.
Too bad I didn't see this earlier, would've been an easy couple of bitpennies.
|
|
|
IIRC, 11.12 and beyond come with 2.6 SDK, not 2.5.
Anyway, I use 11.12 + 2.1 SDK on all my rigs with the Phoenix 2 miner, and no CPU bug.
12% CPU indicates you have an 8-core CPU? If it's a single-core CPU, though, 12% sounds "ok" to me. You might have a weird setting which causes exponentially higher CPU with higher aggressions/lower -f flags. I had that for a while, but it eventually fixed itself with a driver change; I don't know which one. It was a long time ago.
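The "12% means 8 cores" guess is just arithmetic: one fully-pegged core on an N-core CPU shows as 100/N percent overall. A toy sketch of that (function name is made up):

```python
def total_cpu_pct(cores, pegged_cores=1):
    """Overall CPU %, assuming `pegged_cores` cores sit at 100%
    and the rest are idle."""
    return 100.0 * pegged_cores / cores

print(total_cpu_pct(8))  # 12.5 -- one busy core on an 8-core CPU
print(total_cpu_pct(1))  # 100.0 -- the classic "100% CPU bug" reading
```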
|
|
|
I'm running these 5 cards and drawing 740W at the wall with a 750W 80PLUS Gold HALE90. Assuming 87% efficiency, this is about 644W, still more than enough headroom.
5830 @ 705/150/0.95v: 230 mhash/sec
5830 @ 880/175/0.95v: 284.5 mhash/sec
5850 @ 1010/211/1.088v: 421.5 mhash/sec
5850 @ 1035/216/1.088v: 432 mhash/sec
5870 @ 720/145/0.95v: 331 mhash/sec
CPU is an Athlon II X2 250 (dual core 3GHz) underclocked to single core 800MHz 1.05v on an ASRock 890GX Extreme4
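The headroom math above (740W at the wall times 87% efficiency ≈ 644W on the DC side, against a 750W rating) can be written out like so — a minimal sketch of the arithmetic, with the function names invented for illustration:

```python
def dc_load_watts(wall_watts, efficiency):
    """DC-side load given wall draw and PSU efficiency (0-1).
    The PSU rating applies to the DC side, not the wall draw."""
    return wall_watts * efficiency

def headroom_watts(psu_rating, wall_watts, efficiency):
    """How much of the PSU's DC rating is still unused."""
    return psu_rating - dc_load_watts(wall_watts, efficiency)

print(round(dc_load_watts(740, 0.87)))       # 644 W actually loading the PSU
print(round(headroom_watts(750, 740, 0.87))) # ~106 W of headroom left
```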
You can even go 0.85-0.9v with your underclocked Athlon II. My X2 250 on my Gigabyte 890GPA runs fine at stock clock (with CnQ and no core disabling) at 1.15v (default is 1.4v), even with the CPU at 100% utilization. I didn't mess with the voltage further because I didn't want to troubleshoot it if it started acting up. The voltage is at default because Cool'n'Quiet has its own unmodifiable voltage, so all I really did was turn off a core, go into Windows power management, and set the minimum and maximum CPU duty (whatever it's called) to 5%.
|
|
|
I haven't used the chip you're talking about. Considering, though, that a GPU like the 5750 hashes at about 170 MH/s and the APU of the AMD A8-3850 (which is a 6550) does 65 MH/s, I seriously doubt this Intel chip can reach 100 MH/s. Waiting for XX55XX to post real-life results from his new laptop. I have a desktop with a 3770K, as I said before. Is it not possible at the moment to run an OpenCL miner on the HD 4000 graphics? Nothing comes up in GUIMiner. You will likely need to install Intel's OpenCL package. Also, GUIMiner -only- looks for AMD APP (SDK 2.4 or higher) before "enabling" the rest of the program. Once you have 2.4, 2.5, or 2.6 installed, GUIMiner will run normally. You will then have to configure your miner (GUIMiner is not a miner. It is a frontend. A GUI. Nothing more.) to use the Intel OpenCL platform instead, and hope it recognizes the HD 4000 graphics. I think you can use GPU Caps Viewer to analyze each OpenCL platform and its compatible devices if GUIMiner is acting weird, to make sure the HD 4000 is an acceptable device for Intel OpenCL.
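The flat device numbering described earlier in the thread (two 5870's showing up as devices 0-1 under AMD APP and again as 4-5 under SDK 2.1) comes from miners concatenating every platform's device list into one index. A toy sketch of that flattening — the platform and device names here are illustrative, and a real miner would enumerate them through an OpenCL binding such as pyopencl rather than a hard-coded list:

```python
# Each OpenCL platform lists its own devices; a miner that merges
# platforms just concatenates the lists, so the same physical GPU can
# appear twice under different device numbers.
platforms = [
    ("AMD APP (SDK 2.4+)",  ["5870", "5870", "CPU"]),   # CPU listed last
    ("ATI Stream (SDK 2.1)", ["CPU", "5870", "5870"]),  # CPU listed first
]

flat = []
for plat_name, devices in platforms:
    for dev in devices:
        flat.append((len(flat), plat_name, dev))

for idx, plat_name, dev in flat:
    print(f"device {idx}: {dev} ({plat_name})")
```

Running this reproduces the numbering from the earlier post: devices 0-1 are the 5870's, device 2 the CPU (AMD APP), device 3 the CPU again (SDK 2.1), and devices 4-5 the same 5870's under the older platform.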
|
|
|
As far as the miner to use, a large population will tell you to use cgminer, but I prefer phoenix because it gives me about 1-2% more mhash than cgminer (stupid variable intensities; even setting a high intensity is still variable, which is why it's slower).
Not debating the hashrate difference you see, but a correction: high intensities are not variable. Reporting looks like it is because of the asynchronous nature of the threads reporting back their hashes done. Only the long-term average is accurate, and only after enough time has passed, since it's an all-time average of true hashes done, which will include any dips across longpolls etc. There is no variable intensity in anything but dynamic mode in cgminer. Sorry if I came off attacking/crabby; I must not have been well rested when I posted that. I probably related it to DiabloMiner, which, when I used it, only allowed a variable number of nonces per run and could only change it through a framerate argument, so I assumed that if the hashrate was shaky, like DM's, it must have a variable number of nonces per run. But the async nature would make sense too; thanks for clearing that up.
|
|
|
can you post a closer up pic of one sitting on a card, thanks.
Here's a K100 on my non-reference Gigabyte 5770: http://imgur.com/a/TWTts#0 Wow, talk about overkill. I'm willing to bet maybe 2 of the 5 heatpipes are actually contacting the die.
|
|
|
Going to poke in and say that I think 1x-to-1x risers modded to accept 16x-wide cards are great. Extremely easy to plug in and unplug, so no stress on your wrist or the computer parts, and less likely to break solder joints on the riser itself.
|
|
|
Not really sure what to say on this one. The only times I've seen lower mhash for no apparent reason were on an overloaded PSU (bad/fluctuating voltage probably causes some sort of throttling on the GPU itself).
|
|
|
AMD SDK 2.5 with Catalyst 11.12 works very well.
Or you could just run Catalyst 11.11c, but you might get the 100% CPU bug. I've only gotten the bug on 3 of my 7 systems, though.
11.11 and prior either always have the CPU bug, or may have it. I can verify that 11.12 and 12.1 never have the CPU bug. They also work great for any 5xxx/6xxx card. They are also compatible with the 2.1 SDK, which is the fastest SDK you want to run with any 5xxx and 60xx-68xx card, but if you can't figure out how to install and use the 2.1 SDK, use 2.4 or 2.5 (3% slower than 2.1). 11.12 and higher ship with the 2.6 SDK, which is really slow and bad, and -only- recommended (even required) if you're using a 7xxx card. You will have to find a way to install 2.4/2.5 on top of 2.6 in order to revert to the faster SDK. As far as the miner to use, a large population will tell you to use cgminer, but I prefer phoenix because it gives me about 1-2% more mhash than cgminer (stupid variable intensities; even setting a high intensity is still variable, which is why it's slower).
|
|
|
I have 1GB version. At 800MHz it generates 325MH/s, downvolted to 0.887V. At stock (810MHz) it's 830MH/s. I can overclock it to 950MHz and then it's 400MH/s at stock voltage but heat and noise are not worth it.
830mhash off a 6950? I will buy that card right now. Name your price.
|
|
|
You must not pay for electricity. Pretty sure overvolting by that huge an amount is just losing you any money gained from the overclock.
|
|
|
You want $45 for a plastic fan? What the fuck. GL with that bro.
|
|
|
I'm running 2 5850's at stock volts/1015 core, 1 5870 at 0.95v, and 2 5830's at 0.95v, and I'm drawing 740W at the wall with a HALE90 (80 PLUS Gold). Assuming 87% efficiency, that's about 644W DC, still plenty of headroom on my PSU.
|
|
|
I tried watching the video when you first made the thread, but it was painful to watch (boring narration). I remember having what appeared to be video card problems back when I was using an 8800GTX (nvdisp.dll BSOD), but it wasn't the video card, it was my system RAM. I was running it too fast with too-tight timings. I made the RAM slower/looser and no more BSODs.
|
|
|
So I got 2 of these from eBay: http://www.gigabyte.com/products/product-page.aspx?pid=3354 They don't have voltage control, so they are stuck at 1.088v, but on the plus side, they clock relatively high (1015 and 1035). Unfortunately, one of the 5850's runs about 15C hotter than the other and won't clock as high as a result. From what I can tell, it is not bad contact between the GPU and heatsink, but rather bad contact between the heatpipes and the fins. I say this because the hot 5850's heatpipes are much hotter to the touch than the cooler 5850's. So... I imagine there is some sort of low-melting-point metal that joins the heatpipes to the cooling fins. Would it be dangerous to bake the heatsink in an oven to reflow the contact points? If not, at what temperature should I bake it, and for how long? Also, I have some 60/40 "fine electrical rosin core solder". Would it be possible to use this somehow to increase the contact between the heatpipes and the fins? I don't have any other materials readily available, but I can probably go to Home Depot to get something better suited. The first thing to do would be to leave them alone. The next thing would be to get some Arctic Silver, clean the GPUs and heatsinks with alcohol, reapply thermal grease, and swap the heatsinks between the cards. If the problem follows the heatsink, then it might have been manufactured differently or have bad pipes (which have coolant in them to distribute heat). I get the feeling you didn't read my post entirely. The contact between the heatpipes and GPU core is 100% fine. The problem lies with the contact between the heatpipes and the cooling fins. If the heatpipes are hot and the GPU is hot (my situation), there is good contact between the GPU and heatsink. If the heatpipes are cool and the GPU is hot, there is bad contact between the GPU and heatsink.
Regreasing the GPU core will do nothing for my problem, because that's not where the problem is, and you can't exactly shove a bunch of Arctic Silver/Noctua/whatever you use into the cooling fins... In any case, it looks like I'll just have to suck it up, or maybe try another BIOS or see if Gigabyte has OC software.
|
|
|
|