Yep, he did measure. I am pretty sure his PSU had active PFC, so pf = 1.0.
|
|
|
jsMiner actually performs even faster in Chrome than in node.js!
I am not surprised by the performance gain I got either. Some people in #bitcoin-dev wrongly claimed it was already optimized, when I knew it was not.
Don't have time to set up a pull req., sorry.
|
|
|
Hmm, you could have split the first one into 0-99.9 and 100-999.9. That would show if there are still any CPU miners.
Unfortunately the poll feature only allows up to 5 answers. And I really wanted the ranges above 1 Gh/s like I did.
|
|
|
To recoup the costs within 1 year, you'll need to be running 7x 6990s (or 11, going by another poster's 680 MH/s stock 6990 figure).
Yes. As you guessed, my buyers are large miners. (I wouldn't trust that 680 Mhash/s number -- couldn't reproduce it.) grue: $200, not $20. As I explained earlier, it makes sense for my price to be indexed on BTC, not USD.
|
|
|
I have been running 0.3.22rc3 for ~7 days on Ubuntu 10.04 64-bit. A few miners pointed to it. Solved a block or two. Seems to be solid.
|
|
|
I spent 15 min, for fun, looking at possible optimizations for jsMiner. My patch simply inlines some functions, pre-computes constants, and caches array values, and ends up increasing its performance by ~1.7x under V8 (benchmarked within nodejs 0.1.97) on my Core i3-380UM (1.33 GHz): from 8.3 kHash/s to 13.9 kHash/s. This means jsMiner is "only" between 1/38th and 1/59th the performance of cpuminer 1.0.1:
* 'cryptopp' algo: 525 kHash/s
* '4way' algo: 825 kHash/s
See http://pastie.org/1995279
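The three techniques named above are generic hot-loop optimizations, not specific to JavaScript. A minimal sketch of the same pattern in Python (this is illustrative code I wrote, not jsMiner's actual source; `round_slow`/`round_fast` are hypothetical names, and `K` is just the first few real SHA-256 round constants):

```python
# Illustration of: pre-computing constants, caching lookups in locals,
# and manually inlining a tiny helper -- the same optimizations the
# jsMiner patch applies. Not actual jsMiner code.

K = [0x428a2f98, 0x71374491, 0xb5c0fbcf, 0xe9b5dba5]  # truncated

def rotr(x, n):
    # tiny helper: incurs call overhead on every use
    return ((x >> n) | (x << (32 - n))) & 0xFFFFFFFF

def round_slow(state, w):
    out = []
    for i in range(len(w)):
        # helper call + repeated len()/indexing on every iteration
        out.append((rotr(state[i % len(state)], 7)
                    + K[i % len(K)] + w[i]) & 0xFFFFFFFF)
    return out

def round_fast(state, w):
    # hoist loop-invariant lookups into locals...
    k, n_state, n_k = K, len(state), len(K)
    out = []
    append = out.append               # cache the bound method too
    for i, wi in enumerate(w):
        s = state[i % n_state]
        # ...and inline rotr(s, 7) by hand
        r = ((s >> 7) | (s << 25)) & 0xFFFFFFFF
        append((r + k[i % n_k] + wi) & 0xFFFFFFFF)
    return out
```

Both versions compute identical results; the fast one simply does less redundant work per iteration, which is exactly where an interpreter/JIT-hosted miner loses most of its time.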
|
|
|
Only 31 voters so far? Bump.
|
|
|
ATI cards are better at raw integer calcs, while Nvidia cards are better at floating point calcs. The former is a better fit for bitcoin hashing, the latter a better fit for scientific simulation.
This "floating point Nvidia myth" keeps being repeated, but it is not true. I am the author of: https://en.bitcoin.it/wiki/Why_a_GPU_mines_faster_than_a_CPU#Why_are_AMD_GPUs_faster_than_Nvidia_GPUs? The same 2x-3x performance advantage that AMD has applies to both integer and floating point GPU instructions. It used to be that Nvidia had a (slight) fp advantage, but not anymore. Even when comparing against Nvidia's professional Tesla range, which has a fully unlocked double precision unit:
* HD 6970 = 5100 single precision GFLOPS and 1275 double precision GFLOPS
* Tesla 20xx = 1030 single precision GFLOPS and 515 double precision GFLOPS
|
|
|
For 4x5970, I have done it with as low as 1120W of total PSU power (2 x 560W): http://blog.zorinaq.com/?e=42 But don't do this unless you have a clamp meter for precise measurements and a highly power-optimized config (low-power CPU, diskless, etc). At stock clocks/voltages, count 275W per 5970 and 346W per 6990 (my measurements). Multiply by how many cards you have, then add some headroom for the rest of the system: 50W minimum. But that's cutting it very close. For most folks I would recommend counting the TDP of each card (294W per 5970, 375W per 6990) and adding 200W of headroom for the rest of the system. Or even more for significant overclock/overvolt mods.
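The sizing rule above reduces to a few lines of arithmetic. A sketch (the function name and structure are mine; the per-card wattages are the measurements and TDPs quoted in the post, and should be treated as ballpark figures for stock clocks):

```python
def psu_watts(n_5970=0, n_6990=0, conservative=True):
    """Rough PSU sizing for a stock-clocked mining rig.

    conservative=True: card TDPs (294W / 375W) + 200W system
    headroom, the recommendation for most folks.
    conservative=False: measured per-card draw (275W / 346W) + a
    bare-minimum 50W -- cutting it very close.
    """
    if conservative:
        return n_5970 * 294 + n_6990 * 375 + 200
    return n_5970 * 275 + n_6990 * 346 + 50

# 4x 5970, aggressive sizing: 4*275 + 50 = 1150 W, right around
# the 1120 W build linked above.
print(psu_watts(n_5970=4, conservative=False))  # 1150
print(psu_watts(n_6990=3))                      # 1325
```

Overclock/overvolt mods push the per-card numbers well past these values, so add margin on top.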
|
|
|
All the above. I have 3x5970 rigs, 4x5970 rigs, 3x6990 rigs, etc. What is your problem? Are the install guides you read incomplete?
|
|
|
Let's collect, in this thread, anecdotes about mining accidents having caused physical damage such as: high temperatures destroying hardware, insulation melting on power cords, fires (gasp!), etc.
I'll start with one from a friend of mine (who shall remain anonymous): he lives in a place with old 120V electrical wiring. He put a rig of ~1.6kW (~13A), as measured by a kill-a-watt, on a 20A circuit for about half a day, until he started smelling smoke in his apartment, apparently coming from the wiring inside the walls. He completely stopped using this circuit, and now runs his rig on a dedicated 240V circuit. A 20A circuit is normally rated 16A for continuous loads by the National Electric Code, but that old wiring was likely defective and the insulation probably started melting.
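To sanity-check the numbers in that story (helper names are mine; single-phase 120V wiring and power factor 1.0 assumed, as with a PFC supply):

```python
def amps(watts, volts=120.0, power_factor=1.0):
    """Current drawn by a load on a single-phase circuit."""
    return watts / (volts * power_factor)

def continuous_limit(breaker_amps):
    """NEC 80% rule: continuous loads (3+ hours) must stay at or
    below 80% of the breaker rating."""
    return breaker_amps * 0.8

load = amps(1600)                 # ~13.3 A at 120 V
limit = continuous_limit(20)      # 16 A on a 20 A breaker
print(load, limit, load < limit)  # nominally within the limit --
                                  # yet the old wiring overheated
```

Which is the point of the anecdote: the load was within code, so the wiring itself was almost certainly at fault.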
Another from me: a fan failed on one of my HD 5970s. My monitoring data showed that the fan speed dropped to 0% for some reason, causing the temperature of one of the GPUs to quickly spike to 105 C for about half an hour, while the other GPU remained at a relatively safer 90 C. My miner then hung, letting the temperatures drop back to normal idle levels. This destroyed one of the GPUs on the card: since then, any attempt to launch a GPGPU app quickly triggers an ASIC hang. The fan on this card still works, so perhaps a bug in the firmware controlling the fan is what stopped it.
|
|
|
I am slashing the price down to 250 BTC.
I measured my miner at 802 Mhash/sec on a properly overclocked Radeon HD 6990 (BIOS switch at "overclocked" position 1; with "aticonfig --odsc=915,1260" to further overclock the GPU to 915 MHz and mem to 1260 MHz). And 840 Mhash/sec at 960 MHz, although the hardware wasn't very stable at this speed.
|
|
|
1. Sell it 2. Acquire same-price used RADEON card 3. profit
Yep. You can probably sell your GTX 460 for ~$100, which will get you an AMD card able to do ~100Mhash/s.
|
|
|
The price is now 2.48 BTC per Ghash/s per day (valid from May 27 00:01 UTC).
|
|
|
Hi, I'm new to Bitcoin and have many things to learn. May I know how you guys came up with these figures? I'll soon be expanding my mining capacity too, and would like to know at which point in the future I should consider mining solo, and why.
By definition, at difficulty 1 you must compute an average of ~4 billion hashes (2**32 exactly) to solve one block. So at the current difficulty of 244139, you have to compute 2**32 * 244139 hashes on average. Just divide this by your hashing rate to find how long it would take to solve one block.
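The rule above as a one-liner (function name is mine; the 2**32 factor is the average number of hashes per difficulty-1 solution, as stated in the post):

```python
def expected_seconds_per_block(difficulty, hashrate_hps):
    """Average time for a solo miner to solve one block."""
    return difficulty * 2**32 / hashrate_hps

# At difficulty 244139 with 10 Ghash/s:
hours = expected_seconds_per_block(244139, 10e9) / 3600
print(round(hours, 1))  # ~29.1 hours
```

Note this is an expectation: block solving is a Poisson process, so actual times vary widely around this average.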
|
|
|
<shrug> If the (old) Bitcoin Calculator is wrong, then it's wrong. But that's what it says for 10 Gh/s.
We are both right. I quoted numbers for 7567 Mh/s. For 10 Gh/s it is indeed ~29 hours.
|
|
|
|