Install Sapphire Trixx and use the "Disable ULPS" option
|
|
|
This is what a free market looks like people. This is the exact thing that Bitcoin stands for and why it was made.
Giving the means of production for creating whatever coin you want into the hands of the people is plain Marxism and has nothing to do with the free market or the idea Bitcoin stands for. I would suggest you take your propaganda back to Leningrad, comrade, and leave Bitcoin to the free world.

Keep your ideologies out of this; nobody cares, even if you had a clue what you're talking about.

Bitcoin was made for the "free market" or the "free world"? lol. First of all, read the PDF. Second, Bitcoin is even more important to people who are not free.
|
|
|
Do your Hynix and Elpida cards offer the same hashrate? From what I understand, Hynix beats Elpida on clock speed and timings, but that isn't always the case.
|
|
|
if you use risers and the dual-psu. However you're paying for risers and Add2PSU adapter that will be worthless when you end mining
You don't need Add2PSU. Just use a paper clip to short the green and black wires in the 24-pin ATX connector. Plug both PSUs into the same power bar/surge protector and flip the switch to turn both of them on at the same time. That adapter is useless.

You are right that 3 cards per motherboard simplifies things. No support racks to hold the GPUs. One PSU per rig. But if you are building 10+ rigs, space is a factor, so you want to run more than 3 cards per motherboard. Ideally 5-6 cards per system, or whatever you can squeeze onto the same 15A@120V circuit.

Understand that shorting pins on power supplies is yet another point of failure and asking for trouble if one is a newbie/intermediate miner. I don't disagree with your post and perspective, though.

Shorting pins is a point of failure? I highly doubt that, not if you do it right. It's exceptionally easy with a single strand of solid cat6. In most cases it is not even necessary, and the riser cables I use have a 1x presence-pin jump built in anyway. Besides, I think the jump is only needed for BIOS initialization; after that point you could probably remove it until you restart. It is up to him to decide.

Running 3 cards in one mobo directly is a great way to melt the cards down and obliterate the fans in well under 6 months. I know from experience: no amount of external fan velocity or volume will keep three cards stacked that close at any decent running temps.
Did you undervolt the cards? That said, running cards directly on the motherboard isn't even my main point; it is rather not to connect more cards than the available PCI-e 16x slots.

It doesn't matter whether the cards are undervolted or not, they will overheat directly on the motherboard. The reference cooler designs are better, but there are no blower-style 280/270 cards that I know of, and the 290s run so hot you'd be nuts to try. It would likely limit your hashrate, and the cost of risers might be made up by the extra hashpower over the life of the machine. All that heat will play hell on the motherboard too, especially if it's not supported properly. That's a lot of weight, and a lot of heat... Why exactly is it that you recommend staying with x16 slots only? Is it just some wild hunch? It doesn't really make sense, since you apparently advocate the use of riser cables now. The x16 risers are horribly designed and more prone to failure, with 15 extra lanes of unnecessary connections on a crummy ribbon cable, and they are a serious airflow restriction for absolutely no reason! If you use x1 risers, what difference does the slot make? You'd then probably need to jump the x1 presence pin at that point, too. I have several rigs with multiple power supplies running 6 cards per board, and they do just fine. I would have spent double on supporting hardware if I had stayed with 3 cards/rig, for no reason.

Do you realize that what we are discussing here is initial rigs? Do you read the intent, or do you want to go on cherry-picking and derailing? Will you pay or help him if he makes initial missteps (e.g. gets faulty risers, fails to short the pins, etc...)? @OP: With time, improve your design and efficiency. For now, start SIMPLE and SAFER.
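The "whatever you can squeeze on the same 15A@120V circuit" budgeting above can be sketched numerically. The 0.8 continuous-load derating, the 220 W per undervolted card, and the 150 W rig overhead are my assumptions for illustration, not numbers from the thread:

```python
# Rough cards-per-circuit estimate for a 15 A @ 120 V branch circuit.
# Derating and per-card wattage below are assumed figures, not measured.
CIRCUIT_AMPS = 15
CIRCUIT_VOLTS = 120
DERATE = 0.8           # keep continuous draw at ~80% of the breaker rating
WATTS_PER_CARD = 220   # assumed wall draw per undervolted 280X-class card
OVERHEAD_WATTS = 150   # assumed CPU/board/PSU-loss overhead per rig

budget = CIRCUIT_AMPS * CIRCUIT_VOLTS * DERATE            # usable watts
cards = int((budget - OVERHEAD_WATTS) // WATTS_PER_CARD)  # whole cards
print(budget, cards)  # -> 1440.0 5
```

With those assumptions a single 15 A circuit tops out around 5 cards, which lines up with the 5-6 cards per system suggested above.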
|
|
|
The fans are using that much power!?
From what I've read, power use is linear with clock speed but quadratic with voltage. As a very crude calculation: if my 7970's VDDC input is pulling 13 A from the 12 V rail at 1.0 V core, 1 GHz, that's already 13 × 12 = 156 W.
See how many amps (A) GPU-Z reports on the Sensors tab under "VDDC Current In", then add another ~30% for the rest of the card (VDDC covers only the GPU core, afaik).
So: 1.25 (voltage) × 1.25 × 156 W (mine) × 1.15 (clock) × 1.3 ≈ 365 W
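The estimate above can be written as a small helper: power scales roughly linearly with clock ratio and quadratically with voltage ratio, with a rule-of-thumb factor for the rest of the card. The function name and structure are mine; the numbers are the thread's:

```python
# Crude card power estimate: quadratic in voltage, linear in clock.
def card_power(base_watts, v_ratio, clk_ratio, rest_factor=1.3):
    """base_watts: VDDC draw at stock (GPU-Z amps * 12 V rail);
    rest_factor adds ~30% for VRAM/VRMs/fans (the thread's rule of thumb)."""
    return base_watts * v_ratio ** 2 * clk_ratio * rest_factor

base = 13 * 12  # 13 A reported by GPU-Z on the 12 V input = 156 W
print(round(card_power(base, 1.25, 1.15)))  # -> 364 (the post rounds to 365)
```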
|
|
|
My advice that you and others may or may not agree with: forget risers, dual-psu and shorting pins. The motherboard supports 3 cards, put 3 cards.
Let's say you want a setup with 5-6 280X cards. Another base motherboard + CPU (e.g. Sempron 190) + RAM + boot pendrive costs ~$200. You'd obviously not need a 2nd base system, and would save that $200, if you used risers and dual PSUs. However, you'd then be paying for risers and an Add2PSU adapter that will be worthless when you stop mining, shorting pins (you may fry a motherboard that way) or paying more for better risers, running the motherboard out of spec, and concentrating all your hash power on a single point of failure.
Make sure you undervolt the cards and use a large fan nearby.
|
|
|
Perhaps your core speed is too high relative to the RAM clock. Append --gpu-engine 800 --gpu-memclock 1400 to your command line, then try again, increasing --gpu-engine 10 MHz at a time.
|
|
|
Ok, I deleted this previously, posting again...
Scrypt is very sensitive to VRAM timings. The 7870s have a lot of variation in RAM suppliers, from good Hynix to cheap Elpida, and furthermore the BIOS programs those timings in a weird way. They are the equivalent of the usual 9-9-9-24 you see for system RAM (e.g. CAS, tRP) that one can tweak in the PC BIOS. They obviously also exist for video memory, but afaik there isn't a tweaking utility for video cards.
What you can try is to force-flash a BIOS from a card that gives a high hash rate. However, if the VRMs and RAM are incompatible or can't handle the timings, voltages, etc., it will crash and you may even brick the card.
|
|
|
Try -I 20 -g 1 --thread-concurrency 15232
|
|
|
Sorry for the off-topic question, but why wouldn't you buy BTC with PayPal? Isn't the risk on the BTC seller's side?
|
|
|
You sure you want to carry that crap around?
|
|
|
Don't flash it to a 6970; you will kill it with overvoltage. Instead, extract the 6950 BIOS, unlock the shaders with RBE128 (?), and flash it back.
You'll get ~480 kH/s at 880 MHz core, 1500 MHz RAM.
|
|
|
Can you push that RAM a bit more? Hash rate tends to be proportional to RAM clock, as long as the BIOS doesn't introduce higher latencies at higher clocks. Mine can handle up to 1800 MHz (unstable) and 1750 MHz (24h solid).
Could you post a screenshot of the GPU-Z "Sensors" tab, scrolled down to the bottom, while mining? Thanks.
Edit: with undervolting, you're pulling ~700 kH/s out of an XFX? Hmm...
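The proportionality claim above lends itself to a quick back-of-envelope check. The 600 kH/s baseline here is an illustrative figure of mine, not from the thread:

```python
# If hash rate scales linearly with memory clock (and the BIOS doesn't
# add latency at higher clocks), estimate the rate at a new mem clock.
def scaled_hashrate(base_khs, base_mem_mhz, new_mem_mhz):
    return base_khs * new_mem_mhz / base_mem_mhz

# e.g. a hypothetical 600 kH/s at 1500 MHz, pushed to 1750 MHz:
print(round(scaled_hashrate(600, 1500, 1750)))  # -> 700
```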
|
|
|
Very nice results! Wish I knew about that other thread before, could have been of assistance.
|
|
|
What pool do you mine on? If your pool switches coins, that easily locks up cards. The fix? Install CGWatcher and have it restart mining when cards go sick or cgminer freezes, etc. If you mine on a multicoin pool, try mining only Litecoin (for example) for 24 hours and see if you get a crash... if not, then just use the fix I listed above. Here is my BTC address if my solution works for you: 1KnyGG1sySxmCGAD2AAukCWPT8T1o22rhK

Thanks for the troubleshooting tip; however, that is not a solution, because this is not a video card lockup, it's at the CPU or chipset level. I've run with and without CGWatcher. It also happened when an Nvidia 6200TC was providing video, or with just the TeamViewer virtual display driver. The keyboard LEDs don't toggle, and the wireless connections drop from my router.
|
|
|
Locked but BIOS moddable: vtx3d 7970 X-edition
|
|
|
|