A cheap price for a POS Bronze PSU with a 70% efficiency rating 'under typical load' sounds really appealing. LOL. A mining rig that's on 24/7 is NOT a typical load. Do yourself and your rig a favor and buy a PSU that can handle the workload you're asking of it. Go with an EVGA G2/P2 that has 90% or better efficiency; it will also save you money in the long run. You can go with two smaller PSUs as long as there are enough PCI-E connectors for all the cards you want to use.
Hey thanks. I thought this was rated up to 89% efficiency. Could you link me the page where you found the 70% efficiency rating? If that's truly the case, then yep, you're definitely right; that's not a good rating at all for a mining rig. The description says 'More than 70% efficiency at typical load operation', but Cooler Master says 'up to 88% efficiency @ typical load'. Either way, a Bronze PSU is not a good choice for a mining rig and I wouldn't trust it as far as I could throw it into the trash. Go with a Gold-rated or better PSU from a top-tier brand like EVGA or Corsair. They have better components, better efficiency, better warranties, run cooler, and save you money in the long run.
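To put the 'save you money in the long run' claim in rough numbers, here is a sketch comparing the annual electricity cost of a Bronze-class vs. a Gold-class PSU. The 900 W DC load, the 82%/90% efficiency figures, and the $0.12/kWh rate are illustrative assumptions, not measurements:

```python
def annual_cost(dc_load_w, efficiency, price_per_kwh=0.12, hours=24 * 365):
    """AC energy cost for a constant DC load behind a PSU of given efficiency."""
    ac_draw_kw = dc_load_w / efficiency / 1000.0
    return ac_draw_kw * hours * price_per_kwh

bronze = annual_cost(900, 0.82)  # ~82% assumed typical for a Bronze unit
gold = annual_cost(900, 0.90)    # ~90% for a Gold-class unit
print(round(bronze - gold, 2))   # annual savings in dollars, ~$100/year/rig
```

With several rigs running 24/7, that difference alone can cover the price gap between a budget Bronze unit and a quality Gold one within a year.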
|
|
|
A cheap price for a POS Bronze PSU with a 70% efficiency rating 'under typical load' sounds really appealing. LOL. A mining rig that's on 24/7 is NOT a typical load. Do yourself and your rig a favor and buy a PSU that can handle the workload you're asking of it. Go with an EVGA G2/P2 that has 90% or better efficiency; it will also save you money in the long run. You can go with two smaller PSUs as long as there are enough PCI-E connectors for all the cards you want to use.
|
|
|
I'm using 8 GTX 1070 GPUs and have set the page file to 16000 MB. I'm getting this error; can someone help please?
CUDA error - cannot allocate big buffer for DAG. Check readme.txt for possible solutions.
Setting DAG epoch #143 for GPU0
GPU 0 failed
GPU 0, CUDA error 11 - cannot write buffer for DAG
Setting DAG epoch #143 for GPU7
GPU 7 failed
GPU 7, CUDA error 11 - cannot write buffer for DAG
Try setting the PCI-E slots to Gen 2 in the motherboard BIOS.
|
|
|
Can anyone comment on the power consumption of RX 580 cards? I have two rigs, each with six RX 580s, and both are dual mining ETH+LBC (ETH: 176 MH/s + LBC: 510 MH/s). Each rig consumes about 1020 watts reading from the wall meter, while GPU-Z reads less than 130 watts for each card. I think they should all be around 6*130 + 70 (system) ≈ 850 watts total. My settings: -cclock 1150 -mclock 2100 -cvddc 870 -mvddc 870
The power usage shown in GPU-Z is for the card's VGA power connectors. You also need to add 40-50 W for each riser. 1000-1100 W is about right for a six-card RX 580 rig when dual mining.

The risers themselves do NOT use anywhere close to even 40 W. The power load for risers is negligible even with six of them; it's the GPUs themselves that are using the power. I don't think the power sensors in GPUs are all that accurate anyway, plus they fluctuate wildly. I have switched all my RX BIOSes to Anorak's powersave versions. I lose about 3-5% hash rate, but it saves almost that much in power and reduces heat output and wear on the cards. When I see 6 GPUs consuming over 1000 W and then look at an Antminer that can produce 4x more revenue on 1200 W, I wonder why I'm even bothering with GPUs...

You are wrong. Powered risers use between 40 and 50 W each, which is NOT reported in GPU-Z. This is easy to verify in a dual-PSU setup like mine, where all the risers are connected to the same PSU as the motherboard and the secondary PSU only powers the tops of the cards.
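One way to sanity-check the numbers being argued over here: PSU conversion loss alone closes most of the gap between the GPU-Z total and the wall meter, without needing 40-50 W per riser. The 130 W/card and 1020 W figures come from the post above; the 87% PSU efficiency and 70 W system overhead are assumptions:

```python
# Back-of-envelope reconciliation of GPU-Z readings vs. the wall meter
# for the six-card RX 580 rig described above.
cards = 6
gpu_z_per_card = 130   # W per card, as reported by GPU-Z
system = 70            # W, assumed motherboard/CPU/fans overhead
psu_efficiency = 0.87  # assumed PSU efficiency at this load

dc_load = cards * gpu_z_per_card + system  # 850 W on the DC side
expected_wall = dc_load / psu_efficiency   # what the wall meter should read
gap = 1020 - expected_wall                 # unexplained remainder
print(round(expected_wall), round(gap))    # ~977 W expected, ~43 W left over
```

The leftover ~43 W could be risers, fans, or simply sensor error spread across six cards, which is why the wall meter is the only reading worth trusting.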
|
|
|
The 1070 Ti will be the most efficient card for mining the Ethash algo. It won't be available for long after release, so catch some if you can.
My guess is that the hashrate will be around 36-38 MH/s.
It will apparently be built with 16nm fabrication technology, so it's kind of a boring, slightly juiced-up 1070 (or slightly gimped 1080 in the case of GDDR5X) with nothing new. Its power consumption will apparently be 180 watts, the same as most factory-overclocked 1070s. Therefore I think it will only do 32-34 MH/s on ETH, but being a new card will probably mean it's overpriced for miners. Not worth more than $400 for ETH; you can already get 30-31 MH/s on ETH plus 50% better dual mining performance with an RX 580. Around that price it would be good for Equihash; otherwise a 1080 gets ~570 H/s for $500.
|
|
|
The 1070 Ti will be the most efficient card for mining the Ethash algo. It won't be available for long after release, so catch some if you can.
My guess is that the hashrate will be around 36-38 MH/s.
If it has GDDR5X memory, it will do less than a 1080 on Ethash, so ~20-22 MH/s. It's all rumors at this point. Other sites are claiming it has GDDR5X, specifically to deter miners from buying them all, which would make sense since it's a limited-run card. https://www.techspot.com/news/70982-nvidia-rumored-working-gtx-1070-ti.html
|
|
|
The 1070 Ti will be the most efficient card for mining the Ethash algo. It won't be available for long after release, so catch some if you can.
My guess is that the hashrate will be around 36-38 MH/s.
If it has GDDR5X memory, it will do less than a 1080 on Ethash, so ~20-22 MH/s.
|
|
|
Don't trust them.
They are consistently way, way optimistic in their reports. For instance, they still have not updated the Ethereum profits with the difficulty-bomb diff increase, and it's been over 48 hours since that happened. This tells me they are updating the difficulties manually or with a delayed script, not polling from the blockchains directly.
In crypto you need timely data to know when you should move your rigs to another algo or currency, and Whattomine is too slow for this.
By default, Whattomine.com uses a 24-hour average for difficulty and price in its calculations. All you need to do is change it to 'Current price' and 'Current difficulty' in the sort options to use the current data.

Didn't know that; that should be the default setting though.

I think the default options give a more accurate representation of a coin's recent profitability. Difficulty and price change every minute, and adjusting your mining to match them is pointless. By the time you accumulate enough coins to cash out, the price will have changed.
|
|
|
Don't trust them.
They are consistently way, way optimistic in their reports. For instance, they still have not updated the Ethereum profits with the difficulty-bomb diff increase, and it's been over 48 hours since that happened. This tells me they are updating the difficulties manually or with a delayed script, not polling from the blockchains directly.
In crypto you need timely data to know when you should move your rigs to another algo or currency, and Whattomine is too slow for this.
By default, Whattomine.com uses a 24-hour average for difficulty and price in its calculations. All you need to do is change it to 'Current price' and 'Current difficulty' in the sort options to use the current data.
|
|
|
Can anyone comment on the power consumption of RX 580 cards? I have two rigs, each with six RX 580s, and both are dual mining ETH+LBC (ETH: 176 MH/s + LBC: 510 MH/s). Each rig consumes about 1020 watts reading from the wall meter, while GPU-Z reads less than 130 watts for each card. I think they should all be around 6*130 + 70 (system) ≈ 850 watts total. My settings: -cclock 1150 -mclock 2100 -cvddc 870 -mvddc 870
The power usage shown in GPU-Z is for the card's VGA power connectors. You also need to add 40-50 W for each riser. 1000-1100 W is about right for a six-card RX 580 rig when dual mining.
|
|
|
Is there still no way to set the individual fan speeds per card in smOS?
|
|
|
For Ethash, the Sapphire Radeon RX 580 Nitro+ SE 8GB will get 30-31 MH/s after a simple BIOS mod with the Polaris Bios Editor v1.6 'one click timing patch', which is about the same as a GTX 1070, and the RX 580 will have 50% better dual mining performance. Great cards, and they run cool too. https://bitcointalk.org/index.php?topic=1917282.msg22139323#msg22139323
|
|
|
If I remove the Zotac from this rig and do it this way -
AX 1200i - 5 cards at 150 watts each, plus the system and 3 risers - total 1000 watts
Antec 1000 - 2 cards at 150 watts each and 1 card at 170 watts, plus 5 risers at 50 watts each (250 watts) - total 720 watts
Would that work for now?
With two 1000 W PSUs you should be able to run nine 1070s without a problem if you adjust the power limit on the cards. Where are you getting your power consumption values from? For Nvidia cards, the power consumption shown in software tools includes the risers; for AMD cards, you need to add 40-50 W for the riser. Get a power meter so you can measure the actual draw from the wall, and divide the load across both PSUs evenly to keep them at an efficient level.
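The split in the quoted post can be sanity-checked in a few lines. The per-card wattages are taken from the post; the 80% load target and the 130 W power-limited figure are assumptions for illustration, not a sizing rule:

```python
def within_budget(loads_w, psu_rating_w, target=0.80):
    """True if the total draw stays under `target` fraction of the PSU rating."""
    return sum(loads_w) <= psu_rating_w * target

ax1200i = [150] * 5 + [250]             # 5 cards + system/3 risers, per the post
antec1000 = [150, 150, 170] + [50] * 5  # 3 cards + five powered risers

print(within_budget(ax1200i, 1200))     # False: 1000 W is past 80% of 1200 W
print(within_budget(antec1000, 1000))   # True: 720 W on a 1000 W unit

# Dropping the per-card power limit to ~130 W brings the AX back under target:
print(within_budget([130] * 5 + [250], 1200))  # True: 900 W
```

This is the point about adjusting the power limit: the plan only stays comfortable if the cards are capped rather than left at stock draw.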
|
|
|
You need to reduce the power limit on the cards to reduce power consumption. Even then, I don't think it's possible to run six 1070s on a single 1000 W PSU. I have a Zotac AMP! Extreme GTX 1070 mining ZEC and it uses ~185 W at a 75% power limit.
|
|
|
I'm selling my GTX 1060s because they are not very good at mining. Whether it was the Hynix memory or the bad components used, I only got 17 MH/s out of each card after OCing without touching the voltage. I cut my fingers twice on the fan blades, and whenever the network difficulty changes, some of the cards' hash rates drop to 1 MH/s. I'm getting 22 MH/s from a 570 8GB and want to know how to increase the hashrate other than by increasing the memory clock, thanks.
Just get out now while you still can. Those GPUs are going to start taking toes. ![Shocked](https://bitcointalk.org/Smileys/default/shocked.gif)

Stop takin a P00P at him ![Tongue](https://bitcointalk.org/Smileys/default/tongue.gif)

To the OP: look up BIOS modding; your 570 8GB should be able to get 29 MH/s easy peasy at low overclocks. It should hit 29.5 with alt Ethash coins, or 29 if you are using Windows 10 + Claymore 10 + the Blockchain drivers. Dual mining, you should get about 65 MH/s with LBR or about 730 with DCR. All you have to do is: A. Up the memory clock. B. Run Polaris 1.64 and click the Memory Timing button to load the specific memory timing for the card (a nice new feature of the Polaris fork) and you're good to go. I would also put the following in the start.bat for the 570: -cclock 1175 -mclock 2050 -cvddc 900 -mvddc 900 -dcri 20 (with LBR) or -dcri 25 (with Decred).

Where do you put the cvddc settings, at the beginning of the file or the end? And can you undervolt Nvidia cards too?

It makes no difference where you put it in the command line. As noted in the Claymore README, Nvidia cards don't support undervolting. You can adjust the power limit % on Nvidia cards using Afterburner in Windows, which reduces the power consumption target when set to a negative value.
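For reference, a hypothetical start.bat using the flags quoted above might look like the following. The pool URL and wallet are placeholders, and the environment variables are ones commonly set for Claymore on AMD cards; adjust everything to your own setup:

```bat
:: Hypothetical Claymore dual-miner start.bat for the 570 (placeholders, not a
:: tested config). Substitute your own pool and wallet address.
setx GPU_FORCE_64BIT_PTR 0
setx GPU_MAX_HEAP_SIZE 100
EthDcrMiner64.exe -epool POOL_URL -ewal YOUR_WALLET -epsw x ^
  -cclock 1175 -mclock 2050 -cvddc 900 -mvddc 900 -dcri 20
```

Since the whole thing is one command line, the order of the -cvddc/-mvddc flags relative to the others doesn't matter, as the reply above says.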
|
|
|
1% max is normal. You can lower the rejected share rate by lowering the memory overclock, but the hash rate will also go down. Network latency is also a factor, so you should mine at a pool with a server closest to you.
|
|
|
Just like any market, crypto mining works in cycles. What you described was a peak of one of those cycles and definitely not the norm. What's important to remember is that the overall trend is upward. In the video below, BBT gives a good explanation, from the perspective of someone who's been mining for a long time, of why you should keep mining even when it's not the most profitable time to do so. https://www.youtube.com/watch?v=iCSdLSP1sv0&feature=youtu.be&t=364
|
|
|
2nd and 3rd server PSUs are powering all GPUs
Does not boot to BIOS. The screen does not register a signal.
Doesn't matter which slots are plugged in and which are empty, the 14th one just kills it.
Is this starting to look like a bad motherboard, or one that I may have already damaged through improper initial configuration?
Also, the B250 chipset only supports up to 12 dedicated PCI-E lanes natively; the rest of the PCI-E slots go through a PCI-E expansion chip. Nvidia cards are notorious for PCI-E compatibility problems when running through expansion chips, which is why they don't work with those x1-to-3-x1 PCI-E expansion cards.
|
|
|
2nd and 3rd server PSUs are powering all GPUs
Does not boot to BIOS. The screen does not register a signal.
Doesn't matter which slots are plugged in and which are empty, the 14th one just kills it.
Is this starting to look like a bad motherboard, or one that I may have already damaged through improper initial configuration?
If 13 GPUs are working, I doubt it's a bad motherboard. More likely it's an Nvidia driver limitation.
|
|
|
|