chem1990
Newbie
Offline
Activity: 12
Merit: 0
January 03, 2018, 11:08:31 AM
Now that we are on the subject of power, I have a question too. Is it mandatory to connect the additional molex connectors on the motherboard to the PSU? All GPUs and risers get their power directly from the PSU, so are those connectors needed too?
If you are talking about the ASRock H110 Pro BTC+, the manual says that one of the two molex connectors needs to be powered (I think the one closest to the CPU); the other can be left unpowered. There is a SATA connector too, with no information about it in the manual, so I think it should be powered as well... I connected all three.
papampi
Full Member
 
Offline
Activity: 686
Merit: 140
Linux FOREVER! Resistance is futile!!!
January 03, 2018, 12:58:19 PM
Hello friends, I have five questions about configuring 1bash.
1 - TEAMVIEWER="NO" # YES or NO # If you use TeamViewer to remotely connect to this rig set this to YES. Is it possible to install TeamViewer without emulating it through that paid program? If yes, how?
2 - TARGET_TEMP= this is the temperature I want, not the temperature the cards will necessarily reach, right? I want my rigs at 65 °C, but with my power limit they never get that hot. Is that OK, or will the rig keep resetting until it reaches 65 °C?
3 - MINIMAL_FAN_SPEED= is this self-regulated, like an Afterburner fan curve?
4 - __CORE_OVERCLOCK=100 # Global core overclock for all GPUs if not using INDIVIDUAL settings; MEMORY_OVERCLOCK=100 # Global memory overclock for all GPUs if not using INDIVIDUAL settings
If I don't need an overclock, can I leave these at 0? Is 0 the stock setting?
5 - How do I configure the # ZEC section?
ZEC_WORKER="myworkernameonsuprnova.zec"
# replace_with_your_ZEC_address
ZEC_ADDRESS="I don't need an address since I'm mining on suprnova, so do I leave this blank?"
ZEC_POOL="zec.suprnova.cc"
ZEC_PORT="2142"
Thank you once more for your attention and your kindness.
1 - Just set it to YES.
2 - That is the target temperature the system tries to keep your cards below; if they reach it, the temperature control will raise the fan speed to bring them back under it.
3 - Your fans start at that speed and will go higher if needed.
4 - Set them both to 0.
5 - ZEC_WORKER="suprnova worker name" # rig-1, rig-10, myrig, ...
ZEC_ADDRESS="suprnova user name"
ZEC_POOL="zec.suprnova.cc"
ZEC_PORT="2142"
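Putting those answers together, the relevant part of 1bash would look roughly like this (a sketch only: the variable names are the ones quoted above, the fan-speed value is just an illustrative placeholder, and the surrounding lines differ between nvOC versions):

TEAMVIEWER="YES"                  # YES or NO - set YES if you use TeamViewer to reach this rig
TARGET_TEMP=65                    # temperature the fan control tries to keep the cards below
MINIMAL_FAN_SPEED=50              # fans start at this speed and are raised automatically when needed (example value)
__CORE_OVERCLOCK=0                # 0 = no global core overclock
MEMORY_OVERCLOCK=0                # 0 = no global memory overclock

# ZEC on suprnova: the pool identifies you by account and worker name, not by a wallet address
ZEC_WORKER="rig-1"                # your suprnova worker name
ZEC_ADDRESS="suprnova_username"   # your suprnova account name
ZEC_POOL="zec.suprnova.cc"
ZEC_PORT="2142"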
bumbu100
Newbie
Offline
Activity: 44
Merit: 0
January 03, 2018, 02:05:09 PM
I am stuck trying to find a way to mine in 2 or 3 separate sessions, with separate wallets. The reason: several people are mining on the same rig (13 x GPU 1080).
Thanks, and I wish you a very happy new year!
Does nobody know? There must be a way... maybe something like a fee. A big fee - up to 50%.
WaveFront
Member

Offline
Activity: 126
Merit: 10
January 03, 2018, 02:31:17 PM Last edit: January 03, 2018, 02:55:56 PM by WaveFront
So in practice, each PCIe 6+2 output is powering 3 cards and each SATA output is powering 3 risers. I could replace the SATA connectors on the risers with molex connectors, which can probably handle more power (I will try this now), but I see that my best alternative is to add another PSU with more PCIe 6+2 outputs.

Hi WaveFront, I am using the same motherboard with 11 Zotac 1070 Tis and server PSUs. To get a better understanding of the power distribution, I measured some parts. In my case the risers use 25 W each, which means about 2 A. Most PSUs have a chain of 5 SATA connectors on each cable - that is more than the wire can handle. As you have only 3 contacts for each voltage rail at the SATA connector, that would be about 0.7 A per contact, which is IMO quite high, as we are running it 24 hours a day. 3 risers per cable is IMO the maximum, and changing to molex is not a bad idea ;-) I have my cards limited to 130 W; the whole system uses 1800 W, which is an additional 370 W for the rest of the system.

Now that we are on the subject of power, I have a question too. Is it mandatory to connect the additional molex connectors on the motherboard to the PSU? All GPUs and risers get their power directly from the PSU, so are those connectors needed too?

Hi papampi, at least on the ASRock H110 Pro BTC+ it depends on how many PCIe slots are occupied. When I assembled this last rig, I only had a feed to the molex connector closest to the CPU. After placing the 4th GPU, the board started to display a warning message about connecting the other molex and refused to boot until that connector was wired to the PSU. Not sure if it is really required, but the board designers thought it was important enough to program a safety check into the firmware.
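For reference, those riser figures can be reproduced with a quick calculation (a rough sketch only; it assumes the riser load sits entirely on the 12 V rail and that the SATA power plug has 3 contacts per rail, as described above):

# Sanity check of the per-riser and per-contact current quoted above (plain bash, needs bc)
RISER_WATTS=25          # measured draw per riser
RAIL_VOLTS=12           # assume the whole load is on the 12 V rail
CONTACTS_PER_RAIL=3     # contacts per voltage rail on a SATA power plug
RISER_AMPS=$(echo "scale=2; $RISER_WATTS / $RAIL_VOLTS" | bc)             # ~2.08 A per riser
CONTACT_AMPS=$(echo "scale=2; $RISER_AMPS / $CONTACTS_PER_RAIL" | bc)     # ~0.69 A per contact
echo "Per riser: ${RISER_AMPS} A, per SATA contact: ${CONTACT_AMPS} A"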
WaveFront
Member

Offline
Activity: 126
Merit: 10
January 03, 2018, 02:55:03 PM
Hi monck, thanks for your answer. Which model of PSUs are you using? I assume you have two PSUs connected.
papampi
Full Member
 
Offline
Activity: 686
Merit: 140
Linux FOREVER! Resistance is futile!!!
January 03, 2018, 02:59:23 PM
Does nobody know? There must be a way... maybe something like a fee. A big fee - up to 50%.
No option for that yet; I think leenoox was working on it for the next version.
WaveFront
Member

Offline
Activity: 126
Merit: 10
January 03, 2018, 03:01:34 PM
Hello, I am powering a mobo (ASRock H110 Pro BTC+) with a 1200 W power supply. The board is hosting 10 GTX 1060 6 GB cards. In 1bash I limit the consumption to 80 W per card. Whenever I add an 11th card, the system becomes very unstable, rebooting a few minutes after Ethminer starts running.
I tried swapping cards, risers and slots without finding a solution. I believe it's a power problem; nevertheless, the 11-card system absorbs a total of 910 W, which is still within the limits of the power supply. Any ideas on how I can diagnose the problem?

It is a power problem. Even though your PSU is listed as 1200 W, you have to account for efficiency and power loss. A premium Platinum-rated power supply has an efficiency of 93-95%, Gold has 86-90%, while a Bronze-rated power supply has 80-85% efficiency; the rest is dissipated as heat. When you add the consumption of the motherboard, CPU, and HDD or SSD, you are running your PSU at its limit, hence the instability and random reboots. Since we run these power supplies 24/7, I recommend using Platinum power supplies and loading them to no more than 85%. They cost a bit more, but they pay off through power savings. On a side note, I also use 1060s mining ETH. I have them set at 75 W and have seen no significant improvement when I increase beyond 75 W. Try lowering yours to 75 W; it might save enough juice for your 11th card without restarting.

Hi leenoox, thanks for your answer. I lowered the power to 75 W per card, but the system is still rather unstable. The power supply that I have is a Corsair HX1200i. It's rated Platinum. I was hoping to use the integrated Corsair Link to diagnose these kinds of problems, but there are no drivers for Linux. Now I am faced with two options: 1) buy an extra, smaller power supply to complement the existing one, for example an HX850; 2) leave it as a 10-GPU rig and start afresh with a new rig. Option 1 would be slightly more cost effective, but I have never assembled a rig with two power supplies and I am concerned about possible power loops. What do you think?

I use add2psu adapters in my 12-card rigs, with two 700-750 W power supplies per 12x1060 rig and Biostar BTC+ motherboards. How many PCIe 6+2 connections does your Corsair HX1200i PSU give you? Usually they have 6 or 8 connectors, and I think you are using IDE-to-PCIe or SATA-to-PCIe 6-pin cables for the rest, which could cause problems. I usually try to avoid those cables and go with lower-wattage PSUs, but keep the number of 6-pin connectors the same as the number of my cards. So, for example, I get 2x 700 W PSUs that each have at least six 6+2 PCIe power cables, to get exactly twelve 6-pin connectors for the cards, and use the IDE and SATA cables only for the risers, not for the main power of the cards.

Hi papampi, the HX1200i gives 6 PCIe 6+2 outputs. However, two are taken by the motherboard, which leaves me effectively 4 outputs. The PSU also has 6 SATA/molex outputs; again, two are taken by the motherboard. So in practice, each PCIe 6+2 output is powering 3 cards and each SATA output is powering 3 risers. I could replace the SATA connectors on the risers with molex connectors, which can probably handle more power (I will try this now), but I see that my best alternative is to add another PSU with more PCIe 6+2 outputs.

Doesn't it give you 8-pin to dual 6+2-pin cables? If not, you can get 6-pin to 6-pin cables or 8-pin to dual 6-pin cables; just don't use molex or SATA to 6-pin cables. I think adding a 600 W unit with 4 x 6-pin is more than enough for you. For the future, for 12-card rigs you need 2 PSUs that give you at least:
ATX connector 20+4 pin: 1
4+4 pin CPU: 1
6+2 pin PCIe: 6
SATA + IDE: at least 8
I use 2x Antec TruePower Classic Series 750 W for my 12x1060:
1 x 24 (20+4)-pin
1 x 8 (4+4)-pin ATX12V/EPS12V
6 x 8 (6+2)-pin PCI-E
6 x Molex
9 x SATA

Hi papampi, the Antec TruePower Classic Series 750 seems a better choice than the Corsair HX750. What I like about the Corsair is that the connectors are modular, but this advantage is defeated by the fact that you have to buy proprietary cables (at premium prices). Also, the price of the Antec is far more competitive.
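A rough way to sanity-check a build like this against the 85% load guideline mentioned above (a sketch only; the 10 W per riser and 100 W for motherboard, CPU and SSD are assumed round numbers, not measured values):

# Rough power-budget estimate for the 11-card case discussed above (plain bash, needs bc)
PSU_WATTS=1200
CARDS=11
WATTS_PER_CARD=80       # power limit set in 1bash
RISER_WATTS=10          # assumed draw per riser
BASE_WATTS=100          # assumed motherboard + CPU + SSD
LOAD=$(( CARDS * WATTS_PER_CARD + CARDS * RISER_WATTS + BASE_WATTS ))
PCT=$(echo "scale=1; 100 * $LOAD / $PSU_WATTS" | bc)
echo "Estimated load: ${LOAD} W (~${PCT}% of ${PSU_WATTS} W)"

Under those assumed numbers the rig lands around 1090 W, roughly 90% of the 1200 W rating, which is above the 85% guideline and consistent with the instability described above.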
bumbu100
Newbie
Offline
Activity: 44
Merit: 0
January 03, 2018, 03:24:25 PM
No option for that yet; I think leenoox was working on it for the next version.
OK, that's good news... any idea when it will be ready?
leenoox
January 03, 2018, 04:13:30 PM
OK, that's good news... any idea when it will be ready?
Probably in a month or so... It will be announced when it is ready for testing.
leenoox
January 03, 2018, 04:21:26 PM
Yeah, your PSU doesn't have enough plugs for all the GPUs and risers. I am using two HX1000i units on my 13-GPU rigs; the HX1000i has quite a few connectors. It is recommended to power only 2 risers per SATA cable, 3 at most with careful monitoring - make sure the cables don't overheat and melt.
leenoox
January 03, 2018, 04:29:57 PM
I love the gpumap, great idea!
Yup, leenoox's idea and his awesome code...
Thanks leenoox, I will port it to Windows.
I'm glad you guys like the gpumap. car1999, please post the Windows port when it's done.
monck
Newbie
Offline
Activity: 15
Merit: 0
January 03, 2018, 04:30:29 PM
Hi monck, thanks for your answer. Which model of PSUs are you using? I assume you have two PSUs connected.
I am using a small ATX PSU to power the mainboard, and two HP DPS-800GB A server supplies with home-made cabling to the GPUs.
bumbu100
Newbie
Offline
Activity: 44
Merit: 0
January 03, 2018, 05:43:37 PM
I am using 1250 W HP server PSUs. They work perfectly, no issues, and the price is good. You just need to attach the cables and make a jumper across three pins to turn the unit on permanently. Same config as you, 13 cards: the motherboard's PSU powers 4 GPUs and the HP server PSU powers the rest.
CryptAtomeTrader44
Full Member
 
Offline
Activity: 340
Merit: 103
It is easier to break an atom than partialities AE
January 03, 2018, 06:04:02 PM
papampi
Full Member
 
Offline
Activity: 686
Merit: 140
Linux FOREVER! Resistance is futile!!!
January 03, 2018, 06:44:21 PM
I'd like to know how to use a server PSU for mining too.
martyroz
January 03, 2018, 08:27:48 PM
The power supply that I have is a Corsair HX1200i. It's rated Platinum. I was hoping to use the integrated Corsair Link to diagnose these kinds of problems, but there are no drivers for Linux.
Now I am faced with two options: 1) buy an extra, smaller power supply to complement the existing one, for example an HX850; 2) leave it as a 10-GPU rig and start afresh with a new rig.
Option 1 would be slightly more cost effective, but I have never assembled a rig with two power supplies and I am concerned about possible power loops. What do you think?
I have 6 rigs and all PSUs are from Corsair:
TX850 (non-modular)
RM1000x (6 PCIe capable - comes with 4 * double headers)
HX1200 (6 PCIe capable - comes with 4 * double headers)
HX1200 (as above)
HX1000i (6 PCIe capable - comes with 4 * double headers)
AX1200i (8 PCIe capable - comes with 2 * double headers and 4 * single headers)
We always need one PCIe output for the CPU, so here is the max card set-up for each PSU (single header / double header):
RM1000x / 10 / 5
HX1200 / 10 / 5
HX1000i / 10 / 5
AX1200i / 14 / 7
The AX1200i could technically do 14 cards if single 8-pin, but you would have to buy an additional 5 * PCIe double headers (they go for about $15 ea.). You would also break the golden rule that says 2 risers per peripheral power string. All of these PSUs are capable of 6 peripheral strings max, which is 12 cards at two cards per string. You would have to buy more of those too, or mishmash your riser set-up (3 molex, 2 SATA, etc.).
I have bought a fair few additional cables for the above. Just be aware of the compatibility differences when ordering single cables on eBay, etc. Additionally, you can buy full cable replacement kits to get the extra PCIe cables; I recently did this at a cost of $100 just to get 4 more PCIe double headers, as my regular eBay guy was sold out in December.
HXi and AXi cables are compatible with each other (Type 3)
HX and RMx/RMi cables are compatible with each other (Type 4)
Here is what I mine with each PSU:
TX850 - 1080ti / 1080 (blower) / 1070ti (mini)
RM1000x - 4*Vega56
HX1200 - 4*Vega64
HX1200 - 4*1080ti / 1070ti
AX1200i - 3*1080 / 2*1070ti / 1070 / 1080ti
HX1000i - 3*1080 / 1080ti
If I had my time again, I would buy only the AX1200i / AX1500i. They have great adaptability with the two extra PCIe slots and have uniform compatibility with each other, the exception being the HX1000i for Vega rigs (4-6). Stick with the cable compatibility type so you don't have to stock both lines, and check when using... ><
bumbu100
Newbie
Offline
Activity: 44
Merit: 0
January 03, 2018, 09:08:45 PM
Probably in a month or so... It will be announced when it is ready for testing.
Thank you, leenoox! Waiting for that good news.
WaveFront
Member

Offline
Activity: 126
Merit: 10
January 03, 2018, 09:49:35 PM
Hi martyroz, are the PCIe cables between the AX, HX and RM series compatible?
martyroz
January 03, 2018, 10:00:04 PM
HXi and AXi cables are compatible with each other (Type 3); HX and RMx/RMi cables are compatible with each other (Type 4).
Hi martyroz, are the PCIe cables between the AX, HX and RM series compatible?
^^ =]