Keefe
|
 |
December 22, 2013, 04:00:56 PM |
|
I think the V2 M-boards have empty positions for screw blocks. I'd get some PCIe cable extenders from eBay, cut off the male end, and solder the wires into those holes. That would add two more sockets so you can power the rig with four power cables.
http://www.ebay.com/itm/251382532478
I don't know if the wires in that cable are too thick for those board holes.
|
|
|
|
Doff
|
 |
December 22, 2013, 04:36:24 PM |
|
My M-board has the screw blocks on it; looks like I just need to remove the screws real quick and attach the ring connectors. What I don't know is which wires to put in the ring connectors once I cut the extender. I realize one should get the ground wires and the other the power wires, but not being very savvy when it comes to wiring, period, I'm not sure what goes where.
|
|
|
|
Keefe
|
 |
December 22, 2013, 04:41:09 PM |
|
On a 6-pin PCIe connector, the three wires on the clip side are GND and the other three are +12V. 
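If it helps, here's a minimal sketch of that in Python, assuming the usual pin numbering (pins 1-3 sit opposite the retention clip; on some cables one clip-side position is a sense wire, but it's tied to ground anyway):

```python
# Rough map of a 6-pin PCIe power plug.
# The three contacts opposite the retention clip carry +12V;
# the three on the clip side are grounds. (On some cables one
# clip-side position is a sense wire, but it ties to ground.)
PCIE_6PIN = {
    1: "+12V", 2: "+12V", 3: "+12V",  # row opposite the clip
    4: "GND",  5: "GND",  6: "GND",   # clip side
}

def pins_for(rail):
    """Pin numbers whose wires get bundled into one ring terminal."""
    return [pin for pin, net in PCIE_6PIN.items() if net == rail]

print("+12V screw block gets pins:", pins_for("+12V"))  # [1, 2, 3]
print("GND screw block gets pins: ", pins_for("GND"))   # [4, 5, 6]
```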
|
|
|
|
Doff
|
 |
December 22, 2013, 04:56:33 PM |
|
That looks easy enough. Thank you, Keefe.
|
|
|
|
Asinine
|
 |
December 22, 2013, 07:33:57 PM |
|
Some of the (I believe newer) V3 boards (there is more than one type of V3 M-board) have the screw-down posts as well as PCIe power connectors. The one I have has those loose PCIe H-card slots people were complaining about, the ones that result in flapping cards. The other one is missing the posts, but its PCIe slots are stiffer.
Note: I am not talking about the V1 or V2, which have the short ISA-like card slot.
|
|
|
|
bobcaticus
|
 |
December 22, 2013, 09:24:11 PM |
|
I snapped a quick shot of a V3 M-board. (Be gentle on my Photoshop skills; I had a couple of minutes to snap a photo while the relatives are cooking in the other room.) You'll see the PCIe plugs on the left and the screw-downs on the right. The V2 technically has the same layout; V1 is screw-down only. From what I have been told, you can plug into either/or. Some people have both plugged in, but either should pull enough power from the PSU.
|
|
|
|
Keefe
|
 |
December 22, 2013, 09:43:22 PM |
|
The concern isn't getting enough power to the rig, it's avoiding overheating of the connectors, since they weren't designed for 320W+ on a single cable.
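For scale, here's a rough sketch of the per-contact current (assuming a 12V rail, three +12V contacts per plug, and the ~9A rating commonly quoted for Mini-Fit-style terminals; that rating is my assumption, not a measured spec for these cables):

```python
# Per-contact current when one PCIe cable feeds the whole rig.
WATTS, VOLTS = 320.0, 12.0
PINS_PER_RAIL = 3    # a 6-pin plug has three +12V contacts
RATED_AMPS = 9.0     # ballpark rating for Mini-Fit-style pins

amps_per_pin = WATTS / VOLTS / PINS_PER_RAIL
print(f"{amps_per_pin:.1f} A per contact vs ~{RATED_AMPS:.0f} A rating")
# 8.9 A per contact vs ~9 A rating -- no margin, and it only gets
# worse as the contacts heat up and their resistance rises.
```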
|
|
|
|
twmz
|
 |
December 22, 2013, 09:54:13 PM |
|
Ok, so I borrowed a voltmeter from a friend to reduce the overclocking on my rig, but I have a question for those that have made adjustments (this is a v3 rig, so it has the v2.2 H-boards).
Do you adjust the trimpot while the rig is powered up but not mining? Or do you power everything down, make a guess at how far to adjust the trimpot, then power it back on to measure and see if you guessed right, and then repeat?
|
|
|
|
Keefe
|
 |
December 22, 2013, 09:57:11 PM |
|
I always adjust voltage (either with a pencil on the older boards or with the trimpot on newer ones) while powered but not hashing. It might be a bad idea, but it hasn't caused me any problems.
|
|
|
|
Keefe
|
 |
December 22, 2013, 10:15:52 PM |
|
There's another reason to use more than 2 cables: reducing the power bill.

Let's assume the wires from the PSU to the rig are 18 gauge and 2 ft long, and the rig uses 600W. With two cables (don't use a single cable with two plugs!), we have 6 wires for each of + and -, each carrying 8.3A of current. Each of those 12 wires has a resistance of 0.0128 ohms*, so the voltage drop at 8.3A is 0.106V and the wasted power is 0.88W per wire, for a total waste of 10.6W. If you figure on running these rigs for 6 months and your power cost is $0.15/kWh (mine's twice that), you waste $6.95 just to heat the wires. The connectors probably add significant resistance and therefore waste even more power.

Now let's run the numbers for 4 cables instead of 2: 24 wires, 0.0128 ohms, 4.15A, 0.053V, 0.22W/wire, 5.3W total wasted on heating the wires. As a general rule, doubling the number of wires (or their cross-section area, i.e. 15 gauge vs 18 gauge) sharing the same load halves the power wasted. You save at least $3.47 over 6 months and reduce the risk of fire due to overloaded connectors.

Since my power costs twice as much as the example above, I went a little overkill and made cables for my modular PSUs with one 12 gauge wire instead of three 18 gauge wires, with male PCIe plugs on one end to fit the PSU and ring terminals on the other, for a total of eight 12 gauge wires (four +, four -) per rig.

* http://en.wikipedia.org/wiki/American_wire_gauge
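If you want to check my arithmetic, here's a minimal sketch in Python (assuming the 6.385 ohms per 1000 ft figure for 18 AWG copper from the AWG table, and 182 days for the half year; the small differences from my dollar figures are just rounding):

```python
# Sanity check of the wasted-power numbers above.
R_PER_FT = 6.385 / 1000     # ohms per foot of 18 AWG copper
LENGTH_FT = 2.0             # length of each PSU-to-rig run
LOAD_W, RAIL_V = 600.0, 12.0
PRICE_KWH = 0.15
HOURS = 24 * 182            # roughly six months

def wire_loss(n_cables, wires_per_rail=3):
    """Heat dumped into the wiring when n_cables PCIe cables share the load."""
    amps = (LOAD_W / RAIL_V) / (n_cables * wires_per_rail)  # per wire
    n_wires = n_cables * wires_per_rail * 2                 # + and - wires
    return n_wires, amps, n_wires * amps**2 * R_PER_FT * LENGTH_FT

for n in (2, 4):
    n_wires, amps, wasted = wire_loss(n)
    cost = wasted / 1000 * HOURS * PRICE_KWH
    print(f"{n} cables: {n_wires} wires at {amps:.2f} A, "
          f"{wasted:.1f} W wasted, ${cost:.2f} over 6 months")
# 2 cables: 12 wires at 8.33 A, 10.6 W wasted, $6.97 over 6 months
# 4 cables: 24 wires at 4.17 A, 5.3 W wasted, $3.49 over 6 months
```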
|
|
|
|
Doff
|
 |
December 22, 2013, 11:24:59 PM |
|
So you just made me realize I need to buy two of those extenders. Is it ok to connect two ring connectors to each terminal like that? At least, that's the only way I see to connect the PCIe connections.
|
|
|
|
Keefe
|
 |
December 22, 2013, 11:48:36 PM |
|
I have 4 ring terminals on each screw. I'm using 4 of these for a rig:
|
|
|
|
twmz
|
 |
December 23, 2013, 12:00:47 AM |
|
Don't you lose most of the benefit of distributing the load across many more wires if you don't use all 6 wires from each PCIe connector? Your photo looks like you're only using one + and one GND from the PCIe connector.
|
|
|
|
Keefe
|
 |
December 23, 2013, 12:09:28 AM |
|
No, each splice has three 18 gauge wires (with extra thick insulation) from the PCIe plug on one end and one 12 gauge wire on the other end.
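For anyone wondering whether one 12 gauge wire really covers three 18 gauge wires, here's a quick sketch with the AWG table values (6.385 and 1.588 ohms per 1000 ft; every 3 gauge numbers roughly halves the resistance):

```python
# Compare three 18 AWG wires in parallel against a single 12 AWG wire.
R_18AWG = 6.385  # ohms per 1000 ft
R_12AWG = 1.588  # ohms per 1000 ft

r_parallel = R_18AWG / 3  # three identical wires in parallel
print(f"3 x 18 AWG in parallel: {r_parallel:.3f} ohms/1000 ft")
print(f"1 x 12 AWG:             {R_12AWG:.3f} ohms/1000 ft")
print(f"12 AWG dissipates {1 - R_12AWG / r_parallel:.0%} less heat on that run")
# 3 x 18 AWG in parallel: 2.128 ohms/1000 ft
# 1 x 12 AWG:             1.588 ohms/1000 ft
# 12 AWG dissipates 25% less heat on that run
```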
|
|
|
|
Anddos
|
 |
December 23, 2013, 12:20:22 AM |
|
Why is the price so damn high for 400GH?
|
|
|
|
|
Doff
|
 |
December 23, 2013, 01:11:07 AM |
|
They're just for looks; they don't actually want to sell them. (Sorry, couldn't help myself.)
|
|
|
|
Keefe
|
 |
December 23, 2013, 01:56:48 AM |
|
|
|
|
|
Doff
|
 |
December 23, 2013, 02:18:07 AM |
|
So, one last question: it should be safe to have the two PCIe plugs connected on the front of my V3 board, and two more cables going to the terminals on the back of the board, all from the same PSU of course?
|
|
|
|
Keefe
|
 |
December 23, 2013, 02:19:40 AM |
|
Yes.
|
|
|
|
Trongersoll
|
 |
December 23, 2013, 08:23:07 PM |
|
*kicks Dave's desk* Hey! Wake up! You've got orders waiting to be shipped!
|
|
|
|
|