|
seriouscoin
|
|
June 24, 2012, 11:54:23 PM |
|
Liking the power board. When it is available, will it be possible to just add those to our shipments if we request/pay for them?
Actually I was looking at the post before. Yes, that's very much the idea: you can do more or less what you want with the wiring. It's always difficult to get PCIe leads to wire up nicely in a rig; they're always the wrong length, and it's hard to split them nicely. Hopefully this board will be of use here. I imagine it will get used for other non-Cairnsmore uses as well. ATX PSUs are hard to beat in efficiency terms and high power at a reasonable cost.
The plan is that this is an initial version and we will follow up with one supporting Ethernet as well. That enhanced one won't be done for 2-3 months yet. Depends on how busy we are.
Currently your board is limited to the 24-pin and PCIe connectors; I want to know why. Every PSU still has 2-3 Molex 4-pin wires (I'm counting only wires, because other connectors sharing the same wire are useless). Why not maximize the available wires of a PSU? To save a few cents on the connector? Most power supplies can be maxed out on the PCIe and +12V EPS pins. I don't know why you're throwing a hissy fit over obsolete and bulky 4-pin Molex connectors. The better question is why the 8-pin EPS connectors aren't being utilized. It's not a hissy fit when there are other enterprise PSUs that we can use. The 4-pin Molex is still very popular outside of PCs.
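As a rough illustration of the wire-counting argument above, here is a minimal sketch tallying the +12V conductors each connector type brings. The pin counts are the usual pinouts; the per-contact current is an assumed round figure, since real limits depend on the terminal series and wire gauge.
Code:
# Rough +12V budget per PSU connector, per the wire-count argument above.
# ASSUMPTION: ~8 A per contact as a round figure; actual ratings vary
# with terminal series and wire gauge.
AMPS_PER_CONTACT = 8.0
VOLTS = 12.0

# +12V conductors per connector type (usual pinouts)
twelve_v_wires = {
    "Molex 4-pin peripheral": 1,
    "PCIe 6-pin": 3,   # pin 2 is sometimes N/C in the spec
    "PCIe 8-pin": 3,
    "EPS12V 8-pin": 4,
}

for name, wires in twelve_v_wires.items():
    watts = wires * AMPS_PER_CONTACT * VOLTS
    print(f"{name}: {wires} x +12V wires, ~{watts:.0f} W at {AMPS_PER_CONTACT:.0f} A/contact")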
|
|
|
|
Gomeler
|
|
June 25, 2012, 01:15:41 AM |
|
Liking the power board. When it is available, will it be possible to just add those to our shipments if we request/pay for them?
Actually I was looking at the post before. Yes, that's very much the idea: you can do more or less what you want with the wiring. It's always difficult to get PCIe leads to wire up nicely in a rig; they're always the wrong length, and it's hard to split them nicely. Hopefully this board will be of use here. I imagine it will get used for other non-Cairnsmore uses as well. ATX PSUs are hard to beat in efficiency terms and high power at a reasonable cost.
The plan is that this is an initial version and we will follow up with one supporting Ethernet as well. That enhanced one won't be done for 2-3 months yet. Depends on how busy we are.
Currently your board is limited to the 24-pin and PCIe connectors; I want to know why. Every PSU still has 2-3 Molex 4-pin wires (I'm counting only wires, because other connectors sharing the same wire are useless). Why not maximize the available wires of a PSU? To save a few cents on the connector? Most power supplies can be maxed out on the PCIe and +12V EPS pins. I don't know why you're throwing a hissy fit over obsolete and bulky 4-pin Molex connectors. The better question is why the 8-pin EPS connectors aren't being utilized. It's not a hissy fit when there are other enterprise PSUs that we can use. The 4-pin Molex is still very popular outside of PCs. Most enterprise PSUs that I have seen are 12 VDC bulk power supplies that then feed an in-chassis 5 VDC and 3.3 VDC power supply stepping down the 12 VDC. Then again, your idea of enterprise and mine may differ. To me, enterprise = rack-mount gear. There might be a market for an adapter that could accept a rack-mount PSU and output a metric shit ton of PCIe 6-pin connectors. The problem with that, though, is you have to deal with 40mm fans.
|
|
|
|
rjk
Sr. Member
Offline
Activity: 448
Merit: 250
1ngldh
|
|
June 25, 2012, 01:40:17 AM |
|
|
|
|
|
Garr255
Legendary
Offline
Activity: 938
Merit: 1000
What's a GPU?
|
|
June 25, 2012, 01:43:55 AM |
|
Does that mean you're scrapping the awesome rig?
|
“First they ignore you, then they laugh at you, then they fight you, then you win.” -- Mahatma Gandhi
Average time between signing on to bitcointalk: Two weeks. Please don't expect responses any faster than that!
|
|
|
rjk
Sr. Member
Offline
Activity: 448
Merit: 250
1ngldh
|
|
June 25, 2012, 01:47:13 AM |
|
Does that mean you're scrapping the awesome rig? Well, at this point, it's hardly worth it to have 20 GH/s that uses ~5-7 kW of power and instantly turns it into heat... What I would love to see most is a single-slot PCIe card with a bunch of some kind of ASIC on it.
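Putting rough numbers on that, as a minimal sketch (6 kW is just the midpoint of the 5-7 kW range quoted above):
Code:
# Back-of-envelope efficiency for the rig described above.
hashrate_mhs = 20_000   # 20 GH/s expressed in MH/s
power_w = 6_000         # midpoint of the quoted 5-7 kW

mh_per_joule = hashrate_mhs / power_w   # ~3.3 MH/J
kwh_per_day = power_w * 24 / 1000       # ~144 kWh/day, all of it heat

print(f"{mh_per_joule:.1f} MH/J, {kwh_per_day:.0f} kWh/day")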
|
|
|
|
seriouscoin
|
|
June 25, 2012, 01:48:34 AM |
|
Does that mean you're scrapping the awesome rig? Well, at this point, it's hardly worth it to have 20 GH/s that uses ~5-7 kW of power and instantly turns it into heat... What I would love to see most is a single-slot PCIe card with a bunch of some kind of ASIC on it. That would be a dream ... watercool it too!
|
|
|
|
DiabloD3
Legendary
Offline
Activity: 1162
Merit: 1000
DiabloMiner author
|
|
June 25, 2012, 01:51:17 AM |
|
Liking the power board. When it is available, will it be possible to just add those to our shipments if we request/pay for them?
Actually I was looking at the post before. Yes, that's very much the idea: you can do more or less what you want with the wiring. It's always difficult to get PCIe leads to wire up nicely in a rig; they're always the wrong length, and it's hard to split them nicely. Hopefully this board will be of use here. I imagine it will get used for other non-Cairnsmore uses as well. ATX PSUs are hard to beat in efficiency terms and high power at a reasonable cost.
The plan is that this is an initial version and we will follow up with one supporting Ethernet as well. That enhanced one won't be done for 2-3 months yet. Depends on how busy we are.
Currently your board is limited to the 24-pin and PCIe connectors; I want to know why. Every PSU still has 2-3 Molex 4-pin wires (I'm counting only wires, because other connectors sharing the same wire are useless). Why not maximize the available wires of a PSU? To save a few cents on the connector? Most power supplies can be maxed out on the PCIe and +12V EPS pins. I don't know why you're throwing a hissy fit over obsolete and bulky 4-pin Molex connectors. The better question is why the 8-pin EPS connectors aren't being utilized. It's not a hissy fit when there are other enterprise PSUs that we can use. The 4-pin Molex is still very popular outside of PCs. Most enterprise PSUs that I have seen are 12 VDC bulk power supplies that then feed an in-chassis 5 VDC and 3.3 VDC power supply stepping down the 12 VDC. Then again, your idea of enterprise and mine may differ. To me, enterprise = rack-mount gear. There might be a market for an adapter that could accept a rack-mount PSU and output a metric shit ton of PCIe 6-pin connectors. The problem with that, though, is you have to deal with 40mm fans. To be fair, that's how all high-efficiency PSUs work now: AC->DC 12V, then DC->DC 12V->whatever for everything else.
|
|
|
|
Gomeler
|
|
June 25, 2012, 01:58:59 AM |
|
Liking the power board. When it is available, will it be possible to just add those to our shipments if we request/pay for them?
Actually I was looking at the post before. Yes, that's very much the idea: you can do more or less what you want with the wiring. It's always difficult to get PCIe leads to wire up nicely in a rig; they're always the wrong length, and it's hard to split them nicely. Hopefully this board will be of use here. I imagine it will get used for other non-Cairnsmore uses as well. ATX PSUs are hard to beat in efficiency terms and high power at a reasonable cost.
The plan is that this is an initial version and we will follow up with one supporting Ethernet as well. That enhanced one won't be done for 2-3 months yet. Depends on how busy we are.
Currently your board is limited to the 24-pin and PCIe connectors; I want to know why. Every PSU still has 2-3 Molex 4-pin wires (I'm counting only wires, because other connectors sharing the same wire are useless). Why not maximize the available wires of a PSU? To save a few cents on the connector? Most power supplies can be maxed out on the PCIe and +12V EPS pins. I don't know why you're throwing a hissy fit over obsolete and bulky 4-pin Molex connectors. The better question is why the 8-pin EPS connectors aren't being utilized. It's not a hissy fit when there are other enterprise PSUs that we can use. The 4-pin Molex is still very popular outside of PCs. Most enterprise PSUs that I have seen are 12 VDC bulk power supplies that then feed an in-chassis 5 VDC and 3.3 VDC power supply stepping down the 12 VDC. Then again, your idea of enterprise and mine may differ. To me, enterprise = rack-mount gear. There might be a market for an adapter that could accept a rack-mount PSU and output a metric shit ton of PCIe 6-pin connectors. The problem with that, though, is you have to deal with 40mm fans. To be fair, that's how all high-efficiency PSUs work now: AC->DC 12V, then DC->DC 12V->whatever for everything else. In principle, but typically in the rack-mount servers that I've seen/worked with there are redundant 12 VDC PSUs feeding a single power distribution block. The principle is the same; the hardware layout is different.
|
|
|
|
DiabloD3
Legendary
Offline
Activity: 1162
Merit: 1000
DiabloMiner author
|
|
June 25, 2012, 02:08:50 AM |
|
Liking the power board. When it is available, will it be possible to just add those to our shipments if we request/pay for them?
Actually I was looking at the post before. Yes, that's very much the idea: you can do more or less what you want with the wiring. It's always difficult to get PCIe leads to wire up nicely in a rig; they're always the wrong length, and it's hard to split them nicely. Hopefully this board will be of use here. I imagine it will get used for other non-Cairnsmore uses as well. ATX PSUs are hard to beat in efficiency terms and high power at a reasonable cost.
The plan is that this is an initial version and we will follow up with one supporting Ethernet as well. That enhanced one won't be done for 2-3 months yet. Depends on how busy we are.
Currently your board is limited to the 24-pin and PCIe connectors; I want to know why. Every PSU still has 2-3 Molex 4-pin wires (I'm counting only wires, because other connectors sharing the same wire are useless). Why not maximize the available wires of a PSU? To save a few cents on the connector? Most power supplies can be maxed out on the PCIe and +12V EPS pins. I don't know why you're throwing a hissy fit over obsolete and bulky 4-pin Molex connectors. The better question is why the 8-pin EPS connectors aren't being utilized. It's not a hissy fit when there are other enterprise PSUs that we can use. The 4-pin Molex is still very popular outside of PCs. Most enterprise PSUs that I have seen are 12 VDC bulk power supplies that then feed an in-chassis 5 VDC and 3.3 VDC power supply stepping down the 12 VDC. Then again, your idea of enterprise and mine may differ. To me, enterprise = rack-mount gear. There might be a market for an adapter that could accept a rack-mount PSU and output a metric shit ton of PCIe 6-pin connectors. The problem with that, though, is you have to deal with 40mm fans. To be fair, that's how all high-efficiency PSUs work now: AC->DC 12V, then DC->DC 12V->whatever for everything else. In principle, but typically in the rack-mount servers that I've seen/worked with there are redundant 12 VDC PSUs feeding a single power distribution block. The principle is the same; the hardware layout is different. Well, the HW layout is still actually the same; it's just that the DC->DC VRMs are in the PSU housing instead of on the modules (which also may not be outputting 12V; some I've seen output 24V for high-wattage situations). It's kinda shitty when you realize that although your 12V is redundant, your 5V and 3.3V aren't.
|
|
|
|
yohan (OP)
|
|
June 25, 2012, 07:09:32 AM Last edit: June 25, 2012, 07:29:31 AM by yohan |
|
I believe it is correct that the EPS12V connectors have the positive and the negative swapped relative to a 6/8-pin PCIe connector, so be careful. Even if it fits, you might end up with reverse polarity, and that would be bad. So the 6-pin goes like this (with the connector latch on top): And I believe the EPS12V goes like this (also with the connector latch on top): But I don't have one next to me and so I can't check. Yes, they are opposite, and that is one of the most stupid things they did in the PSU spec. They could have been the same. The 2x4 connectors are differently polarised, but here's the worst bit: the 2x3 PCIe can plug into the EPS socket and it is totally the wrong way round, so lots of smoke and probably fire if you do that. This was a side reason for not putting the EPS on the PDB. They also didn't do a good job going to the 2x4 PCIe either, adding sense and not capacity over the 2x3. However, that is what we have to work with. It's common in general industry to have a Point Of Load structure where power is distributed at a higher voltage and lower current within a rack or rig. This makes for smaller copper wires and less distribution loss. Against that, your local regulation stages are not 100% efficient, so it's a balance between copper loss and POL regulator loss. The distribution voltage varies with the system and usually depends on the amount of power. In our extreme board Merrick1 we use 48V, but that needs special PSUs at the front end, and each board can use up to 1000W. 12V is a good choice for Cairnsmore1, although it can actually operate with a slightly higher distribution voltage if needed.
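To put a number on the copper-loss side of that balance, here is a minimal sketch; the 1 kW load and 10 milliohm round-trip cable resistance are assumed illustration values, not Merrick1 or Cairnsmore1 figures.
Code:
# I^2*R distribution loss for the same delivered power at two bus
# voltages, illustrating why higher distribution voltage allows
# smaller copper. ASSUMED values: 1 kW load, 10 milliohm cable.
P = 1000.0   # watts delivered to the load
R = 0.010    # ohms, round-trip cable resistance

for v in (12.0, 48.0):
    i = P / v            # line current
    loss = i * i * R     # copper loss in the cable
    print(f"{v:>4.0f} V bus: {i:5.1f} A, {loss:5.1f} W lost in copper")

# 12 V: ~83 A and ~69 W lost; 48 V: ~21 A and ~4.3 W lost. Against
# this saving you trade the efficiency of the extra POL regulator stage.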
|
|
|
|
Cablez
Legendary
Offline
Activity: 1400
Merit: 1000
I owe my soul to the Bitcoin code...
|
|
June 25, 2012, 12:39:35 PM |
|
Yes, they are opposite, and that is one of the most stupid things they did in the PSU spec. They could have been the same. The 2x4 connectors are differently polarised, but here's the worst bit: the 2x3 PCIe can plug into the EPS socket and it is totally the wrong way round, so lots of smoke and probably fire if you do that. This was a side reason for not putting the EPS on the PDB.
They also didn't do a good job going to the 2x4 PCIe either, adding sense and not capacity over the 2x3. However, that is what we have to work with.
I know, right!!! What the heck was Molex thinking with the Mini-Fit Jr? I know of one case where the PCIe/EPS issue did happen around here; luckily it turned out well.
|
Tired of substandard power distribution in your ASIC setup??? Chris' Custom Cablez will get you sorted out right! No job too hard, so PM me for a quote. Check my products or ask a question here: https://bitcointalk.org/index.php?topic=74397.0
|
|
|
rjk
Sr. Member
Offline
Activity: 448
Merit: 250
1ngldh
|
|
June 25, 2012, 01:06:44 PM |
|
12V is a good choice for Cairnsmore1, although it can actually operate with a slightly higher distribution voltage if needed.
I come prepared for all contingencies! How does 36 kW of 48 VDC sound? Keeping the estimate of 60 watts per board, that gives us 36,000 / 60 = 600 boards. 600 boards x $640 = $384,000.00.
|
|
|
|
yohan (OP)
|
|
June 25, 2012, 01:55:20 PM |
|
12V is a good choice for Cairnsmore1, although it can actually operate with a slightly higher distribution voltage if needed.
I come prepared for all contingencies! How does 36 kW of 48 VDC sound? Keeping the estimate of 60 watts per board, that gives us 36,000 / 60 = 600 boards. 600 boards x $640 = $384,000.00. That's some serious power, and bad if you short it. Although I do know of an incident where a spanner was dropped across a battery out of a submarine, and that was bad apparently.
|
|
|
|
ebereon
|
|
June 25, 2012, 06:32:59 PM |
|
Is anyone with an early board still not mining?
I'd be happy if even the 50 MH/s bitstream worked at this point...
The 100 MH/s bitstream is already on the unit. In the first post of this thread you can find the web page with everything to play with. The twin_test bitstream is not working properly on my unit (SN: 62-0015). It hashes at ~350 MH/s for only 2-4 hours, then stops. I'm back on the shipping bitstream atm.
|
|
|
|
ebereon
|
|
June 25, 2012, 06:53:02 PM |
|
Ebe, is this currently working for you?
It won't work for me. Still showing 0.00 on MPBM.
It is, but I think there must be some luck involved. Try it with cgminer 2.4.3 first, and if cgminer is working you can then switch to MPBM. If I can't get it working with these steps, I shut down the unit for 10 minutes; after that it works "better". I'm back on the shipping bitstream, which has been stable over several days for me. Better 100 MH/s overnight than nothing. Sometimes it works for 4 hours, but then it stops, the orange LED turns on, and I can't get it working again.
|
|
|
|
daemonic
Newbie
Offline
Activity: 49
Merit: 0
|
|
June 25, 2012, 07:01:45 PM |
|
Is anyone with an early board still not mining?
I'd be happy if even the 50 MH/s bitstream worked at this point...
The 100 MH/s bitstream is already on the unit. In the first post of this thread you can find the web page with everything to play with. The twin_test bitstream is not working properly on my unit (SN: 62-0015). It hashes at ~350 MH/s for only 2-4 hours, then stops. I'm back on the shipping bitstream atm. I have the same issue: twin_test will appear to mine at 350 MH/s for 2-4 hours and then the amber LED will start to show, making it look like it is waiting on work. What I have found, though, is that if you leave SW6-1 on rather than off (so SW1 and SW6 are all on) after flashing the twin_test, and configure the workers in MPBM to use 57600, then it connects fine and has mined for hours so far without issue (touch wood, fingers crossed, etc.), but only at 190 MH/s per FPGA (pair?). I know it's slower MH/s, but it's better than it dropping out while I'm asleep or something.
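If you want to sanity-check the 57600 baud link before pointing MPBM at it, here is a minimal sketch using pyserial; the port name is an assumption, so substitute whatever your board enumerates as (e.g. COM3 on Windows).
Code:
# Quick check that the serial port opens at the 57600 baud workaround
# described above. ASSUMPTION: the board appears as /dev/ttyUSB0.
# Requires pyserial (pip install pyserial).
import serial

port = serial.Serial("/dev/ttyUSB0", baudrate=57600, timeout=2)
print(f"opened {port.name} at {port.baudrate} baud")
port.close()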
|
|
|
|
ebereon
|
|
June 25, 2012, 07:24:35 PM |
|
I have the same issue: twin_test will appear to mine at 350 MH/s for 2-4 hours and then the amber LED will start to show, making it look like it is waiting on work. What I have found, though, is that if you leave SW6-1 on rather than off (so SW1 and SW6 are all on) after flashing the twin_test, and configure the workers in MPBM to use 57600, then it connects fine and has mined for hours so far without issue (touch wood, fingers crossed, etc.), but only at 190 MH/s per FPGA (pair?). I know it's slower MH/s, but it's better than it dropping out while I'm asleep or something. There is no bitstream that does more than 190 MH/s per pair; if you see more, it's a software problem. cgminer needs the --icarus-timing option to be set; the unit is giving the miner software the wrong parameters. You only flashed 1 FPGA per pair; the second is not working at the moment. The miner thinks it has 2 Icarus devices (380 MH/s per board), which is why it wrongly shows 380 MH/s per pair. This board can only hash 380 MH/s at the moment (2x190), not more. Check with your pool stats. But I will also try SW6 on, to see if that works for me in any way. Thanks!
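The arithmetic behind that explanation, as a tiny sketch: 190 MH/s per FPGA is the ceiling, so anything reported above (number of hashing FPGAs) x 190 is a reporting artifact.
Code:
# Sanity-check a reported hash rate against what the hardware can do,
# per the explanation above: 190 MH/s per FPGA, nothing more.
MHS_PER_FPGA = 190

def real_rate(working_fpgas: int) -> int:
    return working_fpgas * MHS_PER_FPGA

print(real_rate(2))  # 380 MH/s: both FPGAs of a pair hashing
print(real_rate(1))  # 190 MH/s: only one flashed; a higher figure in
                     # the miner is a reporting artifact, so verify
                     # against your pool's stats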
|
|
|
|
yohan (OP)
|
|
June 25, 2012, 08:08:32 PM |
|
We think we know what the problem is where performance drops after some time, usually on one FPGA. It's related to the comms: basically, messages coming back with results are being lost. It won't be anything more than a short-term issue and will be sorted out with our own bitstream.
|
|
|
|
ebereon
|
|
June 25, 2012, 08:09:12 PM |
|
I'm playing at the moment with different bitstreams. I wonder why the Icarus bitstream "190M_V4.bit" is the same as "twin_test.bit"? I did a binary compare and it's 99.9% the same. Only the date and the directory from which it was compiled are different (compare the first bytes in hex). @yohan: Is the "twin_test.bit" bitstream the same as the "190M_V4.bit" bitstream from Icarus? I thought your team had made some changes to it to get it working? If it is the same, and I think so after the binary compare, then read this: "190M for test. In this bitstream, the FPGA will continue working until nonce_to, even after finding a valid nonce." Found at https://github.com/ngzhang/Icarus/blob/master/Downloads/bitsteam/V4/ Then it makes sense that the board stops working after some time... Nevertheless, I'm testing the 190M_V3.bit; it looks more stable at the moment (0% invalids with both FPGAs so far), but I'll know for sure in some hours.
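For anyone wanting to repeat that comparison, here is a minimal sketch of the binary compare described above: it reports the fraction of identical bytes and hex-dumps the leading bytes, which is where a Xilinx .bit header carries the design name and build date/time. The file names are the ones mentioned above; adjust the paths as needed.
Code:
# Byte-level compare of two bitstreams, as described above.
import binascii

def compare(path_a: str, path_b: str, head: int = 64) -> None:
    a = open(path_a, "rb").read()
    b = open(path_b, "rb").read()
    n = min(len(a), len(b))
    same = sum(1 for x, y in zip(a, b) if x == y)
    print(f"{same / n:.1%} of {n} bytes identical")
    # The differing date/directory strings sit near the start of the file:
    print("A:", binascii.hexlify(a[:head]).decode())
    print("B:", binascii.hexlify(b[:head]).decode())

compare("190M_V4.bit", "twin_test.bit")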
|
|
|
|
|