TomKeddie
May 29, 2013, 11:29:55 PM
I guess I'm just concerned about how I would configure and power all those 16 boards. That's a lot of USB devices.
Not sure about power, but I've got 32 USB serial ports hanging off an Atom-based Linux box, one Python script per serial port, and it works really well (at only 300 MH/s per port). The Raspberry Pi choked at 16 ports ;-)
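For anyone wondering what "one Python script per serial port" might look like, here is a minimal sketch using the pyserial library. The device path, baud rate, and read-and-log loop are illustrative assumptions, not TomKeddie's actual script.

```python
# Minimal per-port worker sketch (assumption: pyserial is installed and each
# board shows up as /dev/ttyUSBn; baud rate and protocol are placeholders).
import sys
import serial  # pyserial

def watch_port(device, baud=115200):
    # Open the port with a timeout so the read loop never blocks forever.
    with serial.Serial(device, baudrate=baud, timeout=5) as port:
        while True:
            line = port.readline()  # returns b"" on timeout
            if line:
                print(f"{device}: {line.decode(errors='replace').strip()}")

if __name__ == "__main__":
    # Launch one instance per port, e.g.:  python worker.py /dev/ttyUSB0
    watch_port(sys.argv[1])
```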
Lollaskates
May 30, 2013, 12:37:49 AM Last edit: May 30, 2013, 12:56:32 AM by Lollaskates
I guess I'm just concerned about how I would configure and power all those 16 boards. That's a lot of USB devices.
Not sure about power, but I've got 32 USB serial ports hanging off an Atom-based Linux box, one Python script per serial port, and it works really well (at only 300 MH/s per port). The Raspberry Pi choked at 16 ports ;-)

Choked at 16 ports, or was 16 the limit? I'll be getting 16 boards total and I'm curious how I should split them up. I'm thinking the Raspberry Pi is the cheapest way to go about it, but how does one actually power these boards? Also, would the Rev B (with 512 MB RAM) be necessary, or would the Rev A be good enough? We splice the PCI-E power connectors, but how do we actually trigger the PSU to come on?
steamboat (OP)
May 30, 2013, 01:17:57 AM
I guess I'm just concerned about how I would configure and power all those 16 boards. That's a lot of USB devices.
Not sure about power, but I've got 32 USB serial ports hanging off an Atom-based Linux box, one Python script per serial port, and it works really well (at only 300 MH/s per port). The Raspberry Pi choked at 16 ports ;-)
Choked at 16 ports, or was 16 the limit? I'll be getting 16 boards total and I'm curious how I should split them up. I'm thinking the Raspberry Pi is the cheapest way to go about it, but how does one actually power these boards? Also, would the Rev B (with 512 MB RAM) be necessary, or would the Rev A be good enough? We splice the PCI-E power connectors, but how do we actually trigger the PSU to come on?

Powering them with an ATX PSU is the most efficient way to go. If you aren't connecting the PSU to a host, the easiest way is to short the green power-sense wire to any of the black grounds. Easy link
mjmvisser
Newbie
Offline
Activity: 58
Merit: 0
May 30, 2013, 02:07:29 AM
I'm thinking the Raspberry Pi is the cheapest way to go about it, but how does one actually power these boards? Also, would the Rev B (with 512 MB RAM) be necessary, or would the Rev A be good enough? We splice the PCI-E power connectors, but how do we actually trigger the PSU to come on?
The Rev A Pi doesn't have on-board networking, so unless you want to mess around with a USB network dongle, the Rev B is your best bet. As for power, I've been doing back-of-the-napkin calculations at 32 W per board: 8 boards is 256 W, and you can buy a 400 W power supply for $20. You can force a PSU on with a carefully placed paperclip; just google "manually turn on ATX PSU". Cheers
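To make the back-of-the-napkin math explicit, here is a quick sketch; the 32 W per board figure is from this post, and the ~20% headroom margin is borrowed from the budgeting discussion later in the thread.

```python
# Rough PSU sizing check; the figures are the thread's estimates, not measurements.
WATTS_PER_BOARD = 32
PSU_RATING_W = 400
MAX_LOAD_FRACTION = 0.8  # leave ~20% headroom

boards = 8
load_w = boards * WATTS_PER_BOARD            # 8 * 32 = 256 W
budget_w = PSU_RATING_W * MAX_LOAD_FRACTION  # 320 W usable
status = "fits" if load_w <= budget_w else "does not fit"
print(f"{boards} boards draw ~{load_w} W, which {status} in a ~{budget_w:.0f} W budget")
```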
ik2013
May 30, 2013, 03:09:52 AM
I'm thinking the Raspberry Pi is the cheapest way to go about it, but how does one actually power these boards? Also, would the Rev B (with 512 MB RAM) be necessary, or would the Rev A be good enough? We splice the PCI-E power connectors, but how do we actually trigger the PSU to come on?
The Rev A Pi doesn't have on-board networking, so unless you want to mess around with a USB network dongle, the Rev B is your best bet. As for power, I've been doing back-of-the-napkin calculations at 32 W per board: 8 boards is 256 W, and you can buy a 400 W power supply for $20. You can force a PSU on with a carefully placed paperclip; just google "manually turn on ATX PSU". Cheers

Personally, I would go with the Model B RPi for the increased RAM and on-board network; it is only a few dollars more. I am currently working on a solution to enable ACPI-like shutdown on the RPi and should have something to show in a few weeks, once the board is finalized and in my hands. You could turn on the PSU with the RPi, but that would require a separate power brick to keep the RPi running. My solution uses an Arduino clone running on the 5VSB rail (always on) to watch for a press of the case switch; once that happens, it pulls pin 16 (PS_ON) on the ATX connector low to turn on the supply. Sounds easy enough (famous last words?). Stay tuned.

EDIT: Now that I think about it, you might be able to power the RPi from the 5VSB rail just like I am planning to do with the Arduino... the PSU I am looking at gives 3 A on that rail...
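For the Pi-only variant ik2013 floats in the edit (Pi powered from 5VSB, with a Pi GPIO standing in for the Arduino), here is a rough sketch of the switch-watching logic using the RPi.GPIO library. The pin numbers are placeholders, and you would want a transistor or similar open-collector stage between the 3.3 V GPIO and the PS_ON line rather than a direct connection; this illustrates the idea, not ik2013's actual design.

```python
# Sketch: a Raspberry Pi standing in for the Arduino described above.
# Assumptions (not a tested design): the Pi runs from the 5VSB rail, a momentary
# case switch connects one GPIO to ground, and another GPIO drives a transistor
# that pulls the ATX PS_ON line (pin 16) low. Pin numbers are placeholders.
import time
import RPi.GPIO as GPIO

SWITCH_PIN = 17  # case power switch, active low (internal pull-up)
PS_ON_PIN = 27   # drives the transistor that grounds PS_ON

GPIO.setmode(GPIO.BCM)
GPIO.setup(SWITCH_PIN, GPIO.IN, pull_up_down=GPIO.PUD_UP)
GPIO.setup(PS_ON_PIN, GPIO.OUT, initial=GPIO.LOW)  # transistor off -> PSU off

psu_on = False
try:
    while True:
        if GPIO.input(SWITCH_PIN) == GPIO.LOW:  # switch pressed
            psu_on = not psu_on
            GPIO.output(PS_ON_PIN, GPIO.HIGH if psu_on else GPIO.LOW)
            time.sleep(0.5)  # crude debounce / ignore a held switch
        time.sleep(0.05)
finally:
    GPIO.cleanup()
```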
steamboat (OP)
May 30, 2013, 04:45:37 AM
I'd be curious to know the maximum number of boards that can be connected as well. So far, netbooks seem like the best solution for me, as they come with a built-in battery backup and I can run a large number from each host.
Reckman
May 30, 2013, 04:47:21 AM
I'd be curious to know the maximum number of boards that can be connected as well. So far, netbooks seem like the best solution for me, as they come with a built-in battery backup and I can run a large number from each host.
Most people say 127 per USB bus; I think it has to do with how device addresses are assigned (USB uses a 7-bit device address, which is where the 127 limit comes from).
ik2013
May 30, 2013, 05:06:19 AM
I'd be curious to know the maximum number of boards that can be connected as well. So far, netbooks seem like the best solution for me, as they come with a built-in battery backup and I can run a large number from each host.
Most people say 127 per USB bus; I think it has to do with how device addresses are assigned (USB uses a 7-bit device address, which is where the 127 limit comes from).

Yes, the USB spec is 127 devices. I think the poster earlier who maxed out at 10 may have had a different issue besides how many devices the RPi could enumerate... more likely a RAM issue. Is it safe to assume he was talking about those K1-type devices? As I understand it, the Klondike boards, when properly tiled, will enumerate as one device. The K1-type would enumerate each board as its own device, so the RPi was having a tough time keeping up with 10 at once. From the info available now, the K16 should not suffer the same issue. PLEASE correct me if I'm wrong; I'm putting a lot of faith in the RPi right now to get the job done, lol.
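As a quick sanity check on how many boards a host has actually enumerated, pyserial can list the serial devices it sees. A small sketch, assuming the boards show up as USB serial ports (as the K1-style devices do):

```python
# List the USB serial devices the host has enumerated.
# Assumption: each miner board appears as its own serial port (K1-style).
from serial.tools import list_ports  # ships with pyserial

ports = sorted(list_ports.comports(), key=lambda p: p.device)
for p in ports:
    print(f"{p.device}  {p.description}")
print(f"{len(ports)} serial device(s) enumerated")
```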
steamboat (OP)
May 30, 2013, 07:41:45 AM
I'd be curious to know the maximum number of boards that can be connected as well. So far, netbooks seem like the best solution for me, as they come with a built-in battery backup and I can run a large number from each host.
Most people say 127 per USB bus; I think it has to do with how device addresses are assigned (USB uses a 7-bit device address, which is where the 127 limit comes from).
Yes, the USB spec is 127 devices. I think the poster earlier who maxed out at 10 may have had a different issue besides how many devices the RPi could enumerate... more likely a RAM issue. Is it safe to assume he was talking about those K1-type devices? As I understand it, the Klondike boards, when properly tiled, will enumerate as one device. The K1-type would enumerate each board as its own device, so the RPi was having a tough time keeping up with 10 at once. From the info available now, the K16 should not suffer the same issue. PLEASE correct me if I'm wrong; I'm putting a lot of faith in the RPi right now to get the job done, lol.

I don't think the USB spec is going to be the bottleneck; as you said, I suspect it will be a RAM/processor problem.

All purchases received, recorded, and replied to.
LaserHorse
May 30, 2013, 10:11:51 AM
Chip amount: 20
Payment amount: 1.72
Sending Address: 1LJ94uW7dQeUJJ5MBn7ej7sRPbdugYaYo1
TX ID: e59b477c245f317fb7429d6aa95cbe2e7dfc6a3ca3498fe71d53ed9e40bde057
Lollaskates
May 30, 2013, 04:20:20 PM
Okay, so I've been trying to work out the most feasible way to power, say, 16 boards on a single 750 W PSU (estimating ~35 W per board, for 560 W total and leaving ~20% for breathing room).

A single PCI-E 6-pin plug can handle 75 W and 8-pin plugs can handle 150 W. The 6+2-pin connectors can handle 150 W, but it's unclear how they are rated (gauge): if the +2 is daisy-chained off the 6, then the 6 is rated for 150 W; if the +2 has its own dedicated wires, then you're still limited to 75 W per connector (on the 6-pin). To be safe, let's assume we can split the 75 W into two 6-pin connectors, allowing 2 boards per connector.

Let's use this PSU: http://www.newegg.com/Product/Product.aspx?Item=N82E16817139006
8 x Peripheral (Molex)
8 x SATA
4 x PCI-E (6+2)

The PCI-E connectors alone net us 4 connections; split each one and that's 8 boards. If they are wired to support 150 W each, then that would cover all 16 boards, but the caveat is that you'd need to make sure your initial splits can handle 150 W, or it won't work. Again, let's be safe and stick to 75 W split two ways, so 8 boards.

The best way to connect the rest? Each Molex chain can handle 11 A @ 12 V, for about 132 W per chain (3 boards effective). SATA power is 9 A @ 12 V, for 108 W per chain (3 boards max, 2 boards effective). It seems we can safely attach the remaining 8 boards via Molex-to-PCI-E 6-pin adapters, but that is likely 4 Molex connectors per chain, so two chains (really 6 boards). We'd need to use one of the SATA power chains to get our last 2 boards up and running. After all is said and done, we're using an estimated ~46.66 A of the available 62 A on the 12 V rail.

That leaves a grand total of:
4x PCI-E splitters [8 boards] ( http://www.newegg.com/Product/Product.aspx?Item=N82E16812198016 - $4.31 ea.)
3x (2x)Molex -> (2x)PCI-E [6 boards] ( http://www.newegg.com/Product/Product.aspx?Item=N82E16812198022 - $5.27 ea.)
2x SATA power -> PCI-E [2 boards] ( http://www.amazon.com/Monoprice-8-inch-15pin-Express-Power/dp/B009GUP6O0/ref=cm_cr_pr_product_top - $2.80 ea.)
----------------
$99.99 PSU (after MIR)
$38.65 Adapters
--------
$138.64 Total

Not bad for the cost to power these guys. Please critique this freely; I'm going to be implementing this and I'd hate to be wrong.
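Here is the budget above worked through as a short script. The per-board wattage, connector ratings, and board allocation are taken straight from the post; treat it as a sketch of the arithmetic, not an electrical sign-off.

```python
# Worked version of the budget above; ratings and allocation are the post's
# own (conservative) assumptions, not manufacturer specifications.
WATTS_PER_BOARD = 35
PSU_12V_AMPS = 62

ALLOCATION = {  # power source -> boards assigned in the post
    "PCI-E 6-pin split two ways (4 connectors)": 8,
    "Molex -> PCI-E chains (2 chains)": 6,
    "SATA power -> PCI-E chain (1 chain)": 2,
}

total_boards = sum(ALLOCATION.values())
total_watts = total_boards * WATTS_PER_BOARD
amps_12v = total_watts / 12

for source, boards in ALLOCATION.items():
    print(f"{source}: {boards} boards, ~{boards * WATTS_PER_BOARD} W")
print(f"Total: {total_boards} boards, ~{total_watts} W, "
      f"~{amps_12v:.2f} A of {PSU_12V_AMPS} A available on the 12 V rail")
```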
503guy
May 30, 2013, 05:53:55 PM
I have 50 chips from another group buy that will be assembled into working miners. Would you accept other boards for your hosting service (they won't be Klondike boards)?
wrenchmonkey
May 30, 2013, 06:44:24 PM
Okay, so I've been trying to work out the most feasible way to power, say, 16 boards on a single 750 W PSU (estimating ~35 W per board, for 560 W total and leaving ~20% for breathing room). [...] Please critique this freely; I'm going to be implementing this and I'd hate to be wrong.

I'm thinking you might be ahead to just run a couple of beefed-up lines straight off the PSU board, rather than messing around with the existing wiring. You could run a pair of 12-gauge lines, which is acceptable for chassis wiring up to 41 A; at 12 V, that gives you 984 W in total between the two lines. You're going to be overhauling a lot of connections anyway, so IMHO you're better off eliminating everything you don't need. You could even remove the ATX header, short the appropriate power-sense wiring, and run the PSU off the main power switch on the back of the unit. It'd be a mighty clean little setup. This is most likely the route I'll be taking. I'll only have a 10-board cluster starting out, so I'll easily be able to get away with the 450 W PSUs I already have on hand.

If your PSU doesn't have a main switch for some inexplicable reason (or even if it does, and you just want to get extra fancy), you could put in a toggle switch across the power-sense circuit. Hell, you could get REALLY bold and wire it in with a relay or opto-isolators and remotely control it for doing a power cycle, if you needed to do a reset... You could get pretty out of control with it, if you wanted.
cardcomm
May 30, 2013, 07:03:36 PM
Email and payment sent for an additional 15 chips to round out a previous order in this same batch.
I'm planning to make use of the board assembly option.
Thanks again!
newtothescene
May 30, 2013, 07:18:56 PM
This may have already been asked and answered, so my apologies if it has; I did read the original post, but may have overlooked it. For batch 4, if we order today, roughly how many weeks or months would be a good estimate for when the assembled product would ship? I see from other similar projects that a healthy lead time of 7-10 weeks is normal. I wanted to ask and confirm before moving forward with anything.
Thanks!
rocks
Legendary
Offline
Activity: 1153
Merit: 1000
May 30, 2013, 07:24:19 PM
I'm thinking you might be ahead to just run a couple of beefed-up lines straight off the PSU board, rather than messing around with the existing wiring. You could run a pair of 12-gauge lines, which is acceptable for chassis wiring up to 41 A; at 12 V, that gives you 984 W in total between the two lines.
This sounds interesting, although I'm not familiar with this approach. If you could describe the details of how to implement this setup, that would be great, i.e. what cabling and connectors are needed? Thanks.
ewitte
Member
Offline
Activity: 98
Merit: 10
May 30, 2013, 07:30:40 PM
Sent BTC for 16 more chips and an email with the transaction information - total of 34 chips - two boards.
wrenchmonkey
May 30, 2013, 07:58:25 PM
I'm thinking you might be ahead to just run a couple of beefed-up lines straight off the PSU board, rather than messing around with the existing wiring. You could run a pair of 12-gauge lines, which is acceptable for chassis wiring up to 41 A; at 12 V, that gives you 984 W in total between the two lines.
This sounds interesting, although I'm not familiar with this approach. If you could describe the details of how to implement this setup, that would be great, i.e. what cabling and connectors are needed? Thanks.

There are so many different possibilities for how you'd implement the wiring, depending on your skills, time, etc. The simplest method would probably be to cut and splice with a bunch of these: http://www.ebay.com/itm/10-PCI-E-PCIe-6-pin-video-card-power-connector-cable-/280394200360?pt=US_Power_Cables_Connectors&hash=item4148cbf528 The wiring up to the point where they branch off should be 12 gauge, but after that each individual connector is obviously wired sufficiently for its own power draw.

Of course, if you're looking for a cleaner look, with a bit more effort, these: http://www.ebay.com/itm/PC-PCI-Express-8-6-2-pin-Power-Connector-Female-x-2-pcs-/261188499328?pt=LH_DefaultDomain_0&hash=item3cd00c3780 and these: http://www.ebay.com/itm/Male-Female-ATX-PCI-E-EPS-CPU-Connector-Pins-PC-Computer-PSU-Power-Supply-/121117721214?pt=LH_DefaultDomain_0&hash=item1c332dc27e

Another option would be to build a power distribution block and just run all of your PCIe connectors off of that, from one central location. It's really up to you. Just make sure you're using a power supply with a single 12 V rail, make sure you use sufficient wiring (two 12-gauge lines should be sufficient), and do whatever works best with your particular setup/layout.

*Note: I'm not responsible if you burn down your house or place of business. None of this should be taken as advice, or even taken seriously. This is just the opinion of the author. I'm not responsible for you misunderstanding what I'm saying (or understanding and following it). You're a big kid; do your research. This is just a spitballin' idea that happens to also be the route I intend to go.
ewitte
Member
Offline
Activity: 98
Merit: 10
May 30, 2013, 08:02:18 PM
For those of us running many, many video cards, 32 W per board is nothing ("many" being subjective).