Author Topic: Cairnsmore2 - What would you like?  (Read 11553 times)
yohan (OP)
Sr. Member
May 20, 2012, 10:55:31 AM  #1

Ok, I will start this one off by saying that we are looking at a range of FPGA technologies to base the product on. Not that any of you might think we would start putting GPUs in our products, although you never know. Some of our decisions will be based on how we do this week with more fully loading the Cairnsmore1.

We are probably thinking of a 19" rack as the basis for this product, and we already have some backplane designs from previous work that might be useful either directly or in adapted form. This also fits well with power supply availability and so on. It would also allow modular purchase of a system that is easy to upgrade and add to as time goes on.

One of our aims is to be a very competitive FPGA solution in the market for large-scale mining.

We have an initial individual card target of 4-5 GH/s+.

We are looking at our cooling technology, and we are testing a new idea this week in a different product that might get adopted into Cairnsmore2.

Timeline: we are likely to be limited by FPGA lead time, which is typically 6-8 weeks, so August-September is the likely initial availability of this system.

What we would like to hear from you guys is what you would like in interfaces: USB? Ethernet (100M/1G/10G)? Cabled PCIe?

And are there any particular features you think we need to include?

Yohan
Lethos
Sr. Member
May 20, 2012, 11:45:25 AM  #2

Good to hear you are already thinking bigger. So with a 19" rack design, are you going for a more standard motherboard-sized PCB with a Merrick1-like number of processors on board?

USB has some perks as an interface users already know. As long as a single USB connection could handle a larger board without any downsides, it would be ideal.
Otherwise, I would consider an Ethernet port if it helps eliminate the downsides of using a single USB link for a large-scale FPGA board. Modded routers, after all, are apparently starting to be used as a means to interface with FPGAs.
A bigger board would most likely be a problem for a PCIe slot, so I can't see that as a good choice, though it is often requested. If it were an appropriate size for a normal PCIe board it would be popular, but since you're planning to go bigger with the #2, I find it hard to believe it would be.

Have you got your own mining software for these yet, or are you working closely with people who can optimise for the Cairnsmore series?

DiabloD3
Legendary
DiabloMiner author
May 20, 2012, 12:38:35 PM  #3

Quote from: yohan on May 20, 2012, 10:55:31 AM

Well, IMO, all future boards from any manufacturer need two features. The first is a fan controller with enough fan headers to ramp fan speed and hold chip temperature constant; this prevents both fan failure (running fans at 100% load is generally bad, and even industrial fans often fail after 2 years) and chip failure (due to thermal cycling). The second is a software-programmable VRM, so the FPGAs can be underclocked and undervolted on demand; that way people can keep mining as the difficulty rises, extending the life of the hardware for another 2-3 years.

The only real request I have beyond those two mandatory features is 28nm on some mining-industry-agreed FPGA (it seems everyone is leaning towards the largest Artix-7). Continued 45nm usage seems to be dead; it just isn't cost-effective enough.
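
To make those two features concrete, here is a minimal sketch (in Python, for readability) of the kind of control loop being described. Every name in it (read_chip_temp, set_fan_pwm, set_core_voltage) and every threshold is a hypothetical placeholder, stubbed so the sketch runs; none of it is a real Cairnsmore or DiabloMiner interface.

Code:
# Sketch of a combined fan-ramp + undervolt-on-demand loop.
# All hardware hooks below are hypothetical stand-ins, stubbed so the
# sketch runs; a real board would talk to sensors, fan PWM and the VRM.
import random
import time

def read_chip_temp():
    """Stub: would read an on-die or board temperature sensor (deg C)."""
    return 55.0 + random.uniform(-5.0, 15.0)

def set_fan_pwm(duty_pct):
    """Stub: would write the fan controller's PWM duty register."""
    pass

def set_core_voltage(volts):
    """Stub: would program the (software-controllable) VRM."""
    pass

TARGET_TEMP = 60.0              # hold chip temperature roughly constant
FAN_MIN, FAN_MAX = 20.0, 95.0   # never pin the fans at 100% for long
V_NOM, V_MIN = 1.20, 0.95       # nominal core voltage and undervolt floor

def control_step(fan, vcore):
    error = read_chip_temp() - TARGET_TEMP
    # Proportional fan ramp: ~2% duty per degree above/below target.
    fan = min(FAN_MAX, max(FAN_MIN, fan + 2.0 * error))
    if fan >= FAN_MAX and error > 0 and vcore > V_MIN:
        vcore -= 0.01   # fans maxed, still hot: shed power by undervolting
    elif fan < 0.8 * FAN_MAX and vcore < V_NOM:
        vcore += 0.01   # thermal headroom recovered: step voltage back up
    set_fan_pwm(fan)
    set_core_voltage(round(vcore, 2))
    return fan, vcore

if __name__ == "__main__":
    fan, vcore = FAN_MIN, V_NOM
    for _ in range(5):
        fan, vcore = control_step(fan, vcore)
        time.sleep(0.1)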

Lethos
Sr. Member
May 20, 2012, 01:11:30 PM  #4

Diablo brings up very good points.
Moving FPGAs to a smaller process node would push the already huge gap in electricity costs versus normal GPUs even further.
It has also been shown that they overclock/undervolt better, so making the most of that, as Diablo said, would be best for the continued success of FPGA mining.
You've already got multiple power connectors there, maybe too many. Is it worth making a few versions, one with a Molex, one with a 6-pin, and so on? Instead of giving multiple options on one board, would using just one make it any more cost-effective and/or smaller?

yohan (OP)
Sr. Member
May 20, 2012, 01:25:25 PM  #5

Let's start by clarifying that it won't be a single big board. Those are actually expensive to make, and there is no logic in this design that needs that approach. The architecture is more of a controller card with processing cards linked by a backplane that wires it all together. The backplane should let us have 12 working boards, maybe up to 19, in one run, depending on design decisions.

Similarly, PCIe is a bit of overkill for this one internally, but it could be used to link a rack to a PC, or several levels of rack. We might do a PCIe card, but that is a different project.

28nm may not be viable at the moment, especially Artix, which is probably 6 months away. 28nm may also be expensive initially. With what we are doing on bitstreams, and with partner offerings, the FPGA type might be relatively irrelevant. It's more about the system cost than anything. 45nm may still be the best option today, but that is one of the things we are looking at.

The voltage used for whatever FPGA we choose is being looked at. We might put in a VRM, but that is probably more complicated than necessary and has its own cost. There are other ways to do this.

Historically, fans have been a reliability issue, and we might put in monitoring or PWM, but these features have a cost of their own, both in materials and electricity. That needs to be considered given that the fans we are currently using on Cairnsmore1 have a 100K+ hour lifetime (11-12+ years of continuous running) and a 6-year warranty. The floating bearings that have appeared in the last few years have a lot to do with this reliability, and an alternative approach might be a planned fan-replacement maintenance schedule.

A fan tray may be the way we do cooling for this design, and that would have whatever fan headers are needed. We have some other ideas here as well; more on those when we have thought them through a little better and can see whether they are viable.

Power-wise, this will take power from the backplane, and there won't be a choice there. The processing card isn't a replacement for Cairnsmore1 but for the bigger rack market. Different backplanes are a possibility, including a mini setup, maybe with a cut-down number of slots, but that is for later, after we get the big solution out in the wild.

Yohan
DiabloD3
Legendary
May 20, 2012, 04:02:51 PM  #6

Quote from: yohan on May 20, 2012, 01:25:25 PM


Using PCI-E connectors for the backplane (for the connector only, obviously not electrically) isn't a bad idea. But how do you plan to secure all those boards inside one box so they can't be accidentally disconnected inside the case?

yohan (OP)
Sr. Member
May 20, 2012, 04:20:54 PM (last edit: May 20, 2012, 04:36:17 PM)  #7

We are not likely to use PCIe in this way, but it is a possibility. For a quick picture, have a look at http://www.schroff.co.uk/internet/html_e/index.html. So what we are talking about is a rack with card guides and a backplane at the back of the rack. The cards slide in and connect to the backplane. Another example: http://uk.kontron.com/products/systems+and+platforms/microtca+integrated+platforms/om6060.html.

The backplane standard could be an industry standard or just something we do to suit the purpose. The main thing is that the metalwork and supporting guides are standard items. With a full-height 19" rack we might be looking at fitting 4-8 sub-racks, depending on the height we adopt for the sub-rack and what we have in power supplies. So we might be able to do an entire rack with 0.5-1 TH/s, if I can add up correctly. Then it is just a case of adding racks. In data centres you might find hundreds of these sorts of racks.
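
A quick back-of-envelope check on that, combining the opening post's 4-5 GH/s per-card target with the 12-19 cards per sub-rack and 4-8 sub-racks per rack quoted in this thread:

Code:
# Back-of-envelope rack throughput from the figures quoted in-thread.
ghs_per_card = (4, 5)          # opening-post target per processing card
cards_per_subrack = (12, 19)   # backplane slots, per post #5
subracks_per_rack = (4, 8)     # full-height 19" rack, per this post

low = ghs_per_card[0] * cards_per_subrack[0] * subracks_per_rack[0]
high = ghs_per_card[1] * cards_per_subrack[1] * subracks_per_rack[1]
print(f"~{low}-{high} GH/s per rack")  # ~192-760 GH/s

That puts a full rack at roughly 0.2-0.76 TH/s, so the 0.5-1 TH/s figure assumes the denser end of every range.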
DiabloD3
Legendary
May 20, 2012, 04:23:34 PM  #8

Quote from: yohan on May 20, 2012, 04:20:54 PM

I think those URLs are not the URLs you meant.

Lethos
Sr. Member
May 20, 2012, 04:45:59 PM  #9

Quote from: yohan on May 20, 2012, 04:20:54 PM

So rather than a single big board, it's more like its own dedicated 3-4U rack rig, with multiple boards contained in its own chassis?
I'm interested.

yohan (OP)
Sr. Member
May 20, 2012, 04:52:54 PM  #10

Yes, that is very much the concept. It also allows the processing cards to be added to, or even replaced with newer, better ones, some way down the line.
Lethos
Sr. Member
May 20, 2012, 05:14:00 PM  #11

Then an Ethernet connection makes much more sense for something that will be a 3-4U rig.

For comparison's sake, this would be similar to the mini-rig by butterflylabs (sorry, someone had to say it eventually).
I personally would never consider going with butterflylabs, mostly since it's not UK-based; I like to deal with local merchants when I'm buying expensive equipment like that. What sort of price bracket could one expect something like this to fall into? Similar, more or less?

A rig like that would be a big investment, so there is a dual purpose in having a bit more freedom when it comes to tweaking it for maximum performance and efficiency, and maybe finding a secondary purpose for it in case bitcoin mining doesn't work out for us a year or two from now. What options are available to you as a hardware engineer?
If a VRM isn't an ideal choice, what else is there? You hinted there may be other options?

funnow
Full Member
May 20, 2012, 05:26:45 PM  #12

Maybe a good idea for a backplane: http://www.chassis-plans.com/single-board-computer/S6806-backplane.htm
There is also a rack available for this backplane.
simon66
Sr. Member
May 20, 2012, 05:44:12 PM  #13

Something that would be freaking amazing is an Ethernet port or a WiFi chip built in. Doing so would eliminate the need for a computer: the board can hash the given work (I don't know exactly what kind of info the pool sends to the miner), do its job, and send the result back to the pool.

A port for a little LCD would be nice.

Something like this:

[image of a small character LCD module]

Where it shows something like this:
Quote
Connected FPGAs: 5
FPGAs Working: 4 <---- Maybe one is off or there is something wrong. Else, it would be 5

Current work speed: 3.2 GH/s <---- it can be MH/s if 1 board is connected. Easy logic here.

Accepted Shares: 430
DOA Shares: 10

If you do choose to add a wifi chip you can add

Quote
Connected via: Wifi (Dlink) <---- if you have Ethernet then it will say Ethernet

What do you guys think?
This will sell like water, lol.
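
For illustration, a minimal sketch of how a controller might render that status onto a common 20x4 character LCD. get_status() and its field names are invented placeholders, not anything the Cairnsmore hardware actually exposes.

Code:
# Sketch: format mining status for a 20x4 character LCD.
# get_status() is a placeholder for whatever the controller reports.
def get_status():
    return {"connected": 5, "working": 4, "ghs": 3.2,
            "accepted": 430, "doa": 10, "link": "Ethernet"}

def lcd_lines(s):
    total = s["accepted"] + s["doa"]
    return [
        f"FPGA {s['working']}/{s['connected']}  {s['link']}",
        f"Speed: {s['ghs']:.1f} GH/s",
        f"Acc:{s['accepted']} DOA:{s['doa']}",
        f"Good: {100.0 * s['accepted'] / total:.1f}%",
    ]

# Each LCD row is only 20 characters wide, so truncate to fit.
for row in lcd_lines(get_status()):
    print(row[:20])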
kokjo
Legendary
May 20, 2012, 05:51:45 PM  #14

sup

"The whole problem with the world is that fools and fanatics are always so certain of themselves and wiser people so full of doubts." -Bertrand Russell
ionbasa
Newbie
May 20, 2012, 07:00:01 PM  #15

Ethernet would definitely be a good investment on the backplane. If using Ethernet, it would be great if the backplane handled all of its own operations without a host of some sort (e.g. it doesn't need to be controlled from a PC). If that were the case, I would assume you guys would be using an ARM chip, a cut-down version of Linux, and some cheap flash memory.
yohan (OP)
Sr. Member
May 20, 2012, 07:47:07 PM  #16

We are looking to make this rack stand-alone, and yes, it is up against Butterfly's larger products. We would like to make this run free of a host PC. This is very much a big-system concept. I hope we do a better product than the competitors, but time will tell on that point. It's very unlikely we will do a PCIe backplane for this, although we are looking at those for our general HPC products. Bought-in industrial backplanes are usually very expensive, so it's unlikely we would buy one in. However, we can use one of the standard ones we have designed, or even a derivative of one of them. The cost is quite reasonable doing it this way.

I would forget any secondary value on any sort of mining kit. If you are banking on that, your equations will be wrong. FPGA families are replaced on average every 2 years, and the old family will have limited value even as bare chips, never mind in a system that has either to be reused or have its silicon recovered. GPUs are even worse for this. Try selling a 2-3 year old GPU: it might have cost £500, but in 2-3 years you can buy a brand-new board of equivalent performance, usually for less than £100. Second-hand, maybe it goes for £30. I for one would not want to buy an ex-mining GPU given the stress put on them, but of course most people don't mention that on eBay.
Lethos
Sr. Member
May 20, 2012, 08:11:21 PM  #17

Quote from: yohan on May 20, 2012, 07:47:07 PM

Sounds like a good plan. Not having to put a computer together as a host for it would certainly lower the overall costs for us, assuming an existing one couldn't be used.

I would not try to resell an old mining GPU or rig; that wouldn't be fair to the poor sod who got it after 2 or 3 years of abuse. Old computers in my house nearly always end up with a secondary purpose once they have outlived their usefulness, at least until they completely fail.

Re-purposing would not be its primary benefit; it's just one of the reasons why having a modifiable system has its advantages. As a programmer and designer I would have a use for a GPU farm if I had the tools to adapt its purpose, admittedly not a huge one. However, I do a fair amount of experimental software in my programming; that's what interested me about bitcoin.
My bigger reason is modding it for extra performance from a new or tweaked BIOS. I'm just late to the party, as such, so still learning.
I don't want to take this off-topic with my plans, though. I know FPGAs all hold that risk; I just know that FPGAs which have allowed modifications have seen tweaks that increased performance by 10-20%, which is worth considering as a good selling point.

rjk
Sr. Member
May 20, 2012, 08:17:53 PM  #18

This is awesome, I love the way you guys think.

If you were going to consider a PCIe-based modular system, note that PICMG 1.3 is already an industry standard, and it seems to me that it would suit. It would allow modules to be swapped out when future performance increases come along. You could cut costs by using a single cheap PCIe switch/fanout chip and splitting the lanes so that each slot gets only x1 electrical connectivity. That is plenty for this application, and would even work for some video and GPU-compute applications, such as regular video-card-based bitcoin mining.

There are cases and power supplies already designed around the standard too, so that is an advantage.

Gomeler
Hero Member
May 20, 2012, 09:22:40 PM  #19

Something modular with a buy-in price that isn't $10-20k USD. A $500 chassis and $1-2k blades would make high density but gradual expansion possible for the small guys. Big guys can just buy a chassis with all the blades populated.

Ethernet plus simple configuration via USB. Controlling software to set the IP address and mining information, and an HTTP server that displays the health/output of the chassis. Perhaps consider a small embedded Linux control system so advanced users can SSH in to poke around and run custom scripts.
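
As a sketch of how little that health page could take on an embedded Linux controller, assuming Python is available on the board; read_chassis_stats() and its fields are hard-coded placeholders, not a real chassis interface.

Code:
# Minimal HTTP status endpoint for an embedded Linux chassis controller.
# read_chassis_stats() is a hard-coded placeholder for real firmware data.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def read_chassis_stats():
    return {"blades": 8, "hashrate_ghs": 36.4,
            "accepted": 12034, "rejected": 87, "temp_c": 58.5}

class StatusHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Serve the chassis health as JSON on every GET request.
        body = json.dumps(read_chassis_stats()).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # e.g. curl http://<chassis-ip>:8080/ to poll health remotely
    HTTPServer(("0.0.0.0", 8080), StatusHandler).serve_forever()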
DILLIGAF
Full Member
May 21, 2012, 01:03:03 AM  #20

Quote from: Gomeler on May 20, 2012, 09:22:40 PM

This, with a definite yes on the embedded Linux, so you could even run your own miner, different from the one shipped, if you wanted.