Author Topic: Minimalist Spartan6-LX150 board  (Read 49901 times)
fizzisist
Hero Member
Activity: 720
Merit: 525
November 04, 2011, 03:40:10 AM
 #81

You know FPGA mining is becoming legit when 2-3 vendors are trying to snipe customers from each other's threads.  Roll Eyes

Haha, very true! Catfish, the truth is that all of the FPGA mining products you see here can be run by anyone who has managed to mine on a GPU. In fact, I think they are even easier to use (less complicated driver installs, overclocking, fan speeds, etc., and no need to even open up your tower to install it).

rph
Full Member
Activity: 176
Merit: 100
November 04, 2011, 08:39:22 AM
 #82

Haha, very true! Catfish, the truth is that all of the FPGA mining products you see here can be run by anyone who has managed to mine on a GPU. In fact, I think they are even easier to use (less complicated driver installs, overclocking, fan speeds, etc., and no need to even open up your tower to install it).

Yeah, I'd agree with that. I just hope you guys don't sell so many that the difficulty becomes driven by FPGAs instead of GPUs.
Create some OPEC-style quotas or something..

-rph

Ultra-Low-Cost DIY FPGA Miner: https://bitcointalk.org/index.php?topic=44891
DeathAndTaxes
Donator
Legendary
Activity: 1218
Merit: 1079
Gerald Davis
November 04, 2011, 12:55:45 PM
 #83

Haha, very true! Catfish, the truth is that all of the FPGA mining products you see here can be run by anyone who has managed to mine on a GPU. In fact, I think they are even easier to use (less complicated driver installs, overclocking, fan speeds, etc., and no need to even open up your tower to install it).

Yeah, I'd agree with that. I just hope you guys don't sell so many that the difficulty becomes driven by FPGAs instead of GPUs.
Create some OPEC-style quotas or something..

-rph


Well, given that FPGAs have a long hardware payoff period, I don't see FPGAs putting much downward pressure on prices.

It will, however, put a floor under hashing power.  GPUs are very electricity-dependent: $3 per BTC at current difficulty translates into a break-even electricity price of roughly $0.15 per kWh on even the most efficient GPUs.  Thus people whose electricity price is above break-even tend to quit, pushing hashing power and difficulty down.
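
(A rough back-of-the-envelope sketch of that break-even figure.  The ~2 MH/s-per-watt efficiency, 50 BTC block reward, and ~1.1M difficulty are my own illustrative assumptions, not exact figures:)

Code:
# Break-even electricity price for a GPU miner -- illustrative numbers only.
def breakeven_usd_per_kwh(btc_usd, difficulty, mh_per_watt, block_reward=50.0):
    hashes_per_btc = difficulty * 2**32 / block_reward    # expected hashes per BTC earned
    hashes_per_kwh = mh_per_watt * 1e6 * 3600 * 1000      # MH/s per watt -> hashes per kWh
    return hashes_per_kwh / hashes_per_btc * btc_usd

print(breakeven_usd_per_kwh(3.0, 1.1e6, 2.0))   # ~$0.23/kWh, same ballpark as above depending on assumed efficiency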

FPGAs, however, once bought are a sunk cost, with an electricity cost of <$0.50 per BTC, meaning they will likely continue to run no matter what.  The FPGA portion of hashing power already purchased (currently ~0%) will be "immune" to price changes.  What that means is that as the FPGA portion grows, the relationship between price/difficulty and hashing power will become less linear.

Even if prices spike I don't see a massive rush to buy FPGAs, but rather a slow, continual rollout.  The long hardware payback period will make miners more cautious.  As an example, when BTC prices hit $30 at 1.5M difficulty it became a no-brainer to buy more GPUs, unsustainable as that was: the payback period was about 40 days, so if you mined for 40 days you could pay off a card.  An FPGA would still need a significant period of time to pay off the hardware, so price spikes will have less influence on sales.


It will be an interesting dynamic to watch because I am sure 2012 will be the year of the FPGA.
gmaxwell
Moderator
Legendary
Activity: 4158
Merit: 8382
November 04, 2011, 01:15:03 PM
 #84

You know FPGA mining is becoming legit when 2-3 vendors are trying to snipe customers from each other's threads.  Roll Eyes

I prefer the kind of evidence where the margins get down to 20% over COGS. Wink
DeathAndTaxes
Donator
Legendary
Activity: 1218
Merit: 1079
Gerald Davis
November 04, 2011, 01:22:23 PM
 #85

So what is the release date?  You may want to update the title so people don't think the project is dead.

Got a question about the backplane.  You say it has a 48W power supply, but it also has a SATA power connector?  By PSU do you mean it steps down the voltage from the SATA power connector to what is required for each board?

Any discounts if someone buys a backplane + 6 boards?
How heavy are the boards?  Your photo has the backplane "down" and the cards plugged into it.  Would there be an issue if the backplane was mounted vertically with the cards acting like "shelves", or would that put too much pressure on the connectors?  Just getting some ideas on density and mounting.  I would want to have them enclosed in a case.
eldentyrell (OP)
Donator
Legendary
Activity: 980
Merit: 1004
felonious vagrancy, personified
November 05, 2011, 06:54:11 PM
 #86

So what is the release date?

Sorry, I wasn't aware that the title could be altered!  I've updated it now.

Got a question about the backplane.  You say it has a 48W power supply

I've switched to a 72W supply.

but it also has a SATA power connector?  By PSU do you mean it steps down the voltage from the SATA power connector to what is required for each board?

Exactly.  Technically it is a "DC-to-DC point of load switching regulator."

The nice part is that the 72W supply can feed from either +5V or +12V (the old 48W supply could only use +12V).

Any discounts if someone buys a backplane + 6 boards?

At the moment I am neither taking orders, nor announcing a ship date, nor guaranteeing that either of those things will happen.  If you have an immediate need for an FPGA mining device I suggest you look into the fine offerings by fpgaminer/ngzhang or ztex (or rph, although I think he said he's not selling his).

How heavy are the boards?  Your photo has the backplane "down" and the cards plugged into it.  Would there be an issue if the backplane was mounted vertically with the cards acting like "shelves", or would that put too much pressure on the connectors?

That works fine.  Those connectors are seriously heavy-duty stuff.  Unfortunately they're expensive too: even in qty50 I still had to pay $2.60 per board for each pair of connectors (male+female).  But there's almost no voltage drop across the gigantic pins and they can carry more current than I'll ever need.


The printing press heralded the end of the Dark Ages and made the Enlightenment possible, but it took another three centuries before any country managed to put freedom of the press beyond the reach of legislators.  So it may take a while before cryptocurrencies are free of the AML-NSA-KYC surveillance plague.
DeathAndTaxes
Donator
Legendary
Activity: 1218
Merit: 1079
Gerald Davis
November 05, 2011, 07:30:52 PM
 #87

Well, I am not looking to buy until January.  I am disappointed you aren't looking to make a commercial run.  Still, hopefully you make your personal run, learn some things, and come back with a "round 2" offering.

When it comes to mining I always think big.  I replaced my whole hodgepodge collection of GPUs with nothing but 5970s because I like the density (2.2GH/rig without extenders or dual power supplies).  That kind of thinking let me fit 10GH/s in my garage.  I like your board design because, once again ... density.

10GH would be ~50 FPGA boards.  Now, I have no intention of buying 50 boards all at once, but I also like to plan for the "end game".  50 boards lying around, connected by a rat's nest of USB cables, doesn't seem appealing to me.  Maybe it is my time working in a datacenter, or maybe it is just OCD, but I like to see everything in its place.

Your design seems to have higher density and provide for more efficient racking.  One backplane w/ 6 cards ~= 1.2GH/s.  If you ever offered a larger backplane of, say, 10 cards powered by a single PCIe connector, that would be even more interesting.  Take a 4U server case, put 2 backplanes and a tiny ITX motherboard in it, and a low-end 500/600W power supply w/ 2 PCIe power connectors could power the entire thing: 20 cards, or ~4GH/s, in a single 4U case.  Power density would even be low enough to rack these things up in a standard datacenter rack.

Anyways, even if you don't sell any in the near future, I hope you keep at the project.  If you make changes for "round 2", think about density.  It is the advantage you have over other designs, an advantage some would be willing to pay a slight premium for.  Open-air rigs (either GPU or FPGA) are fine for now, but the "end game" goal would be high hashing density in standardized server cases.  Nobody wants a garage or office full of whirling, noisy open-air rigs; they just happen to be the most efficient.  GPUs will likely never work in a standard case due to the high thermal load, but FPGAs ... might.
bulanula
Hero Member
Activity: 518
Merit: 500
November 07, 2011, 09:30:36 PM
 #88

FPGAs are much better for me than GPUs because of less heat and noise, but right now the price and performance leave much to be desired.

Guess you can say the GPUs are old, inefficient, powerful gas-guzzling motors while the FPGAs are new electric vehicles, to keep the car analogy going Smiley
DeathAndTaxes
Donator
Legendary
Activity: 1218
Merit: 1079
Gerald Davis
November 07, 2011, 09:45:31 PM
 #89

FPGAs are much better for me than GPUs because of less heat and noise, but right now the price and performance leave much to be desired.

Guess you can say the GPUs are old, inefficient, powerful gas-guzzling motors while the FPGAs are new electric vehicles, to keep the car analogy going Smiley

However, they are getting close.  While GPUs may be cheap, there is a limit on how many can be powered by a single rig, and high-efficiency power supplies aren't cheap either.  A good price point for a GPU rig is $1 per MH/s.  FPGAs are getting closer every day.

GPU rig ($1 per MH/s, 2MH/W @ $0.10 per kWh):
1GH rig = $1000 hardware cost + $438 per year.  Total cost over 3 years = $2314.

FPGA rig (22MH/W @ $0.10 per kWh):
1GH rig (@ $2.50 per MH) = $2500 hardware cost + $40 per year.  Total cost over 3 years = $2620.
1GH rig (@ $2.00 per MH) = $2000 hardware cost + $40 per year.  Total cost over 3 years = $2120.
1GH rig (@ $1.50 per MH) = $1500 hardware cost + $40 per year.  Total cost over 3 years = $1620.

Given FPGAs' massively lower operating costs, if they even get close to GPUs they are the smart place to deploy new hardware.
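
(A quick sketch of the comparison above, hardware plus electricity over the same 3-year window, using only the numbers from this post:)

Code:
# Total cost of ownership for a 1GH/s rig: hardware + electricity.
HOURS_PER_YEAR = 24 * 365

def tco_usd(hw_cost, mh_per_watt, usd_per_kwh=0.10, years=3, ghps=1.0):
    watts = ghps * 1000.0 / mh_per_watt                          # 1 GH/s = 1000 MH/s
    electricity = watts / 1000.0 * HOURS_PER_YEAR * usd_per_kwh * years
    return hw_cost + electricity

print(tco_usd(1000, 2))     # GPU rig ($1/MH, 2MH/W):      ~$2314
print(tco_usd(2500, 22))    # FPGA rig ($2.50/MH, 22MH/W): ~$2620
print(tco_usd(1500, 22))    # FPGA rig ($1.50/MH, 22MH/W): ~$1620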
ElectricMucus
Legendary
Activity: 1666
Merit: 1057
Marketing manager - GO MP
November 07, 2011, 09:54:00 PM
 #90

Why aren't there any PCIe cards with only a few FPGAs & some power converters on them?

That would be the most cost-effective solution, as modern FPGAs have native PCIe endpoints and PCIe even has a JTAG interface built in.  All we need is drivers, a 2-layer PCB, and a panel sheet to mount it.
eldentyrell (OP)
Donator
Legendary
Activity: 980
Merit: 1004
felonious vagrancy, personified
November 07, 2011, 10:30:15 PM
 #91

Why aren't there any PCIe cards with only a few FPGAs & some power converters on them?

Lack of demand.

Other than bitcoin, there are not many uses for large FPGAs outside of (1) development boards and (2) integration into a product of some kind (like a router) in which the user is not even aware that FPGAs are involved.  The second one is where Xilinx's large-device profits come from.

If you're buying a dev board, you're either an academic (in which case Xilinx cuts you a special deal) or you don't mind paying $4,000+ for a card with all sorts of doo-dads you'll never use since odds are the development costs of the product you're working on make expenses like this irrelevant.  That's why with each generation of chips you see Xilinx (or one of its partners) produce some uber-board with everything-and-the-kitchen-sink on it.  FWIW, many of these "kitchen sink" boards have PCIe interfaces.

The printing press heralded the end of the Dark Ages and made the Enlightenment possible, but it took another three centuries before any country managed to put freedom of the press beyond the reach of legislators.  So it may take a while before cryptocurrencies are free of the AML-NSA-KYC surveillance plague.
ngzhang
Hero Member
Activity: 592
Merit: 501
We will stand and fight.
November 08, 2011, 05:02:24 PM
 #92

Why aren't there any PCIe cards with only a few FPGAs & some power converters on them?

That would be the most cost-effective solution, as modern FPGAs have native PCIe endpoints and PCIe even has a JTAG interface built in.  All we need is drivers, a 2-layer PCB, and a panel sheet to mount it.

Why use PCIe interfaces? USB 1.1 is much better.
DeathAndTaxes
Donator
Legendary
Activity: 1218
Merit: 1079
Gerald Davis
November 08, 2011, 05:13:57 PM
Last edit: November 08, 2011, 08:39:09 PM by DeathAndTaxes
 #93

Why aren't there any PCIe cards with only a few FPGAs & some power converters on them?

That would be the most cost-effective solution, as modern FPGAs have native PCIe endpoints and PCIe even has a JTAG interface built in.  All we need is drivers, a 2-layer PCB, and a panel sheet to mount it.

Why use PCIe interfaces? USB 1.1 is much better.

I am thinking scalability and density.  Eventually, if Bitcoin grows and flourishes, mining will move beyond hobbyists and garages full of open rigs of noisy cards into high-density, datacenter-capable designs.  Getting a large number of GPUs into a rackmount server is unviable due to the thermal load; FPGAs may make that possible someday.  A PCIe board can supply power and data over a single connector, which makes deployment easier.  More importantly, it provides a way to securely mount multiple FPGAs using existing standards (off-the-shelf motherboards, rackmount chassis, ATX power supplies, etc.).

I would love someday to be able to put an FPGA array in a co-location datacenter to reduce the risk of loss due to theft, power, fire, or damage.  A full-length board could mount maybe 5 FPGAs at half height and maybe 10 at full height.  That creates some interesting datacenter-quality arrays.  A 2U server could mount 4 boards, or 20 FPGAs (or more).  That is ~4GH/s on 300W in 2U of space.  A standard datacenter rack could hold 80GH/s of hashing power, run on a single 30A 208V power connection, and make things like remote power control, KVM over IP, and enterprise-grade redundant power supplies more economical.
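
(A quick check of the rack math, taking the post's ~300W per 2U server at face value; note the 6.24kW figure is the circuit's nameplate capacity, and a real deployment would derate it:)

Code:
# Rack density check -- illustrative numbers from the post above.
servers = 20                 # 2U servers per rack, ~4 GH/s and ~300 W each
print(servers * 4)           # ~80 GH/s per rack
print(servers * 300)         # ~6000 W total draw
print(208 * 30)              # 6240 W nameplate from a single 30A 208V feed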



bulanula
Hero Member
Activity: 518
Merit: 500
November 08, 2011, 08:34:33 PM
 #94

Why aren't there any PCIe cards with only a few FPGAs & some power converters on them?

That would be the most cost-effective solution, as modern FPGAs have native PCIe endpoints and PCIe even has a JTAG interface built in.  All we need is drivers, a 2-layer PCB, and a panel sheet to mount it.

Why use PCIe interfaces? USB 1.1 is much better.

I am thinking scalability and density.  Eventually Bitcoin will move beyond hobbyists and open boards into high-density datacenter designs.  Getting a large number of GPUs into a rackmount server is simply impossible due to the thermal load; FPGAs may make that possible someday.  I see that as the endgame for FPGAs.  A PCIe board can supply power and data over a single connector.  It also makes a convenient way to mount multiple FPGAs inside a standardized chassis.  I would love someday to put an FPGA array in a co-location datacenter to reduce the risk of loss due to theft, power, fire, or damage.

A full-length board could mount maybe 5 FPGAs at half height and maybe 10 at full height.  That creates some interesting datacenter-quality arrays.  A 2U server could mount 4 boards, or 20 FPGAs total, for ~4GH/s using maybe 300W for the entire system (at the wall).  A standard datacenter rack could hold 80GH and run on a single 30A 208V power connection.  The higher density would make things like remote power control and KVM over IP economical.

Too bad the demand is too low now.  I think BFL Labs is a scam too.  I mean, why go through all that development when the price of BTC could crash any day and people would stop buying mining equipment?  Even in other industries FPGAs are almost never heard of.  I never heard about FPGAs until Bitcoin.
aTg
Legendary
Activity: 1358
Merit: 1000
November 08, 2011, 08:38:43 PM
 #95

I am thinking scalability and density.  Eventually Bitcoin will move beyond hobbyists and open boards into high-density datacenter designs.  Getting a large number of GPUs into a rackmount server is simply impossible due to the thermal load; FPGAs may make that possible someday.  I see that as the endgame for FPGAs.  A PCIe board can supply power and data over a single connector.  It also makes a convenient way to mount multiple FPGAs inside a standardized chassis.  I would love someday to put an FPGA array in a co-location datacenter to reduce the risk of loss due to theft, power, fire, or damage.

I was thinking exactly that, but couldn't we start from here with a standard design for a rack?
Having many small modules with a single FPGA each is not efficient, because of the spending on individual fans and especially because a single USB controller could handle an entire board of FPGAs, so each module in the rack could be connected via USB to a hub and a computer within the same cabinet.
DeathAndTaxes
Donator
Legendary
Activity: 1218
Merit: 1079
Gerald Davis
November 08, 2011, 08:48:12 PM
 #96

I was thinking exactly that, but couldn't we start from here with a standard design for a rack?
Having many small modules with a single FPGA each is not efficient, because of the spending on individual fans and especially because a single USB controller could handle an entire board of FPGAs, so each module in the rack could be connected via USB to a hub and a computer within the same cabinet.

Agreed, but PCIe "solves" 4 problems:

1) power distribution
2) data connectivity
3) standardized mounting
4) server-sized cooling, not individual-board cooling

Sure, you could have larger boards, figure out a way to rig USB cables through a hub to the host, run custom power lines to each of them, and then figure out some non-standard method to securely mount and cool them.  However, using PCIe allows you to leverage existing technology: chassis with redundant midplane cooling, backplanes for securely mounting cards, ATX motherboards for connectivity and power.  I don't think we will see PCIe solutions anytime soon, but on the other hand, if Bitcoin is around in 5 years, I can't imagine the "solution" being a bunch of USB boards jury-rigged inside a case connected to a USB hub.

For example, take a look at this "industrial chassis":
http://www.adlinktech.com/PD/marketing/Datasheet/RK-440/RK-440_Datasheet_1.pdf

Notice the midplane fans designed to cool the expansion cards, and the 18 expansion slots.  It uses a "single board computer" where the "motherboard" is actually mounted perpendicular to a backplane, just like any other expansion card.  This is the kind of setup used for other "industrial" servers: cable video multiplexing, high-speed network switching, digital signal processing, etc.
ztex
Donator
Sr. Member
Activity: 367
Merit: 250
ZTEX FPGA Boards
November 08, 2011, 09:06:00 PM
 #97

I don't think we will see PCIe solutions anytime soon, but on the other hand, if Bitcoin is around in 5 years, I can't imagine the "solution" being a bunch of USB boards jury-rigged inside a case connected to a USB hub.

Development costs are much higher (driver development, ...).  Due to the small volumes sold, this results in significantly higher prices for such boards.

However future solutions end up looking, unless you want to invest $100,000s they will be either ugly or cheap.


Dexter770221
Legendary
Activity: 1029
Merit: 1000
November 08, 2011, 09:08:25 PM
 #98

An FPGA chip that can give a reasonable MH/$ (1?) costs at least $150.  If you want to put 6 of them on one card, that's $900 for chips alone.  The PCB and other necessary parts would be $300 more, and $300 for manufacturing and shipping.  That's $1500 per card that can only mine, achieving ~1.2GH/s (using ~50W of power).  That's quite a big price... When I started my adventure with Bitcoin I spent $1000 on a PC that can produce 800MH/s, and that wasn't an easy decision... I did it because I also use the PC for work (writing programs).  My child needs a new pair of boots, so I will never spend $1500 on some card that could be worthless in a few months... Probably most bitminers are in a similar situation... That's why there's no such card: too small a demand.  Give me $2k and I will design such a card and make one prototype...
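
(A tiny sketch of the cost arithmetic above, using only the numbers from the post:)

Code:
# Cost per MH/s for the hypothetical 6-chip card described above.
chips = 6 * 150               # $900 in FPGAs
pcb_and_parts = 300           # PCB + other components
mfg_and_shipping = 300        # assembly + shipping
total = chips + pcb_and_parts + mfg_and_shipping    # $1500
print(total / 1200.0)         # ~$1.25 per MH/s at ~1.2 GH/s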

Under development: Modular UPGRADEABLE Miner (MUM).  Looking for investors.
Change one PCB with a screwdriver and you have a brand-new miner in hand... Plug&Play, scalable from one module to thousands.
bulanula
Hero Member
Activity: 518
Merit: 500
November 08, 2011, 09:10:34 PM
 #99

I was thinking exactly that, but couldn't we start from here with a standard design for a rack?
Having many small modules with a single FPGA each is not efficient, because of the spending on individual fans and especially because a single USB controller could handle an entire board of FPGAs, so each module in the rack could be connected via USB to a hub and a computer within the same cabinet.

Agreed, but PCIe "solves" 4 problems:

1) power distribution
2) data connectivity
3) standardized mounting
4) server-sized cooling, not individual-board cooling

Sure, you could have larger boards, figure out a way to rig USB cables through a hub to the host, run custom power lines to each of them, and then figure out some non-standard method to securely mount and cool them.  However, using PCIe allows you to leverage existing technology: chassis with redundant midplane cooling, backplanes for securely mounting cards, ATX motherboards for connectivity and power.  I don't think we will see PCIe solutions anytime soon, but on the other hand, if Bitcoin is around in 5 years, I can't imagine the "solution" being a bunch of USB boards jury-rigged inside a case connected to a USB hub.

For example, take a look at this "industrial chassis":
http://www.adlinktech.com/PD/marketing/Datasheet/RK-440/RK-440_Datasheet_1.pdf

Notice the midplane fans designed to cool the expansion cards, and the 18 expansion slots.  It uses a "single board computer" where the "motherboard" is actually mounted perpendicular to a backplane, just like any other expansion card.  This is the kind of setup used for other "industrial" servers: cable video multiplexing, high-speed network switching, digital signal processing, etc.

Hey D&T:

Since you seem very knowledgeable about these backplanes, I have one question for you: do you think they can be used for mining?

I researched this and I think it is quite a nice solution, but unfortunately not that cheap.  E.g., will the costs be insane?  Power distribution in the backplane?  Bandwidth on the backplane?  Thanks!
aTg
Legendary
Activity: 1358
Merit: 1000
November 08, 2011, 09:14:40 PM
 #100

Something like that?

http://jchblue.blogspot.com/2009/08/pico-computing-fpga-cluster.html

16 Xilinx Spartan XC3S5000 FPGAs
