Bitcoin Forum
Poll
Question: Which FPGA shall be used on our prototype?
Xilinx Spartan 6 LX 150 - 17 (70.8%)
Altera Cyclone IV 75k - 7 (29.2%)
Total Voters: 24

Author Topic: Modular FPGA Miner Hardware Design Development  (Read 119229 times)
mimarob (Full Member, Activity: 354, Merit: 103)
July 04, 2011, 09:22:20 AM  #161

Watching this thread with great interest; I know VHDL better than Verilog :-)
Olaf.Mandel (Member, Activity: 70, Merit: 10)
July 04, 2011, 09:23:18 AM  #162

[...]
What about the Xilinx XC6SLX150-3CSG484C? It's cheaper than the EP4CE75 and will definitely allow for higher hash rates.
As I already mentioned multiple times, ArtForz (a bitcoin early adopter with a huge mining farm) claims to run 190MH/s on that one, and I think we can trust him. Sadly I haven't managed to reproduce this myself so far, as I don't have the time nor the processing power needed to do lots of synthesis runs to optimize it. He considered releasing the source code though... We might just need to poke him a bit more to actually do that.

Xilinx would be my preferred solution, because I have read more of their datasheets. But while 190 MHash/s is a very impressive number, I would really like to have someone state that this or that available code gives this or that performance, especially since not all of us can compile code for that FPGA.

If you can compile for the XC6SLX150, can you just take any of the currently available codes and compile it with default settings? Even an unoptimised result is better than nothing! We just want to know if the FPGA can run a fully unrolled core at more than 84 MHz.
makomk (Hero Member, Activity: 686, Merit: 564)
July 04, 2011, 10:22:47 AM (last edit: July 04, 2011, 03:36:41 PM by makomk)  #163

Good to see you have allowed 3% for interface changes  Roll Eyes
Worse, actually - 3% for adding an interface other than JTAG at all ;-). I figure that if it's possible to offer a decent selection of basic interface options, that'll be enough; anything fancier like Ethernet is probably best done in an external microcontroller. Of course, that's a big if!

Edited to add:
If you can compile for the XC6SLX150, can you just take any of the currently available codes and compile it with default settings? Even an unoptimised result is better than nothing! We just want to know if the FPGA can run a fully unrolled core at more than 84 MHz.
I've heard that with the default settings you can't actually get it to pass place-and-route. (The workaround is *probably* quite easy; modifying the Map settings to ignore user timing constraints and run in non-timing-driven mode should work, though obviously I can't test this.)

Quad XC6SLX150 Board: 860 MHash/s or so.
SIGS ABOUT BUTTERFLY LABS ARE PAID ADS
O_Shovah (OP) (Sr. Member, Activity: 410, Merit: 252)
Watercooling the world of mining
July 04, 2011, 08:11:31 PM  #164

As this basically comes down to the Xilinx Spartan 6 LX 150 vs. the Altera Cyclone IV 75K, I think we should have a poll on that.

I personally would prefer to have a design proven to be working at least in simulation, though. Also, we will be dependent on someone to provide us with the bitstream in case we use the Spartan.

Please make your decision on your FPGA of choice. This poll will run until Saturday, 9 July 2011, 22:00.





Olaf.Mandel (Member, Activity: 70, Merit: 10)
July 04, 2011, 08:18:51 PM  #165

Xilinx Spartan 6 XC6SLX150: cheaper, and claimed to be faster.
O_Shovah (OP) (Sr. Member, Activity: 410, Merit: 252)
Watercooling the world of mining
July 04, 2011, 08:21:52 PM  #166

Xilinx Spartan 6 XC6SLX150: cheaper, and claimed to be faster.

Then give the poll your click  Wink

Olaf.Mandel (Member, Activity: 70, Merit: 10)
July 05, 2011, 05:59:59 AM  #167

[...]
Then give the poll your click  Wink

I scrolled down to the end of the discussion too fast, it seems...
Olaf.Mandel (Member, Activity: 70, Merit: 10)
July 05, 2011, 08:22:44 AM  #168

While the poll for which FPGA to use is running, we can already decide on the specifics of the DIMM connector. This can probably be split into four steps:
  • Conceptual: see below
  • Electrical: Specify a table of signal name, voltage, current and comments (e.g. where to put pull-up resistors, ...)
  • Mechanical: Specify which DIMM connector to use and how much space to leave between DIMMs and around the DIMM in general
  • Pinout: Specify a table of pin number and signal name

To get the discussion started, here is a suggestion for the conceptual step: which features to include and how to solve certain issues. While I write this in firm language, it is only a suggestion. Everyone on the board should comment on or amend this, and then O_Shovah should probably make a final selection.

The following signals, to be included in the connector, represent the minimum needed for our design of the DIMM:

Signal | Description
+V     | The supply voltage for the FPGAs on the DIMM. Has a high current and a wide voltage range.
+VBUS  | The supply voltage for all logic signals on the bus. Provided to the DIMM to power its interface logic.
GND    | The return for both +V and +VBUS. All logic signals are also relative to this signal.
TCK    | The clock signal for the JTAG bus. Input into the DIMM.
TMS    | The mode select signal for the JTAG bus. Input into the DIMM.
TDI    | The serial data input signal for the JTAG bus. Input into the DIMM.
TDO    | The serial data output signal for the JTAG bus. Output from the DIMM.

The following signals, to be included in the connector, are not strictly needed in all use cases. Their inclusion depends on the implementation of the features listed below:

Signal   | Feature      | Description
DET_DIMM | Auto bridge  | Pin to allow the backplane to detect the presence of a DIMM in the slot. Shorted to GND on the DIMM.
DET_BP   | Hybrid board | Pin to allow the DIMM to detect the presence of a backplane. Shorted to GND on the backplane.
SCL      | EEPROM       | The clock signal of an I2C bus. Input into the DIMM.
SDA      | EEPROM       | The serial data signal of an I2C bus. Bidirectional I/O.
LED      | Info-LED     | Signal to enable an LED on the DIMM. Input into the DIMM.

List of features:

  • Auto bridge: The backplane can automatically bridge the JTAG signals over unpopulated slots. If not implemented, jumpers need to be used to bridge open slots.
  • Hybrid board: The DIMM can also operate in a standalone mode without a backplane.
  • EEPROM: The DIMM contains an EEPROM to store details of the DIMM: type and number of FPGAs, batch number, serial number...
  • Info-LED: The backplane can switch on an LED on the edge of the DIMM under software control. May be used to identify defective boards to the user. This feature could also be implemented via I2C.

One additional issue we should discuss, but which has no implication on the current step of the specification process: should the FPGAs also be connected to the I2C bus? If not, we can potentially save one bidirectional level shifter, as the EEPROM can run at a different voltage than the FPGAs.
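
To make the EEPROM contents concrete, here is a minimal sketch of a possible identification record. The field names, sizes, and byte order are only my assumptions for illustration, not part of any agreed specification:

Code:
# Hypothetical layout of the DIMM identification record in the EEPROM.
# All fields below are assumptions for illustration, not an agreed format.
import struct

# little-endian: magic, format version, FPGA count, FPGA type id, batch, serial
DIMM_ID_FORMAT = "<4sBBHHI"

def pack_dimm_id(fpga_count, fpga_type, batch, serial):
    """Build the record to be written into the on-DIMM EEPROM."""
    return struct.pack(DIMM_ID_FORMAT, b"DIMM", 1, fpga_count,
                       fpga_type, batch, serial)

def unpack_dimm_id(blob):
    magic, version, count, ftype, batch, serial = struct.unpack(
        DIMM_ID_FORMAT, blob)
    if magic != b"DIMM":
        raise ValueError("not a DIMM identification record")
    return {"version": version, "fpga_count": count, "fpga_type": ftype,
            "batch": batch, "serial": serial}

# Example: a 4-FPGA board (the type id 0x6150 is made up).
record = pack_dimm_id(fpga_count=4, fpga_type=0x6150, batch=1, serial=42)
print(unpack_dimm_id(record))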
TheSeven (Hero Member, Activity: 504, Merit: 500)
FPGA Mining LLC
July 05, 2011, 04:48:37 PM  #169

  • Auto bridge: The backplane can automatically bridge the JTAG signals over unpopulated slots. If not implemented, jumpers need to be used to bridge open slots.

While this might be sensible (for cost reasons) for some low-cost backplanes, I don't think it will scale well to bigger backplanes with multi-FPGA cards.
Each board will need its dedicated I2C bus anyway, so why not have a dedicated JTAG bus as well?
For the cheap boards, you could just connect to the USB pins via the DIMM connector, and basically just have a hub and power supply on the backplane. The more expensive boards might have an ARM and Ethernet.

One additional issue we should discuss, but which has no implication on the current step of the specification process: should the FPGAs also be connected to the I2C bus? If not, we can potentially save one bidirectional level shifter, as the EEPROM can run at a different voltage than the FPGAs.

I think it will be advantageous to connect I2C to the FPGAs, and at least have the option to transmit work/shares that way. A level shifter really doesn't cost much compared to an FPGA, and if we go for a 2.5V interface we can possibly remove it altogether.

Oh, and don't forget to add a means for boards to interrupt the backplane, e.g. when a share was found or keyspace was exhausted.

My tip jar: 13kwqR7B4WcSAJCYJH1eXQcxG5vVUwKAqY
Olaf.Mandel (Member, Activity: 70, Merit: 10)
July 05, 2011, 07:32:17 PM  #170

[...]
Each board will need its dedicated I2C bus anyway, so why not have a dedicated JTAG bus as well?

We only need one I2C bus; it just needs to be fragmented into different partitions by a switch. I mentioned one example of such a switch before, the NXP PCA9547PW. The reason why I2C needs to be partitioned is the limited availability of addresses on the bus. That is not a problem for the JTAG bus, though: logically, you can make it as long as you like. Electrically, you need drivers in the TCK and TMS lines for a design with many chips.

Given that no more than one I2C bus was planned and no more than one JTAG chain is needed, can you clarify why you think more JTAG chains are needed?
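
For illustration, a sketch of what driving such a switch could look like from the host side, assuming the Python pyftdi library, a made-up FTDI URL, and my reading of the PCA9547 control byte (enable in bit 3, channel in bits 2:0):

Code:
# Sketch: partition one I2C bus with a PCA9547 8-channel mux.
# The FTDI URL and addresses below are assumptions for illustration.
from pyftdi.i2c import I2cController

MUX_ADDR = 0x70              # PCA9547 base address, A2..A0 tied low (assumed)

i2c = I2cController()
i2c.configure('ftdi://ftdi:2232h/2')   # I2C on the second MPSSE of an FT2232H
mux = i2c.get_port(MUX_ADDR)

def select_partition(channel):
    """Route the upstream bus to one of the 8 downstream partitions."""
    mux.write(bytes([0x08 | (channel & 0x07)]))   # enable bit + channel

select_partition(3)                      # talk to the DIMM in slot 3
eeprom = i2c.get_port(0x50)              # typical 24Cxx EEPROM address (assumed)
print(eeprom.read_from(0x00, 16).hex())  # first bytes of the ID record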

For the cheap boards, you could just connect to the USB pins via the DIMM connector, and basically just have a hub and power supply on the backplane. The more expensive boards might have an ARM and Ethernet.

So not use the JTAG or I2C signals on the bus connector at all, just the USB D+ and D- lines? That is a very interesting idea: it simplifies the design a lot if it works. None of the non-supply signals I mentioned in my last post are needed in that case, as the backplane can detect the presence of a DIMM the "USB" way. So a simple backplane contains wires and a couple of mini-USB connectors? Or does it contain a home-grown USB hub? (I am limiting myself to a cheap backplane in this discussion because the intelligent one with a CPU can be built on top of the cheap design in a second step.)

This is basically shifting the interface chip completely onto the DIMM, removing (by design, not material cost) the overhead of supporting hybrid DIMMs. Of the different options, it is not the cheapest, but certainly elegant:

  • slave-only DIMMs, USB-chip only on backplane: cheapest, JTAG and I2C on bus
  • hybrid DIMMs, USB-chip only on DIMMs: mid price, simple bus with only USB, but needs hub somewhere
  • hybrid DIMMs, USB-chip both on DIMMs and backplane: most expensive, JTAG and I2C on bus

[...]
Oh, and don't forget to add a means for boards to interrupt the backplane, e.g. when a share was found or keyspace was exhausted.

Is that actually needed? I agree that a later backplane that contains a CPU may make good use of the interrupt, but for the USB-based devices it is only a question of how much data to transmit: you still need to use polling, because USB does not have a direct IRQ. I admit that reading the GPIO value of an FT2232 connected to the IRQ signal is quicker than reading the JTAG chain. But how bad is that for even 10 boards, each with 16 FPGAs?
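
For a feel of what that polling amounts to on the host, a rough Python sketch, assuming the pyftdi library; the FTDI URL and the slot-to-pin mapping are invented for illustration:

Code:
# Sketch: host-side polling of per-board IRQ lines through FT2232 GPIOs.
# pyftdi is assumed as one option; URL and pin mapping are made up.
import time
from pyftdi.gpio import GpioAsyncController

gpio = GpioAsyncController()
gpio.configure('ftdi://ftdi:2232h/1', direction=0x00)  # all 8 pins as inputs

BOARD_IRQ_BITS = {0: 0x01, 1: 0x02, 2: 0x04}  # hypothetical slot -> pin mask

def boards_needing_service():
    """Return slots whose (assumed active-high) IRQ line is asserted."""
    state = gpio.read()   # 8-bit snapshot of the GPIO bank (assumed int)
    return [slot for slot, mask in BOARD_IRQ_BITS.items() if state & mask]

while True:
    for slot in boards_needing_service():
        print("poll board in slot", slot)  # fetch shares over JTAG/I2C here
    time.sleep(0.01)  # even 100 polls/s is far cheaper than a full JTAG scan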
O_Shovah (OP) (Sr. Member, Activity: 410, Merit: 252)
Watercooling the world of mining
July 05, 2011, 08:04:44 PM (last edit: July 05, 2011, 08:44:48 PM by O_Shovah)  #171

While the poll for which FPGA to use is running, we can already decide on the specifics of the DIMM connector.
Thanks, Olaf, for starting the further discussion in my stead. Smiley


For the cheap boards, you could just connect to the USB pins via the DIMM connector, and basically just have a hub and power supply on the backplane. The more expensive boards might have an ARM and Ethernet.

So not use the JTAG or I2C signals on the bus connector at all, just the USB D+ and D- lines? That is a very interesting idea: it simplifies the design a lot if it works. None of the non-supply signals I mentioned in my last post are needed in that case, as the backplane can detect the presence of a DIMM the "USB" way. So a simple backplane contains wires and a couple of mini-USB connectors? Or does it contain a home-grown USB hub? (I am limiting myself to a cheap backplane in this discussion because the intelligent one with a CPU can be built on top of the cheap design in a second step.)

This is basically shifting the interface chip completely onto the DIMM, removing (by design, not material cost) the overhead of supporting hybrid DIMMs. Of the different options, it is not the cheapest, but certainly elegant:

  • slave-only DIMMs, USB-chip only on backplane: cheapest, JTAG and I2C on bus
  • hybrid DIMMs, USB-chip only on DIMMs: mid price, simple bus with only USB, but needs hub somewhere
  • hybrid DIMMs, USB-chip both on DIMMs and backplane: most expensive, JTAG and I2C on bus
Edited: I would prefer the last of your options, with both the DIMM and the backplane carrying the I2C and the JTAG.

This would offer the maximum flexibility, and I assume the additional hardware cost to be marginal compared to the FPGA and the power supply.
 

Olaf.Mandel (Member, Activity: 70, Merit: 10)
July 05, 2011, 08:19:55 PM  #172

[...]
  • slave-only DIMMs, USB-chip only on backplane: cheapest, JTAG and I2C on bus
  • hybrid DIMMs, USB-chip only on DIMMs: mid price, simple bus with only USB, but needs hub somewhere
  • hybrid DIMMs, USB-chip both on DIMMs and backplane: most expensive, JTAG and I2C on bus

I would prefere the last of you options as the backplane containing the hub and the DIMM containing the USB and the i2c+jtag.

This would offer the maximum flexibility, and I assume the additional hardware cost to be marginal compared to the FPGA and the power supply.

Can you reformulate your first sentence? It doesn't make sense to me. The last option in the above list is not about having a hub on the backplane, but about a JTAG + I2C bus. The hub is needed for the middle option.

In case you meant adding the USB signals of the hybrid DIMM to the JTAG and I2C bus as a third connection protocol: isn't this getting more complicated than it needs to be? You really need only the JTAG connection or the USB connection. Adding I2C to JTAG makes sense because it allows you to read the description and serial number data of the DIMM. Adding I2C to USB makes no sense: the USB chip on the DIMM contains the I2C master for the local I2C bus, which does not go beyond a single DIMM. The same goes for JTAG: if there is USB on the backplane connector, what is the point of adding JTAG to it? You would have to pay attention to who drives the individual signals, to prevent burning out the interface chips if both try to send data at the same time.
O_Shovah (OP) (Sr. Member, Activity: 410, Merit: 252)
Watercooling the world of mining
July 05, 2011, 08:51:48 PM  #173

I hope I could clarify my opinion.

One interesting point for me would also be:

How much bandwidth would be needed for a certain MHash rate, and might that become a problem for future, faster FPGAs with I2C and JTAG?

I promise I will read further into the specification of I2C, as I'm not fully aware of its capabilities yet.

lame.duck (Legendary, Activity: 1270, Merit: 1000)
July 05, 2011, 09:29:33 PM  #174

The bandwidth requirement for the hashing is not the point: you need ca. 800 bits for the data and receive 32 bits back, both with severe protocol overhead, so this is not the limiting factor, I think. But it is quite complicated to implement. I am working on this, since I have a card with 2 largish Stratix FPGAs and no documentation, so JTAG is probably the easiest way to get the card running. But you have to trace which workload is running on which FPGA and associate the returning nonce with it. Unfortunately, my jtagd or quartus_stp tends to produce errors, which would render the whole system's workload useless (unless you have a persistent database, of course).

So I would prefer a serial connection per FPGA, or SPI, with SPI as the more adequate solution. But there is a serial solution that works already.

I2C could be implemented in an FPGA too, but I think SPI is the better way due to its dedicated clock, which avoids the need for clock recovery and so on, plus the higher speed of SPI.
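
To illustrate the per-FPGA SPI idea, a minimal host-side sketch, assuming the Python pyftdi library; the URL, chip-select numbering, and framing are placeholders only:

Code:
# Sketch of the per-FPGA SPI link suggested above: clock one work unit out
# and read a 32-bit nonce back. pyftdi is assumed as one host-side option.
from pyftdi.spi import SpiController

spi = SpiController()
spi.configure('ftdi://ftdi:2232h/1')
fpga = spi.get_port(cs=0, freq=1_000_000, mode=0)  # one CE line per FPGA

work = bytes(100)                        # placeholder for the ~800-bit work unit
nonce = fpga.exchange(work, readlen=4)   # write work, then read 4 bytes back
print(int.from_bytes(nonce, 'big'))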
TheSeven (Hero Member, Activity: 504, Merit: 500)
FPGA Mining LLC
July 05, 2011, 10:30:54 PM  #175

[...]
Each board will need its dedicated I2C bus anyway, so why not have a dedicated JTAG bus as well?

We only need one I2C bus; it just needs to be fragmented into different partitions by a switch. I mentioned one example of such a switch before, the NXP PCA9547PW. The reason why I2C needs to be partitioned is the limited availability of addresses on the bus. That is not a problem for the JTAG bus, though: logically, you can make it as long as you like. Electrically, you need drivers in the TCK and TMS lines for a design with many chips.

Given that no more than one I2C bus was planned and no more than one JTAG chain is needed, can you clarify why you think more JTAG chains are needed?

Very long JTAG scan chains won't perform well. Boot-up time will be huge, and if you use JTAG for data communication, the overhead will greatly increase on a long scan chain. This probably isn't an issue with, say, 4 FPGAs in the system, but it is with 100 FPGAs. While we probably won't build such boards right now, we might want to at some point, so I'm just proposing to design the interface in a way that allows for separate I2C buses and JTAG chains on each card, as the additional cost will not be an issue.

For the cheap boards, you could just connect to the USB pins via the DIMM connector, and basically just have a hub and power supply on the backplane. The more expensive boards might have an ARM and Ethernet.

So not use the JTAG or I2C signals on the bus connector at all, just the USB D+ and D- lines? That is a very interesting idea: it simplifies the design a lot if it works. None of the non-supply signals I mentioned in my last post are needed in that case, as the backplane can detect the presence of a DIMM the "USB" way. So a simple backplane contains wires and a couple of mini-USB connectors? Or does it contain a home-grown USB hub? (I am limiting myself to a cheap backplane in this discussion because the intelligent one with a CPU can be built on top of the cheap design in a second step.)

This is basically shifting the interface chip completely onto the DIMM, removing (by design, not material cost) the overhead of supporting hybrid DIMMs. Of the different options, it is not the cheapest, but certainly elegant:

  • slave-only DIMMs, USB-chip only on backplane: cheapest, JTAG and I2C on bus
  • hybrid DIMMs, USB-chip only on DIMMs: mid price, simple bus with only USB, but needs hub somewhere
  • hybrid DIMMs, USB-chip both on DIMMs and backplane: most expensive, JTAG and I2C on bus

I would expose the I2C, JTAG and USB interfaces on the DIMM, so that the backplane can decide which one it wants to use. Right now we'll probably go for USB-only, which greatly simplifies backplane complexity and cost (the backplane is basically just a USB hub with DIMM sockets and power distribution), but more intelligent backplanes (which might be designed later) might have a processor that talks I2C and JTAG natively, so there's no reason to add the USB overhead there.

[...]
Oh, and don't forget to add a means for boards to interrupt the backplane, e.g. when a share was found or keyspace was exhausted.

Is that actually needed? I agree that a later backplane that contains a CPU may make good use of the interrupt, but for the USB-based devices it is only a question of how much data to transmit: you still need to use polling, because USB does not have a direct IRQ. I admit that reading the GPIO value of an FT2232 connected to the IRQ signal is quicker than reading the JTAG chain. But how bad is that for even 10 boards, each with 16 FPGAs?

I'm talking about a dedicated IRQ line on each board that can be triggered by all of the FPGAs. Depending on the number of available GPIOs, we might want to have one IRQ line per FPGA that connects to the FTDI, and OR them together to some pin on the DIMM, which the backplane can then use to determine whether the card needs to be polled. While this exposed IRQ pin is probably worthless for a USB-based backplane, it can possibly increase efficiency for more intelligent backplanes, and it adds virtually no cost. We should really design the interface with flexibility and future expansion in mind.

In case you meant adding the USB signals of the hybrid DIMM to the JTAG and I2C bus as a third connection protocol: isn't this getting more complicated than it needs to be? You really need only the JTAG connection or the USB connection. Adding I2C to JTAG makes sense because it allows you to read the description and serial number data of the DIMM. Adding I2C to USB makes no sense: the USB chip on the DIMM contains the I2C master for the local I2C bus, which does not go beyond a single DIMM. The same goes for JTAG: if there is USB on the backplane connector, what is the point of adding JTAG to it? You would have to pay attention to who drives the individual signals, to prevent burning out the interface chips if both try to send data at the same time.

I2C is a multi-master bus, so we can easily expose this one on the DIMM. Level shifters might get a bit complicated, though, because it has a bidirectional data line. We can possibly get around this by just using 2.5V or 3.3V for the FPGA's I/O bank that the bus is connected to.
For JTAG, I'd hope that the FTDI or whichever chip we use has a way to signal that it should tristate its outputs.
This is a point that will need further investigation.

The bandwidth requirement for the hashing is not the point: you need ca. 800 bits for the data and receive 32 bits back, both with severe protocol overhead, so this is not the limiting factor, I think. But it is quite complicated to implement. I am working on this, since I have a card with 2 largish Stratix FPGAs and no documentation, so JTAG is probably the easiest way to get the card running. But you have to trace which workload is running on which FPGA and associate the returning nonce with it. Unfortunately, my jtagd or quartus_stp tends to produce errors, which would render the whole system's workload useless (unless you have a persistent database, of course).

So I would prefer a serial connection per FPGA, or SPI, with SPI as the more adequate solution. But there is a serial solution that works already.

I2C could be implemented in an FPGA too, but I think SPI is the better way due to its dedicated clock, which avoids the need for clock recovery and so on, plus the higher speed of SPI.

I2C is specified for up to 400 kbit/s, including protocol overhead. I really wouldn't like to go for more than 10% bus utilisation on average, to keep latencies low. This means that there should be no more than about 140 GH/s per bus, which seems to be a ridiculously high number even for ASICs. So I2C should be suitable for this Smiley
Nevertheless, I'd let the backplane choose whether to join the I2C buses of the cards with address space extenders, or whether to use dedicated I2C masters per card. This seems not to need any additional effort.

I don't see how SPI is any better than I2C. It will need a dedicated CE signal for each and every chip, limiting the number of chips per card to fewer than the 127 imposed by I2C (which could be worked around with address space extenders), and its clock recovery works basically the same way as I2C's. It might be designed for higher clock speeds, but it neither supports multiple masters, nor is it really a flexible bus system.
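
As a back-of-the-envelope check of that figure, assuming ~800 bits of work and a 32-bit result per unit, 9 SCL cycles per I2C byte, and a few bytes of addressing overhead per transaction (all rough assumptions):

Code:
# Rough check of the "no more than about 140 GH/s per bus" estimate above.
# Payload sizes and per-transaction overhead are assumptions.
BUS_SPEED = 400_000        # I2C Fast-mode: SCL cycles per second
UTILISATION = 0.10         # keep average bus load at 10% for low latency

work_bytes = 800 // 8 + 3    # ~800-bit work unit plus assumed addressing
result_bytes = 32 // 8 + 3   # 32-bit nonce plus assumed addressing
cycles_per_unit = 9 * (work_bytes + result_bytes)  # 9 SCL cycles per byte

units_per_second = BUS_SPEED * UTILISATION / cycles_per_unit
hashes_per_unit = 2 ** 32    # one unit covers the full 32-bit nonce range

print("%.0f GH/s" % (units_per_second * hashes_per_unit / 1e9))  # ~174 GH/s

Heavier framing (register pointers, retries, clock stretching) would push this down toward the ~140 GH/s quoted above; either way the order of magnitude holds.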

My tip jar: 13kwqR7B4WcSAJCYJH1eXQcxG5vVUwKAqY
lame.duck (Legendary, Activity: 1270, Merit: 1000)
July 05, 2011, 11:11:57 PM  #176

Of course you are right that every SPI slave needs a CE input, but on the polling host side this could be solved with an LVTTL 4:16 decoder; that would be enough for 15 chips plus one 'no chip selected' state. But with I2C you would have to implement 7 pins for the FPGA address, and have to think about mixed schemes with DIMM cards containing 1, 2, ... 4 chips, and some pins routed through to the DIMM connector to provide additional address select pins. Of course you could provide separate bitstream files for every FPGA in your box; I would consider this a bad idea, unless you have a commercial license with incremental compilation or find a way to manipulate the bitstream.

OK, writing this: there is the possibility of a preset register that could be configured via JTAG, but there could be some other pitfalls with that, maybe for INT pin assignment or so.

And as you write, I2C is multi-master via an open-collector bus. Maybe you could calculate the capacitive load of a system and whether this will work out. If yes, you should calculate a second time. There is also a 10-bit addressing scheme in I2C, but... I would not recommend I2C.
TheSeven (Hero Member, Activity: 504, Merit: 500)
FPGA Mining LLC
July 06, 2011, 12:24:15 AM  #177

Of course you are right that every SPI slave needs a CE input, but on the polling host side this could be solved with an LVTTL 4:16 decoder; that would be enough for 15 chips plus one 'no chip selected' state. But with I2C you would have to implement 7 pins for the FPGA address, and have to think about mixed schemes with DIMM cards containing 1, 2, ... 4 chips, and some pins routed through to the DIMM connector to provide additional address select pins. Of course you could provide separate bitstream files for every FPGA in your box; I would consider this a bad idea, unless you have a commercial license with incremental compilation or find a way to manipulate the bitstream.

OK, writing this: there is the possibility of a preset register that could be configured via JTAG, but there could be some other pitfalls with that, maybe for INT pin assignment or so.

And as you write, I2C is multi-master via an open-collector bus. Maybe you could calculate the capacitive load of a system and whether this will work out. If yes, you should calculate a second time. There is also a 10-bit addressing scheme in I2C, but... I would not recommend I2C.

That's why I was talking about one I2C bus per card. I'm fairly sure that the bus can handle quite a bunch of FPGAs on one card; at a few hundred kilohertz the capacitive load isn't all that critical.

The address selection issue is fairly easy to solve as well: just have 7 address selection pins on each FPGA, with appropriate Vcc/GND connections on the PCB, so that the FPGA's position on the board determines its address.
The EEPROM on each card will have the same address on all boards anyway, so doing this for the FPGAs as well shouldn't hurt.

Feel free to come up with something better, but SPI currently doesn't seem to be much better to me.
Is there a sane way to drive 127 SPI slaves through an FTDI?

My tip jar: 13kwqR7B4WcSAJCYJH1eXQcxG5vVUwKAqY
Olaf.Mandel (Member, Activity: 70, Merit: 10)
July 06, 2011, 03:23:51 AM  #178

[...]
Is there a sane way to drive 127 SPI slaves through an FTDI?

For the FT2232D: while it also gives you a JTAG interface, you cannot do this efficiently. There is only one MPSSE (generic serial engine) on it, and that is used up for JTAG. So I2C would have to be done by bitbanging GPIOs.

For the FT2232H: this has two MPSSEs, so you can have both an efficient I2C and JTAG bus.

The reason why I reneged on my original suggestion of using the FT2232H: there is a ton of free software that uses the FT2232D as a JTAG interface, but there seems to be little to none that uses the FT2232H, so we could not use existing JTAG software with it. At the time, other protocols in addition to JTAG were only being discussed as a way to read out the EEPROM, and I figured that could be done by bitbanging. But if you want to send workloads to the FPGAs via I2C, then a dedicated MPSSE seems in order.
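
A sketch of how the two MPSSEs of an FT2232H could be split, assuming the Python pyftdi library; the URLs and the 0x50 EEPROM address are assumptions:

Code:
# Sketch: with an FT2232H, the two MPSSEs can run different protocols at
# the same time. URLs and the EEPROM address are assumptions (pyftdi shown
# as one host-side option).
from pyftdi.i2c import I2cController

# MPSSE B ('ftdi://ftdi:2232h/2'): a real I2C engine for the EEPROM and,
# later, work distribution to the FPGAs.
i2c = I2cController()
i2c.configure('ftdi://ftdi:2232h/2')
eeprom = i2c.get_port(0x50)              # typical 24Cxx address (assumed)
print(eeprom.read_from(0x00, 14).hex())  # e.g. the identification record

# MPSSE A ('ftdi://ftdi:2232h/1') stays free for JTAG, driven by whatever
# JTAG software is used. On an FT2232D the single MPSSE is used up by JTAG,
# so I2C would have to be bit-banged on GPIOs instead.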
phillipsjk (Legendary, Activity: 1008, Merit: 1001)
Let the chips fall where they may.
July 06, 2011, 05:31:47 AM  #179

I would expose the I2C, JTAG and USB interfaces on the DIMM, so that the backplane can decide which one it wants to use. Right now we'll probably go for USB-only, which greatly simplifies backplane complexity and cost (the backplane is basically just a USB hub with DIMM sockets and power distribution), but more intelligent backplanes (which might be designed later) might have a processor that talks I2C and JTAG natively, so there's no reason to add the USB overhead there.

I was under the impression that the use of DIMM sockets for power distribution wouldn't work well and that power connectors on each card are going to be used. Unless "power distribution" means power distribution for interface logic in this context.

Are these DIMMs going to be keyed such that any memory chips placed in them won't get fried?

James' OpenPGP public key fingerprint: EB14 9E5B F80C 1F2D 3EBE  0A2F B3DE 81FF 7B9D 5160
lame.duck (Legendary, Activity: 1270, Merit: 1000)
July 06, 2011, 07:21:36 AM  #180

I was under the impression that the use of DIMM sockets for power distribution wouldn't work well and that power connectors on each card are going to be used. Unless "power distribution" means power distribution for interface logic in this context.

Are these DIMMs going to be keyed such that any memory chips placed in them won't get fried?

This would be quite costly, since the 'normal' or even server DIMM sockets are produced in bulk quantities and are much cheaper. The DIMM-PC concept solves this by placing the power and ground pins such that, in a normal board, VCC and GND would be shorted; a good power supply would detect this and shut off.