Author Topic: ASIC Testing on Scrypt?  (Read 17510 times)
CoinBuzz (Sr. Member)
September 02, 2013, 04:33:40 PM  #61

Guys,

Look at this URL: http://www.coingeneration.com/

Maybe that big hashrate is coming from users of this site.

gica_contra (Sr. Member)
September 02, 2013, 04:58:26 PM  #62

Quote from: CoinBuzz on September 02, 2013, 04:33:40 PM
Guys,

Look at this URL: http://www.coingeneration.com/

Maybe that big hashrate is coming from users of this site.

A site where you have to pay before you get paid? I'll take two.
y0m0 (Newbie)
September 02, 2013, 05:04:15 PM  #63

Maybe it's just the Middlecoin pool pointing its entire hash rate at LTC mining; they have around 1219.1723 MH/s at the moment.
CoinBuzz (Sr. Member)
September 02, 2013, 05:49:53 PM  #64

Quote from: y0m0 on September 02, 2013, 05:04:15 PM
Maybe it's just the Middlecoin pool pointing its entire hash rate at LTC mining; they have around 1219.1723 MH/s at the moment.

That's 1.2 GH/s, not 2 GH/s!

WindMaster (Sr. Member)
September 02, 2013, 07:56:22 PM (last edit: September 02, 2013, 08:10:27 PM)  #65

Quote
Please... Those PCIe backplanes cost 2-3x more than a motherboard. This is a large amount of hashing power; otherwise people wouldn't be discussing it.

Please...  How much do you think it really costs to slap together a large GPU farm worth of 4 layer boards with a handful of PLX PCIe bridge chips and PCIe x16 slots with only one lane connected to each?  Handling fanout of REFCLK and escaping the BGA packaged PCIe bridge IC's is about the only thing requiring any skill at all; the rest is just dirt-simple impedance balancing and length matching of the lane data signals (which is certainly not rocket science).

Here, I'll throw everyone a bone.  There are many consumer motherboards currently on the market whose chipset doesn't actually require that all 16 lanes of a PCIe x16 slot be merged to one endpoint.  That means all 16 lanes can be broken out to 16 separate single-lane PCIe devices.  With the correct motherboard, it works just fine to hack up some ribbon cable PCIe risers to split a single x16 slot (preferably one at the edge of the motherboard so you can position GPU's both above and below the motherboard) to 16 GPU's.  Dividing up the REFCLK signal to each of the 16 GPU's is the only thing requiring any active electronics and design skill (and really, not all that much), but given that REFCLK isn't synchronous to the PCIe lane data signalling and runs significantly slower (100MHz), it's not complicated to pull off either.  You can also easily pull it off as a PCIe backplane on a dirt simple 2 layer PCB, REFCLK fanout circuitry and all, with no PCIe bridge IC on there at all, for under $100 in single unit quantities (even at PCB houses in the US).  How's that cost comparison vs individual motherboards per couple GPU's looking now?
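
Sketching that cost comparison in code (a rough sanity check only; every number here is an illustrative assumption except the ~$100 backplane figure above):

Code:
# Back-of-envelope host cost per GPU for the two approaches described above.
# All prices are illustrative assumptions, not quotes.
mobo_rig_cost     = 250.0   # assumed motherboard + CPU + RAM per conventional rig
mobo_gpus_per_rig = 4       # typical GPUs per conventional rig
split_host_cost   = 250.0   # assumed host cost for the 16-way x16-split machine
backplane_cost    = 100.0   # 2-layer backplane figure from the post

per_gpu_mobo  = mobo_rig_cost / mobo_gpus_per_rig
per_gpu_split = (split_host_cost + backplane_cost) / 16

print(f"host cost per GPU, 4-GPU motherboards: ${per_gpu_mobo:.2f}")   # $62.50
print(f"host cost per GPU, 16-way x16 split:   ${per_gpu_split:.2f}")  # $21.88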

So, there you go, the secret to running 16 GPU's per commodity motherboard.  Just in time for GPU mining to have only marginal ROI.  As soon as GPU mining moves beyond the realm of feasibility, you'll see a lot more interesting revelations about GPU mining farms than that.
atomicchaos (Hero Member)
September 02, 2013, 10:38:18 PM  #66

Quote from: WindMaster on September 02, 2013, 07:56:22 PM
Please...  How much do you think it really costs to slap together a large GPU farm worth of 4 layer boards with a handful of PLX PCIe bridge chips and PCIe x16 slots with only one lane connected to each? [...] So, there you go, the secret to running 16 GPU's per commodity motherboard.

You've never run a farm judging by your comments. Let me give you a clue why it's unique for a single location: Heat & Power.

DeathAndTaxes (Donator, Legendary)
September 02, 2013, 10:49:27 PM  #67

He also appears to be clueless about the BIOS and AMD driver limits which make 16 GPUs per system an impossibility.  In theory using a PCIe bridge is possible, but nobody with any capital dumped it into custom GPU boards when they could dump it into custom ASICs and make a fortune (in the early days).
FiiNALiZE (Hero Member)
September 02, 2013, 10:53:50 PM  #68

Provided he uses 7950's and each GPU draws 200 W while hashing at 700 kH/s, he'll need around 2880 of them to get 2 GH/s.

That's about 560,000 watts, or 0.56 MW.

Since 1 W = 1 joule/sec, that's enough power to heat 1 gallon of water from 20 C to boiling (100 C) in roughly 2 seconds.

I know there are other factors, like the amount of open space, etc., but that should give you an idea of how much energy is consumed by that farm.

You'll need to spend some serious dough on cooling if you want to run a farm like that in one location.
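
That arithmetic checks out. A quick sketch, taking the post's 700 kH/s and 200 W per-GPU figures as assumptions and standard water constants:

Code:
# Verify the farm-size and water-heating numbers quoted above.
target_khs = 2_000_000          # 2 GH/s expressed in kH/s
gpu_khs    = 700                # assumed kH/s per 7950
gpu_watts  = 200                # assumed watts per 7950

gpus  = target_khs / gpu_khs    # ~2857 GPUs (the post rounds up to 2880)
watts = gpus * gpu_watts        # ~571 kW, i.e. roughly the 0.56 MW cited

# Time to heat 1 US gallon of water from 20 C to 100 C with that much power:
mass_kg = 3.785                          # 1 US gallon of water is ~3.785 kg
joules  = mass_kg * 4186 * (100 - 20)    # m * c * dT, ~1.27 MJ
print(f"{gpus:.0f} GPUs, {watts/1e6:.2f} MW, {joules/watts:.1f} s per gallon")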


 
WindMaster (Sr. Member)
September 02, 2013, 10:54:01 PM  #69

Quote from: atomicchaos on September 02, 2013, 10:38:18 PM
You've never run a farm judging by your comments. Let me give you a clue why it's unique for a single location: Heat & Power.

You've never operated any sort of data center equipment at all judging by your comments.  We're talking pretty basic level data center engineering here.
WindMaster (Sr. Member)
September 02, 2013, 11:03:21 PM (last edit: September 03, 2013, 06:40:46 AM)  #70

Quote from: DeathAndTaxes on September 02, 2013, 10:49:27 PM
He also appears to be clueless about the BIOS and AMD driver limits which make 16 GPUs per system an impossibility.

Calling an arbitrary software limitation an "impossibility" is probably a bit naive.  I'm assuming we're discussing Linux here, as Windows has no place in a large GPU farm.  The kernel source code is not only readily available, it's documented "well enough" for most programmers to understand and modify it, and the portion of the Radeon drivers that directly interacts with the kernel is supplied as source anyway.  It is not particularly difficult to fire up multiple X servers and instances of the Radeon kernel module with different GPU's controlled by each.

You can even half-ass it with a handful of Xen virtual machines, with specific PCI devices mapped to specific virtual machines, and not have to fool the AMD drivers or touch any code at all.

Here, I'll even save you the trouble of Googling how to accomplish it:
http://wiki.xen.org/wiki/Xen_PCI_Passthrough
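
As a purely hypothetical sketch of scripting that idea (this uses libvirt's Python bindings against its Xen driver rather than the wiki's native xl/pciback workflow; the guest names and PCI addresses are made-up placeholders):

Code:
# Hypothetical: hand one GPU to each of several pre-created Xen guests
# via libvirt PCI hostdev passthrough. Names and addresses are placeholders.
import libvirt

HOSTDEV_XML = """
<hostdev mode='subsystem' type='pci'>
  <source>
    <address domain='0x0000' bus='0x{bus:02x}' slot='0x00' function='0x0'/>
  </source>
</hostdev>
"""

conn = libvirt.open("xen:///system")           # connect to the local Xen host
for i, bus in enumerate(range(0x01, 0x05)):    # pretend GPUs sit on buses 01..04
    guest = conn.lookupByName(f"miner{i}")     # one pre-created guest per GPU
    guest.attachDeviceFlags(HOSTDEV_XML.format(bus=bus),
                            libvirt.VIR_DOMAIN_AFFECT_CONFIG)
conn.close()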

The BIOS doesn't play a role in this.  Nor does it need to assign the PCI memory windows to all the GPU's; the Linux kernel is more than happy to enumerate the PCI buses and assign the windows itself.  Remember, hot-swappability was a design criterion for PCIe.  The BIOS is doing nothing once the Linux kernel fires up; Linux handles everything after that point.

Remember, DeathAndTaxes, just a couple weeks ago you posted that scrypt on GPU's operates entirely with on-die memory in the GPU and never touches external memory, and I called you out on it.  Let's maybe use some restraint before pulling out the "clueless" insult.
Beaflag VonRathburg (Sr. Member)
September 02, 2013, 11:10:08 PM  #71

Quote from: atomicchaos on September 02, 2013, 10:38:18 PM
You've never run a farm judging by your comments. Let me give you a clue why it's unique for a single location: Heat & Power.

Quote from: WindMaster on September 02, 2013, 10:54:01 PM
You've never operated any sort of data center equipment at all judging by your comments.  We're talking pretty basic level data center engineering here.

You just quoted someone who operates a 50,000+ kH/s GPU mining operation. There's a very fine line between being technically knowledgeable and having experience mining. You very obviously possess technical knowledge, but very little of it is applicable to mining. Please stop while you retain that appearance of possessing technical knowledge.

atomicchaos (Hero Member)
September 02, 2013, 11:11:28 PM  #72

Quote from: atomicchaos on September 02, 2013, 10:38:18 PM
You've never run a farm judging by your comments. Let me give you a clue why it's unique for a single location: Heat & Power.

Quote from: WindMaster on September 02, 2013, 10:54:01 PM
You've never operated any sort of data center equipment at all judging by your comments.  We're talking pretty basic level data center engineering here.

I do both, but you are not applying the same concepts; a large farm is not going to have the same resources to put forth as a very large corporate data center. The heat output from this "farm" would be on par with eBay's data center in terms of power usage and cooling. Try again.

WindMaster (Sr. Member)
September 02, 2013, 11:12:47 PM  #73

Quote from: FiiNALiZE on September 02, 2013, 10:53:50 PM
That's about 560,000 watts, or 0.56 MW. [...] You'll need to spend some serious dough on cooling if you want to run a farm like that in one location.

The easiest way to cool it is to run a chilled water glycol loop to a small chiller plant and outside cooling tower, then either liquid-cool the GPU's (my preference) or air-cool the GPU's with water-cooled air handlers in the space.

Here, I'll calculate it for you.  560 kW is about 159 tons of cooling.  Here's a perfectly suitable 175 ton chiller option on eBay:
http://www.ebay.com/itm/2007-175-ton-Carrier-Centrifugal-Chiller-/221275176478?pt=LH_DefaultDomain_0&hash=item3385073e1e
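
The sizing math behind that figure, as a quick sketch (one ton of refrigeration is roughly 3.517 kW of heat removal; the 560 kW load comes from the posts above):

Code:
# Convert the farm's estimated heat load into tons of refrigeration.
heat_kw    = 560                 # heat load estimated earlier in the thread
kw_per_ton = 3.517               # standard definition of a ton of cooling
tons       = heat_kw / kw_per_ton
print(f"{tons:.0f} tons of cooling")   # ~159 tons, hence a 175-ton chiller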

Combine with an appropriately sized cooling tower and either order your GPU's with water blocks to operate directly on the glycol loop, or snag a few large surplus water-cooled air handlers if you insist on air cooling the GPU's.

This is just getting silly if you guys think that amount of heat rejection is unrealistic (or even hard) to accomplish at a single location.  Otherwise we need to change the debate from whether we're looking at a GPU farm transitioning from BTC to LTC to a debate about whether data centers really exist and whether they can be built and cooled.  Or, for that matter, whether the technology exists to cool office buildings.
WindMaster (Sr. Member)
September 02, 2013, 11:16:30 PM  #74

Quote from: Beaflag VonRathburg on September 02, 2013, 11:10:08 PM
You just quoted someone who operates a 50,000+ kH/s GPU mining operation.

I do not consider 50,000 kH/s (on scrypt) to be a large GPU mining operation.
atomicchaos (Hero Member)
September 02, 2013, 11:23:02 PM  #75

Wind- We all know it is possible in concept, and in practice with unlimited funds, but what you seem to be missing is that the amount of money for a facility to deal with this equipment, and the footprint it would need, is very large.

We're not saying it isn't a very large farm; we're saying it's quite unlikely unless it's a major, major entity, which certainly could be the case. Regardless, thank you for your thoughts, and please try to apply your knowledge to the real-world scenarios that factor into a setup like this, if it is a large farm. It is very unusual, and that is why we're discussing it. If you think it's so simple to do cost-effectively, take your thoughts a bit further and apply value to what you speak of, instead of just speaking on an engineering aspect.

Quote from: WindMaster on September 02, 2013, 11:16:30 PM
I do not consider 50,000 kH/s (on scrypt) to be a large GPU mining operation.

I don't either, but it at least shows that a person speaks from experience rather than theory. If you told me you operated a 50,000-100,000 kH/s farm, and you were able to design it as you indicated and make an ROI in less than 8 months, I'd listen much more intently. I think the overall point was that, given the challenges a small 50,000 kH/s farm brings, it's very unlikely someone could bring up a farm 40+ times that size instantly. That type of power consumption would not be under the radar.

FiiNALiZE (Hero Member)
September 02, 2013, 11:38:17 PM  #76

Quote from: atomicchaos on September 02, 2013, 11:23:02 PM
[...] I think the overall point was that, given the challenges a small 50,000 kH/s farm brings, it's very unlikely someone could bring up a farm 40+ times that size instantly. That type of power consumption would not be under the radar.

The power used is equivalent to the average consumption of about 430 households (roughly 1.3 kW each). :)

 
bitspill (Legendary)
September 02, 2013, 11:48:08 PM  #77

Why does it need to be under the radar?
This farm is not running out of some guy's bedroom closet. It is a large operation, and the power would not be unusual for a large office or a data center.

-- I do not run any farms, just speaking common sense. ;)

WindMaster (Sr. Member)
September 03, 2013, 12:00:19 AM (last edit: September 03, 2013, 12:16:25 AM)  #78

Quote from: WindMaster on September 02, 2013, 11:16:30 PM
I do not consider 50,000 kH/s (on scrypt) to be a large GPU mining operation.

Quote from: atomicchaos on September 02, 2013, 11:23:02 PM
I don't either, but it at least shows that a person speaks from experience rather than theory. If you told me you operated a 50,000-100,000 kH/s farm, and you were able to design it as you indicated and make an ROI in less than 8 months, I'd listen much more intently.

You're correct that I have not offered evidence of such, and I will not at this time.  But we'll revisit this question at a future date, when no cryptocurrencies are still feasible to mine with GPU's.  No one is going to believe anything without photos (and even then, photos are routinely disputed on this forum), and that will not occur until GPU mining is dead.  Same reason no one else has posted photos of large professionally-operated GPU farms.  Take it for what it's worth; that's the best I can do for you, aside from the hints I've already dropped in the couple threads on this.


Quote from: atomicchaos on September 02, 2013, 11:23:02 PM
I think the overall point was that, given the challenges a small 50,000 kH/s farm brings, it's very unlikely someone could bring up a farm 40+ times that size instantly.

Ah, but it wasn't built instantly; it was built over a longer period of time for mining BTC.  The transition to LTC and the "hey, everyone check out the hash rate of that user on that pool" stunt did take a couple days, but that's not the length of time it took to build the farm.


Quote from: atomicchaos on September 02, 2013, 11:23:02 PM
That type of power consumption would not be under the radar.

This I'll agree with you on.  But there's no reason to believe it operates, or needs to operate, "under the radar".  If what you mean is that everyone everywhere will be aware that there's 700 kW entering a building somewhere in the world, with a column of hot air exiting an HVAC cooling tower adjacent to it, that part is not likely.  Large industrial customers routinely consume several MW to tens of MW; a single customer under 1 MW is not going to attract any attention at all.
atomicchaos (Hero Member)
September 03, 2013, 12:16:40 AM  #79

I look forward to chatting; I'd love to hear from others on how they implemented things, even after the fact. My expansion will never grow above 200,000 kH/s for GPUs.

I believe the power consumption would more likely be 1 MW, extrapolating from my farm. And by "below the radar", I think the point was that a farm like this would likely have been heard of by someone in the community.

In terms of switching over, though, going from Bitcoin to Litecoin takes a bit of tweaking, but it certainly could be pulled off. I'd question why they just switched, though, as it would have been wiser to switch months ago.

Beaflag VonRathburg (Sr. Member)
September 03, 2013, 12:53:22 AM  #80

Quote from: atomicchaos on September 03, 2013, 12:16:40 AM
[...] In terms of switching over, though, going from Bitcoin to Litecoin takes a bit of tweaking, but it certainly could be pulled off. I'd question why they just switched, though, as it would have been wiser to switch months ago.


That's the thing. If this were a large entity, you would think there would be people employed to manage this kind of setup, or that the couple of investors would operate it themselves. Even if they switched a few machines over per day, it would pay for the labor, and we would see a linear increase in difficulty. Instead, this thing popped out of nowhere, ran for a couple days, and is now gone. There was another user up around 1,000,000 kH/s, but they've been bouncing around all over the place, slowly falling, and are around 600,000 kH/s now. Hopefully, whatever this thing is, it just dies off and goes away.

