Author Topic: Difficulty post ASIC?  (Read 11556 times)
crazy_rabbit (Legendary; Activity: 1204, Merit: 1001)
July 03, 2012, 05:12:45 PM  #61

Quote:
If you already have GPU mining rigs, I assume you (and most people) will switch to Litecoin. In the past few days it has actually been a bit more profitable to mine LTC and sell it for BTC than to mine BTC directly. And there are also a lot more Litecoins that can be mined. We just have to keep the interest of people like you a bit longer so that more services can be developed.

Quote:
Just curious: how difficult is it to modify an ASIC rig to mine LTC?
I suppose there's always the possibility of a new xyzCoin based on different hashing algorithms, which would screw the ASICs?

Quote:
It is impossible to mine LTC with that ASIC.

You need to make a new ASIC: start the project, invest some millions of dollars to get the chip made, and then you can mine LTC with your new ASIC.

Quote:
It was supposed to be impossible to mine it on a GPU. Look how that turned out; they are hashing away right now.

Quote:
GPUs are programmable devices; custom ASICs are single-purpose. A custom ASIC could be designed to be programmable, but then all you have is an expensive and slow FPGA or CPU kind of thing.

Quote:
So LTC was the one designed for CPU mining only? Since they both use SHA-256, it won't be a surprise if an LTC ASIC maker is able to reuse most of a BTC ASIC design. To be truly effective, it has to make the hashing algorithm prohibitively complex for ASICs.

Quote:
If an ASIC were specifically designed for scrypt (the LTC proof-of-work), it would be orders of magnitude faster than anything else. LTC does not use SHA-256 for its proof-of-work.

Hopefully that Litecoin ASIC is far in the future. I'm more concerned about all the Bitcoin GPUs jumping to the LTC ship in October/November.
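
For readers comparing the two proof-of-work functions quoted above, here is a minimal Python sketch of both hash constructions. Litecoin's commonly cited conventions (the 80-byte block header used as both input and salt; N=1024, r=1, p=1; 32-byte output) are assumed here rather than verified against its source code.

Code:
# Illustrative only: the two proof-of-work hashes under discussion.
import hashlib, os

header = os.urandom(80)  # stand-in for an 80-byte block header

# Bitcoin: double SHA-256 of the header.
btc_pow = hashlib.sha256(hashlib.sha256(header).digest()).digest()

# Litecoin: scrypt of the header (assumed parameters, see above).
ltc_pow = hashlib.scrypt(header, salt=header, n=1024, r=1, p=1, dklen=32)

print(btc_pow.hex())
print(ltc_pow.hex())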

smoothie (Legendary; Activity: 2492, Merit: 1473)
July 03, 2012, 09:14:26 PM  #62

Quote from: crazy_rabbit on July 03, 2012, 05:12:45 PM
Hopefully that Litecoin ASIC is far in the future. I'm more concerned about all the Bitcoin GPUs jumping to the LTC ship in October/November.

The more I read about FPGAs and how much cache Litecoin requires for its computations, the more it seems it will be a while before hardware exists at a realistic cost that makes production viable without spending a ton.

 
Gabi (Legendary; Activity: 1148, Merit: 1008)
July 03, 2012, 09:40:51 PM  #63

Making a Litecoin ASIC is possible; you just need to commit to it, go to a foundry, pay for the masks and everything (millions of dollars), and then start making chips.


AzN1337c0d3r (Full Member; Activity: 238, Merit: 100)
July 04, 2012, 12:49:47 PM  #64

Quote from: Gabi on July 03, 2012, 09:40:51 PM
Making a Litecoin ASIC is possible; you just need to commit to it, go to a foundry, pay for the masks and everything (millions of dollars), and then start making chips.

Why would you go out and make a Litecoin ASIC?

Scrypt (which is used by Litecoin) is dominated by main-memory speed, and modern DRAM is already one of the most cost-effective memory technologies you can buy anyway.
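
To get a feel for how lopsided the per-hash cost is, here is a rough micro-benchmark sketch (same assumed Litecoin-style scrypt parameters as above; absolute numbers vary wildly from machine to machine and prove nothing about ASICs, only about the relative work per hash):

Code:
import hashlib, os, time

header = os.urandom(80)  # stand-in for an 80-byte block header

def rate(fn, seconds=1.0):
    # Count how many calls complete in roughly `seconds`.
    n = 0
    start = time.perf_counter()
    while time.perf_counter() - start < seconds:
        fn()
        n += 1
    return n / seconds

sha = rate(lambda: hashlib.sha256(hashlib.sha256(header).digest()).digest())
scr = rate(lambda: hashlib.scrypt(header, salt=header, n=1024, r=1, p=1, dklen=32))
print(f"double SHA-256:   ~{sha:10,.0f} H/s")
print(f"scrypt(1024,1,1): ~{scr:10,.0f} H/s")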

rjk (Sr. Member; Activity: 448, Merit: 250)
July 04, 2012, 01:53:37 PM  #65
Quote from: AzN1337c0d3r on July 04, 2012, 12:49:47 PM
Why would you go out and make a Litecoin ASIC?

Scrypt (which is used by Litecoin) is dominated by main-memory speed, and modern DRAM is already one of the most cost-effective memory technologies you can buy anyway.
Because you can build a fast cache into a custom ASIC, and it would be far faster than just using commodity DRAM, no matter how you slice it.

AzN1337c0d3r (Full Member; Activity: 238, Merit: 100)
July 05, 2012, 10:00:46 AM  #66

Quote from: rjk on July 04, 2012, 01:53:37 PM
Because you can build a fast cache into a custom ASIC, and it would be far faster than just using commodity DRAM, no matter how you slice it.

Do you understand that a large proportion of the die area in a CPU is already devoted to cache? Furthermore, that cache is pretty much AS FAST as we can make it, ASIC or otherwise.

[die shot of a Xeon E5 omitted]

What's that in the middle of the CPU? Oh, it's the gigantic 20 MB L3 cache of the Xeon E5/i7-39xx series.

Even if you were to produce full wafers of just cache, it wouldn't make sense to mine LTC with them unless you are producing thousands and thousands of wafers.

It's much cheaper to take advantage of economies of scale and buy COTS CPUs to mine LTC.

rjk (Sr. Member; Activity: 448, Merit: 250)
July 05, 2012, 02:03:07 PM  #67

It would perform better with a huge L1 cache, from what I understand. And all those 8 cores carry extra x86 cruft that could be deleted to make room for more cache and dedicated hashers. Once you reach the magic memory mark of however many MB of cache you need, the rest is just processing power. If you are under that magic number (which for LTC is actually lower than in reference scrypt implementations, so it would be easier), then you have to worry about swapping out to slow onboard DRAM, and that's where the performance loss is incurred.
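
For concreteness, scrypt's big working buffer is 128·r·N bytes, so the "magic number" can be computed directly. A small sketch, assuming Litecoin's N=1024, r=1 and the parameter sets suggested in Percival's scrypt paper:

Code:
def scrypt_buffer_bytes(n, r):
    # ROMix stores N blocks of 128*r bytes each.
    return 128 * r * n

for name, n, r in [
    ("Litecoin (N=1024, r=1)",              1024,  1),
    ("scrypt paper, interactive (2^14, 8)", 2**14, 8),
    ("scrypt paper, file enc. (2^20, 8)",   2**20, 8),
]:
    print(f"{name}: {scrypt_buffer_bytes(n, r) / 1024:,.0f} KiB")

Under those assumptions Litecoin needs only 128 KiB per hash in flight, which is why it fits comfortably in the on-die caches being discussed here.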


2112 (Legendary; Activity: 2128, Merit: 1060)
July 05, 2012, 06:20:34 PM  #68

Quote from: rjk on July 05, 2012, 02:03:07 PM
It would perform better with a huge L1 cache, from what I understand.

Using cache memory is just a waste of power and die space, primarily because cache is a combination of CAM (content-addressable memory) and SRAM (static RAM). What one would need is eDRAM (embedded DRAM, http://en.wikipedia.org/wiki/EDRAM).

The other obvious savings are:

1) Paging MMU: a significant saving of power and a huge gain in overclocking headroom. Segmented memory would be just fine, up to 4 GB on x86 with paging disabled.

2) No need for a TLB when paging is disabled.

3) Because scrypt is 100% predictable, all that's really required is a huge pipelined read buffer; the write buffer is much less important.

4) When going off-chip, the pipelined read buffer would need to be combined with a narrower bus to avoid transferring useless data.

5) When using on-chip eDRAM, one can completely dispense with separate refresh circuitry and refresh cycle stealing; scrypt() is guaranteed to keep the dynamic memory regularly refreshed.

I'll say that there's a lot of room for improvement when implementing scrypt() on FPGAs and ASICs in comparison with general-purpose CPUs and GPUs.
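
For readers who haven't looked inside scrypt, here is a toy Python sketch of its ROMix core (after Percival's paper, with a stand-in mixer instead of Salsa20/8). It is not real scrypt; it only illustrates the access pattern under discussion: a strictly sequential fill, followed by reads whose indices are a deterministic function of the running state.

Code:
import hashlib

def mix(x: bytes) -> bytes:
    # Stand-in for the Salsa20/8 block mix.
    return hashlib.sha256(x).digest()

def romix(block: bytes, n: int) -> bytes:
    v = []
    x = block
    for _ in range(n):            # pass 1: n strictly sequential writes
        v.append(x)
        x = mix(x)
    for _ in range(n):            # pass 2: n reads at indices computed
        j = int.from_bytes(x[:4], "little") % n   # from the running state
        x = mix(bytes(a ^ b for a, b in zip(x, v[j])))
    return x

print(romix(b"\x00" * 32, 1024).hex())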

AzN1337c0d3r (Full Member; Activity: 238, Merit: 100)
July 06, 2012, 06:24:42 AM  #69

Quote from: 2112 on July 05, 2012, 06:20:34 PM
I'll say that there's a lot of room for improvement when implementing scrypt() on FPGAs and ASICs in comparison with general-purpose CPUs and GPUs.

All of this is true from a hardware-design standpoint, but not from an economic standpoint. Even if you implement these things, you would not be able to beat Intel's economies of scale in producing CPUs.

Your LTC-specific ASIC might be 10x faster and use 10x less power than a CPU (being extremely generous here). However, unless you are going to make millions of the things, you will never make it anywhere near as cheap as Intel makes a CPU on a MH/$ basis, if you wish to recover your NRE.

lame.duck (Legendary; Activity: 1270, Merit: 1000)
July 06, 2012, 09:23:07 AM  #70

Quote from: AzN1337c0d3r on July 06, 2012, 06:24:42 AM
Your LTC-specific ASIC might be 10x faster and use 10x less power than a CPU (being extremely generous here). However, unless you are going to make millions of the things, you will never make it anywhere near as cheap as Intel makes a CPU on a MH/$ basis, if you wish to recover your NRE.

You are missing some points. Intel sells CPUs for other purposes besides mining, so the prices are determined by other markets. A CPU cache has only a limited number of ports, which limits access speed; having dedicated, smaller, separate memory blocks for each (LTC) hashing unit would give much more performance.

Producing dedicated LTC ASICs could use smaller die sizes, which would result in higher yield per wafer, and one could use much simpler die packaging.

While IBM offers eDRAM up to 10? Mbit per ASIC, I doubt the savings in chip space will compensate for the (much) worse cycle time. There is a reason why on-chip caches are built in SRAM technology (in the past, at least DEC used DRAM technology for the cache on their µVAX?II? chips).
2112 (Legendary; Activity: 2128, Merit: 1060)
July 06, 2012, 02:19:06 PM  #71

Quote from: lame.duck on July 06, 2012, 09:23:07 AM
While IBM offers eDRAM up to 10? Mbit per ASIC, I doubt the savings in chip space will compensate for the (much) worse cycle time.

Maybe yes, maybe no. scrypt() was designed by Colin Percival to intentionally interleave the memory accesses for blocks with Salsa20/8 block mixing. There is even a parameter "r" describing how many Salsas to apply (which ArtForz set to 1).

So ultra-high-bandwidth, ultra-low-latency memory may be overkill for a scrypt() brute-forcer that does several scrypt() computations in parallel. The key to good performance is avoiding register spills to memory when doing the Salsa rounds.

AzN1337c0d3r (Full Member; Activity: 238, Merit: 100)
July 07, 2012, 05:52:59 AM  #72

You also aren't going to compete with Intel's cache speed and size unless you are on a leading-edge process node like 40 nm or 32 nm. Mask costs for one of those are on the order of 2-4 million dollars. Wafer costs: $3000-$5000.

Even if you produce 10,000 so-called LTC-ASIC miners, assuming 300 dies/wafer (~200 mm^2/die), you're still looking at a minimum of $210/die.

This ignores yield issues and packaging costs, and assumes you get a functional mask set on the first try (unlikely).
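
A quick back-of-envelope check of those figures (the inputs are the assumptions stated in the post, not foundry quotes):

Code:
dies = 10_000
dies_per_wafer = 300
wafers = -(-dies // dies_per_wafer)          # 34 wafers, rounded up

low  = (2_000_000 + wafers * 3_000) / dies   # cheap masks, cheap wafers
high = (4_000_000 + wafers * 5_000) / dies   # expensive masks and wafers
print(f"{wafers} wafers -> ${low:.0f} to ${high:.0f} per die, "
      f"before yield and packaging")

With the cheap-end inputs this lands at about $210 per die, matching the figure above; at the expensive end it is roughly $417 per die.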

