Fuzzy (OP)
|
|
February 01, 2013, 09:04:59 PM |
|
Bitcoin was released into the wild in Jan 2009 and was first mined on CPUs, which everyone who learned about Bitcoin already had access to. Then around Sep 2010 the first GPU miner was released. It picked up fast because GPUs were available off any computer store shelf, and most PC enthusiasts already had one in their machine. Sometime around May 2011 the first FPGA miner was demonstrated. FPGA chips were already available, but required assembly to work as Bitcoin miners. Now we have ASICs coming onto the market: purpose-built chips designed for the sole purpose of mining Bitcoin. The big question is, where do you go from here? Or is this the last major leap for Bitcoin hardware?
|
|
|
|
crazyates
Legendary
Offline
Activity: 952
Merit: 1000
|
|
February 01, 2013, 09:24:16 PM |
|
Or is this the last major leap for Bitcoin hardware?
Pretty much. You might see some efficiency improvements as the years go by, but from a performance standpoint, this is pretty much it.
|
|
|
|
DeathAndTaxes
Donator
Legendary
Offline
Activity: 1218
Merit: 1079
Gerald Davis
|
|
February 01, 2013, 10:12:32 PM |
|
The only major jumps left are catching up to the current state of the art (28nm process). After that, any future improvements would be limited to Moore's law.
The Avalon is built on a 110nm process, so there are jumps in performance to be had just from manufacturing the same chip on a smaller process (80nm, 65nm, 40nm, 28nm). Each of these jumps would roughly double the efficiency (MH/J and MH/$). Also, the design of the Avalon is likely not the "end all" in SHA-256 chip design; some additional efficiency can be tweaked out of an improved chip design, improved construction, etc. How much? No idea; let's guess a 50% improvement is possible.
Still, even if you put all of that together, a full-custom, perfectly optimized 28nm miner would likely be ~(110/28)^2 * 150% ≈ "only" 24x more efficient, as a theoretical upper bound.
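The arithmetic checks out; here is a quick sketch of the bound (the 110nm and 28nm figures are from the post above, and the 1.5x design factor is the guessed 50% improvement):

```python
# Rough upper bound on the efficiency gain from a process shrink,
# assuming efficiency scales with the square of the feature-size ratio.
old_node_nm = 110    # Avalon's process node
new_node_nm = 28     # state-of-the-art node at the time
design_factor = 1.5  # the guessed 50% gain from a better chip design

gain = (old_node_nm / new_node_nm) ** 2 * design_factor
print(f"{gain:.0f}x")  # ~23x, i.e. roughly the "24x" upper bound quoted
```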
|
|
|
|
hardcore-fs
|
|
February 01, 2013, 10:23:09 PM |
|
More people talking nonsense....
There are still algorithmic improvements to be had; at some point further 'shortcuts' will be discovered. I've spotted at least one myself... the problem is I'm not a maths guru, so I cannot work it into a formula, but I can see it happening.
|
BTC:1PCTzvkZUFuUF7DA6aMEVjBUUp35wN5JtF
|
|
|
bonker
|
|
February 01, 2013, 10:26:08 PM |
|
Bitcoin was released into the wild in Jan 2009 and was first mined on CPUs, which everyone who learned about Bitcoin already had access to. Then around Sep 2010 the first GPU miner was released. It picked up fast because GPUs were available off any computer store shelf, and most PC enthusiasts already had one in their machine. Sometime around May 2011 the first FPGA miner was demonstrated. FPGA chips were already available, but required assembly to work as Bitcoin miners. Now we have ASICs coming onto the market: purpose-built chips designed for the sole purpose of mining Bitcoin. The big question is, where do you go from here? Or is this the last major leap for Bitcoin hardware?
I reckon the jump is going to be algorithmic. I mean finding mathematical shortcuts to hashing. That's where the action will be.
|
|
|
|
DeathAndTaxes
Donator
Legendary
Offline
Activity: 1218
Merit: 1079
Gerald Davis
|
|
February 01, 2013, 10:29:39 PM |
|
More people talking nonsense....
There are still algorithmic improvements, at some point in time further 'shortcuts' will be discovered. Spotted at least one myself... problem is I'm not a maths guru.. so I cannot figure out a formula, but I can see it happening.
So why didn't these happen on GPUs (which are pretty damn easy to reprogram)? GPU mining algorithm efficiency has all but stalled. 18 months ago people were finding 10% here and 3% there, and that all essentially flatlined almost a year ago. I doubt there is much algorithmic efficiency left. Now, it is possible SHA-256 will be partially compromised, which would allow for the creation of "optimized" hashers that take that cryptographic flaw into account, but it is hard to predict when or if that will happen.
|
|
|
|
crazyates
Legendary
Offline
Activity: 952
Merit: 1000
|
|
February 01, 2013, 10:31:23 PM |
|
More people talking nonsense....
There are still algorithmic improvements, at some point in time further 'shortcuts' will be discovered.
I reckon the jump is going to be algorithmic. I mean finding mathematical shortcuts to hashing. That's where the action will be.
For some reason, the wording of both your posts sounded really weird to me. Either way, you're posting very vague answers to the OP's question. Yes, there will be improvements, but no, there will be no major technological advancements that will produce the same jump as what we've seen from CPU -> GPU and now from GPU -> ASIC.
|
|
|
|
Jorgeminator
Member
Offline
Activity: 91
Merit: 10
|
|
February 01, 2013, 10:34:29 PM |
|
“640K ought to be enough for anybody.” -Bill Gates (1981)
Well you can clearly see how that went.
|
|
|
|
DeathAndTaxes
Donator
Legendary
Offline
Activity: 1218
Merit: 1079
Gerald Davis
|
|
February 01, 2013, 10:44:48 PM |
|
“640K ought to be enough for anybody.” -Bill Gates (1981)
Well you can clearly see how that went.
Which has nothing to do with the topic. An Avalon ASIC is roughly 17x more efficient than the average FPGA (170 MH/J vs 10 MH/J). An Avalon ASIC is roughly 85x more efficient than the average GPU (170 MH/J vs 2 MH/J). An Avalon ASIC is roughly 680x more efficient than the average CPU (170 MH/J vs 0.25 MH/J). Nobody said faster and more efficient ASICs aren't possible, but there is no technology which would allow a 680x increase in efficiency over what the Avalon is capable of. Going to a state-of-the-art, fully custom, optimized 28nm ASIC might allow a 24x increase in performance, however after that miners would be limited to Moore's law. There are no more "shortcuts". When you consider that at one time "good miners" were operating at 0.25 MH/J efficiency and an attacker could "cheat" (using 28nm full-custom ASICs) to achieve a roughly 16,000x "shortcut", the gap has been significantly closed.
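The ratios quoted above follow directly from the per-device efficiency figures (MH/J is MH/s per watt); a quick check:

```python
# Efficiency comparison using the figures from the post (all in MH/J).
avalon = 170.0
others = {"FPGA": 10.0, "GPU": 2.0, "CPU": 0.25}

for name, eff in others.items():
    print(f"Avalon vs {name}: {avalon / eff:.0f}x")
# Avalon vs FPGA: 17x
# Avalon vs GPU: 85x
# Avalon vs CPU: 680x
```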
|
|
|
|
jjiimm_64
Legendary
Offline
Activity: 1876
Merit: 1000
|
|
February 02, 2013, 01:30:52 AM |
|
my 2 bitcents:
in the future who knows.. quantum computing, positronic brains, crystalline entities?
</sarcasm>
|
1jimbitm6hAKTjKX4qurCNQubbnk2YsFw
|
|
|
organofcorti
Donator
Legendary
Offline
Activity: 2058
Merit: 1007
Poor impulse control.
|
|
February 02, 2013, 02:51:14 AM |
|
my 2 bitcents:
in the future science fiction who knows.. quantum computing, positronic brains, crystalline entities?
</sarcasm>
FTFY, and saved me from posting it myself
|
|
|
|
-ck
Legendary
Offline
Activity: 4298
Merit: 1645
Ruu \o/
|
|
February 02, 2013, 08:59:17 AM |
|
More people talking nonsense....
There are still algorithmic improvements, at some point in time further 'shortcuts' will be discovered. Spotted at least one myself... problem is I'm not a maths guru.. so I cannot figure out a formula, but I can see it happening.
So why didn't these happen on GPUs (which are pretty damn easy to reprogram)? GPU mining algorithm efficiency has all but stalled. 18 months ago people were finding 10% here and 3% there, and that all essentially flatlined almost a year ago. I doubt there is much algorithmic efficiency left. Now, it is possible SHA-256 will be partially compromised, which would allow for the creation of "optimized" hashers that take that cryptographic flaw into account, but it is hard to predict when or if that will happen.

D&T is right. I can tell you both Diablo and I, along with numerous others along the way, have spent many hours trying everything to further tweak the algorithms as used by GPUs, and we have been unable to squeeze anything more out of it. I suspect the on-chip algorithm on the ASICs closely resembles these optimised OpenCL kernels as used by GPUs.
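For reference, the computation all of this hardware races to perform is the same: double SHA-256 over an 80-byte block header, with only the nonce changing between attempts. A minimal Python sketch (the all-zero header prefix is a made-up placeholder, not real block data):

```python
import hashlib
import struct

def hash_attempt(header_prefix: bytes, nonce: int) -> bytes:
    """One mining attempt: double SHA-256 of the 80-byte block header."""
    header = header_prefix + struct.pack("<I", nonce)  # nonce fills the last 4 bytes
    return hashlib.sha256(hashlib.sha256(header).digest()).digest()

# Toy 76-byte prefix; a real header holds version, previous-block hash,
# merkle root, timestamp, and difficulty bits. The optimised kernels
# mentioned above exploit, among other tricks, that the first 64 bytes
# are fixed per work unit, so the SHA-256 midstate over them can be
# computed once and reused across all nonces.
digest = hash_attempt(b"\x00" * 76, 12345)
print(digest[::-1].hex())  # block hashes are conventionally shown byte-reversed
```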
|
Developer/maintainer for cgminer, ckpool/ckproxy, and the -ck kernel 2% Fee Solo mining at solo.ckpool.org -ck
|
|
|
Jorgeminator
Member
Offline
Activity: 91
Merit: 10
|
|
February 02, 2013, 10:06:18 AM |
|
“640K ought to be enough for anybody.” -Bill Gates (1981)
Well you can clearly see how that went.
Which has nothing to do with the topic. Nobody said faster and more efficient ASICs aren't possible, but there is no technology which would allow a 680x increase in efficiency over what the Avalon is capable of. Going to a state-of-the-art, fully custom, optimized 28nm ASIC might allow a 24x increase in performance, however after that miners would be limited to Moore's law. There are no more "shortcuts".

I was referring to that quote because you can never be sure about the future, unless you've been there yourself. As others have said, quantum computing will most likely be available at some point in the future.
|
|
|
|
Puppet
Legendary
Offline
Activity: 980
Merit: 1040
|
|
February 02, 2013, 11:27:21 AM |
|
ASICs will be in the same boat as CPUs/GPUs, as they are fundamentally using the same technology. Short of quantum computing, there hasn't been anything even on the horizon that promises a "quantum leap" beyond Moore's law for the past 30 or 40 years now. Keeping up with Moore's law will prove difficult enough by itself. Now, anything is possible, but if something new does come along that offers another order of magnitude increase, it will almost certainly revolutionize a whole lot more than just bitcoin mining.
BTW, I don't expect to see any real improvements over the first generation of ASICs any time soon. In the short/mid term, they won't even keep up with Moore's law, because I believe there will be no financial incentive to invest in more advanced process nodes. We will get a price war first, and once we get to the bottom of that (orders of magnitude cheaper than today), most miners will be so far underwater that I doubt there will be a market for slightly more energy-efficient devices, especially considering the high NRE it would take to develop 28nm or smaller chips.
|
|
|
|
Fuzzy (OP)
|
|
February 02, 2013, 07:49:20 PM |
|
Some really good replies here by some really knowledgeable people. It looks like bitcoin has caught up, technology-wise, to the rest of the industry. Nothing major to come besides the obvious efficiency and manufacturing improvements, not before the next few reward splits anyway.
|
|
|
|
Puppet
Legendary
Offline
Activity: 980
Merit: 1040
|
|
February 02, 2013, 10:50:14 PM |
|
Nothing major to come besides the obvious efficiency and manufacturing improvements, not before the next few reward splits anyway.
Well, I do expect a price war that will have pretty major consequences. Once all the ASIC vendors have sold and delivered their initial runs and difficulty explodes, mining revenue per GH, and thus demand, will dry up unless they cut prices, and they should have very high per-unit margins allowing them to do just that (ASICs cost almost nothing to produce). That causes even higher difficulty, requiring even lower prices for these devices to be marketable, over and over. In that sense the next year or two will see pretty dramatic changes IMO, but it will be with mostly the same chips for sale now. Just for satoshis on the bitcoin, so to speak.
|
|
|
|
qbits
|
|
February 03, 2013, 03:09:34 AM |
|
my 2 bitcents:
in the future who knows.. quantum computing, positronic brains, crystalline entities?
</sarcasm>
Indeed. A sufficiently big quantum computer would be able to find a suitable block hash in arbitrarily short time, thus ending Bitcoin for good. Of course, such a quantum computer would be useful for other things as well (can you spell SSH?)...
|
|
|
|
jjiimm_64
Legendary
Offline
Activity: 1876
Merit: 1000
|
|
February 03, 2013, 03:32:28 AM |
|
indeed. sufficiently big quantum computer would be able to find a suitable block hash in arbitrarily short time thus ending the bitcoin for good.
Not sure if you realize, but the difficulty will be raised to offset any speed a computer can throw at the network.
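That is exactly how the protocol behaves; here is a simplified sketch of the retarget rule, working with a difficulty multiplier rather than the actual 256-bit target arithmetic:

```python
# Every 2016 blocks, Bitcoin rescales difficulty so blocks keep averaging
# ~10 minutes, clamping any single adjustment to a factor of 4.
TARGET_TIMESPAN = 2016 * 600  # two weeks, in seconds

def next_difficulty(current: float, actual_timespan: int) -> float:
    # Clamp the measured timespan, as Bitcoin does, to limit one adjustment to 4x.
    clamped = min(max(actual_timespan, TARGET_TIMESPAN // 4), TARGET_TIMESPAN * 4)
    return current * TARGET_TIMESPAN / clamped

# If network hashrate doubles, 2016 blocks arrive in half the time,
# so difficulty doubles at the next retarget:
print(next_difficulty(1.0, TARGET_TIMESPAN // 2))  # 2.0
```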
|
1jimbitm6hAKTjKX4qurCNQubbnk2YsFw
|
|
|
BTCGOLD
Sr. Member
Offline
Activity: 958
Merit: 256
Betking.io - Best Bitcoin Casino
|
|
February 03, 2013, 04:10:05 AM |
|
There will be a huge improvement!! The European Union just paid €1 billion (1.35 billion USD) to fund research into graphene (http://en.wikipedia.org/wiki/Graphene), including how to build a processor with it. Graphene transistors could run at up to 500 GHz per core! Just imagine a 500 GHz quad-core... yumm!
|
|
|
|
crazyates
Legendary
Offline
Activity: 952
Merit: 1000
|
|
February 03, 2013, 04:54:10 AM |
|
indeed. sufficiently big quantum computer would be able to find a suitable block hash in arbitrarily short time thus ending the bitcoin for good.
Not sure if you realize, the difficulty will be raised to offset any speed a computer can throw at the network..

One of the benefits that can sometimes get overlooked is network security. When the network is measured in PH/s and ASICs are commonplace, it is much harder to attack the network. This was a common fear during the CPU and even GPU days.
|
|
|
|
|