Your idea is useless because there can always be ASICs that are better than generic hardware at specific tasks. Which is why they are called ASICs.
You might be confused about the state of art of certain classes of integrated circuits. I'll try to make this simple, as related to cryptography.
Speeding up the processing of a memory-hard algorithm requires two things: (1) faster memory (clock speed), and (2) more memory (parallelism and minimization of data set movement). Consumer RAM ("Memory" as it were) is nearly already AT the state of the art in terms of speed (DDR3 and soon DDR4). Adding MORE of it simply means adding more sticks of RAM. From a customized machine standpoint, this means adding more of the DDR SDRAM ICs (the constituent chips that make up a PC's RAM modules) and optimizing their data transports.
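To make the memory-hardness point concrete, here is a toy scrypt-like sketch (my own illustration for this thread, not the actual scrypt algorithm): it fills a table that must sit in RAM, then reads it back in a data-dependent order, so throughput is bounded by exactly the two things above, memory speed and memory amount:

```python
import hashlib

def toy_romix(seed: bytes, n: int = 1 << 10) -> bytes:
    """Toy scrypt-like memory-hard mix (illustrative only).

    Phase 1 fills a table with iterated hashes (needs O(n) memory);
    phase 2 reads it in a data-dependent order, so a device that
    skimps on memory must recompute entries and loses speed.
    """
    # Phase 1: sequentially fill the lookup table.
    v = []
    x = hashlib.sha256(seed).digest()
    for _ in range(n):
        v.append(x)
        x = hashlib.sha256(x).digest()
    # Phase 2: n data-dependent reads; the index depends on the
    # current state, so the reads cannot be predicted in advance.
    for _ in range(n):
        j = int.from_bytes(x[:4], "big") % n
        x = hashlib.sha256(x + v[j]).digest()
    return x
```

The unpredictable reads in phase 2 are why faster/bigger RAM, rather than a faster arithmetic unit, is the bottleneck.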
The argument that I recall the Scrypt paper making (and that I see the Momentum paper makes [2nd paragraph of introduction]) is that it becomes economically infeasible to design a system that outperforms a standard PC by such a margin that makes it worthwhile to develop said system. In other words, it costs too much to design a specialized system for a memory-hard algorithm when desktop PCs typically already perform really well with readily-available, relatively cheap hardware.
Q. Is someone going to develop an ASIC to handle the specific functions that the CPU in a desktop PC handles? Are they going to be able to do it faster and cheaper than Intel's latest tech?
A. Probably not; developing for the latest transistor size would be difficult and immensely costly.
Q. Is someone going to develop a memory ASIC that performs better than consumer RAM?
A. Probably not; consumer RAM is already near the state of the art.
Q. If someone DID actually build the above two ASICs and also built an efficient platform around them on such a scale that the (alt-)coin mining market would actually support, would it be economically feasible (that is, would you get at least 100% ROI)?
A. Most certainly not. The licensing alone would kill any hope of feasibility.
I hope this makes sense. If I have misspoken or misinformed anywhere in the above, please do not hesitate to correct me.
The argument that I don't see made often enough is when economic feasibility isn't the goal, what happens? For instance, if a government wanted to build a system to take over and crumble a cryptocurrency network and [fiat] money was no object -- then yes, they could probably develop such a system. But any economist who knows about the engineering involved (quite a rare subset I would imagine!) would just tell this government to buy as much consumer PC hardware as they could rather than develop new tech. "Why re-invent the wheel?"
p.s. BombaUcigasa, you're coming off very trollish. You ask good questions, but try to lighten up a bit on your accusatory tone and people might take you more seriously. Thanks.
I agree with you that a desktop PC (or let's say a high-end desktop PC, not a cheap all-in-one embedded model) is pretty efficient in terms of memory and computing power for the price point when compared to an ASIC.
However, I do not agree with your point that an ASIC is infeasible:
- You don't need a 7.1 sound chip, PCI connectors, SATA controllers, fancy BIOS, USB ports, half the northbridge architecture, any kind of storage medium, and many other things in your miner
- You are not restricted to a specific AT form factor, specification, or rule; you can shape the case as thermal needs require, and you can make smaller and cheaper motherboards
- You don't need super fancy EPUs and VRMs that cope with variable workload modes and power-efficiency modes
- You don't need 80% of the CPU's functions to perform a single hash algorithm: things like virtualization, multimedia processing, and the graphics controller (http://images.bit-tech.net/content_images/2012/04/intel-core-i7-3770k-review/ivb-5w.jpg, http://wccftech.com/images/reviews/hardware/Processor/Intel-Core-i7-975-Extreme-Edition-Processor-Review/Core-Design-Areas.jpg)
- Even if you use a PCI board as the implementation, you don't need everything from a GPU; you can discard 40% of the chip's surface (http://www.ixbt.com/video3/images/titan/diag_smx.png)
- The speed-to-storage trade-off can be adjusted in any machine, so that it uses less RAM and a faster processor, or more RAM and a slower processor, for the same hashrate
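That last trade-off can be sketched directly (a toy hash-chain example of my own, not any particular miner's code): store only every k-th element of a hash chain and recompute the gaps, trading RAM for extra hashing while getting the same result:

```python
import hashlib

def _h(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

def build_table(seed: bytes, n: int, k: int = 1) -> list[bytes]:
    """Store every k-th element of an n-step hash chain: n/k memory."""
    table, x = [], _h(seed)
    for i in range(n):
        if i % k == 0:
            table.append(x)
        x = _h(x)
    return table

def lookup(table: list[bytes], i: int, k: int = 1) -> bytes:
    """Fetch chain element i; with checkpoint spacing k we recompute
    up to k-1 hashes on the fly -- less RAM, more CPU, same answer."""
    x = table[i // k]
    for _ in range(i % k):
        x = _h(x)
    return x
```

With k=1 every element is stored (fast, memory-heavy); with k=8 the table is 8x smaller but each lookup may cost up to 7 extra hashes. An ASIC designer gets to pick whichever point on that curve their silicon budget favors.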
So if you build an ASIC for Scrypt, it will be cheaper to make per unit, use less power, and produce more hashes than a generically built PC. Sure, it will serve just this one purpose, but it will do it better in the long term. If owning such a miner for two years produces sufficient return to cover the investment, then it will be built. KnC miners take about a month to recoup now, don't they?
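The recoup arithmetic here is simple. With entirely made-up numbers (none of these figures come from real hardware), break-even time is just capital cost divided by daily profit:

```python
def breakeven_days(capex: float, daily_revenue: float,
                   power_watts: float, price_per_kwh: float) -> float:
    """Days until the hardware cost is recovered (hypothetical model).

    Ignores difficulty growth and coin-price swings, which in practice
    dominate whether a miner ever reaches 100% ROI.
    """
    daily_power_cost = power_watts / 1000 * 24 * price_per_kwh
    daily_profit = daily_revenue - daily_power_cost
    if daily_profit <= 0:
        raise ValueError("mining at a loss; never breaks even")
    return capex / daily_profit

# Example with invented figures: a $5000 miner earning $200/day,
# drawing 600 W at $0.10/kWh, recoups in roughly a month.
days = breakeven_days(5000, 200, 600, 0.10)
```

Difficulty rises as more hardware joins the network, so the real curve is far less forgiving than this static estimate.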
Most miners are not stupid; they make logical, calculated, evidence-based decisions. They buy new hardware when they see an opportunity, and stop or sell it when it runs at a loss. Just as Bitcoin ASICs are removing GPUs from the network, a memory-hard algorithm will see generic desktops pushed out of its network in favor of headless, dedicated, optimized, low-energy ASIC miners.
Q. Is someone going to develop an ASIC to handle the specific functions that the CPU in a desktop PC handles? Are they going to be able to do it faster and cheaper than Intel's latest tech?
A. Yes. KnC, ASICminer and others managed to make dedicated chips that can hash bitcoin faster and cheaper than ATI's chips.
Q. Is someone going to develop a memory ASIC that performs better than consumer RAM?
A. Yes. GPUs already use memory chips and architectures that offer more than double the speed of consumer motherboard RAM.
Q. If someone DID actually build the above two ASICs and also built an efficient platform around them on such a scale that the (alt-)coin mining market would actually support, would it be economically feasible (that is, would you get at least 100% ROI)?
A. Yes, it would be feasible if it returned at least 100% ROI.
The question should be: can a cryptographic blockchain hash algorithm be created that runs with maximum efficiency exclusively on common desktops?
The answer is: No.
The question is then: Why are we discussing this new proof of work?
I'm not trolling; I'm being as sincere as possible, even if I come across as an asshole. Deceiving people and making fun of them is not my style.