If you replace a network hash for hash, you end up with the same hashrate and 1/10 the ENTIRE NETWORK POWER CONSUMPTION.
Disregarding the normal fluctuation in the value of the coins being mined, I don't see how this conclusion is arrived at.
Nobody interested in these devices is planning to keep running at the same hashrate they get now from a GPU rig or farm. They will want and need more to keep up. Nobody is going to buy one and go, "OK, I get the same hash as my fifteen 1080 Tis at 10% of the electricity, so I'm good, this is it." Once they re-equip en masse, network hash skyrockets, just like it does when ASIC machines hit a network: difficulty goes through the roof, people are under pressure to buy more to keep up, and so on.
Perhaps this is a better way to illustrate:
I have one gpu. I hash 1000 h/s @ 1000w.
I buy one FPGA. I hash 10000 h/s @ 100w.
Am I going to buy just one FPGA then? No. I'm going to buy 10 of the things so I can run 100x the hash at the same power, just like everyone else will. While we're all hashing more for the same power, we're also all simply hashing more, and difficulty goes sky high to compensate.
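To put my own example in code (a throwaway sketch of the same numbers, nothing more):

```python
# One miner replacing a single GPU with as many FPGAs as fit the same power budget.
gpu_hash, gpu_watts = 1_000, 1_000     # 1000 h/s @ 1000 W (my GPU above)
fpga_hash, fpga_watts = 10_000, 100    # 10000 h/s @ 100 W (my FPGA above)

n_fpga = gpu_watts // fpga_watts       # 10 FPGAs fill the old 1000 W budget

print(n_fpga * fpga_watts)             # 1000 -> same power draw as before
print(n_fpga * fpga_hash // gpu_hash)  # 100  -> 100x my old hashrate
# If everyone does this, total network hash explodes and difficulty follows it up.
```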
Would like to know what I'm not understanding here.
Maths is hard...
Let's say a network has 1000 GPU miners, each with 1 Mh/s. Each GPU is like 300 watts. A total of 1000 Mh/s and 300 kW.
Now let's say we have a network of FPGAs using the same algo: 100 FPGAs with 10 Mh/s each. Each FPGA is around 150 watts. A total of 1000 Mh/s and 15 kW.
So, by switching this network over to FPGAs and replacing the hashrate hash for hash, we have decreased the total electrical power consumption by 285 kW (a 95% reduction in power consumption).
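Spelled out as throwaway Python (just the arithmetic above, no new numbers):

```python
# Hash-for-hash replacement: 1000 GPUs swapped for 100 FPGAs at the same total hashrate.
gpus, gpu_mhs, gpu_w = 1_000, 1, 300
fpgas, fpga_mhs, fpga_w = 100, 10, 150

assert gpus * gpu_mhs == fpgas * fpga_mhs   # both networks sit at 1000 Mh/s

gpu_net_kw = gpus * gpu_w / 1_000           # 300 kW
fpga_net_kw = fpgas * fpga_w / 1_000        # 15 kW

print(gpu_net_kw - fpga_net_kw)             # 285.0 kW saved
print(1 - fpga_net_kw / gpu_net_kw)         # 0.95 -> 95% reduction
```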
..
But, what we really want is ASIC resistance, right?
Ok...
Let's say we have a network of 1000 GPU miners, each with 1 Mh/s. Each GPU is like 300 watts. A total of 1000 Mh/s and 300 kW.
Someone goes and spends 2 months and $300,000 developing a 90 nm ASIC that gets a 4x performance improvement over a GPU, plus significant cost reduction at scale. Now we have a device that does 4 Mh/s and consumes, let's say, 100 watts... This is bad, right? We have a secret miner on the network who paid very little to rapidly have an ASIC developed, and they could quickly launch a 51% attack with as few as ~250 of these devices!
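Quick sanity check on that device count (assuming the attacker's hash gets added on top of the honest 1000 Mh/s):

```python
import math

# Honest network sits at 1000 Mh/s; each ASIC does 4 Mh/s.
# To control >50% of the post-attack network, the attacker needs to bring
# slightly more hash than everyone else combined.
honest_mhs = 1_000
asic_mhs = 4

devices = math.floor(honest_mhs / asic_mhs) + 1   # 251 -- the ~250 figure above
attacker_share = devices * asic_mhs / (devices * asic_mhs + honest_mhs)
print(devices, round(attacker_share, 4))          # 251 0.501
```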
Let's assume that I've achieved significant cost reduction and that FPGAs can be bought for the price of a GPU and are available in the same quantities...
We've now replaced our 1000 GPU miners board for board with FPGAs. So we have 1000 FPGAs, each with 10 Mh/s. Each FPGA is around 150 watts. A total of 10,000 Mh/s and 150 kW (still a significant power reduction over GPUs...).
Someone goes and spends 2 months and $300,000 developing a 90 nm ASIC that gets a 4x performance improvement over a GPU, plus significant cost reduction at scale. Now we have a device that does 4 Mh/s and consumes, let's say, 100 watts... This is bad, right? NOPE! This device would not be as profitable as the FPGAs and is zero threat to the coin. It would more than likely operate AT A LOSS, depending on the cost of electricity for those running it. The number of devices required to launch a 51% attack (and the cost associated with it) would be increased by 10x.
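Running the same attack arithmetic against the FPGA network (numbers from above):

```python
import math

# Board-for-board FPGA network: 1000 devices * 10 Mh/s = 10,000 Mh/s at 150 kW.
honest_mhs = 1_000 * 10
network_kw = 1_000 * 150 / 1_000      # 150 kW, vs 300 kW for the GPU network
asic_mhs = 4

devices = math.floor(honest_mhs / asic_mhs) + 1
print(network_kw, devices)            # 150.0 2501 -- ~10x the devices (and cost) of the GPU case
```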
By replacing these GPUs board for board we've made cheap, quickly-developed ASICs unprofitable, we've reduced the power consumption of the entire network by 50%, and we've significantly increased the security of the network.
...
So tell me, which is the better general-purpose solution: GPU or FPGA?