Bitcoin Forum
Author Topic: True ASIC-proof Attack-Resistant CryptoCurrency Design  (Read 1885 times)
Vorksholk (OP)
Legendary
*
Offline

Activity: 1713
Merit: 1029



View Profile WWW
November 01, 2013, 01:07:26 AM
 #1

Some people in another thread thought this was an interesting idea, so I'm going to expand on it a bit. Before I lose you at the wall of text: the basic idea is that the algorithm(s) hard-coded into cryptocurrencies are also hard-coded into ASIC chips. If the algorithm changed dynamically in a non-predictable fashion, ASICs would be much harder, if not impossible, to design, and any that were built would certainly be orders of magnitude less efficient than their static counterparts.

Cryptocurrency mining is essential: it's how the network continues. Mining creates blocks, which, to simplify, hold chunks of transactions, the inputs and outputs that move money around. A currency is useless if you can't spend it, and you can't spend current cryptos without putting the transaction into a block, which requires someone to be making blocks. The trouble comes when the ability to create blocks is abused. That can be done by a multitude of people, in a variety of ways, for many different reasons. Someone doesn't like a competing currency? A business that deals with monetary transactions doesn't like a system it can't control? A corrupt government wants to destroy free currency? They could execute a 51% attack and rewrite the attacked parts of the chain. It's important to note that, given enough computational power, a knowledgeable attacker could rewrite the blockchain all the way back to the latest block that is hard-coded into the client and accepted by the network. They would release a new chain with more cumulative work than the original, 'forking' off and replacing blocks on the chain.

51% attacks involve a ton of computational power and a ridiculous amount of electricity. However, for a large vested interest, they're certainly not out of the question, no matter the coin. If a large institution really wanted to take down Litecoin, for example, it could develop a high-memory ASIC built for Scrypt(1024,1,1) calculations and mine faster than the rest of the network combined. Sure, it would be expensive. But they also have a lot of money.

Another problem is the ASIC mining scene. Mining was meant to be free, distributed, and open to anyone. With the advent of ASIC devices, however, mining has consolidated into a rich-get-richer game. People have lost fortunes investing in ASICs with no clear delivery dates or ROI. While that is part of a free market, many see it as undesirable, myself included. I have personally invested significant money into ASICs, and for the most part I'm happy with the investment; this isn't some deep-seated hatred of ASIC companies, but rather a general analysis of the negative effects ASICs have had on the blockchain. Unlike consumer-oriented hardware (such as CPUs and GPUs), ASICs are not widely available to consumers, they aren't carefully monitored for safety, and they don't have large, reliable companies behind them. It's rare for someone to run a scam around fabricating and selling their own GPU chips; I haven't seen it yet. Plenty of scammers, however, have latched onto the idea of fake ASIC companies to rip consumers off.

These points aren't even the most important focus, though; they all revolve around mining. Mining matters, but it isn't the whole coin. Although it's rather unlikely given the nature of hashing algorithms, what if a way of reversing SHA256 became possible? Let's step back and take a wider look at what hashing algorithms are. To simplify, imagine a function called mult that accepts two numbers and returns their product. Nothing special, yet even this is, in its simplest form, a hashing algorithm: it loses some data, some understanding. Say I call mult(6, 9) and it returns 54. It's trivial to list the positive-integer inputs that produce 54: (1, 54), (2, 27), (3, 18), and (6, 9). Sure, (6, 9) is in there, but without any additional context you can't tell me for sure which one was the original input. Each of those possibilities is a collision: multiple inputs mapping to one output.
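To make the toy example concrete, here's a minimal Python sketch (purely illustrative, nothing coin-specific) that enumerates the colliding inputs:

Code:
def mult(a, b):
    return a * b

def preimages(product):
    # Every positive-integer pair (a, b), a <= b, with a * b == product.
    return [(a, product // a)
            for a in range(1, int(product ** 0.5) + 1)
            if product % a == 0]

pairs = preimages(54)
print(pairs)                                    # [(1, 54), (2, 27), (3, 18), (6, 9)]
assert all(mult(a, b) == 54 for a, b in pairs)  # every pair collides on 54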

Now let's look at an actual example of how this loss of data provides protection. Hashing algorithms rely on circuit logic at some level; if you want additional details, check out http://m.metamorphosite.com/one-way-hash-encryption-sha1-data-software. At any rate, take the XOR function (exclusive OR): true XOR true = false, true XOR false = true, false XOR true = true, and false XOR false = false. Let's apply this to binary:
    1010010110101101
XOR 0110101100101011
=   1100111010000110

Given 1100111010000110, you could list all the possible input pairs; since each output bit is consistent with exactly two input-bit combinations, there are 2^16 = 65,536 of them. That alone makes the step hard to reverse. Now imagine both input lines were themselves the outputs of other such functions, forming a tree of feeding circuits, and the inputs to those came from still more circuit logic. The search space for brute-force reversal quickly becomes unthinkably large. And even if you did successfully reverse an algorithm like this, you would end up not with the input but with a list of colliding inputs that all give the identical output.
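A quick, purely illustrative sketch that verifies the example above and counts the colliding input pairs:

Code:
# Verify the XOR example, then count its preimages: for each 16-bit
# value of a, the partner is forced to be b = a ^ target, so the output
# has exactly 2**16 = 65,536 possible input pairs.
a = 0b1010010110101101
b = 0b0110101100101011
target = a ^ b
assert target == 0b1100111010000110

pairs = [(x, x ^ target) for x in range(2 ** 16)]
assert all(x ^ y == target for x, y in pairs)
print(len(pairs))   # 65536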

Given the complexity of such an algorithm, it's hard to prove that no method of reversing it exists. If a reversal mechanism were discovered, it would be a big deal, and with billions if not trillions of dollars at stake, if it's possible, someone will likely find it. Whatever your faith (or lack thereof) in modern cryptographic one-way hash functions, when creating a currency it would be nice to build it in such a way that one broken hash function wouldn't be the end of the project.

How to protect against 51% attacks
51% attacks, as explained earlier, rely on the ability to produce a chain with more cumulative work than the rest of the network can. One powerful step toward preventing 51% attacks and further distributing mining/block production has been the idea of Proof-of-Stake, where blocks can still be mined with hashing power, but a large percentage of blocks are also created by those owning large amounts of the currency. Anyone wanting to 51% attack the coin would need a significant stake in it, and would therefore be at an economic disadvantage in attacking. For large institutions, though, it may still make financial sense (in a twisted way) to sink millions, or even billions, into a currency for the ability to destroy it.

Proof of Stake is a strong contender. However, the other side of the 51% issue is a group's ability to obtain ridiculous amounts of computational power. For simplicity, consider a coin with only proof of work (hashing) and no proof of stake, with a total network hash rate of 1 TH/s. Each Radeon 7970 can generate about 500 MH/s, so matching the network takes roughly 2,000 of them. If someone could create an ASIC, obtaining 1 TH/s of computational power becomes entirely plausible: on electrical consumption alone, a Bitcoin ASIC is currently somewhere around 200-400x more efficient than GPUs at the same hash rate. That lets a long-term attack require only a large initial investment. Even if a 1 TH/s GPU farm were somewhat cheaper to build, power consumption would dominate the cost of a sustained attack. We can therefore conclude that ASICs make the network easier to attack.

Many people will tell you that the advent of ASICs makes Bitcoin more secure. This is true, but only because Bitcoin is vulnerable to ASICs in the first place: the network is protected when ASICs are in the hands of normal people, since that makes an ASIC-based attack harder. Bitcoin would be better off if ASICs couldn't be made for it at all. A smaller financial investment would protect the network with the same power, and a huge attack on Bitcoin would require exorbitant amounts of consumer hardware rather than one large initial investment followed by a long, cheap attack. GPUs use a lot of power; we're talking orders of magnitude more, so they cost much more to run over a period of time, making costly attacks even more costly. That exorbitant electricity bill would make a long-term attack infeasible and short-term attacks much more expensive. And even if the attacking party were willing to make their own GPUs to lower cost per unit, they would still consume enormous power running such a big farm, and the added complexity of creating GPUs or high-performance CPUs, even purpose-built ones, would make the cost per unit much higher.

If an ASIC can be created for a coin, then the coin is most secure once ASICs for it are widely available. If ASICs are possible but not yet public, a private group needs much less investment to pull off a successful attack; once ASICs are out in the wild, the attacking group needs much more ASIC hardware and electricity.

So we can conclude that preventing 51% attacks calls for a mix of proof-of-stake block minting and an algorithm, or algorithms, that can't be calculated efficiently by ASICs.

Preventing reversal-attacks
Though unlikely, it is at least conceivable that an algorithm could be reversed, so we have to build a coin assuming that one or two of the algorithms we use might eventually be broken. If a currency uses only one algorithm, that algorithm being reversed would essentially spell doom for it. Patches could be applied and a new version released, but almost all faith in the currency would be lost, and when the stakes are in the billions or trillions of dollars, a network hiccup like that is a HUGE deal and would likely spell the end of public faith in cryptocurrencies.

The obvious solution is simply to use multiple algorithms. However, there are some not-so-evident worries about doing that, the major one being entropy. Entropy is a measure of chaos. It's difficult to explain, but consider generating random strings 16 characters long, drawing from 8 possible characters per position: there are 8^16 possibilities (48 bits of entropy). Hashing algorithms strive to be one-to-one mappings within a certain sample space of entropy, but because of their data-losing nature, collisions must exist. To put this in scale, we have not yet found a single collision for SHA256; don't let that deceive you, though, since mathematically they have to be there. A hash function is therefore considered to lose entropy, outputting less entropy than is put in. To put this in perspective: iterated enough times, trillions upon trillions upon trillions of SHA256 applications would likely funnel any input toward one single output. Since the entropy loss carries from round to round, if we could find the function's exact average entropy loss, we could predict how many iterations of a single input that would take. However, that is neither the point nor the intention here. What I want to establish is simply that running the same algorithm a ridiculous number of times isn't smart. SHA256(SHA256("input")) isn't bad: the entropy loss is minimal and can be ignored. We would only have to worry if, for example, SHA256 were calculated a ridiculous number of times.

New algorithms are touted as having very little entropy reduction, but that's a hard property to prove; the existence of collisions is not surprising, but finding one is. If a new coin nested a set of hashing algorithms a, b, c, d, e, f, g, and h as a(b(c(d(e(f(g(h(input)))))))), the output entropy would be slightly below that of the lowest-entropy algorithm in the chain. The main weakness of such a system, then, is any one of the algorithms losing a significant portion of entropy over the sample area of the input.
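As a rough illustration of the nesting (a Python sketch with stock hashlib algorithms standing in for the hypothetical a through h):

Code:
import hashlib

# Stock algorithms stand in for a..h. Per the reasoning below, the
# weakest function sits innermost and the strongest outermost; the
# chain is applied from the inside out.
LAYERS = ["sha3_512", "sha512", "blake2b", "sha384",
          "sha3_256", "sha256", "sha1", "md5"]   # a (outermost) .. h (innermost)

def nested_hash(data: bytes) -> bytes:
    for algo in reversed(LAYERS):   # apply h first, a last
        data = hashlib.new(algo, data).digest()
    return data

print(nested_hash(b"input").hex())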

While not a huge concern, this entropy reduction would have to be accounted for, if only to verify that collisions don't occur over a sample set of significant size (millions or billions of hashes).

So, if we have 8 nested hashing algorithms, all 8 would have to be broken to attack the network by reversal. For example, if a (the outermost) were reversible, an attacker would theoretically only have to handle b(c(d(e(f(g(h(input))))))). If any of the non-outer-layer algorithms were broken, though, it wouldn't affect network security against reversal at all, since it would still be faster to compute a hash forward than to reverse a potentially-accepted output. You would therefore put your weakest algorithm in the center and your strongest on the outside. Additionally, entropy permitting, you could nest multiple times, so that a through h each run hundreds of times per completed 'hash round'. That makes an attack on function a orders of magnitude less significant, since a would still be computed normally everywhere it appears except once. So nesting multiple algorithms, done right, only strengthens the currency against algorithm reversal, and any entropy losses are minimal and insignificant.

Alright, so you do a bunch of functions. Couldn't an ASIC be programmed to do that?
Certainly! We require one more change to become ASIC-resistant, or even ASIC-proof: dynamic algorithms. Yes, a, b, c, d, e, f, g, and h have to change in a deterministic but unpredictable manner. This doesn't mean the dev changes the code every six months and has other people rewrite the miners; it means the network itself reorganizes the hashing functions. For this to work, the hashing functions would have to be modular to an extent, and when the network retargets difficulty, it would also retarget the hashing algorithm: based on data from the latest retarget block's hash, it would systematically create a new hashing algorithm by rearranging the modular portions of the existing ones.

Say, for example, that algorithm a does a ton of fixed work and then has 32 modular steps, each with a negligible effect on entropy. The last sixteen bits of the retarget block (the block right before the retarget) would determine the order of the steps, in some way that is easy for clients to calculate but impossible to predict before the retarget block is mined. For example, the number of 1s among those bits could pick the first step: if all 16 bits are 1s (so the retarget-worthy hash ends in 1111111111111111), we do step 7 first; with fifteen 1s, step 3 first; and so on. A counting method like this is nice because it wouldn't line up with the rule for choosing the 2nd step, which could instead be based on, say, the number of 1s in the first four and last four bits. If each algorithm had 32 modular components, some usable multiple times, each algorithm would have a huge number of possible orders, and designing an ASIC for every possibility would be infeasible.

Some steps could be high-memory, so that from when a high-memory arrangement is pushed until the next retarget, CPU miners would have an advantage; other steps would be extremely low-memory, and GPUs would have the advantage when those were selected. On average, though, the algorithms would land nicely balanced: some high-memory steps, but not enough to push GPUs out of the game, while still allowing CPUs a bit of room to play catch-up. As a side note, the order could change at the algorithm level as well as at the hash level, so a could change internally, and a could come before b at one retarget and after it at another, even though a is a different-but-similar arrangement of steps each time.
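To make the retarget idea concrete, here's a purely illustrative Python sketch; every name in it is invented for this post, not taken from any real client:

Code:
import hashlib
import random

STEP_COUNT = 32   # number of modular steps in the hypothetical algorithm

def step_order(retarget_block_hash: bytes):
    # Seed a deterministic PRNG from the last 16 bits of the retarget
    # block's hash: every client derives the same permutation, but no
    # one can know it before that block is mined.
    seed = int.from_bytes(retarget_block_hash[-2:], "big")
    rng = random.Random(seed)
    order = list(range(STEP_COUNT))
    rng.shuffle(order)
    return order

example = hashlib.sha256(b"header of the retarget block").digest()
print(step_order(example))

Note that 16 bits of seed only allow 65,536 distinct orders, so a real design would want to draw on more block data, a point that comes up again further down the thread.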

As I understand it, this dynamic scheme would require GPU miners to recreate and recompile a kernel to stay efficient. Programmed correctly, though, that could be automatic and cost only minimal downtime at retarget blocks. Retargeting should happen often enough that an ASIC couldn't be produced within one algorithm's lifetime (two weeks is far more often than necessary at this point, but might be a nice number simply because even attempting an ASIC would be a pipe dream) and rarely enough that GPU miners aren't constantly shutting down to rebuild kernels.

Finally, it's important to note that such a network of evolving hashing algorithms would require significant thought in the algorithm design.
Some of the steps could be tried-and-true algorithms such as SHA256, BLAKE, or Keccak; others could be created specifically to allow modularity by design. The step order would be similar to a salt in function, but different in implementation. To create a secure-but-usable dynamic algorithm system, some algorithms could remain constant but buried deep in the chain and constantly reordered, so that an ASIC couldn't be created to do even a portion of the hashing function: splitting the work, with an ASIC doing part of a hash and a GPU or CPU doing the rest, is completely infeasible due to latency and bandwidth. ASICs are effective because they check the hashes themselves; they aren't shipping billions of candidate hashes back to the computer every second. Sending a trillion hashes per second (1 TH/s) back to the host for checking, at 32 bytes per 256-bit digest, would take roughly 32 TB/s of bandwidth. ASICs would be useless even if they could compute a portion of the hash, especially a portion nested in the middle that required shuttling data from the ASIC to general-computing devices and back multiple times.
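The arithmetic behind that figure, for anyone who wants to check it:

Code:
# Back-of-the-envelope: shipping every candidate digest from an
# external hasher back to the host for checking.
hashes_per_second = 10 ** 12      # 1 TH/s
bytes_per_digest = 32             # one 256-bit hash
print(hashes_per_second * bytes_per_digest / 10 ** 12, "TB/s")   # 32.0 TB/s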

Sorry about the wall of text. This idea takes a while to explain, and I hope the people most likely to be interested will read the whole thing. If you have any additional questions, PM me. If you want to build this, be my guest; I would love to see it implemented. I don't currently have the programming skills required to complete this project, I simply wanted to introduce the idea so that someone with the particular skill set required might pick up the lead.

Spoetnik
Legendary
*
Offline

Activity: 1540
Merit: 1011


FUD Philanthropist™


View Profile
November 01, 2013, 01:20:57 AM
 #2

Interesting.. I got less than halfway, will have to read the rest later, lol.
I think you're saying a system that will change the algo, right?
And yeah, that does sound like a good idea :)

FUD first & ask questions later™
digitalindustry
Hero Member
*****
Offline

Activity: 798
Merit: 1000


‘Try to be nice’


View Profile WWW
November 01, 2013, 01:55:00 AM
 #3

It's an awesome idea:

"(two weeks is far more often than necessary at this point, but might be a nice number simply because even attempting an ASIC would be a pipe dream)"

:D

- It's honestly one of the best ideas I've seen come out of this forum as a whole.

Is Taco in the house? This should be tried. Shake, Balthazar also?

- An idea like this aims to achieve the high and notable objective of bringing a form of market equity back to cryptocurrency.

- Not a pipe-dream, "one for everyone" kind of equity, but the free-market equity that the market's principles were built on.

- And, by extension, supposedly what our nations' societies were built upon: the principle that anyone who is hardworking or investing has a shot, has a chance.

- I really hope something comes from this.
And if it does, please let me name it, ha ha ha.

I really don't want to see "Algoswitchcoin" or "Randomcoin"; that would do it no justice, at least not for your own design.

digitalindustry
Hero Member
*****
Offline

Activity: 798
Merit: 1000


‘Try to be nice’


View Profile WWW
November 01, 2013, 02:15:02 AM
 #4

Also, one other point:

A principle such as the one outlined here would add relevancy to PoS.

I've never been sold on some aspects of PoS because, if one understands that some entities have zero profit motive, then PoS has potential issues whenever currency units can be consolidated to a large degree through the PoW "emission" (to use a word I think Balthazar coined).

So what I'm saying is: the easier it is to gain and consolidate through PoW, the easier and more likely it is that you'll see an issue with the PoS aspect.

The other small issue I had with PoS was the incentive to hoard, but that aside, it's not a big issue if managed.

The point being, this idea of Vork's would effectively allow PoS (if chosen) to live up to its potential.

gatra
Hero Member
*****
Offline

Activity: 583
Merit: 505


CTO @ Flixxo, Riecoin dev


View Profile WWW
November 01, 2013, 02:40:44 AM
 #5

short version: he wants a dynamic algorithm for the PoW so that ASICs are infeasible to make. He proposes having some predefined functions and, at retarget time, deciding which to apply and/or in what order, thus generating a new PoW algorithm that is used until the next retarget.

Is that it?



I say it wouldn't work, because you can have an ASIC that has one circuit for each of the predefined functions and then links them together as needed. You don't need to hardwire every possible combination; ASICs can implement conditionals!

Digression on ASIC design: if you have a hardwired circuit for each of the predefined functions (each of the modules you combine), the naive approach is to run only one at a time, or to run them all in parallel and keep only the result you want, repeating for the next step. That would work very well, but it wastes resources. A smarter design is a sort of superscalar, pipelined architecture in which all of the circuits run at the same time on different steps of the calculation. That would kick the CPUs' and GPUs' asses because, once the pipeline is full, all the big parts of the circuit are doing useful work at the same time: no waste!

The only path I see to "ASIC-hostility" is a memory-intensive algorithm... one can make trade-offs between calculation and memory use, but either way it costs you transistors.


Vorksholk (OP)
Legendary
*
Offline

Activity: 1713
Merit: 1029



View Profile WWW
November 01, 2013, 02:50:21 AM
 #6

Quote from: gatra
I say it wouldn't work, because you can have an ASIC that has one circuit for each of the predefined functions and then links them together as needed. You don't need to hardwire every possible combination; ASICs can implement conditionals! [...]

Interesting point; ASICs certainly can handle conditionals. I think this design would still make ASICs significantly more expensive and less efficient per area of die space. Say, for example, one arrangement needed step 18 done 6 times and step 4 done once; assuming each step was the same size, the circuitry for step 4 would sit idle while step 18 ran its 6 rounds.

Perhaps a better idea would be to have the very circuit logic of each step change? Some of the time it would be something like this (given 8 16-bit chunks, with A through H as separate output registers):
a = first 16 bits, b = second 16, ..., h = last 16
A = XOR(a, b)
B = XOR(b, c)
C = XOR(c, d)
D = XOR(d, e)
E = XOR(e, f)
F = XOR(f, g)
G = XOR(g, h)
H = XOR(h, a)
...followed by some additional steps; I don't feel like writing out an entire modular piece.

Then at the next retarget that step could be different, perhaps swapping some of the XORs for ANDs or ORs (trying to maintain balance...), swapping out some of the operands, etc. We would need more data to draw from than the last block hash, so perhaps the system could combine the hashes of several past blocks so there would be enough granularity to rewire all of the circuit logic. Pseudo-code for the above would then be something like:
a = first 16 bits, b = second 16, ..., h = last 16
A = Logic(something, somethingElse)
B = Logic(something, somethingElse)
... (and so on through H)
At each retarget, the network would plug an operation (AND, XOR, OR) into each Logic slot and choose something and somethingElse from block data.

You brought up a very good point. Think an ASIC could handle the above efficiently? At this point it would almost need dedicated circuits for XOR, OR, and AND with flexible routing between them, which is pretty close to a specialized CPU.
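To make the rewiring concrete, here's a runnable, purely illustrative Python sketch (all names invented for this post) of one mutable step whose operations and operand wiring get reseeded from block data at each retarget:

Code:
import operator
import random

OPS = [operator.and_, operator.or_, operator.xor]   # the Logic candidates

def build_step(retarget_seed: int):
    # Block data (stubbed here as an int seed) picks an operation and
    # two source registers for each of the 8 output registers.
    rng = random.Random(retarget_seed)
    wiring = [(rng.choice(OPS), rng.randrange(8), rng.randrange(8))
              for _ in range(8)]

    def step(chunks):   # chunks: eight 16-bit ints
        return [op(chunks[x], chunks[y]) & 0xFFFF
                for op, x, y in wiring]

    return step

step = build_step(0xC0FFEE)
print(step([0x1234, 0xFFFF, 0x0F0F, 0xAAAA, 0x5555, 0x00FF, 0xFF00, 0xDEAD]))

As noted above, AND and OR are lossy on their own, so a real mix would need balancing to keep the per-step entropy loss negligible.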

gatra
Hero Member
*****
Offline

Activity: 583
Merit: 505


CTO @ Flixxo, Riecoin dev


View Profile WWW
November 01, 2013, 03:06:03 AM
 #7

Quote from: Vorksholk
Perhaps a better idea would be to have the very circuit logic of each step change? [...] At this point it would almost need dedicated circuits for XOR, OR, and AND with flexible routing between them, which is pretty close to a specialized CPU.

I'm not sure I understand this last part, but I think you are just complicating the "glue" logic that links all the modules together. That makes it harder for both the CPU and the ASIC, but the latter would still be efficient.
Also, I didn't worry about entropy because it's not a problem if each of your basic modules is a well-studied algo like SHA-x. But if you want to randomize the logic (using AND/XOR/OR at random) then you have to start being careful about it.


Vorksholk (OP)
Legendary
*
Offline

Activity: 1713
Merit: 1029



View Profile WWW
November 01, 2013, 03:10:40 AM
 #8

Quote from: gatra
I'm not sure I understand this last part, but I think you are just complicating the "glue" logic that links all the modules together. [...] if you want to randomize the logic (using AND/XOR/OR at random) then you have to start being careful about it.

Yes, entropy loss would be something to be aware of for sure. The idea, though, is that a CPU miner could easily regenerate and recompile an optimized hashing function, whereas the ASIC would be reduced to simply switching between canned circuits; each part/module-calculator would face so much granularity per function that it couldn't all be implemented at the circuit level.

Entropy would come into play wherever only small areas could be changed. Although I suppose ASIC circuits could do the known parts and feed the too-granular parts out to CPUs, that would cause quite the bottleneck... Food for thought for sure. This certainly isn't a perfect idea, unfortunately. :(

digitalindustry
Hero Member
*****
Offline

Activity: 798
Merit: 1000


‘Try to be nice’


View Profile WWW
November 01, 2013, 03:59:01 AM
 #9

Nothing is a perfect idea..

But I think it's a great start. The difficulty adjustment and time parameters would also need to be well thought through: imagine a scenario where the algorithm changes from, say, scrypt-jane to SHA256 and is held there for the full "2 weeks". BFL would obviously have shipped thousands of ASICs to happy customers in those two weeks, so imagine something like 1 PH/s hitting the crypto for that two-week period, then a change back to, say, scrypt.

Interesting things would occur.

Perhaps switching among a selected set of memory-hard algorithms? Allowing, of course, for the entropy issues you mentioned.

Why this would still be very relevant is the multiplying effect it would have on the already high cost/return exercise of creating a scrypt ASIC.

It would also somewhat balance the field between CPU and GPU: a person with a system that has, say, two GPUs and a decent CPU might be able to mine for quite a while before anything resembling specialization came about.

smolen
Hero Member
*****
Offline

Activity: 524
Merit: 500


View Profile
November 01, 2013, 04:06:59 PM
 #10

Quote from: Vorksholk
what if a way of reversing SHA256 became possible?
There is a nuance :) For any non-SHA256 coin, reversing the hash function wouldn't be the end, provided the invention were made public. Suppose someone discovered a way to pick the best nonce for the scrypt hash (as used in Litecoin and its clones) AND published the invention. Litecoin would not die; it would become a SHA256 coin :) The hard task would then be not iterating through the nonce in the block header, but iterating through the wide nonce2 area of the coinbase transaction, rebuilding the Merkle tree with double SHA256 (hard), and then picking a suitable nonce with the new scrypt algorithm (easy).

smolen
Hero Member
*****
Offline

Activity: 524
Merit: 500


View Profile
November 01, 2013, 04:33:52 PM
 #11

Regarding the loss of entropy (my math skills are rusty and my English math vocabulary is almost non-existent; I'm pretty sure there's a better term, like "the size of the image", but let's call it entropy): modular exponentiation can be considered a hash function with no entropy loss. We can also count on well-studied hash functions losing little if any entropy; otherwise they wouldn't be collision-resistant. To prevent entropy loss in a homegrown hash, just gamma it with the output of some good hash, e.g. PowHash(header) = HomegrownHash(header) xor SHA3(header).
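A purely illustrative sketch of that construction, with a deliberately silly stand-in for the homegrown hash:

Code:
import hashlib

def homegrown_hash(header: bytes) -> bytes:
    # Deliberately silly stand-in for an experimental 32-byte hash.
    return hashlib.md5(header).digest() * 2

def pow_hash(header: bytes) -> bytes:
    # Per the post's reasoning, XORing with a good hash keeps the
    # result from losing entropy even if the homegrown side is weak.
    good = hashlib.sha3_256(header).digest()
    home = homegrown_hash(header)
    return bytes(x ^ y for x, y in zip(home, good))

print(pow_hash(b"block header").hex())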

smolen
Hero Member
*****
Offline

Activity: 524
Merit: 500


View Profile
November 01, 2013, 04:53:11 PM
 #12

Quote from: gatra
I say it wouldn't work, because you can have an ASIC that has one circuit for each of the predefined functions and then links them together as needed. You don't need to hardwire every possible combination; ASICs can implement conditionals!
Yes, an attacker can implement the hashing cores and add some routing and buffering logic around them.
(Wild idea: someone at Intel or AMD with access to the microcode format could possibly turn an ordinary off-the-shelf CPU into a powerful miner, just as AES support has been added to old processors via microcode update.)

Quote from: gatra
The only path I see to "ASIC-hostility" is a memory-intensive algorithm... one can make trade-offs between calculation and memory use, but either way it costs you transistors.
If I understand correctly, eDRAM is established technology, and almost any amount of RAM can be added to an ASIC. But memory chips are cheap and NRE costs are high, and that destroys (or at least postpones) the ASIC advantage.

fluidjax
Hero Member
*****
Offline

Activity: 750
Merit: 601



View Profile
November 01, 2013, 09:43:08 PM
 #13

Many people can create a SHA-hashing ASIC. The problem with a more complex PoW is that only a very select group of people might be able to create an ASIC for it, and that creates a much higher risk of a 51%-type attack: whoever 'cracks' it first will unleash a massive amount of hashing power on the network.
spartacusrex
Hero Member
*****
Offline

Activity: 718
Merit: 545



View Profile
January 11, 2014, 04:36:06 PM
 #14

At first this seemed an impossible dream, but I've had a rather devious idea.. :D

It is obvious, to me at least, that any piece of code that can be WRITTEN DOWN can ALWAYS be converted to some form of ASIC.

So let's assume that to be the case: if you write a complicated algorithm that re-jigs the hash algorithm used in a PoW system, someone can ALWAYS build that as some form of ASIC.

Now let's attack this problem from a different angle.

Let's say we want the ASIC implementation to be ONLY AS FAST as the CPU implementation, rendering the ASIC useless. Can that be done?

I can think of something that does that: a full-blown computer..

You write an algorithm that writes TOTALLY NEW CODE. And not code that uses a couple of mixer functions like XOR or modular exponentiation.

I mean a complete working program that can ONLY BE RUN on a full-blown CPU chip. You could require the generated code to use a minimum amount of differing logic, so that ONLY a CPU could run the whole brand-new piece of code.

Quote from: smolen
To prevent entropy loss in a homegrown hash, just gamma it with the output of some good hash, e.g. PowHash(header) = HomegrownHash(header) xor SHA3(header).

This sort of thing could be used to ensure that whatever new piece of code the computer came up with still produced a valid hash. You might have to use concatenation or nesting instead of a straight XOR, depending on how the hash is used: concatenation preserves collision resistance, nesting preserves first-preimage resistance, and XOR preserves pseudo-randomness. Or maybe the newly-written code could serve some other function, like determining which hash algo to use in the first place, rather than being the hash algo itself.
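For illustration, here's a minimal sketch of those three combiners with two stock hashes standing in; which property each preserves is as claimed above:

Code:
import hashlib

def concat_combiner(data: bytes) -> bytes:    # preserves collision resistance
    return hashlib.sha256(data).digest() + hashlib.sha3_256(data).digest()

def nested_combiner(data: bytes) -> bytes:    # preserves first-preimage resistance
    return hashlib.sha256(hashlib.sha3_256(data).digest()).digest()

def xor_combiner(data: bytes) -> bytes:       # preserves pseudo-randomness
    a = hashlib.sha256(data).digest()
    b = hashlib.sha3_256(data).digest()
    return bytes(x ^ y for x, y in zip(a, b))

print(concat_combiner(b"input").hex())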

Thus, you would be writing a computer program that writes computer programs, and the programs the computer writes can only be run on a full-blown CPU.

Therefore, when someone does create the ASIC version, which they always can, it's fine: it still needs to run the code on a CPU, so they would need to put a full CPU on the ASIC board.

Which, I think, is just another name for a computer..

This way the ASIC version is just another computer, and no faster than the fastest computers.

As for the implementation of a program that writes working programs complicated enough for this scheme to work.. that would need to be something a little bit special.. 8)

tromp
Legendary
*
Offline

Activity: 977
Merit: 1077


View Profile
January 29, 2014, 05:32:19 PM
 #15

All you need for ASIC resistance is a PoW that is constrained by memory latency and uses a GB or more, with no feasible time-memory trade-off.