the long reply i promised to that tezos article.

it refers only to the article. more on why i don't think tezos can work can be found in a thread of mine from not long ago.

(the article is not quoted in full below; some paragraphs are omitted for brevity)

The meltdown of “The DAO” and, more recently, DDoS attacks on Ethereum have spurred a debate on the wisdom of “Turing completeness” in smart contract languages.

Tezos, a cryptographic ledger and smart contract platform, offers Turing complete smart contracts in a formally specified language (with strong typing and static type checking). We seek to clarify some misconceptions about the relevance of “Turing completeness” in the context of smart contracts.

the article presents, partially and inaccurately, the ideas i published against eth's turing completeness and lack of formal verification (beginning ~1 year before the dao case; i was the first public voice and virtually the only one until then. just google "tauchain turing complete" and see that others too said tau was the only one, at least months before that article). it then claims those ideas were wrong and that turing completeness is the way to go (which is new information to me, i didn't know tezos is turing complete. negative bonus points), while the way tezos offers to maintain security is again the old tau concept of static type checking (in contrast to the new tau, which uses other, non-type-theoretic methods). in addition, i refer below to the (somewhat wrong) use of rice's theorem, which i spoke a lot about (who else did?). "spurred a debate" is far from describing where those ideas came from.

therefore i take this late uninvited opportunity to respond.

Unfortunately, with power comes complexity. Rice’s theorem tells us that, in principle, no general procedure (human or mechanical) can decide for all programs whether they will terminate. In fact, it may not even predict any nontrivial property about their behavior.

this is somewhat inaccurate, and repeated throughout the article. the halting problem was posed and proved undecidable by turing himself. rice's theorem shows that deciding many properties of machines can be reduced to the halting problem (not so hard to see: assume a machine that decides some nontrivial property, feed it a machine constructed from another machine and its input, and you can determine whether the latter machine halts).
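to make the reduction concrete, here is a minimal python sketch (all names are mine, for illustration only; by rice's theorem no real `property_decider` can exist, so the toy "decider" in the instantiation below only works because the constructed machine happens to halt):

```python
def halting_decider(machine, inp, property_decider, witness):
    """Sketch of the reduction behind rice's theorem. ASSUME
    `property_decider` is a total decider for some nontrivial semantic
    property P, and `witness` is a machine known to have P. We build
    `combined`, which has P iff `machine` halts on `inp` -- so
    deciding P would decide the halting problem."""
    def combined(x):
        machine(inp)       # runs forever iff `machine` loops on `inp`
        return witness(x)  # otherwise behave exactly like `witness`
    return property_decider(combined)

# Toy instantiation on a machine that clearly halts; the "decider"
# here is a stand-in that only works because `combined` halts.
halts = halting_decider(
    machine=lambda i: None,                  # halts on every input
    inp=0,
    property_decider=lambda m: m(0) == 42,   # P: "returns 42 on 0"
    witness=lambda x: 42,
)
```

the wiring is the whole theorem: a decider for P would give a decider for halting, contradiction.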

Some languages forego Turing completeness and their programs can be shown always to halt. Such is the case for the Bitcoin script for instance. A folk belief seems to be that the converse of Rice’s theorem is true. Namely, the misconception goes:

“Since the halting problem in Turing complete languages implies that we cannot predict the behavior of programs in general, then, as long as we write our programs in a language that guarantees termination, we’ll be able to predict all the interesting properties we’d like.”

This is not true, not in theory, not in practice. There are languages for which programs are guaranteed to halt for all input, but where you may not decide if any given input will cause the program to reach an error condition. (For the theoretically inclined: take for instance the language consisting of a single program f which checks if its input x is a proof of the inconsistency of ZFC. f always halts, but whether or not it ever returns “true” is undecidable in ZFC)

that's not entirely a misconception. decidability and termination go hand in hand: decidability is the existence of a turing machine that decides the problem in finite time (not to be confused with completeness, where a complete theory can decide every statement from within itself). so if type inference is guaranteed to terminate, then type inference is decidable. it would, however, indeed be a misconception to identify decidability with termination in broader scopes.

In practice, Turing completeness is an idealization. Concretely, computers have a finite amount of memory and will only run for a finite amount of time before being turned off.

turing completeness is not "real", that's right, but the language (alongside the reasoning process) may be just as abstract. more on this below.

but here's another point, to be used later: the fact that machines are limited in real life doesn't help you a single bit when it comes to undecidable problems; it only makes things worse. as an example, take hilbert's 10th problem: given a multivariate polynomial, decide whether it has zeros over the integers. this problem is undecidable, and the fact that our machines are limited doesn't help us solve it at all, it only "makes the situation worse".
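to see what the best we can do looks like, here is a python sketch of a semi-decision procedure (mine, for illustration; i restrict to one variable for brevity, while the actual undecidability result is about multivariate polynomials — the one-variable case is in fact decidable):

```python
from itertools import count

def find_integer_zero(p):
    """Semi-decision procedure: enumerate the integers and return the
    first zero of `p` found. If no zero exists, this loops forever --
    bounded machines don't turn it into a decision procedure, they
    just run out of time or memory sooner."""
    for n in count(0):
        for x in (n, -n):
            if p(x) == 0:
                return x

find_integer_zero(lambda x: x * x - 4)  # returns 2
```

when a zero exists the search terminates; when none does, no resource bound tells you whether to keep waiting.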

Tezos (or Ethereum) smart contracts are bounded in time by a limit set per block and per transaction. In some trivial way, they are not quite Turing-complete. Better, since the number of steps in the execution of a contract is bounded, it is theoretically possible to predict the behavior of a given program for any possible input.

all good, but in what sense can your (or others') type system take this boundedness into account? even if a bound exists, the language typically just ignores it, so when trying to formally verify programs we are practically back to unbounded turing completeness: no information about the bounds ever propagates into the logic, hence nothing promises the existence of a finite proof. recall that rice's theorem does not hold at all for a specific, resource-bounded machine implementation, yet it still has much to say about the machine-independent behavior of a program.

in other words, the mere existence of a bound external to the language does not imply decidability of the language. this point is similar to, but distinct from, the point about hilbert's 10th problem; these are two different issues.

what does the gas model essentially say? look at it the other way around: your contract will be forced to terminate after a known maximum number of steps, but you cannot know a priori how many steps your contract is going to need! that's not avoiding the unsolvable problem, that's making it worse.
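a minimal python sketch of this point (the names and the one-gas-per-step accounting are mine, not ethereum's actual cost model; the collatz iteration stands in for a contract whose step count nobody knows a priori):

```python
def run_with_gas(step, state, gas):
    """Toy model of gas metering: one unit of gas per step. Execution
    is forced to stop after `gas` steps -- but nothing tells the
    caller in advance how many steps a given input will need."""
    used = 0
    while state != 1:
        if used >= gas:
            return None  # out of gas: forced termination, no answer
        state = step(state)
        used += 1
    return used

collatz = lambda n: n // 2 if n % 2 == 0 else 3 * n + 1
run_with_gas(collatz, 27, 50)   # None -- 27 needs 111 steps
run_with_gas(collatz, 27, 200)  # 111
```

the bound guarantees termination, but the question "will this input run out of gas?" is exactly the kind of question the bound does nothing to answer in advance.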

Since parsing an input takes an amount of time at least commensurate with the length of the input, the cap on the number of steps in the execution bounds the maximal size of any input that the contract may read. The following algorithm can thus decide the behavior of any contract:

for (input : all inputs of size ≤ max_size) {
    run program on input for max_steps;
    check property;
}

Of course, this algorithm is astronomically unpractical. If the input size were restricted to a measly kilobyte, there may still not be enough free energy in the universe to carry out the computation.

My point here isn’t to show that we can always prove all the true properties of time bounded smart contracts. Quite the opposite, my point is that the existence of a general algorithm and the existence of a practical algorithm are two very different things.

as to the last statement: a practical algorithm depends on the existence of any algorithm at all, of course, and none can exist unless the language is decidable.
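the quoted "general algorithm" can be written out literally; a toy python version (names mine) makes the blow-up tangible:

```python
def decide_by_enumeration(program, has_property, max_bits):
    """The article's 'general algorithm', written out literally.
    With inputs capped at `max_bits` bits there are only finitely
    many, so brute force 'decides' any property -- in 2**max_bits
    runs. For a one-kilobyte cap that is 2**8192 runs: existing in
    principle is not the same as being runnable in practice."""
    return all(has_property(program(i)) for i in range(2 ** max_bits))

# Toy scale: 2**8 = 256 runs is fine; 2**8192 is not.
decide_by_enumeration(lambda i: i % 3, lambda out: out < 3, 8)  # True
```

(this assumes, as the article does, that every run halts within the step cap.)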

Does such an efficient algorithm exist for Bitcoin script? The answer is no as well!

Consider the following Bitcoin script

scriptPubKey: OP_HASH256 0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef OP_EQUAL

This is a hash puzzle, as described on the bitcoin wiki. Notice that the hash I picked is clearly made up. The script clearly halts for all input, but we might be interested to know if this output is spendable. It all depends if there exists a preimage to the hash I made up, and that is absolutely unclear. We would have to brute force the hash function for billions of years to find out.

Bitcoin script’s lack of Turing completeness is not a magic bullet. It is of no help in deciding a fundamental property of this very simple script.

that would be innocent cheating (and it is partly "fixed" below). efficient and general (as below) algorithms do exist for btc scripts, yes. taking the full semantics of hash/sig into account and reaching a situation where inverting them is needed is completely outside the interest of a script verifier. for all practical purposes, btc scripts can be verified very efficiently and in full generality.
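for concreteness, the quoted hash puzzle can be modeled in a few lines of python (a sketch, assuming OP_HASH256 is double sha-256 as in bitcoin; the made-up target is the one from the article):

```python
import hashlib

def hash_puzzle(script_sig: bytes, target: bytes) -> bool:
    """Model of the quoted script: OP_HASH256 <target> OP_EQUAL.
    The script itself always halts and is trivially verified; whether
    ANY spending input exists is a preimage question about the hash,
    not a question for the script verifier."""
    digest = hashlib.sha256(hashlib.sha256(script_sig).digest()).digest()
    return digest == target

made_up = bytes.fromhex("0123456789abcdef" * 4)
hash_puzzle(b"some guess", made_up)  # False, and brute force is hopeless
```

verifying a spend (run the script on a candidate input) is cheap; deciding spendability in the abstract means inverting the hash, which is precisely what the verifier never needs to do.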

As we’ve said before, Rice’s theorem essentially tells us that there exist programs which do terminate, and do behave as expected, but not provably so.

However, almost all correct programs of interest are going to be essentially provably correct. Moreover, the proof of their correctness will be relatively straightforward. This is especially true in the context of smart contract on cryptographic ledgers which generally implement simple and straightforward business logic (it is of course less true if you’re dealing with, say, computational algebra).

was "the dao" above as "straightforward" as claimed here?

A correctness proof might rely on admitting certain hard to prove conjectures. For instance, we may accept from heuristic arguments that hash functions are indeed hard to invert, and use it as a lemma in our proof, as is generally done in cryptography papers.

But overall, whenever a programmer writes a program, they generally keep a vague outline of a proof of correctness in their head. Sometimes that proof is incomplete or wrong — humans are fallible — but it always guides the writing. The programmer constantly reasons about the behavior of their program, to make sure it does what is intended.

there is a lot of truth in this. most programs work or don't work for "stupid reasons", not because of some deep unknown mathematical mystery. but what you described holds for a single programmer; only then does it all fit into one mind. what if different people write different pieces of code that glue into one piece of software? you cannot then rely on a human knowing the proof. you need an automated (or rather interactive) method, which can exist only in a decidable world.

The efficient frontier, and the new frontier created by automated theorem proving and formal verification

Programming languages exist below an expressiveness / ease to reason about frontier.

Expressive languages make it easy to describe certain desired behavior. While, in principle, all Turing complete languages can express the same type of computations, in practice, it can be very hard to express certain computations if the language is inadequate. A humorous example is the Malbolge language. While Malbolge is Turing complete, it is extremely hard to write programs in this language that do what they are intended to do.

Like expressiveness, ease to reason about is not a binary variable. Turing completeness has important theoretical and practical implications for the general possibility of reasoning about a language, but as we’ve seen earlier, it does not tell the whole story.

Very expressive languages tend to be harder to reason about, while some restricted languages (such as finite state machines) can be very easy to reason about at the cost of expressiveness. There is fortunately a way to push that frontier: formal verification.

being a high-level language has nothing whatsoever to do with being an expressive language. that's a horrible misuse of the term "expressiveness" in this scope. assembly and python have the same expressive power; the former is a low-level language and the latter a high-level one.

and that's not just a terminology issue. you claim that expressiveness trades off against the ability to reason about programs. that's true, but it's true for "real" expressiveness, not for being high- or low-level!

Formal verification is a rising computer science discipline which produces provably correct programs. You might think of it as creating a machine checkable proof that a program is correct, though, oftentimes, the proof itself is a program. Smart contracts have few lines of code, but a lot at stake, making them a perfect candidate for this technique.

In particular, assisted, or even automated, theorem proving lets the programmer specify the properties they want to prove and lets the computer do most of the tedious work of coming up with a formal proof of correctness. This does not affect all languages equally.

While it is true that a provably correct program is still provably correct when compiled down to some assembly, it is much harder to derive proofs from the compiled version. This is true for humans, this is true for automated theorem provers. The compilation step throws away vital information about the structure of the program.

Since Tezos started in the summer of 2014, a core tenet of the project has been that smart-contract languages are a perfect target for formal verification. We wanted a language that would play nicely with theorem provers, but also one that could be easily reasoned about on its own. This led to the following choices:

there are many formal verifiers for C, and excellent ones, like cbmc (which you'll find under a different, slightly less abused buzzword: "model checking") or astrée (which has to do with "abstract interpretation" and "galois connections"). of course they're limited, and they're intended not for others to verify my code but for me to verify my own code. that's a big difference when it comes to a decentralized network. but let's continue further before emphasizing the main point here:

We started with a formal specification of the language, to determine precisely and exactly what the program meant.

just to be clear, "what the program meant" means how to compile/interpret the program, not that it is correct. i can't see any other interpretation of this sentence.

We made the language strongly typed, with algebraic datatypes and static type checking. Types are a powerful way to reason about the behavior of a program. They can ensure for instance that we do not mistakenly add apples and oranges, but they can be pushed further to enforce certain properties of the contract.

We made the language high-level. The contract that participants enter into isn’t some obfuscated assembly, but the language in which the contracts are written. This allows us to introduce useful high level primitives, such as functional maps, and functional sets. Our goal isn’t to build some arbitrary code execution engine, but an engine tailored to the needs of smart-contract writers.

good, this speaks about "high level" and not "expressive" as above.

but being high-level, as above, has nothing to do with any essential ease of reasoning. what it actually means i'll explain in the conclusion.

All these properties facilitate proving facts about the contracts using automated theorem provers such as the Coq proof assistant, but they also make it easier to manually check the correctness of the programs.

weird. in what sense is coq automatic while your approach is manual? coq is a semi-automatic typechecker as well, and it typically supports undecidable theories. and for the parts where coq can be automated, i.e. where type checking goes through without further questions to the user, how is that different from your proposed approach? do you propose to do all type checks manually?

Conclusion

Turing completeness was framed! There are programs for which properties are easy to understand and prove, and some for which they are difficult. There are programming languages which facilitate such proofs, and some which obscure them. It’s about trade-offs, not about some bright line around the halting problem.

to summarize the last quoted paragraphs,

tezos offers to do what everyone already did, while it has been known since the birth of computer science that it's not achievable: to make a turing complete language that is easy to read, so that the human who reads the code (ok, short smart contract code, alright) will fully understand its behavior.

the whole offer boils down to "we made a more readable language".

yes, you automatically prevent the programmer from mixing apples with oranges, but that's older than both our ages combined and present in everyday programming languages. it's true even for assembly to some extent (and there are typed assembly languages).
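for instance, a plain python sketch of the apples-and-oranges point (example mine), which any static checker such as mypy enforces:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Apples:
    count: int

@dataclass(frozen=True)
class Oranges:
    count: int

def add_apples(a: Apples, b: Apples) -> Apples:
    # a static checker (e.g. mypy) rejects mixing the two types
    return Apples(a.count + b.count)

add_apples(Apples(2), Apples(3))     # Apples(count=5)
# add_apples(Apples(2), Oranges(3))  # flagged by the type checker
```

nothing here is specific to smart contracts; it's decades-old everyday typing discipline.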

excuse me, but i have to dismiss the project due to boredom and lack of innovation. it is rooted in some good conceptions mixed with misconceptions, and also mixed with good intentions, i believe.