1. Builtins aren't really needed except for IO. Everything else can be described by the logic itself.

This really depends on which context you place yourself in. The logic itself is only concerned with describing and inferring relations between symbols in a mechanical, provable and objective way. It is entirely disconnected from the world. To be able to use the logic to process actual knowledge about the world, you need to add semantics to the non-logical symbols via an interpretation. For that purpose, defining the vocabulary and ontologies precisely is of utmost importance for everyone interested in making any practical real-world use of Tau, even if it has no bearing on the way Tau is going to work on a logical level. Sensor and effector built-ins are even more important, because they connect the logic to the state of the world and allow it to query or affect that state.

Let me give an example of what I meant. Roughly speaking, one can think of two ways to process integers: by using the CPU's built-in instructions, or by introducing number theory and letting the calculations follow from its rules. The latter method sounds insane, but some things really do work like that in real life, for example https://github.com/idris-lang/Idris-dev/blob/master/libs/prelude/Prelude/Nat.idr
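To make the "number theory" option concrete, here is a minimal sketch in Python (purely illustrative, not how Tau or Idris are implemented) of naturals defined inductively, with addition derived from the Peano rules rather than delegated to the CPU:

```python
# Inductive naturals, in the spirit of Idris's Prelude.Nat:
# a natural is either Z (zero) or S n (successor of a natural).

class Z:
    """Zero."""
    def __eq__(self, other):
        return isinstance(other, Z)

class S:
    """Successor of a natural number."""
    def __init__(self, pred):
        self.pred = pred
    def __eq__(self, other):
        return isinstance(other, S) and self.pred == other.pred

def plus(m, n):
    """plus Z n = n ; plus (S k) n = S (plus k n) -- addition from the rules."""
    if isinstance(m, Z):
        return n
    return S(plus(m.pred, n))

def to_int(n):
    """Convert back to a machine integer -- the 'CPU builtin' representation."""
    count = 0
    while isinstance(n, S):
        count += 1
        n = n.pred
    return count

two = S(S(Z()))
three = S(S(S(Z())))
print(to_int(plus(two, three)))  # 5
```

Every addition here follows mechanically from the two defining equations; the CPU's add instruction is only used at the very end, to render the result.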

So, builtins come to give the former option. You don't have to do even simple math or string operations in logic unless you need to. You can add integers using the builtin math:add, for example.

Effects, on the other hand, definitely have to have well defined builtins and policies, and cannot be implemented as part of the pure reasoning process.

You could complete the core of Tau without ever worrying about the way it's going to interface with the world, but then what you are going to release is an autistic, solipsistic reasoner that has no practical use other than making and verifying constructive proofs.

I cannot postpone Effects for later. Maybe for Tau as a stand-alone cool toy, but not for the Tau Chain: the client has to be shipped with the genesis hard-coded (as in BTC) and be able to bootstrap from there into downloading and executing all blocks.

We all know that's not the plan, so you will have to specify, develop and document a standard library for Tau that will allow it to interact with the world in a way that is standardized well enough to allow consensus on its semantics and effects. Until this is done you will have 0% adoption, because Tau will be essentially useless. You could wait longer before worrying about that, but then you lose a lot of benefits, such as showing real-world examples of what Tau can do, and having contributors help you develop that layer.

Indeed I put strong emphasis on the points you mentioned, and maybe this is part of the reason I cannot give the final answers yet.

2. Builtins should be quite minimal. The rationale is that a client will have to be able to replay the chain from genesis, so the root chain must contain code that the clients support.

I second that. Built-ins are the weak point in the system, because they need to run as native code (as opposed to Tau logic) that can't be formally tested by the logic. The more builtins there are, the more attack surface there is to introduce backdoors and/or find zero-day exploits. Built-ins should also be entirely orthogonal, meaning that there should be one and only one way to do something, and everybody should be using that way. Everything else should be done in Tau. Actually, if you have a good JIT system that allows Tau code to run at a decent performance level, I would even advise against using built-ins for the DHT: basic network and cryptographic built-ins are sufficient to restore the full DHT functionality intrinsically.

Whenever a built-in is needed for performance, what you could do is let people decide in the bootstrap file whether they want to use the native built-in or the pure-logic version. You could also test the native built-ins against a Tau-level model. After sufficient unit-test runs against the boundary and median values of the input parameters' range of definition, as well as randomized tests, one could decide (in the "belief" sense) with enough confidence that the native built-in and the pure logic are indeed equivalent. That would even allow reimplementing as native code pieces of logic with no side effects but a pretty heavy computational profile (cryptographic operations come to mind).

Computation on chain: what HMC mentioned is with respect to computation that is done on the chain only, like a computation that is shared by everyone trustlessly. An application that runs locally and uses the chain only for certain operations (like a BTC wallet) simply runs on the local machine and does not need to wait for the next block in order to continue local execution.
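The belief-style equivalence testing of a native built-in against its pure-logic model, mentioned above, can be sketched as follows (a Python toy; native_add and pure_add are hypothetical stand-ins, and in Tau the reference would be the logic version rather than Python code):

```python
import random

def native_add(a, b):
    """Stand-in for a fast native builtin like math:add."""
    return a + b

def pure_add(a, b):
    """Stand-in for the pure-logic specification (Peano-style, a >= 0)."""
    return b if a == 0 else pure_add(a - 1, b + 1)

def believe_equivalent(f, g, lo, hi, trials=1000, seed=0):
    """Test boundary values, the median, and random samples of the input
    range; return True if no counterexample is found. This yields belief
    with confidence, not a proof of equivalence."""
    rng = random.Random(seed)
    mid = (lo + hi) // 2
    boundary = [(lo, lo), (lo, hi), (hi, lo), (hi, hi), (mid, mid)]
    samples = [(rng.randint(lo, hi), rng.randint(lo, hi))
               for _ in range(trials)]
    return all(f(a, b) == g(a, b) for a, b in boundary + samples)

print(believe_equivalent(native_add, pure_add, 0, 500))  # True
```

A single counterexample would falsify the belief outright; passing runs only raise confidence, which is exactly the asymmetry the "belief" framing captures.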

Recovering Turing completeness at this scope relies on the notion that Turing completeness can be recovered from tfpl by successive calls to it. Therefore, over time, computation on chain also becomes Turing complete.

This was implied in the question. The context of the question, the whole article, and HMC's answer all assume that a blockchain supporting pseudo-Turing-complete contracts has been implemented over Tau, and that we are discussing how contracts that in Ethereum would need to be preempted by the miner upon exhausting their resources could be made (or proven) to yield by themselves in Tau and resume on the next block. So the question is how you do that. Actually the question is more complex than that, so the best would be to quote it and start explaining from there.

Quote

In the context of the comparison with Ethereum, what I understand HMC to be saying is that the question of the maximum runtime of a contract is decidable, which makes it possible to break the contract down into execution segments and schedule them over multiple blocks.

Correct.
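A minimal sketch (in Python, purely illustrative; all names are hypothetical and this is not Tau's actual execution model) of what such segmented execution could look like: the contract is written as a bounded step function whose per-block runtime is trivially provable, and it yields after a fixed budget and resumes from saved state on the next block.

```python
def sum_to(n):
    """Initial contract state: sum 1..n, one addition per step."""
    return {"i": 1, "acc": 0, "n": n, "done": False}

def step(state):
    """One total step: always terminates, touches nothing outside state."""
    if state["i"] > state["n"]:
        state["done"] = True
    else:
        state["acc"] += state["i"]
        state["i"] += 1
    return state

def run_block(state, budget):
    """Run at most `budget` steps in one block. The contract yields by
    returning; the chain resumes it with the saved state next block."""
    for _ in range(budget):
        if state["done"]:
            break
        step(state)
    return state

state = sum_to(10)
blocks = 0
while not state["done"]:
    state = run_block(state, budget=4)  # 4 steps per block
    blocks += 1
print(state["acc"], blocks)  # 55 3
```

Each call to run_block is total (it can only loop `budget` times), yet iterating it across blocks computes something of unbounded length, which is the sense in which Turing completeness is recovered over time.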

Quote

Hence my question. If the only thing that is decidable for any program (you don't control what contract creators put in a contract) is the fact that the program will stop, this does absolutely nothing to make it anywhere near practical. If the program takes 5 years to stop, we aren't any better off than if it didn't stop at all.

Everything (once expressed in MLTT) is decidable, not only halting but all statements, including the exact runtime complexity. And of course you do control what the contract creators put in the contract: its code is open, and you can query the reasoner with any question you have regarding it. If the reasoner doesn't answer instantly, you may ignore the contract. Conversely, the contract creators can supply a proof for "interesting" assertions.

Quote

So if I understand correctly, what HMC says above would work only for those contracts that ship together with a proof, already written explicitly by the contract developer, establishing in polynomial time that the program will stop, because the developer made the deliberate design decision of having his program do everything in small chopped batches with yielding and continuations all over the place. Is that the spirit? And to ensure that every contract on the chain follows this paradigm, the contracts would need to come with the proof for the transaction to be validated and added to the blockchain? And do you also ship a proof that the proof indeed runs in polynomial time? And so on? :)

The time it takes to verify a proof is given by the size of the proof (since verification is linear in it), so that complexity is easy to predict. Some verifications are even logarithmic in the proof size, like in amiller's Merkle tree.
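As an illustration of sublinear verification, here is a toy Merkle inclusion proof in Python (a generic sketch with hashlib, not amiller's actual construction): checking that a leaf belongs to a tree of n leaves takes only the log2(n) hashes carried by the proof, so the verification cost can be read directly off the proof's size.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    """Build a Merkle root over the hashed leaves (padding odd levels)."""
    level = [h(x) for x in leaves]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])  # duplicate last node on odd levels
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_proof(leaves, index):
    """Sibling hashes from leaf to root: log2(n) entries."""
    level = [h(x) for x in leaves]
    proof = []
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        sibling = index ^ 1
        proof.append((level[sibling], index % 2))  # sibling hash, am-I-right-child
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return proof

def verify(root, leaf, proof):
    """O(len(proof)) == O(log n) hashes: cost is given by the proof size."""
    node = h(leaf)
    for sibling, i_am_right in proof:
        node = h(sibling + node) if i_am_right else h(node + sibling)
    return node == root

leaves = [bytes([i]) for i in range(8)]
root = merkle_root(leaves)
proof = merkle_proof(leaves, 5)
print(len(proof), verify(root, leaves[5], proof))  # 3 True
```

With 8 leaves the proof has only 3 siblings, and doubling the tree adds just one more hash to verify, which is the logarithmic behaviour referred to above.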

The questions of whether to require a proof with every contract, in which cases to require it, a proof of what exactly, what the maximum length allowed is, and so on, are almost all context- (app-) dependent. I say "almost" because such considerations exist on the root chain as well, but this again goes back to Tau's users. Those are the rules of the network, which we do not set; the users set them. We tend to be as minimalistic as possible in influencing the root's rules. At genesis, Tau will indeed be usable only by fans/geeks/evangelists/techies and so on. The first users will shape the root chain, specifically the rules of rulemaking of rulemaking (the rules of creating contexts or changing the root chain). As mentioned several times, Tau can equivalently be described as a decentralized Nomic game. I think that is more reasonable than developers hard-coding some necessarily arbitrary rules. What do you think?