MLTT does not have decidable typechecking in general.
While technically true for general MLTT, it is not a problem in practice in an MLTT TFPL, particularly for tau. If it were generally a problem, languages like agda/idris would not be particularly useful for verified programming.
you mix up decidability with consistency. of course there is a "problem" with agda/idris when the programmer tries to prove his code's correctness (even if it is indeed correct): the process is not always automatic, and must sometimes reject correct code, which is a clear symptom of undecidability. once the programmer has managed to convince the typechecker and obtained a proof of correctness, that proof can be trusted, but the same is true even for programs in turing complete languages. dependent types simply let one express requirements on the code in great generality. proving correctness is a separate task that can sometimes be done without the language being decidable, but of course not always. there are many ways to verify programs, and dependent types are only one method used in real world applications, while tau needs the decidable and automatic one. indeed the model checking literature mentions these concerns (and more) regarding dependent types, and the state of the art in higher order model checking is msol based, which became possible only with Ong's 2006 discovery. simply google "higher order model checking" and see why people are not satisfied with dependent types (hint: mainly due to undecidability, but also because they are quite complex) and why the mainstream is focused on modern methods (hint: msol).
In the (few, largely meta-theoretical) cases where this does significantly complicate proofs in practice, postulates can be used to sidestep the problems/complexities. (This was the approach with respect to the "tau classic" design as well: community-agreed postulate rules within contexts.)
the "few" cases (which are in fact infinitely many) might be enough for malicious players to harm or fool the network by making it unsound and unfair. decidability is needed exactly because not all people speak clear truth all the time. but again, such restructuring (the postulates) is not always possible, is certainly not automatic, and it is an undecidable problem to show that the restructuring preserves the original behavior (just as functional extensionality is undecidable in mltt+tfpl). one cannot maintain a huge database of truth if it requires manual intervention (which is not even guaranteed to always work). but now i'm just repeating ideas i detailed in the blogpost (which you happily ignored).
But it is not the case that everything expressible is decidable.
If you restrict "expressible" to exclude meta-theoretic propositions, partial propositions, and non-productive propositions (again, as agda/idris do) this statement is simply false.
if i restrict "expressible" to "not all expressibles", then sure..
"expressible" is about syntax. not every well-formed statement in agda/idris would pass the typechecker. being "total", for example, is not part of the syntax. deciding which programs are total and which are not is of course an unsolvable problem in general. MLTT is undecidable, and restricting it to a TFPL is yet another undecidable task (as i explained in the blogpost).
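to make the undecidability of totality concrete, here is the standard diagonalization sketch in python. the names `is_total` and `programs` are hypothetical, illustration-only: the point of the argument is precisely that no such sound and complete decider can exist.

```python
def make_diagonal(is_total, programs):
    """Given a claimed totality decider `is_total` and an enumeration
    `programs` of all programs, build a program that differs from
    programs(n) at input n whenever the decider accepts programs(n).
    If the decider were sound and complete, `diagonal` would itself be
    total, hence equal to some programs(k) the decider accepts -- but
    then diagonal(k) = programs(k)(k) + 1 = diagonal(k) + 1, which is
    a contradiction."""
    def diagonal(n):
        f = programs(n)
        if is_total(f):
            return f(n) + 1  # differ from f at its own index
        return 0             # never run a program not certified total
    return diagonal
```

the contradiction shows that no `is_total` can be both sound and complete, which is the same obstruction faced when trying to carve a total fragment out of mltt automatically.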
This situation led me to a conclusion that MLTT was a wrong choice, and another logic should be considered.
OK, let's assume he were right, and MLTT was somehow not suitable - for the sake of answering your actual question of what the problems with the proposed alternative are seen to be.....
you answered nothing regarding the problems i mentioned in the blogpost about MLTT or MLTT+TFPL. both being undecidable raises exactly the same concerns as with Turing complete languages. the fact that it can verify many programs doesn't make it tau-adequate, or even smart-contract-adequate.
The remedy for this situation is to pick a different logic, a "complete" one.
As we (the non-Ohad "team" members) see it, tau (classic) necessitates a "computationally complete" language. We believe this is the case due to the problem statement itself: a fully generalized decentralized network.
tau is required to be decidable. maximum expressiveness is a bonus and can't come at the expense of decidability, just like usability cannot come at the expense of security. that's what tau is really about, and for good reasons. this holds even though msol over lambdas does not restrict expressiveness any more than mltt+tfpl does, assuming an infinite blockchain, as you do in your claim that mltt+tfpl is computationally complete.
In order to be "general" the logic of the network needs to be able to "compute any computable function" in the processing of its rules, as otherwise you restrict the set of possible rules that the network can employ, and lose generality. Because tau also necessitates consistency (I don't think anyone will argue that point) the requirement of a computationally complete language would imply the necessity of a logically *incomplete* language. This is largely the central "dispute" between Ohad and the rest of the "team" at this time.
MLTT+TFPL is not computationally complete. no total language is computationally complete, as it can never interpret itself. even the tau classic design requires a turing complete intervention between blocks. modern tau, on the other hand, is even able to interpret itself; its language is not total, and is not even normalizing (though normalization can be expressed and decided), and is therefore in some aspects far more general than mltt+tfpl (not a strict generalization, of course), yet it is decidable.
Let's go on to see what alternative logic he did choose, though, to see the "lesser point of contention"...
indeed finding the right thing to do is important, yet can never justify doing the wrong thing.
Specifically, certain classes of Monadic Second Order Logic (MSO or MSOL) were found to be complete and consistent since the 1960s, but they did not have satisfactory expressiveness abilities.
The expressiveness of MSO is the specific concern. We do not (even after months of researching and discussion as to if Ohad "might be right") see how the "self defining nomic" that is tau can be expressed in MSO.
aside from ignoring the rest of the paragraph in that blogpost, where i mention a far more expressive msol (as a small example, regular trees can express all context free derivation trees, while even second order recursion trees are context sensitive), i never saw even a small attempt from you to reach it. in fact i never saw you study msol seriously. "there is no msol design for X" means that such a design should be done. you can't expect modern theories to be as developed as obsolete ones. simply stating "it's impossible" is not my way of working, especially when the original solution (mltt) turns out to be inadequate. with turing complete cycles between blocks (as in the classic design), every logic is nomic-adequate, though not securely so without decidability. moreover, even the old and weak regular trees turn turing complete under iteration, as mentioned in the tata book, just like your claim regarding mltt+tfpl.
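as a toy illustration of the regular-trees point (a python sketch, not tied to any existing codebase): the derivation trees of a context-free grammar form a *regular* tree language, recognizable by a finite bottom-up tree automaton. here for the example grammar S -> a S b | eps, with a derivation tree encoded as (nonterminal, children) and terminals as plain strings:

```python
# Rules of the example grammar S -> a S b | eps, as (lhs, child labels).
RULES = {("S", ("a", "S", "b")), ("S", ("eps",))}

def accepts(tree):
    """Bottom-up run of the tree automaton: a node labelled by a
    nonterminal is accepted iff its children's labels match some rule
    and every nonterminal child subtree is itself accepted."""
    label, children = tree
    child_labels = tuple(c if isinstance(c, str) else c[0] for c in children)
    if (label, child_labels) not in RULES:
        return False
    return all(accepts(c) for c in children if not isinstance(c, str))
```

for instance, `accepts(("S", ("a", ("S", ("eps",)), "b")))` checks the derivation tree of the word "ab" against the grammar, fully mechanically.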
Specifically, we believe that such an expression necessitates general algebraic data types,
that'd be a groundbreaking result. can you prove it?
Ohad will be the first to tell you that MSO omits multiplication,
haha no. in fact even the so-restricted regular trees can do quite a lot of arithmetic with add and mul, as can be seen here
https://staff.aist.go.jp/hitoshi.ohsaki/papers/kobayashi-ohsaki-rta08-fullpaper.pdf but what he seems to overlook is that it also omits "general products" which significantly limits the ability to even express something like a blockchain, let alone a tau.
no idea which kind of product you claim is lacking here (that, say, intersection types cannot provide), nor why you're so convinced there's only one way to implement a blockchain.
To mention three advantages: the existence of a direct computational model, the ability to infer programs, and simplicity of use.
The rest of the "team" actually got "luls for days" out of this line. MLTT has a computational model, has a term inference procedure, and is only slightly harder to use than, say, Haskell or OCaml. If these things were not the case then the MLTT programming languages and proof systems simply would not be.
well then much of the contemporary cs literature can provide you plenty of entertainment, as those three are mentioned in many places. the fact that it can sometimes be done (term inference) doesn't mean it's decidable (i.e. that it can always be done). and it is indeed undecidable under mltt(+tfpl). in summary, "almost everything" can "almost always" be done in undecidable languages, even turing complete ones. but that's not what tau is about.
As it is, they not only are, but are the most common and popular form of verified programming today.
the old ways are indeed popular in practice (though i wouldn't say *most* common and popular), but they are only the old ways, and now we have new ways that people invented for very good reasons. as a small quote, i'll paste from the beginning of
http://www-kb.is.s.u-tokyo.ac.jp/~koba/papers/hmc.pdf:
Program verification techniques have been studied extensively, due to the increasing importance of software reliability. There are still limitations in the current verification technology, however. Software model checking [Ball et al. 2001; Ball and Rajamani 2002; Beyer et al. 2007] has been mainly applied to imperative programs with first-order procedures, and applications to higher-order programs with arbitrary recursion have been limited. For higher-order programs, type systems have been recognized as effective techniques for program verification. However, they often require explicit type annotations (as in dependent type systems), or suffer from many false alarms. For example, ML type system [Damas and Milner 1982] allows automated type inference, but rejects many type-safe programs.

This article proposes a novel verification technique for higher-order programs. The most distinguishing feature of our new technique is that it is sound, complete, and fully automatic for the class of programs written in the simply-typed λ-calculus with recursion and finite base types, and for a variety of verification problems, including reachability ("Does a given program reach a program point fail?"), flow analysis ("Does a subterm e of a given program evaluate to a value generated at program point l?"), and resource usage verification ("Does a given program access resources such as files and memory in a valid manner?").
It cannot possibly exist, since automata form an explicit finite decision procedure, while dependent types are not completely decidable.
Again, true of general MLTT but not true of an MLTT TFPL, which omits metatheoretic, partial, and non-productive expressions. Such a language does have a direct automata translation, which is even employed in the "backends" of some MLTT based systems for efficient code generation.
just getting from mltt to mltt+tfpl would require a possibly infinite procedure....
This task is undecidable on MLTT
Actually, it is "semi-decidable"....
undecidable problems of computation are in general still semi-decidable; even Hilbert's 10th problem is semi-decidable, so "semi-decidable" is hardly a consolation.
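for concreteness, a semi-decision procedure is just an unbounded search of the following shape (python sketch; the `max_steps` cut-off is added only so the demo terminates, a true semi-decider has no such bound):

```python
from itertools import count, product

def find_root(p, arity, max_steps=None):
    """Semi-decision procedure in the style of Hilbert's 10th problem:
    enumerate integer tuples with growing bound and halt with a
    certificate once a root of the polynomial `p` is found.  On an
    equation with no solution the search runs forever -- `max_steps`
    truncates it here purely for demonstration."""
    for bound in count(0):
        if max_steps is not None and bound > max_steps:
            return None  # demo cut-off; a real semi-decider never gives up
        for xs in product(range(-bound, bound + 1), repeat=arity):
            if p(*xs) == 0:
                return xs  # positive answer, with a witness
```

a "yes" always arrives with a witness, but there is no bound after which a "no" can be announced; a network that must validate blocks automatically cannot wait forever.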
We can have correct-by-construction programs and contracts, by merely stating what we expect them to do, and commanding the machine to find such a structure that admits our requirements.
Being semi-decidable, an MLTT TFPL can employ this as well. The only difference is that instead of being a strict decision procedure, it will come back with either a program, a proof of impossibility of the program, or a question about a refinement to our specification. After a finite number of such questions, it will come back with either a program or a proof of impossibility. As term inference is an "offline" process in tau usage, this semi-decidability is acceptable.
that's simply not true. it can happen sometimes, but it's impossible for it to happen all the time.
MSO logic is simpler to use for the reasons mentioned above: it is a simpler language, and does not require human intervention to convince that a true statement is true indeed.
I don't see MSO as particularly "simpler," myself. Further, given an "already expressed" proposition and its proof, no human intervention is required to convince an MLTT framework of it, either. These statements seem entirely disconnected, and do not really follow from the prior line of reasoning. Which is "simpler" is subjective and debatable. Also, in *any* type system, something that passes typecheck passes typecheck, so I'm not sure what is really trying to be said on this latter point.
msol is much simpler, as it contains far fewer primitives and its decision procedures are fully automatic. claiming otherwise would be either misleading or a baffling misunderstanding. it might not seem simpler to you, but that's only because you're still a beginner in msol while somehow experienced with mltt. it's not a matter of taste: mltt (not to mention mltt+tfpl) is far more complex than msol over lambdas. it's just a fact.
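a small taste of what "fully automatic" means in the automata world (python sketch, with toy hand-written dfa encodings): mso over finite words defines exactly the regular languages, so whether one mso-definable property entails another reduces to dfa language inclusion, decided with no human-supplied proof:

```python
def dfa_includes(a, b, alphabet):
    """Decide L(a) <= L(b) by exploring the product automaton and
    searching for a reachable state that `a` accepts but `b` rejects.
    Each DFA is a dict with "start", "accept" (set), and "delta"
    (a dict keyed by (state, symbol)); both must be total on `alphabet`."""
    seen, stack = set(), [(a["start"], b["start"])]
    while stack:
        qa, qb = stack.pop()
        if (qa, qb) in seen:
            continue
        seen.add((qa, qb))
        if qa in a["accept"] and qb not in b["accept"]:
            return False  # counterexample state reached: inclusion fails
        for s in alphabet:
            stack.append((a["delta"][qa, s], b["delta"][qb, s]))
    return True  # no counterexample exists anywhere: inclusion holds
```

for instance, "the word has an even number of a's" automatically entails "the word is over {a,b}", while the reverse entailment is automatically refuted, with no lemma or tactic script in sight.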
It also represents a more intuitive logic:
Whether classical or constructivist logic is "more intuitive" is a highly subjective matter. Personally, I find them rather equally intuitive. I've also spoken to people who reject classical outright as non-intuitive, and people who reject intuitionistic outright as non-intuitive.
the vast majority of mathematicians and philosophers find the intuitionistic approach unintuitive. indeed there is a tiny minority who think otherwise, but to picture it as a "matter of taste" would be a misrepresentation.
More formally, intuitionistic logic explicitly negates the law of excluded middle.
Erm, no. It does not have "not (p or (not p))" as an axiom, as this statement would suggest.
it is not an axiom, but it follows from the axioms in a few steps. it's all over the literature....
And this is of course highly unintuitive, and allows the possibility of human mistakes especially over complicated structures.
I'm not sure what this is intended to mean. Although "LEM" is not axiomatic in MLTT, neither is its negation, so it can always be assumed as a lemma for any given proposition.
and this is unintuitive. i never said LEM cannot be worked around (except for proving existence by contradiction). i said it is unintuitive, and thereby more prone to human errors.
On the other hand, MSO is a Classical logic, which means it includes the law of excluded middle: it's simply impossible for anything to be both true and false at the same time.
Being consistent, this is also impossible under MLTT, of course.
it is impossible to prove a statement to be both true and false under mltt, as it is indeed consistent. but proving that a statement is false doesn't amount to a proof that its negation is true, in mltt. in classical logic, things are either true or false, exclusively, just like human intuition works when it comes to formalism, and just as the vast majority of mathematicians in all eras have held.
The middle is completely excluded, while if we add this rule to MLTT, it loses its consistency.
MLTT only loses consistency if LEM is included as axiomatic.
sure. and this should give you some clues regarding the incapability of mltt to ultimately support LEM.
LEM can be freely applied over any proposition, otherwise.
no, not "any", otherwise it could be added as an axiom.
This is generally so well understood as a "basic concept" in the intuitionistic logic that Ohad's apparent misunderstanding of it has led some of us (myself and stoop, in particular) to question some of Ohad's grasp of the fundamentals of constructive intuitionistic logic, in the first place. (I'm particularly baffled as to how he can seemingly grasp Glivenko and the Godel-Gentzen negative translation, but cannot grasp the simple matter of the usage of LEM in MLTT.... it seems to me that to understand the one means to understand the other.....)
the double negation translation shows equi-provability, but my argument regarding LEM was about being intuitive. one could needlessly complicate, say, msol into an intuitionistic form via double negation, but one cannot fix mltt to be classical; at least no one has yet.
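the precise intuitionistic status of LEM, which both sides circle around here, fits in two lines of lean 4 (a minimal sketch): the double negation of any LEM instance is already an intuitionistic theorem, which is why a particular instance can consistently be assumed as a hypothesis even though LEM is not derivable as a global law.

```lean
-- Minimal Lean 4 sketch: intuitionistically, no instance of LEM is
-- refutable, since its double negation is a theorem.
theorem dn_lem (p : Prop) : ¬¬(p ∨ ¬p) :=
  fun h => h (Or.inr (fun hp => h (Or.inl hp)))
```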
IN SUMMARY:
The first part of this argument is not really "new" or unique to tau's situation. The question of "general suitability" of intuitionistic logic goes back to the 1920s. I doubt that we will resolve this debate, but suffice to say that Ohad is the only one from the "team" who believes that MLTT is unsuitable specifically for tau.
classical vs intuitionistic logic was only one practical advantage i mentioned of msol over mltt. it has nothing to do with the core of my arguments.
IMO, the second part of this argument is the much more central matter: Can MSO be used to construct a tau at all. Nobody else from the "team" seems to be even remotely convinced that it could, and everyone (Ohad included) seems to have a strong understanding and agreement that if it can (at least to some degree) then there is a necessary loss of generality in doing so, which means that the original goal of a fully generalized overlay network would not be able to be met by it. (This latter point can be easily seen by the fact that MSO is a strict weakening relative to MLTT - anything provable in MSO is provable in MLTT, but the reverse is not true. This means that there are some rules/topologies which could be enacted on tau classic which could not have parallels on mso tau. We see this as a *big* concern.)
no decidable logic (including mltt+tfpl) has so far been rigorously shown to support a nomic game. you ask for something that you yourself didn't provide, and it requires work which, for your own reasons, you refuse to do. still, this is what i work on, and i have found that one cannot have a sound nomic game under mltt+tfpl. you're welcome to move toward rigor rather than handwaving and baseless claims, and present a design of a sound tau based on mltt+tfpl. i don't see how it can be better than a turing complete one.
in summary, you didn't answer any of the concerns i raised about mltt+tfpl, and you didn't present any reason why msol over lambdas is not good enough for tau. your reply contained so many blatant and subtle mistakes that i apologize if i didn't cover them all in this reply. you're welcome to give a much more professional response to the blogpost. don't expect people to accept any statement just because it contains technical words.