@HMC, any further thoughts from you?
I had written up a response pretty quickly, but none of us were particularly thrilled with its tone or length. We deliberated for a while (mostly stoop and I) over whether and how to better structure it, and whether we should even continue to engage this thread at all.
Here is where we've ended up.
Note: this reply includes comments from both myself and stoopkid. I'll clearly distinguish stoop's responses from my own. (He has reviewed this post and given his nod of approval to its content.)
you mix decidability with consistency.
Not at all, and I stand by my statement. If agda/coq/idris et al had the particular properties that you claim they do, then there would be cases where you could trivially send them off into an infinite loop in type-check/termination-check. Developers in these languages would often be left wondering if their compiler was ever going to finish, or if it was stuck. This would not be very practical, indeed. Luckily, this is
nonsense and simply isn't the case, as they all employ decidable, intensional type theory. (Ofc there are also ITT TFPL systems which allow extensionality (or even arbitrary rewrite) and are thus undecidable (even under the most conservative approximation), such as NuPRL or MetaMath's ITT.)
If you have an example of a known-terminating program which sends the MLTT TFPL typechecker into an infinite loop despite the termination check, as you describe (without using "type providers" or similar, anyway), then you should probably report it to the language developers as a bug; I'm sure they'd like to hear about it. (Such bugs *do* get reported from time to time. For example, the only currently known case in agda looks to be
https://github.com/agda/agda/issues/1270 but these are *bugs* and treated as such, just as an inconsistency would be, meaning they are always rectified in the implementations. To date no one has broken the MLTT TFPL *theory* itself, thus rendering all implementations "loopable.")
(I'm still not entirely sure what it is that you're "missing" here about practicality of MLTT TFPL, but the best I can *suspect* is that it might be the fact that predicates can be internally proven decidable, and one can work solely in such decidable predicates when reasoning about any known terminating algorithm.)
proving correctness is another task that sometimes can be done without the language being decidable, but of course not always.
I've asked you many times for any example of any proven-terminating algorithm which cannot possibly be expressed and checked as such in agda/coq/etc. You've been unable to produce one as of yet. Until you can, I have to consider your claim of "of course not always" as
nonsense.
see why people are not satisfied from dependent types (hint: mainly due to undecidability but also from being quite complex) and the mainstream is focused on modern methods (hint: msol).
More
nonsense. Nearly every "major" theorem prover in practical use today (Mizar being the notable exception, but then again it is arguably inconsistent by design) is based on dependent types, ITT, and MLTT specifically. Outside of academic study, MSOL is, afaik, currently applied only in a few cases of bounded model checking, and virtually always in a hybrid solver. (Meaning it is almost never used "alone" for general verification, as it would be of too little practical value. Every solver I'm familiar with which uses MSO combines it with at *least* an SMT solver for integer arith.)
Further, nearly every contemporary type system being developed or added to existing languages for practical use is based on dependent refinements. I don't see anyone adding MSO type systems to Scala or Rust, or working to build an "MSO Haskell" or "MSO Ocaml" or "MSO Java/C#".... Are you aware of some such efforts that I'm not finding? (There are such contemporary efforts to add dependent types to basically every popular language...)
the "few" cases (which are in fact infinitely many) might be enough for malicious players to harm or fool the network by making it unsound and unfair.
Nonsense. The only way that this could happen is by a context choosing to adopt an unsound postulate, which would be akin to a context intentionally adopting rules for "stalling" their own chain, or going mao, or... Yes, there are a lot of ways that a context can *choose* to harm/destroy itself. This will remain true regardless of the logic employed. It is not a sensible argument against MLTT for tau.
but again such restructure (the postulates) is not always possible
Epic nonsense. A postulate is NOT a restructuring of a proof; it is an intentional, partial omission of one. Further, a postulate may assert anything at all, even a direct inconsistency. There is, of course, no proof which cannot be immediately closed by assuming a postulate.
and is undecidable problem to show that restructuring maintains the original behavior (as functional extensionality is undecidable on mltt+tfpl).
Partly nonsense. HoTT/univalence does give function extensionality in MLTT, without sacrificing decidability of the typecheck. However, this is beside the point entirely, as postulates (and other inference-affecting annotations) are not a "restructuring" of a proof at all; they cannot affect the behavior of a program - only its typecheck. (In other words, extensionality has nothing to do with it.) They can explicitly *omit* some behavior of a program, but generally do not need to, as they are erasable.
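To make that concrete, here is a minimal Agda sketch (untested, assuming the standard library; the postulated name is mine, purely for illustration):

```agda
open import Data.Nat using (ℕ; _+_)
open import Relation.Binary.PropositionalEquality using (_≡_)

-- A postulate simply assumes a statement outright; no proof or program is
-- "restructured", and nothing about run-time behavior changes (the assumed
-- proof term is erasable).
postulate
  +-comm-assumed : (m n : ℕ) → m + n ≡ n + m

-- Any obligation matching it closes immediately:
uses-it : (m n : ℕ) → m + n ≡ n + m
uses-it = +-comm-assumed

-- A postulate can even assert a direct inconsistency, e.g.
--   postulate oops : ⊥
-- which is exactly why adopting unsound postulates is a *choice* a context
-- makes, not a defect of the logic itself.
```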
if i restrict "expressible" to "not all expressibles", then sure..
"expressible" is about syntax. not every well-typed statement well-formed in agda/idris would pass typecheck. being "total" is not part of the syntax, for example. deciding which program is total and which isn't is of course an unsolvable problem in general. MLTT is undecidable, and restricting it to TFPL would be another undecidable task (as i explained in the blogpost).
Nonsense upon
nonsense. Such an expression restriction is precisely what agda/idris/coq/etc do, and they are entirely decidable TFPLs that remain computationally complete. Totality is not undecidable; in fact it is trivial. (It is also syntactic, just "case alignment," but that has little to do with anything here.) I think you must be confusing totality and termination/productivity here. What they "punt" on is a case of termination/productivity check, not of totality. Productivity check is undecidable in general. However, by restricting the syntax to reject recursive predicates which cannot be certified terminating/productive by size-change termination arguments, it is kept decidable. (Again, exactly what agda/idris/coq/etc do. For the specific termination check used in Agda, for example, see Abel's "A Predicative Analysis of Structural Recursion." For an even more formal (and also somewhat more efficient, PSPACE-complete) treatment, which agda may or may not end up switching to, see Hyvernat's "The Size-Change Termination Principle for Constructor Based Languages.")
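For readers who want to see what this restriction looks like in practice, here is a minimal Agda sketch (untested, assuming the standard library): a definition whose recursive calls are on structurally smaller arguments is accepted automatically.

```agda
open import Data.Nat using (ℕ; zero; suc)

-- Accepted by Agda's termination checker: every recursive call is on a
-- structurally smaller argument, which the size-change criterion certifies.
half : ℕ → ℕ
half zero          = zero
half (suc zero)    = zero
half (suc (suc n)) = suc (half n)
```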
Does this lead to any syntactic case pattern which must be rejected? Yes: for example in agda the mutual co-recursion as described on their wiki, as we've discussed at length. Does this mean they are not computationally complete, or that there are terminating algorithms which cannot be expressed and reasoned as terminating? No! Maybe you think that there is some terminating algorithm which cannot possibly be written without these particular "SCT-less" recursion patterns but, again, you have not been able to demonstrate one as asked. (It seems to me that any such case could just be "loop unrolled" to remove the mutual recursion, since it is known terminating.....)
Even if you could possibly provide such an example, the given algorithm could still be made to be accepted in typecheck by way of a postulate assumption or explicit annotation. (Or, similarly, if the termination were simply unknown, or "prohibitively difficult to prove," as might be the case with your non-example "Collatz example.")
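As an illustration of that escape hatch, a hedged Agda sketch (untested; not the Collatz example itself, but an analogous recursion that is not structurally decreasing): the {-# TERMINATING #-} pragma tells the checker to accept the definition on the strength of an external termination argument.

```agda
open import Data.Nat using (ℕ; zero; suc; ⌊_/2⌋)

-- The recursive call is on ⌊ suc n /2⌋, which is smaller in fact but not
-- structurally smaller, so the syntactic check alone would reject it; the
-- pragma (or, alternatively, a postulate) lets it through.
{-# TERMINATING #-}
log2ish : ℕ → ℕ
log2ish zero    = zero
log2ish (suc n) = suc (log2ish ⌊ suc n /2⌋)
```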
you answered nothing regarding the problems i mentioned on the blogpost about MLTT or MLTT+TFPL.
I didn't? It sure seems like I did... Ah well, agree to disagree, or whatever. Let's move on.
tau is required to be decidable.
I'm not so sure why this would be (as tau would need all proofs to be bounded to some "reasonable" size constraint anyway... for practical purposes there is little difference between an infinite derivation and one that takes 1000 years to evaluate on contemporary hardware), but it is a moot point unless you can demonstrate a practical decidability problem by showing a case where the MLTT TFPL typecheck would necessarily loop infinitely over a terminating algorithm. (There aren't any... so I'm guessing I'll be waiting for this for quite some time.)
this, even though msol over lambdas does not restrict expressiveness more than mltt+tfpl if assuming infinite blockchain, as you assume in your claim that mltt+tfpl is computationally complete.
MLTT+TFPL is not computationally complete.
Most epic nonsense. I'm also not so sure that you understand what "computational completeness" means. MLTT is computationally complete, even without an outer loop. What is recovered by the "infinite blockchain" model is *Turing* completeness. These are not the same thing, as you should certainly understand by now. (Otherwise this is just more indication that you are certainly a long way away from being ready to build a tau, with any logic.)
MLTT (w/ or w/out TFPL constraints) is computationally complete - there is no boolean computable function which it cannot express, typecheck, and extract a (necessarily terminating) program for. Maybe it is the meaning of "computable function" that you are confused about? These are the boolean functions that "finite turing machines" can compute, with finite time/space resources. (Finite turing completeness is and always was the goal.)
Your "complete and consistent" MSO is (necessarily) not computationally complete. It is simply not possible to have a complete, consistent logic which is also computationally complete, as being computationally complete implies being able to express and reason over full arithmetic axioms. Any logic which is both consistent and computationally complete will necessarily be incomplete.
no total language is computationally complete, as it can never interpret itself.
Double nonsense. Self-interpretation is not a requirement for computational completeness. Even if it were, your argument still doesn't stand up as total languages *can* self interpret (contrary to conventional wisdom) as established recently by Brown-Palsberg. (Why you'd even make this claim is strange, since both Brown-Palsberg's self interpretation and Bauer's self interpreter for System T were discussed several times on the IRC....)
Note from stoopkid: "the conventional proof only says that there can be no total computable universal function for the total computable functions, and that any such universal function must be strictly partial. If a rewrite system is strongly normalizing then it is equivalent to some total function, not any strictly partial function, and thus cannot be this universal function for the total computable functions. So the conventional diagonalization proof doesn't say anything about whether the total functions defining strongly normalizing rewrite systems can express their own universal function - which is to say, it says nothing about the existence of self-interpreters in strongly normalizing rewrite systems."
therefore in some aspects is way more generalized than mltt+tfpl (not strict generalization ofc), yet it is decidable.
Nonsense. MSO is strictly weaker, as it is entirely and directly expressible in MLTT. Anything that can be expressed and proven in MSO can be expressed and proven in MLTT. The reverse is not the case. To call MSO "more general" exhibits a misunderstanding of *both* MSO and MLTT, and even of logic and the computable hierarchy in general.
aside ignoring the rest of the paragraph on that blogpost where i mention a way more expressive msol (as a little example, regular trees can express all context free deritvion trees, while even second order recursion trees are context sensitive), i never saw even a small trial of you to reach it. in fact i never saw you studying msol seriously.
Perhaps the reason you never "saw" could be because you drove all of your "team" into another channel, where we continue discussion without you...
"there is no msol design for X" means that design should be done. you can't expect modern theories to be as developed as obsolete theories. simply stating "it's impossible" is not my way of work.
Great, so go get to it.... it has actually been many (>6) months, now, since you originally decided MLTT isn't the way, and we haven't seen even the beginnings of an explanation of your design of an alternative. (Or any other new work on your part, since, for that matter...)
that'd be a groundbreaking result. can you prove it?
I'm not sure how this would be considered "groundbreaking." It is quite straightforward to see that a block is defined as a product of hashes....
Silly nonsense. (Sometimes I wonder if you even read the papers that you link.) Linear inequality of multiplication against a *constant* value does not make for a theory of multiplication. It is an extension of addition. (Multiplication by a constant is, of course, just iterated addition.) Further, the logic employed to establish this limited multiplication in that paper is not even MSO but is "monotone AC-TA." This adds to my perpetual suspicion that when you say "MSO" you don't really mean "MSO" but "something else..." Some "other" logic that you have in your head, but can't yet seem to name, describe, or even assert the actual existence of.
Of course it would be self-contradictory for you to claim that multiplication is possible in your hypothetical complete & consistent "somehow MSO-like but not really MSO" logic. As you rightly elaborate in your blog post, this would contradict Gödel. If tau is to be built with a complete and consistent logic, then it will necessarily omit some portion of basic arithmetic - meaning there will be some math which it *cannot* do in its rules - meaning there will be some routing/topology rules which it *cannot* employ - meaning it will *not* meet the original goal of a fully general, decentralized network. (This is inescapable without accepting an incomplete logic.)
no idea which kind of product you lack here (that say intersection types cannot give),
If you don't know what a general product is, (and why GADTs are important for programming in a typed functional setting) or the difference between a product type and an intersection type, then you are certainly still a *long* way (another year or two, maybe?) from being ready to build a tau, using *any* logic.
Note from stoop: "you might explain why exactly MSO prevents the definability of the general product, even without appeal to godel; mso rules out quantification over functions of arity greater than 1, but the product of two types A and B is equivalent to the polymorphic type Pi C:* . (A -> B -> C) -> C which explicitly requires quantification over that binary relation in order to form it, and since this is just an almost trivial example of a GADT it's got no chance there. btw i'm curious if parametricity results can prove that you need quantification over a binary function in order to express the general product."
(I won't bother explaining this more than he already did, as any further detail can be readily googled up.)
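For the curious, a minimal Agda sketch of the encoding stoop mentions (untested; the names are mine): note the quantification over a *binary* continuation, which is exactly the quantification he points out MSO cannot express.

```agda
-- The polymorphic ("Church") product; in Agda's predicative hierarchy it
-- lands one universe up, in Set₁.
Prod : Set → Set → Set₁
Prod A B = (C : Set) → (A → B → C) → C

pair : {A B : Set} → A → B → Prod A B
pair a b = λ C k → k a b

fst : {A B : Set} → Prod A B → A
fst {A} {B} p = p A (λ a _ → a)

snd : {A B : Set} → Prod A B → B
snd {A} {B} p = p B (λ _ b → b)
```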
and why you're so convinced there's only one way to implement a blockchain.
I never claimed that there is only one way to implement a blockchain. I do claim that any way to build a functional, self-defining blockchain seems to require GADTs. (Do you have a way to define a block validity function that does not have a domain of block headers which are products of hashes? I'd be very interested to see that! *That* would be "groundbreaking.")
well then many of contemporary cs literature can provide you plenty of entertainment, as those three are mentioned in many places.
So the computational model of Agda doesn't actually exist, and the native compiler is just smoke and mirrors? The hole-filling term inference of Agda and its nice emacs interface are also an illusion? These tools, which are so easy and approachable that they've been compared to the interface of video games (quite an accomplishment for such complex software), are somehow actually much less usable than they seem?
Nonsense,
nonsense, and
nonsense.
the old ways are popular indeed in practice (though i wouldnt say *most* common and popular), but are only the old ways, and now we have new ways that people invented for very good reasons.
Let's just ignore the fact that MSO predated MLTT by quite a bit, and is certainly well set in "the old ways" of the days of Frege and Quine...
MLTT was invented for a "good reason" too: specifically to establish a computationally complete, yet consistent logic.
as a small quote i'll paste from the beginning of
However, they often require explicit type annotations (as in dependent type systems), or suffer from many false alarms.
Personally I'll take a need to explicate types over a lack of general arithmetic/computation any day.
just getting from mltt to mltt+tfpl would require a possibly infinite procedure....
Someone should really let the agda/coq devs know... XD I don't think that they're aware that they've been doing the impossible this whole time...
all undecidability problems of computation are in general only semi-undecidable, as Hilbert's 10th problem is semi-decidable.
Sure, but this doesn't change the fact that this case of (semi/un/partial)-decidability is irrelevant, as it is an offline, user-interactive process and not a part of the network consensus process.
that's simply not true. it can happen some times, but it's impossible for this to happen all the time.
OK, so now I have two examples to request of you! Can you give an example of a hole in a known-terminating program specification which cannot be filled in finite interactivity steps by the agda hole-filler?
Until you can demonstrate an example fitting this claim, I have to assume that it is
nonsense as well.
msol is much simpler as it contains much less primitives and decision procedures are fully automatic. claiming otherwise would be either misleading or baffling misunderstanding. is might not be simpler for you but that's only because you're still beginner in msol while somehow experienced with mltt. it's not a matter of taste, mltt (far to mention mltt+tfpl) is way more complex than msol over lambdas, it's just a fact.
Again, I guess we'll just have to agree to disagree on this one. As I see it, any question of what is "simpler" or "more intuitive" is necessarily subjective, but we seem to have a difference of opinion on that.
I, personally, see MLTT as "simpler" because it is like the "yodawg of lambdas." If you understand lambda calculus then you already understand everything about MLTT except cumulativity. It's the very same abstraction/application, just done in the type-level language as well as in the term/object language of the program itself. It puts the same beta/eta applicative reduction in the type theory as in the object language.
If you can grok lisp/scheme then you can grok MLTT. In fact, you basically already know it modulo the cumulative universe hierarchy.
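A tiny Agda sketch of what I mean (untested; the names are mine): the same abstraction/application, and the same reduction, happening at the type level.

```agda
open import Data.Nat using (ℕ; zero; suc)

-- A type-level function: Rep A n computes the type A → A → ... → A (n arrows).
Rep : Set → ℕ → Set
Rep A zero    = A
Rep A (suc n) = A → Rep A n

-- Using it makes the typechecker beta-reduce Rep, exactly as terms reduce.
constN : (A : Set) (n : ℕ) → A → Rep A n
constN A zero    a = a
constN A (suc n) a = λ _ → constN A n a
```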
With your hypothetical MSO/TA higher-order model framework it seems that one needs to learn not only the lambda calculus, but also FOPL, some general monadic logic, some set theory, some TA theory, some iterative TA theory, some graph theory, and you'd better know some modular congruence properties and Courcelle's theorem and Seese if you want to do any application at all, and Kobayashi and Ong if you want to do any contemporary, practical application at all... and then the odds are that, after all that study, you still aren't equipped to assert anything but the most basic correctness and reachability properties... if that.
While I understand that the MSO logics might seem simpler at first glance, and might "sell better," by being just a classical logic, the reality of pragmatic application to "programming in the large" is very much a different story.
You've been studying these logics for ~6 months now, yourself? Can you show some of the proofs that you've written with them so that we can get an idea of your own level of sophistication in actually using the logics after this dedicated study? (Are you prepared, yet, to give even a simple proof or still "learning it" to get to that point, yourself?)
Note from Stoop: "he's been studying these systems 6 months and isn't proving anything in them, you should point out that we're now routinely building formalized propositions and verified programs and doing things like proofs of the uncountability of the reals and playing with GADTs and all kinds of other fun things that MSO can't even do, and that tau will come pretty much right away with general computability theory & theoretical comp sci libraries, and related results formalized and ready to work interoperably to aid in chain development, and that we're actually at the point of seriously examining how to actually put together a nomic block-chain in this system." ... "you might also note that everybody on our team probably has a better understanding of MSO than he does at this point. since we understand it completely fully as just a specific and not very expressive subset of MLTT. and are seeing a more complete picture that illuminates exactly why MSO has the specific properties it does and how those actually relate to the rest of MLTT."
In my experience, most programmers (even "low/entry level") can pick up the basics of MLTT to the point where they can prove over simple-but-useful programs in under a week of dedicated study. If they're experienced with functional languages (like lisp/scheme) beforehand it is usually more along the lines of 1-2 days of tinkering. If they're familiar with pure, statically typed functional languages (like haskell/ocaml) it is usually a matter of hours.
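For a sense of scale, the sort of "simple-but-useful" first proof I mean is on the order of this Agda sketch (untested, assuming the standard library):

```agda
open import Data.Nat using (ℕ; zero; suc; _+_)
open import Relation.Binary.PropositionalEquality using (_≡_; refl; cong)

-- Right identity of addition, proved by induction on n.
+-zeroʳ : (n : ℕ) → n + zero ≡ n
+-zeroʳ zero    = refl
+-zeroʳ (suc n) = cong suc (+-zeroʳ n)
```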
vast majority of mathematicians and philosophers find the intuitionistic approach unintuitive. indeed there is a tiny minority of people thinking otherwise, but to picture it like it's a "matter of taste" would be a misrepresentation.
Nonsense. The "vast majority" of real mathematicians/philosophers understand that there are trade-offs, and that they are best served by studying both. Considering that it is a century-old and still unresolved debate, I don't think you can claim that it can stand as anything but a "matter of taste" in general. Some mathematicians work in areas where constructivisim is *strictly* necessary, some mathematicians work in areas where classical sets bring ease, and some work in meta-theory where they're required to understand both intimately. To claim that the "vast majority" reject intuitionism just indicates to me that you haven't spent much time actually talking with many of them.
is not an axiom but follows from the axioms in few steps. is in the literature all over....
More
nonsense. I can't even...
Can you show any constructive proposition at all for which it cannot be applied? (If you can, you've invalidated Glivenko and Brouwer both, no?)
I think what you meant to say was that "not not (p or (not p))" follows in a few steps. (Again, per Glivenko! (1928))
(Side note: interested on-lookers should be encouraged to read
http://plato.stanford.edu/entries/intuitionistic-logic-development/#2.4 for a nice summary explanation of these concepts.)
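To make the "few steps" concrete, here is the constructive derivation of "not not (p or (not p))" as an Agda sketch (untested, assuming the standard library):

```agda
open import Data.Sum using (_⊎_; inj₁; inj₂)
open import Relation.Nullary using (¬_)

-- Glivenko-style: the double negation of LEM holds constructively.
¬¬lem : {P : Set} → ¬ (¬ (P ⊎ ¬ P))
¬¬lem k = k (inj₂ (λ p → k (inj₁ p)))
```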
and this is unintuitive. i never said LEM cannot be worked-around (except in proving existence by contradiction). i said it is unintuitive and by that more prone to human errors.
I am not sure what you find "unintuitive" about it, but again that is a subjective matter. We'll agree to disagree.
I don't see any case in which it would be "more prone to human errors" however. After all, we are talking about theorem provers, in which precluding human error is "the whole point." Could you clarify?
it is impossible to prove a statement to be both true and false under mltt, as it is consistent indeed, but, proving that a statement is false, doesn't amount as a proof that it's negation is true, in mltt. in classical logic things are either true or false, exclusively.
While I agree that in MLTT a statement can not always simply be negated to prove the inverse, this doesn't imply that things are not "either true or false, exclusively" in MLTT.
Of course everything in MLTT is "either true or false, exclusively." As you just stated yourself, it is consistent.
just like human intuition works when it comes to formalism. and just like the vast majority of mathematicians in all eras hold.
Our intuition must certainly work differently. That is OK. "Agree to disagree" and all.
As to how the "vast majority" of intuitions work, I won't be so arrogant as to assume.... and I don't think that there's been anything like a census... Feel free to show some statistics on the matter, however, if you know of such a poll!
sure. and this should give you some clues regarding the incapability of mltt to ultimately support LEM.
Not at all, and I'm not sure why you'd think it would. Can you give some example or explanation of an "incapability" of MLTT to "ultimately support LEM?"
no, not "any", otherwise it could be added as an axiom.
Nonsense. So I'll ask for a third demonstration from you: can you give me any true, constructive proposition for which LEM as a lemma does not hold in MLTT?
(Fun facts for the astute reader: Note that such a lemma will have as codomain a disjunct of the proposition P and P -> _|_, so for any true proposition "inl" holds and for any false proposition "inr" holds. Being able to show such an example where a LEM lemma could not be applicable would actually require violating LEM by modifying the logic to be 3-valued, which is what justifies LEM as sound (for either classical axiom or intuitionistic proposition) in the first place! (Hence the name...)
The reason (I'm drastically paraphrasing) that it cannot be taken as axiomatic in MLTT without violating consistency is that it would then also have to apply over the top of the universe hierarchy, which would create an infinite regress and prevent closure. (The cumulative hierarchy could then have no "top" and you're back in the "type in type" or "kind-inconsistency" situation.) The reason it can be applied over any predicate without violating consistency is that there is syntactically no way to apply it over the largest universe without also implying a larger universe (which it is not applied over) in doing so.
This has some fascinating implications, the most basic of which is that "(P or (not P)) -> ((not not P) -> P)" is trivially provable constructively, but the inverse "((not not P) -> P) -> (P or (not P))" does not hold in general... (I think that this is probably what Ohad is actually trying to point out wrt his statements on proof by contradiction... but who knows.) However, if we quantify both sides of the inverse implication, we can prove that if "((not not X) -> X)" holds for all X, then "(X or (not X))" holds for all X as well!
I'll leave these (quite informative!) proofs as an exercise for the reader.)
Also, here is a "haskeller level" proof that LEM is intuitionistically sound that you should have no trouble following:
http://okmij.org/ftp/Computation/lem.html They go on to prove the existence of the number 42 by contradiction (strong reductio) and close with "We have thus confirmed the old result that LEM is intuitionistically justified for decidable propositions."
... An "old result" indeed. I'm not sure why you would deny it!
the double negation translation shows equi-provability, but my argument regarding LEM was about being intuitive. one could complexify-for-nothing say msol to become intuitionistic via double negation, but one cannot fix mltt to be classic, at least no one yet.
You're not wrong. (Well, maybe you're wrong about what is/isn't intuitive, but we're agreeing to disagree on that, still. There's probably no way to establish that definitively, short of a census, anyway.) MSO could be double-negation-transformed into ITT. You're especially not wrong that this would be particularly "for-nothing" given that it can also be transformed directly in the positive, as well.
In fact, this is largely the least nonsensical thing that you've said here.
However, you're missing the point entirely! The double-negation transform wouldn't hold in general if it weren't the case that LEM could always be applied. (Maybe you should try working those 2 "exercise" proofs in that I just mentioned in ITT if you don't understand this...)
being classic vs intuitionistic logic was only one practical advantage i mentioned of msol vs mltt. it has nothing to do with the core of my arguments.
(I'm not sure why it is a "practical advantage" exactly... Can you clarify?)
Sure, the core of your (first) argument is "ITT/MLTT won't work" but you've yet to actually give any demonstration of evidence justifying any of the reasoning behind it. Instead you just make obviously-misguided claims about agda/coq/idris/etc which can be empirically dismissed by just "trying it at home" in agda/coq/idris/etc.... but you refuse to "try it at home" and then you purport it as justification for MLTT not working for a tau. There may be no bigger
nonsense than speaking the opposite of testable fact.
no decidable logic (including mltt+tfpl) was shown so far, rigorously, to support nomic game.
Nonsense. There are nomics in haskell, nomics in deontic logic, nomics in Drools rules, nomics in coq (predicative CIC derived from MLTT TFPL!), nomics in LKIF (which is expressible in a fragment of owl which is in turn expressible in MLTT TFPL!), nomics in relational algebra (also expressible in MLTT TFPL!), nomics in M$ Excel (the logic of which is probably expressible in MLTT, why not?)..... all of these are decidable frameworks supporting nomic games. Again you're simply making statements that are contrary to fact.
you ask for something that you yourself didn't provide. and it requires work in which for your own reasons you refuse to do. still this is what i work on and i encountered that one cannot have a sound nomic game under mltt+tfpl.
You "encountered" this, but can not adequately demonstrate or even explain it?
you're welcome to move into rigorousity rather handwaving and basis-less claims and present a design of a sound tau based on mltt+tfpl.
I'd be willing to bet that a "sound MLTT tau" will emerge before you even finish defining and implementing your still-entirely-hypothetical-but-somehow-"MSO-like" logic for your "better" design. :|
in summary, you didnt answer to any of the concerns i raised about mltt+tfpl, and you didnt present any reason why msol over lambdas is not good enough for tau.
NONSENSE. I've stated our position quite clearly, and completely. (Also, now, repeatedly.)
your reply contained so many blatant and subtle mistakes, so apologies if i didn't cover them all on this reply
If you feel we are incorrect, please prove it. (It is all that I or anyone else has ever asked of you)
Implement your MSO tau, and destroy our MLTT tau. We all wish you nothing but the best of luck in both endeavors, but I currently have nothing but skepticism that you will (or even can) accomplish either.
you're welcome to give a much more professional response to the blogpost.
I think that I'm, personally, just about done with wasting my time in responding to your
nonsense. My post was not in response to your blog post (else you would've seen it some time ago); it was in response to a specific request from your community to clarify the position of your lost "team," as most of your community were not privy to our initial reaction to your
nonsense in the IRC.
If you feel it was unprofessional of me to answer your community's request for an explanation of our perspective, then I apologize. I can refrain from polluting their perception with our reality in the future.
don't expect people to accept any statement that contains technical words.
Wut?
My apologies, also, for putting faith in your community's ability to comprehend my "technical words" if they choose to. I'm also quite sorry that you don't have, as I do, confidence in their abilities to understand (or, at least, learn) what we are talking about.
Stoopkid also asked that I mention that we are actively working on putting together introductory materials to explain "tau classic" from first principles, aimed at those without math/programming experience.
(Anyone who can speak a language has the innate ability to grasp logic, so this is just more of your
nonsense.)