thanks, i'll try to touch all raised points, with some relevant unraised points:
my door is wide open for constructive collaboration with similar projects, tezos or autonomic, as well as with anyone dealing with these topics. i heard that autonomic is practically abandoned, since stoopkid found a full-time job and hmc doesn't code, but that's only what i heard. however there are some inflexibilities on all sides (i'm aware of my own), and i refer to the end-goal agenda (i.e. the exact meaning behind vague phrases like "self-defining p2p network"). there are many differences between the projects, many of them still unpublished, but i'll try to touch only the relevant parts, and hopefully most of them.
i completely admit that this has taken much more time than i thought (i couldn't anticipate the problems with the original design, though there's more to it than that). i also recognize that the progress is not visible. but it exists, very significantly, and honestly i've never been "stuck". recently Dana Edwards has been trying to show it in his posts (which are written according to his own views and interests, yet they supply a lot of updates, as we discuss the progress every day).
indeed Dana is one big help i got recently. and there's another person on his way to physically stay with me and finish things. and there's more to tell. i'm fortunate to get all the help i ask for; i'm almost flooded with good people around offering help. may it keep being so.
and indeed the main course is to make the initial mission as small as possible. it'll end up with a language similar to n3 with negation, plus a structure of functions with in/out args (not much more to it!). that'd be the engine. then for messaging we can use the simplest possible transport (smtp, as a silly example) just to bootstrap the semi-centralized social network that allows exchanging documents, even on the command line, but still with the prover's help.
it will allow much more. this thing is really not a language, but a meta-language. it is not a programming language, but a logic that is able to speak about programs (so much so that it can express auto-synthesis-from-spec).
and this points to one substantial difference between autonomic's design and my design, even if they meet somewhere in the semweb world: on autonomic, the logic is the code itself. the prover's flow is the program's flow. that's what i refer to as "the curry-howard way". and this raises a lot of constraints, aside from requiring the programmer to be a semi-autoprover: e.g. the reasoner's flow must never change, so it cannot use optimizations. on tau we'll have one meta-language, suited for processing languages. it'll be able, for example, to implement logic that takes a doc+grammar+semantics and produces a theory, by emitting code that can answer queries over that theory and form the answers in a given shape. writing an rdf parser+reasoner using it would be trivial. moreover, it'll also be a generic compiler. in principle, it'd be a better choice to write any compiler (even a C++ compiler) using this methodology rather than with existing ones. it'd be like taking yacc, but feeding it not only the grammar rules, but also the semantic and emission rules, all in a much friendlier way than yacc. same for natural languages: there are many ready-made context-free english grammars that can be used for "simple enough english", which won't be so simple at all.
(when i said "any compiler" i also mean "many interpreters, but not any". it won't be able to interpret C++, of course, unless the user removes the "monadic" restriction from the solver... more to say here, but not very on-topic.)
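the "yacc plus semantics" idea above can be sketched in a few lines. this is a hypothetical toy, not tau's actual design: one declarative table pairs grammar rules with semantic actions, so the same table yields both a parser and an evaluator (here for right-associative arithmetic; all rule names and helpers are made up for illustration).

```python
# toy "yacc with semantics" sketch (hypothetical): each grammar alternative
# carries a semantic action, so parsing a string directly produces its value.
import re

# grammar: nonterminal -> list of (symbol sequence, action over child values)
RULES = {
    "expr": [
        (["term", "+", "expr"], lambda t, _plus, e: t + e),
        (["term"],              lambda t: t),
    ],
    "term": [
        (["num", "*", "term"],  lambda n, _star, t: n * t),
        (["num"],               lambda n: n),
    ],
}

def parse(symbol, tokens, pos):
    """try to derive `symbol` from tokens[pos:]; return (value, new_pos) or None."""
    if symbol == "num":
        if pos < len(tokens) and tokens[pos].isdigit():
            return int(tokens[pos]), pos + 1
        return None
    if symbol in ("+", "*"):
        if pos < len(tokens) and tokens[pos] == symbol:
            return symbol, pos + 1
        return None
    for rhs, action in RULES[symbol]:
        vals, p = [], pos
        for sym in rhs:
            r = parse(sym, tokens, p)
            if r is None:
                break
            v, p = r
            vals.append(v)
        else:  # every symbol matched: apply the semantic action
            return action(*vals), p
    return None

def evaluate(src):
    tokens = re.findall(r"\d+|[+*]", src)
    value, end = parse("expr", tokens, 0)
    assert end == len(tokens)
    return value
```

adding an "emission rule" instead of an arithmetic action (e.g. returning strings of target code rather than numbers) would turn the same table into a compiler, which is the point of the analogy.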
also recall that n3 is significantly different from owl; see timbl's n3logic design doc. i don't claim to recover owl, at least not yet and not directly. the language is not intended for semweb-like uses at all, but is merely a generic language processor, featuring the abilities of MSO and HORS (higher-order recursion schemes). i don't claim yet to fulfill any semweb standard, but to help make it easy to import many of them. another rationale is to strip as much as possible and remain with a minimal core, at least for the beginning. yet another difference is monotonicity: we are specifically non-monotonic. you can change your mind, even in a way that requires revision of previous beliefs, while rdf/owl/n3 are designed to be monotonic.
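the monotonic/non-monotonic contrast can be shown with a toy sketch (illustrative only; class names and the literal encoding are my own, not tau's): a monotonic store can only accumulate facts, while a revising store retracts the old belief when you change your mind.

```python
# toy contrast between a monotonic knowledge base (rdf/owl/n3 style) and a
# non-monotonic one, over propositional literals like "p" and its negation "-p".

def negate(lit):
    return lit[1:] if lit.startswith("-") else "-" + lit

class MonotonicKB:
    """monotonic: facts only accumulate; asserting a contradiction is fatal."""
    def __init__(self):
        self.facts = set()
    def tell(self, lit):
        if negate(lit) in self.facts:
            raise ValueError("contradiction: %r cannot be retracted" % negate(lit))
        self.facts.add(lit)

class RevisingKB:
    """non-monotonic: a newer belief revises (retracts) the conflicting older one."""
    def __init__(self):
        self.facts = set()
    def tell(self, lit):
        self.facts.discard(negate(lit))  # change of mind: drop the old belief
        self.facts.add(lit)
```

real belief revision is of course far subtler (it must also retract consequences), but the asymmetry is the point: the monotonic store can never "unsay" anything.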
more specifically to the main point,
one of the recent conclusions is that the old design (same for tezos) cannot scale. it'll eventually run into social problems similar to bitcoin's, and will lead to centralization of control, making it inherently unable to spread, given human limitations. see Dana's description here:
https://steemit.com/tauchain/@dana-edwards/what-tauchain-can-do-for-us-effective-altruism-tauchain#@dana-edwards/re-trafalgar-re-dana-edwards-what-tauchain-can-do-for-us-effective-altruism-tauchain-20170417t144525348z
and this is a result of the language being complex and undecidable.
undecidability means that whenever you say something, you need to prove it, and it might be the case that you say something true yet cannot prove it. but proving is a difficult mission, and cannot be fully automatic here. same for complexity: writing the curry-howard way, not to mention mltt, is hard. on the other hand, msol is so easy. let me quote from a recent email of mine explaining all of msol at (at most) an 8th-grade level (and it'll be even easier):
1. a labelled graph is: take a piece of paper, draw circles (vertices) with words (labels) inside, and draw arrows (edges) between the circles, with words (labels) on them as well. that's a labelled graph.
2. an "atomic sentence" says either "x is labelled y" or "vertices x,y are adjacent with edge z".
3. propositional logic is all atomic sentences, glued with AND, OR, NOT, IMPLIES, and unlimited use of parentheses. equivalently, using if-then-else.
4. first order logic is propositional logic, except the atomic sentences may contain holes (variables), with the additional prefixes "for all" or "exists". e.g. "exists edge z such that x,y are adjacent with edge z" is a sentence that is true if and only if there's at least one edge in the graph.
5. monadic second order logic allows doing the "forall"/"exists" trick again, just like first order, but also over sets of vertices and edges. so you can say "there exists a set of vertices X such that if x is labelled t then x is a member of X".
that's all.
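the five steps above can be sketched as a toy program (a hypothetical illustration; brute force over subsets, so feasible only on tiny graphs): a labelled graph, the two atomic sentences, propositional glue, and the example first-order and monadic-second-order sentences from points 4 and 5.

```python
# brute-force illustration of the 5-step msol primer on a tiny labelled graph.
from itertools import chain, combinations

# 1. labelled graph: circles with words inside, labelled arrows between them
labels = {"a": "t", "b": "u"}          # vertex -> its label
edges = {("a", "b"): "likes"}          # (vertex, vertex) -> edge label

# 2. the two atomic sentences
def has_label(x, y):                   # "x is labelled y"
    return labels.get(x) == y
def adjacent(x, y, z):                 # "vertices x,y are adjacent with edge z"
    return edges.get((x, y)) == z

# 3. propositional logic: glue atomic sentences with and/or/not
combined = has_label("a", "t") and not adjacent("b", "a", "likes")

# 4. first order: "exists edge z such that x,y are adjacent with edge z",
# i.e. true iff the graph has at least one edge
def exists_edge():
    return any(adjacent(x, y, z) for (x, y), z in edges.items())

# 5. monadic second order: "there exists a set of vertices X such that
# if x is labelled t then x is a member of X" -- quantify over ALL subsets
def subsets(vs):
    vs = list(vs)
    return chain.from_iterable(combinations(vs, r) for r in range(len(vs) + 1))

def mso_example():
    return any(all((not has_label(x, "t")) or (x in X) for x in labels)
               for X in (set(s) for s in subsets(labels)))
```

the set-quantification in step 5 is exactly what makes it *second* order, and *monadic* because we quantify over sets only, never over arbitrary relations.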
and it's not only easy: decidability lets us discover whenever we agree or disagree, without even knowing. the most important event on tau is the case where everyone agrees on something, be it widely across the whole network, among separate teams, or as a "social discovery" method (like "i'd like to find a mate whose theory about love, stemming from her posts, doesn't contradict my theory about love", or merely one's agent discovering that it is implied that someone doesn't like them. and that's where it begins to get addictive).
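the "discover agreement without knowing" point can be sketched in the simplest decidable setting, propositional logic (a toy, with invented names and a brute-force decision procedure): an agent mechanically tests whether two people's theories are jointly satisfiable, and a failure is a machine-found disagreement neither party had to prove by hand.

```python
# toy sketch of automatic disagreement discovery via a decidable check:
# brute-force propositional satisfiability over two merged theories.
from itertools import product

def satisfiable(clauses, variables):
    """clauses are lists of (var, wanted_truth_value); try every assignment."""
    for bits in product([True, False], repeat=len(variables)):
        model = dict(zip(variables, bits))
        if all(any(model[v] == want for v, want in clause) for clause in clauses):
            return True
    return False

# two (hypothetical) theories about love, over a shared vocabulary
alice = [[("love_is_a_choice", True)]]
bob = [[("love_is_a_choice", False)], [("love_is_fate", True)]]
vocab = ["love_is_a_choice", "love_is_fate"]
```

each theory is satisfiable on its own, but merging them is not, so the agents can report a provable disagreement automatically; with a decidable logic like msol the same check works for far richer theories than these one-liners.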
further, the first alpha will not deal with code at all. it'll deal only with logic, i.e. so-called "knowledge representation and reasoning", and will not implement synthesis, self-definition, and more. the second alpha is devoted to programs. so we'll begin with a really minimal core that'll give rise to collaborative dev, yet different and more lightweight than what autonomic/tezos speak about.
as for monetization, i plan to begin using the existing tokens even before agoras, or even before the decentralized final tau chain, for the sake of paying for computational resources that have to do with the network itself, i.e. proving etc. already in the first network everyone will hold their own private data and calculate their own queries, though they might share data with each other. some people would like to pay, in agrs tokens, a centralized entity or other players to do the calculations for them, especially teams driven toward some specific cause that requires a fast and comfortable solution.
(for the coin lockers, don't worry, you won't miss any profit opportunity, i'll never let it happen)
as for timed automata, note that there are many temporal logics in the world of formal verification and specification (LTL, CTL, mu-calculus, modal logic, modal mu-calculus...). they are all subsets of mso! (a fact mentioned a lot across the literature.) the whole point of mso vs fo is that you can have as many sets as you like, whose definitions depend on the definitions of the individuals and vice versa (somewhat like dependent types, but only first and monadic second order, and the objects are n-ary relations [if hypergraphs are under concern]). it's just that the decidability of mso was not known to be so wide until quite recently.
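why temporal logics land inside mso can be seen on the smallest example (a hypothetical toy over finite traces, whereas real LTL/MSO semantics are over infinite words): the temporal operators F ("eventually") and G ("always") are definable as plain quantification over positions.

```python
# toy check that temporal operators reduce to quantification over positions,
# which is the essence of "LTL and friends are subsets of mso".

def ltl_eventually(trace, p):
    """temporal reading of F p: p holds now, or F p holds in the rest."""
    return bool(trace) and (p in trace[0] or ltl_eventually(trace[1:], p))

def ltl_always(trace, p):
    """temporal reading of G p: p holds now, and G p holds in the rest."""
    return (not trace) or (p in trace[0] and ltl_always(trace[1:], p))

def fo_eventually(trace, p):
    """logical reading: exists position x such that p holds at x."""
    return any(p in trace[x] for x in range(len(trace)))

def fo_always(trace, p):
    """logical reading: for all positions x, p holds at x."""
    return all(p in trace[x] for x in range(len(trace)))

trace = [{"p"}, {"p", "q"}, {"q"}]    # each position carries a set of atoms
```

the recursive definitions are the step-by-step temporal semantics; the quantified ones are their first-order translations, and operators like "until" need the set quantification that full mso provides.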
apropos optimal kolmogorov complexity of compression, the kobayashi-ong type system has a striking such property www-kb.is.s.u-tokyo.ac.jp/~koba/papers/hosc-fpcd.pdf and we will use it in our framework (though not in the first version).
and apropos a better calculus: indeed we just have a meta-language. all languages and logics are welcome! if not by interpretation, then by compilation. this is yet another deep structural difference between tau and tezos/autonomic.
the global problem of dis-collaboration is the very problem we're trying to make temporary. indeed i claim that tezos/autonomic cannot do that in practice; at most they can be researchware. they're not engineered to allow thousands or millions to build one program or idea together in a way that is actually practical, for everyone, and much better than today. even if math promises that it's possible, human nature should at least partially be considered.
striving for generality (multi-language, multi-logic, everything is amendable) is of course relevant to the industry-fragmentation point you raised.
the most important rationale, which subsumes the architecture difference between the new tau and the others, is the requirement to make it work in the real world, as above: at large scale, in real life, with real people and real machines, and to be really very useful.