Bitcoin Forum
May 29, 2017, 09:57:49 AM
Author Topic: Tau-Chain and Agoras Official Thread: Generalized P2P Network  (Read 165054 times)
freedomfighter
Full Member
***
Offline Offline

Activity: 203


View Profile
August 29, 2015, 05:58:03 PM
 #261


Since one ultimately has to develop the economic part over tau, using tau's unique features, we're going to do it as Agoras. And we do need to distribute the coins somehow, hence the presale.

Can you explain more about this? What can I do, in terms of interfacing with Tau, with Agoras? Is it just another cryptocurrency that is just built using Tau, or does it have some special status or feature within Tau Chain? Why would anyone want to own Agoras over another cryptocurrency within Tau, or outside of Tau for that matter?

Agoras is indeed "just another app" over tau. This is important to bear in mind. Everyone can create more currencies over tau, and tau doesn't have a coin of its own.

But Agoras is of course far more than a cryptocurrency. It will have markets for trading code for money in a totally secure and trustless way (as in the Code and Money blog post), allow renting computational power with or without verification of execution, renting professionals over video conference with micropayments, smart contracts, pay-for-proof (whether it's math homework or a program to write), a decentralized intelligent search engine, and even more.

What makes people who trade code ask for Agoras in return for their services? The same applies to the other suggested markets: what obliges people to start using Agoras? They could ask for, accept, and take e.g. BTC, fiat, or some other "app currency" instead.

Is there a strong enough incentive for early adopters and early coders to start asking compensation in Agoras and stick with it, and what is that incentive? How certain is it that Agoras will be accepted as "the money" over tau? If not, and there's no trading volume after the IPO, the price starts going down and coders easily abandon the Agoras ship…

Accepting BTC or fiat for features built on tau's unique properties (such as decidability and proofs over code) is of course impossible: such features cannot be implemented over existing chains, even Ethereum's.
As for the risk that someone else will write another, better Agoras: I think such a risk exists in every project that comes to mind.

Maybe to phrase what he is asking differently: after various markets are created over Tau (Agoras, which will require the Agoras coin to use them), what stops one from creating competing markets, also over tau (parallel to Agoras), with BTC, ETC, or any other crypto tools? Won't these have an advantage over the Agoras market (an advantage in the sense that they'll utilize other coins without limits)?

OR, even if the above is correct, is your intention with Agoras to enable features that can utilize any coin, but that, like in Ethereum, will require Agoras as an underlying gas/fee/whatever for every transaction, while also enabling it as a "classic coin" within that inner world?
klosure
Jr. Member
*
Offline Offline

Activity: 46


View Profile
August 29, 2015, 06:45:08 PM
 #262

- Do you have a specification (ontology, BNF grammar, spec document...) of the Tau language as it is now and/or as it is supposed to become once all the networking and crypto features are built in?
The syntax of the language itself is not new. The RDF family is basically all syntactic sugar over NQuads (http://www.w3.org/TR/n-quads/), so all RDF languages can be converted to NQuads, and plenty of converters are out there. Tau's current code can read NQuads, JSON-LD, and (still partially) Notation3.
I wasn't asking about RDF triples and NQuads: these are just a representation format. What I am asking for is the specification of the Tau language itself. OWL2, for instance, is the RDF implementation of a logic language called SROIQ(D), formally specified here, that is a subset of description logic, and its derivation as an RDF-based ontology is specified in great detail here.
We know from the whitepaper that Tau borrows from constructive/intuitionistic logic. But is it a subset, a variation, or does it implement the full logic? Is there a whitepaper or document covering the formalism? Is there a document that describes the ontology, including the core logic and the library of built-ins and related properties?

Supporting network features etc. is done via builtins, which is also a common mechanism among RDF reasoners. See for example http://www.w3.org/2000/10/swap/doc/CwmBuiltins
Is Tau entirely structured around CWM? What about the builtins that are not already implemented by CWM, for instance DHT-related features, RNG, etc.? Will you contribute them to the original CWM? Are you going to fork CWM to extend it? Or does CWM come with a way to register new builtins, and if it does, is it static (compile time) or dynamic (runtime)?

- Where can I find a Tau code example that emulates simple procedural behavior like a loop for instance?

Let me use the cwm builtins I just linked to in order to demonstrate printing the numbers from 1 to 10:
1 log:outputString 1.
{ ?x log:outputString ?x. (1 ?x) math:sum ?y. ?y math:lessThan 11 } => { ?y log:outputString ?y }.
see http://pastebin.com/H2gqLUzC
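For readers unfamiliar with forward chaining, the rule above can be paraphrased in a few lines of Python (an illustrative sketch of the fixpoint iteration an N3 reasoner performs, not Tau code; the function and its structure are my own assumptions):

```python
# Forward-chaining sketch of the N3 rule above (illustrative only, not Tau code).
# "derived" holds the numbers known to be output; the rule derives x+1
# whenever x is already derived and x+1 < 11, until nothing new can be added.

def forward_chain(seed=1, limit=11):
    derived = {seed}               # initial fact: 1 log:outputString 1.
    changed = True
    while changed:                 # iterate to a fixpoint, as a reasoner would
        changed = False
        for x in sorted(derived):
            y = x + 1              # (1 ?x) math:sum ?y
            if y < limit and y not in derived:   # ?y math:lessThan 11
                derived.add(y)     # conclude: ?y log:outputString ?y
                changed = True
    return sorted(derived)

print(forward_chain())  # the ten facts corresponding to printing 1..10
```

The point is only that a "loop" in a logic language is a rule applied repeatedly until no new facts emerge, not an imperative control structure.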
You are using the W3C swap/math and swap/log ontologies.
Are they a subset of the Tau ontology?
Again, that would need to be clarified. What IS valid Tau and what is not?

- How do you plan to make the transition from knowledge (the decision to spawn a new socket, for instance) to action (making a system call to create a new socket): will each I/O module poll the knowledge base periodically for relevant actions? Will you place control points at key steps of the inference process to intercept propositions with a known action semantic and take action?
- Will input from an I/O module trigger asynchronous reasoning, or will it just add new propositions to the knowledge base for some synchronous process to reason with at the next iteration?
At the syntax level, IO is just another builtin predicate, like { ?z a tau:tcpEndpoint } => { ?y tau:tcpConnect ?z }. At the type-system level, we of course have effect types/monads https://en.wikipedia.org/wiki/Monad_(functional_programming)
The reasoning flow does not change at all due to IO (in general), but types originating from IO are tagged as effect types and of course aren't treated as "pure truth". A good reference is how Idris uses side effects: http://eb.host.cs.st-andrews.ac.uk/drafts/eff-tutorial.pdf
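The idea of tagging I/O-derived facts so they are never mistaken for pure truth can be sketched roughly as follows (a toy construction of my own, not Tau's actual type system or API; all names are hypothetical):

```python
# Toy knowledge base distinguishing pure facts from I/O-derived ("effect")
# facts, illustrating the idea described above. Not Tau's actual design.

from dataclasses import dataclass

@dataclass(frozen=True)
class Fact:
    subject: str
    predicate: str
    obj: str
    effectful: bool = False    # True if the fact originated from I/O

class KB:
    def __init__(self):
        self.facts = set()

    def assert_pure(self, s, p, o):
        self.facts.add(Fact(s, p, o))

    def assert_from_io(self, s, p, o):
        # Facts observed via I/O are tagged: observations, not "pure truth".
        self.facts.add(Fact(s, p, o, effectful=True))

    def pure_facts(self):
        return {f for f in self.facts if not f.effectful}

kb = KB()
kb.assert_pure("2", "math:lessThan", "3")
kb.assert_from_io("peer1", "a", "tau:tcpEndpoint")
print(len(kb.pure_facts()))   # only the arithmetic fact counts as pure
```

The reasoner could then restrict certain inference steps to `pure_facts()`, which is the flavor of what effect typing buys here.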
If I understand your reply correctly, I/O-originated events and data are incorporated as new facts that will be (lazily) evaluated/stated the next time the reasoner performs a resolution that needs I/O-related facts, which in other words means synchronous I/O. I can see how this is going to add a lot of latency, but let me read that Idris paper first and I'll get back with more questions.

- Is the interface between the knowledge base and the system level functionalities already developed or specified? Where can I find further details?
Apologies but I didn't really understand the question.
You gave this example above: "{ ?z a tau:tcpEndpoint } => { ?y tau:tcpConnect ?z }".
Are the built-ins behind tau:tcpEndpoint and tau:tcpConnect already developed or specified in a document?
What will be all the built-ins introduced by Tau?

- How can Tau guarantee that it can determine the upper bound on when a program will halt in less time than it would take to run the program? What if the program is already the shortest possible program to solve the problem it represents? At most, I could imagine how code analysis could help determine a big-O estimate of the program's complexity, but that won't help much to make sure the program can run within the time boundaries of a given blockchain block, as was suggested in the CoinTelegraph interview.

This goes right back to decidability, and to the point that one shouldn't expect the autoprover to prove all math by itself instantly.

Let me quote HMC's reply to the question "How does it compare to Ethereum" in the interview on CoinTelegraph
Quote from: HMC
To contrast, Tau is a decidable complete logic, per block, with Turing completeness recovered by the iteration of multiple blocks.  This means that we cannot compute any computable thing in one block, but we can know exactly how much resource expenditure a block will require to verify. We can still compute any computable thing; it may just require structuring the execution to run over the span of multiple blocks.  Because of this, Tau resources do not need to be backed directly by a token, and resource exchange can be negotiated by the users as they see fit.

In the context of the comparison with Ethereum, what I understand HMC to be saying is that the question of the maximum runtime of a contract is decidable, which allows the contract to be broken down into execution segments and scheduled over multiple blocks. Hence my question. If the only thing that is decidable for any program (you don't control what contract creators put in a contract) is the fact that the program will stop, this does absolutely nothing to make it anywhere near practical. If the program takes 5 years to stop, we aren't any better off than if it didn't stop at all.

So if I understand correctly, what HMC says above would work only for those contracts that ship together with a proof, already written explicitly by the contract developer, that establishes in polynomial time that the program will stop, because the developer made the deliberate design decision of having his program do everything in small chopped batches with yielding and continuations all over the place. Is that the spirit? And to ensure that every contract on the chain follows this paradigm, would the contracts need to come with the proof for the transaction to be validated and added to the blockchain? And do you also ship a proof that the proof indeed runs in polynomial time? And so on Smiley?

Any example of a chopped contract?

Assume I write a tau or idris program that halts on the first zero of Riemann's Zeta which is not on the critical line. Of course, no one expects the prover to determine quickly whether this program ever halts. If I knew how to do that magic, I'd have already taken $6M by solving the six open math problems of the Clay institute Smiley
That's the point. So Tau will need to rely on a very self-constrained, ultra-modular programming paradigm so that it is possible to prove that every chunk of code is both space- and time-bounded regardless of its input, while making sure that the proof itself doesn't become more complex than the chunk of code it vouches for...

I'm really curious how that will work in practice. A piece of code that demonstrates this would be more than welcome.
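For what it's worth, the "chopped batches with yielding and continuations" pattern can be caricatured with a plain coroutine (a generic sketch under my own assumptions, not a claim about how Tau actually schedules contracts):

```python
# Sketch: a long computation chopped into bounded chunks, one chunk per block.
# Each resumption runs at most `budget` iterations, so the per-block cost is
# known in advance; the "continuation" is the generator's suspended state.

def chopped_sum(n, budget):
    """Sum 1..n, suspending after every `budget` iterations."""
    total, done = 0, 0
    while done < n:
        for _ in range(min(budget, n - done)):
            done += 1
            total += done
        yield total            # suspend: "end of block", state carried over

blocks = list(chopped_sum(10, budget=3))
print(blocks)        # partial sums at each block boundary
print(blocks[-1])    # 55, the final result after the last block
```

The decidable claim then attaches to each chunk (at most `budget` iterations), not to the whole computation, which may span arbitrarily many blocks.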

So assume I wrote some simpler code with a runtime complexity that I know how to compute and prove; then I can specify my proof, or parts of it, and let the prover fill in the missing details.
Tau, the first Proof-of-PhD smart contract system Smiley

- Are there archives of IRC discussions / forum discussions / email discussions etc that can help understanding decisions made around the design?
I don't think there's anything one can easily track. ~10MB of text won't help anyone. So for now one can only ask and get answers, and with time we'll document everything.
I don't mind finding my way in a 10MB dump. I don't want to waste your time answering questions if the answers are already there somewhere.

Please let me know if I missed something or if other questions arise.
Proposed that nicely, how can I refuse? Here are a few more questions:
- Do you have specs about what exactly Agora is going to be, other than the story of Bob and Alice? What kinds of assets will Agora allow to be traded? Will it feature a full-fledged distributed exchange like Ripple? Will it support other market paradigms like auctions or prediction markets?
- Have you started working on Agora?
- How long do you estimate you will need to complete Agora?
- Tau ETA?
- Agora ETA?
ohad
Hero Member
*****
Offline Offline

Activity: 725

http://idni.org


View Profile WWW
August 30, 2015, 03:08:16 AM
 #263

Maybe to phrase what he is asking differently: after various markets are created over Tau (Agoras, which will require the Agoras coin to use them), what stops one from creating competing markets, also over tau (parallel to Agoras), with BTC, ETC, or any other crypto tools? Won't these have an advantage over the Agoras market (an advantage in the sense that they'll utilize other coins without limits)?

OR, even if the above is correct, is your intention with Agoras to enable features that can utilize any coin, but that, like in Ethereum, will require Agoras as an underlying gas/fee/whatever for every transaction, while also enabling it as a "classic coin" within that inner world?

Thanks for the questions,

1. Whatever can be implemented with "old" coins like BTC will probably be supported on Agoras, of course with limited features.
2. There will be no "gas" or fees for using tau or agoras. Otherwise, someone would create a clone with no fees, as you pointed out. We do try to come up with a design that is hard to compete with, of course, by all means available to us.
3. No one essentially prevents someone from creating an Agoras competitor over tau, but it'll be hard to compete with tau's developers, especially in introducing the features we've been thinking about for over a year. No one has even come up with a "small and simple" feature of agoras like Zennet. But again, I think this points to a kind of risk that exists in any project and usually doesn't materialize, yet we do keep it in mind and try to make it more valuable to join us rather than compete with us.

Please let me know if this answers your questions.

Tau-Chain & Agoras
ohad
Hero Member
*****
Offline Offline

Activity: 725

http://idni.org


View Profile WWW
August 30, 2015, 03:08:35 AM
 #264

Quote from: klosure on August 29, 2015, 06:45:08 PM (post #262, quoted in full above)

Thanks for your questions, they raise important clarifications:

1. The OWL family is indeed a widely used ontology and is known to suffer from logical complexities. Our goal is to implement RDFS (which is relatively low-level in this scope), implement our type system above it (which probably amounts to adding the predicates part-of and is-type-of; a good ref is ceur-ws.org/Vol-354/p63.pdf), and recover OWL2 over those primitives.
I do not claim that any OWL ontology would be interpreted on tau just as by any other OWL reasoner, and I do not promise any "easy migration" of existing code, as long as it goes beyond plain facts (i.e. statements without vars or rules).
So what I can tell now about tau's lang is:
a. The syntax is RDF-family.
b. RDFS builtins will be supported as-is.
c. OWL will be supported as well but might not keep all of its common semantics' subtleties, as they change from reasoner to reasoner.
d. The rdf:type predicate will be interpreted in a Martin-Löf dependent-type framework, as will typechecking.
e. The language will also have Auth types for proof of execution (as in lambda-auth) and Effect types for IO.
f. The reasoner will be shipped with some builtins; this touches your next question and I'll expand on it in a moment.


2. The logic itself can describe everything that is computable, in other words, "all math that matters". It is so strong that it is considered a new foundation of mathematics in general; see for example HoTT (Homotopy Type Theory). A great feature of Martin-Löf type theory is that you cannot even express a contradiction (e.g. you can't even express Russell's paradox), therefore you can never be inconsistent.

3. Tau is not structured upon CWM at all. We do use Euler and Yieldprolog as references for inference and Idris/Agda as references for the type system, but not CWM. Moreover, I'm not aware of any open-source, stand-alone, performance-oriented C++ reasoners out there that support N3Logic. We write everything from scratch, and in fact we have already finished writing the part covering reasoning over triples. From here on (except some more parsing features), it is all features that RDF reasoners haven't met yet, at least not directly (like euler.yap jitting indirectly over swipl), to my knowledge.

4. It is still too early to answer what is valid Tau and what is not. We will have to finish developing the language before we can say firmly "this is it".

5. To write "good" tau code, one would definitely have to think it through very carefully, like any other construction. I do not claim that every first grader could write a decentralized application over tau in natural language. But tau lets you focus on the logic and meaning of what you say, and helps you express things unambiguously. Even if this turns out to be more complex than, say, Python (which is of course a matter of taste), there are two important points to take into account:

a. Tau can have more frontends. The "real" interface is nothing but quads. Any sugaring over them can be de-sugared and fed to tau.
b. The more important point is code reuse, and picturing how programming would actually take place over tau. Say you want to write some complex program and begin to break it up into functions and structures. Say, for example, that one required structure is a red-black tree and one required function is balancing the tree. What you'd do nowadays is not reinvent the wheel but google this and find some code snippets for reuse. Those snippets may or may not fit your needs, and you still have to adapt them (if not reimplement them) to fit your code.
On the other hand, given tau, you can formalize only the requirements for the structure and the functions, and from here you have three options: either that code already exists in tau's records (it might even be that your formal reqs themselves already exist) and you can find code that provably meets your needs; or you can offer a reward to anyone who provably meets your requirements; or you can write the code yourself, but this time the reasoner can give you great help, since it can prove you correct or wrong in many cases. In the end, you end up with code that you can trust even more than after QA, since you have a mathematical proof that it meets its requirements.

6. It is worth mentioning that we intend to postpone to post-genesis everything that can be postponed to post-genesis. The a-f points I mentioned above are essential at genesis, together with more features of course. Even the JIT is needed in order to specify the order of execution (possibly to future compilers) so clients will be able to agree upon proof of execution. On this theme, all we do is provide a reference client that is usable and good enough. Of course, other implementations of the tau protocol are possible, just like there are many different bitcoin clients.
This might also shed some light on the common wondering "it must take years to build that monster". We do tend to pick the shortest path, and we believe it is shorter than estimated at first sight.

7. The IRC channel has become "home" for several friends and contains many personal and off-topic conversations. This is why I never published the logs in public. Nevertheless, you can simply join the channel and see right there how to get the logs right away.

8. I don't have specs for tau or agoras beyond what is already published. Things are fluid during dev, of course, and we'd better finish the client before we can give exact directions on how to use it or what the features will be. Pretty much the same goes for the ETA. I can throw numbers into the air, while a priori every number would be wrong. I can tell you that 6 months till tau's genesis sounds horribly pessimistic to me, and another 6 months for agoras sounds strongly pessimistic. Sorry for being vague, but think about it for a second and you'll see that any other answer of mine would be a lie; all I can do is give you the most truthful answers I have myself.

9. Have a look at http://www.cs.ru.nl/~freek/100. Of course, the syntax is terrible, and this, too, is why we build tau (HMC: apologies for using your catchphrase).

10. "If the only thing that is decidable for any program (you don't control what contract creators put in a contract) is the fact that the program will stop, this does absolutely nothing to make it anywhere near practical. If the program takes 5 years to stop, we aren't any better off than if it didn't stop at all." -- No, *everything* that can be expressed in the language is decidable! You can prove *all* *correct* theorems, and you can't even express their negation (i.e. express a contradiction).

I have to go now and I might add some more later. Will be happy to come back and see more questions Smiley

Tau-Chain & Agoras
klosure
Jr. Member
*
Offline Offline

Activity: 46


View Profile
August 30, 2015, 05:49:32 AM
 #265

Thanks for the answers and pointers.

From a stylistic perspective, however, I think the cross-sectional style in which you answer doesn't help clarity, as there are bits of answers to every question laid out in a completely different flow. Although this may seem to make more sense to you, given that you have more perspective on how things fit together, people who are less familiar with the subject will have difficulty understanding how your answers address the questions. The numbering you introduce doesn't help, since it doesn't map to anything in the set of questions.

Another problem with answering cross-sectionally is that it leaves a lot of stuff unanswered. For instance, here you haven't replied about:
- the Tau-specific builtins that you have already developed and/or specified.
- the discussion about HMC's quote and the way computation is meant to be broken down into code segments whose time and space complexity is bounded and known (see the question again for the details), and the way continuation will be handled between these segments.
- how / whether you will have to prove that the proof itself can be verified in polynomial time, and how to avoid this requirement leading to an infinite recursion of proofs.

Let's try to stick to the conventional quote-and-answer style going forward. Thanks.
ohad
Hero Member
*****
Offline Offline

Activity: 725

http://idni.org


View Profile WWW
August 30, 2015, 06:15:16 AM
 #266

Thanks for the answers and pointers.

From a stylistic perspective, however, I think the cross-sectional style in which you answer doesn't help clarity, as there are bits of answers to every question laid out in a completely different flow. Although this may seem to make more sense to you, given that you have more perspective on how things fit together, people who are less familiar with the subject will have difficulty understanding how your answers address the questions. The numbering you introduce doesn't help, since it doesn't map to anything in the set of questions.

Well, sometimes, like in this case, I feel a need to step back and give more background info. The numbering has nothing to do with the order of the questions.

Another problem with answering cross-sectionally is that it leaves a lot unanswered. For instance, here you haven't replied about:
- the Tau-specific builtins that you have already developed and/or specified.
- the discussion about HMC's quote, the way computation is meant to be broken down into code segments whose time and space complexity is bounded and known (see the question again for the details), and the way continuation will be handled between these segments.
- the question of how / whether you will have to prove that the proof itself can be verified in polynomial time, and how to avoid this requirement leading to an infinite recursion of proofs.

Let's try to stick to the conventional quote-and-answer style going forward. Thanks.

Indeed I missed a few clarifications; let me address them right now:

Builtins: it is important to bear in mind two things:
1. Builtins aren't really needed except for IO. Everything else can be described by the logic itself.
2. Builtins should be quite minimal. The rationale is that a client will have to be able to replay the chain from genesis, so the root chain must contain code that the clients support.
That said, we will ship tau with some basic builtins (not to mention the DHT and chain), like basic math, logic and string operations, as is common in reasoners. The currently supported "small types" (which are, again, just for convenience, like builtins) are XML types: XSD_INTEGER, XSD_STRING etc. So tau will have builtins to support basic operations on such types.

Computation on chain: What HMC mentioned applies to computation that is done on the chain only, i.e. a computation that is shared by everyone trustlessly. An application that runs locally and uses the chain only for certain operations (like a BTC wallet) simply runs on the local machine and does not need to wait for the next block in order to continue local execution.
Recovering Turing completeness at this scope relies on the notion that Turing completeness can be recovered from a total functional programming language by successive calls to it. Therefore, over time, computation on chain also becomes Turing complete.
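The "successive calls" idea can be sketched concretely. Below is a minimal illustration (not Tau code; the function names and the factorial example are invented for this sketch) of how an outer driver that repeatedly invokes a provably terminating step function recovers unbounded computation, with the saved state playing the role of the continuation carried between blocks:

```python
# Sketch: recovering unbounded computation from a provably-terminating
# step function. Each call to `step` does a bounded amount of work and
# either finishes or returns its saved state (the continuation) to be
# resumed in a later "block". Names and example are invented, not Tau code.

def step(state, budget=3):
    """Run at most `budget` iterations of a factorial computation."""
    n, acc = state
    for _ in range(budget):
        if n <= 1:
            return ("done", acc)
        acc, n = acc * n, n - 1
    return ("more", (n, acc))    # yield; resume on the next block

def run_across_blocks(n):
    """Outer driver: successive calls to the total `step` function."""
    status, state = "more", (n, 1)
    while status == "more":
        status, state = step(state)
    return state

print(run_across_blocks(10))  # 3628800
```

Each individual `step` call is total with a known iteration bound; only the outer loop, which in the on-chain setting is spread over successive blocks, is unbounded.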

Verification time: Neither the reasoner nor the verifier can ever get into an infinite loop (due to Euler path detection). The proof can be verified very quickly since it is written as a list of terms derived from each other, annotated with the rule and the substitution that yielded each derivation. Verification is therefore simply linear.
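To illustrate why such a check is linear: a hypothetical verifier needs only one pass over the derivation list, confirming at each step that the annotated rule applies and that its premises occur earlier in the list. The rule set and proof encoding below are toy stand-ins, not Tau's actual format:

```python
# Sketch: why checking a derivation list is linear in its length. A
# "proof" is a list of steps; each step names a rule and the indices of
# earlier steps it derives from, so one pass over the list suffices.
# The rule set and proof encoding are toy stand-ins, not Tau's format.

RULES = {
    "axiom":     lambda n_premises: n_premises == 0,  # needs no premises
    "and_intro": lambda n_premises: n_premises == 2,  # combines two facts
}

def check_proof(proof):
    for pos, step in enumerate(proof):
        rule, premises = step["rule"], step["premises"]
        if rule not in RULES or not RULES[rule](len(premises)):
            return False
        if any(i >= pos for i in premises):  # premises must come earlier
            return False
    return True

proof = [
    {"term": "A",       "rule": "axiom",     "premises": []},
    {"term": "B",       "rule": "axiom",     "premises": []},
    {"term": "A and B", "rule": "and_intro", "premises": [0, 1]},
]
print(check_proof(proof))  # True
```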

Tau-Chain & Agoras
klosure
Jr. Member
*
Offline Offline

Activity: 46


View Profile
August 30, 2015, 07:09:53 AM
 #267

Well sometimes like in this case I feel a need to step back and give more bg info. The numbering has nothing to do with the order of the questions.
Giving an integrated view is useful and provides some perspective, but only if the rest is clear enough to start with and allows people to step back and contemplate the whole thing. For this reason I'd like to keep high-level answers separate from answers to specific questions. Although this will introduce a bit of redundancy, can we try to go with this flow:
- answer questions in quote-and-answer style
- provide some perspective after the questions when applicable

Putting some more structure into this thread is going to cost you a bit of time now, but it's a worthwhile investment, as it will allow potential contributors to jump in, not to mention that many investors are probably waiting on the sidelines until they get some kind of understanding of what Tau is really about. Even for those potential investors who won't understand the details, it still helps to sense that there is a wider understanding in the community and a growing consensus that Tau is achievable and standing on a sound theoretical base.

Besides, when the time comes to write some documentation, you'll have some content ready to copy-paste from this thread.

1. Builtins aren't really needed except for IO. Everything else can be described by the logic itself.
This really depends on which context you place yourself in. The logic itself is only concerned with describing and inferring relations between symbols in a mechanical, provable and objective way. It's entirely disconnected from the world. To be able to use the logic to process actual knowledge about the world, you need to add semantics to non-logical symbols via an interpretation. For that purpose, defining the vocabulary and ontologies precisely is of utmost importance for everyone interested in making any practical real-world use of Tau, even if this has no bearing on the way Tau is going to work at the logical level. Sensor and effector built-ins are even more important, because they connect the logic to the state of the world, allowing it to query or affect that state.

You could complete the core of Tau without ever worrying about the way it's going to interface with the world, but then what you release is an autistic, solipsistic reasoner that has no practical use other than making and verifying constructive proofs. We all know that's not the plan, so you will have to specify, develop and document a standard library for Tau that will allow it to interact with the world in a way that is standardized well enough to allow consensus on its semantics and effects. Until this is done you will have 0% adoption, because Tau will be essentially useless. You could wait longer to start worrying about that, but then you lose a lot of benefits, such as showing real-world examples of what Tau can do, and having contributors help you with developing that layer.

2. Builtins should be quite minimal. The rationale is that a client will have to be able to replay the chain from genesis, so the root chain must contain code that the clients support.
I second that. Built-ins are the weak point in the system because they need to run as native code (as opposed to Tau logic) that can't be formally tested by the logic. The more builtins there are, the more attack surface there is to introduce backdoors and/or find zero-day exploits. Built-ins should also be entirely orthogonal, meaning that there should be one and only one way to do something, and everybody should be using that way. Everything else should be done in Tau. Actually, if you have a good JIT system that allows Tau code to run at a decent performance level, I would even advise against using built-ins for the DHT. Basic network and cryptographic built-ins are sufficient to restore the full DHT functionality intrinsically. Whenever a built-in is needed for performance, what you could do is let people decide in the bootstrap file whether they want to use the native built-ins or the pure-logic version. You could also test the native built-ins using a Tau-level model. After sufficient unit-test runs against the boundary and median values of the input parameters' range of definition, as well as randomized tests, one could decide (in the "belief" sense) with enough confidence that the native built-in and the pure logic are indeed equivalent. That would even allow reimplementing as native code pieces of logic with no side effects but a pretty heavy computational profile (cryptographic operations come to mind).
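The testing proposal above can be sketched as follows, with both implementations as toy stand-ins (a native-speed `+` versus a successor-by-successor reference): run the pair on boundary values plus randomized inputs, and accept equivalence, in the "belief" sense, only when they never disagree:

```python
# Sketch of testing a native builtin against its pure-logic model:
# boundary values plus randomized inputs, accepting equivalence (in the
# "belief" sense) when the two never disagree. Both implementations are
# toy stand-ins, not actual Tau builtins.
import random

def native_add(a, b):            # stands in for a fast native builtin
    return a + b

def logic_add(a, b):             # stands in for the pure-logic version
    while b > 0:                 # successor-by-successor addition
        a, b = a + 1, b - 1
    return a

def appear_equivalent(f, g, lo=0, hi=1000, trials=200):
    cases = [(lo, lo), (lo, hi), (hi, lo), (hi, hi)]   # boundary values
    cases += [(random.randint(lo, hi), random.randint(lo, hi))
              for _ in range(trials)]                   # randomized tests
    return all(f(a, b) == g(a, b) for a, b in cases)

print(appear_equivalent(native_add, logic_add))  # True
```

This only builds confidence, not proof: a disagreement outside the sampled range would go unnoticed, which is exactly why it belongs to the "belief" category.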

Computation on chain: What HMC mentioned applies to computation that is done on the chain only, i.e. a computation that is shared by everyone trustlessly. An application that runs locally and uses the chain only for certain operations (like a BTC wallet) simply runs on the local machine and does not need to wait for the next block in order to continue local execution.
Recovering Turing completeness at this scope relies on the notion that Turing completeness can be recovered from a total functional programming language by successive calls to it. Therefore, over time, computation on chain also becomes Turing complete.
This was implied in the question. The context of the question, the whole article, and HMC's answer assume that a blockchain supporting pseudo-Turing-complete contracts has been implemented over Tau, and we are discussing how contracts that in Ethereum would need to be preempted by the miner upon exhausting their resources could be made (or proven) to yield by themselves in Tau and resume on the next block. So the question is how you do that. Actually the question is more complex than that, so the best would be to quote it and start explaining from there.
ohad
Hero Member
*****
Offline Offline

Activity: 725

http://idni.org


View Profile WWW
August 30, 2015, 07:41:31 AM
 #268

1. Builtins aren't really needed except for IO. Everything else can be described by the logic itself.
This really depends on which context you place yourself in. The logic itself is only concerned with describing and inferring relations between symbols in a mechanical, provable and objective way. It's entirely disconnected from the world. To be able to use the logic to process actual knowledge about the world, you need to add semantics to non-logical symbols via an interpretation. For that purpose, defining the vocabulary and ontologies precisely is of utmost importance for everyone interested in making any practical real-world use of Tau, even if this has no bearing on the way Tau is going to work at the logical level. Sensor and effector built-ins are even more important, because they connect the logic to the state of the world, allowing it to query or affect that state.

Let me give an example of what I meant. Roughly speaking, one can think of two ways to process integers: using the CPU's built-in instructions, or introducing number theory and letting the calculations follow from the rules. The latter method sounds insane, but some things really work like that in real life, for example https://github.com/idris-lang/Idris-dev/blob/master/libs/prelude/Prelude/Nat.idr
So builtins provide the former option. You don't have to do simple math or string operations in logic unless you need to. You can add integers using the builtin math:add, for example.
Effects, on the other hand, definitely have to have well-defined builtins and policies, and cannot be implemented as part of the pure reasoning process.
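A toy rendering of the two options, in the spirit of the linked Prelude.Nat: the builtin path is the machine's `+`, while the rule-based path defines addition over Peano numerals and lets the answer follow from the rules. The encoding below is illustrative only:

```python
# Sketch of the two ways to process integers: the machine's builtin `+`
# versus arithmetic derived from rules, in the spirit of Idris's
# Prelude.Nat. The Peano encoding below is illustrative only.

Z = ("Z",)                       # zero
def S(n): return ("S", n)        # successor

def plus(a, b):
    """Addition by the rules  Z + b = b  and  S(a) + b = S(a + b)."""
    return b if a[0] == "Z" else S(plus(a[1], b))

def to_int(n):
    """Read a Peano numeral back into a machine integer."""
    return 0 if n[0] == "Z" else 1 + to_int(n[1])

two, three = S(S(Z)), S(S(S(Z)))
print(to_int(plus(two, three)))  # 5 -- the same answer the builtin gives
```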

You could complete the core of Tau without ever worrying about the way it's going to interface with the world, but then what you release is an autistic, solipsistic reasoner that has no practical use other than making and verifying constructive proofs.

I cannot postpone effects for later. Maybe for tau as a stand-alone cool toy, but not for the Tau Chain: the client has to be shipped with the genesis hard-coded (as in BTC), and has to be able to bootstrap from there into downloading and executing all blocks.

We all know that's not the plan, so you will have to specify, develop and document a standard library for Tau that will allow it to interact with the world in a way that is standardized well enough to allow consensus on its semantics and effects. Until this is done you will have 0% adoption, because Tau will be essentially useless. You could wait longer to start worrying about that, but then you lose a lot of benefits, such as showing real-world examples of what Tau can do, and having contributors help you with developing that layer.

Indeed, I put strong emphasis on the points you mentioned, and maybe this is part of the reason I cannot give final answers yet.

2. Builtins should be quite minimal. The rationale is that a client will have to be able to replay the chain from genesis, so the root chain must contain code that the clients support.
I second that. Built-ins are the weak point in the system because they need to run as native code (as opposed to Tau logic) that can't be formally tested by the logic. The more builtins there are, the more attack surface there is to introduce backdoors and/or find zero-day exploits. Built-ins should also be entirely orthogonal, meaning that there should be one and only one way to do something, and everybody should be using that way. Everything else should be done in Tau. Actually, if you have a good JIT system that allows Tau code to run at a decent performance level, I would even advise against using built-ins for the DHT. Basic network and cryptographic built-ins are sufficient to restore the full DHT functionality intrinsically. Whenever a built-in is needed for performance, what you could do is let people decide in the bootstrap file whether they want to use the native built-ins or the pure-logic version. You could also test the native built-ins using a Tau-level model. After sufficient unit-test runs against the boundary and median values of the input parameters' range of definition, as well as randomized tests, one could decide (in the "belief" sense) with enough confidence that the native built-in and the pure logic are indeed equivalent. That would even allow reimplementing as native code pieces of logic with no side effects but a pretty heavy computational profile (cryptographic operations come to mind).

Computation on chain: What HMC mentioned applies to computation that is done on the chain only, i.e. a computation that is shared by everyone trustlessly. An application that runs locally and uses the chain only for certain operations (like a BTC wallet) simply runs on the local machine and does not need to wait for the next block in order to continue local execution.
Recovering Turing completeness at this scope relies on the notion that Turing completeness can be recovered from a total functional programming language by successive calls to it. Therefore, over time, computation on chain also becomes Turing complete.
This was implied in the question. The context of the question, the whole article, and HMC's answer assume that a blockchain supporting pseudo-Turing-complete contracts has been implemented over Tau, and we are discussing how contracts that in Ethereum would need to be preempted by the miner upon exhausting their resources could be made (or proven) to yield by themselves in Tau and resume on the next block. So the question is how you do that. Actually the question is more complex than that, so the best would be to quote it and start explaining from there.

Quote
In the context of the comparison with Ethereum, what I understand HMC to be saying is that the question of the maximum runtime of a contract is decidable, which allows breaking the contract down into execution segments and scheduling them over multiple blocks.

Correct.

Quote
Hence my question. If the only thing that is decidable for any program (you don't control what contract creators put in a contract) is the fact that the program will stop, this does absolutely nothing to make it anywhere near practical. If the program takes 5 years to stop, we aren't any better off than if it didn't stop at all.

Everything (once expressed in MLTT) is decidable: not only halting but all statements, including the exact runtime complexity. And of course you do control what the contract creators put in the contract: its code is open, and you can query the reasoner with any question you have regarding it. If the reasoner doesn't answer instantly, you may ignore the contract. Conversely, the contract creators can supply a proof for "interesting" assertions.

Quote
So if I understand correctly, what HMC says above would work only for those contracts that ship together with a proof, already written explicitly by the contract developer, that establishes in polynomial time that the program will stop, because the developer made the deliberate design decision of having his program do everything in small chopped batches with yielding and continuations all over the place. Is that the spirit? And to ensure that every contract on the chain follows this paradigm, contracts would need to come with the proof for the transaction to be validated and added to the blockchain? And do you also ship a proof that the proof indeed runs in polynomial time? And so on Smiley?

The time it takes to verify a proof is given by the size of the proof (since verification is linear), so that complexity is easy to predict. Some verifications are even logarithmic in the proof size, like in amiller's merkle tree.
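A minimal sketch of the logarithmic case: verifying membership in a Merkle tree requires only a path of sibling hashes whose length is the log of the leaf count. This is a generic Merkle example for illustration, not amiller's specific construction:

```python
# Sketch: Merkle membership proof, verifiable in time logarithmic in the
# number of leaves. Generic illustration (assumes a power-of-two leaf
# count), not amiller's actual construction.
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    level = [h(x) for x in leaves]
    while len(level) > 1:
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def prove(leaves, index):
    """Collect the sibling hashes on the path from leaf to root."""
    level, path = [h(x) for x in leaves], []
    while len(level) > 1:
        path.append((level[index ^ 1], index % 2 == 0))  # (sibling, am-I-left?)
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return path

def verify_path(leaf, path, root):
    """Recompute the root from the leaf and its sibling path."""
    node = h(leaf)
    for sibling, node_is_left in path:
        node = h(node + sibling) if node_is_left else h(sibling + node)
    return node == root

leaves = [b"a", b"b", b"c", b"d"]
root = merkle_root(leaves)
print(verify_path(b"c", prove(leaves, 2), root))  # True
```

The verifier never sees the other leaves; it touches only the path, which is why the check stays sublinear in the data size.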
The questions of whether to require a proof with every contract, or in which cases to require one, or a proof of what exactly, or what the maximum length allowed is, and so on: this is almost all context- (app-) dependent. I say "almost" because such considerations also exist on the root chain, but this again goes back to tau's users. Those are the rules of the network, which we do not set; the users set them. We tend to be as minimalistic as possible in influencing the root's rules. At genesis, tau will indeed be usable only by fans/geeks/evangelists/techies and so on. The first users will shape the root chain, specifically the rules of rulemaking of rulemaking (the rules of creating contexts or changing the root chain). As mentioned several times, tau can be equivalently described as a decentralized Nomic game. I think that is more reasonable than developers hard-coding some necessarily-arbitrary rules. What do you think?

Tau-Chain & Agoras
Tobo
Hero Member
*****
Offline Offline

Activity: 706


View Profile
August 30, 2015, 03:12:01 PM
 #269

What kind of consensus mechanism will be used for the Tau-chain?
ohad
Hero Member
*****
Offline Offline

Activity: 725

http://idni.org


View Profile WWW
August 30, 2015, 04:06:43 PM
 #270

What kind of consensus mechanism will be used for the Tau-chain?

It is for the users to decide. See last paragraph in my last comment, and I've just had a discussion about it here https://www.facebook.com/groups/870781236307340/permalink/994080593977403/

Tau-Chain & Agoras
ohad
Hero Member
*****
Offline Offline

Activity: 725

http://idni.org


View Profile WWW
August 30, 2015, 04:56:04 PM
 #271

Now we have a reddit https://www.reddit.com/r/tauchain/

Tau-Chain & Agoras
klosure
Jr. Member
*
Offline Offline

Activity: 46


View Profile
August 30, 2015, 05:52:08 PM
 #272

It is for the users to decide. See last paragraph in my last comment, and I've just had a discussion about it here https://www.facebook.com/groups/870781236307340/permalink/994080593977403/
Facebook? Seriously!? Please don't ask us to partake in this masquerade. Facebook is the complete opposite of everything that most people in the crypto, FOSS and hacking scenes stand for, and the first thing I hope Tau will help us get rid of for the sake of the collective mental health of our society.
ohad
Hero Member
*****
Offline Offline

Activity: 725

http://idni.org


View Profile WWW
August 30, 2015, 05:55:07 PM
 #273

It is for the users to decide. See last paragraph in my last comment, and I've just had a discussion about it here https://www.facebook.com/groups/870781236307340/permalink/994080593977403/
Facebook? Seriously!? Please don't ask us to partake in this masquerade. Facebook is the complete opposite of everything that most people in the crypto, FOSS and hacking scenes stand for, and the first thing I hope Tau will help us get rid of for the sake of the collective mental health of our society.

Indeed, tau is the opposite and comes to replace those systems. But... masquerade??

Tau-Chain & Agoras
klosure
Jr. Member
*
Offline Offline

Activity: 46


View Profile
August 30, 2015, 06:18:36 PM
 #274

A metaphor for all the posing, pretending, attention-seeking, courting and other superficial and emotionally immature behaviors that this platform encourages.

Can you please cross post here or on Reddit the discussion about the Nomic game so that the more privacy inclined among us can participate?
ohad
Hero Member
*****
Offline Offline

Activity: 725

http://idni.org


View Profile WWW
August 30, 2015, 06:21:15 PM
 #275

A metaphor for all the posing, pretending, attention-seeking, courting and other superficial and emotionally immature behaviors that this platform encourages.

Can you please cross post here or on Reddit the discussion about the Nomic game so that the more privacy inclined among us can participate?


Ron revived a very old FB group about tau, and I thank him for that. It doesn't mean it's going to be the main location of the action. The main locations are still BTT, idni.org, GitHub and IRC.

Tau-Chain & Agoras
ohad
Hero Member
*****
Offline Offline

Activity: 725

http://idni.org


View Profile WWW
September 01, 2015, 06:24:29 AM
 #276

New blog post: http://www.idni.org/blog/decentralized-democracy

Tau-Chain & Agoras
klosure
Jr. Member
*
Offline Offline

Activity: 46


View Profile
September 01, 2015, 07:51:47 AM
 #277

Thanks for the replies Ohad.
One earlier question was left unanswered:

Do you have specs about what exactly Agoras is going to be? I mean other than the story of Bob and Alice. What kind of assets will Agoras allow trading? Will it feature a full-fledged distributed exchange like Ripple? Will it support other market paradigms like auctions or prediction markets?

One more question: there are many references to "chains" in your answers, blog posts and papers.
- There was the blockchain planned for Zennet, which was supposed to be used for handling transactions in Zencoins, was supposed to use PoW, and wasn't supposed to have any inflation, meaning that miners would be rewarded with transaction fees alone.
- In the context of Agoras, since Agoras is a superset of Zennet, I assume that the originally planned blockchain comes along, except that instead of being implemented as a native client it will be emulated using Tau logic and network built-ins, and the tokens are now called Agoras coins, everything else being the same (is that right?). But is it still PoW? If it is, do you plan to have Tau open a server socket and let good old mining programs connect to it and do getWork calls?
- In the whitepaper about Tau, you briefly discuss using the block index of a PoW blockchain to serve as the arrow of time. This is a powerful idea, but it raises the question of how fundamental the blockchain is to the design. Would you use a blockchain implemented independently of Tau as the blueprint for time, and allow Tau logic to query time using a hardcoded built-in? Or would you let Tau programs query other Tau programs that implement a blockchain node and use whatever they say is the latest index as a time reference? The former would be a more global, reliable and trustworthy reference, but would also add complexity to the base Tau framework and raise the question of how to incentivize the miners of the fundamental chain.
- In your latest blog post, as well as in many of your answers, you seem to be saying that some of the Tau logic will be stored and exchanged over a blockchain, which will allow nodes to share and publish knowledge. So would that chain be a native chain (implemented not in Tau logic but in C++, for instance), or are we talking about a chain that would be created in the Tau bootstrapping file? Is this chain related to Agoras or not? Does this chain have economic incentives? If it does, what are they? If not, how do you ensure that the chain remains consistent?
- Last but not least, you draw a parallel in your latest blog article between chains and contexts. For example, you talk about a "main chain" to refer to some sort of default context. Is the "chain" term in that article still referring to the "chain" in blockchain, or is a "chain" a linked list of context elements multiplexed into the physical blocks of a real blockchain?

I could go on, but the main point here is that it's not at all clear (not to me, at least) at which level(s) you are planning to introduce blockchains, what they will be used for at each specific level, what type of incentive scheme will be introduced to keep these structures consistent across the network, and how all that relates to Tau and/or Agoras.

I would appreciate if you can address the points above one by one in your answer.
ohad
Hero Member
*****
Offline Offline

Activity: 725

http://idni.org


View Profile WWW
September 01, 2015, 10:11:00 AM
 #278

Thanks for the replies Ohad.
One earlier question was left unanswered:

Do you have specs about what exactly Agoras is going to be? I mean other than the story of Bob and Alice. What kind of assets will Agoras allow trading? Will it feature a full-fledged distributed exchange like Ripple? Will it support other market paradigms like auctions or prediction markets?
One more question: there are many references to "chains" in your answers, blog posts and papers.
- There was the blockchain planned for Zennet, which was supposed to be used for handling transactions in Zencoins, was supposed to use PoW, and wasn't supposed to have any inflation, meaning that miners would be rewarded with transaction fees alone.
- In the context of Agoras, since Agoras is a superset of Zennet, I assume that the originally planned blockchain comes along, except that instead of being implemented as a native client it will be emulated using Tau logic and network built-ins, and the tokens are now called Agoras coins, everything else being the same (is that right?). But is it still PoW? If it is, do you plan to have Tau open a server socket and let good old mining programs connect to it and do getWork calls?

The workflow of the client can be summarized as follows:
We have a peer-to-peer network in which every peer holds an ontology (its rules). Peers also speak among themselves in the same language: they query each other's reasoners, though indirectly. A wrapper around the reasoner receives the event and queries the reasoner, "what should I do now?", together with the event's information.
Therefore, one can quite naturally add rules for what happens when some query arrives, even locally. The rules don't necessarily have to be explicit, as their consequences are calculated by the reasoner.
The query itself can be a computation, some code to run, and even native code: I forgot to mention that we do plan to have FFI builtins. Of course, one has to allow using them, and the type system should handle them correctly, e.g. such types cannot be Auth types (execution verification).
What to do when a query to run code arrives, how much to charge, under what conditions: it all amounts to local rules in the client. Therefore, Zennet's design has completely lost its meaning, except for the pricing formula zennet.sc/zennetpricing.pdf, since it all amounts to a little piece of tau code. Renting computational power won't need a distinct chain or a side chain; only the coin it uses has to have its timestamped ledger. Other data transfer can be done using the DHT layer, or by any other p2p rules.
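The wrapper-around-the-reasoner workflow can be caricatured as follows; the rule table and function names are invented for this sketch and are not part of any Tau API:

```python
# Sketch of the described client workflow: a wrapper receives a network
# event and asks the local reasoner "what should I do now?". The rule
# table and the reasoner itself are toy stand-ins for illustration.

LOCAL_RULES = {                 # this peer's ontology: event kind -> action
    "query":    "answer_via_reasoner",
    "run_code": "check_price_then_execute",
}

def reasoner(event):
    """Toy reasoner: derive the action the local rules prescribe."""
    return LOCAL_RULES.get(event["kind"], "ignore")

def on_event(event):
    """Wrapper: pass the event's information to the reasoner and act."""
    return {"event": event, "action": reasoner(event)}

print(on_event({"kind": "run_code", "payload": "..."})["action"])
# check_price_then_execute
```

In the real system the "table" would be the peer's ontology and the lookup a full reasoning query, so consequences of implicit rules would also be derived.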

The builtins will have multiple levels of abstraction. DHT primitives (like get/set) and blockchain primitives (like getWork) will be builtins in themselves, alongside more low-level ones like sockets.

Back to what assets Agoras will allow trading: I tend to adhere to the principle of giving users maximum flexibility with minimum intervention. So the goal is to implement a stable coin, and a market where participants may place their bids, whether for computational power or coding or anything else, with a user interface. Ready-made rulesets for such common operations will also be supplied. In addition, there are more applications that need a currency other than markets. The one I'm planning to focus on under Agoras is a search engine. Remember that with a large enough network, tau can crawl the web overnight. And after that night you don't end up with the one-line interface that Google gives you: the whole database is open. Of course, computational costs have to be taken care of carefully, so we plan to make the search engine self-incentivizing. In general, we'll do our best to make Agoras the best choice as a platform for any application that involves currency.

- In the whitepaper about Tau, you briefly discuss using the block index of a PoW blockchain to serve as the arrow of time. This is a powerful idea, but it raises the question of how fundamental the blockchain is to the design. Would you use a blockchain implemented independently of Tau as the blueprint for time, and allow Tau logic to query time using a hardcoded built-in?

Yes, but the ontologies will have a rich API to the DHT & chain parameters, and can have wide control over their flow. A simple example is the conditions for a block to be accepted. Chain builtins will also supply primitives of "before" and "after" that can be Auth types yet external, thanks to the chain's timestamping.

Or would you let Tau programs query other Tau programs that implement a blockchain node and use whatever they say is the latest index as a time reference? The former would be a more global, reliable and trustworthy reference, but would also add complexity to the base Tau framework and raise the question of how to incentivize the miners of the fundamental chain.

Not exactly. As above, ontologies will control the chain only up to some extent; the general flow of DHT & chain is somewhat hard-coded.

- In your latest blog post, as well as in many of your answers, you seem to be saying that some of the Tau logic will be stored and exchanged over a blockchain, which will allow nodes to share and publish knowledge. So would that chain be a native chain (implemented not in Tau logic but in C++, for instance), or are we talking about a chain that would be created in the Tau bootstrapping file? Is this chain related to Agoras or not? Does this chain have economic incentives? If it does, what are they? If not, how do you ensure that the chain remains consistent?

Information is not stored on the chain: it is stored on the DHT layer, and in case a timestamp is needed, a merkle root alone is enough to get into the chain. The root chain has to contain the protocol's definitions (the client's code) and merkle roots for timestamping.
In any case, we always speak about one chain only; more chains arise only at the scope of pegging them to the root chain, as sidechains do.

- Last but not least, you draw a parallel in your latest blog article between chains and contexts. For example, you talk about a "main chain" to refer to some sort of default context. Is the "chain" term in that article still referring to the "chain" in blockchain, or is a "chain" a linked list of context elements multiplexed into the physical blocks of a real blockchain?

When I say 'chain' in our scope I always mean 'blockchain'.

I could go on, but the main point here is that it's not at all clear (not to me, at least) at which level(s) you are planning to introduce blockchains, what they will be used for at each specific level, what type of incentive scheme will be introduced to keep these structures consistent across the network, and how all that relates to Tau and/or Agoras.

An incentive scheme is indeed extremely important (if not vital) to the future of the network. Still, the first users have to define it, or better, define how it can be changed. I do have thoughts and opinions about that, and I will vote and propose accordingly over the system.

Thanks for the good questions, will be happy for more.

Tau-Chain & Agoras
skunk
Sr. Member
****
Offline Offline

Activity: 320


View Profile
September 01, 2015, 10:48:33 AM
 #279

Interesting... do you know Bitnation? Maybe the two projects could join each other...

redfish64
Jr. Member
*
Offline Offline

Activity: 32


View Profile
September 02, 2015, 02:46:13 AM
 #280

I was wondering if anyone could recommend some reading material, or at least a general direction of what to study, to help those with a primarily imperative programming background (C++, Java, Perl, etc.) grasp the concepts of Tau better.

I've worked through some of "The Reasoned Schemer", and right now I'm learning Haskell, planning to move on to Idris once I get the basics down.

Does this seem like a good path? Other material?


