Bitcoin Forum
Author Topic: Tau-Chain and Agoras Official Thread: Generalized P2P Network  (Read 258549 times)
ohad (Hero Member, Activity: 878, http://idni.org)
August 30, 2015, 06:15:16 AM, #261

Quote
Thanks for the answers and pointers.

From a stylistic perspective, however, I think the cross-sectional style in which you answer doesn't help clarity: bits of answers to every question are laid out in a completely different flow. Although this may seem to make more sense to you, given your broader perspective on how things fit together, people less familiar with the subject will have difficulty understanding how your answers address the questions. The numbering you introduce doesn't help, since it doesn't map to anything in the set of questions.

Well, sometimes, as in this case, I feel a need to step back and give more background information. The numbering has nothing to do with the order of the questions.

Quote
Another problem with answering cross-sectionally is that it leaves a lot unanswered. For instance, here you haven't replied about:
- the Tau-specific builtins that you have already developed and/or specified;
- the discussion of HMC's quote and the way computation is meant to be broken down into code segments whose time and space complexity are bounded and known (see the question again for details), and the way continuations will be handled between these segments;
- the question of how/whether you will have to prove that the proof itself can be verified in polynomial time, and how to avoid this requirement leading to an infinite recursion of proofs.

Let's try to stick to the conventional quote-and-answer style going forward. Thanks.

Indeed I missed a few clarifications, so let me address them right now:

Builtins: it is important to bear in mind two things:
1. Builtins aren't really needed except for IO. Everything else can be described by the logic itself.
2. Builtins should be quite minimal. The rationale is that a client will have to be able to replay the chain from genesis, so the root chain must contain code that the clients support.
That said, we will ship tau with some basic builtins (aside from DHT and chain), like basic math, logic and string operations, as is common in reasoners. The currently supported "small types" (which, like builtins, are again just for convenience) are XML Schema types: XSD_INTEGER, XSD_STRING etc. So tau will have builtins to support basic operations on these.

Computation on chain: What HMC mentioned concerns computation that is done on the chain only, i.e. a computation that is shared by everyone trustlessly. An application that runs locally and uses the chain only for certain operations (like a BTC wallet) simply runs on the local machine and does not need to wait for the next block in order to continue local execution.
Recovering Turing completeness at this scope relies on the notion that Turing completeness can be recovered from a total functional programming language (tfpl) by successive calls to it. Therefore, over time, computation on chain also becomes Turing complete.
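The idea of recovering unbounded computation through successive calls to a total language can be sketched as a trampoline (illustrative Python, not Tau code; the names are hypothetical): each call is a provably terminating step that either finishes or hands back a continuation state for the next block.

```python
# Each per-block step does a bounded amount of work and always terminates;
# unbounded computation is recovered by chaining steps across blocks, with
# the chain itself supplying the iteration.

def factorial_step(state):
    """One bounded unit of work on state (n, acc).
    Returns ('done', value) or ('more', next_state); always terminates."""
    n, acc = state
    if n <= 1:
        return ("done", acc)
    return ("more", (n - 1, acc * n))

def run_over_blocks(step, state):
    """Drive a total step function to completion, one step per 'block'."""
    blocks = 0
    while True:
        tag, payload = step(state)
        blocks += 1
        if tag == "done":
            return payload, blocks
        state = payload

value, blocks = run_over_blocks(factorial_step, (5, 1))
print(value, blocks)  # 120 5
```

No single call can loop forever, yet the composition across blocks computes any partial recursive function, which is the point being made above.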

Verification time: Neither the reasoner nor the verifier can ever enter an infinite loop (thanks to Euler path detection). The proof can be verified very quickly, since it is written as a list of terms derived from each other, each annotated with the rule and the substitution that yielded the derivation. Verification is therefore simply linear.
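As a toy illustration of why such checking is linear (a hypothetical term representation, not Tau's actual proof format): a proof is a list of steps, each annotated with the rule used and the indices of the earlier steps it depends on, and the verifier makes a single pass over the list.

```python
# Toy term language: atoms are strings, an implication is ("->", a, b).
# A proof is a list of (term, rule, premise_indices) steps; verification is
# one linear pass, checking each step against its annotated rule.

def verify(proof, axioms):
    derived = []
    for term, rule, premise_ids in proof:
        if any(p >= len(derived) for p in premise_ids):
            return False                      # premises must precede the step
        premises = [derived[p] for p in premise_ids]
        if rule == "axiom":
            ok = term in axioms
        elif rule == "mp":                    # from a and ("->", a, b), derive b
            ok = (len(premises) == 2
                  and premises[1] == ("->", premises[0], term))
        else:
            ok = False
        if not ok:
            return False
        derived.append(term)
    return True

axioms = {"p", ("->", "p", "q")}
proof = [
    ("p",              "axiom", []),
    (("->", "p", "q"), "axiom", []),
    ("q",              "mp",    [0, 1]),
]
assert verify(proof, axioms)   # each step is checked once: linear in proof length
```

Since every step names its rule and premises, no search is needed; the cost is one check per line of the proof.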

Tau-Chain & Agoras
klosure (Jr. Member, Activity: 49)
August 30, 2015, 07:09:53 AM, #262

Quote
Well, sometimes, as in this case, I feel a need to step back and give more background information. The numbering has nothing to do with the order of the questions.
Giving an integrated view is useful and provides some perspective, but only if the rest is clear enough to start with and allows people to step back and contemplate the whole thing. For this reason I'd like to keep high-level answers separate from answers to specific questions. Although this will introduce a bit of redundancy, can we try to go with this flow:
- answer questions in quote-and-answer style
- provide some perspective after the questions when applicable

Putting some more structure into this thread is going to cost you a bit of time now, but it's a worthwhile investment, as it will allow potential contributors to jump in, not to mention that many investors are probably waiting on the sidelines until they get some kind of understanding of what Tau is really about. Even for those potential investors who won't understand the details, it still helps to sense that there is a wider understanding in the community and a growing consensus that Tau is achievable and stands on a sound theoretical base.

Besides, when the time comes to write some documentation, you'll have some content in this thread ready to copy-paste.

Quote
1. Builtins aren't really needed except for IO. Everything else can be described by the logic itself.
This really depends on which context you place yourself in. The logic itself is only concerned with describing and inferring relations between symbols in a mechanical, provable and objective way. It's entirely disconnected from the world. To be able to use the logic to process actual knowledge about the world, you need to add semantics to non-logical symbols via an interpretation. For that purpose, defining the vocabulary and ontologies precisely is of utmost importance for everyone interested in making any practical real-world use of Tau, even if this has no bearing on the way Tau is going to work on a logical level. Sensor and effector built-ins are even more important, because they connect the logic to the state of the world and allow it to query or affect that state.

You could complete the core of Tau without ever worrying about the way it's going to interface with the world, but then what you are going to release is a solipsistic reasoner that has no practical use other than making and verifying constructive proofs. We all know that's not the plan, so you will have to specify, develop and document a standard library for Tau that will allow it to interact with the world in a way that is standardized well enough to allow consensus on its semantics and effects. Until this is done you will have 0% adoption, because Tau will be essentially useless. You could wait longer to start worrying about that, but then you lose a lot of benefits, such as showing real-world examples of what Tau can do, and having contributors help you with developing that layer.

Quote
2. Builtins should be quite minimal. The rationale is that a client will have to be able to replay the chain from genesis, so the root chain must contain code that the clients support.
I second that. Built-ins are the weak point in the system, because they run as native code (as opposed to Tau logic) that can't be formally tested by the logic. The more builtins there are, the more attack surface there is to introduce backdoors and/or find zero-day exploits. Built-ins should also be entirely orthogonal, meaning that there should be one and only one way to do something, and everybody should be using that way. Everything else should be done in Tau. Actually, if you have a good JIT system that allows Tau code to run at a decent performance level, I would even advise against using built-ins for the DHT: basic network and cryptographic built-ins are sufficient to restore the full DHT functionality intrinsically. Whenever a built-in is needed for performance, you could let people decide in the bootstrap file whether they want to use the native built-in or the pure-logic version. You could also test the native built-ins against a Tau-level model. After sufficient unit-test runs against the boundary and median values of the input parameters' range of definition, as well as randomized tests, one could decide (in the "belief" sense) with enough confidence that the native built-in and the pure logic are indeed equivalent. That would even allow reimplementing as native code pieces of logic with no side effects but a pretty heavy computational profile (cryptographic operations come to mind).
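The testing regime described here can be sketched as follows (the `native_add`/`logic_add` pair is a hypothetical stand-in, not an actual Tau built-in and model): probe boundary values, the median, and random samples, and treat a clean run as grounds for belief, not proof, of equivalence.

```python
import random

def native_add(a: int, b: int) -> int:
    return a + b                     # stands in for an optimized native built-in

def logic_add(a: int, b: int) -> int:
    # Stands in for the pure-logic model, counting by successor/predecessor.
    result = a
    for _ in range(abs(b)):
        result += 1 if b > 0 else -1
    return result

def believe_equivalent(f, g, lo=-1000, hi=1000, trials=200) -> bool:
    """Check f and g on boundary values, the median, and random samples.
    A passing run justifies belief (not proof) that f and g agree."""
    points = [(lo, lo), (lo, hi), (hi, lo), (hi, hi)]          # boundary
    points.append(((lo + hi) // 2, (lo + hi) // 2))            # median
    points += [(random.randint(lo, hi), random.randint(lo, hi))
               for _ in range(trials)]                         # randomized
    return all(f(a, b) == g(a, b) for a, b in points)

print(believe_equivalent(native_add, logic_add))  # True
```

A full proof of equivalence would instead be stated in the logic itself; this sketch only captures the probabilistic "good enough to believe" regime the post describes.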

Quote
Computation on chain: What HMC mentioned concerns computation that is done on the chain only, i.e. a computation that is shared by everyone trustlessly. An application that runs locally and uses the chain only for certain operations (like a BTC wallet) simply runs on the local machine and does not need to wait for the next block in order to continue local execution.
Recovering Turing completeness at this scope relies on the notion that Turing completeness can be recovered from a total functional programming language by successive calls to it. Therefore, over time, computation on chain also becomes Turing complete.
This was implied in the question. The context of the question, the whole article, and HMC's answer assume that a blockchain supporting pseudo-Turing-complete contracts has been implemented over Tau, and we are discussing how contracts that in Ethereum would need to be preempted by the miner upon exhausting their resources could be made (or proven) to yield by themselves in Tau and resume on the next block. So the question is how you do that. Actually the question is more complex than that, so the best approach would be to quote it and start explaining from there.
ohad (Hero Member, Activity: 878, http://idni.org)
August 30, 2015, 07:41:31 AM, #263

Quote
1. Builtins aren't really needed except for IO. Everything else can be described by the logic itself.
This really depends on which context you place yourself in. The logic itself is only concerned with describing and inferring relations between symbols in a mechanical, provable and objective way. It's entirely disconnected from the world. To be able to use the logic to process actual knowledge about the world, you need to add semantics to non-logical symbols via an interpretation. For that purpose, defining the vocabulary and ontologies precisely is of utmost importance for everyone interested in making any practical real-world use of Tau, even if this has no bearing on the way Tau is going to work on a logical level. Sensor and effector built-ins are even more important, because they connect the logic to the state of the world and allow it to query or affect that state.

Let me give an example of what I meant. Roughly speaking, one can think of two ways to process integers: by using the CPU's built-in instructions, or by introducing number theory and letting the calculations follow from the rules. The latter method sounds insane, but some things really do work like that in real life, for example https://github.com/idris-lang/Idris-dev/blob/master/libs/prelude/Prelude/Nat.idr
So builtins provide the former option. You don't have to do even simple math or string operations in logic unless you need to; you can add integers using the builtin math:add, for example.
Effects, on the other hand, definitely have to have well-defined builtins and policies, and cannot be implemented as part of the pure reasoning process.
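The contrast can be made concrete with a small sketch (illustrative Python as neutral pseudocode, not Tau code): the same addition either delegated to the machine, or derived from Peano-style rules as in the linked Idris Nat module.

```python
# Option 1: the "builtin" route -- delegate to the machine's native addition.
def add_builtin(a: int, b: int) -> int:
    return a + b

# Option 2: the "pure logic" route -- Peano numerals. A number is either
# Z (zero) or S(n) (successor), and addition follows from two rules:
#   plus Z     m = m
#   plus (S n) m = S (plus n m)
Z = ()                       # zero

def S(n):                    # successor
    return ("S", n)

def plus(n, m):
    if n == Z:
        return m
    return S(plus(n[1], m))

def from_int(k: int):
    n = Z
    for _ in range(k):
        n = S(n)
    return n

def to_int(n) -> int:
    k = 0
    while n != Z:
        k, n = k + 1, n[1]
    return k

# Both routes agree; the Peano route is far slower but needs no builtin.
assert to_int(plus(from_int(3), from_int(4))) == add_builtin(3, 4)
```

The builtin is a convenience for exactly this situation: the logic can already express the operation, but nobody wants to pay the cost of deriving every addition from first principles.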

Quote
You could complete the core of Tau without ever worrying about the way it's going to interface with the world, but then what you are going to release is a solipsistic reasoner that has no practical use other than making and verifying constructive proofs.

I cannot postpone effects until later. Maybe for tau as a stand-alone cool toy, but not for the Tau Chain: the client has to be shipped with the genesis hard-coded (as in BTC) and be able to bootstrap from there into downloading and executing all blocks.

Quote
We all know that's not the plan, so you will have to specify, develop and document a standard library for Tau that will allow it to interact with the world in a way that is standardized well enough to allow consensus on its semantics and effects. Until this is done you will have 0% adoption, because Tau will be essentially useless. You could wait longer to start worrying about that, but then you lose a lot of benefits, such as showing real-world examples of what Tau can do, and having contributors help you with developing that layer.

Indeed, I put strong emphasis on the points you mentioned, and maybe this is part of the reason I cannot give final answers yet.

Quote
2. Builtins should be quite minimal. The rationale is that a client will have to be able to replay the chain from genesis, so the root chain must contain code that the clients support.
I second that. Built-ins are the weak point in the system, because they run as native code (as opposed to Tau logic) that can't be formally tested by the logic. The more builtins there are, the more attack surface there is to introduce backdoors and/or find zero-day exploits. Built-ins should also be entirely orthogonal, meaning that there should be one and only one way to do something, and everybody should be using that way. Everything else should be done in Tau. Actually, if you have a good JIT system that allows Tau code to run at a decent performance level, I would even advise against using built-ins for the DHT: basic network and cryptographic built-ins are sufficient to restore the full DHT functionality intrinsically. Whenever a built-in is needed for performance, you could let people decide in the bootstrap file whether they want to use the native built-in or the pure-logic version. You could also test the native built-ins against a Tau-level model. After sufficient unit-test runs against the boundary and median values of the input parameters' range of definition, as well as randomized tests, one could decide (in the "belief" sense) with enough confidence that the native built-in and the pure logic are indeed equivalent. That would even allow reimplementing as native code pieces of logic with no side effects but a pretty heavy computational profile (cryptographic operations come to mind).

Quote
Computation on chain: What HMC mentioned concerns computation that is done on the chain only, i.e. a computation that is shared by everyone trustlessly. An application that runs locally and uses the chain only for certain operations (like a BTC wallet) simply runs on the local machine and does not need to wait for the next block in order to continue local execution.
Recovering Turing completeness at this scope relies on the notion that Turing completeness can be recovered from a total functional programming language by successive calls to it. Therefore, over time, computation on chain also becomes Turing complete.
This was implied in the question. The context of the question, the whole article, and HMC's answer assume that a blockchain supporting pseudo-Turing-complete contracts has been implemented over Tau, and we are discussing how contracts that in Ethereum would need to be preempted by the miner upon exhausting their resources could be made (or proven) to yield by themselves in Tau and resume on the next block. So the question is how you do that. Actually the question is more complex than that, so the best approach would be to quote it and start explaining from there.

Quote
In the context of the comparison with Ethereum, what I understand HMC to be saying is that the question of the maximum runtime of a contract is decidable, which allows breaking the contract down into execution segments and scheduling them over multiple blocks.

Correct.

Quote
Hence my question. If the only thing that is decidable for any program (you don't control what contract creators put in a contract) is the fact that the program will stop, this does absolutely nothing to make it anywhere near practical. If the program takes 5 years to stop, we aren't any better off than if it didn't stop at all.

Everything (once expressed in MLTT) is decidable: not only halting but all statements, including the exact runtime complexity. And of course you do control what the contract creators put in the contract: its code is open, and you can query the reasoner with any question you have regarding it. If the reasoner doesn't answer instantly, you may ignore the contract. Conversely, the contract creators can supply proofs for "interesting" assertions.

Quote
So if I understand correctly, what HMC says above would work only for those contracts that ship together with a proof, written explicitly by the contract developer, establishing in polynomial time that the program will stop, because the developer made the deliberate design decision of having his program do everything in small chopped batches with yielding and continuations all over the place. Is that the spirit? And to ensure that every contract on the chain follows this paradigm, would contracts need to come with the proof for the transaction to be validated and added to the blockchain? And do you also ship a proof that the proof itself runs in polynomial time? And so on? :)

The time it takes to verify a proof is given by the size of the proof (since verification is linear), so it is easy to predict that complexity. Some verifications are even logarithmic in the proof size, as in amiller's merkle tree.
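The logarithmic case can be illustrated with a standard merkle inclusion proof (a generic sketch, not tau's or amiller's exact construction): verifying that a leaf belongs to a tree of n leaves touches only about log2(n) sibling hashes.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    """Pairwise-hash leaf hashes up to a single root."""
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])          # duplicate last node if odd
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_path(leaves, index):
    """Sibling hashes from leaf to root: about log2(n) entries."""
    level = [h(leaf) for leaf in leaves]
    path = []
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        path.append((level[index ^ 1], index % 2))  # (sibling, am-I-right-child)
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return path

def verify_inclusion(leaf, path, root) -> bool:
    """Recompute the root from the leaf and its sibling path."""
    node = h(leaf)
    for sibling, is_right in path:
        node = h(sibling + node) if is_right else h(node + sibling)
    return node == root

leaves = [b"tx0", b"tx1", b"tx2", b"tx3"]
root = merkle_root(leaves)
assert verify_inclusion(b"tx2", merkle_path(leaves, 2), root)
```

The verifier never sees the other leaves, only the short sibling path, which is what makes such proofs sublinear in the size of the committed data.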
The questions of whether to require a proof with every contract, in which cases to require one, a proof of what exactly, what the maximum allowed length is, and so on, are almost all context- (app-) dependent. I say "almost" because such considerations also exist on the root chain, but this again goes back to tau's users. Those are the rules of the network, which we do not set; the users set them. We tend to be as minimalistic as possible in influencing the root's rules. At genesis, tau will indeed be usable only by fans/geeks/evangelists/techies and so on. The first users will shape the root chain, specifically the rules of rulemaking of rulemaking (the rules of creating contexts or changing the root chain). As mentioned several times, tau can equivalently be described as a decentralized Nomic game. I think that this is more reasonable than developers hard-coding some necessarily arbitrary rules. What do you think?

Tobo (Hero Member, Activity: 762)
August 30, 2015, 03:12:01 PM, #264

What kind of consensus mechanism will be used for the Tau-chain?
ohad (Hero Member, Activity: 878, http://idni.org)
August 30, 2015, 04:06:43 PM, #265

Quote
What kind of consensus mechanism will be used for the Tau-chain?

It is for the users to decide. See the last paragraph in my last comment; I've also just had a discussion about it here: https://www.facebook.com/groups/870781236307340/permalink/994080593977403/

ohad (Hero Member, Activity: 878, http://idni.org)
August 30, 2015, 04:56:04 PM, #266

We now have a subreddit: https://www.reddit.com/r/tauchain/

klosure (Jr. Member, Activity: 49)
August 30, 2015, 05:52:08 PM, #267

Quote
It is for the users to decide. See the last paragraph in my last comment; I've also just had a discussion about it here: https://www.facebook.com/groups/870781236307340/permalink/994080593977403/
Facebook? Seriously!? Please don't ask us to partake in this masquerade. Facebook is the complete opposite of everything that most people in the crypto, FOSS and hacking scenes stand for, and the first thing I hope Tau will help us get rid of, for the sake of the collective mental health of our society.
ohad (Hero Member, Activity: 878, http://idni.org)
August 30, 2015, 05:55:07 PM, #268

Quote
It is for the users to decide. See the last paragraph in my last comment; I've also just had a discussion about it here: https://www.facebook.com/groups/870781236307340/permalink/994080593977403/
Facebook? Seriously!? Please don't ask us to partake in this masquerade. Facebook is the complete opposite of everything that most people in the crypto, FOSS and hacking scenes stand for, and the first thing I hope Tau will help us get rid of, for the sake of the collective mental health of our society.

Indeed, tau is the opposite and comes to replace those systems. But "masquerade"?

klosure (Jr. Member, Activity: 49)
August 30, 2015, 06:18:36 PM, #269

A metaphor for all the posing, pretending, attention-seeking, courting and other superficial and emotionally immature behaviors that this platform encourages.

Can you please cross-post the discussion about the Nomic game here or on Reddit, so that the more privacy-inclined among us can participate?
ohad (Hero Member, Activity: 878, http://idni.org)
August 30, 2015, 06:21:15 PM, #270

Quote
A metaphor for all the posing, pretending, attention-seeking, courting and other superficial and emotionally immature behaviors that this platform encourages.

Can you please cross-post the discussion about the Nomic game here or on Reddit, so that the more privacy-inclined among us can participate?


Ron revived a very old FB group about tau, and I thank him for that. It doesn't mean it's going to be the main venue of activity. The main locations are still BTT, idni.org, GitHub and IRC.

ohad (Hero Member, Activity: 878, http://idni.org)
September 01, 2015, 06:24:29 AM, #271

New blog post: http://www.idni.org/blog/decentralized-democracy

klosure (Jr. Member, Activity: 49)
September 01, 2015, 07:51:47 AM, #272

Thanks for the replies, Ohad.
One earlier question was left unanswered:

Quote
Do you have specs for what exactly Agora is going to be? I mean, other than the story of Bob and Alice. What kind of assets will Agora allow users to trade? Will it feature a full-fledged distributed exchange like Ripple? Will it support other market paradigms like auctions or prediction markets?

One more question: there are many references to "chains" in your answers, blog posts and papers.
- There was the blockchain planned for Zennet, which was supposed to be used for handling transactions in Zencoins, was supposed to use PoW, and wasn't supposed to have any inflation, meaning that miners would be rewarded with transaction fees alone.
- In the context of Agora, since Agora is a superset of Zennet, I assume that the originally planned blockchain comes along, except that instead of being implemented as a native client, it will be emulated using Tau logic and network built-ins, and the tokens are now called Agora coins, everything else being the same (is that right?). But is it still PoW? If it is, do you plan to have Tau open a server socket and let good old mining programs connect to it and do getWork calls?
- In the whitepaper about Tau, you briefly discuss using the block index of a PoW blockchain to serve as the arrow of time. This is a powerful idea, but it raises the question of how fundamental the blockchain is to the design. Would you use a blockchain implemented independently of Tau as the blueprint for time, and allow Tau logic to query time using a hardcoded built-in? Or would you let Tau programs query other Tau programs that implement a blockchain node and use whatever they say is the latest index as a time reference? The former would be a more global, reliable and trustworthy reference, but would also add complexity to the base Tau framework and raise the question of how to incentivize the miners of the fundamental chain.
- In your latest blog post, as well as many of your answers, you seem to be saying that some Tau logic will be stored and exchanged over a blockchain, which will allow nodes to share and publish knowledge. Would that chain be a native chain (implemented not in Tau logic but in C++, for instance), or are we talking about a chain that would be created in the Tau bootstrapping file? Is this chain related to Agora or not? Does this chain have economic incentives? If it does, what are they? If not, how do you ensure that the chain remains consistent?
- Last but not least, you draw a parallel in your last blog article between chains and contexts. For example, you talk about a "main chain" to refer to some sort of default context. Is the "chain" term in that article still referring to the "chain" in blockchain, or is a "chain" a linked list of context elements multiplexed into physical blocks of a real blockchain?

I could go on, but the main point here is that it's not at all clear (not to me, at least) at which level(s) you are planning to introduce blockchains, what they will be used for at each specific level, what type of incentive scheme will be introduced to keep these structures consistent across the network, and how all that relates to Tau and/or Agora.

I would appreciate it if you could address the points above one by one in your answer.
ohad (Hero Member, Activity: 878, http://idni.org)
September 01, 2015, 10:11:00 AM, #273

Quote
Thanks for the replies, Ohad.
One earlier question was left unanswered:

Do you have specs for what exactly Agora is going to be? I mean, other than the story of Bob and Alice. What kind of assets will Agora allow users to trade? Will it feature a full-fledged distributed exchange like Ripple? Will it support other market paradigms like auctions or prediction markets?
One more question: there are many references to "chains" in your answers, blog posts and papers.
- There was the blockchain planned for Zennet, which was supposed to be used for handling transactions in Zencoins, was supposed to use PoW, and wasn't supposed to have any inflation, meaning that miners would be rewarded with transaction fees alone.
- In the context of Agora, since Agora is a superset of Zennet, I assume that the originally planned blockchain comes along, except that instead of being implemented as a native client, it will be emulated using Tau logic and network built-ins, and the tokens are now called Agora coins, everything else being the same (is that right?). But is it still PoW? If it is, do you plan to have Tau open a server socket and let good old mining programs connect to it and do getWork calls?

The workflow of the client can be summarized as follows:
We have a peer-to-peer network in which every peer holds an ontology (its rules). Peers speak the same language among themselves too: they query each other's reasoners, though indirectly. A wrapper around the reasoner receives an event and queries the reasoner "what should I do now?", together with the event's information.
Therefore, one can quite naturally add rules for what happens when some query arrives, even locally. The rules don't necessarily have to be explicit, as their consequences are calculated by the reasoner.
The query itself can be a computation, some code to run, even native code: I forgot to mention that we do plan to have FFI builtins. Of course, one has to explicitly allow using them, and the type system should handle them correctly, e.g. such types cannot be Auth types (execution verification).
What to do when a query to run code arrives, how much to charge, under what conditions: all of this amounts to local rules in the client. Therefore, Zennet's design completely loses meaning, except for the pricing formula (zennet.sc/zennetpricing.pdf), since it all amounts to a little piece of tau code. Renting computational power won't need a distinct chain or a side chain; only the coin it uses has to have its timestamped ledger. Other data transfer can be done using the DHT layer, or by any other p2p rules.
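A minimal sketch of that wrapper loop (all names hypothetical; the real reasoner evaluates logic, not Python predicates): an event arrives, the wrapper asks the reasoner what to do given the event, and the answer follows from local rules.

```python
# Stand-in reasoner: local rules map event conditions to actions; the wrapper
# forwards each incoming event as a "what should I do now?" query.

RULES = [
    (lambda e: e["type"] == "query" and e.get("paid", False), "run_and_reply"),
    (lambda e: e["type"] == "query" and not e.get("paid", False), "reject"),
    (lambda e: e["type"] == "block", "validate_block"),
]

def reasoner(rules, event):
    """Return the actions whose conditions the event satisfies."""
    return [action for cond, action in rules if cond(event)]

def on_event(event):
    # The wrapper around the reasoner: receive the event, query the
    # reasoner together with the event's information, act on the answer.
    return reasoner(RULES, event)

print(on_event({"type": "query", "paid": True}))   # ['run_and_reply']
print(on_event({"type": "block"}))                 # ['validate_block']
```

Changing local policy (how much to charge, what to accept) then means editing `RULES`, not the client; the consequences of the rules are derived by the reasoner.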

The builtins will have multiple levels of abstraction. DHT primitives (like get/set) and blockchain primitives (like getWork) will be builtins in themselves, alongside more low-level ones like sockets.

Back to what assets Agoras will allow trading: I tend to adhere to the principle of giving users maximum flexibility with minimum intervention. So the goal is to implement a stable coin, and a market where participants may place their bids, whether for computational power or coding or anything else, with a user interface. Ready-made rulesets for common operations of this kind will also be supplied. In addition, there are more applications that need a currency besides markets. The one I'm planning to focus on under Agoras is a search engine. Remember that with a large enough network, tau can crawl the web overnight. And after that night you don't end up with the one-line interface that Google gives you: the whole database is open. Of course, computational costs have to be handled carefully, so we plan to make the search engine self-incentivizing. In general, we'll do our best to make Agoras the best choice as a platform for any application that involves currency.

Quote
- In the whitepaper about Tau, you briefly discuss using the block index of a PoW blockchain to serve as the arrow of time. This is a powerful idea, but it raises the question of how fundamental the blockchain is to the design. Would you use a blockchain implemented independently of Tau as the blueprint for time, and allow Tau logic to query time using a hardcoded built-in?

Yes, but the ontologies will have a rich API to the DHT & chain parameters, and can have wide control over their flow. A simple example is the conditions for a block to be accepted. Chain builtins will also supply primitives for "before" and "after" that can be Auth types yet external, thanks to the chain's timestamping.

Quote
Or would you let Tau programs query other Tau programs that implement a blockchain node and use whatever they say is the latest index as a time reference? The former would be a more global, reliable and trustworthy reference, but would also add complexity to the base Tau framework and raise the question of how to incentivize the miners of the fundamental chain.

Not exactly; as above, ontologies will control the chain only up to a certain extent, and the general flow of DHT & chain is essentially hard-coded.

Quote
- In your latest blog post, as well as many of your answers, you seem to be saying that some Tau logic will be stored and exchanged over a blockchain, which will allow nodes to share and publish knowledge. Would that chain be a native chain (implemented not in Tau logic but in C++, for instance), or are we talking about a chain that would be created in the Tau bootstrapping file? Is this chain related to Agora or not? Does this chain have economic incentives? If it does, what are they? If not, how do you ensure that the chain remains consistent?

Information is not stored on the chain: it is stored on the DHT layer, and in case a timestamp is needed, a merkle root alone is enough to put on the chain. The root chain has to contain the protocol's definitions (the client's code) and merkle roots for timestamping.
In any case, we always speak about one chain only; more chains arise only at the scope of pegging them to the root chain, as sidechains do.
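The division of labor described above can be sketched like this (hypothetical helpers; the commitment is a flat hash of the keys rather than a real merkle tree, purely for brevity): bulk data lives in a content-addressed DHT, while the chain carries only a small commitment for timestamping.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

# DHT as a content-addressed store: the key is the hash of the value.
dht = {}

def dht_put(value: bytes) -> bytes:
    key = h(value)
    dht[key] = value
    return key

def dht_get(key: bytes) -> bytes:
    return dht[key]

def commit_batch(values):
    """Store values in the DHT; return a single commitment over their keys.
    (A flat hash here; a real design would use a merkle root.)"""
    keys = [dht_put(v) for v in values]
    return h(b"".join(sorted(keys))), keys

# The chain block carries only the commitment, not the data itself.
chain = []
root, keys = commit_batch([b"ontology-v1", b"ruleset-A"])
chain.append({"height": len(chain), "root": root})

# Anyone can later fetch the data from the DHT and re-derive the commitment.
assert h(b"".join(sorted(keys))) == chain[0]["root"]
assert dht_get(keys[0]) == b"ontology-v1"
```

The block stays tiny regardless of how much data the batch timestamps, which is the property the answer above is relying on.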

- Last but not least you make the parallel in your last blog article between chains and contexts. For example you talk about a "main chain" to refer to some sort of default context. Is the "chain" term in that article still referring to the "chain" in blockchain or is "chain" a linked-list of context elements multiplexed in physical blocks of a real blockchain?

When I say 'chain' on our scope I always mean 'blockchain'.

I could go on, but the main point here is that it's not at all clear (not to me at least) at which level(s) you are planning to introduce blockchains, what they will be used for at each specific level, what type of incentive scheme will be introduced to keep these structures consistent across the network, and how that relates to Tau and/or Agoras.

An incentive scheme is indeed extremely important (if not vital) to the future of the network. Still, the first users have to define it, or better, define how it can be changed. I do have thoughts and opinions about that, and I will vote and propose accordingly over the system.

Thanks for the good questions, will be happy for more.

Tau-Chain & Agoras
skunk
Sr. Member
****
Offline Offline

Activity: 329


View Profile
September 01, 2015, 10:48:33 AM
 #274

Interesting... do you know Bitnation? Maybe the two projects could join each other...

redfish64
Jr. Member
*
Offline Offline

Activity: 32


View Profile
September 02, 2015, 02:46:13 AM
 #275

I was wondering if anyone could recommend some reading material, or at least a general direction of what to study to help grasp the concepts of Tau better for those with primarily an imperative programming background (C++, Java, perl, etc.).

I've worked through some of "The Reasoned Schemer", and right now I'm learning Haskell, and planning on moving onto Idris once I get the basics down.

Does this seem like a good path? Other material?



ohad
Hero Member
*****
Offline Offline

Activity: 878

http://idni.org


View Profile WWW
September 02, 2015, 02:52:35 AM
 #276

I was wondering if anyone could recommend some reading material, or at least a general direction of what to study to help grasp the concepts of Tau better for those with primarily an imperative programming background (C++, Java, perl, etc.).

I've worked through some of "The Reasoned Schemer", and right now I'm learning Haskell, and planning on moving onto Idris once I get the basics down.

Does this seem like a good path? Other material?

I'd say you're following a very good path. Tau is very different from Haskell, but one had better know Haskell in order to understand much of the type theory literature. Idris is the ultimate entrance to the world of dependent types, and borrows its syntax from Haskell. Tau aims to be like Idris, just with RDF syntax, and some DHT and chain primitives, etc.

Tau-Chain & Agoras
klosure
Jr. Member
*
Offline Offline

Activity: 49


View Profile
September 06, 2015, 07:13:44 AM
 #277

Found this on the Ethereum reddit: Compositional financial contracts
A financial contracts language written as a DSL over Coq, able to express any finite-time financial contract in existence, reason about its logic, and compose contracts with logical connectives to express entire financial portfolios in a structured, analyzable way. This is very interesting because it allows, for instance, asserting the delta neutrality of a portfolio in a provable way, or automatically proposing an optimal hedging scheme given specific expected market conditions (which could themselves be statistically extrapolated from history) and desired outcomes under those conditions.

That kind of formalism would have allowed anticipating, and perhaps preventing, the multiple meltdowns of the financial industry. After the financial crisis, regulators reviewed financial institutions' portfolios and painstakingly benchmarked them against different scenarios. This took them years and enormous resources, and yet there is no way to prove that they did the job right and didn't overlook something. The results they got were obsolete the very moment they were produced, because portfolios are constantly in motion, so knowing that an institution was sound at the time its portfolio was snapshotted does nothing to prove it is still sound a couple of days later, when all that is needed for a spectacular blow-up is a large naked futures position.

With a language like the one proposed above, every financial institution could model its entire portfolio, including its proprietary book as well as the portfolios it manages for clients, and give the SEC access to it. With that, it would be a breeze for the SEC to test financial institutions' soundness and proper capitalization and assert that they are not playing against their customers. To do an industry-wide benchmark, all the SEC would need to do is compose all the portfolios of all the financial institutions and test that synthetic portfolio against a given set of stress constraints to see where the industry as a whole would be heading in extreme market conditions. Actually, someone asks in the Q&A at the end how this relates to the SEC's recent effort to model contracts as a DSL over Python, and the answer pretty much nails it: Python isn't decidable, so the SEC is barking up the wrong tree.
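The composition idea above is simple enough to sketch. This is a toy Python model, far from the cited Coq DSL: `Position`, `Portfolio`, `compose` and the example books are all invented for illustration, and "stress testing" is reduced to valuing the composed book under a single price scenario.

```python
from dataclasses import dataclass, field
from typing import Callable, List

# A scenario maps an asset name to its simulated price.
Scenario = Callable[[str], float]

@dataclass
class Position:
    asset: str
    quantity: float        # negative quantity = short / naked position

@dataclass
class Portfolio:
    positions: List[Position] = field(default_factory=list)

    def value(self, scenario: Scenario) -> float:
        """Mark the whole book to the scenario's prices."""
        return sum(p.quantity * scenario(p.asset) for p in self.positions)

def compose(*portfolios: Portfolio) -> Portfolio:
    """The industry-wide book is just the composition of every institution's book."""
    return Portfolio([p for pf in portfolios for p in pf.positions])

# Two hypothetical institutions: one hedged, one carrying a large naked short.
bank_a = Portfolio([Position("oil", 100), Position("oil_future", -100)])
bank_b = Portfolio([Position("oil", -500)])

crash = lambda asset: 20.0         # stress scenario: everything marked at 20
industry = compose(bank_a, bank_b)
```

In a real DSL over Coq the payoff of `value` would be a theorem, not a number: the regulator could prove, rather than sample, that the composed book stays solvent for an entire class of scenarios.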

This really sounds like the archetypal Tau-Chain application, and a very simple one at that, yet one with far-reaching consequences. Someone with a shared background and a good sense of pitch (HMC maybe?) should contact them and explain why they should have a look at Tau.
ohad
Hero Member
*****
Offline Offline

Activity: 878

http://idni.org


View Profile WWW
September 06, 2015, 07:40:39 AM
 #278

Proof of Code Execution: http://www.idni.org/blog/proof-of-exec

klosure: thanks, I'll take a look.

Tau-Chain & Agoras
redfish64
Jr. Member
*
Offline Offline

Activity: 32


View Profile
September 07, 2015, 10:28:25 AM
 #279

Proof of Code Execution: http://www.idni.org/blog/proof-of-exec

This looks interesting, and I think I've learned enough about logic programming to understand it. Just one question, though... (from the blog post)

Quote
Eventually, the hash of the tree's root is a proof that the whole computation was followed, since one can replay the computation and verify the hash.

You cannot be sure the computation wasn't run within some sort of sandbox environment meant to look like the real thing, correct? In fact, in order to verify the proof, you'd have to create such a sandbox to replay it. So, what is the purpose of using this?
ohad
Hero Member
*****
Offline Offline

Activity: 878

http://idni.org


View Profile WWW
September 07, 2015, 10:49:36 AM
 #280

Proof of Code Execution: http://www.idni.org/blog/proof-of-exec

This looks interesting, and I think I've learned enough about logic programming to understand it. Just one question, though... (from the blog post)

Quote
Eventually, the hash of the tree's root is a proof that the whole computation was followed, since one can replay the computation and verify the hash.

You cannot be sure the computation wasn't run within some sort of sandbox environment meant to look like the real thing, correct? In fact, in order to verify the proof, you'd have to create such a sandbox to replay it. So, what is the purpose of using this?


Lambda-auth (links below) suggests a framework in which every type can be auth or unauth. We plan to have the same design and distinction in our type system. The point is that you don't have to auth your whole program; you can specify any "key points" to be authed. Naturally, one doesn't care about the whole execution of the program, only certain parts of it. An example would be random number generation. Assume one doesn't really care about the functionality of some website, because it's all pretty much transparent, except maybe the random number generation; that specific procedure may be authed. Given that, we can collaboratively trust a single point to draw random values that may affect all of us.
Another example would be executing a statement query against some online bank. The bank can auth that query and supply a proof that your reported balance is correct.
Blockchain builtins provide auth types for time ordering, and this is of course a very powerful combination.

http://amiller.github.io/lambda-auth/
http://www.cs.umd.edu/~amiller/gpads/gpads-full.pdf
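The prover/verifier split behind lambda-auth can be illustrated with the simplest authenticated data structure, a Merkle tree lookup. This Python sketch is not lambda-auth's OCaml compiler, just the underlying idea: the prover ships the sibling hashes along a path, and the verifier replays the computation from the root alone, which also answers the sandbox question above - the verifier never trusts the prover's environment, only the hashes.

```python
import hashlib

def h(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

def build(leaves):
    """Build all Merkle tree levels bottom-up; leaves are padded to a power of two."""
    n = 1
    while n < len(leaves):
        n *= 2
    padded = leaves + [b""] * (n - len(leaves))
    levels = [[h(x) for x in padded]]
    while len(levels[-1]) > 1:
        prev = levels[-1]
        levels.append([h(prev[i] + prev[i + 1]) for i in range(0, len(prev), 2)])
    return levels

def prove(levels, index):
    """Prover: collect the sibling hash at each level along the path to the root."""
    proof = []
    for level in levels[:-1]:
        proof.append(level[index ^ 1])
        index //= 2
    return proof

def verify(root, leaf, index, proof):
    """Verifier: replay the path using only the leaf, its index and the proof."""
    node = h(leaf)
    for sibling in proof:
        node = h(node + sibling) if index % 2 == 0 else h(sibling + node)
        index //= 2
    return node == root
```

An authed bank-statement query would work the same way in spirit: the bank returns the balance together with a proof path, and the client checks it against a root it already trusts (e.g. one timestamped on the chain).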

Tau-Chain & Agoras