Author Topic: [PRE-ANN] Qeditas: A Formal Library as a Bitcoin Spin-Off  (Read 14990 times)
Bill White (OP) | March 29, 2015, 04:34:22 PM | #21

Quote:
Interesting. I'll keep an eye on this as well. I can help test it when it's ready.

By the way, there's a typo in the white paper, Section 8: "require for the Qeditas" should be "require for Qeditas" or "require for the Qeditas project".

Thanks for pointing this out. I fixed it. When the time comes I will contact you about the testing phase.

TrackCoin | March 29, 2015, 04:40:33 PM | #22

I'll have to study this... without knowing more I can't say anything, but I can wish you good luck. :)

Bill White (OP) | March 30, 2015, 12:17:05 PM | #23

Quote:
Interesting idea. In the white paper you seem unsure whether you'll use Coq as the prover. You should definitely use Coq. I didn't really get why you're reluctant to commit to it.

My primary concern is consistency. If someone were to prove False in the system, then every proposition would be (trivially) provable. As the white paper remarked, False was proven in Coq twice at proofmarket.org. The first time, it seems False had been redefined (which sounds like Pollack inconsistency); this is easy to avoid. The second time, it seems Coq 8.4pl3 was actually proven inconsistent. Coq is a very nice system, but it is also still experimental.

My current thinking is that the Qeditas proof checker should be like Coq, but more restricted. The intention of the restrictions is to ensure (as much as possible) consistency. Perhaps people could prove theorems in Coq and then do a second check that the Qeditas proof checker also accepts the proof. I must admit, however, that there are some issues with the logic of Coq (e.g., universe levels and termination checking) where I'm not yet sure what restrictions would be appropriate.

Ideas?

Quote:
I don't know enough about the internals of Coq to really say. "Like Coq, but more restricted" sounds like that mathgate proof checker. Does it not have these issues?

The logic/framework of Egal (the mathgate proof checker) is simple type theory. This goes back to Church (1940) and is well-understood. On top of that there's a set theory (Tarski-Grothendieck). If this turned out to be inconsistent, it would be a surprising mathematical result. (By way of contrast, if Coq turned out to be inconsistent, it would mean the type theory needs to be patched in the next release.)

Notice: We are less than 64 blocks from the snapshot, so it should occur within the next 12 hours.

Bill White (OP) | March 30, 2015, 07:57:38 PM (last edit: March 30, 2015, 10:10:26 PM) | #24

Bitcoin is less than 13 blocks away from the last block to be included in the snapshot (349,999). This is a final reminder to remove bitcoins from exchanges if you want to be able to claim the corresponding initial distribution for Qeditas.

Edit: The 350,000th block has been mined. When the snapshot data is ready, I will post it for those who want to verify its accuracy.

Bill White (OP) | March 31, 2015, 11:07:08 PM (last edit: April 02, 2015, 01:22:58 PM) | #25

The snapshot is available:

https://mega.co.nz/#F!0xhzUbTS!h-LeyiTbF9dXgusKCc_bfQ

It contains the balances of all p2pkh and p2sh addresses as of block 350,000 (after txs in block 349,999 and before txs in block 350,000). Native multisig and raw scripts were converted to p2sh.

If anyone notices an address with an incorrect balance, please post here to let me know.

Bill White (OP) | April 03, 2015, 08:48:29 PM (last edit: June 26, 2015, 02:56:53 PM) | #26

There is a Coq formalization corresponding to Qeditas available here:

git clone git://qeditas.org/qeditastheory.git

It is far more detailed than the white paper, but is likely not as readable. It is mostly an extension of the "ledgertheory" Coq development I posted two months ago.

Trent Russell | April 04, 2015, 03:06:04 PM | #27

There's a recent thread on the coq-club mailing list about a way to prove False in Coq:

https://sympa.inria.fr/sympa/arc/coq-club/2015-03/msg00134.html

Bill White (OP) | April 12, 2015, 08:54:56 PM | #28

Quote from: Trent Russell on April 04, 2015, 03:06:04 PM
There's a recent thread on the coq-club mailing list about a way to prove False in Coq:

https://sympa.inria.fr/sympa/arc/coq-club/2015-03/msg00134.html


Thank you for pointing me to this discussion. At this point I don't see a way to build Qeditas on top of Coq while ensuring consistency. Unless someone can suggest a way, my inclination is to build Qeditas on top of Egal.

Bill White (OP) | April 12, 2015, 09:47:22 PM | #29

Here is an update on Qeditas.

I have written code for compact trees (a variant of a Merkle Patricia tree, see https://bitcointalk.org/index.php?topic=949880) and used it to create the initial compact tree corresponding to the snapshot. Unfortunately, there are some efficiency issues and it's clear more work is required to make the code practical. Keeping the whole tree in memory is too RAM intensive (the full tree requires roughly 1.6 GB). Also computing hashroots (if no intermediate hashes are stored) takes far too long. I have some ideas for how to fix these issues, but it does mean things are taking longer than expected. I would still like to get the network running in April, but it may run into May.

In other news: I believe I have an easy way for people to buy and sell their claims from the snapshot. People can use their bitcoin address to "endorse" their claim to a Qeditas address. Here is an example of how this works.

Address 1LvNDhCXmiWwQ3yeukjMLZYgW7HT9wCMru had a balance of 1.5 mbits at the time of the snapshot.

Suppose Alice has the private key for this Bitcoin address.

Suppose Bob with the Qeditas address QVmqaNmBE3GbTPG3kqNh75mYpA6uUcxs35 wants to buy this claim from Alice.

Alice can sign a message

"endorse QVmqaNmBE3GbTPG3kqNh75mYpA6uUcxs35"

with her bitcoin address. Qeditas will use such a signed message to allow Bob to spend from the Qeditas address corresponding to 1LvNDhCXmiWwQ3yeukjMLZYgW7HT9wCMru.

Presumably Bob will pay some bitcoins to Alice's address above in exchange for this endorsement (although at an exchange rate of 10000:1, Bob would only need to pay Alice 15 satoshis).

Note that this will not require Alice to download or use Qeditas or any third party software.
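
To make the verification flow concrete, here is a minimal OCaml sketch of how a node might check an endorsed spend. Every type and function name here is a hypothetical placeholder I've invented for illustration, not the actual Qeditas code:

Code:
(* Hypothetical sketch of endorsement checking; all names are
   illustrative placeholders, not the real Qeditas implementation. *)
type endorsement = {
  btc_addr : string;   (* snapshot bitcoin address, e.g. 1LvNDh... *)
  q_addr : string;     (* endorsed Qeditas address, e.g. QVmqaN... *)
  btc_sig : string;    (* bitcoin signature of "endorse <q_addr>" *)
}

(* Assumed primitives standing in for real ECDSA message verification
   and transaction signature checking. *)
let verify_bitcoin_message ~addr ~msg ~signature =
  ignore (addr, msg, signature); true  (* placeholder *)

let verify_tx_signature ~addr ~tx_hash ~signature =
  ignore (addr, tx_hash, signature); true  (* placeholder *)

(* A spend of a snapshot claim via endorsement is valid when the bitcoin
   key signed the endorsement message AND the endorsed Qeditas key
   signed the transaction itself. *)
let valid_endorsed_spend (e : endorsement) ~tx_hash ~tx_sig =
  verify_bitcoin_message ~addr:e.btc_addr
    ~msg:("endorse " ^ e.q_addr) ~signature:e.btc_sig
  && verify_tx_signature ~addr:e.q_addr ~tx_hash ~signature:tx_sig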

Please post if you have any questions. Otherwise I will try to post updates as appropriate.

kushti | April 13, 2015, 04:41:43 PM | #30

Another concern about using the 350K Bitcoin snapshot: since you're going to use Proof-of-Stake, and the majority of Bitcoin holders will not participate in Qeditas consensus from day 1, the generating balance will be low during the early days. In that case the system could be taken over by a big holder, a history attack is also possible (you'll need to introduce checkpoints), etc.

How do you want to solve the compact Merkle tree problems?

What's new in 'qeditastheory' in comparison with 'ledgertheory'?

yampi | April 13, 2015, 05:14:36 PM | #31

How were the distribution balances calculated?
Are these balances in lowest-unit form?
Bill White (OP) | April 13, 2015, 08:47:27 PM | #32

Quote from: kushti on April 13, 2015, 04:41:43 PM
Another concern about using the 350K Bitcoin snapshot: since you're going to use Proof-of-Stake, and the majority of Bitcoin holders will not participate in Qeditas consensus from day 1, the generating balance will be low during the early days. In that case the system could be taken over by a big holder, a history attack is also possible (you'll need to introduce checkpoints), etc.

I agree with the concern. I think if a single large bitcoin holder wants to overtake the network in the first few days, they are likely to succeed. The hope is that there are enough participants at different balance levels to prevent this, but it's a real danger.

As for checkpoints, I like the solution Nxt uses of not allowing reorganizations beyond N blocks, and I plan to use this in Qeditas. This makes the criterion "weakly subjective" in Vitalik Buterin's terminology, but realistically I think this is good enough. To be honest, I think even in a POW network like Bitcoin, if there were a reorganization beyond 24 hours, there would very quickly be some kind of social (human) consensus to deny the reorganization. If I'm right about that, the criterion Bitcoin uses is not truly "objective."
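
As a minimal sketch of that Nxt-style rule (the function name and the constant are mine, illustrative only; N has not been fixed for Qeditas):

Code:
(* Illustrative sketch of an Nxt-style reorganization limit.
   max_reorg_depth is a made-up placeholder value. *)
let max_reorg_depth = 720

(* Accept a competing chain only if it forks off within the last
   max_reorg_depth blocks; deeper reorganizations are refused. *)
let acceptable_reorg ~current_height ~fork_height =
  current_height - fork_height <= max_reorg_depth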

Quote from: kushti on April 13, 2015, 04:41:43 PM
How do you want to solve the compact Merkle tree problems?

I'm still experimenting, but the idea is simple. The "real" tree is a sparsely populated binary tree of depth 162 with account assets at the leaves. Instead of holding the full compact tree in memory, I break it into several parts and save the different parts into files. (I could use a database, but I don't think I need one.) I can describe these "parts" as follows:

The top of the tree: from the root through the first 6 levels.
The midsections of the tree: from level 6 to level 14.
The bottom parts: the rest of the way to the leaves.

The numbers are a bit arbitrary, and there's actually a general data type that can be used to choose multiple ways of breaking the tree into parts. For simplicity, let's assume it's broken up as above.

A transaction mentions certain addresses. The leaves corresponding to those addresses are to be transformed. For this reason we need each "bottom part" that contains one of these leaves. We also need the particular midsections above those selected bottom parts. The top part is always needed. All these parts are loaded into memory to do the transformation. This should be a very small part of the tree in practice.

After the transformation, in order to save the new (transformed) tree, only the new versions of the selected bottom parts, midsections, and top part must be saved. The unselected parts are unchanged, and we already have them saved in files.

Another side effect is that calculating the hashroot of the full tree becomes very easy, since hashing can start from the stored hashes at the tops of the midsections.

There are, of course, tradeoffs. On one extreme is not to break up the tree at all. In that case too much RAM is consumed and the operation of computing the hashroot takes too long (since every intermediate hash needs to be computed). On another extreme, every single node in the tree could be saved in files. I haven't tried this, but I think it would require too many file operations.
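
To illustrate the hashroot point, here is a small OCaml sketch (my own simplification, not the actual Qeditas ctree code): unloaded parts are represented by their stored hashes, so computing the root never descends into them.

Code:
(* Illustrative sketch of a compact tree with unloaded parts replaced
   by precomputed hashes; not the actual Qeditas data type. *)
type ctree =
  | Empty
  | Leaf of string              (* serialized assets at an address *)
  | Node of ctree * ctree
  | HashStub of string          (* stored hash of a part left on disk *)

(* Stand-in for a real cryptographic hash such as SHA-256. *)
let hash (s : string) : string = string_of_int (Hashtbl.hash s)

(* Hashing stops at HashStub, so after a transaction only the reloaded
   parts are rehashed; everything else reuses its stored hash. *)
let rec hashroot (t : ctree) : string =
  match t with
  | Empty -> hash "empty"
  | Leaf assets -> hash ("leaf:" ^ assets)
  | Node (l, r) -> hash ("node:" ^ hashroot l ^ ":" ^ hashroot r)
  | HashStub h -> h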

Does this sound reasonable? Do you know of some better techniques to handle this?

Quote from: kushti on April 13, 2015, 04:41:43 PM
What's new in 'qeditastheory' in comparison with 'ledgertheory'?

I will answer this in another reply since it will be a bit longer.

Bill White (OP) | April 13, 2015, 09:03:40 PM | #33

Quote from: yampi on April 13, 2015, 05:14:36 PM
How were the distribution balances calculated?
Are these balances in lowest-unit form?

The distribution given to an address was determined by the sum of the satoshis in the unspent transaction outputs to that address as of the snapshot (Block 350,000, not counting this block).

The lowest Qeditas monetary unit is a cant (short for Cantor). It corresponds to 0.0001 satoshis in the snapshot. This means the number of satoshis in the snapshot will be multiplied by 10,000 in order to determine the number of initial cants.

Example:

Suppose someone has a bitcoin address A, and there were two unspent transaction outputs to A as of the snapshot. One tx output to A was worth 10 satoshis and the other was worth 1 satoshi. The balance of A is 11 satoshis in the snapshot.

In the initial distribution the Qeditas version of address A will have a balance of 11 * 10,000 cants, i.e., 110,000 cants.

Of course, "cants" like "satoshis" are too small to be a useful unit in most cases. A more reasonable level would be a fraenk (short for Fraenkel), which corresponds to 1012 cants.

Example:

Suppose someone has a bitcoin address B that had a balance of 2 bitcoins (200 million satoshis) as of the snapshot.

In the initial distribution the Qeditas version of address B will have a balance of 2 fraenks (2 * 10^12 cants).

Fraenks are like bitcoins (as units) and cants are like a small fraction of a satoshi.
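
A quick arithmetic check of the conversions (an OCaml sketch; the constant and function names are mine, not from the Qeditas code):

Code:
(* Illustrative unit conversions; names are mine, not the Qeditas code. *)
let cants_per_satoshi = 10_000L
let cants_per_fraenk = 1_000_000_000_000L  (* 10^12 *)

let cants_of_satoshis s = Int64.mul s cants_per_satoshi

let () =
  (* Example from above: 2 BTC = 200,000,000 satoshis. *)
  let balance = cants_of_satoshis 200_000_000L in
  (* prints: 2000000000000 cants = 2 fraenks *)
  Printf.printf "%Ld cants = %Ld fraenks\n"
    balance (Int64.div balance cants_per_fraenk)

One caveat worth noting: a 21 million fraenk supply is 2.1 * 10^19 cants, slightly more than Int64.max_int (about 9.2 * 10^18), so totals denominated in cants would need a wider representation, though individual address balances fit comfortably.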

Does this answer your questions?

Bill White (OP) | April 13, 2015, 09:59:46 PM (last edit: April 25, 2015, 09:21:09 PM) | #34

Quote from: kushti on April 13, 2015, 04:41:43 PM
What's new in 'qeditastheory' in comparison with 'ledgertheory'?

First, a few minor things:

I changed "addresses" and also "mtrees" and "ctrees" to use 168 bits (edit: 162 bits) instead of 160. This was to account for having different "kinds" of addresses. Now I've decided I only need 4 kinds, so the "right" number would be 162. (The 4 kinds of addresses are p2pkh, p2sh, hashes of terms, intentions (hashes of nonced data), but this isn't in the Coq code.)

MathData.v

There is a new file MathData.v that contains a skeleton of higher-order logic (simple types, terms and proofs). It isn't a real formalization of higher-order logic, but is just enough to know what kind of "data" would be in a publication and what it means for some "data" to use/support an object or proposition. Here "data" means a sequence of mathematical parameters, definitions, axioms and theorems (with proofs). If the data contains a proposition as an axiom and this is justified by the fact that the proposition has been proven before, then the data "uses" the proposition. Essentially the data is building on someone's previous work.

Assets.v

Assets.v has been extended to contain several new kinds of preassets. In ledgertheory there were only "currency" and "publication". In qeditastheory there are the following (a type sketch follows the list):

  • currency: the currency units as before.
  • bounty: currency units locked to a proposition. The idea is that this can only be spent by someone who publishes a proof of the proposition. (In the implementation it should also be spendable by someone who proves the negation of the proposition.)
  • owns: this records ownership of a mathematical object or proposition by an address. (A boolean is used to determine whether it's being owned as an object or as a proposition. This is a little obscure, and in the OCaml implementation I'm making separate preassets OwnsObj and OwnsProp.)
  • rights: rights can be spent to make use of an object or proposition in a publication. (Again, a boolean determines whether it's an object or a proposition; again, the implementation will split this into RightsObj and RightsProp.)
  • intention: this is just a hash that must be published in advance of publishing a "publication". It's part of a protocol intended to prevent an attacker from taking someone's publication and immediately publishing it as his or her own work. An "intention" (a hashed nonce of the publication) must be published and confirmed before a "publication" can be confirmed.
  • publication: this is a placeholder for publishing "data" (as in MathData above). There's also a place for a "nonce" for the purpose of checking if the appropriate "intention" has already been sufficiently confirmed.
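
Summarizing the list above as an OCaml type, with the OwnsObj/OwnsProp and RightsObj/RightsProp splits mentioned; the payload types are simplified guesses on my part, not the real definitions:

Code:
(* Sketch of the preasset type following the list above; payload
   types are simplified placeholders. *)
type hashval = string   (* stand-in for a hash value *)
type payaddr = string   (* stand-in for a p2pkh/p2sh address *)

type preasset =
  | Currency of int64                 (* currency units, in cants *)
  | Bounty of int64                   (* cants locked to a proposition *)
  | OwnsObj of payaddr                (* owner of a mathematical object *)
  | OwnsProp of payaddr               (* owner of a proposition *)
  | RightsObj of hashval * int64      (* rights to use an object *)
  | RightsProp of hashval * int64     (* rights to use a proposition *)
  | Intention of hashval              (* hashed nonce announcing a publication *)
  | Publication of hashval * string   (* nonce plus the published "data" *)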

I changed "obligations" from using lists of addresses (intended to handle multisig) to just using addresses. I can do this since I decided to support p2sh addresses.

There are also a number of new definitions and results about rights and so on (e.g., rights_mentioned).

Transactions.v

There are only a few changes here.

There are some additional checks for whether or not a transaction is valid (see tx_valid and tx_outputs_valid). For example, a transaction cannot declare two different owners of the address of a mathematical term. (Well, it's complicated: there can be one owner of an address "as an object" and a different owner of the same address "as a proposition". Using booleans helped make some of these statements in a concise, but tricky, way.)

Signatures are generalized (see "gensignat"). The idea is to allow both "ordinary signatures" and "endorsement signatures". An endorsement signature will be a combination of a bitcoin-signed message of the form "endorse Q" and a signature of the transaction by the key for Q. This is what will allow bitcoin users to "endorse" their part of the distribution to a Qeditas user without needing to use Qeditas.
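
A sketch of the generalized signature as a variant type (my own rendering; only the "gensignat" name and the two cases come from the description above):

Code:
(* Sketch of generalized signatures; payload types are placeholders. *)
type signat = string      (* stand-in for an actual signature *)

type gensignat =
  | Ordinary of signat
      (* a normal signature of the transaction *)
  | Endorsed of string * signat * signat
      (* endorsed Qeditas address Q,
         bitcoin signature of the message "endorse Q",
         and a signature of the transaction by the key for Q *)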

There is also a new kind of obligation for simply "moving" assets (see check_move_obligation and check_tx_signatures). Suppose an asset A is held in an address B but only the private key for a different address C is allowed to "spend" A. (This could be the case because C is allowing B to stake with the asset.) There's a potential attack in which someone keeps sending unwanted assets to B which B cannot spend. To mitigate this, I decided that B can sign for a transaction which simply "moves" the asset A. The "move" would send the asset to a separate address without changing the underlying preasset or the obligation for spending it (only C will be able to "spend" it).

LedgerStates.v
MTrees.v
CTrees.v

These are similar to before, but with many new cases due to the new preassets. Also, there must be new definitions about "consuming rights" and so on.

Once we get to this stage, there is a new important distinction. In ledgertheory sometimes we needed to check that a certain asset exists (in order to spend it). In qeditastheory we sometimes need to also check that a certain asset does not exist. For example we may need to verify that some address corresponding to a proposition does not yet have an owner. In order to check that no asset of a certain form is held at an address, we need the full list of assets at the address. There are many new definitions and results about this new "full" case (e.g., see mtree_full_approx_addr and In_full_needed_owns).
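
As a small illustration of why the full list matters (has_owner is my own name, not one from the Coq sources; this reuses the preasset sketch from above):

Code:
(* Sketch: deciding that an address has NO owner requires the complete
   asset list, not just a Merkle proof that one particular asset exists. *)
let has_owner (assets : preasset list) : bool =
  List.exists
    (function OwnsObj _ | OwnsProp _ -> true | _ -> false)
    assets

(* A new ownership claim is admissible only if has_owner is false, and
   that answer is only trustworthy when the list is known to be full. *)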

Blocks.v

There are a number of new properties corresponding to rights and ownership and so on (the new preassets).

In addition I took away some of the use of dependent types for describing blocks and block chains. I think in ledgertheory dependent types were overused in Blocks.v in ways that complicated things more than was really needed.

kushti | April 14, 2015, 05:15:00 PM | #35

Thank you for the detailed explanation, Bill. I'm thinking about using your compact trees in my Scorex project, so I will have to think about an efficient implementation as well, but it won't be within the next few days. Unfortunately, I've never read about data structures like this.

Have you started to work on the consensus part? PoS + PoR sounds attractive.

Bill White (OP) | April 15, 2015, 08:17:20 PM | #36

Quote from: kushti on April 14, 2015, 05:15:00 PM
Thank you for the detailed explanation, Bill. I'm thinking about using your compact trees in my Scorex project, so I will have to think about an efficient implementation as well, but it won't be within the next few days. Unfortunately, I've never read about data structures like this.

Have you started to work on the consensus part? PoS + PoR sounds attractive.

I haven't yet started implementing the consensus part, but will let you know.

What is Scorex? Do you have a link?

kushti | April 16, 2015, 02:42:38 PM | #37

Quote from: Bill White on April 15, 2015, 08:17:20 PM
I haven't yet started implementing the consensus part, but will let you know.

What is Scorex? Do you have a link?

It's a compact (<4K lines of Scala code) cryptocurrency engine for hackers, intentionally not production-ready. Pre-release code is already on Github at https://github.com/ConsensusResearch/Scorex-Lagonaki, but it's too early to use it; I need to implement disk persistence for the state and fix some networking issues. There are two consensus algos in the code, Qora-like and Nxt-like, with the possibility to switch between them by changing one line of code (in Constants.scala). The purpose of Scorex at the moment is to play with consensus in an almost-real-world environment. After that I want to add an implementation of some scalability proposal, and I also have a (very rough at the moment) idea about a scalable smart contracts implementation (with private contracts support in a decentralized environment); I'll describe the idea to you in PMs when I have a clearer picture. The project is also open to other researchers willing to implement a proof of concept within days, not weeks or months.


Bill White (OP) | April 25, 2015, 07:40:40 PM (last edit: June 26, 2015, 02:58:14 PM) | #38

Another update is due.

The good news is that the code to swap parts of the compact Merkle tree in and out of RAM is working, so this problem is solved. I may need to do more work to make it more efficient, but I've moved on to implementing other parts now.

The bad news is that it's clear the code as a whole will not be ready by the end of April. I think it's best not to try to give an estimate at the moment.

I've spent some time looking at Egal (the proof checker mathgate used) again. It's led me to think about improvements I can make to Qeditas.

One nice feature of Egal is that previous definitions can be imported simply as a parameter in a document, as long as a hash value identifying it as a specific term is given. Using this feature an author can avoid repeating every definition all the way back to the primitives in every document. However, some definitions are needed so often (e.g., equality, negation, conjunction, etc.) that it doesn't make sense to repeat these definitions in every document in which they're used. Egal handled this by having "signature" files. It seems to be similar to importing a module in Coq. I think Qeditas should also handle signatures.

It also seems that Egal is only weakly tied to Tarski-Grothendieck set theory. In principle it should be able to do proof checking for any theory that can be expressed in simple type theory. I think it's a good idea to also allow the publication of theories. If one of these theories turned out to be inconsistent, the inconsistency should be localized to that theory. Initially I think there will be two theories: the empty theory (just simple type theory) and Tarski-Grothendieck set theory.

Both theories and signatures introduce a new complication: In order to verify correctness of a document depending on a theory/signature, a Qeditas node must be able to look up the details of the theory/signature. This means each node will need to explicitly keep a copy of all theories and signatures. This would only become a problem if people created lots of "spam" theories and signatures. The best way I can think of to prevent such spam is to have a very high burn requirement when a theory or signature is published.

Suppose whenever a theory or signature is published by a transaction, the transaction must burn 21 fraenks per KB. Since there is a hard limit of 21 million fraenks, this ensures that there will never be more than 1 GB of theory/signature data. Hopefully this will be sufficiently costly to encourage people to design theories and signatures carefully before publishing them.
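
The 1 GB figure is just the cap divided by the burn rate; a quick check in OCaml:

Code:
(* Check of the bound: 21 million fraenks at 21 fraenks burned per KB. *)
let total_fraenks = 21_000_000
let burn_per_kb = 21
let () =
  let max_kb = total_fraenks / burn_per_kb in
  (* prints: at most 1000000 KB (~1 GB) of theory/signature data *)
  Printf.printf "at most %d KB (~1 GB) of theory/signature data\n" max_kb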

I'm modifying the Coq sources in qeditastheory to add signatures and theories. I have also made some modifications to the Coq source so that hashes of important data (e.g., documents, terms, and so on) are computed as roots of Merkle trees. This modification will be necessary to support the proof-of-retrievability aspect of the consensus mechanism.

I hope to commit updates to the qeditastheory git repo later this afternoon.

git clone git://qeditas.org/qeditastheory.git

Daedelus | April 25, 2015, 10:15:08 PM | #39

Just FYI, I think Kushti released Scorex a couple of days ago, if you're interested in knowing more.
Bill White (OP) | April 26, 2015, 10:26:16 PM | #40

Quote from: Daedelus on April 25, 2015, 10:15:08 PM
Just FYI, I think Kushti released Scorex a couple of days ago, if you're interested in knowing more.

Thank you for letting me know. I have my hands full at the moment, but it's definitely on my agenda to look into what he's done as soon as I have some extra time.
