The current Armory version cannot parse transactions with inputs directly from mined coins. Maybe you could add that functionality? (For P2Pool payouts, etc.)
Can you clarify the issue? I just stepped through the code in the debugger using an address that receives P2Pool payouts, and at the very least, it seems to be computing "Full", "Spendable" and "Unconfirmed" correctly. But I don't have a wallet that actually displays such outputs, so I can't see how that information is making it to the user interface.
(1) Are the mined transactions showing up in the ledger?
(2) They should be displayed with a little pickaxe. I assume they're not...?
(3) Do those inputs change the balances shown at the bottom in the way you would expect?
(4) Do you have difficulty spending coins from wallets that receive mined transactions?
|
|
|
I'm not concerned about having enough confirmations. I'm actually concerned that it has 10,000 confirmations, that it was actually spent 3,000 blocks ago, and that I have to search through 10,000 full blocks to know whether it's still a valid output. Or I can ask other nodes for help, but how do I know to trust them?
SPV nodes keep copies of all of their own transactions, so they should always know when outputs they own are spent. They don't need to query the network for this information. Sure they don't. Until they do. Maybe they haven't been online in a couple weeks and 2 GB of blockchain has happened since then. Or maybe they want to check another address that they never owned to begin with. Or maybe you just imported your wallet to another device (as Maaku said), or your HDD failed and you are restoring from backup. Are you now expected to become a nearly-full node to rescan the entire transaction history since your wallet was created two years ago? You can ask peers for it, and do fancy stuff to convince yourself the history you've been given is real and complete. Or you avoid all of that: just get it from any peer you see on the network, without having to trust anyone other than the majority hashing power. Download it, verify it, and you're done.
|
|
|
If you are a C++, Python or C dev and fancy playing about with protocol buffers, here is a useful project for you... Some time ago Andreas's Android bitcoin wallet moved its wallet format from a Java-specific one to one based on protocol buffers (protobuf). I have just upgraded MultiBit to use the same format for new wallets. It is the data format used for the majority of Google's machine-to-machine chatter, and it is compact and easy to extend. You can auto-generate accessor code in a variety of programming languages using a message definition file, which for bitcoinj is here: http://code.google.com/p/bitcoinj/source/browse/core/src/bitcoin.proto?name=release-0.5 There are protobuf generators for (at least) C++, Python and C that I know of. Having a bitcoin wallet format that is easy to read and write in a variety of programming languages sounds like a Good Thing To Have, so if you have the skills, the time and the inclination, please, well, give it a go!

Jim, I really like Google's protobuf library, and was looking forward to using it for something. It seems to have great support across multiple languages. However, for Armory wallets, I'm sticking to pure, manual, binary files. I may just be stupidly stubborn, but I like having 100.0% control over every bit in the file, and not letting any outside library touch it. I recognize this is much smaller scope than the BSD-DB library that led to the wallet encryption bug, but I'm still paranoid about it. The wallet is the one thing I won't outsource to any other code. However, for just about anything else (or most other apps, where the developers aren't as paranoid as me), I think it's great.
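For illustration, here is a minimal sketch of what the manual-binary approach looks like in Python's struct module. The record layout here is entirely made up (the magic bytes and fields are hypothetical, not Armory's actual wallet format); the point is just that every byte is explicitly accounted for, with no outside library involved:

```python
import struct

# Hypothetical record layout (NOT Armory's actual format): a 4-byte
# magic, a version word, a length-prefixed label, then a 32-byte key.
MAGIC = b'\xbaWAL'

def pack_entry(version, label, key32):
    assert len(key32) == 32
    label_bytes = label.encode('utf-8')
    return (MAGIC
            + struct.pack('<I', version)                      # 4-byte version
            + struct.pack('<H', len(label_bytes)) + label_bytes
            + key32)

def unpack_entry(blob):
    assert blob[:4] == MAGIC
    version, = struct.unpack_from('<I', blob, 4)
    label_len, = struct.unpack_from('<H', blob, 8)
    label = blob[10:10 + label_len].decode('utf-8')
    key32 = blob[10 + label_len:10 + label_len + 32]
    return version, label, key32

blob = pack_entry(1, 'savings', b'\x00' * 32)
print(unpack_entry(blob))   # (1, 'savings', b'\x00'*32)
```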
|
|
|
This is the basis of the overlay network that Electrum/Stratum uses.
Electrum and Stratum don't use SPV. Those clients don't keep block headers (AFAIK). BitcoinJ uses SPV. Yes, but how do you know a particular TxOut hasn't been spent since it appeared in the blockchain? I wait until it has 6 confirmations. SPV allows me to determine the number of confirmations accurately (assuming the majority of mining power is honest). I'm not concerned about having enough confirmations. I'm actually concerned that it has 10,000 confirmations, that it was actually spent 3,000 blocks ago, and that I have to search through 10,000 full blocks to know whether it's still a valid output. Or I can ask other nodes for help, but how do I know to trust them?
|
|
|
The issue with SPV is that you have to trust either random peers to give you the correct/complete info, or connect to your own trusted system/subscription to get reliable information. You don't have to trust random peers with SPV. SPV clients have the block headers and can use the Merkle roots to accurately get the number of confirmations for any transaction. Yes, but how do you know a particular TxOut hasn't been spent since it appeared in the blockchain? I can verify, by downloading full transactions, that a particular TxOut was valid at one point in time, but I have no way to determine whether it's been spent since that time. Whether I'm verifying my own balance, or trying to confirm the validity of the inputs of another transaction, any malicious node can give me seemingly correct information but still make my node incapable of operating -- perhaps because when I got online the first time and imported my wallet, I was given only old TxOuts that have since been spent, and my node will be stuck creating invalid transactions trying to spend them. This is the basis of the overlay network that Electrum/Stratum uses. I don't think there's anything wrong with that overlay network, other than that it relies on trusted nodes being set up and available, or on subscribing to a service, all of which is a degree of centralization (and extra user effort). This chain avoids all of it, providing (at the expense of complexity) more reliable information without trusting anyone. If that were the only benefit, I'd agree it wouldn't be worth the hassle. But as I said, that's a secondary benefit of this structure. The main reason to do it is compression. Which could be implemented with a simpler tree structure using an alt/meta-chain. Or with this tree structure using a different distribution technique (protocol change, overlay network, trusting random peers, etc).
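To make the SPV side of this concrete: checking that a transaction is in a block means hashing a leaf up a Merkle branch and comparing against the root in a header you already hold. A minimal sketch using Bitcoin's double-SHA256 (the odd-leaf duplication rule and the endianness conventions of real headers are glossed over here):

```python
import hashlib

def dhash(b):
    # Bitcoin hashes everything twice with SHA-256.
    return hashlib.sha256(hashlib.sha256(b).digest()).digest()

def verify_merkle_branch(tx_hash, branch, positions, merkle_root):
    """Hash a leaf up to the root. `branch` holds the sibling hash at each
    level; `positions[i]` is 1 if our running hash is the right-hand child
    at that level. All values are 32-byte digests in internal byte order."""
    h = tx_hash
    for sibling, right in zip(branch, positions):
        h = dhash(sibling + h) if right else dhash(h + sibling)
    return h == merkle_root
```

This proves inclusion at some depth in the chain, which is exactly the point being made: it tells you the TxOut existed, not that it is still unspent.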
|
|
|
Trustless lightweight-node support: New nodes entering the network for the first time will only have to download a tiny amount of data to get full, verifiable knowledge of their balance and how to spend it (much of which can be stored between loads). A single honest peer out of thousands guarantees that you get, and recognize, good data.
It doesn't seem trustless to me. Lightweight nodes (not storing all unspent outputs) can't know whether a block is valid, so they need to trust the majority of the network's mining power. This is no more secure than SPV, though possibly a little easier for lightweight nodes. This doesn't make sense at all. The entirety of Bitcoin trust relies on trusting "the majority of the network's mining power." That is the security model of Bitcoin itself, and it has the benefit that you only need one honest node out of one million to be able to distinguish truth amongst a sea of malicious peers. The issue with SPV is that you have to trust either random peers to give you the correct/complete info, or connect to your own trusted system/subscription to get reliable information. Without a subscription service, you have to query lots of peers and possibly play filtering/profiling games with peers to identify untrusted nodes, etc. With this idea, you minimize the amount of information that needs to be downloaded, and can compare directly against the headers -- you can get all the information you need from a single peer and know it's right (assuming you've already got the headers). And that is actually just a side-benefit -- the original goal was blockchain compression, which this does near-optimally. I agree that complexity is high. But I disagree with the notion that you're not getting something for it.
|
|
|
Isn't it possible to merge-mine the balance chain, just like many pools do with Namecoin? That way there is no need to give any reward to miners. Just include it in bitcoind as a "protocol requirement" for mining.
Yup. The second chain doesn't need an incentive, due to merged mining, unless it's particularly resource-consuming. If the software is already written, integrates into bitcoind or other mining software transparently, and doesn't impact mining performance, then miners would [likely] have no complaints about adopting it, given the huge upside it offers. It's basically free, from their perspective. Though I don't agree it would be a "protocol requirement." Miners have various reasons for doing what they do, and this is being promoted as non-disruptive and optional. But you probably don't need more than 20% of mining power to engage this idea for it to be successful. I'm not even sure what a 51% attack would look like on the alt-chain, but it wouldn't be very exciting -- the worst you could do is prevent some lite-nodes from being able to verify their own balance -- there would be no financial gain in it, and it would cost a fortune. 20% seems high enough that there would always be a "checkpoint" nearby, and high enough that it would be vastly too expensive for a prankster to do anything to that chain.
|
|
|
I asked before but I don't think it's been answered: what branch to test? master is 0.77 it seems.
Oh yeah, sorry. Branch is blkchainagain. I release installers, but forget people like to compile it themselves...

EDIT2: Let's compromise -- can you send me your Makefile with the modifications you would recommend? At the moment I'd just be taking a shot in the dark, because I don't fully understand the variability of the environments you are in. But I'll see what you're trying to do and I'll accommodate.

I put up a pull request on github a while ago. You can see the simple changes I made there. I give you full right to license that code however you want, if that is still a concern for you.

Holy crap, you're right. I totally forgot that you did that! Okay, I'll look at it and make the appropriate changes. There's no reason not to.

Oh, one more thing: I'm not sure if this is intentional or not, but the Armory process closes completely when I close the main Armory window, rather than minimizing to the system tray.
I never implemented "minimize to tray" -- I'm sure it's not too difficult, though. It's a feature I never used (or wanted to use), but if others like it, I might be able to crank it out in an hour.
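For what it's worth, the Qt side of that feature is small. A minimal PyQt4 sketch of the behavior (illustrative only -- a bare QMainWindow with a stock icon, not a patch against Armory's actual main window class):

```python
import sys
from PyQt4 import QtGui

class MainWindow(QtGui.QMainWindow):
    def __init__(self):
        super(MainWindow, self).__init__()
        # A stock icon stands in for Armory's real tray icon.
        self.tray = QtGui.QSystemTrayIcon(
            self.style().standardIcon(QtGui.QStyle.SP_ComputerIcon), self)
        self.tray.activated.connect(self.toggleVisible)
        self.tray.show()

    def closeEvent(self, event):
        # Instead of quitting, swallow the close and hide to the tray.
        event.ignore()
        self.hide()

    def toggleVisible(self, reason):
        if reason == QtGui.QSystemTrayIcon.Trigger:   # single click
            self.setVisible(not self.isVisible())

if __name__ == '__main__':
    app = QtGui.QApplication(sys.argv)
    win = MainWindow()
    win.show()
    sys.exit(app.exec_())
```

A real implementation would also want a tray context menu with an explicit "Quit" action, since closeEvent no longer exits the process.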
|
|
|
What branch should I check out? blkchainagain? It looks like master is still .77. I am looking at line 29 of cppForSwig/Makefile, and it has "libpython2.6.a" in it. Is this how you meant to do it? Shouldn't this be dynamic, based on the installed version or something? When I run "make swig", it (expectedly) errors out with:

g++ -shared -lpthread UniversalTimer.o BinaryData.o FileDataPtr.o BtcUtils.o BlockObj.o BlockUtils.o EncryptionUtils.o libcryptopp.a libpython2.6.a CppBlockUtils_wrap.o -o ../_CppBlockUtils.so
i686-apple-darwin11-llvm-g++-4.2: libpython2.6.a: No such file or directory
make: *** [swig] Error 1
I do have libpython2.7.a in "/usr/local/lib/python2.7/config/libpython2.7.a". What is the proper way to put that in the Makefile? Also, it would be really nice if the dependency directory were easy to change -- something like a variable at the top of the file, just like in the Satoshi client Makefile. I'm always having to edit the Armory Makefile to change the paths to "/usr/local/include" for my Mac.

Ahh. I was getting sick of compiling four different versions of Armory because of issues with the python versions. So I modified the makefile to statically compile it in, so that the python version on the user's computer would no longer matter. Thus I won't have to split out different versions for python2.6 and python2.7 systems anymore. I didn't really want to do this, but I've had so many complaints/crashes due to it, and python is small enough, that static compiling makes sense. I see it doesn't come without consequences, though... I'm really bad with makefiles, if you didn't notice.

However, it changes very rarely. So you should be able to modify the Makefile once, and "git pull" the changes without overwriting it. I suspect you're doing a fresh checkout every time, but if you pull instead, it will merge the new version with the Makefile changes you've made. It keeps the changes even when you switch between branches.

EDIT: On that note, I just tested the .deb compiled on my 10.04 machine on 12.04 -- it previously didn't work due to this python-version issue, but now it does! I'm keeping it!

EDIT2: Let's compromise -- can you send me your Makefile with the modifications you would recommend? At the moment I'd just be taking a shot in the dark, because I don't fully understand the variability of the environments you are in. But I'll see what you're trying to do and I'll accommodate.
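For concreteness, the requested change might look roughly like this -- a hypothetical sketch, not the actual Armory Makefile (the variable names are made up, and the object list is abbreviated from the link command above):

```make
# Hypothetical sketch only -- not Armory's real Makefile. Hoist the
# python version and install prefix into overridable variables, e.g.:
#   make swig PYVER=2.7 PREFIX=/usr/local
PYVER  ?= 2.6
PREFIX ?= /usr
PYLIB  ?= $(PREFIX)/lib/python$(PYVER)/config/libpython$(PYVER).a
CXXFLAGS += -I$(PREFIX)/include

OBJS = UniversalTimer.o BinaryData.o FileDataPtr.o BtcUtils.o \
       BlockObj.o BlockUtils.o EncryptionUtils.o

swig: $(OBJS) libcryptopp.a CppBlockUtils_wrap.o
	g++ -shared -lpthread $(OBJS) libcryptopp.a $(PYLIB) \
	    CppBlockUtils_wrap.o -o ../_CppBlockUtils.so
```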
|
|
|
The current Armory version cannot parse transactions with inputs directly from mined coins. Maybe you could add that functionality? (For P2Pool payouts, etc.)
Also, like other users have said, it would be very useful to choose which coins you use for inputs to a transaction.
I actually took the time to fix coinbase transactions a long time ago. But it seems that I re-broke it since then... and I've been too busy to investigate. I guess now is a good time to do it! However, some unusual behaviors (mainly with unconfirmed balance) were fixed in 0.79.99. Maybe that covers this, too? Have you tested it with that version? Either way, I'll go hunt down an address that is used to receive P2Pool coinbases and check it out. Adding a coin-control feature won't be too difficult, but it affects transaction creation and sending, so I don't want to rush it. I want to be really careful, and give myself lots of time to play with it and test it, so it won't go into Beta. But it will be one of the first things I add after Beta.
|
|
|
Given that testing is going so smoothly, hopefully I'll have something to release this week!
I have a bunch of little things to do, but none of it will be major functionality -- mainly a "Help-->About" window, adding a couple buttons, cleaning up some dialogs, and some other general polishing. And I need some time for folks to get back to me with any bugs/complaints.
I also want to spend some time developing user-documentation. I expect a flurry of new users once Armory gets the official "Beta" tag, and thus I should be a little more prepared for that. I don't want to delay the release just for that, but I also don't want to be unprepared for the new users.
Any last-minute requests for minor additions/improvements? On my list I have:
- Help-->About window
- Help-->Help link to webpage
- Memory-pool-corruption prevention
- Adding a "MAX" button
- Labeling addresses in your wallets that were created for change
- Fixing stretching/resizing of a couple windows
- Update intro dialog and splash screen to match current capabilities (like warning about 3 min load times on XP!)
|
|
|
The counter argument is that you will never find yourself in this position: you are either downloading and processing the whole chain, in which case you have no problem replaying the incremental updates. Or you download from some checkpoint, in which case you can still replay the insertions and deletions from that checkpoint with minimal effort. Correct. So why does it matter? I'm not sure why it's "inelegant" either, since as you point out there is no use case for recreating the tree from only an unsorted list of outputs anyway. I didn't point that out, because I don't think I agree with it. It's a counter-argument that, while I can't dispute it directly at the moment, makes me extremely uncomfortable. What future use case haven't we considered that would be stunted by this decision? I'm only agreeing that I don't have a direct counter to the argument (and thus cannot fully defend my position). But I'd hardly believe I'm the only one who would be bothered by it: I think it's extremely inelegant that if I have every node in the tree, I still have to download GBs more data just to know how to organize it (or rather, I might as well just re-download the tree directly at that point).
|
|
|
Two big issues brought up so far, in outside discussion:
(1): I am currently in respectful disagreement with Socrates about the ability to construct stateless unspent-txout-trees. Recap: If one were to use a red-black tree to store this info, then the exact structure of that tree will depend on the entire, exact, incremental history of additions and deletions to the tree starting from the genesis block. To construct the tree from scratch, I must replay the entire history in the same order every time.
I am of the opinion that you should be able to start with the current list of unspent TxOuts, however it is that you got them, and construct the correct tree without knowing anything about the history of how elements were added and deleted. With vanilla red-black trees, if I start with just the current unspent-TxOut list, I have every node in the tree -- but I need the entire history of already-spent TxOuts just to be able to replay that history and construct the tree correctly. This seems inelegant and dangerous. But I can't figure out why, other than gut instinct.
The counter argument is that you will never find yourself in this position: you are either downloading and processing the whole chain, in which case you have no problem replaying the incremental updates. Or you download from some checkpoint, in which case you can still replay the insertions and deletions from that checkpoint with minimal effort.
(2): I misunderstood DiThi's original proposal as a much simpler tree structure that could not provide the trustless lite-node behavior that my trees do. However, as I re-read it, I'm realizing that I think he's right -- a simpler tree of unspent TxOuts does actually give you that capability. Is this right?
A couple months ago when I first theorized this idea, I had a very good reason for believing that you needed to aggregate the unspent TxOuts (UTXOs) by address/script. If you didn't, bad things would happen. Unfortunately, I don't remember what those bad things were! It's entirely possible that I mis-applied some logic in my head and decided I needed an unnecessarily-complex data structure to achieve security.
So, am I wrong? I'm not sure. What are the weaknesses and advantages of DiThi's tree structure vs. mine (his leaf nodes are UTXOs, mine are roots of UTXO subtrees)? One thing I can think of: how do you know that a peer gave you the complete list of UTXOs and is not hiding the rest? Though I've already made the assumption that you have at least one honest peer, so that's probably not an issue... is it?
|
|
|
Armory works on Windows XP 32-bit! Hell yeah!
That is all. (32-bit .msi files coming soon)
EDIT: Except I just noticed loading is slow as molasses... 3 min on my WinXP VM! Why does Windows hate me?
|
|
|
Question: as users download this alt-chain, do they always download the alt-chain from its genesis block (imagine namecoin), or do they always only download a certain number of blocks back, after which point, all earlier blocks are discarded? (imagine p2pool).
I would have to imagine it was the second.
If it was the first, the value of the alt-chain would tend to zero over time: it would only be useful for pruning spent transactions that existed at the time the alt-chain was created, since all the blocks representing diffs to the alt-chain would be the same (in proportion) as the diffs on the main chain. That is, at least, the way I understood it.
If it was the second, I would imagine that it would be more sensible to create a superblock (say, every 1000 blocks) that publishes a brand new tree, and then all non-superblocks that follow (up to the next superblock) would be updates to that tree. Anyone downloading the alt-chain would never need more than 1000 blocks: one superblock and all the incremental blocks since the superblock was created. Anything older than the superblock would be simply unnecessary.
The second blockchain actually would have no substance to it. It would solely consist of headers containing the master-merkle roots. The merkle roots are created from the main-chain data, so you go get the data from that network. There would be no block reward -- your reward is on the main chain, which you are mining at the same time. So yes, everyone downloads the entire alt-chain. But the entirety of this extra chain is the headers themselves, so it's only about 4-5 MB per year. I am assuming it is necessary to have a verifiable source for pruned-headers -- a separate blockchain succeeds because correctness of data is required for it to be accepted.
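A quick back-of-the-envelope check of that 4-5 MB figure, assuming the alt-chain headers are about the same size as Bitcoin's 80-byte headers and the chain keeps Bitcoin's ~10-minute block spacing:

```python
header_bytes   = 80        # assume Bitcoin-sized headers on the alt-chain
blocks_per_day = 24 * 6    # one block per ~10 minutes
per_year = header_bytes * blocks_per_day * 365
print(per_year / 1e6, "MB/year")   # ~4.2 MB/year
```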
I believe you could provide this by publishing both your "genesis block" and the source code for the one-off utility you make to produce it. Anyone holding a complete copy of the Bitcoin block chain should be able to run your program and create a bit-for-bit copy of your genesis block. Get a few people to do this and GPG-sign the hash of the resulting output. Voila. See (1e) in the original post: the point of this exercise is to not have to trust specific nodes, GPG/PGP signatures, added centralization, etc. You trust the proof-of-work. The merkle root with the most work behind it on the alt-chain is the merkle root that you trust to be correct, just like you do with the main-network headers. We don't want users to have to set up a list of trusted authorities. Then you have revocation lists. And keeping the list updated. And maintaining GPG keys of such authorities. And politics about who should be the trusted authorities. And, of course, centralization. Or you set up an alternate blockchain, and trust the data that has the most work behind it. Voila.
|
|
|
There are two kinds of orderings here. First is the order in which updates are made to the Merkle trees. Each block is ordered, and within a block transactions are ordered, and within a transaction the txIns and txOuts are ordered. To update your Merkle trees as the result of a bitcoin transaction, you remove all the inputs, and insert all the new outputs. Everyone should be able to do this in the same order, right?
Now, within a Merkle tree, the order is determined by a key. Think of each Merkle tree as a database index. We would like to have at least two kinds of indexes, which means maintaining two instances of the Merkle tree:
Consider 10 unspent TxOuts, {0,1,2,3,4,5,6,7,8,9}. You and I are both constructing the same binary/red-black tree, except that you are starting from scratch and I am starting from a tree where elements {0,1,2,5,6,9,13} are already part of the tree. I have to add {3,4,7,8} and remove {13} from my tree. You just add all 10 elements in a specified order. We're going to get different roots. I technically know how your tree would've been constructed, but I might have to start over and reinsert everything from scratch if I wanted to make sure my tree structure matches yours. That's what I mean about "commutative computation" -- we need to make sure that regardless of whether you're starting from scratch, or updating an existing tree from any point in the past, you'll get the same answer. As I said before, a trie would work, but is generally very space-inefficient, and that matters here.
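Here is a toy sketch of exactly that scenario, using a plain unbalanced BST as a stand-in for the red-black case (rebalancing would change the shapes, but not the conclusion): both trees end up holding {0..9}, yet a merkle-style structural hash over them differs.

```python
import hashlib

class Node:
    def __init__(self, key):
        self.key, self.left, self.right = key, None, None

def insert(root, key):
    if root is None:
        return Node(key)
    if key < root.key:
        root.left = insert(root.left, key)
    elif key > root.key:
        root.right = insert(root.right, key)
    return root

def delete(root, key):
    if root is None:
        return None
    if key < root.key:
        root.left = delete(root.left, key)
    elif key > root.key:
        root.right = delete(root.right, key)
    elif root.left is None:
        return root.right
    elif root.right is None:
        return root.left
    else:                                # two children: swap in successor
        succ = root.right
        while succ.left:
            succ = succ.left
        root.key = succ.key
        root.right = delete(root.right, succ.key)
    return root

def tree_hash(n):
    # Merkle-style structural hash: equal shape and keys <=> equal digest.
    if n is None:
        return hashlib.sha256(b'empty').digest()
    return hashlib.sha256(
        tree_hash(n.left) + str(n.key).encode() + tree_hash(n.right)).digest()

yours = None
for k in range(10):                      # built from scratch, in order
    yours = insert(yours, k)

mine = None
for k in [0, 1, 2, 5, 6, 9, 13]:         # pre-existing tree
    mine = insert(mine, k)
for k in [3, 4, 7, 8]:                   # apply the new block's changes
    mine = insert(mine, k)
mine = delete(mine, 13)

# Same key set {0..9}, different shapes, therefore different roots:
print(tree_hash(yours) == tree_hash(mine))   # False
```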
|
|
|
Let me try to explain a solution to the 'alternate Merkle tree' you require. The basic idea is to use a balanced binary search tree, such as a red-black tree or a 2-3 tree. Updating such a data structure, including rebalancing, only requires accessing O(log N) tree nodes. A lite client would be able to verify such an update by receiving just the relevant nodes. There is never a need to recompute the entire tree from scratch. Balancing is strict, in that the worst-case length from the root to a leaf never exceeds O(log N). There's been a bunch of academic research on this topic, where it's known as "Authenticated Data Structures" [1,2,3]. Although the idea has been around for almost 15 years, I don't know of a single implementation. So I made a toy implementation available at https://github.com/amiller/redblackmerkle -- I'm hoping to spread awareness of this technique, since it's pretty clear that some clever use of Merkle trees is going to be important in Bitcoin's future. Let's discuss this! P.S. Most of these structures only require collision-resistant hash functions. However, if you want to use a fancy hash function with special (homomorphic) properties, you can make even more efficient structures, e.g. a Merkle 'bloom filter' [4].

[1] Certificate Revocation and Certificate Update. Naor and Nissim, 1998. USENIX. https://www.usenix.net/publications/library/proceedings/sec98/full_papers/nissim/nissim.pdf
[2] Authenticated Data Structures. Roberto Tamassia, 2003. http://cs.brown.edu/research/pubs/pdfs/2003/Tamassia-2003-ADS.pdf
[3] Persistent Authenticated Data Structures and their Applications. Anagnostopoulos, Goodrich, and Tamassia. http://cs.brown.edu/people/aris/pubs/pad.pdf
[4] Cryptography for Efficiency: Authenticated Data Structures Based on Lattices and Parallel Online Memory Checking. Papamanthou and Tamassia, 2011. http://www.cse.msstate.edu/~ramkumar/gw-102.pdf

Wow, thanks socrates! That's an excellent place to start. Of course, I should've thought about that from the start, since I am adept with data structures, especially trees/tries of all sorts. I have even coded a red-black tree before... My brain was stuck on how to modify the base merkle-tree concept into what I wanted, and didn't consider going back to a different (though related) data structure. There is one problem, though it may be addressed in the materials you referenced: the tree must be constructed identically by all parties, and from any state. And all binary-tree structures are insert-order dependent, unless you're storing them in some sort of trie. Specifying an insert order doesn't work, because someone constructing from scratch doesn't know how someone updating from a previous tree would have inserted them. But I wouldn't want to go to a basic trie, due to the space inefficiency. Something like a PATRICIA tree/trie would probably work, but it's very difficult to implement (correctly) and there aren't a lot of existing, trusted implementations out there. I'll dig into your materials a bit. Thanks!
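The core trick being described here is small enough to sketch: store a digest at every node covering the node's key and its children's digests, and an update then only recomputes digests along the search path. A minimal, hedged sketch (unbalanced for brevity -- a real red-black version would also rehash the few nodes touched by rotations, still O(log N) digests per update):

```python
import hashlib

EMPTY = hashlib.sha256(b'empty').digest()

def child_hash(n):
    return n.hash if n else EMPTY

def rehash(n):
    # Recompute only this node's digest; children keep their cached digests.
    n.hash = hashlib.sha256(
        child_hash(n.left) + str(n.key).encode() + child_hash(n.right)).digest()

class Node:
    def __init__(self, key):
        self.key, self.left, self.right = key, None, None
        rehash(self)

def insert(n, key):
    if n is None:
        return Node(key)
    if key < n.key:
        n.left = insert(n.left, key)
    elif key > n.key:
        n.right = insert(n.right, key)
    rehash(n)          # only the search path is rehashed: O(log N) work
    return n

root = None
for k in [5, 2, 8, 1, 9]:
    root = insert(root, k)
print(root.hash.hex())   # the authenticated root a lite client checks against
```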
|
|
|
Before I read this, I just want to quickly post that I personally -- whether justifiably or unjustifiably -- feel like this is the most pressing issue for Bitcoin's successful future, and I really hope the core team has its order of priorities planned accordingly.
I too believe this is a critical issue for Bitcoin as a whole. I had floated the idea in the past that handling blockchain size was critical, but other issues seemed more pressing for the devs at the time -- I didn't have a solid idea to promote, and the blockchain size wasn't so out of hand yet. One nice benefit of this solution is that, because it's an alt-chain, technically no core devs have to be on board. It can be done completely independently, and operate completely non-disruptively, with only the support of other devs who believe in it. I'd certainly like to get core devs interested, as they are very smart people who probably have a lot of good ideas to add. But one of the biggest upsides here is that it can be done completely independently.
|
|
|
This idea has been scattered throughout some other threads, but there is no one place that fully explains the idea with pictures. I believe this relieves two major problems with the network at once -- compression/pruning, and lightweight-node security -- and does so in a non-disruptive way. I am not positive that this is the right way to go, but it definitely warrants discussion.

Summary: [SEE ILLUSTRATIONS BELOW] Use a special tree data structure to organize all unspent TxOuts on the network, and use the root of this tree to communicate its "signature" between nodes. The leaves of this tree actually correspond to addresses/scripts, and the data at each leaf is actually the root of the unspent-TxOut list for that address/script. To maintain security of the tree signatures, the root will be included in the header of an alternate blockchain, which will be secured by merged mining. This provides the same compression as the simpler unspent-TxOut merkle tree, but also gives nodes a way to download just the unspent-TxOut list for each address in their wallet, and verify that list directly against the block headers. Therefore, even lightweight nodes can get full address information, from any untrusted peer, with only a tiny amount of downloaded data (a few kB). (NOTE: I have illustrated everything as using straight merkle trees, but as noted in the downsides/uncertainties section, a variant of the merkle tree will have to be used that guarantees efficient updating of the tree.)

(1) Major Benefits:
- (1a) Near-optimal blockchain compression: Theoretically, the size of the pruned blockchain would be proportional to the transaction volume (thus could go up or down), instead of the entire global history, which always increases in size. In practice it wouldn't be so clean, but you really won't do any better than this. Whoops! Before this idea was fully developed, I had overlooked the fact that full nodes will still have to maintain the transaction-indexed database. This address-indexed DB is not a replacement, but would have to be in addition to that. Therefore, it necessarily increases the amount of work and data storage of a full node. But it can simply be an "add-on" to an existing "ultraprune" implementation. (Either way, this should actually be a downside.)
- (1b) Trustless lightweight-node support: New nodes entering the network for the first time will only have to download a tiny amount of data to get full, verifiable knowledge of their balance and how to spend it (much of which can be stored between loads). A single honest peer out of thousands guarantees that you get, and recognize, good data.
- (1c) Perfectly non-disruptive: There are no main-network protocol or blockchain changes at all. All the balance-tree information is maintained and verified in a separate blockchain through merged mining. In fact, it's so non-disruptive, it could be implemented without any core-dev support at all (though I/we would like their involvement).
- (1d) Efficient tree querying & updating: The full-but-pruned nodes of the network will be able to maintain this data structure efficiently. New blocks simply add or remove unspent coins from the tree, and all operations are "constant time and space" (there is an upper limit on how much time and space is required to prove inclusion of, insert, or delete a piece of data, no matter how big the network is).
- (1e) No user-setup or options: Unlike overlay networks, achieving full trust does not require finding a trusted node, or subscribing to a service. Just like the main blockchain -- you find a bunch of random peers and get the longest chain. This could be bootstrapped in a similar fashion to the main network.
(2) Downsides and Uncertainties:
- (1a) See revised (1a) above.
- (2a) Complexity of concept: This is not simple. It's a second blockchain, requiring merged mining -- though if it is successful and supported by the community, it could be added to the network by requiring that miners compute and include the root hash of this data structure in the coinbase script (just like with block height). This is entirely feasible, but it could be a bear to implement.
- (2b) Uncertainties about lite-node bootstrap data: Depending on how the data is structured, there may still be a bit of data for a lite node to download to get the full security of a full node. It will, undoubtedly, be much less than downloading the entire chain. But there are obvious implications if this security comes at the cost of 1 MB/wallet versus 100 MB/wallet (still better than 4 GB, as of this writing). UPDATE: My initial estimate, based on the "Hybrid PATRICIA/Briandais Tree" (aka Reiner-Tree), is that a wallet with 100 addresses could verify its own balance with about 250 kB.
- (2c) [SEE UPDATE AT BOTTOM] Merkle-tree Alternative Needed: Vanilla merkle-trees will not work, because adding or removing single branches is likely to cause complete recomputation of the tree. But it should be possible to create an alternative with the following properties:
- Commutative computation: a node should be able to get the same answer regardless of whether the tree is computed from scratch, or is based on updating a previous tree.
- O(log(N)) updating: removing or adding a single leaf node should be able to be done in O(log(N)) time. With a vanilla merkle tree, this is true only if you remove a node and add a node to the same leaf location.
(3) Assumptions:
- (3a) Need verifiable tree roots: I argue that a regular overlay network won't suffice, solely because it's too easy for malicious nodes to spread incorrect data and muck up the network. If there are enough malicious nodes in an overlay network, they could make lite nodes that depend on it unusable. I am assuming it is necessary to have a verifiable source for pruned-headers -- a separate blockchain succeeds because correctness of data is required for it to be accepted.
- (3b) Merged mining does what we think it does: It is a secure way to maintain a separate blockchain, leveraging existing mining power.
- (3c) Efficient sorting: Leaf nodes of the main tree will have to be sorted so that all nodes can arrive at the same answer. However, this can be done using bucket sort in O(N) time, because the leaf nodes are hashes, which should be uniformly distributed.
Alt-Chain Merkle Tree construction:
-- For each address/script, collect all unspent TxOuts
-- Compute the merkle root of each TxOut tree
-- Sort the roots, and use them as leaf nodes for a master merkle tree
-- Include the merkle root of the master tree in the alternate chain's header
https://dl.dropboxusercontent.com/u/1139081/BitcoinImg/ReinerAltChain/reinercompression.png

Getting your balance:
-- Download the headers of both chains
-- Request the unspent-TxOut-hash list
-- Compute the sub-merkle root for this address
-- Request the secondary-branch nodes (O(log(N)))
-- Compute the master root; compare to the block header
-- Request the full TxOuts for each unspent-TxOut hash above
https://dl.dropboxusercontent.com/u/1139081/BitcoinImg/ReinerAltChain/reinercompression2.png

Alternate Chain:
All data is included on the alternate blockchain, which is maintained through merged mining on the main chain. This is only one extra tx per block on the main chain. That is the full extent of its impact on the main chain, and on any nodes that are ignoring/unaware of the alt-chain.
https://dl.dropboxusercontent.com/u/1139081/BitcoinImg/ReinerAltChain/reinerchain.png
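As a rough illustration of the construction steps above, a sketch in Python, using plain merkle trees and lexicographic sorting as a stand-in for the canonical ordering (the real structure would be the efficiently-updatable merkle variant discussed in (2c), not a vanilla tree rebuilt from scratch):

```python
import hashlib
from collections import defaultdict

def dhash(b):
    return hashlib.sha256(hashlib.sha256(b).digest()).digest()

def merkle_root(leaves):
    # Plain merkle tree over already-hashed leaves (duplicate last if odd).
    if not leaves:
        return dhash(b'')
    layer = list(leaves)
    while len(layer) > 1:
        if len(layer) % 2:
            layer.append(layer[-1])
        layer = [dhash(layer[i] + layer[i + 1])
                 for i in range(0, len(layer), 2)]
    return layer[0]

def master_root(utxos):
    """utxos: iterable of (script, txout_hash) pairs.
    1. group unspent TxOuts by address/script
    2. compute the merkle root of each per-script list
    3. sort the sub-roots and merkle them into the master root"""
    by_script = defaultdict(list)
    for script, txout_hash in utxos:
        by_script[script].append(txout_hash)
    sub_roots = [merkle_root(sorted(hashes)) for hashes in by_script.values()]
    return merkle_root(sorted(sub_roots))   # this goes in the alt-chain header
```

A lite node holding the alt-chain headers then only needs its own sub-tree plus the O(log(N)) secondary-branch nodes to recompute and check the master root.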
Yes, this is a huge undertaking. Yes, there are a lot of uncertainties. Yes, I need a new merkle tree structure. But this idea would kill two massive birds with one stone (kill two albatrosses with one block?). Alright, tear it apart!
UPDATE: After lots and lots of discussion and debate, I believe that the address index should be maintained as a trie-like structure. Others have expressed interest in a binary search tree (BST). Either way, the structure can be adapted to have the same properties we desire of a merkle tree, but with a lot more flexibility, such as very quick insertion, deletion, querying, updating, etc. My preference is the creme-de-la-creme of tries -- a hybrid of a PATRICIA tree (level-compressed trie) and a de la Briandais tree (node-compressed). It looks something like this: https://dl.dropboxusercontent.com/u/1139081/BitcoinImg/ReinerAltChain/DataStructures_Values.png The structure would be indexed by TxOut script ("recipient"), and each node is recursively authenticated by the nodes below it. The uniqueness of the trie structure guarantees that there is exactly one solution for a given set of TxOuts, which also means that only the existing set of TxOuts needs to be obtained in order to create the trie (the BST requires replaying all transactions, in order, to have a well-defined internal structure). For education on trie structures, see my pretty pictures in this post.
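The insert-order independence is easy to demonstrate even without the PATRICIA/Briandais compression. A minimal sketch: a plain byte-wise trie whose node digests cover the value plus the (branch byte, child digest) pairs in sorted byte order, so any insertion order over the same key set yields the same root (compression changes the constants, not this property):

```python
import hashlib

def node_hash(node):
    # Digest covers this node's value and its children in byte order,
    # so the digest depends only on the key set, never insertion order.
    h = hashlib.sha256(node.get('value', b''))
    for byte in sorted(node['children']):
        h.update(bytes([byte]) + node_hash(node['children'][byte]))
    return h.digest()

def insert(root, key, value):
    node = root
    for byte in key:                      # one level per key byte
        node = node['children'].setdefault(byte, {'children': {}})
    node['value'] = value

root_a = {'children': {}}
root_b = {'children': {}}
for k, v in [(b'abc', b'1'), (b'abd', b'2'), (b'xy', b'3')]:
    insert(root_a, k, v)
for k, v in [(b'xy', b'3'), (b'abd', b'2'), (b'abc', b'1')]:  # other order
    insert(root_b, k, v)
assert node_hash(root_a) == node_hash(root_b)   # same set => same root
```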
|
|
|
Hello etotheipi,
The link for the Linux version 0.79.99 seems broken, can you please fix it?
Thank you Raphy
I don't know why dropbox does that! I removed and re-added the file, and it works now. Sorry about that!

Just use send-multi to spend the whole balance of that imported address. One output will be your desired destination address; another output will be your desired change address. If you can forget to use send-multi from your brain wallet address, then you can equally forget to mark a checkbox somewhere to use imported addresses only.
EDIT: in UI send-multi is probably called something like "Add another recipient"
Yes, this does work. And I will be adding a "MAX" button that will fill in the maximum remaining amount (total balance minus the values you've already plugged in for other recipients and the fee). It means you're moving your entire balance every time, but that doesn't actually make any difference (unless you care about the age of your coins). In fact, it's probably better, as it would always keep all your savings in a single address... I know that doesn't solve molec's problem, but it's something that always bothered me about other programs, and it should be plenty easy to add. Also, you could just make a paper backup of the offline wallet.
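The arithmetic behind that button is trivial; as a sketch (the function name and satoshi units are illustrative, not Armory's actual code):

```python
def max_remaining(total_balance, other_outputs, fee):
    # All amounts in satoshis. This is all the MAX button has to compute:
    # whatever is left after the other recipients and the fee.
    remaining = total_balance - sum(other_outputs) - fee
    if remaining < 0:
        raise ValueError("other outputs plus fee exceed the balance")
    return remaining

# e.g. 1.0 BTC balance, one other recipient of 0.3 BTC, 0.0005 BTC fee:
print(max_remaining(100000000, [30000000], 50000))   # 69950000 (0.6995 BTC)
```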
|
|
|
|