Bitcoin Forum
June 16, 2024, 08:12:00 AM *
News: Latest Bitcoin Core release: 27.0 [Torrent]
 
2061  Bitcoin / Armory / Re: Armory - Discussion Thread on: December 20, 2012, 02:52:57 PM
Is there anywhere I can read about the wallet specification without delving into armory source?

Indeed!  It's on the webpage:  http://bitcoinarmory.com/index.php/armory-wallet-files

However, I'm working on a new wallet format now, so that will be changing pretty substantially in the near future, but there's also a lot of work left to be done (so maybe not near-near future...)
2062  Bitcoin / Development & Technical Discussion / Re: Improving Offline Wallets (i.e. cold-storage) on: December 20, 2012, 02:29:10 PM
It does not have to be radical, collecting all dust in a single transaction, but it could have a tendency to use at least as many inputs as outputs.
I like this. This way you will slowly collect dust over time. You could make it so that, whenever possible, you squeeze in additional inputs if doing so does not affect the fee.

Hahah,

Actually Armory already does this.  The algorithm could be improved, but it will try to collect dust and throw it on top, as long as it doesn't induce a fee, and it doesn't increase the address linkages (it is from addresses already on the input side).

This will be improved in the future, as someone pointed out that I can treat addresses that have already been linked, as a single "group."  Thus, I can throw in dust from all groups of addresses already represented on the input side, without damaging the input linkages.
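The group-aware dust rule described above can be sketched roughly like this. It is a toy model, not Armory's actual coin-selection code; the function names, data shapes, and dust threshold are all illustrative:

```python
# Hypothetical sketch of the dust-gathering rule: after the required inputs
# are chosen, append extra "dust" UTXOs, but only if they belong to addresses
# already on the input side (no new linkage) and the enlarged transaction
# still pays no extra fee.

DUST_LIMIT = 0.0001  # BTC; illustrative threshold, not Armory's


def add_dust_inputs(selected, all_utxos, fee_of):
    """selected: list of (address, value) pairs already chosen as inputs.
    all_utxos: every spendable (address, value) pair in the wallet.
    fee_of: function estimating the fee for a given input list."""
    linked = {addr for addr, _ in selected}
    base_fee = fee_of(selected)
    for utxo in all_utxos:
        addr, value = utxo
        if utxo in selected or addr not in linked or value > DUST_LIMIT:
            continue  # skip: already used, would add linkage, or not dust
        trial = selected + [utxo]
        if fee_of(trial) <= base_fee:  # only add if the fee doesn't go up
            selected = trial
    return selected
```

With the "group" improvement, `linked` would expand to every address in any group already represented on the input side, rather than just the input addresses themselves.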
2063  Bitcoin / Development & Technical Discussion / Re: Restoring addresses from old backup on: December 20, 2012, 05:24:49 AM
As I understand, and in my experience, in Armory you can take a single backup, and the program will be able to regenerate all addresses in a wallet based upon this one backup. It means that you can take one backup when you create the wallet, and that's all you need. It will use the information from the first address as a seed to generate the next, so you don't risk losing the other private keys.

Is there something similar implemented in the Qt client? Or do I have to back it up every time I create a new address?

What information is used as seed when I generate a new address in the Qt client?

BIP 32 was developed by the Bitcoin-Qt devs, and will hopefully make its way into Bitcoin-Qt in 0.8.  But it's looking like it'll be a later release.  This will provide multi-chain deterministic addresses (create multiple wallets built from one seed).  I plan to implement the same thing in Armory so it will be compatible.
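For the curious, the heart of that derivation is an HMAC-SHA512 step. Below is a simplified sketch of the idea only, not the final BIP 32 spec (which, among other things, feeds the serialized public point into the HMAC for normal derivation rather than the raw private key):

```python
import hashlib
import hmac

# secp256k1 group order
N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141


def child_key(parent_key: int, chain_code: bytes, index: int):
    """Simplified BIP 32-style derivation: HMAC-SHA512 keyed by the chain
    code over (parent data || index); the left half tweaks the private key,
    the right half becomes the child chain code."""
    data = parent_key.to_bytes(32, "big") + index.to_bytes(4, "big")
    digest = hmac.new(chain_code, data, hashlib.sha512).digest()
    left, right = digest[:32], digest[32:]
    child = (int.from_bytes(left, "big") + parent_key) % N
    return child, right
```

Because the derivation is deterministic, the one seed plus chain code regenerates every key in the chain -- which is exactly why a single backup suffices.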

But at the moment, you are correct:  this cannot be done in Bitcoin-Qt, and it's a real shame because requiring-regular-backups-at-the-cost-of-losing-your-money is a real problem in the world of end-user software.  It's never worth dealing with backups until it's too late, i.e. "ehhh, I'll set it up next week."  Plus, that means that your backup solution can't be extremely secure, because it still needs to be convenient since you're doing it multiple times.  At least with a deterministic wallet, you can back up once, drive to the bank and put it in your safe-deposit box.  Once.

Personally, I think this is the number one reason to use these alt clients, because the ability to back up once, forever, is such a powerful feature.


2064  Bitcoin / Armory / Re: Armory - Discussion Thread on: December 19, 2012, 05:22:41 PM
Ente

Actually, I think you just need to recompile.  Just type "make".  And you shouldn't do a checkout on the remotes/origin directly.   Just "git checkout dev" then do a "git pull origin dev".

Otherwise,  that's the right spirit!  Just compile it first,  *then* look for bugs :-)
2065  Bitcoin / Armory / Re: Armory - Discussion Thread on: December 19, 2012, 01:12:04 PM
On the topic of testing... anyone currently using it?  Any problems with it?  It's mostly small updates, so I don't expect a lot to go wrong with it.  But I still need some feedback to know for sure.

I would like to test dev releases.
I am a bit wary about using the dev branch of beta software on real funds, though...
What would you, generally, suggest as a good way to handle this?
Of course I have external backups of the wallet files.

Ente

I would only be concerned if I make updates to the wallet code, thus risking the possibility of making errors in computing addresses, etc.  But the wallet code hasn't been touched in months (except for some tweaks to the keypool). 

When I make the new wallet, I expect people will want to test on testnet first, or with smaller amounts of coins.  For these types of releases, though, the worst thing that will happen is that pressing some buttons will throw strange errors, or maybe even crash Armory.  If that happens, please send me the log file.  But I wouldn't worry about losing coins...
2066  Bitcoin / Development & Technical Discussion / Re: Ultimate blockchain compression w/ trust-free lite nodes on: December 19, 2012, 04:55:23 AM
I love your Inkscape graphics.  I downloaded it because of your mention :)

It's like VIM:  it's got a bit of a learning curve to be able to use it efficiently, but there are so many shortcuts and hotkeys that you can really fly once you have some experience (and yes, I do 100% of my code development in vim :))

Doesn't it make more sense to start downloading from the bottom of the tree instead of the top?  Say, partition the address space up, and request all UTxOs that lie in a given partition - along with the full bounding branches - and then compute the missing node hashes up to the root.  Inclusion of each partition in some known block is verified, and then we'd just have to catch up the delayed partitions separately using full tx data.  The deterministic property of the tree makes the partition syncing trivial, and I assume tx data will be available within some relatively large time window for reorgs and serving, etc.

My brain is fried right now too, I'll have a closer look at what you wrote after some sleep.  Maybe I'm oversimplifying it...

I think we're saying the same thing:  I showed partitioning at the second level, but it really would be any level low enough to meet some kind of criterion (though I'm not sure what that criterion is, if you don't have any of the data yet).  It was intended to be a "partitioning from the bottom" and then filling up to the root once you have it all.

I imagine there would be a P2P command that says "RequestHashes | HeaderHash | Prefix".  If you give it an empty prefix, that means start at root:  it will give you the root hash, followed by the 256 child hashes.  If you give it a prefix "\x01" it gives you the hash of the node starting at '\x01' and the hashes of its 256 children. This is important, because I think for this to work, you have to have a baseline for what the tree is going to look like for your particular target block.  I think it gets significantly more complicated if you are aiming for partitions that are from different blocks...

Then there'd be another command that says "RequestBranch | HeaderHash | Prefix | StartNode".  The header hash/height would be included only so that peers that are significantly detached from your state won't start feeding you their data.  i.e. Maybe because they don't recognize your hash, or somehow they are more than 100 blocks from the state you are requesting.  If the peer's state is within 100 blocks, they start feeding you that partition, ordered lexicographically.  They'll probably be transferred in chunks of 1000 nodes, and then you put in the next request using the 1000th node as the start node to get the next chunk.  Since we have branch independence and insert order independence, the transfer should be stupid simple.
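A rough sketch of what those two commands and the chunked transfer might look like. The command bytes, field layout, and chunk size here are all assumptions for illustration, not an implemented protocol:

```python
import struct

CHUNK = 1000  # nodes per RequestBranch reply, as suggested above


def encode_request_hashes(header_hash: bytes, prefix: bytes) -> bytes:
    # [command(1) || header_hash(32) || prefix_len(1) || prefix]
    assert len(header_hash) == 32
    return b"\x01" + header_hash + bytes([len(prefix)]) + prefix


def encode_request_branch(header_hash: bytes, prefix: bytes,
                          start_node: bytes) -> bytes:
    # [command(1) || header_hash(32) || prefix_len(1) || prefix
    #  || start_len(2) || start_node]
    assert len(header_hash) == 32
    return (b"\x02" + header_hash + bytes([len(prefix)]) + prefix
            + struct.pack(">H", len(start_node)) + start_node)


def fetch_branch(peer, header_hash: bytes, prefix: bytes):
    """Page through a branch in CHUNK-node slices, resuming each request at
    the last node of the previous reply.  `peer` is a callable taking the
    request bytes and returning a lexicographically ordered list of node
    keys.  Insert-order independence makes simple resumption safe."""
    nodes, start = [], b""
    while True:
        reply = peer(encode_request_branch(header_hash, prefix, start))
        nodes.extend(reply)
        if len(reply) < CHUNK:
            return nodes
        start = reply[-1]
```

The "stupid simple" property shows up in `fetch_branch`: no reconciliation logic, just resume-from-last-key until a short reply signals the end.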

Also something to note:  I think that the raw TxOut script should be the key for this tree.  Sure, a lot of these will have a common prefix, but PATRICIA trees will compress those anyway.  What I'm concerned about is something like multiple variations of the same address, such as a TxOut using hash160 vs using full public key.  That can lead to stupid side-effects if you are only requesting by addr.  
2067  Bitcoin / Armory / Re: Armory - Discussion Thread on: December 19, 2012, 04:03:42 AM
May have found a small bug (v0.86-beta):

I created a new receiving address in offline mode. Now when I close Armory and re-launch it, the receiving address is gone. If I click the "receive bitcoins" (within a wallet) button again, the exact same address gets re-created along with the comment.

Actually, there's a bug fix for that in 0.86.2 -- please verify for me that the issue is resolved (without breaking anything).  Thanks!
2068  Bitcoin / Development & Technical Discussion / Re: Ultimate blockchain compression w/ trust-free lite nodes on: December 19, 2012, 02:41:34 AM
How about this: download whatever non-synchronized UTxO set you can from peers, then start downloading blocks backwards, adding any missing new txouts and removing any that were spent during the download.  Then once you're back to a few blocks before the time you started the download, you could build the tree and make sure it hashes properly.

@d'aniel:  You might be right that it's possible to reconstruct the tree from an amalgamation of closely related states.  Though, I'm concerned that there's too many ways for that to go wrong.  Let's start a thought-experiment:  I have a fresh slate, with no Tx and no UTXO.  I then execute 65,536 requests for data, and download each one from a different peer (each request is for a different 2-byte prefix branch).  I will assume for a moment that all requests execute successfully, and we end up with something like the following:

[drawing omitted: trie built from 65,536 two-byte-prefix branch requests, with subtrees served by peers at different block heights]

A couple notes/assumptions about my drawing:  

  • (1) I have drawn this as a raw trie, but the discussion is the same (or very close to the same) when you transition to the Patricia/Brandais hybrid.  Let me know if you think that's a bad assumption.
  • (2) We have headers up to block 1000.  So we ask one peer that is at block 1000 for all 65,536 trie-node hashes.  We verify it against the meta-chain header.
  • (3) We make attempts to download all 65,536 subtrees from a bunch of peers, and end up mostly with those for block 1000, but a few for 995-999, and a couple have given us block 1001-1002 because that was what they had by the time we asked them to send us that branch.  We assume that peers tell us what block they are serving from.
  • (4) Some branches don't exist.  Even though on the main network the second layer will always be at 100% density, there may be various optimization-related reasons to do this operation at a lower branch level where density is not 100%.
  • (4a) I've used green to highlight four situations that I don't think are difficult, but that we need to be aware of.  Branch \x0202 is where the node hashes at block 1000 say it's an empty node, but is reported as having data by the peer serving us from block 1001.  \x0203 is the same, but with a peer serving block 993 telling us there is data there.  \x0302 and \x0303 are the inverse:  block 1000 has hashes for those trie-nodes, but when requested from peers serving at other points in time, they report empty.
  • (5) Downloading the transactions-with-any-unspent-txouts from sources at different blocks also needs to be looked at.  We do eventually need to end up with a complete list of tx for the tree at block 1000 (or 1002?).  I'm expecting that any gaps can be filled with subsequent requests to other nodes.

So, as a starter algorithm, we acquire all this data and an almost-full UTXO tree.  We also acquire all of the blocks between 993 and 1002.   One branch at a time, we fast forward or rewind that branch based on the tx in blocks 993-1002.  It is possible we will be missing block data needed (due to #5), but I assume we will be able to acquire that info from someone -- perhaps this warrants keeping tx in the node's database for some number of blocks after it is depleted, to make sure it can still be served to other nodes catching up (among other reasons).
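That per-branch fast-forward/rewind step can be modeled in a few lines. This is a toy model over a plain set of UTXO ids; real code would also update the trie-node hashes, and the block contents are illustrative:

```python
def sync_branch(utxos: set, served_height: int, target: int, blocks):
    """Move one branch's UTXO set from served_height to target.
    blocks[h] = (created, spent): sets of UTXO ids for block h."""
    if served_height < target:           # fast-forward: apply blocks
        for h in range(served_height + 1, target + 1):
            created, spent = blocks[h]
            utxos |= created
            utxos -= spent
    else:                                # rewind: undo blocks in reverse
        for h in range(served_height, target, -1):
            created, spent = blocks[h]
            utxos -= created
            utxos |= spent               # resurrect outputs spent here
    return utxos
```

Note the rewind direction needs the spent-output data kept around, which is the reason given above for retaining depleted tx in the node's database for a while.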

On the surface, this looks workable and actually not terribly complicated.  And no snapshots required!  Just ask peers for their data, and make sure you know what block their UTXO tree is at.  But my brain is at saturation, and I'm going to have to look at this with a fresh set of eyes later this week, to make sure I'm not neglecting something stupid.
2069  Bitcoin / Armory / Re: Armory - Discussion Thread on: December 19, 2012, 12:33:17 AM
For those compiling from source, the latest is on the "dev" branch in the git repo.  I just realized I should start a "testing" branch, and use that as a holding-cell for soon-to-be-master upgrades, and then I don't have to keep telling you guys what branch to use.  Not sure why I didn't do this sooner...
Is the lack of a git tag for the 0.87 release intentional or an oversight?

I'll tag it when it's a real release -- right now it's still in dev branch because it's still a testing release.  Second, I have been pretty lazy about tagging versions, but I will start doing so more religiously, now that there are a couple build systems relying on them.  So, I will be sure to tag all future full-releases.   Third, this will actually be 0.86.2 -- it's really just a bugfix/polishing release.

On the topic of testing... anyone currently using it?  Any problems with it?  It's mostly small updates, so I don't expect a lot to go wrong with it.  But I still need some feedback to know for sure.
2070  Bitcoin / Development & Technical Discussion / Re: Correct header data for block 1 on: December 18, 2012, 11:54:54 PM
Perhaps it's a single-bit error!  I documented my experience with this here.  It was maddening, to be able to confirm 1.5 million tx hashes, and have exactly one fail.  It took me a while, but I eventually narrowed it down.  I was able to manually compare the tx from a different source to the one in my blkfile. 
2071  Bitcoin / Development & Technical Discussion / Re: Ultimate blockchain compression w/ trust-free lite nodes on: December 18, 2012, 11:52:04 PM
Of course, if snapshots end up not being the best solution, then I'm all for that as well.

Well, I am not seeing a way around using snapshots.  I was hoping someone more insightful than myself would point out something simpler, but it hasn't happened yet...

Also, as mentioned earlier, I think snapshots are wildly expensive to store.  I think if a node wants block X and the snapshot is only for block X+/-100, then he can get that snapshot at X and the 100 blocks in between and rewind or fast forward the UTXO tree on his own.  The rewinding and fast-forwarding should be extremely fast once you have the block data. 

Although this does open the question of how nodes intend to use this data.  If it turns out they will want to understand how the blockchain looked at multiple points in time, then perhaps it's worth the effort to store all these snapshots.  If it never happens, then the fast-forward/rewind would be better.  My thoughts on this are:

(1) The gap between snapshots should be considered relative to the size of a snapshot.  My guess is that 100 blocks of history is smaller than a snapshot, and thus you never need snapshots more frequent than that.
(2) Snapshots at various points in time actually won't be that useful, other than helping other nodes download.  These kinda-full-nodes only care about the latest state of the UTXO tree, nothing else.  If you think there's other reasons, please point them out.

2072  Bitcoin / Development & Technical Discussion / Re: Ultimate blockchain compression w/ trust-free lite nodes on: December 18, 2012, 10:51:08 PM
On that note, isn't it actually 15 hashes per full merkle tree of 256 nodes?
Yeah, whoops.

Regarding the issue of synching one's Reiner tree :) is it really a problem this proposal needs to solve?  Couldn't the client just wait to build/update it till after he's caught up with the network in the usual way?

Well, I'm hoping that it will be possible to not need to "catch up with the network" in the current sense.  Certain types of nodes will only care about having the final UTXO set, not replaying 100 GB of blockchain history just to get their 2 GB of UTXO data.  I'd like it if such nodes had a way of sharing these UTXO trees without using too much resources, and without too much complication around the fact that the tree is changing as you are downloading. 

One core benefit of the trie structure is that nodes can simply send a raw list of UTXOs, since insertion order doesn't matter (and thus deleted UTXOs don't need to be transferred).  Sipa tells me there's currently about 3 million UTXOs, so at 36 bytes each, that's about 100 MB to transfer.  There are, of course, the raw transactions with any remaining UTXO that need to be transferred, too -- currently 1.3 million out of about 10 million total tx in the blockchain.  So that's probably another few hundred MB.  But still only a fraction of the 4.5 GB blockchain.
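A quick back-of-envelope check on those numbers (the counts are the 2012 figures quoted above):

```python
utxo_count = 3_000_000           # sipa's estimate of live UTXOs
utxo_size = 36                   # bytes per raw UTXO entry
total_mb = utxo_count * utxo_size / 1e6
print(total_mb)                  # 108.0 -- "about 100 MB" as stated
```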


As I said, the simplest is probably to have nodes just spend the space on a snapshot at every retarget, and let nodes synchronize with that (or perhaps every 500 blocks or something, as long as all pick the same frequency so that you can download from lots of sources simultaneously).  After that, they can download the few remaining blocks to update their own tree, appropriately.

I had come up with a scheme for deciding how long to keep each snapshot that I thought would balance space and usefulness well.

If the block height (in binary) ends in 0, keep it for 4 blocks.
If 00, keep for 8 blocks.
If 000, keep for 16 blocks.
If 0000, keep for 32 blocks.
If 00000, keep for 64 blocks... etc. all the way to the genesis block.

That's a generalization of what I proposed:  if(blockheight mod 2016 == 0) {storeSnapshotFor2016Blocks}.  Clearly, the modulus needs to be calibrated...  The problem is these snapshots are very expensive, so we would prefer not to do snapshots at all.  But one may be necessary.  Hopefully not more than that.  Although it would be great if I just overlooked something and we could do this without snapshots at all.
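The quoted retention rule reduces to counting trailing zero bits. A sketch of the scheme exactly as quoted (not code from any client; the genesis-block handling is an assumption, since the quote only says "all the way to the genesis block"):

```python
def retention_blocks(height: int) -> int:
    """How long to keep the snapshot taken at `height`, per the quoted
    rule: binary height ending in 0 -> keep 4 blocks, 00 -> 8, 000 -> 16,
    i.e. 2**(z+1) blocks for z trailing zero bits.  Odd heights get no
    snapshot."""
    if height == 0:
        return 2 ** 63           # genesis: keep effectively forever
    if height % 2:
        return 0
    z = (height & -height).bit_length() - 1   # count trailing zero bits
    return 2 ** (z + 1)
```

The `2016`-modulus proposal is the degenerate case where only one bucket of this geometric ladder is kept.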
2073  Bitcoin / Development & Technical Discussion / Re: Ultimate blockchain compression w/ trust-free lite nodes on: December 18, 2012, 10:43:03 PM
Is it possible to read somewhere exactly what is stored in a block in pseudocode?

A block, as it is stored on disk, is very straightforward and easily parsed:

Code:
[MagicBytes(4) || BlockSize(4) || RawHeader(80) || NumTx(var_int) || RawTx0 || RawTx1 || ... || RawTxN] 

The blk*.dat files are just a concatenated sequence of binary records like this.
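A minimal Python parser for that record layout, as a sketch (the magic value shown is main-net's; var_int is Bitcoin's standard variable-length integer encoding):

```python
import struct

MAGIC = b"\xf9\xbe\xb4\xd9"  # main-net magic bytes


def read_varint(buf: bytes, off: int):
    """Decode a Bitcoin var_int; returns (value, next_offset)."""
    first = buf[off]
    if first < 0xFD:
        return first, off + 1
    if first == 0xFD:
        return struct.unpack_from("<H", buf, off + 1)[0], off + 3
    if first == 0xFE:
        return struct.unpack_from("<I", buf, off + 1)[0], off + 5
    return struct.unpack_from("<Q", buf, off + 1)[0], off + 9


def read_block_record(buf: bytes, off: int = 0):
    """Parse one [magic || size || header || numtx || txs...] record;
    returns (raw_header, num_tx, offset_of_next_record)."""
    assert buf[off:off + 4] == MAGIC, "bad magic bytes"
    size = struct.unpack_from("<I", buf, off + 4)[0]
    header = buf[off + 8:off + 88]            # 80-byte raw header
    num_tx, _ = read_varint(buf, off + 88)
    return header, num_tx, off + 8 + size     # size covers header + txs
```

Scanning a whole blk*.dat file is then just calling `read_block_record` in a loop, feeding each returned offset back in.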
2074  Bitcoin / Armory / Re: Armory - Discussion Thread on: December 18, 2012, 07:53:39 PM
Has anyone given thought to a web-based armory client?

For security reasons it would only handle watch-only wallets and you'd still have to sign transactions on a desktop client. Or it could be a full implementation, at the cost of security but there'd be no need for syncing.

An ideal scenario would be a service where I import a watch only copy of my wallet, from there I can view my wallet as I would an online desktop wallet and generate offline transactions.

The bottom line is that this would be the ultimate e-wallet solution, coupled with an android/iOS offline-only armory app for signing transactions. It would offer both security (no need to trust host with your keys) and comfort (no need to sync the blockchain)

Good idea.  It's a very good idea ...

...and I secretly came up with this idea a couple weeks ago :)  Inspired by a user coming to me for help because he couldn't download the blockchain over his crappy internet connection.  He wanted to send me his watching-only wallet and have me generate the tx for him.  I've been pondering this idea for the last couple weeks, and it's something I'm keeping my eyes open for.  I was keeping it secret, and maybe it would just show up in some random Armory release one day :)

I seriously don't have any plans for this in the next one month.  But 2+ months, it's a distinct possibility...

2075  Bitcoin / Hardware wallets / Re: [ANN] Hardware wallet project on: December 18, 2012, 06:43:21 PM
Slush, I'm interested in looking at your BIP 0032 implementation. Is the source code somewhere publicly available?

Btw, I have a BIP 32 implementation in the "newwallet" branch of Armory.  It's only the crypto part -- I haven't been able to integrate it into a new wallet format yet (and thus, not usable in Armory yet).  But it includes the ChildKeyDeriv() source code, and a unit test for both HMAC-SHA512 and the ChildKeyDeriv().

The unit tests may not be entirely accurate, because I made them before sipa decided that all derivations should use compressed public keys.  But the algorithm is otherwise 98% of what is described in the BIP.
2076  Bitcoin / Development & Technical Discussion / Re: A valid criticism of Bitcoin's design? on: December 18, 2012, 02:48:20 PM
But it still wouldn't be that bad.  There are quantum-resistant algorithms that can be run on classical computers.  So it's pretty ridiculous to single out Bitcoin for being susceptible to QCs without mentioning that all sensitive communications on the internet will be susceptible.  And without mentioning that there are alternatives.

I think it is important to note with this that the quantum-resistant algorithms that have been published tend to fall rather quickly to classical cryptanalysis. In a couple decades there will probably be a few quantum-resistant public key algorithms that have passed rigorous review, but at the moment there isn't really a more secure alternative to ECDSA. At least not a more secure alternative against Shor and Grover.

Try some Unbalanced Oil & Vinegar.
2077  Bitcoin / Development & Technical Discussion / Re: A valid criticism of Bitcoin's design? on: December 18, 2012, 06:04:24 AM
Quantum computers indeed would break ECDSA, allowing private keys to be derived from public keys in polynomial time.   I suppose a mathematical breakthrough in finite field theory could do the same, but that's astoundingly unlikely.  But Bitcoin is the least of the world's problems when this happens:  the entire web of trust, SSL/PKI/CA architecture would be broken, rendering useless most internet crypto that we all rely on.  The world will have a lot of problems when QCs start to come into regular existence.  In particular, there will be a gap between when they start to become available to the wealthy and when they are ubiquitous enough for widespread quantum encryption to replace the old crypto.

But it still wouldn't be that bad.  There are quantum-resistant algorithms that can be run on classical computers.  So it's pretty ridiculous to single out Bitcoin for being susceptible to QCs without mentioning that all sensitive communications on the internet will be susceptible.  And without mentioning that there are alternatives.  

In fact, Bitcoin is wholly prepared to deal with this:

(1) By avoiding address reuse, you're 99% safe even in a world full of quantum computers -- because your address is the hash of your public key, and thus no one knows your public key until you've already submitted a signed transaction to the network.  Even in 50 years when QCs are "fast", they probably won't be fast enough to see your public key, back out your private key, sign your coins to themselves, and then broadcast that tx ... all before your transaction has propagated to a majority of nodes.  Yes yes, isolation attacks... but see #2
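The shielding idea in (1) can be shown in a few lines. This is simplified: real Bitcoin addresses use RIPEMD160(SHA256(pubkey)) plus a version byte and checksum, but plain SHA-256 stands in for the hash here:

```python
import hashlib


def address_of(pubkey: bytes) -> bytes:
    """Simplified stand-in for Bitcoin's HASH160 address derivation."""
    return hashlib.sha256(pubkey).digest()


# Until the first spend, the network only ever sees the digest; an attacker
# running Shor's algorithm against ECDSA needs the public key itself, which
# appears on the wire only inside a signed transaction.
pubkey = b"\x04" + b"\x11" * 64      # stand-in uncompressed public key
addr = address_of(pubkey)
```

Hash preimage resistance is not known to fall to quantum attacks the way the discrete log does (Grover only gives a quadratic speedup), which is why unused addresses keep their safety margin.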

(2) The Bitcoin network has the property that you can actually change stuff like crypto algorithms, hashing algorithms, etc, by incrementing a version number and encouraging all nodes to use the new [QC-resistant] algorithms.  Of course, it's not a trivial change, but the network would survive and users would be able to continue using the old encryption for a short time until the transition is made, as long as they never reuse addresses.   Especially large transactions could be submitted directly to miners to avoid isolation issues.

I would say the OP quote is simply under-informed.

P.S. -- Also, these worst case scenarios assume that QCs just pop out of nowhere and suddenly exist on everyone's desktops.  The fact is, we'll see QCs coming decades in advance... so all this "OMG need to swap crypto algorithms immediately!" stuff is overblown.
2078  Bitcoin / Armory / Re: Armory - Discussion Thread on: December 17, 2012, 05:50:32 PM
Please help test version 0.86.2-beta!  Lots of bug fixes and small improvements that should make the Armory experience smoother.  Please help test and give me feedback.

Luckily, most of the changes in this version are isolated, so there's not a lot of bugs expected.  But you never know until people start using it!  But the point was, it should be fairly stable already.




Windows 64 installer:   armory_0.86.2-beta_win64.msi
Windows 32 installer:   armory_0.86.2-beta_windows_all.msi
Ubuntu/Debian 64-bit installer:  armory_0.86.2-beta_amd64.deb
Ubuntu/Debian 32-bit installer:  armory_0.86.2-beta_i386.deb

For those compiling from source, the latest is on the "dev" branch in the git repo.  I just realized I should start a "testing" branch, and use that as a holding-cell for soon-to-be-master upgrades, and then I don't have to keep telling you guys what branch to use.  Not sure why I didn't do this sooner...


All features new to 0.86.2:

   - Added Root Key to "Backup Individual Keys"
        So you can backup your imported keys and deterministic "seed"
        from one operation, instead of two.  Key pool addresses are now
        accessible, too.

   - Right-click Ledger Options
        Added right-click menu to ledger for quick access to transaction
        and wallet information.  Also includes options for opening your
        web-browser right to tx or address information in blockchain.info.

   - Offline-Sign Confirmation & Warnings
        Offline signing now displays appropriate warnings about what users
        should verify before signing and broadcasting.

   - Added Comments to Coin Control (Expert Mode)
        Abbreviated comments are now shown in the coin control selection
        window, with full comments available via mouse-over text.

   - Bugfix: Disappearing Addresses
        Some startup operations were inadvertently "rewinding" wallets with
        unused addresses, causing those addresses to disappear from the
        address list, and then shown again when the user requested another
        address.  Resolved.

   - Bugfix: Ledger Sorting
        All fields in the primary ledger are sortable again.  Some fields
        became unsortable as a side-effect of ledger optimizations in v0.85.



2079  Bitcoin / Armory / Re: Armory - Discussion Thread on: December 17, 2012, 02:44:37 PM
Etotheipi, which Armory release should i use for a Windows XP SP3 32-bit client?

Use the "windows_all" version.  That version is actually built on WinXP 32-bit, though it works fine on Win7-64bit too.
2080  Bitcoin / Wallet software / Re: Will Android clients ever support encrypted QR codes? (for paper wallets) on: December 17, 2012, 02:48:43 AM
I have had multiple users request this in Armory.  My response is controversial, but I want to throw it out there as food for thought, and you can ignore it if you don't like it:

If you have an encrypted wallet and all your backups are encrypted as well, including encrypted paper backups -- you have a brain-wallet.  Well, not exactly a brain-wallet, but all the downsides of one.  You are at significant risk of losing your coins no matter how good you think you are.  Either you forget your encryption passphrase because you only used it once, five years earlier, when you made the backup, or you get hit by a bus and take the passphrase (and BTC) to your grave with you.  If the encryption is implemented properly, the backup will be useless without the passphrase.

I have no problem with having encrypted backups in addition to an unencrypted backup stored somewhere secure such as a safe or safe-deposit box.  But I think if the option is there, a lot of users will make 100% of their backups encrypted, and a lot of BTC will be permanently lost.
 

