Bitcoin Forum
May 24, 2024, 11:34:19 PM *
News: Latest Bitcoin Core release: 27.0 [Torrent]
 
  Show Posts
Pages: « 1 2 3 [4] 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 »
61  Bitcoin / Development & Technical Discussion / Re: A bit of criticism on how the bitcoin client does it on: May 15, 2013, 12:13:50 PM
But if I make a long bloom filter, then it should never match and so I should be only getting "merkleblock" not followed by any "tx"...
Haven't given up yet. :)

If you don't want any transactions at all, just use getheaders.
62  Bitcoin / Development & Technical Discussion / Re: A bit of criticism on how the bitcoin client does it on: May 15, 2013, 08:53:18 AM
I have been looking at BIP37 and it seems that "merkleblock" is exactly what I need in order to divide a new block's download into small chunks and then distribute the block's download among different peers, using a bunch of "getdata 1" instead of one "getdata 2".

I don't think you can perfectly partition blocks using filtered blocks, as there are several matching criteria, and one match is enough to include a transaction. So even if you send disjoint Bloom filters (nHashes=1, with a disjoint bitset for each peer), you'll still get duplicates. It is perhaps an interesting extension to BIP37 to support such partitioning, especially for larger blocks.

However, one case that is explicitly (and intentionally) supported by BIP37 is requesting blocks (full-node wise) with the transactions already known to the peer filtered out. So contrary to what Mike says, I'm pretty sure you can have a full node that uses BIP37 to fetch blocks, and save download bandwidth using that.
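To see why disjoint filters still produce duplicates, here is a toy sketch (not real BIP37: the actual protocol uses murmur3 with a tweak, and the element set and filter sizes here are invented for illustration). The key point is the matching rule: any one matching element includes the whole transaction, and each transaction is matched on several elements (its txid plus data pushes in its output scripts).

```python
import hashlib
import random

FILTER_BITS = 64

def bit_for(element: bytes) -> int:
    # Toy single-hash-function Bloom position (nHashes=1).
    # Real BIP37 uses murmur3 with a per-filter tweak; this is illustrative only.
    return int.from_bytes(hashlib.sha256(element).digest()[:4], "little") % FILTER_BITS

def matches(filter_bits: set, elements) -> bool:
    # BIP37 semantics: ONE matching element includes the whole transaction.
    return any(bit_for(e) in filter_bits for e in elements)

# Split the bit range disjointly between two peers.
peer1 = set(range(0, FILTER_BITS // 2))
peer2 = set(range(FILTER_BITS // 2, FILTER_BITS))

random.seed(1)
duplicates = 0
for _ in range(200):
    # Each tx is matched on several elements, which can land in both halves.
    elements = [random.randbytes(8) for _ in range(3)]
    if matches(peer1, elements) and matches(peer2, elements):
        duplicates += 1

print(f"{duplicates}/200 simulated txs would be sent by both peers")
```

With three matchable elements per transaction and the bit range split in half, roughly three quarters of transactions hit both filters, so this kind of partitioning alone cannot divide a block cleanly.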
63  Bitcoin / Development & Technical Discussion / Re: A bit of criticism on how the bitcoin client does it on: May 13, 2013, 08:31:48 PM
Quote
What protocol is used to actually fetch blocks is pretty much orthogonal to the logic of deciding what to fetch, and how to validate it, IMHO.
I disagree. If you are a node behind DSL, and you have a very limited upload bandwidth, you do not want to serve blocks, unless it is really necessary.
There are servers out there, connected to the fastest networks in the world - these you should use, as much as you can. Who is going to stop you?

I agree completely. But it still has nothing to do with your logic of deciding what to fetch and how to validate it. It's just using a different protocol to do it.
64  Bitcoin / Development & Technical Discussion / Re: A bit of criticism on how the bitcoin client does it on: May 13, 2013, 08:22:30 PM
Why ask for 500 blocks back?

It doesn't, as far as I know. It asks for "up to 500 blocks starting at hash X", where X is the last known block.

Quote
Quote
There is one strategy however that's pretty much accepted as the way to go, but of course someone still has to implement it, test it, ... and it's a pretty large change. The basic idea is that downloading happens in stages as well, where first only headers are fetched (using getheaders) in a process similar to how getblocks is done now, only much faster of course. However, instead of immediately fetching blocks, wait until a long chain of headers is available and verified. Then you can start fetching individual blocks from individual peers, assemble them, and validate as they are connected to the chain. The advantage is that you already know which chain to fetch blocks from, and don't need to infer that from what others tell you.
I saw getheaders and I was thinking about using it.
Now I think, if you really want to combine the data you got from getheaders with the parts of blocks acquired from your peers after they have implemented BIP37 (otherwise it won't be much faster) - then good luck with that project, man! ;)

Using Bloom filtering may not be entirely viable yet, I'll have to check. The big change is first downloading and validating headers, and then downloading and validating the blocks themselves. IMHO, it's the only way to have a sync mechanism that is fast, stable, and understandable (I have no doubt that there are other mechanisms that share two of those three properties...).

Quote
I mean, I would rather prefer baby steps - even extreme ones, like having a central server from which you can fetch a block by its hash. I mean: how expensive would that be? But how much bandwidth would it save for these poor people.. :)

What protocol is used to actually fetch blocks is pretty much orthogonal to the logic of deciding what to fetch, and how to validate it, IMHO.
65  Bitcoin / Development & Technical Discussion / Re: A bit of criticism on how the bitcoin client does it on: May 13, 2013, 07:49:39 PM
1.
Maybe I have not verified it well enough, but I have the impression that the original client, whenever it sends "getblocks", always asks as deep as possible - why do this? It just creates unnecessary traffic. You could surely optimize it, without much effort.

Not sure what you mean by "as deep as possible". We always send getblocks starting at whatever block we already know. The reason for starting from early blocks and moving forward is that validation is done in stages, and at each point as much as possible is already validated (mostly as a means to prevent DoS attacks). As most checks can only be done when you have the entire chain of blocks from genesis to the one being verified, you need them more or less in order.

Quote
2.
IMO, you should also optimize the way you do "getdata".
Don't just send getdata for all the blocks that you don't know to all the peers at the same time - it's crazy.
Instead, try to e.g. ask each node for a different block - one at a time, until you collect them all...

That's not true: we only ask for each block once (and retry after a timeout), but it is all done from a single peer (not from all, and not balanced across nodes). That's a known weakness, but changing it isn't trivial, because of how validation is done.

There is one strategy however that's pretty much accepted as the way to go, but of course someone still has to implement it, test it, ... and it's a pretty large change. The basic idea is that downloading happens in stages as well, where first only headers are fetched (using getheaders) in a process similar to how getblocks is done now, only much faster of course. However, instead of immediately fetching blocks, wait until a long chain of headers is available and verified. Then you can start fetching individual blocks from individual peers, assemble them, and validate as they are connected to the chain. The advantage is that you already know which chain to fetch blocks from, and don't need to infer that from what others tell you.
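The two stages described above can be sketched in a small simulation. This is not the reference client's code; it is a toy model (hash-linked "headers", placeholder block data) just to show the ordering: link-check the whole header chain first, then accept blocks from peers in any order while connecting and validating them strictly in chain order.

```python
from hashlib import sha256
import random

def header_hash(prev_hash: bytes, height: int) -> bytes:
    # Toy 'header': just enough to link a chain. Real headers are 80 bytes
    # and carry proof-of-work, which is what makes stage 1 cheap to verify.
    return sha256(prev_hash + height.to_bytes(4, "little")).digest()

# Build a simulated best chain of 100 headers: (prev_hash, own_hash) pairs.
GENESIS = b"\x00" * 32
headers = []
prev = GENESIS
for h in range(100):
    hh = header_hash(prev, h)
    headers.append((prev, hh))
    prev = hh

# Stage 1: verify the whole header chain links up before fetching any block.
for i in range(1, len(headers)):
    assert headers[i][0] == headers[i - 1][1], "header chain broken"

# Stage 2: blocks arrive out of order from many peers; stage them, and
# connect (i.e. fully validate) strictly in chain order as gaps fill in.
arrival_order = list(range(100))
random.shuffle(arrival_order)

staged = {}            # height -> block data waiting to be connected
connected_height = -1
for h in arrival_order:
    staged[h] = f"block-{h}"               # placeholder for downloaded block
    while connected_height + 1 in staged:  # connect as far as possible, in order
        connected_height += 1
        del staged[connected_height]

print("connected up to height", connected_height)
```

The advantage shown here is exactly the one in the text: because the header chain is already known and verified, the blocks themselves can be fetched from different peers in parallel and in any order, without inferring the best chain from whoever happens to respond.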

Quote
3.
Last, but not least.
Forgive me the sarcasm, but I really don't know what all the people in the Bitcoin Foundation have been doing for the past years, besides suing each other and increasing transaction fees ;)

The Bitcoin Foundation has only been around for a year or so, and it doesn't control development. It pays Gavin's salary, but the other developers are volunteers who work on Bitcoin in their free time.

Quote
We all know that the current bitcoin protocol does not scale - so what has been done to improve it?
Nothing!
The blocks are getting bigger, but there have been no improvements to the protocol, whatsoever.
You cannot, for example, ask a peer for a part of a block - you just need to download the whole 1 MB of it from a single IP.

BIP37 actually introduced a way to fetch parts of blocks, and it can be used to fetch a block with just the transactions you haven't heard about, avoiding resending those that were already transferred as separate transactions. (I don't know of any software that uses this mechanism of block fetching yet; once BIP37 is available on more nodes, I expect it will be.) Any other system that requires negotiating which transactions to send adds latency to block propagation time, so it's not necessarily the improvement you want.

Quote
Moreover, each block has an exact hash, so it is just stupid that in times when even the MtGox API goes through CloudFlare to save bandwidth, there is no solution that would allow a node to just download a block from an HTTP server, and so it is forced to leech it from the poor, mostly DSL-armed, peers.
The way I see it, the solution would be very simple: every mining pool can easily use CloudFlare (or any other cloud service) to serve blocks via HTTP.
So if my node says "getdata ...", I do not necessarily mean that I want to have this megabyte of data from the poor node and its thin DSL connection. I would be more than happy to just get a URL, where I can download the block from - it surely would be faster, and would not drain the peer's uplink bandwidth that much.

There are many ideas about how to improve historic block download. I've been arguing for a separation between archive storage and fresh block relaying, so nodes could be fully-verifying active nodes on the network without being required to provide every ancient block to anyone who asks. Regarding moving to other protocols, there is the bootstrap.dat torrent, and there's recently been talk about other mechanisms on the bitcoin-development mailing list.
66  Economy / Service Discussion / Re: how does blockchain.info get the originating IP? on: May 12, 2013, 09:29:54 PM
They don't know the originating IP; only the IP that first relayed it to them.
67  Bitcoin / Development & Technical Discussion / Re: Deterministic wallets on: May 12, 2013, 06:38:13 PM
1. Can we reserve more than one bit of the number i for different kinds of derivation, not only the highest bit? This would allow adding other kinds of derivations, or tweaks of existing ones, in the future.

Adding more flag bits is certainly possible, as we can extend the length of the serialized i value in that case; the only thing that needs to change is the serialization format.

Quote
5. It is certainly possible to do everything with HMAC-SHA256 alone. For instance, if you need two values like (I_L,I_R) you can do I_L=HMAC(secret,0) and I_R=HMAC(secret,1). The question is, does it reduce dependencies of the code, code review, etc. enough to be worthwhile?

All this might get implemented on smartcards one day. I really don't understand why you'd want to rely on a new, second hash function here, SHA512. Entropy is _not_ a reason.

I like the simplicity of a single existing construct that operates natively on 512 bits. Yes, separate SHA256 calls would work too, but are less elegant IMHO. I don't mean to say it would be less secure, and this is just bike shedding.

If smartcards have a problem with SHA512, they'll have a huge problem with ECDSA.
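For concreteness, here is a sketch of the two constructions being compared: one HMAC-SHA512 call split into (I_L, I_R), versus two domain-separated HMAC-SHA256 calls. The exact input encoding for the two-call variant (a trailing 0x00/0x01 byte) is an assumption for illustration, not anything specified in the thread.

```python
import hmac
import hashlib

def split_512(key: bytes, data: bytes):
    # Single HMAC-SHA512 call, split into (I_L, I_R) as BIP32 does.
    I = hmac.new(key, data, hashlib.sha512).digest()
    return I[:32], I[32:]

def split_2x256(key: bytes, data: bytes):
    # The HMAC-SHA256-only alternative: two domain-separated calls.
    # (Appending 0x00/0x01 is a hypothetical encoding for this sketch.)
    i_l = hmac.new(key, data + b"\x00", hashlib.sha256).digest()
    i_r = hmac.new(key, data + b"\x01", hashlib.sha256).digest()
    return i_l, i_r

for fn in (split_512, split_2x256):
    il, ir = fn(b"chaincode", b"key material")
    assert len(il) == 32 and len(ir) == 32
```

Both yield two independent-looking 256-bit values; the difference is purely one of dependencies and elegance, as discussed above.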

Quote
And third, can you change
Code:
I = HMAC( cpar, Kpar || i)
in the public derivation to
Code:
I = HMAC( HMAC(cpar,i),  Kpar)
? I see the provability of the link as quite an important property (cf. #228).

BTW, the function I will propose for payer-derived payment addresses (which is basically just a deterministic wallet of the payee in the hands of the payer) will be
Code:
K_derived := HMAC( m, K_base) *G + K_base
where m is the (hash of the) invoice or contract that is being paid for.
This would fit nicely together, as it would be the exact same function just with m substituted for HMAC(cpar,i). We could re-use code as well as security reasoning.

That provability argument is interesting, but can you come up with use cases? I'm also interested in knowing the complexity of iddo's zero-knowledge proofs that could avoid this.
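The private-key side of the proposed K_derived := HMAC(m, K_base)*G + K_base can be sketched without any elliptic-curve code, since adding t*G to a public key corresponds to adding t (mod n) to the private key. The hash function (SHA256 here), the scalar encoding, and the placeholder pubkey bytes are all assumptions made for this sketch; the proposal above does not pin them down.

```python
import hmac
import hashlib

# secp256k1 group order
N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141

def tweak(m: bytes, K_base: bytes) -> int:
    # HMAC(m, K_base) read as a scalar mod N. Hash choice and encoding
    # are illustrative assumptions, not part of the proposal's text.
    return int.from_bytes(hmac.new(m, K_base, hashlib.sha256).digest(), "big") % N

def derive_priv(k_base: int, m: bytes, K_base: bytes) -> int:
    # Private-key counterpart of K_derived := HMAC(m, K_base)*G + K_base:
    # adding t*G on the public side is adding t mod N on the private side.
    return (k_base + tweak(m, K_base)) % N

invoice = hashlib.sha256(b"contract text").digest()  # m = hash of the contract
K_base = b"\x02" + b"\x11" * 32                      # placeholder pubkey bytes
k_child = derive_priv(0xDEADBEEF, invoice, K_base)
```

The provability property follows from the structure: anyone holding m and K_base can recompute the tweak and hence verify that K_derived is linked to K_base, without learning k_base.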
68  Bitcoin / Development & Technical Discussion / Re: Initial replace-by-fee implementation is now available on testnet on: May 12, 2013, 11:02:24 AM
I'm in the middle about this.

On the one hand, I do understand the reason for a change like this. If miners become (short-term) selfish/rational, and start implementing behaviour like the one implemented by this patch, I will be completely in favor of making it the default - as anything else would indeed just be a false sense of security. It would remove the "best-effort attempt" by the network to secure 0-conf transactions, and force people to find real solutions for fast transactions, instead of relying on behaviour that cannot be guaranteed. And perhaps that is inevitable anyway. Looking at Bitcoin as an experiment, it's so much more interesting to aim for a system that doesn't require specific parties to act (especially non-rationally) in the best interest of the community anyway...

On the other hand, it's not true that doing double-spending 0-conf transactions is trivial now. Not particularly hard, but not as easy as whistling in a telephone either. So in that sense, the network right now does provide some 0-conf security. Not much, and nothing guaranteed, but it works, and there is economic infrastructure that relies on it. I don't think they're even wrong about relying on it, if they're willing to eat the costs from damage caused by it (on the other hand, perhaps few can actually estimate the risk correctly, and will get hurt when their business becomes profitable enough to attack...). Anyway, what I'm getting at is that this is a drastic change if we'd put it in mainline, and drastic changes may be more damaging than the actual problem we're solving. Perhaps we need time to outgrow relying on 0-conf security.

So: I don't know. I like the idea of aligning miners' short-term self-interest (even if they don't exploit it right now) with the policy implemented by the reference client. But perhaps not immediately...
69  Bitcoin / Armory / Re: [BOUNTY: 2.0 BTC] [CLAIMED] Message Signing in Armory on: May 12, 2013, 09:47:02 AM
Just as an FYI: the C secp256k1 library I'm writing has support for creating recoverable signatures, and for doing key recovery (though not the - admittedly weird - serialization that Bitcoin uses for it).
70  Bitcoin / Development & Technical Discussion / Re: Deterministic wallets on: May 12, 2013, 09:41:27 AM
I still haven't seen a compelling argument for being able to change the chaincode independently of the keys. As far as I'm concerned, chaincodes constitute a somewhat-less-secret overflow entropy buffer for keys, and are thus intimately linked with them.

Yes, I understand there are use cases, but they seem far-fetched and either assume it is hard to set up a secure communication channel between trusted parties (which I think in practice is easy), or integrating them more tightly into multisig schemes (which I don't like; scripts/addresses are a layer strictly above key derivation, imho; in that sense I also dislike using the term "address" for identifying keys - it should only be called address when you're talking about the script template associated with a particular key, with the intention of receiving coins on it).

Maybe at some point there will be a use case important enough, and we can still standardize a system for "updating" a BIP32 chain with a new chaincode, though I don't like the complexity of reasoning about the security of that.

Unless someone comes up with a use case that is worth supporting (and hopefully doesn't require pretty much rewriting it from scratch...), I'm going to announce the current specification as final at the conference next week.
71  Bitcoin / Development & Technical Discussion / Re: 0.8.2rc1 ready for testing on: May 10, 2013, 11:31:31 PM
When I start it up there is a message that says not to be used for mining. Can someone tell me why?

Non-release builds carry that warning (and this is just a release candidate).
72  Bitcoin / Development & Technical Discussion / Re: Deterministic wallets on: May 10, 2013, 11:30:41 PM
- get the basescript with GetCScript(id)

Where does the CScript come from, if you don't know the other keys in the first place? P2SH transactions do not contain the actual (public) keys anyway, so whatever you do, you need some metadata about the P2SH script in your wallet. I don't see the problem with extending this with the individual BIP32 public parent keys (if you need to be able to access the entire chain of multisig addresses in the first place).
73  Bitcoin / Development & Technical Discussion / Re: Deterministic wallets on: May 10, 2013, 01:37:29 PM
I imagined constructing multisig addresses of a set of (potentially completely independent) BIP32 chains, using equal indices in each, in lockstep.

In my opinion, BIP32 is mostly about generating sequences of keys, which can either be used directly as pay-to-pubkeyhash addresses, or combined using multisig into a P2SH address. Isn't that both simpler than trying to put BIP32 on top of multisig, and a reason not to need independent chaincodes?
74  Bitcoin / Development & Technical Discussion / Re: Concerns regarding deterministic wallet on: May 09, 2013, 11:23:46 PM
Consider that the entire bitcoin network, over the course of the last 4.5 years, has "only" produced about 2^69 hashes.

Tomorrow we're crossing 2^70, by the way ;)
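As a purely illustrative back-of-envelope check (the 4.5-year figure is taken from the quote above, not measured), 2^69 hashes over that period works out to an average network rate on the order of a few TH/s:

```python
# 2^69 total hashes spread over ~4.5 years; illustrative arithmetic only.
total_hashes = 2 ** 69
seconds = 4.5 * 365.25 * 24 * 3600   # ~4.5 years in seconds

avg_rate = total_hashes / seconds
print(f"2^69 hashes = {total_hashes:.3e}")
print(f"average rate = {avg_rate:.2e} H/s (~{avg_rate / 1e12:.1f} TH/s)")
```

Of course the real hashrate was nowhere near constant; almost all of those hashes were produced near the end of the period.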
75  Bitcoin / Development & Technical Discussion / Re: Concerns regarding deterministic wallet on: May 09, 2013, 11:05:55 PM
If an attacker needs to guess a key, there is nothing to worry about. The keyspace is way too large for that.

If an attacker has access to your wallet/backup/passphrase/... in a way that grants him access to one of the keys, he very likely has access to all keys.

There is one small security difference between deterministic and randomly-generated wallet keys: if someone manages to copy the keys from the latter, he cannot wait (long) before stealing, as the coins tend to move to newer addresses (i.e., the copied keys become useless over time).

Also, there are plans to implement deterministic wallets for the reference client too, as the advantages for backup safety far outweigh the security risks.
76  Bitcoin / Development & Technical Discussion / Re: Bootstrapping the pruned blockchain on: May 08, 2013, 11:07:32 PM
The validity of bootstrap can easily be checked by checking the hash (which has to be hardcoded into the software) - the way current (full) blockchain bootstrap works.

This is nonsense. The bootstrap isn't validated in any way - it's just a way to get blocks to your client, and the blocks are verified before accepting them.
77  Bitcoin / Development & Technical Discussion / Re: Bootstrapping the pruned blockchain on: May 08, 2013, 11:00:59 PM
Bitcoin - at least as implemented by the reference client today - is a zero-trust system: every rule that can be validated is validated by your local node upon receiving data from other nodes. If you want to apply this principle to history, it means you have to see history. There is no way around it: bootstrapping a full node requires either feeding it the entire history, or trusting someone else to give you the current state that resulted from that history.

That does not mean everyone needs to keep the entire chain around, it just has to be available enough so that new nodes can get it (once).

There is an alternative with almost as good security properties, namely if we could somehow encode a hash of the current state inside blocks (in the coinbase, for example). That would mean someone bringing up a new node could just download the historic headers, verify the proof-of-work in them, download the current state from somewhere, and verify it against the latest block. This would allow him to only require "SPV-level" trust in the past (maybe up to some point in the past he considers safe), and then continue full validation from there.
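The core of that idea can be sketched with a naive commitment: hash the UTXO set in a canonical order, so any node holding the same state computes the same digest. This is a toy model (a real design would likely use an authenticated tree so the commitment can be updated incrementally per block, rather than rehashing everything).

```python
import hashlib

def utxo_commitment(utxo_set: dict) -> bytes:
    # Hash the UTXO set in a canonical (sorted) order so every node that
    # holds the same state computes the same commitment.
    h = hashlib.sha256()
    for outpoint in sorted(utxo_set):
        h.update(outpoint + utxo_set[outpoint])
    return h.digest()

# Miner side: commit to the current state (e.g. in the coinbase).
state = {b"txid1:0": b"50BTC->A", b"txid2:1": b"25BTC->B"}
committed = utxo_commitment(state)

# New node: verify the headers' PoW, download the state from anywhere
# (it needs no trust in the source), and check it against the commitment.
downloaded = dict(state)
assert utxo_commitment(downloaded) == committed

# A tampered state is detected and rejected.
downloaded[b"txid2:1"] = b"9999BTC->Eve"
assert utxo_commitment(downloaded) != committed
```

Because the commitment is anchored in the proof-of-work chain, only the header history needs SPV-level trust; everything after the snapshot is fully validated.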
78  Bitcoin / Development & Technical Discussion / Re: Deterministic wallets on: May 08, 2013, 10:26:04 PM
The reason for specifying that I_L >= n isn't allowed is indeed to guarantee uniformity. However, as this has such a ridiculously small chance to ever occur, I don't want whatever rule is used to deal with it to be complex to implement.

Also, the reason for using the same construction (k_i = k_par + I_L) for type-1, is to 1) make the code to implement it more uniform 2) not give a performance advantage to type-1. I agree that strictly from a security point of view k_i = I_L is just as good.
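The rules above (reject I_L >= n, and k_i = k_par + I_L for both derivation types) can be sketched as follows. This is a simplified illustration, not the final BIP32 serialization: the HMAC input here abbreviates the real encoding (which feeds point(k_par) or 0x00||k_par depending on the derivation type).

```python
import hmac
import hashlib

# secp256k1 group order n
N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141

def ckd_priv(k_par: int, c_par: bytes, i: int):
    # Simplified private child derivation: HMAC input encoding is an
    # assumption for this sketch, not the spec's exact serialization.
    data = k_par.to_bytes(32, "big") + i.to_bytes(4, "big")
    I = hmac.new(c_par, data, hashlib.sha512).digest()
    I_L = int.from_bytes(I[:32], "big")
    if I_L >= N or (k_par + I_L) % N == 0:
        # The uniformity rule discussed above. The chance is ~2^-127,
        # so "skip this index" is a simple, acceptable reaction.
        raise ValueError("invalid index, use the next one")
    return (k_par + I_L) % N, I[32:]   # (child key, child chaincode)

child, child_cc = ckd_priv(12345, b"\x01" * 32, 0)
assert 0 < child < N and len(child_cc) == 32
```

Using the same additive construction for both types keeps the implementation uniform and gives neither type a performance edge, as noted above.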
79  Bitcoin / Development & Technical Discussion / Re: Listunspent error on: May 04, 2013, 12:14:35 AM
Listunspent fetches from the wallet, not from the (global) UTXO set.
80  Bitcoin / Development & Technical Discussion / Re: Deterministic wallets on: May 03, 2013, 05:10:13 PM
Hey iddo & thanke,

I love how thoroughly you've been discussing this, though I must admit I haven't read through the entire discussion.

I've updated the specification to use addition instead of multiplication, but now I'd like to make the BIP32 specification final soon.

So my questions are:
  • Is there a use case to allow updating keys without updating chain codes, that's worth breaking the current spec for?
  • Is there a reason to disallow secret derivation after public derivation?

If not, I'd like the current version to be final.

Grau: The thread has progressed quite a bit since your comment, but I agree it makes sense to have a set of "guidelines for wallet behaviour" that will make working with these wallets easier, though in no way required.

