Bitcoin Forum
4261  Bitcoin / Project Development / Re: Namecoin Flaws & Improvements Tip Bounty on: July 21, 2013, 11:36:24 PM
Uh. How about: Not possible to build a secure (zero trust) lite resolver.  To safely and correctly resolve namecoin names you need to run a full namecoin node.

It's solvable but it needs at least a soft-forking block validity rule to fix.

I described the starting point of how to solve this years ago; it has since evolved into the various committed-UTXO proposals, though there's not been a whole lot of actual code.
4262  Bitcoin / Development & Technical Discussion / Re: Transaction fee in Bitcoin-Qt not saved on: July 21, 2013, 11:21:16 PM
You don't happen to have a bitcoin.conf with txfee=0 in it, do you?
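For anyone else landing on this thread, here's what to check, assuming the default data directory (~/.bitcoin/ on Linux, %APPDATA%\Bitcoin\ on Windows, ~/Library/Application Support/Bitcoin/ on OS X):

Code:
# bitcoin.conf -- options here are read at startup, so a line like this
# could explain a fee setting that never seems to stick in the GUI
txfee=0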
4263  Bitcoin / Development & Technical Discussion / Re: NETWORK FREEZE bitcoin difficulty stuck high with a large hashrate drop on: July 21, 2013, 11:10:57 PM
In the case of a network partition we actually _want_ blocks to stop or go very slowly. Otherwise a large amount of partitioned hashpower we're not currently aware of could show back up and wipe out the recent history, reversing recent transactions on the losing side and leaving everyone who transacted in the span vulnerable to reversals.

This is basically the same risk as an attacker who mines a chain down to trick isolated nodes, but happening naturally without an attacker.
4264  Bitcoin / Development & Technical Discussion / Re: Reasons to keep 10 min target blocktime? on: July 21, 2013, 08:47:54 PM
You know this has been discussed many times before. It would really be best if you'd spend some more time studying those discussions rather than starting a new thread and externalizing the studying cost…

You seem to know the primary arguments against it, but I'll repeat the ones I think are most interesting:

(1) Orphaning rate depends on the block time relative to the communication & validation delay (formula given in the link; a first-order sketch is at the end of this post). In the limit as the block time goes to zero the network stops converging and typical reorganizations tend toward infinite length. The actual delays depend on network topography and block size. And as an aside— in the past we've seen global convergence times on Bitcoin exceed two minutes; although software performance has improved since then, there doesn't seem to be a ton of headroom before convergence failures would become likely in practice, and fast convergence is certainly harder with larger blocks.

(1a) There have been altcoins whose creators didn't understand this and set their block times stupidly low, and they suffered pretty much instant convergence failure (e.g. liquidcoin). There are others that may start failing if they ever get enough transaction volume that validation actually takes a bit of time.

(2) The computational/bandwidth/storage cost of running an SPV node, querying a remote computation oracle for signing, or presenting a Bitcoin proof in a non-Bitcoin chain is almost entirely due to the header rate. Going to 5 minutes, for example, would double these costs (see the arithmetic at the end of this post). Increasing costs for the most cost-sensitive usages is not very attractive.

(3) With the exception of 1-confirmation transactions, once you are slow enough that orphaning isn't a major consideration there is no real security difference that depends on the particular rate. For moderate-length attacks the sum of computation matters and how you dice it up doesn't matter much. One-confirmation security— however— isn't particularly secure.

(3a)  If there is actually a demand for fast low security evidence of mining effort,  you can achieve that simply by having miners publish shares like P2Pool does. You could then look at this data and estimate how much of the network hashrate is attempting to include the transaction you're interested in.  This doesn't, however, create the orphaning/convergence problems of (1) or the bandwidth/storage impact on disinterested nodes of (2).

(3b) Because mining is a stochastic lottery confirmations can take a rather long time even when the mean is small. Few things which you can describe as "needing" a 2 minute mean would actually still be happy with it taking 5 times that sometimes. Those applications simply need to use other mechanisms than global consensus as their primary mechanism.

(4) While you can debate the fine details of the parameters— perhaps 20 minutes or 5 minutes would have been wiser— because of the above none of the arguments are all that compelling.  Changing this parameter would require the consent of all of the surviving Bitcoin users; absent a really compelling argument, it simply isn't going to happen.

If you'd like to explore these ideas just as an intellectual novelty,  Amiller's ideas about merging in evidence of orphaned blocks to target an orphaning rate instead of a time are probably the most interesting—  the problem then becomes things like how to prevent cliques of fast miners self-centralizing against further away groups who can't keep up, and producing proofs for SPV clients which are succinct in the face of potentially quite fast blocks.
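Putting rough numbers on (1) and (2), as promised above (a back-of-envelope sketch of my own, not the exact formula from the link): if blocks arrive as a Poisson process with mean interval T and a block takes \tau to propagate and validate, the fraction of work lost to orphans is roughly

p_{orphan} \approx 1 - e^{-\tau/T} \approx \tau/T \quad (\tau \ll T),

which blows up as T shrinks toward \tau. For the header cost in (2): at an 80-byte header every 10 minutes an SPV client downloads about 80 \times 52560 \approx 4.2 MB of headers per year; a 5-minute target doubles that to roughly 8.4 MB.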
4265  Bitcoin / Mining speculation / Re: ASIC resale value on: July 21, 2013, 03:49:53 PM
There are many, many potential ways to use proof of work to secure protocols: fighting spam, DoS attacks, etc. Any of these things could, if they wanted, use the Bitcoin POW function.

But few of them have been adopted— one reason proof of work (e.g. hashcash) hasn't been adopted for these things is that most of the time the enemy has a botnet, and botnets are even better at PoW than desktop computers (e.g. the desktop user pays for power, the attacker does not).   But with a mining ASIC this isn't so obviously so, at least for now.

So, if people want, they could go out and develop alternative uses for mining hardware. If some take off, it may create a useful secondary market.
4266  Bitcoin / Pools / Re: I got ripped off. My Bitcoins at EMC got Stollen on: July 21, 2013, 01:28:03 PM
Sounds like your account was hacked. If you were using the same password anywhere else you should make sure you change it right away.
4267  Bitcoin / Development & Technical Discussion / Re: NETWORK FREEZE bitcoin difficulty stuck high with a large hashrate drop on: July 20, 2013, 04:29:22 PM
Quote
If two of the largest miners of bitcoin went offline, the entire bitcoin economy would be at a standstill for at least several days.

Blocks taking 2-3x longer than nominal is hardly "at a standstill". If there is so much hashpower consolidation that this would be a severe concern then our security model would have already failed— it's not like you need majority hashpower to make moderate reorganizations and transaction reversals with some success.
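To put numbers on it (a back-of-envelope sketch, ignoring the clamping and off-by-one details of the actual retarget code): if a fraction x of the hashpower vanishes right after a retarget, the mean block interval becomes

T' = \frac{10\ \mathrm{min}}{1 - x}, \qquad t_{next\ retarget} \approx 2016 \cdot T'.

Losing half the hashpower (x = 0.5) means 20-minute blocks and roughly four weeks until the next adjustment: slow and annoying, but not a standstill.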

Quote
This is less likely with bitcoin today, but it is a real vulnerability.
I don't think it's clear that it is. Being just a small multiple slower would be annoying, but if it left us short on capacity, competition for space would attract larger transaction fees, which would make mining more profitable, which would draw in more hashrate.

Quote
What can we do about that? Terracoin has fast changing difficulty
And ended up with exploitable vulnerabilities multiple times as a result of their tinkering there.  Continuous difficulty adjustment halves the cost for an attacker to mine down a fork for use in isolation attacks.

Presumably if the network were ever stranded— and people somehow still cared about Bitcoin at all (it's not clear to me how those two things could both be true)— then it wouldn't be too difficult to do a single-point hardfork to step the difficulty back down. Considering that, I think this is not worth worrying about.
4268  Bitcoin / Development & Technical Discussion / Re: Quiz: Are you a Satoshi client guru developer? on: July 19, 2013, 10:18:59 PM
BTW: The code in serialize.h states, regarding "Variable-length integers": "Every integer has exactly one encoding".
This is mistaken.
Fun list!  Though on this point you're incorrect, or at least it's debatable. The code in question is generic and doesn't only work on fixed-length types. The encoding is non-redundant, but the current code doesn't bother to prevent overflow, and I believe this is actually an oversight. Perhaps it should be changed to take a maximum size so that data on the range 0-255 could be encoded without overhead.  This encoding is used for the ultraprune databases, not external IO.

Why another format?  There are 6676408 TX outs: increasing each one by only 1 byte would increase the working set size by 3%. Since this serialization is used only internally there aren't a lot of ecosystem costs from it... but the space savings matter. Be glad we don't have a full range coder in there. Tongue
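For the curious, here is roughly the scheme in question, sketched from memory and simplified (the real code in serialize.h is templated and streams bytes, so don't treat this as a verbatim copy): MSB-first base-128, with the high bit marking "more bytes follow" and one subtracted from every digit except the last, which is what makes the encoding one-to-one.

Code:
#include <stdint.h>
#include <stddef.h>

/* Encode n as MSB-first base-128.  Every byte except the last has its high
   bit set, and one is subtracted from each digit that has a continuation;
   this removes the redundant encodings a plain base-128 scheme would have
   (e.g. 0x80 0x00 vs 0x00 for zero).  Returns the number of bytes written. */
size_t write_varint(uint8_t *buf, uint64_t n) {
    uint8_t tmp[10];
    int len = 0;
    while (1) {
        tmp[len] = (n & 0x7F) | (len ? 0x80 : 0x00);
        if (n <= 0x7F) break;
        n = (n >> 7) - 1;
        len++;
    }
    size_t written = 0;
    do { buf[written++] = tmp[len]; } while (len--);
    return written;
}

/* Inverse of the above.  Note that, like the code being discussed, nothing
   here stops a malformed stream from overflowing the destination type. */
uint64_t read_varint(const uint8_t *buf, size_t *pos) {
    uint64_t n = 0;
    while (1) {
        uint8_t b = buf[(*pos)++];
        n = (n << 7) | (b & 0x7F);
        if (b & 0x80) n++; else return n;
    }
}

With this scheme 0x00-0x7F take one byte, 0x80-0x407F take two, and so on.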
4269  Bitcoin / Development & Technical Discussion / Re: Zerocoin when? on: July 19, 2013, 05:17:51 PM
The zerocoin part does more than defend against DOS, doesn't it? It also provides a degree of anonymity, if I understand it. In the conventional multi-party anti-taint protocol, every participant knows the mapping from inputs to outputs. But in your improved protocol using libzerocoin, nobody sees the mapping. Now, this requires more than two participants, so considerable organization is needed to coordinate.

Still, this is an application of the zerocoin protocol which doesn't have an impact on the blockchain. OTOH, it has a small anonymity set, so the benefit is rather modest.
Indeed. Although there are simpler ways to hide the connection, e.g. Tor plus blind signatures: parties provide inputs, get a blind signature from all the other parties, reconnect, and expose their blind-signed tokens to get into the output list— but this leaves open a DOS attack without an even more complicated protocol.  Using ZC solves both the connection problem and gives you anti-DOS, which blind signatures by themselves don't provide.

I was working under a (handwave handwave) assumption that the parties would meet over Tor, Bitmessage, or some other anonymity-preserving transport.  Practically speaking, a direct usage of Zerocoin requires something similar.

I'm not sure about the anonymity set impact, it's a bit hard to reason about. One of the scaling arguments for ZC is that you could use it infrequently for a fairly small set of high value transactions. This has an impact on the anonymity set too.  Because throughput isn't very limited in the joint-transaction case, and because it could potentially piggyback on regular transactions (E.g. I want to donate to Foo, but instead of donating directly I do it via a mix transaction), it should be possible to cascade many stages of mixing and increase the anonymity set size.
4270  Bitcoin / Development & Technical Discussion / Re: A question on ECDSA signing (more efficient tx signing)? on: July 19, 2013, 03:54:22 AM
Interesting.  I need to look into that.   So when the original message & signature are known one can reconstruct the public key.  So why does Bitcoin include the public key in the tx input?
Because when Bitcoin was written Satoshi didn't know this. It's also somewhat slower to do it that way, though the space saving is great enough that I believe it's a pretty clear win.  Also, your composed key stuff requires accumulation of the ECC points, so it's not entirely free either.

Quote
Makes smaller 1 input tx but larger multi-input tx compared to composite key but is still always smaller than "Bitcoin today".
One question: What do you mean by "two disambiguation bits"?
Quite literally: two bits to disambiguate the multiple possible public keys. The validator could perform the recovery multiple times but that would be quite slow. It's needed for the same reason a compressed public key needs an extra bit to encode the 'sign' of the y coordinate.

Right, it wouldn't be as small as the composed keys but it also wouldn't have the other downsides: privileging ECDSA, making common ownership deanonymization attacks more powerful, goofing up sighash single, breaking the independence of inputs.

If you were willing to do the composed-key signing— you could instead have just used one address, or some address type that used a common ecdsa public key plus a sequence number (e.g. to disambiguate payments).  Then you'd just have an optimization that could aggregate multiple signings under the same key in a transaction into a single one.  ... though I'm also not keen on privileging address reuse.
4271  Bitcoin / Development & Technical Discussion / Re: A question on ECDSA signing (more efficient tx signing)? on: July 19, 2013, 03:14:25 AM
Maybe we are speaking of different things but doesn't ECDSA allow creating a composite key by simply adding the two private keys together.
Yes, I was speaking of something different there: batch verification.  I did go on to talk about creating a third key from two others though, which can be done but suffers some of the limitations I mentioned.
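For concreteness, the relationship in question in the usual notation (generator G, group order n):

d_3 = (d_1 + d_2) \bmod n, \qquad Q_3 = d_3 G = Q_1 + Q_2.

Anyone can compute the composite public key from the two public keys alone, but signing under Q_3 requires knowing d_3, which neither party holds by itself; that's part of why mutually distrusting parties can't just jointly sign in that model without extra machinery.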

Quote
Is there any security risk to a format like this?  Any reduction in key strength?  I can't seem to find anything that would indicate it is the case but this really isn't my area of expertise.
I'm not aware of any security harm from doing it.

Quote
Interesting.  Can you provide information or reference on public key recovery?
If you have a signature, the message it signed, and two disambiguation bits you can recover the public key used. This saves you from having to transmit the public keys.  We use this today for bitcoin signmessage.
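For reference, the recovery operation is standard ECDSA math rather than anything Bitcoin-specific: given the message hash z and the signature (r, s), pick the candidate point R whose x-coordinate matches r (the two extra bits select among the possibilities: the sign of R's y coordinate, and whether the x coordinate was r or r + n), then compute

Q = r^{-1} (s R - z G).

Any valid-looking (r, s) recovers *some* key this way, which is why the recovered key still has to be checked against something (the address hash, in signmessage's case) before it means anything.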


Quote
I read this a couple of times and still couldn't conceptualize how hash tree in transaction would add security.  I bookmarked it though.
It would not _add_ security; what it would permit is more flexible security / computation / bandwidth tradeoffs without compromising security.

A simple example:  Say you are an SPV node. A full node tells you about a transaction paying you which is in a block.  You really don't give a @#$@@ about the _whole_ transaction; what you really want is proof that a particular txout is in that block.  Today the full node must send you the full transaction— along with all its potentially bloated scriptsigs, which you can't verify (lacking the inputs) and don't care about, and all the other outputs that don't pay you.  If the transaction were internally tree-structured you could request only the data of interest and still receive proof that that data was committed in the block in question.
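To make the "proof that the data was committed" part concrete, here's roughly what checking a hash-tree branch looks like. This is a sketch only: sha256d stands in for Bitcoin's double-SHA256, and the same idea applies whether the tree is the block's transaction tree or a hypothetical tree inside a transaction.

Code:
#include <stdint.h>
#include <string.h>

/* Stand-in for double-SHA256: out = SHA256(SHA256(in)). */
void sha256d(uint8_t out[32], const uint8_t *in, size_t len);

/* Given the hash of the element we care about, the sibling hashes along the
   path to the root, and the element's index (whose bits say whether we are
   the left or right child at each level), recompute the root.  The caller
   then compares the result against the committed root. */
void merkle_branch_root(uint8_t root[32], const uint8_t leaf[32],
                        const uint8_t (*branch)[32], size_t depth,
                        uint32_t index) {
    uint8_t h[32], cat[64];
    memcpy(h, leaf, 32);
    for (size_t i = 0; i < depth; i++) {
        if (index & 1) {            /* we are the right child */
            memcpy(cat, branch[i], 32);
            memcpy(cat + 32, h, 32);
        } else {                    /* we are the left child */
            memcpy(cat, h, 32);
            memcpy(cat + 32, branch[i], 32);
        }
        sha256d(h, cat, 64);
        index >>= 1;
    }
    memcpy(root, h, 32);
}

The proof is just the sibling hashes along the path, so its size grows with the log of the number of leaves rather than with the size of all the data you don't care about.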
4272  Bitcoin / Development & Technical Discussion / Re: New Attack Vector on: July 18, 2013, 06:29:50 AM
An overestimate of log2 is easy.  It doesn't need two parameters...
He went and found the origin of the code; these particular lines are not a log2 (obviously) but they're used as part of an integer implementation of a log2.

He didn't actually describe what those lines do, which is amusing since there is actually a comment that explains them right above them in the code they came from.

I thought it was a fun example because I know that an adequate programmer can, in fact, understand it purely from the code— since I was given an algebraic simplification of that code (taking advantage of the fact that in the calling environment it's never used with val==0) by a random OSS contributor, and I think that was even before it had a comment explaining what it was doing.

A detailed description of this code ends up being an opaque transliteration which people will convert back to $language incorrectly (noting that kokjo actually did have trouble figuring out what computation it was performing and doubted his understanding of the mere behavior).  Meanwhile, a high-level description like "This is (val>>(l-16)), but guaranteed to round up, even if adding a bias before the shift would cause overflow (e.g., for 0xFFFFxxxx)" would almost certainly be implemented incorrectly: as you can see it's a bit tricky, and joe-random-programmer doesn't generally know how to make fixed-point division round up even before worrying about overflow.  (How many people know about the truncation vs. floor behavior of the >> and / operators, that they're not the same in, e.g., C and Python, how to convert one to the other, or would even realize they had to take special care even if the spec called it out?)

I'm sure a sufficiently deft drafter could write a spec that handles this... but this is two lines.  Getting exact behavior when you need to worry about things like overflow and correct performance over the entire range of a machine number is simply hard and a lot of programmers are not mentally prepared for it. Writing spec text which doesn't create hazards can be tricky.
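To make that hazard concrete (my own illustration, not code from the original source), compare the "obvious" transliteration of "shift right by l-16, rounding up" with the overflow-safe form those two lines use:

Code:
#include <stdint.h>
#include <assert.h>

/* Naive round-up: add the bias, then shift.  The addition can wrap in
   32 bits, which is exactly the 0xFFFFxxxx hazard described above. */
static uint32_t shr_roundup_naive(uint32_t val, int s) {
    return (val + ((1u << s) - 1)) >> s;
}

/* The style of the two lines under discussion: shift first, then add one
   only if any of the discarded low bits were nonzero.  Cannot overflow. */
static uint32_t shr_roundup_safe(uint32_t val, int s) {
    return (val >> s) + (((val & ((1u << s) - 1)) + (1u << s) - 1) >> s);
}

int main(void) {
    /* They agree on ordinary inputs... */
    assert(shr_roundup_naive(0x12345u, 4) == shr_roundup_safe(0x12345u, 4));
    /* ...but the naive one wraps near the top of the range:
       ceil(0xFFFFFFFF / 16) is 0x10000000, not 0. */
    assert(shr_roundup_naive(0xFFFFFFFFu, 4) == 0);
    assert(shr_roundup_safe(0xFFFFFFFFu, 4) == 0x10000000u);
    return 0;
}

A spec that just says "round up" invites the first version; getting the second out of prose alone is exactly the sort of thing that goes wrong.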
4273  Bitcoin / Development & Technical Discussion / Re: A question on ECDSA signing (more efficient tx signing)? on: July 18, 2013, 06:01:08 AM
Some ECC signing systems can do grouping operations where you can compose all the keys and all the data being signed and validate the whole thing with negligible probability of accepting if any were invalid and zero possibility of rejecting if all were valid.  But AFAIK ECDSA is not such a scheme, and the ones I know of have larger signatures. (uh, though if you were to use it to always merge all the signatures in a block— that might be interesting).

Maybe you were already suggesting it, but it's certainly possible to— in a single transaction— expose a bunch of public keys, compose them, and sign with the composition. This is what you get with type-2 deterministic wallets, or vanitypooling, effectively.  But the space reduction is reduced by the need to expose the public keys... and it would make taint analysis more potent because multiple parties cannot securely sign in that model. It's also incompatible with sighash single. If you wanted an incompatible ECC-specific change— you could instead add public key recovery. This would get similar space savings, but also save on transactions with a single input, while not breaking the ability to confuse taint analysis with joint transactions or breaking alternative sighashes.

More philosophically, privileging a particular asymmetric cryptosystem in Bitcoin is probably not a grand idea.   We have this fantastic flexible signature system where signing criteria are programs, not just a fixed asymmetric operator. It's worth taking some extra cost in order to keep that flexibility on equal footing.  It also may well be the case that we become concerned about the security of ECDSA in the future (e.g. practical very large QCs making ECDSA unacceptably weak).  The obvious alternatives do not obviously support such mathematical fun.

One thing that I do sometimes wish is that transactions were themselves hash trees internally.  It would be very nice to be able to give someone all the data they need to build a current UTXO set securely (and thus also verify that there is no unpermitted inflation in the chain) without sending them a bunch of deeply buried signature data which isn't personally interesting to them and which, if they believe the hashpower security is adequate, they only need to spot-check at random.
4274  Bitcoin / Bitcoin Discussion / Re: Bitcoin Address Collisions. on: July 17, 2013, 10:27:21 PM
If collisions do occur it won't be because someone brute forces the addresses, it will be because of an as-of-yet undiscovered flaw in ECDSA or one of the hashing algorithms which allows attacks many dozens of magnitudes faster than brute force.
Or bad RNGs in crappy JS wallet generators or hardware wallets.
4275  Bitcoin / Bitcoin Discussion / Re: Bitcoin Address Collisions. on: July 17, 2013, 10:26:28 PM
Assuming Bitcoin takes off, and your salary is 0.000000000000000000000000000000000340 satoshis or an even lower amount, then even 0.50 won't be that bad.
Bitcoin cannot represent an amount that small; the maximum number of non-zero outputs is 21e14, and at that point the UTXO size would be about 44 petabytes.
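(For anyone checking where 21e14 comes from:

21{,}000{,}000\ \mathrm{BTC} \times 10^{8}\ \mathrm{satoshi/BTC} = 2.1 \times 10^{15}\ \mathrm{satoshis},

so even if every satoshi sat alone in its own output that's the ceiling, and the ~44 PB figure is just that count times a rough per-output UTXO cost on the order of twenty bytes.)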

If you want to speculate about tinier amounts inside the Bitcoin system proper, you'd have to hypothesize some hardfork to increase precision. At the same time, even today, with no protocol change you could freely use a 512 bit address (well, assuming you could convince the sending party to write a custom scriptpubkey).

And again: your speed of generation doesn't change the number of valuable utxo that exist; so it's still only a linear attack.
4276  Bitcoin / Bitcoin Discussion / Re: Bitcoin Address Collisions. on: July 17, 2013, 10:07:51 PM
And in my opinion, you don't need to count to ~2^256 to find a collision. Perhaps even less than half of that may be enough for a single one.
This is just simple math, not "opinion"—  but finding an arbitrary collision isn't relevant: colliding two of your own addresses accomplishes nothing. You'd need to collide with an address which has been assigned a non-trivial amount of funds... so your trillions per second only give you a linear speedup.
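To see why the speedup is only linear, a rough sketch with made-up but generous numbers: with N funded addresses in a 2^{160} address space, the expected work to hit any one of them is about 2^{160}/N key generations. Even with N = 2^{26} (tens of millions of funded addresses) and 10^{12} \approx 2^{40} keys tried per second, that's

\frac{2^{160}}{2^{26} \cdot 2^{40}} = 2^{94}\ \mathrm{seconds} \approx 6 \times 10^{20}\ \mathrm{years}.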
4277  Bitcoin / Development & Technical Discussion / Re: What is stopping pruning from being added to the qt client? on: July 17, 2013, 08:56:45 PM
When blocks become very large it will be more efficient to download them in parallel from multiple peers. Allowing that means you've got to subdivide the blocks somehow, might as well subdivide at the transaction level.
Without debating the efficiency of parallel fetch (which is, in fact, debatable)— it's unrelated to talking about archival information— if you want hunks of the latest blocks from your peers you can simply ask them for them. If you want to think of it as a hashtable it's a D1HT, but it seems unnecessary to think of it that way. Besides, prefetching the transaction list requires a synchronous delay.  Simply asking your peers for a blind orthogonal 1/Nth of block $LATEST is more efficient.

As a toy example, if you have 8 peers and hear about a block, you could ask each peer to give you every transaction n where (n+x)%8==0, where x is their peer index. If not all 8 peers have received the block you can go back and re-request their fraction from the ones that do.  Additionally, peers can filter their responses based on transactions they've already sent you or seen you advertise. By grabbing blind hunks like this you avoid transmitting a transaction list (which is a substantial fraction of the total size of the transactions) and avoid round trips both to get the list and to negotiate who has/doesn't have what.  Though this is something of a toy example, it's fairly close to the functionality our bloom-filtered getblock already provides.
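Purely to spell out that selection rule (nothing beyond the arithmetic above):

Code:
/* Toy fetch plan: peer x (0-based, out of num_peers) is asked for every
   transaction index n in the block with (n + x) % num_peers == 0, so the
   peers' shares are disjoint and together cover the whole block. */
static unsigned peer_for_tx(unsigned n, unsigned num_peers) {
    return (num_peers - (n % num_peers)) % num_peers;
}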

Quote
In addition I'm assuming that all nodes are going to maintain a complete copy of the UTXO set at all times. That means if they wanted to download old blocks the only data they should need to fetch from the network is the pruned transactions from those blocks.
If someone has a complete and trusted UTXO set then what need would you expect them to have to fetch archival data?
4278  Bitcoin / Development & Technical Discussion / Re: New Attack Vector on: July 17, 2013, 07:25:44 PM
i dare you give me a piece of c code(10-20 lines) that i can't explain.

Okay, I'll give you an easy one. How about two statements?

Code:
uint32_t f(int l, uint32_t val) {
  if (l > 16) {
    val = (val >> (l - 16)) + (((val&((1<<(l - 16)) - 1)) + (1<<(l - 16)) - 1)>>(l - 16));
  } else val <<= 16-l;
  return val;
}
An implementation of f() constructed from your English description must produce identical results for all possible val and l on the range 0-31 inclusive.
4279  Bitcoin / Development & Technical Discussion / Re: What is stopping pruning from being added to the qt client? on: July 17, 2013, 06:59:41 PM
I was suggesting this as a way of storing prunable transactions, not blocks.
There is no need in the bitcoin system for uncorrelated random access to transactions by transaction ID. It's a query that's not needed for anything in the system.

Quote
Why is a multihop probe unreasonable when it comes to retrieving archival data?
Because it lowers reliability and makes DOS-attack resistance hard. It's part of why the sybil problem is not solved in DHTs on anonymous networks. (By anonymous there I mean that anyone can come or go or connect at any point; freenet opennet is actually insecure, at least in theory.)  Moreover, it's unreasonable because it doesn't actually improve anything over a simpler solution.
4280  Bitcoin / Development & Technical Discussion / Re: New Attack Vector on: July 17, 2013, 06:40:57 PM
It mutates with every commit to the satoshi client repo. Code is not a standard.
Prior versions do not mutate with commits to GIT. Those prior versions deployed in the network are the reference against which future compatibility is compared.

Refactoring the code to eventually make a better executable specification out of it, isolating the bitcoin rules from the low-level aspects of their implementation, and building a great test framework around all of it would be a very useful goal that would make this more powerful.

Stop writing code, and sit down and make a standard. It's not that hard; nobody wants to do it because they are lazy bastards who like to code crap code instead of doing things the right way.
Just like the RFCs describe what the protocol looks like down to the smallest detail, and then don't change it. Describe how clients interact with the keywords defined in http://www.ietf.org/rfc/rfc2119.txt.
Having worked on RFCs (perhaps I'll see you in Berlin in a couple weeks? Will you be at the IETF meeting? I will be speaking in the Technical Plenary.), I don't agree.  Not that I disagree that having a Bitcoin RFC would be a good thing— but I don't actually believe it would usefully solve any of the concerns you wish to solve.  When the behavior must be _exact_ it is exceptionally difficult to write a specification that is correct, and the end result ends up being— effectively— code, though it may be a kind of odd quasi-English pseudo-code for which no tools exist to actually execute it, analyze it, or test it. Behavior specified in standards is only infrequently tightly bound; in the blockchain rules most of the behavior is tightly bound— there is very little implementation freedom available.

Say we were to have written an RFC for Bitcoin in 2010.  It wouldn't have documented that weird invalid DER signatures were accepted, because we didn't know about that then.  Someone who implemented according to the specification might accept them, or might not, depending on how they implemented their code.  When a block arose that exposed the inconsistency the network would suffer an irreparable fork.  What behavior would win?  That would depend on a lot of things— technical, political, and economic. Most likely the most restrictive behavior would win— since restrictions are only soft-forking from the perspective of the permissive implementation, even if the spec said you must be permissive.  What it _wouldn't_ depend on is what the text of the RFC said.

A non-executable specification is a dead letter in a consensus system. It may be informative. It may be helpful.  But what it cannot be is normative: Normative behavior arises out of participating in the consensus and a non-executable specification cannot participate in the consensus.