Bitcoin Forum
May 04, 2024, 05:33:47 AM
News: Latest Bitcoin Core release: 27.0 [Torrent]
 
  Show Posts
3601  Bitcoin / Development & Technical Discussion / Re: Proof of Storage to make distributed resource consumption costly. on: October 20, 2013, 02:34:48 AM
I initially thought "Proof of Storage" would mean that A has a large file that B previously had, then A periodically proves to B that she still has the full file available.
It's utterly trivial to do that. Just ask that they tell you parts of the file from time to time. You can even do this without remembering the file yourself if you just compute a hashtree over it, and remember the root, and they give you the hash fragments.

But that wouldn't be very useful as a decentralized anti-dos mechanism: one copy of the file would allow you to prove to as many concurrent peers as you want.  If, instead, the peers ask you to store some unique file for them to address that issue— they have to undertake the cost of actually sending you that file which is contrary to the goal of using this as anti-dos (and also you have the risk that Sergio addressed of someone doing that in order to invisibly delegate their proof work to someone else).

What this adds to that idea is the ability to send a few bytes that the peer can use to make a file of arbitrary size, and then they can convince you that they have actually done so... without you having to do non-trivial communication, computation, or storage. The price is, indeed, that the information stored isn't terribly useful. (You could perhaps replace the hash with some other kind of abstractly useful function "folding proteins" or something— but in general double-use for proof-of-whatever subsidizes the cost of attacking, since now your attack can be paid for by the secondary benefit of the barrier function, and other useful functions have some risk of surprising optimizations that strong cryptographic hashes lack).
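A toy sketch (my own illustration, not code from this thread) of how such a scheme might look: the verifier sends a small seed, the prover expands it into an arbitrarily large file of pseudorandom chunks, commits to a Merkle root over them, and then answers random spot-checks. All names and parameters are illustrative assumptions:

```python
import hashlib

def H(b):
    return hashlib.sha256(b).digest()

def make_file(seed, n_chunks):
    # The prover expands a few seed bytes into an arbitrarily large file.
    return [H(seed + i.to_bytes(8, "big")) for i in range(n_chunks)]

def merkle_root(leaves):
    level = [H(x) for x in leaves]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])  # duplicate the odd node out
        level = [H(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_path(leaves, idx):
    # Authentication path from leaf idx up to the root.
    level, path = [H(x) for x in leaves], []
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        path.append(level[idx ^ 1])  # sibling at this level
        level = [H(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        idx //= 2
    return path

def verify(seed, root, idx, chunk, path):
    # The verifier stores only the seed and the committed root: it recomputes
    # the one queried chunk from the seed (cheap) and checks the Merkle path.
    if chunk != H(seed + idx.to_bytes(8, "big")):
        return False
    node = H(chunk)
    for sib in path:
        node = H(node + sib) if idx % 2 == 0 else H(sib + node)
        idx //= 2
    return node == root
```

A real scheme needs more care than this sketch (a prover could, for example, store only the tree and recompute chunks on demand), which is exactly the storage/computation tradeoff discussed above.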
3602  Bitcoin / Development & Technical Discussion / Re: Where exactly are these Bitcoins mined from? on: October 20, 2013, 01:47:44 AM
Are you asking about all bitcoins in the Bitcoin system?

The newly created aren't mined "from" anywhere, they are simply introduced to the system out of thin air. See https://en.bitcoin.it/wiki/Mining for more information.
3603  Bitcoin / Development & Technical Discussion / Re: How does vanitygen know when 100% is? on: October 20, 2013, 01:31:07 AM
It shouldn't be too hard to figure out how many possible addresses there are that start with a prefix of a certain length. So it knows how big the key space is. And it can report what percentage of that space has been searched.
The space is ~2^256; it will never search more than a very very tiny sliver of the space. Additionally there may, in fact, be _no_ solutions to your query (even a simple one like 1zzzzz*, though that is unlikely).  Moreover, the search is randomized: even if there were a solution, and even if you did evaluate ~2^256 points, you may still not have found it (and in fact could search forever without finding it).

It's not incorrect to show a longer expected time for a prefix of 12 than 4, of course. But what is incorrect is an indication that goes _down_.  If it tells you it will take 1 hour to find an 8 character prefix, it should indeed tell you that it expects 58 hours for a 9 character prefix.  But if you've spent a half hour on your 8 character search and still not found one, it should still be reporting 1 hour, not 30 minutes.  Your probability of finding a solution is not appreciably increased by all the times you've failed to find one, because each attempt is independent (given cryptographic assumptions).
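The memorylessness claim is easy to check numerically. A small sketch (the 58x-per-character figure follows from base58 addresses; exact vanitygen probabilities differ slightly, so the numbers here are illustrative):

```python
# Each candidate key is an independent Bernoulli trial, so the number of
# keys tried before a match is geometrically distributed — and geometric
# distributions are memoryless.
p = 1 / 58**4  # rough chance a random key matches a short prefix (4 free base58 chars)

def p_no_match(n, p):
    # Probability that n keys in a row all fail to match.
    return (1 - p) ** n

a, b = 10_000_000, 5_000_000
# P(need more than a+b keys | already failed on a keys) == P(need more than b keys):
lhs = p_no_match(a + b, p) / p_no_match(a, p)
rhs = p_no_match(b, p)
assert abs(lhs - rhs) < 1e-12

# And one more prefix character multiplies the expected search time by 58:
assert (58**5) / (58**4) == 58
```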
 
3604  Bitcoin / Development & Technical Discussion / Making fraud proofs safe to use in practice. on: October 20, 2013, 12:49:07 AM
An idea which has come up in the past is that various security model reductions in Bitcoin could be practically secure if we had compact fraud proofs.

Today, for example, SPV nodes validate nothing. So if there is a longer _invalid_ chain, SPV nodes will still follow it.  This is an existential risk for Bitcoin in the future if SPV nodes become very very popular and almost no one runs full nodes: at some point running a full node may even be foolish, because you'll reject a chain that the economic majority (running SPV nodes) accepts.

However, if instead it were possible to construct small messages that an SPV node could inspect and then be completely confident that a block broke the rules, then security would be restored: an invalid chain would only be accepted so long as all SPV nodes could be partitioned from all honest nodes— there really only needs to be one attentive honest node in the whole world to achieve security, so long as its truth cannot be censored.

Modifying Bitcoin to make compact fraud proofs possible is fairly straightforward.  The big problem is that from a practical engineering perspective this idea is very risky: processing a fraud proof is at least as complicated as validating a block, and we know that people get block validation wrong already. If these proofs exist then there would be no reason for a miner to ever try to create fraud. As a result the proofs will never be used. If they are never used the implementations will be wrong... if not in the reference then in all of the alternative implementations.  Some implementations might be vulnerable to incorrect fraud proofs that break consensus even absent fraud, others might fail to reject some kinds of fraud.  Basically any inconsistency in this normally dead code is a point an attacker could use to break the network, or it could just cause failure by chance.

Obviously software testing and whatnot addresses this kind of concern in theory. But in practice we already see people writing alternative implementations which are forked by the pre-existing test code. Good testing is only a complete answer if you assume idealized spherical developers.  Completely eliminating the possibility of error in these systems via testing is infeasible in any case, because the input space is too large to exhaust.

So in spite of the potential benefits, I think many people have not been super excited about it as a practical idea.

I finally stumbled into a retrospectively obvious way to make this kind of design more safe:  You require that all block solutions commit to two distinct candidate blocks.  One of the candidate blocks is required to be fraudulent.  The fraudulent block is eliminated through a fraud proof. Nodes which do not process fraud proofs correctly will be unable to determine which of the two is the right one.

This wouldn't eliminate all risk, and it has considerable overhead if used completely seriously but it would at least address the concern that the proof code would be completely nonfunctional due to disuse.
 
3605  Bitcoin / Development & Technical Discussion / Re: Reducing UTXO: users send parent transactions with their merkle branches on: October 20, 2013, 12:16:35 AM
2. Users send not only a transaction, but all parent transactions and their merkle branches.
3. Full node does not need to lookup UTXO to check if the parents are valid. This part of UTXO is already provided by the sender. Node needs only to check that merkle branches are valid and point to a block that was already validated.
The trick here is that the UTXO needs to be constructed here in such a way that the information provided with transactions is always enough to update the new committed UTXO hash.

This is trickier than it might seem at first glance for a couple reasons.

First, a proof of UTXO existence must also carry enough data to perform a proof of removal. Some tree structures make these proofs one and the same, but in others they differ.

Secondly, users construct their proofs independently of each other.  So one user constructs a proof to find and remove A, and another user constructs a proof to find and remove B.  This means the block must contain a proof to remove A+B.   This requirement generally eliminates any UTXO scheme based on a self-balancing tree and any scheme with content-adaptive level compression, since the A+B proof may need access to more data than the A or B proofs alone in order to rebalance or recompress the tree.  (Note, most UTXO discussion on this forum has been about tree types invalidated by this requirement. It's easily fixed, but I guess it's good we didn't run headlong into implementing them.) In #bitcoin-dev we've been calling this a "composable" or "commutative" property.

Insertion of new UTXOs, in particular, is somewhat tricky: for any kind of binary search tree, an insert may need to happen at an arbitrary location. Users cannot write proofs for the insertions of their new UTXOs because they have no clue what state the tree will be in at the time their insertion actually happens.

Petertodd's suggestion is to store the UTXO set as an authenticated insertion-ordered binary tree which supports efficient inserts, a Merkle mountain range. This addresses the above issues nicely. Proofs are still invalidated by updates, but anyone who has the update can correct any proof (even a proof originally written by a third party).
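To make the "insertion-ordered, efficient inserts" property concrete, here is a minimal MMR-flavored accumulator sketch (my own illustration, not petertodd's code): appends only ever merge equal-height subtrees on the right edge, like carries in binary addition, so existing branches are never rebalanced:

```python
import hashlib

def H(*parts):
    h = hashlib.sha256()
    for p in parts:
        h.update(p)
    return h.digest()

class MMR:
    """Minimal insertion-ordered Merkle mountain range: append-only, so
    new entries always land on the right edge and nothing rebalances."""
    def __init__(self):
        self.peaks = []  # (height, hash) pairs, strictly decreasing heights

    def append(self, leaf):
        node = (0, H(leaf))
        # Merge equal-height peaks, like carries in binary addition.
        while self.peaks and self.peaks[-1][0] == node[0]:
            h, left = self.peaks.pop()
            node = (h + 1, H(left, node[1]))
        self.peaks.append(node)

    def root(self):
        # "Bag" the peaks right-to-left into a single commitment.
        acc = None
        for _, peak in reversed(self.peaks):
            acc = peak if acc is None else H(peak, acc)
        return acc
```

Spend proofs against such a structure are Merkle paths up to one of the peaks; since an append only touches the right edge, anyone holding the new entries can patch up an outdated proof, which is the "correct any proof" property above.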

Most importantly, Petertodd's suggestion completely eliminates the necessity of storing third-party data.  In a practical system nodes that store everything would still exist, of course, but they aren't required in Petertodd's idea: the system would work fine so long as each person keeps track of only his own coins (plus headers, of course).

There are some tradeoffs in this scheme, however.  Anyone storing the proof required to spend a coin must observe every block so that they can update their proofs as the coins surrounding theirs in the UTXO set change.  And proof sizes would be log2() in the size of the complete history, rather than in the size of the spendable coins, because unless we require nodes to store the complete history there may be no single party that has the data required to re-balance the tree as coins are spent.

The latter point is not really a killer, since log2() grows slowly and the universe is finite. Smiley It's also somewhat offset by the fact that spends of recently created coins would have smaller proofs. The first point can be addressed by the existence of nodes who do store the complete data, and unlike in the Bitcoin of today, those nodes could actually get compensated for the service they provide.  (E.g. I write a txn spending my coin, but I can't produce the proof because I've been offline for a long time.  Your node tells me that it'll provide the proof, so long as the transaction pays it some fee.)

The cost of observation could potentially be reduced if nodes were required to store the N top levels of the tree, by virtue of not including them with transactions. Then you would only need to observe the blocks (or, rather, the parts of blocks) which made updates to branches where you have txouts.

The potential to completely eliminate storing third-party data removes some of the hazards of the Bitcoin design. E.g. no more incentive to abuse the blockchain as a backup service, and no need to worry about people stuffing child pornography into it to try to get the data censored.  However, that full vision also requires that new nodes be able to bootstrap without auditing the old history. This would be a substantial change from the current zero-trust model, and I'm not sure if such a change would be viable in Bitcoin. At a minimum it would probably require the robust existence of fraud proofs, in order to make a persuasive argument to newcomers that the history they can't review doesn't contain violations of the rules.
3606  Bitcoin / Pools / Re: [24 TH] p2pool: Decentralized, DoS-resistant, Hop-Proof pool on: October 19, 2013, 08:40:00 AM
I love my p2pool, ty so much for writing it. I wish I could still use it.

However my 30x333 MHz of around 10gh processing is now at this 268m difficulty, failing to get a share before p2pool finds a block, so I'm no longer getting any payouts. I've missed four in a row, after a couple of months getting in nearly every payout, bar two I think.

Unless I'm wrong, I have had to stop using p2pool and move to a smaller pool that still pays pplns.
You'll still get paid— but as the payout you're owed per block falls too low you won't get paid every block. Instead, you'll get shares here and there, and while those are in the window you'll get paid way more than your hashrate would suggest, offsetting the blocks where you didn't get paid. On average your expected income is the same, just with somewhat higher variance.    This is how p2pool avoids creating potentially infinitely large coinbase transactions that pay out really tiny dust (not to mention keeping the sharechain datarate sane).
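The same-expectation, higher-variance point shows up clearly in a toy model (illustrative numbers only, not real p2pool parameters): expected income per block is independent of share difficulty, but higher difficulty concentrates it into rarer, larger payouts:

```python
import random

def simulate(share_diff, hashes_per_block=2000, blocks=500, seed=1):
    # Each hash wins a share with probability 1/share_diff, and each share
    # pays share_diff units, so expected income per block is hashes_per_block
    # no matter what the share difficulty is.
    rng = random.Random(seed)
    incomes = []
    for _ in range(blocks):
        shares = sum(rng.random() < 1 / share_diff for _ in range(hashes_per_block))
        incomes.append(shares * share_diff)
    mean = sum(incomes) / len(incomes)
    var = sum((x - mean) ** 2 for x in incomes) / len(incomes)
    return mean, var

mean_lo, var_lo = simulate(share_diff=10)   # miner finds shares often
mean_hi, var_hi = simulate(share_diff=200)  # shares rare, but each pays 20x more
# Means agree (~2000 units/block); variance grows roughly with share difficulty.
```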
3607  Bitcoin / Development & Technical Discussion / Re: merged mining vs side-chains (another kind of merged mining) on: October 19, 2013, 08:12:05 AM
Actually, SPV is pretty sketchy in case with normal merged mining too.
Assuming that the bitcoin hashrate is actually "free" to be turned around maliciously.  This is tricky, it's like some of the arguments that Bitcoin is only secure if half of all existent (or potentially existent!) computing power is currently working on it.  In any case, my comments were only in tepid complaint about "almost as strong as" in comparison to Bitcoin. There is more to consensus than just the blocks.

None of this applies to your central purpose of your message: Parasitic altcoins have the same problem with internally invalid data or the inability to make efficient access to their state or to have reduced security compressed representations of their state.  So quit letting me knock you off-topic with my pedantry. Tongue
3608  Bitcoin / Development & Technical Discussion / Re: merged mining vs side-chains (another kind of merged mining) on: October 19, 2013, 07:55:40 AM
By the way, while Namecoin transactions can be SPV'd, domain resolution requires scanning last N blocks where N is something like 36000. So loss of SPV won't be a big problem for Namecoin either...
Trivially fixed (you might recognize the idea by the more recent name of "Committed UTXO set"). Tongue

In any case, yea, I'm not intending to say bad things about the idea generally. But there are limitations to be aware of. Perhaps some of them are fixable, I haven't given it much thought.   Thanks for following up.
3609  Other / Politics & Society / Re: Zhou Tonged - End of Silk Road on: October 19, 2013, 05:14:18 AM
I expected this to be dumb. Indeed, it was. But it was also a lot of fun. Nice singing.
3610  Bitcoin / Project Development / Re: Bitcoin RPM packages for Fedora and Red Hat Enterprise Linux on: October 19, 2013, 05:06:16 AM
Interesting, how are the Bitcoin devs going to get around the ECDSA patent shit-fight that is causing all these problems in RH-derivative OpenSSL anyway?
The patent situation for ECC is highly overhyped. Mostly it's just optimizations which are patented (and mostly for characteristic-2 curves). In my prior review, it looked like what we were doing was fine.  There are also a lot of ECC patents expiring this year and next, further solidifying the situation.
3611  Bitcoin / Development & Technical Discussion / Re: Potential Future Bitcoin Issue on: October 19, 2013, 04:25:11 AM
I would anticipate that pools/miners will actively scan pending transactions. Which leads me to the thought that they might shut down to save power when pending transactions don't contain enough bitcoin to profit on, opening a vector for attack as it artificially decreases the difficulty
I'm not sure what you mean by "actively scan".

Petertodd suggests that people should be nlocktiming some of their transactions for the future to reduce problems with this. E.g. once the current block has enough incentive, start making transactions that can only be mined in the one after it... Regardless, if transaction load isn't so high that blocks are mostly pretty full, I suspect we'll have much greater problems than miners running intermittently.
3612  Bitcoin / Development & Technical Discussion / Re: How does vanitygen know when 100% is? on: October 19, 2013, 03:10:33 AM
The numbers vanity gives are just bogus.

There is a fixed low probability of finding a solution. The probability does not go up the longer you've gone... the expected time remains at the initial expected time forever, it's just that eventually you get a surprise solution. Smiley
3613  Bitcoin / Development & Technical Discussion / Re: merged mining vs side-chains (another kind of merged mining) on: October 19, 2013, 12:30:51 AM
Everyone will neglect sidechain X because it is invalid. [...] SPV clients on the sidechain must have a bitcoin SPV client included. Then the client has to rely on the bitcoin block depth very similar to what a bitcoin SPV client would do... So nothing special...
These statements are inconsistent; SPV clients cannot reject the invalid chain. How can they distinguish between the latest bitcoin block being a correction of an invalid fork (which they should believe), and a gigantic reorg replacing a run of valid blocks (which they should reject)?

Quote
This is the same situation as a temporary bitcoin chainfork, when two blocks were found roughly at the same time. The miner of the next block will then extend the sidechain which is longer, because this will give him the higher probability to create a sustainable side-chain block.
The point in this post is that the bitcoin chain is deciding the identity of the other chain, not the length of the other chain. In this context, it doesn't matter which side chain is longer; what chain gets committed to the bitcoin chain in the future is the deciding factor.
3614  Bitcoin / Pools / Re: Suggestion for how to choose a pool difficulty for miners. on: October 18, 2013, 11:29:00 PM
I don't see why people give a shit if their _daily_ income has only a 1% variation. Jesus Christ, do you all have mining hardware setups whose output exactly matches the cost of your daily heroin fix or something?  Tongue

There are very few small to medium size businesses that have variation against expected within 1% even on a timescale of months.

I mean, sure, if the bandwidth isn't a concern and the pool doesn't care to charge people based on their actual load, then by all means, why not lower.

But caring about a daily 1% variation is just further confirmation to me that y'all are crazy. Smiley
3615  Bitcoin / Development & Technical Discussion / Re: merged mining vs side-chains (another kind of merged mining) on: October 18, 2013, 11:15:35 PM
On the other hand, side-chain consensus is fully dependent on Bitcoin consensus: side-chain reorganization is impossible without Bitcoin reorganization. (But Bitcoin reorgs can easily trigger side-chain reorgs.) This means that side-chain consensus is almost as strong as Bitcoin consensus.
The obvious constructions have some problems.

What happens when Bitcoin block X  mines sidechain X  and bitcoin block X+1 mines sidechain X' (a fork)?

Okay, having answered that. Now answer what happens when Bitcoin block X  mines sidechain X  and bitcoin block X+1 mines sidechain X' (a fork), but sidechain X is _invalid_?

Okay, having answered that. What happens when its the sidechain along with bitcoin block X+1, X+2, etc. that are invalid? How do SPV clients on the sidechain work? 

Having answered that, can you still say that the consensus is 'almost as strong as Bitcoin consensus'?
3616  Bitcoin / Development & Technical Discussion / Re: HD wallets for increased transaction privacy [split from CoinJoin thread] on: October 18, 2013, 09:06:04 PM
Nor is there a need to give customers in many cases an extended public key such as when there is no reoccurring payment relationship.
The customer might not have a single address containing a sufficiently large output to make the payment. In that case, the customer would prefer to use multiple payment addresses to send the payment in two or more transactions.

The need for multiple payment addresses is not strictly related to whether or not payments are recurring. 
Okay, but this still doesn't change the fact that issuing extended public keys is orthogonal to the gap problem.

On this point you now have to weigh the privacy gain against the reduced security against the unzip attack. I don't expect people to actually do what you're suggesting, especially since there is also the alternative of just asking for multiple addresses from the webserver (which also works for your application even where there is no public derivation available).  We're also taking on increased ecosystem dependence on the availability of public derivation when we do that, which I think is unfortunate as it'll be incompatible with any other cryptosystems bitcoin adds in the future.

As an aside, I note that if you invoke petertodd's take on cut-through payments you get the quadratic gap problem.
3617  Bitcoin / Development & Technical Discussion / Re: HD wallets for increased transaction privacy [split from CoinJoin thread] on: October 18, 2013, 08:56:27 PM
Ok, so you delegate an extended public key to your VPS and keep your master seed on an air-gapped cold wallet. You can still give each customer a unique extended public key - you just need to configure the client on your VPS and on your air-gapped wallet to add one extra layer of structure.
It doesn't matter if the customer is given an extended public key or a regular address. In fact, in the worst case the additional level of indirection makes the gapping problem quadratically worse, though if you allow no gapping of the leaf chain it's merely no better (because there can be gaps between customers who have successfully paid).  Nor is there a need in many cases to give customers an extended public key, such as when there is no recurring payment relationship (and avoiding doing so avoids disclosing a chain code, thus reducing your exposure to the unzip attack should a private key get disclosed).
3618  Bitcoin / Development & Technical Discussion / Re: HD wallets for increased transaction privacy [split from CoinJoin thread] on: October 18, 2013, 08:47:59 PM
If you're going to encourage people to upload their extended public keys to this forum to hand out to other users on their behalf, then some of them are going to believe they are getting more privacy than they actually are. That is only marginally more secure than posting a static public address and might be worse in practise because of the false sense of security. That's what I mean by a honeypot.
So you are just willfully ignoring a use case used by thousands of people, one which likely dwarfs many of the recurring-payment applications... and instead propose a solution which doesn't need BIP32 at all, which works fine today, and which people _do not use_ because it is too costly relative to the privacy provided.  I don't think anyone would be in danger of not realizing that the forum would know about forum-issued addresses, but even if they were— the alternative you propose can already be used and observably isn't in very many cases.

Under any circumstance where it happens when the receiver is not looking forward at least 1000 addresses.
Alice should also never give the same extended public key to two people, so one person's griefing won't affect her dealing with anyone else.
I believe you're allowing your dislike of third-party delegation to blind you to all forms of delegation.  I run a website on a not very secure VPS. I would rather it not be the case that someone who compromises the VPS is able to steal all the funds. So I delegate an extended public key to the VPS to allow it to compute new addresses for payments, the spending of which is handled by an entirely air-gapped wallet.  Because some people may fail to pay, the usage may be sparse, and you cannot depend on advancing only to the next unused address or payments will be missed.
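The lookahead requirement can be sketched as a gap-limit scan. This is a hypothetical stand-in for real BIP32 derivation; `LOOKAHEAD`, `derive_address`, and `scan` are illustrative names of my own, not wallet APIs:

```python
import hashlib

LOOKAHEAD = 1000  # how far past the highest used index we keep watching

def derive_address(chain_code, i):
    # Stand-in for real BIP32 public derivation: any deterministic
    # index -> address map illustrates the gap problem.
    return hashlib.sha256(chain_code + i.to_bytes(4, "big")).hexdigest()[:20]

def scan(chain_code, paid_addresses):
    """Walk the address chain, always looking LOOKAHEAD indices past the
    highest index seen paid, since issued-but-unpaid addresses leave gaps."""
    found = {}
    last_used = -1
    i = 0
    while i <= last_used + LOOKAHEAD:
        addr = derive_address(chain_code, i)
        if addr in paid_addresses:
            found[i] = addr
            last_used = i
        i += 1
    return found
```

If more than LOOKAHEAD consecutive issued addresses go unpaid, any later payment falls past the window and is missed, which is exactly the griefing scenario being discussed.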

If you start to respond that my VPS would be a honey pot and I shouldn't have an extended public key on anything which isn't completely secure, please just don't reply. If you have an issue with one of the design objectives of BIP32 then please feel free to ignore that objective completely.  Other people live in a world where security involves complicated tradeoffs and are going to go ahead happily using it for what it was specifically invented for.
3619  Bitcoin / Development & Technical Discussion / Re: HD wallets for increased transaction privacy [split from CoinJoin thread] on: October 18, 2013, 08:25:44 PM
You have failed to improve anyone's privacy.
You've subtly misrepresented what I said, which doesn't particularly surprise me, but whatever.
Because I'm an evil nasty person out to do you harm. Because that's totally more likely than us just having an honest misunderstanding and me not being able to figure out how you're not crazy here.  Smiley

Quote
Are you honestly claiming that creating a honeypot is a way to improve privacy?
I'm really very confused by your comments.

Being able to delegate address generation to less trusted things, like a VPS— allowing you to have unique addresses where otherwise reuse would be required was a major design goal of BIP32.

Yes, you could have some always-online trusted server communicating over some strongly private channel. But that just isn't practical in many cases, as evidenced by the rampant use of static addresses in these places.  Allowing the less trusted device to generate addresses is a _strict improvement_ in privacy over the alternative of equivalent usability, because it reduces the space of parties who can deanonymize these particular transactions to only those who can compromise the less trusted issuer.  True, it does not replace issuing from a trusted location— that's still preferable— but presumably people would still do that where it's actually possible, as they already do.

I am struggling to come up with any remotely rational basis for your complaint.  Are you under the impression that a user could only have a single chain, and thus this practice would reduce their privacy for all their addresses rather than just the subset which would have instead used a single static address?

No. The worst he can do is send nothing to 1000 addresses and then someone else sends 100 bitcoins to the next one.
Under what circumstances would this be a problem?
Under any circumstance where it happens when the receiver is not looking forward at least 1000 addresses.
3620  Bitcoin / Development & Technical Discussion / Re: BIP0032 Mistake on: October 18, 2013, 07:13:51 PM
I pointed that mistake out over a month ago.
Where?