Bitcoin Forum
  Show Posts
Pages: « 1 2 3 4 5 6 7 8 9 10 11 12 13 14 [15] 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 »
281  Bitcoin / Development & Technical Discussion / Re: New paper: Accelerating Bitcoin's Transaction Processing on: December 09, 2013, 06:58:52 AM
Nobody said that propagation delays are the *only* obstacle. We said they are currently the *primary* obstacle, because currently you will run into the trouble at much lower tps.

That is incorrect. We are already seeing centralization due to the enormous costs of running a full node. The number of full nodes in operation is declining, despite increased awareness of bitcoin. And currently the network propagation time is too small to have any meaningful impact. The trouble spots in scaling bitcoin are elsewhere at this time, I'm afraid.
282  Bitcoin / Development & Technical Discussion / Re: What safeguards can I use when attempting 0 confirmation transactions? on: December 08, 2013, 01:05:53 AM
Isn't Z the average block interval / 2, because a random transaction has a 50% probability of occurring on either side of the mean block interval?
Z increases, however, depending on how infrequently getblocktemplate or getwork is invoked during mining by the "average" miner. If that is on the order of minutes, then Z approaches 10 minutes instead of the ideal 5.


No. Block finding is a Poisson process. You are always 10 minutes (or whatever the current network hashrate and difficulty dictate) away from the expected arrival of the next block.
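You can convince yourself of the memoryless property with a quick simulation (a sketch, not anything from the client; the 600-second target and function names are mine):

```python
import random

random.seed(42)
TARGET = 600.0  # seconds, assumed average block interval

def mean_residual_wait(already_waited, trials=200_000):
    """Average remaining wait for the next block, conditioned on having
    already waited `already_waited` seconds with no block found."""
    total, count = 0.0, 0
    for _ in range(trials):
        t = random.expovariate(1.0 / TARGET)  # exponential inter-block time
        if t > already_waited:                # condition: still no block
            total += t - already_waited       # remaining wait from here
            count += 1
    return total / count

for waited in (0, 300, 1200):
    print(f"waited {waited:4d}s -> mean remaining wait "
          f"{mean_residual_wait(waited):.0f}s")
```

All three lines come out close to 600 seconds: no matter how long you have already waited, the expected time to the next block is unchanged.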
283  Bitcoin / Development & Technical Discussion / Re: New paper: Accelerating Bitcoin's Transaction Processing on: December 07, 2013, 09:36:21 PM
2) This is a local-only change that does not require consensus. It's okay for light nodes to still follow the most-work chain. What this change does is provide miners specifically with a higher chance of ending up on the final chain, by better estimating which fork has the most hash power behind it.

Not sure why you call it a "local-only change", because as we are discussing here, it appears that there's a risk of netsplits that don't re-converge, especially if different nodes follow different rules.
There is no possibility for non-convergence. The most-work tree eventually wins out. Always.
284  Bitcoin / Development & Technical Discussion / Re: New paper: Accelerating Bitcoin's Transaction Processing on: December 07, 2013, 07:58:24 PM
Some questions to devs please: what's the chance to see it implemented in Bitcoin? Would it take very long? What's the biggest obstacle?

It would need to be post-pruning to prevent DoS attacks.

EDIT: But to be clear, it gives ~0 benefit until you hard-fork to significantly increase the transaction rate.
285  Bitcoin / Development & Technical Discussion / Re: New paper: Accelerating Bitcoin's Transaction Processing on: December 07, 2013, 06:03:17 PM
iddo, I think there are two things to point out with respect to that wizards' conversation (where we didn't really reach consensus):

1) The DoS danger can be avoided with some trivial heuristics, for example: drop or ignore orphaned blocks with less than factor X of the difficulty of the currently accepted best block, and drop blocks furthest from the current best block when orphan storage exceeds Y megabytes/gigabytes. In reality there is a spectrum of possibilities between nodes having omniscient knowledge about all forks (pure GHOST), and nodes blindly following the most-work chain (bitcoin current). Even a heuristic-limited, DoS-safe version somewhere in the middle of that spectrum would be an improvement over today.

2) This is a local-only change that does not require consensus. It's okay for light nodes to still follow the most-work chain. What this change does is provide miners specifically with a higher chance of ending up on the final chain, by better estimating which fork has the most hash power behind it.

3) What this patch actually accomplishes is a little obscured by the hyperbole of the OP. There are alt coins which had a constant difficulty, and others which had incredibly low block intervals. I'm speaking in the past tense because they basically imploded when their network size increased and the propagation time approached the block interval. This patch wouldn't fix that problem (because eventually you run out of bandwidth), but it does let you get a lot closer to the network's block propagation time before witnessing catastrophic consequences. This is important because when we increase the maximum transaction processing rate of bitcoin, it will be by either dropping the block interval or increasing the maximum block size (both of which are hard forks), and how far we can safely do that is bounded by this limit, among others.
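To make point (1) concrete, the heuristics could look something like the sketch below. Everything here is illustrative: the class, the 50% difficulty cutoff standing in for X, and the 10 MiB budget standing in for Y are all made-up names and numbers, not code from any real client.

```python
MIN_DIFF_FRACTION = 0.5        # stand-in for X: minimum fraction of tip difficulty
MAX_ORPHAN_BYTES = 10 * 2**20  # stand-in for Y: total orphan storage budget

class OrphanPool:
    def __init__(self):
        self.orphans = {}      # block hash -> (difficulty, height, size)
        self.total_bytes = 0

    def consider(self, block_hash, difficulty, height, size,
                 tip_difficulty, tip_height):
        # Heuristic 1: ignore orphans well below the best block's difficulty.
        if difficulty < MIN_DIFF_FRACTION * tip_difficulty:
            return False
        self.orphans[block_hash] = (difficulty, height, size)
        self.total_bytes += size
        # Heuristic 2: over budget, evict blocks furthest from the best block.
        while self.total_bytes > MAX_ORPHAN_BYTES:
            victim = max(self.orphans,
                         key=lambda h: abs(tip_height - self.orphans[h][1]))
            self.total_bytes -= self.orphans[victim][2]
            del self.orphans[victim]
        return block_hash in self.orphans
```

An attacker flooding cheap or distant forks gets bounded storage out of you, while recent, high-difficulty orphans are kept for the GHOST-style fork weighting.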
286  Bitcoin / Development & Technical Discussion / Re: New Mystery about Satoshi on: December 04, 2013, 06:44:43 PM
Because Naoshi Sakamoto is not even a remotely related name to Satoshi Nakamoto, and if you spoke Japanese you would know that. Please don't give this poor guy the nightmare of public scrutiny.
287  Bitcoin / Development & Technical Discussion / Re: Why aren't transactions faster? on: November 29, 2013, 08:58:36 PM
Satoshi actually included a very useful mechanism in the protocol specifications and data structures which hasn't been implemented in BitcoinQT but could basically solve the double-spend problem (at least regarding tx replacement) if it was:

Using the sequence and lock_time fields prevents a tx from being replaced by another tx after the specified time (or block number, or ever if sequence = UINT_MAX)

Beyond the valid points made by gmaxwell and Peter Todd, you are making one serious mistake in your reasoning: you are assuming that other clients will behave like your client does. The *only* transaction-selection behavior that is enforced by protocol is proof-of-work confirmation of transactions in blocks. You have no guarantees that any other node will handle double-spends the same way you or a future version of the reference bitcoin client does.

Example: a site like blockchain.info tries to connect to every single node on the bitcoin network, and keeps all transactions, including double-spends. If it were configured to forward transactions, or if miners scraped the bc.i update pages for transactions they hadn't seen (a not unlikely possibility), then it does not matter in the slightest what the default relay rules of the rest of the network are. Anyone can perform a double-spend by sending the transaction to bc.i, and one way or another a major mining pool with promiscuous replace-by-fee settings will accept it.

The bitcoin protocol offers absolutely no hard guarantees about consensus until a transaction finds its way into a valid block. Placing any trust in unenforced and unenforceable relay rules is a recipe for disaster.
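For reference, the sequence-based replacement policy being discussed amounts to roughly the following. This is my rough reconstruction for illustration, not the reference client's code, and, as argued above, nothing in the protocol forces any node or miner to honor it:

```python
UINT_MAX = 0xFFFFFFFF  # nSequence value that marks an input as final

def is_final(tx):
    # A tx whose inputs all carry sequence == UINT_MAX can never be replaced.
    return all(seq == UINT_MAX for _outpoint, seq in tx["inputs"])

def can_replace(old, new):
    """Sketch of the original sequence-number replacement policy:
    same set of spent outpoints, every sequence number strictly increased.
    It is a relay policy only -- unenforced and unenforceable."""
    if is_final(old):
        return False
    old_seq = dict(old["inputs"])   # outpoint -> sequence
    new_seq = dict(new["inputs"])
    if old_seq.keys() != new_seq.keys():
        return False
    return all(new_seq[op] > old_seq[op] for op in old_seq)
```

A node that follows this rule will swap the transactions in its own mempool; a node that doesn't (or a miner fed the old version directly) simply won't, which is exactly the problem.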
288  Bitcoin / Development & Technical Discussion / Re: Why aren't transactions faster? on: November 29, 2013, 05:14:34 PM
OpenCXP does nothing and can do nothing to prevent a double-spend with higher fees from replacing the transaction you agreed upon once you walk out that door.
289  Bitcoin / Development & Technical Discussion / Re: Why aren't transactions faster? on: November 29, 2013, 12:37:53 AM
Once replace-with-fee is deployed, I could rip off that coffee shop with near 100% reliability, every time until they stop serving me coffee. Don't accept zero-conf transactions.
290  Bitcoin / Development & Technical Discussion / Re: How does a site like Blockchain.info know which outputs are change? on: November 28, 2013, 08:33:04 PM
If it were me, I'd do prime decomposition on the amounts, calculate their relative magnitude, a boolean value indicating whether they'd been seen before, etc., label a number of training examples, and have a support vector machine generate a classifier.

I'm so glad it is not you; that kind of thing is exactly what someone fascinated with machine learning would go for. There are thousands and thousands of crap papers where people blindly reach for machine learning -- and it is almost always SVM -- without even considering other methods, reporting accuracy and other metrics close to 100%, only to find out that they don't even know how to set up training/testing sets, nor have a clue about the features they are using.

Yes, because when faced with a classic machine learning problem, the tried and true techniques of machine learning are not what you'd want to use.
291  Bitcoin / Development & Technical Discussion / Re: How does a site like Blockchain.info know which outputs are change? on: November 28, 2013, 07:42:23 AM
Bitcoin-Qt is not the only wallet application...
292  Bitcoin / Development & Technical Discussion / Re: How does a site like Blockchain.info know which outputs are change? on: November 28, 2013, 06:39:05 AM
If it were me, I'd do prime decomposition on the amounts, calculate their relative magnitude, a boolean value indicating whether they'd been seen before, etc., label a number of training examples, and have a support vector machine generate a classifier.

There are a million other ways you could do it and get decent results. That doesn't stop it from being a WAG, though.
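For what it's worth, the amount-based features alone get you surprisingly far. The sketch below is purely illustrative: the scoring function and its weights are a made-up stand-in for a trained classifier, not anything blockchain.info actually runs.

```python
def round_number_score(satoshis):
    # Count trailing decimal zeros: 0.1 BTC = 10_000_000 satoshis scores 7.
    # "Round" amounts tend to be the intended payment, not the change.
    score = 0
    while satoshis and satoshis % 10 == 0:
        satoshis //= 10
        score += 1
    return score

def guess_change_output(amounts, addresses, seen_before):
    """Pick the most change-like output index: least round amount,
    freshest address. Weights are hypothetical, standing in for a
    classifier trained on labeled examples."""
    def change_likeness(i):
        fresh = 0 if seen_before.get(addresses[i]) else 3
        return fresh - round_number_score(amounts[i])
    return max(range(len(amounts)), key=change_likeness)
```

A real classifier would be trained on hand-labeled transactions rather than hand-tuned weights, but the features are the interesting part, and either way it remains a guess.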
293  Bitcoin / Development & Technical Discussion / Re: How does a site like Blockchain.info know which outputs are change? on: November 28, 2013, 03:35:07 AM
It makes a wild-ass guess.
294  Bitcoin / Development & Technical Discussion / Re: Why aren't transactions faster? on: November 27, 2013, 10:46:40 PM
I agree that with the proper precautions (waiting a few seconds to see if there are double spend attempts, ensuring the transaction has a sufficient tx fee) 0-confs is enough for a point of sale transaction.

No, no, and no. None of those precautions you mention do anything to protect you against a double spend. There is nothing you can do to provide significant protection except wait for a confirmation. Do not trust zero-confirmation transactions, ever*.

(* Unless you are extending pre-existing trust you've placed in the person sending the coins, or have some mechanism for obtaining restitution in the case of a double-spend. Either way, that's side-stepping, not solving the problem.)
295  Bitcoin / Development & Technical Discussion / Re: Zerocoin proofs reduced by 98%, will be released as an alternative coin. on: November 25, 2013, 06:03:07 PM
Yes, it's exactly MMR applied to the Chaum token double-spend db. This solves the problem of maintaining that ever-increasing list of unblinded, spent tokens by pushing the problem out of the validators and onto the people holding the coins. Proof size grows with log2 the number of spent tokens, but the proofs can be thrown away once validated (as they can be reconstructed from the block chain history).

It doesn't link the spend to the original coin however, as we're only dealing with revelation of the unblinded tokens. You still need some sort of ZKP that the unblinded token was out of the original set of blinded tokens.
296  Bitcoin / Development & Technical Discussion / Re: Why aren't transactions faster? on: November 25, 2013, 05:54:09 PM
I should have chosen a better lower bound than 10s, because yes at that scale network propagation has measurable effects on the security. But a 1-minute confirm would not be significantly less secure than a 10-minute confirm (and a 2-week confirm wouldn't be much more secure than that).

Zero-confirmation transactions have no security beyond your trust in the person you are interacting with. Full-stop.
297  Bitcoin / Development & Technical Discussion / Re: Zerocoin proofs reduced by 98%, will be released as an alternative coin. on: November 25, 2013, 09:20:48 AM
* Spent coins list is needed for validation and grows forever (e.g. no pruning of the critical validation state).

I've found a way around this limitation using a variant of the UTXO proof tree structure. A tree containing all spent tokens is constructible from the spend history visible in the chain history. Anyone holding an unspent token maintains an insertion-proof into this tree, which is included as part of the spend. Validating nodes need only keep the root hash for a given series, which is updated after validating each spend.
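The core mechanic is ordinary Merkle proofs: validators keep only a root hash, while whoever holds a token carries a log2(n)-sized path of sibling hashes and presents it at spend time. A minimal sketch of just that piece (the insertion/update logic of the actual proposal is omitted, and these function names are mine):

```python
import hashlib

def h(*parts):
    return hashlib.sha256(b"".join(parts)).digest()

def _next_level(level):
    if len(level) % 2:                 # duplicate the last node on odd levels
        level = level + [level[-1]]
    nxt = [h(level[i], level[i + 1]) for i in range(0, len(level), 2)]
    return nxt, level

def merkle_root(leaves):
    level = [h(x) for x in leaves]
    while len(level) > 1:
        level, _ = _next_level(level)
    return level[0]

def merkle_path(leaves, index):
    """Sibling hashes from leaf to root -- what the token holder stores."""
    level = [h(x) for x in leaves]
    path = []
    while len(level) > 1:
        nxt, padded = _next_level(level)
        path.append(padded[index ^ 1])
        level, index = nxt, index // 2
    return path

def verify(root, leaf, index, path):
    """What a validator runs, knowing only the root hash."""
    node = h(leaf)
    for sibling in path:
        node = h(node, sibling) if index % 2 == 0 else h(sibling, node)
        index //= 2
    return node == root
```

The proof grows with log2 of the tree size and can be discarded once validated, which is what lets the validators' state stay constant.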

But the other two points remain as major obstacles...
298  Bitcoin / Development & Technical Discussion / Re: Why aren't transactions faster? on: November 25, 2013, 09:10:21 AM
Because the confirmations would be backed by 1/10th as much hashing power, so also only 1/10th as secure. If you wait for one confirmation now, you would have to wait for 10 confirmations with the new system to have a comparable level of security.

This is a commonly held belief here, but incorrect. One 10-second confirmation provides exactly the same security as a single 10-minute confirmation. A coin with blocks coming 10x as quickly would allow you to wait less time for confirmations. It would also be a terrible idea for a whole host of other reasons, many of them mentioned above.
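One way to see this: the catch-up probability from section 11 of the whitepaper depends only on the attacker's hash-power share q and the confirmation depth z; the block interval never appears. A transcription of that calculation into Python:

```python
from math import exp, factorial

def attacker_success(q, z):
    """Probability an attacker with hash share q ever catches up from
    z blocks behind (whitepaper, section 11). Note there is no
    block-interval term anywhere in the formula."""
    p = 1.0 - q
    lam = z * (q / p)
    total = 1.0
    for k in range(z + 1):
        poisson = exp(-lam) * lam ** k / factorial(k)
        total -= poisson * (1.0 - (q / p) ** (z - k))
    return total
```

With q = 0.1, one confirmation leaves roughly a 20% catch-up probability whether blocks arrive every 10 seconds or every 10 minutes; only z and q matter.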
299  Bitcoin / Development & Technical Discussion / Re: Find senders BTC address: using getrawtx decoderawtx.. on: November 24, 2013, 07:49:09 PM
You can't. There is no way to do it. The blockchain does not hold that information.
300  Bitcoin / Development & Technical Discussion / Re: CoinJoin: Bitcoin privacy for the real world on: November 18, 2013, 10:22:43 PM
Michael, your examples indicate bad coin control and insufficient anonymity sets, not an inherent weakness in CoinJoin.