Bitcoin Forum
  Show Posts
241  Bitcoin / Development & Technical Discussion / Re: Inflation-proofing via fraud notices on: March 07, 2013, 07:24:31 PM
Since SPV clients anyways need to trust full clients,
The point of this is fixing it so that they don't. It would allow the network to remain secure even if there were fairly few full nodes (because running one had become too expensive). Otherwise, the few full nodes may find it in their economic or political interest to lie and inflate the currency, and if nothing has been done to make it easy for SPV nodes to automatically catch and reject these lies, then everyone else may feel that they have to go along with it. An important thing about trust is that it's easier to trust when the number of things the trusted entities could get away with is reduced.

The important things to check are:
* No invalid scriptsigs.
SPV nodes could randomly check them today and send out compact proofs (fragments) showing signature problems.

* No double-spends.
SPV nodes could randomly check with reduced effectiveness, or a single honest full node could send out compact proofs of double-spends (fragment pairs).

* No subsidy inflation.
Needs the proof described in this thread; with it, SPV nodes could randomly check and send out compact proofs (fragment plus input txn) showing a bogus subsidy.

* No spending of non-existent coins.
Requires a committed UTXO set to produce a compact proof.
(Also, a committed UTXO set isn't quite sufficient, because a txn can spend coins created in the same block, so this would also need a separately committed search tree of the utxos created in the current block.)

If these things can be proven, then Bitcoin could be made robust against rule violations even if most nodes were SPV, by the existence of only one non-isolated honest full node and/or spot-checking by SPV nodes.
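For concreteness, the core of checking such a compact proof fragment on an SPV node is just Merkle-branch verification against a block header the node already has. Here's a rough sketch (hypothetical helper names, serialization details glossed over), assuming Bitcoin's double-SHA256 Merkle tree:

Code:
# Toy sketch: verify that a fraud-proof "fragment" (a transaction plus its
# Merkle branch) really belongs to the block whose header we already have.
# Helper names are made up; endianness details are glossed over.
import hashlib

def dsha256(data: bytes) -> bytes:
    """Bitcoin-style double SHA-256."""
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def verify_merkle_branch(tx_bytes: bytes, branch: list, index: int,
                         merkle_root: bytes) -> bool:
    """branch: sibling hashes from leaf to root; index: the transaction's
    position in the block, used to pick concatenation order at each level."""
    h = dsha256(tx_bytes)
    for sibling in branch:
        if index & 1:              # our hash is the right child
            h = dsha256(sibling + h)
        else:                      # our hash is the left child
            h = dsha256(h + sibling)
        index >>= 1
    return h == merkle_root

Only after this check passes would the SPV node bother evaluating the alleged rule violation (bad scriptsig, conflicting fragment pair, bogus subsidy, etc.).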

This overall upgrade strikes me as quite important for scaling beyond the current block size limit, but due to its hard-forking nature it needs to be ready and well-tested long before the block size limit is actually lifted.  This looks like a pretty Big Job - especially the authenticated UTXO set stuff.

I saw in IRC that Gavin stated he isn't "philosophically opposed" to this.  Though, as Mike said here, "the devil is in the details"; if they can be worked out and a concrete implementation plan developed, I'm thinking that with the recent exchange-rate run-up people might be feeling generous enough to fund work on this.  Maybe I'm being overly optimistic, but if all the devs got behind a funding campaign and did a good job of conveying its significance (and maybe how unimpeded scaling will make their bitcoins worth more, and their mining revenues worth more down the road), I think money would be forthcoming, especially now.

I guess such a campaign would be the job of the Bitcoin Foundation to organize?
242  Bitcoin / Development & Technical Discussion / Re: I'VE CHANGE MY MIND! on: February 23, 2013, 07:57:40 AM
Hahaha, awesome.  I'm convinced.
243  Bitcoin / Development & Technical Discussion / Re: How a floating blocksize limit inevitably leads towards centralization on: February 22, 2013, 07:48:58 AM
What changed in your understanding of marketing during the last three years?

https://bitcointalk.org/index.php?topic=1347.msg15145#msg15145

The block size is an intentionally limited economic resource, just like the 21,000,000-bitcoin limit.

Changing that vastly degrades the economics surrounding bitcoin, creating many negative incentives.


One of the most lucid comments I have read so far. Thank you jgarzik
Haha, I asked him to elucidate that comment.
244  Bitcoin / Development & Technical Discussion / Re: How a floating blocksize limit inevitably leads towards centralization on: February 22, 2013, 03:10:01 AM
The negatives that bother me about not changing the block size limit are...

I very much doubt any of the items in that list are valid concerns. The only real broken promise of a fixed block size is the eventual high transaction fees. It is still not clear that this is a bad thing. One thing is for certain, however: a fixed block size should deliver Bitcoin's promise of the "largest hash rate of any block chain".

You didn't actually address any of my concerns; you just dismissed them all.  Any reasons why?

I gave a reason to doubt the assumption that a fixed block size will necessarily lead to greater hashing power here: https://bitcointalk.org/index.php?topic=145641.msg1546008#msg1546008
245  Bitcoin / Development & Technical Discussion / Re: How a floating blocksize limit inevitably leads towards centralization on: February 22, 2013, 02:52:13 AM
What changed in your understanding of marketing during the last three years?

https://bitcointalk.org/index.php?topic=1347.msg15145#msg15145

I'm glad you asked Smiley

Being the person who actually posted a faux-patch increasing the block size limit, it is important to understand why I disagree with that now...  it was erroneously assuming that the block size was the whole-picture, and not a simple, lower layer solution in a bigger picture.

The block size is an intentionally limited economic resource, just like the 21,000,000-bitcoin limit.

Changing that vastly degrades the economics surrounding bitcoin, creating many negative incentives.



Would you mind briefly listing the negative incentives you perceive?

The negatives that bother me about not changing the block size limit are:
1) It will end the promise of a universal payment network (modulo micropayments).
2) It will end the promise of widespread disintermediated finance/personal financial sovereignty.
3) With (2) comes the possibility of a few intermediaries having "sovereignty" over a very large quantity of all bitcoins.
4) Also with (2) comes a weakening of the promise of censorship-resistance (it could very well work like in China, for example, where it's possible to evade censorship, but too much of a pain in the ass for most to bother).
5) It will end the promise of stability due to widespread direct use of "high-powered money" (i.e. it opens the possibility of a fluctuating fractionally backed money supply).

I'll take my mention of "sovereignty" here as an opportunity to drop an Omar Little quote cause I like it: "Man, money ain't got no owners, only spenders."
246  Alternate cryptocurrencies / Altcoin Discussion / Re: Ripple Giveaway! on: February 21, 2013, 04:40:31 PM
rNizvEPUy8sqXZjfqaeQbKhuTSJqNZAHFJ
247  Bitcoin / Development & Technical Discussion / Re: How a floating blocksize limit inevitably leads towards centralization on: February 21, 2013, 03:57:58 PM
If we want to cap the overhead of downloading the latest block at, say, 1% of the time, we need to be able to download MAX_BLOCKSIZE bytes within 6 seconds on average, so that we can spend 99% of the time hashing.

At 1MB, you would need a ~1.7Mbps  connection to keep downloading time to 6s.
At 10MB, 17Mbps
At 100MB, 170Mbps

and you start to see why even a 100MB block size would render 90% of the world's population unable to participate in mining.
Even at 10MB, it requires investing in a relatively high-speed connection.

Also of importance is the fact that local and international bandwidth can vary by large amounts. A 1Gbps connection in Singapore (http://www.starhub.com/broadband/plan/maxinfinitysupreme.html) only gives you 100Mbps international bandwidth, meaning you only have 100Mbps available for receiving mined blocks.

Since a couple of people have thanked the author for posting this, I thought I should mention that only transaction hashes need to be sent in bursts.  So a block of 1000 transactions (roughly 1MB) only requires ~32KB of data to be sent in a burst, requiring a ~43Kbps connection to keep downloading time to 6s.  100MB blocks require ~4.3Mbps.  The continuous downloading of the full transaction data stays below these limits.
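To make the arithmetic explicit, here's a quick back-of-the-envelope sketch (my assumptions: 32-byte transaction hashes and the 6-second burst budget from the quoted post):

Code:
# Burst bandwidth needed to send only the tx hashes of a new block,
# assuming 32-byte hashes and a 6-second download budget.
HASH_BYTES = 32
BURST_SECONDS = 6.0

def burst_kbps(num_txs: int) -> float:
    """kbit/s needed to push just the transaction hashes in one burst."""
    bits = num_txs * HASH_BYTES * 8
    return bits / BURST_SECONDS / 1000

print(burst_kbps(1_000))    # ~1MB block   -> ~43 kbps
print(burst_kbps(100_000))  # ~100MB block -> ~4,300 kbps (~4.3 Mbps)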
248  Bitcoin / Bitcoin Discussion / Re: Why the Bitcoin rules can't change (reading time ~5min) on: February 21, 2013, 03:41:01 PM
hazek, I had your notion of "sovereignty" from this thread in mind when I wrote this post: https://bitcointalk.org/index.php?topic=144895.msg1545885#msg1545885

It appears to be achievable with an SPV client to a similar degree that most people currently achieve it with a fully validating client.  I don't see any reason why even smartphone clients can't be "sovereign", at least practically speaking.
249  Bitcoin / Development & Technical Discussion / The assumption that mining can be funded via a block size limit on: February 21, 2013, 03:13:33 PM
I posted the following bit in a comment in one of the many recent block size limit threads, but it's really a separate topic.

It also strikes me as unlikely that a block size limit would actually achieve an optimal amount of hashing power.  Even in the case where most users have been driven off the blockchain - and some off of Bitcoin entirely - why should it?  Why shouldn't we just expect Ripple-like trust networks to form between the Chaum banks, and blockchain clearing to happen infrequently enough so as to provide an inconsequential amount of fees to miners?  What if no matter what kind of fake scarcity is built into the blockchain, transaction fees are driven somewhere around the marginal transaction fee of all possible alternatives?

This assumption is at the crux of the argument to keep a block size limit, and everyone seems to have just assumed it is correct, or at least left it unchallenged (sorry if I missed something).
250  Bitcoin / Development & Technical Discussion / Re: How a floating blocksize limit inevitably leads towards centralization on: February 21, 2013, 02:39:36 PM
All the important protocol rules can be enforced by SPV clients if support for "error messages" is added to the network.  This is described here: https://bitcointalk.org/index.php?topic=131493.0

The trust model relies on information being hard to suppress, which is the same as the trust model nearly everyone running a full node is subscribing to in practice anyway by not personally vetting the source code.

Of course, with little expenditure, most people will still be able to run massively scaled full nodes anyway, once all the proposed optimizations are implemented.  But it's at least nice to know that even the smartphone clients can "have a vote".

If the transaction rate does reach such huge levels, then it strikes me that the hashing power funding problem has been solved automatically - all those default but technically optional half-cent transaction fees sure would add up.

It also strikes me as unlikely that a block size limit would actually achieve an optimal amount of hashing power.  Even in the case where most users have been driven off the blockchain - and some off of Bitcoin entirely - why should it?  Why shouldn't we just expect Ripple-like trust networks to form between the Chaum banks, and blockchain clearing to happen infrequently enough so as to provide an inconsequential amount of fees to miners?  What if no matter what kind of fake scarcity is built into the blockchain, transaction fees are driven somewhere around the marginal transaction fee of all possible alternatives?
251  Bitcoin / Bitcoin Discussion / Re: Parts of the code related to the 21 million limit on: February 15, 2013, 09:23:45 PM
I feel like that simple little function could make for some good geeky marketing Smiley
252  Other / Politics & Society / Re: The Quantum Conspiracy: What Popularizers of QM Don't Want You to Know on: February 11, 2013, 09:53:22 PM
They lose me when physicists start talking about the mind.
You mean the mystical nonsense?  Not All Physicists Are Like That...
253  Other / Politics & Society / Re: The Quantum Conspiracy: What Popularizers of QM Don't Want You to Know on: February 11, 2013, 01:57:48 PM
It turns out that the most popular and widely taught interpretation of quantum mechanics, the so-called Copenhagen interpretation, which states that the waves of probabilities somehow collapse as a result of measurement, is mathematically untenable. Well, it was just about time... The author leaves us with two options: the multiple-worlds model (read: parallel realities, which is mathematically valid) and his own interpretation, called the zero-worlds model, which states that there is no underlying objective reality at all and that we are all creations of our thoughts. He didn't say where those thoughts come from, though, but that would probably be outside the scope of his presentation.

So the bottom line is - be careful with what you read and who you trust in mainstream science. Smiley

http://www.youtube.com/watch?v=dEaecUuEqfc
Enjoy!

There's no conspiracy going on here, just a lot of richness you seem to be failing to appreciate.  Here's a good place to start: https://en.wikipedia.org/wiki/Quantum_decoherence

I should add: most adherents of the Copenhagen interpretation tack on decoherence to explain the appearance of wavefunction collapse.  Also, there are more interpretations than just many worlds and the author's mystical nonsense.  Check out consistent histories, for example: https://en.wikipedia.org/wiki/Consistent_histories
254  Bitcoin / Development & Technical Discussion / Re: The MAX_BLOCK_SIZE fork on: February 10, 2013, 10:14:47 PM
Yeah.  If there's a major split, the <1MB blockchain will probably continue for a while.  It will just get slower and slower with transactions not confirming.  It would be better if there is a clear upgrade path so we don't end up with a lot of people in that situation.

You are assuming miners want to switch.  They have a very strong incentive to keep the limit in place (higher fees, lower storage costs).
Sometimes more customers paying less results in higher profit.  Miners will surely have an incentive to lower their artificially high prices to accommodate new customers, instead of having them all go with much cheaper off-blockchain competitors.
255  Bitcoin / Development & Technical Discussion / Re: The MAX_BLOCK_SIZE fork on: January 31, 2013, 10:32:23 PM
To recap, this is my issue with Hearn: he makes a false claim (quoted above) to prop up a false generalization of his. If that doesn't work, he just ignores the point. Intellectual dishonesty at its finest, and not quite the first time either.

Any hard forks in that list of many changes you weasel you!

IIRC, a couple of years ago there was a buffer overflow bug that required an emergency hard-forking change to fix.  I think there was one more fix that was rolled out gradually over a couple of years, but I don't care enough to look it up for you.

Perhaps Mike didn't notice your demand that he address your point because he (understandably) has you on his ignore list.
256  Bitcoin / Development & Technical Discussion / Re: Inflation-proofing via fraud notices on: January 22, 2013, 08:42:12 AM
Here's sort of a recent continuation of that discussion gmaxwell pointed you to as well: https://bitcointalk.org/index.php?topic=131493.0
257  Bitcoin / Development & Technical Discussion / Re: Ultimate blockchain compression w/ trust-free lite nodes on: January 18, 2013, 06:36:02 AM
Am I correct in thinking that the skip string and child keys don't really need to be hashed in at each step running up a Patricia trie, since the hash stored at the end of a given branch is of the full word formed by that branch, and inclusion of that hash in the root is proved regardless?

This would simplify the authentication structure a fair bit, allowing us to avoid having to hash weird numbers of bits, and making it not depend on whether or not we're compressing strings of single-child nodes with skip strings (I don't know if this would ever be an advantage).
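A toy sketch of what I mean (made-up structure, just an illustration of the question, not the scheme proposed in this thread): each leaf hash commits to the full key formed by its branch, so the internal node hashes can commit to nothing but their child hashes, and the skip strings never get hashed at all:

Code:
# Toy illustration: authenticate a PATRICIA trie by hashing only child
# hashes at internal nodes. The full word is committed by the leaf hash,
# so skip strings/child keys don't need to be hashed in along the way.
import hashlib

def H(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def leaf_hash(full_key: bytes, value: bytes) -> bytes:
    # The full word formed by the branch is committed here.
    return H(b'leaf' + full_key + value)

def node_hash(child_hashes: list) -> bytes:
    # No skip string, no child key bytes; sorting fixes the order so the
    # same set of children always yields the same hash.
    return H(b'node' + b''.join(sorted(child_hashes)))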
258  Bitcoin / Development & Technical Discussion / Re: Ultimate blockchain compression w/ trust-free lite nodes on: January 18, 2013, 02:51:23 AM
But I see the benefit that you don't even really need linked lists at each node, only two pointers.  But you end up with a lot more total overhead, since you have the trienode overhead for so many more nodes...
More specifically, it will need somewhat less than double the number of nodes, since every internal node in a 2-way trie has two children - i.e. the same number of extra nodes as the Merkle tree, which we were going to need to include anyway.

Quote
I'll have to think about this one.  I think there is clearly a benefit to a higher branching factor:  if you consider a 2^32-way Trie, there is a single root node and just N trienodes -- one for each entry (so it's really just a lookup table, and lookup is fast).  If you have a 2-way (bitwise) Trie, you still have N leaf nodes, but you have a ton of other intermediate nodes and all the data that comes with them.  And a lot more pointers and "hops" between nodes to get to your leaf.  It leads me to believe that you want a higher branching factor, but you need to balance against the fact that the branching adds some inefficiency (i.e. in the case of using linked lists between entries, it would obviously be bad to have a branching factor of 2**32).
Wouldn't the bitwise trie actually require fewer hops, since it doesn't need to traverse through the linked list?  You seem to be saying this and at the same time saying it requires more Smiley

Assuming we were going to be serializing the original nodes and keying them individually in a database, going bitwise shouldn't affect this at all, since we would just prune off a bunch of crap to "zoom out" to a 256-way "macro-node", and conversely build in the details of a macro-node grabbed from storage to "zoom back in" to the bitwise subtrie.

Estimate of the amount of extra overhead this proposal will impose on existing clients

Since the network is switching to using a leveldb store of utxos now, we can easily make this estimate.  I'll ignore for now the fact that we're actually using a tree of tries.  Assuming we're actually storing 256-way macro-nodes, and that the trie is well-balanced (lots of hashes, should be), then for a set of N = 256^L utxos, a branch will have L macro-nodes plus the root to retrieve from/update to disk, instead of just the utxo like we do now.

But it makes sense to cache the root and first level of macro-nodes, and have a "write-back cache policy", where we only do periodic writes, so that the root and first level writes are done only once per batch.  So for a batch size >> 256, we have more like L - 1 disk operations to do per utxo retrieval/update, or L - 2 more than we do now.  That's only one extra disk operation per retrieval/update with L = 3, or N = ~17M utxos total.

Due to the extreme sparsity, it probably doesn't make much sense to permanently cache beyond the first level of macro-nodes (16 levels of bitwise nodes ~ 2*2^16*(2*32 + 2*8) bytes (2*2^16 nodes, 32 bytes per hash, 8 bytes per pointer) ~ 10MB of permanently cached data), since any macro-nodes in the next level can be expected to be accessed during a batch of, say, 10,000 writes, roughly 1/256^2 * 10,000 ~ 0.15 times.  So additional memory requirements should stay pretty low, depending on how often writes are done.

I guess to do it properly and not assume the trie is perfectly balanced, we would have an "LFU cache policy", where we track the frequency with which macro-nodes are accessed and throw out the least frequently used ones during writes, keeping some desired number of the most frequently used.

Disk operations are almost certainly the bottleneck in this proposal, but they aren't the bottleneck in the main client, so it's possible that this wouldn't add much in the way of noticeable overhead.

Is my math correct?  (There was a small mistake I fixed in an edit.)
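In case anyone wants to poke at the numbers, here's the same estimate redone as a quick script (same assumptions as above: 256-way macro-nodes, a well-balanced trie, and the root plus first level of macro-nodes permanently cached with write-back batching):

Code:
# Re-derivation of the disk-overhead estimate above.
BRANCH = 256
HASH_BYTES, PTR_BYTES = 32, 8

def disk_ops_per_utxo(levels: int) -> int:
    """Macro-node reads/writes per utxo retrieval/update, beyond the
    cached root and first level (L macro-nodes + root, minus 2 cached)."""
    return levels - 1

def cached_bytes(bitwise_levels: int = 16) -> int:
    """Rough size of permanently caching the root + first macro-node level:
    ~2*2^16 bitwise nodes, each holding 2 hashes and 2 pointers."""
    nodes = 2 * 2 ** bitwise_levels
    return nodes * (2 * HASH_BYTES + 2 * PTR_BYTES)

L = 3
print(BRANCH ** L)          # ~16.8M utxos
print(disk_ops_per_utxo(L)) # 2 ops per utxo, i.e. 1 more than today
print(cached_bytes() / 1e6) # ~10.5 MB of permanently cached data
print(10_000 / BRANCH ** 2) # ~0.15 expected hits per level-2 macro-node per batch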
259  Bitcoin / Development & Technical Discussion / Re: Ultimate blockchain compression w/ trust-free lite nodes on: January 17, 2013, 11:25:02 AM
I was thinking that, rather than use a Merkle tree at each trie node for verifying the presence of a child, we could use a bitwise trie instead, since it would have faster updates due to always being balanced and not having to sort.  This would also speed up lookups, since we wouldn't have to traverse a linked list of pointers at each node.

But then we've just arrived in a roundabout sort of way at the whole trie being essentially a bitwise trie, since the nodes in the original trie with 256 bit keysize overlap exactly with subtries of the bitwise trie.

Was there a reason for not proposing a single bit keysize from the start?  I thought about it a while back, but disregarded it for some reason I can't recall.
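To make the idea concrete, here's a bare-bones sketch of an authenticated bitwise trie (toy code with no skip strings or path compression, so one node per key bit - just an illustration, not something you'd deploy): each node has at most two children selected by the next key bit, and its hash is simply H(left || right), so there's nothing to sort and nothing to rebalance:

Code:
# Toy authenticated bitwise (2-way) trie: two children per node, selected
# by the next key bit; node hash = H(left_child_hash || right_child_hash).
import hashlib

EMPTY = b'\x00' * 32

def H(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

class Node:
    def __init__(self):
        self.children = [None, None]  # indexed by the next key bit
        self.leaf = None              # (key, value) at the end of a branch

    def hash(self) -> bytes:
        if self.leaf is not None:
            key, value = self.leaf
            return H(b'leaf' + key + value)
        left = self.children[0].hash() if self.children[0] else EMPTY
        right = self.children[1].hash() if self.children[1] else EMPTY
        return H(left + right)

def bit(key: bytes, i: int) -> int:
    return (key[i // 8] >> (7 - i % 8)) & 1

def insert(root: Node, key: bytes, value: bytes, depth: int = 0) -> None:
    # Fixed-length keys assumed (e.g. 256-bit hashes), so leaves only ever
    # appear at the bottom; a real version would path-compress the long
    # single-child runs, as discussed above.
    if depth == len(key) * 8:
        root.leaf = (key, value)
        return
    b = bit(key, depth)
    if root.children[b] is None:
        root.children[b] = Node()
    insert(root.children[b], key, value, depth + 1)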
260  Bitcoin / Development & Technical Discussion / Re: Ultimate blockchain compression w/ trust-free lite nodes on: January 03, 2013, 12:48:18 AM
Am I neglecting anything?
Maybe just that, for privacy reasons, instead of requesting the txouts for a specific address, a lightweight client would probably want to submit a bloom filter to its peer that would yield enough false positives to obfuscate its ownership of the address.
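Something like the following is what I have in mind (a toy sketch with made-up parameters, not an actual protocol message): the client inserts its addresses and picks a deliberately high false-positive rate, so the peer can't tell which of the matching txouts are really the client's:

Code:
# Toy bloom filter a lightweight client could send instead of an address:
# standard m = -n*ln(p)/ln(2)^2 sizing, with p chosen high enough that
# decoy matches hide the client's real addresses.
import hashlib
import math

class BloomFilter:
    def __init__(self, num_elements: int, fp_rate: float):
        m = int(-num_elements * math.log(fp_rate) / (math.log(2) ** 2))
        self.size = max(m, 8)                 # filter size in bits
        self.k = max(1, int(self.size / num_elements * math.log(2)))
        self.bits = bytearray((self.size + 7) // 8)

    def _positions(self, data: bytes):
        for i in range(self.k):
            h = hashlib.sha256(i.to_bytes(4, 'little') + data).digest()
            yield int.from_bytes(h[:8], 'little') % self.size

    def add(self, data: bytes) -> None:
        for pos in self._positions(data):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def __contains__(self, data: bytes) -> bool:
        return all(self.bits[pos // 8] & (1 << (pos % 8))
                   for pos in self._positions(data))

# e.g. size for 20 elements at a ~5% false-positive rate, then insert one
# real address hash; the peer sends back everything that matches.
f = BloomFilter(num_elements=20, fp_rate=0.05)
f.add(b'real_address_hash_here')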