981  Bitcoin / Development & Technical Discussion / Re: How a floating blocksize limit inevitably leads towards centralization on: February 19, 2013, 09:11:16 AM
Please people, understand one thing: you can't run a full payment network used by millions as a hobby. The day Bitcoin becomes something really serious (i.e., thousands of transactions per second), being a full node will necessarily be a professional task. It won't be something you can do on your laptop, "just for fun".

Right now the capacity of Bitcoin is about half a million transactions per day. So you can participate in that level of transactions as a hobby. The value of those transactions can be as high as required. If Bitcoin does become a widespread store of value, blocks will probably be transferring hundreds of millions of dollars worth of value each, tens of billions every day.

But after all, it's just information, so yes, participating will be perfectly possible as a hobby, and for a fairly affordable fee, you'll be able to even make transactions directly on the world's decentralized value transfer service, the same system big banks will use.

EDIT: And, as Mike said, the idea of converting Bitcoin into some replacement to SWIFT with $20 fees for transactions, which would force people to use bank-like institutions for daily transfers, just because you want "ordinary people to verify transactions", totally turns me off. Bitcoin can be much more than that. If you actually want it to remain the censorship-resistant currency that it is, it has to remain suitable for small transactions like buying some plugin from Wordpress. If you want Bitcoin to remain an alternative for those seeking financial privacy, you have to keep it suitable for SR users and the like - otherwise all these "bank-like" payment processors would ruin your privacy. If you want Bitcoin to remain an alternative for those trying to protect their purchasing power from inflation, you have to keep it suitable for those who want to protect their daily money on their own, without having to use a bank just for storage purposes, which would recreate the incentive for fractional reserves. The list can go on. Bitcoin has the potential to be much more than SWIFT 2.0. But for that, processing transactions will have to become a professional activity (it kinda already is actually).

Absolutely. But the solution isn't to make access to the core Bitcoin network, the thing that actually keeps Bitcoin inflation free and secure, require such a huge investment in computer hardware that only big banks and other large institutions can afford access. The solution is to keep blocks small, and build payment systems that work on top of the block chain.

Remember that if the blockchain is kept small enough that validating it is affordable, you don't have to trust the payment processors very much. The protocols will be designed in ways that allow anyone to prove fraud automatically and warn the whole world. The client software people use will see these fraud proofs, and immediately stop using the payment processor, putting them out of business. Yet at the same time, using technologies like chaum tokens, those payment processors can't even know where payments are going to; your privacy is even more protected than with on-chain transactions, because the links connecting one transaction to another are severed with unbreakable mathematics.

Do you think the banking crisis would have happened if banks were forced to have all their bank-to-bank transactions publicly recorded for the whole world to see? Keeping the blocksize limited does exactly that.

But maybe not, and if just one miner starts creating gigabyte blocks, while all the rest agrees on 10 MiB blocks, ugly block-shunning rules will be necessary to avoid such blocks from filling everyone's hard drive (yes, larger blocks' slower relay will make them unlikely to be accepted, but it just requires one lucky fool to succeed...).

Succeed in what? Killing everybody else? Do you realize that would likely require more than 50% of the network processing power, otherwise the "unacceptably-gigantic" block would always be an orphan? Miners would likely reject blocks way too large, especially if it's filled with transactions never seen before (i.e., a likely attempt at flooding).

Ok, so a 10GiB block is unacceptably large. What about a 5GiB block? Or a 1GiB block? Or a 500MiB block? At some point the block will be confirmed by a large fraction of the hashing power, but not all the hashing power. The hashing power that couldn't process that gigantic block in time has effectively dropped off of the network, and is no longer contributing to the security of the network.

So repeat the process again. It's now easier to push an even bigger block through, because the remaining hashing power is now less. Maybe the hashing power has just given up on Bitcoin mining, maybe they've redirected their miners to one of the remaining pools that can process such huge blocks; either way, bit by bit the process inevitably leads to centralization.

EDIT: And also, as a general comment on the discussion, you people fearing "too much centralization", as in "too few market participants", should realize that, at most, what would happen would be a few pool operators, like we have now. Pool operators do not own the processing power. Such processing power will remain scattered among thousands of people, who may easily migrate to different pools if they feel like it. Pretty much like what already happens. Current pools need to have some "professional bandwidth" if only for protecting against DDoS; it already requires professional resources to run a mining pool.

Pool operators do own hashing power if the miners contributing the hashing power can't effectively validate the blocks they mine.

If running a validating node requires thousands, or even tens of thousands, of dollars worth of expensive equipment, how exactly do you expect to even find out that you've been mining at a dishonest pool? If >50% of the people mining and running validating pools decide to get together and create bogus transactions creating coins out of thin air, you won't even know they've been defrauding everyone.(1) If running a node requires tens of thousands of dollars worth of equipment, and it will to support Visa-scale transaction volumes, only a small handful of large banks are going to run nodes. I think you can see how collusion between half-a-dozen large banks becomes not just possible, but likely.

1) Yes, you can try to create automated fraud proof mechanisms to detect it - I wrote about the idea here - but implementing the software to process fraud proofs is extremely complex, much more complex than applying the same idea to keeping off-chain banking services honest. I also have little hope that those mechanisms will actually get written and tested before the much simpler step of just lifting the block limit is taken.

In this thread I'm a bit disappointed in Gavin. I used to see him as a very conservative project leader, only including changes when there's community consensus about it and no doubt about its security implications. And I liked that, even though it meant that some of the changes I support are not going to be included. For a monetary system, trust and stability are essential, and I hope Gavin will continue to provide that trust and stability, so hopefully he just considers abandoning the transaction limit as an academic "thought experiment", and not something he is planning to actually put into the code in the near term.

I agree 100%. Increasing the block limit seems like a conservative change - it's just one little number - but the long-term implications are enormous and have the potential to drastically change what Bitcoin is. It may be a conservative change for the small number of big businesses that are heavily invested in the current system, and can afford the network power to process large blocks, but it's not a conservative change for the rest of us.
982  Economy / Service Discussion / Re: Satoshi Dice -- Statistical Analysis on: February 19, 2013, 07:11:01 AM
I'm a little late today.  I've been trying and failing to chop down dead trees for firewood.  I'm kind of incompetent with a chainsaw and ended up breaking it.  Ho hum.

Any day you use a chainsaw incompetently and retain all your limbs is a good day Smiley

dooglus: Just sent you a 0.1BTC donation for your excellent Satoshidice analysis work so you can buy some chainsaw pants. Tongue
983  Bitcoin / Development & Technical Discussion / Re: BIP: Increasing the Network Hashing Power by reducing block propagation time on: February 19, 2013, 06:49:52 AM
"header" command format is:

- Block header
- transactions hash list
- Coinbase transaction (maximum 10 Kbytes in size)

An average "header" command size (for an 1 Mbyte block, considering an average 400 bytes tx) is 80 kbytes, that takes 1.5 seconds per hop.

You have to be careful with transmitting transaction hash lists rather than the transactions themselves. While it definitely makes propagation faster in the average case, it also means that the worst case, a block entirely composed of transactions that have not been previously broadcast on the network, is significantly worse. For the purpose of just reducing orphans, as is done by P2Pool as gmaxwell pointed out, the worst case isn't a problem, but don't make assumptions that have security implications. In particular, by the mechanism I pointed out here you'll actually create a market for large-hashpower miners where they can offer lower fees, paid for by the lower effective competition, provided the sender promises not to send the transaction to any other miner. I can't see such markets developing for 1MiB blocks, but they might develop for much larger block sizes.
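A rough sketch of the best case versus the worst case implied by those numbers (the figures are the proposal's illustrative assumptions, and the linear scaling of hop time with message size is my simplification):

Code:
# Best vs. worst case for hash-list block relay, using the quoted figures.
block_bytes = 1_000_000
avg_tx_bytes = 400
hash_bytes = 32

n_tx = block_bytes // avg_tx_bytes               # ~2500 transactions per block
header_cmd_bytes = n_tx * hash_bytes             # ~80 KB when peers already have every tx
seconds_per_hop_best = 1.5                       # quoted propagation time for that 80 KB

# Worst case: none of the transactions were previously broadcast, so the full
# block data has to follow the hash list anyway.
worst_case_factor = (header_cmd_bytes + block_bytes) / header_cmd_bytes
print(n_tx, header_cmd_bytes)                            # 2500 80000
print(seconds_per_hop_best * worst_case_factor)          # ~20 seconds per hop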

P2Pool is interesting because miners on it have an incentive for any P2Pool block to be propagated to the network as a whole as fast as possible. In addition the perverse propagation incentives for shares within P2Pool are probably less of an issue given that the higher the hash power of P2Pool as a whole, the lower the variance for any individual miner - miners on P2Pool aren't playing a zero-sum game. P2Pool miners also have very little control over propagation because P2Pool shares are all identical.
984  Bitcoin / Development & Technical Discussion / Re: How a floating blocksize limit inevitably leads towards centralization on: February 19, 2013, 06:03:54 AM
However, with no limit on block size, it effectively becomes miners who are in control of _everyone_'s block size. As a non-miner, this is not something I want them to decide for me. Perhaps the tragedy of the commons can be avoided, and long-term rational thinking will kick in, and miners can be trusted with choosing an appropriate block size. But maybe not, and if just one miner starts creating gigabyte blocks, while all the rest agrees on 10 MiB blocks, ugly block-shunning rules will be necessary to avoid such blocks from filling everyone's hard drive (yes, larger blocks' slower relay will make them unlikely to be accepted, but it just requires one lucky fool to succeed...).

Well, read my initial post at the top; the fact that larger blocks don't propagate to the network as a whole actually benefits the miner, because provided the blocks propagate to more than 50% of the effective hashing power, the part that doesn't get the block is effectively wasting its mining effort and is taken out of the competition.

Additionally even if miners see a rational reason to keep block sizes low, which I already doubt, allowing them to control the size gives irrational miners who are trying to actively damage Bitcoin another way to do so. Right now all that an evil miner can do is either help double-spend attempts, easily defeated with confirmations, or launch a 51% attack, so far defeated with large amounts of hashing power. We don't want to give yet more ways for malicious people to damage Bitcoin, especially ones which are actually profitable in the short term.

My suggestion would be a one-time increase to perhaps 10 MiB or 100 MiB blocks (to be debated), and after that an at-most slow exponential further growth. This would mean no for-eternity limited size, but also no way for miners to push up block sizes to the point where they are in sole control of the network. I realize that some people will consider this an arbitrary and unnecessary limit, but others will probably consider it dangerous already. In any case, it's a compromise and I believe one will be necessary.

I mean, I'm not totally against a one-time increase if we really need it. But what I don't want to see is an increase used as a way to avoid the harder issue of creating alternatives to on-chain transactions. For one thing, Bitcoin will never make for a good micropayments system, yet people want Bitcoin to be one. We're much better off if people work on off-chain payment systems that complement Bitcoin, and there are plenty of ways, like remote attestation capable trusted hardware and fidelity bonds, that allow such systems to be made without requiring trust in central authorities.

I would hate to see the limit raised before the most inefficient uses of blockchain space, like satoshidice and coinad, change the way they operate. In addition I would hate to see alternatives to raising the limit fail to be developed because everyone assumes the limit will be raised. I also get the sense that Gavin's mind is already made up and the question to him isn't if the limit will be raised, but when and how. That may or may not be actually true, but as long as he gives that impression, and the Bitcoin Foundation keeps promoting the idea that Bitcoin transactions are always going to be almost free, raising the block limit is inevitable.

Anyway, there are plenty of good reasons to have off-chain transactions regardless of what the block limit is. In particular they can confirm instantly, so no waiting 6 blocks, and they can use chaum tokens to truly guarantee that your transactions are private and your personal financial information stays personal.
985  Bitcoin / Press / Re: 2013-02-18 newstatesman.com - How Paypal robs the Bank of England on: February 18, 2013, 10:23:52 PM
Quote
If you do, things get trickier; the exchanges have had a number of high-profile failures, and are probably the weakest point in a network which manages to combine cryptographic perfection with an incredible amount of possibilities for human error.

That has got to be one of the most accurate descriptions of the security of Bitcoin I've seen in non-Bitcoin press.
986  Bitcoin / Development & Technical Discussion / Re: How a floating blocksize limit inevitably leads towards centralization on: February 18, 2013, 09:46:16 PM
Network assurance contracts are far from a sure thing. It's basically an attempt to solve the tragedy of the commons, and the success rate society has had there is pitiful, even with strong central authorities. Assuming they will work is a big risk.

Sure, there is some risk, but Kickstarter is showing that the general concept can indeed fund public goods.

For consumer products where you get a tangible object in return. Security through hashing power is nothing like kickstarter.

1.2 megabytes a second is only ~10 megabits per second - pretty sure my parents' house has more bandwidth than that. Google is wiring Kansas City with gigabit fibre right now, and we're not running it as a charity. So network capacity doesn't worry me a whole lot. There's plenty of places in the world that can keep up with a poxy 10 megabits.

No, that's a 1.2MiB/s average; you need well above that to keep your orphan rate down.

Again, you're making assumptions about the hardware available in the future, and big assumptions. And again you are making it impossible to run a Bitcoin node in huge swaths of the world, not to mention behind Tor.

You also have to ask the question, what % of that 3TiB/month results in unspent txouts? Ultimately it's the UTXO set that is the hard limit on the storage requirements for full validating nodes. Even at just 1% volume growth, you're looking at 3GiB/month growth in your requirement for fast-random-access memory.

How did you arrive at 3GB/month? The entire UTXO set currently fits in a few hundred megs of RAM.

I'm assuming 1% of transactions per month get added to the UTXO set. With cheap transactions, increased UTXO set consumption for trivial purposes, like satoshidice's stupid failed-bet messaging and timestamping, is made more likely, so I suspect 1% is reasonable.

Again, other than making old UTXOs eventually become unspendable, I don't see any good solutions to UTXO growth.

All the time you're spending waiting for transactions to be retrieved from memory is time you aren't hashing.

Why? Hashing happens in parallel to checking transactions and recalculating the merkle root.

I mean proof of work hashing for mining. If you don't know what transactions were spent by the previous block, you can't safely create the next block without accidentally including a transaction spent by the previous one, and thus invalidating your block.

Your example has nothing to do with Bitcoin. Even in the early days it would be obvious to anyone who understood comp-sci that static websites are O(1) scaling per client so there isn't any reason to think you couldn't create websites for as much load as you wanted.

Nobody in 1993 could build a website that the entire world used all the time (like Google or Wikipedia). The technology did not exist.

Don't be silly. Even in 1993 people knew that you would be able to do things like have DNS servers return different IP's each time - Netscape's 1994 homepage used hard-coded client-side load-balancing implemented in the browser for instance.

DNS is another good example: the original hand-maintained hosts.txt file was unscalable, and sure enough it was replaced by the hierarchical and scalable DNS system in the mid-80s.

Or what about the global routing table? Every backbone router needs a complete copy of the routing table. BGP is a broadcast network. How can the internet backbone scale? Perhaps we should only allow people to access the internet at universities to avoid uncontrollable growth of the routing table.

...and what do you know, one of the arguments for IPv6 back in the early 90s was that the IPv4 routing space wasn't very hierarchical and would lead to scaling problems for routers down the line. The solution implemented has been to use various technological and administrative measures to keep top-level table growth in control. In 2001 there were 100,000 entries, and 12 years later in 2013 there are 400,000 - nearly linear growth. Fortunately the nature of the global routing table is that linear top-level growth can support quadratic and more growth in the number of underlying nodes; getting access to the internet does not contribute to the scaling problem of the routing table.

On the other hand, getting provider-independent address space, a resource that does increase the burden on the global routing table, gets harder and harder every year. Like Bitcoin it's an O(n^2) scaling problem, and sure enough the solution followed has been to keep n as low as possible.

The way the internet has actually scaled is more like what I'm proposing with fidelity-bonded chaum banks: some number of n banks, each using up some number of transactions per month, but in turn supporting a much larger number m of clients. The scaling problem is solved hierarchically, and thus becomes tractable.

Heck, while we're playing this game, find me a single major O(n^2) internet scaling problem that's actually been solved by "just throwing more hardware at it", because I sure can't.

I just don't see scalability as ever being a problem, assuming effort is put into better software. Satoshi didn't think this would be a problem either, it was one of the first conversations we ever had. These conversations have been going around and around for years. I am unconvinced we're developing better insight into it anymore. Satoshi's vision was for the block limit to be removed. So let's do it.

Appeal to authority. Satoshi also didn't make the core and the GUI separate, among many, many other mistakes and oversights, so I'm not exactly convinced I should assume that just because he thought Bitcoin could scale it actually can.
987  Bitcoin / Development & Technical Discussion / Re: How a floating blocksize limit inevitably leads towards centralization on: February 18, 2013, 08:35:12 PM
In the absence of a block size cap miners can be supported using network assurance contracts. It's a standard way to fund public goods, which network security is, so I am not convinced by that argument.

Network assurance contracts are far from a sure thing. It's basically an attempt to solve the tragedy of the commons, and the success rate society has had there is pitiful, even with strong central authorities. Assuming they will work is a big risk.

Perhaps I've been warped by working at Google so long but 100,000 transactions per second just feels totally inconsequential. At 100x the volume of PayPal each node would need to be a single machine and not even a very powerful one. So there's absolutely no chance of Bitcoin turning into a PayPal equivalent even if we stop optimizing the software tomorrow.

But we're not going to stop optimizing the software. Removing the block cap means a hard fork, and once we decided to do that we may as well throw in some "no brainer" upgrades as well, like supporting ed25519 which is orders of magnitude faster than ECDSA+secp256k1. Then a single strong machine can go up to hundreds of thousands of transactions per second.

I don't see any reason to think CPU power will be the issue. It's network capacity and disk space that is the problem. Your 100x the volume of PayPal is 4000 transactions a second, or about 1.2MiB/second, and you'll want to be able to burst quite a bit higher than that to keep your orphan rate down when new blocks come in. Like it or not that's well beyond what most internet connections in most of the world can handle, both in sustained speed and in quota. (that's 3TiB/month) Again, P2Pool will look a heck of a lot less attractive.
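A quick sketch reproducing that bandwidth arithmetic; the ~300-byte average transaction size is an assumption (consistent with the two-in, two-out figure in a later post), not a measured number:

Code:
# Back-of-the-envelope for "100x PayPal" transaction volume.
tx_per_s = 4000
tx_bytes = 300                               # assumed average transaction size
bytes_per_s = tx_per_s * tx_bytes            # 1,200,000 B/s ~= 1.2 MB/s ~= 10 Mbit/s sustained
per_month_tib = bytes_per_s * 86400 * 30 / 2**40
print(bytes_per_s / 2**20, per_month_tib)    # ~1.1 MiB/s, ~2.8 TiB/month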

You also have to ask the question, what % of that 3TiB/month results in unspent txouts? Ultimately it's the UTXO set that is the hard limit on the storage requirements for full validating nodes. Even at just 1% volume growth, you're looking at 3GiB/month growth in your requirement for fast-random-access memory. That's an ugly, ugly requirement - after all if a block has n transactions, your average access time per transaction must be limited to 10minutes/n to even just keep up.
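One way the ~3GiB/month figure can be reproduced, as a sketch; the per-entry size of a bare unspent txout is my assumption, not a number from the post:

Code:
# UTXO set growth if 1% of transactions leave a lasting unspent output.
tx_per_s = 4000
utxo_fraction = 0.01                 # assumed: 1% of transactions add a long-lived UTXO
utxo_entry_bytes = 30                # assumed: outpoint + value + a small script, roughly
growth_bytes = tx_per_s * utxo_fraction * utxo_entry_bytes * 86400 * 30
print(growth_bytes / 2**30)          # ~2.9 GiB/month of fast-random-access storage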

EDIT: also, it occurs to me that one of the worst things about the UTXO set is the continually increasing overhead it implies. You'll probably be lucky if cost/op/s scales by even something as good as log(n) due to physical limits, so you'll gradually be adding more and more expensive constantly on-line hardware for less and less value. All the time you're spending waiting for transactions to be retrieved from memory is time you aren't hashing. In addition your determinism goes down because inevitably the UTXO set will be striped across multiple storage devices, so at worst every tx turns out to be behind one low-bandwidth connection. God help you if an attacker figures out a way to find the worst sub-set to pick. UTXO proofs can help a bit - a transaction would include its own proof that it is in the UTXO set for each txin - but that's a lot of big scary changes with consensus-sensitive implications.

Again, keeping blocks small means that scaling mistakes, like the stuff Sergio keeps on finding, are far less likely to turn into major problems.

The cost of a Bitcoin transaction is just absurdly low and will continue to fall in future. It's like nothing at all. Saying Bitcoin is going to get centralized because of high transaction rates is kinda like saying in 1993 that the web can't possibly scale because if everyone used it web servers would fall over and die. Well yes, they would have done, in 1993. But not everyone started using the web overnight and by the time they did, important web sites were all using hardware load balancers and multiple data centers and it was STILL cheap enough that Wikipedia - one of the world's top websites - could run entirely off donations.

Your example has nothing to do with Bitcoin. Even in the early days it would be obvious to anyone who understood comp-sci that static websites are O(1) scaling per client so there isn't any reason to think you couldn't create websites for as much load as you wanted. Meanwhile, unlike Wikipedia, Bitcoin requires global shared state that must be visible to, and mutable by, every client. Comparing the two ignores some really basic computer science that was very well understood even when the early internet was created in the 70's.
988  Bitcoin / Development & Technical Discussion / Re: How a floating blocksize limit inevitably leads towards centralization on: February 18, 2013, 07:45:30 PM
RE: lots of code to write if you can't keep up with transaction volume:  sure.  So?

Well, one big objection is the code required is very similar to that required by fidelity-bonded bank/ledger implementations, but unlike the fidelity stuff it's consensus-critical, so screwing it up creates problems that are far more difficult to fix and far more widespread in scale.


Transaction volume itself leads to centralization too, simply by ensuring that only a miner able to keep up with the large volume of low-fee transactions can make a profit.

I really don't understand this logic.

Yes, it is a fact of life that if you have a system where people are competing, the people who are less efficient will be driven out of business. So there will be fewer people in that business.

You seem to be saying that we should subsidize inefficient miners by limiting the block size, therefore driving up fees and making users pay for their inefficiency.

"This mining this is crazy, like all that work when you could just verify a transaction's signatures, and I dunno, ask a bunch of trusted people if the transaction existed?"

So, why do we give miners transaction fees anyway? Well, they are providing a service of "mining a block", but the real service they are providing is the service of being independent from other miners, and we value that because we don't want >50% of the hashing power to be controlled by any one entity.

When you say these small miners are inefficient, you're completely ignoring what we actually want miners to do, and that is to provide independent hashing power. The small miners are the most efficient at providing this service, not the least.

The big issue is the cost to be a miner comes in two forms, hashing power and overhead. The former is what makes the network secure. The latter is a necessary evil, and costs the same for every independent miner. Fortunately with 1MiB blocks the overhead is low enough that individual miners can profitably mine on P2Pool, but with 1GiB blocks P2Pool mining just won't be profitable. We already have 50% of the hashing power controlled by about three or four pools - if running a pool requires thousands of dollars worth of equipment the situation will get even worse.

Of course, we've also been focusing a lot on miners, when the same issue applies to relay nodes too. Preventing DoS attacks on the flood-fill network is going to be a lot harder when most nodes can't verify blocks fast enough to know if a transaction is valid or not, and hence whether the limited resource of priority or fees is being expended by broadcasting it. Yet if the "solution" is fewer relay nodes, you've broken the key security assumption that information is easy to spread and difficult to stifle.

All in the name of vague worries about "too much centralization."

Until Bitcoin has undergone a serious attack we just aren't going to have a firm idea of what's "too much centralization".
989  Bitcoin / Development & Technical Discussion / Re: How a floating blocksize limit inevitably leads towards centralization on: February 18, 2013, 07:09:02 PM
Half-baked thoughts on the O(N) problem:

So, we've got O(T) transactions that have to get verified.

And, right now, we've got O(P) full nodes on the network that verify every single transaction.

So, we get N verifications, where N = T*P.

The observation is that if both T and P increase at the same rate, that formula is O(N^2).

... and at this point your (and gmaxwell's) imagination seems to run out, and you throw up your hands and say "We Must Limit Either T or P."

Really?

If we have 20,000 full nodes on the network, do we really need every transaction to be verified 20,000 separate times?

I think as T and P increase it'd be OK if full nodes with limited CPU power or bandwidth decide to only fetch and validate a random subset of transactions.


Well you'll have to implement the fraud proofs stuff d'aniel talked about and I later expanded on. You'll also need a DHT so you can retrieve arbitrary transactions. Both require a heck of a lot of code to be written, working UTXO proofs for fraud proofs in particular; random transaction verification is quite useless without the ability to tell everyone else that the block is invalid.

Things get ugly though... block validation isn't deterministic anymore: I can have one tx out of a million invalid, yet it still makes the whole block invalid. You better hope someone is in fact running a full-block validator and the fraud proof mechanism is working well or it might take a whole bunch of blocks before you find out about the invalid one with random sampling. The whole fraud proofs implementation is also now part of the consensus problem; that's a lot of code to get right.
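To put rough numbers on that, here is a sketch of how weak per-node random sampling is against a single invalid transaction; the block and sample sizes are illustrative assumptions:

Code:
# Chance of one node catching a single invalid tx by random sampling.
import math

n = 1_000_000        # transactions in the block (one of them invalid)
k = 10_000           # transactions a single node can afford to check
p_detect = 1 - (1 - 1 / n) ** k
print(p_detect)                                          # ~1% per node per block
print(math.ceil(math.log(0.5) / math.log(1 - 1 / n)))    # ~693,000 checks for even a 50% chance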

In addition partial validation still doesn't solve the problem that you don't know which txs in your mempool are safe to include in the next block unless you know which ones were spent by the previous block. Mining becomes a game of odds, and the UTXO tree proposals don't help. A UTXO bloom filter might, but you'll have to be very careful that it isn't subject to chosen-key attacks. Transaction volume itself leads to centralization too, simply by ensuring that only a miner able to keep up with the large volume of low-fee transactions can make a profit.

I've already thought of your idea, and I'm sure gmaxwell has too... our imagination didn't "run out".
990  Bitcoin / Development & Technical Discussion / Re: How a floating blocksize limit inevitably leads towards centralization on: February 18, 2013, 06:42:30 PM
I agree with Gavin, and I don't understand what outcome you're arguing for.

You want to keep the block size limit so Dave can mine off a GPRS connection forever? Why should I care about Dave? The other miners will make larger blocks than he can handle and he'll have to stop mining and switch to an SPV client. Sucks to be him.

I primarily want to keep the limit fixed so we don't have a perverse incentive. Ensuring that everyone can audit the network properly is secondary.

If there was consensus to, say, raise the limit to 100MiB that's something I could be convinced of. But only if raising the limit is not something that happens automatically under miner control, nor if the limit is going to just be raised year after year.

Your belief we have to have some hard cap on the N in O(N) doesn't ring true to me. Demand for transactions isn't actually infinite. There is some point at which Bitcoin may only grow very slowly if at all (and is outpaced by hardware improvements).

Yes, there will likely only be around 10 billion people on the planet, but that's a hell of a lot of transactions. At one transaction per person per day you've got 115,700 transactions per second. Sorry, but there are lots of reasons to think Moore's law is coming to an end, and in any case the issue I'm most worried about is network scaling, and network scaling doesn't even follow Moore's law.

Making design decisions assuming technology is going to keep getting exponentially better is a huge risk when transistors are already only a few orders of magnitude away from being single atoms.

Likewise, miners have all kinds of perverse incentives in theory that don't seem to happen in practice. Like, why do miners include any transactions at all? They can minimize their costs by not doing so. Yet, transactions confirm. You really can't prove anything about miners behaviour, just guess at what some of them might do.

The fact that miners include transactions at all is a great example of how small the block limit is. Right now the risk of orphans due to slow propagation is low enough that the difference between a 1KiB block and a 250KiB block is so inconsequential that pools just run the reference client code and don't bother tweaking it. I wouldn't be the slightest bit surprised to be told that there aren't any pools with even a single full-time employee, so why would I expect people to really put in the effort to optimize revenue, when it'll probably lead to a bunch of angry forum posts and miners leaving because they think the pool will damage Bitcoin?

I don't personally have any interest in working on a system that boils down to a complicated and expensive replacement for wire transfers. And I suspect many other developers, including Gavin, don't either. If Gavin decides to lift the cap, I guess you and Gregory could create a separate alt-coin that has hard block size caps and see how things play out over the long term.

I don't have any interest in working on a system that boils down to a complicated and expensive replacement for PayPal.

Decentralization is the fundamental thing that makes Bitcoin special.
991  Bitcoin / Development & Technical Discussion / Re: Does a Request for Comments (RFC) for the Bitcoin protocol exist? on: February 18, 2013, 06:26:59 PM
Of course, I can't think of any projects you've actually created, so I don't have any reason to think you've actually run into any of the supposed serious limitations inherent in the Satoshi implementation that only a complete re-write can solve.

haha you mean there is no one who fully understands the "whole thing"?? and you expect that "the rest" will trust the "whole thing"??

That's exactly what I mean. Unfortunately it's true, and the solution has been to use the code Satoshi wrote, the reference client, as a module by itself to talk to the network, and then write other modules, such as your wallet code and business logic, that interface to the Satoshi client.

If my answer results in you not being able to trust Bitcoin, then sell your coins and stop using it.

Those arguments about block size are a classic example of a strawman argument from the "old money" & "old code" side. The forward-difference state maintenance was good for the proof-of-concept. But in practice nearly everyone does reverse-differencing.

https://bitcointalk.org/index.php?topic=87763.msg965877#msg965877

It is a really old and well-researched problem, and almost all practical solutions use reverse-delta or some variant involving reverse-differencing.

I fully understand that given current situation there are no human (and other) resources available to move the network protocol and the code base forward.

Alright, so you are talking about either one of two things:

1) The reference implementation should use a reverse-delta scheme to actually store transactions. But as I've said, the current system runs fine.

2) You're suggesting implementing the UTXO concept, probably as a hard-fork change. Funny enough though, I'm actually working on writing a prototype UTXO implementation right now based on TierNolan's suggestion to use Radix/PATRICIA trees. A UTXO set is definitely something the devs want to see in the reference client, although it's a long-term goal, and in any case no-one has even done a prototype yet. If it is adopted, yes, blocks may eventually be 100% reverse-delta, but that's a really long way off because you would need a consensus...
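Not the prototype itself, but for illustration, a minimal sketch of the general idea: key unspent txouts into a nibble trie (a plain trie here, without the path compression a real Radix/PATRICIA tree would have) and hash it Merkle-style so the root commits to the whole UTXO set. The key and leaf encodings are made up for the example:

Code:
import hashlib

def H(*parts: bytes) -> bytes:
    return hashlib.sha256(b"".join(parts)).digest()

class Node:
    def __init__(self):
        self.children = {}   # nibble (0-15) -> child Node
        self.leaf = None     # serialized txout stored at this key, if any

    def hash(self) -> bytes:
        # Hash the leaf plus each child's hash in nibble order, so the root
        # hash commits to every entry and a path of hashes proves membership.
        acc = self.leaf or b""
        for nib in sorted(self.children):
            acc += bytes([nib]) + self.children[nib].hash()
        return H(acc)

def insert(root: Node, key: bytes, value: bytes) -> None:
    node = root
    for byte in key:
        for nib in (byte >> 4, byte & 0x0F):
            node = node.children.setdefault(nib, Node())
    node.leaf = value

# Usage: key each unspent txout by H(txid || output index).
root = Node()
txid = H(b"example transaction")
insert(root, H(txid, (0).to_bytes(4, "little")), b"serialized txout 0")
print(root.hash().hex())   # commitment to the (one-entry) UTXO set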


As much as you hate my Bitcoin Airlines story it is a decent analogy: adding "fuel fees" to the "seat ticket transaction price" isn't going to make the Airline more popular and safer.

...or maybe you're confusing reverse-delta schemes with the blocksize limit. Reverse-delta doesn't decrease the size of blocks, only the amount of data needed to prove the existence of a given transaction. (this is why I'm doing my prototype; I'll need it for fidelity bonded trusted ledgers eventually) Again, RFCs and alternate implementations have nothing to do with this issue.

Anyway, I've wasted my time enough and have real work to do.
992  Bitcoin / Development & Technical Discussion / Re: How a floating blocksize limit inevitably leads towards centralization on: February 18, 2013, 06:08:14 PM
So...  I start from "more transactions == more success"

I strongly feel that we shouldn't aim for Bitcoin topping out as a "high power money" system that can process only 7 transactions per second.

Hey, I want a pony too. But Bitcoin is an O(n) system, and we have no choice but to limit n.

I agree with Stephen Pair-- THAT would be a highly centralized system.

A "highly centralized" system where anyone can get a transaction confirmed by paying the appropriate fee? A fee that would be about $20 (1) for a typical transaction even if $10 million a day, or $3.65 billion a year, goes to miners keeping the network secure for everyone?

I'd be very happy to be able to wire money anywhere in the world, completely free from central control, for only $20. Equally I'll happily accept more centralized methods to transfer money when I'm just buying a chocolate bar.


1) $10,000,000/144blocks = $69,440/block
     / 1MiB/block = $69.44/KiB

A two-in, two-out transaction with compressed keys is about 300 bytes, thus $20.35 per transaction.
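The same arithmetic as a sketch (all figures are the round-number assumptions from the footnote):

Code:
# Fee per transaction if $10M/day in fees pays for security.
daily_fees_usd = 10_000_000
blocks_per_day = 144
block_kib = 1000                      # treating the 1MiB block as ~1000 KiB, as the footnote does
tx_bytes = 300                        # two-in, two-out transaction with compressed keys

usd_per_block = daily_fees_usd / blocks_per_day     # ~$69,440
usd_per_kib = usd_per_block / block_kib             # ~$69.44
usd_per_tx = usd_per_kib * tx_bytes / 1024          # ~$20 per transaction
print(round(usd_per_block), round(usd_per_kib, 2), round(usd_per_tx, 2))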

So, as I've said before:  we're running up against the artificial 250K block size limit now, I would like to see what happens. There are lots of moving pieces here, so I don't think ANYBODY really knows what will happen (maybe miners will collectively decide to keep the block size low, so they get more fees.  Maybe they will max it out to force out miners on slow networks.  Maybe they will keep it small so their blocks relay through slow connections faster (maybe there will be a significant fraction of mining power listening for new blocks behind tor, but blasting out new blocks not via tor)).

That sounds like a whole lot of "maybe". I agree that we need to move cautiously, but fundamentally I've shown why a purely profit-driven miner has an incentive to create blocks large enough to push other miners out of the game, and gmaxwell has made the point that a purely profit-driven miner has no incentive not to add an additional transaction to a block if the transaction fee is greater than the cost in terms of decreased block propagation leading to orphans. The two problems are complementary in that decreased block propagation actually increases revenues up to a point, and the effect is most significant for the largest miners. Unless someone can come up with a clear reason why gmaxwell and myself are both wrong, I think we've shown pretty clearly that floating blocksize limits will lead to centralization.

Variance already has caused the number of pools out there to be fairly limited; we really don't want more incentives for pools to get larger.

I think we should put users first. What do users want? They want low transaction fees and fast confirmations. Let's design for that case, because THE USERS are who ultimately give Bitcoin value.

They want something impossible from an O(n) system without making it centralized. We've already got lots of centralized systems - creating another one doesn't do the world any good. We've only got one major decentralized payment system, Bitcoin, and I want to keep it that way. Users can always use centralized systems for low-value transactions, and if block sizes are limited they'll even be able to very effectively audit the on-chain transactions produced by those centralized systems. Large blocks do not let you do that.

Ultimately, the problem is the huge amount of expensive infrastructure built around the assumption that transactions are nearly free. Businesses make decisions based on what will happen at most 3-5 years in the future, so naturally the likes of Mt. Gox, BitInstant, Satoshidice and others have every reason to want the block size limit to be lifted. It'll save them money now, even if it will lead to a centralized Bitcoin five or ten years down the road.
993  Bitcoin / Development & Technical Discussion / Re: How a floating blocksize limit inevitably leads towards centralization on: February 18, 2013, 05:28:43 PM
Wouldn't already a valid header (or even just the hash of that header) be enough to start mining at least an empty block?

Yes, but an empty block doesn't earn you any revenue as the block reward drops, so mining is still pointless. You still need the full block to know what transactions were mined, and thus what transactions in the mempool are safe to include in the block you want to attempt to mine.

Additionally without the full block, you don't know if the header is valid, so you are vulnerable to miners feeding you invalid blocks. Of course, someone has to create those invalid block hashes, but the large miners are the only ones who can validate them, so if smaller miners respond by taking risks like mining blocks even when they don't know what transactions have already been mined, the larger miners can run lots of nodes to find those blocks as fast as possible, and distribute them to other small miners without the equipment to validate the blocks.

Also if you produce blocks large and fast enough to drive someone out of mining you'd also drive a lot more full clients off the network.

Sure, but all the scenarios where extremely large blocks are allowed are also assuming that most people only run mini-SPV clients at best; if one of the smaller "full-node transaction feed" services gets put off-line, your customers, that is transaction creators, will just move to a larger service for their tx feed.

Miners already have quite high incentives to DDoS (or otherwise break) all other pools that they are not part of, no matter the block size. I think there are more effective, less disruptive for users and cheaper ways of driving competing miners off the grid than a bandwidth war.

Yes, but DoSing nodes by launching DoS attacks is illegal. DoSing full-nodes by just making larger blocks isn't. For the largest miner/full-node service the cost of launching such an attack is zero, they've already paid for the extra hardware capacity, so why not use it to its full advantage? So what if doing so causes 5% of the network to drop out.

The most dangerous part of this scenario is that you don't need miners to even act maliciously for it to happen.  The miner with the largest investment in fixed costs, network capacity and CPU power, has a profit motive to use that expensive capacity to the fullest extent possible. The fact that doing so happens to push the miner with the smallest investment in fixed costs off of the network, furthering the largest's profits due to mining, is inevitable. Furthermore the process is guaranteed to happen again, because the largest miner has no reason not to take those further mining profits and invest in yet more network capacity and CPU power.

Again, remember that those fixed costs do not make the network more secure. A 51% attacker doesn't care about valid transactions at all; they're trying to mine blocks that don't have the transactions that the main network does, so they don't need to spend any money on their network connection.

Every cent that miners spend on internet connections and fast computers because they need to process huge blocks is money that could have gone towards securing the network with hashing power, but didn't.
994  Bitcoin / Development & Technical Discussion / Re: Why do people pay fees? Why are free transactions accepted by miners? on: February 18, 2013, 05:11:52 PM
Considering that the network pays a bounty of 25 BTC for a block containing 300 tx nowadays, that is a cost of 0.08 BTC per transaction.
The typical fee is an order of magnitude below that; no wonder miners do not currently care much about the fee.

This.

Like it or not, we have to pay for network security somehow. People constantly complain about transaction fees but the real cost is inflation; $120k USD worth of it every day. Only because of the huge number of people adopting Bitcoin and investing in it have we been able to ignore the inflationary cost of mining.
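A quick back-of-envelope for that $120k/day figure, as a sketch; the exchange rate is an assumption chosen to roughly match it, not a quoted price:

Code:
# Daily inflation subsidy paid to miners.
subsidy_btc = 25
blocks_per_day = 144
usd_per_btc = 33.0                                     # assumed price implied by ~$120k/day
daily_issuance_btc = subsidy_btc * blocks_per_day      # 3,600 BTC/day
print(daily_issuance_btc * usd_per_btc)                # ~$119k/day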
995  Bitcoin / Development & Technical Discussion / Re: Does a Request for Comments (RFC) for the Bitcoin protocol exist? on: February 18, 2013, 05:03:24 PM
Bitcoin Airlines 2012 Annual Report:

You can troll all you want, but fundamentally the reference client runs great on fairly modest computers, and because of the 1MiB block size and Moore's law this will continue to be true. As I've posted elsewhere, raising that limit is not an option. Right now I can run a Bitcoin node just fine on a $5/month VPS with very little CPU power or memory. Even if your RFC specification somehow resulted in a Bitcoin node that used 10x less resources, frankly I don't care about the difference between $5/month and $0.5/month. On the other hand, I do care about network splits, and using the Satoshi codebase for full validating nodes will be the best way to prevent them for the foreseeable future.

Talking about "old money" and "new money" as somehow having anything to do with the issue of what codebase you should us to run a validating node is bizzare. The only financial interest the "old money" has is to keep Bitcoins valuable, and the best way to do that is to keep the network secure and well-used. Other than validating nodes there are lots of reasons to promote alternative implementations - I personally use Armory as a wallet and am working on a project that would create ledger services to process transactions entirely off the blockchain, while providing for a mechanism where these services could be both anonymous and trusted.

Of course, I can't think of any projects you've actually created, so I don't have any reason to think you've actually run into any of the supposed serious limitations inherent in the Satoshi implementation that only a complete re-write can solve.
996  Bitcoin / Development & Technical Discussion / How a floating blocksize limit inevitably leads towards centralization on: February 18, 2013, 04:43:02 PM
This is a re-post of a message I sent to the bitcoin-dev mailing list. There has been a lot of talk lately about raising the block size limit, and I fear very few people understand the perverse incentives miners have with regard to blocks large enough that not all of the network can process them, in particular the way these incentives inevitably lead towards centralization. I wrote the below in terms of block size, but the idea applies equally to ideas like Gavin's maximum block validation time concept. Either way miners, especially the largest miners, make the most profit when the blocks they produce are large enough that less than 100%, but more than 50%, of the network can process them.



Quote
One of the beauties of bitcoin is that the miners have a very strong incentive to distribute as widely and as quickly as possible the blocks they find...they also have a very strong incentive to hear about the blocks that others find.

The idea that miners have a strong incentive to distribute blocks as widely and as quickly as possible is a serious misconception. The optimal situation for a miner is if they can guarantee their blocks would reach just over 50% of the overall hashing power, but no more. The reason is orphans.

Here's an example that makes this clear: suppose Alice, Bob, Charlie and David are the only Bitcoin miners, and each of them has exactly the same amount of hashing power. We will also assume that every block they mine is exactly the same size, 1MiB. However, Alice and Bob both have pretty fast internet connections, 2MiB/s and 1MiB/s respectively. Charlie isn't so lucky, he's on an average internet connection for the US, 0.25MiB/second. Finally David lives in a country with a failing currency, and his local government is trying to ban Bitcoin, so he has to mine behind Tor and can only reliably transfer 50KiB/second.

Now the transactions themselves aren't a problem, 1MiB/10minutes is just 1.8KiB/second average. However, what happens when someone finds a block?

So Alice finds one, and with her 2MiB/second connection she simultaneously transfers her new-found block to her three peers. She has enough bandwidth that she can do all three at once, so Bob has it in 1 second, Charlie in 4 seconds, and finally David in 20 seconds. The thing is, David has effectively spent that 20 seconds doing nothing. Even if he found a new block in that time he wouldn't be able to upload it to his other peers fast enough to beat Alice's block. In addition, there was also a probabilistic time window before Alice found her block, where even if David found a block, he couldn't get it to the majority of hashing power fast enough to matter. Basically we can say David spent about 30 seconds doing nothing, and thus his effective hash power is now down by 5%.


However, it gets worse. Let's say a rolling average mechanism to determine maximum block sizes has been implemented, and since demand is high enough that every block is at the maximum, the rolling average lets the blocks get bigger. Let's say we're now at 10MiB blocks. Average transaction volume is now 18KiB/second, so David just has 32KiB/second left, and a 10MiB block takes 5.3 minutes to download. Including the time window when David finds a new block but can't upload it, he's down to a bit over 3 minutes per block of useful mining on average.

Alice on the other hand now has 15% less competition, so she's actually clearly benefited from the fact that her blocks can't propagate quickly to 100% of the installed hashing power.
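The propagation arithmetic from the example, as a small sketch (the figures are the example's illustrative numbers):

Code:
# How long David needs to receive a block at each block size.
KIB_PER_MIB = 1024

def download_seconds(block_mib, usable_kib_s):
    return block_mib * KIB_PER_MIB / usable_kib_s

# 1MiB blocks: tx relay is only ~1.8 KiB/s, so David's full 50 KiB/s is usable.
print(download_seconds(1, 50))           # ~20 seconds, as above
# 10MiB blocks: ~18 KiB/s of tx relay leaves David ~32 KiB/s for the block.
print(download_seconds(10, 32) / 60)     # ~5.3 minutes of every 10-minute interval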


Now I know you are going to complain that this is BS because obviously we don't need to actually transmit the full block; everyone already has the transactions so you just need to transfer the tx hashes, roughly a 10x reduction in bandwidth. But it doesn't change the fundamental principle: instead of David being pushed off-line at 10MiB blocks, he'll be pushed off-line at 100MiB blocks. Either way, the incentives are to create blocks so large that they only reliably propagate to a bit over 50% of the hashing power, *not* 100%.

Of course, who's to say Alice and Bob are mining blocks full of transactions known to the network anyway? Right now the block reward is still high, and tx fees low. If there isn't actually 10MiB worth of transactions per block on the network it still makes sense for them to pad their blocks to that size anyway to force David out of the mining business. They would gain from the reduced hashing power, and get the tx fees he would have collected. Finally since there are now just three miners, for Alice and Bob whether or not their blocks ever get to Charlie is now totally irrelevant; they have every reason to make their blocks even bigger.

Would this happen in the real world? With pools chances are people would quit mining solo or via P2Pool and switch to central pools. Then as the block sizes get large enough they would quit pools with higher stale rates in preference for pools with lower ones, and eventually the pools with lower stale rates would probably wind up clustering geographically so that the cost of the high-bandwidth internet connections between them would be cheaper. Already miners are very sensitive to orphan rates, and will switch pools because of small differences in that rate.

Ultimately the reality is miners have very, very perverse incentives when it comes to block size. If you assume malice, these perverse incentives lead to nasty outcomes, and even if you don't assume malice, for pool operators the natural cycle of slightly reduced profitability leading to less ability to invest in and maintain fast network connections, leading to more orphans, fewer miners, and finally further reduced profitability due to higher overhead, will inevitably lead to centralization of mining capacity.
997  Bitcoin / Development & Technical Discussion / Re: Applying Ripple Consensus model in Bitcoin on: February 18, 2013, 03:39:58 PM
If this idea is too controversial, could we first build it as an alarm system against chain splits and massive chain rewrites? This is particularly useful in a case of network split, as some people will find that many validators are suddenly missing.

Go right ahead; no-one is stopping you. Merchants especially should be using chain monitoring to determine when something may be going wrong and they may want to stop accepting orders. But don't use chain monitoring for miners, at least not automatically: a false alarm from the monitoring mechanism can itself be used as a way to create a network split.
998  Bitcoin / Development & Technical Discussion / Re: Does a Request for Comments (RFC) for the Bitcoin protocol exist? on: February 18, 2013, 03:32:47 PM
So what is the moral of the story above? Standards are tools (like axes), they can be used for good and bad purposes. The good question to ask yourself is:

1) is the particular vendor interested in interoperability and promoting the market by encouraging diverse implementation?

or

2) is the particular vendor known for promoting exclusivity, discouraging alternatives and always hyping his own implementation?

Please take your time to review the posting history and make a smart choice.

You just don't get it, do you? Bitcoin is a consensus system and we don't yet know how to create diverse implementations that meet the incredibly strict requirement of every implementation acting exactly the same way. For now, specifying the "Bitcoin standard" as source code is unfortunately the best we can do, and even that is difficult. This is why the recommended way to create a Bitcoin-using service is to run one or more full nodes using the reference (Satoshi) implementation to have some trusted nodes to connect to, and then use either the reference client or an alternative implementation as the base for your actual business logic. The reference node insulates your custom code from the network, especially malicious actors, and ensures that the rules for what are valid blocks and valid transactions are followed correctly. Your code can assume that anything the reference node communicates to it is accurate. (though you do need to handle re-orgs)
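For illustration, a minimal sketch of that setup: the business-logic code speaks JSON-RPC only to a local reference node it trusts, never to the P2P network directly. The URL, credentials and port are placeholders; getblockcount is a standard bitcoind RPC call:

Code:
import base64, json, urllib.request

RPC_URL = "http://127.0.0.1:8332"                                  # placeholder: local trusted bitcoind
RPC_AUTH = base64.b64encode(b"rpcuser:rpcpassword").decode()       # placeholder credentials

def rpc(method, *params):
    payload = json.dumps({"id": 1, "method": method, "params": list(params)}).encode()
    req = urllib.request.Request(RPC_URL, data=payload, headers={
        "Content-Type": "application/json",
        "Authorization": "Basic " + RPC_AUTH,
    })
    with urllib.request.urlopen(req) as resp:
        reply = json.loads(resp.read())
    if reply.get("error"):
        raise RuntimeError(reply["error"])
    return reply["result"]

# Anything the trusted node reports is treated as validated; re-org handling
# is still the business logic's job.
print(rpc("getblockcount"))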

Ultimately, that the GUI and the Bitcoin node were implemented in the same codebase was a big design mistake by Satoshi, but we have to live with it. Satoshi should have written a very small, very simple validating node core and then written a separate client library and UI/wallet app as a separate code-base using that core. I think all the developers want to work towards that goal, but doing so is a huge project with a lot of risks. Not worth it given there aren't many disadvantages to just running bitcoind.

I think it's notable that you don't see the core devs complaining about Mike Hearn's bitcoinj or jgarzik's pynode, neither of which claims to be a full validating node. The former especially has been used as the basis for a lot of services - satoshidice is built on bitcoinj IIRC. I have working on pynode on my todo list myself; we don't have enough people working on libraries to make it easy to explore new types of transactions. What we do have is people wasting a lot of time and effort attempting to make fully validating node re-implementations that are downright dangerous to their users and the network as a whole, yet don't provide any benefits.

You know gmaxwell has asked me a few times about my progress on pynode - not exactly the actions of someone trying to clamp down on alternatives. But he knows I understand the consensus problem.
999  Bitcoin / Development & Technical Discussion / Re: Does a Request for Comments (RFC) for the Bitcoin protocol exist? on: February 18, 2013, 08:46:52 AM
In any case, for everyone who thinks an RFC or similar specification document is such a great idea, nobody is stopping you from writing one yourself. If you write a good one that manages to address the issues raised by Mike Hearn and gmaxwell, and keep it updated, it has a chance of being adopted and in any case the process will teach you a lot about how Bitcoin really works.
1000  Bitcoin / Development & Technical Discussion / Re: Applying Ripple Consensus model in Bitcoin on: February 15, 2013, 12:03:13 PM
Retep makes no direct comment on my proposal. I'd like to know what he thinks.

Proof of work via SHA256 hashing is really nice because you can validate it by machine. For instance a nifty project would be to get some remote attestation capable hardware like the IBM 4758 cryptographic coprocessor often used by banks. Basically what's special about it is the hardware itself is exceptionally difficult to tamper with, and additionally IBM includes a mechanism called remote attestation where the hardware will tell you what software is running on it. Since these co-processors are used for many, many different purposes IBM can't release hardware that lies without damaging a significant amount of trust in them.

So, what you would do is write a very small, very simple piece of code that implements the Bitcoin block hashing algorithm. What this code would do is accept encrypted messages from anyone, either the query "What's the legit block chain?" or the statement "Here is the next block in the chain". Since the messages are encrypted the operator of the service can't prevent someone from telling the hardware about the best known chain, so anyone making a query asking what the chain is can be pretty sure that the response is accurate. The existence of this service would allow others to use it to bootstrap their own clients without needing to know any honest nodes at all.(1) Smart Property is a good example where this service would be useful. Additionally it could augment or replace the checkpoint mechanism.
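A toy sketch of that message handling, with the encryption, nonces and header/proof-of-work validation all elided; the class and message names are made up for illustration:

Code:
class ChainOracle:
    """Runs inside the tamper-resistant, remotely-attested hardware."""

    def __init__(self):
        self.best_tip = None     # (cumulative_work, block_header_bytes)

    def handle(self, message):
        # Messages arrive encrypted so the operator can't selectively drop
        # updates; decryption and header/PoW checks are elided here.
        kind, payload = message
        if kind == "BLOCK":
            work, header = payload
            if self.best_tip is None or work > self.best_tip[0]:
                self.best_tip = (work, header)
            return "OK"
        if kind == "QUERY":
            return self.best_tip   # signed by the attested device in practice
        raise ValueError("unknown message type")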

The problem with Ripple-style consensus is stuff like the above just can't be done because maintaining a list of public keys associated with trusted entities is fundamentally a task that only humans can do. For Ripple human consensus is probably a reasonable idea - Ripple depends on human evaluation of trust relationships anyway - but applying that concept to Bitcoin would turn it into something very different than it is now.

It also isn't a given that it would make Bitcoin any more secure either: if miners use this consensus scheme too, then by breaking the consensus you can either re-direct hashing power to your new, illegitimate chain, or failing that, turn the hashing power off to make a 51% attack easier. For non-miners consensus can help, but only in the sense that the consensus is warning you something is wrong, so you shouldn't trust transactions for now until we figure out what is wrong. Bitcoin already has a primitive version of that with the alert system anyway.


1) You do need a RNG to create nonces to prevent replay attacks. Also note that neither the client nor the server need a clock; if a 51% attacker does not exist, the length of the valid chain will always outpace any false chain. That said, at least having an uptime timer is useful so clients can determine if the server has been running long enough to have been told about the best block; similarly a message counter is also useful.

Multiple servers should exist, and clients should always query multiple servers and also automatically provide out-of-date servers with updates. You will have to be careful though to ensure queries for the chain and update queries are always exactly the same length. Additionally client behavior for either action needs to be identical to ensure updates can't be censored. (without breaking the encryption of course) The fact that the hardest PoW's in existence have about a 98% chance of being Bitcoin block hashes (2% chance of being orphans) could be useful.

Clients do need to have a way to prevent attackers with control of their upstream network connections from setting up these servers themselves. Simply having a list of trusted pubkeys of people who you trust to set such a server up is one good option; the hardware itself tends to be fairly expensive too. You can also get clever and have the server create a PoW based on their identity; a Bitcoin PoW for an invalid block is nice because determining the value expended is relatively easy. You can also have the clients pay for each message by performing the PoW on your behalf, and recording the hardest PoW found to in turn use as your proof that the server is legit.