Second half-baked thought:
One reasonable concern is that if there is no "block size pressure" transaction fees will not be high enough to pay for sufficient mining.
Here's an idea: Reject blocks larger than 1 megabyte that do not include a total reward (subsidy+fees) of at least 50 BTC per megabyte.
"But miners can just include a never broadcast, fee-only transactions to jack up the fees in the block!"
Yes... but if their block gets orphaned then they'll lose those "fake fees" to another miner. I would guess that the incentive to try to push low-bandwidth/CPU miners out of the network would be overwhelmed by the disincentive of losing lots of BTC if you got orphaned.
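As a sketch, the proposed rule might look something like this (my own illustration; only the 1 MB threshold and 50 BTC/MB figure come from the idea above, and I'm assuming the floor is pro-rated by block size):

```python
BTC = 100_000_000  # satoshis per BTC

def block_meets_fee_floor(block_size_bytes: int, total_reward_satoshis: int) -> bool:
    """Hypothetical rule: reject blocks larger than 1 MB whose total
    reward (subsidy + fees) is below 50 BTC per megabyte of block size."""
    MB = 1_000_000
    if block_size_bytes <= MB:
        return True  # blocks at or under 1 MB are exempt
    required = 50 * BTC * block_size_bytes // MB  # floor scales with size
    return total_reward_satoshis >= required

# A 2 MB block would need at least 100 BTC in subsidy + fees:
print(block_meets_fee_floor(2_000_000, 100 * BTC))  # True
print(block_meets_fee_floor(2_000_000, 60 * BTC))   # False
```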
The issue with setting an arbitrary number like this is that it may not make economic sense to pay 50 BTC in fees for 3600 transactions (~0.0139 BTC each), depending on how much Bitcoin is worth in the future. Should Bitcoin reach $100k, an average transaction would cost in excess of $1,000.
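To check the arithmetic above (taking the post's figure of roughly 3600 transactions per 1 MB block, and purely hypothetical exchange rates):

```python
fees_btc = 50.0        # proposed fee floor per 1 MB block
txs_per_block = 3600   # rough capacity figure from the discussion

fee_per_tx = fees_btc / txs_per_block
print(f"{fee_per_tx:.4f} BTC per transaction")  # 0.0139 BTC

# What that fee costs in USD at various hypothetical BTC prices:
for price in (100, 1_000, 100_000):
    print(f"at ${price:,}/BTC: ${fee_per_tx * price:,.2f} per tx")
```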
I'm all for a floating block size limit, but unlike difficulty, a change in the block size affects all parties involved (clients, miners, and all users via tx fees).
The core difference is that when someone wants to increase their hash rate, they bear the cost of investing in additional hardware. At most, it will push up difficulty and force other miners to mine more efficiently or drive the less efficient ones out of business, while increasing the security of the network. Users benefit (for "free") from increased mining competition via a more secure network. Free market forces will drive down mining profitability/transaction costs and increase hash rates to an equilibrium, and everyone wins.
Now when there is a block size increase (whether floating or forked in as a hard rule), things get messy.
Pros
- Transactions are verified cheaply and processed as fast as they are today

Cons
- Increased storage costs for everyone (full nodes, miners)
- Increased bandwidth requirements for everyone (including SPV clients)
- Reduced network hash rate == reduced security of Bitcoin (due to lower transaction fees, miners invest less in hashing power)
So let's break this down.
Storage
I'm surprised nobody is talking about storage issues. Sometimes when I launch the reference client after a few days, I start thinking how absurd it is that each SD bet has become a permanent cost (time to download, disk space, network usage) for every user of the full client, now and in the future. Even at 1 MB/block, we are looking at a block chain increase of about 55 GB a year (including misc indexing files and the UTXO DB). Increase this to 10 MB and you start requiring a dedicated hard disk (500+ GB) for every year's worth of blocks. At 10 MB, running a full node starts requiring a considerable investment after a year. This means your average user will NEVER run a full node after a year of this change. After a few years, running a full node becomes the domain of medium-sized companies.
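The yearly growth figures above follow from block frequency alone. A rough back-of-the-envelope check (raw block data only, ignoring index and database overhead):

```python
# One block every ~10 minutes on average
blocks_per_day = 24 * 60 / 10          # 144
blocks_per_year = blocks_per_day * 365  # 52,560

for block_mb in (1, 10):
    gb_per_year = blocks_per_year * block_mb / 1000
    print(f"{block_mb:>2} MB blocks -> ~{gb_per_year:.0f} GB of raw blocks per year")
```

At 1 MB that is ~53 GB of raw blocks a year (so ~55 GB with indexes is plausible); at 10 MB it is ~526 GB, i.e. a dedicated disk per year.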
Solution:
Now let's assume that pruning is implemented and we start storing only the unspent outputs; at 10 MB, a 2 GB unspent-output DB starts to seem reasonable. A few archival nodes containing the full blocks could be run by donation-financed sites or the big Bitcoin businesses (MtGox, blockchain, etc.). Or the full client could be modified to include a DHT mode, instead of or in addition to the pruning mode, to allow the average user to store a subset of the block chain.
Network Bandwidth
As easy as it is for Mike to say that 100 Mbit connections are widely available and bandwidth is not an issue, the fact is that not everyone lives in Kansas or Korea. If you look at Asia (excluding Japan/Korea/Singapore/Taiwan/HK), there are not many countries where you can get a stable 512 kbps connection. Speed aside, even developed countries like the US/Canada/Australia/New Zealand have many ISPs with puny bandwidth caps of 50-100 GB per month, charging above $70. Parts of Europe have extremely expensive internet with poor connectivity as well. This may or may not change. Some countries have government-imposed monopolies allowing poor service and high prices, while others lack the government investment or economies of scale to warrant building out internet infrastructure.
Still, having only miners worry about network bandwidth is fine in my opinion, as mining is a competitive business.
A full node used for verification should not need to worry about a 1-2 min block download time, as it is not in a race to find the next block. But it does mean that full nodes starting afresh may not be able to catch up to the current block if their download speeds are too slow. For a 1 Mbit connection, even a 10 MB block size would be pushing it. To me it becomes a serious issue when half the people in the world are unable to run a full node because the blocks are too large for them to catch up.
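A rough sketch of the catch-up problem: a node syncing from scratch has to download the whole history while new blocks keep arriving, so the download time per year of history matters (this ignores verification time and protocol overhead, and assumes full-sized blocks throughout):

```python
def catchup_days(history_years: float, block_mb: float, link_mbps: float) -> float:
    """Days to download `history_years` worth of blocks of size
    `block_mb` over a `link_mbps` connection (bandwidth only)."""
    blocks = history_years * 52_560        # ~blocks per year (144/day * 365)
    total_megabits = blocks * block_mb * 8  # megabytes -> megabits
    seconds = total_megabits / link_mbps
    return seconds / 86_400

# One year of 10 MB blocks over a 1 Mbit/s link:
print(f"{catchup_days(1, 10, 1.0):.0f} days")  # ~49 days
```

So on a 1 Mbit link, each year of 10 MB blocks costs roughly a month and a half of continuous downloading just to catch up, before verification is even counted.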
Security
This is something we cannot account for, because we have had no precedent for a breach of security. Still, a single incident could be fatal to Bitcoin's reputation as a secure form of money, something it may not be able to recover from (in fact this may lead to a loss of confidence in the currency and cause a collapse of its value akin to hyperinflation scenarios). I think this point should not be taken lightly; we know there are many parties who would benefit from Bitcoin's demise and would not mind mounting an attack at a loss.
My Take
I'm with retep and max on this one, for a couple of reasons. Even if it is technically feasible, we should be extremely conservative in raising the block size limit (if at all), purely because of security.
On a more philosophical level, I dislike wasteful systems.
I find it absurd that with ever more powerful hardware, software is getting slower and more bloated. In the past couple of decades, there has been a culture of waste in computing as we abuse Moore's Law.
Let's run virtual machines because the hardware is getting faster.
But we will need 1 GB of RAM and a 1 GHz dual-core processor with a fast GPU to swipe 16 icons on screen smoothly.
Compare this with the Genesis/Mega Drive, which ran the Sonic series on a 7 MHz processor with 64 KB of RAM and no GPU, and you start to realise just how inefficient and wasteful today's software has become.
Bitcoin as a decentralised p2p system is extremely inefficient compared to a centralised system like Visa, as multiple posters have pointed out. In Bitcoin's case, the inefficiency is a requirement of maintaining a decentralised system, a necessary evil if you will. Competing centralised services built atop Bitcoin to cater for micropayments will not only be more efficient and cheaper but also instant, something Bitcoin will not be able to compete with and should not. Advocating the use of Bitcoin for volumes it is not optimised for just seems extremely wasteful to me.
It is important to understand that these centralised services will be more akin to the tech industry than to today's banking industry. Anyone can start a tech company in his basement or garage if he wants; you can't start a bank or a fiat payment processor like Visa unless you have connections with people of power (regulators, governments, big banks), because of the many artificial barriers to entry. Unlike fiat currency, Bitcoin is an open platform, and anybody is free to build services atop it for the things Bitcoin itself is not well suited for.
Likewise, anyone who can host a website can start a micropayment processor. The big exchanges and hosted wallet services would (already) do it for free, to save on Bitcoin tx fees and allow instant confirmation. Moreover, this is largely going to be for micropayments, where customers would maintain a small pre-paid deposit and processors would clear balances regularly (hourly to daily) with other processors. The losses in the event of a dishonest processor would be minimal.
In summary, leave the block size alone and let the free market work around it. Bitcoin's primary role was to liberalize money, and that goal should not be compromised to support a micropayment network for which it is ill-suited.