Bitcoin Forum
December 13, 2017, 07:26:46 AM *
News: Latest stable version of Bitcoin Core: 0.15.1  [Torrent].
 
Author Topic: Permanently keeping the 1MB (anti-spam) restriction is a great idea ...  (Read 103891 times)
Cryddit
Legendary
*
Offline Offline

Activity: 840


View Profile
February 06, 2015, 07:28:45 PM
 #181

Okay, I'm going to start by saying that in the short run (next four or so years) there is no escaping an increase in maximum block size.  So yes, definitely do that.

However, in the longer term, that's still supralinear scaling and still potentially faces scale problems.  So we need another solution.  We don't need altcoins whatsoever.  It's possible for a single cryptocurrency to exist as multiple blockchains.   There can be a central blockchain that does hardly anything else than mediate automatic exchanges between address spaces managed by dozens of side chains.

Transaction times where you have coins on one side chain and need to pay into an address that's on a different side chain would become longer, because now two transactions that mutually depend on one another must appear in two separate blockchains.   So that's annoying if your wallet can't find coins that are in the same chain as the address you are paying to. 

Within chain A, a cross-chain tx might appear as "Han paid Chain B in tx foo-a" and in chain B it appears as "Chain A paid Chewie in tx foo-b"  And if that result would cause chain A to have a negative balance, it triggers "Chain B paid Chain A half of chain B's positive balance in tx foo-central" in the central blockchain.

Anyway, you'd have to at least temporarily track the central chain, any chain into which you're paying, and any from which you're being paid, like a lightweight client.  Beyond that, you'd have the option of actually having the full download and proof all the way back to an origin block of any subset of chains that interest you.
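The settlement rule described above is speculative, but the bookkeeping can be sketched very loosely (all names and numbers here are hypothetical, following the Han/Chewie example in the post):

```python
# Very loose sketch of the settlement rule described above: each side
# chain holds a reserve on the central chain; when a cross-chain payment
# would drive the paying chain's reserve negative, the central chain
# rebalances by moving half of the receiving chain's positive reserve.
reserves = {"A": 2, "B": 10}   # hypothetical central-chain reserves

def cross_chain_pay(src, dst, amount):
    reserves[src] -= amount          # "Han paid Chain B in tx foo-a"
    reserves[dst] += amount          # "Chain A paid Chewie in tx foo-b"
    if reserves[src] < 0:            # triggers "tx foo-central"
        transfer = reserves[dst] / 2
        reserves[dst] -= transfer
        reserves[src] += transfer

cross_chain_pay("A", "B", 5)
print(reserves)   # A's deficit is topped up from half of B's reserve
```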


spooderman
Legendary
*
Offline Offline

Activity: 1470


View Profile WWW
February 06, 2015, 08:04:22 PM
 #182

Excellent post!

Society doesn't scale.
leopard2
Legendary
*
Offline Offline

Activity: 1243


View Profile
February 06, 2015, 08:43:36 PM
 #183

Why couldn't MAX_BLOCK_SIZE be self-adjusting?

It certainly could be.   The point of the post wasn't to arrogantly state what we must do, or even that we must do something now.   I would point out that planning a hard fork is no trivial matter, so the discussion needs to start now even if the final switch to the new block version won't actually occur for 9-12 months.   The point was just to show that a permanent 1MB cap is simply a non-starter.  It allows roughly 1 million direct users to make less than 1 transaction per month.  That isn't a backbone; it is a technological dead end.

For the record I disagree with Gavin on if Bitcoin can (or even should) scale to VISA levels.   It is not optimal that someone in Africa needs transaction data on the daily coffee habit of a guy in San Francisco they will never meet.   I do believe that Bitcoin can be used as a core backbone to link a variety of other more specialized (maybe even localized) systems via the use of sidechains and other technologies.  The point of the post was that whatever the future of Bitcoin ends up being it won't happen with a permanent 1 MB cap.

There are always altcoins to pay for the coffees, but still there can be no doubt that 1MB is not enough. I cannot believe people seriously oppose the increase! After all, it will not make the blockchain 20x bigger; it will only let the blockchain grow as large as needed to include all transactions, once the 1MB limit is no longer sufficient.

So people who oppose the change are basically saying that BTC transactions have to stay severely limited (at around 7 TPS) forever, just to stick to the 1MB limit forever. That is nuts.

Truth is the new hatespeech.
homo homini lupus
Member
**
Offline Offline

Activity: 70


View Profile
February 06, 2015, 09:43:58 PM
 #184

I cannot believe people seriously oppose the increase!

"arguing to ignorance"

the rest of the post was distorted trash based on false assumptions too
RoadStress
Legendary
*
Offline Offline

Activity: 1652


View Profile
February 06, 2015, 09:50:13 PM
 #185

I cannot believe people seriously oppose the increase!

"arguing to ignorance"

the rest of the post was distorted trash based on false assumptions too

Trollish and useless post, bringing nothing to the discussion. Move along.

iCEBREAKER is a troll! He and cypherdoc helped HashFast scam $50 million from its customers!
H/w Hosting Directory & Reputation - https://bitcointalk.org/index.php?topic=622998.0
amincd
Hero Member
*****
Offline Offline

Activity: 772


View Profile
February 06, 2015, 11:50:55 PM
 #186

So people who oppose the change are basically saying, BTC transactions have to be severely limited (at around 7 TPS) forever, just to stick to the 1MB limit forever. That is nuts.

It actually could be as few as 2 tps, meaning that if there are over 60 million Bitcoin users, each one will only use the blockchain once per year, and with a billion users (1 out of 7 people in the world), each person can use the blockchain less than once a decade:

The numbers below are for 2tps.  Double the numbers if you think 4tps is more appropriate but it doesn't materially change the insignificant upper limit.

Code:
Maximum supported users based on transaction frequency.
Assumptions: 1MB block, 821 bytes per txn
Throughput:  2.03 tps, 64,000,000 transactions annually

Total #        Transactions per  Transaction
direct users     user annually    Frequency
       <8,000       8760          Once an hour
      178,000        365          Once a day
      500,000        128          A few (2.4) times a week
    1,200,000         52          Once a week
    2,600,000         24          Twice a month
    5,300,000         12          Once a month
   16,000,000          4          Once a quarter
   64,000,000          1          Once a year
  200,000,000          0.3        Less than once every few years
1,000,000,000          0.06       Less than once a decade
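The arithmetic behind this table is simple enough to check (a sketch using the same assumptions stated above: 1MB blocks, ~821 bytes per txn, one block every ~10 minutes):

```python
# Capacity math for 1MB blocks at ~821 bytes per transaction,
# matching the assumptions in the table above.
BLOCK_BYTES = 1_000_000
TXN_BYTES = 821
BLOCKS_PER_YEAR = 6 * 24 * 365        # one block every ~10 minutes

tps = BLOCK_BYTES / TXN_BYTES / 600   # ~2.03 transactions per second
txns_per_year = (BLOCK_BYTES // TXN_BYTES) * BLOCKS_PER_YEAR  # ~64 million

def annual_txns_per_user(total_users):
    """How often each direct user can transact, on average."""
    return txns_per_year / total_users

print(round(tps, 2))                               # ~2.03
print(round(annual_txns_per_user(64_000_000), 2))  # ~1.0, i.e. once a year
```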

This is totally unrealistic, and it's not going to work. Running a block-space scarcity experiment on Bitcoin to see what happens when people can no longer use the blockchain for real-world transactions, when the block size can still be significantly increased without making the network centralized, is dangerous and irresponsible. The idea that they'll opt for a Bitcoin micropayment channel hub, rather than just giving up on Bitcoin, is pure speculation, and one that I don't think will be borne out.
BADecker
Legendary
*
Offline Offline

Activity: 1512


View Profile
February 07, 2015, 12:22:40 AM
 #187

How about keeping a main blockchain, housed in several thousand major repositories around the world, with mirrors and backups. Everything would go into the main blockchain.

There would be a smaller, secondary, "practical" everyday blockchain that would house the first record of every address and the last two, eliminating every use of an address between the first and the second-to-last.

This way we would always have the full record when needed, but we would have easy access for everyday use.

zebedee
Donator
Hero Member
*
Offline Offline

Activity: 670



View Profile
February 07, 2015, 04:20:07 AM
 #188

For the entirety of Bitcoin's history, it has produced blocks smaller than the protocol limit.

Why didn't the average size of blocks shoot up to 1 MB and stay there the instant Satoshi added a block size limit to the protocol?

I'm not sure what you're getting at. Clearly there just hasn't been the demand for 1 MB worth of transactions per block thus far, but that could change relatively soon; hence the debate over lifting the 1 MB cap before we get to that point. If the block limit were suddenly to drop to 50kb, I think we'd start seeing a whole lot of 50kb blocks, no?
Justus is, I believe, pointing out that until very recently bitcoin has effectively had no block size limit, as blocks near the protocol limit were almost non-existent.  More recently we tend to get a few a day, mostly from F2Pool.

Those claiming we'll have massive runaway blocks full of one satoshi / free transactions have never adequately explained why it wasn't true historically when the average block size was 70k, and why people still felt the need to pay fees then.

Anyone trying to send free / very low fee transactions recently will know from having it backfire that they have to think long and hard about taking the risk if they want confirmation in a reasonable time, and that's the way it should be and likely always will be.   Each incremental transaction increases miner risk, and therefore has a cost, and that's natural and good, and enough for an equilibrium to be found.

Heck, if the cap were completely removed and some major pools concerned about spam (aren't we all?) stated that, for their own values of X, Y and Z, they'd not relay blocks larger than (say) 500KB that pay total fees of less than X satoshis per kilobyte, and would not even build on blocks paying fees of less than Y per kilobyte unless those blocks had managed to become Z blocks deep, that would have a huge deterrent effect by making it expensive to try to spam the network.  Not many people are willing to risk 25 BTC to make a point, never mind be willing to continue to do so repeatedly.   X, Y and Z wouldn't need to be uniform across pools, and of course could change with time and technology changes.  An equilibrium would be found and blocks would achieve a natural growth rate that no central planner can properly plan.
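That per-pool policy could be sketched roughly like this (X, Y, Z and the 500KB threshold are the hypothetical pool-chosen values from the post above, not anything in the actual protocol):

```python
LARGE_BLOCK_KB = 500  # hypothetical per-pool threshold from the post above

def should_relay(block_kb, fee_sat_per_kb, x):
    """Relay small blocks freely; large blocks only if they pay >= X sat/KB."""
    return block_kb <= LARGE_BLOCK_KB or fee_sat_per_kb >= x

def should_build_on(fee_sat_per_kb, depth, y, z):
    """Refuse to extend a low-fee block until it is buried Z blocks deep."""
    return fee_sat_per_kb >= y or depth >= z

# A 600KB block paying almost no fees gets ignored...
print(should_relay(600, 10, x=1000))          # False
# ...until it is Z blocks deep, at which point the pool gives in.
print(should_build_on(10, 6, y=1000, z=6))    # True
```

Since each pool picks its own X, Y and Z, no coordination is needed; spammers face the aggregate deterrent of every pool's policy at once.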
Soros Shorts
Donator
Legendary
*
Offline Offline

Activity: 1612



View Profile
February 07, 2015, 05:03:47 AM
 #189

Heck, if the cap were completely removed and some major pools concerned about spam (aren't we all?) stated that, for their own values of X, Y and Z, they'd not relay blocks larger than (say) 500KB that pay total fees of less than X satoshis per kilobyte, and would not even build on blocks paying fees of less than Y per kilobyte unless those blocks had managed to become Z blocks deep, that would have a huge deterrent effect by making it expensive to try to spam the network.  Not many people are willing to risk 25 BTC to make a point, never mind be willing to continue to do so repeatedly.   X, Y and Z wouldn't need to be uniform across pools, and of course could change with time and technology changes.  An equilibrium would be found and blocks would achieve a natural growth rate that no central planner can properly plan.

Yes, it makes sense to remove/raise the hard limit from the protocol and let individual miners set their own limits since they are the ones most in touch with what the optimal parameters would be for their individual setups. If we went to a 20MB cap tomorrow I'd guess that no pool would even try to build blocks anywhere near that size given the high risk of getting an orphaned block with the current state of the network.
justusranvier
Legendary
*
Offline Offline

Activity: 1400



View Profile WWW
February 07, 2015, 05:41:19 AM
 #190

until very recently bitcoin has effectively had no block size limit, as blocks near the protocol limit were almost non-existent.
johnyj
Legendary
*
Offline Offline

Activity: 1834


Beyond Imagination


View Profile
February 07, 2015, 06:17:04 AM
 #191


Of course, 28 minutes is still long. That is based on 2013 data.
This data is massively outdated: it's from before signature caching and ultra-prune, each of which was easily an order of magnitude (or two) improvement in the transaction-dependent parts of propagation delay. It's also prior to the block relay network, not to mention further optimizations proposed but not yet written.

I don't actually think hosts are faster; I'd actually take a bet that they are slower on average, since performance improvements have made it possible to run nodes on smaller hosts than were viable before (e.g. crazy people with bitcoind on an RPi). But we've had software improvements which massively eclipsed anything you would have gotten from hardware improvements. Repeating that level of software improvement is likely impossible, though there is still some room to improve.

There are risks around massively increasing orphan rates in the short term with larger blocks (though far, far lower than what those numbers suggest), indeed... that's one of the unaddressed things in current larger-block advocacy, though the block relay network (and the possibility of efficient set reconciliation) more or less shows that the issues there are not very fundamental, though maybe practically important.

In the end, the whole block of transactions must reach the most distant end of the network within about 3 minutes to allow newly discovered blocks to be built upon it. Ideally, you need to transmit 20MB of data in 1-2 minutes. Maybe it is possible to use multi-threaded P2P downloading to accelerate the data transfer.
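As a rough sanity check on those numbers (a naive calculation that ignores hops, latency and protocol overhead; each relay hop multiplies the requirement):

```python
# Sustained bandwidth needed to push a full 20MB block within a window.
def required_mbit_per_s(block_mb, window_s):
    return block_mb * 8 / window_s

for window in (60, 120, 180):
    print(f"{window}s window -> {required_mbit_per_s(20, window):.2f} Mbit/s")
    # 2.67, 1.33 and 0.89 Mbit/s respectively
```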

onemorebtc
Sr. Member
****
Offline Offline

Activity: 266


View Profile
February 07, 2015, 06:51:40 AM
 #192

In the end, the whole block of transactions must reach the most distant end of the network within about 3 minutes to allow newly discovered blocks to be built upon it. Ideally, you need to transmit 20MB of data in 1-2 minutes. Maybe it is possible to use multi-threaded P2P downloading to accelerate the data transfer.

Bigger pools already use a different block propagation network among themselves. I am not sure how much it can handle, but obviously it's optimized for miners' needs.

transfer 3 onemorebtc.k1024.de 1
gmaxwell
Staff
Legendary
*
Offline Offline

Activity: 2366



View Profile
February 07, 2015, 07:27:31 AM
 #193

In the end, the whole block of transactions must reach the most distant end of the network within about 3 minutes to allow newly discovered blocks to be built upon it. Ideally, you need to transmit 20MB of data in 1-2 minutes. Maybe it is possible to use multi-threaded P2P downloading to accelerate the data transfer.
Blocks are just transaction data, almost all of which has already been relayed through the network. All that one has to send is a tiny set of indexes indicating which of the txns in circulation were included and in what order, and there are already alternative transports that do this (or even less: just a difference between a deterministic idealized list and the real thing).  The data still has to be sent over the network, so being more efficient here doesn't fundamentally improve scaling (just a constant factor), but it gets block size pretty much entirely out of the critical path for miners.
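A toy sketch of that idea (hypothetical structures; real transports use short hashes and much more compact encodings): since peers already hold most transactions, a block can be announced as an ordered list of IDs and only the unknown ones fetched.

```python
def reconstruct(announced_txids, mempool):
    """Rebuild a block from the receiver's mempool, noting what's missing."""
    block, missing = [], []
    for txid in announced_txids:
        if txid in mempool:
            block.append(mempool[txid])
        else:
            missing.append(txid)  # these must be fetched from the peer
    return block, missing

mempool = {"a": "txA", "b": "txB", "c": "txC"}
block, missing = reconstruct(["a", "c", "x"], mempool)
print(block)    # ['txA', 'txC']
print(missing)  # ['x'] -- only this one crosses the wire at block time
```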

Bitcoin will not be compromised
antonioeram
Member
**
Offline Offline

Activity: 108

Is John Galt, Satoshi ? or viceversa


View Profile WWW
February 07, 2015, 09:37:58 AM
 #194

So basically a solution should:

- increase the block size (~20MB)
- use high-speed super nodes (in order to propagate within a 2-minute interval)
- use some kind of multi-threaded P2P protocol
- increase the reward for miners.

good.

what next ?

PGP - Public Key
https://goo.gl/DXrS8t
Realpra
Hero Member
*****
Offline Offline

Activity: 819


View Profile
February 07, 2015, 10:14:56 AM
 #195

...

20 MB is nice for a start; it will last a year or two, three maybe, but it will need to be upgraded eventually as well. We are dangerously close to our limit with 1 MB and we can't stay at 1 MB.
Gavin's proposal automatically scales 40% per year after the 20MB raise, then stops after 20 years - last time I read up on it.

After 20 years this puts us at ~16,000 MB, or ~32,000 tx/second. That's more than the VISA network and I think it will last a long time.
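Checking that figure under the schedule as described (20MB start, +40% per year, for 20 years; the ~821 bytes/txn average is an assumption carried over from earlier in the thread):

```python
# Compound growth of the cap under the proposal described above.
cap_mb = 20 * 1.4 ** 20                 # ~16,700 MB after 20 years
tps = cap_mb * 1_000_000 / 821 / 600    # assuming ~821 bytes/txn, 600s blocks
print(round(cap_mb))   # roughly 16,700
print(round(tps))      # roughly 34,000 tps, the same ballpark as the post
```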

I also support Gavin and OP.

Cheap and sexy Bitcoin card/hardware wallet, buy here:
http://BlochsTech.com
amincd
Hero Member
*****
Offline Offline

Activity: 772


View Profile
February 07, 2015, 11:05:36 AM
 #196

...

20 MB is nice for a start; it will last a year or two, three maybe, but it will need to be upgraded eventually as well. We are dangerously close to our limit with 1 MB and we can't stay at 1 MB.
Gavin's proposal automatically scales 40% per year after the 20MB raise, then stops after 20 years - last time I read up on it.

After 20 years this puts us at ~16,000 MB, or ~32,000 tx/second. That's more than the VISA network and I think it will last a long time.

I also support Gavin and OP.

+1 At that point we'll have a ton of sidechains (hopefully) that can handle any further growth in demand for peer-to-peer electronic cash transactions.
Bitcoinexp
Hero Member
*****
Offline Offline

Activity: 545


Bountie- Do You Have Game?


View Profile
February 07, 2015, 01:39:38 PM
 #197

Great effort in posting. You hit the nail right on the head. But I think the title is slightly misleading for me; perhaps a question mark at the end instead would suffice. I think a decision has to be made immediately because we are really close to reaching the limit with the current 1MB restriction.

sangaman
Sr. Member
****
Offline Offline

Activity: 342



View Profile WWW
February 07, 2015, 05:14:14 PM
 #198

For the entirety of Bitcoin's history, it has produced blocks smaller than the protocol limit.

Why didn't the average size of blocks shoot up to 1 MB and stay there the instant Satoshi added a block size limit to the protocol?

I'm not sure what you're getting at. Clearly there just hasn't been the demand for 1 MB worth of transactions per block thus far, but that could change relatively soon; hence the debate over lifting the 1 MB cap before we get to that point. If the block limit were suddenly to drop to 50kb, I think we'd start seeing a whole lot of 50kb blocks, no?
Justus is, I believe, pointing out that until very recently bitcoin has effectively had no block size limit, as blocks near the protocol limit were almost non-existent.  More recently we tend to get a few a day, mostly from F2Pool.

Those claiming we'll have massive runaway blocks full of one satoshi / free transactions have never adequately explained why it wasn't true historically when the average block size was 70k, and why people still felt the need to pay fees then.

Anyone trying to send free / very low fee transactions recently will know from having it backfire that they have to think long and hard about taking the risk if they want confirmation in a reasonable time, and that's the way it should be and likely always will be.   Each incremental transaction increases miner risk, and therefore has a cost, and that's natural and good, and enough for an equilibrium to be found.

Heck, if the cap were completely removed and some major pools concerned about spam (aren't we all?) stated that, for their own values of X, Y and Z, they'd not relay blocks larger than (say) 500KB that pay total fees of less than X satoshis per kilobyte, and would not even build on blocks paying fees of less than Y per kilobyte unless those blocks had managed to become Z blocks deep, that would have a huge deterrent effect by making it expensive to try to spam the network.  Not many people are willing to risk 25 BTC to make a point, never mind be willing to continue to do so repeatedly.   X, Y and Z wouldn't need to be uniform across pools, and of course could change with time and technology changes.  An equilibrium would be found and blocks would achieve a natural growth rate that no central planner can properly plan.

I agree, and I never meant to suggest otherwise. Bitcoin still has effectively no block size limit, and if the block limit became 1GB tomorrow it most likely wouldn't result in blocks being any larger in the foreseeable future. I corrected myself because I at first said that no block limit would result in the greatest overall transaction fees being paid, but I don't think that's true. Given the tragedy-of-the-commons issue surrounding blockchain size, the marginal cost to any individual miner of including a transaction in his block is only the negligible increase in the risk of an orphaned block. If a miner doesn't include the transactions with fees above that marginal cost, they can be profitably taken by the next miner to create a block. That's not necessarily how it has to work; miners may attempt to employ strategies (like you mentioned) where that wouldn't be the case, but there's no guarantee they would succeed.
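The inclusion logic in that argument (a miner takes any transaction whose fee exceeds the marginal orphan-risk cost of carrying it) can be sketched; the numbers below are illustrative assumptions, not measured values.

```python
# Sketch: a miner includes any transaction whose fee exceeds the marginal
# cost of including it, modeled here purely as added orphan risk.
BLOCK_REWARD_BTC = 25
ORPHAN_RISK_PER_KB = 0.000004   # hypothetical extra orphan probability per KB

def worth_including(fee_btc, size_kb):
    """Expected orphan loss from the extra bytes vs. the fee offered."""
    marginal_cost = size_kb * ORPHAN_RISK_PER_KB * BLOCK_REWARD_BTC
    return fee_btc > marginal_cost

print(worth_including(0.0001, 0.5))  # True: the fee beats the tiny orphan cost
print(worth_including(0.0, 0.5))     # False: a free txn is pure cost
```

The tragedy-of-the-commons point is visible here: the threshold each miner faces is tiny, so fees just above it get swept in by someone even if other miners would prefer a higher floor.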
JeromeL
Member
**
Offline Offline

Activity: 104


View Profile
February 07, 2015, 05:53:02 PM
 #199

Great post Death&Taxes.

However, I have never heard anyone anywhere support keeping the max block size at 1MB permanently. So your post is somewhat misleading and avoids the most interesting questions, which are:
1) Max block size design: is switching from a hard-coded 1MB max block size to a hard-coded 20MB the smartest way to proceed?
2) Timing: is now the best time to do that? There appears to be no real argument to rush. Tons of innovation is going on; we are months away from a first side chain implementation. Perhaps in 6 months or a year from now we will have ideas we hadn't thought of.

From the comments here and on reddit, I don't see a real consensus on the subject. So imo it's better to keep discussing it. We can reasonably expect to see proposals concurrent with Gavin's, with a different max block size design and perhaps a more appropriate timing.

▃▃▃▌▌  PECUNIO  ▐▐▃▃▃▃
BLOCKCHAIN INVESTMENTS SAFE AND EASY
amspir
Member
**
Offline Offline

Activity: 112


View Profile
February 07, 2015, 05:58:09 PM
 #200


1) Max block size design: is switching from a hard-coded 1MB max block size to a hard-coded 20MB the smartest way to proceed?
2) Timing: is it the best time to do that? There appears to be no real argument to rush. Tons of innovation is going on; we are months away from a first side chain implementation. Perhaps in 6 months or a year from now we will have ideas we hadn't thought of.


Double the max block size every two years, which roughly keeps in line with Moore's law.
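That schedule is simple to state in code (a sketch of the suggestion, not a protocol rule):

```python
def max_block_mb(years_from_now, base_mb=1):
    """Cap doubles every two years from a 1MB base, per the suggestion above."""
    return base_mb * 2 ** (years_from_now // 2)

for y in (0, 2, 10, 20):
    print(y, max_block_mb(y))   # 1, 2, 32 and 1024 MB respectively
```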