Bitcoin Forum
Author Topic: The MAX_BLOCK_SIZE fork  (Read 35541 times)
fornit
Hero Member
*****
Online Online

Activity: 991
Merit: 1008


View Profile
February 04, 2013, 03:33:52 AM
 #81

An initial split ensuring "high or reasonable" fee transactions get processed into the blockchain within an average of 10 minutes, and "low or zero" fee transactions get processed within an average of 20 minutes might be the way to go.

Consider the pool of unprocessed transactions:

Each transaction has a fee in BTC and an origination time. If the transaction pool is sorted by non-zero fee size, then fm = the median (middle) fee value.

[...]

The public would learn that low- or zero-fee transactions take twice as long to obtain confirmation. It then opens the door for further granularity, where the lower half (or more) of the pool is divided 3, 4, or 5 times, such that very low-fee transactions take half an hour and zero-fee transactions take an average of an hour. The public will accept that as normal. Miners would reap the benefits of a block-limit-enforced fee incentive system.

I doubt that transactions are that evenly distributed over a 24-hour or a 7-day period. You might end up with all low-fee transactions being pushed back several hours, to times when rush hour only exists for people living in the middle of the Atlantic or Pacific Ocean.
Which is, imho, perfectly okay for tips or micro-donations.
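
A minimal sketch of the split described in the quoted proposal, assuming a toy mempool of fee-bearing transactions; the function name, the dict layout, and the "every other block" scheduling of the low/zero-fee half are illustrative assumptions, not the behaviour of any real client:

Code:
from statistics import median

def select_for_block(mempool, block_height, max_txs=1000):
    """mempool: list of dicts like {"txid": ..., "fee": satoshis}. Toy model only."""
    nonzero = [tx["fee"] for tx in mempool if tx["fee"] > 0]
    fm = median(nonzero) if nonzero else 0          # fm = median non-zero fee

    high = [tx for tx in mempool if tx["fee"] > 0 and tx["fee"] >= fm]
    low  = [tx for tx in mempool if tx["fee"] == 0 or tx["fee"] < fm]

    # High/reasonable-fee transactions are eligible for every block (~10 min
    # average); low/zero-fee ones only every other block (~20 min average).
    selected = sorted(high, key=lambda tx: tx["fee"], reverse=True)
    if block_height % 2 == 0:
        selected += sorted(low, key=lambda tx: tx["fee"], reverse=True)
    return selected[:max_txs]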

kjj
Legendary
*
Offline Offline

Activity: 1302
Merit: 1024



View Profile
February 04, 2013, 04:44:59 AM
 #82

If space in a block is not a limited resource, then miners won't be able to charge for it, mining revenue will drop as the subsidy drops, and attacks will become more profitable relative to honest mining.
How many businesses can you name that maximize their profitability by restricting the number of customers they serve?

If it really worked like that, then why stop at 1 MB? Limit block sizes to a single transaction and all the miners would be rich beyond measure! That would certainly make things more decentralized because miners all over the world would invest in hardware to collect the massive fee that one lucky person per block will be willing to pay.

Why stop there? I'm going to start a car dealership and decide to only sell 10 cars per year. Because I've made the number of cars I sell a limited resource I'll be able to charge more for them, right?

Then I'll open a restaurant called "House of String-Pushing" that only serves regular food but only lets in 3 customers at a time.

If car dealerships sold cars for however much you were willing to pay, down to and including free, you can bet they'd limit the number of cars they "sold".  And I doubt you'd even get 10 out of them.

The problem is that we really don't know yet how to operate with the system we have, much less a different one.  In a decade or two, when the subsidy is no longer the dominant part of the block reward, maybe then we'll have some idea how to price transactions, and we will be able to think clearly about mechanisms to adjust the block size.

17Np17BSrpnHCZ2pgtiMNnhjnsWJ2TMqq8
I routinely ignore posters with paid advertising in their sigs.  You should too.
jl2012
Legendary
*
Offline Offline

Activity: 1792
Merit: 1092


View Profile
February 04, 2013, 05:00:22 AM
 #83

If space in a block is not a limited resource, then miners won't be able to charge for it, mining revenue will drop as the subsidy drops, and attacks will become more profitable relative to honest mining.
How many businesses can you name that maximize their profitability by restricting the number of customers they serve?

If it really worked like that, then why stop at 1 MB? Limit block sizes to a single transaction and all the miners would be rich beyond measure! That would certainly make things more decentralized because miners all over the world would invest in hardware to collect the massive fee that one lucky person per block will be willing to pay.

Why stop there? I'm going to start a car dealership and decide to only sell 10 cars per year. Because I've made the number of cars I sell a limited resource I'll be able to charge more for them, right?

Then I'll open a restaurant called "House of String-Pushing" that only serves regular food but only lets in 3 customers at a time.

If car dealerships sold cars for however much you were willing to pay, down to and including free, you can bet they'd limit the number of cars they "sold".  And I doubt you'd even get 10 out of them.

The problem is that we really don't know yet how to operate with the system we have, much less a different one.  In a decade or two, when the subsidy is no longer the dominant part of the block reward, maybe then we'll have some idea how to price transactions, and we will be able to think clearly about mechanisms to adjust the block size.

Why don't we just let miners decide the optimal block size?

If a miner generates a 1-GB block and it is just too big for other miners, they may simply drop it. That will stop anyone from generating 1-GB blocks, because such blocks will become orphans anyway. An equilibrium will be reached and block space will still be scarce.

Donation address: 374iXxS4BuqFHsEwwxUuH3nvJ69Y7Hqur3 (Bitcoin ONLY)
LRDGENPLYrcTRssGoZrsCT1hngaH3BVkM4 (LTC)
PGP: D3CC 1772 8600 5BB8 FF67 3294 C524 2A1A B393 6517
Jeweller (OP)
Newbie
*
Offline Offline

Activity: 24
Merit: 1


View Profile
February 04, 2013, 06:39:40 AM
 #84

Why don't we just let miners decide the optimal block size?

If a miner generates a 1-GB block and it is just too big for other miners, they may simply drop it. That will stop anyone from generating 1-GB blocks, because such blocks will become orphans anyway. An equilibrium will be reached and block space will still be scarce.

Unfortunately it's not that simple for a couple reasons.

First, right now clients will reject oversized blocks from miners. Other miners aren't the only ones who need to store the blocks; all full nodes do, even if they just want to transact without mining. So what if all the miners are fine with the 1-GB block and none of the client nodes are? Total mess. Miners are minting coins only other miners recognize, and as far as clients are concerned the network hash rate has just plummeted.

Second, right now we have a very clear method for determining the "true" blockchain. It's the valid chain with the most work.  "Most work" is easily verified, everyone will agree.  "Valid" is also easily tested with unambiguous rules, and everyone will agree.  Miners can't "simply drop" blocks they don't like.  Maybe if that block is at depth -1 from the current block, sure.  But what if someone publishes a 1GB block, then someone else publishes a 1MB block on top of that?  Do you ignore both?  How far back do you go to start your own chain and try to orphan that whole over-size branch?

I think you can see the mess this would create.  The bitcoin network needs to operate with nearly unanimous consensus.
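
A toy illustration of the rule described above, assuming blocks are plain records carrying a precomputed work value and a shared validity predicate; real nodes derive work from the difficulty target, so treat this purely as a sketch of "valid chain with the most work":

Code:
def chain_work(chain):
    # Each block here is a dict with a precomputed "work" field; Bitcoin
    # derives this value from the block's difficulty target.
    return sum(block["work"] for block in chain)

def is_valid(chain, rule):
    # "Valid" is an unambiguous, network-wide test, not a node-local taste.
    return all(rule(block) for block in chain)

def best_chain(candidates, rule):
    valid = [c for c in candidates if is_valid(c, rule)]
    return max(valid, key=chain_work, default=None)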
jl2012
Legendary
*
Offline Offline

Activity: 1792
Merit: 1092


View Profile
February 04, 2013, 07:07:09 AM
 #85

Why don't we just let miners decide the optimal block size?

If a miner generates a 1-GB block and it is just too big for other miners, they may simply drop it. That will stop anyone from generating 1-GB blocks, because such blocks will become orphans anyway. An equilibrium will be reached and block space will still be scarce.

Unfortunately it's not that simple for a couple reasons.

First, right now clients will reject oversized blocks from miners. Other miners aren't the only ones who need to store the blocks; all full nodes do, even if they just want to transact without mining. So what if all the miners are fine with the 1-GB block and none of the client nodes are? Total mess. Miners are minting coins only other miners recognize, and as far as clients are concerned the network hash rate has just plummeted.

Second, right now we have a very clear method for determining the "true" blockchain. It's the valid chain with the most work.  "Most work" is easily verified, everyone will agree.  "Valid" is also easily tested with unambiguous rules, and everyone will agree.  Miners can't "simply drop" blocks they don't like.  Maybe if that block is at depth -1 from the current block, sure.  But what if someone publishes a 1GB block, then someone else publishes a 1MB block on top of that?  Do you ignore both?  How far back do you go to start your own chain and try to orphan that whole over-size branch?

I think you can see the mess this would create.  The bitcoin network needs to operate with nearly unanimous consensus.

I know the 1MB is a hard limit which affects both miners and clients. I'm assuming a world without MAX_BLOCK_SIZE at all, for both miners and clients.

Miners can ALWAYS drop a valid block if they don't like it, just as they can ignore any valid transaction. Currently, miners that take non-standard transactions run a higher risk of having their blocks orphaned, because other miners may not like those blocks.

If a miner (Bob) sees a new valid block at height N but doesn't like it for whatever reason, he will simply keep mining on top of block N-1. When Bob finds another valid block (N2), he will broadcast it to the network and other miners will choose between N and N2. Here Bob takes the risk of being orphaned, because other miners may build on block N. If block N+1 is built on N, Bob has to reconsider the risk, and he may decide to keep mining on N+1 instead of N-1 or his N2. However, if Bob (or his team) owns 51% of the network, he will always win and block N will eventually be orphaned. (You may call it a 51% attack, but this is exactly how the system works.)

Therefore, if the majority of miners do not like 1GB blocks, building them will become very risky and no one will do so.
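
A toy decision rule for the behaviour described above, with a hypothetical max_deficit threshold standing in for how much orphan risk Bob is willing to absorb before rejoining the majority chain; none of these names correspond to real client options:

Code:
def choose_parent(network_height, my_height, dislikes_network_tip,
                  max_deficit=1):
    """Return which branch Bob extends next. Heights are block heights."""
    if not dislikes_network_tip:
        return "network tip"                  # normal case: extend block N
    deficit = network_height - my_height      # how far Bob's branch trails
    if deficit > max_deficit:
        # Blocks like N+1 keep landing on the branch Bob dislikes; fighting
        # on means orphaning his own work, so he capitulates.
        return "network tip"
    return "own branch on top of N-1"         # keep trying to orphan block N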

Donation address: 374iXxS4BuqFHsEwwxUuH3nvJ69Y7Hqur3 (Bitcoin ONLY)
LRDGENPLYrcTRssGoZrsCT1hngaH3BVkM4 (LTC)
PGP: D3CC 1772 8600 5BB8 FF67 3294 C524 2A1A B393 6517
kjj
Legendary
*
Offline Offline

Activity: 1302
Merit: 1024



View Profile
February 04, 2013, 07:30:44 AM
 #86

I know the 1MB is a hard limit which affects both miners and clients. I'm assuming a world without MAX_BLOCK_SIZE at all, for both miners and clients.

Miners can ALWAYS drop a valid block if they don't like it, just as they can ignore any valid transaction. Currently, miners that take non-standard transactions run a higher risk of having their blocks orphaned, because other miners may not like those blocks.

If a miner (Bob) sees a new valid block at height N but doesn't like it for whatever reason, he will simply keep mining on top of block N-1. When Bob finds another valid block (N2), he will broadcast it to the network and other miners will choose between N and N2. Here Bob takes the risk of being orphaned, because other miners may build on block N. If block N+1 is built on N, Bob has to reconsider the risk, and he may decide to keep mining on N+1 instead of N-1 or his N2. However, if Bob (or his team) owns 51% of the network, he will always win and block N will eventually be orphaned. (You may call it a 51% attack, but this is exactly how the system works.)

Therefore, if the majority of miners do not like 1GB blocks, building them will become very risky and no one will do so.

What you are describing is much worse than a mere fork, the only word I can think of for it is a shatter.

17Np17BSrpnHCZ2pgtiMNnhjnsWJ2TMqq8
I routinely ignore posters with paid advertising in their sigs.  You should too.
jl2012
Legendary
*
Offline Offline

Activity: 1792
Merit: 1092


View Profile
February 04, 2013, 07:42:36 AM
 #87

I know the 1MB is a hard limit which affects both miners and clients. I'm assuming a world without MAX_BLOCK_SIZE at all, for both miners and clients.

Miners can ALWAYS drop a valid block if they don't like it, just as they can ignore any valid transaction. Currently, miners that take non-standard transactions run a higher risk of having their blocks orphaned, because other miners may not like those blocks.

If a miner (Bob) sees a new valid block at height N but doesn't like it for whatever reason, he will simply keep mining on top of block N-1. When Bob finds another valid block (N2), he will broadcast it to the network and other miners will choose between N and N2. Here Bob takes the risk of being orphaned, because other miners may build on block N. If block N+1 is built on N, Bob has to reconsider the risk, and he may decide to keep mining on N+1 instead of N-1 or his N2. However, if Bob (or his team) owns 51% of the network, he will always win and block N will eventually be orphaned. (You may call it a 51% attack, but this is exactly how the system works.)

Therefore, if the majority of miners do not like 1GB blocks, building them will become very risky and no one will do so.

What you are describing is much worse than a mere fork, the only word I can think of for it is a shatter.

This is actually happening, and it forces some miners to drop transactions from Satoshi Dice to keep their blocks slimmer. Ignoring big blocks might not be intentional, but big blocks are non-competitive for an obvious reason: they take longer to propagate.

Maybe I should rephrase it:

Therefore, if the majority of miners are unable to handle a 1GB block N in a timely manner, they will keep building on N-1 until N is verified. Block N is exposed to a higher risk of orphaning, so building 1GB blocks will become very risky and no one will do so.

Donation address: 374iXxS4BuqFHsEwwxUuH3nvJ69Y7Hqur3 (Bitcoin ONLY)
LRDGENPLYrcTRssGoZrsCT1hngaH3BVkM4 (LTC)
PGP: D3CC 1772 8600 5BB8 FF67 3294 C524 2A1A B393 6517
MPOE-PR
Hero Member
*****
Offline Offline

Activity: 756
Merit: 522



View Profile
February 04, 2013, 09:43:14 AM
 #88

I know the 1MB is a hard limit which affects both miners and clients. I'm assuming a world without MAX_BLOCK_SIZE at all, for both miners and clients.

Miners can ALWAYS drop a valid block if they don't like it, just as they can ignore any valid transaction. Currently, miners that take non-standard transactions run a higher risk of having their blocks orphaned, because other miners may not like those blocks.

If a miner (Bob) sees a new valid block at height N but doesn't like it for whatever reason, he will simply keep mining on top of block N-1. When Bob finds another valid block (N2), he will broadcast it to the network and other miners will choose between N and N2. Here Bob takes the risk of being orphaned, because other miners may build on block N. If block N+1 is built on N, Bob has to reconsider the risk, and he may decide to keep mining on N+1 instead of N-1 or his N2. However, if Bob (or his team) owns 51% of the network, he will always win and block N will eventually be orphaned. (You may call it a 51% attack, but this is exactly how the system works.)

Therefore, if the majority of miners do not like 1GB blocks, building them will become very risky and no one will do so.

What you are describing is much worse than a mere fork, the only word I can think of for it is a shatter.

Actually sounds like correct behavior.

My Credentials  | THE BTC Stock Exchange | I have my very own anthology! | Use bitcointa.lk, it's like this one but better.
Gavin Andresen
Legendary
*
qt
Offline Offline

Activity: 1652
Merit: 2216


Chief Scientist


View Profile WWW
February 04, 2013, 05:17:08 PM
 #89

Why don't we just let miners decide the optimal block size?

If a miner generates a 1-GB block and it is just too big for other miners, they may simply drop it. That will stop anyone from generating 1-GB blocks, because such blocks will become orphans anyway. An equilibrium will be reached and block space will still be scarce.

I think this is exactly the right thing to do.

There is still the question of what the default behavior should be. Here is a proposal:

Ignore blocks that take your node longer than N seconds to verify.

I'd propose that N be:  60 seconds if you are catching up with the blockchain.  5 seconds if you are all caught-up.  But allow miners/merchants/users to easily change those defaults.

Rationale: we should use time-to-verify as the metric, because everything revolves around the 10-minutes-per-block constant.

Time-to-verify has the nice property of scaling as hardware gets more powerful. Miners will want to create blocks that take a reasonable amount of time to propagate through the network and verify, and will have to weigh "add more transactions to blocks" versus "if I add too many, my block will be ignored by more than half the network."

Time-to-verify also has the nice property of incentivizing miners to broadcast transactions instead of 'hoarding' them, because transactions that are broadcast before they are in a block make the block faster to verify (because of the signature cache). That is good for lots of reasons (early detection of potential double-spends and spreading out the verification work over time so there isn't a blizzard of CPU work that needs to be done every time a block is found, for example).
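
A sketch of the acceptance rule proposed above, assuming a verify_fn that runs the full consensus checks; the defaults mirror the 60-second/5-second suggestion and would be user-overridable, but this is an illustration rather than a Bitcoin Core patch:

Code:
import time

DEFAULT_LIMITS = {"catching_up": 60.0, "caught_up": 5.0}   # seconds

def accept_block(block, verify_fn, catching_up, limits=DEFAULT_LIMITS):
    limit = limits["catching_up"] if catching_up else limits["caught_up"]
    start = time.monotonic()
    valid = verify_fn(block)                  # full script/consensus checks
    elapsed = time.monotonic() - start
    if not valid:
        return False                          # invalid: reject outright
    return elapsed <= limit                   # valid but too slow: ignore it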


How often do you get the chance to work on a potentially world-changing project?
jl2012
Legendary
*
Offline Offline

Activity: 1792
Merit: 1092


View Profile
February 04, 2013, 05:34:25 PM
 #90

Why don't we just let miners decide the optimal block size?

If a miner generates a 1-GB block and it is just too big for other miners, they may simply drop it. That will stop anyone from generating 1-GB blocks, because such blocks will become orphans anyway. An equilibrium will be reached and block space will still be scarce.

I think this is exactly the right thing to do.

There is still the question of what the default behavior should be. Here is a proposal:

Ignore blocks that take your node longer than N seconds to verify.

I'd propose that N be:  60 seconds if you are catching up with the blockchain.  5 seconds if you are all caught-up.  But allow miners/merchants/users to easily change those defaults.

Rationale: we should use time-to-verify as the metric, because everything revolves around the 10-minutes-per-block constant.

Time-to-verify has the nice property of scaling as hardware gets more powerful. Miners will want to create blocks that take a reasonable amount of time to propagate through the network and verify, and will have to weigh "add more transactions to blocks" versus "if I add too many, my block will be ignored by more than half the network."

Time-to-verify also has the nice property of incentivizing miners to broadcast transactions instead of 'hoarding' them, because transactions that are broadcast before they are in a block make the block faster to verify (because of the signature cache). That is good for lots of reasons (early detection of potential double-spends and spreading out the verification work over time so there isn't a blizzard of CPU work that needs to be done every time a block is found, for example).



And if there are more transactions than the available block space can hold, people will pay higher transaction fees, and miners will have more money to upgrade their hardware and network for bigger blocks.

Donation address: 374iXxS4BuqFHsEwwxUuH3nvJ69Y7Hqur3 (Bitcoin ONLY)
LRDGENPLYrcTRssGoZrsCT1hngaH3BVkM4 (LTC)
PGP: D3CC 1772 8600 5BB8 FF67 3294 C524 2A1A B393 6517
ShadowOfHarbringer
Legendary
*
Offline Offline

Activity: 1470
Merit: 1005


Bringing Legendary Har® to you since 1952


View Profile
February 04, 2013, 06:13:24 PM
 #91

Why don't we just let miners decide the optimal block size?

If a miner generates a 1-GB block and it is just too big for other miners, they may simply drop it. That will stop anyone from generating 1-GB blocks, because such blocks will become orphans anyway. An equilibrium will be reached and block space will still be scarce.

I think this is exactly the right thing to do.

There is still the question of what the default behavior should be. Here is a proposal:

Ignore blocks that take your node longer than N seconds to verify.

I'd propose that N be:  60 seconds if you are catching up with the blockchain.  5 seconds if you are all caught-up.  But allow miners/merchants/users to easily change those defaults.

Rationale: we should use time-to-verify as the metric, because everything revolves around the 10-minutes-per-block constant.

Time-to-verify has the nice property of scaling as hardware gets more powerful. Miners will want to create blocks that take a reasonable amount of time to propagate through the network and verify, and will have to weigh "add more transactions to blocks" versus "if I add too many, my block will be ignored by more than half the network."

Time-to-verify also has the nice property of incentivizing miners to broadcast transactions instead of 'hoarding' them, because transactions that are broadcast before they are in a block make the block faster to verify (because of the signature cache). That is good for lots of reasons (early detection of potential double-spends and spreading out the verification work over time so there isn't a blizzard of CPU work that needs to be done every time a block is found, for example).

This.

I am all for "let-the-market-decide" elastic algorithms.

If you let people select what is best for their interests, they will make the best choices through multiple tries in order to maximize profit & minimize risk.

Nobody wants to lose money, and everybody wants to earn the most. Therefore the market will balance out the block size and reach equilibrium automatically.

Akka
Legendary
*
Offline Offline

Activity: 1232
Merit: 1001



View Profile
February 04, 2013, 06:21:24 PM
 #92

This.

I am all for "let-the-market-decide" elastic algorithms.

If you let people select what is best for their interests, they will make the best choices through multiple tries in order to maximize profit & minimize risk.

Nobody wants to lose money, and everybody wants to earn the most. Therefore the market will balance out the block size and reach equilibrium automatically.

I concur.

A kind of "natural selection" in an open market ends in the best possible solution for the current environment (hardware).

This also allows us to adapt to better hardware, as there is no way to tell with 100% certainty where development will go. (At least that's my opinion.)

All previous versions of currency will no longer be supported as of this update
ildubbioso
Sr. Member
****
Offline Offline

Activity: 389
Merit: 250



View Profile
February 04, 2013, 07:05:05 PM
 #93

So, shouldn't we (you developers actually) change it as fast as possible?
caveden
Legendary
*
Offline Offline

Activity: 1106
Merit: 1004



View Profile
February 04, 2013, 07:25:00 PM
 #94

There is still the question of what the default behavior should be. Here is a proposal:

Ignore blocks that take your node longer than N seconds to verify.

That's nice. Just don't forget to include total download time in the "time to verify", as well as any other I/O time. Bandwidth will be a significant bottleneck once blocks start getting larger.

EDIT: Oh, and of course, there must be tolerance levels too (if I'm X blocks behind the chain I once rejected, I'll give up and start building on top of it). You don't want to create that many chain forks! Smiley
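
A sketch of the two refinements suggested above, with an arbitrary tolerance of 6 blocks standing in for the unspecified X; both the numbers and the names are assumptions for illustration only:

Code:
REORG_TOLERANCE = 6   # hypothetical X: how far a once-rejected chain may lead

def effective_verify_time(download_s, io_s, cpu_s):
    # Count bandwidth and disk time, not just CPU, toward "time to verify".
    return download_s + io_s + cpu_s

def should_capitulate(rejected_tip_height, my_tip_height,
                      tolerance=REORG_TOLERANCE):
    # If the chain we once ignored is now well ahead of us, give up and
    # start building on top of it instead of multiplying forks.
    return rejected_tip_height - my_tip_height >= tolerance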
arklan
Legendary
*
Offline Offline

Activity: 1778
Merit: 1008



View Profile
February 04, 2013, 07:25:34 PM
 #95

Probably something to aim to have in place before 1.0 is released... and since we're closing in on 0.8... Cheesy

i don't post much, but this space for rent.
MPOE-PR
Hero Member
*****
Offline Offline

Activity: 756
Merit: 522



View Profile
February 04, 2013, 08:12:55 PM
 #96

Why don't we just let miners decide the optimal block size?

If a miner generates a 1-GB block and it is just too big for other miners, they may simply drop it. That will stop anyone from generating 1-GB blocks, because such blocks will become orphans anyway. An equilibrium will be reached and block space will still be scarce.

I think this is exactly the right thing to do.

There is still the question of what the default behavior should be. Here is a proposal:

Ignore blocks that take your node longer than N seconds to verify.

I'd propose that N be:  60 seconds if you are catching up with the blockchain.  5 seconds if you are all caught-up.  But allow miners/merchants/users to easily change those defaults.

Rationale: we should use time-to-verify as the metric, because everything revolves around the 10-minutes-per-block constant.

Time-to-verify has the nice property of scaling as hardware gets more powerful. Miners will want to create blocks that take a reasonable amount of time to propagate through the network and verify, and will have to weigh "add more transactions to blocks" versus "if I add too many, my block will be ignored by more than half the network."

Time-to-verify also has the nice property of incentivizing miners to broadcast transactions instead of 'hoarding' them, because transactions that are broadcast before they are in a block make the block faster to verify (because of the signature cache). That is good for lots of reasons (early detection of potential double-spends and spreading out the verification work over time so there isn't a blizzard of CPU work that needs to be done every time a block is found, for example).

Spoken like a true Gavin. No objections.

My Credentials  | THE BTC Stock Exchange | I have my very own anthology! | Use bitcointa.lk, it's like this one but better.
FreeMoney
Legendary
*
Offline Offline

Activity: 1246
Merit: 1014


Strength in numbers


View Profile WWW
February 04, 2013, 08:49:43 PM
 #97

Why don't we just let miners decide the optimal block size?

If a miner generates a 1-GB block and it is just too big for other miners, they may simply drop it. That will stop anyone from generating 1-GB blocks, because such blocks will become orphans anyway. An equilibrium will be reached and block space will still be scarce.

I think this is exactly the right thing to do.

There is still the question of what the default behavior should be. Here is a proposal:

Ignore blocks that take your node longer than N seconds to verify.

I'd propose that N be:  60 seconds if you are catching up with the blockchain.  5 seconds if you are all caught-up.  But allow miners/merchants/users to easily change those defaults.

Rationale: we should use time-to-verify as the metric, because everything revolves around the 10-minutes-per-block constant.

Time-to-verify has the nice property of scaling as hardware gets more powerful. Miners will want to create blocks that take a reasonable amount of time to propagate through the network and verify, and will have to weigh "add more transactions to blocks" versus "if I add too many, my block will be ignored by more than half the network."

Time-to-verify also has the nice property of incentivizing miners to broadcast transactions instead of 'hoarding' them, because transactions that are broadcast before they are in a block make the block faster to verify (because of the signature cache). That is good for lots of reasons (early detection of potential double-spends and spreading out the verification work over time so there isn't a blizzard of CPU work that needs to be done every time a block is found, for example).



This rule would apply to blocks until they are 1 deep, right? Do you envision no check-time or size rule for blocks that are built on? Or a different much more generous rule?

Play Bitcoin Poker at sealswithclubs.eu. We're active and open to everyone.
notme
Legendary
*
Offline Offline

Activity: 1904
Merit: 1002


View Profile
February 05, 2013, 04:34:12 AM
 #98

Probably something to aim to have in place before 1.0 is released... and since we're closing in on 0.8... Cheesy

We're only 2 minor releases away!!!.... from 0.10

https://www.bitcoin.org/bitcoin.pdf
While no idea is perfect, some ideas are useful.
solex
Legendary
*
Offline Offline

Activity: 1078
Merit: 1002


100 satoshis -> ISO code


View Profile
February 05, 2013, 04:40:50 AM
 #99

Probably something to aim to have in place before 1.0 is released... and since we're closing in on 0.8... Cheesy

We're only 2 minor releases away!!!.... from 0.10

1.0 or 1.10?  Smiley

jl2012
Legendary
*
Offline Offline

Activity: 1792
Merit: 1092


View Profile
February 05, 2013, 04:50:09 AM
 #100

The 1MB MAX_BLOCK_SIZE is obviously an arbitrary and temporary limit. Imagine that Bitcoin had been invented in 1996 instead of 2009, when 99% of normal internet users connected through telephone lines at 28.8 kbit/s, or about 3.6 kB/s. Transferring a typical 200 kB block of today would take nearly a minute, and the system would fail due to a very high stale rate and many branches in the chain. If the "1996 Satoshi" had used a 25 kB MAX_BLOCK_SIZE, would we still stick with it until the end of Bitcoin?
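
The arithmetic behind the 1996 thought experiment, as a quick check:

Code:
block_size_kB = 200
modem_rate_kBps = 28.8 / 8            # 28.8 kbit/s is roughly 3.6 kB/s
transfer_s = block_size_kB / modem_rate_kBps
print(f"{transfer_s:.0f} s to move one 200 kB block")   # about 56 s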

Donation address: 374iXxS4BuqFHsEwwxUuH3nvJ69Y7Hqur3 (Bitcoin ONLY)
LRDGENPLYrcTRssGoZrsCT1hngaH3BVkM4 (LTC)
PGP: D3CC 1772 8600 5BB8 FF67 3294 C524 2A1A B393 6517