Bitcoin Forum
Author Topic: How a floating blocksize limit inevitably leads towards centralization  (Read 71512 times)
WiW
Sr. Member

Activity: 277
Merit: 250


"The public is stupid, hence the public will pay"


February 26, 2013, 11:22:50 PM
 #481

And yet, somehow when the reward got cut in half (block fees went down) the hash rate went down. Doh!

And yet, somehow, when the hash rate went down nobody successfully attacked the network. What's your point? What does this have to do with decentralization?
Mashuri
Full Member

Activity: 135
Merit: 107


February 27, 2013, 01:48:38 AM
 #482

OK, this thread has been a bear to read, but I'm glad I did.  I understand the desire to limit max block size due to bandwidth limits, and I certainly do not want a Google-esque datacenter centralization of mining.  Since bandwidth is the primary issue (storage being secondary), I'm with the people who focus their solutions around bandwidth and not things like profits or hash rate.  I like the idea of taking a bandwidth metric, like propagation time, of all connected nodes and using that to determine max block size.  Done properly, the bitcoin network should be able to optimize itself to whatever the current average bandwidth happens to be, without overwhelming it.

MoonShadow
Legendary

Activity: 1708
Merit: 1007



February 27, 2013, 01:52:01 AM
 #483

  I like the idea of taking a bandwidth metric, like propagation time, of all connected nodes and using that to determine max block size.  Done properly, the bitcoin network should be able to optimize itself to whatever the current average bandwidth happens to be, without overwhelming it.

The question is, how do we collect accurate data upon propagation time?  And then how do we utilize said data in a way that will result in a uniform computation for the entire network?

"The powers of financial capitalism had another far-reaching aim, nothing less than to create a world system of financial control in private hands able to dominate the political system of each country and the economy of the world as a whole. This system was to be controlled in a feudalist fashion by the central banks of the world acting in concert, by secret agreements arrived at in frequent meetings and conferences. The apex of the systems was to be the Bank for International Settlements in Basel, Switzerland, a private bank owned and controlled by the world's central banks which were themselves private corporations. Each central bank...sought to dominate its government by its ability to control Treasury loans, to manipulate foreign exchanges, to influence the level of economic activity in the country, and to influence cooperative politicians by subsequent economic rewards in the business world."

- Carroll Quigley, CFR member, mentor to Bill Clinton, from 'Tragedy And Hope'
Mashuri
Full Member

Activity: 135
Merit: 107


February 27, 2013, 02:02:25 AM
Last edit: February 27, 2013, 06:05:06 AM by Mashuri
 #484

 I like the idea of taking a bandwidth metric, like propagation time, of all connected nodes and using that to determine max block size.  Done properly, the bitcoin network should be able to optimize itself to whatever the current average bandwidth happens to be, without overwhelming it.

The question is, how do we collect accurate data upon propagation time?  And then how do we utilize said data in a way that will result in a uniform computation for the entire network?

Yes, the metric is the hard part.  I'm not familiar with the inner workings of the mining software so this may be an amateur question: Is there typically any bandwidth "downtime" during the ~10 minutes a miner is hashing away?  If so, could a sort of "speed test" be taken with a uniform sized piece of data between nodes?

EDIT:
Another half-baked thought -- Couldn't each node also report the amount of time it took to download the last block, the aggregate of which could be used for determining size?  I think I remember Gavin suggesting something similar.
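Something like this half-baked sketch is what I have in mind (purely illustrative: the 6-second target, the use of the median, and the 10% step limit are all my own assumptions, and it says nothing about how to keep nodes from lying about their times):
Code:
# Illustrative only: adjust the max block size from the download times
# nodes report for the last block.  How the reports are collected, and
# how lying nodes are kept out, is the unsolved part.

TARGET_SECONDS = 6.0   # assumed target: ~1% of the ~10-minute block interval
MAX_STEP = 1.1         # assumed: never move the limit more than 10% per adjustment

def new_max_block_size(current_max_bytes, reported_times):
    """reported_times: seconds each node says the last block took to download."""
    if not reported_times:
        return current_max_bytes
    median = sorted(reported_times)[len(reported_times) // 2]
    # If the median node got the block faster than the target, allow a
    # slightly larger block next time; if slower, shrink it.
    ratio = TARGET_SECONDS / median
    ratio = max(1.0 / MAX_STEP, min(MAX_STEP, ratio))
    return int(current_max_bytes * ratio)

print(new_max_block_size(1_000_000, [2.0, 3.5, 4.0, 9.0]))  # median 4.0s -> grows 10%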

solex
Legendary

Activity: 1078
Merit: 1002


100 satoshis -> ISO code


February 27, 2013, 05:41:28 AM
 #485

 I like the idea of taking a bandwidth metric, like propagation time, of all connected nodes and using that to determine max block size.  Done properly, the bitcoin network should be able to optimize itself to whatever the current average bandwidth happens to be, without overwhelming it.

The question is, how do we collect accurate data upon propagation time?  And then how do we utilize said data in a way that will result in a uniform computation for the entire network?

Yes, the metric is the hard part.  I'm not familiar with the inner workings of the mining software so this may be an amateur question: Is there typically any bandwidth "downtime" during the ~10 minutes a miner is hashing away?  If so, could a sort of "speed test" be taken with a uniform sized piece of data between nodes?

EDIT:
Another half-baked thought -- Couldn't each node also report the amount of time it took to download the last block, the aggregate of which could be used for determining size?  I think I remember Gavin suggesting something similar.

The more time I spend thinking about the max block size, the more I am pushed towards concluding that block propagation and verification times are the most important factors.  I am interested to read that you have a similar view.

Realpra
Hero Member

Activity: 815
Merit: 1000


February 27, 2013, 06:34:08 AM
 #486

So...  I start from "more transactions == more success"

I strongly feel that we shouldn't aim for Bitcoin topping out as a "high power money" system that can process only 7 transactions per second.

Hey, I want a pony too. But Bitcoin is an O(n) system, and we have no choice but to limit n.
Actually it is O(n*m), where m is the number of full clients.
Did any of you guys remember my "swarm client" idea? It would move Bitcoin from being O(n*m) to O(n) and the network would share the load of storage and processing both.

No one ever found flaws in it, and those who bothered to read it generally thought it was pretty neat.  Just saying.  Plus, it requires no hard fork and can coexist with current clients.

This would also kill the malice-driven incentive for miners to drive out other miners, as it would no longer work (it would only burden the WHOLE network).

Cheap and sexy Bitcoin card/hardware wallet, buy here:
http://BlochsTech.com
MoonShadow
Legendary

Activity: 1708
Merit: 1007



February 27, 2013, 07:54:15 AM
Last edit: February 27, 2013, 08:09:31 AM by MoonShadow
 #487


Did any of you guys remember my "swarm client" idea? It would move Bitcoin from being O(n*m) to O(n) and the network would share the load of storage and processing both.


Searching the forum for "swarm client" begets nothing.  Link?

EDIT: Nevermind, I found it.  And I think that the main reason no one ever cited fault was because no one who knew the details of how the bitcoin block is actually constructed bothered to read it, or take your proposal seriously enough to respond.  I read your proposal, and could find no details about how a swarm client could actually divide up the task of verification of blocks.  That or I simply didn't understand it. 

For example, pool miners already don't have to verify blocks or transactions.  They never even see them, because that is unnecessary.  The mining is the hashing of the 80-byte header, nothing more.  Only if the primary nonce is exhausted is anything in the dataset of the block rearranged, and that is performed by the pool server.  We could have blocks of a gigabyte each, and that would have a negligible effect on pool miners.  And we don't need swarm clients to "verify the blockchain", because all but the most recent blocks have already been verified, unless you are starting up a fresh install of a full client.  With light clients we can skip even that part, to a degree.
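To be concrete about what a pool miner actually touches, the work is nothing but this 80-byte structure (a minimal sketch; the field values below are made up):
Code:
# Minimal sketch: the proof-of-work is double SHA-256 over the 80-byte
# header, so a pool miner never needs the block body at all.

import hashlib, struct

def header_hash(version, prev_block, merkle_root, timestamp, bits, nonce):
    header = struct.pack("<I32s32sIII", version, prev_block, merkle_root,
                         timestamp, bits, nonce)        # exactly 80 bytes
    return hashlib.sha256(hashlib.sha256(header).digest()).digest()[::-1]

# Made-up field values, purely to show the shape of the data being hashed.
print(header_hash(2, b"\x00" * 32, b"\x11" * 32, 1361923200, 0x1A05DB8B, 12345).hex())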

"The powers of financial capitalism had another far-reaching aim, nothing less than to create a world system of financial control in private hands able to dominate the political system of each country and the economy of the world as a whole. This system was to be controlled in a feudalist fashion by the central banks of the world acting in concert, by secret agreements arrived at in frequent meetings and conferences. The apex of the systems was to be the Bank for International Settlements in Basel, Switzerland, a private bank owned and controlled by the world's central banks which were themselves private corporations. Each central bank...sought to dominate its government by its ability to control Treasury loans, to manipulate foreign exchanges, to influence the level of economic activity in the country, and to influence cooperative politicians by subsequent economic rewards in the business world."

- Carroll Quigley, CFR member, mentor to Bill Clinton, from 'Tragedy And Hope'
TierNolan
Legendary

Activity: 1232
Merit: 1083


February 27, 2013, 10:47:00 AM
 #488

I read your proposal, and could find no details about how a swarm client could actually divide up the task of verification of blocks.  That or I simply didn't understand it. 

It looks like a node picks a random number between 0 and N-1 and then checks transactions where id = tx-hash mod N.
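Something like the following, as I read it (a sketch of my interpretation, not a spec; the bucket count N and treating the hash as an integer are assumptions):
Code:
# My reading of the swarm idea: each node picks a bucket at random and
# only verifies transactions whose hash falls into that bucket.

import random

def pick_bucket(num_buckets):
    return random.randrange(num_buckets)      # chosen independently per node

def should_verify(txid_hex, my_bucket, num_buckets):
    return int(txid_hex, 16) % num_buckets == my_bucket

N = 16                                         # assumed shard count
my_bucket = pick_bucket(N)
txids = ["1234567890", "2345678901", "abcdef0123"]
print([t for t in txids if should_verify(t, my_bucket, N)])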

Quote
For example, pool miners already don't have to verify blocks or transactions.

In fact, it would be much easier to write software that doesn't do it at all.  At the moment, the minting reward is much higher than the tx fees, so it is more efficient to just mint and not bother with the hassle of handling transactions.

If there is a 0.1% chance that a transaction is false, then including it in a block effectively costs the miner 25 * 0.1% = 0.025BTC, since if it is invalid, and he wins the block, the block will be discarded by other miners.
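Spelled out, using the current 25 BTC reward and that 0.1% figure:
Code:
block_reward = 25.0     # BTC block subsidy at the time
p_invalid = 0.001       # the 0.1% chance the unchecked transaction is false
print(block_reward * p_invalid)   # 0.025 BTC expected cost of including it blind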

P2P pools would be better set up so that they don't take that risk, at least until tx fees are the main source of income.

Quote
And we don't need swarm clients to "verify the blockchain", because all but the most recent has already been verified, unless you are starting up a fresh install of a full client.  With light clients we can skip even that part, to a degree.

Having each new client verify a random 1% of the blocks would be a reasonable thing to do, if combined with an alert system.  This would keep miners honest.

1LxbG5cKXzTwZg9mjL3gaRE835uNQEteWF
zebedee
Donator
Hero Member

Activity: 668
Merit: 500



February 27, 2013, 01:51:31 PM
 #489

I would hate to see the limit raised before the most inefficient uses of blockchain space, like satoshidice and coinad, change the way they operate.
Who gets to decide what's inefficient?  You?  That's precisely the problem - trying to centralize the decision.  It should be made by those doing the work according to their own economic incentives and desires.

SD haters (and I'm not particularly a fan) like Luke-jr get to not include their txns.  Others like Ozcoin apparently have no issue with the likes of SD and are happy to take their money.  Great, everyone gets a vote according to the effort they put in.

In addition I would hate to see alternatives to raising the limit fail to be developed because everyone assumes the limit will be raised. I also get the sense that Gavin's mind is already made up and the question to him isn't if the limit will be raised, but when and how. That may or may not be actually true, but as long as he gives that impression, and the Bitcoin Foundation keeps promoting the idea that Bitcoin transactions are always going to be almost free, raising the block limit is inevitable.
Ah, now we see your real agenda - you want to fund your pet projects of off-chain transaction consolidation.

If that is such a great idea - and it may well be, I have no problem with it - then please realise that it will get funded.

If it isn't getting funded, then please ask yourself why.

But don't try to force others to subsidize what you want to see happen.  Why not do it yourself if it's a winning idea for the end users?

Likely neither you nor the rest are doing it because there's no real economic incentive to do so - for now, perhaps.  But that's what entrepreneurship is all about.
zebedee
Donator
Hero Member

Activity: 668
Merit: 500



February 27, 2013, 02:49:57 PM
 #490


Previously, I supported changing the protocol in a carefully planned way to improve the end-user experience, but recently I discovered that you can double-spend on both the original chain and the new chain after a hard fork.  That means the promise of preventing double-spending and of a limited supply is broken, which is much more severe than I thought.


That simply means that, after a very short period of chaos post-fork, simple economic incentives would VERY quickly force a consensus on one of the chains.  The chaos would not be permitted to continue, by anyone, whichever side they personally want to "win", as it would cost them too much.
MoonShadow
Legendary

Activity: 1708
Merit: 1007



February 27, 2013, 04:16:55 PM
 #491

I read your proposal, and could find no details about how a swarm client could actually divide up the task of verification of blocks.  That or I simply didn't understand it. 

It looks like a node picks a random number between 0 and N-1 and then checks transactions where id = tx-hash mod N.


Who, and why?  That's the vague part.  Are miners not checking the blocks themselves, and instead depending upon others to spot-check sections?  How does that work, since it's the miners who will feel the losses should they mine a block with an invalid transaction?  Realistically, it'd be at least as effective to permit non-mining full clients to 'spot check' blocks in full, but on a random scale.  Say they check only 30% of the blocks that they see before forwarding them.  All blocks should still be fully checked before being integrated into the local blockchain, and I can't see a way around that process.

Quote

Having each new client verify a random 1% of the blocks would be a reasonable thing to do, if combined with an alert system.  This would keep miners honest.

But the miners would still need to check those blocks, and eventually so would everyone else.  This could introduce a new network attack vector.

"The powers of financial capitalism had another far-reaching aim, nothing less than to create a world system of financial control in private hands able to dominate the political system of each country and the economy of the world as a whole. This system was to be controlled in a feudalist fashion by the central banks of the world acting in concert, by secret agreements arrived at in frequent meetings and conferences. The apex of the systems was to be the Bank for International Settlements in Basel, Switzerland, a private bank owned and controlled by the world's central banks which were themselves private corporations. Each central bank...sought to dominate its government by its ability to control Treasury loans, to manipulate foreign exchanges, to influence the level of economic activity in the country, and to influence cooperative politicians by subsequent economic rewards in the business world."

- Carroll Quigley, CFR member, mentor to Bill Clinton, from 'Tragedy And Hope'
TierNolan
Legendary

Activity: 1232
Merit: 1083


February 27, 2013, 05:09:55 PM
 #492

But the miners would still need to check those blocks, and eventually so would everyone else.  This could introduce a new network attack vector.

I think miners are going to need to verify everything, at the end of the day.  However, it may be possible to do that in a p2p way.

I made a suggestion in another thread about having "parity" rules for transactions. 

A transaction of the form:

Input 0: tx_hash=1234567890/out=2
Input 1: tx_hash=2345678901/out=1
Output 0: <some script>

would have mixed parity, since its inputs come from one transaction with an even hash and one with an odd hash.

However, a parity rule could be added that requires either odd or even parity.

Input 0: tx_hash=1234567897/out=0
Input 1: tx_hash=2345678901/out=1
Output 0: <some script>

If the block height is even, then only even parity transactions would be allowed, and vice-versa for odd.

If a super-majority of the network agreed with that rule, then it wouldn't cause a fork.  Mixed parity blocks would just be orphaned.

The nice feature of the rule is that it allows blocks to be prepared in advance.

If the next block is an odd block, then a P2P miner system could broadcast a list of proposed transactions for inclusion and have them verified in advance.  As long as all the inputs into the proposed transactions come from even transactions, they won't be invalidated by the next block, which under the rule will only contain transactions with inputs from odd transactions.

This gives the P2P system time to reject invalid transactions.

All nodes on the network could be ready to switch to the next block immediately, without having to even read the new block (other than check the header).  Verification could happen later.
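A rough sketch of how the rule could be checked (illustrative only; a transaction's parity here is just the shared parity of the tx hashes its inputs spend):
Code:
# Sketch of the parity rule: a transaction's parity is the shared parity
# of the tx hashes its inputs spend; a block at even height may only
# contain even-parity transactions, and vice versa for odd heights.

def tx_parity(input_tx_hashes):
    parities = {int(h, 16) % 2 for h in input_tx_hashes}
    if len(parities) != 1:
        return None              # mixed parity: disallowed under the rule
    return parities.pop()        # 0 = even, 1 = odd

def block_obeys_rule(block_height, block_txs):
    # block_txs: one list of input tx hashes per transaction in the block
    return all(tx_parity(inputs) == block_height % 2 for inputs in block_txs)

print(tx_parity(["1234567890", "2345678901"]))                 # None (mixed, as above)
print(tx_parity(["1234567897", "2345678901"]))                 # 1 (odd parity)
print(block_obeys_rule(101, [["1234567897", "2345678901"]]))   # True: odd height, odd tx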

1LxbG5cKXzTwZg9mjL3gaRE835uNQEteWF
Realpra
Hero Member

Activity: 815
Merit: 1000


February 27, 2013, 05:55:28 PM
 #493


Did any of you guys remember my "swarm client" idea? It would move Bitcoin from being O(n*m) to O(n) and the network would share the load of storage and processing both.


Searching the forum for "swarm client" begets nothing.  Link?
https://bitcointalk.org/index.php?topic=87763.0
(Second search link Tongue)

Quote
I read your proposal, and could find no details about how a swarm client could actually divide up the task of verification of blocks.  That or I simply didn't understand it. 
The details are a little hairy, but the idea is actually very simple: it is difficult to validate a block, BUT easy to show a flaw in one.

To show a block is invalid, just one S-client needs to share with the rest of the network that it contains a double spend.  This accusation can be proved by sending along the transaction history for the address in question.
This history cannot be faked, due to the nature of the block's tree data structure.

Even if the S-clients keep a full history of each address they watch, and exchange this in cases of accusations, the computing power saved should still be substantial, despite many addresses being tangled together.

There was also talk of combining this with a 5-10 year ledger system which would put a cap on the running blockchain size.
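Roughly, the accusation step would look something like this (a toy sketch: transactions are reduced to (txid, spent-outpoint) pairs, and a real proof would also carry the history showing the earlier spend):
Code:
# Toy sketch of the accusation step: an S-client that notices a block
# spending an already-spent output broadcasts the conflict as evidence.
# A real proof would also carry the history showing the earlier spend.

def find_double_spend(already_spent, block_txs):
    """already_spent: set of (txid, vout) outpoints spent in earlier blocks.
       block_txs: list of (txid, [spent outpoints]) for the new block."""
    for txid, inputs in block_txs:
        for outpoint in inputs:
            if outpoint in already_spent:
                return {"offending_tx": txid, "outpoint": outpoint}   # the accusation
            already_spent.add(outpoint)
    return None

spent = {("aa01", 0)}
block = [("bb02", [("aa01", 0)])]        # tries to spend ("aa01", 0) a second time
print(find_double_spend(spent, block))   # evidence to share with the rest of the network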

Cheap and sexy Bitcoin card/hardware wallet, buy here:
http://BlochsTech.com
MoonShadow
Legendary

Activity: 1708
Merit: 1007



February 27, 2013, 08:08:53 PM
 #494


Did any of you guys remember my "swarm client" idea? It would move Bitcoin from being O(n*m) to O(n) and the network would share the load of storage and processing both.


Searching the forum for "swarm client" begets nothing.  Link?
https://bitcointalk.org/index.php?topic=87763.0
(Second search link Tongue)

Quote
I read your proposal, and could find no details about how a swarm client could actually divide up the task of verification of blocks.  That or I simply didn't understand it. 
The details are a little hairy, but the idea is actually very simple: it is difficult to validate a block, BUT easy to show a flaw in one.

To show a block is invalid, just one S-client needs to share with the rest of the network that it contains a double spend.  This accusation can be proved by sending along the transaction history for the address in question.
This history cannot be faked, due to the nature of the block's tree data structure.


Not true.  A double spend would occur at nearly the same time.  Due to the propagation rules that apply to loose transactions, it's very unlikely that any single node (swarm or otherwise) will actually see both transactions.  And what if it did?  If it could sound an alarm about it, which one is the valid one?  The nodes cannot tell.  And even responding to an alarm implies some degree of trust in the sender, which opens up an attack vector if an attacker can spoof nodes and flood the network with false alarms.


Furthermore, a double spend can't get into a block even if the miner doesn't bother to validate it first, since that would imply that the miner is participating in an attack on the network himself, given that he shouldn't be able to see both competing transactions.
Quote
Even if the S-clients keep a full history of each address they watch, and exchange this in cases of accusations, the computing power saved should still be substantial, despite many addresses being tangled together.

This would serve little purpose, since addresses are created and abandoned at such a rapid rate.

Quote
There was also talk of combining this with a 5-10 year ledger system which would put a cap on the running blockchain size.

Pruning would also put a cap on the running blockchain size, and doesn't require a hard fork of the code.  That was the purpose of the Merkle tree from the beginning.  Satoshi thought about that, too.

"The powers of financial capitalism had another far-reaching aim, nothing less than to create a world system of financial control in private hands able to dominate the political system of each country and the economy of the world as a whole. This system was to be controlled in a feudalist fashion by the central banks of the world acting in concert, by secret agreements arrived at in frequent meetings and conferences. The apex of the systems was to be the Bank for International Settlements in Basel, Switzerland, a private bank owned and controlled by the world's central banks which were themselves private corporations. Each central bank...sought to dominate its government by its ability to control Treasury loans, to manipulate foreign exchanges, to influence the level of economic activity in the country, and to influence cooperative politicians by subsequent economic rewards in the business world."

- Carroll Quigley, CFR member, mentor to Bill Clinton, from 'Tragedy And Hope'
tvbcof
Legendary

Activity: 4592
Merit: 1276


February 27, 2013, 09:26:20 PM
 #495

...

Pruning would also put a cap on the running blockchain size, and doesn't require a hard fork of the code.  That was the purpose of the Merkle tree from the beginning.  Satoshi thought about that, too.


It strikes me that Satoshi seemed more sensitive to system footprint than many of those who came after.  Both in design and in configuration, he seems to have left Bitcoin in a condition more suitable as a reliable backing and clearing solution than as a competitive replacement for centralized systems such as PayPal.

By this I mean that the latency inherent in the Bitcoin-like family of crypto-currencies is always going to be a sore point for Joe Sixpack to use in native and rigorous form for daily purchases.  And the current block size is a lingering artifact of the time period of his involvement (actually a guess on my part, without looking through the repository).

I was disappointed that the (now) early development focus was on wallet encryption, prettying up the GUI, and the multi-sig stuff, if this came at the expense of Merkle-tree pruning work.  I personally decided to make lemonade out of lemons, to some extent, noting that although I thought the priorities and direction were a bit off, the chosen course would probably balloon the market cap more quickly and I could try to make a buck off it no matter what the end result of Bitcoin might be.


sig spam anywhere and self-moderated threads on the pol&soc board are for losers.
misterbigg
Legendary

Activity: 1064
Merit: 1001



February 27, 2013, 11:16:00 PM
 #496

...how do we collect accurate data upon propagation time?  And then how do we utilize said data ...

Quite simply, you don't. There is no obvious way to collect these statistics in a way that is not vulnerable to spoofing or gaming by miners. That's why I advocate the voting method in my other post.
MoonShadow
Legendary

Activity: 1708
Merit: 1007



February 28, 2013, 12:05:46 AM
 #497

...how do we collect accurate data upon propagation time?  And then how do we utilize said data ...

Quite simply, you don't. There is no obvious way to collect these statistics in a way that is not vulnerable to spoofing or gaming by miners. That's why I advocate the voting method in my other post.

Ah, yeah.  That's why I asked the question that way, because I didn't think that it could be done, and was highlighting the root problem with this method.

"The powers of financial capitalism had another far-reaching aim, nothing less than to create a world system of financial control in private hands able to dominate the political system of each country and the economy of the world as a whole. This system was to be controlled in a feudalist fashion by the central banks of the world acting in concert, by secret agreements arrived at in frequent meetings and conferences. The apex of the systems was to be the Bank for International Settlements in Basel, Switzerland, a private bank owned and controlled by the world's central banks which were themselves private corporations. Each central bank...sought to dominate its government by its ability to control Treasury loans, to manipulate foreign exchanges, to influence the level of economic activity in the country, and to influence cooperative politicians by subsequent economic rewards in the business world."

- Carroll Quigley, CFR member, mentor to Bill Clinton, from 'Tragedy And Hope'
misterbigg
Legendary

Activity: 1064
Merit: 1001



February 28, 2013, 03:00:30 AM
 #498

That's why I asked the question that way, because I didn't think that it could be done, and was highlighting the root problem with this method.

Yep, I don't think it can be done either.  At least, not in a way that can't be gamed.  And any system which can be gamed is really no different from a voting system.  So we might as well just make it a voting system and let each miner decide the criteria for how to vote.
zebedee
Donator
Hero Member

Activity: 668
Merit: 500



February 28, 2013, 03:04:21 AM
 #499

If we want to cap the overhead of downloading the latest block at, say, 1% of the ~10-minute block interval, we need to be able to download MAX_BLOCKSIZE worth of data within 6 seconds on average, so that we can spend 99% of the time hashing.

At 1MB, you would need a ~1.7Mbps  connection to keep downloading time to 6s.
At 10MB, 17Mbps
At 100MB, 170Mbps

and you start to see why even 100MB block size would render 90% of the world population unable to participate in mining.
Even at 10MB, it requires investing in a relatively high speed connection.

Thank you. This is the clearest explanation yet of how an increase in the maximum block size raises the minimum bandwidth requirements for mining nodes.
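For reference, the arithmetic behind those figures works out roughly like this (a quick sketch that assumes 8 bits per byte and ignores protocol overhead):
Code:
# Rough check of those figures: the sustained rate needed to pull a full
# block down in 6 seconds (8 bits per byte, protocol overhead ignored).

def required_mbps(block_size_mb, seconds=6.0):
    return block_size_mb * 8 / seconds

for size_mb in (1, 10, 100):
    print(f"{size_mb:>4} MB block -> {required_mbps(size_mb):6.1f} Mbps minimum")
# ~1.3, ~13.3 and ~133 Mbps raw; the quoted 1.7 / 17 / 170 Mbps figures
# presumably include some margin for overhead.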
Hmm.  The header can be downloaded in parallel with, and separately from, the block body, and hashing can start after receiving just the header, which takes only milliseconds.  Perhaps a "quick" list of outputs spent by the block would be useful for building non-trivial blocks that don't include double-spends, but that would be ~5% of the block size?  Plenty of room for "optimization" here were it ever an issue.

Fake headers / tx lists that don't match the actual body?  That's a black mark for the dude who gave it to you as untrustworthy.  Too many black marks and you ignore future "headers" from him as a proven time-waster.

Build up trust with your peers, just like real life.
Cubic Earth
Legendary

Activity: 1176
Merit: 1018



February 28, 2013, 04:22:18 AM
 #500


Previously, I supported changing the protocol in a carefully planned way to improve the end-user experience, but recently I discovered that you can double-spend on both the original chain and the new chain after a hard fork.  That means the promise of preventing double-spending and of a limited supply is broken, which is much more severe than I thought.


That simply means that, after a very short period of chaos post-fork, simple economic incentives would VERY quickly force a consensus on one of the chains.  The chaos would not be permitted to continue, by anyone, whichever side they personally want to "win", as it would cost them too much.

Or there could be two chains, each with its own pros and cons.  While all of us early investors would be able to spend on each chain, it should function like a stock split: though we would have twice as many 'shares', each one would only be worth a fraction (say 50%) of the original value.  The split could also be 90/10 or 80/20, or any two percentages summing to 100%.  If you wanted to favor one chain over the other, you could sell your coins on one and buy coins on your preferred chain.