Bitcoin Forum
Author Topic: bitcoin "unlimited" seeks review  (Read 16106 times)
cr1776
Legendary
*
Offline Offline

Activity: 4228
Merit: 1313


View Profile
January 02, 2016, 09:20:35 PM
 #21

Isn't the difference that BU will allow maxBlockSize to be determined by nodes, while Core/XT/etc. ensure that miners make that decision? Or am I missing something?

Well, that is already the case. BU just makes it more convenient.

True, I suppose nodes can already break off from the main chain, with little to no hashing security, and create their own chain. You are suggesting that BU makes a Sybil attack easier, though, since the incentive structure under Core and XT is to stay on the chain with the majority of the hashing security? It is far easier and less expensive to spin up a bunch of nodes than to replicate the hashing power. Would you agree or disagree?

They don't even have to break off and form their own chain.  They can just recompile with a parameter changed to accept larger blocks.  And then, in theory, that larger block would be orphaned and they would go back to the main chain eventually. (There are other considerations for, say, allowing 200MB blocks beyond just changing that parameter, but I think it is safe to ignore them in this reply.)
BitUsher
Legendary
*
Offline Offline

Activity: 994
Merit: 1035


View Profile
January 02, 2016, 09:23:27 PM
 #22

I would say a Sybil attacker with the resources to cook up 1000 nodes will have no trouble modding a bit of C++ code or hiring a coder to do that. That's the least of the barriers, and even if it were to be relied on, that would be a losing battle. If inconvenience were all that is keeping Bitcoin secure, we would have a problem. Also see my edit to the post immediately above yours.

Is there any coded algorithm for determining blocksize consensus in BU available to post here?
Zangelbert Bingledack
Legendary
*
Offline Offline

Activity: 1036
Merit: 1000


View Profile
January 02, 2016, 09:25:52 PM
 #23

Not sure what you mean. I'm just saying if someone wanted to create a fork of Core with a 200MB blocksize cap now, it's not difficult. Then if they had the resources to deploy 1000 nodes, we'd be at your scenario.

Point is, this has nothing to do with BU.
brg444
Hero Member
*****
Offline Offline

Activity: 644
Merit: 504

Bitcoin replaces central, not commercial, banks


View Profile
January 02, 2016, 09:31:30 PM
 #24

I would say a Sybil attacker with the resources to cook up 1000 nodes will have no trouble modding a bit of C++ code or hiring a coder to do that. That's the least of the barriers, and even if it were to be relied on, that would be a losing battle. If inconvenience were all that is keeping Bitcoin secure, we would have a problem. Also see my edit to the post immediately above yours.

I'm not sure if you're intentionally avoiding the gaping hole in your analysis or if you just don't see it.

Yes, someone could spin up 1000 nodes tomorrow that advertise a larger block size, but the context is quite different in that the network has agreed by consensus that these would be invalid. For that reason miners will not mine such blocks, or they will get forked off the network for not respecting the consensus rules (and lose money).

From what I understand, BU proposes that all of these nodes be aggregated into a signal that miners should consider when creating blocks. That is the nature of a Sybil attack.

With the current Core consensus rules it is very easy for miners to tell nodes apart from each other; there are two kinds: 1MB nodes and the rest.

"I believe this will be the ultimate fate of Bitcoin, to be the "high-powered money" that serves as a reserve currency for banks that issue their own digital cash." Hal Finney, Dec. 2010
BitUsher
Legendary
*
Offline Offline

Activity: 994
Merit: 1035


View Profile
January 02, 2016, 09:32:15 PM
 #25

Not sure what you mean. I'm just saying if someone wanted to create a fork of Core with a 200MB blocksize cap now, it's not difficult. Then if they had the resources to deploy 1000 nodes, we'd be at your scenario.

Point is, this has nothing to do with BU.

The difference being that those 1k nodes would be producing orphaned blocks on the original chain with 99% of the hashing security (thus committing economic suicide), whereas with the BU proposal one is assuming the miners have accepted the proposal and allow the nodes to dynamically adjust the blocksize. This is a significant difference, is it not? Don't we want to assume the future hypothetical in which BU has the majority of mining security behind it, to evaluate its true potential?
Bergmann_Christoph
Sr. Member
****
Offline Offline

Activity: 409
Merit: 286


View Profile WWW
January 02, 2016, 09:45:34 PM
 #26

Sorry for stepping in.

If someone tries to Sybil the network and sets up 2,000 nodes with a block limit of 200 MB, no responsible miner would take this as a reason to set his own limit to 200 MB.

If one of the miners were corrupted too, he could release a 200 MB block and the 2,000 nodes would propagate it. All the other nodes with lower limits would reject the block until it reaches some depth. For that to happen, the majority of miners would have to be corrupted.

To be honest, I don't think this attack is worth discussing - while Adam Back raised some questions I'd love to see addressed.

Edit, because "brand new" looks ugly: I'm C. Bergmann, but I unfortunately lost my password and my bitcoin-signed pledge for recovery was not answered. I'm not affiliated with BU, but I like the idea and think it is worth discussing open-mindedly.

--
My book: Bitcoin-Buch.org
Best Bitcoin marketplace in the Eurozone: Bitcoin.de
Best Bitcoin blog in the German-speaking world: bitcoinblog.de

Tips for filling the blocksize thread with substance and entertainment and for fighting misinformation:
Bitcoin: 1BesenPtt5g9YQYLqYZrGcsT3YxvDfH239
Ethereum: XE14EB5SRHKPBQD7L3JLRXJSZEII55P1E8C
Zangelbert Bingledack
Legendary
*
Offline Offline

Activity: 1036
Merit: 1000


View Profile
January 02, 2016, 09:47:34 PM
 #27

BitUsher, so you're saying an attacker spinning up all those nodes would encourage a bunch of other people to raise their limits to 200MB? Similar to what I said above, any miner wishing to take advantage of the situation and mine 200MB blocks is not going to be deterred by having to mod the code a bit. Miners already do that, in fact. Again, BU is only a change in convenience; if convenience is the difference between Bitcoin being secure and insecure, we have bigger problems already (soon enough someone's just going to make a patch, and then it will be dirt simple to mod any consensus setting). It won't likely be fruitful to critique BU along those lines.

I'm pretty sure continued discussion on this point would clutter the thread quite a bit and not really be related to what Adam is asking about. Maybe make a new thread?
BitUsher
Legendary
*
Offline Offline

Activity: 994
Merit: 1035


View Profile
January 02, 2016, 09:54:08 PM
Last edit: January 02, 2016, 10:10:31 PM by BitUsher
 #28

Sorry for stepping in.

If someone tries to Sybil the network and sets up 2,000 nodes with a block limit of 200 MB, no responsible miner would take this as a reason to set his own limit to 200 MB.

If one of the miners were corrupted too, he could release a 200 MB block and the 2,000 nodes would propagate it. All the other nodes with lower limits would reject the block until it reaches some depth. For that to happen, the majority of miners would have to be corrupted.

To be honest, I don't think this attack is worth discussing - while Adam Back raised some questions I'd love to see addressed.


Yes, of course those 200MB blocks would be orphaned now, even with BU. We are discussing how BU would hypothetically work if a majority of the mining power supported the implementation and relegated the blocksize to nodes instead of keeping it themselves. BU isn't assuming a switch to PoS in the future, right? The security model right now assumes an attack would require coordination between miners and nodes. BU would allow the nodes to perform this attack immediately, as the miners would be delegating their maxblocksize to the nodes, right?

BitUsher, so you're saying an attacker spinning up all those nodes would encourage a bunch of other people to raise their limits to 200MB? Similar to what I said above, any miner wishing to take advantage of the situation and mine 200MB blocks is not going to be deterred by having to mod the code a bit. Miners already do that, in fact. Again, BU is only a change in convenience; if convenience is what is keeping Bitcoin secure, we have bigger problems already. It won't likely be fruitful to critique BU along those lines.

I'm pretty sure continued discussion on this point would clutter the thread quite a bit and not really be related to what Adam is asking about. Maybe make a new topic?

Yes, I would rather move on to other topics, but can you explain to me, in one post, the "1% easier" difference, assuming the future possibility that a majority of miners support BU and relegate the blocksize to nodes instead of themselves?


P.S... I am not posing these questions to denigrate your efforts and am genuinely interested in learning about BU and helping other implementations. Please don't be offended by these questions.
brg444
Hero Member
*****
Offline Offline

Activity: 644
Merit: 504

Bitcoin replaces central, not commercial, banks


View Profile
January 02, 2016, 09:56:47 PM
 #29

Sorry for stepping in.

If someone tries to sybill the networks and sets up 2,000 nodes with a blocklimit of 200 MB, no responsible miner would take this as a reason to set his own limit to 200 MB.

When one of the miners was corrupted too, he could release a 200 MB block and 2,000 Nodes would propagate it. All the other nodes with lower limits would reject the block untill it reaches some depth. For that to happen the majority of miners has to be corrupted.

The attack is a lot more complex than that. I think you're on the BU forum? Taek had a nice explanation of the centralization pressure enabled by BU. Someone could leverage a Sybil attack to do effectively just what he proposed: slowly but surely prune nodes out of the network until it is consolidated into a few more controllable hands.


Quote from: Taek
If you are a miner, and you know a block of size X can be processed by 85% of the network, but not 100%, do you mine it? If by 'network', we mean hashrate, then definitely! 85% is high enough that you'll be able to build the longest chain. The miners that can't keep up will be pruned, and then the target for '85% fastest' moves - now a smaller set of miners represents 85% and you can move the block size up, pruning another set of miners.

If by 'network', you mean all nodes... today we already have nodes that can't keep up. So by necessity you are picking a subset of nodes that can keep up, and a subset that cannot. So, now you are deciding who is safe to prune. Raspi's? Probably safe. Single merchants that run their own nodes on desktop hardware? Probably safe. All desktop hardware, but none of the exchanges? Maybe not safe today. But if you've been near desktop levels for a while, and slowly driving off the slower desktops, at some point you might only be driving away 10 nodes to jump up to 'small datacenter' levels.

And so it continues anyway. You get perpetual centralization pressure because there will always be that temptation to drive off that slowest subset of the network since by doing so you can claim more transaction fees.
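Taek's ratchet can be sketched as a toy simulation (the node capacities and the 85% threshold below are illustrative assumptions, not measured network data): each round, the block size is raised to what the fastest ~85% of the remaining nodes can process, and the slower nodes drop off, which moves the 85% mark and lets the ratchet advance again.

```python
def prune_rounds(capacities, keep_fraction=0.85):
    """Simulate miners repeatedly raising the block size to what the
    fastest ~85% of remaining nodes can process, pruning the rest."""
    nodes = sorted(capacities)
    history = []
    while len(nodes) > 1:
        # Index of the slowest node that still makes the 85% cut;
        # always target at least one node so the ratchet can advance.
        cut = max(1, int(len(nodes) * (1 - keep_fraction)))
        block_size = nodes[cut]
        remaining = [c for c in nodes if c >= block_size]
        if len(remaining) == len(nodes):  # nobody pruned; ratchet stalls
            break
        nodes = remaining
        history.append((block_size, len(nodes)))
    return history

# Hypothetical node capacities in MB -- made up for illustration only.
caps = [1, 1, 2, 2, 3, 4, 5, 8, 10, 16, 32, 64, 100, 200, 500, 1000]
for size, left in prune_rounds(caps):
    print(f"block size -> {size:4d} MB, nodes left: {left}")
```

Each round the block size only ever goes up and the node count only ever goes down, until a single node remains — which is the perpetual centralization pressure described above.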

"I believe this will be the ultimate fate of Bitcoin, to be the "high-powered money" that serves as a reserve currency for banks that issue their own digital cash." Hal Finney, Dec. 2010
JackH
Sr. Member
****
Offline Offline

Activity: 381
Merit: 255


View Profile
January 02, 2016, 10:04:32 PM
 #30

BitUsher, so you're saying an attacker spinning up all those nodes would encourage a bunch of other people to raise their limits to 200MB? Similar to what I said above, any miner wishing to take advantage of the situation and mine 200MB blocks is not going to be deterred by having to mod the code a bit. Miners already do that, in fact. Again, BU is only a change in convenience; if convenience is the difference between Bitcoin being secure and insecure, we have bigger problems already (soon enough someone's just going to make a patch, and then it will be dirt simple to mod any consensus setting). It won't likely be fruitful to critique BU along those lines.

I'm pretty sure continued discussion on this point would clutter the thread quite a bit and not really be related to what Adam is asking about. Maybe make a new thread?

You wanted to be able to post on this forum and not be censored, yet you are not prepared to answer the hard questions posed to BU?

Let's try again. If I set up 2,000 nodes, each voting for a 200MB block, thus overtaking consensus, what prevents a step-two scenario where a miner who gets lucky starts mining 200MB blocks and propagating them? The longest chain is mine, as I run the most nodes.

Adam's questions are somewhat similar, as he is asking how we prevent multiple shards of the blockchain, where each node follows an arbitrary size and starts rejecting larger blocks. Meaning, I can kick Adam out of the network quite quickly, as my 2,000 nodes in consensus for 200MB blocks will ignore his 1MB + 10% consensus.

<helo> funny that this proposal grows the maximum block size to 8GB, and is seen as a compromise
<helo> oh, you don't like a 20x increase? well how about 8192x increase?
<JackH> lmao
LovelyDay
Newbie
*
Offline Offline

Activity: 21
Merit: 0


View Profile
January 02, 2016, 10:10:30 PM
 #31

In the interest of this "review", I will point out something commonly not understood by those new to BU:

BU follows the longest chain.

If an excessive block reaches a certain depth on the chain it's on, then that chain becomes an eligible choice, but if there is a longer chain with smaller blocks, the excessive chain will still not be chosen.

So the claim that BU will "insta-fork" whenever there is a block > 1MB simply misunderstands how it works.
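The rule described above can be sketched in a few lines of Python. This is a toy model only — the names `EXCESSIVE_BLOCK_SIZE` and `ACCEPTANCE_DEPTH` and the numbers are illustrative assumptions, not BU's actual code:

```python
# Toy model of the rule described above: a block bigger than this node's
# "excessive" threshold is not followed immediately, but becomes eligible
# once enough blocks are mined on top of it. Names/values are illustrative.

EXCESSIVE_BLOCK_SIZE = 1_000_000  # bytes this node considers excessive
ACCEPTANCE_DEPTH = 4              # blocks on top before an excessive block is accepted

def chain_is_eligible(chain):
    """chain: list of block sizes in bytes, oldest first."""
    for i, size in enumerate(chain):
        if size > EXCESSIVE_BLOCK_SIZE:
            depth = len(chain) - 1 - i  # blocks mined on top of block i
            if depth < ACCEPTANCE_DEPTH:
                return False            # excessive block not buried deep enough
    return True

def best_chain(chains):
    """Follow the longest eligible chain (length standing in for total work)."""
    eligible = [c for c in chains if chain_is_eligible(c)]
    return max(eligible, key=len) if eligible else None

small = [900_000] * 6                       # all blocks under the threshold
big = [900_000, 2_000_000] + [900_000] * 3  # one excessive block at depth 3

print(best_chain([small, big]) == small)            # excessive block not yet buried
print(best_chain([small, big + [900_000] * 3]))     # once buried, the longer chain wins
```

Note how this captures both halves of the claim: the node does not insta-fork on a single big block, but it also does not abandon a longer small-block chain for a shallow excessive one.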

Those who have asked for the detailed algorithm can find a link to the Github repository containing the source code at the BU download page:

http://www.bitcoinunlimited.info/download.html

Further detailed information about BU can also be obtained from the Resources section of the BU site linked above.
That could serve as a good basis of discussion / review.

P.S. I have opened an account on BCT to join this discussion since I think it is important to clear up misconceptions about BU.
BitUsher
Legendary
*
Offline Offline

Activity: 994
Merit: 1035


View Profile
January 02, 2016, 10:15:06 PM
 #32

In the interest of this "review", I will point out a point commonly not understood by those new to BU:

BU follows the longest chain.

If an excessive block is accepted after the chain it's on reaches a certain depth, then that chain becomes an eligible choice, but if there is a longer one with smaller blocks then it will still not be chosen.

So the claim that BU will "insta-fork" when there is a block > 1MB is simply not understanding how it works.

Those who have asked for the detailed algorithm can find a link to the Github repository containing the source code at the BU download page:

http://www.bitcoinunlimited.info/download.html

Further detailed information about BU can also be obtained from the Resources section of the BU site linked above.
That could serve as a good basis of discussion / review.

P.S. I have opened an account on BCT to join this discussion since I think it is important to clear up misconceptions about BU.

Thank you. So if it follows the longest chain, then that is exactly how Bitcoin Core currently works, so I am at a loss as to what that cited "1%" difference actually is. Any hints?
JackH
Sr. Member
****
Offline Offline

Activity: 381
Merit: 255


View Profile
January 02, 2016, 10:18:19 PM
 #33

In the interest of this "review", I will point out a point commonly not understood by those new to BU:

BU follows the longest chain.

If an excessive block is accepted after the chain it's on reaches a certain depth, then that chain becomes an eligible choice, but if there is a longer one with smaller blocks then it will still not be chosen.

So the claim that BU will "insta-fork" when there is a block > 1MB is simply not understanding how it works.

Those who have asked for the detailed algorithm can find a link to the Github repository containing the source code at the BU download page:

http://www.bitcoinunlimited.info/download.html

Further detailed information about BU can also be obtained from the Resources section of the BU site linked above.
That could serve as a good basis of discussion / review.

P.S. I have opened an account on BCT to join this discussion since I think it is important to clear up misconceptions about BU.

And the longest chain is a rule set by nodes, correct? Meaning that consensus is formed by the highest number of voting nodes - in this scenario, my 2,000 nodes.

If we go by the standard 6 confirmations - 6 blocks of depth - then we can safely assume it will be the longest chain. So after my 2,000 nodes vote for a 200MB block, I wait 1 hour for the longest chain to become 200MB. Or, for the really paranoid, we wait 2 hours, and I am certain that the 200MB rule is enforced and that there is probably not another chain.

I then spam it with 200MB of data, and thus we get 200MB blocks until someone can form a better consensus (launches more nodes with a different blocksize consensus).

All this I can do in less than one day, crippling the network for less than $5,000.
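For reference, the depth-to-time arithmetic here is just the ~10-minute average block interval (the $5,000 figure above is the poster's own estimate and is not derived here):

```python
AVG_BLOCK_INTERVAL_MIN = 10  # Bitcoin targets one block roughly every 10 minutes

def expected_wait_minutes(confirmations):
    """Average time for a transaction to reach the given depth."""
    return confirmations * AVG_BLOCK_INTERVAL_MIN

print(expected_wait_minutes(6))   # 60 -> the "wait 1 hour" above
print(expected_wait_minutes(12))  # 120 -> the paranoid two-hour wait
```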

sAt0sHiFanClub
Hero Member
*****
Offline Offline

Activity: 546
Merit: 500


Warning: Confrmed Gavinista


View Profile WWW
January 02, 2016, 10:19:40 PM
 #34

Longest chain is mine, as I run the most nodes.


And what is to stop this from happening on Bitcoin right now? If you have the most nodes, you have a 51%+ attack.

And what, exactly, is in the 200MB block? Are they all valid transactions from the mempool? If they are not, how are they going to be validated by nodes?

Just because the limit is 200MB doesn't make it so - just as we don't have too many 1MB blocks now, despite the present limit.

We must make money worse as a commodity if we wish to make it better as a medium of exchange
LovelyDay
Newbie
*
Offline Offline

Activity: 21
Merit: 0


View Profile
January 02, 2016, 10:21:29 PM
 #35

Thank you. So if it follows the longest chain, then that is exactly how Bitcoin currently works, so I am at a loss as to what that cited "1%" difference actually is. Any hints?

I am not sure which "1%" difference you are citing. I searched this thread and it first appears in one of your posts. Can you provide a complete link/citation of the statement so that it can be looked at properly?
smooth
Legendary
*
Offline Offline

Activity: 2968
Merit: 1198



View Profile
January 02, 2016, 10:21:37 PM
 #36

In the interest of this "review", I will point out a point commonly not understood by those new to BU:

BU follows the longest chain.

That is my limited understanding of BU as well. In fact, I believe I was the one (as part of an interaction with Peter R on the GcBU thread) who pointed out that the Satoshi whitepaper can be read as miners effectively making all the rules on what constitutes a valid block. End-user nodes would also implement verification for protection against short-term chain forks, but they would validate based on rules set entirely by miners. So as an end user, if miners change the rules, you would simply need to implement those changes in your node, or you would be unable to process the longest chain and therefore no longer be a participant.

This is certainly a different security model from what many in the Bitcoin community have come to understand over the past several years, where forks are accepted by an "economic majority" and "longest chain" is replaced with "longest valid chain". But it seems that (maybe) BU proponents want to adopt a stricter "longest chain" rule that vests all of the rule-making power in miners. I'm neither agreeing nor disagreeing here; I'm trying to state the position to see if I understand it.

Now, in the case of BU specifically, I'm not sure I understand how this works when the user has configured a smaller block limit than is present on the longest chain. Does their node switch into an "offline" state based on block headers? The user then has a choice to adjust their settings (and network bandwidth, etc.) or stay off the network?

Quote from: JackH
And longest chain is a rule set by nodes, correct?

As I understand it, the rule is set solely by proof-of-work (i.e. the largest sum of difficulty). Any node that is off the chain with the most work is considered off the network. In that case it would be Sybil-proof, because proof-of-work can't be replicated. Let's see if I'm right.
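This "most cumulative work wins" rule can be sketched like so (the difficulty numbers are made up for illustration; real nodes compare chains by summed per-block work derived from each block's target, but summed difficulty is a fine stand-in here):

```python
def chain_work(difficulties):
    # Summed difficulty stands in for cumulative proof-of-work.
    return sum(difficulties)

honest = [1000, 1000, 1100, 1200]      # mined by the majority hashrate
sybil = [100, 50, 50, 50, 50, 50, 50]  # more blocks, but far less work

# Chain selection ignores how many nodes relay a chain; only work counts.
best = max([honest, sybil], key=chain_work)
print(best is honest)  # True: the Sybil chain loses despite having more blocks
```

This is why spinning up thousands of nodes does not, by itself, move the chain tip: the selection rule never consults node counts.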

JackH
Sr. Member
****
Offline Offline

Activity: 381
Merit: 255


View Profile
January 02, 2016, 10:21:54 PM
 #37

Longest chain is mine, as I run the most nodes.


And what is to stop this from happening on bitcoin right now? If you have the most nodes, you have a 51%+ attack.

And what, exactly, is in the 200MB block? Are they all valid transactions in the mempool? And if they are not, how are they going to be validated by nodes?


No, no, and no again. I could have 1 million nodes and I still wouldn't be making any kind of 51% attack, because the blocksize is hardcoded.

What will currently happen is that my nodes will be part of a network that never makes any block larger than 1MB, despite the fact that my nodes accept up to 200MB.

Bitcoin is not about nodes or about miners. It's about nodes AND miners.

EDIT: A 200MB block would be all valid transactions, sent by ME to ME. Remember, I have 2,000 nodes to send to and from. I will lose only the fees that I pay miners, which is absolutely nothing.

This attack was performed on Bitcoin not long ago, with the aim of filling up the mempool. It did not work under the current consensus rules.

NxtChg
Hero Member
*****
Offline Offline

Activity: 840
Merit: 1002


Simcoin Developer


View Profile WWW
January 02, 2016, 10:21:59 PM
 #38

So the claim that BU will "insta-fork" when there is a block > 1MB is simply not understanding how it works.

And BU proponents are to blame, because they keep pushing it as "everybody sets their own limit and then magic happens (emergent consensus)".

If it follows the longest chain, then I believe the message to a BU user can be summarized like this:

"I will accept any blocks that 51% of the miners agreed on, at the expense of my business, which will now have to wait something like an hour before accepting any transactions."

This is how it should be promoted, then people would understand.

And this also raises the question: why would anybody need to set their own limit at all?!

You still need a delay to see what size the miners picked, so your limit doesn't matter.

Simcoin: https://simtalk.org:444/ | The Simplest Bitcoin Wallet: https://tsbw.io/ | Coinmix: https://coinmix.to | Tippr stats: https://tsbw.io/tippr/
--
About smaragda and his lies: https://medium.com/@nxtchg/about-smaragda-and-his-lies-c376e4694de9
Bergmann_Christoph
Sr. Member
****
Offline Offline

Activity: 409
Merit: 286


View Profile WWW
January 02, 2016, 10:22:27 PM
 #39


Quote from: Taek
If you are a miner, and you know a block of size X can be processed by 85% of the network, but not 100%, do you mine it? If by 'network', we mean hashrate, then definitely! 85% is high enough that you'll be able to build the longest chain. The miners that can't keep up will be pruned, and then the target for '85% fastest' moves - now a smaller set of miners represents 85% and you can move the block size up, pruning another set of miners.

If by 'network', you mean all nodes... today we already have nodes that can't keep up. So by necessity you are picking a subset of nodes that can keep up, and a subset that cannot. So, now you are deciding who is safe to prune. Raspi's? Probably safe. Single merchants that run their own nodes on desktop hardware? Probably safe. All desktop hardware, but none of the exchanges? Maybe not safe today. But if you've been near desktop levels for a while, and slowly driving off the slower desktops, at some point you might only be driving away 10 nodes to jump up to 'small datacenter' levels.

And so it continues anyway. You get perpetual centralization pressure because there will always be that temptation to drive off that slowest subset of the network since by doing so you can claim more transaction fees.

Thanks for clearing that up.


smooth
Legendary
*
Offline Offline

Activity: 2968
Merit: 1198



View Profile
January 02, 2016, 10:25:55 PM
 #40

You still need a delay to see what size the miners picked, so your limit doesn't matter.

It matters because miners do not have a direct economic interest in forcing end users off the network. End users provide fees and demand for the currency which is how miners make money. There may be indirect interests though.
