Zangelbert Bingledack
Legendary
Offline
Activity: 1036
Merit: 1000
|
|
January 03, 2016, 01:45:47 AM |
|
Here's a comment from my reddit relevant to BU. [/u/ForkiusMaximus in reply to /u/kanzure.]

> Consensus rules *must* be same for all bitcoin users. It's that simple.
> ...
> How to coordinate such update for a decentralized system? Peer review has worked quite well.

I agree. However, there is no reason that this peer review process has to be centralized in Core's repo. That's the whole point of /u/anarchystar's improvement proposal. Miners and nodes can take Core's recommendations into consideration without being bound by a wall of inconvenience (self-modification of the code). Since a lot of miners already mod their code today, it is clear that all Core really does with respect to consensus parameters is set Schelling points for consensus to form around. The inconvenience/casual-user-difficulty of modding the code does strengthen the Schelling point, but it has the disadvantage of centralizing control over Schelling-point setting - thus introducing friction and a potential attack vector into the consensus process.

Today:

- Core sets the Schelling points for consensus parameters (max_blocksize=1MB, etc.) as user-unchangeable settings
- Miners and nodes are able to mod their code to change those parameters if they wish (maybe needing to hire a coder), but of course they generally don't, as they would lose money/functionality by not tracking consensus
- Miners/nodes could all agree on a block where they change the parameters in sync, irrespective of Core, but it would be inconvenient (instructions for doing it would have to be circulated, etc.)

With this BIP:

- Core sets the Schelling points for consensus parameters (max_blocksize=1MB, etc.) as default settings with alternatives selectable (with warnings)
- Miners and nodes can easily change those parameters if they wish (no need to hire a coder), but of course they generally don't, as they would lose money/functionality by not tracking consensus
- Miners/nodes could all agree on a block where they change the parameters in sync, irrespective of Core, and it wouldn't be inconvenient (except just getting everyone on board and aware, which is the same problem faced when Core releases a hardforked upgrade)

Note that all these changes are "merely" changes in convenience. I put that in quotes to be fair, because even trivial inconveniences can make a big difference in how people act. However, to take a stand against the spirit of this BIP is to fall back to the position that Bitcoin's consensus is enforced by a wall of inconvenience. If that's the position you want to take, matters just got a lot worse for you: there are now implementations (and yes, they are properly called implementations, as they don't force the user to break consensus) that already have this BIP partially included and are working on having it fully included, meaning that wall of inconvenience is about to get a whole lot thinner. With respect to blocksize it already has. A few days ago, in order for a miner/node running Core to adjust the blocksize cap, they had to mod the code themselves and recompile, or hire a C++ programmer familiar with Bitcoin to do it for them. Today, they can simply download a piece of software. Maybe tomorrow they'll be able to just download some kind of tiny plug-in someone makes. Thus we see that the wall of inconvenience cannot be relied on. As is argued in the case of zero-conf transactions, "We might as well break it now because it's trivially defeatable."
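To make the "Today" vs. "with this BIP" contrast concrete, the difference in code terms is roughly the following. (An illustrative sketch only; the names and the "-maxblocksize" option are mine, not actual Core or BU code.)

Code:
#include <cstdint>
#include <map>
#include <string>

// Today: the consensus parameter is a compile-time constant; changing it
// means editing the source and recompiling (or hiring someone who can).
static const uint64_t MAX_BLOCK_SIZE = 1000000; // 1 MB, user-unchangeable

// With this BIP: the same value becomes a default that the operator can
// override at startup (with a warning), no recompile needed.
static const uint64_t DEFAULT_MAX_BLOCK_SIZE = 1000000;

uint64_t GetMaxBlockSize(const std::map<std::string, std::string>& args)
{
    auto it = args.find("-maxblocksize");        // hypothetical option name
    if (it == args.end())
        return DEFAULT_MAX_BLOCK_SIZE;           // Core's recommendation: the Schelling point
    // The operator chose to deviate; warn, but respect the choice.
    return std::stoull(it->second);
}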
It is inevitable that Core's recommended consensus parameters will become unbundled from the rest of its code offerings, not because centralized control over the consensus parameters is bad (though I'd argue it is), but because the inconvenience barrier cannot be maintained. We are only now seeing this unbundling because it is only now that a sizable number of Bitcoin users have started to have a different opinion from Core and/or become wary of vesting inordinate power to set these Schelling points in a single group and a single repo. Core's recommendations will still carry tremendous weight in people's decisions about how to set their consensus parameters, but the process will no longer be centralized. People will go with Core's parameters if they want, or converge on one of the Core devs' proposals, or maybe someone else's. Consensus will happen, not because it is enforced by a barrier of inconvenience in Core software, but because there is overwhelming economic incentive to converge on consensus parameters. To confuse this is to imagine that the tail is wagging the dog. Moreover, the consensus will be economically rational and value-maximizing because miners and nodes are economically rational, which is a fundamental assumption for Bitcoin to work in the first place.

Not sure whether I'm allowed to link to reddit, but it was in the thread titled "I just submitted a BIP that would allow users to decide which features to enable. Btcdrak rejected it (he's also controlling the dev mailing list). So I'm posting it here." by /u/anarchystar.
|
|
|
|
BitUsher
Legendary
Offline
Activity: 994
Merit: 1035
|
|
January 03, 2016, 01:50:14 AM |
|
Odd... is anyone else noticing that those 2 posts don't show up in the main thread accusing Adam of condoning censorship or is it just me? Has testing1567 been shadowbanned? Ohhh, the irony.
Those posts reflect a very different version of BU than has been described by its proponents in this thread. Either they haven't fully reviewed the code or testing1567 is incorrect, but there is a discrepancy. I suppose I will have to read the code myself one day to get to the bottom of this.
The comments by testing1567 are showing up for me. BitUsher, please note that I've been focusing on one part of BU - the aspect I consider to be the major one - because I know this discussion will quickly become impossible if we talk about too many things at once (already 6 pages), but there is another aspect of BU called the "oversized block acceptance depth" or "excessive block acceptance depth" that was originally thought to be either necessary or useful to make the concept work. I personally now don't think it is necessary at all, but it may turn out to be useful. It certainly looks like it would be useful, but I'm very much aware of the difficulties in proving that to be the case, so for now I consider it an experimental thing for everyone to consider. Meanwhile, I would like to argue that - even in the absence of that setting - the BU concept of simply letting users set the blocksize cap themselves will not result in chaos, but simply a smoother version of what we have now, with the will of the market expressed more completely and granularly, without jiggering by the wall of inconvenience of having controversial consensus parameters locked down. See here for elaboration.

I spoke too soon. testing1567 was just hidden within a downvoted thread. I see it now.

Ok, thanks for clarifying and I understand why you wanted to ignore those "differences", but they should have been revealed up front when I kept asking for clarification, as this is a technical subforum where we are meant to discuss the details.

testing1567 had 2 very interesting posts... do you disagree with any of the information therein before I start pondering them in detail?
|
|
|
|
LovelyDay
Newbie
Offline
Activity: 21
Merit: 0
|
|
January 03, 2016, 01:52:03 AM |
|
Odd... is anyone else noticing that those 2 posts don't show up in the main thread accusing Adam of condoning censorship or is it just me? Has testing1567 been shadowbanned? Ohhh, the irony.
Those posts reflect a very different version of BU than has been described by its proponents in this thread. Either they haven't fully reviewed the code or testing1567 is incorrect, but there is a discrepancy. I suppose I will have to read the code myself one day to get to the bottom of this.
The comments by testing1567 are showing up for me. [...] I've messaged the mods of /r/btc to enquire, and they confirm he's not banned there.
|
|
|
|
Zangelbert Bingledack
Legendary
Offline
Activity: 1036
Merit: 1000
|
|
January 03, 2016, 01:57:02 AM |
|
Ok, thanks for clarifying and I understand why you wanted to ignore those "differences", but they should have been revealed up front when I kept asking for clarification, as this is a technical subforum where we are meant to discuss the details.
testing1567 had 2 very interesting posts... do you disagree with any of the information therein before I start pondering them in detail?

I didn't realize you were asking about those other features, as you didn't mention them explicitly that I noticed. I just thought you were misunderstanding something about my explanation. That might explain the confusion.

I don't have a lot of thoughts on the acceptance depth aspect. It seems it would work, but it is experimental. It can be turned off, and I have recommended that it be turned off by default and marked as an experimental feature. I don't consider it necessary for the BU concept of letting users determine the blocksize limit, which I consider the main attraction of BU, so for me it's kind of <shrug>.
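For anyone wondering what "acceptance depth" would mean mechanically, the idea is roughly this. (A concept sketch only, not Bitcoin Unlimited's actual code; the names are mine.)

Code:
#include <cstdint>

// The "excessive block acceptance depth" idea in rough form: a block larger
// than the node's excessive-size setting is not rejected outright; the node
// simply refuses to build on it until the rest of the network has buried it
// under enough confirmations.
struct ExcessiveBlockSettings {
    uint64_t excessiveBlockSize;   // e.g. 1,000,000 bytes
    int      acceptanceDepth;      // e.g. 4 blocks
};

bool ShouldFollowChainTip(uint64_t largestBlockSizeOnChain,
                          int blocksBuiltOnTopOfIt,
                          const ExcessiveBlockSettings& settings)
{
    if (largestBlockSizeOnChain <= settings.excessiveBlockSize)
        return true;                                   // nothing excessive on this chain
    // An excessive block is present: follow the chain only once the network
    // has extended it by at least the configured acceptance depth.
    return blocksBuiltOnTopOfIt >= settings.acceptanceDepth;
}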
|
|
|
|
LovelyDay
Newbie
Offline
Activity: 21
Merit: 0
|
|
January 03, 2016, 02:03:43 AM |
|
Unlimiturd: you may have any block size you want, so long as it's <16MB.
Am I wrong about ^this^? If so, please advise on the actual maximum value.
Yeah, you're wrong. You're referring to src/unlimited.h:

Code:
DEFAULT_EXCESSIVE_BLOCK_SIZE = 16000000

It's a default value of a setting that can be changed in Unlimited. As in: not a permanent feature. The actual hard limit is in src/consensus/consensus.h:

Code:
static const unsigned int BU_MAX_BLOCK_SIZE = 32000000; // BU: this constant is deprecated but is still used in a few areas such as allocation of memory. Removing it is a tradeoff between being perfect and changing more code. TODO: remove this entirely
Note: this means it is 32MB - currently, subject to future removal. Not 16 as you've now confidently twice claimed. That's all.
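For completeness, the relationship between those two constants in code terms is roughly the following. (A simplified sketch, not the actual BU source.)

Code:
#include <cstdint>
#include <vector>

// Simplified sketch of the relationship between the two constants quoted above.
static const uint64_t DEFAULT_EXCESSIVE_BLOCK_SIZE = 16000000; // default only; the operator can change it
static const uint64_t BU_MAX_BLOCK_SIZE = 32000000;            // deprecated hard cap, still used for buffer sizing

// The operator's setting starts at the 16 MB default but can be raised or lowered.
uint64_t excessiveBlockSize = DEFAULT_EXCESSIVE_BLOCK_SIZE;

// Per the quoted comment, some memory allocation is still sized off the old
// 32 MB constant, which is why 32 MB is the practical ceiling for now.
std::vector<unsigned char> AllocateBlockBuffer()
{
    return std::vector<unsigned char>(BU_MAX_BLOCK_SIZE);
}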
|
|
|
|
Zangelbert Bingledack
Legendary
Offline
Activity: 1036
Merit: 1000
|
|
January 03, 2016, 02:04:29 AM |
|
Ok, thanks for clarifying and I understand why you wanted to ignore those "differences", but they should have been revealed up front when I kept asking for clarification, as this is a technical subforum where we are meant to discuss the details.

Actually, I did mention this to you on page 3. It's been a fast discussion and we forget things, so no worries.
|
|
|
|
Zangelbert Bingledack
Legendary
Offline
Activity: 1036
Merit: 1000
|
|
January 03, 2016, 02:16:48 AM |
|
Note to small block adherents: Despite the name, Bitcoin Unlimited is not a "big blocks" implementation. It's simply an implementation that doesn't include a locked-down blocksize as part of the package. It lets the user set it. It could be 500kB if you like. It's an accident of history that BU is being developed by big block supporters. It could have been developed by small block supporters for the exact same reason: to keep the dominant Bitcoin implementation from doing something you consider foolish.

As I said in my first post, right now the leaders of the dominant Bitcoin implementation are for a low blocksize cap, but imagine if the situation reverses and big blockists are in control, to the consternation of many in the community. I think you would not want them locking down the settings. You might say, "You folks are doing fine otherwise, but you are off on the blocksize cap. Why try to play central planner? Please leave it up to the market if you are so sure the market will like your huge blocks. People will follow your recommendations if they like them anyway, so what are you worried about?"

If I were Core maintainer, I would do the same. Perhaps I would set a higher default, but I would not take the option away from the user. To do so risks sudden consensus shocks due to friction effects, risks my position being undermined silently, and most of all assumes I know better than everyone else. I might set it at 10MB. But I may be wrong; I'd rather trust in the market, because none of us knows better than a million people all with skin in the game.

Bitcoin Unlimited is just as much a small blocks implementation, guarding against the possibility of, say, Mike Hearn taking over Core, as against, say, LukeJr. Bitcoin Unlimited is simply against central planning of the blocksize. Instead, blocksize consensus would emerge from each user making their own decisions, signaling, coordination, debate, flag days, expert recommendations, etc. It guards against centralization of developers in one implementation; again, today it's small block adherents in Core, but what if it became big block adherents? It might start to sound like a pretty good idea to let the market decide.

Under BU, all our arguments about blocksize become merely academic. We would be trying to predict what the market would decide, rather than vying over control of the One Ring of Power - the official/reference implementation of Bitcoin. Much rancor could be dispensed with. The market would do its thing and probably maximize value, and Bitcoin would continue, unable to be controlled by anyone. Just the way we like it.
|
|
|
|
smooth
Legendary
Offline
Activity: 2968
Merit: 1198
|
|
January 03, 2016, 02:19:21 AM |
|
"We would be trying to predict what the market would decide, rather than vying over control of the One Ring of Power - the official/reference implementation of Bitcoin. Much rancor could be dispensed with."
@Zangelbert Bingledack, I'm somewhat sympathetic to your cause but I don't really see how the market mechanism operates here, outside of a very broad definition of "market" which encompasses politics. Node voting doesn't work at all. Without that you are still reduced to politics and whoever shouts the loudest in trying to convince miners what block size they should use.
|
|
|
|
BitUsher
Legendary
Offline
Activity: 994
Merit: 1035
|
|
January 03, 2016, 02:20:01 AM |
|
Actually, BitUsher, I did mention this to you on page 3. It's been a fast discussion and we forget things, so no worries.

Correct... thanks. This is interesting stuff. Does BU have any sig-ops limits for CVE-2013-2292 like what Gavin proposed here - https://github.com/bitcoinxt/bitcoinxt/commit/cc1a7b53629b265e1be6e212d64524f709d27022 - or is BU stuck to the standard 20k? I see a brief mention of it here - https://bitco.in/forum/threads/bitcoin-unlimited-code-review.359/ - but nothing confirmed.

Most of my interest is with the experimental stuff discussed by testing1567, as well as some interesting new attack vectors opened up politically by empowering the nodes with developmental decisions. There are some topics that need further analysis. This does get me interested in a potential oracle or DAO having the role of determining maxBlockSize by analyzing technical merits/limitations at a higher weight than user demand, which could be used to influence a more dynamic block adjustment.

Note to small block adherents: Despite the name, Bitcoin Unlimited is not a "big blocks" implementation. It's simply an implementation that doesn't include a locked-down blocksize as part of the package. It lets the user set it. It could be 500kB if you like.

There are indeed many misunderstandings. As a point of clarification: 1) Very few of the core developers are "small block adherents"; besides 1-2 developers, all suggest raising maxBlockSize. 2) testing1567 indicated "My other issue with BU is it lacks a way to move the blocksize down, only up." Is this true for nodes with BU? (I am aware that miners can set the limit to anything.)
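To spell out the alternative to the "standard 20k" I'm asking about: the general idea is to scale the sigops limit with the block size cap, roughly as below. (An illustrative sketch of the general idea only, not necessarily what Gavin's linked commit implements.)

Code:
#include <cstdint>

// Today: 1 MB blocks and 20,000 sigops per block. The sketch keeps that
// ratio as the cap grows instead of leaving the limit fixed at 20k.
static const uint64_t SIGOPS_PER_MB = 20000;

uint64_t MaxBlockSigops(uint64_t maxBlockSizeBytes)
{
    // Round the size up to whole MB and keep the existing 20k-per-MB ratio,
    // so an 8 MB cap would allow 160,000 sigops, and so on.
    return SIGOPS_PER_MB * ((maxBlockSizeBytes + 999999) / 1000000);
}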
|
|
|
|
achow101
Moderator
Legendary
Offline
Activity: 3542
Merit: 6885
Just writing some code
|
|
January 03, 2016, 02:25:53 AM |
|
BU will let the user select a given Core or XT BIP (this is still being worked on (BUIP002, probably not supposed to link it here)), so for example if they turned on the BIP101 option, their node would mimic an XT node as far as following BIP101, including the 75% threshold and specific starting block.
Really? How? So far what I have seen is that a new block size limit in BU takes effect immediately. There is no mechanism that does the supermajority fork process. If there is a specific option for the supermajority fork process for a single BIP, then there should be one for every BIP. Will BU have options to allow the user to support whatever BIP or not? How will new BIPs be added? Through a software upgrade?

Just like today, where if XT were winning Core miners might switch to XT, and if not they wouldn't, it's the same dynamic: if XT were winning, the BU miners would likely set their blocksize settings to BIP101. They can do this even faster than Core miners can switch to XT since it's just a GUI setting, not a new client to download.

A new client download and install takes about 2 minutes, it's not that big of a problem. Even so, the miners would have to either switch to use bigger block sizes after the fork happens or somehow indicate that they are supporting the bigger blocks before the fork (e.g. the supermajority fork process). This means that the larger block size should not take effect immediately.

They can just follow Core. BU can be set up to default to Core behavior (it doesn't now, but it's an experimental release; anyone could fork it that way, trivially). I mean, you could say the same about XT: dumb users might try using XT. Could happen. This certainly isn't a security risk, or else Bitcoin is doomed because there's no way to stop people from releasing forks. Yeah I know XT has the 75% failsafe, so then imagine the reverse: everyone is using XT and someone dumb downloaded Core with its 1MB cap and tried to mine but kept not being able to build any blocks because their client rejected all the XT blocks.
Point is, the situation today is that miners and nodes need to pay attention to developments today. They can't just blindly trust whatever Core puts out - and if that's the expectation then we already have bigger problems.
Sure you can't blindly trust whatever Core puts out, same with XT, BU and every other software implementation.
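For reference, the "supermajority fork process" mentioned above is roughly the following kind of check. (A simplified sketch in the spirit of BIP101's 750-of-the-last-1000-blocks activation rule; version-bit encoding, start times and grace periods are omitted, and this is not any client's actual code.)

Code:
#include <cstddef>
#include <vector>

// Returns true once at least `threshold` of the last `window` blocks
// signal support for the new rule.
bool IsSupermajorityReached(const std::vector<bool>& blockSignalsNewRule,
                            std::size_t window = 1000,
                            std::size_t threshold = 750)
{
    if (blockSignalsNewRule.size() < window)
        return false;                          // not enough block history yet
    std::size_t count = 0;
    for (std::size_t i = blockSignalsNewRule.size() - window;
         i < blockSignalsNewRule.size(); ++i) {
        if (blockSignalsNewRule[i])
            ++count;
    }
    return count >= threshold;                 // e.g. 750 of the last 1000 blocks
}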
|
|
|
|
Zangelbert Bingledack
Legendary
Offline
Activity: 1036
Merit: 1000
|
|
January 03, 2016, 02:38:14 AM |
|
"We would be trying to predict what the market would decide, "
@Zangelbert Bingledack, I'm somewhat sympathetic to your cause but I don't really see how the market mechanism operates here, outside of a very broad definition of "market" which encompasses politics. Node voting doesn't work at all. Without that you are still reduced to politics and whoever shouts the loudest in trying to convince miners what block size they should use.

Well that's how it is anyway, and even now the market does decide. My point is that there's market friction in the inconvenience barrier of users not being able to set the blocksize cap themselves. That gives artificial solidity to the Schelling point set by Core (as well as the one set by XT). If Core is doing the correct thing, it shouldn't mind putting it to the market test more fully, by taking its finger off the scale.

How much is Core's finger really on the scale here? Well, for example, how many reasons are there to mistrust Mike Hearn? Some would say a lot. That means, as things stand now, even if you want BIP101, you can't really have it if you have a problem with Mike, because XT isn't an option for you. And because other people feel that way, you're further limited. The way Core (and XT) does it now makes it a power struggle, a popularity contest, and a package deal. Maybe Core could stall for a long time before people would finally give up and go with Mike. That's a lot of friction in the market.

And small block adherents, imagine the reverse, if Mike and Gavin were running Core and Pieter, Wlad, and Maxwell had broken off and started their own implementation, with maybe Jeff going between. And people were sticking with Core and its giant block plan, heading for catastrophe. You might notice the market friction then.

BU eliminates the power struggle by unbundling the setting of consensus parameters from the rest of the code. It also of course makes for a lot more choices. If 1MB is too small and 8MB too big, what recourse is there? Roll your own and try to popularize it? Very hard. But propose 4MB and try to get people to agree? More doable. Or what if, like some Chinese miners were saying, 8MB is fine but scaling to 8GB is ridiculous? What do you do? You have two options, and they are bundled up tightly with all the other aspects of the code and why you choose Core or XT. That's again a lot of market friction.
|
|
|
|
LovelyDay
Newbie
Offline
Activity: 21
Merit: 0
|
|
January 03, 2016, 02:38:51 AM |
|
BU will let the user select a given Core or XT BIP (this is still being worked on (BUIP002, probably not supposed to link it here)), so for example if they turned on the BIP101 option, their node would mimic an XT node as far as following BIP101, including the 75% threshold and specific starting block.
Really? How? So far what I have seen is that a new block size limit in BU takes effect immediately. There is no mechanism that does the supermajority fork process.

I think you overlooked Zangelbert's mention that the BIPs are work in progress through BUIP002 - a BU Improvement Proposal. You are correct that there is no full emulation of the BIP101 threshold etc. for now, and I believe the degree to which BIPs need to be emulated faithfully is still being discussed.
|
|
|
|
BitUsher
Legendary
Offline
Activity: 994
Merit: 1035
|
|
January 03, 2016, 02:41:37 AM |
|
A new client download and install takes about 2 minutes, it's not that big of a problem. Even so, the miners would have to either switch to use bigger block sizes after the fork happens or somehow indicate that they are supporting the bigger blocks before the fork (e.g. the supermajority fork process). This means that the larger block size should not take effect immediately.
This is indeed an issue, as it could divide the network and create a lot of havoc. I personally believe XT's (and, as suggested here, BU's) 75% threshold to be dangerously low as well. A 95% supermajority with a minimum 2-week grace period and alerts sent should be the default for hardforks. Developers have a responsibility to ensure that code changes don't affect users' investments. The loss of trust from the code itself losing assets would be extremely negative PR for Bitcoin.

And small block adherents, imagine the reverse, if Mike and Gavin were running Core and Pieter, Wlad, and Maxwell had broken off and started their own implementation, with maybe Jeff going between. And people were sticking with Core and its giant block plan, heading for catastrophe. You might notice the market friction then.
One doesn't have to pick sides. I respect all of the developers above and can have nuanced opinions and disagreements with individual aspects of their code contributions.
|
|
|
|
iCEBREAKER
Legendary
Offline
Activity: 2156
Merit: 1072
Crypto is the separation of Power and State.
|
|
January 03, 2016, 02:42:06 AM |
|
Unlimiturd: you may have any block size you want, so long as it's <16MB.
Am I wrong about ^this^? If so, please advise on the actual maximum value.
Yeah, you're wrong. You're referring to src/unlimited.h:

Code:
DEFAULT_EXCESSIVE_BLOCK_SIZE = 16000000

It's a default value of a setting that can be changed in Unlimited. As in: not a permanent feature. The actual hard limit is in src/consensus/consensus.h:

Code:
static const unsigned int BU_MAX_BLOCK_SIZE = 32000000; // BU: this constant is deprecated but is still used in a few areas such as allocation of memory. Removing it is a tradeoff between being perfect and changing more code. TODO: remove this entirely

Note: this means it is 32MB - currently, subject to future removal. Not 16 as you've now confidently twice claimed.

Thanks for the correction in response to my request for exactly such a clarification. So 16MB is Unlimiturd's max, except when 32MB is the limit. Unlimiturd: you may have any block size you want, so long as it's <32MB. #rekt
|
| [Monero logo]
|
| "The difference between bad and well-developed digital cash will determine whether we have a dictatorship or a real democracy." David Chaum 1996 "Fungibility provides privacy as a side effect." Adam Back 2014
|
|
|
|
|
achow101
Moderator
Legendary
Offline
Activity: 3542
Merit: 6885
Just writing some code
|
|
January 03, 2016, 02:51:29 AM |
|
BU eliminates the power struggle by unbundling the setting of consensus parameters from the rest of the code.
Why should this be removed from the code and made user configurable? It is consensus critical since it can create hard forks, so why should it be removed? It gives choices, you say; so does that mean we should take everything else that is consensus critical and make it user configurable too? Should we remove the block reward schedule? Should we change the difficulty retargeting schedule?
|
|
|
|
Zangelbert Bingledack
Legendary
Offline
Activity: 1036
Merit: 1000
|
|
January 03, 2016, 02:56:26 AM |
|
I'm probably not the right person to ask. Other people on that forum should know.

Most of my interest is with the experimental stuff discussed by testing1567, as well as some interesting new attack vectors opened up politically by empowering the nodes with developmental decisions. There are some topics that need further analysis.

Note that nodes already are empowered with decisions on the consensus parameters; it's just that there is a lot of friction in them doing so, because of the inconvenience barrier and the strong Schelling point it artificially sets up. (See my post above in reply to Smooth. I would say the current approach makes it far more political, and BU attempts to eliminate such aspects.)

This does get me interested in a potential oracle or DAO having the role of determining maxBlockSize by analyzing technical merits/limitations at a higher weight than user demand, which could be used to influence a more dynamic block adjustment.

Interesting idea. Bitcoin has a lot up its sleeve for the future, and DAOs+oracles could do amazing things for market efficiency. I think a prediction market would be ideal. Once a decentralized prediction market is up and running and gets liquidity, this will probably be the way the blocksize cap is decided in the future. This whole debate has made me extremely optimistic about Bitcoin's future, as I see the ferocity with which people will defend and debate until the right answer has been reached, even on a point that is fairly esoteric to most lay people. This is the power of an economic system powered by people with skin in the game. How much have all of us learned, no matter what side of the debate you are on, in this past year? Bitcoin drives us to be better, smarter, wiser, less biased, less emotional, less narrowly focused on our own domains of expertise.

1) Very few of the core developers are "small block adherents"; besides 1-2 developers, all suggest raising maxBlockSize.

Yeah. By "small" I just mean like single-digit MB sizes for the next few years. I don't mean just permanent 1MB supporters.

2) testing1567 indicated "My other issue with BU is it lacks a way to move the blocksize down, only up." Is this true for nodes with BU? (I am aware that miners can set the limit to anything.)

No, any limit can be set. I assume testing1567 was referring to something about how someone said the acceptance depth thing was supposed to work.
|
|
|
|
jbreher
Legendary
Offline
Activity: 3038
Merit: 1660
lose: unfind ... loose: untight
|
|
January 03, 2016, 03:06:22 AM |
|
the social contract (SHA256 PoW, 10 minute solution target, 21e6 emission, 1MB max block, pay-for-priority) cannot change one iota without alienating a dominant plurality of the socioeconomic majority's critical mass.
If indeed it is true that 1MB4EVA is part of Bitcoin's social contract, and indeed anything changed in the social contract will alienate the dominant plurality of the socioeconomic majority's critical mass, then you can go back to sleep, as BU presents no risk to you. But your being here, expending effort in arguing against BU, is an indication that you are afraid that your assertions are incorrect. If the economic majority does not agree to some new limit, then any fork based upon that limit will die. It requires an economic majority to sustain any fork. I can only speak for myself, but I do agree with you that there are certain attributes that I feel are sacrosanct -- without which I would divest. However, I do not agree that 1MB4EVA is even remotely part of these fundamental principles. Further, it seems you've lost that battle even amongst the majority of those that oppose a simple maxblocksize increase at this point in time. Must suck. Sorry.
|
Anyone with a campaign ad in their signature -- for an organization with which they are not otherwise affiliated -- is automatically deducted credibility points.
I've been convicted of heresy. Convicted by a mere known extortionist. Read my Trust for details.
|
|
|
Zangelbert Bingledack
Legendary
Offline
Activity: 1036
Merit: 1000
|
|
January 03, 2016, 03:09:34 AM |
|
Really? How? So far what I have seen is that a new block size limit in BU takes effect immediately. There is no mechanism that does the supermajority fork process.
If there is a specific option for the supermajority fork process for a single BIP, then there should be one for every BIP. Will BU have options to allow the user to support whatever BIP or not? How will new BIPs be added? Through a software upgrade?
Yeah, software upgrade as far as I know. These are planned. Dev just started recently. Don't know how long it will take. Probably the supermajority requirements will be as in the original BIPs, with the option for the user to customize them as well. Depends on what the devs do, or what people who fork BU do.*

*Note that, unlike Core or XT, it doesn't really matter (as far as consensus parameters) whether you run BU or a fork of it. This is an implication of the unbundling of consensus-parameter-setting from the rest of the code. So any questions you might be asking about the specific BU project with Andrew Stone as lead dev should probably be reconceived a bit: instead of asking what BU will do, ask what anyone who does something similar could do. The genie is kind of out of the bottle. With the unbundling concept, anyone could offer blocksize-related BIP-mimicry of any kind in any configuration. It's just a matter of dev time and inclination. Dave Collins of the btcd implementation mentioned adding BU-style blocksize cap configurability, for example (on the other forum).
|
|
|
|
BitUsher
Legendary
Offline
Activity: 994
Merit: 1035
|
|
January 03, 2016, 03:09:59 AM |
|
Why are you having such a hard time understanding that it IS ALREADY CONFIGURABLE!? BU has done nothing more than add a GUI to a system that is already designed to reach consensus based on the code individual users decide to run. It says this very clearly in the whitepaper. Do none of you understand how Satoshi envisioned this system to work? How can you even invest in Bitcoin when you don't understand these very basic facts? Because otherwise this system would be extremely fragile and would make no investment sense at all.
Of course, everything is configurable, and to a developer there is little difference between recompiling from source with a few changes in the code and making the changes with a GUI. To a non-technical person, however, this makes a world of difference and has a profound political impact.

Yeah. By "small" I just mean like single-digit MB sizes for the next few years. I don't mean just permanent 1MB supporters.

Interesting. Do you have a general sense of what block sizes most BU supporters are comfortable with? How about yourself? Will BU be rolling in the long list of other scaling changes Core is going to be rolling out, including SegWit/SepSig? (Seems to be some confusion on the issue here -- https://bitco.in/forum/threads/what-is-bus-stance-on-segwit.665/) Perhaps this is all premature, as you still need to vote in officers and finish updating your GitHub; I see multiple issues there at a quick glance (many old notes and links from pre-fork).

However, I do not agree that 1MB4EVA is even remotely part of these fundamental principles. Further, it seems you've lost that battle even amongst the majority of those that oppose a simple maxblocksize increase at this point in time.

1MB couldn't possibly be a core principle in the inherent initial social contract, as it was imposed at a later date, and Satoshi later indicated how it could be raised as well. It's a good thing that almost no core devs want to keep it at 1MB... even Peter Todd has signed up to increase it recently by accepting SepSig.
|
|
|
|
_mr_e
Legendary
Offline
Activity: 817
Merit: 1000
|
|
January 03, 2016, 03:22:04 AM |
|
Why are you having such a hard time understanding that it IS ALREADY CONFIGURABLE!? BU has done nothing more than add a GUI to a system that is already designed to reach consensus based on the code individual users decide to run. It says this very clearly in the whitepaper. Do none of you understand how Satoshi envisioned this system to work? How can you even invest in Bitcoin when you don't understand these very basic facts? Because otherwise this system would be extremely fragile and would make no investment sense at all.
Of course, everything is configurable, and to a developer there is little difference between recompiling from source with a few changes in the code and making the changes with a GUI. To a non-technical person, however, this makes a world of difference and has a profound political impact.

Sorry, but if you really think Bitcoin's success lies in the fact that a simple change can only be made by a developer, then the system is completely doomed. Good thing emergent consensus does not work like this anywhere in nature.
|
|
|
|
|