sickpig
Legendary
Offline
Activity: 1260
Merit: 1008
|
|
June 24, 2015, 03:08:45 PM |
|
Update: gavin's BIP has been assigned number 101.
|
Bitcoin is a participatory system which ought to respect the right of self determinism of all of its users - Gregory Maxwell.
|
|
|
hdbuck
Legendary
Offline
Activity: 1260
Merit: 1002
|
|
June 24, 2015, 03:24:12 PM |
|
Update: gavin's BIP has been assigned number 101. lol as in "blocksize 101"? MIT get out of here!
|
|
|
|
cypherdoc (OP)
Legendary
Offline
Activity: 1764
Merit: 1002
|
|
June 24, 2015, 03:27:15 PM |
|
http://www.marketwatch.com/story/heres-who-is-most-exposed-to-a-greek-default-2015-06-23

"The private sector 'has almost no direct exposure to Greece anymore,' wrote strategists at J.P. Morgan Cazenove, in a Monday note urging clients to get back into German stocks."

So the whole 2011-2012 Greek loan restructuring was basically a strategy to move the bagholders from rich, powerful bankers and investors into the hands of the general population (as taxed by their governments), wasn't it?

EDIT: the entire system is broken. Private lenders lend to eurozone countries at essentially no risk (that's the moral hazard of bailouts). Then the country and the private lenders get bailed out with public money because of political/social concepts like one unified Europe. How about letting governments default on loans WITHOUT kicking them out of the eurozone? How about a government bankruptcy: elect a new govt, fire the top 10 central bankers and any other bankers the other central banks deem responsible, and have a 5-10 year probationary period where their local central bank cannot make euro-creating loans.

they're lying:
|
|
|
|
|
sickpig
Legendary
Offline
Activity: 1260
Merit: 1008
|
|
June 24, 2015, 03:36:22 PM |
|
|
Bitcoin is a participatory system which ought to respect the right of self determinism of all of its users - Gregory Maxwell.
|
|
|
cypherdoc (OP)
Legendary
Offline
Activity: 1764
Merit: 1002
|
|
June 24, 2015, 03:37:08 PM |
|
oh my:
|
|
|
|
sickpig
Legendary
Offline
Activity: 1260
Merit: 1008
|
|
June 24, 2015, 03:39:38 PM |
|
(sorry for the OT) If memory serves, somewhere in the last few dozen pages there was a debate involving climate change. I've just found this interesting article titled "What's really warming the world": http://www.bloomberg.com/graphics/2015-whats-warming-the-world/
|
Bitcoin is a participatory system which ought to respect the right of self determinism of all of its users - Gregory Maxwell.
|
|
|
NewLiberty
Legendary
Offline
Activity: 1204
Merit: 1002
Gresham's Lawyer
|
|
June 24, 2015, 03:40:21 PM |
|
I suppose that all depends on the definition of "success".
My definition of success is perhaps modest to some, but it is still far away: Bitcoin being the protocol and the coin of the internet realm. If it achieves this, it could well become much more, but this is the success for which it is designed.

Knowing your way of thinking, yes; it is a modest target for something that incorporates so many different things in just one package. Maybe "the beginning of success" would've been more appropriate.

It merely scratches the surface of the capabilities, but achieving only this satisfies the whitepaper and thus remains my criterion for success. As a guiding principle for the project, doing anything that reduces this potential in order to achieve something other than this would be against the spirit of Bitcoin. Thus, those efforts could/should be a different project, or would otherwise naturally follow from the success of this one goal, which remains a long way off.
|
|
|
|
NewLiberty
Legendary
Offline
Activity: 1204
Merit: 1002
Gresham's Lawyer
|
|
June 24, 2015, 03:43:02 PM |
|
There are other threads on this topic here already. Why pollute this one?
|
|
|
|
sickpig
Legendary
Offline
Activity: 1260
Merit: 1008
|
|
June 24, 2015, 03:52:31 PM |
|
There are other threads on this topic here already. Why pollute this one? You're absolutely right. Again sorry for being off topic.
|
Bitcoin is a participatory system which ought to respect the right of self determinism of all of its users - Gregory Maxwell.
|
|
|
sickpig
Legendary
Offline
Activity: 1260
Merit: 1008
|
|
June 24, 2015, 04:24:00 PM |
|
EU Banks Forced to Report Bitcoin-Linked Accounts Transacting Over €1,000
http://www.xbt.money/eu-banks-forced-to-report-bitcoin-linked-accounts-transacting-over-e1000/

An anonymous Dutch bank employee has revealed that major banks in the EU are applying new rules forcing banks to report, monitor or investigate accounts receiving or sending over €1,000 connected to bitcoin.
|
Bitcoin is a participatory system which ought to respect the right of self determinism of all of its users - Gregory Maxwell.
|
|
|
cypherdoc (OP)
Legendary
Offline
Activity: 1764
Merit: 1002
|
|
June 24, 2015, 04:28:52 PM |
|
|
|
|
|
cypherdoc (OP)
Legendary
Offline
Activity: 1764
Merit: 1002
|
|
June 24, 2015, 04:36:13 PM |
|
another person who sees problems with block size voting. i haven't analyzed the attack; just posting here from bitcoin-dev: On Tue, Jun 23, 2015 at 8:05 PM, William Madden <will.madden@novauri.com> wrote:
Here are refutations of the approach in BIP-100 here: http://gtf.org/garzik/bitcoin/BIP100-blocksizechangeproposal.pdf
To recap BIP-100:
1) Hard fork to remove static 1MB block size limit
2) Add new floating block size limit set to 1MB
3) Historical 32MB message limit remains
4) Hard fork on testnet 9/1/2015
5) Hard fork on main 1/11/2016
6) 1MB limit changed via one-way lock-in upgrade with a 12,000-block threshold by 90% of blocks
7) Limit increase or decrease may not exceed 2x in any one step
8) Miners vote by encoding 'BV'+BlockSizeRequestValue into coinbase scriptSig, e.g. "/BV8000000/" to vote for 8M
9) Votes are evaluated by dropping the bottom 20% and top 20%, and then the most common floor (minimum) is chosen
An 8MB limit doubling just under every 2 years makes a static value grow in a predictable manner.
BIP-100 makes a static value grow (or more importantly potentially shrink) in an unpredictable manner based on voting mechanics that are untested in this capacity in the bitcoin network. Introducing a highly variable and untested dynamic into an already complex system is unnecessarily risky.
For example, the largely arbitrary voting rules listed in 9 above can be gamed. If I control pools, or have affiliates involved in pools, that mine slightly more than 20% of blocks, I could wait until block sizes are 10MB, and then suddenly vote "/BV5000000/" for 20% of blocks and "/BV5000001/" for the remaining 10%. If others don't consistently vote for the same "/BV#/" value, or vote so consistently that their value is thrown out as the top 20%, I could win the resize to half capacity with "/BV5000001/", because it was the lowest repeated value not in the bottom 20%.
I could use this to force an exodus to my sidechain/altcoin, or to choke out the bitcoin network. A first improvement would be to only let BIP-100 raise the cap and not lower it, but if I can think of a vulnerability off the top of my head, there will be others on the other side of the equation that have not been thought of. Why bother introducing a Rube Goldberg machine like voting when a simple 8MB cap with predictable growth gets the job done, potentially permanently?
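The gamed vote described in the quoted mail is easy to check numerically. Below is a toy simulation under one plausible reading of rule 9 (sort all votes in the window, drop the bottom and top 20%, take the minimum of what remains); the proposal's actual evaluation may differ, and the 100-block window here is just for illustration:

```python
# Toy simulation of one reading of BIP-100's vote evaluation (rule 9):
# sort the votes, discard the bottom 20% and top 20%, and take the
# minimum of what survives as the new limit. Tie-breaking and window
# size in the real proposal may differ; this only illustrates the attack.

def evaluate_votes(votes):
    """Return the new block size limit under this reading of rule 9."""
    ordered = sorted(votes)
    cut = len(ordered) // 5                    # 20% of the votes
    trimmed = ordered[cut:len(ordered) - cut]  # drop bottom and top 20%
    return min(trimmed)                        # lowest surviving vote wins

# 100 blocks in the voting window; the attacker mines 30% of them.
honest   = [10_000_000] * 70                   # honest miners vote to stay at 10MB
attacker = [5_000_000] * 20 + [5_000_001] * 10

new_limit = evaluate_votes(honest + attacker)

# The 20 "/BV5000000/" votes are discarded as the bottom 20%, and 20 of
# the honest 10MB votes are discarded as the top 20% -- leaving 5,000,001
# as the lowest surviving vote. The limit is cut roughly in half.
print(new_limit)  # 5000001
```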
|
|
|
|
NewLiberty
Legendary
Offline
Activity: 1204
Merit: 1002
Gresham's Lawyer
|
|
June 24, 2015, 04:57:51 PM |
|
another person who sees problems with block size voting. i haven't analyzed the attack; just posting here from bitcoin-dev: On Tue, Jun 23, 2015 at 8:05 PM, William Madden <will.madden@novauri.com> wrote:
Here are refutations of the approach in BIP-100 here: http://gtf.org/garzik/bitcoin/BIP100-blocksizechangeproposal.pdf
To recap BIP-100:
1) Hard fork to remove static 1MB block size limit
2) Add new floating block size limit set to 1MB
3) Historical 32MB message limit remains
4) Hard fork on testnet 9/1/2015
5) Hard fork on main 1/11/2016
6) 1MB limit changed via one-way lock-in upgrade with a 12,000-block threshold by 90% of blocks
7) Limit increase or decrease may not exceed 2x in any one step
8) Miners vote by encoding 'BV'+BlockSizeRequestValue into coinbase scriptSig, e.g. "/BV8000000/" to vote for 8M
9) Votes are evaluated by dropping the bottom 20% and top 20%, and then the most common floor (minimum) is chosen
An 8MB limit doubling just under every 2 years makes a static value grow in a predictable manner.
BIP-100 makes a static value grow (or more importantly potentially shrink) in an unpredictable manner based on voting mechanics that are untested in this capacity in the bitcoin network. Introducing a highly variable and untested dynamic into an already complex system is unnecessarily risky.
For example, the largely arbitrary voting rules listed in 9 above can be gamed. If I control pools, or have affiliates involved in pools, that mine slightly more than 20% of blocks, I could wait until block sizes are 10MB, and then suddenly vote "/BV5000000/" for 20% of blocks and "/BV5000001/" for the remaining 10%. If others don't consistently vote for the same "/BV#/" value, or vote so consistently that their value is thrown out as the top 20%, I could win the resize to half capacity with "/BV5000001/", because it was the lowest repeated value not in the bottom 20%.
I could use this to force an exodus to my sidechain/altcoin, or to choke out the bitcoin network. A first improvement would be to only let BIP-100 raise the cap and not lower it, but if I can think of a vulnerability off the top of my head, there will be others on the other side of the equation that have not been thought of. Why bother introducing a Rube Goldberg machine like voting when a simple 8MB cap with predictable growth gets the job done, potentially permanently?
Is it meant to be humorous that voting is described as arbitrary when compared to a single person picking a number out of his head?
|
|
|
|
Adrian-x
Legendary
Offline
Activity: 1372
Merit: 1000
|
|
June 24, 2015, 05:08:48 PM Last edit: June 24, 2015, 05:31:40 PM by Adrian-x |
|
I've come to see a single core as being the single greatest threat to Bitcoin out of all of this. A better situation to me would be a bitcoin P2P network with several separately developed cores that adhere to a common set of rules. The upgrade path is then always put to a vote. Any one core could then propose a change simply by implementing it (with a rule that after x% of blocks back the change it becomes active).
Then if people like the change, more will move to that core. This in turn would cause the other core to adopt the change or lose their users, and that is how consensus is achieved. If a majority did not like the change, they would not move to that core, and the change would never be accepted.
At no point in this do any set of gatekeepers get to dictate terms. Since no core has a majority of users captured, change would always have to come through user acceptance and adoption, and developers would simply be proposers of options.
You just reinvented pegged side chains. I think you're just starting to understand Bitcoin. No need to fork to SC's?
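The quoted "after x% of blocks back the change, it becomes active" rule can be sketched as a rolling tally over recent blocks. Everything here (the 1000-block window, the 75% threshold, the class name) is a hypothetical illustration of the idea, not a description of any deployed client:

```python
# Toy sketch of the activation rule described above: a change proposed by
# one implementation becomes active once x% of blocks in a rolling window
# signal support for it. Window size and threshold are hypothetical.

from collections import deque

WINDOW = 1000        # examine the last 1000 blocks (hypothetical)
THRESHOLD = 0.75     # activate once 75% of them signal (hypothetical)

class ActivationTracker:
    def __init__(self):
        self.recent = deque(maxlen=WINDOW)  # 1 = block signals support
        self.active = False

    def on_block(self, signals_support):
        self.recent.append(1 if signals_support else 0)
        # Only evaluate once a full window has been seen, and never
        # deactivate once the rule has locked in.
        if not self.active and len(self.recent) == WINDOW:
            if sum(self.recent) / WINDOW >= THRESHOLD:
                self.active = True
        return self.active

tracker = ActivationTracker()
# 800 supporting blocks out of the first 1000 crosses the 75% bar.
for i in range(1000):
    tracker.on_block(signals_support=(i % 5 != 0))  # 80% signal support
print(tracker.active)  # True
```

Other implementations would watch the same chain and see the same tally, which is what makes the vote binding without any single gatekeeper.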
|
Thank me in Bits 12MwnzxtprG2mHm3rKdgi7NmJKCypsMMQw
|
|
|
thezerg
Legendary
Offline
Activity: 1246
Merit: 1010
|
|
June 24, 2015, 05:15:21 PM |
|
yes the voting has issues.
The biggest issue is the role the block size cap should have in the network.
Satoshi, Gavin, et al. see it as a "sanity check". In software this is essentially an exceptional condition -- a value that the system should *never* exceed. In this perspective, miners can individually produce smaller blocks (regardless of pending txn count), so why create a mechanism that essentially forces a cartel (collective behavior)? The only reason would be to maximize miner profitability by manipulating the supply curve. Closely tracking evolving network and disk capacity is not necessary for a "sanity check", because most miners will simply start mining smaller blocks (than the limit) if network and disk stop following Moore's law.
Others want the max block size to be an entity that can be used to actively manage the network. Presumably if miners are noticing a lot of "spam", but a "misbehaving" miner is mining it, they could vote to reduce block capacity to eliminate this spam. Also, voting lets the limit more closely track the evolution of network and disk capacity.
|
|
|
|
Zangelbert Bingledack
Legendary
Offline
Activity: 1036
Merit: 1000
|
|
June 24, 2015, 05:37:57 PM |
|
Is it meant to be humorous that voting is described as arbitrary when compared to a single person picking a number out of his head?
A single person picking a number out of their head is voting, because all the person can do is make a suggestion. Every suggestion is voted on, by investors. Superimposing a second voting mechanism, and more complexity with it, seems redundant. That is, unless you think "hard forks are dangerous." I say, get comfortable with the fork, make it so that it is easy to do with predictable results, and then use it whenever necessary. Whatever its dangers may be, the dynamics of that voting mechanism are far superior.
|
|
|
|
cypherdoc (OP)
Legendary
Offline
Activity: 1764
Merit: 1002
|
|
June 24, 2015, 05:43:34 PM |
|
yes the voting has issues.
The biggest issue is the role the block size cap should have in the network.
Satoshi, Gavin, et al. see it as a "sanity check". In software this is essentially an exceptional condition -- a value that the system should *never* exceed. In this perspective, miners can individually produce smaller blocks (regardless of pending txn count), so why create a mechanism that essentially forces a cartel (collective behavior)? The only reason would be to maximize miner profitability by manipulating the supply curve. Closely tracking evolving network and disk capacity is not necessary for a "sanity check", because most miners will simply start mining smaller blocks (than the limit) if network and disk stop following Moore's law.
Others want the max block size to be an entity that can be used to actively manage the network. Presumably if miners are noticing a lot of "spam", but a "misbehaving" miner is mining it, they could vote to reduce block capacity to eliminate this spam. Also, voting lets the limit more closely track the evolution of network and disk capacity.
Let's assume a spam-attacking, misbehaving miner, à la the "large miner, large block" attack against small miners that was initially huge FUD advanced by Wuille and others, and that you don't hear anything about anymore. Simple defense: just mine a 0-TX, header-only block while you're processing the large block. This is the practical defense that Wang Chun told us they use, and that we've witnessed them use during stress tests. And this defense can be done irrespective of whether, as a miner, you are connected to Corallo's relay network or not, which those FUDsters love to go on about.

Point being, miners can and should be encouraged to track TXs and fees and construct blocks appropriately, based on their own internal metrics and profitability. Core devs can't possibly do this.

Yes, you could accuse Gavin of trying to be a one-man predictor of network growth at this point, but his proposal is based on sound growth projections, with the larger point being that he is trying to automate out human intervention, voting manipulation and the need for future forks as much as possible.
|
|
|
|
thezerg
Legendary
Offline
Activity: 1246
Merit: 1010
|
|
June 24, 2015, 05:49:20 PM |
|
yes the voting has issues.
The biggest issue is the role the block size cap should have in the network.
Satoshi, Gavin, et al. see it as a "sanity check". In software this is essentially an exceptional condition -- a value that the system should *never* exceed. In this perspective, miners can individually produce smaller blocks (regardless of pending txn count), so why create a mechanism that essentially forces a cartel (collective behavior)? The only reason would be to maximize miner profitability by manipulating the supply curve. Closely tracking evolving network and disk capacity is not necessary for a "sanity check", because most miners will simply start mining smaller blocks (than the limit) if network and disk stop following Moore's law.
Others want the max block size to be an entity that can be used to actively manage the network. Presumably if miners are noticing a lot of "spam", but a "misbehaving" miner is mining it, they could vote to reduce block capacity to eliminate this spam. Also, voting lets the limit more closely track the evolution of network and disk capacity.
Let's assume a spam-attacking, misbehaving miner, à la the "large miner, large block" attack against small miners that was initially huge FUD advanced by Wuille and others, and that you don't hear anything about anymore. Simple defense: just mine a 0-TX, header-only block while you're processing the large block. This is the practical defense that Wang Chun told us they use, and that we've witnessed them use during stress tests. And this defense can be done irrespective of whether, as a miner, you are connected to Corallo's relay network or not, which those FUDsters love to go on about. Point being, miners can and should be encouraged to track TXs and fees and construct blocks appropriately, based on their own internal metrics and profitability. Core devs can't possibly do this. Yes, you could accuse Gavin of trying to be a one-man predictor of network growth at this point, but his proposal is based on sound growth projections, with the larger point being that he is trying to automate out human intervention and the need for future forks as much as possible.

Yes, the key thing to understand is that he's not trying to hit a bullseye here. It is not necessary. He is just trying to define an extreme condition to let coders allocate memory, size embedded hardware, etc. based on these maximums. Individual miners are welcome to mine smaller blocks.
|
|
|
|
tvbcof
Legendary
Online
Activity: 4732
Merit: 1277
|
|
June 24, 2015, 06:21:37 PM Last edit: June 24, 2015, 06:34:47 PM by tvbcof |
|
I'm less interested in rushing the hard fork than I was, now that it seems less likely that Bitcoin breaks due to full blocks.
I sent transactions Monday during the test. I just picked some older outputs from the bottom of the wallet, and they confirmed after 1 block.

The full-block 'problem' was overplayed for propaganda reasons. It's the same principle as making loud noises to get a herd of animals to do what you want them to do. Someone made a graph with his guess at simplistic future predictions, which did not (because they cannot) make any allowance for future developments. His genius was to put pretty colors on it, which were even more arbitrary. Since people like shiny and colorful things (a throwback to our simian ancestry, where identifying ripe fruit was important), it was a wildly successful bit of propaganda with many observers. That does not make it particularly meaningful in reality, though.

In point of fact, there is a large, mostly synthetic utilization of transaction availability even at the 1MB-derived rate. This comes from people who would not be there if it were not for the fact that BTC transactions are subsidized to the level of being nearly free. Sloughing off these freeloaders will not threaten the system at all, and may be helpful. This would buy us at least a lot of time.

If/when an actual problem with capacity occurs, everyone that I know of (except perhaps MP) is perfectly amenable to increasing capacity through simplistic block size increase methods. In that case, unlike today, it will be much easier to gain the 'consensus' that Bitcoin needs.
|
sig spam anywhere and self-moderated threads on the pol&soc board are for losers.
|
|
|
|