I'll hedge my bets and just hold on to both forks and let market forces (or, more likely, political forces amongst bitcoin magnates) decide the winner.
I'm surprised that this fork is still going ahead. I thought that bitcoin cash was the fork for the big blockers, so what is the point of Segwit2x? It offers no real technical solution to the blocksize issue that we don't already have between the two existing forks: one being addressed by the Core/Blockstream segwit approach, and the other by the original Satoshi vision of data-centre-sized mining nodes (bitcoin cash). Perhaps I misunderstand bitcoin cash. Nobody ever managed to convince me that segwit was anything more than a clever engineering method to turn what could have been achieved in a hard fork into a soft fork. I understand that the NY agreement was attended by the Blockstream investor, Digital Currency Group, along with miners and other influential bitcoin representatives, but not by Blockstream themselves. In reality, the NY agreement was just the HK agreement reiterated, but with a timescale, due to a lack of confidence amongst miners and some other influential bitcoin players that Blockstream was truly committed to a blocksize increase.
So I assume this means that Blockstream are not on board with segwit2x. So why is it going ahead? Are we going to have segwit4x and segwit8x forks too, since it does not address any fundamental technical problem?
Or is this fork still going ahead because even DCG themselves are frustrated with Blockstream?
|
|
|
I disagree. It is a very biased utopia-versus-evil article, a summary of nothing useful, and I should have switched off as soon as the 'hard brexit' comparison was made.
|
|
|
It's known as the quadratic issue: as the fee in satoshis doubles and the price of BTC doubles, the fee in fiat terms goes up quadratically (it quadruples).
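To make the arithmetic concrete, here is a toy Python calculation with made-up numbers (the 50k satoshi fee and $1,000 price are purely illustrative):

```python
# Toy illustration of the quadratic effect with made-up numbers:
# the fiat cost of a fee is (fee in satoshis / 1e8) * (BTC price),
# so doubling both the satoshi fee and the price quadruples the fiat cost.

SATS_PER_BTC = 100_000_000

def fiat_fee(fee_sats, btc_price_usd):
    return fee_sats / SATS_PER_BTC * btc_price_usd

before = fiat_fee(50_000, 1_000)     # 50k sats at $1,000/BTC -> $0.50
after = fiat_fee(100_000, 2_000)     # both doubled           -> $2.00
print(before, after, after / before)  # 0.5 2.0 4.0
```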
|
|
|
That's a lot of DCG portfolio companies disagreeing with the road map put forward by the other DCG portfolio company, BS.
But what is the point of segwit if you are going to do a hard fork anyway? The whole point of segwit is that it disguises hard fork functionality as a soft fork, so if hard forks can work safely (Monero hasn't blown up yet, nor have Dash or Ethereum*), then the functionality provided by segwit can be implemented in a cleaner manner with a hard fork.
*Ethereum has hard forked many times; only the contentious one ended up with a permanent chain split.
|
|
|
When a system is running at max capacity, it only takes a small demand/supply imbalance to push it over the edge. Despite the mempool (cough, transaction pool) spikes, how often has this lasted beyond a quarter of a day? I mean, how often does the backlog of unconfirmed transactions exceed the confirmed transaction capacity?
|
|
|
I count about 11.
Honestly though, I don't care either way. I just want to stir the pot also.
I don't see any major crashes there, as the thread title stipulates... The blips in the Core software could be caused by ISP disruption, or downtime for a newly released software update. Besides, I don't see anything out of the ordinary here; BU is showing consistent behaviour. ck, did this really have to be a self-moderated thread?
|
|
|
There seems to be an influx of new money into the cryptomarkets. Most coins are up in price and bitcoin's market cap dominance has declined. There still seem to be pump-and-dump flows into and out of BTC, but in general everything has been going up and altcoins have benefited most (hence the decline in bitcoin dominance despite the price rise). Miner signalling is pretty much the same, with only daily variances; not much long-term change or significant change in pool support. I don't know if the DCG scaling meeting will occur this month as mooted: http://www.coindesk.com/major-bitcoin-scaling-meeting-take-place-may/ Otherwise known as the meeting of the board of central governors, as dinofelis would put it.
|
|
|
Blockstream has its own brand and coin, and it's called BTC.
Bitcoin has been privatised!
|
|
|
Questioning whether a technical solution has been implemented in the best manner is not the same as attempting to smear it.
|
|
|
The below are my thoughts. People are free to develop their own.
In my view, we probably need both on-chain and off-chain solutions, as this would widen BTC's use cases. I fail to see how off-chain solutions would work on top of an unreliable and restricted main chain. Even the lightning network white paper raises the need for on-chain growth.
So I personally would vote 'yes' to a blocksize increase as this is the simplest solution. Other people may consider solutions such as extension blocks to be a better way forward.
As for segwit, I need to be convinced that segregated witness blocks are the simplest solution. I suspect its benefits could be achieved simply with new transaction format types (similar to FlexTrans). So at the moment I am inclined to vote 'no', but I am open to being convinced by stronger technical arguments. (And implementing a hard fork as a soft fork with software engineering hacks due to Fear Of Hard Fork is not something that would sway me.)
|
|
|
... low fee like bittrex with 20k satoshi, and get fast confirmations on my wallet, I usually wonder how they can do this...
Bittrex takes a 20k fee from you but pays a 40k fee on the transaction (about 180 s/b)! Go check your tx in a block explorer and you'll see. Yep, for most of my not-so-recent exchange withdrawals I was charged a withdrawal fee lower than the fee that the exchange put on the transaction!
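For what it's worth, the ~180 s/b figure is consistent with a 40k satoshi fee on a typically sized withdrawal transaction; a rough sketch, where the ~225-byte size is my assumption rather than something from the post:

```python
# Rough sanity check of the numbers above. The 40k satoshi fee comes
# from the post; the ~225-byte size (one input, two outputs) is assumed.
fee_sats = 40_000
tx_size_bytes = 225
print(round(fee_sats / tx_size_bytes))  # ~178 sat/byte, close to "about 180 s/b"
```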
|
|
|
How many confirmations on the input? Many pools still take coin age into account when including transactions in a block; it's not just a straight highest-fee formula yet. (Despite any propaganda to the contrary to justify its future removal.)
I seriously doubt that. The priority formula is long dead. No, it isn't yet. I've made low fee transactions which have been prioritised over higher fee transactions recently. If you look at the block delay estimation on https://bitcoinfees.21.co/ there is a lower bound and an upper bound. Currently some transactions in the 1-20 sats/byte range are being confirmed after 5 blocks, whereas other transactions in that range might not be confirmed at all (5-Inf). The 101-120 sats/byte range shows delays of 1-18 blocks. So some lower fee transactions are being confirmed in 6 blocks, whilst some of the higher fee transactions are taking 19 blocks. BW, BitFury, ViaBTC and GBMiners are examples of miners that have included my high coin age, lower fee transactions ahead of higher fee paying transactions. Antpool and F2Pool seem to be the pools that will mine non-full blocks despite fee paying transactions being in the mempool and there being plenty of validation time between blocks.
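For reference, a minimal sketch of the legacy "priority" calculation as I recall it from older Bitcoin Core releases (sum of input value times input age, divided by transaction size, with some block space historically reserved for transactions above a threshold); treat the exact threshold and example numbers as illustrative:

```python
# Sketch of the legacy coin-age priority formula, as I recall it:
# priority = sum(input_value_in_satoshis * input_confirmations) / tx_size_in_bytes
# Older clients reserved some block space for transactions above roughly
# 1 BTC * 144 confirmations / 250 bytes = 57,600,000.

HIGH_PRIORITY_THRESHOLD = 100_000_000 * 144 / 250  # 57,600,000

def tx_priority(inputs, tx_size_bytes):
    """inputs: list of (value_in_satoshis, confirmations) tuples."""
    return sum(value * confs for value, confs in inputs) / tx_size_bytes

# A 1 BTC input left unspent for about a week (~1008 blocks) in a 250-byte tx:
p = tx_priority([(100_000_000, 1008)], 250)
print(p, p > HIGH_PRIORITY_THRESHOLD)  # 403200000.0 True
```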
|
|
|
How many confirmations on the input? Many pools still take coin age into account when including transactions in a block; it's not just a straight highest-fee formula yet. (Despite any propaganda to the contrary to justify its future removal.)
|
|
|
OK, perfect, now I've got all of this! Last question: where can I see whether a simple majority supports a HF of BTC or not?
Greetings!
Miner signalling support can be found here: https://coin.dance/blocks (bigger blocks hard fork signalling = Bitcoin Unlimited + 8MB proposals). I doubt anyone would attempt any type of fork with a simple 51% majority; it would be a completely unpredictable and economically damaging clusterfuck. Listening node support can be found here: https://coin.dance/nodes
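If you want to do the tallying yourself rather than reading the charts, the counting is trivial; a toy example with made-up per-block signal labels (real tallying would mean parsing coinbase strings and version bits, which I'm glossing over):

```python
# Toy tally of miner signalling over a window of blocks, using made-up
# labels instead of real coinbase/version-bit parsing. As above, "bigger
# blocks" support = Bitcoin Unlimited + 8MB signals.
from collections import Counter

signals = ["BU", "SegWit", "8MB", "BU", "None", "SegWit", "BU", "8MB"]  # hypothetical
counts = Counter(signals)
bigger_blocks = counts["BU"] + counts["8MB"]
print(f"bigger blocks: {bigger_blocks / len(signals):.0%}")     # 62%
print(f"segwit:        {counts['SegWit'] / len(signals):.0%}")  # 25%
```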
|
|
|
This is Core's vision: a settlement-layer chain with full blocks and high fees.
We should demonise the miners as Lauda propagandises. Hopefully the revolting miners will become truly revolting as a result.
|
|
|
4. There are two possible ways to deploy/implement SegWit, as a softfork or as a hardfork. SegWit as a hardfork would allow a slightly cleaner implementation but would also require replay protection (as the exchanges have specifically asked for lately). SWSF does not require replay protection assuming a hashrate majority. Replay protection is difficult thus SegWit as a hardfork would altogether cause more technical debt than SWSF. Also a hardfork is generally considered of higher risk and would take a longer preparation time.
Sorry, it seems people have had their heads FOHK'ed with (Fear Of Hard Fork). There is little difference between the dangers of a soft fork and a hard fork. In the event of a soft fork we have: 1.) The old chain exists with a more permissive set of rules. 2.) The new chain exists with a more restrictive set of rules. In a hard fork we have: 1.) The old chain exists with a more restrictive set of rules. 2.) The new chain exists with a more permissive set of rules. So they look exactly the same during a chain split. The only difference is that a soft fork is backwards compatible because of its more restrictive set of rules. In the event of a successful soft fork, older nodes continue to operate as normal. In the event of a successful hard fork, older nodes become unsynced and have to upgrade. In the event of a contentious fork, hard or soft, it becomes an economically damaging clusterfuck until the winning fork is determined (the longest chain) or a bilateral split occurs (the minority chain implements replay protection)*.
* Strictly speaking, the software forking away from the existing protocol (hard or soft) should be the version that implements replay protection, as you cannot demand that the existing protocol chain change its behaviour. In practice though, the aim is not to create a permanent chain split but to achieve consensus, so the minority chain should end up orphaned off, and any transactions that occur during any temporary chain split should end up confirmed on the main chain.
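To make the "more restrictive vs more permissive" symmetry concrete, here is a minimal sketch where a rule set is reduced to nothing but a maximum block size check (the numbers are invented):

```python
# Minimal sketch of the soft fork / hard fork symmetry, with invented numbers.
# A "rule set" is modelled as just a maximum block size.

def valid(block_size, max_size):
    return block_size <= max_size

OLD_RULES = 1_000_000  # 1 MB cap under the existing rules

# Soft fork: new rules are MORE restrictive (e.g. cap lowered to 500 KB).
# Blocks valid under the new rules are still valid under the old rules,
# which is why old nodes happily keep following the new chain.
SOFT_FORK_RULES = 500_000
print(valid(400_000, SOFT_FORK_RULES), valid(400_000, OLD_RULES))      # True True

# Hard fork: new rules are MORE permissive (e.g. cap raised to 2 MB).
# Blocks valid under the new rules can be invalid under the old rules,
# so old nodes reject the new chain and fall out of sync.
HARD_FORK_RULES = 2_000_000
print(valid(1_500_000, HARD_FORK_RULES), valid(1_500_000, OLD_RULES))  # True False
```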
|
|
|
Great now we have another consensus mechanism, PoB (Proof-of-Bribery).
We can now add that to the other ugly consensus mechanisms, PoD and PoP (Proof-of-DDOS and Proof-of-Propaganda).
EDIT: Forgot PoCO (Proof-of-Contractual-Obligation).
|
|
|
In the event of a soft fork we have: 1.) The old chain exists with a more permissive set of rules. 2.) The new chain exists with a more restrictive set of rules.
In a hard fork we have: 1.) The old chain exists with a more restrictive set of rules. 2.) The new chain exists with a more permissive set of rules.
This is a cool explanation. There is no difference between the dangers of a soft fork and a hard fork.
- snip -
So they look exactly the same during a chain split.
But there is no chain split on a soft fork... that's the WHOLE point? There is a chain split if there is a division of hash power between the old and new rule sets. The only difference is that a soft fork is backwards compatible with older node software, whereas a hard fork isn't. In the event of a successful soft fork, older nodes continue to operate as normal. In the event of a successful hard fork, older nodes become unsynced and have to upgrade. In the event of a contentious fork, hard or soft, it becomes an economically damaging clusterfuck until the winning fork is determined (the longest chain) or a bilateral split occurs (the minority chain implements replay protection). With a hard fork that is hacked in as a soft fork (where the backwards compatibility is an illusory hack), the soft fork functionality has to be sufficiently locked in before activation to prevent the backwards compatibility hacks from being exploited. In this case, an older node appears to operate as normal, but it really isn't, because it is being fooled by filtered, hacked data.
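On the "chain split if hash power divides" point, a toy simulation with made-up numbers (an assumed 80/20 hash rate split, nothing more) shows the shape of it: both chains keep growing while both sides mine their own rules, and the majority side pulls ahead:

```python
# Toy simulation of a divided hash rate: each side extends its own tip,
# so two chains coexist, and the majority rule set pulls ahead.
# The 80/20 split is an assumption for illustration only.
import random
random.seed(1)

new_rules_share = 0.8          # assumed share of hash power on the new rules
old_chain = new_chain = 0      # block heights past the fork point

for _ in range(144):           # roughly one day of blocks
    if random.random() < new_rules_share:
        new_chain += 1
    else:
        old_chain += 1

print(new_chain, old_chain)    # both non-zero; roughly 115 vs 29 in expectation
```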
|
|
|
Segwit is already active on LTC. The user just doesn't have the ability to use it yet.
|
|
|
|