Author Topic: Last Time. In small words. Why 2MB impossible. Why Soft Fork. Why SegWit First..  (Read 6480 times)
franky1
May 06, 2017, 09:11:10 AM
Last edit: May 06, 2017, 09:44:01 AM by franky1
 #141

You know about flextrans right?

Wouldn't Flextrans have the exact same problem? I haven't studied Flextrans in detail, but from what I remember it would enable a new "version" of transactions without malleability. But wouldn't legacy transactions ("v1", as they call it here) continue to be allowed in this proposal, too? In this case it could lead to the exact same situation where a malicious miner or pool could try to spam the network with legacy transactions to "take out" some competitors.

Yep, FlexTrans is a new tx type just like SegWit, requiring people to choose to use it, but it doesn't solve the issues with the old native (legacy) transactions.

The solution is to limit the sigops, and to develop a new priority fee formula that actually works: one that charges more for people who bloat transactions and want to spend too often.

Things like hope and faith that pools will do the right thing are not enough. I actually laugh that the Blockstream (Core) devs removed code mechanisms and then went for banker economics: 'just pay more'.

I laughed more when reading how their half-gesture hopes and utopian half-baked promises meant more to them than actual clean code.

Lauda
May 06, 2017, 10:00:54 AM
 #142

the solution is to limit the sigops.
No, that is no solution whatsoever. All that does is kill use-cases which require higher sigops (see certain multisig).

and develop a new priority fee formula that actually works: one that charges more for people who bloat transactions and want to spend too often.
Each wallet can develop, and has developed, its own fee calculation.

Things like hope and faith that pools will do the right thing are not enough. I actually laugh that the Blockstream (Core) devs removed code mechanisms and then went for banker economics: 'just pay more'.
Statements like these make you look like an idiot. Mining pools had largely stopped using priority long before Bitcoin Core removed it from the code (which is also the reason for its removal).
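
For reference, the legacy "priority" metric being discussed is usually described as priority = sum(input value in satoshis × input age in blocks) / transaction size in bytes. A minimal sketch of that calculation, assuming the commonly cited definition and free-relay threshold (illustration only, not Bitcoin Core's actual code, and the helper name is invented):

Code:
// Legacy coin-age priority, as commonly defined:
// priority = sum(input_value_sat * input_age_blocks) / tx_size_bytes.
#include <cstddef>
#include <cstdint>
#include <vector>

struct InputInfo {
    int64_t value_sat;   // value of the coin being spent, in satoshis
    int confirmations;   // how long the coin has sat unspent, in blocks
};

double LegacyPriority(const std::vector<InputInfo>& inputs, std::size_t tx_size_bytes)
{
    double sum = 0.0;
    for (const auto& in : inputs)
        sum += static_cast<double>(in.value_sat) * in.confirmations;
    return sum / static_cast<double>(tx_size_bytes);
}

// Historically a priority above roughly COIN * 144 / 250 = 57,600,000
// (1 BTC aged one day in a 250-byte tx) qualified for free block space;
// pools gradually stopped honouring it.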

"The Times 03/Jan/2009 Chancellor on brink of second bailout for banks"
😼 Bitcoin Core (onion)
franky1
May 06, 2017, 10:43:03 AM
Last edit: May 06, 2017, 03:10:05 PM by franky1
 #143

the solution is to limit the sigops.
No, that is no solution whatsoever. All that does is kill use-cases which require higher sigops (see certain multisig).
Certain multisig.. pfft. And why do they deserve to take 20% of a block's sigops without paying 20% of the price?

and develop a new priority fee formula that actually works: one that charges more for people who bloat transactions and want to spend too often.
Each wallet can develop, and has developed, its own fee calculation.
Yep, but Core removed some irrational ones, and also removed some rational ones. The network as a whole should have at least some agreed (consensus) limits to make users leaner and less easy to spam.

I'm still laughing at how you want to prioritise X but then don't want to prioritise Y..

Things like hope and faith that pools will do the right thing are not enough. I actually laugh that the Blockstream (Core) devs removed code mechanisms and then went for banker economics: 'just pay more'.
Statements like these make you look like an idiot. Mining pools had largely stopped using priority long before Bitcoin Core removed it from the code (which is also the reason for its removal).

Because that fee formula was just a rich-vs-poor mechanism. Core didn't even bother being devs and develop a better formula in code.

Yep, the developers didn't develop.
Yep, the coders didn't code.

Instead they shouted "just pay more",
which is still the rich-vs-poor failed mechanism.

Lauda
May 06, 2017, 10:46:17 AM
 #144

Certain multisig.. pfft. And why do they deserve to take 20% of a block's sigops without paying 20% of the price?
It looks like someone hasn't been reading the Bitcoin Core code again. :)

This PR also negates any concern of your "easy to spam via 5 max sigops TXs" nonsense:
Quote
Treat high-sigop transactions as larger rather than rejecting them
When a transaction's sigops * bytespersigop exceeds its size, use that value as its size instead (for fee purposes and mempool sorting). This means that high-sigop transactions are no longer prevented, but they need to pay a fee corresponding to the maximally-used resource.

All currently acceptable transactions should remain acceptable and there should be no effect on their fee/sorting/prioritization.
https://github.com/bitcoin/bitcoin/pull/8365
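
In case it helps to see the mechanism concretely, here is a minimal sketch of the idea in PR 8365 (not the exact Bitcoin Core code; the function name is invented, and the default of 20 bytes per sigop is from memory and configurable via -bytespersigop):

Code:
#include <algorithm>
#include <cstddef>

// Assumed default; Bitcoin Core exposes this as the -bytespersigop option.
static const std::size_t BYTES_PER_SIGOP = 20;

// For fee purposes and mempool sorting, charge for whichever resource the
// transaction uses most: its real size, or its sigop count expressed in bytes.
std::size_t EffectiveSizeForFees(std::size_t tx_size_bytes, unsigned int sigops)
{
    return std::max(tx_size_bytes, static_cast<std::size_t>(sigops) * BYTES_PER_SIGOP);
}

// Example: a 400-byte transaction carrying 80 sigops is treated as
// max(400, 80 * 20) = 1600 bytes when its required fee rate is computed.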

"The Times 03/Jan/2009 Chancellor on brink of second bailout for banks"
😼 Bitcoin Core (onion)
franky1
May 06, 2017, 12:13:50 PM
 #145

Certain multisig.. pfft. And why do they deserve to take 20% of a block's sigops without paying 20% of the price?
It looks like someone hasn't been reading the Bitcoin Core code again. :)

This PR also negates any concern of your "easy to spam via 5 max sigops TXs" nonsense:
Quote
Treat high-sigop transactions as larger rather than rejecting them
When a transaction's sigops * bytespersigop exceeds its size, use that value as its size instead (for fee purposes and mempool sorting). This means that high-sigop transactions are no longer prevented, but they need to pay a fee corresponding to the maximally-used resource.

All currently acceptable transactions should remain acceptable and there should be no effect on their fee/sorting/prioritization.
https://github.com/bitcoin/bitcoin/pull/8365

You're not understanding it.
1. The 5x is about blocksigoplimit / 5 = txsigop limit (consensus + policy).

2. The 'treat as larger rather than reject' part is more about exceeding the 100kB of data that SOME txs would accumulate while trying to reach 4k sigops.

Which is where I knew you would nitpick, so I pre-empted your obvious crap.
Screw it, I know there are many nitpickers:
c) 1 input : 2856 outputs = 97,252 bytes (~2,857 sigops)
7 txs of (c) = 680,764 bytes (~20,000 sigops)

with a TX that stays below:
the bloat of a 1MB block
the bloat of the 100kB 'treat as larger' limit (of REAL BYTES)

while filling the block sigop limit to prevent any other transactions getting in (see the worked sketch below).
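
A quick check of the arithmetic above, as a sketch (the per-transaction figures are taken from the post; the sizes are claimed values, not exact serialisations):

Code:
#include <cstdio>

int main()
{
    const int outputs_per_tx    = 2856;   // ~1 legacy sigop per bare CHECKSIG-style output
    const int bytes_per_tx      = 97252;  // claimed size, under the 100kB policy cap
    const int tx_count          = 7;
    const int block_sigop_limit = 20000;  // legacy per-block sigop limit

    int total_sigops = tx_count * outputs_per_tx;   // 19,992
    int total_bytes  = tx_count * bytes_per_tx;     // 680,764

    std::printf("sigops used: %d of %d, bytes used: %d of 1,000,000\n",
                total_sigops, block_sigop_limit, total_bytes);
    // Seven such transactions exhaust the legacy block sigop limit while
    // occupying only ~68% of a 1MB block, leaving other transactions locked out.
    return 0;
}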

PS: the kludgy maths that Core has in 'pull/8365' is just about trying to assign a fee to the tx. But ultimately, if a pool was not using that kludgy maths / Core implementation for the FEE, the pool would add the TX at a cheaper rate that doesn't have the kludgy maths.

This is why REAL rules, real code, should be used.. not bait-and-switch, hope-and-faith, kludgy-maths crap.

Lauda
May 06, 2017, 12:16:39 PM
 #146

You're not understanding it.
Some people think that you aren't completely uneducated. That's the only misunderstanding here.

1. The 5x is about blocksigoplimit / 5 = txsigop limit..
There is NO such thing as a TX sigops limit as a consensus rule. It is a RELAY policy. Any miner can create and include a transaction consisting of more than 4k sigops.
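
To illustrate the consensus-vs-policy distinction being made here, roughly where the two limits live in Bitcoin Core (a sketch from memory of the 0.12–0.14 era sources; exact names and values may differ by version):

Code:
// src/consensus/consensus.h -- a consensus rule: a block violating it is invalid.
static const int64_t MAX_BLOCK_SIGOPS_COST = 80000;   // post-SegWit "cost" units

// src/policy/policy.h -- relay/standardness policy only: default nodes refuse
// to relay or mine such a transaction, but a mined block containing one is
// still valid under consensus.
static const unsigned int MAX_STANDARD_TX_SIGOPS_COST = MAX_BLOCK_SIGOPS_COST / 5;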

This is why REAL rules, real code, should be used.. not bait-and-switch, hope-and-faith, kludgy-maths crap.
You don't understand the code behind a simple calculator, let alone "real rules & real code". Get another job, franky, seriously. I debunked you 20 times in one week.

"The Times 03/Jan/2009 Chancellor on brink of second bailout for banks"
😼 Bitcoin Core (onion)
franky1
May 06, 2017, 12:38:08 PM
Last edit: May 07, 2017, 03:42:38 AM by franky1
 #147



You have not debunked crap.
You have just not seen the whole picture.
You have not seen things from the whole network's point of view.. you just love the word games.

I don't even know why I'm interacting with someone who can't even read C++.
Lauda, same advice I gave you a year ago:
learn C++,
learn to read past the one-paragraph sales pitch.

It's hard enough trying to explain things in such short amounts before you lose concentration and just shout
"nonsensical", "wrong because shill", "are they paying you enough" as your failsafe reply when you can't understand things.

But now you have gone beyond even trying to learn anything.

You have become a hypocrite by making arguments that actually debunk your own earlier arguments.

Saying nodes can bypass the fee-maths kludge is correct.
But that's exactly why real rules need to be placed in the consensus header file, rather than the kludge.

P.S. The block sigop limit is in consensus. But from a network-wide overview, where Core's maths kludge can be bypassed, my initial arguments still stand.

I tried entering your narrow mindset by pretending that everyone was following Core's code, and even said I was wrong when looking at Core's kludge specifically (rather than the network overview), and still showed how it can be abused, just to try to get you to understand the risks. But then you go and play semantics..

You're not trying to see the network risks, you're just playing word games.
WAKE UP.

A txsigops limit of <4k in the consensus header file solves the native quadratics!!
Wake up.

Lauda
May 06, 2017, 12:45:56 PM
 #148

I don't even know why I'm interacting with someone who can't even read C++.
I don't even do C++, and it seems rather obvious that I understand more of it than someone claiming that he knows it (you). That is just sad.

you have not debunked crap.
You should write a book about lying and shilling. Here is a simple example of easily debunked bullshit:

Using v0.12 rules you're right..
but check out the 0.14 rules:
80k block / 16k tx

SegWit makes things worse for the 1MB block

Quote
<luke-jr> questioner2: sigops in legacy scripts count 4x toward the 80k limit
<luke-jr> this is in validation.cpp:GetTransactionSigOpCost
-snip-
<sipa> a legacy sigop counts as 4 segwit sigops
<sipa> so 20k legacy sigops would fill a block
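
Unpacking the IRC quote: a sketch of the accounting described (paraphrasing the behaviour of validation.cpp:GetTransactionSigOpCost, not the literal code):

Code:
#include <cstdint>

// Legacy and P2SH sigops are scaled by the witness scale factor of 4, while
// witness sigops count once, so 20,000 legacy sigops = 80,000 cost = a full block.
static const int WITNESS_SCALE_FACTOR = 4;

int64_t SigOpCostSketch(int64_t legacy_sigops, int64_t p2sh_sigops, int64_t witness_sigops)
{
    return (legacy_sigops + p2sh_sigops) * WITNESS_SCALE_FACTOR + witness_sigops;
}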

A txsigops limit of <4k in the consensus header file solves the native quadratics!!
No. You didn't even admit that you were wrong about the non-existence of the 4k limit per TX, as that's a policy rule. How sad.

"The Times 03/Jan/2009 Chancellor on brink of second bailout for banks"
😼 Bitcoin Core (onion)
franky1
May 06, 2017, 01:02:00 PM
 #149


Using v0.12 rules you're right..
but check out the 0.14 rules:
80k block / 16k tx

SegWit makes things worse for the 1MB block


Quote
<luke-jr> questioner2: sigops in legacy scripts count 4x toward the 80k limit
<luke-jr> this is in validation.cpp:GetTransactionSigOpCost
-snip-
<sipa> a legacy sigop counts as 4 segwit sigops
<sipa> so 20k legacy sigops would fill a block

A txsigops limit of <4k in the consensus header file solves the native quadratics!!
No. You didn't even admit that you were wrong about the non-existence of the 4k limit per TX, as that's a policy rule. How sad.

That maths kludge is Core-centric... not NETWORK consensus.

From a network overview: if pools used the 80k CONSENSUS limit but were not following Core's KLUDGE maths, then it does make things worse.
There needs to be a proper RULE of 4k sigops per tx that does not change.
A REAL RULE. Not maths kludge. Not implementation-defined, but a real NETWORK consensus RULE.

Other implementations also have the 80k block sigops for NETWORK CONSENSUS, meaning that due to SegWit it can make things worse.
ESPECIALLY when Core removes the kludge to make a 1-merkle version, which they promise (but won't uphold) after the soft activation.

Lauda
May 06, 2017, 01:04:07 PM
 #150

From a network overview: if pools used the 80k CONSENSUS limit but were not following Core's KLUDGE maths, then it does make things worse.
Nope, that is completely wrong. The 80k limit is post-SW and legacy sigops count 4x more. Therefore, for legacy transactions the limit does not change at all. It is 20k and will continue to remain 20k.

There needs to be a proper RULE of 4k sigops per tx that does not change.
No.

See how I baited and double-debunked you? You don't even understand the ELI5 explanations from sipa & luke-jr, let alone C++.

"The Times 03/Jan/2009 Chancellor on brink of second bailout for banks"
😼 Bitcoin Core (onion)
franky1
May 06, 2017, 01:06:21 PM
 #151

Nope, that is completely wrong. The 80k limit is post-SW and legacy sigops count 4x more. Therefore, for legacy transactions the limit does not change at all. It is 20k and will continue to remain 20k.

That's maths kludge OUTSIDE of the network consensus rules..

but from a network consensus-rule view it's not what you think.

There needs to be a network consensus txsigops rule of <4k to solve the native risks.


After a year you have preferred to just defend Core.. rather than the network.
Anyway.. you're only wasting your own time with your games.
Have a nice year kissing ass.

Lauda
May 06, 2017, 01:07:37 PM
 #152

That's maths kludge OUTSIDE of the network consensus rules..
but from a network consensus-rule view it's not what you think.
No. The 4x sigop counting for legacy transactions is enforced by SW rules.

"The Times 03/Jan/2009 Chancellor on brink of second bailout for banks"
😼 Bitcoin Core (onion)
franky1
May 06, 2017, 02:03:37 PM
Last edit: May 06, 2017, 03:10:47 PM by franky1
 #153

That's maths kludge OUTSIDE of the network consensus rules..
but from a network consensus-rule view it's not what you think.
No. The 4x sigop counting for legacy transactions is enforced by SW rules.

Pools can ignore the 4x sigop count just like they ignored the priority fee formula, by not following all the wasteful kludgy maths stuff outside of consensus,
which is where your hopes and expectations lie..

That's why having a <4k max tx sigops limit in the consensus.h header file would solve the issue so easily.
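
For clarity, a hypothetical sketch of what that proposal would look like; this is NOT existing Bitcoin Core code, the constant and the check are invented here for illustration:

Code:
// src/consensus/consensus.h (hypothetical addition): a hard per-transaction
// sigop cap, so a block containing a transaction above it is invalid for
// every node, rather than merely non-standard for relay.
static const unsigned int MAX_TX_SIGOPS = 4000;

// Hypothetical block-validation check, alongside the existing block-level one:
// for (const auto& tx : block.vtx)
//     if (GetLegacySigOpCount(*tx) > MAX_TX_SIGOPS)
//         return state.DoS(100, false, REJECT_INVALID, "bad-txn-sigops");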


Maybe it's best you spend more time managing sig-spammers and taking a cut,
because you have made it clear you won't take time to learn C++, and prefer just to spam topics with empty word-baiting for an income.

The One
May 06, 2017, 06:56:32 PM
 #154

I don't even know why I'm interacting with someone who can't even read C++.
I don't even do C++, and it seems rather obvious that I understand more of it than someone claiming that he knows it (you). That is just sad.

Eh? For real? Then how can one understand something better when one has admitted to not knowing it?

It's a bit like me saying:

"I don't know how to make a nuclear bomb, but I understand more than the nuclear scientists."

hobbes
May 06, 2017, 07:49:31 PM
 #155

5. Because of a block verification processing time vulnerability that increases quadratically with block size, increasing the block size is only possible AFTER SegWit is active and only for SegWit transactions.

False. Parallel validation routes around quadratic hash time issues, by naturally orphaning blocks that take an inordinate time to verify.
I did not look into it, but from what I hear it sounds more like a resource-consuming band-aid. Why not a proper fix with fewer CPU cycles?
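
For context on the quadratic claim itself: with legacy SIGHASH_ALL, the signature hash for each input covers roughly the whole transaction, so the bytes hashed during verification grow with inputs × transaction size. A back-of-the-envelope sketch (a simplification with assumed input sizes, not the real signature-hashing code):

Code:
#include <cstdio>

int main()
{
    const long long bytes_per_input = 180;   // rough size of a legacy P2PKH input
    for (long long inputs : {100LL, 1000LL, 5000LL}) {
        long long tx_size = inputs * bytes_per_input;   // ignore outputs/overhead
        long long hashed  = inputs * tx_size;           // each input rehashes ~whole tx
        std::printf("%5lld inputs -> ~%lld bytes hashed\n", inputs, hashed);
    }
    // 100 -> ~1.8e6, 1000 -> ~1.8e8, 5000 -> ~4.5e9:
    // a 10x larger transaction costs roughly 100x the hashing.
    return 0;
}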


4. There are two possible ways to deploy/implement SegWit, as a softfork or as a hardfork. SegWit as a hardfork would allow a slightly cleaner implementation but would also require replay protection (as the exchanges have specifically asked for lately). SWSF does not require replay protection assuming a hashrate majority. Replay protection is difficult thus SegWit as a hardfork would altogether cause more technical debt than SWSF. Also a hardfork is generally considered of higher risk and would take a longer preparation time.

Sorry, it seems people have had their heads FOHK'ed with (Fear of Hard Fork).
It is not fear but the expectation of 'clusterfuck' (as you put it).

Quote
There is little difference between the dangers of a soft fork and a hard fork.

In the event of a soft fork we have:
1.) The old chain exists with a more permissive set of rules.
2.) The new chain exists with a more restrictive set of rules.
Wait a second, there exists only a single chain, as the old chain's blocks are being orphaned (I am explicitly talking about a softfork with a hashrate majority, as stated above).

Quote
In a hard fork we have:
1.) The old chain exists with a more restrictive set of rules.
2.) The new chain exists with a more permissive set of rules.

So they look exactly the same during a chain split.
No, not at all. With a hard fork the old chain is not 'corrected' to follow the new chain.

Quote
The only difference is that a soft fork is backwards compatible because its more restrictive set of rules.

In the event of a successful soft fork, older nodes continue to operate as normal.
In the event of a successful hard fork, older nodes become unsynced and have to upgrade.
This is a big difference, isn't it?

Quote
In the event of a contentious fork, hard or soft, it becomes an economically damaging clusterfuck until the winning fork is determined (the longest chain) or a bilateral split occurs (the minority chain implements replay protection)*.
Does a 70% hashrate majority still count as contentious? I don't think that would be a big problem for a softfork; the old chain would be forced to go along, but with a hardfork there would certainly remain two chains.

Quote
* Strictly speaking, the software forking away from the existing protocol (hard or soft) should be the version that implements replay protection, as you cannot demand the existing protocol chain to change its behaviour. In practice though, the aim is not to create a permanent chain split but to achieve consensus, so the minority chain should end up orphaned off, and any transactions that occur during any temporary chain split should end up confirmed on the main chain.
How would you implement replay protection for a soft fork? There is only a single chain...

I am considering making my list above a reddit thread, as I think it sums up the current situation nicely.

franky1
May 06, 2017, 11:25:27 PM
 #156

How would you implement replay protection for a soft fork? There is only a single chain...

Soft or hard:
there are scenarios of staying as one chain (just orphan drama, being either a small drama or a mega clusterf*ck of orphans before settling down to one chain), dependent on the % of the majority..

But in both soft and hard a second chain can be produced. That involves intentionally ignoring the consensus orphaning mechanism.. in layman's terms: not connecting to opposing nodes to see their different rules/chain, and then building your own chain without protocol arguing (orphaning).

All the reddit doomsday FUD is about only mentioning soft's best case and hard's worst case,
but never the other way around, because then people would wise up to knowing that bitcoin's consensus orphaning mechanism is a good thing and that doing things as a hard consensus is a good thing.

jbreher
May 07, 2017, 01:30:25 AM
 #157

Yup. This is exactly the nonsense that they are preaching. Let's make Bitcoin a very centralized system in which you can't achieve financial sovereignty unless you buy server grade hardware costing thousands of USD.

You have an incredibly myopic sense of scale. Allowing the system to keep up with demand requires an investment of well under 1.0 BTC. And what will you say when transaction fees rise above $10 due to the stupid artificial centrally-planned production quota? Over $100? Over 1 BTC?

jbreher
May 07, 2017, 01:55:24 AM
 #158

5. Because of a block verification processing time vulnerability that increases quadratically with block size, increasing the block size is only possible AFTER SegWit is active and only for SegWit transactions.

False. Parallel validation routes around quadratic hash time issues, by naturally orphaning blocks that take an inordinate time to verify.
I did not look into it, but from what I hear it sounds more like a resource-consuming band-aid. Why not a proper fix with fewer CPU cycles?

It is not so much a resource-consuming band-aid as it is harnessing the natural incentive of greed on the part of the miners (you know, the same force that makes bitcoin work at all) to render the issue a non-problem.

Yes, it takes more memory to validate multiple blocks on different threads at the same time than a single block on a single thread. But this does not only lead to an incentive to not make blocks that take long to validate due to the O(n^2) hashing issue, it also provides a natural backpressure on excessively long-to-validate blocks for any reason whatsoever. Perhaps merely blocks that are huge numbers of simple transactions. And the resource requirements only increase linearly with the number of blocks currently being hashed concurrently by a single node.

More importantly, as miners who create blocks exhibiting this quadratic hash time issue have their blocks orphaned, they will be bankrupted. Accordingly, the creation of these blocks will be disincentivized to the point where they just plain won't be built.

Further, parallel validation is the logical approach to the problem. When one receives a block while still validating another, you need to consider that the first block under validation may be fraudulent. The sooner you find a valid block is the sooner you can get mining on the next block. Parallel validation allows one to find the valid block without having to wait until detection that the fraudulent block is fraudulent is accomplished. Not to mention the stunning fact that other miners do not currently mine at all while validating a block which may be fraudulent.

Last, in the entire 465,185 block history of Bitcoin, there has been (to my knowledge) exactly one such aberrant block ever added to the chain. And parallel validation was not available at the time. But the network did not crash. It paused for a slight bit, then carried on as if nothing untoward ever happened. The point is that, while such blocks are a nuisance, they are not a systemic problem even without parallel validation. And parallel validation routes around this one-in-a-half-million (+/-) event.

By all means, the O(n^2) hash time is suboptimal. We should replace it with a better algorithm at some date. But to focus on it as if it is even relevant to the current debate is ludicrous. It would be ludicrous even without the availability of parallel validation. The fact that BU implements parallel validation makes putting this consideration at the center of this debate ludicrous^2.
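
For readers unfamiliar with the idea, a conceptual sketch of parallel validation as described above; this is NOT Bitcoin Unlimited's actual implementation, just an illustration of "race the candidate blocks and let the slow one lose":

Code:
#include <atomic>
#include <cstddef>
#include <future>
#include <vector>

struct Block { int n_sigops = 0; /* ... */ };

// Placeholder stand-in for full consensus validation; assume it is expensive
// (e.g. quadratic sighash work for pathological legacy transactions).
bool ValidateBlock(const Block& b) { return b.n_sigops <= 80000; }

// Validate competing blocks concurrently; the first to pass wins the race.
int FirstValidBlock(const std::vector<Block>& candidates)
{
    std::atomic<int> winner{-1};
    std::vector<std::future<void>> jobs;
    for (std::size_t i = 0; i < candidates.size(); ++i) {
        jobs.push_back(std::async(std::launch::async, [&, i] {
            if (ValidateBlock(candidates[i])) {
                int expected = -1;
                winner.compare_exchange_strong(expected, static_cast<int>(i));
            }
        }));
    }
    for (auto& j : jobs) j.wait();  // a real node would accept the first winner
    return winner.load();           // immediately instead of waiting for all
}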

hobbes
May 08, 2017, 12:36:58 PM
 #159

How would you implement replay protection for a soft fork? There is only a single chain...

Soft or hard:
there are scenarios of staying as one chain (just orphan drama, being either a small drama or a mega clusterf*ck of orphans before settling down to one chain), dependent on the % of the majority..

But in both soft and hard a second chain can be produced. That involves intentionally ignoring the consensus orphaning mechanism.. in layman's terms: not connecting to opposing nodes to see their different rules/chain, and then building your own chain without protocol arguing (orphaning).
OK, but I would call that a hardfork ('ignoring consensus orphaning mechanism').

Quote
All the reddit doomsday FUD is about only mentioning soft's best case and hard's worst case,
but never the other way around, because then people would wise up to knowing that bitcoin's consensus orphaning mechanism is a good thing and that doing things as a hard consensus is a good thing.
A SWHF certainly has its benefits, but SWSF is the superior solution IMHO. Some people may see this differently, but I guess the majority would agree (if only because they trust sipa and the other Core people in their judgement).



5. Because of a block verification processing time vulnerability that increases quadratically with block size, increasing the block size is only possible AFTER SegWit is active and only for SegWit transactions.

False. Parallel validation routes around quadratic hash time issues, by naturally orphaning blocks that take an inordinate time to verify.
I did not look into it, but from what I hear it sounds more like a resource-consuming band-aid. Why not a proper fix with fewer CPU cycles?

It is not so much a resource-consuming band-aid as it is harnessing the natural incentive of greed on the part of the miners (you know, the same force that makes bitcoin work at all) to render the issue a non-problem.
Seems like it gives an incentive to mine small blocks? One would have to check the implications of this change really thoroughly...

Quote
Yes, it takes more memory to validate multiple blocks on different threads at the same time than a single block on a single thread. But this does not only lead to an incentive to not make blocks that take long to validate due to the O(n^2) hashing issue, it also provides a natural backpressure on excessively long-to-validate blocks for any reason whatsoever. Perhaps merely blocks that are huge numbers of simple transactions. And the resource requirements only increase linearly with the number of blocks currently being hashed concurrently by a single node.
But it grows quadratically with block size, meaning that at 16 MB blocks or so a 30% miner might still be able to stall all nodes permanently.

Quote
More importantly, as miners who create blocks exhibiting this quadratic hash time issue have their blocks orphaned, they will be bankrupted. Accordingly, the creation of these blocks will be disincentivized to the point where they just plain won't be built.
For an attacker, disrupting the network for a while might pay off via puts, or rising altcoins, or just by hurting Bitcoin.

Quote
Further, parallel validation is the logical approach to the problem. When one receives a block while still validating another, you need to consider that the first block under validation may be fraudulent. The sooner you find a valid block is the sooner you can get mining on the next block. Parallel validation allows one to find the valid block without having to wait until detection that the fraudulent block is fraudulent is accomplished. Not to mention the stunning fact that other miners do not currently mine at all while validating a block which may be fraudulent.
See above; it might give an undue advantage to small blocks.

Quote
Last, in the entire 465,185 block history of Bitcoin, there has been (to my knowledge) exactly one such aberrant block ever added to the chain. And parallel validation was not available at the time. But the network did not crash. It paused for a slight bit, then carried on as if nothing untoward ever happened. The point is that, while such blocks are a nuisance, they are not a systemic problem even without parallel validation. And parallel validation routes around this one-in-a-half-million (+/-) event.
This is because blocks were and are small.

Quote
By all means, the O(n^2) hash time is suboptimal. We should replace it with a better algorithm at some date. But to focus on it as if it is even relevant to the current debate is ludicrous. It would be ludicrous even without the availability of parallel validation. The fact that BU implements parallel validation makes putting this consideration at the center of this debate ludicrous^2.
The superior solution is on the table, well tested and ready to be deployed. Parallel validation still requires additional limitations, as suggested by franky1, for larger blocks. Also let me remind you of the resource discussion further up. Of course it is relevant to this debate. Why do you oppose the technically sound and sustainable solution? Particularly as it happens to also bring other important benefits?





franky1
May 08, 2017, 01:00:09 PM
 #160

OK, but I would call that a hardfork ('ignoring consensus orphaning mechanism').

Soft forks do not need to result in a chain split.
Hard forks do not need to result in a chain split.

Soft involves just pools agreeing to change something; that's just a network upgrade with one chain.
Hard involves nodes agreeing to change something; that's just a network upgrade with one chain.

Again:
soft forks do not need to result in a chain split.
Hard forks do not need to result in a chain split.

When some pools disagree and decide to intentionally ignore/ban/reject blocks/communication and the opposition continues, that's a chain split.
When some nodes disagree and decide to intentionally ignore/ban/reject blocks/communication and the opposition continues, that's a chain split.

Soft can intentionally cause a split.
Hard can intentionally cause a split.

And again:
soft forks do not need to result in a chain split.
Hard forks do not need to result in a chain split.

By thinking all "hard" actions = split and all "soft" actions = utopia, you are taking soft's best-case scenario and hard's worst-case scenario, and avoiding talking about the opposite.
