franky1
Legendary
Offline
Activity: 4284
Merit: 4547
|
|
May 05, 2017, 02:30:58 PM Last edit: May 05, 2017, 02:42:10 PM by franky1 |
|
I'll let you argue with yourself:

will be strongly weakened by the prioritization of native -> Segwit and Segwit -> Segwit transactions.
There may be use cases which require this. Who are you to censor such transactions?

One minute you say pools should and will censor txs that can spam, but then you argue that pools shouldn't censor transactions that can spam. You HOPE pools will prioritise segwit keys out of some faith-and-dream reasoning, yet you hate the idea of code prioritising transactions.
|
I DO NOT TRADE OR ACT AS ESCROW ON THIS FORUM EVER. Please do your own research & respect what is written here as both opinion & information gleaned from experience. many people replying with insults but no on-topic content substance, automatically are 'facepalmed' and yawned at
|
|
|
jonald_fyookball
Legendary
Offline
Activity: 1302
Merit: 1008
Core dev leaves me neg feedback #abuse #political
|
|
May 05, 2017, 02:41:32 PM |
|
ill let you argue with yourself will be strongly weakened by the prioritization of native -> Segwit and Segwit -> Segwit transactions.
There may be use cases which require this. Who are you to censor such transactions?
one minute to say pools should and will censor tx's that can spam but then you argue that pools shouldnt censor transactions that can spam

Dishonest shills can't even keep their own story straight in the same day. Another recent contradiction: high fees are good... and then: bigger blocks won't fix high fees.
|
|
|
|
Lauda
Legendary
Offline
Activity: 2674
Merit: 2965
Terminated.
|
|
May 05, 2017, 02:45:01 PM |
|
ill let you argue with yourself
There is nothing to argue about. You don't understand English.

one minute to say pools should and will censor tx's that can spam but then you argue that pools shouldnt censor transactions that can spam
Which is not what I said. I used the word prioritize, which is very different from censoring.

prioritize: determine the order for dealing with (a series of items or tasks) according to their relative importance.
censor: examine (a book, film, etc.) officially and suppress unacceptable parts of it.
you HOPE pools will prioritise segwit keys out of some faith and dream reasoning
It is not hope, it is reason. Stop trolling already.

Dishonest shills cant even keep their own story straight in the same day.
Said the baboon working for BU. Ironic.
|
"The Times 03/Jan/2009 Chancellor on brink of second bailout for banks" 😼 Bitcoin Core ( onion)
|
|
|
jbreher
Legendary
Offline
Activity: 3038
Merit: 1660
lose: unfind ... loose: untight
|
|
May 05, 2017, 04:42:00 PM |
|
Miners employing parallel validation do not fall victim to extended time validating blocks containing aberrant large quadratic-hashing-time transactions. Instead, they orphan such blocks by continuing to mine and validate on other threads while the validation of the aberrant quadratic-hashing-time block runs on one thread. Miners who continue to make blocks with such transactions will eventually bankrupt themselves, all without doing any damage to the network. Problem solved.
What implementation includes parallel validation? Oh yeah... BU does.
Given the massive amounts of RAM required by ultra-large transactions that are heavy in sigops and prone to the quadratic scaling laws, validating yet another block in parallel is an excellent way of using even more RAM. High-RAM servers with 256GB may be able to cope with it temporarily, but normal machines and even normal servers will likely run out of memory and kill bitcoind. Which implementation has had out-of-memory issues already? Oh yeah... BU did.

You don't think the significant mining pools can afford one large server each?
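To make the parallel-validation idea concrete, here is a minimal Python sketch. This is purely an illustration of the race structure being described, not BU's actual implementation: each candidate block is validated on its own thread, and a block that blows past a time budget is simply abandoned (orphaned) while cheaper blocks win.

```python
import threading
import time

def validate(block, budget_s, results):
    """Validate a block on its own thread; abandon (orphan) any block
    whose validation exceeds the time budget."""
    start = time.monotonic()
    for _ in range(block["work_units"]):
        if time.monotonic() - start > budget_s:
            results[block["id"]] = "orphaned"  # too slow to verify; give up
            return
        time.sleep(0.001)  # stand-in for one unit of script verification

    results[block["id"]] = "accepted"

results = {}
blocks = [
    {"id": "normal", "work_units": 5},               # cheap, well-formed block
    {"id": "quadratic-spam", "work_units": 10_000},  # pathological sighash block
]
threads = [threading.Thread(target=validate, args=(b, 0.5, results)) for b in blocks]
for t in threads:
    t.start()
for t in threads:
    t.join()
# the cheap block is accepted while the pathological one is orphaned
```

The real scheme races actual script validation rather than sleeps, but the shape is the same: a slow-to-validate block loses by attrition instead of stalling the node.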
|
Anyone with a campaign ad in their signature -- for an organization with which they are not otherwise affiliated -- is automatically deducted credibility points.
I've been convicted of heresy. Convicted by a mere known extortionist. Read my Trust for details.
|
|
|
jbreher
Legendary
Offline
Activity: 3038
Merit: 1660
lose: unfind ... loose: untight
|
|
May 05, 2017, 05:08:14 PM |
|
5. Because of a block verification processing time vulnerability that increases quadratically with block size, increasing the block size is only possible AFTER SegWit is active and only for SegWit transactions.
False. Parallel validation routes around quadratic-hash-time issues by naturally orphaning blocks that take an inordinate time to verify.
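For anyone unfamiliar with why legacy signature hashing is quadratic: under the pre-SegWit sighash scheme, a node re-hashes (almost) the entire transaction once per input signature, so the bytes hashed grow with the square of the input count. A rough back-of-the-envelope model in Python (the 150-bytes-per-input figure is an illustrative assumption, not an exact serialization size):

```python
import hashlib

def legacy_sighash_work(num_inputs, bytes_per_input=150):
    """Model of pre-SegWit SIGHASH_ALL cost: for every input, the node
    re-hashes (almost) the entire transaction, so total bytes hashed
    grow quadratically with the number of inputs."""
    tx_size = num_inputs * bytes_per_input
    bytes_hashed = 0
    for _ in range(num_inputs):
        hashlib.sha256(b"\x00" * tx_size).digest()  # one sighash per input
        bytes_hashed += tx_size
    return bytes_hashed

# doubling the inputs quadruples the bytes hashed in this model
assert legacy_sighash_work(2000) == 4 * legacy_sighash_work(1000)
```

SegWit's sighash (BIP 143) caches intermediate hashes so the work grows linearly instead, which is why the quadratic concern is tied to legacy-style transactions.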
|
|
|
|
AngryDwarf
|
|
May 05, 2017, 05:50:55 PM |
|
4. There are two possible ways to deploy/implement SegWit, as a softfork or as a hardfork. SegWit as a hardfork would allow a slightly cleaner implementation but would also require replay protection (as the exchanges have specifically asked for lately). SWSF does not require replay protection assuming a hashrate majority. Replay protection is difficult thus SegWit as a hardfork would altogether cause more technical debt than SWSF. Also a hardfork is generally considered of higher risk and would take a longer preparation time.
Sorry, it seems people have had their heads FOHK'ed with (Fear Of Hard Forks). There is little difference between the dangers of a soft fork and a hard fork.

In the event of a soft fork we have:
1.) The old chain exists with a more permissive set of rules.
2.) The new chain exists with a more restrictive set of rules.

In a hard fork we have:
1.) The old chain exists with a more restrictive set of rules.
2.) The new chain exists with a more permissive set of rules.

So they look exactly the same during a chain split. The only difference is that a soft fork is backwards compatible because of its more restrictive set of rules. In the event of a successful soft fork, older nodes continue to operate as normal. In the event of a successful hard fork, older nodes become unsynced and have to upgrade.

In the event of a contentious fork, hard or soft, it becomes an economically damaging clusterfuck until the winning fork is determined (the longest chain) or a bilateral split occurs (the minority chain implements replay protection)*.

* Strictly speaking, the software forking away from the existing protocol (hard or soft) should be the version that implements replay protection, as you cannot demand that the existing protocol chain change its behaviour. In practice, though, the aim is not to create a permanent chain split but to achieve consensus, so the minority chain should end up orphaned off, and any transactions that occur during any temporary chain split should end up confirmed on the main chain.
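The restrictive-vs-permissive symmetry described above can be sketched as sets of consensus rules. This is a toy model, and the rule names are made up for illustration:

```python
# Toy model: a node accepts a block iff the block satisfies every rule
# that node requires.
old_rules = {"max_size_1mb", "valid_signatures"}

# Soft fork: ADDS a rule -> more restrictive, a strict superset of old rules.
softfork_rules = old_rules | {"segwit_commitment"}

# Hard fork: RELAXES a rule -> more permissive (hypothetical 2 MB example).
hardfork_rules = (old_rules - {"max_size_1mb"}) | {"max_size_2mb"}

def accepts(node_rules, block_properties):
    # the block satisfies all rules this node requires
    return node_rules <= block_properties

# A soft-fork block still satisfies the old rules: old nodes stay in sync.
sf_block = {"max_size_1mb", "valid_signatures", "segwit_commitment"}
assert accepts(old_rules, sf_block)

# A hard-fork block breaks an old rule: old nodes reject it and fall out of sync.
hf_block = {"max_size_2mb", "valid_signatures"}
assert accepts(hardfork_rules, hf_block)
assert not accepts(old_rules, hf_block)
```

This is exactly the "backwards compatible" asymmetry: soft-fork blocks remain a subset of what old nodes already accept, while hard-fork blocks can fall outside it.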
|
|
|
|
franky1
Legendary
Offline
Activity: 4284
Merit: 4547
|
|
May 05, 2017, 06:05:47 PM |
|
Miners employing parallel validation do not fall victim to extended time validating blocks containing aberrant large quadratic-hashing-time transactions. Instead, they orphan such blocks by continuing to mine and validate on other threads while the validation of the aberrant quadratic-hashing-time block runs on one thread. Miners who continue to make blocks with such transactions will eventually bankrupt themselves, all without doing any damage to the network. Problem solved.
What implementation includes parallel validation? Oh yeah... BU does.
Given the massive amounts of RAM required by ultra-large transactions that are heavy in sigops and prone to the quadratic scaling laws, validating yet another block in parallel is an excellent way of using even more RAM. High-RAM servers with 256GB may be able to cope with it temporarily, but normal machines and even normal servers will likely run out of memory and kill bitcoind. Which implementation has had out-of-memory issues already? Oh yeah... BU did. You don't think the significant mining pools can afford one large server each?

this is why you don't let txs get MORE bloated when block sizes increase. The best option is to keep txs at or below 4k sigops. The quadratics are manageable on normal machines with limits like < 4k tx sigops and < 100k tx max bytes. That way, for instance, a spam attack needs:

1mb block: 5-tx sigop spam, or 10-tx bloat-data spam
2mb block: 10-tx sigop spam, or 20-tx bloat-data spam
4mb block: 20-tx sigop spam, or 40-tx bloat-data spam

some people think going up is ok (facepalm), where sigops per tx and bytes per tx go up with blocksize:

1mb block: 5-tx sigop spam, or 10-tx bloat-data spam
2mb block: 5-tx sigop spam, or 10-tx bloat-data spam
4mb block: 5-tx sigop spam, or 10-tx bloat-data spam

some people think going down is bad (facepalm), yet if tx sigops went to, say, 1k and tx max bytes = 50k:

1mb block: 20-tx sigop spam, or 20-tx bloat-data spam
2mb block: 40-tx sigop spam, or 40-tx bloat-data spam
4mb block: 80-tx sigop spam, or 80-tx bloat-data spam

whereby at a 4mb block, for instance, even using max tx sigops the time to process is seconds, not minutes
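The arithmetic in the post above can be checked in a couple of lines of Python (purely illustrative; the caps and block sizes are the ones quoted in the post):

```python
def txs_needed_to_fill(block_size_kb, tx_max_kb):
    """How many maximum-size transactions does an attacker need to fill a
    block? Tighter per-tx caps force MORE (individually cheap) transactions
    instead of one monster transaction with quadratic sighash cost."""
    return block_size_kb // tx_max_kb

# loose 100 kB per-tx cap: ten transactions fill a 1 MB block
assert txs_needed_to_fill(1000, 100) == 10

# tighter 50 kB cap: an attacker needs 80 transactions to fill a 4 MB block,
# matching the "4mb block requires 80tx bloat data spam" figure above
assert txs_needed_to_fill(4000, 50) == 80
```

The point being argued is that holding (or shrinking) the per-tx caps while raising the block size bounds the worst-case validation cost of any single transaction, even though filling a block takes more of them.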
|
|
|
|
-ck
Legendary
Offline
Activity: 4172
Merit: 1641
Ruu \o/
|
|
May 05, 2017, 07:22:07 PM |
|
Miners employing parallel validation do not fall victim to extended time validating blocks containing aberrant large quadratic-hashing-time transactions. Instead, they orphan such blocks by continuing to mine and validate on other threads while the validation of the aberrant quadratic-hashing-time block runs on one thread. Miners who continue to make blocks with such transactions will eventually bankrupt themselves, all without doing any damage to the network. Problem solved.
What implementation includes parallel validation? Oh yeah... BU does.
Given the massive amounts of RAM required by ultra-large transactions that are heavy in sigops and prone to the quadratic scaling laws, validating yet another block in parallel is an excellent way of using even more RAM. High-RAM servers with 256GB may be able to cope with it temporarily, but normal machines and even normal servers will likely run out of memory and kill bitcoind. Which implementation has had out-of-memory issues already? Oh yeah... BU did. You don't think the significant mining pools can afford one large server each?

And that's your solution? Have only 10 nodes worldwide that can stay online during that parallel validation period and crash the remaining 6000+ nodes worldwide at the same time?
|
Developer/maintainer for cgminer, ckpool/ckproxy, and the -ck kernel 2% Fee Solo mining at solo.ckpool.org -ck
|
|
|
jonald_fyookball
Legendary
Offline
Activity: 1302
Merit: 1008
Core dev leaves me neg feedback #abuse #political
|
|
May 05, 2017, 07:34:46 PM |
|
Miners employing parallel validation do not fall victim to extended time validating blocks containing aberrant large quadratic-hashing-time transactions. Instead, they orphan such blocks by continuing to mine and validate on other threads while the validation of the aberrant quadratic-hashing-time block runs on one thread. Miners who continue to make blocks with such transactions will eventually bankrupt themselves, all without doing any damage to the network. Problem solved.
What implementation includes parallel validation? Oh yeah... BU does.
Given the massive amounts of RAM required by ultra-large transactions that are heavy in sigops and prone to the quadratic scaling laws, validating yet another block in parallel is an excellent way of using even more RAM. High-RAM servers with 256GB may be able to cope with it temporarily, but normal machines and even normal servers will likely run out of memory and kill bitcoind. Which implementation has had out-of-memory issues already? Oh yeah... BU did. You don't think the significant mining pools can afford one large server each?

And that's your solution? Have only 10 nodes worldwide that can stay online during that parallel validation period and crash the remaining 6000+ nodes worldwide at the same time?

Surely that won't happen with a simple 2MB HF. So if you are sincere about a capacity increase, why not do that now and maybe SegWit later?
|
|
|
|
jbreher
Legendary
Offline
Activity: 3038
Merit: 1660
lose: unfind ... loose: untight
|
|
May 05, 2017, 07:38:02 PM |
|
Miners employing parallel validation do not fall victim to extended time validating blocks containing aberrant large quadratic-hashing-time transactions. Instead, they orphan such blocks by continuing to mine and validate on other threads while the validation of the aberrant quadratic-hashing-time block runs on one thread. Miners who continue to make blocks with such transactions will eventually bankrupt themselves, all without doing any damage to the network. Problem solved.
What implementation includes parallel validation? Oh yeah... BU does.
Given the massive amounts of RAM required by ultra-large transactions that are heavy in sigops and prone to the quadratic scaling laws, validating yet another block in parallel is an excellent way of using even more RAM. High-RAM servers with 256GB may be able to cope with it temporarily, but normal machines and even normal servers will likely run out of memory and kill bitcoind. Which implementation has had out-of-memory issues already? Oh yeah... BU did. You don't think the significant mining pools can afford one large server each?

And that's your solution? Have only 10 nodes worldwide that can stay online during that parallel validation period and crash the remaining 6000+ nodes worldwide at the same time?

Yes. Anyone who wants to be a central element of a multibillion-dollar system is going to have to buck up for the requisite (and rather trivially valued, in the scope of things) hardware to do so.

Bitcoin's dirty little secret is that non-mining nodes provide zero benefit to the network at large. Sure, operating a node allows that particular node operator to transact directly on the chain, so it provides value to that person or persons. But it provides zero utility to the network itself. Miners can always route around the nodes that do not accept their transactions. Miners don't care whether non-mining nodes accept their blocks - only whether other miners will build atop their blocks.

And the number will not be ten - it will be many more. As again, anyone who wants to be able to transact directly upon the chain in a trustless manner will need to buck up to the hardware demands.
|
|
|
|
-ck
Legendary
Offline
Activity: 4172
Merit: 1641
Ruu \o/
|
|
May 05, 2017, 07:45:04 PM |
|
Yes. Anyone who wants to be a central element of a multibillion dollar system is going to have to buck up for the requisite (and rather trivially-valued, in the scope of things) hardware to do so.
Bitcoin's dirty little secret is that non-mining nodes provide zero benefit to the network at large. Sure, operating a node allows that particular node operator to transact directly on the chain, so provides value to that person or persons. But it provides zero utility to the network itself. Miners can always route around the nodes that do not accept their transactions. Miners don't care whether non-mining nodes accept their blocks - only whether other miners will build atop their blocks.
And the number will not be ten - it will be many more. As again, anyone who wants to be able to transact directly upon the chain in a trustless manner will need to buck up to the hardware demands.
Thanks. If anyone wants to know what BU'ers think of what the system is and should be, I think I can now refer them to your post. I rest my case.
|
|
|
|
d5000
Legendary
Offline
Activity: 3976
Merit: 6860
Decentralization Maximalist
|
|
May 05, 2017, 08:44:09 PM |
|
Ok, I think I have understood the quadratic scaling problem now (thanks to @Lauda, @franky1, @jbreher and @-ck). My error was to think that only miners were affected; but as it mainly affects validation, all full nodes are affected, and a malicious miner/pool could try to "kill small full nodes" or even smaller mining pools via a spam attack. So my opinion is reinforced that, in the case of a block size increase, legacy transactions would have to be restricted by the protocol in some way.
|
|
|
|
jonald_fyookball
Legendary
Offline
Activity: 1302
Merit: 1008
Core dev leaves me neg feedback #abuse #political
|
|
May 05, 2017, 08:53:53 PM |
|
Ok, I think I have understood the quadratic scaling problem now (thanks to @Lauda, @franky1, @jbreher and @-ck), my error was to think that only miners were affected, but as it affects mainly validation, all full nodes are affected and a malicious miner/pool could try to "kill small full nodes" or even smaller mining pools via a spam attack. So my opinion is reinforced that in the case of a block size increase, legacy transactions would have to be restricted by the protocol in some way.
You know about flextrans right?
|
|
|
|
anonymoustroll420
|
|
May 05, 2017, 09:17:04 PM |
|
You know about flextrans right?
Makes it better, but doesn't fix it. It still doesn't scale linearly.
|
Please don't stop us from using ASICBoost which we're not using
|
|
|
jbreher
Legendary
Offline
Activity: 3038
Merit: 1660
lose: unfind ... loose: untight
|
|
May 05, 2017, 11:08:08 PM |
|
Yes. Anyone who wants to be a central element of a multibillion dollar system is going to have to buck up for the requisite (and rather trivially-valued, in the scope of things) hardware to do so.
Bitcoin's dirty little secret is that non-mining nodes provide zero benefit to the network at large. Sure, operating a node allows that particular node operator to transact directly on the chain, so provides value to that person or persons. But it provides zero utility to the network itself. Miners can always route around the nodes that do not accept their transactions. Miners don't care whether non-mining nodes accept their blocks - only whether other miners will build atop their blocks.
And the number will not be ten - it will be many more. As again, anyone who wants to be able to transact directly upon the chain in a trustless manner will need to buck up to the hardware demands.
Thanks. If anyone wants to know what BU'ers think of what the system is and should be, I think I can now refer them to your post.

No, you may not. If you want to have a handy reference to what one BU'er - namely myself - thinks, then you can refer them to my post. I do not speak for others. Do you care to argue the facts above? Or shall you just rely on crowd sentiment as sufficient to escape any reasoned discussion?
|
|
|
|
-ck
Legendary
Offline
Activity: 4172
Merit: 1641
Ruu \o/
|
|
May 05, 2017, 11:10:42 PM |
|
Do you care to argue the facts above?
No I think I'm quite done here, thanks.
|
|
|
|
Sierra82fit
|
|
May 06, 2017, 03:48:49 AM |
|
You know about flextrans right?
Makes it better, but doesn't fix it. It still doesn't scale linearly.

It may be possible to fix it in the near future. The question is how to spread the information once it is already working.
|
|
|
|
Lauda
Legendary
Offline
Activity: 2674
Merit: 2965
Terminated.
|
|
May 06, 2017, 07:56:07 AM |
|
Thanks. If anyone wants to know what BU'ers think of what the system is and should be, I think I can now refer them to your post.
I rest my case.
Yup. This is exactly the nonsense that they are preaching. Let's make Bitcoin a very centralized system in which you can't achieve financial sovereignty unless you buy server-grade hardware costing thousands of USD.

Ok, I think I have understood the quadratic scaling problem now (thanks to @Lauda, @franky1, @jbreher and @-ck), my error was to think that only miners were affected, but as it affects mainly validation, all full nodes are affected and a malicious miner/pool could try to "kill small full nodes" or even smaller mining pools via a spam attack. So my opinion is reinforced that in the case of a block size increase, legacy transactions would have to be restricted by the protocol in some way.
Correct. Everyone is affected and the "parallel validation" BUIP that attempts to solve it is a joke. It does not solve anything.
|
"The Times 03/Jan/2009 Chancellor on brink of second bailout for banks" 😼 Bitcoin Core ( onion)
|
|
|
d5000
Legendary
Offline
Activity: 3976
Merit: 6860
Decentralization Maximalist
|
|
May 06, 2017, 08:24:41 AM |
|
You know about flextrans right?
Wouldn't Flextrans have the exact same problem? I haven't studied Flextrans in detail, but from what I remember it would enable a new "version" of transactions without malleability. But wouldn't legacy transactions ("v1", as they call it here) continue to be allowed in this proposal, too? In this case it could lead to the exact same situation where a malicious miner or pool could try to spam the network with legacy transactions to "take out" some competitors.
|
|
|
|
johnscoin
Member
Offline
Activity: 101
Merit: 10
|
|
May 06, 2017, 09:07:04 AM Last edit: May 06, 2017, 09:17:40 AM by johnscoin |
|
Hi OP, don't spread such naive words.
Roger and the miners have already agreed to activate SW first, but only if BS devs show some sincerity about a further block-limit increase. And what is the BS devs' response? They refuse every proposal and continue to spread lies and personal attacks on Roger and the miners.
Bitcoin is not your enemy. Wake UP. Let's fight against BS.
|
|
|
|
|