Author Topic: Last Time. In small words. Why 2MB impossible. Why Soft Fork. Why SegWit First..  (Read 6480 times)
franky1
Legendary
*
Online Online

Activity: 4396
Merit: 4754



May 04, 2017, 11:22:46 PM
 #101

You can't harm the network with sigops at 1 MB.

you can. think of the sigop limit as another cap on spam: once it is filled, nothing else can get in

Quote
unsigned int GetLegacySigOpCount(const CTransaction& tx)
{
    unsigned int nSigOps = 0;
    BOOST_FOREACH(const CTxIn& txin, tx.vin)
    {
        nSigOps += txin.scriptSig.GetSigOpCount(false);
    }
    BOOST_FOREACH(const CTxOut& txout, tx.vout)
    {
        nSigOps += txout.scriptPubKey.GetSigOpCount(false);
    }
    return nSigOps;
}

we all know a legacy tx's size is roughly (148 bytes per input) + (34 bytes per output), give or take ~10 bytes

so let's make a tx that has ~4k sigops
a) 3999 inputs : 1 output = 591,886 bytes (~4k sigops)
b) 1 input : 3999 outputs = 136,114 bytes (~4k sigops)

5 txs of (b) = 680,570 bytes (~20k sigops)

screw it, i know there are many nitpickers
c) 1 input : 2856 outputs = 97,252 bytes (~2,857 sigops)
7 txs of (c) = 680,764 bytes (~20k sigops)

so i can fill a block's sigop limit easily with 7 txs of (c)
and although it's only 7 txs, and only 0.68MB of data.. no other transactions can get into the base block.. not even segwit txs

I DO NOT TRADE OR ACT AS ESCROW ON THIS FORUM EVER.
Please do your own research & respect what is written here as both opinion & information gleaned from experience. many people replying with insults but no on-topic content substance, automatically are 'facepalmed' and yawned at
d5000
Legendary
*
Offline Offline

Activity: 4088
Merit: 7478


Decentralization Maximalist


May 04, 2017, 11:59:41 PM
 #102

As I've told franky, pools can and will prioritize native-to-segwit and segwit-to-segwit transactions in the case of native-to-native spam attacks.
I've not understood it in its entirety. Is the following mechanism correct: legacy spam transactions would be recognized and avoided because they would "steal" computing power for no benefit?

2 MB + SW in my idea would occur in >2019. If Bitcoin's growth continues at the same pace as until now (30-50% transaction volume growth per year) then we could see pretty full mempools by then. OK, maybe not if sidechains or extension blocks are functioning.
I don't agree with the instant-jump visions anyway. Why not 1.2 MB now, 1.4 MB next year and so on, until we hit 2 MB? That kind of approach makes more sense to me.

I have no problem with that concept - only that in this case we should do it with a single hard fork (like in BIP 103) to avoid having to fork every year. Still, my favourites are BIP-100-based ideas where the maximum block size has to be "voted up" in small steps, as we've already discussed elsewhere.

I was concerned about those who hold bitcoin but do not follow the news daily. Maybe they will ignore bitcoin and wait 10 years. What happens when they find their bitcoin is "prohibited"?

Obviously the whole concept should be made in such a way that if you hold Bitcoin on a legacy key, you could transfer them to Segwit addresses. Only legacy-to-legacy would have to be prohibited.

Lauda
Legendary
*
Offline Offline

Activity: 2674
Merit: 2965


Terminated.


May 05, 2017, 12:02:19 AM
 #103

You can't harm the network with sigops at 1 MB.
you can. think of the sigops as another "limit" of spam that once filled nothing else can get in
Nonsense. That is spam, and not what we were talking about. You keep creating straw man arguments.

so i can fill a blocks sigops limit easily with 7tx of (c)
Irrelevant, already known and denied by nobody. You're starting to become boring.

and although its only 7tx, and only 0.68mb of data.. no other transactions can get into the base block.. not even segwit tx's
Which you can avoid by prioritizing Segwit transactions.

Is the following mechanism correct: legacy spam transactions would be recognized and avoided because they would "steal" computing power for no benefit?
"Steal the computing power" is a pretty weird way to label this. I'd rather say that legacy transaction spam attacks would be recognized, and pools/miners could start prioritizing the other set of transactions.

I have no problem with that concept - only that in this case we should do that with a single hard fork (like in BIP 103) to avoid having to fork every year.
Of course. A single Bitcoin hard fork is very hard, let alone several of them.

"The Times 03/Jan/2009 Chancellor on brink of second bailout for banks"
😼 Bitcoin Core (onion)
d5000
Legendary
*
Offline Offline

Activity: 4088
Merit: 7478


Decentralization Maximalist


May 05, 2017, 12:13:10 AM
 #104

Is the following mechanism correct: legacy spam transactions would be recognized and avoided because they would "steal" computing power for no benefit?
"Steal the computing power" is a pretty weird way to label this. I'd rather say that legacy transaction spam attacks would be recognized, and pools/miners could start prioritizing the other set of transactions.
Yep, maybe. What I meant was that big spam transactions would cost them an amount of hashing power that - if they ignored these transactions or gave them a very low priority - they could use better to find blocks faster and get an advantage over their competitors.

franky1
Legendary
*
Online Online

Activity: 4396
Merit: 4754



May 05, 2017, 12:32:27 AM
 #105

Yep, maybe. What I meant was that big spam transactions would cost them an amount of hashing power that - if they ignored these transactions or gave them a very low priority - they could use better to find blocks faster and get an advantage over their competitors.

an ASIC does not have a hard drive.. it does not matter to an ASIC whether a block is 250 bytes or a gigabyte. the "hashing" is the same
an ASIC is just given a hash and rehashes it.

data or bloat does not hinder ASICs one bit.. it only hinders the pool/server that validates/relays full block data.



2 MB + SW in my idea would occur in >2019. If Bitcoin's growth continues at the same speed than until now (30-50% transaction volume growth per year) then we could see pretty full mempools then. OK, maybe not if sidechains or extension blocks are functioning.
I don't agree with the instant jumping visions anyways. Why not 1.2 MB now, 1.4 MB next year and so on, until we hit 2 MB? These kind of approaches make more sense to me.
its taken years of debate and still no guarantee of moving the block size even once.. do you honestly think moving to 1.2MB is going to benefit the network, only to have another few years of debating to get 1.4MB..

if you're talking about progressive block size movements that are automated by the protocol and not dev decisions per change.. then you are now waking up to the whole point of dynamics.. finally you're looking past Blockstream control and starting to think about the network moving forward without dev spoon-feeding. it only took you 2 years (even if you think that hard-limiting it at silly low amounts is good)

give it 2 more years and you will wake up to a hard limit of 4MB and a soft limit that moves up in increments.
EG
like the last 8 years (replace hard with consensus and soft with policy, and you will start to understand):
1MB consensus, 0.25MB policy: 2009-2011
1MB consensus, 0.5MB policy: 2011-2013
1MB consensus, 0.75MB policy: 2013-2015
1MB consensus, 0.99MB policy: 2015-2017
to become
4MB consensus, 2MB policy: 2017-2018
where policy grows

oh and guess what.. pools have never just jumped from 0 to 0.25, or 0.25 to 0.5..
even when policy allowed it, pools took things cautiously to avoid orphan risks

so say
4MB consensus, 2MB policy for 2017-2018 was implemented:
pools won't make a 2MB block the very first block after activation. they would test the water with 1.000250MB to see the risks and timing issues of bugs, orphans etc.
and increment from there.

you may argue "but what's to stop a pool jumping to 4MB".. well, the same reason pools didn't jump straight to 1MB: they went up in safe increments to protect themselves against orphan risks and other issues (as my last paragraph explained).
also that's where nodes would have an extra safeguard.. but i'll leave you to take a few years to realise what that extra safeguard is, which is what dynamics is all about.

so go spend 2 years shouting nonsense/irrelevant until it finally dawns on you.
have a nice 2 years

jbreher
Legendary
*
Offline Offline

Activity: 3038
Merit: 1660


lose: unfind ... loose: untight


May 05, 2017, 03:01:33 AM
 #106

Wouldn't then the quadratic hashing time problem be unsolved forever?

The quadratic hashing time issue is a non-problem. Miners employing parallel validation do not fall victim to extended time validating blocks that contain aberrant, large quadratic-hashing-time transactions. Instead, they orphan such blocks, by continuing to mine and validate on other threads while the validation of the aberrant block runs on one thread. Miners who keep making blocks with such transactions will eventually bankrupt themselves, all without doing any damage to the network. Problem solved.

What implementation includes parallel validation? Oh yeah... BU does.

Anyone with a campaign ad in their signature -- for an organization with which they are not otherwise affiliated -- is automatically deducted credibility points.

I've been convicted of heresy. Convicted by a mere known extortionist. Read my Trust for details.
-ck
Legendary
*
Offline Offline

Activity: 4284
Merit: 1645


Ruu \o/


May 05, 2017, 03:28:37 AM
 #107

Miners employing parallel validation do not fall victim to extended time validating blocks containing aberrant large quadratic hashing time transactions. Instead, they orphan such blocks. By continuing to mine and validate on other threads while the validation of the aberrant quadratic hashing time block runs on one other thread. Miners who continue to make blocks with such transactions will eventually bankrupt themselves. All without doing any damage to the network. Problem solved.

What implementation includes parallel validation? Oh yeah... BU does.
Given the massive amounts of RAM required by ultra-large transactions that are heavy in sigops and thus prone to quadratic scaling, validating yet another block in parallel is an excellent way of using even more RAM. High-RAM servers with 256GB may be able to cope with it temporarily, but normal machines and even normal servers will likely run out of memory and kill bitcoind.

Which implementation has had out of memory issues already? Oh yeah... BU did.

Developer/maintainer for cgminer, ckpool/ckproxy, and the -ck kernel
2% Fee Solo mining at solo.ckpool.org
-ck
hobbes
Full Member
***
Offline Offline

Activity: 128
Merit: 107



May 05, 2017, 07:06:32 AM
 #108

This seems to me to be one of the central points of the discussion. Lauda, can you confirm I got it right?
Quote from: The One
Wouldn't segwit hard fork be better than soft fork?
2.) There will be less technical debt by implementing segwit as a hard fork. The software kludges implementing it as a soft fork also creates huge maintenance risks in the future (segwit keys are 'anyonecanspend').
You are wrong here. Exchanges pointed out the need for replay protection for even slightly contentious hardforks a while ago. Replay protection is quite difficult and would cause more technical debt than SWSF. This makes SWSF the currently superior solution.

Absolute bollocks. If SWSF becomes a contentious soft fork, you would still need replay protection. When there is a contentious fork, it makes no difference whether that fork is hard or soft. You only need to implement replay protection if you want to cause a bilateral split; otherwise people will eventually unite behind a single chain, the one which has the most proof of work. The uniting behind one chain will happen sooner rather than later, otherwise it is a complete clusterfuck.
Maybe it was not clear, but of course I am assuming a significant hashrate majority. Then there is no need for replay protection, because the chain will always converge to the new chain. If you still disagree, please explain.


@franky1
SWHF shares most of the properties you are bashing. I can't see the point you are trying to make. What alternative solution do you propose?

Lauda
Legendary
*
Offline Offline

Activity: 2674
Merit: 2965


Terminated.


May 05, 2017, 09:03:22 AM
 #109

What I meant was that big spam transactions would cost them an amount of hashing power that - if they ignored these transactions or gave them a very low priority - they could use better to find blocks faster and get an advantage over their competitors.
Well, saying that it would "cost them" differently is also incorrect. If you have two transactions:
1) a native-to-native one that is part of a big group of spam with fee X, and
2) a native-to-segwit or segwit-to-segwit one that is a genuine transaction with a fee equal to X, it doesn't matter much to the miner. Both cost them the same amount.

its taken years of debate and still no guarantee of moving the block size even once.. do you honestly think moving to 1.2MB is going to benefit the network, only to have another few years of debating to get 1.4MB..
There is no debate. I have already mentioned that this would be done with 1 hard fork, so the subsequent rises (1.2 to 1.4 to 1.6 and so on) would be hard coded.

if you're talking about progressive block size movements that are automated by the protocol and not dev decisions per change.. then you are now waking up to the whole point of dynamics.. finally you're looking past Blockstream control and starting to think about the network moving forward without dev spoon-feeding. it only took you 2 years (even if you think that hard-limiting it at silly low amounts is good)
I am not strongly interested in hard fork proposals until I see someone coming up with solutions for the sigops problem.

franky1
Legendary
*
Online Online

Activity: 4396
Merit: 4754



May 05, 2017, 10:20:04 AM
 #110

What I meant was that big spam transactions would cost them an amount of hashing power that - if they ignored these transactions or gave them a very low priority - they could use better to find blocks faster and get an advantage over their competitors.
Well, saying that it would "cost them" differently is also incorrect. If you have two transactions:
1) a native-to-native one that is part of a big group of spam with fee X, and
2) a native-to-segwit or segwit-to-segwit one that is a genuine transaction with a fee equal to X, it doesn't matter much to the miner. Both cost them the same amount.

from the point of view of ASICs it makes no difference.
from the point of view of pools: the pool would have validated a tx before putting it into the mempool.. so putting it into a raw (unsolved) block minutes later, at block 4xx,001 or 4xx,002, makes no difference to the CPU time of forming a raw block to get a hash to send to the ASICs to solve.

emphasis: the quadratic/CPU-intensive time only happens once for a pool, when it first gets relayed a tx and validates it to add it to the mempool.. the creation of a raw block minutes later is just collating data, not revalidating txs again

the choice of what gets into a raw block is more about preference. some pools (BTCC) love their own internal customers' txs getting in fee-free. other pools want the expensive ones first. and some pools want to distribute mature rewards to all the external miners first.

some pools want to waste other pools' time by making spammy blocks, so the first pool can concentrate on the next block while their competitors hang validating the first one

also segwit is "supposedly" 75% cheaper, which means pools get a 4x smaller fee bonus from a segwit tx.

there's also the issue that if they add segwit txs they have to form the 2 merkle trees, and then have some peers request that the pool strip the block down to just the base block.. (old nodes connected to pools)*

however some pools would not treat a $0.25 tx as having higher priority than a $1 tx purely because it's segwit

its taken years of debate and still no guarantee of moving the block size even once.. do you honestly think moving to 1.2MB is going to benefit the network, only to have another few years of debating to get 1.4MB..
There is no debate. I have already mentioned that this would be done with 1 hard fork, so the subsequent rises (1.2 to 1.4 to 1.6 and so on) would be hard coded.

if you're talking about progressive block size movements that are automated by the protocol and not dev decisions per change.. then you are now waking up to the whole point of dynamics.. finally you're looking past Blockstream control and starting to think about the network moving forward without dev spoon-feeding. it only took you 2 years (even if you think that hard-limiting it at silly low amounts is good)
I am not strongly interested in hard fork proposals until I see someone coming up with solutions for the sigops problem.


very simple: keep sigops at a REAL 4k, or below 4k, per tx.
P.S. if segwit went soft first and then removed the kludge to go to 1 merkle tree afterwards, that would mean removing the 'witness discount', which would bring back the quadratics risk of a REAL 16k sigops (8 min native validation time)*

*(disclaimer: there is bait in my last sentence. i wonder if you will bite)

slaman29
Legendary
*
Offline Offline

Activity: 2828
Merit: 1285


Livecasino, 20% cashback, no fuss payouts.


May 05, 2017, 10:26:51 AM
 #111

I guess the thread title has not helped... it isn't going to be the last time, and we'll never be able to continue in small words :)

Does anyone from either side (I see the same posters) feel this will ever come to meet somewhere in the middle?

Lauda
Legendary
*
Offline Offline

Activity: 2674
Merit: 2965


Terminated.


May 05, 2017, 11:14:35 AM
 #112

emphasis: the quadratic/CPU-intensive time only happens once for a pool, when it first gets relayed a tx and validates it to add it to the mempool.. the creation of a raw block minutes later is just collating data, not revalidating txs again
If you didn't mine the block, you are going to validate it. If a malicious miner starts deploying quadratic-intensive blocks at a higher size (e.g. 2 MB), they could make you constantly lag behind them (hence DDoS).

also segwit is "supposedly" 75% cheaper, which means pools get a 4x smaller fee bonus from a segwit tx.
There is a reason for that. You need to re-read what Segwit is about.

there's also the issue that if they add segwit txs they have to form the 2 merkle trees, and then have some peers request that the pool strip the block down to just the base block.. (old nodes connected to pools)*
That's not an issue.

very simple: keep sigops at a REAL 4k, or below 4k, per tx.
Which also makes it easier to clutter up blocks to hit the max sigops per block limit. As you'd say it, this is no fix.

P.S. if segwit went soft first and then removed the kludge to go to 1 merkle tree afterwards, that would mean removing the 'witness discount', which would bring back the quadratics risk of a REAL 16k sigops (8 min native validation time)
(disclaimer: there is bait in my last sentence. i wonder if you will bite)
Your disclaimer is full of nonsense and proof that you don't understand Segwit. Go back to school.

Does anyone from either side (I see the same posters) feel this will ever come to meet somewhere in the middle?
Why would you compromise, when you've delivered an actually proven and working solution, for something that has no benefits aside from a capacity increase? Roll Eyes

franky1
Legendary
*
Online Online

Activity: 4396
Merit: 4754



May 05, 2017, 12:42:30 PM
 #113

emphasis: the quadratic/CPU-intensive time only happens once for a pool, when it first gets relayed a tx and validates it to add it to the mempool.. the creation of a raw block minutes later is just collating data, not revalidating txs again
If you didn't mine the block, you are going to validate it. If a malicious miner starts deploying quadratic-intensive blocks at a higher size (e.g. 2 MB), they could make you constantly lag behind them (hence DDoS).
now you're starting to see why segwit hasn't fixed it!!

also segwit is "supposedly" 75% cheaper, which means pools get a 4x smaller fee bonus from a segwit tx.
There is a reason for that. You need to re-read what Segwit is about.
Core have already removed fee-calculation features such as priority and reactive fees.. nothing stops them removing the 4x witness scale factor as soon as segwit is activated.. after duping people into activating it..
maybe you need to read the documentation and code and then think of the long term.. not the temporary sales pitch..

there's also the issue that if they add segwit txs they have to form the 2 merkle trees, and then have some peers request that the pool strip the block down to just the base block.. (old nodes connected to pools)*
That's not an issue.
because of the tiered network preventing old nodes connecting directly to pools. i did * that to say i was baiting you.. i was hoping you would have the honesty/integrity to explain why it's not an issue.. but you love to hide the bad bits under the rug with empty replies of "wrong", "irrelevant", "not an issue"

very simple: keep sigops at a REAL 4k, or below 4k, per tx.
Which also makes it easier to clutter up blocks to hit the max sigops per block limit. As you'd say it, this is no fix.
actually you need to think deeper.. by reducing tx sigops to say 1k and then having 80k block sigops, without any kludgy maths of pretend counting..
it changes it from being just 5-7 txs to being 80 txs to fill a block.

P.S. if segwit went soft first and then removed the kludge to go to 1 merkle tree afterwards, that would mean removing the 'witness discount', which would bring back the quadratics risk of a REAL 16k sigops (8 min native validation time)
(disclaimer: there is bait in my last sentence. i wonder if you will bite)
Your disclaimer is full of nonsense and proof that you don't understand Segwit. Go back to school.
my disclaimer was to await your reply, to see how practical, critical and honest you would be.. but you stayed silent, just saying "it does not matter" without explaining why, knowing you would dig yourself a hole should you explain

but at least in a few areas you are starting to think beyond the temporary promotion.. now you really need to start wearing the critical hat more often and look past the Blockstream defense you keep trying to promote

hobbes
Full Member
***
Offline Offline

Activity: 128
Merit: 107



May 05, 2017, 12:48:36 PM
 #114

I guess the thread title has not helped... it isn't going to be the last time, and we'll never be able to continue in small words :)
Will give it another try:

1. There are certain structural oversights in Bitcoin that need to be fixed. Without fixing them, altcoins will probably overtake Bitcoin in the long run.

2. SegWit has several benefits, including higher transaction capacity in the short term, much higher transaction capacity in the long term through second-level transactions, and also safe (!) increasing of the block size. If Satoshi were designing Bitcoin from scratch today, he would probably do it somewhat similarly to SWHF.

3. SegWit is a good solution, ready for action and well tested. Even some of its strongest opponents secretly admit it is "good" ('verified chatlogs').

4. There are two possible ways to deploy SegWit: as a softfork or as a hardfork. SegWit as a hardfork would allow a slightly cleaner implementation but would also require replay protection (as the exchanges have specifically asked for lately). SWSF does not require replay protection, assuming a hashrate majority. Replay protection is difficult, thus SegWit as a hardfork would altogether cause more technical debt than SWSF. Also, a hardfork is generally considered higher risk and would take a longer preparation time.

5. Because of a block verification processing time vulnerability that increases quadratically with block size, increasing the block size is only possible AFTER SegWit is active, and only for SegWit transactions.

6. Any alternative to SegWit SF would take at least half a year longer to implement and test.

7. A mining hardware manufacturer and a rich guy are trying to prevent SegWit from being activated, probably because of financial incentives and power-political reasons ('verified chatlogs').

8. Watching altcoins with SWSF flourish, pressure from the users will become so high that Bitcoin will finally get SegWit SF, probably by the miners accepting it after all.

Lauda
Legendary
*
Offline Offline

Activity: 2674
Merit: 2965


Terminated.


May 05, 2017, 12:49:57 PM
 #115

If you didn't mine the block, you are going to validate it. If a malicious miner starts deploying quadratic intensive blocks at higher MB (e.g. 2 MB), they could make you constantly be behind them (hence DDOS).
now you're starting to see why segwit hasn't fixed it!!
There is no risk at 1 MB, and with >1 MB for Segwit you'd have linear time, so it has been fixed in this context.

Core have already removed fee-calculation features such as priority and reactive fees.. nothing stops them removing the 4x witness scale factor as soon as segwit is activated.. after duping people into activating it..
maybe you need to read the documentation and code and then think of the long term.. not the temporary sales pitch..
The fee calculation is entirely irrelevant, and priority has been mostly unused for ages. You still don't understand why the scale factor was included. Go back to Segwit 101.

because of the tiered network preventing old nodes connecting directly to pools. i did * that to say i was baiting you.. i was hoping you would have the honesty/integrity to explain why it's not an issue.. but you love to hide the bad bits under the rug..
It is still a non-issue.

actually you need to think deeper.. by reducing tx sigops to say 1k and then having 80k block sigops, without any kludgy maths of pretend counting..
it changes it from being just 5-7 txs to being 80 txs to fill a block.
Exactly what would that change? Nothing. You'd disable a lot of use-cases in which these sigops may be needed, in order to make it <20x more expensive to attack the network this way.

my disclaimer was to await your reply, to see how practical, critical and honest you would be.. but you stayed silent, just saying "it does not matter" without explaining why, knowing you would dig yourself a hole should you explain
Ironically you don't explain anything yourself. All you write is "it is x y z". Roll Eyes

franky1
Legendary
*
Online Online

Activity: 4396
Merit: 4754



May 05, 2017, 01:18:06 PM
 #116

If you didn't mine the block, you are going to validate it. If a malicious miner starts deploying quadratic intensive blocks at higher MB (e.g. 2 MB), they could make you constantly be behind them (hence DDOS).
now you're starting to see why segwit hasn't fixed it!!
There is no risk at 1 MB, and with >1MB for Segwit you'd have linear time so it has been fixed in this context.

you're still thinking from the HOPE of a 2-merkle soft activation where people move to segwit txs..
your question was
"If a malicious miner starts deploying quadratic intensive blocks at higher MB (e.g. 2 MB), they could make you constantly be behind them (hence DDOS)."

stop flip-flopping to hide the risks of a 1-merkle segwit by then circling round back to a 2-merkle*.
stop flip-flopping to hide the non-fixes of a 2-merkle segwit by then circling round back to a 1-merkle.

by lowering the tx sigops (not faking the maths) you can both allow more txs in and reduce the CPU demand of native txs, no matter whether people are using segwit or not
P.S.
*you forget to remind yourself that segwit's linear time applies ONLY IF people move to segwit keys (which malicious pools/spam users won't do), so stop assuming segwit will help, because pools/users that want to be malicious won't use segwit keys

Lauda
Legendary
*
Offline Offline

Activity: 2674
Merit: 2965


Terminated.


May 05, 2017, 01:21:44 PM
 #117

you're still thinking from the HOPE of a 2-merkle soft activation where people move to segwit txs..
No. You are confused again and need to re-read what I was talking about. You mentioned Segwit in a statement that had nothing to do with it, and lost again.

by lowering the tx sigops (not faking the maths) you can both allow more txs in and reduce the CPU demand of native txs
Both points are wrong. This:
1) Does not allow for more TXs. All it does is disable some use-cases which require more sigops.
2) It does not reduce CPU demand at all. Those 1k sigops still have quadratic validation time.

P.S. you forget to remind yourself that segwit's linear time applies ONLY IF people move to segwit keys (which malicious pools/spam users won't do), so stop assuming segwit will help, because pools/users that want to be malicious won't use segwit keys
I did not forget anything and have already told you the answer to your nonsense. A malicious actor will be strongly weakened by the prioritization of native ->Segwit and Segwit -> Segwit transactions.

franky1
Legendary
*
Online Online

Activity: 4396
Merit: 4754



May 05, 2017, 01:46:16 PM
 #118

by lowering the tx sigops (not faking the maths) you can both allow more txs in and reduce the CPU demand of native txs
Both points are wrong. This:
1) Does not allow for more TXs. All it does is disable some use-cases which require more sigops.
2) It does not reduce CPU demand at all. Those 1k sigops still have quadratic validation time.

1. it does. because having, say, 1k tx-sigops and 80k block-sigops, vs 4k (mathematically twisted to be treated as 16k), means you cannot use up all the block sigops with 5-7 txs; instead you'd need 80+ txs if you're malicious
also
having 1k sigops per tx helps keep people making lean txs. ask yourself why anyone should have the ability to make 1 tx that uses up 14%-20% of a block's limit.

2) quadratics of 4k is a few seconds, vs 1k that's only a few milliseconds per tx..

EG 80x 1k tx-sigops with 80k block-sigops = under 2 seconds CPU time per block..

EG 5x 4k tx-sigops with 20k block-sigops = under 50 seconds CPU time per block..
EG 5x 4k tx-sigops (math-manipulated to 16k) with 80k block-sigops = under 50 seconds CPU time per block..

EG 5x 16k tx-sigops = under 50 minutes CPU time per block..

so 80x 1k tx-sigops with 80k block-sigops = under 2 seconds CPU time.. is better than
SFSW: 5x 4k tx-sigops (math-manipulated to 16k) with 80k block-sigops = under 50 seconds CPU time..
and better than removing the kludgy math to get a HFSW
HFSW: 5x 16k tx-sigops = under 50 minutes CPU time..

do the maths
1 tx of 80k sigops vs 80 txs of 1k sigops... both total 80k sigops, but because it's broken up into different txs the CPU time changes, and 80 txs of 1k sigops is much much better for all reasons

franky1
Legendary
*
Online Online

Activity: 4396
Merit: 4754



May 05, 2017, 01:56:12 PM
 #119

I did not forget anything and have already told you the answer to your nonsense. A malicious actor will be strongly weakened by the prioritization of native ->Segwit and Segwit -> Segwit transactions.
a HOPE of prioritization by segwit users

CODE should mean more than HOPE

Lauda
Legendary
*
Offline Offline

Activity: 2674
Merit: 2965


Terminated.


May 05, 2017, 02:07:03 PM
 #120

1. it does. because having, say, 1k tx-sigops and 80k block-sigops, vs 4k (mathematically twisted to be treated as 16k), means you cannot use up all the block sigops with 5-7 txs; instead you'd need 80+ txs if you're malicious
also
That is nonsensical. It does not allow for more throughput. All it does is make it a little bit harder to abuse sigops to fill up blocks.

ask yourself why anyone should have the ability to make 1 tx that uses up 14%-20% of a block's limit.
There may be use cases which require this. Who are you to censor such transactions?

2) quadratics of 4k is a few seconds, vs 1k that's only a few milliseconds per tx..
Irrelevant. It is still quadratic validation time.

a HOPE of prioritization by segwit users
No. It is going to happen as long as there are reasonable pools/miners, which we know there are (e.g. Bitfury).
