Bitcoin Forum
Poll
Question: Would you approve the compromise "Segwit + 2MB"?
Yes - 78 (62.4%)
No - 35 (28%)
Don't know - 12 (9.6%)
Total Voters: 125

Author Topic: [POLL] Possible scaling compromise: BIP 141 + BIP 102 (Segwit + 2MB)  (Read 14371 times)
jbreher
Legendary
*
Offline Offline

Activity: 3038
Merit: 1660


lose: unfind ... loose: untight


View Profile
March 10, 2017, 01:40:11 AM
 #61

im starting to see what game jbreher is playing.
...

Now you just look silly. I'll leave it at that.

Anyone with a campaign ad in their signature -- for an organization with which they are not otherwise affiliated -- is automatically deducted credibility points.

I've been convicted of heresy. Convicted by a mere known extortionist. Read my Trust for details.
-ck
Legendary
*
Offline Offline

Activity: 4102
Merit: 1632


Ruu \o/


View Profile WWW
March 10, 2017, 02:04:32 AM
 #62

https://bitco.in/forum/threads/buip033-passed-parallel-validation.1545/

For those unwilling to click through:

Quote
BUIP033: Parallel Validation
Proposer: Peter Tschipper
Submitted on: 10/22/2016

Summary:

Essentially Parallel Validation is a simple concept. Rather than validating each block within the main processing thread, we instead create a separate thread to do the block validation. If more than one block arrives to be processed then we create yet another thread. There are currently up to 4 parallel block processing threads available, making a big block DDOS attack impossible. Furthermore, if any attacker were somehow able to jam all 4 processing threads and another block arrived, then the processing for the largest block would be interrupted, allowing the smaller block to proceed, unless the larger block or blocks have the most proof of work. So only the most-proof-of-work and smallest blocks will be allowed to finish in such a case.

If there are multiple blocks processing at the same time, when one of the blocks wins the race to complete, then the other threads of processing are interrupted and the winner will be able to update the UTXO and advance the chain tip. Although the other blocks that were interrupted will still be stored on disk in the event of a re-org.

...

Thanks, I wasn't aware of that. Probably something worth offering in conjunction with BIP102 then.
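
To make the scheduling rule concrete, here is a minimal C++ sketch of the behaviour the BUIP text describes (all names and types are invented for illustration, locking is omitted, and this is not Bitcoin Unlimited's actual code):

Code:
#include <algorithm>
#include <atomic>
#include <cstdint>
#include <memory>
#include <vector>

struct BlockJob {
    uint64_t size;                        // serialized block size in bytes
    double chainWork;                     // proof of work of the block's chain
    std::atomic<bool> interrupted{false}; // polled by the validation loop
};

constexpr size_t MAX_PARALLEL_VALIDATIONS = 4; // "up to 4" per the BUIP text
std::vector<std::shared_ptr<BlockJob>> g_running;

// Called when a new block arrives for validation.
bool scheduleValidation(const std::shared_ptr<BlockJob>& incoming) {
    if (g_running.size() < MAX_PARALLEL_VALIDATIONS) {
        g_running.push_back(incoming);    // free slot: validate in a new thread
        return true;
    }
    // All 4 slots busy: consider interrupting the largest block in flight,
    // "unless the larger block or blocks have the most proof of work".
    auto largest = std::max_element(
        g_running.begin(), g_running.end(),
        [](const auto& a, const auto& b) { return a->size < b->size; });
    if (incoming->size < (*largest)->size &&
        (*largest)->chainWork <= incoming->chainWork) {
        (*largest)->interrupted = true;   // loser stops; its block stays on disk
        *largest = incoming;
        return true;
    }
    return false;                         // newcomer queues behind current work
}

Whichever job finishes first would then set the interrupted flag on the others and update the UTXO set and chain tip, as the quoted summary describes.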

Developer/maintainer for cgminer, ckpool/ckproxy, and the -ck kernel
2% Fee Solo mining at solo.ckpool.org
-ck
nillohit
Full Member
***
Offline Offline

Activity: 154
Merit: 100

***crypto trader***


View Profile
March 10, 2017, 10:57:16 AM
 #63

I support SegWit  Grin

Lauda
Legendary
*
Offline Offline

Activity: 2674
Merit: 2965


Terminated.


View Profile WWW
March 10, 2017, 11:45:35 AM
 #64

I do have to add that, while I think it would still be extremely hard to gather 90-95% consensus on both ideas, I think both would reach far higher and easier support than either Segwit or BU.
I don't understand that statement. Are you talking about DooMAD's idea (modified BIP100+BIP106) or the compromise proposed by "ecafyelims", or both?
Both.

I ask because I think DooMAD's "10%-blocksize-change-voting proposal" sounds interesting and if there is support by staff/respected community members/devs then it would be worth discussing it in a separate thread to elaborate a "final BIP".
The idea is worth discussing on its own, regardless of whether there is support by others. Do note that "support by staff" (if you're referring to Bitcointalk staff) is useless. Excluding achow101 and potentially dabs, the rest have very limited or just standard knowledge. Did you take a look at the recent luke-jr HF proposal? Achow101 modified it by removing the initial size reduction. Read: https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2017-February/013544.html

if blocks grow to, say, 8mb we just keep tx sigops BELOW 16,000 (we don't increase tx sigop limits when block limits rise).. thus no problem.
That's not how this works.

https://bitco.in/forum/threads/buip033-passed-parallel-validation.1545/

For those unwilling to click through:

Quote
BUIP033: Parallel Validation
Proposer: Peter Tschipper
Submitted on: 10/22/2016

Summary:

Essentially Parallel Validation is a simple concept. Rather than validating each block within the main processing thread, we instead create a separate thread to do the block validation. If more than one block arrives to be processed then we create yet another thread. There are currently up to 4 parallel block processing threads available, making a big block DDOS attack impossible. Furthermore, if any attacker were somehow able to jam all 4 processing threads and another block arrived, then the processing for the largest block would be interrupted, allowing the smaller block to proceed, unless the larger block or blocks have the most proof of work. So only the most-proof-of-work and smallest blocks will be allowed to finish in such a case.

If there are multiple blocks processing at the same time, when one of the blocks wins the race to complete, then the other threads of processing are interrupted and the winner will be able to update the UTXO and advance the chain tip. Although the other blocks that were interrupted will still be stored on disk in the event of a re-org.
Which effectively.. solves nothing.

"The Times 03/Jan/2009 Chancellor on brink of second bailout for banks"
😼 Bitcoin Core (onion)
DooMAD
Legendary
*
Offline Offline

Activity: 3780
Merit: 3104


Leave no FUD unchallenged


View Profile
March 10, 2017, 03:11:46 PM
 #65

I ask because I think DooMAD's "10%-blocksize-change-voting proposal" sounds interesting and if there is support by staff/respected community members/devs then it would be worth discussing it in a separate thread to elaborate a "final BIP".
The idea is worth discussing on its own, regardless of whether there is support by others. Do note that "support by staff" (if you're referring to Bitcointalk staff) is useless. Excluding achow101 and potentially dabs, the rest have very limited or just standard knowledge. Did you take a look at the recent luke-jr HF proposal? Achow101 modified it by removing the initial size reduction. Read: https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2017-February/013544.html

I could get behind Achow101's proposal (the link in that linuxfoundation text ended with an extraneous "." which breaks the link) if that one proves less contentious.  I'd also consider tweaking mine to a lower percentage if that helped, or possibly even a flat 0.038MB if we wanted an absolute guarantee that there was no conceivable way it could increase by more than 1MB in the space of a year.  But recurring increases every diff period are unlikely if the total fees generated have to increase every time.  We'd reach an equilibrium between fee pressure easing very slightly when the limit does increase and then slowly rising again as blocks start to fill once more at the new, higher limit.
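
For context on where a figure like 0.038MB lands: a difficulty period is 2016 blocks, roughly two weeks, so there are about 26 of them per year, and 1MB / 26 ≈ 0.0385MB. That is presumably how a flat per-period step caps growth at roughly 1MB per year even if every single adjustment votes upward.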

Lauda
Legendary
*
Offline Offline

Activity: 2674
Merit: 2965


Terminated.


View Profile WWW
March 10, 2017, 03:44:17 PM
 #66

I support SegWit  Grin
I forgot to mention in my previous post that this is a healthy stance to have, as the majority of the technology-oriented participants of the ecosystem are fully backing Segwit.

I could get behind Achow101's proposal (the link in that linuxfoundation text ended with an extraneous "." which breaks the link) if that one proves less contentious.
I think it does, as it doesn't initially reduce the block size. This is what made luke-jr's proposal extremely contentious and effectively useless.

I'd also consider tweaking mine to a lower percentage if that helped, or possibly even a flat 0.038MB if we wanted an absolute guarantee that there was no conceivable way it could increase by more than 1MB in the space of a year. 
I don't like fixed increases in particular either. Percentage-based movements in both directions would be nice, but the primary problem with those is preventing the system from being gamed. Even with 10%, eventually this 10% is going to be a lot. Who's to say that at a later date such movements would be technologically acceptable?

"The Times 03/Jan/2009 Chancellor on brink of second bailout for banks"
😼 Bitcoin Core (onion)
DooMAD
Legendary
*
Offline Offline

Activity: 3780
Merit: 3104


Leave no FUD unchallenged


View Profile
March 10, 2017, 04:07:54 PM
 #67

I'd also consider tweaking mine to a lower percentage if that helped, or possibly even a flat 0.038MB if we wanted an absolute guarantee that there was no conceivable way it could increase by more than 1MB in the space of a year. 
I don't like fixed increases in particular either. Percentage-based movements in both directions would be nice, but the primary problem with those is preventing the system from being gamed. Even with 10%, eventually this 10% is going to be a lot. Who's to say that at a later date such movements would be technologically acceptable?

The thing to bear in mind is that we'll never make any decision if we're too afraid to make a change because there's a possibility that it might need changing at a later date.  Plus, the good news is, it would only require a soft fork to restrict it later.  But yes, movements in both directions, increases and decreases alike, would be ideal.  This also helps as a disincentive to game the system with artificial transactions, because your change would be undone the next diff period if demand isn't genuine.

Lauda
Legendary
*
Offline Offline

Activity: 2674
Merit: 2965


Terminated.


View Profile WWW
March 10, 2017, 05:03:37 PM
 #68

I'd also consider tweaking mine to a lower percentage if that helped, or possibly even a flat 0.038MB if we wanted an absolute guarantee that there was no conceivable way it could increase by more than 1MB in the space of a year. 
I don't like fixed increases in particular either. Percentage-based movements in both directions would be nice, but the primary problem with those is preventing the system from being gamed. Even with 10%, eventually this 10% is going to be a lot. Who's to say that at a later date such movements would be technologically acceptable?
The thing to bear in mind is that we'll never make any decision if we're too afraid to make a change because there's a possibility that it might need changing at a later date.  Plus, the good news is, it would only require a soft fork to restrict it later.  But yes, movements in both directions, increases and decreases alike, would be ideal.  This also helps as a disincentive to game the system with artificial transactions, because your change would be undone the next diff period if demand isn't genuine.
You could argue that it may already be quite late/near impossible to make such 'drastic' changes. I've been giving this some thought, but I'm not entirely sure. I'd like to see some combination of the following:
1) % changes either up or down.
2) Adjustments that either align with difficulty adjustments (not sure if this makes things more complicated or riskier, hence the latter option) or monthly adjustments.
3) Fixed maximum cap. Since we can't predict what the state of the network and underlying technology/hardware will be far in the future, it is best to set a top maximum cap a few years into the future. Yes, I know that this requires more changes later, but it is better than nothing, or than 'risking'/hoping miners are honest, and so on. (A sketch of how 1 and 3 could combine follows below.)
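
A minimal sketch of how points 1 and 3 could compose, with invented names, and the step size, thresholds, and cap as placeholder numbers rather than a proposal:

Code:
#include <algorithm>
#include <cstdint>

constexpr uint64_t HARD_CAP = 8'000'000; // fixed upper bound, bytes (placeholder)
constexpr uint64_t FLOOR    = 1'000'000; // never adjust below 1MB (placeholder)
constexpr double   STEP     = 0.10;      // 10% move per adjustment window

// avgFullness: mean fraction of the current limit used by blocks over the
// last window (a difficulty period or a month, per point 2).
uint64_t nextLimit(uint64_t current, double avgFullness) {
    uint64_t next = current;
    if (avgFullness > 0.90)              // sustained congestion: step up
        next = static_cast<uint64_t>(current * (1.0 + STEP));
    else if (avgFullness < 0.50)         // sustained slack: step down
        next = static_cast<uint64_t>(current * (1.0 - STEP));
    return std::clamp<uint64_t>(next, FLOOR, HARD_CAP);
}

The fixed clamp is what bounds the "eventually this 10% is going to be a lot" concern: however the fullness signal is gamed, the limit can never leave [FLOOR, HARD_CAP].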

"The Times 03/Jan/2009 Chancellor on brink of second bailout for banks"
😼 Bitcoin Core (onion)
franky1
Legendary
*
Offline Offline

Activity: 4214
Merit: 4465



View Profile
March 10, 2017, 07:40:37 PM
Last edit: March 10, 2017, 07:51:04 PM by franky1
 #69

You could argue that it may already be quite late/near impossible to make such 'drastic' changes. I've been giving this some thought, but I'm not entirely sure. I'd like to see some combination of the following:
1) % changes either up or down.
2) Adjustments that either align with difficulty adjustments (not sure if this makes things more complicated or riskier, hence the latter option) or monthly adjustments.
3) Fixed maximum cap. Since we can't predict what the state of the network and underlying technology/hardware will be far in the future, it is best to set a top maximum cap a few years into the future. Yes, I know that this requires more changes later, but it is better than nothing, or than 'risking'/hoping miners are honest, and so on.

imagine a case where there were 2 limits (4 overall: 2 for nodes, 2 for pools).
a hard technical limit that everyone agrees on, and below that a preference limit (adjustable to the demand of dynamics).

now imagine
we call the hard technical limit (like the old consensus.h) the one that only moves when the NETWORK as a whole has done speed tests to say what is technically possible and come to a consensus.
EG 8mb has been seen as acceptable today by all speed tests.
the entire network agrees to stay below this, pools and nodes.
as a safety measure it's split up as 4mb for the next 2 years, then 8mb for the 2 years after that..

thus allowing up to 2-4 years to tweak and make things leaner and more efficient, and allowing time for real-world tech to advance
(fibre optic internet adoption and 5G mobile internet) before stepping consensus.h forward again



then the preferential limit (a further safety measure) that is adjustable and dynamic (policy.h) and keeps pools and nodes in line in a more fluid, temporarily adjustable agreement. to stop things moving too fast, but staying fluid if demand occurs

now then, nodes can flag the policy.h whereby if the majority of node preferences are at 2mb, pools' consensus.h only goes to 1.999
however if under 5-25% of nodes are at 2mb and over 75% of nodes are above 2mb, then POOLS can decide, on the orphan risk, whether to raise their pools' consensus.h above 2mb but below the majority node policy

also note: pools' actual block making is below their (pools') consensus.h

let's make it easier to imagine.. with a picture

black line: consensus.h, the whole-network RULE, changed by speed tests and real-world tech / internet growth over time (the ultimate consensus)
red line: node policy.h, the node's dynamic preference agreement, changed by dynamics or personal preference
purple line: pools' consensus.h, below the network RULE but affected by mempool demand vs nodes' overall preference policy.h vs (orphan) risk
orange line: pools' policy.h, below pools' consensus.h


so imagine
2010
32mb is too much, let's go for 1mb
2015
pools are moving their limit up from 0.75mb to 0.999mb
mid 2017
everyone agrees to 2 years of 4mb network capability (then 2 years of 8mb network capability)
everyone agrees to a 2mb preference
pools agree their max capability will be below everyone's network capability but step it up due to demand and node preference MAJORITY
pools' preference (actual blocks built): below the other limits but can affect the node minority to shift (EB)
mid 2019
everyone agrees to 2 years of 8mb network capability, then 2 years of 16mb network capability
some move their preference to 4mb, some move under 3mb, some don't move
late 2019
MINORITY of nodes have their preference shifted by dynamics of (EB)
2020
MINORITY nodes manually change their preference to not be controlled by dynamics of (EB)
late 2020
MINORITY of nodes have their preference shifted by dynamics of (EB)
2021
MINORITY nodes manually change their preference to not be controlled by dynamics of (EB)
mid 2021
a decision is made whereby nodes' preference and pools' preference are safe to control blocks at X% scaling per difficulty adjustment period
pools' preference (actual blocks built): below the other limits but can shift the MINORITY nodes' preference via (EB) should they lag behind

p.s.
it's just a brainfart. no point nitpicking the numbers or dates. just read the concept. i even made a picture to keep people's attention spans entertained.

and remember, all of these 'dynamic' fluid agreements are extra safety limits BELOW the black network consensus limit
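
Restating the four nested limits as code may help (names are invented; this is a reading of franky1's concept, not any client's actual variables):

Code:
#include <cstdint>

struct Limits {
    uint64_t networkConsensus; // black line: hard technical limit everyone agrees on
    uint64_t nodePolicy;       // red line: node's fluid preference below it
    uint64_t poolConsensus;    // purple line: pool's accept limit, eg 1.999mb under a 2mb majority
    uint64_t poolPolicy;       // orange line: size of blocks the pool actually builds
};

// The layering only holds if each limit stays at or below the one above it.
bool wellOrdered(const Limits& l) {
    return l.poolPolicy <= l.poolConsensus &&
           l.poolConsensus <= l.nodePolicy &&
           l.nodePolicy <= l.networkConsensus;
}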

I DO NOT TRADE OR ACT AS ESCROW ON THIS FORUM EVER.
Please do your own research & respect what is written here as both opinion & information gleaned from experience. many people replying with insults but no on-topic content substance, automatically are 'facepalmed' and yawned at
d5000 (OP)
Legendary
*
Offline Offline

Activity: 3906
Merit: 6172


Decentralization Maximalist


View Profile
March 10, 2017, 08:35:55 PM
 #70

I like that we're moving forward in the discussion, it seems. The original compromise that prompted me to start this thread now looks a bit dated.

I would support Lauda's maximum cap idea, as it's true that there could be circumstances where such a flexible system could be gamed.

The challenge is now to find a number for this cap. I had done some very rough calculations: a 1 TB/year blockchain (that would be equivalent to approximately 20 MB blocks) would enable 160 million people to do about 1-3 transactions (depending on the TX size) per month. That would be just enough for this user base if we assume that Lightning Network and similar systems can manage smaller payments. 1 TB/year seems pretty high, but I think it's manageable in the near future (~5 years from now).
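
Those figures check out roughly, under the stated assumptions (~144 blocks per day and a ballpark ~250 bytes per typical transaction):

Code:
#include <cstdio>

int main() {
    const double bytesPerYear  = 1e12;         // 1 TB/year
    const double blocksPerYear = 6 * 24 * 365; // 52,560 blocks at one per ~10 min
    const double blockSize     = bytesPerYear / blocksPerYear; // ~19 MB
    const double txPerYear     = bytesPerYear / 250;           // ~4 billion
    const double txPerUserPerMonth = txPerYear / 160e6 / 12;   // ~2.1
    std::printf("block size ~%.1f MB, ~%.1f tx per user per month\n",
                blockSize / 1e6, txPerUserPerMonth);
    return 0;
}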

Obviously if we want the 7 billion people on earth to be able to use Bitcoin on-chain the limit would be much higher, but I think even the most extreme BU advocates don't see that as a goal.

AngryDwarf
Sr. Member
****
Offline Offline

Activity: 476
Merit: 501


View Profile
March 10, 2017, 08:52:28 PM
 #71

My thoughts are:

Was the 1 MB cap introduced as an anti-spam measure when everybody used the same satoshi node, and did that version simply stuff all mempool transactions into the block in one go?

Big mining farms are probably not using reference nodes, since with those they probably wouldn't be able to pick out transactions that have been prioritised using a transaction accelerator.

Increasing the block size cap in the simplest manner would avoid BU technical debt, as the emergent consensus mechanism probably wouldn't work very well if people do not configure their nodes (it would hit a 16MB cap in a more complicated manner).

Miners have to weigh up the benefits of the higher processing costs required to build a bigger block against the orphan risk associated with the delay it causes. In other words, a more natural fee market develops.

So it won't be massive blocks by midnight.

Any comments? (probably a silly question  Wink )
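
On the weighing-up point, a back-of-envelope expected-value check (every number here is an assumption for illustration, not measured data):

Code:
#include <cstdio>

int main() {
    const double atStake    = 12.5 + 1.0; // BTC lost if orphaned: 2017-era subsidy + fees
    const double orphanProb = 0.001;      // assumed extra orphan risk per added MB
    // A marginal MB of transactions is only worth including if its fees
    // exceed the expected loss from the extra orphan risk it creates.
    std::printf("an extra MB must carry >= %.4f BTC in fees to pay for itself\n",
                atStake * orphanProb);
    return 0;
}

If propagation delays really do make orphan risk rise with block size, this is the mechanism by which a more natural fee market develops, and why blocks would not become massive by midnight.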


Scaling and transaction rate: https://bitcointalk.org/index.php?topic=532.msg6306#msg6306
Do not allow demand to exceed capacity. Do not allow mempools to forget transactions. Relay all transactions. Eventually confirm all transactions.
franky1
Legendary
*
Offline Offline

Activity: 4214
Merit: 4465



View Profile
March 10, 2017, 09:12:25 PM
 #72

I like that we're moving forward in the discussion, it seems. The original compromise that prompted me to start this thread now looks a bit dated.

I would support Lauda's maximum cap idea, as it's true that there could be circumstances where such a flexible system could be gamed.

The challenge is now to find a number for this cap. I had done some very rough calculations: a 1 TB/year blockchain (that would be equivalent to approximately 20 MB blocks) would enable 160 million people to do about 1-3 transactions (depending on the TX size) per month. That would be just enough for this user base if we assume that Lightning Network and similar systems can manage smaller payments. 1 TB/year seems pretty high, but I think it's manageable in the near future (~5 years from now).

Obviously if we want the 7 billion people on earth to be able to use Bitcoin on-chain the limit would be much higher, but I think even the most extreme BU advocates don't see that as a goal.

mhm
don't think 7 billion by midnight.

think rationally. like 1 billion over decades.. then your fears start to subside and you start to see natural progression is possible

bitcoin will never be a one-world single currency. it will probably be in the top 10 'nations' list, with maybe 500 million people. and it won't be overnight. so relax about the "X by midnight" scare stories told on reddit.

I DO NOT TRADE OR ACT AS ESCROW ON THIS FORUM EVER.
Please do your own research & respect what is written here as both opinion & information gleaned from experience. many people replying with insults but no on-topic content substance, automatically are 'facepalmed' and yawned at
Lauda
Legendary
*
Offline Offline

Activity: 2674
Merit: 2965


Terminated.


View Profile WWW
March 10, 2017, 10:39:55 PM
 #73

imagine a case where there were 2 limits (4 overall: 2 for nodes, 2 for pools).
a hard technical limit that everyone agrees on, and below that a preference limit (adjustable to the demand of dynamics).
Yes, that's exactly what my 'proposal/wish' is supposed to have. A dynamic lower bound and a fixed upper bound. The question is, how do we determine an appropriate upper bound and for what time period? Quite a nice concept IMHO. Do you agree?

i even made a picture to keep peoples attention span entertained
What software did you do this in? (out of curiosity)

The challenge is now to find a number for this cap. I had done some very rough calculations: a 1 TB/year blockchain (that would be equivalent to approximately 20 MB blocks) would enable 160 million people to do about 1-3 transactions (depending on the TX size) per month. That would be just enough for this user base if we assume that Lightning Network and similar systems can manage smaller payments. 1 TB/year seems pretty high, but I think it's manageable in the near future (~5 years from now).
Problems:
1) 20 MB is too big right now.
2) 1 TB is definitely too big. Just imagine the IBD after 2 years.
3) You're thinking too big. Think smaller. We need some room to handle the current congestion, we do not need room for 160 million users yet.

Increasing the block size cap in the simplest manner would avoid BU technical debt, as the emergent consensus mechanism probably wouldn't work very well if people do not configure their nodes (it would hit a 16MB cap in a more complicated manner).
Preference level for me:
Segwit + dynamic block size proposal (as discussed so far) > Segwit alone > block size increase HF alone > BTU emergent consensus. The latter is risky and definitely not adequately tested.

"The Times 03/Jan/2009 Chancellor on brink of second bailout for banks"
😼 Bitcoin Core (onion)
AngryDwarf
Sr. Member
****
Offline Offline

Activity: 476
Merit: 501


View Profile
March 10, 2017, 10:57:33 PM
 #74

Preference level for me:
Segwit + dynamic block size proposal (as discussed so far) > Segwit alone > block size increase HF alone > BTU emergent consensus. The latter is risky and definitely not adequately tested.

Preference level for me would be (current moment of thought - I reserve the right to change my mind):
Segwit + dynamic block size HF > block size HF > BTU > Segwit SF. The latter would introduce a two-tiered network system and a lot of technical debt.

Although a quick and simple static block size increase is needed ASAP to allow time to get the development of the preferred option right.

Scaling and transaction rate: https://bitcointalk.org/index.php?topic=532.msg6306#msg6306
Do not allow demand to exceed capacity. Do not allow mempools to forget transactions. Relay all transactions. Eventually confirm all transactions.
jbreher
Legendary
*
Offline Offline

Activity: 3038
Merit: 1660


lose: unfind ... loose: untight


View Profile
March 11, 2017, 12:31:40 AM
Last edit: March 11, 2017, 12:48:10 AM by jbreher
 #75

https://bitco.in/forum/threads/buip033-passed-parallel-validation.1545/

For those unwilling to click through:

Quote
BUIP033: Parallel Validation
Proposer: Peter Tschipper
Submitted on: 10/22/2016

Summary:

Essentially Parallel Validation is a simple concept. Rather than validating each block within the main processing thread, we instead create a separate thread to do the block validation. If more than one block arrives to be processed then we create yet another thread. There are currently up to 4 parallel block processing threads available, making a big block DDOS attack impossible. Furthermore, if any attacker were somehow able to jam all 4 processing threads and another block arrived, then the processing for the largest block would be interrupted, allowing the smaller block to proceed, unless the larger block or blocks have the most proof of work. So only the most-proof-of-work and smallest blocks will be allowed to finish in such a case.

If there are multiple blocks processing at the same time, when one of the blocks wins the race to complete, then the other threads of processing are interrupted and the winner will be able to update the UTXO and advance the chain tip. Although the other blocks that were interrupted will still be stored on disk in the event of a re-org.
Which effectively.. solves nothing.

Exactly. There is no problem which requires solving. This merely eliminates the DoS potential that quadratic hash time exploits might incur, were this obvious workaround not already inherent in the protocol.

Lesser implementations that have no embedded nullification of this exploit may wish to take note.
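
For readers new to the "quadratic hash time" issue being referenced: under the legacy (pre-Segwit) signature-hashing scheme, each input's SIGHASH digest re-hashes roughly the whole transaction, so total hashing grows with (number of inputs) x (transaction size), i.e. roughly quadratically in transaction size. As a rough worked example, a crafted ~1MB transaction with a few thousand inputs re-hashes on the order of gigabytes of data, which is why such blocks can take tens of seconds to validate. Segwit's BIP143 sighash makes this cost linear for segwit inputs, while legacy inputs keep the old behaviour, which is the point the next two posts dispute.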

Anyone with a campaign ad in their signature -- for an organization with which they are not otherwise affiliated -- is automatically deducted credibility points.

I've been convicted of heresy. Convicted by a mere known extortionist. Read my Trust for details.
franky1
Legendary
*
Offline Offline

Activity: 4214
Merit: 4465



View Profile
March 11, 2017, 12:43:59 AM
 #76


i even made a picture to keep peoples attention span entertained
What software did you do this in? (out of curiosity)


i just quickly opened up microsoft excel and added some 'insert shape' and lines..
i use many different packages depending on what i need. some graphical, some just whatever office doc i happen to already have open

I DO NOT TRADE OR ACT AS ESCROW ON THIS FORUM EVER.
Please do your own research & respect what is written here as both opinion & information gleaned from experience. many people replying with insults but no on-topic content substance, automatically are 'facepalmed' and yawned at
franky1
Legendary
*
Offline Offline

Activity: 4214
Merit: 4465



View Profile
March 11, 2017, 12:47:02 AM
 #77

Exactly. There is no problem which requires solving. This merely eliminates the DoS potential that quadratic hash time exploits might incur, were this obvious workaround not already inherent in the protocol.

lol

blockstreamer: segwit solves quadratics, it's a must, it's needed. quadratics is a big deal and segwit promises to solve it
community: malicious users will stick to native keys, thus still quadratic spamming even with segwit active.. meaning segwit's promise=broke
blockstreamer: quadratics has never been a problem, relax, it's no big deal

i now await the usual rebuttal rhetoric
"blockstream never made any contractual commitment nor guarantee to fix sigop spamming" - as they backtrack earlier promises and sales pitches
or
personal attack (edit: there we have it. p.s. personal attacks aimed at me sound like whistles in the wind)

I DO NOT TRADE OR ACT AS ESCROW ON THIS FORUM EVER.
Please do your own research & respect what is written here as both opinion & information gleaned from experience. many people replying with insults but no on-topic content substance, automatically are 'facepalmed' and yawned at
jbreher
Legendary
*
Offline Offline

Activity: 3038
Merit: 1660


lose: unfind ... loose: untight


View Profile
March 11, 2017, 12:49:16 AM
 #78

Exactly. There is no problem which requires solving. This merely eliminates the DoS potential that quadratic hash time exploits might incur, were this obvious workaround not already inherent in the protocol.

lol

blockstreamer: segwit solves quadratics, it's a must, it's needed. quadratics is a big deal and segwit promises to solve it
community: malicious users will stick to native keys, segwit's promise=broke
blockstreamer: quadratics has never been a problem, relax, it's no big deal

You're looking ridiculous again, franky1. Y'all might wanna reel you-self back in.

Anyone with a campaign ad in their signature -- for an organization with which they are not otherwise affiliated -- is automatically deducted credibility points.

I've been convicted of heresy. Convicted by a mere known extortionist. Read my Trust for details.
AngryDwarf
Sr. Member
****
Offline Offline

Activity: 476
Merit: 501


View Profile
March 11, 2017, 12:15:32 PM
 #79

Having a little thought about this concept of 'emergent consensus'. Is not the fact that different versions of nodes, or different node implementations, exist on the network today a form of 'emergent consensus'?

Scaling and transaction rate: https://bitcointalk.org/index.php?topic=532.msg6306#msg6306
Do not allow demand to exceed capacity. Do not allow mempools to forget transactions. Relay all transactions. Eventually confirm all transactions.
franky1
Legendary
*
Offline Offline

Activity: 4214
Merit: 4465



View Profile
March 11, 2017, 12:33:39 PM
Last edit: March 11, 2017, 12:48:28 PM by franky1
 #80

Having a little thought about this concept of 'emergent consensus'. Is not the fact that different versions of nodes, or different node implementations, exist on the network today a form of 'emergent consensus'?

to answer your question..

basically, BU and core already have the variables..

nodes: consensus.h policy.h
pools: consensus.h policy.h

and all nodes have 2 limits, although they are not utilised to the best of their ability.. meaning at the non-mining level core does not care about policy.h

and the punchline i was going to reveal to Lauda about my example of dynamics:
BU uses
consensus.h (...) as the upper-bound limit (32mb (2009), then 1mb for years, and in the future going up as the hard limits, EG 16mb)
policy.h (...) as the more fluid value BELOW consensus.h that, if the node is in the minority, can be pushed by EB or by the user manually without needing to wait for events, and which is signalled in their useragent, eg 2mb and dynamically going up

core, however, requires tweaking code and recompiling to change both each time
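
For reference, the BU mechanism being described works roughly like this (a simplified sketch from the published description; names and default values here are illustrative, not BU's actual code):

Code:
#include <cstdint>

struct EmergentConsensusConfig {
    uint64_t excessiveBlockSize = 2'000'000; // EB: user-set preference, eg 2mb
    int      acceptDepth        = 4;         // AD: illustrative value
};

// depthOnTop: how many blocks have already been mined on top of this one.
bool acceptBlock(uint64_t size, int depthOnTop,
                 const EmergentConsensusConfig& cfg) {
    if (size <= cfg.excessiveBlockSize)
        return true;               // within preference: accept normally
    // "Excessive" block: follow it only once enough work is piled on top.
    // This is the dynamic push franky1 calls being "shifted by (EB)".
    return depthOnTop >= cfg.acceptDepth;
}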

I DO NOT TRADE OR ACT AS ESCROW ON THIS FORUM EVER.
Please do your own research & respect what is written here as both opinion & information gleaned from experience. many people replying with insults but no on-topic content substance, automatically are 'facepalmed' and yawned at