Bitcoin Forum
May 24, 2024, 04:52:58 AM
  Show Posts
Pages: « 1 ... 775 776 777 778 779 780 781 782 783 784 785 786 787 788 789 790 791 792 793 794 795 796 797 798 799 800 801 802 803 804 805 806 807 808 809 810 811 812 813 814 815 816 817 818 819 820 821 822 823 824 [825] 826 827 828 829 830 831 832 833 834 835 836 837 838 839 840 841 842 843 844 845 846 847 848 849 850 851 852 853 854 855 856 857 858 859 860 861 862 863 864 865 866 867 868 869 870 871 872 873 874 875 ... 1467 »
16481  Bitcoin / Bitcoin Discussion / Re: Please run a full node on: May 12, 2017, 10:20:33 PM
That would mean that the orphan rate is 600%.  

yes, the orphan rate would be higher if all pools didn't stop.. (because there is competition, it's more efficient to stop and move to the next block if a competitor wins)
i'm not saying that pools now should do such a thing of continuing on mainnet.. as that would make the current competition of 20 pools on one network less efficient. as i said, try some testnet tests using usb asics.


i'm saying, based on a network of 1 pool... to get some maths of the average REAL blocktime: if there was no competition (single-pool network), then a pool would not give up (stale shares), would not solve-but-not-bother-propagating (stale blocks), and would not lose the race (orphan), because there would be no competition. so all their background failed attempts become valid...

which would reveal they make more blocks in X time.. which would counter the 1-dimensional overview of the 20-pool competing network of only seeing, and doing bad maths on, the accepted blockchain blocks

anyway.. lets bring this back on topic
time for me to have a beer
16482  Bitcoin / Bitcoin Discussion / Re: Please run a full node on: May 12, 2017, 09:51:48 PM
Why would it be closer to 10 minutes when they would have 1/6th the hashpower?  It's not like the orphan factor would account for a 600% increase.  So, that don't make no sense.

because you're only seeing the blocks that make it into the blockchain.. you're not realising the blocks in the background that don't make it.
you would only understand it if you ran some scenarios.

ok.. go learn about "stales" and you will see more of what i'm on about (stale blocks, not stale shares). hint: stale blocks are solved blocks that don't even get propagated.

then run scenarios where, even if a competitor has a solved block, you actually continue until you have a solved block
then
only take the time from when you added the previous hash of the last block height, to the time when your block is solved.
do not just take the time from the last block you created to the next block you created.
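the two timing methods can be sketched in a few lines (the timestamps here are hypothetical, purely to show the gap between "time since this pool's last chain block" and "time from seeing the previous hash to solving"):

```python
from datetime import datetime

# hypothetical timestamps for one pool (illustration only, not real chain data)
last_chain_win = datetime(2017, 5, 12, 9, 0, 0)   # last time THIS pool got a block listed
prev_hash_seen = datetime(2017, 5, 12, 9, 25, 0)  # a competitor's block arrives; work restarts
block_solved   = datetime(2017, 5, 12, 9, 33, 0)  # this pool solves the next block

# the flawed measure: time since this pool last appeared in the chain
naive_minutes = (block_solved - last_chain_win).total_seconds() / 60
# the measure the post describes: time from picking up the new previous hash to solving
actual_minutes = (block_solved - prev_hash_seen).total_seconds() / 60

print(naive_minutes)   # 33.0
print(actual_minutes)  # 8.0
```

same pool, same block — 33 minutes by the naive measure, 8 minutes by the actual solve time.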
16483  Bitcoin / Bitcoin Discussion / Re: Please run a full node on: May 12, 2017, 09:30:37 PM
If 6 pools each got one block added to the blockchain per hour, then 1 pool by itself might get blocks a bit more often than once an hour because of the absence of orphaning, but it definitely doesn't mean they would suddenly start solving blocks every 10 minutes by themselves.
 
Do you agree?

1. of course it's not LITERALLY 10 minutes.... but it's not 1 block an hour per pool either,
it's much, much closer to a 10 minute (average) than it is to 1 hour (average)

firstly, forget about the term orphans and the purpose of the orphan mechanism
2. realise that behind the scenes pools make blocks more often than you think
some propagate and get orphaned.. BUT
others get solved and pools just hold them locally (not propagating), start working on the next block, and only propagate the first block if the competitor's fails validation..
also
some pools GIVE UP a few seconds/minutes before they would have got a solution, purely because it's more efficient to give up and restart on a new block
3. the point of this whole meandered debate.. don't just see btcc making 5 blocks in 5 hours and say "the pool only makes 1 block an hour"
the point is there is a vast difference between the time it takes to make a block.. and how often it wins a race

the "orphan" link is not proof that only one competitor gets close. i only mention it as the easiest way to show dino that pools make more blocks than he thought.. to show that pools' times are close... to counter dinofelis's 1-d thinking that the only blocks ever made are the 6 blocks an hour that make it into the chain.

dino has for months also failed to understand the purpose of non-mining nodes..
so let's get back to that topic and let dinofelis spend a few months learning more about bitcoin to see the full depths of the many things that make bitcoin way way way better and different to what he believes.

as i feel dinofelis is trying to think that bitcoin is just a centralised cheque clearing house that doesn't need nodes and only processes one batch of cheques/transactions an hour per clearing house
16484  Bitcoin / Bitcoin Discussion / Re: Please run a full node on: May 12, 2017, 08:37:58 PM
It doesn't matter at all that there is a non-zero orphaning rate.   The difficulty adjusts to the actual number of blocks being added to the chain.  
 
Hopefully you see how it works now?  

i have always seen how it works.

when i have described it, i have taken my 3-dimensional view of it all and brought it down to 1-dimensional to describe the specific problems some people are not seeing..

i NEVER said or presumed that orphan blocks count towards difficulty..
the only reason i linked the orphans was to show that other blocks are made behind the scenes / forgotten about within seconds.

i'd say dinofelis has about a year more of research and running scenarios before he sees all the many things that make bitcoin what it is.

anyway, i could keep trying to make him see that 6 pools having 1 block each an hour (combined: 6 blocks an hour as a united network) does not mean each would take 6 hours if it went at it alone..



or even cheaper get some friends to run 100 metre races

I said you're a hopeless case, but as sometimes you do make good points, I cannot wrap my mind around you being confused to that point.

People running races are doing *cumulative* work.

seriously!!
you are taking the 100m race too literally as a comparison of the many factors..

the 100m was an analogy explaining one part which you were misunderstanding..

you were assuming a particular runner's time was based on how often that particular runner won, divided over a minute (6 races) if there were 6 runners.

i was correcting your misunderstanding: all runners in one race start at the same time (when they see a previous hash as a trigger).

the reality is that they are not timed from the last time THEY themselves won (which was your mistake)
nor
by how many races they won divided into 60 seconds (the time of 6 races).. (your mistake again)
nor
by taking how many races a participant wins over a given period, EG 1 race every 20 years (5th olympic games), to then conclude it takes them 20 years to run 100 metres (taking your mindset to the extreme)

but by actually looking at how long it would actually take each runner to cross the finish line, win or lose



anyway, the dice game, or even using USB erupters, are better ways to run scenarios once you get past your one-dimensional view.

and without stopping when a winner is found — if you actually cared about the runner-ups' times rather than avoiding looking at them — you would see there is a difference between how often racer A won a race vs how fast he ran each race.

and that was me trying to go down to a 1-dimensional view to match your one-dimensional view
the 100m race was not meant to be a complete 3-dimensional comparison of bitcoin mining vs olympic races..


yes, the dice game is better as it includes more factors, but it seemed you were only understanding things 1-2 dimensionally, so i was answering the things you misunderstood by simplifying to 1-2 dimensions, just to show where each of your 1-2 dimensions was wrong.

it would take a whole book to waffle through the 3 dimensions that make up the whole mining process in detail.
..
anyway
answer this
from the 1-dimensional view you had, are you still holding onto this mindset:

out of 6 blocks, if a pool has 2 blocks in the chain, do you still believe it takes 30 minutes for that pool to solve a block IF IT WENT ALONE
or
can you at least now see, from a 2-dimensional view, that the pool could have had 6 blocks solved.. just not qualifying as the ultimate fastest to get displayed/locked into the chain/win the race.

..
answer this
from the 2-dimensional view you had, are you still holding onto this mindset:
if you shot 5 out of 6 pools.. would the 6th pool suddenly find blocks 6 times slower / 6 blocks in 6 hours without any competition
or
can you at least see, from a 3-dimensional view, that if you knew all the timings of all the pools, WIN OR LOSE, without competition that pool would win every block
AND
that the blocks would not be an hour apart, but much less

ok.. illustration time


but let's look at all the pools' times IF they didn't give up — win or lose, the times at which they would make a block.. and to avoid dinofelis taking things too literally, let's stretch out the "just a few seconds" to be a few minutes apart
(don't take it too literally)


and now lets see if his 6 hours to make 6 blocks still holds weight



don't take it too literally/nitpicky.
just understand the concept that pools don't take an hour per block..

anyway, this whole block-timing thing has meandered away from why it's important to run a node.. which dinofelis still needs a few months of learning bitcoin to realise — the importance of non-mining nodes
16485  Bitcoin / Bitcoin Discussion / Re: Mempool is flooded and I see no complaints on: May 12, 2017, 06:52:50 AM
With 136K unconfirmed transactions, AntPool mines an empty block (#465952).

but shhhh, don't mention that BTCC pool mined an empty block (#465117).
16486  Bitcoin / Bitcoin Discussion / Re: Blocksize increase vs. difficulty decrease on: May 12, 2017, 06:36:24 AM
In the ongoing scaling debate, I wonder why is the discussion focused solely on the block size.

To my understanding, decreasing the difficulty, either statically or dynamically, has the same effect, of increasing the transaction-confirmation rate.
I don't know if such a thing can be done with a soft fork, but for those advocating a hard fork, why not a fork that decreases difficulty?

Can anyone enlighten me?

making blocks average, say, 1 minute means that 12.5btc is created every 1 min average instead of every 10 min average (or, as amph notes, 2016 blocks in 1.4 days instead of 14 days)
which results in the reward halving event coming every ~146 days instead of every 4 years.
which means all bitcoins would be mined in roughly 13 years as opposed to ~130 years.
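the schedule arithmetic can be checked in a few lines (assuming the subsidy rules are left untouched and only the block interval shrinks):

```python
HALVING_INTERVAL = 210_000       # blocks between subsidy halvings
FINAL_SUBSIDY_BLOCK = 6_930_000  # block height at which the subsidy reaches zero

def days(blocks, minutes_per_block):
    """elapsed days to mine `blocks` at a given average block interval"""
    return blocks * minutes_per_block / (60 * 24)

print(days(HALVING_INTERVAL, 10) / 365.25)    # ~4.0 years per halving at 10-min blocks
print(days(HALVING_INTERVAL, 1))              # ~145.8 days per halving at 1-min blocks
print(days(2016, 1))                          # retarget window shrinks to 1.4 days
print(days(FINAL_SUBSIDY_BLOCK, 1) / 365.25)  # full emission in ~13.2 years, not ~130
```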

so messing around with reducing the block reward to 1.25 for 1-min blocks, then messing with the reward-halving schedule and so on to correct things back to scale, takes a lot more coding than you think.. plus breaks a lot more fundamental rules..

ALSO
whether it's 10 minutes or 1 minute.. the people who say "10 mins is too long" are usually buying things where even 1 min would still be too long.
EG queuing at a grocery store checkout: standing there waiting a minute is still too long.
as many people know, watching someone count their change for 40 seconds instead of just swiping a card infuriates people.

don't believe me?
next time you are at a grocery store, just stand there and do absolutely nothing for 60 seconds, then look at the people behind you.
don't believe me?
day trade on an exchange: see a good price and, instead of hitting the order button, just sit there for 60 seconds.. then look to see if the good offer is still there.


now, when it comes to data movement:
whether it's 4032 blocks of 1mb each a month or 4032 blocks of 10mb each a month..
whether it's 4032 blocks of 1mb each a month or 40320 blocks of 1mb each a month..

the bandwidth of both 'more data' options is the same ~40gb, and anyone complaining about internet data caps will still run into the same problem
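the equivalence is plain arithmetic:

```python
# both 'more data' options add up to the same monthly bandwidth
bigger_blocks = 4032 * 10   # 4032 blocks/month at 10 MB each
faster_blocks = 40320 * 1   # 40320 blocks/month at 1 MB each

print(bigger_blocks, "MB")  # 40320 MB
print(faster_blocks, "MB")  # 40320 MB -- roughly 40 GB either way
```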



and as others have mentioned, if blocks are created once a minute, then by the time they get passed to a node, verified, passed to another node, verified (the relay propagation), this can cause a lot more congestion, orphans, rejects and other issues.

this is where pools who think they have the fastest block realise someone else beat them by milliseconds, and as such they end up wasting time and not getting as much income..
16487  Bitcoin / Bitcoin Discussion / Re: Please run a full node on: May 12, 2017, 06:14:00 AM
Frankly I think you're both "wrong" as, unless there's more info I'm not aware of, we simply don't have enough real data to know what the true average time is for a given pool. Saying if all but one pool stopped that the blocks would continue on being generated every 10 minutes with the exact same difficulty is clearly wrong. As is saying that the "win" average is an accurate representation of a pools true average if they were the only one solving blocks at a given difficulty.

viper, you are mainly right: there is not enough info displayed, because in most cases all you see is the winners — most pools give up/reset to start on the next block for efficiency reasons.

just to note:
the difficulty is based ONLY on the 2016 blocks (~a fortnight) that WIN by being added to the blockchain — not on the hidden times of the blocks that did not win.

if people ran a testnet and set up usb asics (cheap simulations) and ran tests — not just on winning times, but on the times the other usb asics would have got a result if they didn't give up — they would get more data and see the real state of the network, rather than just data on the "winner first"

EG get 100 usb asics..
make 10 'pools' on a testnet where each pool has 10 asics (same hashrate), and then time how long each pool takes to solve a block.
by this i mean set it up so the pools don't give up/stop when there is a winner, but continue on until each pool has a solution for the same block height..

i guarantee you it won't be a=10min b=20m c=30m d=40m etc etc..

or even cheaper.. do the dice game
or even cheaper get some friends to run 100 metre races

it will really wake people up once they see all the information

..
however, doing maths only on the winners.. is very 2-dimensional thinking
and not looking at the time to solve block 460,001 (by subtracting the timestamp of 460,000 from the timestamp of 460,001), but foolishly counting blocks per hour of a certain brand, is very 1-dimensional thinking
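the usb-asic experiment can be dry-run in software first. a minimal sketch, under the assumption that each pool's solve time follows an exponential distribution (the usual Poisson-process model of hashing) — 10 equal pools, nobody giving up, every pool timed to its own solution:

```python
import random

random.seed(42)  # deterministic run
POOLS, TRIALS = 10, 20_000
POOL_MEAN = 100.0  # minutes for one pool alone, at a difficulty tuned for 10-min network blocks

pool_totals = [0.0] * POOLS
winner_total = 0.0

for _ in range(TRIALS):
    # every pool races all the way to its own solution; no resets when a rival wins
    times = [random.expovariate(1.0 / POOL_MEAN) for _ in range(POOLS)]
    for i, t in enumerate(times):
        pool_totals[i] += t
    winner_total += min(times)  # only this time would ever appear in the chain

print("avg winning time: %.1f min" % (winner_total / TRIALS))
for i, total in enumerate(pool_totals):
    print("pool %d avg own solve time: %.1f min" % (i, total / TRIALS))
```

under this assumption every pool's own average lands near POOL_MEAN (so, as the post says, it is not a=10m b=20m c=30m..) while the winning time averages near POOL_MEAN/POOLS; whether real usb-asic hardware matches the model is exactly what a testnet run would check.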
16488  Bitcoin / Bitcoin Discussion / Re: Please run a full node on: May 12, 2017, 04:50:05 AM
You cannot even use the time stamps at seconds precision, because the lower bits of the time stamp are used as nonce.

Most of the orphaned blocks are in reality much closer in time - simply because if the window of "collision" were bigger, there would be much more of them.

What you do, is post selection bias, however.   ALL orphaned blocks will be close in time !  Otherwise, they wouldn't collide !


what you are not understanding is that many more blocks are produced per hour than you think. some propagate and get displayed. some propagate but don't get displayed. some are solved locally, but the pool realises there's no point propagating them. and some stop just short of a solution to save precious seconds they could use making the next block.

either way, thinking antpool (using the examples above) averages a block every 30 minutes simply because you only publicly see 2 of its blocks in that hour is FLAWED.

you are not accounting for the blocks it SOLVED but didn't win.. or the blocks it didn't bother propagating.. or the blocks it stopped just shy of solving to save time for the next round....

all you have done is seen 2 blocks displayed and computed 60mins/2 = 30min average
16489  Bitcoin / Bitcoin Discussion / Re: Please run a full node on: May 12, 2017, 04:38:55 AM
people need to actually run scenarios..

and not just do retroactive maths on only the results of the winners..

look behind the winners and see the times of the undisplayed losers as well, to see the real times
Where is the data that shows that pool A found a block a couple seconds after pool B? I don't know much about this stuff but if it's so close all the time one would think there would be a hell of a lot of orphans happening. Would that then mean this isn't an accurate reflection of that? https://blockchain.info/charts/n-orphaned-blocks Cause I'm only seeing 3 in the last month.

https://blockchain.info/orphaned-blocks
465722
Timestamp    2017-05-10 08:19:11 Relayed By    Bixin
Timestamp    2017-05-10 08:19:10 Relayed By    GBMiners
1 second apart

464681
Timestamp    2017-05-03 18:55:39 Relayed By    ViaBTC
Timestamp    2017-05-03 18:55:46 Relayed By    BTC.com
7 seconds apart

464185    
Timestamp    2017-04-30 11:40:29 Relayed By    BitFury
Timestamp    2017-04-30 11:39:59 Relayed By    Bixin
30 seconds apart

463505    
Timestamp    2017-04-25 23:15:20 Relayed By    Bitcoin.com
Timestamp    2017-04-25 23:15:22 Relayed By    AntPool
2 seconds apart
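the "seconds apart" figures can be recomputed straight from the listed timestamps:

```python
from datetime import datetime

# competing-block timestamps as listed on blockchain.info's orphaned-blocks page above
pairs = {
    465722: ("2017-05-10 08:19:11", "2017-05-10 08:19:10"),
    464681: ("2017-05-03 18:55:39", "2017-05-03 18:55:46"),
    464185: ("2017-04-30 11:40:29", "2017-04-30 11:39:59"),
    463505: ("2017-04-25 23:15:20", "2017-04-25 23:15:22"),
}

FMT = "%Y-%m-%d %H:%M:%S"
gaps = {}
for height, (a, b) in pairs.items():
    delta = datetime.strptime(a, FMT) - datetime.strptime(b, FMT)
    gaps[height] = abs(int(delta.total_seconds()))
    print("%d: %d seconds apart" % (height, gaps[height]))
```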




imagine it this way, knowing only one person can win... some STOP when they see a winner, as there is no point wasting precious seconds.. and RESET to work on a new block.

this does not mean it takes them 30 minutes. it just means that stopping the instant a winner crosses the line gives better odds than continuing for a few more seconds (1-30 seconds) in the narrow hope you are more valid than the fastest first.

there are many, many layers of security, efficiencies, percentages and features at play — far more than dino puts into context when he just counts how many "winning" blocks he sees, rather than how long it actually takes a pool to make a block, win or lose.

there is a difference
again, for emphasis:
the number of blocks that win per hour vs the number of blocks that are created per hour
16490  Bitcoin / Bitcoin Discussion / Re: Please run a full node on: May 12, 2017, 04:18:34 AM
people need to actually run scenarios..

and not just do retroactive maths on only the results of the winners..

look behind the winners and see the times of the undisplayed losers as well, to see the real times
16491  Bitcoin / Bitcoin Discussion / Re: Please run a full node on: May 12, 2017, 03:51:37 AM
The 10 minute interval comes from the probability of the entire network hashrate solving a block, which can be expressed as a Poisson distribution.   If you take away most pools, your time interval goes up.

lol
go check some stats, before making assumptions.


for instance, block 463505:
how long did it take antpool to make its block, knowing antpool's previous block was 463503?

i guarantee you it was not 30 minutes, based on dino's bad maths of counting blocks


hint..
will you start the count from when it added the 463504 hash and started working on 463505..
or will you base it on the last time antpool got listed, at 463503?

the time to make block 463505 is not based on the last time that same pool had a block in the chain..
but on the time from starting a block with the previous hash (463504) until it has a solution for 463505

again, it's not based on assuming that
if in an hour antpool only shows 2 blocks in a chain of 6, it can only make 2 blocks an hour.. averaged as 30 mins.
all that shows is that only 2 blocks beat the competition..
it could separately have made 6 blocks in the hour that were just not quite quick enough to get listed.

as you can see from the orphan list,
it made a block 2 seconds after bitcoin.com.. but was simply seen as a runner-up and not counted..

so again, how long do you think it took antpool to make 463505?
i guarantee you it was not 30 minutes (which is based on dino's bad maths of counting only the blocks that got listed)
but IS about how long it takes a pool to actually get a solution, listed or not listed

TL;DR:
more blocks are made than you think... they are just not displayed.
if you could see all the blocks, even the ones not displayed, you would see things differently


let's word it a different way to end the debate:
pools make blocks in an average of ~10 minutes..

pools make SPENDABLE/publicly displayed blocks less often, depending on whether the competition beats them
16492  Bitcoin / Bitcoin Discussion / Re: Will BU Fork Soon Rip the Network in Half? on: May 12, 2017, 03:07:45 AM
Core is not perfect, but BU is just sad. Might as well stand for Bugs Unlimited.


Can I add: with closed-source updates?

can i add that even core don't disclose issues until at least a month after a fix
https://github.com/bitcoin/bitcoin/issues/10364
Quote
but we do not publicly announce bugs even after they have been fixed for some time.
Quote
announcing bugs with exploit guidelines [within] 30 days after a fix is released would put a ton of our users at massive risk
16493  Bitcoin / Bitcoin Discussion / Re: Mempool is flooded and I see no complaints on: May 12, 2017, 03:01:40 AM
Yes sorry. But I was expecting more and more threads of endless complaints about the state of the network. I am surprised that everything is still not as bad in the forum and the price is still going up. Maybe we see the chaos when there is 200k unconfirmed transactions and the price goes down?

the main blockstreamist trolls don't complain.. they (wrongly) think huge fees are good, for 2 reasons:
1. they have been told by their masters that one day they can run an LN hub and earn many cents from users just by being on a route between customer and merchant

2. they are economists rubbing their hands together, thinking it helps raise the price of bitcoin
16494  Bitcoin / Bitcoin Discussion / Re: can we admit segwit SF is never going to get 95% approval? on: May 12, 2017, 02:56:29 AM
"Segwit is a compromise" is rhetoric.  Compromise between what?   sensible scaling and no scaling?  

Truth is: segwit is something Core came up with on their own without consulting the users, that offers a tiny amount of scaling as a soft fork.

Do you like spreading FUD?

segwit was a secret project/altcoin as part of blockstream:elements, done separately from the bitcoin community during 2014-2015.
the consensus 2015 meeting produced the first main roadmap, which core decided to follow, and they have not listened to / done the other things the community asked of them.. it's either the blockstream highway (roadmap) or no way..
no B roads, no diversions, no secondary routes.. just follow blockstream's roadmap or get chucked off the network by the end of 2018
16495  Bitcoin / Bitcoin Discussion / Re: Mempool is flooded and I see no complaints on: May 12, 2017, 02:49:55 AM
I'm complaining.  loudly.  

this is a pain in the butt for me, my company, and more importantly, new users who are trying bitcoin only to find its promise of fast cheap transactions may be not so cheap and certainly not fast.


Yeah, I'm complaining too about the same, but probably also for different reasons.  The Stupid Civil War (SW v. BU) probably is scaring off potential newbies who are confused about what might happen...  "Hard Fork" sounds very scary.

NO ONE (that I have read, anyway, even here at bitcointalk) has come along to encourage the Miner Wankers and the Developer Wankers to sit down and SOLVE this.  Perfect examples of N. N. Taleb's "Intellectuals yet idiots".

Do I sound pissed off...?

people have... but the devs have their fingers in their ears and only want to do things the way THEY plan, using THEIR roadmap that THEY designed without any community input.

here is one example of a solution:
develop a NEW priority-fee formula that actually does a job


here is one example - not perfect, but think about it:
imagine we decided it's acceptable that people should have a way to get priority if they have a lean tx and signal that they only want to spend funds once a day (a reasonable expectation).
where if they want to spend more often, costs rise; if they want a bloated tx, costs rise..

which then allows those who just pay their rent once a month or buy groceries every couple of days to be fine using onchain bitcoin.. and where the cost of trying to spam the network (every block) becomes expensive, whereby they would be better off using LN (for things like faucet raiding / day trading every 1-10 minutes).

so let's think about a priority fee that's not about rich vs poor (like the old one was) but about reducing respend spam and bloat.

let's imagine we actually use the tx age combined with CLTV: a user whose tx age is under a day signals they want it confirmed while accepting being locked out of spending for an average of 24 hours (that's what CLTV does).

and where the bloat of the tx vs the blocksize has some impact too... rather than the old formula, which was more about the value of the tx


as you can see, it's not about tx value. it's about bloat and age.
this way:
those not wanting to spend more than once a day who don't bloat the blocks get preferential treatment onchain ($0.01).
if you are willing to wait a day but are taking up 1% of the blockspace, you pay more ($0.44).
if you want to be a spammer spending every block, you pay the price ($1.44).
and if you want to be a total ass-hat and be both bloated and respending EVERY BLOCK, you pay the ultimate price ($63.72).

note: this is not perfect. but think about it
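the post gives tier examples but no formula, so here is a purely hypothetical sketch — the function name, coefficients and resulting figures are invented for illustration and are not the post's actual numbers. the fee grows with the share of blockspace a tx occupies and with how often its inputs are respent:

```python
def priority_fee(tx_bytes, block_bytes, hours_since_inputs_confirmed):
    """hypothetical priority fee: base cost scaled by bloat and respend frequency."""
    base = 0.01  # dollars; illustrative only
    bloat = tx_bytes / block_bytes  # fraction of blockspace this tx occupies
    # respending sooner than once a day multiplies the fee; waiting 24h+ keeps it flat
    respend = max(1.0, 24.0 / max(hours_since_inputs_confirmed, 0.01))
    return base * (1 + 1000 * bloat) * respend

BLOCK = 1_000_000  # bytes
lean_daily    = priority_fee(250, BLOCK, 24)        # lean tx, spent once a day
bloat_daily   = priority_fee(10_000, BLOCK, 24)     # 1% of blockspace, once a day
lean_spammer  = priority_fee(250, BLOCK, 1 / 6)     # lean tx respent every block
bloat_spammer = priority_fee(10_000, BLOCK, 1 / 6)  # bloated AND respent every block

print(lean_daily, bloat_daily, lean_spammer, bloat_spammer)
```

the ordering reproduces the post's tiers — cheapest for lean once-a-day spenders, steepest for bloated every-block respenders — though the actual curve would be a consensus design decision.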
16496  Bitcoin / Bitcoin Discussion / Re: Will BU Fork Soon Rip the Network in Half? on: May 12, 2017, 01:50:38 AM
Indeed, and bitcoin also needs users. 99% of people run Core software, nobody trusts Buggy Unlimited.

Well, that's just a plain lie:

http://nodecounter.com/#nodes_pie_graph

That took about 30 seconds to look up. Core has dropped to around 86%. They basically had a kind of 'first-mover' advantage with the perception of being the"official" client. Now they're losing share because of their own actions, ironically.

Those facts aside, anyone who understands how and why Bitcoin works understands that this 'node' count (they're not actually nodes) is entirely irrelevant, anyway.

also worth noting that the 86% includes versions of core 0.8 - 0.12, which are not even segwit compatible.
in fact many are INDEPENDENT people who forked core, made their own tweaks and updates themselves, and didn't bother sticking with core.
some are even classic/BU/XT nodes in disguise, to avoid DDoS attacks from the core gang, who only attack user agents that appear not to be core.


also worth noting: segwit is IMPLICITLY at 66% and EXPLICITLY well BELOW 66%

also worth noting, and even funnier: BTCC (highly ass-kissing core/blockstream/segwit) doesn't even use core's up-to-date software — they made their own tweaks to their own pool software
https://bitnodes.21.co/nodes/?q=BTCC:0.13.1

16497  Bitcoin / Bitcoin Discussion / Re: time to admit its not "spam" , blocks are full on: May 12, 2017, 01:38:28 AM
If segwit was activated we wouldn't have this problem because there would be enough space, even tho anyone with enough money could fill the blocks too.

Ultimately there is always a possibility to fill the blocks unless the blocksize is stupidly big. So yeah blame miners for not activating segwit, until then pay the fee.

activating segwit is meaningless..
the "extra space" in the cludgy 2-merkle version of segwit is for users who move funds to new keypairs AFTER activation.. so that they can park their asses in the main block and have their feet spread out in another area, allowing more transactions to sit in the main area where the first person's feet used to be..

the issue is that getting 46m outputs to switch over to segwit keypairs will cause a mega mempool-fill event of people trying to move funds across.
the issue is that malicious people will stay with native keys and fill the seats of the main block, so even segwit users can't sit down.. and so never get to put their feet up out of the way.

the solution is a 4mb single block where both native and segwit keypairs can all sit down side by side, sharing the same area.. and then limiting what malicious things native-key users can do.

without sorting out the native-key users.. segwit is an empty gesture, a hope..
16498  Bitcoin / Bitcoin Discussion / Re: Please run a full node on: May 12, 2017, 01:14:38 AM
This is only true if you mean that : once the other miners are removed and the difficulty re-adjusts to the one miner/pool.... at THAT POINT, it would be every 10 minutes on average.

But sans difficulty adjustment, if you kill 9 out of 10 pools and have only 1 pool left, it would take 100 minutes, not 10.

Is that what you meant?


no
the pool would make blocks on an average of 10 minutes
16499  Bitcoin / Bitcoin Discussion / Re: Will BU Fork Soon Rip the Network in Half? on: May 11, 2017, 03:09:27 PM
All they will have is an useless coin with a higher hashrate. Meanwhile 75% of exchanges and merchants will reject BUcoin.

The poll done by 21 suggests that around 75% of big players in the space want segwit, and 70.5% reject Buggy Unlimited explicitly:

Not to mention nobody but Roger Ver runs nodes.

So it's pretty obvious BU is in general a failure.

a questionnaire of 61 people.. hmm, who got told where to vote — a result biased by spamming the link to only one side



also, if pereira4 is not around, billy bob will daily spam the same biased stuff..
if billy is not around, lauda will

each day the same stuff is posted, but none of them even think about researching behind the numbers. they just post it


P.S
want to see the narrative control


P.P.S
question 4 (as advertised by lauda/billy and others) is:
do you want MINERS to activate BU

..
if the question was "do you want community consensus to activate BU", the results would be different.
this is where people need to learn CONTEXT and the source of data
16500  Bitcoin / Bitcoin Discussion / Re: SegWit + Variable and Adaptive (but highly conservative) Blocksize Proposal on: May 11, 2017, 02:46:11 PM
Mathematically, assuming an average block time of ~10 minutes, there are a maximum of ~104 difficulty adjustments over a 4 year period, so even if there was a .01 MB increase at every difficulty re-target (the chances of which are negligible), the base blocksize would still only be ~2.04 MB after 4 years.

Is this a compromise most of us could get behind?

For me, maybe

But bear in mind, saying 2.04 MB after 4 years conceals the fact that the real total blocksize would be 8.16MB when you include the signatures in the witness blocks, Segwit is a part of this deal you're proposing.
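the quoted numbers can be verified in a few lines:

```python
# ~104 difficulty retargets in 4 years, +0.01 MB base size at each
retarget_days = 2016 * 10 / (60 * 24)        # 2016 blocks at 10 min = 14.0 days
retargets = int(4 * 365.25 / retarget_days)  # 104 retargets in 4 years
base_mb = 1.0 + retargets * 0.01             # worst-case base size after 4 years
total_mb = base_mb * 4                       # with segwit's 4x witness allowance

print(retarget_days, retargets, base_mb, total_mb)  # 2.04 MB base, 8.16 MB total
```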

if there were hard consensus to move to dynamics.. then it's much better to use that opportunity to unite segwit AND native keypairs in the same area.
EG 4mb for both native and segwit to sit in, in a single-merkle block, then have it increase by x% a fortnight.

people running speed tests know that current modern baseline systems (raspberry pi 3), the average internet speed of 2017, and all the efficiencies since 2009 (libsecp256k1) have revealed that 8mb is raspberry-pi average-home-user safe..

so starting with a 4mb single-merkle block would be deemed more than safe.

and remember:
even a 4mb rule DOES NOT mean pools will make 4mb blocks instantly..
just like they didn't make 1mb blocks in 2009-2014, even with a 1mb allowable buffer..
pools did their own risk analysis and made their own preferential increments below the consensus limit.