There's only so much schmoozing you can do. I highly doubt at this rate we'll see segwit take off. It's too bad but ugh, the 95% that was decided on was a stretch from the get-go.
gmaxwell has a trick up his sleeve.. he can cause a hardfork to get a softfork activated, by deliberately orphaning blocks that oppose segwit. not because the block data is bad.. but purely because it's not voting in favour of segwit. If there is some reason why the users of Bitcoin would rather have it activate at 90% (e.g. let's just imagine some altcoin publicly raised money to block an important improvement to Bitcoin) then even with the 95% rule the network could choose to activate it at 90%, just by orphaning the blocks of the non-supporters until 95%+ of the remaining blocks signaled activation.
so funny that blockstream/core said that all the segwit data was good and backward compatible, but are now suggesting everyone should upgrade to full nodes and that they can even orphan off blocks to get their way (a hard fork).. if the data was truly backward compatible there should be no reason to orphan.. that's the whole point of why a softfork was promoted and how it was promised to the community: to not cause disruption by orphaning.
|
|
|
You also made up the 5% as arbitrary.
seriously?? you've been fanboying core for a year and you don't know where the 95% i mentioned originated.. it was core that set the bar so high. but have a nice day

blah, my personal number that i think is safe differs from blockstream's 95%.. but i only mentioned 95% because blockstream would send in the usual intern centralist bandwagon if i said anything different. so to avoid argument i just used their numbers so they can't argue.

anyway, when user nodes set their settings the consensus is measured, and the pools are the ones that decide when to push out blocks with the least risk. yes, pools choose when, as it's in their interest not to lose $12k in 10 minutes.

logically, even if there is a clear majority (pick any random majority number you like).. pools will then do their own flagging of intent. EG imagine the nodes' new limit will be 1.3mb by large majority consensus.. then the pools' flag also has majority consensus to say yes too.. but when activating it, they are going to be smart. first block after activation: 1.001mb, then slowly get to 1.3mb over time. they are not irrationally going to push out a 1.3mb block the very next block after activation. they will test the water. it might take 2 days and 2 hours at 0.001mb increments per block (2 days * 144 blocks per day = 288, + 12 blocks in 2 hours = 300 adjustments) to see the orphan risk as it climbs to the new 1.3mb limit. that's the logical and safe way.
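the ramp arithmetic above is easy to check. a minimal sketch, where all the numbers (1.3mb target, 0.001mb steps) are the post's own illustration of how pools *might* behave, not any actual protocol rule:

```python
# Hypothetical gradual ramp: pools step the block size up by 0.001mb
# per block from just above 1mb until a new 1.3mb limit is reached.
START_MB, TARGET_MB, STEP_MB = 1.0, 1.3, 0.001
BLOCKS_PER_DAY = 144  # ~one block every 10 minutes

steps_needed = round((TARGET_MB - START_MB) / STEP_MB)
days_needed = steps_needed / BLOCKS_PER_DAY

print(steps_needed)           # 300 small increments
print(round(days_needed, 2))  # ~2.08 days: 2 days (288 blocks) + 12 blocks (~2 hours)
```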
|
|
|
Good job Roger Ver, it looks like SegWit has absolutely no chance at all of getting anywhere near 95%. I doubt they will ever get over 50%.
SegWit is not Bitcoin. SegWit is an altcoin.
I feel it is my duty as a concerned citizen and valuable contributor to the Bitcoin community to declare SegWit dead on arrival.
no point giving Roger Ver the blame or fame. Ver only had ~10%, so don't let blockstream make you believe it was Ver alone.. it's the other 65% that were not in agreement with blockstream as well... but we all know blockstream are gonna ply lots of investors with alcohol and free drinks in miami this month. so we will soon see if those investors linked to mining farms suddenly change their minds after a bit of blockstream ass kissing
|
|
|
You also made up the 5% as arbitrary.
seriously?? you've been fanboying core for a year and you don't know where the 95% i mentioned originated.. it was core that set the bar so high. but have a nice day
|
|
|
most EU/UK debit cards are actually bank account cards. so exchanges would ask you to wire transfer them funds using that facility of your bank account, rather than using the long mastercard number.
|
|
|
There are Chinese mines relying on hydro power, as we've seen on this forum, and getting really cheap rates. The EU has funds for countries to take advantage of their wind power and install wind turbines. The USA is aiming to become the number one battery producer with Tesla (and SolarCity providing solar panels)... Mining can only get greener over time: there will be more and more ways to use green energy and it will be cheaper to do so, thus more profitable... So no wonder miners are going green.
electric is not the issue. it's the cost of buying the asic. EG ~$450 electric (USA)... asic $2100. it's the asic that puts a real dent into ROI for miners. you'd have to be an asic manufacturer making a rig for $400 just to break even.. we all know antpool is doing it and loving it.
|
|
|
The people on your list read like the United Nations of Bitcoin. Just like the venerable dignitaries of the United Nations, they are important people, certainly, but powerless and not really in control of anything.
united nations of hyperledger..
|
|
|
for instance if there were more results than just 10:
1.2 2 1.2 5 17 3 6 8 1.3 1.6 1 2 1.2 5 17 3 6 8 1.3 1.6
where 95% wanted 1.2 min but there was 5% holding back. then pools would weigh up the need for more buffer vs orphan risk (the 5% lagger) and then decide to push on for more and leave the lagger behind, having to tweak their setting up to be part of the network or be left unsyncing (the standard 95% consensus even core/blockstream think is acceptable)
No, if it's the least common denominator then it takes only 1 person to fuck it up. If 3999 nodes agree on 2mb, but 1 node doesn't, then it's still 1mb. And if you add some weight to it, then it's too arbitrary. The median is a good choice. In this example of yours: 1.2 2 1.2 5 17 3 6 8 1.3 1.6 the median is 2.5, which means it's the consensus of 60% of the nodes. Or you can use percentiles, where the 50th percentile is the median; but if you think the median is too high, then use the 40th percentile: 1.84mb, or the 25th percentile: 1.375. The 25th percentile is literally a consensus of 75% in this case, where if we round it up due to small sample size it's 80%.

using the median and then finding a random size after that is foolish. imagine your numbers..
randomly saying that 25% = 1.375mb.. no.. 1.2 1.2 1.3 are all excluded, meaning it's a 30% node drop/orphan risk based on the 1.375mb figure.
randomly saying that 40% = 1.84mb.. yes.. 1.2 1.2 1.3 1.6 are all excluded, meaning it's a 40% node drop/orphan risk based on the 1.84mb figure.
where and why you would choose a random number like 1.375 or 1.84 is another variable of debate.. after all, a 2mb node (the next number after 1.6mb) will be wondering why halt at 1.84mb if no one is saying they can't cope with 1.85-1.99. there is too much iffyness and orphan risk with medians, especially the way you played around afterwards to get your magic numbers.

now i think about it, it's not "least common figure" i was thinking of. it's to sort the amounts into ascending order, take off 5% of results from the smallest end, and then whatever the lowest remaining number is, that's the acceptable size (the lowest of the 95% is the new buffer size). EG
1 1.2 1.2 1.2 1.3 1.3 1.6 1.6 2 2 3 3 5 5 6 6 8 8 17 17

also, where you said 1 node can hold it up: i said 5%, so based on 5000 nodes, more than 250 nodes would need to hold at 1mb to hold it up, not just one node.

the benefits of segwit are exaggerated. the most foolish thing is letting someone make a TX with 20,000 sigops, and then crying that tx's using many sigops take longer to process.. the logical solution is to restrict sigops so that bloated tx's don't take up too much blockspace, which also cuts down on processing time.
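for comparison, the competing aggregation rules can be computed directly on the ten example settings above. a minimal sketch; note that percentile figures depend on the interpolation convention used, which is part of why the quoted 1.375/1.84 numbers are debatable:

```python
from statistics import median, quantiles

# the thread's ten example node max-blocksize settings (MB)
settings = [1.2, 2, 1.2, 5, 17, 3, 6, 8, 1.3, 1.6]

m = median(settings)                        # 2.5
too_small = [s for s in settings if s < m]  # nodes whose cap is below the median
# first quartile with 'inclusive' interpolation matches the quoted 1.375
q25 = quantiles(settings, n=4, method='inclusive')[0]

print(m, len(too_small), q25)
```

with a 2.5mb median, every node whose own cap is below 2.5 could not accept such blocks, which is the orphan-risk objection made above.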
Yeah but we have PhDs working on this. So I kind of trust them better, to be the better experts on this. The miners can flip flop, but the devs are experts in IT or Network Engineering.

people can have a PhD in anything.. bitcoin physics and theoretics is not covered in their syllabus. also some had their PhDs before the millennium, so don't expect the tech they learned about to be the same tech available today.. IT PhDs get outdated faster than most people's wives. don't blindly trust someone because of qualifications, without understanding what that qualification actually taught or didn't teach. do you even know when these PhD guys got their qualifications and what technologies were available at the time?
|
|
|
you do realize that when you accept a donation through a bitcoin address there is nothing private about it.
he may mean display his address publicly but have a private admin page to see progress. or he may mean private as in personal, meaning not hosted on some service people need to visit, but on his own personal site
|
|
|
is the average tx size increasing over time? because i remember it was 300 bytes before, which also leads to more fees of course
the numbers for average tx bytes over 8 years have changed. EG 2009 was under 250 and it became more over time. the OP's data in the spreadsheet is only 1m tx's, which is well under 500 blocks = less than a week of data, so it's not going to reveal much long term change, just short term.

i did do a few selective averages:
0-335k tx's = 447 bytes
335k-666k tx's = 473 bytes
666k-1m tx's = 454 bytes
and they all average about 447-473 bytes.

is the average tx size increasing over time? because i remember it was 300 bytes before, which also leads to more fees of course. if i remember correctly the size of a tx is only based on how many inputs you receive and some bytes from the output. does this mean that many are doing a few big transactions and receiving many small ones? correct?
using old legacy transactions: ((148 * inputs used) + (34 * outputs used)) +-10 variance = tx size estimate.

as for multisigs, well that's a whole different calculation to work out the bytes of a tx, as there are more variables involved. i'm sure someone else has found a workable calculation for multisigs. but to answer your question: multisigs do use more bytes per tx, if you compare a 2-in 2-out multisig to a 2-in 2-out legacy tx.

oh, and let's not forget LN settlements, which will also include extra bytes for CLTV and CSV data, bloating a tx even if it's still just 2-in 2-out. yep, segwit suggests more tx space, but then LN settlements refill that space with larger tx's..
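the legacy formula above can be wrapped in a tiny estimator. a rough rule of thumb only (real sizes vary a few bytes either way with signature encoding, and it does not cover multisig or segwit):

```python
def estimate_tx_size(n_inputs: int, n_outputs: int) -> int:
    """Approximate byte size of a legacy (non-segwit, non-multisig) tx:
    ~148 bytes per input + ~34 bytes per output + ~10 bytes overhead."""
    return 148 * n_inputs + 34 * n_outputs + 10

print(estimate_tx_size(2, 2))  # ~374 bytes for a 2-in 2-out legacy tx
print(estimate_tx_size(1, 1))  # ~192 bytes for the smallest common shape
```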
|
|
|
that calculation is based on 2015/2016 data (the motherboard.vice article was written march 2016 using data from before that date)
here is an update using hashrate of this week
today's hashrate converts to at most 200,000 S9 ASICS (<2.6m Thash)
at 1.3kw per asic = 260,000kw (260MW), i.e. 260,000kwh each hour
asia's 5cent electric = $13,000 an hour USA's 10cent electric = $26,000 an hour UK's 20cent electric= $52,000 an hour
asia's 5cent electric = $312,000 a day USA's 10cent electric = $624,000 a day UK's 20cent electric= $1,248,000 a day
asia's 5cent electric = $113,880,000 a year USA's 10cent electric = $227,760,000 a year UK's 20cent electric= $455,520,000 a year
enjoy
by the way UK miners are rare.. so the world's combined electric cost per year is between $113m-$227m, much less than last year's estimate
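the table above follows from straight multiplication. a sketch reproducing it, using the post's assumed fleet size and per-unit power draw:

```python
UNITS = 200_000          # the post's "at most 200,000 S9 ASICs"
WATTS_PER_UNIT = 1_300   # ~1.3kw per S9-class asic
total_kw = UNITS * WATTS_PER_UNIT / 1000  # 260,000 kw = 260 MW

costs = {}
for region, cents_per_kwh in [("asia", 5), ("USA", 10), ("UK", 20)]:
    per_hour = total_kw * cents_per_kwh / 100
    costs[region] = (per_hour, per_hour * 24, per_hour * 24 * 365)
    print(f"{region}: ${per_hour:,.0f}/hr  ${per_hour * 24:,.0f}/day  "
          f"${per_hour * 24 * 365:,.0f}/yr")
```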
|
|
|
first, you can get the total balance of the address here:
https://blockchain.info/q/addressbalance/<insert address>
this will give you the balance in satoshis, so divide it by 100,000,000 to get how many bitcoin you have.

then you can get the dollar price of a whole bitcoin:
https://blockchain.info/q/24hrprice
and do the math to get the dollar value of the balance: balance * price

then you do the math of your target: (100 / target) * dollar balance

and then use any common website progress bar code for the part that the front end (user) will see
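the math described above is simple enough to sketch. the two blockchain.info query endpoints are the ones quoted in the post (they may change over time); the arithmetic is kept in a pure function here so it works without network access:

```python
# Donation progress math: balance (satoshis) -> BTC -> USD -> percent of goal.
# Fetch the inputs from the quoted endpoints, e.g.
#   https://blockchain.info/q/addressbalance/<address>  (balance in satoshis)
#   https://blockchain.info/q/24hrprice                 (USD per BTC)
SATOSHI_PER_BTC = 100_000_000

def progress_percent(balance_satoshi: int, usd_per_btc: float,
                     target_usd: float) -> float:
    btc = balance_satoshi / SATOSHI_PER_BTC
    usd_balance = btc * usd_per_btc
    return (100 / target_usd) * usd_balance  # same form as the post's formula

# e.g. 0.5 BTC at $1,000/BTC toward a $2,000 goal -> 25%
print(progress_percent(50_000_000, 1000, 2000))  # 25.0
```

the returned percentage can then feed any off-the-shelf progress bar widget on the front end.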
|
|
|
blockchains have been around a lot longer than people think. but how satoshi utilised it made it innovative.
kind of like wheels have been around for thousands of years. but how the first combustion engine maker utilised them made vehicles revolutionary (as in the wheels and round turning cogs/gears concept inside the engine)
so although (when you peel off all the add-on features satoshi patched together: PoW, hashing, transaction cryptography, etc.) the blockchain concept is very simple and not revolutionary (data linked to other data, blocks of data chained to other blocks).. blockchain does open up a new direction of possibilities.
so: is blockchain innovation? yes. is blockchain revolutionary? no.
is bitcoin innovation? yes. is bitcoin revolutionary? yes.
|
|
|
I have to say this , I start to agree with you more and more.
For example every node sets their maximum block size they can handle, and then the lowest common denominator, or the median will be used as block size.
So if there are 10 nodes for example and they set respectively: 1 2 1 5 17 3 6 8 1 1
Then the median: 2.5 mb block can be set, that will satisfy most nodes.
median.. hell no.. for instance if it was the median, 4 out of 10 would be cut off and not syncing. what it would be, as it already is, is the lowest common denominator. meaning using your numbers it would stick to 1.. (like today we already have a few nodes at 2->16, but imagine people slightly adjusted their numbers as time went on)

EG 1.2 2 1.2 5 17 3 6 8 1.3 1.6 .. pools will now make blocks at 1.2
then EG 1.5 2 1.5 5 17 3 6 8 1.5 1.6 .. pools will now make blocks at 1.5
then EG 1.7 2 1.7 5 17 3 6 8 1.7 1.7
and each time.. EVERYONE is happy.

or for instance if there were more results than just 10 (eg over 5000):
1.2 2 1.2 5 17 3 6 8 1.3 1.6 1 2 1.2 5 17 3 6 8 1.3 1.6
where 95% wanted 1.2 min but there was 5% holding back at 1. then pools would weigh up the need for more buffer vs orphan risk (the 5% lagger) and then decide to push on for more and leave the lagger behind, having to tweak their setting up to be part of the network or be left unsyncing (the standard 95% consensus even core/blockstream think is acceptable).

However Segwit should still be implemented. Segwit has other features that are important. And after that passes, we could try to implement this one.
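the "lowest of the remaining 95%" rule sketched in this exchange fits in a few lines. the 5% cutoff and the 20-node list are the thread's own illustration, not an implemented bitcoin rule:

```python
def consensus_size(settings, drop_fraction=0.05):
    """Sort node settings ascending, ignore the bottom drop_fraction,
    and return the smallest surviving value as the agreed block size."""
    ordered = sorted(settings)
    cut = int(len(ordered) * drop_fraction)  # low outliers to ignore
    return ordered[cut]

# the 20-node example: the single 1mb holdout (5%) gets dropped,
# leaving 1.2mb as the lowest setting of the remaining 95%
nodes = [1, 1.2, 1.2, 1.2, 1.3, 1.3, 1.6, 1.6, 2, 2,
         3, 3, 5, 5, 6, 6, 8, 8, 17, 17]
print(consensus_size(nodes))  # 1.2
```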
the benefits of segwit are exaggerated. the most foolish thing is letting someone make a TX with 20,000 sigops, and then crying that tx's using many sigops take longer to process.. the logical solution is to restrict sigops so that bloated tx's don't take up too much blockspace, which also cuts down on processing time.
|
|
|
Ok. i'd just love to see someone against block size rise discuss with you, I'd learn a lot ![Grin](https://bitcointalk.org/Smileys/default/grin.gif)

many have tried rebutting me, but just used insults and not logic, not stats, not real life scenarios. but yes, don't take anyone's info at face value; run your own scenarios.

the short gist of blocksizes is that with dynamic blocks (node users set the blocksize buffer setting themselves) anything under that setting is acceptable. EG some people already have theirs set at 2mb, 4mb and up, and they are running on the network accepting (1mb) blocks now, perfectly fine, no issues.

using consensus (a feature in bitcoin): if the majority flagged a certain level, then pools would flag a level they are happy with too, and it grows only at a rate the majority are happy with. so rational minded people know it won't jump to gigabytes overnight but grow at a natural pace that nodes can cope with. after all, if they can't cope, they won't flag desire for it, and any increase won't activate, due to consensus not being reached.

the only issue is core are withholding an implementation of that sort of feature to allow users to self adjust. thus we are stuck at 1mb until those running core are spoonfed whatever core devs want to give their users.

so take the blocksize doomsday rhetoric with a large pinch of salt. it's just core devs (blockstream paid + lots of unpaid interns) holding things back, because they prefer pushing users into centralist contracts where they can make money charging fees and revoking payments
|
|
|
I suppose the network bandwidth is the weakest link here.
In that case, what if the block size increase were pegged to half of the global average bandwidth increase?
So if bandwidth increases by 5% yearly, then we can increase block size by 2.5%. How about that?
what if i told you that using dynamic rules AND consensus, nodes only flag desire to increase when they can handle it, and it only increases if the majority can handle it. they all set their own max buffer flag, and blocksizes only grow to a scale the majority can happily cope with. meaning it will not surpass what people can cope with, because if larger sizes can't be coped with, nodes won't flag desire for them.. we don't need devs to spoonfeed what they feel/desire when the network itself can do it. devs have already said 8mb is safe but they prefer their 4mb weight (compared to their old fake doomsday rhetoric that 2mb was bad), so there is no reason to keep the baseblock at 1mb
|
|
|
if you are antpool/bitmain/bitfury etc.. the cost is zero.. thanks to their retail sales of asics.. well kinda
at most it's just the electric price, which is ~$225 per 13thash asic for 6 months. but this can also be offset thanks to their retail sales too
this is because the retail price they sell their ASICs at is 4-5x the cost of manufacturing one. so for every asic sold, antpool can keep 3-4 asics themselves, or keep 1 asic and have spare funds for electric, wages and shelter lease.
but if you are in the west having to buy an asic and pay the electric
first it's $2100 for a 13thash asic (pre-christmas it was just $1600/asic), then the electric, shelter lease, salary and PSU need to be added.
but first the electric:
if you are in the US it's ~$450 for 6 months electric per asic (totalling ~$2700 all included for 6 months running time, or ~$3100 for a year)
if you are in the UK it's ~$900 for 6 months electric per asic (totalling ~$3100 all included for 6 months running time, or ~$3500 for a year)
some maths was done before christmas that calculated it would take 1 asic 6 months to mine 1btc. so if asic manufacturers were to pay the electric (~$225, add a bit more for wages/shelter leasing), then $300 sounds about right, thanks to the freebie asic they get from their retail business
but.. here is the but.. the hashrate competition and the difficulty has changed since then. antpool alone opened a new facility and their hashrate has jumped considerably, as has the network difficulty.
the network difficulty has jumped ~29% since the previous calculation that predicted 6 months for 1btc/asic.. the network hashrate has jumped ~30% since then..
factoring in other things, like known hashrate/difficulty changes: 6 months now only gets you 0.89btc, and it takes a further 2 and a half months to get to 1btc.
and yes, i've done calculations that step down the income per fortnight depending on difficulty/hashrate competition variation. those fortnightly step-downs really affect income as time passes.
summary:
for asic manufacturers it's between $0-$400/btc if including electric, leases etc (compared to pre-christmas $300)
for asic consumers it's between $2900(US)-$3500(UK) (compared to pre-christmas $2100(US)-$2600(UK))
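the summary figures follow from a simple cost-per-coin calculation. a sketch using the thread's own rough estimates (these are the post's numbers, not measured data):

```python
def cost_per_btc(asic_usd: float, electric_usd: float, btc_mined: float) -> float:
    """Total outlay over a period divided by the coins mined in it."""
    return (asic_usd + electric_usd) / btc_mined

# manufacturer: the asic is effectively free (funded by retail margins),
# ~$225 electric plus overhead -> the post's "$300 sounds about right"
print(cost_per_btc(0, 300, 1.0))      # 300.0 per btc

# US consumer: $2100 asic + ~$450 electric, now only ~0.89 btc per 6 months
print(cost_per_btc(2100, 450, 0.89))  # ~2865 per btc over the 6 months
```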
|
|
|
based on the million tx's (mine stopped at 1,048,575 results):

average tx confirmed in ~30-40 mins
average tx size 458 bytes
average tx fee 36892-41750 sat (depending on whether you include or exclude the 0-fee tx's in the average)

average fee per byte is 91 sat/byte
max fee per byte in the range: 34883 sat/byte
min fee per byte in the range: 0
-- as for the max fee: either the source data has an error or someone last week paid A LOT for one of their transactions

max tx size 98888 bytes (98.9KB)
min tx size 170 bytes
-- as for the max bytes: either the source data has an error or someone last week had a near-99kb tx (filling 10% of a block with 1 tx)
|
|
|
Because now I don't understand why so many people are against size block rise,
when 12 paid blockstream centralist devs hand out scripts to their 90 interns, who desire to impress the paid guys in the hope of getting a job by showing loyalty, and each of them goes out repeating the "blocksize rise is doomsday" rhetoric.. many people believe it, not because it's true but because they are hearing the same story from 100 different people. so they don't even bother to individually investigate, research or run scenarios, but accept the spoonfed info at face value due to "trust". after all, their mindset is '100 people can't be wrong'..

mark twain: "a lie can travel around the world, while the truth is putting on its shoes"

you can usually spot the scripts, when they throw out phrases like "we are conservative" or "go fork your altcoin" at anyone that's not core friendly. it's like blockstream are taking a lesson from mass media:
https://www.youtube.com/watch?v=eZVv2AOCnaA
https://youtu.be/jH8dejYGa5A?t=36s

funny part is, this rhetoric that's passed around lacks stats or real life data. but people will sheep-follow just out of trust: if 10 people say the same thing, even with no stats to back it up, it must be true.

they even went out of their way to say that although satoshi said 32mb is ok, the chinese firewall has issues far below 32mb. the problem with that theory is that the chinese firewall does not have issues. in fact chinese pools have stratum servers outside the firewall that hold the actual block data and only need to send a small hash (a few bytes) to the ASIC farms.. thus ASIC farms inside china are not even handling the real blockchain data anyway (no ASIC has a hard drive). thus killing off the chinese firewall theory/doomsday rhetoric.. yep, the 8mb number is to do with the chinese firewall, but the chinese pools have mitigated that risk. though for now the community are only seeking 2mb-4mb, so we can delve deeper into the 8mb theory at a later date. but all in all the community think baby steps are safe. which we all agree..

even as far back as 2015 we all thought we had settled our differences and agreed that 2mb was fine. but.. it's the blockstream centralists and their sheep that are the ones throwing out the gigabyte-blocks-by-midnight doomsday, not anyone else (with at least half a brain)
|
|
|
It's good there are some people who actually do such stuff. And if they do free seminars for people, they must have good sponsors, which also means there are rich influential people interested in spreading knowledge about btc. Maybe they gain something from it. Maybe they just think it is something people have to know (like culture or math). In my 3rd world country I studied at a technological lyceum, and I believe right after my graduation some people came to the school and did some small introductory seminars for high school students about btc and it being profitable. That is nice, I guess.
learn enough about BTC and you can become the lecturer. you can make a nice income, but it's worth doing a few small seminars for free to get your name known locally first.. it's how andreas antonopoulos started off, and now he is making a career out of it, fully funded by bitcoin.. paid for by conference ticket purchases (thus even organising events is no big cost to get lecturers in)
|
|
|
|