Bitcoin Forum
May 24, 2024, 02:20:53 AM *
News: Latest Bitcoin Core release: 27.0 [Torrent]
 
16601  Bitcoin / Bitcoin Discussion / Re: Last Time. In small words. Why 2MB impossible. Why Soft Fork. Why SegWit First.. on: May 05, 2017, 12:32:27 AM
Yep, maybe. What I meant was that big spam transactions would cost them an amount of hashing power that they - if they ignored these transactions or give them a very low priority - could use better to find blocks faster and get an advantage over their competitors.

an ASIC does not have a hard drive.. it does not matter to an ASIC if a block is 250 bytes or a gigabyte, the "hashing" is the same
an ASIC is just given a hash and rehashes it.

data or bloat does not hinder ASICs one bit.. it only hinders the pool/server that validates/relays full block data.



2 MB + SW in my idea would occur in >2019. If Bitcoin's growth continues at the same speed as until now (30-50% transaction volume growth per year) then we could see pretty full mempools then. OK, maybe not if sidechains or extension blocks are functioning.
I don't agree with the instant jumping visions anyways. Why not 1.2 MB now, 1.4 MB next year and so on, until we hit 2 MB? These kind of approaches make more sense to me.
it's taken years of debate and still no guarantee of moving the block size once.. do you honestly think moving to 1.2mb is going to benefit the network, and then have another few years of debating to get 1.4mb..

if you're talking about progressive blocksize movements that are automated by the protocol and not dev decisions per change.. then you are now waking up to the whole point of dynamics.. finally you're looking past blockstream control and starting to think about the network moving forward without dev spoon feeding. finally, it only took you 2 years (even if you think that hard limiting it at silly low amounts is good)

give it 2 more years and you will wake up to a hard limit of 4mb and a soft limit that moves up in increments.
EG
like the last 8 years (replace hard with consensus and soft with policy, and you will start to understand)
1mb consensus 0.25mb policy 2009-2011
1mb consensus 0.5mb policy 2011-2013
1mb consensus 0.75mb policy 2013-2015
1mb consensus 0.99mb policy 2015-2017
to become
4mb consensus 2mb policy 2017-2018
where policy grows

oh and guess what.. pools never have just jumped from 0 to 0.25.. or 0.25 to 0.5..
even when policy allowed it, pools took things cautiously to avoid orphan risks

so say
4mb consensus 2mb policy 2017-2018 was implemented
pools won't make a 2mb block the very first block after activation. they would test the water with 1.000250mb to see the risks, timing issues, bugs, orphans etc.
and increment from there.

you may argue "but what's to stop a pool jumping to 4mb".. well the same reason pools didn't jump straight to 1mb.. they instead went up in safe increments to protect themselves from orphan risks and other issues (as my last paragraph explained)
also that's where nodes would have an extra safeguard.. but i'll leave you to take a few years to realise the extra safeguard. which is what dynamics is all about.

so go spend 2 years shouting nonsense/irrelevant until it finally dawns on you
have a nice 2 years
16602  Bitcoin / Bitcoin Discussion / Re: Andreas redpills /r/btc loons on: May 04, 2017, 11:37:36 PM
1 line of code vs 5000 whatever lines of Segwit.  My Choice is clear.

well, anyone that's written an implementation that's clean and knows the purpose of header files
would be able to change to blocks over 1mb with 1 line of code in a header file.

yet the kludge of core ends up needing changes to multiple functions in at least 4 files. which can then cause a snowball effect of other issues if those functions were needed for other things..

in short, by being kludgy they dug themselves into a hole and can't get out of it with 1 spade. so rather than dig themselves out or start again with clean code, they are trying to get people to follow them into the hole
16603  Bitcoin / Bitcoin Discussion / Re: Last Time. In small words. Why 2MB impossible. Why Soft Fork. Why SegWit First.. on: May 04, 2017, 11:22:46 PM
You can't harm the network with sigops at 1 MB.

you can. think of sigops as another "limit" that, once filled by spam, lets nothing else get in

Quote
unsigned int GetLegacySigOpCount(const CTransaction& tx)
{
    unsigned int nSigOps = 0;
    BOOST_FOREACH(const CTxIn& txin, tx.vin)
    {
        nSigOps += txin.scriptSig.GetSigOpCount(false);
    }
    BOOST_FOREACH(const CTxOut& txout, tx.vout)
    {
        nSigOps += txout.scriptPubKey.GetSigOpCount(false);
    }
    return nSigOps;
}

we all know a tx's size in bytes is made up of roughly (148*in)+(34*out) (±10 bytes)

so lets make a tx that has 4k sigops
a) 3999input:1output= 591886bytes~(4ksigops)
b) 1input:3999output=136114bytes~(4ksigops)

5tx of (b)=680570bytes~(20ksigops)

screw it. i know there are many nitpickers
c) 1input:2856output=97252bytes~(2.857k sigops)
7tx of (c)=680764bytes(20k sigops)

so i can fill a block's sigops limit easily with 7tx of (c)
and although it's only 7tx, and only 0.68mb of data.. no other transactions can get into the base block.. not even segwit tx's
16604  Bitcoin / Bitcoin Discussion / Re: Last Time. In small words. Why 2MB impossible. Why Soft Fork. Why SegWit First.. on: May 04, 2017, 10:44:25 PM
Malleability Fixes - not fixed. just offering a 'opt-in' keypair type thats disarmed from doing malleability (only the innocent will happily disarm)
It is fixed. Nobody has claimed that it was fixed for the legacy keypairs.

it's not fixed.
the problem with quadratics/malleability is that malicious people will use them to do malicious things.
unless malicious people CANNOT do it, it's not fixed.

EG it's illegal to use drugs in most countries..
that does not mean the war on drugs is fixed by some rule, because people still use and sell drugs.

unless there was a way to permanently guarantee that no one can sell/use drugs, the war on drugs is not fixed.

segwit is not a fix for the war on quadratics
it's not even a prohibition (think about the 1920's alcohol prohibition)
it's just a gesture rule where people can voluntarily move to a gated community that voluntarily wants to never touch quadratics. and it will turn out that the only people wanting to move are the people that never intended to spam in the first place
16605  Bitcoin / Bitcoin Discussion / Re: Last Time. In small words. Why 2MB impossible. Why Soft Fork. Why SegWit First.. on: May 04, 2017, 10:09:10 PM
Malleability Fixes
Linear scaling of sighash operations
Signing of input values
Increased security for multisig via pay-to-script-hash (P2SH)
Script versioning
Reducing UTXO growth
Efficiency gains when not verifying signatures

Malleability Fixes - not fixed. just offering a 'opt-in' keypair type thats disarmed from doing malleability (only the innocent will happily disarm)
Linear scaling of sighash operations - not fixed. just offering a 'opt-in' keypair type thats disarmed from doing quadratics (only the innocent will happily disarm)
same for the rest.

but i'm glad you admit that adding code to force users to only pay using a certain transaction method is "equal to censorship".. now think about it
in regards to people saying
"Maybe they can be prohibited and all people owning bitcoins on non-Segwit keys have to transfer them to Segwit addresses."
"prioritise native->SW... SW->SW"
"Q: users move funds to segwit keys A: Which is guaranteed to happen"

also glad you're now seeing the truth that it's just an optimum HOPE, not a reasonable expectation, much like the 8 year hope of 7tx/s
"In regards to forcing people into Segwit addresses: While everyone using SW keys would be an optimal future, forcing them into doing this may set a dangerous precedent."

16606  Bitcoin / Bitcoin Discussion / Re: LN+segwit vs big blocks, levels of centralization. on: May 04, 2017, 09:49:49 PM
I run a non-mining full node; it entertains me to do so.  I configure it to allow 60 links (8 outbound (default) and 52 inbound).  Although it does vary, I do find my node runs along with near the maximum number of links all the time.  Well, after a restart, sometimes it can take many hours to build back up.

Although the miners could (and probably do otherwise how do they get transactions?) run a full node (or more than one for redundancy?), there's nothing obliging them to accept many incoming links.  Wouldn't that leave users fighting for limited connections without folks like me?  Should I increase my link count even more?  I configure my mobile phone based wallet to only connect to my full node exclusively.  Sometimes I can't get a link so I go to my full node a disconnect a peer (sorry).

So far I have plenty of bandwidth, CPU, and storage.  I guess I would be ok with a bigger block but would be very happy with a cap on transaction size.  Am I helping at all to keep things decentralized?

If I drop off then it might not be such a big deal but if other folks like me do then what?

most pools connect via things like fibre, or supernodes as they used to be called, and let the fibre/supernodes propagate the data out to all the other nodes, thus taking pressure off the pools from huge demand for direct connections from random users.

as for the random users and merchants that build up the symbiotic relationship of the diverse decentralised peer network that keeps pools in line and each other in line. that's more a question of the 8 degrees of separation.

for instance, if instead of 52 connections there were only 8.
if everyone had only 8 connections:
8*8*8*8=4096
the data would not propagate to everyone in 4 hops/relays (based on bitnode count ~7000)
8*8*8*8*8=~32k
the data would propagate to everyone in 5 hops/relays
10*10*10*10=10k
the data would propagate to everyone in 4 hops/relays
20*20*20=8k
the data would propagate to everyone in 3 hops/relays
84*84=7056
the data would propagate to everyone in 2 hops/relays

so i'll leave you to rationalise whether you should step up and be more of a supernode by going up to 84 connections as a nice healthy 2-relay number.
P.S. if you want to help the network sync, don't use pruned/no-witness features. otherwise people can't grab full data from you.

if you want to use pruned/no-witness, then just allow 8 connections and sit at the bottom-end cesspit of nodes that can't reliably sync with each other
16607  Bitcoin / Bitcoin Discussion / Re: Andreas redpills /r/btc loons on: May 04, 2017, 09:18:03 PM
My argument is, even if miners got BU or some other proposal through that gave them blocksize control, they are not going to raise the blocksize over an amount where they would lose out on rising transaction fees. Meaning, if you want bigger blocks so fees return to .10 cent, it won't happen. If you think they'll cripple one source of income, you're living in la la land.

1. pools don't NEED fees today. yes it's a bonus that can fluctuate, but it's not their main income. the reward is their main income.
the switch of income:bonus (from reward:fee to fee:reward) won't happen for DECADES

2. pools would prefer to do things nodes accept, to ensure they get at least the reward. so pools are not gonna push any new rule or rock the boat with any new feature unless they know it's not gonna get orphaned or going to cause spendability issues with certain merchants, which they prefer to spend the rewards on.

3. you may scream that coinbase might be segwit positive but what if the merchants/exchange/private investor they trade with prefers something else... think about that!! (do you even know what method a pool uses to get its fiat)

4. to highlight point one. pools have and will find the best ways to be efficient and within acceptable rules to get their blocks accepted. even if it means abstaining from a new rule change, even if it means starting a new 'empty block' while verifying a competitors solved block from a previous round.

5. i told you that getting today's 'bonus' from users paying 2014 fees would require an 8mb block. but as i have said for months, pools don't see the fees as a NEEDED income. it's just a bonus. what is more important is getting their block accepted by the nodes first. because 12.5btc is more important than 1.5btc..

6. emphasising point 5: screwing around trying to get more than x fees by increasing the risk of losing the 12.5+fees.. is like a walmart employee trying to screw $10 out of a cash register each month while risking losing a $1k a month job... it's just not logical to try being greedy


Experts in the world of cryptography have long come to the conclusion that the optimal block is 2 megabytes, although Chinese miners think in larger volumes. But as far as I know this will not happen.

core guessed and gave fake reasons for 2mb years ago
better experts have since found 32mb can work; 8mb is the general worldwide no-issue acceptability..
core accepted 8mb was a good safe number
core/blockstream prefer 4mb to be extra cautious because they know their compact blocks might need to ask twice for data now and again
core/blockstream prefer 1mb base with fake reasons of 'but pools NEED their fees'.

strangely, core removed lots of CODE that allowed for reasonably controlled fees, thus allowing the fees to get so out of control.
yep, even in low demand, gmaxwell's 'average fee' concept keeps fees up. it's not reactive enough to low demand to make the price drop when demand is low
core stopped acting like devs and more like economists/bankers screaming "just pay more". yet core/blockstream have shown a lack of communication with pools to actually ask the question: 'what should core/blockstream do to make pools and nodes mutually happy' (the community)

all blockstream have done is ask 'would buying you a plane ticket and a seat at an exclusive bilderberg closed-door meeting buy blockstream your vote'.. yet 65% of pools, even when bribed with all-inclusive weekends, still abstain/say nay.. because pools can see the kludgy code of segwit
16608  Bitcoin / Bitcoin Discussion / Re: Just hypotetical and curious on: May 04, 2017, 03:05:46 PM
So it was basically the institutions' and governments' corruption which led to the crash, which gave the bitcoin concept the breeding ground to be born?

just like it took the titanic before boat makers decided to double-layer their hulls.

though many people have tried to make money alternatives for generations, it takes people who have the drive/ambition/knowledge/inspiration to really make it happen.

EG aids had been around for a long time, but it only became worth looking at ways to prevent/cure/treat once it really started affecting wealthy straight couples

EG smoking and cancer risks were not a big deal for the rich. until the rich guys started dying of cancer.
16609  Bitcoin / Bitcoin Discussion / Re: Is diversity in bitcoin client implementations a good or a bad thing? on: May 04, 2017, 02:40:49 PM
The only way for everyone to follow precisely the same rules is for every node to be share the same consensus implementation.

but if everyone was following the exact same code.. EVERYONE gets affected if there is a bug
EG the 2013 event happened because everyone was virtually core-managed..


however, imagine if there had been some 0.8 (leveldb) nodes and btcd (written in Go, with leveldb storage): the network would have continued to make blocks and only the 0.7 nodes (which would have been less of a majority if more diversity existed) would have been left unsynced. and it would have been far, far easier to just say "hey 0.7, time to upgrade"

by having diversity, only the non-rule-following nodes get kicked off the network or can't sync.
EG the assert bug was not a network-wide bug, because of diversity

thus bitcoin continues due to diversity and only the problem code implementation would stop.

diverse decentralised peer network has its reasons to be diverse and decentralised.


the only advantage of everyone centralising on the exact same code, line for line, is to make changes easily without veto, because everyone would be forced to change without choice.. which has its own risks and exploitability.
16610  Bitcoin / Bitcoin Discussion / Re: Is diversity in bitcoin client implementations a good or a bad thing? on: May 04, 2017, 02:33:39 PM
I thought that Satoshi gave the github keys to Gavin ?

nope

satoshi was working on sourceforge right up to when he left in december 2010

gavin however had his own repo on github that started something like june 2010.

when satoshi left, gavin was deemed the main go-to guy, so people started using his github repo, which he opened up for other people to use as well. he actually said in 2012-13ish that in a couple of years he may move on to other projects
which he eventually did, by giving the main maintainer keys to laanwj

gavin continued as just a contributor until the core guys decided to cut him off over the craig wright drama, with the pretence that gavin "must have got hacked"
16611  Bitcoin / Bitcoin Discussion / Re: Satoshi Nakamoto's stack on: May 04, 2017, 01:30:34 PM
A quantum attack on a hash is not very easy, compared to a quantum attack on public key crypto (RSA, Diffie-Hellman, or EC style).  A quantum attack on a hashed value still takes 2^(n/2) quantum iterations (so 2^80 for one single address).  As a quantum computer is a very delicate *analogue* machine, there's no reason to think that 2^80 iterations on a quantum machine will be faster than 2^80 iterations on a classical cluster (on the contrary).

On the other hand, a quantum attack on a public key takes about 3n iterations, so all elliptic curve, or factoring stuff is essentially dead.

put simply
sha is a very binary-heavy puzzle
ECDSA is a very vector-heavy puzzle

quantum computers can play with vectors easily compared to binary.
trying to solve a binary puzzle with a non-binary method and then get the result back in binary is not an efficient use of quantum.

some have estimated that solving a binary logic problem with quantum results in only a 2x efficiency, whereas a vector logic problem can be something like 256x efficient
16612  Bitcoin / Bitcoin Discussion / Re: Last Time. In small words. Why 2MB impossible. Why Soft Fork. Why SegWit First.. on: May 04, 2017, 01:11:42 PM
Those two can be rewritten into one point. The obvious solution, which I've been telling you about is, prioritizing native -> SW and SW -> SW transactions.
segwit only = bigger block ONLY IF:
1. users move funds to segwit keys
Which is guaranteed to happen

LOL guaranteed. LOL

do pools prioritise LEAN transactions to allow more transactions in.. nope
do pools prioritise mature transactions to evade spammers.. nope (spam: those that intentionally respend every block)
do pools prioritise transactions with fee.. nope (empty blocks/btcc zero fee confirm)

you HOPE and have FAITH that pools will.. but 65% of pools are abstaining or saying no to wanting segwit as a protocol. so they are not going to prioritise segwit transactions.

in short. no guarantee, no fix. just gesture, half expectations and faith
much like the expectation of
"if pools prioritise lean tx's we can get 7tx/s" (2009-2017).. yet in the last 8 years there has never been a block of 7tx/s

yes on testnet it can be seen, but that's testnet, where 1 person is creating the tx's in a scripted display of expectation.. when dealing with real world people using it for real world needs, reality does not reach expectation or hope

P.S. your "2.1mb" expectation is the exact same 7tx/s expectation that has been promoted since 2009.. but never reached
it's all ifs, maybes, half gestures, hopes, faith, trust.. not actual real rules that enforce it
16613  Bitcoin / Bitcoin Discussion / Re: Last Time. In small words. Why 2MB impossible. Why Soft Fork. Why SegWit First.. on: May 04, 2017, 12:54:09 PM
Segwit == block size > 1 MB.

The faulty understanding is that Segwit != bigger blocks. Just because it handles data differently, that doesn't mean that the blocks aren't bigger.

segwit only = bigger block ONLY IF:
1. users move funds to segwit keys
2. segwit keys get accepted into blocks
3. native spammers dont fill the base block with native spam

16614  Bitcoin / Bitcoin Discussion / Re: Last Time. In small words. Why 2MB impossible. Why Soft Fork. Why SegWit First.. on: May 04, 2017, 12:43:51 PM
There is no way that CoinBase would have added LTC without SW.

so charlie Lee working at coinbase has nothing to do with it.
16615  Bitcoin / Bitcoin Discussion / Re: Gavin Andresen: stay away from Blockstream, Greg and Samson are toxic trolls. on: May 04, 2017, 11:38:13 AM
I don't wanna get into the drama silliness but Mow really does troll pretty hard on twitter lol

he is well funded now, as a blockstream employee.

i guess bobby lee wasn't paying him enough, or the DCG cartel thought he would be more useful under the blockstream subsidiary rather than the btcc subsidiary

http://dcg.co/portfolio/#b

http://www.coindesk.com/blockstream-55-million-series-a/
Quote
Disclaimer: CoinDesk is a subsidiary of Digital Currency Group, which has an ownership stake in Blockstream.
16616  Bitcoin / Bitcoin Discussion / Re: Satoshi Nakamoto's stack on: May 04, 2017, 08:21:00 AM
not sure. according to this http://historyofbitcoin.org/ Hal was mining with satoshi very early, so you already have a competitor with satoshi

basically for all of 2009 there were at least two mining, but no sign of others mining. if they really mined for an entire year before others joined then yes, he mined much more than 1M even accounting for Hal there

within a couple of weeks of genesis there were at least 5 people mining.

within 6 months, a couple dozen at least.
figures get murkier after that.

yep, even theymos was around early on (using the sirius-m username)

if you want proof others were working on bitcoin in january 2009
Nicholas Bohm - http://satoshinakamoto.me/2009/01/25/re-bitcoin-list-problems/
hal finney - http://satoshinakamoto.me/2009/01/25/re-bitcoin-v0-1-released-2/

there's other names too, should anyone want to research it.
google is your friend
16617  Bitcoin / Bitcoin Discussion / Re: Gavin Andresen: stay away from Blockstream, Greg and Samson are toxic trolls. on: May 04, 2017, 07:55:59 AM
Yea, the master troll in action again. So you saying the users/miners should be in control of the coding and developing? How fucked up would that be, if the users and miners were in control of programming the code. If I were getting paid to shill for Blockstream, I would probably not be able to afford a McDonald burger with my post history. ^LoL^

Whoever has the best code, will have the users/miners support, or that is the theory.... Lately the people with the most money to spam the network and to use backdoors <ASICBOOST> seem to control the consensus. ^grrrrrr^

the only implementation that has bypassed user support.. is blockstream(core)
asic boost has nothing to do with it. just like opencl had nothing to do with any decisions of core back in the days of GPU mining.

imagine it. imagine core in 2012 wanted to change something but couldn't because it would cause issues with ATI's openCL. people would laugh at core if they started blaming ATI.
same goes for now.. if segwit hits a wall before being active, then REWRITE SEGWIT!!

it's all just finger-pointing drama, to get everyone looking in every direction apart from blockstream.
so just look at blockstream
EG blockstream made kludgy code instead of a clean node upgrade event.
EG blockstream made it so only pools get the vote ('going soft').
EG blockstream's going soft is an admitted backdoor exploit and they admit they want to add more backdoors to be able to go soft more often. (in the wrong hands it's called a trojan)
EG blockstream now crying because all those all-inclusive exotic weekends didn't buy the pools into flag waving by last christmas (due to 65% abstaining/objecting to the kludgy code)
EG blockstream now found out their 2-merkle kludge is not as compatible as first thought (so now asking abstainers/objectors to reprogram themselves, to use filter/bridging nodes, to fork off, to add code to avoid attack vectors segwit causes, even offering to PoW bomb just to get the kludgy code in)

yep. instead of just backing down and going for a plan B of a 1-merkle segwit with a proper block size increase for the entire network's benefit, blockstream still want to bypass the idea of a network consensus upgrade and go straight to a controversial bilateral split..

much simpler for blockstream to just redo segwit as a proper 1-merkle version, remove the kludge and add the other features the community want, instead of pointing fingers to blame pools and other implementations (which have done nothing for 2 years), wasting up to 3 years just to push the kludgy version (UASF 'late 2018' mandatory activation) and then making yet more half-baked promises that they will remove the kludge and make it a 1-merkle full proper upgrade after that.. (no one believes them anymore)

a 1-merkle rewrite with the extra features the community want, to unite the community, is safer AND FASTER to implement than the 2-merkle kludge and all the threats, half promises and features that won't 100% fix the network issues.. that blockstream are 'demanding get implemented or else'
16618  Bitcoin / Bitcoin Discussion / Re: Andreas redpills /r/btc loons on: May 04, 2017, 06:57:51 AM
Are you that stupid? Miners want competitive fees, they make more money that way. Empty blocks help miners create a backlog and force people to increase fees, that's what miners want, MORE MONEY.

Do you think miners want lower fees? You think they want to help users out? They want to control the blocksize so they can increase it when appropriate so they can make more money not to help users but to help themselves.

the empty block is not about causing a backlog intentionally. instead of waiting 10 seconds to a minute to validate a block before making a new block, it's about starting a new block WHILE validating the previous block. they are thus unable to add new tx's to the new block attempt because they are unsure if the first one is all valid..

what you find is that pools do this a lot. after validating the previous block they start adding tx's (each round of using up all the nonces), and the only time you really ever see an empty block is when they are lucky enough to get a solution within seconds (first round), before they've been able to start adding tx's to the block

BTW, you posted a lot of irrelevant information to my post AND you still didn't answer my questions: "Tell me what size of a block increase (and how many transactions) will reduce fees for users to the 2014 level while maintaining or exceeding current levels of miner fees? Or do you think miners are going to give up that money out of the kindness of their hearts?"
5. there is no need to EVER push fees to $1+ a tx. far better to naturally grow the blocksize in levels nodes can handle (even core admit 8mb is 'safe'), thus allowing a 2015 10c fee ($220 total) to become up to $1760 total just by allowing more 10-cent tx's in. not forcing $1 fees by holding the tx count down to cause an up-to-$2k total (which pools don't need right now anyway)
replace 2015 with 2014 and you have your answer.
mempool bloat changes.. but based on the last year, where mempools averaged 3-4mb, it would need 4mb blocks to bring congestion down. which, to get to or exceed the 2014 fee of up to 10 cents, would far exceed the totals of 2014 tx fee income.. obviously
16619  Bitcoin / Bitcoin Discussion / Re: Andreas redpills /r/btc loons on: May 04, 2017, 06:08:39 AM

Why would anyone want to use lightening when they can do on chain transactions?
 

Because, for a small payment, the on chain transaction will be too expensive. That is exactly what we want. I do not want kiddies paying for a Hamburger on our blockchain!! They can use LN for that...

Why use a bicycle instead of a car?? Why do you pretend to be stupid?

I can't tell if you're being sarcastic or not.

You WANT on chain transactions to be expensive?  If you really believe that, you've been brainwashed by Core.   This is just basic common sense: people will rather pay less than pay more for the same thing.  

A blocksize increase does not guarantee that on-chain fees will be low, and your precious BU does not guarantee that miners will even want to create larger blocks once they have the power to, instead of using smaller blocks to gain more fees. Why the fuck would miners even want bigger blocks when they make more money with smaller ones? Giving miners more power to manipulate Bitcoin is a bad idea that any non-shill can plainly see. You're the one that sounds brainwashed, by the big blockers.

A bit like saying why would companies build bigger factories when a small one would do?

Learn economics. Bigger blocks = more txs = more fees.

Ridiculous, here's why (From Blockchain.info):

Total transaction fees from May 2nd 2014 = $5830

Total transaction fees from May 2nd 2017 = $271,104

x46.5 increase.

Tell me what size of a block increase (and how many transactions) will reduce fees for users to the 2014 level while maintaining or exceeding current levels of miner fees? Or do you think miners are going to give up that money out the kindness of their hearts?

Not to mention, even with a backlog of transactions, miners are still producing empty blocks.

1. blockstream(CORE) removed all the fee-controlling code = core caused the fee rise, not pools.. core became bankers by not relying on code to control things and instead just shouted "just pay more"

2. blockstream(CORE) bypassed node consensus by going soft = core gave the only veto power over segwit to pools... pools didn't have control before core gave it to them. pools don't have control over other implementations' proposals

3. other implementations are sticking with the standard NODE and POOL symbiotic consensus that has existed since day one, and have made no threats of splits / PoW changing / banning nodes

4. re-implementing a new 'priority' formula could actually reward lean average users with cheap fees while penalising bloated repeat spenders (the move-funds-every-block spammers).

5. there is no need to EVER push fees to $1+ a tx. far better to naturally grow the blocksize in levels nodes can handle (even core admit 8mb is 'safe'), thus allowing a 2015 10c fee ($220 total) to become up to $1760 total just by allowing more 10-cent tx's in. not forcing $1 fees by holding the tx count down to cause an up-to-$2k total (which pools don't need right now anyway)

6. lastly, to debunk your mindset that pools want fees.. i will hand you your own words: "miners are still producing empty blocks". if they cared about fees they wouldn't mine empty blocks.. logically
16620  Bitcoin / Bitcoin Discussion / Re: Last Time. In small words. Why 2MB impossible. Why Soft Fork. Why SegWit First.. on: May 04, 2017, 05:17:19 AM
BU refused to add that to their implementation, which led us to where we are today. If one is so confident about their position, a bilateral split rather than a hostile split with no replay protection is the right way to go. As it currently stands, without replay protection a split would cause a lot of damage.

core are the only ones wanting to create a bilateral or contentious split. core have been the ones screaming for anything not core to split away..
so if cores code causes a bilateral/contentious split (BIP9 and UASF has that ability) then core should be the one that adds "replay protection" if core was to decide to pull the split pin. in short core should take the heat. (and yes it is possible)

all other implementations want to stay together as one diverse peer network. so why the hell should they be told to add in code and then be the ones that move away, just so core can have a dominant tier network.

ok, let's word it this way..
anyone abstaining from segwit by just sticking with 0.12 rules is being told to program a new version with replay protection..
yet core, who actually changed the code, refuse to add code to avoid replay risks. (facepalm)

lastly, if you don't think core's code is cludgy:
ask yourself why the cludgy maths of native sigop counting sits around line 4xx of a .cpp file and not in a header file (.h) such as policy/consensus,
and why following that cludgy maths requires reading four different files, instead of having it all in a single header file as one set variable (easy to do)

and when it comes to changing from a 2-merkle block to a 1-merkle block (which core pretends to promise for later)... it's not a simple edit of one file to change the metrics; it requires yet another big rewrite to undo the cludge of creating their 2-merkle block
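the '2 merkle' structure being argued about can be sketched as follows: a segwit block keeps the legacy merkle root over txids in the 80-byte header, and commits to a second merkle root over wtxids inside a coinbase output (per BIP141, with the coinbase's wtxid zeroed and the witness root hashed with a 32-byte reserved value). the toy transaction ids below are made up for illustration:

```python
import hashlib

def dsha(b):
    # bitcoin-style double SHA-256
    return hashlib.sha256(hashlib.sha256(b).digest()).digest()

def merkle_root(leaves):
    """standard bitcoin merkle root: pair up hashes, duplicating the last if odd."""
    level = list(leaves)
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        level = [dsha(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

# toy ids for three transactions (real ones are 32-byte tx hashes)
txids = [dsha(b"tx%d" % i) for i in range(3)]                  # witness-stripped ids
wtxids = [b"\x00" * 32] + [dsha(b"wtx%d" % i) for i in (1, 2)]  # coinbase wtxid zeroed per BIP141

header_root = merkle_root(txids)    # tree #1: lives in the block header
witness_root = merkle_root(wtxids)  # tree #2: committed in the coinbase output
commitment = dsha(witness_root + b"\x00" * 32)  # witness root + reserved value
```

collapsing this back to one tree later means changing what the header root commits to, which is why it is a consensus rewrite and not a one-file edit.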

if you did a cs college course it won't teach you how to read the cludge any better. it would teach you how to read c++, and then to recognise cludge when you see it, because the devs are not doing the basics of arranging variables in a logical way.

in short, if the current core/blockstream devs jumped over to litecoin or hyperledger and retired their desire to work on bitcoin (devs are not immortal, their interests do change), the cludge they leave behind makes it doubly hard for anyone new coming in to sort out.

your devotion to devs without knowing c++ reveals more about your lack of understanding of bitcoin: adoration of a temporary team, and trust that their word-twisting should be good enough.

P.S. i'm laughing at how you took the glory of 'explaining it'.. yet you were not 'questioner2' in IRC. and if you read the entire conversation you would see that segwit does not 'fix' things. it just twists things

you come close to admitting it, but prefer to word-twist it
Quote
Quote
however there is still the issue of 5 txs filling the 80k limit,
where it doesn't need to be 16k/tx of actual sigops.. only 4k/tx..
No. 5 legacy TX with 4k each fill up the block, which is "normal" behavior today, and Segwit wouldn't change that.

"Segwit wouldn't change that." = "segwit doesnt fix that"

so much cludge while keeping spam attack vectors open.

oh, and i did admit i was wrong that the quadratic risk gets worse under the 2-merkle implementation (with its maths cludge). it's no better either.
it's just a maths game of faking how many sigops a tx really does, by multiplying in the rule.
i laugh at a 'sigopcount' variable that is told not to count real sigops but to hold a number unrelated to the real sigop count: a maths cludge
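the '5 txs / 80k limit / 4k real sigops per tx' numbers above line up with bitcoin core's actual post-segwit (BIP141) constants: legacy sigops are multiplied by a scale factor of 4 into a "cost", and the block-wide limit is 80,000 cost rather than 20,000 raw sigops. a sketch of the arithmetic (constant names as in core; the per-tx figures are illustrative):

```python
# the sigop 'maths game' in numbers, using BIP141's cost accounting:
# a legacy (non-witness) sigop is charged 4x into the cost budget.

WITNESS_SCALE_FACTOR = 4
MAX_BLOCK_SIGOPS_COST = 80_000  # block-wide consensus limit on sigop "cost"
MAX_STANDARD_TX_SIGOPS_COST = MAX_BLOCK_SIGOPS_COST // 5  # 16,000 per-tx policy limit

def legacy_sigop_cost(actual_sigops):
    # real sigops performed vs. the inflated number the limit counts
    return actual_sigops * WITNESS_SCALE_FACTOR

per_tx_actual = 4_000                              # real signature operations per tx
per_tx_cost = legacy_sigop_cost(per_tx_actual)     # counted as 16,000 "cost"

txs_to_fill_block = MAX_BLOCK_SIGOPS_COST // per_tx_cost
print(txs_to_fill_block)  # 5 such legacy txs exhaust the whole block's budget
```

so 'sigopcount' under this scheme holds 16,000 for a tx that really performs 4,000 sigops, which is the multiplication being complained about.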

and when segwit becomes a 1-merkle block (removing the witness factor would be part of that) it will become a problem, which you have shown a bit of worry over but would not outright admit


why waste 2 years on the cludge of a 2-merkle design with a promise of a 1-merkle within the year after.. (3 years wasted)
when they could have done a 1-merkle design initially, with all the other features users want, and had the 1-year grace period. (1 year and a united community)

why even threaten the bilateral/controversial split for a 2-merkle design and then promise a 1-merkle a year after doing a split. there's just no logic to it if the goal is keeping a diverse peer network.. but a lot of logic if the desire is a dominant blockstream-owned tier network