Bitcoin Forum
Author Topic: Satoshi Nakamoto: "Bitcoin can scale larger than the Visa Network"  (Read 17023 times)
sgbett
Legendary
*
Offline Offline

Activity: 1834
Merit: 1045



View Profile
March 09, 2016, 08:43:34 PM
 #221

Setting a block size limit of 1MB was, and continues to be a hacky workaround.
It is certainly not a hacky workaround. It is a limit that was needed (it still is for the time being).

Theory drives development, but in practice sometimes hacky workarounds are needed.
If it can be avoided, not really.

The block size limit was a hacky workaround to the expensive-to-validate issue. That issue is now mitigated by other, much better solutions, not least a well-incentivised distributed mining economy that is smart enough to route around such an attack, making it prohibitively expensive to maintain.
So exactly what is the plan, replace one "hacky workaround" with another? Quite a lovely way forward. Segwit is being delivered and it will ease the validation problem and increase the transaction capacity. What is the problem exactly?

Problem: an attacker can create a block that is so expensive to validate that other miners get stuck validating it.
Hack: Set an arbitrary limit which is way above what we need right now, but closes the attack vector.
Solution: 1 transaction blocks.

Problem: the block size limit is causing transactions to get stuck in the mempool
Hack: raise the block size limit to 2MB
Solution: remove the block size limit

Segwit isn't a solution designed to fix the block size limit. It's a solution to another problem that right now is undefined, and it is being sold as a solution to a problem that is being actively curated by those who refuse to remove a prior temporary hack.

What problem is it that requires signatures to be segregated into another data structure and not counted fully towards the fees? Nobody can give a straight answer to that very simple question. Why is witness data priced differently?
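For readers unfamiliar with the mechanics being debated: under segwit (BIP141), witness bytes are counted at a quarter of the rate of base transaction bytes when computing the "virtual size" that fees are charged against. A minimal sketch, with illustrative byte counts (the 220/110 split below is made up for the example, not taken from a real transaction):

```python
import math

def virtual_size(base_bytes: int, witness_bytes: int) -> int:
    """BIP141: weight = 4 * base + witness; vsize = ceil(weight / 4)."""
    weight = 4 * base_bytes + witness_bytes
    return math.ceil(weight / 4)

# A ~220-byte transaction whose signatures (~110 bytes) move into the
# witness pays fees on a smaller virtual size than its legacy equivalent.
legacy_vsize = virtual_size(220, 0)    # every byte counted fully -> 220
segwit_vsize = virtual_size(110, 110)  # witness bytes discounted 4x -> 138

fee_rate = 50  # sat/vbyte, illustrative
print(legacy_vsize * fee_rate, segwit_vsize * fee_rate)
```

This discount is the "priced differently" part of the question above: the same signature bytes cost less in fees once they live in the witness structure.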

"A purely peer-to-peer version of electronic cash would allow online payments to be sent directly from one party to another without going through a financial institution" - Satoshi Nakamoto
*my posts are not investment advice*
Lauda
GrumpyKitty
Legendary
*
Offline Offline

Activity: 2282
Merit: 2087


Modern Liberalism is a Mental Disorder


View Profile
March 09, 2016, 08:46:13 PM
 #222

You and I think very much alike.  Lauda, can you point us at a really big but totally legit/non-abusive transaction?
I don't think that there are many transactions that are so large in nature (both 'abusive' and not). This is the one that I'm aware of. However, you'd also have to define what you mean by "big". Do you mean something quite unusually big (e.g. 100 kB) or something that fills up the entire block? I'd have to do a lot more analysis to try and find one (depending on the type).

Segwit isn't a solution designed to fix the block size limit. It's a solution to another problem that right now is undefined, that is being sold as a solution to a problem that is being actively curated by those who refuse to remove a prior temporary hack.
TX malleability (e.g.) is 'undefined'? Segwit provides additional transaction capacity while carrying other benefits. How exactly is this bad?

What problem is it that requires signatures to be segregated into another data structure and not counted towards the fees. Nobody can give a straight answer to that very simple question. Why is witness data priced differently?
The question would have to be correct for one to be able to answer it. In this case, I have no idea what you are trying to ask.

David Rabahy
Hero Member
*****
Offline Offline

Activity: 709
Merit: 500



View Profile
March 09, 2016, 08:47:34 PM
 #223

Oh, I was wrong; get over it, I am.  Smiley  We can't just add together inputs.  Here's an example address https://blockchain.info/unspent?active=1Gx8ivf4xSCqNNtUXQxoyBFd4FeGZvwCHT&format=html with multiple outputs, 7 in this case.  To spend the entire lot would involve a transaction with 7 inputs, i.e. not one with just 1 input with the net amount.  Bummer.

So, then the question is what happened to 19MxhZPumMt9ntfszzCTPmWNQeh6j6QqP2 that it had so many tiny outputs in it?

Still the owner could have created multiple smaller transactions instead of one large one.
David Rabahy
Hero Member
*****
Offline Offline

Activity: 709
Merit: 500



View Profile
March 09, 2016, 08:55:49 PM
 #224

Imagine this; you agree to sell something to someone and they will pay you in Bitcoins.  It turns out they have an address with a zillion little outputs in it.  So, they go to launch a send to you and find the fee is going to be huge (to cover the cost of all those inputs in a timely fashion).  The deal falls through; Bitcoin loses.

Now we can wonder how their address ended up so fragmented, but what does it matter?  Maybe they were collecting a zillion little drips from faucets.  Whatever; they can't spend it like a large output.
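The "zillion tiny outputs" problem can be put in rough numbers. Assuming standard P2PKH sizes (roughly 148 bytes per input, 34 bytes per output, 10 bytes of overhead — common approximations, not consensus-exact figures), the fee to sweep a fragmented address grows linearly with the input count:

```python
def approx_tx_size(n_inputs: int, n_outputs: int) -> int:
    """Rough P2PKH serialized size: ~148 B/input, ~34 B/output, ~10 B overhead."""
    return 10 + 148 * n_inputs + 34 * n_outputs

fee_rate = 60  # sat/byte, purely illustrative
for n in (1, 7, 500):
    size = approx_tx_size(n, 1)
    # sweeping 500 dust outputs costs roughly 500x the fee of spending one
    print(n, size, size * fee_rate)
```

So a payment funded by hundreds of faucet drips can easily cost more in fees than the deal is worth, which is the scenario described above.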
adamstgBit
Legendary
*
Offline Offline

Activity: 1904
Merit: 1005


Trusted Bitcoiner


View Profile WWW
March 09, 2016, 09:01:11 PM
 #225

Imagine this; you agree to sell something to someone and they will pay you in Bitcoins.  It turns out they have an address with a zillion little outputs in it.  So, they go to launch a send to you and find the fee is going to be huge (to cover the cost of all those inputs in a timely fashion).  The deal falls through; Bitcoin loses.

Now we can wonder how their address ended up so fragmented but what does it matter?  Maybe they were collecting a zillion little drips from faucets.  Whatever; they can't spend it like a large output.
the needs of the many outweigh the needs of the few, or the spammer.
the TX in question comes from a "stress test"; someone wanted to see how much SPAM bitcoin could swallow at once.
if the TX size were allowed to be >1MB, what would've happened then?
finding a good limit shouldn't be very hard.


Set an arbitrary limit which is way above what we need right now, but closes the attack vector.
agreed.

David Rabahy
Hero Member
*****
Offline Offline

Activity: 709
Merit: 500



View Profile
March 09, 2016, 09:04:52 PM
 #226

Naively increasing the block size isn't the be-all answer.  Sure, when the workload (mempool) is just a bunch of classic small transactions with few inputs, then it's great for low fees.  But when a transaction comes along with a huge number of inputs (innocent or malevolent), it will clog up the works, forcing everyone to perform a long computation to verify it.  One of these monsters can ruin your day if the calculation takes significantly longer than 1 block interval.  Or does it?  So we're behind for a little while, but then we catch up.  Or are we saying there are monsters out there that could take hours or even days to verify?

Is there a tendency over time for transactions to become bushier?  When the exchange rate is much larger, the Bitcoin amounts in transactions will tend to be smaller.  Does this lead to fragmentation?
ATguy
Sr. Member
****
Offline Offline

Activity: 424
Merit: 250



View Profile
March 09, 2016, 09:07:22 PM
Last edit: March 09, 2016, 09:32:25 PM by ATguy
 #227

what are the implications of this "quadratic TX validation" you guys are talking about?

we can't have TX with a huge amount of inputs? or something?
Exactly.  If/when a transaction comes in with zillions of inputs then everyone verifying it will be subjected to a long computation.

zillions of inputs!  Grin this i can understand


This is what BIP109 fixes, and why the 2 MB hard fork is useful to activate as soon as possible. For more info on why the reduction to 1.3 GB of hashed signature data in BIP109 (the 2 MB hard fork used by Bitcoin Classic) is necessary, and why SegWit does not help with this:


https://www.reddit.com/r/btc/comments/47f0b0/f2pool_testing_classic/d0deh29

Quote
The worst case block validation costs that I know of for a 2.2 GHz CPU for the status quo, SegWit SF, and the Classic 2 MB HF (BIP109) are as follows (estimated):

1 MB (status quo):  2 minutes 30 seconds (19.1 GB hashed)
1 MB + SegWit:      2 minutes 30 seconds (19.1 GB hashed)
2 MB Classic HF:              10 seconds (1.3 GB hashed)

SegWit makes it possible to create transactions that don't hash a lot of data, but it does not make it impossible to create transactions that do hash a lot of data.
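The "quadratic" label used in this thread comes from how legacy (pre-segwit) signature hashing works: each input signs its own modified copy of essentially the whole transaction, so the total bytes hashed grow roughly as n_inputs × tx_size. A toy model, not a consensus-exact calculation (the 148-byte input size is an approximation, and real attack transactions use tricks that hash even more):

```python
def legacy_hashed_bytes(n_inputs: int, bytes_per_input: int = 148) -> int:
    """Each of n inputs hashes ~the full tx (~n * bytes_per_input bytes),
    so total hashing work is ~n^2 * bytes_per_input: quadratic in tx size."""
    tx_size = n_inputs * bytes_per_input
    return n_inputs * tx_size

# Doubling the transaction size quadruples the hashing work:
print(legacy_hashed_bytes(3000))  # ~0.44 MB tx -> ~1.3 GB hashed
print(legacy_hashed_bytes(6000))  # ~0.89 MB tx -> ~5.3 GB hashed (4x)
```

Segwit's new signature-hash scheme (BIP143) makes the work linear for transactions that opt in, which is why the quote above says it makes huge hashing avoidable but not impossible: legacy-style inputs remain valid.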

David Rabahy
Hero Member
*****
Offline Offline

Activity: 709
Merit: 500



View Profile
March 09, 2016, 09:08:01 PM
 #228

Anyone know where someone is tracking average transaction size (# of inputs) over time?
David Rabahy
Hero Member
*****
Offline Offline

Activity: 709
Merit: 500



View Profile
March 09, 2016, 09:09:48 PM
 #229

what are the implications of this "quadratic TX validation" you guys are talking about?

we can't have TX with a huge amount of inputs? or something?
Exactly.  If/when a transaction comes in with zillions of inputs then everyone verifying it will be subjected to a long computation.

zillions of inputs!  Grin this i can understand


This is what BIP109 fixes, and why the 2 MB hard fork is useful to activate as soon as possible. For more info on why the reduction to 1.3 GB of hashed signature data in BIP109 (the 2 MB hard fork used by Bitcoin Classic) is necessary:


http://8btc.com/forum.php?mod=viewthread&tid=29511&page=1#pid374998

Quote
The worst case block validation costs that I know of for a 2.2 GHz CPU for the status quo, SegWit SF, and the Classic 2 MB HF (BIP109) are as follows (estimated):

1 MB (status quo):  2 minutes 30 seconds (19.1 GB hashed)
1 MB + SegWit:      2 minutes 30 seconds (19.1 GB hashed)
2 MB Classic HF:              10 seconds (1.3 GB hashed)

SegWit makes it possible to create transactions that don't hash a lot of data, but it does not make it impossible to create transactions that do hash a lot of data.

Whoa.  Hmm, is there a 0.12 version of Classic yet?
adamstgBit
Legendary
*
Offline Offline

Activity: 1904
Merit: 1005


Trusted Bitcoiner


View Profile WWW
March 09, 2016, 09:10:47 PM
 #230

Naively increasing the block size isn't the be-all answer.  Sure, when the workload (mempool) is just a bunch of classic small transactions with few inputs, then it's great for low fees.  But when a transaction comes along with a huge number of inputs (innocent or malevolent), it will clog up the works, forcing everyone to perform a long computation to verify it.  One of these monsters can ruin your day if the calculation takes significantly longer than 1 block interval.  Or does it?  So we're behind for a little while, but then we catch up.  Or are we saying there are monsters out there that could take hours or even days to verify?
we can't allow any transactions, whether innocent or malevolent, to clog up the network. there's no debating this.

Is there a tendency over time for transactions to become bushier?  When the exchange rate is much larger, the Bitcoin amounts in transactions will tend to be smaller.  Does this lead to fragmentation?
yes, i believe this is the case. one day, the coins will be way too fragmented.

some kind of "defragmentation" will need to take place at some point.

i don't believe this is a problem for us to worry about... it's too far in the future. (i'm guessing, i do a lot of guesswork)

sgbett
Legendary
*
Offline Offline

Activity: 1834
Merit: 1045



View Profile
March 09, 2016, 09:13:39 PM
 #231

You and I think very much alike.  Lauda, can you point us at a really big but totally legit/non-abusive transaction?
I don't think that there are many transactions that are so large in nature (both 'abusive' and not). This is the one that I'm aware of. However, you'd also have to define what you mean by "big". Do you mean something quite unusually big (e.g. 100 kB) or something that fills up the entire block? I'd have to do a lot more analysis to try and find one (depending on the type).

Segwit isn't a solution designed to fix the block size limit. It's a solution to another problem that right now is undefined, that is being sold as a solution to a problem that is being actively curated by those who refuse to remove a prior temporary hack.
TX malleability (e.g.) is 'undefined'? Segwit provides additional transaction capacity while carrying other benefits. How exactly is this bad?

What problem is it that requires signatures to be segregated into another data structure and not counted towards the fees. Nobody can give a straight answer to that very simple question. Why is witness data priced differently?
The question would have to be correct for one to be able to answer it. In this case, I have no idea what you are trying to ask.

Fixing TX Malleability is beneficial to everyone.

These *other benefits* include giving the ability to introduce consensus changes without hard forking. This is because we are told that a contentious hard fork is a terrible thing. How does anyone know this for sure!?

A hard fork is good. (Note the absence of the word contentious.) A hard fork establishes Nakamoto consensus, which is the only consensus vital to the ongoing successful operation of the bitcoin network. The incentives that drive this consensus mechanism are sound. The fear from those that do not see this is overwhelming. To subvert this is to destroy fundamental parts of bitcoin's architecture.

I thought you would understand what I meant when I asked the question, sorry if I have used the wrong terminology or something. I can make it a broader question, then perhaps we can investigate the specifics.

Why does segregated witness change the tx fee calculation?

David Rabahy
Hero Member
*****
Offline Offline

Activity: 709
Merit: 500



View Profile
March 09, 2016, 09:17:13 PM
 #232

https://github.com/bitcoinclassic/bitcoinclassic/releases/tag/v0.12.0cl1
David Rabahy
Hero Member
*****
Offline Offline

Activity: 709
Merit: 500



View Profile
March 09, 2016, 09:22:53 PM
 #233

This is what BIP109 fixes, and why the 2 MB hard fork is useful to activate as soon as possible. For more info on why the reduction to 1.3 GB of hashed signature data in BIP109 (the 2 MB hard fork used by Bitcoin Classic) is necessary:

http://8btc.com/forum.php?mod=viewthread&tid=29511&page=1#pid374998

Quote
The worst case block validation costs that I know of for a 2.2 GHz CPU for the status quo, SegWit SF, and the Classic 2 MB HF (BIP109) are as follows (estimated):

1 MB (status quo):  2 minutes 30 seconds (19.1 GB hashed)
1 MB + SegWit:      2 minutes 30 seconds (19.1 GB hashed)
2 MB Classic HF:              10 seconds (1.3 GB hashed)

SegWit makes it possible to create transactions that don't hash a lot of data, but it does not make it impossible to create transactions that do hash a lot of data.
Hmm, the link took me to a whole lot of Asian-looking characters; am I meant to use a translator?  As such, I couldn't find the quoted material.

Question:  Can the BIP109 magic be applied if we have the 1MB block size limit?  If not, why not?
sgbett
Legendary
*
Offline Offline

Activity: 1834
Merit: 1045



View Profile
March 09, 2016, 09:23:59 PM
 #234

Set an arbitrary limit which is way above what we need right now, but closes the attack vector.
agreed.

and last I heard, that's exactly how the attack remains mitigated in classic...

As a BU supporter though, we don't need limits!

IMHO the financial incentives are strong enough that block size (in terms of both bandwidth to transmit and CPU to process) is self-limiting. Propagation time is a combination of the two things and, to (over)simplify, propagation time vs orphan risk is enough to make sure miners don't do stupid things, unless they want to lose money.
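The propagation-vs-orphan-risk trade-off can be sketched with the usual Poisson block-arrival model: with a 600-second average block interval, a block that takes t seconds to propagate and validate is orphaned with probability roughly 1 − e^(−t/600). This is a simplification that ignores miner connectivity and hashrate distribution:

```python
import math

def orphan_risk(propagation_seconds: float, block_interval: float = 600.0) -> float:
    """P(a competing block is found before yours finishes propagating),
    assuming Poisson block arrivals at the given average interval."""
    return 1.0 - math.exp(-propagation_seconds / block_interval)

# A block that takes 2.5 minutes to validate risks ~22% orphaning;
# the expected loss of the block reward makes such blocks costly to mine.
for t in (5, 30, 150):
    print(t, round(orphan_risk(t), 3))
```

This is the intuition behind the self-limiting argument: a miner who produces an oversized or slow-to-validate block mostly taxes themselves.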

The full math is here - David you would probably be interested in this if you haven't already seen it.

http://www.bitcoinunlimited.info/resources/1txn.pdf

The paper also describes how the sigops attack is mitigated: miners simply mine 1-tx blocks while validating, then push those out to other miners while they are still validating the 'poison' block. Rational miners will validate the smaller block, and they will also be able to mine another block on top of it, orphaning the poison block.

The attacker would get one shot, and would quickly be shut out. If you have enough hash rate to be mining blocks yourself, it's really much more profitable to behave!

YarkoL
Legendary
*
Offline Offline

Activity: 997
Merit: 1008



View Profile
March 09, 2016, 09:25:06 PM
 #235


Why does segregated witness change the tx fee calculation?

My guess: to incentivize users to upgrade to segwit.
That is the carrot; the rising fees of regular txs, the stick.

“God does not play dice"
David Rabahy
Hero Member
*****
Offline Offline

Activity: 709
Merit: 500



View Profile
March 09, 2016, 09:26:00 PM
 #236

Oh, I see, per https://github.com/bitcoin/bips/blob/master/bip-0109.mediawiki, it is just artificial.  The same sigop and hash limits could, in theory, be used at any block size limit.
David Rabahy
Hero Member
*****
Offline Offline

Activity: 709
Merit: 500



View Profile
March 09, 2016, 09:28:50 PM
 #237

The full math is here - David you would probably be interested in this if you haven't already seen it.

http://www.bitcoinunlimited.info/resources/1txn.pdf

The paper also describes how the sigops attack is mitigated: miners simply mine 1-tx blocks while validating, then push those out to other miners while they are still validating the 'poison' block. Rational miners will validate the smaller block, and they will also be able to mine another block on top of it, orphaning the poison block.

The attacker would get one shot, and would quickly be shut out. If you have enough hash rate to be mining blocks yourself its really much more profitable to behave!
Yummy; thanks.
ATguy
Sr. Member
****
Offline Offline

Activity: 424
Merit: 250



View Profile
March 09, 2016, 09:31:21 PM
 #238


Hmm, the link took me to a whole lot of Asian looking characters; am I meant to use a translator?  As such I couldn't find the quoted material.

Question:  Can the BIP109 magic be applied if we have the 1MB block size limit?  If not, why not?

Sorry try this:
https://www.reddit.com/r/btc/comments/47f0b0/f2pool_testing_classic/d0deh29

It can, but it needs to be done in a hard fork, so the 2 MB hard fork is useful anyway.



Yes, it has already been available for a few days. Note that the limit is reduced to 1.3 GB of hashed signature data only after the 2 MB hard fork is activated and the grace period is over; blocks with more sigops will become invalid the same way blocks over 1 MB are invalid now.

Notable changes from Bitcoin Core version 0.12.0:


Quote
Bitcoin Classic 0.12.0 is based on Bitcoin Core version 0.12.0, and is compatible with its blockchain files and wallet.
For a full list of changes in 0.12.0, visit Core’s website here.
Additionally, this release includes all changes and additions made in Bitcoin Classic 0.11.2, most notably the increase of the block size limit from one megabyte to two megabytes.

    Opt-in RBF is set to disabled by default. In the next release, opt-in RBF will be completely removed.
    The RPC command "getblockchaininfo" now displays BIP109's (2MB) status.
    The chainstate obfuscation feature from Bitcoin Core is supported, but not enabled
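BIP109's sighash cap can be pictured as a simple accumulator during block validation: sum the bytes each signature check would hash, and reject the block once the 1.3 GB cap is exceeded. A hypothetical sketch only — real validation interleaves this with script execution and many other checks:

```python
MAX_BLOCK_SIGHASH_BYTES = 1_300_000_000  # BIP109's per-block cap on hashed sig data

def block_within_sighash_limit(hashed_bytes_per_check: list[int]) -> bool:
    """Reject the block as soon as cumulative sighash bytes exceed the cap."""
    total = 0
    for n in hashed_bytes_per_check:
        total += n
        if total > MAX_BLOCK_SIGHASH_BYTES:
            return False
    return True

# A 'poison' block full of huge legacy sighash operations fails the check,
# while ordinary blocks pass with a wide margin:
print(block_within_sighash_limit([500_000] * 1000))  # ~0.5 GB hashed -> True
print(block_within_sighash_limit([500_000] * 5000))  # ~2.5 GB hashed -> False
```

Because old nodes would still accept blocks that exceed the cap, the new rule must ship in a hard fork, which is the point made above about needing BIP109 activation first.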

Lauda
GrumpyKitty
Legendary
*
Offline Offline

Activity: 2282
Merit: 2087


Modern Liberalism is a Mental Disorder


View Profile
March 09, 2016, 09:33:14 PM
 #239

Fixing TX Malleability is beneficial to everyone. These *other benefits* include giving the ability to introduce consensus changes without hard forking. This is because we are told that a contentious hard fork is a terrible thing. How does anyone know this for sure!?
So being able to run multiple soft forks at once is a bad thing for you? As for it including the ability to introduce consensus changes without a HF: source, please.

Why does segregated witness change the tx fee calculation?
I don't really have an answer to this question. This might do:
My guess: To incentivize users to upgrade into segwit.
That is the carrot, and the raising fees of regular txs, the stick.

The same sigop and hash limits could, in theory, be used at any block size limit.
Replacing one limit with another is anything but a nice way of solving problems.

franky1
Legendary
*
Offline Offline

Activity: 2450
Merit: 1451



View Profile
March 09, 2016, 09:34:43 PM
Last edit: March 09, 2016, 09:47:32 PM by franky1
 #240

Naively increasing the block size isn't the be all answer.  Sure, when the workload (mempool) is just a bunch of classic small transactions with few inputs then it's great for low fees.  But when a transaction comes along with a huge number of inputs (innocent or malevolent) it will clog up the works forcing everyone to perform a long computation to verify it.  One of these monsters can ruin your day if the calculation takes a significantly longer than 1 block interval.  Or does it?  So, we're behind for a little while but then we catch up.  Or are we saying there are monsters out there that could take hours or even days to verify?

Is there a tendency over time for transactions to become bushier?  When the exchange rate is much larger then the Bitcoin amounts in transaction will tend to be smaller.  Does this lead to fragmentation?

that's under the assumption that, with a 2mb buffer, miners will allow themselves to jump to 1.995mb of data instantly.

the real assumption is that, just like in 2013 when miners knew they had suddenly become able to grow past the 500k soft limit and utilize the 1mb buffer, it will take a couple of years for them to slowly grow,
and that will be the decision of the miners.

we should not leave it to blockstream to set a 1.1mb limit every 2 years, knowing that miners will be at the max in maybe 4 months.
instead it should be a 2mb buffer, and then let the miners have their own separate preferential rules to grow slowly and just ignore obvious spam transactions until they drop out of the mempool after 48 hours,
knowing that they can happily grow by 0.1mb every 4 months+ without needing to ask blockstream for permission or receive abuse or insults

analogy
knowing one day you are going to have 19 children in the next xx years (you already have 9 and live in a 10-bedroom house).
(1.9mb of data in x years' time, currently at 900k of data with a 1mb buffer)
would you go through the headache of 2 years of mortgages and legal stuff to get an 11-bedroom house, then another 2 years of headaches for a 12-bedroom house?
or would you:
go through one headache and get a 20-bedroom house, and then spend the next 20 years impregnating your wife 10 times, slowly gaining a child every couple of years?

i know segwit tries to say: let's stay with 10 rooms and fit in some bunkbeds, so more kids can fit into the 10 rooms. but the problem is that blockstream's other features, like confidential transactions, make all the kids obese, with twice the amount of clothing that needs storing too.. so the house becomes overcrowded and slow to get everyone ready in the morning.
which leads blockstream, instead of expanding to a 20-bedroom house, to push some of the kids to get adopted by the neighbours (sidechains), only allowed to visit the real family home if they pay rent.

I DO NOT TRADE OR ACT AS ESCROW ON THIS FORUM EVER.
Please do your own research & respect what is written here as both opinion & information gleaned from experience. many people replying with insults but no on-topic content substance, automatically are 'facepalmed' and yawned at