Bitcoin Forum
May 01, 2026, 04:04:25 AM *
News: Latest Bitcoin Core release: 30.2 [Torrent]
 
Pages: « 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 [52] 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 »
Author Topic: [Apr 2026]Mempool empty, Consolidate your small inputs @0.28 sat/vbyte  (Read 94338 times)
This is a self-moderated topic. If you do not want to be moderated by the person who started this topic, create a new topic. (39 posts by 4+ users deleted.)
cryptosize
Sr. Member
****
Offline Offline

Activity: 2128
Merit: 415


View Profile
May 29, 2024, 02:06:33 PM
 #1021

What's "Bitcoin"? I remember blocks being temporarily limited from 30 to 1 MB by Satoshi in the early days, to reduce spam and blockchain growth.
Satoshi also envisioned the blockchain growing by 100 GB per day, because that would equate to "just the size of 12 DVD", completely ignoring the verification bottleneck. He believed the blockchain should be accessible only to server farms that generate new coins; everybody else should opt in to SPV. It's clear to me today that Satoshi wasn't the best figure to regard as the leading expert.


HmmMAA will scold you for doubting Satoshi! Grin

On a serious note, after reading all Satoshi's emails that leaked a while ago, I came to the conclusion that it was most likely a band of people with varying interests/agendas.

That's the simplest explanation I can think of, because there are even more elaborate ones (like perhaps he had bipolar disorder). There are lots of inexplicable contradictions in those emails.
BlackHatCoiner
Legendary
*
Offline Offline

Activity: 2016
Merit: 9722


Bitcoin is ontological repair


View Profile
May 29, 2024, 02:10:21 PM
 #1022

That's the simplest explanation I can think of, because there are even more elaborate ones (like perhaps he had bipolar disorder). There are lots of inexplicable contradictions in those emails.
The forum user was a big blocker, while the coder was a small blocker.  Tongue

 
Synchronice
Legendary
*
Offline Offline

Activity: 1568
Merit: 1160



View Profile
May 29, 2024, 05:34:40 PM
Merited by JayJuanGee (1)
 #1023

What about the halving after that? Or the second after that? In less than 20 years from now, the block subsidy will be less than 0.1 bitcoin. Block space has to be valuable.
Does it matter whether each of 10 people pays 100 sat/vbyte in fees, or each of 100 people pays 10 sat/vbyte? It doesn't matter, because the outcome is the same. But Bitcoin can't grow with such a small block size; it will simply become unattractive, and people will actively start looking for alternatives.
Simply put, there is no way that increasing the block size will cost miners their profit. No, that's not true.

Yes, I know, it may sound a bit harsh. On the other hand, we are getting used to using on-chain transactions only when it's meaningful (hence worth paying even $50+ for). For the rest, for small amounts like signature campaigns or paying for a VPN, sorry, but LN, no matter how imperfect it is, is the solution we should really consider. At least until a proper solution is discovered and implemented.
Why is LN a solution? What if everyone moves on LN? Will miners still profit?

Bitcoin is not meant for buying coffee or conducting other low-value transactions, at least not on-chain. It represents the best monetary standard we can have, and using it for such purposes undervalues its true potential.
I can't agree with you: Bitcoin is meant for P2P transactions, and that includes everything, from coffee to cars and so on. The simple fact is that Bitcoin wasn't designed for massive usage, and its block size was 1 MB because that was enough for 2010 and a few years after. As time goes on, the number of Bitcoin users increases and the technology advances, so it's perfectly okay, and to my mind even necessary, to increase the block size.
LoyceV (OP)
Legendary
*
Offline Offline

Activity: 4032
Merit: 21712


Thick-Skinned Gang Leader and Golden Feather 2021


View Profile WWW
May 29, 2024, 06:12:41 PM
Merited by BlackHatCoiner (4)
 #1024

I haven't argued that raising the block size to 10-16 MB will eliminate Bitcoin. I'm just saying that it doesn't solve the problem, it only alleviates it.
I think we can agree on this, but with one difference: I'd like to see the "breathing space" in more Bitcoin transactions, while you don't want it. I'd like to see more on-chain transactions until a more permanent scaling solution is in place.

Quote
With block size 1 MB, it takes around 2 seconds to verify the block.
When syncing Bitcoin Core (on not-very-recent hardware), I already see multiple blocks per second being verified. I recently did it on a 5-year-old Xeon, and the IBD took 11 hours. That's 15 MB of block data verified per second on average.
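The arithmetic behind that figure can be sketched as follows; the chain size is my assumption for illustration, not a number from the post:

```python
# Back-of-envelope check of the IBD throughput claim above.
# Assumption: the block chain held roughly 590 GB of block data at the time.
chain_size_mb = 590_000        # ~590 GB (assumed)
ibd_seconds = 11 * 3600        # the 11-hour initial block download
throughput = chain_size_mb / ibd_seconds
print(f"{throughput:.1f} MB of block data verified per second")  # ≈ 14.9
```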

ABCbits
Legendary
*
Offline Offline

Activity: 3598
Merit: 10021



View Profile
May 30, 2024, 10:50:18 AM
 #1025

Yes, I know, it may sound a bit harsh. On the other hand, we are getting used to using on-chain transactions only when it's meaningful (hence worth paying even $50+ for). For the rest, for small amounts like signature campaigns or paying for a VPN, sorry, but LN, no matter how imperfect it is, is the solution we should really consider. At least until a proper solution is discovered and implemented.
Why is LN a solution? What if everyone moves on LN? Will miners still profit?

As a reminder, when you use LN you need two Bitcoin on-chain TXs, to open and to close the LN channel. That's where miners earn some income.

I haven't argued that raising the block size to 10-16 MB will eliminate Bitcoin. I'm just saying that it doesn't solve the problem, it only alleviates it.
I think we can agree on this, but with one difference: I'd like to see the "breathing space" in more Bitcoin transactions, while you don't want it. I'd like to see more on-chain transactions until a more permanent scaling solution is in place.

In addition, other scaling options (e.g. LN or sidechains) need an on-chain TX to open the LN channel or "peg" the coin onto the sidechain. So a very high TX fee would make those options not very attractive.

LoyceV (OP)
Legendary
*
Offline Offline

Activity: 4032
Merit: 21712


Thick-Skinned Gang Leader and Golden Feather 2021


View Profile WWW
May 30, 2024, 11:05:32 AM
 #1026

In addition, other scaling options (e.g. LN or sidechains) need an on-chain TX to open the LN channel or "peg" the coin onto the sidechain. So a very high TX fee would make those options not very attractive.
I've never used any sidechain, and it seems like "wrapped" centralized tokens are much more popular than actual sidechains. Replacing central banks by businesses is not what I hoped for.

BlackHatCoiner
Legendary
*
Offline Offline

Activity: 2016
Merit: 9722


Bitcoin is ontological repair


View Profile
May 30, 2024, 01:00:39 PM
 #1027

When syncing Bitcoin Core (on not-very-recent hardware), I already see multiple blocks per second being verified. I recently did it on a 5-year-old Xeon, and the IBD took 11 hours. That's 15 MB of block data verified per second on average.
My bad. I must have counted the time it takes for a Raspberry Pi to do it, and I ignored batch verification and other techniques Bitcoin Core implements to increase efficiency. I just multiplied the average number of transactions by the time it takes to verify a single ECDSA signature. However, I do believe certain transaction types take more time to verify than typical, which can be used as an attack vector. I'll get back to it when I find the data to back this claim.
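The naive estimate described above can be written out like this; every figure is an assumption for illustration, not a measurement:

```python
# Rough model of the estimate above: block verification time as
# (number of signatures) x (time per ECDSA verification).
sigs_per_block = 4_000   # ~2,000 txs x ~2 inputs each (assumed)
us_per_verify = 100      # ~100 microseconds per verify, libsecp256k1-class speed (assumed)
naive_seconds = sigs_per_block * us_per_verify / 1e6
print(f"naive estimate: {naive_seconds:.2f} s per 1 MB block")
# Real nodes do better: signatures already checked at mempool acceptance are
# cached, so a block of previously seen txs verifies far faster than this.
```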

For the sake of the discussion, let's assume that the 16 MB limit is a harmless one. How do we enforce it in a softfork way? Segwit was enforced in a clever way, by separating the witness data from the transaction data. AFAIK, it's impossible to achieve this in a softfork, unless you've figured out another way to restructure the transaction data.

"Why shouldn't this be done in a hardfork way?". It's already history, called "Bitcoin Cash". There is no point in redoing the same thing.

 
LoyceV (OP)
Legendary
*
Offline Offline

Activity: 4032
Merit: 21712


Thick-Skinned Gang Leader and Golden Feather 2021


View Profile WWW
May 31, 2024, 05:34:28 AM
 #1028

For the sake of the discussion, let's assume that the 16 MB limit is a harmless one. How do we enforce it in a softfork way? Segwit was enforced in a clever way, by separating the witness data from the transaction data. AFAIK, it's impossible to achieve this in a softfork, unless you've figured out another way to restructure the transaction data.
I don't think a softfork can do this, but then again, when the limit was lowered, it must have been a hardfork too. Except back then there wasn't much controversy about it.

Quote
"Why shouldn't this be done in a hardfork way?". It's already history, called "Bitcoin Cash". There is no point in redoing the same thing.
That was 7 years ago, surrounded by loads of controversy, and promoted by some people with their own agenda. I don't think it's right to use that failure as a reason to keep blocks the same size for eternity. And I don't think a proper change, coming from the Bitcoin Core devs, that actually improves Bitcoin's future would be rejected.

ABCbits
Legendary
*
Offline Offline

Activity: 3598
Merit: 10021



View Profile
May 31, 2024, 08:12:02 AM
 #1029

In addition, other scaling option (e.g. using LN or sidechain) needs on-chain TX to open LN channel or "peg" the coin on the sidechain. So very high TX fee would make those option not very attractive.
I've never used any sidechain, and it seems like "wrapped" centralized tokens are much more popular than actual sidechains. Replacing central banks by businesses is not what I hoped for.

Yeah, the "wrapped" coin is definitely more popular. But either way, you still need one Bitcoin on-chain TX to send your Bitcoin to the exchange/other service to obtain the "wrapped" coin.

For the sake of the discussion, let's assume that the 16 MB limit is a harmless one. How do we enforce it in a softfork way? Segwit was enforced in a clever way, by separating the witness data from the transaction data. AFAIK, it's impossible to achieve this in a softfork, unless you've figured out another way to restructure the transaction data.

The most straightforward option would be increasing the witness discount from 4 to 16. But without making Ordinals or other TXs which use OP_FALSE OP_IF ... OP_ENDIF non-standard, it'll just make spamming the Bitcoin blockchain even cheaper.
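A rough sketch of why a bigger witness discount makes witness-heavy spam cheaper, assuming the segwit weight formula generalises to an arbitrary discount factor (the function and transaction sizes are hypothetical):

```python
# vsize under segwit: weight = 3*base_size + total_size, vsize = weight/4,
# i.e. witness bytes count at 1/4. Generalised to a hypothetical discount d.
def vsize(base_bytes: int, witness_bytes: int, discount: int = 4) -> float:
    total = base_bytes + witness_bytes
    weight = (discount - 1) * base_bytes + total
    return weight / discount

tx_base, tx_witness = 200, 10_000   # a witness-heavy (inscription-style) tx, assumed sizes
print(vsize(tx_base, tx_witness, discount=4))    # current rules: 2700.0 vbytes
print(vsize(tx_base, tx_witness, discount=16))   # discount of 16: 825.0 vbytes
```

The same fee rate then buys more than three times as many witness bytes, which is the "cheaper spam" point above.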

LoyceV (OP)
Legendary
*
Offline Offline

Activity: 4032
Merit: 21712


Thick-Skinned Gang Leader and Golden Feather 2021


View Profile WWW
May 31, 2024, 08:27:35 AM
 #1030

you still need one Bitcoin on-chain TX to send your Bitcoin to the exchange/other service to obtain the "wrapped" coin.
Isn't it more likely they use dollars or other coins for that? I don't expect real Bitcoin owners to exchange it for a made-up centralized token, but especially Binance tries really hard to make people withdraw their own made-up tokens instead of Bitcoin.

BlackHatCoiner
Legendary
*
Offline Offline

Activity: 2016
Merit: 9722


Bitcoin is ontological repair


View Profile
May 31, 2024, 08:55:20 AM
 #1031

I don't think a softfork can do this, but then again, when the limit was lowered, it must have been a hardfork too. Except back then there wasn't much controversy about it.
That was a softfork. When you invalidate something that is currently valid, it's a softfork, and in this case blocks with size > 1 MB were made invalid. However, a few years after this, a hardfork emerged.

Any decision taken by Satoshi was unquestionably followed, because... it was Satoshi. It's good that we don't have that source of influence anymore.

That was 7 years ago, surrounded by loads of controversy, and promoted by some people with their own agenda.
It still is surrounded by loads of controversy. First things first, what's the ideal block size increase? We're agreeing on the 16 MB limit, but stompix wants it to double at every halving. Someone else might think 16 MB is still very small. I can already predict that the hardfork would end up just like Bitcoin Cash; uncertainty about the ideal adjustment will encourage people to stick with the tested, conservative 4 MB.

I don't think a proper change, coming from the Bitcoin Core devs, that actually improves Bitcoin's future will be rejected.
I want to remind you, at this point, that before Bitcoin Cash there were Bitcoin developers who supported big blocks. The fact that they split off a long time ago highlights how unlikely it is for the current small-block developers to suddenly change their minds.

 
cryptosize
Sr. Member
****
Offline Offline

Activity: 2128
Merit: 415


View Profile
May 31, 2024, 01:06:56 PM
 #1032

For the sake of the discussion, let's assume that the 16 MB limit is a harmless one. How do we enforce it in a softfork way? Segwit was enforced in a clever way, by separating the witness data from the transaction data. AFAIK, it's impossible to achieve this in a softfork, unless you've figured out another way to restructure the transaction data.
I don't think a softfork can do this, but then again, when the limit was lowered, it must have been a hardfork too. Except back then there wasn't much controversy about it.
Are you sure about that?

Because even serious bugs early on were fixed with a soft fork.

Quote
"Why shouldn't this be done in a hardfork way?". It's already history, called "Bitcoin Cash". There is no point in redoing the same thing.
That was 7 years ago, surrounded by loads of controversy, and promoted by some people with their own agenda. I don't think it's right to use that failure as a reason to keep blocks the same size for eternity. And I don't think a proper change, coming from the Bitcoin Core devs, that actually improves Bitcoin's future would be rejected.
But this isn't ETH where Vitalik's team controls the network... SHA-256 miners are in charge of the BTC network, so good luck trying to convince them.
LoyceV (OP)
Legendary
*
Offline Offline

Activity: 4032
Merit: 21712


Thick-Skinned Gang Leader and Golden Feather 2021


View Profile WWW
June 01, 2024, 07:18:27 AM
 #1033

SHA-256 miners are in charge of the BTC network
Miners are in charge of creating new blocks, every user is in charge of which consensus rules they accept. Of course, without miners that means there are no new blocks.
It's kinda sad the "one computer, one vote" thing can't work.

BlackHatCoiner
Legendary
*
Offline Offline

Activity: 2016
Merit: 9722


Bitcoin is ontological repair


View Profile
June 01, 2024, 09:23:51 AM
 #1034

Miners are another obstacle I completely overlooked. By raising the block size to 16 MB, you need to convince them that this will be more beneficial for their pockets, which is very debatable. At the moment, a simple wave of Ordinals can raise their fee income by 100%.

You need to convince them that on-chain transaction volume will eventually increase by orders of magnitude compared to before, but none of us can responsibly make that promise. No one can be held accountable for the potential shortcomings.

 
garlonicon
Copper Member
Legendary
*
Offline Offline

Activity: 944
Merit: 2318


View Profile
June 01, 2024, 12:15:49 PM
Merited by BlackHatCoiner (8), LoyceV (4), ABCbits (3), JayJuanGee (2)
 #1035

Quote
Then we should reduce the space to 10kb, allowing only $10k+ tx because buying coffee with bitcoin is pointless, right?
Note that mining pools can do so without asking anyone for permission. And they didn't, for some reason. So you can ask them why they didn't put that kind of limit on the blocks they produce. Also, you can ask some node operators why they collect and process transactions which are below one satoshi per virtual byte. For example, here you can see a grey block of cheap transactions at the bottom of the chart: https://jochen-hoenicke.de/queue/#BTC,all,weight,0

Quote
To be honest I would love for the ones being against bigger blocks would make up their mind and form a group so they don't go against each other cause I keep hearing contrarian arguments
Note that keeping the limit as it is, is the easiest thing to achieve, because it is about preserving the "status quo". Which means, no matter if you want to increase or decrease some default values related to the size of the block, or to the default fees, or to some other default rules, you need to reach consensus. And guess what: reaching consensus is hard; even for topics like Segwit or Taproot, it took years. And convincing miners to set their max block size limit to 32 MiB (the default from the first version, used by Satoshi) is as hard as convincing them to go for 10 kB (there were times when the practical limit was lower than 1 MB, because of BDB locks and stuff like that).

Also, increasing the size of the block makes this attack worse than it currently is: https://bitcointalk.org/index.php?topic=140078.msg1491085#msg1491085

Quote
Code:
The bandwidth might not be as prohibitive as you think.  A typical transaction
would be about 400 bytes (ECC is nicely compact).  Each transaction has to be
broadcast twice, so lets say 1KB per transaction.  Visa processed 37 billion
transactions in FY2008, or an average of 100 million transactions per day.
That many transactions would take 100GB of bandwidth, or the size of 12 DVD or
2 HD quality movies, or about $18 worth of bandwidth at current prices.
There is one problem with that approach: verification. Sending the whole chain is not a problem. But verifying still is. And what is the bottleneck of verification? For example CPU speed, which depends on frequency:

2011-09-13: Maximum Speed | AMD FX Processor Takes Guinness World Record
Quote
On August 31, an AMD FX processor achieved a Guiness World Record with a frequency of 8.429GHz, a stunning result for a modern, multi-core processor. The record was achieved with several days of preparation and an amazing and inspired run in front of world renowned technology press in Austin, Texas.

2022-12-21: First 9 GHz CPU (overclocked Intel 13900K)
Quote
It's over 9000. ElmorLabs KTH-USB: https://elmorlabs.com/product/elmorla... Validation: https://valid.x86.fr/t14i1f

Thank you to Asus and Intel for supporting the record attempt!

Intel Core i9-13900K
Asus ROG Maximus Z790 Apex
G.Skill Trident Z5 2x16GB
ElmorLabs KTH-USB Thermometer
ElmorLabs Volcano CPU container

See? Humans are still struggling to reach 8-9 GHz, and you need liquid nitrogen to maintain that value. And more than a decade ago, the situation was pretty much the same. So CPU speed does not "double" every year. Instead, you just get more and more cores: you have, for example, a 64-core processor instead of a 2-core or 4-core one.

Which means that yes, you can download 100 GB, maybe even more. But is the whole system really trustless if you have no chance of verifying that data, and you have to trust that all of it is correct? Imagine that you can download the whole chain very quickly, but it is not verified. What then?

Also note, that if something can be done in parallel, then yes, you can use 64-core processor, and execute 64 different things at the same time. However, many steps during validation are sequential. The whole chain is a sequence of blocks. The whole block is a sequence of transactions (and their order does matter, if one output is an input in another transaction in the same block). The hashing of legacy transactions is sequential (also in cases like bare multisig, which has O(n^2) complexity for no reason).

So yes, you can have 64-core processor with 4 GHz each, but a single core with 256 GHz would allow much more scaling. And this is one of the reasons, why we don't have bigger blocks. The progress in validation time is just not sufficient to increase it much further.
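The O(n^2) point about legacy signature hashing can be illustrated with a toy model (the per-input size is an assumed figure):

```python
# For a legacy transaction, verifying each of the n input signatures re-hashes
# (almost) the whole transaction, and the transaction's size itself grows with n,
# so total bytes hashed grow quadratically. BIP 143 (segwit) removed this.
def legacy_hash_bytes(n_inputs: int, bytes_per_input: int = 150) -> int:
    tx_size = n_inputs * bytes_per_input   # ~150 bytes per input (assumed)
    return n_inputs * tx_size              # each input hashes ~the whole tx

for n in (10, 100, 1000):
    print(n, legacy_hash_bytes(n))   # 10x more inputs -> ~100x more bytes hashed
```

And none of that hashing can be parallelised away when the inputs form one sequential transaction, which is the "single fast core beats many slow cores" point above.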
LoyceV (OP)
Legendary
*
Offline Offline

Activity: 4032
Merit: 21712


Thick-Skinned Gang Leader and Golden Feather 2021


View Profile WWW
June 01, 2024, 12:36:22 PM
 #1036

Miners are another obstacle I completely overlooked. By raising the block size to 16 MB, you need to convince them that this will be more beneficial for their pockets, which is very debatable. At the moment, a simple wave of Ordinals can raise their fee income by 100%.

You need to convince them that on-chain transaction volume will eventually increase by orders of magnitude compared to before, but none of us can responsibly make that promise. No one can be held accountable for the potential shortcomings.
In a way, it's a flaw of the way Bitcoin works: miners have a financial incentive to keep transaction fees high, even if that reduces Bitcoin's usability.

BlackHatCoiner
Legendary
*
Offline Offline

Activity: 2016
Merit: 9722


Bitcoin is ontological repair


View Profile
June 01, 2024, 01:03:09 PM
 #1037

Also, increasing the size of the block makes this attack worse, than it currently is: https://bitcointalk.org/index.php?topic=140078.msg1491085#msg1491085
That's the attack I was looking for but couldn't find. Thanks, garlonicon! As I understand it, this attack requires dedicating a little less than 1 MB of block space to execute. Currently, it is very expensive, but if the block size limit were 16 MB, a mining pool attacker could always fill a quarter of their block with these transactions. This would force the rest of the network to spend more than 12 minutes verifying them, effectively giving the attacker 12 minutes to mine alone.

Is that perhaps one of the best arguments in favor of a small block size? I see that it can practically only be resolved if we start implementing even more severe exclusions in the Script, but I'm not sure if it could be trivially resolved otherwise.
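The numbers in the scenario above work out as follows, assuming (per the linked thread) that about 1 MB of worst-case transactions costs roughly 3 minutes of verification:

```python
minutes_per_mb = 3        # assumed worst-case verification cost per MB
attack_mb = 16 / 4        # a quarter of a hypothetical 16 MB block
head_start = attack_mb * minutes_per_mb
print(f"~{head_start:.0f} minutes of verification vs. a ~10-minute block interval")
```

That is, the attacker's head start would exceed the average block interval itself.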

 
LoyceV (OP)
Legendary
*
Offline Offline

Activity: 4032
Merit: 21712


Thick-Skinned Gang Leader and Golden Feather 2021


View Profile WWW
June 01, 2024, 01:36:22 PM
Merited by BlackHatCoiner (4), ABCbits (1), vjudeu (1)
 #1038

if the block size limit were 16 MB, a mining pool attacker could always fill a quarter of their block with these transactions. This would force the rest of the network to spend more than 12 minutes verifying them, effectively giving the attacker 12 minutes to mine alone.
What happens during those 12 minutes? I'd expect the mining pools to continue mining for that block, and chances are they'll find a new block within 12 minutes. Once that happens, they can stop verifying the attacker's block, and the attacker risks having his block replaced.

vjudeu
Copper Member
Legendary
*
Offline Offline

Activity: 909
Merit: 2363


View Profile
June 01, 2024, 02:13:17 PM
Merited by LoyceV (4), JayJuanGee (1)
 #1039

Quote
What happens during those 12 minutes?
Different pools will do different things. The attacker will know in advance that "yes, this block is correct" or "no, it abuses the sigops limit, or some other quirky rule". And during those 12 minutes, different pools may apply different strategies. One strategy is to mine a block with just the coinbase transaction and nothing else. But: on top of which block should it be mined?

An honest miner can decide to mine on top of what is already validated. But then there is a risk that this "12 minutes block" is not an attack. Maybe all of those transactions were flying around in mempools, and some mining pool just included all of them, without having any evil plan in mind? But then, how do you quickly check all the rules without performing full block validation?

Quote
Once that happens, they can stop verifying the attacker's block
It depends. Mining a block with some coinbase transaction and nothing else would always be valid, if the previous block is valid. Which means that mining pools can mine on top of the attacker's block, and then they will keep validating it.

Some example of sigops limit violation: https://bitcointalk.org/index.php?topic=5447129.msg62014494#msg62014494

And then, imagine that you know in advance that some mining pools use some kind of simplified block validation, and they don't check every rule (to get better performance, because of using outdated custom software, or for whatever reason). Then you may broadcast, for example, a lot of 1-of-3 multisig transactions into the network on purpose, and wait for other pools to grab them and mine a new block. Then, there is an 80k sigops limit, but some of their blocks may accidentally have 80,003. And there are more quirky rules to exploit.

Also, if you broadcast transactions, which are valid, but can become invalid in a particular context, then you can always say later: "See? Those transactions are valid, because they are included in other blocks, we did nothing wrong!".
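The sigops accounting quirk mentioned above can be sketched like this; the constants reflect legacy counting, where a bare OP_CHECKMULTISIG is counted as 20 sigops regardless of how many keys it actually uses:

```python
SIGOPS_PER_BARE_CHECKMULTISIG = 20   # legacy counting: always 20, even for 1-of-3
BLOCK_SIGOPS_LIMIT = 80_000          # the per-block budget mentioned above

# How few bare-multisig outputs it takes to exhaust the whole block's budget:
outputs_to_hit_limit = BLOCK_SIGOPS_LIMIT // SIGOPS_PER_BARE_CHECKMULTISIG
print(outputs_to_hit_limit)  # 4000
```

So a handful of kilobytes of 1-of-3 outputs can sit right at (or just over) the limit, which is exactly the edge case that simplified validators may miscount.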

Quote from: satoshi
I've moved on to other things.
cryptosize
Sr. Member
****
Offline Offline

Activity: 2128
Merit: 415


View Profile
June 01, 2024, 03:28:04 PM
 #1040

Quote from: garlonicon
[...] So yes, you can have 64-core processor with 4 GHz each, but a single core with 256 GHz would allow much more scaling. And this is one of the reasons, why we don't have bigger blocks. The progress in validation time is just not sufficient to increase it much further.
Very insightful post... I guess we'll need a graphene TeraHertz CPU for proper verification scaling. Smiley