Topic: Addressing block size and occasional mempool congestion

Felicity_Tide (OP), Full Member
June 04, 2024, 05:50:28 AM
Merited by NotATether (3), d5000 (1), ABCbits (1), vjudeu (1)
#1

Whether we like it or not, the problem of scalability is not a topic that should be treated as a done deal. We sometimes don't talk much about it, especially when the network is working smoothly and there are no obvious signs of congestion. But we later go back to the same problem when TX fees increase and so many pending transactions sit in the mempool waiting to be confirmed; at that point, those who are able to pay higher fees get their transactions ahead of others. But for how long are we going to continue like this?

I spent several hours roaming around the internet, trying to figure out every suggested plan for addressing the problem of scalability. I even read a few BIPs, such as BIP101 and others, that have all been rejected so far by the Bitcoin community, as they weren't satisfactory enough to address the issue of block sizes.


The idea of making blocks bigger has been both embraced and discouraged by the Bitcoin community due to the pros and cons attached. The introduction of Segregated Witness (SegWit), which was proposed in BIP-148, allows block capacity to be indirectly increased by removing the signature from Bitcoin transaction data. This also means that there is more space to accommodate more transactions, but only when certain parts of the transaction are removed. To me, that sounded healthy, but it showed the extent the Bitcoin community was/is willing to go to in order to address this issue. A SegWit address can begin with bc1 or 3, but its main purpose is to offer lower tx fees by taking up less block space. Even with this implementation, we haven't been able to say "Goodbye" to congestion.

With the inability of SegWit to address the issue of small block sizes, SegWit2x was introduced basically to increase the size of blocks to 2 MB, to accommodate more transactions, but this idea wasn't enough to get approval from the community due to the absence of replay protection. Meaning, the absence of this protection could cause a replay attack. The Lightning Network, on the other hand, requires creating a payment channel between two individuals. This was supposed to address the issue, but it doesn't have a say in the size of blocks either. Also, the technicalities behind it haven't carried everyone along in understanding it.

The issue of congestion is still very much present, though it has cooled off, as there are no current events or rush to trigger it. Certain periods like halvings, bull runs, and bear runs are ideal times to witness it. Unarguably, congestion will come; some of us have prepared the LN as a backup plan, while others can save themselves with extra tx fees. But what's now left for others like myself but to sit and wait in the queue alongside the other transactions in the mempool?

Just a simple but decisive question for both developers and non-developers today:
1. What do you think is a possible solution to this problem?


I am 100% open to correction, as I still see myself as a learner. Pardon any of my errors and share your personal opinion.


GitHub: https://github.com/bitcoin/bips/tree/master

vjudeu, Hero Member
June 04, 2024, 06:35:01 AM
Merited by pooya87 (4), BlackHatCoiner (4), ABCbits (2), d5000 (1), DdmrDdmr (1), Felicity_Tide (1)
#2

Quote
but for how long are we going to continue like this?
As long as needed to reach transaction joining and improve batching.

Quote
Even with this implementation, we haven't been able to say "Goodbye" to congestion.
Because those tools are not there to get rid of congestion for legacy transactions. They are there to allow cheaper transactions for those who opt in. Everyone else will pay as usual, because those changes are backward-compatible.

Quote
but this idea wasn't enough to get approval from the community due to the absence of replay protection.
1. It was because of the hard fork, not because of replay protection.
2. If you want to introduce replay protection, it can be done at the level of the coinbase transaction. But BTC simply didn't introduce replay protection as a soft fork, and altcoins like BCH didn't bother to make it "gradually activated", or to maintain any compatibility in sighashes. Imagine how much better some altcoins could be if all of their transactions were compatible with BTC, and if everything confirmed on BCH were eventually confirmed on BTC, and vice versa. Then you would have a 1:1 peg and avoid a lot of issues.

Quote
Meaning, the absence of this protection could cause a replay attack.
It is a feature, not a bug. For example, some people talked about things like a "flippening", where some altcoin would reach a bigger hashrate than BTC and take the lead. But those altcoin creators introduced incompatible changes, which effectively destroyed any chance of such a "flippening". Because guess what: it is possible to start from two different points, then reach an identical UTXO set on both chains, and then simply switch to the heaviest chain without affecting any user. But many people wanted to split coins, not merge them. And if splitting and dumping coins is profitable, then the end result can be easily predicted.

Quote
This was supposed to address the issue, but it doesn't have a say in the size of blocks either.
If you want to solve the problem of scalability, then the perfect solution is one where you don't have to care about things like the maximum size of the block. Then it "scales": if you can do 2x more transactions without touching the maximum block size, then it is "scalable". If you can do 2x more transactions and it consumes 2x more resources, then it is not "scaling" anymore; it is just "linear growth". And that can be done without any changes in the code: just release N altcoins, BTC1, BTC2, BTC3, ..., BTCN, and ta-da! You have N times more space!

Quote
but what's now left for others like myself but to sit and wait in the queue alongside the other transactions in the mempool?
If you need some technical solution, then you need better code. Then you have two options: write better code, or find someone who will do that.

Quote
What do you think is a possible solution to this problem?
Transaction joining, batching, and having more than one person on a single UTXO in a decentralized way.
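
To illustrate the batching part, here is a minimal sketch, assuming a local Bitcoin Core node and the python-bitcoinrpc package; the credentials and addresses below are placeholders, not real values:

Code:
from bitcoinrpc.authproxy import AuthServiceProxy

# Placeholder RPC credentials; a real setup reads them from bitcoin.conf.
rpc = AuthServiceProxy("http://user:password@127.0.0.1:8332")

# One transaction paying many recipients amortizes the fixed overhead
# (version, locktime, the shared input set) across all payments.
recipients = {
    "bc1q...alice": 0.010,  # placeholder addresses
    "bc1q...bob":   0.025,
    "bc1q...carol": 0.005,
}

txid = rpc.sendmany("", recipients)  # "" is the legacy "dummy" argument
print("batched payment txid:", txid)

An exchange paying out withdrawals this way publishes one transaction instead of dozens, which is the kind of mempool relief being described.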


larry_vw_1955, Sr. Member
June 04, 2024, 06:40:33 AM
Merited by vjudeu (1)
#3

Whether we like it or not, the problem of scalability is not a topic that should be treated as a done deal.

The way it's being worked on is like this:

https://www.msn.com/en-us/money/other/mastercard-launches-p2p-crypto-transactions-across-14-countries/ar-BB1nh8Kp

Global payments giant Mastercard just launched the first P2P pilot transaction of Mastercard Crypto Credential. This new service will allow users to send and receive crypto using aliases instead of blockchain addresses.

Quote
Just a simple but decisive question for both developers and non-developers today:
1. What do you think is a possible solution to this problem?


You could allow blocks to be 50 MB in size, and at the same time only let transactions be 500 bytes maximum, or something.

jrrsparkles, Sr. Member
June 04, 2024, 06:45:27 AM
#4

The debate about increasing the block space has been raised multiple times in the Bitcoin community, but it has been rejected because it would lead to the centralization of miners, which is completely against the idea of Bitcoin's existence. The L2 layer can be seen as a solution too, in such a way that we don't use on-chain transactions for every payment; assuming Bitcoin is widely accepted as a form of payment in the next 20 years, this would eliminate a lot of stress on the network, while bigger payments can still be made on-chain.

SegWit increased the block size from 1 MB to 4 vMB, which means we can actually fit more transactions than we could a decade ago, but whether this is enough after another decade, we never know.

larry_vw_1955, Sr. Member
June 04, 2024, 07:08:30 AM
Merited by vjudeu (1)
#5

The debate about increasing the block space has been raised multiple times in the Bitcoin community, but it has been rejected because it would lead to the centralization of miners, which is completely against the idea of Bitcoin's existence.
Why not just be honest and say it this way:

Quote
The debate about increasing the block space has been raised multiple times in the Bitcoin community, but it has been rejected because it would lead to larger storage requirements, and people don't want to do that.

1 TB is old tech; they are making 20+ TB hard drives now. Let's get with the times.  Shocked
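
For what it's worth, the raw storage arithmetic behind that claim, as a sketch that assumes every block is completely full:

Code:
# Back-of-the-envelope chain growth at ~144 blocks per day.
BLOCKS_PER_DAY = 24 * 60 // 10  # one block every ~10 minutes

for block_mb in (1, 4, 50):
    per_year_gb = block_mb * BLOCKS_PER_DAY * 365 / 1000
    print(f"{block_mb:>3} MB blocks -> ~{per_year_gb:,.0f} GB per year")

# 1 MB -> ~53 GB/yr, 4 MB -> ~210 GB/yr, 50 MB -> ~2,628 GB/yr:
# even 50 MB blocks fit on a 20 TB drive for years; the objections
# raised below are about bandwidth and validation, not disk space.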


NotATether, Legendary
June 04, 2024, 08:19:10 AM
Merited by vjudeu (1)
#6

Whether we like it or not, the problem of scalability is not a topic that should be treated as a done deal.

The way it's being worked on is like this:

https://www.msn.com/en-us/money/other/mastercard-launches-p2p-crypto-transactions-across-14-countries/ar-BB1nh8Kp

Global payments giant Mastercard just launched the first P2P pilot transaction of Mastercard Crypto Credential. This new service will allow users to send and receive crypto using aliases instead of blockchain addresses.

The way you could implement such a thing in Bitcoin would be to have DNS entries for a particular domain, where you define one record for the alias you want to use, containing a signed BIP322 transaction. The record is used as a challenge.

For example, for bitcoin.org:

TXT record for "btc.challenge.alice": <BIP322-signed transaction which includes the address in the transaction body being signed>

This makes it possible to send bitcoin to alice@bitcoin.org.

Now this presents two benefits:

1. You can have as many aliases for a domain as you want, just by replacing alice with some other name like bob; this creates records with two different names.
2. You can actually verify that alice@bitcoin.org owns an address like 14758AB...., because the signed transaction also includes the public key in the scriptsig/witness data.

Sure, they wouldn't be random aliases, but I imagine that to fix that you could take the hash160 and a large enough dictionary of words, and assign every 20 bits to a dictionary word.
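
A rough sketch of both ideas; the record name, fetch_alias_challenge, and hash160_to_words are hypothetical illustrations (assuming the dnspython package), not an existing standard:

Code:
import dns.resolver  # dnspython

def fetch_alias_challenge(alias: str, domain: str) -> str:
    # e.g. alias "alice" at "bitcoin.org" ->
    # TXT record at "btc.challenge.alice.bitcoin.org"
    name = f"btc.challenge.{alias}.{domain}"
    answer = dns.resolver.resolve(name, "TXT")
    return "".join(part.decode() for part in answer[0].strings)

def hash160_to_words(h160: bytes, wordlist: list) -> list:
    # 160 bits / 20 bits per word = 8 words. Note this needs a
    # dictionary of 2**20 (~1M) entries, far larger than BIP39's 2048.
    bits = int.from_bytes(h160, "big")
    return [wordlist[(bits >> (20 * i)) & 0xFFFFF] for i in range(8)]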


ABCbits, Legendary
June 04, 2024, 09:08:03 AM
Merited by pooya87 (4), d5000 (2), Husna QA (2), vjudeu (1)
#7

The introduction of Segregated Witness (SegWit), which was proposed in BIP-148, allows block capacity to be indirectly increased by removing the signature from Bitcoin transaction data.

You fell into a somewhat common misconception. The block capacity increase was realized due to a change in how transaction size is calculated. A block still contains each transaction along with its signature data; the signature is only removed when a node which supports SegWit sends block/TX data to a node which doesn't support SegWit.
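
The BIP141 accounting behind that, as a short sketch: the witness stays in the block, it is just counted at a quarter of the cost of base data, so the "size" that matters for fees is the virtual size.

Code:
def weight(stripped_size: int, total_size: int) -> int:
    # stripped_size: the tx serialized without witness data
    # total_size:    the tx serialized with witness data
    return stripped_size * 3 + total_size

def vsize(stripped_size: int, total_size: int) -> int:
    return (weight(stripped_size, total_size) + 3) // 4  # round up

# A typical 1-input, 2-output P2WPKH spend: ~222 bytes with the
# witness, ~113 bytes stripped, so it is billed as 141 vB.
print(vsize(113, 222))  # 141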

A SegWit address can begin with bc1 or 3, but its main purpose is to offer lower tx fees by taking up less block space. Even with this implementation, we haven't been able to say "Goodbye" to congestion.

SegWit's main goal was to solve transaction malleability.

1. What do you think is a possible solution to this problem?

I've seen some people focus on a single approach (such as only LN, or only a block size increase). But IMO we should accept various methods to mitigate the problem, such as making OP_FALSE OP_IF ... OP_ENDIF non-standard, increasing the block size, and using LN/sidechains (if they match how you use Bitcoin), altogether.
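
For readers wondering what that envelope is: a rough, hypothetical detector over an already-decoded script (real policy code would of course parse raw script bytes):

Code:
def has_inscription_envelope(ops):
    # Looks for OP_FALSE/OP_0 immediately followed by OP_IF and a
    # later OP_ENDIF: the pattern used to embed arbitrary data.
    for i in range(len(ops) - 1):
        if ops[i] in ("OP_FALSE", "OP_0") and ops[i + 1] == "OP_IF":
            return "OP_ENDIF" in ops[i + 2:]
    return False

print(has_inscription_envelope(
    ["OP_FALSE", "OP_IF", "<data push>", "OP_ENDIF"]))  # True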

The debate about increasing the block space has been raised multiple times in the Bitcoin community, but it has been rejected because it would lead to the centralization of miners, which is completely against the idea of Bitcoin's existence.

Your statement isn't relevant to today's conditions:
1. Miners usually join mining pools.
2. Mining pools can afford decent servers and internet connections to run full nodes.
3. Compact blocks help improve block propagation.


The L2 layer can be seen as a solution too, in such a way that we don't use on-chain transactions for every payment; assuming Bitcoin is widely accepted as a form of payment in the next 20 years, this would eliminate a lot of stress on the network, while bigger payments can still be made on-chain.

Even with L2, you still need to create a few on-chain Bitcoin transactions, for example to "peg" your Bitcoin into L2 and to remove the "peg" from L2.

SegWit increased the block size from 1 MB to 4 vMB, which means we can actually fit more transactions than we could a decade ago, but whether this is enough after another decade, we never know.

If you've tried to make a Bitcoin transaction a few times in the past year, or follow the news, surely you know there are times when you're forced to pay a high fee rate to get your transaction confirmed quickly.


vjudeu, Hero Member
June 04, 2024, 12:41:02 PM
#8

Quote
You could allow blocks to be 50 MB in size, and at the same time only let transactions be 500 bytes maximum, or something.
Then a single user will just make more than a single transaction.

Quote
1 TB is old tech; they are making 20+ TB hard drives now. Let's get with the times.
You don't want to run a full archival node even now, when the size of the chain is below 1 TB. Would you run one if we increased it? I guess not.

So why do you want to increase it, and not participate in the costs of doing so?

Quote
The way you could implement such a thing in Bitcoin would be to have DNS entries for a particular domain, where you define one record for the alias you want to use, containing a signed BIP322 transaction.
You mean LNURL for on-chain payments?

Quote
making OP_FALSE OP_IF ... OP_ENDIF non-standard
It could help, but it would not be enough when you have mining pools willing to bypass such limitations.

Quote
Your statement isn't relevant to today's conditions
It somewhat is, but taken from another angle: big mining pools will handle it, but regular users may stop running non-mining nodes. And that will indirectly lead to mining centralization, because then nobody except big pools will agree to run a full archival node 24/7. And in that case, it will be possible to skip more and more steps, if users stop caring about validating the output produced by those mining pools.

Quote
For example, "peg" your Bitcoin to L2 and remove the "peg" from L2.
For that reason, decentralized sidechains are needed, because then you end up with a single, batched on-chain transaction every sometimes (for example every three months). And I guess, sooner or later, people may be forced to connect their coins, and to handle more than one person on a single UTXO, when it will be too expensive to make single-user transactions anymore.


jrrsparkles, Sr. Member
June 04, 2024, 03:52:02 PM
#9

SegWit increased the block size from 1 MB to 4 vMB, which means we can actually fit more transactions than we could a decade ago, but whether this is enough after another decade, we never know.

If you've tried to make a Bitcoin transaction a few times in the past year, or follow the news, surely you know there are times when you're forced to pay a high fee rate to get your transaction confirmed quickly.
I agree. The lowest fee at which a TX could get included in a block over the last 12-14 months was 6-7 sat/vB, which is higher than the 1 sat/vB we always expect to pay whenever we want to transact bitcoin. But at the same time, we should not forget that Ordinals spam plays a bigger role in that surge of fees than actual adoption, and I don't think increasing the block space is going to solve that either.

Another thing to consider is mining rewards: increasing the block space would let all transactions go through at the lowest possible fee, which means miners would have to sustain themselves on minimal income once block rewards are reduced. That could lead to a decrease in the network's hash rate, making a 51% attack more feasible in theory, at a cheaper cost than attempting the same attack now.

thecodebear, Hero Member
June 04, 2024, 05:07:51 PM
Merited by d5000 (1), ABCbits (1)
#10

The debate about increasing the block space has been raised multiple times in the Bitcoin community, but it has been rejected because it would lead to the centralization of miners, which is completely against the idea of Bitcoin's existence.
Why not just be honest and say it this way:

Quote
The debate about increasing the block space has been raised multiple times in the Bitcoin community, but it has been rejected because it would lead to larger storage requirements, and people don't want to do that.

1 TB is old tech; they are making 20+ TB hard drives now. Let's get with the times.  Shocked




I think the two main technological reasons not to increase the block size are storage space, as you mentioned, and propagation time through the network.

If it were only about the storage space of nodes, the block size could gradually increase over time as storage gets cheaper, still keeping nodes super cheap to set up.

But I believe the main reason is propagation time, and someone correct me if I'm wrong. The more data in blocks, the longer it takes to send them over the internet and propagate them across the Bitcoin network, including the time it takes nodes to process the blocks to make sure they are valid. This congestion would cause a lot more orphaned blocks and slow down the network. Also, a theoretical entity attempting a 51% attack could mine a bunch of blocks by itself without broadcasting them, exploiting the network's long propagation times and frequently orphaned blocks to get ahead of the chain, and then release its mined blocks to actually carry out the attack.

d5000, Legendary
June 04, 2024, 07:47:20 PM (last edit: June 05, 2024, 12:05:33 AM by d5000)
Merited by ABCbits (4)
#11

I think the two main technological reasons not to increase the block size are storage space, as you mentioned, and propagation time through the network.
Yes, propagation is more important than storage space; I would even argue that storage is almost irrelevant. Miners' requirements are also irrelevant, because miners today are no longer the laptop miners from 2010/11, or even the home ASIC miners from 2012/13, but mostly highly optimized server farms. There may still be some small miners, but they already have to dedicate a lot of organizational work if they don't want to mine at a loss, so storage and bandwidth are minor problems for them.

So the reason we see no higher block size is actually "domestic" full nodes, not miners.

I think the main factors besides block propagation are CPU and memory requirements. 4 MB blocks (the current maximum) need, according to a Bitfury study, about 16 GB of memory. So on a state-of-the-art PC with 16 GB+ RAM, you can still run a full Bitcoin node in the background, even if it would probably already affect your other activities a bit. But if the block size were significantly higher, you would need a dedicated device for that purpose, and not exactly a cheap one.

Transaction joining, batching, and having more than one person on a single UTXO in a decentralized way.
I think the "buzzwords" related to these methods should be mentioned and explained as the OP is clearly not an expert. Smiley

Transaction joining and batching: these can be achieved with Lightning, Sidechains, and Rollups.

While many know some basics about these concepts, I'll try to describe them "from the main chain's point of view":

- On Lightning, basically, you create a simple special transaction which enables you to set rules to transact off-chain (without storing the transactions in a block) until a double-spend conflict arises (where LN provides a mechanism to penalize the cheater and get the coins back) or you need on-chain bitcoins. LN is basically almost mature, with some flaws remaining.
- Sidechains are quite similar: you create a "peg-in" transaction enabling you to transact off-chain from the main chain's point of view. You set some rules in the peg-out transaction, but the main security depends on the rules of an alternate chain, and conflicts are solved mostly on that alternate chain, so it depends on both Bitcoin's and the altchain's security. You can "unlock" the coins ("peg-out") by obeying a complex ruleset involving both chains. The big problems are still security and peg-out rules; several approaches are currently being experimented with, and none can be considered mature.
- Rollups are like sidechains, in that you also peg in and peg out, but the transaction data of the off-chain transactions is compressed in some way, and a proof that everything was fine (or that nothing was wrong) is stored on the main chain. Rollups are already widespread on Ethereum, so they are entering the maturity stage. Some approaches exist for Bitcoin; most I've seen, however, have serious flaws.

See the Sidechain Observer thread for some comments on existing sidechain/rollup solutions. (Edit: Stupid me, confused peg-in and peg-out. Corrected.)

So in all three cases, with a single on-chain transaction you can potentially create an almost infinite number of "value transfers"; on rollups, however, there is a limitation due to their higher on-chain requirements.

"Having more than one person on a single UTXO in a decentralized way" - there are concepts like Statechains and Ark (not to be confused with a similarly named altcoin). You do a special multisig transaction with a ruleset like on Lightning with a counterparty (the "Operator") you have to trust, and then a special derivation of the private key can be given along the "payment chain", e.g. the UTXO is "re-used" several times.


Felicity_Tide (OP), Full Member
June 04, 2024, 08:32:21 PM
#12

Quote
This was supposed to address the issue, but it doesn't have a say in the size of blocks either.
If you want to solve the problem of scalability, then the perfect solution is one where you don't have to care about things like the maximum size of the block. Then it "scales": if you can do 2x more transactions without touching the maximum block size, then it is "scalable".

All corrections noted.
But from the reply above, wasn't this what SegWit was implemented for? I think it worked to some extent but could not solve the problem completely.

Quote
but what's now left for others like myself but to sit and wait in the queue alongside the other transactions in the mempool?
If you need some technical solution, then you need better code. Then you have two options: write better code, or find someone who will do that.

Coding? That's literally a new chapter for me, after learning as many technical concepts as possible. I guess my transactions will be sitting in the mempool for the time being  Grin. Though, I hope to go into that area someday.


Quote
Transaction joining, batching, and having more than one person on a single UTXO in a decentralized way.

Just learnt something new.
Batching is not bad at all. I think this might be effective, especially for exchanges that want to process multiple transaction requests at the same time.

1. What do you think is a possible solution to this problem?

I've seen some people focus on a single approach (such as only LN, or only a block size increase). But IMO we should accept various methods to mitigate the problem, such as making OP_FALSE OP_IF ... OP_ENDIF non-standard, increasing the block size, and using LN/sidechains (if they match how you use Bitcoin), altogether.

All corrections noted.
Most people have been left with no choice but to master the use of LN. Due to some technicalities, I am not sure everyone will want to learn it unless congestion gets out of hand, forcing the majority to learn. Good choice.
Increasing the size of blocks is of course the leading solution, which lots of people have doubts about. Don't you think increasing the size of blocks would affect mining, thereby requiring more mining power? Sorry for asking too many questions; can you please clarify this for me?


I think the "buzzwords" related to these methods should be mentioned and explained as the OP is clearly not an expert. Smiley

Transaction joining and batching - this can be achieved with Lightning, Sidechains, and Rollups.

I Just finished reading through. Thanks for breaking it down and the well detailed explanation.

larry_vw_1955, Sr. Member
June 04, 2024, 11:23:45 PM
#13


If it were only about the storage space of nodes, the block size could gradually increase over time as storage gets cheaper, still keeping nodes super cheap to set up.
Maybe it should be gradually increasing over time. That's a good idea.

Quote
But I believe the main reason is propagation time, and someone correct me if I'm wrong.
So the best we can do is a single 4 MB block every 10 minutes? I doubt that's the best we can do.

Quote
The more data in blocks, the longer it takes to send them over the internet and propagate them across the Bitcoin network, including the time it takes nodes to process the blocks to make sure they are valid. This congestion would cause a lot more orphaned blocks and slow down the network.
If you want to participate as a miner on the Bitcoin network, it goes without saying that you need a fast internet connection, and whether you are broadcasting a 4 MB block or a 40 MB block, the difference in time should be minimal. Internet speeds are going up, fiber is being rolled out, etc. If someone doesn't have, say, 50 Mbps down and 10 Mbps up, then they are living in the past. It's just that simple. And if their connection is really slow, like dial-up speeds, then they are not contributing to the Bitcoin network at all anyway.
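
The raw transfer times behind this claim, as a sketch (it ignores latency, relay hops, and compact-block relay, which in practice sends far less than the full block):

Code:
for block_mb in (4, 40):
    for uplink_mbps in (10, 100):
        seconds = block_mb * 8 / uplink_mbps  # MB -> megabits
        print(f"{block_mb} MB at {uplink_mbps} Mbps: {seconds:.2f} s")

# 4 MB at 10 Mbps: 3.20 s; 40 MB at 10 Mbps: 32.00 s;
# 4 MB at 100 Mbps: 0.32 s; 40 MB at 100 Mbps: 3.20 s.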

Quote
Also, a theoretical entity attempting a 51% attack could mine a bunch of blocks by itself without broadcasting them, exploiting the network's long propagation times and frequently orphaned blocks to get ahead of the chain, and then release its mined blocks to actually carry out the attack.
A proper scientific study could determine optimal block times as a function of block size, which would make this 51% attack scenario no more likely than it is right now.

pooya87, Legendary
June 05, 2024, 03:18:43 AM
Merited by ABCbits (4)
#14

One of the most dangerous things when trying to solve an issue is to interpret the cause of that issue wrongly! In this case:

But we later go back to the same problem when TX fees increase and so many pending transactions sit in the mempool waiting to be confirmed; at that point, those who are able to pay higher fees get their transactions ahead of others. But for how long are we going to continue like this?
What we've been experiencing for a long time is not a scaling issue. It is a spam attack issue that I've been talking about for just as long. The spam attack under the name of Ordinals is injecting a lot of junk transactions into the mempool, because there is a scam market creating the incentive for regular users to participate in that spam and basically fund the attack.

Now that the cause of the problem is clarified, we can focus on working solutions, which means preventing this exploit to make the attack either impossible or at the very least too expensive to carry out.

Otherwise, if this situation is wrongly interpreted as a scaling issue, solutions such as increasing capacity would only make the problem worse, because the spammers would have more space and cheaper transactions to continue their attack. This is why misinterpreting the issue is dangerous.

Quote
The introduction of Segregated Witness (SegWit), which was proposed in BIP-148,
The SegWit BIPs are 141 and 143. BIP148 has very little to do with SegWit.

Quote
by removing the signature from Bitcoin transaction data.
Wrong.
SegWit does NOT remove anything from transaction data! It introduces a new "field" in each transaction, known as the witness, which holds the signature (and the other stack items needed for unlocking the output scripts being spent).

Quote
This also means that there is more space to accommodate more transactions, but only when certain parts of the transaction are removed.
Once again you drew a conclusion from wrong information. SegWit introduces the witness field and that way increases the capacity, so that blocks can be as big as 4 MB instead of being capped at 1 MB. By that increase, it allows blocks to accommodate more transactions.

Quote
but its main purpose is to offer lower tx fees by taking up less block space. Even with this implementation, we haven't been able to say "Goodbye" to congestion.
The biggest goal of SegWit was to address transaction malleability.
The reason it offers lower tx fees is not a smaller tx size (SegWit txs are in some cases bigger than legacy ones, byte-wise). The reason is that they use this new field, called the witness, which takes up the extra (3 MB) space rather than the legacy 1 MB, and they receive a discount for that.
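
A sketch of that discount in numbers (ballpark sizes, not exact serializations): fees are charged per virtual byte, i.e. weight/4, where a base byte weighs 4 units and a witness byte weighs 1.

Code:
def fee_sats(base_bytes: int, witness_bytes: int, feerate_sat_vb: float) -> float:
    weight = 4 * base_bytes + witness_bytes
    return feerate_sat_vb * weight / 4

# Roughly comparable 1-input, 2-output spends at 20 sat/vB:
print(fee_sats(192, 0, 20))    # legacy P2PKH: 3840.0, all base data
print(fee_sats(113, 109, 20))  # P2WPKH: 2805.0, signature in witness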

Quote
SegWit2x was introduced basically to increase the size of blocks to 2 MB
It is called 2x, not 2 MB, which means it was doubling the capacity introduced by SegWit, which would be 8 MvB of weight.

Quote
wasn't enough to get approval from the community due to the absence of replay protection.
Wrong.
Replay protection is not even a Bitcoin-related thing. It is only defined for altcoins that create an exact copy of the Bitcoin protocol and its blockchain, copycat coins such as Bcash.

The reason for its lack of support was that we should have either chosen a hard fork from the start or a soft fork all the way through; mixing the two makes no sense.
IMO any future hard fork should address a lot of things, not just a simple block size cap bump.

Quote
1. What do you think is a possible solution to this problem?
I explained it at the beginning.
I want to add that I believe at some point we also need to address the scaling issue through a hard fork, to fix a lot of things in the protocol (e.g. merging Legacy and SegWit, fixing bugs like sigopcount) and also to increase the cap itself.


tromp, Legendary
June 05, 2024, 06:10:43 AM
#15

We sometimes don't talk much about it, especially when the network is working smoothly and there are no obvious signs of congestion. But we later go back to the same problem when TX fees increase and so many pending transactions sit in the mempool waiting to be confirmed; at that point, those who are able to pay higher fees get their transactions ahead of others. But for how long are we going to continue like this?
You misunderstood how Bitcoin was designed to run in the long term: a state of congestion and high fees is the desired state, whereas a lack of congestion is the problematic state.

When the block subsidy becomes insignificant in a decade or two, the only thing that will keep Bitcoin secure is high total fees for every block, and that can only be achieved by keeping the network congested.
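
The schedule behind "a decade or two", as a quick sketch (heights paired with approximate years):

Code:
def subsidy_btc(height: int) -> float:
    # The subsidy halves every 210,000 blocks (~4 years).
    return 50 / 2 ** (height // 210_000)

for year, height in [(2024, 840_000), (2036, 1_470_000), (2048, 2_100_000)]:
    print(year, subsidy_btc(height), "BTC")  # 3.125, ~0.39, ~0.049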

ABCbits, Legendary
June 05, 2024, 08:46:06 AM
#16

Quote
making OP_FALSE OP_IF ... OP_ENDIF non-standard
It could help, but it would not be enough when you have mining pools willing to bypass such limitations.

And that's why I also mentioned accepting multiple methods. As for mining pools, I only hope they continue to charge a premium for adding non-standard transactions. For example, https://mempool.space/ suggests 18 sat/vB for no priority and 27 sat/vB for high priority, while https://slipstream.mara.com/ currently accepts non-standard TXs at a rate of 81 sat/vB.

Quote
Your statement isn't relevant to today's conditions
It somewhat is, but taken from another angle: big mining pools will handle it, but regular users may stop running non-mining nodes. And that will indirectly lead to mining centralization, because then nobody except big pools will agree to run a full archival node 24/7. And in that case, it will be possible to skip more and more steps, if users stop caring about validating the output produced by those mining pools.

Fair point, although it's not like I'm suggesting a huge block size increase either.

--snip--

I think the main factors besides block propagation are CPU and memory requirements. 4 MB blocks (the current maximum) need, according to a Bitfury study, about 16 GB of memory. So on a state-of-the-art PC with 16 GB+ RAM, you can still run a full Bitcoin node in the background, even if it would probably already affect your other activities a bit. But if the block size were significantly higher, you would need a dedicated device for that purpose, and not exactly a cheap one.

Do you mean this study: https://bitfury.com/content/downloads/block-size-1.1.1.pdf? After many years, I realize they didn't consider massive UTXO growth, compact blocks (which massively help block verification/propagation and reduce bandwidth), and other things.

1. What do you think is a possible solution to this problem?
I've seen some people focus on a single approach (such as only LN, or only a block size increase). But IMO we should accept various methods to mitigate the problem, such as making OP_FALSE OP_IF ... OP_ENDIF non-standard, increasing the block size, and using LN/sidechains (if they match how you use Bitcoin), altogether.
All corrections noted.
Most people have been left with no choice but to master the use of LN. Due to some technicalities, I am not sure everyone will want to learn it unless congestion gets out of hand, forcing the majority to learn. Good choice.
Increasing the size of blocks is of course the leading solution, which lots of people have doubts about. Don't you think increasing the size of blocks would affect mining, thereby requiring more mining power? Sorry for asking too many questions; can you please clarify this for me?

No, a bigger maximum block size doesn't require higher mining power/hashrate. After all, mining basically performs sha256d on the block header, which always has a size of 80 bytes.
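
A quick sketch of why: the proof-of-work hash covers only the 80-byte header (the transactions enter it solely through the 32-byte merkle root), so block size does not change the work per hash attempt.

Code:
import hashlib

# version | prev_block_hash | merkle_root | time | bits | nonce = 80 bytes
header = bytes(80)  # all-zero placeholder header
pow_hash = hashlib.sha256(hashlib.sha256(header).digest()).digest()
print(pow_hash[::-1].hex())  # displayed little-endian, as Bitcoin does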


BlackHatCoiner, Legendary
June 05, 2024, 10:03:48 AM
Merited by vjudeu (1)
#17

I want to add that I believe at some point we also need to address the scaling issue through a hard fork, to fix a lot of things in the protocol (e.g. merging Legacy and SegWit, fixing bugs like sigopcount) and also to increase the cap itself.
That is unlikely to happen. It's going to break tons of software.

I think the two main technological reasons not to increase the block size are storage space, as you mentioned, and propagation time through the network.
The issue, or feature, depending on your perspective, is that Bitcoin has a hard cap. This means that eventually it will rely entirely on transaction fees, which in consequence means that the network must always be congested. Increasing the block size raises the risk of the network becoming unsustainable to keep operating at some point in the future.

So, it's not about "technological limitations" per se. It's mostly this economic problem.


vjudeu, Hero Member
June 05, 2024, 10:13:25 AM
#18

Quote
That is unlikely to happen. It's going to break tons of software.
Some software will be broken anyway, and then people will have a choice: upgrade, or deal with some broken version somehow. For example: timestamps have four bytes allocated, which means that after the year 2106 we will be forced into hard-forking anyway.

Another example: the year 2038 problem. Many people thought we were resistant, but some versions are not, because of type casting between signed and unsigned, and between 32-bit and 64-bit values: https://bitcointalk.org/index.php?topic=5365359.msg58166985#msg58166985
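
The two deadlines, as a quick check:

Code:
from datetime import datetime, timezone

# Signed 32-bit time_t (the "year 2038 problem"):
print(datetime.fromtimestamp(2**31 - 1, tz=timezone.utc))  # 2038-01-19 03:14:07+00:00
# Bitcoin's unsigned 4-byte header timestamp:
print(datetime.fromtimestamp(2**32 - 1, tz=timezone.utc))  # 2106-02-07 06:28:15+00:00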

Edit: By the way, a similar discussion is ongoing on Delving Bitcoin: https://delvingbitcoin.org/t/is-it-time-to-increase-the-blocksize-cap/941


pooya87, Legendary
June 05, 2024, 12:28:30 PM
#19

I want to add that I believe at some point we also need to address the scaling issue through a hard fork, to fix a lot of things in the protocol (e.g. merging Legacy and SegWit, fixing bugs like sigopcount) and also to increase the cap itself.
That is unlikely to happen. It's going to break tons of software.
I agree, but I also think in the future we'll get to the point where the benefits of a hard fork could outweigh its issues, and that can be the incentive to get it done.


BlackHatCoiner, Legendary
June 05, 2024, 12:49:16 PM
#20

Some software will be broken anyway, and then people will have a choice: upgrade, or deal with some broken version somehow. For example: timestamps have four bytes allocated, which means that after the year 2106 we will be forced into hard-forking anyway.
There is one important difference with the year 2038 problem: we know exactly when it will appear. Therefore, we have to fix it before then, ideally a few years before 2038. There's no ideal year to hard-fork for merging Legacy with SegWit, and it's not at the same level of necessity; the network won't stop working after year X if we don't merge Legacy with SegWit.

Edit: By the way, a similar discussion is ongoing on Delving Bitcoin: https://delvingbitcoin.org/t/is-it-time-to-increase-the-blocksize-cap/941
Cool. I like reading both sides' arguments in this debate.

I agree, but I also think in the future we'll get to the point where the benefits of a hard fork could outweigh its issues, and that can be the incentive to get it done.
I believe it depends on the necessity, but it's very difficult for me to imagine a hard fork introducing significant changes and gaining nearly unanimous support, especially since the roadmap is somewhat oriented toward implementing changes via soft forks.
