Bitcoin Forum
May 04, 2024, 06:18:09 PM
News: Latest Bitcoin Core release: 27.0 [Torrent]
 
Author Topic: Segmenting/Reserving Block Space  (Read 440 times)
bipshuffle (OP) | Newbie | Activity: 17 | Merit: 30
April 21, 2024, 04:40:08 PM
Merited by LoyceV (4), vapourminer (2), vjudeu (1), PowerGlove (1)
#1

Perhaps reserving some fixed (or slowly varying) portion of each block for specific transaction types could help resolve the tradeoff between high fees and blockchain freedom.

For example: 20% of each block reserved specifically for Lightning transactions, 20% reserved for ordinals, and 60% reserved for general use.
Blocks from miners that include transactions outside the prescribed boundaries (hard limits, or soft ones to account for fluctuations in the mempool's tx type distribution) would be rejected by the network.
This would help isolate fee explosions to the transaction types causing them. It's a win for the miners (massive transaction fees within that part of the block, possibly even higher sat/vB due to decreased available block space) and a win for the users (LN/general transactions can be included in blocks without exorbitant fees).

I think Bitcoiners tend to agree that we shouldn't limit the utility of Bitcoin by disallowing any sort of transaction, but I don't think it's against the Bitcoin ethos to enforce some structure around transaction priority.

It seems feasible: different transaction types are already fairly identifiable, and could be made even more so.

Of course, I understand this *may* result in some blocks having empty space depending on implementation, but the tradeoff (enabling scalability by further facilitating LN while keeping the L1 chain reasonably open for large transactions that require finality) seems well worth it.
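To make the idea concrete, here is a minimal sketch of the kind of check a node could run; it assumes transactions already carry a reliable type label, which is exactly the hard part, and all names and numbers are illustrative:

```python
# Sketch of per-type block space validation (hypothetical consensus rule).
# Assumes each transaction is already classified; reliable classification
# is the contested part of the proposal.

BLOCK_VSIZE_LIMIT = 1_000_000  # vbytes, simplified

# Fixed allocations from the example: 20% LN, 20% ordinals, 60% general.
ALLOCATIONS = {"lightning": 0.20, "ordinals": 0.20, "general": 0.60}

def block_respects_allocations(txs, tolerance=0.0):
    """txs: list of (tx_type, vsize). Returns True if no type's share
    exceeds its allocation (plus an optional soft tolerance)."""
    used = {t: 0 for t in ALLOCATIONS}
    for tx_type, vsize in txs:
        if tx_type not in used:
            return False  # unknown type: reject under this rule set
        used[tx_type] += vsize
    return all(
        used[t] <= (ALLOCATIONS[t] + tolerance) * BLOCK_VSIZE_LIMIT
        for t in ALLOCATIONS
    )

block = [("lightning", 150_000), ("ordinals", 180_000), ("general", 500_000)]
print(block_respects_allocations(block))                      # True
print(block_respects_allocations([("ordinals", 250_000)] + block))  # False
```

The `tolerance` parameter corresponds to the "soft" boundaries mentioned above, to absorb fluctuations in mempool composition.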

Just looking to gather thoughts/sentiment towards an approach like this.
odolvlobo | Legendary | Activity: 4298 | Merit: 3214
April 21, 2024, 08:26:44 PM
Merited by ABCbits (2)
#2

Quote
Perhaps reserving some fixed (or slowly varying) portions of each block to specific transaction types could help resolve issues regarding the tradeoff between high fees and blockchain freedom.

I see two requirements:
1. An unambiguous system for classifying transactions that matches your intent.
2. A decentralized and verifiable method for allocating the partitions.

bipshuffle (OP) | Newbie | Activity: 17 | Merit: 30
April 21, 2024, 08:39:25 PM
#3

Agreed on the requirements; neither appears insurmountable.
(1. This will likely be some sort of heuristic system, factoring in things like tx size, inscribed data, etc.
2. Such rules would be built directly into Bitcoin Core.)
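For illustration, a toy version of the heuristic classifier in point 1 might look like the following; the thresholds and feature names here are invented, not taken from any real implementation, and later replies argue such heuristics are easy to evade:

```python
# Toy transaction classifier (illustrative thresholds only).

def classify_tx(vsize, witness_data_bytes, n_inputs, n_outputs):
    """Bin a transaction by coarse shape. All cutoffs are arbitrary."""
    if witness_data_bytes > 400:
        return "ordinals"   # large witness payload suggests an inscription
    if vsize < 200 and n_inputs <= 2 and n_outputs <= 2:
        return "lightning"  # small, 2-party-looking transaction
    return "general"

print(classify_tx(vsize=150, witness_data_bytes=0, n_inputs=2, n_outputs=2))
# lightning
print(classify_tx(vsize=900, witness_data_bytes=8000, n_inputs=1, n_outputs=1))
# ordinals
```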

But I'm curious about the sentiment towards such an approach. Has the Bitcoin community considered a solution like this before? If so, why hasn't it been implemented?
(Is it simply the time/dev investment required, or is there consensus on counter-arguments to this approach?)
HeRetiK | Legendary | Activity: 2926 | Merit: 2091
April 21, 2024, 11:38:38 PM
Merited by LoyceV (4), ABCbits (3), d5000 (1), vjudeu (1)
#4

At the base level Bitcoin doesn't know anything about LN-related transactions, sidechain-related transactions, Ordinals, Colored Coins, etc. It arguably also shouldn't know anything about these things, neither explicitly (like the flag that indicates SegWit transactions) nor implicitly (via heuristics, as suggested). That's why these things are on a separate layer to begin with. The alternative is a brittle base layer that becomes more unreliable as new features and transaction types are added.

Which leads to the next problem: segmenting blocks into pre-allocated spaces per transaction type would require a hard fork whenever you want to change allocations and/or add/remove transaction types. Apart from general concerns about network stability, hard forks come with a lot of drama even for something as basic as the block size (as seen in the fork wars of 2017). I don't want to imagine what this would look like if you had to get everyone to agree on allocations per transaction type. Honestly, with the number of projects in the space coming and going, I'm not sure where we'd even begin. Worse still, any new project would be pretty much locked out of the blockchain unless they somehow managed to get the devs to "approve" their transaction type and everyone else to accept their proposal for blockspace re-allocation (and thus a hard fork).

TL;DR: this would likely cause a lot of problems on both a technical and a political/social level.

LoyceV | Legendary | Activity: 3304 | Merit: 16596
April 22, 2024, 08:45:44 AM
#5

Quote
For example, 20% of block reserved specifically for Lightning transactions, 20% reserved for ordinals, 60% reserved for general use.
Who's going to decide on those percentages? As much as I'd like the spam to stop, I don't think some "central authority in power" is the right way to do that. I also see no reason to reserve 20% for the spammers.

ABCbits | Legendary | Activity: 2870 | Merit: 7464
April 22, 2024, 10:10:49 AM
Merited by vjudeu (1)
#6

FYI, a few months ago we discussed a somewhat similar idea on A Proposal for easy-to-close Lightning Channels (and other uses).

Quote
Agreed on the requirements, neither of which appear insurmountable
(1. Will likely be some sort of heuristic system, factoring in things like tx memsize, inscribed data, etc
2. Such rules would be built in directly to Bitcoin core).

Bitcoin Core is just one of many Bitcoin full node implementations. Besides the fact that your idea would probably require a soft/hard fork, how do you handle the fact that each node has a slightly different TX set in its mempool?

vjudeu | Hero Member | Activity: 678 | Merit: 1560
April 22, 2024, 11:58:33 AM
Merited by d5000 (1)
#7

Quote
Miners including transactions outside of the prescribed memory boundaries limitations (hard or soft to account for fluctuations in mempool tx type distributions), would have such blocks rejected by the network.
This is a bad idea, for many reasons. It should be applied as a local node policy, in the same way that, for example, minimal transaction fees were picked. Then it would be possible to change it without forking the network.

Quote
For example, 20% of block reserved specifically for Lightning transactions
Guess what: if you strictly required that, then people could switch from single-key addresses to 2-of-2 multisig, where both keys are owned by the same person, just to bypass your limits.

Quote
It's a win for the miners (massive transaction fees within this part of the block - possibly even higher sat/byte due to decreased available block space)
This is not the case. If it were, miners could shrink the maximum block size to 100 kB. And guess what: any mining pool can introduce such a rule without even recompiling the source code, because there is an option in the configuration file:
Code:
Block creation options:

  -blockmaxweight=<n>
       Set maximum BIP141 block weight (default: 3996000)
And there are also options in the getblocktemplate command:
Code:
help getblocktemplate
getblocktemplate ( "template_request" )

...

  "sigoplimit" : n,                        (numeric) limit of sigops in blocks
  "sizelimit" : n,                         (numeric) limit of block size
  "weightlimit" : n,                       (numeric, optional) limit of block weight
So, if smaller blocks are so good for miners, why haven't the biggest mining pools introduced any such rules yet?

Quote
and a win for the users (LN/general transactions can be included in blocks without exorbitant fees)
No, because people will switch their other transaction types to whatever is cheaper. This means that if a single-key address becomes more expensive than 2-of-2 multisig, people will use 2-of-2 multisig for their single-key transactions.

Quote
but I don't think it's against the Bitcoin ethos to enforce some structure around transaction priority.
It is acceptable if you enforce it locally, on your own node. But I think it is a bad idea to enforce it at the consensus level.

Quote
Segmenting blocks into pre-allocated spaces per transaction-type would require a hard fork whenever you want to change allocations and/or add/remove transaction types.
Why? The only requirement is to keep the coinbase transaction; everything else can be empty if needed (or artificially filled, if you mess with the rules), and everything you want to add could be done in "v2 blocks", pointed to by the new coinbase transaction. So it could be a soft fork, but it would obviously be more complicated than it should be: https://petertodd.org/2016/forced-soft-forks#radical-changes

Quote
how do you handle the fact that each node have slightly different TX set on their mempool?
That's why making local rules on each node is much easier than putting such things into consensus rules.

bipshuffle (OP) | Newbie | Activity: 17 | Merit: 30
April 22, 2024, 03:14:23 PM
#8

Thanks for all the thoughts.

Several things I'd like to point out:


Quote
Segmenting blocks into pre-allocated spaces per transaction-type would require a hard fork whenever you want to change allocations and/or add/remove transaction types. Apart from general concerns of network stability, hard forks come with a lot of drama even with basic things as the blocksize in general (as seen in the fork wars of 2017). I don't want to imagine how this would look like if you'd have to get everyone to agree on allocations per transaction type.


I don't believe a hard fork should be required whenever allocations need to be altered. Allocations should be determined dynamically, based on consensus rules.

This presumes two things are possible:
1.) Transactions can be accurately classified/segmented into meaningful bins (perhaps even simply binning by absolute memory size is sufficient).

2.) Consensus rules can be applied and verified at a network level.
     - Since the tx mempool may differ between nodes (as pointed out by ABCbits), dynamically determining consensus rules may be challenging. But perhaps nodes could intermittently broadcast and maintain records of summary statistics about their tx mempools, from which soft rules can be determined (soft tx type distribution boundaries let minor discrepancies between nodes be mitigated, by allowing miners to pick transactions near the mean and away from the bounds of the inclusion criteria).

     - Once a block is broadcast with its transactions included, other nodes should be able to verify that those transactions meet the dynamically agreed-upon tx type distribution requirements.
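A sketch of the summary-statistics idea; the report format, tolerance value, and aggregation rule are all hypothetical:

```python
# Sketch of "soft" dynamic bounds derived from broadcast mempool summaries.
# Each node reports the share of each tx type in its mempool; the rule is
# the mean share plus/minus a tolerance, absorbing small per-node
# mempool differences. (All names and parameters are hypothetical.)

def soft_bounds(summaries, tolerance=0.05):
    """summaries: list of dicts mapping tx_type -> share of mempool vbytes.
    Returns {tx_type: (lower, upper)} allocation bounds."""
    types = summaries[0].keys()
    bounds = {}
    for t in types:
        mean = sum(s[t] for s in summaries) / len(summaries)
        bounds[t] = (max(0.0, mean - tolerance), min(1.0, mean + tolerance))
    return bounds

# Three nodes report slightly different mempool compositions.
node_reports = [
    {"lightning": 0.22, "ordinals": 0.18, "general": 0.60},
    {"lightning": 0.20, "ordinals": 0.22, "general": 0.58},
    {"lightning": 0.24, "ordinals": 0.20, "general": 0.56},
]
print(soft_bounds(node_reports)["lightning"])  # roughly (0.17, 0.27)
```

Whether such reports could be aggregated Sybil-resistantly is exactly the open question raised later in the thread.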

The way I envision this might work is similar to how block difficulty is set and recognized across the network.

Changes to the algorithm/heuristics used to determine tx type/size distribution requirements would require a fork, but once set, no further forks are required.

Quote
Honestly with the amount of projects in the space coming and going I'm not sure where we'd even begin. Worse still, any new project would get pretty much locked out of the blockchain, unless they somehow manage to get the devs to "approve" their transaction type and everyone else to accept their proposal for blockspace re-allocation (and thus a hard fork).

A large "other" category allocation partially resolves this. Another option is to base bins/categories on project-agnostic metrics like tx size or a "UTXO consolidation ratio": e.g., a tx with 3 UTXO inputs and 2 UTXO outputs (1.5 consolidation ratio) would be binned (and prioritized) differently than one with 1 UTXO input and 2 UTXO outputs (0.5 consolidation ratio). The metrics would effectively be designed to bin transactions in a way that enables use of the network for any purpose, but keeps network traffic jams isolated to a fraction of the block.
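The consolidation-ratio metric from that example is easy to sketch (the bin names and the cutoff are illustrative):

```python
# Consolidation ratio from the example above: inputs / outputs.
# Higher ratios consolidate UTXOs (shrinking the UTXO set) and would
# be prioritized under this hypothetical scheme.

def consolidation_ratio(n_inputs, n_outputs):
    return n_inputs / n_outputs

def consolidation_bin(ratio):
    """Arbitrary illustrative cutoff for priority bins."""
    if ratio >= 1.0:
        return "consolidating"  # reduces (or keeps) UTXO count
    return "expanding"          # grows the UTXO set

print(consolidation_ratio(3, 2), consolidation_bin(1.5))  # 1.5 consolidating
print(consolidation_ratio(1, 2), consolidation_bin(0.5))  # 0.5 expanding
```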



As for switching transaction types/sizes to fill gaps in block allocation requirements, as mentioned:

Quote
Guess what: if you would strictly require that, then people could switch from single-key addresses into 2-of-2 multisig, where both keys would be owned by the same person, just to bypass your limits.


Quote
No, because people will switch their other transaction types into what will be cheaper. Which means, that if single-key address will be more expensive than 2-of-2 multisig, then people will apply 2-of-2 multisig on their single-key transactions.

I don't think this is necessarily a bad thing, especially if tx type conversions/alterations can only go from "more efficient/desirable" to "less efficient/desirable". I.e., you couldn't possibly convert an ordinal inscription to meet the criteria of the bin reserved for smaller/alternative transactions, even if that block partition has high fees/long wait times, but you could alter your efficient transaction to fit within the parameters of less efficient portions of the block. This ensures block capacity remains highly utilized.



Admittedly, I'm an SWE but don't have much hands-on experience with the Bitcoin source code. I may be missing something.


 
NotATether | Legendary | Activity: 1596 | Merit: 6728
April 22, 2024, 04:20:05 PM
#9

I don't like this idea. It doesn't solve the problem of keeping fees low and the mempool normal, because when you artificially limit the amount of bytes one kind of transaction gets in a block, desperate users will still increase the average network fee anyway.

bipshuffle (OP) | Newbie | Activity: 17 | Merit: 30
April 22, 2024, 04:32:28 PM
#10

Quote
I don't like this idea. It doesn't solve the problem of keeping fees low and the mempool normal, because when you artificially limit the amount of bytes one kind of transaction gets in a block, desperate users will still increase the average network fee anyway.

The average fee doesn't matter nearly as much as the median fee IMO. This should help bring down the latter.
HeRetiK | Legendary | Activity: 2926 | Merit: 2091
April 22, 2024, 04:42:18 PM
#11

Quote
This presumes two things are possible:
1.) Transactions can be accurately classified/segmented into meaningful bins (perhaps even simply binning by absolute memory size is sufficient).

2.) Consensus rules can be applied and verified at a network level

That's the thing though: neither is trivially solvable, if it can be solved at all.

1) Reliable transaction classification would require leaky abstractions, as I mentioned above. That's bad in regular software development, and worse when it comes to the Bitcoin base layer. What exactly do you mean by "absolute memory size"? Are you referring to the size a transaction takes up in the mempool?

2) How exactly would you achieve dynamic consensus? Basing it on nodes is prone to Sybil attacks. Basing it on hashrate would lead to chain splits.

NotATether | Legendary | Activity: 1596 | Merit: 6728
April 22, 2024, 04:43:15 PM
#12

Quote
I don't like this idea. It doesn't solve the problem of keeping fees low and the mempool normal, because when you artificially limit the amount of bytes one kind of transaction gets in a block, desperate users will still increase the average network fee anyway.

The average fee doesn't matter nearly as much as the median fee IMO. This should help bring down the latter.

Technically, the median is what I was referring to when I wrote "average", not the mean.

It would be like taking the mean income of 100 people including Bill Gates, versus taking the median.
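The effect is easy to demonstrate with Python's statistics module (the fee values are made up):

```python
# Mean vs. median fee: a few desperate high-fee outliers drag the mean up
# while barely moving the median (what a typical user pays) -- the
# "Bill Gates in the room" effect described above.

from statistics import mean, median

fees = [10, 12, 11, 13, 12, 11, 10, 12]  # sat/vB, typical users
fees_with_outliers = fees + [900, 1200]  # two desperate bidders

print(mean(fees), median(fees))                              # 11.375 11.5
print(mean(fees_with_outliers), median(fees_with_outliers))  # 219.1 12.0
```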

odolvlobo | Legendary | Activity: 4298 | Merit: 3214
April 22, 2024, 10:56:53 PM
Last edit: April 22, 2024, 11:18:51 PM by odolvlobo
#13

I don't think that determining the type of a transaction by looking at its contents is feasible, mostly because of P2TR. Requiring a process for allocating bins makes the solution extremely difficult, not just because the process of reaching consensus would be complex, but also because it could potentially be manipulated by miners.

Perhaps there could be a different approach. A possible solution might be to modify the transaction weight calculation so that a transaction's weight grows quadratically with its size. Such a weighting would make large transactions extra expensive and would discourage inefficient use of block space.

I think that abandoning the bin concept would simplify the solution tremendously. On the other hand, quadratic weighting might not solve the specific problem you are addressing, and it would certainly open up its own can of worms.
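A quick sketch of what such a quadratic weighting could look like; the pivot/scale constant is arbitrary, chosen here so the weight matches linear sizing at 1000 vB:

```python
# Sketch of the quadratic-weighting idea: cost grows with the square of
# size, so one 4000-vB transaction costs far more than four 1000-vB ones.
# The scale constant is arbitrary (illustrative only).

def quadratic_weight(vsize, scale=1000):
    """Weight grows quadratically in vsize; equals vsize when vsize == scale."""
    return vsize * vsize / scale

print(quadratic_weight(1000))      # 1000.0  (same as linear at the pivot)
print(quadratic_weight(4000))      # 16000.0 (4x the size, 16x the weight)
print(4 * quadratic_weight(1000))  # 4000.0  (four small txs cost far less)
```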


Quote
I don't like this idea. It doesn't solve the problem of keeping fees low and the mempool normal, because when you artificially limit the amount of bytes one kind of transaction gets in a block, desperate users will still increase the average network fee anyway.

I don't believe that a solution to the problem of keeping fees low even exists. The purpose here is to raise fees for certain types of transactions, and I don't think the average network fee is relevant. When I want to send bitcoins, I care about my fee and not the average. My fee only depends on my bin. If desperate users increase their fees in order to get space in a different bin, then it doesn't affect me.

bipshuffle (OP) | Newbie | Activity: 17 | Merit: 30
April 23, 2024, 01:09:27 AM
#14

Quote
I don't think that determining the type of the transaction by looking at its contents is feasible, mostly because of P2TR.

Perhaps P2TR could be its own transaction bin.

Quote
Requiring a process for allocating bins makes the solution extremely difficult, not just because process of consensus would be complex, but also because it could potentially be manipulated by miners.

Miners could, but they'd risk having their blocks rejected by nodes.

I like the idea of quadratic weight assignment, but as you mentioned, it has its own set of issues; though perhaps those issues are more straightforward, and consensus could be easier to reach.


Quote
The purpose here is to raise fees for certain types of transactions, and I don't think the average network fee is relevant. When I want to send bitcoins, I care about my fee and not the average. My fee only depends on my bin.

The purpose is not necessarily to raise fees (though it may in some bins); it's more to help smaller transactions, and transactions that don't include enormous amounts of metadata, get processed in a timely manner and without insane fees. Perhaps the quadratic weighting solution is all that's needed. But from my perspective, it's likely something needs to be done to keep the network usable.

Quote
If desperate users increase their fees in order to get space in a different bin, then it doesn't affect me.

Nail on the head
d5000 | Legendary | Activity: 3906 | Merit: 6172
April 25, 2024, 05:33:39 AM
#15

I also don't like this idea, unfortunately.

Quote
Purpose is [...] to help smaller transactions and transactions excluding enormous amounts of metadata get processed in a timely manner and without insane fees.
HeRetiK and vjudeu have already explained that you can't really tell whether a transaction contains metadata. And if there is a significant fee increase for a group of transactions, they will try to escape their "bin".

I'll give you a practical example of what could happen if such a proposal were implemented: Stampchain SRC-20 (a protocol created to "improve" BRC-20, an Ordinals-based token protocol which clogged the blockchain last year, but which is in fact even worse).

SRC-20 is an insanely inefficient and dangerous protocol: it encodes metadata inside a regular multisig output, i.e., it creates a "fake" public key holding the data of a JSON(!) text. While these transactions may have some structural elements in common, in reality nobody can tell whether you are transacting coins with such a transaction or whether it's encoded metadata. If there were a heuristic that detected them reliably, they could simply change the protocol slightly and it would no longer be detected.
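For illustration only (this is not the exact SRC-20 wire format), here is the general trick of dressing arbitrary bytes up as a compressed public key; to a validating node the result has the same length and prefix byte as a real key, which is why content-based detection is so hard:

```python
# Illustration of why multisig-embedded data is hard to detect: arbitrary
# bytes padded into a 33-byte "compressed pubkey" shape look like a real
# key in length and prefix. (Not the exact SRC-20 encoding; protocols can
# additionally grind bytes until the x-coordinate is a valid curve point.)

def bytes_to_fake_pubkey(data: bytes) -> bytes:
    """Pack up to 32 bytes of data behind a plausible 0x02 prefix."""
    assert len(data) <= 32
    return b"\x02" + data.ljust(32, b"\x00")

payload = b'{"p":"src-20"}'  # fragment of a JSON payload
fake_key = bytes_to_fake_pubkey(payload)

print(len(fake_key))          # 33 -- same length as a real compressed key
print(fake_key[0] in (2, 3))  # True -- same prefix byte as a real key
```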

I said "dangerous" because this kind of protocol creates a ton of UTXOs which will never be spent, and all validating nodes must keep track of them and waste resources. Similar protocols were already around in 2013/14 and motivated the "legalization" of OP_RETURN for arbitrary data storage of up to 80 bytes in 2014 (the opcode is the basis for token mechanisms like Runes, Counterparty and Omni; it was already added by Satoshi but was non-standard until v0.9).

You will never be able to keep all versions of all those protocols under control. You would have to adjust the "rules" for the "bins" constantly and even then those wanting to store useless metadata would still be able to bypass your rules. Protocols could even try to offer several transaction mechanisms for the same type of token to fit in different bins, so the users could use the cheapest bin.

Quote
But from my perspective, it's likely something needs to be done to keep the network usable.

We have discussed some related ideas extensively in several Ordinals-related threads for about a year. The only idea which could really help is changing the protocol to be more similar to Monero, or better yet Grin. Maybe a pre-reservation of block space (link already provided by ABCbits above) could at least help produce more "even" fee behaviour too, but I'm not sure about that; it brings a lot of additional complexity. Even some Bitcoin devs have proposed "solutions" which simply didn't work (Luke-Jr's heuristic code, Ordisrespector ...). In my opinion, the best way is to improve L2s (LN, sidechains, statechains etc.) to move as much transaction activity as possible off the main chain.

NotATether | Legendary | Activity: 1596 | Merit: 6728
April 25, 2024, 06:11:48 AM
Merited by ABCbits (1)
#16

Quote
I don't like this idea. It doesn't solve the problem of keeping fees low and the mempool normal, because when you artificially limit the amount of bytes one kind of transaction gets in a block, desperate users will still increase the average network fee anyway.

I don't believe that a solution to the problem of keeping fees low even exists. The purpose here is to raise fees for certain types of transactions, and I don't think the average network fee is relevant.

This is basically what Luke-Jr was trying to do a couple of weeks ago by patching the datacarrier logic to interpret TapScripts (well, not exactly increasing fees, but making large data transactions infeasible). It obviously never reached consensus, so it was never merged. It would've enforced a limit on the size of TapScripts. A similar failed pull request for enforcing the witness size limit is here.


I believe a partitioning of block space to increase the fees of data transactions is even less likely to get merged.

bipshuffle (OP) | Newbie | Activity: 17 | Merit: 30
April 25, 2024, 05:18:11 PM
#17

Quote
I believe a partitioning of block space to increase the fees of data transactions is even less likely to get merged.

The intent of such partitioning *is not* to increase the fees of data transactions (though that may be a byproduct); the intent is to ensure there's space in blocks for BTC's intended use case as a currency (this includes facilitating L2 transactions).


Quote
SRC-20 is an insanely inefficient and dangerous protocol: it encodes the metadata inside a regular multisig output, i.e. creates a "fake" public key with the data of a JSON(!) text. While these transactions may have some structural elements in common, in reality nobody can tell if you are transacting coins with such a transaction or if it's encoded metadata. If there was some heuristics detecting them reliably, they could simply change the protocol slightly and it wouldn't be detected anymore.

I said "dangerous", because this kind of protocol creates a ton of UTXOs which will never be spent, and all validating nodes must take them into account and waste resources. Similar protocols were already around in 2013/14 and motivated the "legalization" of OP_RETURN for arbitrary data storage of up to 80 bytes in 2014 (the opcode is the base for token mechanisms like Runes, Counterparty and Omni, it was already added by Satoshi but was non-standard until v0.9).

You will never be able to keep all versions of all those protocols under control. You would have to adjust the "rules" for the "bins" constantly and even then those wanting to store useless metadata would still be able to bypass your rules. Protocols could even try to offer several transaction mechanisms for the same type of token to fit in different bins, so the users could use the cheapest bin.

Are there any opcodes that are integral to ordinals/other inscriptions but aren't critical for facilitating true L1/L2 BTC monetary txs? OP_RETURN, OP_PUSHBYTES, OP_PUSHDATA come to mind, but I haven't studied Lightning / other L2s enough to be sure these aren't required.
Perhaps partitioning blocks based on tx opcodes could be broad-based enough to have fairly static rules?



I'm continuing to push on this both for educational and brainstorming purposes.
I'd really love to see a solution that mitigates the potential for what are effectively DDoS attacks on BTC, while preserving BTC's ability to be multi-functional and uncensored.
d5000 | Legendary | Activity: 3906 | Merit: 6172
April 25, 2024, 05:54:12 PM
Merited by HeRetiK (1)
#18

Quote
Are there any op codes that are integral to ordinals/other inscription that aren't critical for facilitating true L1/L2 BTC monetary txs? OP_RETURN, OP_PUSHBYTES, OP_PUSHDATA come to mind, but I haven't studied Lightning / other L2s enough to be sure these aren't required.
But that's exactly the point! Mechanisms like Stampchain SRC-20 use only opcodes common in "normal" transactions (in this case OP_CHECKMULTISIG). Yes, OP_RETURN is (afaik) only used by "data transactions" (the other opcodes you mentioned have, to my understanding, other use cases), but it was made standard in Bitcoin 0.9+ to lower the impact of token systems and data transactions on validating nodes.

Now imagine you "ban" OP_RETURN from the main bin and fees for OP_RETURN txs rise because their "bin" becomes congested: everybody wanting to use tokens on BTC would then switch to Stampchain or similar mechanisms, and the "main bin" becomes congested again (with worse consequences due to increased resource usage).

If you also ban multisig transactions from the main bin you affect Lightning, and multisig is not even necessary for such a protocol. There are older protocols that use the sequence number for metadata, e.g. the first version of EPOBC.

By the way, regarding Lightning: there may be situations where it would be an advantage for "LN transactions" to be able to get all the necessary block space. If you restrict LN transactions (in whatever way) to a 20% "bin", you may for example delay the closure of channels if a massive node tries to attack.

I think you should really re-read my, HeRetiK's, odolvlobos's and vjudeu's posts to understand what's wrong with your proposal.

bipshuffle (OP)
Newbie
Activity: 17
Merit: 30
April 25, 2024, 10:58:56 PM
 #19

Quote
I think you should really re-read my, HeRetiK's, odolvlobos's and vjudeu's posts to understand what's wrong with your proposal.

I believe I understand the point you all are making.
In summary, you feel there's no way to design bins in a manner that would prevent people from simply circumventing them by cleverly crafting their transactions to fit into low-fee bins (which may in turn make the issue worse, because those transactions might become even less efficient).

Still, is there really no broad-based way to accomplish something like this?

Consider the simplest case: say we wish to implement binary binning, with one bin reserved for a very precise and very common transaction script template. Let's say P2PKH: [OP_DUP, OP_HASH160, Hash160(seckey.pub), OP_EQUALVERIFY, OP_CHECKSIG] (I know this is pretty much a legacy script type; I'm just using it for simplicity).
Couldn't we help such transactions confirm without issue by ensuring some fraction of each block is available for them?
I can't imagine that any inscription could occur via this particular op sequence (could it?).

If we can achieve this, could we not expand the scope to parse out other common and precise transaction types?
The majority of the block could remain "all transactions" (including those which are specifically reserved).

Perhaps it's not possible with LN at the moment (I'm not sure of the opcodes used to open/close LN channels), but it appears possible at a basic level.
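Matching that template programmatically is indeed trivial. A hypothetical sketch (the function name and framing are mine, purely illustrative) of how a "bin" classifier could recognize the canonical 25-byte P2PKH scriptPubKey:

```python
def is_p2pkh(script: bytes) -> bool:
    """Match the canonical 25-byte P2PKH scriptPubKey template:
    OP_DUP OP_HASH160 <20-byte pubkey hash> OP_EQUALVERIFY OP_CHECKSIG."""
    return (len(script) == 25
            and script[0] == 0x76    # OP_DUP
            and script[1] == 0xa9    # OP_HASH160
            and script[2] == 0x14    # push of exactly 20 bytes
            and script[23] == 0x88   # OP_EQUALVERIFY
            and script[24] == 0xac)  # OP_CHECKSIG

# Any output paying an arbitrary 20-byte hash matches the template:
spk = bytes.fromhex("76a914" + "ab" * 20 + "88ac")
print(is_p2pkh(spk))  # True
```

Note that the check only constrains the script's shape, not the 20 hash bytes inside it, which is where the circumvention problem discussed in this thread comes in.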
d5000
Legendary
Activity: 3906
Merit: 6172
Decentralization Maximalist
April 26, 2024, 01:30:16 AM
Merited by ABCbits (5), bipshuffle (4), HeRetiK (1)
 #20

Quote
Consider the simplest case: say we wish to implement binary binning, with one bin reserved for a very precise and very common transaction script template. Let's say P2PKH: [OP_DUP, OP_HASH160, Hash160(seckey.pub), OP_EQUALVERIFY, OP_CHECKSIG] (I know this is pretty much a legacy script type; I'm just using it for simplicity). [...]
I can't imagine that any inscription could occur via this particular op sequence (could it?).
Yes, it can. You can encode the necessary metadata in the nSequence field, like EPOBC did, or create fake public key hashes aka addresses (P2PKH/P2WPKH) or fake public keys (P2PK). P2(W)PKH offers fewer bytes.

Basically, to explain it in simple terms: what you would do is create a fake address containing the token data. Let's say you represent something like P:DRC20:t:PEPE:v:2000 (P for protocol, t for token [symbol] and v for value) first as hexadecimal bytes (503a44524332303a743a504550453a763a32303030) and then encode it into bech32, and this becomes an address (bc1qqqqqqqqqqqqqqqqqqpgr53zjgverqwn58fgy25z98fmr5v3sxqcqqyvute — you can try it here). Nobody would be able to spend this UTXO, however, so it would clutter the UTXO set forever.

You now create an additional output with 1 satoshi to the address you want to mint/transfer the token to, optionally an output for change, and that's all that's needed: two or three P2(W)PKH outputs. While the bech32 address in this example looks a bit strange, that's only because I had to pad the hex value with zeroes, as it was too short (the witness program has to be either 20 or 32 bytes).

Even if you created a completely new transaction type with even less data available, the "fake address" method would still work. You could try to separate out transactions with more than one output, since it's difficult to encode everything in a single P2(W)PKH scriptPubKey, but it would perhaps still be possible. More importantly, you would then make every transaction with even one satoshi of change more expensive, so this would be unfeasible.
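The "fake address" trick above is easy to reproduce. Below is a minimal bech32 encoder condensed from the BIP-0173 reference algorithm (illustrative sketch, not production code), which turns arbitrary bytes, zero-padded to a 32-byte witness program, into a valid-looking native segwit address of the shape quoted above:

```python
CHARSET = "qpzry9x8gf2tvdw0s3jn54khce6mua7l"  # bech32 alphabet (BIP-0173)

def bech32_polymod(values):
    """BIP-0173 checksum polymod over 5-bit values."""
    GEN = [0x3b6a57b2, 0x26508e6d, 0x1ea119fa, 0x3d4233dd, 0x2a1462b3]
    chk = 1
    for v in values:
        b = chk >> 25
        chk = (chk & 0x1ffffff) << 5 ^ v
        for i in range(5):
            chk ^= GEN[i] if ((b >> i) & 1) else 0
    return chk

def bech32_checksum(hrp, data):
    values = [ord(c) >> 5 for c in hrp] + [0] + [ord(c) & 31 for c in hrp] + data
    polymod = bech32_polymod(values + [0] * 6) ^ 1  # bech32 constant (witness v0)
    return [(polymod >> 5 * (5 - i)) & 31 for i in range(6)]

def convertbits(data, frombits, tobits):
    """Regroup a byte stream into 5-bit groups, zero-padding the tail."""
    acc, bits, ret = 0, 0, []
    for value in data:
        acc = (acc << frombits) | value
        bits += frombits
        while bits >= tobits:
            bits -= tobits
            ret.append((acc >> bits) & ((1 << tobits) - 1))
    if bits:
        ret.append((acc << (tobits - bits)) & ((1 << tobits) - 1))
    return ret

def segwit_address(hrp, witver, witprog):
    data = [witver] + convertbits(witprog, 8, 5)
    return hrp + "1" + "".join(CHARSET[d] for d in data + bech32_checksum(hrp, data))

# Arbitrary token metadata, left-padded with zero bytes to a 32-byte witness
# program (hence the long run of 'q' characters in the resulting address):
payload = b"P:DRC20:t:PEPE:v:2000".rjust(32, b"\x00")
print(segwit_address("bc", 0, payload))
```

The encoder happily produces a checksummed address for data that was never a script hash, which is exactly why no template or bin rule can tell it apart from a genuine P2WSH output.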
