spartacusrex (OP)
April 01, 2019, 12:13:43 PM Last edit: April 01, 2019, 01:48:03 PM by spartacusrex Merited by aliashraf (2), ABCbits (1)
This trick requires upgrades to the current Bitcoin scripting language. I'm pretty sure it can all be done right now on Liquid.
You need CHECKOUTPUTVERIFY and some covenant logic, the ability to check Merkle proofs (the branches of the tree leading to some root), and bit-wise operations for checking and setting bits. We'll be using Bram Cohen's bitfields, and indicate a spent output with a single bit.
When we get that.. here's how:
You're an exchange and you need to pay 1024 people who want to withdraw. You have each of their addresses.
Instead of creating a transaction that sends the right amount to each participant..
Create a hash tree that has Hash ( Index Amount Address ) as the leaf nodes. Get the root of the tree. This gives each leaf a proof of 10 hashes + 10 boolean left/right values (to climb the tree to its root). The Index value goes from 0-1023. It's a perfect binary tree: 2^10 = 1024.
Give each user their own leaf proof. This does not reveal the other outputs.
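A minimal Python sketch of how the tree and the per-leaf proofs could be built (the byte encoding of the leaf is illustrative, not normative):

Code:
import hashlib

def h(data):
    return hashlib.sha256(data).digest()

def leaf(index, amount, address):
    # Hash(Index Amount Address) - encoding is illustrative
    return h(index.to_bytes(2, 'big') + amount.to_bytes(8, 'big') + address)

def build_tree(leaves):
    # returns every level of the perfect binary tree, leaves first
    levels = [leaves]
    while len(levels[-1]) > 1:
        prev = levels[-1]
        levels.append([h(prev[i] + prev[i + 1]) for i in range(0, len(prev), 2)])
    return levels

def proof_for(levels, index):
    # 10 sibling hashes + 10 left/right booleans for a 1024-leaf tree
    path = []
    for level in levels[:-1]:
        path.append((level[index ^ 1], index % 2 == 1))  # (sibling, we_are_right)
        index //= 2
    return path

def verify(root, leaf_hash, path):
    node = leaf_hash
    for sibling, we_are_right in path:
        node = h(sibling + node) if we_are_right else h(node + sibling)
    return node == root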
Now create a covenant script.
Send the complete funds for all 1024 participants to this one address.
Any one of the 1024 can present their proof, plus the correct signature, to spend their exact allotted amount. The covenant script ensures the transaction includes an output for the new, correctly lowered amount back to the same address.
How do we know the user has not spent before? We store it in a bitfield. 1024 bits is 128 bytes. So a 128-byte value is passed from covenant to covenant, similar to the way eltoo does it, and you check the bit at the INDEX from the proof to see if you can spend. Once you do spend, that bit is set to 1, the 128-byte value is updated and passed on, and the next check will fail. This is nice as the address for the script does not change, even though its function does.
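In script this would use the proposed bit-wise opcodes; a minimal Python sketch of the equivalent check-and-set logic:

Code:
def bit_is_spent(bitfield, index):
    # bitfield is the 128-byte spent map carried from covenant to covenant
    return (bitfield[index // 8] >> (index % 8)) & 1 == 1

def mark_spent(bitfield, index):
    # returns a new 128-byte value with the bit at index set to 1
    b = bytearray(bitfield)
    b[index // 8] |= 1 << (index % 8)
    return bytes(b)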
This way, 1024 separate users have access to the same transaction output, saving quite a bit of space.
Once we get SIGHASH_NOINPUT we should be able to spend outputs from one transaction as inputs to another in the same block, so multiple users can call the script simultaneously (they won't specify a coinid, just the script).
Lots of other use cases.. (Off-ramping from a sidechain)
Life is Code.
aliashraf
Legendary
Offline
Activity: 1456
Merit: 1175
Always remember the cause!
April 01, 2019, 04:13:10 PM Last edit: April 01, 2019, 06:10:08 PM by aliashraf
Hello spartacus, excellent job. Although I need a bit more time to fully digest your idea, it may be helpful to put your proposal this way:

1- Payer generates a txn which pays to a single Merkle root.
2- The Merkle hash tree has leaves that are each hashes of outputs mentioning the amounts and addresses of receivers (or an output script?).
3- Each receiver is privately handed a proof consisting of the exact traversal path from their own leaf to the root, with the siblings necessary to prove the leaf is a member.
4- Full nodes maintain an additional bitfield data structure for such txn types to keep track of the spent/unspent status of leaves.
5- Recipients can spend by disclosing the path to the root including the hash of their respective leaf, the actual output, and the required information/signature(s) depending on the output script.

All good but a single issue (for now): how could full nodes ever verify that the payer has not overspent the input(s)? And I've got (premature) solutions for this problem as well:

1- You may consider including the list of amounts in the txn body. It adds like 1024*8 bytes (max) to the txn size and reduces the effect you wish for from 1000x to like 5x-10x. Not a smart solution.
2- Another and way better solution would be including just the sha256 hash of the amounts list in the txn, and not the list itself. Full nodes do not verify this hash at all; instead each receiver is given the original list (same for all) along with the hash proof of the main tree for their respective leaf (exclusive), and we are done! It would be each receiver's job to verify the amounts - all of them. A collusion between payer and receivers has no incentive, as the first overspend attempt will be rejected by full nodes, since they are always able to check the remaining balance of the original utxo.

Still there is another issue: this idea implies a kind of interaction and wallet liveness, which is acceptable as the cost of the great performance boost, imo.

Edit: It is also important to note that proofs and signatures can be safely pruned after a while and don't have to be maintained forever, but the spend txn should reveal the index permanently.
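To make solution 2 concrete, a minimal Python sketch of the commitment and the receiver-side check (encodings and names are mine, purely illustrative):

Code:
import hashlib

def commit_amounts(amounts):
    # ordered list of all 1024 amounts, committed in the txn as a single hash
    blob = b''.join(a.to_bytes(8, 'big') for a in amounts)
    return hashlib.sha256(blob).digest()

def receiver_check(amounts, commitment, my_index, my_amount, total_input):
    # each receiver gets the full list and verifies it against the commitment
    if commit_amounts(amounts) != commitment:
        return False
    if amounts[my_index] != my_amount:
        return False
    return sum(amounts) <= total_input   # no overspend across all leaves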
mda
Member
Offline
Activity: 144
Merit: 13
April 01, 2019, 06:09:56 PM
An interesting idea, but you will need another 1024 transactions to spend from the batch. Therefore the throughput gain becomes 2x at best, even less taking all the overhead into account.
aliashraf
Legendary
Offline
Activity: 1456
Merit: 1175
Always remember the cause!
April 01, 2019, 06:13:46 PM
Quote from: mda
An interesting idea, but you will need another 1024 transactions to spend from the batch. Therefore the throughput gain becomes 2x at best, even less taking all the overhead into account.

Spending the outputs is not relevant; you are mixing things up.
spartacusrex (OP)
April 01, 2019, 06:32:30 PM
Quote from: aliashraf
4- Full nodes maintain an additional bitfield data structure for such txn types to keep track of the spent/unspent status of leaves.

No - this data is available in the txn. Nothing special required. Using a covenant you can add data, in this case 4 hashes, to the scriptsig of the output, and enforce it. So it's available at the next spend. It is then updated, with a single bit set to 1, the new data put on the scriptsig, rinse, repeat.

Quote from: aliashraf
How could full nodes ever verify that the payer has not overspent the input(s)?

The script allows a user to spend an exact amount. It does this by enforcing that the new output to the same address be of a certain amount (current - user_amount). All this info is available in the proof. If the user doesn't collect all that is his, it'll go to the miners. You get 1 shot, then your bitfield bit is set and you can't spend again. You have to get it ALL in 1 go.

Quote from: aliashraf
1- You may consider including the list of amounts in the txn body. It adds like 1024*8 bytes (max) to the txn size and reduces the effect you wish for from 1000x to like 5x-10x. Not a smart solution.

The amount is already stored in the proof, and presented at the point of use. Only the root of the hash tree is stored in the txn. All the information is presented by the user, and it either fits or it doesn't: HASH(INDEX AMOUNT ADDRESS) + MERKLE_PROOF.

Quote from: mda
An interesting idea, but you will need another 1024 transactions to spend from the batch. Therefore the throughput gain becomes 2x at best, even less taking all the overhead into account.

This address is exactly the same as any other address you control - except that to use it you need a private key AND a Merkle proof. You can't lose your funds or have them spent. So sure, spending still requires a transaction.. BUT the initial setup, paying 1024 people, would have taken 1024 outputs, and now it takes only 1. You can keep your coins there as long as you like.
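To make that covenant rule concrete, here is a minimal Python sketch of the check the script would be enforcing - COVENANT_ADDRESS and the output encoding are hypothetical placeholders, not real opcode semantics:

Code:
COVENANT_ADDRESS = b'same-covenant-script-hash'   # hypothetical placeholder

def covenant_check(current_value, bitfield, index, amount, new_output):
    # one shot per leaf: the bit for this index must still be 0
    if (bitfield[index // 8] >> (index % 8)) & 1:
        return False
    updated = bytearray(bitfield)
    updated[index // 8] |= 1 << (index % 8)
    # enforce a single continuation output: same address, value lowered by
    # exactly the user's allotted amount, and the bitfield updated
    return (new_output['address'] == COVENANT_ADDRESS and
            new_output['value'] == current_value - amount and
            new_output['bitfield'] == bytes(updated))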
Life is Code.
aliashraf
Legendary
Offline
Activity: 1456
Merit: 1175
Always remember the cause!
April 01, 2019, 06:57:34 PM
Quote from: aliashraf
4- Full nodes maintain an additional bitfield data structure for such txn types to keep track of the spent/unspent status of leaves.

Quote from: spartacusrex
No - this data is available in the txn. Nothing special required. Using a covenant you can add data, in this case 4 hashes, to the scriptsig of the output, and enforce it. So it's available at the next spend. It is then updated, with a single bit set to 1, the new data put on the scriptsig, rinse, repeat.

No need to store it in the txn, and if you do, it should be considered immutable - useless for keeping track of further events (spends).

Quote from: aliashraf
How could full nodes ever verify that the payer has not overspent the input(s)?

Quote from: spartacusrex
The script allows a user to spend an exact amount. It does this by enforcing that the new output to the same address be of a certain amount (current - user_amount). All this info is available in the proof. If the user doesn't collect all that is his, it'll go to the miners. You get 1 shot, then your bitfield bit is set and you can't spend again. You have to get it ALL in 1 go.

Users don't need access to the whole hash tree and raw outputs; they need a partial/relevant proof. The user having to collect all of his output besides fees is how bitcoin works; that is not the problem. The problem arises when the original txn gets to the blockchain first (no spends yet): it would be possible for a malicious actor to pay multiple times to n users (each less than the total output) and convince each of them of the validity of their respective output, because each is less than the total input. Users having full access to the whole hash tree and raw outputs is just naive.

Quote from: aliashraf
1- You may consider including the list of amounts in the txn body. It adds like 1024*8 bytes (max) to the txn size and reduces the effect you wish for from 1000x to like 5x-10x. Not a smart solution.

Quote from: spartacusrex
The amount is already stored in the proof, and presented at the point of use. Only the root of the hash tree is stored in the txn. All the information is presented by the user, and it either fits or it doesn't: HASH(INDEX AMOUNT ADDRESS) + MERKLE_PROOF.

Again, you are addressing the wrong problem. Spending is OK, but confirming initially is a hurdle: full nodes don't have access to / don't store the full information, and users are not supposed to. Please carefully examine my solution and let me know about your concerns.
spartacusrex (OP)
April 01, 2019, 07:11:04 PM
Quote from: aliashraf
No need to store it in the txn, and if you do, it should be considered immutable - useless for keeping track of further events (spends).

It is stored in the scriptsig of the new output. This IS NOT IMMUTABLE. This is EXACTLY what covenants are for. The covenant makes sure the correct data is appended to the scriptsig of the output, storing which index has been spent and which have still to be spent - as a single bit.

As for cheating a user who does not have the full tree: it would be simple to use a SUM hash tree, so the parent includes the sum of its children in the hash value, and the root has the total amount. Now the user KNOWS he has been given the correct amount - or the hash tree won't add up correctly. They do not need full access to the tree..

Quote from: aliashraf
Again, you are addressing the wrong problem. Spending is OK, but confirming initially is a hurdle: full nodes don't have access to / don't store the full information, and users are not supposed to. Please carefully examine my solution and let me know about your concerns.

This?.. Please elaborate.. '..confirming initially is a hurdle..' (I think we are not seeing exactly the same picture..)
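For clarity, a Python sketch of what such a SUM hash tree (a Merkle sum tree) verification could look like - encoding and names illustrative:

Code:
import hashlib

def sum_node(l_hash, l_sum, r_hash, r_sum):
    # the parent commits to the sum of its children's amounts
    data = l_hash + l_sum.to_bytes(8, 'big') + r_hash + r_sum.to_bytes(8, 'big')
    return hashlib.sha256(data).digest(), l_sum + r_sum

def verify_sum_path(leaf_hash, leaf_amount, path, root_hash, root_total):
    # path: [(sibling_hash, sibling_sum, we_are_right_child), ...], leaf to root
    node, running = leaf_hash, leaf_amount
    for sib_hash, sib_sum, we_are_right in path:
        if we_are_right:
            node, running = sum_node(sib_hash, sib_sum, node, running)
        else:
            node, running = sum_node(node, running, sib_hash, sib_sum)
    # if any amount anywhere in the tree were inflated, root_total wouldn't match
    return node == root_hash and running == root_total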
Life is Code.
aliashraf
Legendary
Offline
Activity: 1456
Merit: 1175
Always remember the cause!
April 01, 2019, 07:24:40 PM
Quote from: spartacusrex
It is stored in the scriptsig of the new output. This IS NOT IMMUTABLE. This is EXACTLY what covenants are for. The covenant makes sure the correct data is appended to the scriptsig of the output, storing which index has been spent and which have still to be spent - as a single bit.

Well, what I'm saying is that there is absolutely no need for all of this covenants thing, complicating the proposal. It is just like how nodes maintain the UTXO right now; they can just have extra bitfields for this class of outputs. Period.

Quote from: spartacusrex
As for cheating a user who does not have the full tree: it would be simple to use a SUM hash tree, so the parent includes the sum of its children in the hash value, and the root has the total amount. Now the user KNOWS he has been given the correct amount - or the hash tree won't add up correctly. They do not need full access to the tree..

Right! But no need for a tree: you could simply generate an ordered array of amounts and commit to its hash in the txn, such that all users can verify exactly how the total sum of outputs is distributed. Full nodes never care about it; all they have to do is control the total spent when they are confirming each new spend.

Quote from: spartacusrex
This?.. Please elaborate.. '..confirming initially is a hurdle..' (I think we are not seeing exactly the same picture..)

Once the original txn is to be confirmed, it may be considered a problem whether the creator has distributed the output amount faithfully or not. My solution projects this problem onto each single receiver, while keeping the responsibility of nodes at the total-sum-control level.
aliashraf
Legendary
Offline
Activity: 1456
Merit: 1175
Always remember the cause!
April 01, 2019, 07:41:46 PM Last edit: April 29, 2019, 06:10:01 PM by aliashraf
Op, I think we have a common perspective now. It looks very simple and obvious to me: no covenants, a complementary UTXO data structure, and an extra amountsList hash-committed in the txn and passed to each user we wish to convince of our fidelity.
Now let's take a look at the bigger picture:
One important problem would be wallet maintenance for users. They have to keep track not only of their private keys/seeds: to retrieve their balance they now also need the extra proofs, backed up and kept safe for each output, and that is a great inconvenience.
mda
Member
Offline
Activity: 144
Merit: 13
April 01, 2019, 08:16:56 PM
Quote from: spartacusrex
We store it in a bitfield. 1024 bits is 128 bytes.

This bitfield goes into every spending transaction - 1024 of them. Add to that the Merkle path, and comparing against the ~250 bytes of an average transaction I would doubt even 1x throughput.
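For rough scale (my numbers, assuming 32-byte hashes and a ~72-byte signature): 128 B of bitfield + 10 x 32 B = 320 B of Merkle path, plus index, amount, address and signature, lands somewhere around 550 B per spend, against the ~250 B average transaction - which is the substance of this objection.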
spartacusrex (OP)
April 01, 2019, 09:26:24 PM
Quote from: mda
This bitfield goes into every spending transaction - 1024 of them. Add to that the Merkle path, and comparing against the ~250 bytes of an average transaction I would doubt even 1x throughput.

The original transaction (the one where the 'transaction batching' is going on) is still ~1000x smaller, with a fee that is ~1000x smaller. And you'll still have paid out to 1024 users.
Life is Code.
aliashraf
Legendary
Offline
Activity: 1456
Merit: 1175
Always remember the cause!
April 01, 2019, 10:14:03 PM
Quote from: spartacusrex
The original transaction (the one where the 'transaction batching' is going on) is still ~1000x smaller, with a fee that is ~1000x smaller. And you'll still have paid out to 1024 users.

He is arguing that if you need large proofs to spend, it would make things even worse. Actually, it is a fair objection if you insist on including the bitfield in spend txns. My proposed improvements don't need such an overhead, tho.
mda
Member
Offline
Activity: 144
Merit: 13
April 02, 2019, 12:23:44 AM
Quote from: spartacusrex
The original transaction (the one where the 'transaction batching' is going on) is still ~1000x smaller, with a fee that is ~1000x smaller. And you'll still have paid out to 1024 users.

Merkle trees are useful when plain cryptography is not enough, like in inter-chain communication. In this proposal you are attempting the uneasy task of compressing thirty-something bytes per output even further.
spartacusrex (OP)
April 02, 2019, 08:24:29 AM
Quote from: aliashraf
He is arguing that if you need large proofs to spend, it would make things even worse. Actually, it is a fair objection if you insist on including the bitfield in spend txns. My proposed improvements don't need such an overhead, tho.

.. Yes, yes.. The point of including the bitfield is that it doesn't ask the miners to fundamentally change. My technique uses a little clever scripting, that's all. The miners do exactly what they normally do. They don't need to start storing extra data or changing their core functionality. It's just a scripting upgrade - and miners process scripts very well. I think that has simplicity benefits, but we can disagree.

Quote from: mda
Merkle trees are useful when plain cryptography is not enough, like in inter-chain communication. In this proposal you are attempting the uneasy task of compressing thirty-something bytes per output even further.

I fully accept that the spend transaction will include more data. The point was simply that the exchange or sidechain or whoever needs to do a large batch transaction can now do so at a fraction of the fee and space requirements _initially_ - in reality, as you say, by passing the burden of the fee / space to the spender. (Although spending may be a long time in the future, when space is less of an issue - and at least we are on-chain.)
Life is Code.
aliashraf
Legendary
Offline
Activity: 1456
Merit: 1175
Always remember the cause!
April 02, 2019, 06:46:35 PM Last edit: April 02, 2019, 06:58:28 PM by aliashraf
Quote from: spartacusrex
.. Yes, yes.. The point of including the bitfield is that it doesn't ask the miners to fundamentally change. My technique uses a little clever scripting, that's all. The miners do exactly what they normally do. They don't need to start storing extra data or changing their core functionality. It's just a scripting upgrade - and miners process scripts very well. I think that has simplicity benefits, but we can disagree.

I don't think it is any different in regard to implementation complexities and probable side-effects, no matter whether you are championing an improvement in script processing or whatever. Maintaining an extra data infrastructure for the UTXO is not that complex, and the costs involved in projecting the problem onto the scripting layer are not justifiable. It is generally a bad idea to solve a problem in the scripting layer whenever it could be solved in the core layer.

We need to forget about what Core devs say and think; they are not good at improving bitcoin, we are far better. They are under pressure of the real-world bitcoin and whales, we are not; we can do anything and implement any idea. It is not our mission to keep bitcoin "un-compromised" - it's Greg Maxwell's job and theme song - we need to get rid of such stupid considerations and innovate and innovate, forever!

I think you are too excited about the covenant stuff. I've no doubt there would be applications for covenant scripting, but this is not the one! Let's just not start from covenants and focus on the core idea. I know you've started from covenants, but here we are: no need for covenants at all! I'll fork from your idea; I don't care about covenants and their applications. What I care about is the core idea: scaling batch processing in bitcoin. And right now we have the solution (thanks to your original idea): adding a recursive definition of "unspent txn output" such that you can encapsulate more data in a transaction. That is what actually matters.

So, I'm considering a far more efficient version of your idea, eliminating the whole covenant thing and the txn commitment to bitfields, which is absolutely unnecessary, and to give you the whole credit I'm calling it the "SpartacusRex protocol". Are you in or not?
spartacusrex (OP)
April 03, 2019, 10:16:50 AM
Quote from: aliashraf
I don't think it is any different in regard to implementation complexities and probable side-effects, no matter whether you are championing an improvement in script processing or whatever. Maintaining an extra data infrastructure for the UTXO is not that complex, and the costs involved in projecting the problem onto the scripting layer are not justifiable. It is generally a bad idea to solve a problem in the scripting layer whenever it could be solved in the core layer.

I'm a big bitcoin fan. Period.. If we want this tech to be used, it needs to function within the realms available. Scripting upgrades are all soft-fork. It can _already_ be done on Liquid. That makes it 1,000,000 times more likely to be used. Frankly, it _will_ be usable if Bitcoin simply follows the current upgrade path - no changes to the currently proposed upgrades required..

Quote from: aliashraf
We need to forget about what Core devs say and think; they are not good at improving bitcoin, we are far better. They are under pressure of the real-world bitcoin and whales, we are not; we can do anything and implement any idea. It is not our mission to keep bitcoin "un-compromised" - it's Greg Maxwell's job and theme song - we need to get rid of such stupid considerations and innovate and innovate, forever!

The work Core does cannot be over-estimated. They get a big THANK YOU! from me every day of the week and twice on Sundays.

Quote from: aliashraf
So, I'm considering a far more efficient version of your idea, eliminating the whole covenant thing and the txn commitment to bitfields, which is absolutely unnecessary, and to give you the whole credit I'm calling it the "SpartacusRex protocol". Are you in or not?

Awful name.

----------------------------

IF I was integrating this into a brand new coin.. then there might be ways of making this process cooler. And for that, I'm all ears. I was thinking that if you had to off-board 1 million users from a side-chain that was under attack, you could get them all back on-chain in 1 block. If you made the bitfield larger.. even less.
Life is Code.
aliashraf
Legendary
Offline
Activity: 1456
Merit: 1175
Always remember the cause!
April 03, 2019, 03:22:58 PM
Quote from: spartacusrex
I'm a big bitcoin fan. Period.. If we want this tech to be used, it needs to function within the realms available. Scripting upgrades are all soft-fork. It can _already_ be done on Liquid. That makes it 1,000,000 times more likely to be used. Frankly, it _will_ be usable if Bitcoin simply follows the current upgrade path - no changes to the currently proposed upgrades required..

I'm a bigger fan, but I don't see bitcoin as a dead project. My proposal requires no hard-fork. Implementation problems have nothing to do with the protocol. For instance, you can implement the bitcoin protocol (very inefficiently, tho) without maintaining a data structure for the UTXO; hence adding an extra data structure or not is an implementation choice, one you are choosing to avoid because you are afraid of touching the sacred bitcoin core code, and you are ready to sacrifice the whole idea to convince them their stupid "core" thing won't be touched. In either approach you have no chance to get it done: 0.000000 * 10^6 = 0.

Quote from: spartacusrex
The work Core does cannot be over-estimated. They get a big THANK YOU! from me every day of the week and twice on Sundays.

You mean underestimated, obviously, and I'm not the one who underestimates anybody. We already know what happens to this idea: it will be neglected, or somebody will show up lecturing about the unacceptable consequences of the backup problem while overlooking the huge on-chain scaling advantages, because we have a stupid second-layer solution for it and all we have to do is keep bitcoin as is. Period.

Quote from: spartacusrex
Awful name. IF I was integrating this into a brand new coin.. then there might be ways of making this process cooler. And for that, I'm all ears. I was thinking that if you had to off-board 1 million users from a side-chain that was under attack, you could get them all back on-chain in 1 block. If you made the bitfield larger.. even less.

I like the spirit
spartacusrex (OP)
April 04, 2019, 07:31:28 AM
Actually - it seems perfectly possible to off-board 1 million users in a single transaction.. and without making the bitfield any bigger. Recursive Bitfield Scripts! .. (lol.. of course)

So, same as before - we have a 4-hash (128-byte) bitfield allowing 1024 outputs from a single output. But each of those outputs is itself another Bitfield script. Only the first 'spend' of the original transaction would need to post the next bitfield transaction (not all of them - a nice optimisation). And then, as usual, 1024 normal transactions can be made.

.. Go large! .. with a Triple-Decker.. and we get a billion outputs.. from a single txn output.. .. (not sure what for..)
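Sanity-checking those numbers: one level gives 1024 outputs; two levels give 1024^2 = 1,048,576 - the 1 million users; three levels, the Triple-Decker, give 1024^3 = 1,073,741,824. And each individual spend still carries only one 128-byte bitfield per level, plus its Merkle paths.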
Life is Code.
spartacusrex (OP)
April 04, 2019, 08:27:57 AM
@Ali

The amount of data we are now talking about is quite significant. To keep track of 1 billion outputs (1024*1024*1024) as a bitfield, in a centralised fashion, requires 128MB. And that is for 1 single output - and it does not include all the proofs (then it's really big)!

By using this scripting technique (with the dreaded covenants.. I like, you no like), instead of the miners storing it, all the data is kept by the users. And each user only stores the tiny amount relevant to themselves (no different from storing a public key - it's not even secure data.. just proofs). The amount of potential data could be too large for any single entity to hold in full. By using these scripts we get around all of that. Each user only has to store a small, relevant amount of data, which they present at spend time.

....

I'm going to have to re-raise your '..whole covenant thing and txn commitment to bitfields which is absolutely unnecessary..' and say: 'Come on then - what's your way that is simpler, cleaner and more efficient than this way?'
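(The 128MB figure checks out: 1024^3 = 2^30 bits, and 2^30 / 8 = 2^27 bytes = 128 MiB - for the fully-expanded bitfield alone.)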
Life is Code.
aliashraf
Legendary
Offline
Activity: 1456
Merit: 1175
Always remember the cause!
April 04, 2019, 09:59:02 AM
Quote from: spartacusrex
The amount of data we are now talking about is quite significant. To keep track of 1 billion outputs (1024*1024*1024) as a bitfield, in a centralised fashion, requires 128MB. And that is for 1 single output - and it does not include all the proofs (then it's really big)!
By using this scripting technique (with the dreaded covenants.. I like, you no like), instead of the miners storing it, all the data is kept by the users. And each user only stores the tiny amount relevant to themselves (no different from storing a public key - it's not even secure data.. just proofs).

A hypothetical 1-billion-nested-outputs txn can be handled by making a time/space trade-off. Nodes can implement it without storing a single extra byte, by querying the blockchain for the history, or they can maintain an index to make it more efficient. There is no way to "distribute" this data between users: eventually every node has to approve that a single nested output is not double-spent, and they need the data - a copy of it - to reach consensus. It is how blockchains work, remember? On each spending attempt, nodes have to verify that it has not been attempted before, and for that they need to either query the history or check an exclusively maintained data structure for speed purposes - a bitfield, definitely.

Quote from: spartacusrex
I'm going to have to re-raise your '..whole covenant thing and txn commitment to bitfields which is absolutely unnecessary..' and say: 'Come on then - what's your way that is simpler, cleaner and more efficient than this way?'

In my proposal everything is straightforward: users maintain their leaf proofs and supply them, along with the other supplementary data, to fulfill the nested output script (pubkeys, signatures, scripts, etc.); nodes maintain 1024-bit bitfields internally and eventually tick the respective nested output as spent.

Quote from: spartacusrex
Only the first 'spend' of the original transaction would need to post the next bitfield transaction (not all of them - a nice optimisation). And then, as usual, 1024 normal transactions can be made. .. Go large! .. with a Triple-Decker.. and we get a billion outputs.. from a single txn output.. .. (not sure what for..)

This 'optimization' you are talking about could be more 'optimized' by not requiring the bitfield at all, because it is not the user's problem. It would be an anomaly to have weird txns carrying such scripts that nodes have to update incrementally (in your optimized version); it is cleaner and more consistent to have nodes keep track of nested outputs just like they already do for normal outputs.
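If it helps, the node-side bookkeeping I have in mind is no more than this (a sketch; the names are mine):

Code:
# hypothetical node-side index: nested outpoint -> 1024-bit spent map
spent_maps = {}

def try_spend(outpoint, index):
    bf = spent_maps.setdefault(outpoint, bytearray(128))  # 1024 bits, all unspent
    if (bf[index // 8] >> (index % 8)) & 1:
        return False   # this leaf was already spent
    bf[index // 8] |= 1 << (index % 8)
    return True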