Author Topic: Segwit details? N + 2*numtxids + numvins > N, segwit uses more space than 2MB HF  (Read 21358 times)
2112 (Legendary; Activity: 2128, Merit: 1065)
April 04, 2016, 02:55:50 AM (last edit: April 04, 2016, 03:09:05 AM by 2112)
#261

Quote
You started writing really weird conflated stuff. What do fees have to do with transaction syntax? ... The amount of fees doesn't change the syntax, so doesn't require change of the version.

Sorry, I don't understand your objections.  

There are no "meta-rules" that specify what the validity rules can be.  They are not limited to "syntax", whatever that means.   Any computable predicate on bit strings could in principle be a validity rule, as long as it does not completely break the system.

Right now there are no validity rules that refer to fees. The minimum fee, like the Pirate Code, "is more what you'd call 'guideline' than actual rule"; each miner decides whether to require it (or even to require more than it). But the minimum could be made into a validity rule. The difference would be that each miner would not only impose it on his own blocks, but also reject blocks solved by other miners that contain transactions paying less than that fee.

Quote
The version field should be used to clearly describe syntax rules governing the transaction format.

As I wrote, this cannot be guaranteed.  If a fork (rule change) was executed to fix a bug or prevent an attack, the miners cannot continue to use the old rules for transactions that have the old version tag; that would negate the purpose of the fork.  They must reject such transactions.  

So, it is not safe to retain signed but unconfirmed transactions without broadcasting them.
I'm still unsure why we started talking about fees in this thread. Fees enter the consensus validity rules only in the check that they aren't negative: a fee has to be zero or positive. The value of a fee is used only when priority-sorting already-verified transactions.

Also, I don't believe in the existence of non-fixable bugs in the old rules of the kind you describe, where a "fork (rule change) was executed to fix a bug or prevent an attack" and "the miners cannot continue to use the old rules for transactions that have the old version tag".

Edit: Getting back to the original argument:
Quote
Pre-signed but unbroadcast or unconfirmed transactions seem to be a tough problem.
I disagree on the "tough" part. In my opinion this is less difficult than DOSbox/Wine on Linux or the DOS subsystem in 32-bit Windows (and the Itanium editions of 64-bit Windows). It is more a problem of how much energy to spend on scoping the required area of backward compatibility and preparing/verifying test cases.
Perhaps the DOS/Windows argument wasn't the best. A better, but less well known, example would be mainframe disk device drivers. They easily cover old-style devices with interfaces designed in the late 1960s. The hardware implementations are "frozen" in the sense that nobody changes the relevant hardware logic anymore; it is just a small sub-area of a modern VLSI chip that implements exactly the same logic as the old TTL-style disk interface controller.

Nobody designs or writes an interface that is sprinkled with conditional logic to handle the old protocols (if () then {} else {}). There is a one-time inquiry to determine the protocol version in use, and then all operations are handled through indirection (e.g. (*handle_read[versn])(...)).

The same idea could be applied to Bitcoin if the version field were used appropriately in both blocks and transactions.
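
To make the indirection idea concrete, here is a minimal sketch in C. The type and function names are mine, not from any Bitcoin implementation: the version is inspected once, unknown versions are rejected, and everything else goes through a handler table rather than per-field if/else checks.
Code:
/* Minimal sketch only; names are hypothetical, not from any Bitcoin codebase.
 * The version is checked once, and all further parsing goes through a
 * function-pointer table instead of per-field if/else version checks. */
#include <stddef.h>
#include <stdint.h>

typedef struct raw_tx { const uint8_t *bytes; size_t len; } raw_tx;
typedef int (*tx_parse_fn)(const raw_tx *tx);

static int parse_tx_v1(const raw_tx *tx) { (void)tx; /* legacy layout */ return 0; }
static int parse_tx_v2(const raw_tx *tx) { (void)tx; /* new layout    */ return 0; }

/* Index 0 unused so the table can be indexed directly by version number. */
static const tx_parse_fn parse_tx[] = { NULL, parse_tx_v1, parse_tx_v2 };

int parse_transaction(const raw_tx *tx, uint32_t version)
{
    if (version == 0 || version >= sizeof(parse_tx) / sizeof(parse_tx[0]))
        return -1;                      /* unknown version: reject outright */
    return (*parse_tx[version])(tx);    /* one-time dispatch via indirection */
}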

Please comment, critique, criticize or ridicule BIP 2112: https://bitcointalk.org/index.php?topic=54382.0
Long-term mining prognosis: https://bitcointalk.org/index.php?topic=91101.0
2112 (Legendary; Activity: 2128, Merit: 1065)
April 04, 2016, 03:17:19 AM (last edit: April 04, 2016, 03:31:45 AM by 2112)
#262

Quote
* Malleability could be fixed by just skipping the signatures (the same data that SegWit would segregate) when computing the txid.
That _is_ segregation of the signatures up to completely non-normative ordering of data transferred. Segwit could just as well order the data into the same place in the serialized transactions when sending them, but it's cleaner to not do so.
The "cleaner" part is true only to subset of people: those that were actually considering the original Satoshi's design as "ideal" or "perfect".

I personally think of the original design, where the "transaction hash" is both a "transaction identifier" and a "transaction checksum", as a sort of "neat hack".
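
For illustration, a minimal sketch of the idea quoted above: compute the id over everything except the signature bytes. The function name, the flat offset/length view of the signature data, and the use of OpenSSL's SHA256 routines are assumptions made for the sketch; this is not Bitcoin Core's serializer.
Code:
/* Conceptual sketch only; field offsets and names are made up.
 * tx[0..tx_len) is a serialized transaction whose signature bytes occupy
 * [sig_off, sig_off + sig_len). The id is the double SHA-256 of the rest. */
#include <openssl/sha.h>
#include <stdint.h>
#include <stddef.h>

void txid_without_sigs(const uint8_t *tx, size_t tx_len,
                       size_t sig_off, size_t sig_len,
                       uint8_t id_out[SHA256_DIGEST_LENGTH])
{
    uint8_t first[SHA256_DIGEST_LENGTH];
    SHA256_CTX ctx;

    SHA256_Init(&ctx);
    SHA256_Update(&ctx, tx, sig_off);                    /* bytes before the sigs */
    SHA256_Update(&ctx, tx + sig_off + sig_len,
                  tx_len - sig_off - sig_len);           /* bytes after the sigs  */
    SHA256_Final(first, &ctx);

    SHA256(first, sizeof first, id_out);                 /* second pass: Bitcoin-style
                                                            double SHA-256 */
}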

Edit:
Quote
What about fixing those "other problems" (I don't want to say "hard", because IMO they aren't "hard" by themselves) without the segregation? Impossible or just not worth it?
A strong malleability fix _requires_ segregation of signatures.

A less strong fix could be achieved without it if generality is abandoned (e.g. only works for a subset of script types, rather than all without question) and a new cryptographic signature system (something that provides unique signatures, not ECC signatures) was deployed.

And even with giving up on fixing malleability for most smart contracts, it's very challenging to be absolutely sure that a specific instance is actually non-malleable. This can be seen in the history of BIP62-- where at several points it was believed that it addressed all forms of malleability for the subset of transactions it attempted to fix, only to  later discover that there were additional forms.  If a design is inherently subject to malleability but you hope to fix it by disallowing all but one possible representation there is a near endless source of ways to get it wrong.

Segregation removes that problem. Segwitness using scripts achieve a strong base level of non-malleability without doubt or risk of getting it wrong, both in design and by script authors. And only segregation applies to all scripts, not just a careful subset of "inherently non-malleable rules".

Getting signatures out from under TXIDs is the natural design to prevent problems from malleability and engineers were lamenting that Bitcoin didn't work that way as far back as 2011/late-2012.
The requirement for segregation is really only for "logical" segregation, not "physical" segregation.

My opinion is that the main point of contention is this: more programmers agree that "logical" (or algebraic) segregation is OK, but only a much smaller subset of programmers agrees that "physical" segregation (placing the data far away in the serialized bytestream, on the wire or on the disk) is the correct way to implement the algebraic segregation.
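
A rough sketch of the distinction, with made-up struct names and sizes: in both layouts the txid would be computed over the non-witness fields only (the "logical" segregation), but only the second layout also moves the witness bytes to a separate section of the serialized stream (the "physical" segregation).
Code:
/* Illustration only; struct names and field sizes are made up. */
#include <stdint.h>
#include <stddef.h>

typedef struct { uint8_t bytes[72]; } witness;   /* e.g. a DER-ish signature */

/* Layout A: witness serialized inline, next to its input, but simply
 * skipped when hashing the transaction id. */
typedef struct {
    uint8_t  prev_txid[32];
    uint32_t prev_vout;
    witness  wit;
} input_inline;

/* Layout B: inputs and witnesses serialized as two separate sections. */
typedef struct {
    uint8_t  prev_txid[32];
    uint32_t prev_vout;
} input_only;

typedef struct {
    input_only *inputs;       /* covered by the txid                    */
    witness    *witnesses;    /* appended after the inputs on the wire  */
    size_t      n_inputs;
} tx_physically_segregated;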

Edit2:

In addition to the above, there is the issue of the optimal length of the "transaction id" and the "witness id". Transaction identifiers have to be globally unique, whereas "witness identifiers" only have to be unique within the block they refer to. So the optimal length of the witness id could be much shorter than 256 bits.
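
A back-of-the-envelope check of that claim, using the birthday bound p ~= n*(n-1)/2^(k+1) for n random k-bit ids. The figure of 4000 transactions per block is just an assumption for illustration, and the bound only covers accidental collisions, not deliberately ground ones.
Code:
/* Birthday-bound estimate of per-block collision probability for
 * witness ids of various widths. Numbers are illustrative only. */
#include <math.h>
#include <stdio.h>

int main(void)
{
    double n = 4000.0;                    /* assumed transactions per block */
    int bits[] = { 64, 128, 256 };

    for (size_t i = 0; i < sizeof bits / sizeof bits[0]; i++) {
        double p = n * (n - 1.0) / pow(2.0, bits[i] + 1);
        printf("%3d-bit id: ~%.1e collision probability per block\n", bits[i], p);
    }
    return 0;
}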


2112 (Legendary; Activity: 2128, Merit: 1065)
April 04, 2016, 05:16:22 PM
#263

OK, so somebody posted and then quickly deleted a follow-up to my messages above. I only took a glance before I was interrupted, but the main take-away was that I should indeed clarify what I meant.

So let's roll back to Satoshi's original transaction design.

There are basically 3 main goals that the transaction format has to fulfill:

1) reference source and destination of funds, as well as amounts
2) cryptographically sign the source references to prove that one has control over them
3) detect (and possibly correct) errors in the transmitted transaction: both intentional (tampering) and unintentional (channel errors)

Satoshi's original design used a single SHA256 hash to cover all three goals. It was a neat idea to kill 3 birds with one stone. But then it turned out that only 2 birds got killed; the middle one only got injured, and it has (at least) two lives: low-S and high-S.
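
As an aside on those "two lives": with ECDSA, if (r, s) is a valid signature then so is (r, n - s), where n is the group order, so a third party can flip s, change the serialized bytes, and thereby change any hash that covers them, without invalidating the spend. A minimal demonstration using OpenSSL 1.1-style ECDSA calls (error handling and cleanup omitted):
Code:
/* Sign a dummy 32-byte digest, flip s to n - s, and show both verify. */
#include <openssl/ec.h>
#include <openssl/ecdsa.h>
#include <openssl/obj_mac.h>
#include <openssl/bn.h>
#include <stdio.h>

int main(void)
{
    unsigned char digest[32] = { 0x01 };        /* stand-in for a tx hash */

    EC_KEY *key = EC_KEY_new_by_curve_name(NID_secp256k1);
    EC_KEY_generate_key(key);

    ECDSA_SIG *sig = ECDSA_do_sign(digest, sizeof digest, key);

    /* Flip s -> n - s without touching r. */
    const BIGNUM *r, *s;
    ECDSA_SIG_get0(sig, &r, &s);
    BIGNUM *order = BN_new(), *s2 = BN_new();
    EC_GROUP_get_order(EC_KEY_get0_group(key), order, NULL);
    BN_sub(s2, order, s);

    ECDSA_SIG *sig2 = ECDSA_SIG_new();
    ECDSA_SIG_set0(sig2, BN_dup(r), s2);

    printf("original verifies: %d\n", ECDSA_do_verify(digest, sizeof digest, sig,  key));
    printf("flipped  verifies: %d\n", ECDSA_do_verify(digest, sizeof digest, sig2, key));
    /* Both print 1: the two encodings differ byte-for-byte, so an id that
     * hashes the signature bytes changes even though the spend is the same. */
    return 0;
}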

So then we start trying to address those 3 main goals using separate fields in a new transaction format. I'm not really prepared to discuss all the possibilities.

Let's just discuss a possible encoding for a single UTxO reference. The current design is an ordered pair (256-bit transaction id, short integer index of the output within that transaction). Let's also assume that for some reason it becomes extremely important to shorten that reference (e.g. transferring transactions over a QR code or some other ultra-low-power-and-bandwidth radio technology).

It may turn out that a better globally unique encoding is an ordered pair (short integer block number in the blockchain, short integer index into the preorder traversal of the Merkle tree of transactions and their outputs). It may be acceptable that this format can refer only to already-confirmed transactions.
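
For a sense of the sizes involved, here is a sketch of the two reference encodings as C structs; the 32-bit widths for the block number and the preorder index in the second form are assumptions made for illustration.
Code:
/* Size comparison of the two reference encodings discussed above. */
#include <stdint.h>

/* Current form: (256-bit txid, output index) -> 32 + 4 = 36 bytes. */
typedef struct {
    uint8_t  txid[32];
    uint32_t vout;
} outpoint_by_txid;

/* Alternative form: (block number, index into the preorder traversal of
 * that block's Merkle tree of transactions and outputs) -> 4 + 4 = 8 bytes,
 * usable only for already-confirmed outputs. */
typedef struct {
    uint32_t block_number;
    uint32_t preorder_index;
} outpoint_by_position;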

I'm not trying to advocate changing the current UTxO reference format. All I'm trying to convey is that there are various ways to achieve the required goals, with various trade-offs in their implementation.

Both Satoshi's original design and the current SegWit design suffer from "just-in-time design" syndrome: the choices were made quickly, without properly discussing and comparing the alternatives. The presumed target environment is only modern high-power, high-speed, high-temperature 32-bit and 64-bit processors with high-bandwidth communication channels.

Around the turn of the century there was a cryptographic protocol called https://en.wikipedia.org/wiki/Secure_Electronic_Transaction . It was deservedly an unmitigated failure. But they did one thing right in their design: the original SET "Theory of Operations" document did a thorough analysis of design variants:

1) exact bit counts of various representations and encodings
2) estimated clock counts of the operations on the then-current mainstream 32-bit CPUs
3) estimated clock counts of the operations on the then-current 8-bit micro CPUs like GSM SIM cards
4) estimated line and byte counts of the implementation source and object codes
5) range of achievable gains possible by implementing special-purpose hardware cryptographic instructions with various target gate counts.

Again, I'm definitely not advocating anything like SET and its dual signatures. I'm just suggesting spending more time on balancing the various trade-offs and the possible goals of the completed application.
