Bitcoin Forum
Show Posts
401  Alternate cryptocurrencies / Altcoin Discussion / Re: The Ethereum VS Bitcoin Fork on: January 17, 2021, 10:24:19 PM
I agree that Turing completeness is unnecessary (I believe this is indisputable, in fact) and an unwarranted liability, but I don't really agree that it is to blame for the dao drama.

Undecidability means you cannot decide if an arbitrary program will halt (and, as a consequence, you usually can't make strong statements about what an arbitrary program will or won't do, since you can't tell if it will or won't halt before an arbitrary step).  But even when a language is Turing complete, an enormous portion of the programs you might want to run on it are decidable; if you limit yourself to engaging only with decidable programs, it can be absolutely proven that they will or won't take certain actions.   One can also use Bitcoin insecurely-- for example you could publish your private key on Bitcointalk-- and that doesn't make Bitcoin itself bad.  So, similarly, "arbitrary" usage of Bitcoin is insecure, but sane users don't use Bitcoin in arbitrary ways: they use it in safe ways.

Even if script allowed recursion and/or loops, you could easily write scripts -- including ones that *used* the recursion and loops-- which could automatically be proven to meet whatever safety property you desired.  An arbitrarily selected program would not be provable, of course, but it's easy to write ones that are.  The absence of recursion/loops/etc, instead, makes it a lot easier to reason about the safety and stability of the consensus rules and reduces the risk of scripts that crash nodes or use unreasonable amounts of resources.

For the Dao Debacle and many other horrifying eth failures, the real issue in my view is that these smart contracts are in fact cryptographic protocols, but the whole frequently-hopping-scam-spectrum business model of ethereum (basically cooking up crooked financial schemes faster than people can get arrested for them or the public can learn to not get ripped off by them) basically depends on ignoring this unfortunate fact.  Making a smart contract is not really different from inventing a new block cipher, secure communications protocol (like TLS), or zero knowledge proof.  As cryptographic protocols we should recognize that they are ABSURDLY brittle, and that even when designed and reviewed by the world's foremost experts they can easily contain utterly devastating flaws that leave them completely without security, and these flaws can go a long time before they're detected and suddenly exploited.  As such, an extraordinary level of diligence is required when creating them, justifying things like formal methods (e.g. techniques to prove properties of computer programs).

And the eth software stack and ecosystem are not particularly well suited to the required sort of diligence-- worse, their smart contracting functionality is primarily promoted as being extremely accessible and suitable for even relatively inexperienced UI developers.  This is a toxic mix with predictable results.

But faulty smart contracts are extremely common in ethereum and were common before the dao drama, and yet they did not result in editing the ledger to claw back funds from the technically-correct-but-surprising execution of the smart contract-- violating the "Unstoppable code" headline promise of the ethereum crowd sale.   So the flaws in the dao contract and the ethereum ecosystem aren't really relevant to the drama, because they don't distinguish it from 100 other events.  What does distinguish it is *who* lost money and the incredible control they wield over the system because they premined the vast majority of its coins and continue to control billions of dollars worth of them, and the lack of integrity and self control exhibited by those persons.
402  Alternate cryptocurrencies / Altcoin Discussion / Re: The Ethereum VS Bitcoin Fork on: January 17, 2021, 02:52:07 PM
Removing the 'overflow' transaction worked within the protocol and consensus systems.  Hashpower extended one chain rather than another, and eventually the network reorged.

If you were running old unfixed software you followed along just fine.

By comparison, in eth-- the rules of the idiot scam investment contract allowed someone to take the funds. Absolutely nothing malfunctioned in ethereum itself (for a change...), it worked precisely as it was supposed to. The webpages for this "investment" were abundantly clear in allcaps text: the programmed rules are binding and final.  Ethereum developers invested millions in the sketchy contract (and were, in fact, part of its operators as official advisers and key holders) and then, when it didn't go their way, they edited the ledger to restore the funds and forced users to upgrade, using the threat of withdrawing their support (which was significant, due to their massive premine).

Worse, when some users created a fork the ethereum corporation used an exploit on it to steal funds from the original attacker and then tried to dump them on the market ... but they got caught and ended up returning most of them.  Bitcoin wasn't made mutable in a way that it wasn't always mutable.

So by comparison:

Bitcoin itself malfunctioned in an obvious way where it couldn't continue, since anyone being able to print billions of coins out of nothing is an obviously useless system.  Working *within* the consensus rules, using the fact that nodes follow the most-work valid chain, the malfunctioning transaction was kicked out in a way in which any transaction could have been kicked out.  Doing so created no winners or losers (Bitcoin simply couldn't continue under the broken rules)-- and absolutely no one complained about or disagreed with the change.  The developers/miners that drove fixing it stood to gain nothing personally from the event, other than just keeping Bitcoin working at all.

Imagine that you reject the fix and want to keep running the original anyone-can-print-coins-out-of-nothing code.  You can do that. What chain are you on?  The current Bitcoin chain.

Even if you wanted to be an absolutist that we had to do what the code said-- the bug was actually triggering undefined behaviour in the C++ language.  It's not formally defined what happens when those signed integers overflow.  Rejecting the transaction was as valid an action as accepting it, according to the C++ language.  Had it happened to be the case that a popular compiler saturated on signed overflow instead of wrapping, or if popular ones crashed on signed overflow (some actually do, but didn't in this case), then what happened-- a chain without the txn gaining the most hashpower-- would likely have happened naturally without any human intervention at all, though it might have taken a while.
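As a simplified Python model of the overflow being discussed (an editor's illustration, not Bitcoin Core's actual code; the output values are roughly those of the real 2010 transaction): with wrapping 64-bit signed arithmetic, two enormous outputs sum to a *negative* total and slip past a naive inputs-cover-outputs check.

```python
# Simplified model (NOT Bitcoin Core's code) of the 2010 value-overflow
# incident: with wrapping 64-bit signed arithmetic, two enormous outputs
# sum to a negative total and pass a naive inputs >= outputs check.

MAX_MONEY = 21_000_000 * 100_000_000          # 21M BTC in satoshis

def wrap_i64(x):
    """Emulate two's-complement wraparound of a signed 64-bit int."""
    x &= (1 << 64) - 1
    return x - (1 << 64) if x >= (1 << 63) else x

def naive_inputs_cover_outputs(input_value, outputs):
    total = 0
    for v in outputs:
        total = wrap_i64(total + v)           # BUG: silently wraps
    return input_value >= total

def fixed_check(input_value, outputs):
    total = 0
    for v in outputs:
        if not (0 <= v <= MAX_MONEY):         # range-check each output...
            return False
        total += v                            # (Python ints never wrap)
        if total > MAX_MONEY:                 # ...and the running total
            return False
    return input_value >= total

# Roughly the values from the real transaction: two outputs of
# ~92 billion BTC each, funded by an ordinary ~50 BTC input.
huge = 9_223_372_036_854_277_039
print(naive_inputs_cover_outputs(50 * 100_000_000, [huge, huge]))  # True!
print(fixed_check(50 * 100_000_000, [huge, huge]))                 # False
```

Whether a real C++ compiler wraps, saturates, or traps on that addition is exactly the undefined behaviour point made above; the wrapping shown here is just one of the permitted outcomes.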

The "problem" in the case of Ethereum was that it worked exactly as designed and advertised. People with effective control didn't like the result, and edited the ledger to return funds to themselves. They manipulated the markets by demanding trading halts in secret (the logs of which have subsequently been leaked). When a competing chain was formed by users who disagreed with the ethics, they attacked it technically and economically.  They personally benefited from this change, and in subsequent similar cases of flawed smart contracts-- including ones losing much more in dollar value-- they declined to intervene (and opposed intervention) when their own funds weren't at risk.

I don't think it's remotely comparable:

Overriding the system (editing the ledger) vs Within the system (adding hashpower to a branch without the bad transaction)
Incompatible with prior software vs compatible with prior software (as a result of the above)
Clearly correct function (Ethereum) vs Objectively and indisputably incorrect function (Bitcoin)
Ethereum change just edited the ledger and made no fix to the system  vs Bitcoin change fixed the system without editing the ledger and the ledger fixed itself.
Personal profit vs No particular personal gains
Subsequently didn't aid in identical cases where eth admins had no losses vs no subsequent cases in Bitcoin (even though there were plenty of cases where bitcoin figures had losses)
Backroom dealing and pressure vs transparent communication
Acting to destroy people's ability to disagree vs no one disagreeing

If you want to look at a similar event in Bitcoin's history-- go look at MTGox losing the customers' funds.  No one (at least no one serious / high profile) even suggested editing the ledger to "fix" that-- even though many well known Bitcoin personalities would theoretically have stood to gain *enormously* if such a thing were done.

Quote
Why do people (including highly ranked Bitcointalk members) think of Ethereum Classic as the ‘real Ethereum’?
I haven't seen that.  Because Ethereum is massively premined (a windfall they're currently in the process of locking in by reducing the issuance again and then making it go exclusively to existing large holders), it's a centrally controlled system, as was aptly demonstrated by the event in question.  It is whatever its scamming owners want it to be, as you can see.
403  Bitcoin / Bitcoin Discussion / Re: BitClub Network - The $722 million+ Bitcoin mining Ponzi Scheme on: January 17, 2021, 06:37:14 AM
It was hard to convince people that it wasn't legit when so many people were promoting it, as a result of promotional deals.

Maybe some good resources here: https://behindmlm.com/companies/bitclub-network/bitclub-network-ditch-bitcoin-payments-for-bitcoin-cash/

If you want to recover anything, however, you'll probably have to chase down all the involved parties, since the principals seem to have spent or hidden the funds well.
404  Bitcoin / Bitcoin Technical Support / Re: Bitcoin Core 0.21.0 no incoming peers over Tor on: January 16, 2021, 07:30:15 PM
It may just take a bit for the existing v3 peers to find out about you and bother connecting.

I don't think hacking to downgrade to v2 makes sense-- the tor network will be completely deactivating v2 in a couple months, so anything that hasn't migrated will just suddenly stop working.

Your node will work fine without any inbounds at all, and as more nodes switch over it'll be good that you're there providing capacity.

It's also likely the case that many of your prior connections were spies.  ... the spy companies tend to be really lazy and bad at their job (small blessings...), so I wouldn't be shocked if they didn't move off v2 until months after it was totally gone. :)

Quote
There shouldn't be a problem if you're specifically connecting to that specific node using addnode/connect as long as your Tor version supports it, regardless of Core version. It doesn't check what kind of onion address type it is.
I'm pretty doubtful that will work.
405  Bitcoin / Development & Technical Discussion / Re: compact block relay: why not use Golomb Coded sets for encoding of shortids? on: January 07, 2021, 02:29:16 PM
Quote
In bip 152 compact block relay is described, which consists of a list of 6-byte shortids: https://github.com/bitcoin/bips/blob/master/bip-0152.mediawiki

In bip 158 Golomb-Coded Sets are described, which sound to me like the ideal data structure for these IDs. I wonder if it would be beneficial to encode & transmit a golomb-coded set for the shortids instead of the fixed 6 bytes? It should reduce the size of the transmitted data a bit, at the cost of a bit more processing power.

The data being sent isn't a set!  The order is needed.  The order could be guessed and a correction to the guess sent, but that's fairly complex.

GCS's savings come mostly from not encoding the permutation of elements. (Some savings comes from non-duplication, but when the 2^keybits is large compared to the number of entries, non-duplication saves very little.)
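A toy Golomb-Rice set coder (an editor's sketch in the spirit of BIP 158, not its exact parameters) makes the point concrete: the encoder sorts the elements and codes the *gaps* between them, which is exactly why the original order cannot be recovered.

```python
# Sketch of Golomb-Rice set coding in the spirit of BIP 158 (not its
# exact parameters).  Elements are sorted and delta-coded, so the
# permutation is discarded -- fine for a membership filter, useless for
# compact blocks' ORDERED shortid list.

def golomb_rice_encode(values, p):
    """Sort, delta-code, then code each delta as unary(q) + p-bit r."""
    bits = []
    prev = 0
    for v in sorted(values):
        delta = v - prev
        prev = v
        q, r = delta >> p, delta & ((1 << p) - 1)
        bits.extend([1] * q + [0])                               # unary quotient
        bits.extend((r >> i) & 1 for i in range(p - 1, -1, -1))  # remainder, MSB first
    return bits

def golomb_rice_decode(bits, count, p):
    out, pos, prev = [], 0, 0
    for _ in range(count):
        q = 0
        while bits[pos] == 1:
            q += 1
            pos += 1
        pos += 1                      # skip the terminating 0
        r = 0
        for _ in range(p):
            r = (r << 1) | bits[pos]
            pos += 1
        prev += (q << p) | r
        out.append(prev)
    return out

ids = [912, 7, 400, 1055, 233]
enc = golomb_rice_encode(ids, p=8)
dec = golomb_rice_decode(enc, len(ids), p=8)
# The *set* round-trips, but it comes back sorted: the permutation is gone.
assert dec == sorted(ids) and dec != ids
```

That lost permutation is the information a compact block must convey, which is why GCS isn't a drop-in replacement for the shortid list.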

There are ways to make compact blocks smaller, much smaller in fact-- e.g. I had a prototype implementation that used minisketch which made typical compact blocks about 800 bytes.  But there seems to be little reason to bother further reducing the size right now.

(This also required the above-mentioned guess-and-correct ordering-- in my case I didn't implement correction and only supported it on blocks where the guess was 100% correct: ones where the miner didn't use prioritization, which is most of them.)

Probably more important would be techniques from fibre that let blocks propagate unconditionally without round trips at all even when some txn are missing or some packets are lost.
406  Bitcoin / Development & Technical Discussion / Re: Transaction cut-through on: January 06, 2021, 06:34:50 PM
I believe the size of that is roughly the same as using a transaction-wide aggregate in the transactions-- or at least extremely close.  (+/- details about how the transaction was serialized)
407  Bitcoin / Development & Technical Discussion / Re: Will Bitcoin ever be as fungible as Monero? on: January 04, 2021, 05:45:03 PM
Dash is just an outright scam.  Coinjoin was invented on Bitcoin and doesn't need any protocol features.  All the dash authors did was take something you could already do on Bitcoin and use it as a marketing sales point to dump an instamine on the unsuspecting public.  At least they stopped using my name to promote it (though there are a couple other shitty altcoin scams trying that game currently).

Recently they decided to be honest about it, when it was in their interest to do so.

On topic,  I expect bitcoin will continue to become more private over time as it becomes possible to do so without severe trade-off for those who are less interested in privacy.
408  Bitcoin / Development & Technical Discussion / Re: On bitcoin's very long term future without miner rewards on: January 03, 2021, 10:25:46 PM
Quote
Wallets supporting this will create transactions that can only be processed on top of the current block, severely undermining the chances of successful “fee-sniping”.
Miners can't violate that, it's physically impossible.  If they break the locktime rule their blocks are no longer blocks and get ignored. Even if 100% of previously active miners do so, that just means that they stopped mining.  It's no different from a technical perspective than if they all decided to award themselves an additional 100 BTC per block.

The anti-snipe protection isn't especially powerful, but it's just another tool in the set... it amplifies the incentive protections that exist from the risk of orphaning vs the small marginal gain that exists in the presence of backlog.
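A minimal sketch of why a miner "can't violate" a height-based locktime (an editor's simplification; real consensus also covers time-based locktimes and BIP 68 sequence locks): finality is a validity rule every node enforces, so a block containing a not-yet-final transaction simply isn't a block.

```python
# Simplified height-based nLockTime rule: a block containing a
# not-yet-final transaction is invalid and ignored by every node,
# no matter how much hashpower produced it.

def tx_is_final(tx_locktime, block_height):
    # A height-locked tx is final only in blocks *past* its locktime.
    return tx_locktime < block_height

def block_is_valid(block_height, tx_locktimes):
    return all(tx_is_final(lt, block_height) for lt in tx_locktimes)

# Anti-fee-sniping wallet behaviour: set locktime = current tip height,
# so the tx can confirm in the next block but cannot be pulled back
# into a remined copy of the current tip.
tip = 800_000
wallet_tx = tip                                # nLockTime = tip height

print(block_is_valid(tip + 1, [wallet_tx]))    # True: next block is fine
print(block_is_valid(tip,     [wallet_tx]))    # False: sniped tip invalid
```

A fee-sniping miner remining the tip at the same height can't collect such a transaction's fees without producing an invalid block.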
409  Bitcoin / Development & Technical Discussion / Re: On bitcoin's very long term future without miner rewards on: January 03, 2021, 11:20:00 AM
Quote
I honestly couldn't understand what difference it makes on the matter whether block size is limited or not.
Because when you have mined a full block and there is a backlog, the difference in income between mining the next block (all full of pending transactions) and going backwards and remining the prior block to take its fees is small (and comes at a cost of decreased chance of eventually being in the longest chain).  This is particularly true because new transactions that are arriving cannot be included in the prior block if it is remined.

Quote
Regarding the countermeasure, a miner could just use a modified code for the node to get rid of it. It's a countermeasure only on the surface.
No, if a miner violates anti-fee-sniping their block is invalid and will be painlessly ignored by the network.
410  Bitcoin / Development & Technical Discussion / Re: On bitcoin's very long term future without miner rewards on: January 03, 2021, 09:37:35 AM
The paper is mistaken, because it analyses an abstraction which is too different from the real system. In particular it disregards that the blocksize is limited, which is a factor known to address this instability since at least 2011. (There are other countermeasures as well, such as locktime-based anti-fee-sniping.)

https://medium.com/@bergealex4/bitcoin-is-unstable-without-the-block-size-size-limit-70db07070a54

Quote
Do you think it will fork in other versions where there is always a tail reward like in other cryptocurrencies?
Absolutely not.

Quote
a hard fork which could happen if enough miners want
When it comes to hard forks what miners want is completely and utterly irrelevant. The consensus rules the users use are what decide who is or isn't a miner.  If someone violates the rules enforced individually and autonomously by the users, then they just aren't a miner anymore; they might as well be sending viagra spam instead of blocks for all the users' nodes would care about them.

Quote
I'll probably be dead before they will actually become tangible issues,
I wouldn't be so quick to assume that. The subsidy decline is geometric and will become small long before it's zero.  About 10% of miner income comes from fees today.  And an income-stabilizing backlog is no longer a theory but a practical reality.
411  Bitcoin / Development & Technical Discussion / Re: Is there any research on different key-value DBs suitable for bitcoin? on: January 02, 2021, 11:37:29 PM
Quote
While the UTXO feels very "unnatural" in comparison, as you're basically manually maintaining an index for a very specific question, and without some sort of "structural sharing set" your utxo is not going to efficiently support re-orgs.
Reorgs of more than a couple blocks are unobservably rare, so I don't think it matters if they're linear time in depth.  For short reorgs of a couple blocks, one can just keep a separate dbcache per block in memory, and only merge/flush them once buried.  Doing this is extremely fast.

Somewhere I had a patch to do that for reorgs of up to depth 2, but especially post-compact-blocks, reorgs are so rare that no one seemed very interested.  It's also the case that reloading the evicted mempool txn ends up taking more time in a reorg than anything else (though at least that gets deferred until after the reorg is completed).

Going further to generally support reorgs as a native operation has a lot of overhead.
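The "separate dbcache per block" idea can be sketched as layered overlay maps (an editor's illustration, not the actual patch): each unburied block's UTXO changes live in their own overlay, a shallow reorg just drops the top overlays, and an overlay is merged into the base once its block is buried deep enough.

```python
# Hypothetical sketch of per-block UTXO cache layers: shallow reorgs
# drop the newest overlay; buried blocks get merged into the base.

TOMBSTONE = object()          # marks a spent (deleted) entry in an overlay

class LayeredUtxoCache:
    def __init__(self):
        self.base = {}        # flushed / buried state
        self.layers = []      # one dict of changes per unburied block

    def connect_block(self, added, spent):
        layer = dict(added)
        for k in spent:
            layer[k] = TOMBSTONE
        self.layers.append(layer)

    def disconnect_tip(self):             # reorg: just drop the newest layer
        self.layers.pop()

    def bury_oldest(self):                # merge once reorg-safe
        layer = self.layers.pop(0)
        for k, v in layer.items():
            if v is TOMBSTONE:
                self.base.pop(k, None)
            else:
                self.base[k] = v

    def get(self, k):                     # newest layer wins
        for layer in reversed(self.layers):
            if k in layer:
                v = layer[k]
                return None if v is TOMBSTONE else v
        return self.base.get(k)

cache = LayeredUtxoCache()
cache.connect_block({"txA:0": 50}, spent=[])
cache.connect_block({"txB:0": 30}, spent=["txA:0"])    # block 2 spends txA:0
assert cache.get("txA:0") is None
cache.disconnect_tip()                                 # 1-block reorg
assert cache.get("txA:0") == 50 and cache.get("txB:0") is None
cache.bury_oldest()                                    # block 1 now buried
assert cache.base == {"txA:0": 50}
```

Undoing a block is O(1) here; the cost moves to lookups, which must scan a handful of small overlays, which is cheap when only a couple of blocks are kept unburied.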

Quote
A bit of a tangent, but bitcoin-core's real problem is it doesn't come with a jump-started utxo set. It's kind of insane that everyone is supposed to build it themselves when, if each release came with a new utxo set hash hardcoded in, it'd be easy for dozens of people to verify it and then people would actually be able to use/recommend bitcoin-core instead of electrum or w/e
After years of trying to make progress on support for an assume-valid style utxo set, and finding it always sidetracked by complex commitment structure discussions (which aren't necessary for it, and which I think make the wrong bandwidth/storage tradeoff), I've given up trying.  But I agree with you.
412  Bitcoin / Development & Technical Discussion / Re: Transaction cut-through on: January 02, 2021, 11:26:47 PM
Thanks for the figures.

Quote
Bitcoin tx size x ~ 400 bytes (or is it closer to 500?)
It's closer to 280 with a more compact serialization, which can be done with no consensus changes (see e.g. the blockstream sat codebase for an example implementation)

Quote
MW output + rangeproof size z ~ 700 bytes (600 with BP+)
Ah, my calculations would have been assuming 3kb or so, pre-BP range-proofs.
413  Bitcoin / Development & Technical Discussion / Re: Anyone have more information on the status of CoinPrune on: January 02, 2021, 11:27:58 AM
Quote
In short, it doesn't solve anything. I don't believe there's a security trade-off, because you're giving up ALL of it.
Well, they can validate going forward ... so you could argue that it's a middle-ground security because it's a single point-in-time exposure... assuming that the usage isn't what the ethereum usage has become:  In ethereum it's normal for nodes to get stuck and fall behind, and people automatically have them fast sync in response (there is even a setting in standard node software to do this!).  So, we've seen in practice that that middle ground doesn't appear to be stable.
414  Bitcoin / Development & Technical Discussion / Re: Transaction cut-through on: January 02, 2021, 11:21:37 AM
Quote
although, unlike what you say, it is more efficient and especially more scalable than the current bitcoin synchronization process. I was just reminding of the built-in zero-sum proof  property used there, proposing a smart implementation of UTXO commitment that proves the expected balance in each state of the machine hence reducing the security risks. It is possible without borrowing any further idea from Mimblewimble.
Sorry, perhaps you swallowed some altcoin scammers' lies, but it just isn't so. The "built-in zero-sum proof" isn't just some bolt-on property, it's fundamental to the system and it has a substantial cost. To get it you must preserve a kernel for every transaction, and for every unspent output you have to preserve a pedersen commitment and a cryptographic range proof.  It has the same asymptotic scaling as Bitcoin.  Bitcoin's communications cost is proportional to N_txn*x while the Mimblewimble zero-sum property is proportional to N_txn*y + N_utxo*z, where x is somewhat larger than y, and z is MUCH larger than x.

The result is that (as of the last time I ran the numbers) the data needed to be transferred to sync (if bitcoin had used this all along) was larger with MW.  Eventually, once the history grew enough relative to the utxo set, MW could potentially become smaller, but even with an infinite history-to-utxo-set size ratio, the difference between them would just be a small constant factor.
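A back-of-envelope version of that comparison (an editor's illustration: x ~ 280 B/tx and z ~ 700 B/output with Bulletproofs are figures quoted elsewhere in this thread, ~3 kB pre-Bulletproofs; y ~ 100 B per kernel and the tx/utxo counts are assumptions, not measurements):

```python
# Illustrative sync-cost arithmetic for Bitcoin vs a Mimblewimble-style
# chain.  All sizes/counts are assumptions for illustration only.

x, y = 280, 100                       # bytes per tx / per MW kernel

def bitcoin_sync_bytes(n_txn):
    return n_txn * x                  # full-history sync

def mw_sync_bytes(n_txn, n_utxo, z):
    return n_txn * y + n_utxo * z     # all kernels + utxo set w/ proofs

n_txn, n_utxo = 600_000_000, 68_000_000   # assumed history / utxo counts

btc      = bitcoin_sync_bytes(n_txn)
mw_prebp = mw_sync_bytes(n_txn, n_utxo, z=3000)   # pre-Bulletproof proofs
mw_bp    = mw_sync_bytes(n_txn, n_utxo, z=700)    # Bulletproof-era proofs

print(f"Bitcoin full history: {btc / 1e9:.0f} GB")       # 168 GB
print(f"MW, 3 kB proofs:      {mw_prebp / 1e9:.0f} GB")  # 264 GB -- larger!
print(f"MW, 700 B proofs:     {mw_bp / 1e9:.0f} GB")     # 108 GB
```

With the older 3 kB range proofs MW comes out clearly larger, as the post says; shrinking z moves the crossover, but even as history grows without bound the advantage tends only toward the constant ratio x/y. Same asymptotics, different constants.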
415  Bitcoin / Development & Technical Discussion / Re: Is there any research on different key-value DBs suitable for bitcoin? on: January 02, 2021, 11:12:51 AM
Quote
Here's the thing though. The current implementation of CDBBatch and CDBWrapper (which you have to adapt after you replace the database) only understand key and value data, there is no concept of primary keys or tables or any other SQL-specific stuff in it. And since Key and Value types are exposed in an API, it is impossible to drop-in an SQL database in it without changing the rest of the codebase that uses the database wrapper.

The only thing the software uses the "database" for is as a disk-backed hashtable.  Thus the only interface the software needs, or that even remotely makes sense to use, is a key/value interface. There wouldn't be any "primary keys or tables or any other SQL-specific stuff in it" because the software has no use for anything like that, and the database wrapper can use whatever is suitable for the backend.

You can backend that interface with anything you like, and other people have substituted other things into it (including sql databases).  If you google around you can find some of these failed experiments.  (Failed because the performance was extremely poor)
416  Bitcoin / Development & Technical Discussion / Re: Anyone have more information on the status of CoinPrune on: January 02, 2021, 07:16:12 AM
Quote
OP, does CoinPrune's proposal raise the issue of scaling the network out to increase the number of full nodes that validate? From the name "CoinPrune", I believe they only raised a solution for the storage issue, not the bandwidth issue.

Not the OP, but I read the paper. It's like the "fast sync" almost all ethereum nodes use: it is intended to allow replacing full nodes with nodes that trust that the majority of coinprune-participating miners were honest about the correct utxo set.

Assuming you were okay with that security property, it would remove most of the bandwidth required for synchronization, but not the bandwidth required at runtime. ... though if you're okay with a very strong assumption on the honesty of miners, it opens the question of why not just use an spv client. :)
417  Bitcoin / Development & Technical Discussion / Re: Transaction cut-through on: January 02, 2021, 07:06:54 AM
Quote
Unconfirmed transactions could be sent (even privately) between parties and whenever it is possible they are summarized by the original senders.
Yes, that is literally what this thread was about.

Quote
bitcoin implementing such ideas without disrupting the whole technology is impossible as long as we are obsessed with the infamous "Do not trust, verify!" slogan.

This is an empty inflammatory statement.

Quote
Let's get rid of this slogan for a moment:
Properly implementing UTXO commitment in bitcoin, we can prove that the UTXO set we are committing to is:

1) Provably immune against illegal inflation (just like Mimblewimble).

2) For any given unspent output (live coins) either there is witness data already available or for a long enough period of time such data has been available and the network has been actively confirming it ever and ever.

I think you've failed to understand the property being provided there.   Mimblewimble requires a considerable amount of non-prunable data: a kernel for every transaction.  Given these kernels, the current utxo entries and their range proofs (which have a size a hundred times larger than the comparable data in Bitcoin), one can verify for oneself that the utxo set was one authorized by the creators of the kernels.  There is no sketchy hand-waving "long enough" assumption breaking the security properties (assuming that all activities were simple spends).

At the time the Mimblewimble concept was published, the amount of data required to sync bitcoin without the history, had it used Mimblewimble, would have been somewhat more than syncing bitcoin's full history without it...  it didn't substantially lower the resource costs, but rather had the potential for improved privacy without substantially increasing the resources.  (Asymptotically, MW could be a small constant factor smaller... but the constant terms mean the history has to be *very* big before it's smaller at all, and even then the difference is just the ratio of kernel size to a full transaction size, so like a factor of 5 vs bitcoin transactions.)

418  Bitcoin / Development & Technical Discussion / Re: Is there any research on different key-value DBs suitable for bitcoin? on: January 02, 2021, 05:21:56 AM
[I'm ignoring the wallet sidebar, because it's offtopic for the question AFAIK, and the answer depends a LOT more about what you're doing.]

I wonder if this happens in other fields.

Are there forums of people that don't have a lot of direct experience with practical plumbing plus people that have only done plumbing as part of nuclear reactors, theorizing on if solid gold fittings are superior to hand carved wooden pipe fittings?  :)

From the perspective of a Bitcoin node usage of the UTXO:

The UTXO set is logically a set.  There is no important natural order to the elements.  For a Bitcoin node the only operations are to insert an element (create an output, with replacement of duplicates), to look up an item by key, or to delete an item by key (spend an output).   For that query load, the logical data structure is some kind of hash table, since those can give asymptotic O(1) performance.  For Bitcoin's use this data structure needs to be persisted on disk (both because it may be bigger than ram, and because people don't want to resync on restart) and be crash recoverable.

Previously-- until 2017-- (and still in some less sophisticated software), it also needed to support transactional updates, but we eliminated that requirement in Bitcoin Core by using the blockchain itself as a write-ahead log:  Bitcoind keeps a record of the height as of which all utxo changes were last completely flushed to the database, and at startup, if that block isn't the node's most recent block, it just goes and reapplies all the changes since that block.  This works because insert/delete are idempotent-- there is no harm in deleting an entry already deleted or inserting an entry that is already there.
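The "blockchain as a write-ahead log" scheme can be sketched in a few lines (an editor's illustration with made-up block data, not Bitcoin Core's code): the store records the height through which changes were fully flushed, and recovery simply replays every block past it, which is safe precisely because insert and delete are idempotent.

```python
# Sketch of crash recovery via idempotent replay: re-applying a block
# that was already (partially) applied changes nothing, so the chain
# itself serves as the write-ahead log.

class UtxoDB:
    def __init__(self):
        self.kv = {}
        self.flushed_height = 0       # height fully flushed to "disk"

    def insert(self, key, value):
        self.kv[key] = value          # re-inserting an existing key: harmless

    def delete(self, key):
        self.kv.pop(key, None)        # deleting a missing key: harmless

def apply_block(db, block):
    for key in block["spends"]:
        db.delete(key)
    for key, value in block["creates"].items():
        db.insert(key, value)

def recover(db, chain):
    # Replay everything after the last fully-flushed height.
    for height, block in enumerate(chain, start=1):
        if height > db.flushed_height:
            apply_block(db, block)

chain = [
    {"creates": {"a:0": 50}, "spends": []},
    {"creates": {"b:0": 30}, "spends": ["a:0"]},
]
db = UtxoDB()
recover(db, chain)            # initial sync
recover(db, chain)            # simulated crash + full replay: same result
assert db.kv == {"b:0": 30}
```

No transactional database machinery is needed: a crash mid-flush just means a little redundant replay at startup.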

Now, because all a node needs is a persistent hash table, in some sense you could use practically any kind of database to support it, or even just a bare file system.  But validating the chain requires approximately three billion operations (currently) against a current database of sixty-eight million (tiny) entries.  As a result, most things people discuss aren't within two orders of magnitude of acceptable performance-- e.g. if it communicates over a network socket with a round-trip (thus a context switch) per operation, that alone probably blows your entire performance budget.  LevelDB is among the fastest of the very simple in-process key-value stores; many things which claim to be faster are actually not-- at least not against the bitcoin-like workload.  (There are many leveldb clones which meet this description, but the only way to know for sure is to try something out.)

Because the usage is so simple, the particular choice is not very important so long as it is extremely fast.  Among all extremely fast choices the performance doesn't tend to differ that much because (1) their performance is driven by stuff like memory access speed, which is the same among them, and (2) the majority of the load is actually removed from the database in any case:  There is an in-memory buffer (called the dbcache) that prevents most of the utxo entries from ever touching leveldb:  It turns out that most utxo created are spent quickly, so that when they are, they can be settled in a simple non-persistent hash table without being written to the database.
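The dbcache effect described there can be shown in a tiny sketch (an editor's illustration, far simpler than Bitcoin Core's CCoinsViewCache): entries created and then spent between flushes annihilate in the in-memory map and never touch the backing store at all.

```python
# Sketch of a dbcache-style write buffer: "fresh" entries (created since
# the last flush) that get spent simply vanish from the cache and never
# reach the slow persistent store.

class CachedUtxoView:
    def __init__(self, backing):
        self.backing = backing        # slow persistent store (a dict here)
        self.cache = {}               # key -> (value, fresh)

    def add(self, key, value):
        # fresh = created since the last flush; the backing store has
        # never heard of this entry.
        self.cache[key] = (value, True)

    def spend(self, key):
        if key in self.cache:
            _, fresh = self.cache.pop(key)
            if not fresh:                        # exists on disk:
                self.cache[key] = (None, False)  # leave a deletion marker
        else:
            self.cache[key] = (None, False)

    def flush(self):
        writes = 0
        for key, (value, _) in self.cache.items():
            writes += 1
            if value is None:
                self.backing.pop(key, None)
            else:
                self.backing[key] = value
        self.cache.clear()
        return writes

db = {}
view = CachedUtxoView(db)
view.add("short:0", 10)
view.spend("short:0")         # created and spent between flushes...
view.add("long:0", 25)
assert view.flush() == 1      # ...so only "long:0" ever reaches the store
assert db == {"long:0": 25}
```

Since most outputs are short-lived, most of the workload settles entirely in memory like "short:0" does here, which is why the choice of on-disk store matters less than people assume.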

There are plenty of ways that what Bitcoind does could be improved--  for example, changing dbcache to be an open hash-table to avoid memory allocations and improve locality would probably be a big win,  as would creating an incremental flushing facility to keep flushing constantly running in the background.

But no amount of "just swap out leveldb with FooDb" is likely to make an improvement.  For most values of Foo I've seen suggested, it's so much slower that it kills performance. You're welcome to try-- the software abstracts it so that it's pretty easy to drop something else in; in the past people have, and given up after their sqlite or whatever hadn't synced the first couple years of blocks in a *week* of runtime.  For any highly tuned system, speculating about the performance without getting in and understanding what's already there is unlikely to yield big improvements.

Personally I think modular buzzword thinking is a cancer of the IT industry:  "My car won't start, any idea whats wrong?"  "Have you tried MongoDB? I heard it has WebScale!".

Now if you want to talk about some block explorer or chainanalysis like usage then the needs are entirely different.  But the way to go about answering questions there isn't to pick some DBMS off a shelf,  it's to first think about what data it will need to handle and what queries will need to be performed against it, with what kind of performance...  and then that will inform the kinds of systems which could be considered.
419  Bitcoin / Development & Technical Discussion / Re: Anyone have more information on the status of CoinPrune on: January 02, 2021, 04:01:21 AM
Zero hits for it in my email, on the Bitcoin mailing lists, or on BCT, ... seems like the authors didn't care if the Bitcoin community ever saw it.

The idea is pretty similar to other committed-utxo models.  It takes the form of periodically electing a block to have its utxo set snapshotted (at considerable cost to nodes, hopefully offset by being infrequent), which (presumably after some delay) is included in unverified miner commitments.  It's not clear to me what advantage the paper's commitment structure has over proposals that use a rolling commitment which can be affirmed (and validated) at every block very cheaply, plus a non-consensus-normative decision of what snapshots to keep.

The paper provides no meaningful security analysis and asserts that it maintains security, but as far as I can determine it does not: It proposes something with SPV-like security:  Users of it blindly trust miners to not, say, collectively reassign all the unspent coins from the first year to themselves.  It assumes "an honest majority of CoinPrune miners" but is silent on why the majority would be honest or what would happen if they're not.

As far as I can tell it's a somewhat worse-than-SPV security model, because miners' blocks are not rejected by full nodes for committing to faithless snapshots, which means it lacks even the limited economic incentive argument for the security of SPV.
420  Bitcoin / Development & Technical Discussion / Re: Transaction cut-through on: January 01, 2021, 05:38:13 PM
You misunderstand what 'prunable' means.  It just means you don't have to keep it around after you verified it. It doesn't mean you don't need to verify it.  This thread has zero interaction with utxo commitment proposals.