Bitcoin Forum
Author Topic: Bitcoin Theory (Byzantine Generals and Beyond)  (Read 21965 times)
socrates1024 (OP) (Andrew Miller)
August 09, 2012, 09:20:21 PM
Last edit: September 16, 2012, 01:29:55 AM by socrates1024
#1

It's common knowledge that Bitcoin solves (or purports to solve) the Byzantine Generals problem.

This problem kicked off thirty years of research in distributed systems. It's a fascinating field, and also a very subtle one. Variations of the problem come in a handful of basic flavors, but what's possible and what's not depends heavily on the assumptions you make and on what you require from the protocol. My goals in this post are to describe the basic components of the problem, point out some of my favorite reading materials, and hopefully begin a discussion about which assumptions and requirements are appropriate for Bitcoin (or altchains). Results may be gathered into a wiki. I also want to make an overall argument: although problems of this nature have been studied for decades, the academic field has not yet settled, and Bitcoin seems to reveal a couple of blind spots.

Quote
Imagine two pedestrians trying to pass each other in a hallway, where they need to reach consensus on whether to pass on the left or on the right. Each initially has a preference for which way they want to go (e.g. right for Americans and left for Britons), and from the FLP result we know that if the pedestrians are deterministic and have their timing controlled by an adversary, each will give up and adopt the other pedestrian's preference at exactly the same time. The results are both embarrassing (starvation) and implausible (doesn't happen in real life). A solution is for one or both of the pedestrians to announce that they are choosing a new preference by flipping a coin. This gives a 50% probability that both will agree with each other, even if only one of the pedestrians flips a coin. Thus after a constant number of coin-flips, both pedestrians agree, and (assuming they can detect this agreement), they will have solved consensus.
James Aspnes, Yale, CS465 Course notes
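
To make the coin-flip argument concrete, here is a toy simulation (an illustration of the story above, not taken from the course notes; all parameters are arbitrary) in which both pedestrians re-choose by coin flip each round. Each round produces agreement with probability 1/2, so agreement takes two rounds on average:
Code:
// Toy simulation of randomized symmetry-breaking between two pedestrians.
// Hypothetical illustration only; seed and trial count are arbitrary.
#include <cstdio>
#include <random>

int main() {
    std::mt19937 rng(42);
    std::bernoulli_distribution coin(0.5);

    const int trials = 100000;
    long total_rounds = 0;
    for (int t = 0; t < trials; ++t) {
        int a = 0, b = 1;          // initial preferences disagree (left vs. right)
        int rounds = 0;
        while (a != b) {           // a deterministic pair could starve forever...
            a = coin(rng);         // ...but coin flips break the symmetry
            b = coin(rng);
            ++rounds;
        }
        total_rounds += rounds;
    }
    std::printf("average rounds to agreement: %.2f\n",   // prints ~2.00
                (double)total_rounds / trials);
}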

Quote
Although there are 100 different results, there aren’t 100 different ideas... The basic idea upon which all the 100 impossibility proofs in distributed computing are based [is] the limitation imposed by local knowledge.
A Hundred Impossibility Proofs for Distributed Computing. Nancy Lynch. Principles of Distributed Computing, 1989.

--------

All distributed systems face two essential challenges: latency, and failures.

  • Failures: The total number of processes (also called participants, nodes, or units of total hashpower) is denoted N. Some portion f of these might fail. Kinds of failure worth distinguishing: permanently crashing, stopping-and-restarting, and Byzantine - which means behaving maliciously in the worst possible way. The resilience of a system is usually described by a bound like N = 2f+1, which is the same thing we mean by 51%.
        Most previous work has assumed that N is known, that every process has a unique ID, and that everyone knows all the IDs from the start. Recent research has looked at anonymous networks and large-scale dynamic networks, showing that these differences are nontrivial. Some of this work approaches nearer and nearer to Bitcoin.
  • Latency: There are three popular timing models, which describe the uncertain rate of message propagation in a network. The simplest case is called "synchronous communications": every message arrives within a certain known time limit, Δ (delta). Any message that takes longer than this has “timed out” and can be counted as a failure. Protocols for this model often proceed in discrete rounds. The part of Bitcoin that rejects blocks based on invalid timestamps, for example, is an indication of synchrony assumptions. The hardest model is “asynchronous,” where packets can take longer and longer and longer to arrive.
        A medium-difficulty (and mostly realistic) scenario is called “partially synchronous”: temporary partitions may occur, but they are eventually resolved. In this model you don’t use any explicit time parameters; a maximum time delay Δ exists, but it is unknown. Cynthia Dwork, Nancy Lynch, and Larry Stockmeyer won the Dijkstra prize for their 1988 paper on this model, which turned out to be very useful. If you’ve ever dreamed of a Bitcoin mining colony on Mars, or wondered what would happen if the transatlantic fiberoptic backbone were suddenly severed, you are probably thinking about the partially synchronous model. The best possible results in this model are of the 67% (3f+1) variety, rather than the 51% (2f+1) variety: if the 67% of honest participants could make progress while a partition split them in half for an unknown duration, then an attacker with 33% could pretend (simulate) to be the other side of a partition and disrupt them at any time.

  • Variations of the problem statement: The original Byzantine Generals problem (Lamport, Shostak, and Pease, 1982) featured asynchronous communications between a known set of deterministic processes, with secure channels between every pair. The requirement is that the processes eventually make an irrevocable “final answer” decision, and that when they do, they all agree. If something like the “confirmed after 6 blocks” rule were built into the Bitcoin protocol, then Bitcoin would match this problem.
         There are several aspects of Bitcoin that make the problem harder, in particular anonymous/unknown-size networks. There are also many reasonable ways to make it easier.

    • Randomness, an essential part of Bitcoin, is enough of an addition to make otherwise-impossible things possible. Randomness is a way of breaking symmetry (see the short story at the top of this post). If the puzzle difficulty is high enough, Bitcoin can break symmetry at any time scale. However, to use randomization, you must relax the problem so that either a) failure is possible, but only with probability zero, or b) failure could occur with some negligible probability.

    • Random oracles (non-invertible cryptographic hash functions with uniform random outputs).
      Hash functions are something we accept on faith, at least for the time being. It has been proven that no real hash function behaves as a perfect random oracle. On the other hand, some simpler properties (like collision-resistance) are provably attainable. I doubt anyone would argue that Bitcoin relies on untenable properties of hashes. Nonetheless, this is a reason why their use has not been widely explored in distributed systems (counterexample): from a very traditional point of view, they are too strong an assumption. Any protocol using hashes will incur at least a positive but negligible chance of failure. Many consensus protocols have assumed perfect authenticated channels, yet any instantiation of these with primitives from modern cryptography is likely to introduce a random oracle at some point. I find it interesting that the original use of hash-based puzzles was by Dwork and Naor, two of the leading figures in distributed consensus, yet they have not considered using such puzzles in consensus algorithms. Even stranger, Naor and Nissim were the first to suggest balanced Merkle trees for use in transaction systems (certificate revocation).

    • Stabilizing consensus - this is what you get when you remove the “final decision” requirement from the Byzantine Generals problem. The protocol guarantees you will eventually converge, but you can’t say for sure when convergence has happened. Dijkstra defined “self-stabilization” in 1974, and Angluin et al. adapted it to the consensus problem in 2006. The term “eventually consistent” is relevant here. This is, I think, the most appropriate problem definition for Bitcoin. We will eventually stabilize to a single agreed-upon value for each block, no matter what. But at any particular time, there is no way to rule out the possibility of a larger fork showing up. If that happens, you take the larger fork and deal with it, and the protocol continues. Any irrevocable decisions (e.g., I wait for 6 confirmations and give you a bar of gold) are made according to a user’s personal risk analysis - if they get forked, it doesn’t count as a protocol failure. (A sketch of this rule appears just after this list.)
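
Here is a minimal sketch of the stabilizing rule just described (an illustration with invented types, not actual Bitcoin code; "larger fork" stands in for Bitcoin's real most-work rule):
Code:
// Sketch of stabilizing consensus as described above. Invented types; the
// protocol layer never decides irrevocably, it only adopts larger valid forks.
#include <cstddef>
#include <string>
#include <vector>

struct Block { std::string hash; /* prev hash, transactions, ... */ };

class StabilizingNode {
    std::vector<Block> best;                     // largest valid fork seen so far
public:
    void onFork(const std::vector<Block>& fork) {
        // Protocol layer: no decision is ever final; a larger valid fork
        // simply replaces the current one and the protocol continues.
        if (fork.size() > best.size() && isValid(fork))
            best = fork;
    }
    // Client-side risk policy, NOT part of the protocol: treat a block as
    // settled once it is buried under `depth` successors (e.g. 6).
    bool settledEnoughForMe(std::size_t height, std::size_t depth = 6) const {
        return height + depth < best.size();
    }
private:
    static bool isValid(const std::vector<Block>&) { return true; } // stub
};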

---- Some final thoughts -----

Quote
Yes, I know that proof-of-work, as used in Bitcoin, is intended to give Byzantine fault tolerance, but my contention is that it doesn’t. And, furthermore, that it fails in a spectacularly inefficient way.
Ben Laurie, blathering on bitcoin.

It was this post by Ben Laurie that led me to read up on distributed systems. I suspect he’s wrong. At the very least, it’s possible to rearrange the problem statement in such a way that Bitcoin solves it. Anyone who has followed the development of this field over several decades will acknowledge that there is no single correct set of model assumptions for every application. Subtle variations have led to unexpected outcomes, for both possibility and impossibility.

It surprises me that, as of yet, no one from distributed systems has said anything about the underlying consensus mechanism of Bitcoin. (Bitcoin and Red Balloons comes close, but it focuses on a red herring in the gossip layer rather than the core.) I hope this changes soon, as distributed systems researchers and the Bitcoin community should have a lot to learn from each other. Like many areas of computer science, distributed systems theory was developed long before real networks and the internet were widespread. Many of the theoretical contributions have been dismissed in practice.

Quote
Unfortunately, Byzantine agreement requires a number of messages quadratic in the number of participants, so it is infeasible for use in synchronizing a large number of replicas.
Pond, an implementation of OceanStore

Quote
The communication overhead of Byzantine Agreement is inherently large
Server-initial agreement for general hierarchy wired/wireless networks

Therefore I think distributed systems folks would find it extremely exciting that an open source community has taken up the task of building a large-scale, high-stakes consensus network under supremely adversarial conditions. From what I’ve seen, the average Bitcoin user has an understanding of distributed systems to match that of a grad student, and could at least walk through the FLP impossibility result in terms of competing blockchain forks with equal strength.

amiller on freenode / 19G6VFcV1qZJxe3Swn28xz3F8gDKTznwEM
[my twitter] [research@umd]
I study Merkle trees, credit networks, and Byzantine Consensus algorithms.
theymos (Administrator)
August 09, 2012, 09:42:49 PM
#2

Satoshi wrote about the Byzantine Generals problem:

Quote from: satoshi
A number of Byzantine Generals each have a computer and want to attack the King's wi-fi by brute forcing the password, which they've learned is a certain number of characters in length. Once they stimulate the network to generate a packet, they must crack the password within a limited time to break in and erase the logs, lest they be discovered. They only have enough CPU power to crack it fast enough if a majority of them attack at the same time.

They don't particularly care when the attack will be, just that they agree. It has been decided that anyone who feels like it will announce an attack time, which we'll call the "plan", and whatever plan is heard first will be the official plan. The problem is that the network is not instantaneous, and if two generals announce different plans at close to the same time, some may hear one first and others hear the other first.

They use a proof-of-work chain to solve the problem. Once each general receives whatever plan he hears first, he sets his computer to solve a difficult hash-based proof-of-work problem that includes the plan in its hash. The proof-of-work is difficult enough that with all of them working at once, it's expected to take 10 minutes before one of them finds a solution and broadcasts it to the network. Once received, everyone adjusts the hash in their proof-of-work computation to include the first solution, so that when they find the next proof-of-work, it chains after the first one. If anyone was working on a different plan, they switch to this one, because its proof-of-work chain is now longer.

After about two hours, the plan should be hashed by a chain of 12 proofs-of-work. Every general, just by verifying the difficulty of the proof-of-work chain, can estimate how much parallel CPU power per hour was expended on it and see that it must have required the majority of the computers to produce in the allotted time. At the least, most of them had to have seen the plan, since the proof-of-work is proof that they worked on it. If the CPU power exhibited by the proof-of-work is sufficient to crack the password, they can safely attack at the agreed time.
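
A minimal sketch of the chaining rule in this story (illustration only: std::hash stands in for a real cryptographic hash, and every name and parameter below is invented, so none of this is secure):
Code:
// Sketch of the proof-of-work chain in Satoshi's story. std::hash is NOT
// cryptographic; it stands in for SHA-256 purely for illustration.
#include <cstdint>
#include <cstdio>
#include <functional>
#include <string>

// A proof is a nonce making H(prev || plan || nonce) fall below a target.
uint64_t hashOf(uint64_t prev, const std::string& plan, uint64_t nonce) {
    return std::hash<std::string>{}(std::to_string(prev) + plan +
                                    std::to_string(nonce));
}

uint64_t solve(uint64_t prev, const std::string& plan, uint64_t target) {
    for (uint64_t nonce = 0;; ++nonce)           // expected work ~ 2^64/target
        if (hashOf(prev, plan, nonce) < target) return nonce;
}

int main() {
    const std::string plan = "attack at dawn";   // whichever plan was heard first
    const uint64_t target = UINT64_MAX >> 16;    // difficulty knob (arbitrary)
    uint64_t prev = 0;
    for (int i = 1; i <= 12; ++i) {              // a chain of 12 proofs-of-work
        uint64_t nonce = solve(prev, plan, target);
        prev = hashOf(prev, plan, nonce);        // the next proof chains onto this
        std::printf("proof %2d found (nonce %llu)\n", i,
                    (unsigned long long)nonce);
    }
    // A verifier re-computes 12 hashes and, from the difficulty, estimates how
    // much total CPU power was spent on this one plan.
}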

1NXYoJ5xU91Jp83XfVMHwwTUyZFK64BoAD
socrates1024 (OP) (Andrew Miller)
August 09, 2012, 09:57:53 PM
Last edit: September 07, 2012, 06:16:52 AM by socrates1024
#3

Quote from: theymos
Satoshi wrote about the Byzantine Generals problem:
Quote from: satoshi
it's expected to take 10 minutes before one of them finds a solution and broadcasts it to the network

The problem Satoshi described is not quite the Byzantine Generals problem, for two reasons I mentioned in my post. First, the Byzantine Generals problem is specifically about asynchronous communications, in which you cannot assume that messages are delivered within a bounded amount of time, let alone a bounded time known a priori. This is at least half the challenge, and likely the more difficult half. Second, the original form of the problem is about deterministic protocols that are correct with absolute certainty; even most randomized algorithms are required to succeed with probability 1. The "expected" in "expected to take 10 minutes" implies something weaker still: a non-zero (though negligible) probability of failure.

The differences between these models may seem a bit pedantic, but that's what I'm hoping to discuss.

Also, the particular model described in Satoshi's problem statement (synchronous communications, known number of processes) is nearly identical to that of Exposing Computationally Challenged Byzantine Impostors (Aspnes, Jackson, and Krishnamurthy, 2005).

A good reason to take the time to be precise/pedantic about the problem statement is that it makes it possible to bring in and compare results from other work. By far the most enjoyable survey of the (first ten years of study on) the problem is A Hundred Impossibility Proofs for Distributed Computing by Nancy Lynch, which argued that much effort was wasted rehashing the same results in slightly different models that would later turn out to reduce to one another.

mollison
August 10, 2012, 08:06:16 PM
#4

I strongly suspect Bitcoin has some new, important results to contribute to the field of distributed systems. Glad to see someone else at least talking about this. If I had a lot of time on my hands, I'd try to write a paper on this, but I don't.
kjj
August 10, 2012, 08:24:18 PM
#5

I don't think that bitcoin is going to be very useful, academically speaking.  It really isn't for that.

Bitcoin is more like a "good enough" system, one intended to be used in the real world.  It produces approximations to perfection, but is never perfect.

For example, bitcoin doesn't prove, or even attempt to fully prove, that a block is final; but as more blocks pile up, we can become increasingly confident that it won't be overruled. We can never be totally sure, because every block back to the last checkpoint could in theory be replaced if a better chain is found tomorrow.

By the way, "good enough" isn't a bad thing.  Engineering is the art of understanding tolerances, and of understanding that nothing is absolute, to make things that are "good enough" for the real world.  Bitcoin is an outstanding example of engineering in that sense.

17Np17BSrpnHCZ2pgtiMNnhjnsWJ2TMqq8
I routinely ignore posters with paid advertising in their sigs.  You should too.
Pieter Wuille
August 10, 2012, 08:45:12 PM
#6

Quote from: kjj
For example, bitcoin doesn't prove, or even attempt to fully prove, that a block is final; but as more blocks pile up, we can become increasingly confident that it won't be overruled. We can never be totally sure, because every block back to the last checkpoint could in theory be replaced if a better chain is found tomorrow.

That is exactly his point. Bitcoin is based on weaker assumptions than what is commonly researched, and so it indeed cannot solve the Byzantine Generals problem. It can, however, reach something called stabilizing consensus, and that can also be researched and quantified.

I do Bitcoin stuff.
mollison
August 10, 2012, 08:50:27 PM
#7

Quote from: kjj
I don't think that bitcoin is going to be very useful, academically speaking.  It really isn't for that.

Bitcoin appears to solve certain well-studied problems in the academic field of distributed computing in a novel way. So I don't think I agree with you here.

There is a reason nobody working on digital currency (academics or otherwise) had been able to come up with a truly robust digital currency (until Bitcoin). The reason is that Bitcoin really does solve some very hard, old problems in novel ways.
kjj
August 11, 2012, 03:30:32 AM
#8

Quote from: mollison
Quote from: kjj
I don't think that bitcoin is going to be very useful, academically speaking.  It really isn't for that.
Bitcoin appears to solve certain well-studied problems in the academic field of distributed computing in a novel way. So I don't think I agree with you here.

There is a reason nobody working on digital currency (academics or otherwise) had been able to come up with a truly robust digital currency (until Bitcoin). The reason is that Bitcoin really does solve some very hard, old problems in novel ways.

What I'm getting at is that there are two ways to consider the word "solved".  Addition was solved, in the sense that it could be used in the real world, thousands of years ago.  But it wasn't solved in the academic sense until Cantor, Peano, et al. proved that it really does what everyone already knew it did, and that was not much more than a hundred years ago.

Most of bitcoin is architecture and engineering, not science.  People study buildings to understand what works and what doesn't, but a building doesn't "solve" or "prove" anything in a way that would let you stick Q.E.D. at the end of it.

socrates1024 (OP) (Andrew Miller)
August 21, 2012, 12:52:04 AM
Last edit: August 21, 2012, 06:35:54 AM by socrates1024
#9

Here's a way of describing the computational requirements in Bitcoin. This is intended to clarify and focus the technical discussion in other threads that discuss particular data structures and implementations.

Bitcoin is all about efficient validation.

There are two kinds of validation tasks in Bitcoin: transaction validation and fork validation. These are closely related, since validating transactions is at least a subcomponent of fork validation. The most important difference between the two is their relationship to the proof-of-work scheme, which affects the way we need to state the security and complexity requirements.

Stabilizing Input (Bounded UTXO Assumption)

Assumption: The set of unspent outputs at any given time is bounded by some (possibly unknown) maximum size, M. This means that the txinputs catch up with the txoutputs at some point, trailing behind by no more than M. It also implies that M bounds the maximum "fanout" of a transaction. In Bitcoin, the size is certainly bounded by M < 2.1e15, since that's the largest possible number of unique coin units (21e6 BTC times 1e8 satoshis each). This number is related to the number of bits in our hashes, since log M < 51 < 256 = k.

Transaction Validation

  • Assumptions: you're given a current block head to start with
  • Correctness requirement: deterministic accept/reject
  • Supporting data structure: the UTXO-set
  • Key performance quality: worst-case cost of a decision (as a function of the size of the UTXO-set)
  • Examples:  
    • clients validate transactions before deciding to relay them
    • miners validate new transactions before including them in work

Transactions are created by anonymous users of the network. Worst-case cost for rejecting an invalid transaction is important for DDoS resistance. On the other hand, worst-case cost for accepting a valid transaction is also important so that transactions propagate quickly.

The best asymptotic result so far is a balanced Merkle tree, which gets average- and worst-case O(log M) validation effort per txinput, and requires O(M) of untrusted storage (for example a DHT, a cloud-storage provider, or a local disk). Other strategies (a sketch of the basic procedure follows this list) include:
  • UltraPrune (without Merkle hashes), which requires O(M) of trusted storage (a local disk)
  • A hash trie, which has O(k) worst-case cost (k > log M is a security assumption about hash functions) but matches O(log M) in the average case
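
For concreteness, here is the sketch promised above (an illustration with invented types: std::set, itself a balanced binary tree, stands in for the balanced Merkle tree; the per-node hashes that would let the O(M) storage be untrusted are omitted, as are signature checks):
Code:
// Sketch of transaction validation against the UTXO set. std::set (a balanced
// tree) stands in for the balanced Merkle tree discussed above.
#include <set>
#include <string>
#include <utility>
#include <vector>

using Outpoint = std::pair<std::string, unsigned>;  // (txid, output index)

struct Tx {
    std::string id;
    std::vector<Outpoint> inputs;   // previously-unspent outputs being consumed
    unsigned numOutputs;            // new outputs being created
};

// Deterministic accept/reject; O(log M) per txinput against a UTXO set of size M.
bool validate(const Tx& tx, std::set<Outpoint>& utxo) {
    for (const auto& in : tx.inputs)
        if (utxo.count(in) == 0)    // missing or already spent: reject
            return false;
    for (const auto& in : tx.inputs)
        utxo.erase(in);             // spend the inputs
    for (unsigned i = 0; i < tx.numOutputs; ++i)
        utxo.insert({tx.id, i});    // add the new spendable outputs
    return true;
}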


Fork Validation

  • Assumptions: 67% of all work is performed by honest nodes on valid chains
  • Correctness requirement: deterministic accept/reject
  • Supporting data structure: the blockchain
  • Key performance qualities:  
    • average accept cost, as a function of the size of the fork, N (total number of txins/txouts)
    • average rejection cost, as a function of W, the total amount of work (hashes) computed in order to assemble this fork
  • Examples:
    • If a larger valid fork appears, miners eventually switch to working on it
    • New nodes on the network must "bootstrap" themselves by choosing the largest valid chain from scratch (or since the last checkpoint)

Validating a fork means validating every txinput in the fork. Since the best Transaction Validation procedures require average O(log M) effort (per txinput), the optimal complexity for full validation is O(N log M). However, the average time it takes to reject a work-deficient fork can be improved using the "Hash-Value Highway".

The reason we are able to switch from "worst-case" to "average" costs for fork validation is that the proof-of-work scheme itself reaches the average case over a long enough streak, no matter what algorithm you use. Forks are validated work-first, as in the sketch below.
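
Here is a sketch of that work-first ordering (an illustration with invented names): the cheap proof-of-work pass over the headers runs before the expensive O(N log M) transaction pass, so a work-deficient fork is rejected without ever touching the UTXO set.
Code:
// Sketch of work-first fork validation. Invented names; powValid would be one
// hash comparison per header, txsValid the full O(N log M) transaction replay.
#include <cstdint>
#include <vector>

struct Header { uint64_t work; /* prev hash, merkle root, nonce, ... */ };

bool powValid(const Header&) { return true; }              // stub: hash < target
bool txsValid(const std::vector<Header>&) { return true; } // stub: UTXO replay

bool acceptFork(const std::vector<Header>& fork, uint64_t currentWork) {
    uint64_t totalWork = 0;
    for (const auto& h : fork) {          // cheap pass: proofs-of-work only
        if (!powValid(h)) return false;
        totalWork += h.work;
    }
    if (totalWork <= currentWork)         // work-deficient: reject before any
        return false;                     // expensive transaction validation
    return txsValid(fork);                // expensive pass, only for contenders
}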


-- Conclusion --

This is a first attempt at a statement of Bitcoin's computational requirements. Now in technical discussions, we can ask whether a different problem statement makes weaker or stronger assumptions, or if it has weaker or stronger requirements. My goal is to eventually make an argument that satisfying these requirements leads directly to a successful stabilizing consensus protocol, but I'll save that for later.

matonis
September 04, 2012, 05:54:57 AM
#10

Quote from: socrates1024
It was this post by Ben Laurie that led me to read up on distributed systems. I suspect he’s wrong. At the very least, it’s possible to rearrange the problem statement in such a way that Bitcoin solves it. Anyone who has followed the development of this field over several decades will acknowledge that there is no single correct set of model assumptions for every application. Subtle variations have led to unexpected outcomes, for both possibility and impossibility.

It surprises me that, as of yet, no one from distributed systems has said anything about the underlying consensus mechanism of Bitcoin. (Bitcoin and Red Balloons comes close, but it focuses on a red herring in the gossip layer rather than the core.) I hope this changes soon, as distributed systems researchers and the Bitcoin community should have a lot to learn from each other. Like many areas of computer science, distributed systems theory was developed long before real networks and the internet were widespread. Many of the theoretical contributions have been dismissed in practice.


I recently came across this paper (and I'd be glad to hear your comments on their conclusion):

Can We Afford Integrity by Proof-of-Work? Scenarios Inspired by the Bitcoin Currency
http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2041492

I also disagree with Ben Laurie, but that is because I don't believe efficiency needs to be the goal of a decentralized, non-political currency. Methodical, authoritative, and coercively-selective breeding would also lead to better and stronger humans, but that's not the world we want.

Founding Director, Bitcoin Foundation
I also cover the bitcoin economy for Forbes, American Banker, PaymentsSource, and CoinDesk.
socrates1024 (OP) (Andrew Miller)
September 07, 2012, 05:15:16 AM
Last edit: September 07, 2012, 06:02:06 AM by socrates1024
#11

Sure. Assume the distributed consensus protocol works as intended. This paper asks whether a global Proof-of-Work system with a fixed budget could withstand three attack scenarios. The budget is based on an estimate of the current costs of our global digital transaction networks (e.g., debit cards): $283 billion per year.
Quote
If we take our results at face value, they indicate that a PoW network as costly as today’s electronic payment systems could easily withstand attacks by a single supercomputer, and most likely defend against a very successful botnet or a social disobedience attack.
So Bitcoin would defeat all three attacks, and under budget, too!

However, the attack model isn’t too compelling.
Quote
The only threat we consider is that it is designed to defend against: a single party gaining control over 50% of the total computation power.
No, Bitcoin is designed to withstand disruptive behavior from a minority of the network. Against a sustained majority attack, Bitcoin’s only hope is to be an unappealing target. Zooko already asked "What Would You Do With >50% of the World's Bitcoin Mining Power?" It turns out that the sort of maneuvers that become available at this level (e.g., stubbornly blacklisting addresses) aren’t directly profitable.

Although the attacks considered in this paper are not required to be profitable, neither are they “worst-case scenarios” for Bitcoin. Hash-power alone cannot harm the network, although it can penetrate our shields. To actually inflict damage on Bitcoin’s users, the attacker would have to use that hash-power to deliver a payload containing massive double-spends.

Here are two components I recommend for a more realistic attack model:

(Computationally Bounded Adversary): The attacker has a budget, B, for a total amount of work (i.e., hashes: hash-energy rather than hash-power). The budget is the expected number of hashes needed to build a fork B blocks long. The attacker can use these hashes all at once, or spread them over any amount of time.

The bounded (rather than sustained) versions of the attacks from this paper are immediately more realistic. The sentient supercomputer that mines uncooperatively eventually gets reprogrammed. After a few weeks, most of “Occupy Bitcoin” have packed up their tents and gone home. The sharks with frickin’ GPUs attached to their heads will overheat and die.

(Mining as a commodity): Think of mining as something to purchase rather than something to do. For example, you could hire a mercenary mining provider to work on a header containing the transactions of your choice. You could evaluate quality-of-service by asking the mining provider to show you the blocks-that-didn’t-quite-win in addition to the winning blocks - similar to shares in p2pool.

The point of these two assumptions is that an attacker can purchase a quick “burst” of B blocks worth of hash-energy for the same price it would cost a miner at the normal rate. In this model, the attacker might temporarily wield much more hash-power than the rest of the network. Using the same $283 billion annual mining budget from the paper, a 7-block attack would cost an expected $30 million.

What would you do if you could summon a 7-block fork on demand? Well, MtGox (and pretty much everyone else) waits for 6 blocks of confirmation before making an irreversible decision. A profitable attack might be to deposit $33 million worth of bitcoin into MtGox, withdraw it after 6 blocks, then rewind the network and double-spend the original $33 million back out. Expected profit: $3 million (a 10% return on the $30 million of hash-energy). Since the attacker has a finite budget, any attempt at producing a fixed number of blocks has a chance of failure. Potential downside: -$B if the attacker spends the entire budget without finding a 7th block. Potential upside: up to $33 million if the 7 blocks are found much faster than expected. This is the model of “gambler’s ruin”: attacks of this nature might be profitable in the long run, but a losing streak leads to bankruptcy. (A toy calculation of the bust risk follows.)
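
The bust risk is easy to estimate numerically. In a toy model (my own simplifying assumption, not from the paper) where a budget worth 7 expected blocks yields a Poisson(7)-distributed number of blocks actually found, nearly half of all attempts come up short:
Code:
// Toy Monte Carlo for the gambler's-ruin point above. Assumed model: blocks
// found with a budget worth 7 expected blocks is Poisson(7)-distributed.
#include <cstdio>
#include <random>

int main() {
    std::mt19937 rng(7);
    std::poisson_distribution<int> blocksFound(7.0);

    const int trials = 1000000;
    int busts = 0;
    for (int t = 0; t < trials; ++t)
        if (blocksFound(rng) < 7) ++busts;   // budget gone before the 7th block

    // Prints roughly 0.45: nearly half of all attempts burn the entire budget
    // without ever producing the 7-block fork.
    std::printf("P(bust) ~= %.3f\n", (double)busts / trials);
}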

Here’s a different worst-case scenario. Dr. Evil announces that Dec 21, 2012 shall be “Double-Spend Doomsday.” He calls upon the entire population (e.g., 10% of Facebook users?) to participate by buying gold and cash with Bitcoins and sending a message with a double-spend to Dr. Evil’s email address. At 11:59pm, he will unleash a One Billion Dollar fork (about a day’s worth of transactions) containing all the double-spends he receives - harnessing the world’s collective greed to wreak havoc. If the threat is plausible, then perhaps business will grind to a halt for that day. Mutual suspicion combined with a loss of trust in the effectiveness of the PoW shield could result in economic damage, even if the attack is not realized.

Conclusion

This paper is peculiar, since its title and stated goals are completely mismatched to the actual analysis.
Quote
The main goal of this research is to scrutinize the claim of Bitcoin proponents that a decentralized PoW-based currency charges society fewer transaction costs than centralized electronic payment systems.

I agree with you: cutting costs is not the main reason to use a proof-of-work system. The main reasons are to reduce risk by eliminating central points of failure and to provide transparent, objective security claims.

Although this paper begrudgingly acknowledges that Bitcoin may achieve these goals, the attack models should be improved. I suggest a variation that involves bounded (rather than sustained) attacks, an equal cost per block for both miners and attackers (though the attackers get their blocks as rapidly as they like), and an estimate of the damages caused by reordered transactions.

What’s next? At the microeconomic level, I would like to have a risk-analysis model for Bitcoin transactions. What’s the optimal number of blocks to wait before giving you an ounce of gold? At the macro level, we should ask whether an attack on Bitcoin would inflict more economic damage than a similar-sized attack on an alternative system like debit cards. I’d expect these to be related, since user behavior will determine the potential damages for a given attack.

Finally, I’m glad these authors mentioned that Bitcoin may replace other systems besides (or in addition to) consumer card transactions. They omitted large transactions between governments and corporations from their model, but those seem like an ideal kind of transaction to require Bitcoin for, in order to take advantage of its public transparency. Bitcoin could even replace the postal service for official legal correspondence: instead of a registered mailing address, you could receive subpoenas at a Bitcoin address. The mining budget grows with the number of roles Bitcoin can serve. I don’t know how the attacks and risks would be affected.

Forp
September 13, 2012, 09:26:00 PM
#12

In my interpretation of affairs, it is important to understand that Bitcoin does not reach consensus, does not stabilize (in a common-sense interpretation of the word, as in "eventually reaches consensus"), and does not solve the Byzantine Generals problem.

Bitcoin is a highly probabilistic, randomized algorithm which "only" guarantees a certain probability distribution over time - which, admittedly, after a number of blocks is so concentrated that we can treat the outcome as having reached consensus for practically all reasonable expectations.

This fine distinction explains what otherwise could be seen as a contradiction between classical results on Byzantine consensus and the results (seemingly) achieved by Bitcoin.

Of course, Bitcoin by all means is a very interesting algorithm for academia.
socrates1024 (OP) (Andrew Miller)
September 13, 2012, 11:02:05 PM
#13

Quote from: Forp
In my interpretation of affairs, it is important to understand that Bitcoin does not reach consensus, does not stabilize (in a common-sense interpretation of the word, as in "eventually reaches consensus"), and does not solve the Byzantine Generals problem.

You're right, it's important to understand the distinction: Bitcoin clearly does not solve Byzantine Generals, but hopefully it solves a related, weaker problem instead. Is the weaker problem useful? I do disagree with your point about the common-sense interpretation - I think Bitcoin users have the correct intuition about how the system behaves.

Here's the crux of the distinction. If Bitcoin were a deterministic Byzantine Generals solution, then it would provide a claim like the following: "If you wait 6 blocks before making an irreversible action (e.g. handing over a bar of gold to a stranger), then you're 100% guaranteed not to get double-spent."

This is not one of the services that Bitcoin offers to its users. Instead, as you say,
Quote from: Forp
Bitcoin is a highly probabilistic, randomized algorithm which "only" guarantees a certain probability distribution over time - which, admittedly, after a number of blocks is so concentrated that we can treat the outcome as having reached consensus for practically all reasonable expectations.

The generally accepted language for randomized consensus algorithms is that they "eventually converge with probability 1," mostly because that is a mathematically friendly statement to work with. (See, for instance, Stumbling over Consensus Research: Misunderstandings and Issues.) But this isn't necessarily what users want. Certainly the 100% deterministic version would be more user-friendly, but it's not possible on an anonymous network. My opinion is that Bitcoin strikes the perfect balance - strong enough to satisfy demand, just weak enough to be viable. Here's a way of closing the gap between what we have and what we might want:

(Fork Insurance) Imagine there is a market for Bitcoin insurance. A pawn shop that wants to accept Bitcoin payments would probably want to purchase a fork-insurance policy. The policy would offer the 100% deterministic option to its customers: if the merchant dutifully waits for 6 blocks before letting the customer walk out with a gold bar, then the merchant is insured for up to $1000.

If you use Bitcoin directly, you're responsible for your own risk assessment and management. What's excellent about Bitcoin is that you can do this anonymously, yet benefit from the worldwide amortization of mining work. If you are unable or unwilling to make your own risk assessment, you can hire someone to make this sort of decision for you. Note this is an individual decision - there's no such thing as an official Bitcoin insurer. Also note that payment processors like BitPay already provide this interface to their customers: they put money in your account right away; double-spends are their problem, and so is evaluating the health of the network.

What's the price of Bitcoin insurance? If you were going to insure a Bitcoin exchange, how would you calculate that risk? And would you put more effort into auditing the exchange's php scripts for silly vulnerabilities, or into shoring up against the dreaded 51% doomsday attack? Are there any Bitcoin actuaries who have started on this sort of modeling? I have no idea what the legal and regulatory environment is for this kind of insurance - is it more or less intimidating than money transmitting?

gmaxwell (Moderator)
September 13, 2012, 11:14:00 PM
Last edit: September 13, 2012, 11:38:19 PM by gmaxwell
#14

Quote from: socrates1024
Certainly the 100% deterministic version would be more user-friendly, but it's not possible in an actual system implemented in our universe.

After all… an asteroid could hit you. Crazy simultaneous bitflips could ruin any level of redundancy in any system. Etc.

Pure mathematics can be deterministic; implemented systems can't be, completely. The functional question is: does it converge fast enough, to a high enough probability, that the risk of false convergence is minor and buried under all the externalities?

It may be easier to explain something when its platonic ideal is deterministic, but since the actual implementation can't be completely deterministic, those explanations are inherently misleading. :)
socrates1024 (OP) (Andrew Miller)
September 13, 2012, 11:23:55 PM
#15

Quote from: gmaxwell
It may be easier to explain something when its platonic ideal is deterministic, but since the actual implementation can't be completely deterministic, those explanations are inherently misleading. :)

Even among the theoretical idealizations, the "deterministic 100%" requirement is so unrealistic that it has (somewhat) fallen out of favor. "Converges with probability 1" is also a platonic ideal, but it's much more useful and a much better fit for Bitcoin.

HZKto
December 25, 2013, 02:39:50 AM
#16

Quote from: socrates1024
Here's the crux of the distinction. If Bitcoin were a deterministic Byzantine Generals solution, then it would provide a claim like the following: "If you wait 6 blocks before making an irreversible action (e.g. handing over a bar of gold to a stranger), then you're 100% guaranteed not to get double-spent."

There is some guarantee: https://github.com/bitcoin/bitcoin/blob/285cf7a1a6cb660b57cbc75f63e49736b51d705e/src/main.cpp#L2255-L2260
Code:
int64_t deltaTime = pblock->GetBlockTime() - pcheckpoint->nTime;
if (deltaTime < 0)
{
    return state.DoS(100, error("ProcessBlock() : block with timestamp before last checkpoint"),
                     REJECT_CHECKPOINT, "timestamp before checkpoint");
}

P.S. Thanks for summarizing paper references!