Bitcoin Forum

Bitcoin => Development & Technical Discussion => Topic started by: iamnotback on November 05, 2016, 11:34:02 AM



Title: Blockchain-Free Cryptocurrencies: Framework for Truly Decentralised Fast Txns
Post by: iamnotback on November 05, 2016, 11:34:02 AM
https://eprint.iacr.org/2016/871

https://iohk.io/docs/research/A%20Blockchain-free%20Approach%20for%20a%20Cryptocurrency%20-%20Input%20Output%20HongKong.pdf

I am very sleepy and haven't read the paper entirely, just scanned it. So there will likely be some errors in any analysis I do in this groggy state-of-mind.

I want to rattle off a potential list of flaws that come to mind immediately.

1. It is not plausibly scalable for every payer to receive notice of, nor validate/record the graph metrics for, every transaction in the network. Payers must rely on some supernodes, which then become fulcrums for selfish game-theory strategies that can likely break the collaborative Nash equilibrium assumption. For example, a supernode could lie about a double-spend, causing massive orphaning once discovered, possibly gaining profits by speculatively shorting the value of the token. Supernodes could collude in such malfeasance, even a 51% attack. So the claim that the tendency toward centralization has been entirely mitigated seems debatable. The paper does mention pruning (from computations) the ancestors whose fees have been consumed, but afaics this doesn't mitigate the need for verifiers to receive a broadcast of every transaction (or a large fraction of all transactions).

2. There is no total order in the described system, thus any partial-order DAG only exists from the perspective of those partial orders which reference it. Thus the reward for any DAG is always subject to being retaken by an entity which can apply more PoW than was originally applied. Thus the selfish-mining flaw appears to apply. A miner with 1/4 to 1/3 of a DAG partial order's hashrate can lie in wait, allowing others to waste their PoW on a DAG while building a hidden parallel DAG claiming the same rewards, then release the hidden DAG later, orphaning all those transactions and rewards, thus increasing the attacker's share of the rewards (including minted coins) relative to what the proportion of their hashrate would otherwise provide without the selfish-mining strategy. And it appears to me to be catastrophically worse than for Satoshi's design, in that there will likely be multiple unmerged DAG branches at any moment, so the attacker probably needs much less than 1/4 of the network hashrate to selfish-mine any one of those coexistent DAG branches.
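To make the selfish-mining intuition concrete, here is a minimal Monte Carlo sketch of the classic selfish-mining state machine (Eyal & Sirer). This models a single chain rather than a DAG and assumes γ = 0 (no honest miner ever builds on the attacker's block during a tie); the function name and parameters are illustrative:

```python
import random

def selfish_mining_share(alpha: float, rounds: int = 300_000, seed: int = 7) -> float:
    """Attacker's share of blocks on the public chain when selfish mining
    with fraction `alpha` of the hashrate (gamma = 0 variant)."""
    rng = random.Random(seed)
    lead = 0          # hidden private-chain lead over the public chain
    tie = False       # 1-vs-1 race after the attacker matches an honest block
    selfish = honest = 0
    for _ in range(rounds):
        attacker_finds = rng.random() < alpha
        if tie:
            if attacker_finds:
                selfish += 2   # attacker extends own branch: both its blocks win
            else:
                honest += 2    # honest extend theirs: attacker's block orphaned
            tie, lead = False, 0
        elif attacker_finds:
            lead += 1          # withhold the new block
        elif lead == 0:
            honest += 1        # ordinary honest block
        elif lead == 1:
            tie, lead = True, 0  # attacker publishes, creating a 1-1 race
        elif lead == 2:
            selfish += 2       # publish both hidden blocks, orphan the honest one
            lead = 0
        else:
            selfish += 1       # reveal one block; still safely ahead
            lead -= 1
    return selfish / (selfish + honest)

# Above roughly 1/3 of the hashrate (for gamma = 0) the attacker's revenue
# share exceeds its hashrate; below, withholding is a losing strategy:
print(selfish_mining_share(0.40))  # > 0.40
print(selfish_mining_share(0.20))  # < 0.20
```

The point in the paragraph above is that on a DAG split into unmerged branches, `alpha` would be measured against a single branch's hashrate rather than the whole network's, so the effective attack threshold could be far lower.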

Quote from: section 3.1 page 18
The first natural but often unstated assumption is that a majority of players follow the correctness rules of the protocol.

...

Equally important is the assumption of rational participants (whether they are cheating or not), and we likewise assume that majority of the computing power is held by rational players.

From the analysis I did of Iota's DAG, it seems impossible to presume the majority of players obey any Nash equilibrium in a blockless DAG design. It appears to be a fundamentally insoluble issue. In other words, it is not sufficient to analyze the security and convergence game theory (properties) from a holistic, systemic perspective, because per-DAG-branch partial-order strategies arise.

3. I intuitively expect some flaw around the variable control over fees collected per unit of PoW expended, i.e. control over difficulty. But I am too sleepy to work through this part of the paper right now.

I considered a design like this last year. And I came to the conclusion that there is no way to avoid centralization employing proof-of-work incentivized by profit, regardless of any design that could possibly be contemplated (https://bitcointalk.org/index.php?topic=1319681.0).

Btw, I don't understand why that paper failed to cite the prior art of Iota's and Sergio Demian Lerner (https://bitcointalk.org/index.php?topic=1177633.0)'s DAGs.


Edit: Section "2.1 Collaborative Proof Of Work" on page 7 of the white paper explains well the mathematical concept of cumulative proof-of-work as a proxy for measuring the relative resources consumed by chain as the metric for the chain length in a longest-chain-rule.
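As a sketch of that concept (the data structures are illustrative, not the paper's notation): chain selection under cumulative proof-of-work compares the sum of per-block work (difficulty), not the block count:

```python
# Each block record carries the difficulty it was mined at (illustrative schema).
def cumulative_work(chain):
    """Total expected hashes to produce the chain. Bitcoin-style clients
    compare competing chains by this sum, not by their length."""
    return sum(block["difficulty"] for block in chain)

def best_chain(chains):
    """'Longest chain rule' properly stated: the chain with the most work."""
    return max(chains, key=cumulative_work)

# A shorter chain mined at higher difficulty beats a longer, easier one:
chain_a = [{"difficulty": 100}] * 10   # 10 blocks, total work 1000
chain_b = [{"difficulty": 300}] * 5    # 5 blocks, total work 1500
assert best_chain([chain_a, chain_b]) is chain_b
```

This is why "longest chain" is really a proxy for "most resources consumed", which is the framing section 2.1 generalizes to a DAG.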


Title: Re: Blockchain-Free Cryptocurrencies: Framework for Truly Decentralised Fast Txns
Post by: dsattler on November 05, 2016, 04:50:40 PM
Btw, I don't understand why that paper failed to cite the prior art of Iota's and Sergio Demian Lerner (https://bitcointalk.org/index.php?topic=1177633.0)'s DAGs.

Don't forget Byteball, a new consensus algorithm and private untraceable payments using DAG, no POW, no POS! ;)
https://bitcointalk.org/index.php?topic=1608859.0


Title: Re: Blockchain-Free Cryptocurrencies: Framework for Truly Decentralised Fast Txns
Post by: iamnotback on November 06, 2016, 04:06:44 AM
Btw, I don't understand why that paper failed to cite the prior art of Iota's and Sergio Demian Lerner (https://bitcointalk.org/index.php?topic=1177633.0)'s DAGs.

Don't forget Byteball, a new consensus algorithm and private untraceable payments using DAG, no POW, no POS! ;)
https://bitcointalk.org/index.php?topic=1608859.0

Also found this:

https://www.youtube.com/watch?v=zjT7wQNg_s4

The innovation claimed is that everyone can agree on 11 of 12 centralized supernodes to order the transactions, thus we wouldn't need PoW nor blocks if this claim were true and desirable.

If that claim were true, then we wouldn't have Visa and Mastercard dominant today.

Since people can't agree, the governance of society is a power vacuum. The most ruthless and powerful are sucked into the vacuum to provide the top-down organization (discipline) that society requires to function. So the outcome will be no different in this case: the 12 supernodes will be controlled by one entity (even pretending to be 12 entities via a Sybil attack), because the users will never be able to agree on any evolution away from the 12 by forming a consensus on an exact 12, since they are only allowed a mutation of 1 at a time. And any higher rate of mutation would make it implausible to define a total order.

Tangentially (off-topic for technical discussion) although the creator appears to have good intentions, I argue his distribution method is highly flawed. Giving away coins for free means most will dump them on the market, thus collapsing the price. Well maybe that is by design, so someone can scoop them up cheap and then later after price hits rock bottom, then that group can pump & dump it making the usual fortune by mining the n00b speculators.


Title: Re: Blockchain-Free Cryptocurrencies: Framework for Truly Decentralised Fast Txns
Post by: iamnotback on November 09, 2016, 05:34:11 PM
Quote from: section 3.1 page 18
The first natural but often unstated assumption is that a majority of players follow the correctness rules of the protocol.

...

Equally important is the assumption of rational participants (whether they are cheating or not), and we likewise assume that majority of the computing power is held by rational players.

From the analysis I did of Iota's DAG, it seems impossible to presume the majority of players obey any Nash equilibrium in a blockless DAG design. It appears to be a fundamentally insoluble issue. In other words, it is not sufficient to analyze the security and convergence game theory (properties) from a holistic, systemic perspective, because per-DAG-branch partial-order strategies arise.

We must differentiate Iota's design because afair it has no reward for doing proof-of-work other than the "altruism-prime" motivation (https://blog.ethereum.org/2014/11/25/proof-stake-learned-love-weak-subjectivity/) and afaics Iota does not have the localized incentive of Theorem 2 mentioned below.

You seem to be a smart guy. Here is a challenge for you - design such a system based on DAG that allows to issue coins a-la Bitcoin (we start with 0 supply) without weakening the security of the system. I think 1 week is enough for you. Do you accept the challenge? These links may be helpful:
- https://en.wikipedia.org/wiki/CAP_theorem
- https://en.wikipedia.org/wiki/Nash_equilibrium
- https://en.wikipedia.org/wiki/Pareto_efficiency

Just because there's a PoW component (initially at least), which produces new coins. You might not like mining, but it's established enough that few would seriously object to using it for distribution. (Though it's of course always preferable if the PoW is GPU/ASIC resistant.)

Why would you extend my branches if by invalidating them you would earn more coins?

Afaics Iota's convergence depends on all payers and payees adopting the same strategy¹ with no incentive present to choose one strategy over another (which is why I never thought Iota could maintain a Nash equilibrium without centralized servers enforcing a strategy).

The above quote from the white paper is the normal assumption of resistance up to the well-known "51% attack". Theorem 2 in section 3.2 of page 19 (https://eprint.iacr.org/2016/871.pdf#page=19) explains that the honest, rational participant has (presuming a Nash equilibrium) a probabilistic and opportunity-cost incentive to apply proof-of-work (i.e. append) on the "leading edge", analogous to the longest-chain-rule incentive in Bitcoin.

Yet a Nash equilibrium requires that there are no other plausible strategies in conflict with each other. So we must consider:

With or without direct monetary rewards (e.g. minted coins or non-burned txn fees), selfish mining can be conceptualized more generally as the asymmetry, between different proof-of-work participants (aka miners in Bitcoin), of the cost of effective PoW (or burned txn fees) relative to whatever that PoW (or those burned txn fees) accomplishes in the consensus system. So even for Iota or DagCoin, which afair don't monetarily reward the PoW (i.e. afaik the PoW is simply burned), the asymmetry still exists in terms of the value of what PoW can effect in the system. Thus as CfB wrote, "a more sophisticated strategy may be more profitable" given some externalities, such as achieving a double-spend and shorting the token's exchange value.

And afaics, this is where the paper errs just below the proof of Theorem 2:

Quote
A stronger property can be made for those transactions that further satisfy property #3 (https://eprint.iacr.org/2016/871.pdf#page=10)—namely that the prize of the new transaction be larger still than the prize of its parents before the new transaction came into existence. As long as this property is true, not only will honest verifiers have an incentive to prefer the new transaction over its parents, but even dishonest clients—who might think of actively denying certain valid transactions—will still find it advantageous to prefer the new transaction.

The possibility of non-Nash-equilibrium attacks is acknowledged, but in a dismissive tone (and afaics with an incorrect presumption of "convergence" being final, unless "convergence" means probabilistic assurance of some multiple "confirmations" of 50% of all proof-of-work of all branches as descendants of our branch):

Quote from: Concerted attacks, Section 4 of page 21
We note that partially verified transactions have temporary exposure to a concerted attack, since a powerful attacker may have the temporary local ability to overpower the honest majority by focusing all of its efforts against a specific target. We note that once a transaction nears or reaches convergence, it will be as strongly affirmed as it would be in a Blockchain system of equivalent total verification power.

There is little value in using energy to remove a previous transaction, outside of attacks that focus on transactions one may wish to remove, such as in a double spend scenario, see Theorem 1.

What I wrote previously is afaics true when either minting rewards are present, or when transactions can earn some fees because they don't "satisfy property #3 (https://eprint.iacr.org/2016/871.pdf#page=10)":

2. There is no total order in the described system [insert: unless we reach probabilistic "convergence" as I described it above], thus any partial-order DAG only exists from the perspective of those partial orders which reference it. Thus the reward for any DAG is always subject to being retaken by an entity which can apply more PoW than was originally applied. Thus the selfish-mining flaw appears to apply. A miner with 1/4 to 1/3 of a DAG partial order's hashrate can lie in wait, allowing others to waste their PoW on a DAG while building a hidden parallel DAG claiming the same rewards, then release the hidden DAG later, orphaning all those transactions and rewards, thus increasing the attacker's share of the rewards (including minted coins) relative to what the proportion of their hashrate would otherwise provide without the selfish-mining strategy. And it appears to me to be catastrophically worse than for Satoshi's design, in that there will likely be multiple unmerged DAG branches at any moment, so the attacker probably needs much less than 1/4 of the network hashrate to selfish-mine any one of those coexistent DAG branches.

However, if the quoted selfish mining doesn't require 1/4 to 1/3 of the total systemic hashrate, because the network hashrate is split amongst several coexistent branches of the DAG (which at any moment have not yet been converged), then it also means the selfish miner is only becoming relatively wealthier than the participants on the attacked branch, and not w.r.t. transactions in other branches of the systemic DAG. Yet I also posit it means multiple selfish miners, probabilistically on different branches, don't need to be coordinated, so the threshold-of-attack is lower and thus economically there should be more such attackers (than for Satoshi's design).

Even if we remove minting from the described system and require that all transactions "satisfy property #3 (https://eprint.iacr.org/2016/871.pdf#page=10)", so that the only incentive to converge on leading edges is an "altruism-prime" to have one's transaction confirmed (which is in theory an undersupplied public good and empirically weaker than an individualized for-profit incentive (https://blog.ethereum.org/2014/11/25/proof-stake-learned-love-weak-subjectivity/)), then afaics the potential attack becomes a combination of a selfish mining attack, in the sense of causing others on the same branch to waste proof-of-work resources (those others thus becoming relatively less profitable than the attacker), combined with a double-spend attack on the lie-in-wait branch. Note that for the honest participants, the cumulative proof-of-work (in this constrained design variant) would necessarily need to cost significantly less than the value of the transactions in the branch (since, given there is no reward, the proof-of-work is effectively a transaction fee). Thus I posit the double-spend attack becomes quite plausible because the security is so low. The vulnerability is ostensibly much greater than for Bitcoin, because the branch is only secured by the said commensurate value of proof-of-work as "transaction fees", and because, adapting the above quote, "there will likely be multiple unmerged DAG branches at any moment, so the attacker probably needs much less than 51% of the network hashrate to lie-in-wait on any one of those coexistent DAG branches".

@TomHolden, I agree that Satoshi's PoW has the same potential vulnerability in that if double-spends exceed the value of what was burned to provide security, then a 51% lie-in-wait attack is possible funded by the value of the double-spends (possibly also shorting the exchange value in case the successful attack craters the price).

Thus, @tonych's concern applies to every consensus design (including Satoshi's) which is based on burning some resources as the metric of the longest-chain-rule (regardless whether multiple branches are merged to form the longest-chain, e.g. a DAG).


¹https://bitcointalk.org/index.php?topic=1319681.msg13538929#msg13538929
https://bitcointalk.org/index.php?topic=1319681.msg13533261#msg13533261


Title: Re: Blockchain-Free Cryptocurrencies: Framework for Truly Decentralised Fast Txns
Post by: iamnotback on November 09, 2016, 07:16:14 PM
3. I intuitively expect some flaw around the variable control over fees collected per unit of PoW expended, i.e. control over difficulty. But I am too sleepy to work through this part of the paper right now.

Okay so this design issue is explained in Automatic Drain Rate Adjustment of section 2.2.1 on page 12.

Please check my logic, because afaics that section doesn't correctly conceptualize the flaw and potential for attack.

In Bitcoin, relatively small discrepancies between miners' clocks as seen in timestamps, w.r.t. forming consensus on one chain over a lengthy 2016-block readjustment window, are not equivalent to the case where difficulty is adjusted separately for each partial-order branch of the DAG, wherein hashrate can be volatile because it can be moved between branches at will; thus it is necessary to adjust the difficulty much more frequently over shorter windows.

If the readjustment window is too long, then a high-hashrate attacker can stall a branch for a long time by throwing high hashrate at it until the difficulty adjusts, then leave for another branch. Whereas if the readjustment window is short, then an attacker can use timestamp manipulation to manipulate the system, as well as rapidly undulating difficulty levels for different branches.
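A toy retarget rule (Bitcoin-style inverse proportionality; the function and the numbers are illustrative) shows how a short window plus lying timestamps lets an attacker crater a branch's difficulty:

```python
def retarget(difficulty: float, window_blocks: int, target_spacing: float,
             claimed_elapsed: float) -> float:
    """Simple inverse-proportional retarget, as in Bitcoin:
    new = old * expected_time / actual_time, where actual_time is computed
    from miner-supplied timestamps."""
    expected = window_blocks * target_spacing
    return difficulty * expected / claimed_elapsed

d = 1000.0
# Honest window: 10 blocks at 60s spacing took ~600s -> difficulty unchanged.
honest = retarget(d, 10, 60, 600)
# Attacker stretches its timestamps to claim 4x the elapsed time over the
# same short window -> difficulty drops 4x, making the branch cheap to extend.
lied = retarget(d, 10, 60, 2400)
print(honest, lied)  # 1000.0 250.0
```

With a long window, many honest timestamps bound the claimed elapsed time; on a short per-branch window there may be few or no honest timestamps to constrain the lie.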

I expect Nash equilibrium failures (i.e. conflicting strategies) around the lack of consistency of difficulty levels between branches that need to converge.

As noted in Disruption and DoS of section 4 on page 21, transaction spam is handled heuristically and is orthogonal to the need for difficulty adjustment.

Afaics, difficulty adjustments and a DAG seem fundamentally incompatible. Afair Iota doesn't need to adjust difficulty because the proof-of-work isn't rewarded.


Title: Re: Blockchain-Free Cryptocurrencies: Framework for Truly Decentralised Fast Txns
Post by: iamnotback on November 10, 2016, 05:31:46 AM
3. I intuitively expect some flaw around the variable control over fees collected per unit of PoW expended, i.e. control over difficulty. But I am too sleepy to work through this part of the paper right now.

...

Afaics, difficulty adjustments and a DAG seem fundamentally incompatible. Afair Iota doesn't need to adjust difficulty because the proof-of-work isn't rewarded.

A more concise reason why minting and DAGs appear to be fundamentally incompatible is because:

  • As the white paper admits in section 2.2.1, there is no total order perspective for which to compute the systemic difficulty, thus it can only be computed per DAG branch.
  • Minting reward (per unit of proof-of-work computation) is maximized by mining on the branch with the least cumulative proof-of-work, so there is an incentive to maximize the breadth of the tree. This is a Nash equilibrium conflict with the fee mechanism and with Theorem 2's assumed incentive to apply proof-of-work (i.e. append) on the "leading edge"; i.e. the "altruism-prime"¹ of the fee mechanism is an undersupplied public good (https://blog.ethereum.org/2014/11/25/proof-stake-learned-love-weak-subjectivity/) relative to the individualized reward of minting.

¹ Given that systemically there is no income from fees because taking fees (instead of "pass-through") lowers the value for others to append to the branch. Thus the fees are effectively burned.
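A back-of-envelope illustration of the incentive conflict in the second bullet (the numbers and names are made up): if difficulty is computed per branch, the minted reward per hash is highest on the weakest branch:

```python
# Hypothetical per-branch difficulty: expected hashes to append one transaction.
REWARD = 50.0  # minted coins per appended transaction (illustrative)

def reward_per_hash(difficulty: float, reward: float = REWARD) -> float:
    """Expected minted coins earned per hash spent appending to a branch."""
    return reward / difficulty

leading_edge = reward_per_hash(8000.0)  # heavily-worked leading edge
stale_branch = reward_per_hash(1000.0)  # low-cumulative-work side branch
print(stale_branch / leading_edge)      # 8.0: the stale branch pays 8x per hash
```

So a rational miner chasing minting yield broadens the tree instead of converging on the leading edge, which is exactly the conflict with Theorem 2's assumed incentive.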


Title: Re: Blockchain-Free Cryptocurrencies: Framework for Truly Decentralised Fast Txns
Post by: iamnotback on November 10, 2016, 08:54:27 AM
Btw, I don't understand why that paper failed to cite the prior art of Iota's and Sergio Demian Lerner (https://bitcointalk.org/index.php?topic=1177633.0)'s DAGs.

Don't forget Byteball, a new consensus algorithm and private untraceable payments using DAG, no POW, no POS! ;)
https://bitcointalk.org/index.php?topic=1608859.0

...

The innovation claimed is that everyone can agree on 11 of 12 centralized supernodes to order the transactions, thus we wouldn't need PoW nor blocks if this claim were true and desirable.

... where the 12 supernodes will be controlled by one entity (even pretending to be 12 entities via a Sybil attack). Because the users will never be able to agree on any evolution away from the 12 by forming a consensus on an exact 12, since they are only allowed a mutation of 1 at a time. And any higher rate of mutation would make it implausible to define a total order.

The Byteball design is conceptually worse than (D)PoS from the analytical perspective that the practical ability to change the top-down controlling entities is what differentiates (D)PoS from Byzantine fault tolerant federated designs (https://youtu.be/whdUSchadEs?t=1022) (<-- watch linked video from 17:00 until 22:45). Except that perspective assumes a majority of the stake can't be induced to collude to deviate from the Nash equilibrium (w.r.t. control over, and thus outcomes from, those ordering nodes in (D)PoS), which seems myopic because in reality the omnipresent power-law distribution of wealth ensures the whales own greater than 50% of the stake; and if the minnows are not individually economically incentivized, then they are operating on "altruism-prime" with an undersupplied-good opportunity cost, which is the power vacuum of political economics (http://esr.ibiblio.org/?p=984). (And intuitively any individualized economic incentive will always be captured by economies-of-scale, as exemplified by selfish mining, begetting the inviolable power-law distribution outcome.)

This reason, as well as the lack of scaling robustness (https://bitcointalk.org/index.php?topic=1319681.msg16792628#msg16792628), is a weakness; and (D)PoS is worse than Satoshi's design w.r.t. Nash equilibrium because no value is extracted (such as spent on an external resource, as in proof-of-work), thus 51% nothing-at-stake attacks are inexorable, as well as free when you can short the token on an exchange (https://bitcointalk.org/index.php?topic=1319681.msg13488432#msg13488432). However, these "wolverine federated systems in an illusory democratic sheepskin" are more computationally efficient than systems which employ proof-of-work.

IOHK has proved security for a PoS system (https://eprint.iacr.org/2016/889.pdf), but the assumptions remain that the majority of the stake is not colluding to violate the Nash equilibrium (https://eprint.iacr.org/2016/889.pdf#page=3) and that a majority of the stake remains online at all times (https://eprint.iacr.org/2016/889.pdf#page=3). I don't see what IOHK's PoS accomplishes that isn't already accomplished by DPoS. Is it more objective w.r.t. violations of the Nash equilibrium, since in DPoS the majority of the stake can be offline and so can't observe first-hand any violations? DPoS is presumably provably secure if a majority of the delegates adhere to the Nash equilibrium.

So in summary, we can hide "wolverine federated systems in an illusory democratic sheepskin" and gain computational efficiency. But the security problems (or more realistically the economic centralization problem, since large stake holders need insidious means, as there isn't sufficient shorting liquidity for them to scorch their earth) shift to the power vacuum of political economics and the inviolable power-law distribution of wealth (begotten by economies-of-scale). Yet Satoshi's design also has these centralization problems, due to the same power vacuum of political economics and inviolable power-law distribution of wealth.

Will anyone find another class of solution which provides long-term stable resistance to the centralization inherent in the power vacuum of political economics and the inviolable power-law distribution of wealth (begotten by economies-of-scale)? Is (D)PoS already more realistically resistant to the insidious effects of centralization of vested-interest "stake" than Satoshi's design?

This is the Holy Grail we seek because centralized ecosystems don't scale due to the stifling politics and vested interests. In my opinion (which is probably an analysis many others share), this is what is holding back Bitcoin lately.


Title: Re: Blockchain-Free Cryptocurrencies: Framework for Truly Decentralised Fast Txns
Post by: iamnotback on November 10, 2016, 11:03:17 AM
Is (D)PoS already more realistically resistant to insidious effects of centralization of vested interests "stake" than Satoshi's design?

No.

(D)PoS isn't a free market on transaction fees. Somebody has to pay for the servers, whether it is taken out of the collective as "witness fees" from dilution (as is the case for Steem) or otherwise. The vested, power-law distributed stake interests have a monopoly and can charge (more than the costs, up to) the maximum the market can bear, which some allege is also underway in Bitcoin as proof-of-work mining allegedly centralizes with economies-of-scale.


Title: Re: Blockchain-Free Cryptocurrencies: Framework for Truly Decentralised Fast Txns
Post by: spartacusrex on November 10, 2016, 02:54:41 PM
I know you're not, but I'm glad you're back, Anonymint TBTP iamnotback..

Always enjoy squinting and leaning forward to read your 'light' posts.. (and invariably scratching my head)..


Title: Re: Blockchain-Free Cryptocurrencies: Framework for Truly Decentralised Fast Txns
Post by: iamnotback on November 10, 2016, 03:25:25 PM
Always enjoy squinting and leaning forward to read your 'light' posts.. (and invariably scratching my head)..

As my poor liver+digestive+delirium health allows, I will be trying to pull my thoughts into a more coherent document. This thread has been more stream-of-(in)consciousness while undulating in/out of severity of delirium or some sharpness of mind. Imagine playing an action packed video game where the screen blacks out every other 5 seconds whilst the game continues. Difficult to maintain continuity of thoughts and short-term memory.

Please feel free to raise any questions or quote any portions that need more clarification/discussion (or don't to avoid the masochism of reading more of my discombobulated babble).

Believe me, Iamnotback (http://www.chicagotribune.com/sports/basketball/bulls/chi-bulls-michael-jordan-im-back-20th-anniversary-20150318-htmlstory.html). I am barely here nor there. Maybe by February I will be back after the scheduled expert medical diagnosis.

I don't think many people fully understand DAGs. Ditto the microeconomics and game theory of blockchains. I am trying to gain a holistic understanding of the design axes.

One final point: there is a science of designing economic incentives so that rational players will behave in a desired way, and it’s called mechanism design (https://en.wikipedia.org/wiki/Mechanism_design). Creators of cryptocurrencies (as well as creators of applications such as the DAO) are essentially doing mechanism design. But mechanism design is hard, and our paper is the latest among many to point out that the mechanisms embedded in cryptocurrencies have flaws. Yet, sadly, the cryptocurrency community is currently disjoint from the mechanism design community. That is why I’m thrilled that mechanism design expert Matt Weinberg (https://www.cs.princeton.edu/~smattw/), who’s behind all the sophisticated theory in our paper, is joining Princeton’s faculty next semester. Expect more research from us on the mechanism design of cryptocurrencies!

Edit: and another potential reason why my explanation in this thread may lack complete clarity is because it might require discussing my design solution, which I am not ready to do. Thus I am writing a private document for that now and sharing publicly my analysis and brainstorming.


Title: Re: Blockchain-Free Cryptocurrencies: Framework for Truly Decentralised Fast Txns
Post by: iamnotback on November 10, 2016, 07:10:04 PM
But the security problems (or more realistically the economic centralization problem, since large stake holders need insidious means, as there isn't sufficient shorting liquidity for them to scorch their earth) shift to the power vacuum of political economics and the inviolable power-law distribution of wealth (begotten by economies-of-scale). Yet Satoshi's design also has these centralization problems, due to the same power vacuum of political economics and inviolable power-law distribution of wealth.


Is (D)PoS already more realistically resistant to insidious effects of centralization of vested interests "stake" than Satoshi's design?

No.

(D)PoS isn't a free market on transaction fees. Somebody has to pay for the servers, whether it is taken out of the collective as "witness fees" from dilution (as is the case for Steem) or otherwise. The vested, power-law distributed stake interests have a monopoly and can charge (more than the costs, up to) the maximum the market can bear, which some allege is also underway in Bitcoin as proof-of-work mining allegedly centralizes with economies-of-scale.

I am writing something privately more coherently driving towards the generative essence of what I am thinking about in the above quotes:


Power-law Distribution Control

A Nash equilibrium (https://en.wikipedia.org/wiki/Nash_equilibrium) can coexist with coordinated control over greater than 50% of the resources in a consensus ordering system, if there is no rationally better strategy employing said control which, when deployed, dictates a change to the optimum strategy of any system participant.

For example, in a proof-of-work system, whether or not coordinated miners with a significant percentage of the system hashrate are selfishly and stubbornly mining¹ on new blocks immediately for themselves and propagating them slowly to other miners, as a relatively more profitable mining strategy, doesn't dictate or change the other system participants' optimum mining strategy nor their optimum number of confirmations for a specific probability of a double-spend. Actually, concentrations of controlled hashrate, even when less than 50%, do slightly impact confirmation probabilities⁶, but this is ignored except for very-large-value transactions.
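The "optimum number of confirmations for a specific probability of a double-spend" is the computation from section 11 of Satoshi's whitepaper; a direct transcription:

```python
import math

def double_spend_probability(q: float, z: int) -> float:
    """Probability that an attacker with fraction q of the hashrate ever
    catches up from z confirmations behind (Satoshi's whitepaper, section 11):
    P = 1 - sum_{k=0..z} Poisson(k; z*q/p) * (1 - (q/p)^(z-k))."""
    p = 1.0 - q
    if q >= p:
        return 1.0  # the attacker catches up with certainty
    lam = z * (q / p)
    prob = 1.0
    for k in range(z + 1):
        poisson = math.exp(-lam) * lam ** k / math.factorial(k)
        prob -= poisson * (1.0 - (q / p) ** (z - k))
    return prob

print(double_spend_probability(0.10, 6))  # ~0.0002428 (whitepaper table)
print(double_spend_probability(0.30, 5))  # ~0.1773523 (whitepaper table)
```

Larger q (concentrated hashrate) forces more confirmations for the same risk, which is the slight impact on confirmation probabilities noted above.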

Another example is that control over all new blocks via control over a majority of the stake in DPoS system, enables a strategy of dictating the level of transaction fees, but it doesn't change the optimum strategy of any participant in the system (other than the futility of the minority stake voting). Whereas for a proof-of-work or non-delegated proof-of-stake system, the optimum strategy of the minority (hashrate or stake respectively) changes (to not mining or staking respectively) because all of their blocks will be orphaned, although effectively in DPoS the majority vote would just choose all the delegates so none would be orphaned.

Another counter example in proof-of-work or proof-of-stake systems is that a strategy of employing majority of the hashrate or stake respectively to issue double-spends does impact the strategy of other participants w.r.t. their computation of probabilities of a double-spend and their non-participation in the system.

The importance of this realization, that a Nash equilibrium can coexist with majority control over the resources of a consensus ordering system, is due to the following inviolable fact of physics and the economics of our universe.

Theorem: the control over the resources in every consensus ordering system will be power-law distributed. No counterexample will be discovered.

Proof: Smaller mass is more attracted to larger mass because it maximizes the entropy, aka the information content, of the system.[Moore2016] Lonesome mass has no frame-of-reference, thus has a high probability of only one future. It is also possible to relate this to why we must have friction, oscillation, and a numerable speed-of-light, so that the past and future light cones of special relativity don't collapse into the undifferentiated, voiding all distinguishable existence.

If this theorem holds, and I can argue that the strategy of employing majority control over hashrate or stake to issue double-spends or to orphan all minority blocks is not optimum for a rational power-law distributed majority, then I can claim a Nash equilibrium can exist for consensus ordering systems.
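
For illustration only (this is not a proof of the theorem), sampling resources from a Pareto distribution with an arbitrarily chosen shape parameter shows the kind of concentration the theorem asserts:

```python
import random

random.seed(7)

# Illustration: sample miner resources from a Pareto distribution and
# measure how much of the total the largest participants control.
# The shape parameter is an assumption, roughly the "80/20" regime.
alpha = 1.16
resources = sorted((random.paretovariate(alpha) for _ in range(10_000)),
                   reverse=True)
total = sum(resources)

top_1pct = sum(resources[:100]) / total
top_10pct = sum(resources[:1000]) / total
print(f"top 1% control {top_1pct:.0%}, top 10% control {top_10pct:.0%}")
```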

And I do argue that the power-law distributed majority has nothing to gain by destroying the value of the system, because its resources are not liquid and are too large to be offset by the available liquidity for shorting the value of the system, since equity liquidity is a minority fraction of the market capitalization. Thus it is also presumed that the rational power-law distributed majority would not even allow a rented 51% hashrate attack. Even a recycled attack seems irrational[Recycled].

However, the power-law distributed majority is not in control if there exist any attacks which require only a minority of resources, and especially attacks (even with a very low probability of success) which either have nothing-at-stake (e.g. proof-of-stake, if any such minority-resource attack exists) or occur in a system which doesn't consume (burn) a resource of greater value than the probabilistic value of the attack, such that the attack can be repeated at no cost (or no loss) until it succeeds. In that case, a rogue whale might deem it rational to attack and short the value of the system. Delegated Proof-of-Stake (DPoS) could potentially be rationally (perhaps even 51%) attacked by the exchanges (claiming a hacker did it), because they apparently control the private keys for voting yet don't have contractual ownership of, or a vested interest in, the (value of the) stake.

The rational power-law distributed majority might orphan minority blocks, such as to gain a monopoly on transaction fees or to blacklist some UTXOs, if this can't be objectively observed as a 51% attack causing fear of double-spends and protocol changes. Absent a total perspective (which, if it existed, would mean there was no Byzantine Generals Problem), there is no objectivity in Satoshi's design over whether orphaned blocks are due to a 51% attack². Thus Satoshi's design doesn't have a Nash equilibrium, because if minority-hashrate miners know there is a 51% attack, their optimum strategy changes to quitting mining. However, pools probably ameliorate this attack. Alternatively, a less conspicuous monopoly on transaction fees can be accomplished by the power-law distributed majority rejecting protocols which would otherwise allow transaction rate (supply) to match its demand, e.g. limiting the size of blocks of transactions in a blockchain.

So in addition to evaluating whether a consensus ordering algorithm has a Nash equilibrium, we also want to analyze the impacts of the natural and inviolable power-law distribution of control over the resources of the system. Moreover, instead of evaluating the design axes of consensus ordering systems only from the perspective of limits on the proportion of rationally self-interested malevolent participants for Byzantine fault tolerance, we should also treat the power-law distributed majority's control over the system resources as a potentially positive asset enabling some alternative designs, e.g. DPoS as an alternative to proof-of-work.


Proof-of-Work as Space Heaters Belies Economics of Specialization

Specialization enables economies-of-scale.

An example of an erroneous posited caveat[4], that proof-of-work mining resources would not become centralized into a power-law distribution, due to the posited high electrical cost of dissipating heat in centralized mining farms coupled with the posited free electricity of using the “waste” heat of ASIC mining equipment as space heaters, is (in hindsight) incorrect because:

  • Two-phase immersion cooling is 4000 times more efficient at removing heat from high-power density data centers[5], reducing the 30 - 50% electricity overhead to 1%[6].
  • Electricity proximate to hydroelectric generation, or subsidized electricity, costs approximately 50 - 75% less than the average electricity cost.
  • Heating is rarely needed year-round, 24 hours daily, at full output. Not running mining hardware at full output continuously makes its purchase-cost depreciation much less economic, because the systemic hashrate is always increasing and (because) ASIC efficiency is always increasing[7]. The posited purchase of obsolete mining equipment[8] is incorrect because `MR = MC`: increased demand for obsolete mining equipment raises its price, and weighted profit at the margins increases, thus increasing the mining difficulty, so that the savings due to waste heat are offset. Closer to home, making it profitable enough to be worthwhile (to justify the hassle of jerry-rigging a space heater from equipment not designed for the purpose) requires running tens or hundreds of kWh of relatively much less efficient (i.e. obsolete) hardware, generating more heat than can typically be utilized (unless infernos are in sufficient decentralized demand).
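
A back-of-envelope sketch of the space-heater point (every number below is an assumption for illustration, not data): even with electricity "free" as heat, seasonal operation plus continuously rising network hashrate shrinks the revenue an obsolete rig can ever recover.

```python
# All parameters are invented assumptions for illustration.
power_kw = 1.5                 # assumed rig draw
btc_per_kwh_today = 0.000002   # assumed revenue rate at today's difficulty
hashrate_growth = 1.05         # assumed 5% network hashrate growth per month
heating_months = 5             # months per year the heat is actually wanted

revenue = 0.0
for month in range(24):
    if month % 12 < heating_months:      # only run when the heat is useful
        kwh = power_kw * 24 * 30
        # revenue per kWh decays as the network hashrate grows
        revenue += kwh * btc_per_kwh_today / hashrate_growth**month

print(f"24-month revenue: {revenue:.5f} BTC")
```

Under these assumptions the rig earns only about half of what year-round operation in month-0 conditions would suggest, before even counting depreciation.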


Proof-of-Work on CPUs Belies Economics of Specialization

The posited caveat[4] that mining on general-use computers would be economically viable (as a refutation of the power-law distribution of resources) if ASICs are not more than (H + E) / E times more efficient (even factoring that E might be psychologically 0, because it is obscured in the monthly variability of the electric bill), falls away at least because of the transition to power-efficient (battery powered or fanless) devices which don't consume enough electricity to provide enough security for a longest-chain-rule blockchain, even if millions of said devices were mining[9]. Or more generally, because the portion of the general-use computer's cost which represents circuits applicable to proof-of-work computation is equivalently too small.
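
The cited caveat's break-even condition can be sketched as follows (my paraphrase of [4]: H is lifetime hardware cost, E is lifetime electricity cost, and the dollar figures are invented for illustration):

```python
# Commodity mining stays viable only while ASICs are less than
# (H + E) / E times more efficient (paraphrase of the cited caveat[4]).
def asic_advantage_ceiling(hardware_cost: float, electricity_cost: float) -> float:
    return (hardware_cost + electricity_cost) / electricity_cost

# Desktop with meaningful lifetime electricity draw: low ceiling.
print(asic_advantage_ceiling(hardware_cost=600.0, electricity_cost=300.0))  # 3.0

# Fanless/battery device: tiny E makes the ceiling huge on paper, but the
# absolute hashrate such devices contribute is then too small to secure
# a longest-chain-rule blockchain, which is the point made above.
print(asic_advantage_ceiling(hardware_cost=600.0, electricity_cost=5.0))    # 121.0
```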


[Moore2016] https://steemit.com/science/@anonymint/the-golden-knowledge-age-is-rising
[Recycled] https://bitcointalk.org/index.php?topic=1319681.msg16853429#msg16853429
[1] https://bitcointalk.org/index.php?topic=1319681.msg13800936#msg13800936
    https://bitcointalk.org/index.php?topic=1183043.msg13800901#msg13800901
    https://bitcointalk.org/index.php?topic=1319681.msg13778110#msg13778110
¹ https://arxiv.org/abs/1311.0243
  http://eprint.iacr.org/2015/796
  https://bitcointalk.org/index.php?topic=1361602.msg15823439#msg15823439
  https://blog.ethereum.org/2014/07/11/toward-a-12-second-block-time/
² https://bitcointalk.org/index.php?topic=1183043.msg13823607#msg13823607
[4] https://blog.ethereum.org/2014/06/19/mining/
[5] http://www.allied-control.com/immersion-cooling
[6] http://www.allied-control.com/publications/Analysis_of_Large-Scale_Bitcoin_Mining_Operations.pdf#page=9
[7] https://www.reddit.com/r/Bitcoin/comments/335107/i_am_thinking_of_using_a_bitcoin_miner_to_heat_my/
[8] https://bitcointalk.org/index.php?topic=918758.msg10109255#msg10109255
    https://bitcointalk.org/index.php?topic=1527954.msg16816538#msg16816538
[9] https://bitcointalk.org/index.php?topic=1361602.msg15553037#msg15553037
³ http://esr.ibiblio.org/?p=984
⁴ https://bitcointalk.org/index.php?topic=1171109.msg12376416#msg12376416
⁵ https://bitcointalk.org/index.php?topic=1671480.0
[13] https://eprint.iacr.org/2013/881.pdf
     http://ethereum.stackexchange.com/questions/314/what-is-ghost-and-what-is-its-relationship-to-frontier-and-casper
     https://blog.ethereum.org/2014/07/11/toward-a-12-second-block-time/
⁶ https://arxiv.org/abs/1402.2009
⁷ http://hackingdistributed.com/2014/12/17/changetip-must-die/
⁸ https://bitcointalk.org/index.php?topic=1319681.msg16805440#msg16805440
⁹ https://github.com/shelby3/hashsig/blob/master/DDoS%20Defense%20Employing%20Public%20Key%20Cryptography.md


Title: Re: Blockchain-Free Cryptocurrencies: Framework for Truly Decentralised Fast Txns
Post by: alkan on November 10, 2016, 09:46:02 PM
You may want to take a look at the Swirlds Hashgraph consensus algorithm, which relies on neither blockchains nor PoW/PoS.

It describes itself as fair, fast, provable, Byzantine, ACID compliant, efficient, inexpensive, timestamped, DoS resistant, and optionally non-permissioned.

For more information, see the white paper http://www.swirlds.com/downloads/SWIRLDS-TR-2016-01.pdf and my post https://bitcointalk.org/index.php?topic=1400715.0;prev_next=next.






Title: Re: Blockchain-Free Cryptocurrencies: Framework for Truly Decentralised Fast Txns
Post by: iamnotback on November 11, 2016, 07:24:10 AM
You may want to take a look at the Swirlds Hashgraph consensus algorithm, which relies on neither blockchains nor PoW/PoS.

It describes itself as fair, fast, provable, Byzantine, ACID compliant, efficient, inexpensive, timestamped, DoS resistant, and optionally non-permissioned.

For more information, see the white paper http://www.swirlds.com/downloads/SWIRLDS-TR-2016-01.pdf and my post https://bitcointalk.org/index.php?topic=1400715.0;prev_next=next.

In Core Concepts of section 2 on page 4, we see the key design facet for obtaining consensus on a total order (selecting from the many partial-order DAG branches) is the concept of "famous witnesses", which is analogous to the "witnesses" in section 6 on page 9 of the Byteball white paper. The difference is that Byteball restricts the number of these witnesses to 12 and only allows disagreement over 1 witness during each consensus round (which I thus argued would become controlled by the power-law distribution; the salient issue is that the witnesses would be quite static and unresponsive to free-market needs, because the power-law distribution isn't real-time omniscient).

Swirlds appears to share some attributes with Stellar's SCP, in that I presume a Sybil attack can indefinitely stall consensus, as afaics there doesn't appear to be any resource constraint on nodes which would keep the power-law distribution in control. Byteball burns transaction fees, but these are hardcoded and not set by free-market competition.

Essentially my analysis is that Byteball is headed in the correct general design direction but there are some pitfalls in their design decisions. For example, in addition to what I already stated, I also foresee scaling issues in the design choices.


Title: Re: Blockchain-Free Cryptocurrencies: Framework for Truly Decentralised Fast Txns
Post by: iamnotback on November 12, 2016, 09:57:18 PM
Byteball pays transaction fees to the witnesses (and perhaps the payer portion is effectively burned as it is passed along?) instead of employing proof-of-work (though I am not yet clear whether this is used as the metric of chain length in the consensus algorithm in any way). These fees, per section “1. Introduction: Exchange rate” on page 3, are tied to the system-wide exchange value of adding bytes to the database. Byteball has the incorrect monetary theory, because the confidence in, and thus the value of, money is greater the higher the seigniorage (https://bitcointalk.org/index.php?topic=1665943.msg16749910#msg16749910).

Quote from: Byteball whitepaper
3. Native currency: bytes

Next, we need to introduce some friction to protect against spamming the database with useless messages. The barrier to entry should roughly reflect the utility of storage for the user and the cost of storage for the network. The simplest measure for both of these is the size of the storage unit.

I vehemently disagree. I think the barrier should reflect not just storage costs but all costs, including validation, etc.


Title: Re: Blockchain-Free Cryptocurrencies: Framework for Truly Decentralised Fast Txns
Post by: iamnotback on November 13, 2016, 08:39:21 AM
One more follow up on Byteball, the design appears to be broken in numerous ways:

https://bitcointalk.org/index.php?topic=1608859.msg16860979#msg16860979

https://bitcointalk.org/index.php?topic=1608859.msg16860875#msg16860875

This isn't supposed to be an altcoin discussion forum. I was originally analyzing blockless chain designs and someone claimed Byteball as prior art. I will not go further on this tangent here. Readers can click the links above and follow the discussion there if they want.


Title: Re: Blockchain-Free Cryptocurrencies: Framework for Truly Decentralised Fast Txns
Post by: Zcrypt_ZXT on November 14, 2016, 12:13:25 PM
One of the most interesting threads I've read in a while. Thanks for posting about this; will give it a deep read asap.


Title: Re: Blockchain-Free Cryptocurrencies: Framework for Truly Decentralised Fast Txns
Post by: Fuserleer on November 17, 2016, 03:45:24 AM
Thought I'd chime in here as I've spent a number of years now investigating possible solutions to allow a distributed ledger to process a high tps throughput (VISA+ scale), yet remain trust-less and decentralized (no super nodes, witnesses, or any of the other myriad of semi-centralization tricks to allow scale).  I'm not going to delve too much into the technical with this post, just share some of the ideas and philosophies that I had and where I ultimately settled.  Perhaps it can give others some ideas, inspirations, etc

First though a quick recap....

Way back when (late 2012), the question I wanted an answer to was: at what TPS does a pedigree Satoshi block chain secured with POW start to become problematic?

I performed a number of tests, which ultimately converged on a figure of 150-300 TPS, depending on the topology of the network and the average performance of nodes within the network graph. Past that point, orphan thrashing began, rapidly deteriorating the performance of the network in general and the efficiency of POW mining (using "efficient" and "POW" in the same sentence seems a real oxymoron now!). These days, with higher average node specs and internet connections, I'd wager ~500 TPS would be possible before any headaches (a block size of about 300MB, if anyone is wondering).
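
For what it's worth, the ~300MB figure reproduces under Bitcoin-style 10-minute blocks and an assumed ~1000-byte average transaction (both are my assumptions, not stated in the post):

```python
# Back-of-envelope check of the quoted block size.
tps = 500                 # the poster's throughput estimate
block_interval_s = 600    # assumed Bitcoin-style 10-minute blocks
avg_tx_bytes = 1000       # assumed average transaction size

block_bytes = tps * block_interval_s * avg_tx_bytes
print(block_bytes / 1_000_000, "MB")   # 300.0 MB
```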

After that (2013) I started to experiment with different ledger architectures, the first of which was what I called a Block Tree (it was really more akin to a DAG). Without getting into too much detail, the premise was that at times of high load the "tree" could widen, and portions of the network could be in varying states of total consensus (parts of the tree missing, for example) while still ensuring a correct consensus for the parts of the tree they had. With a large enough portion of the correct tree, nodes could estimate the chances of being out of consensus before the fact; when load then decreased, the tree would narrow again and lagging nodes would eventually catch up.

There was some improvement (especially with regard to load spikes), but ultimately the same issues as a block chain surfaced at a higher load, and with extreme continuous load the whole thing fell on its ass.

I then went "full DAG" and dropped the blocks, which again resulted in further improvement, but traditional consensus algorithms (POW, POS, etc.) again imposed ultimate upper limits and introduced various new problems, such as the lack of the true global state that a block chain based approach provides. A DAG also couldn't support a large number of other features that were determined as "must have" for a real mass market targeted product.

That was end of 2014 and I went back to the drawing board completely and developed a ledger architecture called CAST (Channeled Asynchronous State Tree) and a consensus mechanism called EVEI (Evolving Voters via Endorseable Interactions).  Together they allow scaling to VERY high throughput and meet all the necessary requirements.

The eureka moment was upon the realization that it is possible to split the data from the state, yet ensure that the data determines the state.  This yields a number of very important properties when considering scalability:

1.  The states are small (2000 tps consumes around 50kb per second)
2.  The states have multiple points of origin
3.  The states can be split into sub-states that reference a sub-set of the total transactions

First let's look at blocks and block chains with regard to the above points:

In a block chain the block is the state AND the data. This is required due to how consensus operates with mining: specifically, the miner of the next block may have transactions that others do not know about, so the state data has to be packaged as the state itself (this is true no matter the algorithm: POW, POS, DPOS, etc.).

This in turn means there is only a single point of origin for the next valid block, so it has to propagate over the network. This leads to the inevitable latency and CAP considerations: if the block is too large and takes too long to fully propagate, orphan thrashing begins to occur, reducing overall performance and efficiency. Another side effect is that ALL transactions are broadcast twice, once when the transaction is created and later within the block itself, further adding to network and bandwidth overheads.

Finally, a block obviously cannot be split into sub-blocks once it has been mined to mitigate any of the above.

Going back to CAST and EVEI: in a gossip-driven P2P network it can be assumed that the majority of nodes will always know about the majority of transactions; therefore the majority of nodes will output the same state independently, without any specific state communication with each other. This covers points #1 and #2: the states can be small because the requirement to embed the data in the state becomes redundant, and there are multiple points of origin for the state, grossly reducing propagation time (the majority of nodes already have the state, so in a healthy network propagation is practically zero).
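
A minimal sketch of the "data determines the state" idea (this is my illustration, not the actual CAST/EVEI algorithm): if state is a deterministic function of the transaction set, nodes holding the same gossiped transactions derive identical states independently, so no block needs to propagate.

```python
import hashlib

def derive_state(txs: set[bytes]) -> bytes:
    """Deterministic compact state over a transaction set."""
    h = hashlib.sha256()
    for tx in sorted(txs):   # canonical order removes arrival-order effects
        h.update(hashlib.sha256(tx).digest())
    return h.digest()

# Hypothetical transactions, gossiped to two nodes in different orders.
node_a = derive_state({b"alice->bob:5", b"bob->carol:2", b"carol->dave:1"})
node_b = derive_state({b"carol->dave:1", b"alice->bob:5", b"bob->carol:2"})

assert node_a == node_b      # same data -> same state, no block exchanged
print(node_a.hex()[:16])
```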

This greatly increases the performance of the network and its efficiency. I've witnessed continuous loads of > 500 tps over long periods of time, and short-term spikes of > 2,500 tps, in both small and large networks consisting of hardware ranging from Raspberry Pis to enterprise servers, with no issues.

Furthermore, having a global state of the ledger with consensus mitigates a lot of the problems associated with a DAG and its progressive state mechanics.

Some might argue that CAST + EVEI is then a block chain, and yes, there are some similarities and overlap, but the principles and operational functionality underpinning it are radically different, thus I consider it in a different camp. Either way, call it what you will :)

Moving on: 500-2,500+ tps is pretty good, especially when hardware such as a Pi is able to keep pace most of the time with minimal issues, but it's not enough. VISA alone on Black Friday reportedly processes peaks of 40,000 tps, and even discarding Black Friday, adding MasterCard, Amex, Paypal, and all the banking payments into the mix, it quickly becomes obvious that a couple thousand tps is not enough for a global payments system. Throw IoT into the bag too and the requirements roll into the 100,000+ very quickly. Which is where #3 comes into play.

Block chains are generally unstructured, with each block containing a soup of transactions from various addresses. CAST, on the other hand, is very structured: addresses own one or more channels, and each transaction has at least 2 components... a spend and a claim. The spend lives in the spender's channel and the claim lives in the receiver's. With this structuring it is very easy to chop the ledger up into more manageable partitions.

This then leads to a conclusion: with a structured ledger, and compact states that are determined by the data itself, it should be possible for the global ledger state to also be split into sub-states according to each data partition. WIN!
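
The spend/claim channel structuring and its partitioning can be sketched like this (the names and the hash-based channel-to-partition mapping are my assumptions, not Fuserleer's specification):

```python
import hashlib
from collections import defaultdict

N_PARTITIONS = 1000   # the post's quoted partitioning sweet spot

def partition_of(address: str) -> int:
    """Map a channel (address) to a partition; hash choice is assumed."""
    digest = hashlib.sha256(address.encode()).digest()
    return int.from_bytes(digest[:4], "big") % N_PARTITIONS

# Each address owns a channel; a transaction becomes a spend entry in the
# sender's channel and a claim entry in the receiver's.
channels: dict[str, list[tuple[str, int]]] = defaultdict(list)

def apply_tx(sender: str, receiver: str, amount: int) -> None:
    channels[sender].append(("spend", amount))
    channels[receiver].append(("claim", amount))

apply_tx("alice", "bob", 5)
apply_tx("bob", "carol", 3)

for addr, entries in channels.items():
    print(addr, "-> partition", partition_of(addr), entries)
```

Because each channel maps deterministically to one partition, a node can serve only the partitions it chooses while the global state remains a culmination of partition-level outcomes.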

Nodes can configure according to their performance and support n partitions rather than having to upgrade or even go offline to stay in the game as load increases over time.  

EVEI consensus operates at a partition level, and the global state is simply a culmination of all partition level state consensus outcomes.  This functions reliably due to the fact that most nodes will operate more than a single partition and the variance of node partition configurations in the network will lead to an amount of overlap.  This overlap provides an auditable causality of the global state from current and past partition states.

Partitioning the data does bring with it some overhead, and presently the sweet spot seems to be about 1000 partitions before the curve exponent gets too large.  This can probably be improved, but even if not, 1000 partitions each with the ability to process ~500 tps should be more than enough scale for now!

Some might be thinking, "hmm, that partitioning thing sounds awfully similar to Ethereum's sharding", and it does, because it is. However, Ethereum's partitioning/sharding implementation is inferior on 3 points:

1.  It uses a block chain (or several) and is more akin to a set of side chains, which means there can't be a true consensus on global state
2.  It is difficult and inefficient for shards to communicate due to the architecture of its smart contract VM and ambiguous state data
3.  It's at least 2 years out, EVEI and CAST are not :)

Conclusion and TL;DR:  To scale, remove the block chain, replace with a structured ledger and states that are decoupled from data, use consensus that embraces determinism...then chop the ledger into smaller chunks :)



Title: Re: Blockchain-Free Cryptocurrencies: Framework for Truly Decentralised Fast Txns
Post by: Jabbawa on November 18, 2016, 10:18:02 AM

Great post! Very interesting.

What are your thoughts on close group consensus and datachains, aka the MaidSafe solution?

I understand that this has all been theoretical and hard to investigate for the last couple of years, but as of last month things have become much clearer with progress made and dev tutorials etc.

IF (and I understand it is a fairly big 'if') they pull it off, SAFEcoin should scale positively, have instant/zero confirmation times, no mining or centralisation risks (proof of resource), and no fees, and it will be completely private/anonymous like real digital cash, not to mention backed by real computing resources, so more tangible in value than even gold.

Sounds like I'm shilling, I know, but really I just want to know how close a look you have taken at what they are doing in the last few months. Testsafecoin is due for release in January. I don't doubt it will be delayed further, because everything always is, but do you not think that datachains hold the most promise?

https://blog.maidsafe.net/2015/01/29/consensus-without-a-blockchain/

I'm not saying that anyone should be 100% convinced they can pull it off even after 11 years on the job, but IF they do...?


Title: Re: Blockchain-Free Cryptocurrencies: Framework for Truly Decentralised Fast Txns
Post by: BiTrading on November 18, 2016, 11:56:16 AM
Fuserleer, you should check out IOTA (iotatoken.com). It would be interesting to hear your opinion about it.


Title: Re: Blockchain-Free Cryptocurrencies: Framework for Truly Decentralised Fast Txns
Post by: TransaDox on November 18, 2016, 02:48:32 PM
Great post! Very interesting.

What are your thoughts on close group consensus and datachains, aka the MaidSafe solution?


XOR, huh? That's the torrent (DHT) distance function too, and interesting features arise......

The distance function is unrelated to the Merkle tree or DAG. So if a node with a random ID (as defined by the DHT spec) is required to cache data in its routing table, and the decision of which data it should cache is likewise defined by the distance between its node ID and the data's hash, then the nodes closest to any other randomly generated ID will effectively hold a pseudo-random sampling of the block chain/CAST (or whatever ledger technology is used).

This means that random samples of the ledger (or ledger state) can be stored throughout the network and assembled just-in-time as needed. Some blocks are cached locally by each node as a function of their Merkle/DAG distance from its node ID, and these cached blocks act as random checkpoints from which the chain or tree can be reconstituted. As a node fills in the data between the checkpoint hashes for its own benefit, it sources from multiple, disparate nodes; defeating this would require a Sybil attack that places node IDs near each (random) checkpoint in the hope they get chosen over other "close" nodes. Once all data between two checkpoints has been filled in, confidence that the correct data has been received is extremely high, to the point where data connecting just one or two checkpoints would allow safe transactions to begin (a faster bootstrap) while the node continues filling in the others until the entire chain/tree has been verified and the cached blocks become verified checkpoints. Subsequent bootstraps can start from the last verified checkpoint to the head, and checkpoints can be churned periodically over time.

Assuming a significant number of nodes (a reasonable assumption, due to the necessity of mining and the removal of large storage requirements on a single device), the resistance to a Sybil attack is extremely high and attempts are detectable.
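A rough sketch of that sampling property (toy Python; the node counts, key sizes, and names are all hypothetical illustrations, not anything from the Maidsafe or DHT specs):

```python
import hashlib
import os

def xor_distance(a: bytes, b: bytes) -> int:
    """Kademlia-style XOR distance between two equal-length IDs."""
    return int.from_bytes(a, "big") ^ int.from_bytes(b, "big")

# Random 160-bit node IDs, per the usual DHT convention.
nodes = [os.urandom(20) for _ in range(50)]

# Content-addressed ledger chunks (hypothetical payloads).
chunks = [("block-%d" % i).encode() for i in range(200)]

assignment = {}
for chunk in chunks:
    chunk_key = hashlib.sha1(chunk).digest()
    # The node whose ID is XOR-closest to the chunk's hash caches it.
    owner = min(nodes, key=lambda n: xor_distance(n, chunk_key))
    assignment.setdefault(owner, []).append(chunk)

# Since node IDs and content hashes are both (pseudo)uniformly distributed,
# each node ends up holding a pseudo-random sample of the ledger.
sample_sizes = [len(v) for v in assignment.values()]
```

Because both the node IDs and the content hashes are uniform, an attacker cannot choose which chunks land on his nodes without grinding IDs near each checkpoint, which is the Sybil cost referred to above.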

This may not seem relevant to your post, but the "Close Groups" detailed in the Maidsafe design only need to add the state data (CAST) or block headers/data (bitcoin) to their routing tables, and they thereby cache the data across the distributed network.


Title: Re: Blockchain-Free Cryptocurrencies: Framework for Truly Decentralised Fast Txns
Post by: Jabbawa on November 18, 2016, 02:59:59 PM
Great post! Very interesting.

What are your thoughts on close group consensus and datachains aka the maidsafe solution?

This may not seem relevant to your post, but the "Close Groups" detailed in the Maidsafe design only need to add the state data (CAST) or block headers/data (bitcoin) to their routing tables, and they thereby cache the data across the distributed network.

Thanks for the response. I'm not a developer myself, so I found it challenging to follow and had to read it a few times to try to take it all in.  :-\ Can you just follow up on the implications of the last sentence, please? Just to make sure I've understood correctly, it would be helpful to have it spelled out for me. :)


Title: Re: Blockchain-Free Cryptocurrencies: Framework for Truly Decentralised Fast Txns
Post by: iamnotback on November 21, 2016, 06:13:11 AM
EVEI consensus operates at a partition level, and the global state is simply a culmination of all partition-level state consensus outcomes. This functions reliably because most nodes will operate more than a single partition, and the variance of node partition configurations in the network will lead to an amount of overlap. This overlap provides an auditable causality of the global state from current and past partition states.

How do you determine finality of consensus for cross-partition transactions in order to prevent a double-spend?


Title: Re: Blockchain-Free Cryptocurrencies: Framework for Truly Decentralised Fast Txns
Post by: iamnotback on November 21, 2016, 07:51:07 AM
What are your thoughts on close group consensus and datachains aka the maidsafe solution?

I understand that this has all been theoretical and hard to investigate for the last couple of years, but as of last month things have become much clearer with progress made and dev tutorials etc.

IF (and I understand it is a fairly big 'if') they pull it off, SAFEcoin should scale positively, offer instant/zero confirmation times, have no mining or centralisation risks (proof of resource), have no fees, and be completely private/anonymous like real digital cash, not to mention backed by real computing resources, so more tangible in value than even gold.

Sounds like I'm shilling I know, but really I just want to know how close a look you have taken at what they are doing in the last few months? Testsafecoin is due for release in January. I don't doubt it will be delayed further because everything always is, but do you not think that datachains hold the most promise?

https://blog.maidsafe.net/2015/01/29/consensus-without-a-blockchain/

I'm not saying that anyone should be 100% convinced they can pull it off even after 11 years on the job, but IF they do...?

MaidSafe has always been a steaming pile of BS.

I was the first one on this forum to propose proof-of-storage (or proof-of-retrievability), back in 2013, and I quickly dismissed it because there is no way to prove that the nodes aren't Sybil attacked and thus that the data is really stored redundantly. I have since refuted the various white papers that have come along, including one from Sia's developer and, I believe, a paper from IOHK.

For illustrative purposes, when Alice pays a coin to Bob via the client, she submits a payment request. The Transaction Managers check that Alice is the current owner of the coin by retrieving her public key and confirming that it has been signed by the correct and corresponding private key. The Transaction Managers will only accept a signed message from the existing owner. This proves beyond doubt that Alice is the owner of the coin and the ownership of that specific coin is then transferred to Bob and now only Bob is able to transfer that coin to another user. This sending of data (coins) between users is contrary to the Bitcoin protocol where bitcoins aren’t actually sent to another user, rather bitcoin clients send signed transaction data to the blockchain.

The lack of blockchain means that it is not possible to scrutinise all the transactions that have ever taken place or follow the journey of a specific coin.

Lol. Don't you understand that this means you have to trust all the Transaction Managers, and that without a blockchain you can't prove a damn thing?

That is utter nonsense.
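For what it's worth, the check the quoted passage describes can be sketched in a few lines, which also makes the trust problem concrete: the entire verification lives inside the Transaction Managers. (Toy Python; HMAC stands in for a real asymmetric signature scheme such as Ed25519, and every name here is hypothetical, not MaidSafe's actual API.)

```python
import hashlib
import hmac
import os

class Owner:
    """Toy key holder. HMAC plays the role of a digital signature here;
    a real system would use an asymmetric scheme such as Ed25519."""
    def __init__(self):
        self._secret = os.urandom(32)   # stand-in for a private key

    def sign(self, message: bytes) -> bytes:
        return hmac.new(self._secret, message, hashlib.sha256).digest()

class TransactionManager:
    """Tracks the current owner of each coin and accepts a transfer only
    when the request is signed by that current owner."""
    def __init__(self):
        self.owner_of = {}  # coin_id -> Owner

    def mint(self, coin_id: str, owner: Owner) -> None:
        self.owner_of[coin_id] = owner

    def transfer(self, coin_id, message, signature, claimed, new_owner):
        current = self.owner_of[coin_id]
        if current is not claimed:
            return False  # claimant is not the recorded owner
        # With HMAC the manager re-derives the tag via the owner object;
        # with real signatures it would verify against a public key.
        if not hmac.compare_digest(signature, current.sign(message)):
            return False
        self.owner_of[coin_id] = new_owner
        return True

alice, bob, mallory = Owner(), Owner(), Owner()
tm = TransactionManager()
tm.mint("coin-1", alice)

msg = b"transfer coin-1 to bob"
ok = tm.transfer("coin-1", msg, alice.sign(msg), alice, bob)              # accepted
stolen = tm.transfer("coin-1", msg, mallory.sign(msg), mallory, mallory)  # rejected
```

Note that nothing outside `tm` can audit this ownership history, which is exactly the objection raised above: without a chain, you simply have to trust the managers not to rewrite `owner_of`.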



Title: Re: Blockchain-Free Cryptocurrencies: Framework for Truly Decentralised Fast Txns
Post by: TransaDox on November 21, 2016, 12:46:53 PM
Thanks for the response. I'm not a developer myself so found it challenging to follow and I had to read it a few times to try to take it all in.  :-\ Can you just follow-up with the implications of the last sentence please? Just to make sure I've understood correctly, it would be helpful to have it spelled out for me. :)

  • The full-node vs. non-full-node distinction becomes moot.
  • Lower Footprint - Disk usage of <1GB regardless of block-chain size.
  • Faster Synch - A couple of hours of synchronizing to the network, from cold, before being able to make a transaction, rather than days as it currently stands.
  • Low risk - Works safely alongside the current distribution methods (both full and SPV) and can be used as an accelerator (or not  ::) ) during early adoption when there are few capable nodes.


Title: Re: Blockchain-Free Cryptocurrencies: Framework for Truly Decentralised Fast Txns
Post by: Jabbawa on November 21, 2016, 06:15:43 PM
Thanks for the response. I'm not a developer myself so found it challenging to follow and I had to read it a few times to try to take it all in.  :-\ Can you just follow-up with the implications of the last sentence please? Just to make sure I've understood correctly, it would be helpful to have it spelled out for me. :)

  • The full-node vs. non-full-node distinction becomes moot.
  • Lower Footprint - Disk usage of <1GB regardless of block-chain size.
  • Faster Synch - A couple of hours of synchronizing to the network, from cold, before being able to make a transaction, rather than days as it currently stands.
  • Low risk - Works safely alongside the current distribution methods (both full and SPV) and can be used as an accelerator (or not  ::) ) during early adoption when there are few capable nodes.

TY :)


Title: Re: Blockchain-Free Cryptocurrencies: Framework for Truly Decentralised Fast Txns
Post by: iamnotback on November 21, 2016, 08:30:24 PM
  • Full or not full node distinction becomes moot.

Don't lie. The trust failures have not been incorporated into your lack of analysis of the game theory.


Title: Re: Blockchain-Free Cryptocurrencies: Framework for Truly Decentralised Fast Txns
Post by: TransaDox on November 21, 2016, 09:40:57 PM
Don't lie. The trust failures have not been incorporated into your lack of analysis of the game theory.

Then you need to think harder about how one fills in the blocks between two checkpoints and what happens if a number of nodes feed you incorrect blocks.

Hint: Malicious nodes feeding blocks are no different than orphan blocks.


Title: Re: Blockchain-Free Cryptocurrencies: Framework for Truly Decentralised Fast Txns
Post by: iamnotback on November 22, 2016, 04:27:02 AM
Don't lie. The trust failures have not been incorporated into your lack of analysis of the game theory.

Then you need to think harder about how one fills in the blocks between two checkpoints and what happens if a number of nodes feed you incorrect blocks.

Hint: Malicious nodes feeding blocks are no different than orphan blocks.

Write a white paper; otherwise you are just spewing incomprehensible babble. Invariably, those who can't write it down in a whitepaper are spewing incorrect babble.

Nodes can be Sybil attacked. Propagation ordering is neither proof nor consensus. Write a whitepaper that explains the Byzantine fault tolerance in your design.


Title: Re: Blockchain-Free Cryptocurrencies: Framework for Truly Decentralised Fast Txns
Post by: TransaDox on November 22, 2016, 11:01:02 AM
Write a white paper; otherwise you are just spewing incomprehensible babble. Invariably, those who can't write it down in a whitepaper are spewing incorrect babble.

No. I have a family to feed, so the software I work on is carefully chosen, and purely academic papers to prove to game theorists that something has merit are not even on the radar. Add to that the vehement resistance to anything that changes the status quo away from centralisation, and there is little incentive for me to do anything like that.

Bitcoin is heading towards being a credit card back end and I pretty much agree and feel the same as TPTB_need_war (https://bitcointalk.org/index.php?topic=1319681.msg13488328#msg13488328) from one of the links in your previous posts.

Quote
We are not producing any fundamental breakthrough on the problem of decentralized electronic money. I do not like to work on things that I feel are misdirected and destined for failure in the end. I don't want to get rich by fooling other people (or fooling myself).

I'll throw a few ideas and software techniques in for others to run with, but until I see centralisation even being talked about as an issue, I have to spend my time on software that feeds my family, as I'm not independently wealthy nor part of the paid bitcoin industry. What does game theory have to say about altruism?

Nodes can be Sybil attacked. Propagation ordering is neither proof nor consensus. Write a whitepaper that explains the Byzantine fault tolerance in your design.

Then you still haven't understood. It is the most unordered propagation you can get, and not only from 8 connections but from hundreds. The block chain is still used as the proof; that doesn't change. I am merely talking about a delivery and storage mechanism for the block chain which can provide additional assurances to accelerate distribution whilst still supplying the network with full-node capabilities (without every single block on every disk).

However. Thanks for at least asking about it even if you can't be bothered to think it through. It gives me a little more hope that there are people in the community that are still thinking about technical improvements rather than get-rich-quick protocol schemes to directly monetise the blockchain.


Title: Re: Blockchain-Free Cryptocurrencies: Framework for Truly Decentralised Fast Txns
Post by: iamnotback on November 22, 2016, 06:29:22 PM
Then you still haven't understood. It is the most unordered propagation you can get, and not only from 8 connections but from hundreds. The block chain is still used as the proof; that doesn't change. I am merely talking about a delivery and storage mechanism for the block chain which can provide additional assurances to accelerate distribution whilst still supplying the network with full-node capabilities (without every single block on every disk).

Then you are not talking about a decentralized consensus on the finality of transactions, i.e. one that can't result in a double-spend.

Dude, I have a very deep understanding of the possibilities for Byzantine fault tolerance with respect to the CAP theorem: consistency, partition tolerance, and availability (liveness) in consensus ordering systems. Putting unordered items into a distributed database has nothing to do with it.


Title: Re: Blockchain-Free Cryptocurrencies: Framework for Truly Decentralised Fast Txns
Post by: TransaDox on November 22, 2016, 07:26:20 PM
Then you are not talking about a decentralized consensus on the finality of transactions, i.e. one that can't result in a double-spend.

Dude, I have a very deep understanding of the possibilities for Byzantine fault tolerance with respect to the CAP theorem: consistency, partition tolerance, and availability (liveness) in consensus ordering systems. Putting unordered items into a distributed database has nothing to do with it.

Indeed. However I was responding to Jabbawa about the XOR distance function that Maidsafe uses and the interesting features that arise when applied to bitcoin as a DHT.

You then called me a liar "spewing incomprehensible babble".


Title: Re: Blockchain-Free Cryptocurrencies: Framework for Truly Decentralised Fast Txns
Post by: Jabbawa on November 22, 2016, 11:02:40 PM
Then you are not talking about a decentralized consensus on the finality of transactions, i.e. one that can't result in a double-spend.

Dude, I have a very deep understanding of the possibilities for Byzantine fault tolerance with respect to the CAP theorem: consistency, partition tolerance, and availability (liveness) in consensus ordering systems. Putting unordered items into a distributed database has nothing to do with it.

Indeed. However I was responding to Jabbawa about the XOR distance function that Maidsafe uses and the interesting features that arise when applied to bitcoin as a DHT.

You then called me a liar "spewing incomprehensible babble".

I appreciated the responses to my question TransaDox, I've learned a fair bit. The conversation is out of my pay grade, so I've had to do a lot of side-reading to make sense of it.

I'm not sure why you got attacked for it. Who knows if Maidsafe will pull it off? It certainly feels like an esoteric challenge to understand it all, and far too complicated to be dismissed out of hand. It's several new layers of internet protocol, after all; it doesn't play by the same rules, and the deeper I look the more fascinated I become.







Title: Re: Blockchain-Free Cryptocurrencies: Framework for Truly Decentralised Fast Txns
Post by: TransaDox on November 23, 2016, 11:20:36 AM
I appreciated the responses to my question TransaDox, I've learned a fair bit. The conversation is out of my pay grade, so I've had to do a lot of side-reading to make sense of it.

I'm not sure why you got attacked for it. Who knows if maidsafe will pull it off? It certainly feels like an esoteric challenge to understand it all. And far too complicated to be dismissed out of hand. It's several new layers of internet protocol after all, it doesn't play by the same rules and the deeper I look the more fascinated I become.

Domain experts tend to be very focused on their sphere of expertise to the exclusion of all else. Therefore forum-style conversations with sporadic wanderings (akin to sidechains, in bitcoin parlance) tend to get conflated and perceived as noise to their message. Some are more tolerant than others of these interruptions and partake in the side conversations to entertain and explain to non-domain experts. At the other extreme, others lambaste with "you don't know what you are talking about" and deem it beneath them to explain, entertain or teach. This thread is somewhere in between.

I wouldn't get too bogged down in the Maidsafe implementation. It is a monetisation of the blockchain based on the work of IPFS (https://ipfs.io/ipfs/QmR7GSQM93Cx5eAg6a6yRzNde1FQv7uL6X1o4k7zrJa3LX/ipfs.draft3.pdf). The well-written IPFS document linked there gives a better overview of the technology. Maidsafe has used an alt-coin as a method of account for the BitSwap protocol (Section 3.4). I also suspect Kim Dotcom's new platform will be something similar to BitSwap (if not exactly that).


Title: Re: Blockchain-Free Cryptocurrencies: Framework for Truly Decentralised Fast Txns
Post by: Jabbawa on November 23, 2016, 09:40:08 PM
Many thanks, I'm making my way through that IPFS doc now and enjoying it.





Title: Re: Blockchain-Free Cryptocurrencies: Framework for Truly Decentralised Fast Txns
Post by: xizmax on December 01, 2016, 11:31:11 AM
Many thanks, I'm making my way through that IPFS doc now and enjoying it.

Couldn't agree more.
TransaDox' characterization of forum parlance is also spot on :)


Title: Re: Blockchain-Free Cryptocurrencies: Framework for Truly Decentralised Fast Txns
Post by: iamnotback on December 12, 2016, 07:24:25 AM
TransaDox' characterization of forum parlance is also spot on :)

In that respect his posts have the same flaw. He should post a well-developed, peer-reviewed whitepaper instead of referring to nonsense from MaidSafe as somehow being coherent.

Write a white paper; otherwise you are just spewing incomprehensible babble. Invariably, those who can't write it down in a whitepaper are spewing incorrect babble.

Nodes can be Sybil attacked. Propagation ordering is neither proof nor consensus. Write a whitepaper that explains the Byzantine fault tolerance in your design.


Title: Re: Blockchain-Free Cryptocurrencies: Framework for Truly Decentralised Fast Txns
Post by: iamnotback on December 22, 2016, 06:57:54 AM
My off-the-top-of-my-head quick list of issues with SPECTRE:

https://medium.com/@shelby_78386/quoting-from-the-whitepaper-29e9fbc0ebec#.f4n0rdaho

Please check my logic?



There are many cases where you might see conflicting transactions in the network that were broadcast legitimately by honest users.

An obvious one that springs to mind is a company that has a number of nodes across the planet processing payments in some form. If one (or more) of those nodes is subject to some lag, it might create and broadcast a payment that already exists (via some smart-contract logic, perhaps), with the producing node unaware that it is a duplicate due to the lag (or any other reason).

That's not being dishonest and is a legitimate case that can, and will happen.

Good point. Their requirement is essentially one of external synchronization, but asynchrony is the norm on networks; synchrony is generally impossible.

I added the following edit to my comment at Medium:

Quote
Edit: Fuserleer (eMunie developer) has pointed out that this requires external synchronization, which is generally impossible on networks, e.g. where a company has multiple nodes across the network which issue transactions asynchronously, thus employing the blockchain as the synchronization mechanism. If the company tried to employ their own blockchain for synchronization and then forwarded the transactions to your blockchain, it would require one node to do the forwarding, which is not resilient. In general, asynchrony can't be avoided.