Author Topic: Blockchain-Free Cryptocurrencies: Framework for Truly Decentralised Fast Txns  (Read 7065 times)
iamnotback (OP)
Sr. Member
Activity: 336
Merit: 265
November 05, 2016, 11:34:02 AM
Last edit: November 09, 2016, 05:00:02 PM by iamnotback
#1

https://eprint.iacr.org/2016/871

https://iohk.io/docs/research/A%20Blockchain-free%20Approach%20for%20a%20Cryptocurrency%20-%20Input%20Output%20HongKong.pdf

I am very sleepy and haven't read the paper entirely, just scanned it. So I will likely make some errors in any analysis I do in this groggy state of mind.

I want to rattle off a potential list of flaws that come to mind immediately.

1. It is not plausibly scalable for every payer to receive notice of, nor validate/record the graph metrics for, every transaction in the network. Payers must rely on some supernodes, which then become fulcrums for selfish game-theory strategies that can likely break the collaborative Nash equilibrium assumption. For example, a supernode could lie about a double-spend, causing massive orphaning once discovered, possibly profiting by speculatively shorting the value of the token. Supernodes could collude to do such malfeasance, even a 51% attack. So the claim that the pressure toward centralization has been entirely mitigated seems debatable. The paper does mention pruning (from computations) the ancestors once their fees have been consumed, but afaics this doesn't mitigate the need for verifiers to receive a broadcast of every transaction (or a large fraction of all transactions).

2. There is no total order in the described system, thus any partial-order DAG only exists from the perspective of those partial orders which reference it. Thus the reward for any DAG is always subject to being retaken by an entity which can apply more PoW than was originally applied. Thus the selfish-mining flaw appears to apply. A miner with 1/4 or 1/3 of a DAG partial order's hashrate can lie in wait, allowing others to waste their PoW on a DAG while building a hidden parallel DAG claiming the same rewards, then release the hidden DAG later, orphaning all of those transactions and rewards, thus increasing the attacker's share of the rewards (including minted coins) above the proportion their hashrate would otherwise earn without the selfish-mining strategy. And it appears to me to be catastrophically worse than for Satoshi's design, in that there will likely be multiple unmerged DAG branches at any moment, so the attacker probably needs much less than 1/4 of the network hashrate to selfish-mine any one of those coexistent DAG branches.

Quote from: section 3.1 page 18
The first natural but often unstated assumption is that a majority of players follow the correctness rules of the protocol.

...

Equally important is the assumption of rational participants (whether they are cheating or not), and we likewise assume that majority of the computing power is held by rational players.

From the analysis I did of Iota's DAG, it seems impossible to presume the majority of players obey any Nash equilibrium in a blockless DAG design. It appears to be a fundamentally insoluble issue. In other words, it is not sufficient to analyze the security and convergence game theory only from a holistic, systemic perspective, because strategies arise per DAG branch (partial order).

3. I intuitively expect some flaw around the variable control over fees collected per unit of PoW expended, i.e. control over difficulty. But I am too sleepy to work through this part of the paper right now.

I considered a design like this last year, and I came to the conclusion that there is no way to avoid centralization when employing proof-of-work incentivized by profit, regardless of any design that could possibly be contemplated.

Btw, I don't understand why that paper failed to cite the prior art of Iota's and Sergio Demian Lerner's DAGs.


Edit: Section "2.1 Collaborative Proof Of Work" on page 7 of the white paper explains well the mathematical concept of cumulative proof-of-work as a proxy for the relative resources consumed by a chain, used as the metric of chain length in a longest-chain rule.
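
To make that concrete, here is a minimal sketch of how cumulative work is typically computed (my own illustration, mirroring Bitcoin Core's 2^256/(target+1) convention; not code from the paper):

Code:
def block_work(target: int) -> int:
    # Expected number of hash attempts needed to find a block at this
    # 256-bit target (lower target = more work).
    return 2 ** 256 // (target + 1)

def chain_work(targets: list) -> int:
    # Under a longest-chain rule measured in cumulative work, the "longest"
    # chain is the one whose blocks sum to the most expected work, not the
    # one with the most blocks.
    return sum(block_work(t) for t in targets)

# A single hard block can outweigh a hundred easy ones:
assert chain_work([2 ** 224]) > chain_work([2 ** 232] * 100)
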
dsattler
Legendary
Activity: 924
Merit: 1000
November 05, 2016, 04:50:40 PM
#2

Btw, I don't understand why that paper failed to cite the prior art of Iota's and Sergio Demian Lerner's DAGs.

Don't forget Byteball, a new consensus algorithm and private untraceable payments using DAG, no POW, no POS! Wink
https://bitcointalk.org/index.php?topic=1608859.0

Bitcointalk member since 2013! Smiley
iamnotback (OP)
Sr. Member
Activity: 336
Merit: 265
November 06, 2016, 04:06:44 AM
Last edit: November 15, 2016, 12:59:39 AM by iamnotback
#3

Btw, I don't understand why that paper failed to cite the prior art of Iota's and Sergio Demian Lerner's DAGs.

Don't forget Byteball, a new consensus algorithm and private untraceable payments using DAG, no POW, no POS! Wink
https://bitcointalk.org/index.php?topic=1608859.0

Also found this:

https://www.youtube.com/watch?v=zjT7wQNg_s4

The innovation claimed is that everyone can agree on 11 of 12 centralized supernodes to order the transactions, thus we wouldn't need PoW nor blocks if this claim were true and desirable.

If that claim were true, then we wouldn't have Visa and Mastercard dominant today.

Since people can't agree, the governance of society is a power vacuum. The most ruthless and powerful are sucked into the vacuum to provide the top-down organization (discipline) that society requires in order to function. So the outcome will be no different in this case: the 12 supernodes will end up controlled by one entity (even if pretending to be 12 entities via a Sybil attack), because the users will never be able to agree on any evolution away from the 12 by forming consensus on an exact new 12, since they are only allowed a mutation of 1 at a time. And any higher rate of mutation would make it implausible to define a total order.
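To illustrate the mutation constraint (my paraphrase of the rule as I understand it; function name hypothetical), a unit's witness list is only compatible with ours if it differs by at most 1 member, so the set of 12 can only drift one member at a time and can never jump to a fresh consensus:

Code:
def witnesses_compatible(mine: set, theirs: set, max_mutation: int = 1) -> bool:
    # Byteball-style rule: both lists have 12 members, and a unit is
    # acceptable only if its witness list differs from ours by at most
    # max_mutation members.
    assert len(mine) == 12 and len(theirs) == 12
    return len(mine - theirs) <= max_mutation

So replacing a captured witness set wholesale would require 12 single-member steps, each of which the incumbent majority can simply decline to follow.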

Tangentially (off-topic for technical discussion): although the creator appears to have good intentions, I argue his distribution method is highly flawed. Giving away coins for free means most will dump them on the market, thus collapsing the price. Well, maybe that is by design, so someone can scoop them up cheaply, and after the price hits rock bottom that group can pump & dump it, making the usual fortune by mining the n00b speculators.
iamnotback (OP)
Sr. Member
Activity: 336
Merit: 265
November 09, 2016, 05:34:11 PM
Last edit: November 15, 2016, 01:07:54 AM by iamnotback
#4

Quote from: section 3.1 page 18
The first natural but often unstated assumption is that a majority of players follow the correctness rules of the protocol.

...

Equally important is the assumption of rational participants (whether they are cheating or not), and we likewise assume that majority of the computing power is held by rational players.

From the analysis I did of Iota's DAG, it seems impossible to presume the majority of players obey any Nash equilibrium in a blockless DAG design. It appears to be a fundamentally insoluble issue. In other words, it is not sufficient to analyze the security and convergence game theory only from a holistic, systemic perspective, because strategies arise per DAG branch (partial order).

We must differentiate Iota's design because afair it has no reward for doing proof-of-work other than the "altruism-prime" motivation and afaics Iota does not have the localized incentive of Theorem 2 mentioned below.

You seem to be a smart guy. Here is a challenge for you - design such a system based on DAG that allows to issue coins a-la Bitcoin (we start with 0 supply) without weakening the security of the system. I think 1 week is enough for you. Do you accept the challenge? These links may be helpful:
- https://en.wikipedia.org/wiki/CAP_theorem
- https://en.wikipedia.org/wiki/Nash_equilibrium
- https://en.wikipedia.org/wiki/Pareto_efficiency

Just because there's a PoW component (initially at least), which produces new coins. You might not like mining, but it's established enough that few would seriously object to using it for distribution. (Though it's of course always preferable if the PoW is GPU/ASIC resistant.)

Why would you extend my branches if by invalidating them you would earn more coins?

Afaics Iota's convergence depends on all payers and payees adopting the same strategy¹ with no incentive present to choose one strategy over another (which is why I never thought Iota could maintain a Nash equilibrium without centralized servers enforcing a strategy).

The above quote from the white paper states the usual assumption of resistance up to the well-known "51% attack". Theorem 2 in section 3.2 on page 19 explains that the honest, rational participant has (presuming a Nash equilibrium) a probabilistic and opportunity-cost incentive to apply proof-of-work (i.e. append) on the "leading edge", analogous to the longest-chain-rule incentive in Bitcoin.

Yet a Nash equilibrium requires that no participant can profit by unilaterally switching to a conflicting strategy. So we must consider:

With or without direct monetary rewards (e.g. minted coins or non-burned txn fees), selfish mining can be conceptualized more generally as the asymmetry, between different proof-of-work participants (aka miners in Bitcoin), of the cost of effective PoW (or burned txn fees) relative to whatever that PoW (or those burned txn fees) accomplishes in the consensus system. So even for Iota or DagCoin, which afair don't monetarily reward the PoW (i.e. afaik the PoW is simply burned), the asymmetry still exists in terms of the value of what PoW can effect in the system. Thus, as CfB wrote, "a more sophisticated strategy may be more profitable" given some externalities, such as achieving a double-spend and shorting the token's exchange value.

And afaics, this is where the paper errs just below the proof of Theorem 2:

Quote
A stronger property can be made for those transactions that further satisfy property #3, namely that the prize of the new transaction be larger still than the prize of its parents before the new transaction came into existence. As long as this property is true, not only will honest verifiers have an incentive to prefer the new transaction over its parents, but even dishonest clients—who might think of actively denying certain valid transactions—will still find it advantageous to prefer the new transaction.

The possibility of non-Nash-equilibrium attacks is acknowledged, but in a dismissive tone (and afaics with an incorrect presumption that "convergence" is final, unless "convergence" means probabilistic assurance that some multiple, "as confirmations", of 50% of all proof-of-work of all branches are descendants of our branch):

Quote from: Concerted attacks, Section 4 of page 21
We note that partially verified transactions have temporary exposure to a concerted attack, since a powerful attacker may have the temporary local ability to overpower the honest majority by focusing all of its efforts against a specific target. We note that once a transaction nears or reaches convergence, it will be as strongly affirmed as it would be in a Blockchain system of equivalent total verification power.

There is little value in using energy to remove a previous transaction, outside of attacks that focus on transactions one may wish to remove, such as in a double spend scenario, see Theorem 1.

What I wrote previously is afaics true either when minting rewards are present, or when verifiers can earn some fees from transactions because those transactions don't "satisfy property #3":

2. There is no total order in the described system [insert: unless we reach probabilistic "convergence" as I described it above], thus any partial-order DAG only exists from the perspective of those partial orders which reference it. Thus the reward for any DAG is always subject to being retaken by an entity which can apply more PoW than was originally applied. Thus the selfish-mining flaw appears to apply. A miner with 1/4 or 1/3 of a DAG partial order's hashrate can lie in wait, allowing others to waste their PoW on a DAG while building a hidden parallel DAG claiming the same rewards, then release the hidden DAG later, orphaning all of those transactions and rewards, thus increasing the attacker's share of the rewards (including minted coins) above the proportion their hashrate would otherwise earn without the selfish-mining strategy. And it appears to me to be catastrophically worse than for Satoshi's design, in that there will likely be multiple unmerged DAG branches at any moment, so the attacker probably needs much less than 1/4 of the network hashrate to selfish-mine any one of those coexistent DAG branches.

However, if the quoted selfish mining doesn't require 1/4 to 1/3 of total systemic hashrate because the network hashrate is split amongst several coexistent branches of the DAG (which at any moment have not yet been converged), then it also means the selfish miner is only becoming relatively wealthier than the participants on the attacked branch, and not w.r.t. transactions in other branches of the systemic DAG. Yet I also posit it means multiple selfish miners, probabilistically on different branches, don't need to be coordinated, so the threshold-of-attack is lower and thus economically there should be more such attackers (than for Satoshi's design).
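
For reference, the closed-form relative revenue from the selfish-mining paper cited in footnote ¹ (Eyal & Sirer, arXiv:1311.0243), from which the 1/4 and 1/3 thresholds quoted above fall out of the γ parameter:

Code:
def selfish_revenue(alpha: float, gamma: float) -> float:
    # Relative revenue of a selfish miner with hashrate share alpha, where
    # gamma is the fraction of honest hashrate that mines on the selfish
    # pool's block during a tie (Eyal & Sirer).
    num = alpha * (1 - alpha) ** 2 * (4 * alpha + gamma * (1 - 2 * alpha)) - alpha ** 3
    den = 1 - alpha * (1 + (2 - alpha) * alpha)
    return num / den

def profitability_threshold(gamma: float) -> float:
    # Hashrate share above which selfish mining beats honest mining.
    return (1 - gamma) / (3 - 2 * gamma)

print(profitability_threshold(0.0))  # 1/3
print(profitability_threshold(0.5))  # 1/4

If hashrate is fragmented across coexistent DAG branches, alpha should be read as the share of the attacked branch's hashrate, which is why the effective systemic threshold drops.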

Even if we remove minting from the described system and require that all transactions "satisfy property #3", so that the only incentive to converge on leading edges is an "altruism-prime" to have one's transaction confirmed (which is in theory qualitatively an undersupplied public good, and empirically weaker than an individualized for-profit incentive), then afaics the potential attack becomes a combination of two things. First, a selfish-mining attack in the sense of causing others on the same branch to waste proof-of-work resources (thus of course the others become relatively less profitable than the attacker). Second, a double-spend attack on the lie-in-wait branch, noting that for the honest participants the cumulative proof-of-work (in this constrained design variant) would necessarily cost significantly less than the value of the transactions in the branch (since, given there is no reward, the proof-of-work is effectively a transaction fee). Thus I posit the double-spend attack becomes quite plausible because the security is so low. The vulnerability is ostensibly much greater than for Bitcoin (as quoted below), because the branch is only secured by the said commensurate value of proof-of-work as "transaction fees", and because, adapting the above quote, "there will likely be multiple unmerged DAG branches at any moment, so the attacker probably needs much less than 51% of the network hashrate to lie-in-wait on any one of those coexistent DAG branches".

@TomHolden, I agree that Satoshi's PoW has the same potential vulnerability: if double-spends exceed the value of what was burned to provide security, then a 51% lie-in-wait attack is possible, funded by the value of the double-spends (possibly also shorting the exchange value, in case the successful attack craters the price).
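
The break-even reasoning in the last few paragraphs reduces to a one-line expected-value test (a sketch with hypothetical parameter names, not anything from the paper):

Code:
def attack_rational(p_success: float, double_spend_value: float,
                    short_profit: float, attack_pow_cost: float) -> bool:
    # A lie-in-wait attack is rational when its expected payoff exceeds the
    # PoW burned to mount it. If a failed attempt costs (almost) nothing,
    # the attack can simply be repeated until it succeeds.
    return p_success * (double_spend_value + short_profit) > attack_pow_cost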

Thus, @tonych's concern applies to every consensus design (including Satoshi's) which is based on burning some resource as the metric for the longest-chain rule (regardless of whether multiple branches are merged to form the longest chain, e.g. a DAG).


¹https://bitcointalk.org/index.php?topic=1319681.msg13538929#msg13538929
https://bitcointalk.org/index.php?topic=1319681.msg13533261#msg13533261
iamnotback (OP)
Sr. Member
Activity: 336
Merit: 265
November 09, 2016, 07:16:14 PM
Last edit: November 10, 2016, 05:03:18 AM by iamnotback
#5

3. I intuitively expect some flaw around the variable control over fees collected per unit of PoW expended, i.e. control over difficulty. But I am too sleepy to work through this part of the paper right now.

Okay so this design issue is explained in Automatic Drain Rate Adjustment of section 2.2.1 on page 12.

Please check my logic, because afaics that section doesn't correctly conceptualize the flaw and potential for attack.

In Bitcoin, the relatively small discrepancies between miners' clocks, as seen in timestamps, are tolerable w.r.t. forming consensus on one chain over lengthy 2016-block readjustment windows. That is not equivalent to the case where difficulty is adjusted separately for each partial-order branch of the DAG, wherein hashrate can be volatile because it can be moved between branches at will; thus it is necessary to adjust the difficulty much more frequently, over shorter windows.

If the readjustment window is too long, then a high-hashrate attacker can stall a branch for a long time by throwing high hashrate at it until the difficulty adjusts, then leaving for another branch. Whereas if the readjustment window is short, then the attacker can use timestamp manipulation to game the system, as well as exploit rapidly undulating difficulty levels across different branches.
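
A toy retarget simulation (my own construction, not from the paper) showing the stall: an attacker parks 9x the branch's honest hashrate on it for one window, and after the attacker leaves, the over-adjusted branch needs 10x the target time to complete its next window:

Code:
def retarget_sim(windows: int = 6, window_len: int = 10) -> None:
    # Toy model: the branch targets 1 block per second, and difficulty is
    # the expected number of hashes per block.
    difficulty, honest, attacker = 1.0, 1.0, 9.0
    for w in range(windows):
        rate = honest + (attacker if w % 2 == 0 else 0.0)  # attacker hops on/off
        window_time = window_len * difficulty / rate
        difficulty *= window_len / window_time  # retarget toward 1 block/sec
        print(f"window {w}: took {window_time:6.1f}s, new difficulty {difficulty:.1f}")

retarget_sim()
# window 0: took    1.0s, new difficulty 10.0   (attacker present)
# window 1: took  100.0s, new difficulty  1.0   (attacker gone: 10x stall)
# ...and so on, oscillating every window.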

I expect Nash equilibrium failures (i.e. conflicting strategies) around the lack of consistency of difficulty levels between branches that need to converge.

As noted in Disruption and DoS of section 4 on page 21, transaction spam is handled heuristically and is orthogonal to the need for difficulty adjustment.

Afaics, difficulty adjustments and a DAG seem fundamentally incompatible. Afair Iota doesn't need to adjust difficulty because the proof-of-work isn't rewarded.
iamnotback (OP)
Sr. Member
Activity: 336
Merit: 265
November 10, 2016, 05:31:46 AM
Last edit: November 10, 2016, 03:55:36 PM by iamnotback
#6

3. I intuitively expect some flaw around the variable control over fees collected per unit of PoW expended, i.e. control over difficulty. But I am too sleepy to work through this part of the paper right now.

...

Afaics, difficulty adjustments and a DAG seem fundamentally incompatible. Afair Iota doesn't need to adjust difficulty because the proof-of-work isn't rewarded.

A more concise reason why minting and DAGs appear to be fundamentally incompatible:

  • As the white paper admits in section 2.2.1, there is no total-order perspective from which to compute the systemic difficulty; thus it can only be computed per DAG branch.
  • Minting reward (per unit of proof-of-work computation) is maximized by mining on the branch with the least cumulative proof-of-work, so there is an incentive to maximize the breadth of the tree. This is a Nash equilibrium conflict with the fee mechanism and with Theorem 2's assumed incentive to apply proof-of-work (i.e. append) on the "leading edge"; i.e. the "altruism-prime"¹ of the fee mechanism is an undersupplied public good relative to the individualized reward of minting (see the sketch below).

¹ Given that systemically there is no income from fees because taking fees (instead of "pass-through") lowers the value for others to append to the branch. Thus the fees are effectively burned.
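
A sketch of the conflicting incentive (my own illustration with a hypothetical branch structure, not from the paper): with per-branch difficulty, expected minted coins per hash are reward/difficulty, so a profit-maximizing miner appends to the weakest branch, widening the tree instead of converging on the leading edge.

Code:
def minting_value_per_hash(reward: float, difficulty: float) -> float:
    # Probability of minting the reward per hash attempt is 1/difficulty.
    return reward / difficulty

def pick_branch(branches: list) -> dict:
    # Rational minting strategy: mine the branch with the least cumulative
    # work (lowest difficulty), the opposite of Theorem 2's leading edge.
    return max(branches,
               key=lambda b: minting_value_per_hash(b["reward"], b["difficulty"]))
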
iamnotback (OP)
Sr. Member
Activity: 336
Merit: 265
November 10, 2016, 08:54:27 AM
Last edit: November 10, 2016, 11:10:30 AM by iamnotback
#7

Btw, I don't understand why that paper failed to cite the prior art of Iota's and Sergio Demian Lerner's DAGs.

Don't forget Byteball, a new consensus algorithm and private untraceable payments using DAG, no POW, no POS! Wink
https://bitcointalk.org/index.php?topic=1608859.0

...

The innovation claimed is that everyone can agree on 11 of 12 centralized supernodes to order the transactions, thus we wouldn't need PoW nor blocks if this claim were true and desirable.

... where the 12 supernodes will end up controlled by one entity (even if pretending to be 12 entities via a Sybil attack), because the users will never be able to agree on any evolution away from the 12 by forming consensus on an exact new 12, since they are only allowed a mutation of 1 at a time. And any higher rate of mutation would make it implausible to define a total order.

The Byteball design is conceptually worse than (D)PoS from the analytical perspective that says the practical ability to change the top-down controlling entities is what differentiates (D)PoS from Byzantine fault tolerant federated designs (<-- watch linked video from 17:00 until 22:45). Except that perspective assumes a majority of the stake can't be induced to collude to deviate from the Nash equilibrium (w.r.t. control over, and thus outcomes from, those ordering nodes in (D)PoS), which seems myopic. In reality, the omnipresent power-law distribution of wealth ensures the whales own greater than 50% of the stake; and if the minnows are not individually economically incentivized, they are operating on "altruism-prime" with the opportunity cost of an undersupplied good, which is the power vacuum of political economics. (And intuitively, any individualized economic incentive will always be captured by economies-of-scale, as exemplified by selfish mining, begetting the inviolable power-law distribution outcome.)
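The power-law claim is easy to sanity-check numerically (an illustration using Python's stdlib Pareto sampler; the shape α ≈ 1.16 is the textbook value yielding the classic 80/20 split, not data about any actual coin):

Code:
import random

def top_share(n: int = 100_000, alpha: float = 1.16, top_frac: float = 0.20) -> float:
    # Sample n "stakes" from a Pareto distribution and report the share of
    # total wealth held by the richest top_frac of holders.
    wealth = sorted((random.paretovariate(alpha) for _ in range(n)), reverse=True)
    return sum(wealth[: int(n * top_frac)]) / sum(wealth)

print(top_share())  # ~0.8: the whales comfortably control a majority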

This reason (as well as the lack of scaling robustness) is a weakness; and (D)PoS is worse than Satoshi's design w.r.t. Nash equilibrium because no value is extracted (such as being spent on an external resource, as in proof-of-work), thus 51% nothing-at-stake attacks are inexorable, as well as free when you can short the token on an exchange. However, these "wolverine federated systems in an illusory democratic sheepskin" are more computationally efficient than systems which employ proof-of-work.

IOHK has proved security for a PoS system, but the assumption remains that the majority of the stake is not colluding to violate the Nash equilibrium, and that a majority of the stake remains online at all times. I don't see what IOHK's PoS accomplishes which isn't already accomplished by DPoS. Is it more objective w.r.t. violations of the Nash equilibrium, since in DPoS the majority of the stake can be offline and so can't observe first-hand any violations? DPoS is presumably provably secure if a majority of the delegates adhere to the Nash equilibrium.

So in summary, we can hide "wolverine federated systems in an illusory democratic sheepskin" and gain computational efficiency. But the security problems (or more realistically the economic centralization problem, since large stakeholders need insidious means, as there isn't sufficient shorting liquidity for them to scorch their earth) shift to the power vacuum of political economics and the inviolable power-law distribution of wealth (begotten by economies-of-scale). Yet Satoshi's design also has these centralization problems, due to the same power vacuum of political economics and the same inviolable power-law distribution of wealth (begotten by economies-of-scale).

Will anyone find another class of solution which provides long-term stable resistance to the centralization inherent in the power vacuum of political economics and the inviolable power-law distribution of wealth (begotten by economies-of-scale)? Is (D)PoS already more realistically resistant to the insidious effects of centralization of vested-interest "stake" than Satoshi's design?

This is the Holy Grail we seek because centralized ecosystems don't scale due to the stifling politics and vested interests. In my opinion (which is probably an analysis many others share), this is what is holding back Bitcoin lately.
iamnotback (OP)
Sr. Member
Activity: 336
Merit: 265
November 10, 2016, 11:03:17 AM
#8

Is (D)PoS already more realistically resistant to the insidious effects of centralization of vested-interest "stake" than Satoshi's design?

No.

(D)PoS isn't a free market on transaction fees. Somebody has to pay for the servers, whether it is taken out of the collective as "witness fees" via dilution, as is the case for Steem, or otherwise. The vested power-law-distributed stake interests have a monopoly and can charge (more than the costs, up to) the maximum the market can bear, which some allege is also underway in Bitcoin as proof-of-work mining allegedly centralizes with economies-of-scale.
spartacusrex
Hero Member
Activity: 718
Merit: 545
November 10, 2016, 02:54:41 PM
#9

I know you're not, but I'm glad you're back, Anonymint TBTP iamnotback..

Always enjoy squinting and leaning forward to read your 'light' posts.. (and invariably scratching my head)..

Life is Code.
iamnotback (OP)
Sr. Member
Activity: 336
Merit: 265
November 10, 2016, 03:25:25 PM
Last edit: November 12, 2016, 12:36:44 PM by iamnotback
#10

Always enjoy squinting and leaning forward to read your 'light' posts.. (and invariably scratching my head)..

As my poor liver+digestive+delirium health allows, I will be trying to pull my thoughts into a more coherent document. This thread has been more stream-of-(in)consciousness while undulating in/out of severity of delirium or some sharpness of mind. Imagine playing an action-packed video game where the screen blacks out every other 5 seconds whilst the game continues. It is difficult to maintain continuity of thought and short-term memory.

Please feel free to raise any questions or quote any portions that need more clarification/discussion (or don't to avoid the masochism of reading more of my discombobulated babble).

Believe me, Iamnotback. I am barely here nor there. Maybe by February I will be back after the scheduled expert medical diagnosis.

I don't think many people fully understand DAGs. Ditto the microeconomics and game theory of blockchains. I am trying to gain a holistic understanding of the design axes.

Quote
One final point: there is a science of designing economic incentives so that rational players will behave in a desired way, and it's called mechanism design. Creators of cryptocurrencies (as well as creators of applications such as the DAO) are essentially doing mechanism design. But mechanism design is hard, and our paper is the latest among many to point out that the mechanisms embedded in cryptocurrencies have flaws. Yet, sadly, the cryptocurrency community is currently disjoint from the mechanism design community. That is why I'm thrilled that mechanism design expert Matt Weinberg, who's behind all the sophisticated theory in our paper, is joining Princeton's faculty next semester. Expect more research from us on the mechanism design of cryptocurrencies!

Edit: another potential reason my explanation in this thread may lack complete clarity is that it might require discussing my design solution, which I am not ready to do. Thus I am writing a private document for that now, and sharing my analysis and brainstorming publicly.
iamnotback (OP)
Sr. Member
Activity: 336
Merit: 265
November 10, 2016, 07:10:04 PM
Last edit: November 21, 2016, 05:52:29 AM by iamnotback
#11

But the security problems (or more realistically the economic centralization problem, since large stakeholders need insidious means, as there isn't sufficient shorting liquidity for them to scorch their earth) shift to the power vacuum of political economics and the inviolable power-law distribution of wealth (begotten by economies-of-scale). Yet Satoshi's design also has these centralization problems, due to the same power vacuum of political economics and the same inviolable power-law distribution of wealth (begotten by economies-of-scale).


Is (D)PoS already more realistically resistant to the insidious effects of centralization of vested-interest "stake" than Satoshi's design?

No.

(D)PoS isn't a free market on transaction fees. Somebody has to pay for the servers, whether it is taken out of the collective as "witness fees" via dilution, as is the case for Steem, or otherwise. The vested power-law-distributed stake interests have a monopoly and can charge (more than the costs, up to) the maximum the market can bear, which some allege is also underway in Bitcoin as proof-of-work mining allegedly centralizes with economies-of-scale.

I am writing something privately more coherently driving towards the generative essence of what I am thinking about in the above quotes:


Power-law Distribution Control

A Nash equilibrium can coexist with coordinated control over greater than 50% of the resources in a consensus ordering system, if there is no rationally better strategy employing said control which, when deployed, dictates a change to the optimum strategy of any system participant.

For example in a proof-of-work system, whether or not coordinated miners with a significant percentage of the system hashrate are selfish and stubborn mining¹ (mining on new blocks immediately for themselves and propagating them slowly to other miners for a relatively more profitable mining strategy), this doesn't dictate or change the other system participants' optimum mining strategy, nor their optimum number of confirmations for a specific probability of a double-spend. Actually, concentrations of controlled hashrate, even when less than 50%, do slightly impact confirmation probabilities⁶, but this is ignored except for very-large-value transactions.
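
The confirmation calculation referenced here is the standard one from section 11 of Satoshi's whitepaper; a direct transcription:

Code:
from math import exp, factorial

def double_spend_probability(q: float, z: int) -> float:
    # Probability that an attacker with hashrate share q (< 0.5) ever
    # catches up from z confirmations behind (Bitcoin whitepaper, sec. 11).
    p = 1.0 - q
    if q >= p:
        return 1.0
    lam = z * (q / p)
    return 1.0 - sum(
        (lam ** k * exp(-lam) / factorial(k)) * (1 - (q / p) ** (z - k))
        for k in range(z + 1)
    )

print(double_spend_probability(0.10, 6))  # ~0.0002
print(double_spend_probability(0.30, 6))  # ~0.13: why concentrated hashrate matters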

Another example: control over all new blocks via control over a majority of the stake in a DPoS system enables a strategy of dictating the level of transaction fees, but it doesn't change the optimum strategy of any participant in the system (other than rendering the minority stake's voting futile). Whereas for a proof-of-work or non-delegated proof-of-stake system, the optimum strategy of the minority (hashrate or stake, respectively) changes (to not mining or staking, respectively) because all of their blocks will be orphaned; although effectively in DPoS the majority vote would just choose all the delegates, so none would be orphaned.

A counterexample: in proof-of-work or proof-of-stake systems, a strategy of employing a majority of the hashrate or stake, respectively, to issue double-spends does impact the strategy of other participants w.r.t. their computation of the probability of a double-spend and their non-participation in the system.

The importance of this realization, that a Nash equilibrium can coexist with majority control over the resources of a consensus ordering system, is due to the following inviolable fact of the physics and economics of our universe.

Theorem: the control over the resources in every consensus ordering system will be power-law distributed. No counterexample will be discovered.

Proof: Smaller mass is more attracted to larger mass because it maximizes the entropy, aka the information content, of the system.[Moore2016] Lonesome mass has no frame-of-reference, thus has a high probability of only one future. It is also possible to relate this to why we must have friction, oscillation, and a finite speed-of-light, so that the past and future light cones of special relativity don't collapse into an undifferentiated state, voiding all distinguishable existence.

If this theorem holds, and I can argue that the strategy of employing a majority of the hashrate or stake to issue double-spends or to orphan all minority blocks is not optimum for rational power-law distributions, then I can claim a Nash equilibrium can exist for consensus ordering systems.

And I do argue that power-law distributions have nothing to gain by destroying the value of the system, because their resources are not liquid and are too large to be offset by the available liquidity for shorting the value of the system, since equity liquidity is a minority fraction of the market capitalization. Thus it is also presumed that the rational power-law distribution would not even allow a rented 51% hashrate attack. Even a recycled attack seems irrational.[Recycled]

However, the power-law distribution majority is not in control if there exist any attacks which require only a minority of resources, and especially attacks (even ones with a very low probability of success) which either have nothing-at-stake (e.g. proof-of-stake, if there is any such minority-resources attack) or exist in any system which doesn't consume (burn) a resource of greater value than the probabilistic value of the said attack; such an attack can be repeated at no cost (or no loss) until it succeeds. In that case, a rogue whale might deem it rational to attack and short the value of the system. Delegated Proof-of-Stake (DPoS) could potentially be rationally (perhaps even 51%) attacked by the exchanges (claiming a hacker did it), because they apparently control the private keys for voting, yet don't have contractual ownership and vested interest in the (value of the) stake.

The rational power-law distribution majority might orphan minority blocks, such as for the purpose of having a monopoly on transaction fees or blacklisting some UTXO, if it can't be objectively observed as a 51% attack causing fear of double-spends and protocol changes. Absent a total perspective (which, if it existed, would mean this was not a Byzantine Generals Problem), there is no objectivity over whether orphaned blocks are due to a 51% attack in Satoshi's design². Thus Satoshi's design doesn't have a Nash equilibrium, because if minority-hashrate miners know there is a 51% attack, then their optimum strategy changes to quitting mining. However, pools probably ameliorate this attack. Alternatively, a less conspicuous monopoly on transaction fees can be accomplished by the power-law distribution rejecting protocols which would otherwise allow the transaction rate (supply) to match its demand, e.g. limiting the size of blocks of transactions in a blockchain.

So in addition to evaluating whether a consensus ordering algorithm has a Nash equilibrium, we also want to analyze the impacts given the natural and inviolable power-law-distributed control over the resources of the system. Moreover, instead of evaluating the design axes of consensus ordering systems only from the perspective of limits on the proportion of rationally self-interested malevolent participants for Byzantine fault tolerance, we should also incorporate the power-law distribution's majority control over the system resources as a potentially positive asset enabling some alternative designs, e.g. DPoS as an alternative to proof-of-work.


Proof-of-Work as Space Heaters Belies Economics of Specialization

Specialization enables economies-of-scale.

A posited caveat[4], that proof-of-work mining resources would not become power-law-distribution centralized, due to the posited high electrical cost of dissipating heat in centralized mining farms coupled with the posited free electricity cost of using the "waste" heat of ASIC mining equipment as space heaters, is (in hindsight) incorrect because:

  • Two-phase immersion cooling is 4000 times more efficient at removing heat from high-power-density data centers[5], reducing the 30 - 50% electricity overhead to 1%[6].
  • Electricity proximate to hydroelectric generation, or subsidized electricity, costs approximately 50 - 75% less than the average electricity cost (see the sketch below).
  • Heating is rarely needed year-round, 24 hours daily, at full output. Not running mining hardware at full output continuously renders its purchase-cost depreciation much less economic, because the systemic hashrate is always increasing and (because) ASIC efficiency is always increasing[7]. The posited purchase of obsolete mining equipment[8] is incorrect because `MR = MC`, so a combination of increased demand for obsolete mining equipment raising its price, and weighted profit at the margins increasing and thus increasing the mining difficulty, offsets the savings due to waste heat. Closer to home, making it profitable enough to be worthwhile (to justify the PITA of jerry-rigging a space heater from equipment not designed for the purpose) requires running so many 10s or 100s of kWh of relatively much less efficient (i.e. obsolete) hardware, generating more heat than can typically be utilized (unless infernos are in sufficient decentralized demand).
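
The comparison in the first two bullets can be put into rough numbers (all figures illustrative, loosely taken from the percentages above, not measured data):

Code:
def effective_cost_per_kwh(price_per_kwh: float, cooling_overhead: float,
                           heat_credit: float = 0.0) -> float:
    # heat_credit: fraction of the year the "waste" heat actually displaces
    # heating the operator would otherwise have paid for.
    return price_per_kwh * (1 + cooling_overhead) * (1 - heat_credit)

farm = effective_cost_per_kwh(0.03, 0.01)                    # hydro-proximate, immersion-cooled
home = effective_cost_per_kwh(0.12, 0.00, heat_credit=0.25)  # heat wanted ~25% of hours
print(farm, home)  # ~0.030 vs ~0.090: the farm wins ~3x on electricity alone,
                   # before even counting its newer, more efficient hardware.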


Proof-of-Work on CPUs Belies Economics of Specialization

The posited caveat[4], that mining on general-use computers would be economically viable (as a refutation of the power-law distribution of resources) if ASICs are not more efficient than (H + E) / E (even factoring that E might be psychologically 0 because it is obscured in the monthly variability of the electric bill), falls away at least because of the transition to power-efficient (battery-powered or fanless) devices, which don't consume enough electricity to provide enough security for a longest-chain-rule blockchain even if millions of said devices were mining[9]. Or more generally, because the portion of the general-use computer's cost which represents circuits applicable to proof-of-work computation is equivalently too small.
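
The caveat's own viability condition can be written down directly (a sketch with hypothetical names; the (H + E) / E bound is from [4]):

Code:
def cpu_mining_viable(asic_efficiency_multiple: float,
                      heat_value: float, electricity_cost: float) -> bool:
    # Caveat [4]: mining on general-use computers stays competitive only if
    # ASICs are no more than (H + E) / E times as efficient, where H is the
    # value of the waste heat actually utilized and E the electricity cost.
    return asic_efficiency_multiple <= (heat_value + electricity_cost) / electricity_cost

# Even with heat "worth" as much as the electricity (H == E), a mere 2x ASIC
# advantage sits right at the boundary; real ASIC advantages are orders of
# magnitude larger.
print(cpu_mining_viable(2.0, 1.0, 1.0))    # True (boundary)
print(cpu_mining_viable(100.0, 1.0, 1.0))  # False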


[Moore2016] https://steemit.com/science/@anonymint/the-golden-knowledge-age-is-rising
[Recycled] https://bitcointalk.org/index.php?topic=1319681.msg16853429#msg16853429
[1] https://bitcointalk.org/index.php?topic=1319681.msg13800936#msg13800936
    https://bitcointalk.org/index.php?topic=1183043.msg13800901#msg13800901
    https://bitcointalk.org/index.php?topic=1319681.msg13778110#msg13778110
[4] https://blog.ethereum.org/2014/06/19/mining/
[5] http://www.allied-control.com/immersion-cooling
[6] http://www.allied-control.com/publications/Analysis_of_Large-Scale_Bitcoin_Mining_Operations.pdf#page=9
[7] https://www.reddit.com/r/Bitcoin/comments/335107/i_am_thinking_of_using_a_bitcoin_miner_to_heat_my/
[8] https://bitcointalk.org/index.php?topic=918758.msg10109255#msg10109255
    https://bitcointalk.org/index.php?topic=1527954.msg16816538#msg16816538
[9] https://bitcointalk.org/index.php?topic=1361602.msg15553037#msg15553037
[13] https://eprint.iacr.org/2013/881.pdf
     http://ethereum.stackexchange.com/questions/314/what-is-ghost-and-what-is-its-relationship-to-frontier-and-casper
     https://blog.ethereum.org/2014/07/11/toward-a-12-second-block-time/
¹ https://arxiv.org/abs/1311.0243
  http://eprint.iacr.org/2015/796
  https://bitcointalk.org/index.php?topic=1361602.msg15823439#msg15823439
  https://blog.ethereum.org/2014/07/11/toward-a-12-second-block-time/
² https://bitcointalk.org/index.php?topic=1183043.msg13823607#msg13823607
³ http://esr.ibiblio.org/?p=984
https://bitcointalk.org/index.php?topic=1171109.msg12376416#msg12376416
https://bitcointalk.org/index.php?topic=1671480.0
https://arxiv.org/abs/1402.2009
http://hackingdistributed.com/2014/12/17/changetip-must-die/
https://bitcointalk.org/index.php?topic=1319681.msg16805440#msg16805440
https://github.com/shelby3/hashsig/blob/master/DDoS%20Defense%20Employing%20Public%20Key%20Cryptography.md
alkan
Full Member
Activity: 149
Merit: 103
November 10, 2016, 09:46:02 PM
#12

You may want to take a look at the Swirlds Hashgraph consensus algorithm, which doesn't rely on blockchains, PoW, or PoS.

Its authors tout it as fair, fast, provable, Byzantine, ACID-compliant, efficient, inexpensive, timestamped, DoS-resistant, and optionally non-permissioned.

For more information, see the white paper http://www.swirlds.com/downloads/SWIRLDS-TR-2016-01.pdf and my post https://bitcointalk.org/index.php?topic=1400715.0;prev_next=next.




iamnotback (OP)
Sr. Member
Activity: 336
Merit: 265
November 11, 2016, 07:24:10 AM
Last edit: November 11, 2016, 07:56:24 AM by iamnotback
#13

You may want to take a look at the Swirlds Hashgraph consensus algorithm, which doesn't rely on blockchains, PoW, or PoS.

Its authors tout it as fair, fast, provable, Byzantine, ACID-compliant, efficient, inexpensive, timestamped, DoS-resistant, and optionally non-permissioned.

For more information, see the white paper http://www.swirlds.com/downloads/SWIRLDS-TR-2016-01.pdf and my post https://bitcointalk.org/index.php?topic=1400715.0;prev_next=next.

In Core Concepts of section 2 on page 4, we see the key design facet for obtaining consensus on a total order (selecting from the many partial-order DAG branches) is the concept of "famous witnesses", which is analogous to the "witnesses" in section 6 on page 9 of the Byteball white paper. The difference is that Byteball restricts the number of these witnesses to 12 and only allows disagreement over 1 witness during each consensus round (which I thus argued would become controlled by the power-law distribution; and the salient issue is they would be quite static and unresponsive to free market needs, because the power-law distribution isn't real-time omniscient).

Swirlds appears to have some attributes similar to Stellar's SCIP, in that I presume a Sybil attack can indefinitely stall consensus, as afaics there doesn't appear to be any resource constraint on nodes which would keep the power-law distribution in control. Byteball burns transaction fees, but these are hardcoded and not set by free market competition.

Essentially my analysis is that Byteball is headed in the correct general design direction but there are some pitfalls in their design decisions. For example, in addition to what I already stated, I also foresee scaling issues in the design choices.
iamnotback (OP)
Sr. Member
Activity: 336
Merit: 265
November 12, 2016, 09:57:18 PM
Last edit: November 12, 2016, 11:01:11 PM by iamnotback
#14

Byteball pays transaction fees to the witnesses (and perhaps the payer portion is effectively burned, as it is passed along?) instead of employing proof-of-work (but I am not yet clear whether this is used as the metric of chain length in any way in the consensus algorithm). These fees, per section "1. Introduction: Exchange rate" on page 3, are tied to the system-wide exchange value of adding bytes to the database. Byteball has the incorrect monetary theory, because the confidence in, and thus the value of, money is greater the higher the seigniorage.

Quote from: Byteball whitepaper
3. Native currency: bytes

Next, we need to introduce some friction to protect against spamming the database with useless messages. The barrier to entry should roughly reflect the utility of storage for the user and the cost of storage for the network. The simplest measure for both of these is the size of the storage unit.

I vehemently disagree. The fee should reflect not just storage costs but all of the network's costs, including validation, etc. (see the sketch below).
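
To make the disagreement concrete (a hypothetical cost model of my own, not from the whitepaper): Byteball's fee is literally the unit's size in bytes, whereas a fee reflecting the network's full cost would also price the validation work every full node must perform forever after.

Code:
def byteball_fee(unit_size_bytes: int) -> int:
    # Per the whitepaper: the fee equals the size of the storage unit.
    return unit_size_bytes

def full_cost_fee(unit_size_bytes: int, signature_checks: int,
                  byte_price: float, sig_check_price: float) -> float:
    # Hypothetical alternative: price storage AND validation separately.
    return unit_size_bytes * byte_price + signature_checks * sig_check_price
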
iamnotback (OP)
Sr. Member
Activity: 336
Merit: 265
November 13, 2016, 08:39:21 AM
Last edit: November 13, 2016, 09:10:22 AM by iamnotback
#15

One more follow up on Byteball, the design appears to be broken in numerous ways:

https://bitcointalk.org/index.php?topic=1608859.msg16860979#msg16860979

https://bitcointalk.org/index.php?topic=1608859.msg16860875#msg16860875

This isn't supposed to be an altcoin discussion forum. I was originally analyzing blockless chain designs and someone claimed Byteball as prior art. I will not go further on this tangent here. Readers can click the links above and follow the discussion there if they want.
Zcrypt_ZXT
Newbie
Activity: 14
Merit: 0
November 14, 2016, 12:13:25 PM
#16

One of the most interesting threads I've read in a while. Thanks for posting about this; I will give it a deep read asap.
Fuserleer
Legendary
Activity: 1064
Merit: 1020
November 17, 2016, 03:45:24 AM
Last edit: November 17, 2016, 03:59:06 AM by Fuserleer
#17

Thought I'd chime in here, as I've spent a number of years now investigating possible solutions to allow a distributed ledger to process high-TPS throughput (VISA+ scale) yet remain trust-less and decentralized (no supernodes, witnesses, or any of the other myriad semi-centralization tricks used to allow scale). I'm not going to delve too much into the technical with this post, just share some of the ideas and philosophies that I had and where I ultimately settled. Perhaps it can give others some ideas, inspiration, etc.

First though a quick recap....

Way back when (late 2012), the question I wanted an answer to was: at what TPS does a pedigree Satoshi block chain secured with POW start to become problematic?

I performed a number of tests which ultimately converged on a figure of 150-300 TPS, depending on the topology of the network and the average performance of nodes within the network graph. Past that point, orphan thrashing began and rapidly deteriorated the performance of the network in general and the efficiency of POW mining (using "efficient" and "POW" in the same sentence seems a real oxymoron now!). These days, with a higher average node spec and internet connections, I'd wager ~500 TPS would be possible before any headaches (a block size of about 300MB, if anyone is wondering).
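
For context, the rough arithmetic behind that figure (assuming Bitcoin's 600-second block interval and an average transaction size of ~1000 bytes, a value I am supplying):

Code:
tps = 500
block_interval_s = 600   # Bitcoin's target block spacing
bytes_per_tx = 1000      # assumed average transaction size
block_size_mb = tps * block_interval_s * bytes_per_tx / 1e6
print(block_size_mb)     # 300.0 MB, matching the estimate above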

After that (2013), I started to experiment with different ledger architectures, the first of which was what I called a Block Tree (it was really more akin to a DAG). Without getting into too much detail, the premise was that at times of high load the "tree" could widen, and portions of the network could be in varying states of total consensus (parts of the tree missing, for example) but ensure a correct consensus for the parts of the tree they had. With a large enough portion of the correct tree, nodes could estimate the chances of being out of consensus before the fact; when load then decreased, the tree would narrow again and lagging nodes would eventually catch up.

There was some improvement (especially with regard to load spikes), but ultimately the same issues as a block chain surfaced at a higher load, and with extreme continuous load the whole thing fell on its ass.

I then went "full DAG" and dropped the blocks, which again resulted in further improvement, but traditional consensus algorithms (POW, POS, etc) again led to ultimate upper limits and various new problems such as no true global state that a block chain based approach provides.  A DAG also couldn't support a large number of other features that were determined as "must have" for a real mass market targeted product.

That was end of 2014 and I went back to the drawing board completely and developed a ledger architecture called CAST (Channeled Asynchronous State Tree) and a consensus mechanism called EVEI (Evolving Voters via Endorseable Interactions).  Together they allow scaling to VERY high throughput and meet all the necessary requirements.

The eureka moment came with the realization that it is possible to split the data from the state, yet ensure that the data determines the state. This yields a number of very important properties when considering scalability:

1.  The states are small (2000 tps consumes around 50kb per second)
2.  The states have multiple points of origin
3.  The states can be split into sub-states that reference a sub-set of the total transactions

First, let's look at blocks and block chains with regard to the above points:

In a block chain, the block is the state AND the data. This is required due to how the consensus operates with mining: specifically, the miner of the next block may have transactions that others do not know about, so the data has to be packaged along with the state itself (this is true no matter the algorithm, POW, POS, DPOS etc).

This in turn leads to there being only a single point of origin for the next valid block and so it has to propagate over the network.  This leads to the inevitable latency and CAP considerations. If the block is too large and takes too long to fully propagate, orphan thrashing begins to occur and reduces overall performance and efficiency.  Another side effect is that ALL transactions are broadcast twice, once when the transaction is created, and later within the block itself further adding to network and bandwidth overheads.

Finally, a block obviously cannot be split into sub-blocks once it has been mined, to mitigate any of the above.

Going back to CAST and EVEI. In a gossip-driven P2P network, it can be assumed that the majority of nodes will always know about the majority of transactions; therefore the majority of nodes will output the same state independently and without any specific state communication with each other. This covers points #1 and #2, whereby the states can be small (the requirement to embed the data in the state becomes redundant) and have multiple points of origin, grossly reducing propagation time (the majority of nodes already have the state, so in a healthy network propagation is practically zero).
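
A minimal sketch of that decoupling idea (my paraphrase, not Fuserleer's actual CAST/EVEI code): nodes that have gossiped the same transaction set derive an identical compact state independently, so no bulky state object ever needs to propagate:

Code:
import hashlib

def derive_state(transaction_ids: set) -> str:
    # Canonically order the data, then hash it: any node holding the same
    # transaction set outputs the same state with zero state traffic.
    h = hashlib.sha256()
    for txid in sorted(transaction_ids):
        h.update(txid.encode())
    return h.hexdigest()

assert derive_state({"tx_b", "tx_a"}) == derive_state({"tx_a", "tx_b"})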

This greatly increases the performance of the network and its efficiency.  I've witnessed continuous loads of > 500 tps over long periods of time and short term spikes of > 2,500 tps in both small and large networks consisting of hardware ranging from PIs to enterprise servers with no issues.

Furthermore, having a global state of the ledger with consensus mitigates a lot of the problems associated with a DAG and its progressive state mechanics.

Some might argue that CAST + EVEI is then a block chain, and yes, there are some similarities and overlap, but the principles and operational functionality underpinning it are radically different, thus I consider it in a different camp. Either way, call it what you will Smiley

Moving on: 500-2,500+ tps is pretty good, especially when hardware such as a Pi is able to keep pace most of the time with minimal issues, but it's not enough. VISA alone on Black Friday reportedly processes peaks of 40,000 tps, and even discarding Black Friday, adding MasterCard, Amex, Paypal, and all the banking payments into the mix, it quickly becomes obvious that a couple of thousand tps is not enough for a global payments system. Throw IoT in the bag too and the requirements roll into the 100,000+ very quickly. Which is where #3 comes into play.

Block chains are generally unstructured, with the block containing a soup of transactions from various addresses. CAST, on the other hand, is very structured, with addresses owning one or more channels and each transaction having at least 2 components... a spend and a claim. The spend lives in the spender's channel and the claim lives in the receiver's. With this structuring it is very easy to chop the ledger up into more manageable partitions.
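
A sketch of that structuring (again my paraphrase, with hypothetical names): every address owns a channel, each transaction yields a spend entry and a claim entry, and channels shard deterministically across partitions:

Code:
import hashlib

N_PARTITIONS = 1000  # the sweet spot quoted later in this post

def channel_partition(address: str) -> int:
    # Deterministic address -> partition mapping.
    return int(hashlib.sha256(address.encode()).hexdigest(), 16) % N_PARTITIONS

def place(tx: dict) -> tuple:
    # The spend lives in the spender's channel and the claim in the
    # receiver's, so a node serving partition p only needs entries whose
    # channel hashes to p.
    spend = ("spend", tx["from"], channel_partition(tx["from"]))
    claim = ("claim", tx["to"], channel_partition(tx["to"]))
    return spend, claim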

This then leads to a conclusion: with a structured ledger, and compact states that are determined by the data itself, it should be possible for the global ledger state to also be split into sub-states according to each data partition. WIN!

Nodes can configure according to their performance and support n partitions rather than having to upgrade or even go offline to stay in the game as load increases over time.  

EVEI consensus operates at a partition level, and the global state is simply a culmination of all partition level state consensus outcomes.  This functions reliably due to the fact that most nodes will operate more than a single partition and the variance of node partition configurations in the network will lead to an amount of overlap.  This overlap provides an auditable causality of the global state from current and past partition states.

Partitioning the data does bring with it some overhead, and presently the sweet spot seems to be about 1000 partitions before the curve exponent gets too large.  This can probably be improved, but even if not, 1000 partitions each with the ability to process ~500 tps should be more than enough scale for now!

Some might be thinking, "hmm, that partitioning thing sounds awfully similar to Ethereum's sharding", and it does, because it is. However, Ethereum's partitioning/sharding implementation is inferior due to 3 points:

1.  It uses a block chain (or several) and is more akin to a set of side chains, which means there can't be a true consensus on global state
2.  It is difficult and inefficient for shards to communicate, due to the architecture of its smart contract VM and ambiguous state data
3.  It's at least 2 years out; EVEI and CAST are not Smiley

Conclusion and TL;DR:  To scale, remove the block chain, replace with a structured ledger and states that are decoupled from data, use consensus that embraces determinism...then chop the ledger into smaller chunks Smiley


Jabbawa
Full Member
Activity: 179
Merit: 100
November 18, 2016, 10:18:02 AM
Last edit: November 18, 2016, 02:55:21 PM by Jabbawa
#18

Thought I'd chime in here as I've spent a number of years now investigating possible solutions to allow a distributed ledger to process a high tps throughput (VISA+ scale), yet remain trust-less and decentralized (no super nodes, witnesses, or any of the other myriad of semi-centralization tricks to allow scale).  I'm not going to delve too much into the technical with this post, just share some of the ideas and philosophies that I had and where I ultimately settled.  Perhaps it can give others some ideas, inspirations, etc

First though a quick recap....

Way back when (late 2012) the question I wanted an answer to was; at what TPS did a pedigree Satoshi block chain secured with POW start to become problematic.

I performed a number of tests which ultimately concluded in a figure of 150-300 TPS depending on the topology of the network and the average performance of nodes within the network graph.  Past that point orphan thrashing began and deteriorated the performance of the network in general and the efficiency of POW mining rapidly (using efficient and POW in the same sentence seems a real oxymoron now!).  These days with a higher average node spec and internet connections, I'd wager ~500 TPS would be possible before any headaches (a block size of about 300MB if anyone is wondering).

After that (2013) I started to experiment with different ledger architectures, the first of which was what I called a Block Tree (it was really more akin to a DAG).  Without getting into too much detail the premise was that at times of high load, the "tree" could widen and portions of the network could be in varying states of total consensus (parts of the tree missing for example) but ensure a correct consensus for the parts of the tree they had.  With a large enough portion of the correct tree nodes could estimate the chances of being out of consensus before the fact, when load then decreased the tree would narrow again and lagging nodes would eventually catch up.

There was some improvement (especially with regard to load spikes), but ultimately the same issues as a block chain surfaced at a higher load, and with extreme continuous load the whole thing fell on its ass.

I then went "full DAG" and dropped the blocks, which again resulted in further improvement, but traditional consensus algorithms (POW, POS, etc) again led to ultimate upper limits and various new problems such as no true global state that a block chain based approach provides.  A DAG also couldn't support a large number of other features that were determined as "must have" for a real mass market targeted product.

That was end of 2014 and I went back to the drawing board completely and developed a ledger architecture called CAST (Channeled Asynchronous State Tree) and a consensus mechanism called EVEI (Evolving Voters via Endorseable Interactions).  Together they allow scaling to VERY high throughput and meet all the necessary requirements.

The eureka moment was upon the realization that it is possible to split the data from the state, yet ensure that the data determines the state.  This yields a number of very important properties when considering scalability:

1.  The states are small (2000 tps consumes around 50kb per second)
2.  The states have multiple points of origin
3.  The states can be split into sub-states that reference a sub-set of the total transactions

First lets look at blocks and block chains with regard to the above points:

In a block chain the block is the state AND the data.  This is required due to how the consensus operates with mining, specifically the miner of the next block may have transactions that others do not know about so the state data has to be packaged as the state itself (this is true no matter the algorithm, POW, POS, DPOS etc).

This in turn leads to there being only a single point of origin for the next valid block and so it has to propagate over the network.  This leads to the inevitable latency and CAP considerations. If the block is too large and takes too long to fully propagate, orphan thrashing begins to occur and reduces overall performance and efficiency.  Another side effect is that ALL transactions are broadcast twice, once when the transaction is created, and later within the block itself further adding to network and bandwidth overheads.

Finally, a block obviously cannot be split into sub-blocks once it has been mined, so none of the above can be mitigated after the fact.

Going back to CAST and EVEI: in a gossip-driven P2P network it can be assumed that the majority of nodes will always know about the majority of transactions, therefore the majority of nodes will output the same state independently and without any specific state communication with each other.  This covers points #1 and #2: the states can be small because the data no longer needs to be embedded in the state, and the state has multiple points of origin, grossly reducing propagation time (the majority of nodes already have the state, so in a healthy network propagation is practically zero).
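
A toy illustration of those multiple points of origin (again my sketch, not the real implementation): four up-to-date nodes emit an identical state independently, and a lagging node can tell from the digest mismatch that it needs to catch up.

Code:
import hashlib

def state_of(txs):
    # Same idea as the earlier sketch: an order-independent digest of the data.
    return hashlib.sha256(b"".join(sorted(txs))).hexdigest()

all_txs = {b"tx1", b"tx2", b"tx3", b"tx4"}
nodes = [set(all_txs) for _ in range(4)] + [all_txs - {b"tx4"}]

states = [state_of(n) for n in nodes]
print(states[0] == states[1] == states[2] == states[3])  # True: four independent origins
print(states[4] == states[0])                            # False: the lagging node knows it lags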

This greatly increases the performance of the network and its efficiency.  I've witnessed continuous loads of >500 tps over long periods of time, and short-term spikes of >2,500 tps, in both small and large networks consisting of hardware ranging from Raspberry Pis to enterprise servers, with no issues.

Furthermore, having a global state of the ledger with consensus mitigates a lot of the problems associated with a DAG and its progressive state mechanics.

Some might argue that CAST + EVEI is then a block chain, and yes, there are some similarities and overlap, but the principles and operational functionality underpinning it are radically different, thus I consider it in a different camp.  Either way, call it what you will :)

Moving on: 500-2,500+ tps is pretty good, especially when hardware such as a Pi is able to keep pace most of the time with minimal issues, but it's not enough.  VISA alone reportedly processes peaks of 40,000 tps on Black Friday, and even discarding Black Friday, once MasterCard, Amex, PayPal, and all the banking payments are added into the mix, it quickly becomes obvious that a couple of thousand tps is not enough for a global payments system.  Throw IoT into the bag too and the requirements roll into the 100,000+ range very quickly.  Which is where #3 comes into play.

Block chains are generally unstructured, with the block containing a soup of transactions from various addresses.  CAST on the other hand is very structured, with addresses owning one or more channels, and each transaction having at least 2 components... a spend and a claim.  The spend lives in the spender's channel and the claim lives in the receiver's.  With this structuring it is very easy to chop the ledger up into more manageable partitions.
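
A minimal sketch of that structuring (the type and field names here are my guesses, not the real CAST types):

Code:
from dataclasses import dataclass

@dataclass
class Entry:
    kind: str       # "spend" or "claim"
    tx_id: str
    amount: int

channels = {}       # address -> list of entries (one channel per address)

def transfer(tx_id, sender, receiver, amount):
    # Each transaction yields two entries, one per channel it touches.
    channels.setdefault(sender, []).append(Entry("spend", tx_id, amount))
    channels.setdefault(receiver, []).append(Entry("claim", tx_id, amount))

transfer("tx1", "alice", "bob", 5)
print(channels["alice"])  # [Entry(kind='spend', tx_id='tx1', amount=5)]
print(channels["bob"])    # [Entry(kind='claim', tx_id='tx1', amount=5)]
# Partitioning is then trivial: shard by channel, and every entry
# has exactly one home partition.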

This then leads to a conclusion: with a structured ledger, and compact states that are determined by the data itself, it should be possible for the global ledger state to also be split into sub-states according to each data partition.  WIN!

Nodes can configure themselves according to their performance and support n partitions, rather than having to upgrade or even go offline to stay in the game as load increases over time.

EVEI consensus operates at a partition level, and the global state is simply the combination of all partition-level state consensus outcomes.  This functions reliably because most nodes will operate more than a single partition, and the variance of node partition configurations in the network leads to an amount of overlap.  This overlap provides an auditable causality of the global state from current and past partition states.
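
As a sketch of how a global state could be assembled from the partition-level outcomes (my interpretation of the above, not the actual EVEI code):

Code:
import hashlib

def global_state(partition_states):
    # Digest the agreed per-partition states in a canonical order, so any
    # node holding the same partition outcomes derives the same global state.
    digest = hashlib.sha256()
    for pid in sorted(partition_states):
        digest.update(partition_states[pid])
    return digest.hexdigest()

partition_states = {0: b"state-p0", 1: b"state-p1", 2: b"state-p2"}
print(global_state(partition_states))
# At the figures below: 1000 partitions x ~500 tps each ~= 500,000 tps aggregate.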

Partitioning the data does bring some overhead with it, and presently the sweet spot seems to be about 1000 partitions before the overhead curve gets too steep.  This can probably be improved, but even if not, 1000 partitions each with the ability to process ~500 tps should be more than enough scale for now!

Some might be thinking, "hmm, that partitioning thing sounds awfully similar to Ethereum's sharding", and it does, because it is.  However, Ethereum's partitioning/sharding implementation is inferior on 3 points:

1.  It uses one or more block chains and is more akin to a set of side chains, which means there can't be a true consensus on global state
2.  It is difficult and inefficient for shards to communicate due to the architecture of its smart contract VM and ambiguous state data
3.  It's at least 2 years out; EVEI and CAST are not :)

Conclusion and TL;DR:  To scale, remove the block chain, replace it with a structured ledger and states that are decoupled from the data, use consensus that embraces determinism... then chop the ledger into smaller chunks :)



Great post! Very interesting.

What are your thoughts on close group consensus and datachains, aka the MaidSafe solution?

I understand that this has all been theoretical and hard to investigate for the last couple of years, but as of last month things have become much clearer with progress made and dev tutorials etc.

IF (and I understand it is a fairly big 'if') they pull it off, SAFEcoin should scale positively, have instant/zero confirmation times, no mining or centralisation risks (proof of resource), and no fees, and it will be completely private/anonymous like real digital cash - not to mention backed by real computing resources, so more tangible in value than even gold.

Sounds like I'm shilling, I know, but really I just want to know how close a look you have taken at what they are doing in the last few months? Testsafecoin is due for release in January. I don't doubt it will be delayed further, because everything always is, but do you not think that datachains hold the most promise?

https://blog.maidsafe.net/2015/01/29/consensus-without-a-blockchain/

I'm not saying that anyone should be 100% convinced they can pull it off even after 11 years on the job, but IF they do...?
BiTrading
Member
**
Offline Offline

Activity: 95
Merit: 10


View Profile
November 18, 2016, 11:56:16 AM
 #19

Fuserleer, you should check out IOTA (iotatoken.com). It would be interesting to hear your opinion about it.
TransaDox
Full Member
***
Offline Offline

Activity: 219
Merit: 102


View Profile
November 18, 2016, 02:48:32 PM
 #20

Great post! Very interesting.

What are your thoughts on close group consensus and datachains, aka the MaidSafe solution?


XOR, huh? That's the torrent (DHT) distance function too, and interesting features arise...

The distance function is not related to the Merkle tree or DAG, which means that if a node with a remote random ID (as defined by the DHT spec) is required to cache data in its routing table, AND the decision of which data it should cache is defined by the distance between its node ID and the data's hash, then the nodes closest to another randomly generated node ID will effectively hold a pseudo-random sampling of the block chain/CAST (or whatever ledger technology is used).

This means that random samples of the ledger or ledger state can be stored throughout the network and assembled just-in-time as needed, since some blocks are cached locally by each node (as a function of their Merkle/DAG distance from the node ID). The cached blocks act as random checkpoints from which to reconstitute the chain or tree. As the node fills in the data between the hashes for its own benefit, it will be sourcing from multiple and disparate nodes, so subverting the missing pieces would require a sybil attack placing node IDs near each (random) checkpoint in the hope they get chosen over other "close" nodes. Once all data has been filled in between two checkpoints, the confidence that the correct data has been received is extremely high, to the point where data connecting one or two checkpoints would allow safe transactions to begin (faster bootstrap) while the rest continues to fill in, until the entire chain/tree has been verified and the cached blocks become verified checkpoints. Subsequent bootstraps can start from the last verified checkpoint up to the head, and checkpoints can be churned periodically over time.
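
A minimal sketch of that XOR-distance assignment (Kademlia-style; the node names and block IDs are invented for illustration):

Code:
import hashlib

def node_id(name):
    # 256-bit ID, as a DHT would assign; the names here are made up.
    return int.from_bytes(hashlib.sha256(name.encode()).digest(), "big")

def xor_distance(a, b):
    return a ^ b

nodes = [node_id("node-%d" % i) for i in range(8)]
block_hash = node_id("block-0007")

# Each block is cached by whichever node is XOR-closest to its hash; since
# IDs and hashes are effectively random, each node ends up holding a
# pseudo-random sample of the ledger -- the "checkpoints" described above.
closest = min(nodes, key=lambda n: xor_distance(n, block_hash))
print(hex(closest)[:18])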

Assuming a significant number of nodes (a reasonable assumption due to the necessity of mining and the removal of large storage requirements on a single device), the resistance to a sybil attack is extremely high, and attempts are detectable.

The above may not seem relevant to your post, but the "Close Groups" detailed in MaidSafe only need to add the state data (CAST) or block headers/data (Bitcoin) to their routing tables to cache the data across the distributed network.