Author Topic: The Ethereum Paradox  (Read 99905 times)
monsterer
Legendary
Activity: 1008
Merit: 1007
February 17, 2016, 09:49:38 AM
#201

But that underlined text wasn't the problem. Did you forget the point about the Nash equilibrium and all validators needing to trust that the validators from the other partition didn't lie?

If validators lie that convincingly, the whole network is lying and you have a lost cause. In all other cases, transactions will not propagate to the block producers.
TPTB_need_war
Sr. Member
Activity: 420
Merit: 262
February 17, 2016, 09:56:57 AM
#202

But that underlined text wasn't the problem. Did you forget the point about the Nash equilibrium and all validators needing to trust that the validators from the other partition didn't lie?

If validators lie that convincingly, the whole network is lying and you have a lost cause. In all other cases, transactions will not propagate to the block producers.

monsterer, w.r.t. the underlined, I grow very weary of you subjecting all of us to your pretending to be some sort of expert. You may be an expert on bitcoind (presumably you are, given you created an exchange; btw, I am ignorant of bitcoind specifics, although I understand the conceptual aspects I need to know for my design theory).

I have no idea why you can't comprehend what I already explained:

Note that validators can be computing a PoW block based on a hash of their partition and a hash of all the other partitions. Don't forget the power of Merkle trees.

The point is the block producers only need hashes of the partitions. They don't have to verify every transaction. I explained that this maintains Nash equilibrium in certain scenarios:

Note in case it wasn't clear from my upthread posts, strict partitioning (no cross-partition transactions) for a crypto coin (i.e. asset transfers) maintains the Nash equilibrium. But cross-partition transactions for asset transfers do not maintain the Nash equilibrium (unless using a statistical check as I am proposing for my design, and some may think this is dubious, but my white paper will make the argument for it). And strict partitioning for scripts can't exist, because the partitions are violated by external I/O.
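
To make that concrete, here is a minimal sketch (my own illustration, not the actual design; all names are made up) of how a block producer could commit to partitions by their Merkle roots without validating the transactions inside them:

Code:
# Illustrative sketch only (not the actual design): a block producer commits
# to each partition via its Merkle root, so it never has to validate the
# individual transactions inside the partitions.
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    """Reduce a list of hashes to a single Merkle root."""
    if not leaves:
        return h(b"")
    level = list(leaves)
    while len(level) > 1:
        if len(level) % 2:                # duplicate the last hash on odd levels
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def block_header(prev_hash, partitions, nonce):
    """partitions: list of per-partition lists of transaction hashes."""
    roots = [merkle_root(txs) for txs in partitions]
    commitment = merkle_root(roots)       # one commitment over all partitions
    return h(prev_hash + commitment + nonce.to_bytes(8, "big"))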

It seems, monsterer, that you have an inflexible mind. You can only see things in one way, which is whatever you already understand about how bitcoind works. There are other possible designs.

TPTB_need_war
Sr. Member
Activity: 420
Merit: 262
February 17, 2016, 10:15:15 AM
Last edit: February 17, 2016, 10:59:31 AM by TPTB_need_war
#203


In my design, I have cross-partition transactions, but the way I accomplish this and maintain the Nash equilibrium is that I entirely centralize verification, i.e. all transactions are validated by all centralized validators. This eliminates the problem that full nodes have unequal incomes but equal verification costs, thus ameliorating the economics that drive full nodes to become centralized. The centralized validators would still have a potential incentive to lie and short the coin. So the users in the system (hopefully millions of them) are constantly verifying a subsample of the transactions, so that statistically a centralized validator is going to get caught fairly quickly, banned, and their hard-won reputation entirely blown. Since these validations are done in a non-organized manner (i.e. they are randomly chosen by each node), there is no viable way to collude to maintain a lie.


Regarding the last sentence, I take it that it refers to the extra validation
performed by the users? How do you ensure that the selection of the txs to be validated
is done randomly? And what incentives do the user nodes have to perform the extra validation at all?

Users are going to choose randomly because they have no reference point nor game-theory incentive on which to base a non-random priority for which transactions to validate. And I will probably make the transaction(s) they validate be those identified by the PoW share hash they must submit with their transaction.

Users have the incentive to validate, because it is an insignificant cost (perhaps some milliseconds of work every time they submit a transaction) and they have every incentive to make sure the block chain's Nash equilibrium isn't lost by a cheating centralized verifier. Also there may be a Lotto incentive where they win some reward by revealing a proof-of-cheating. There is no game theory by which users are better off by not validating, i.e. an individual failure to validate a subsample doesn't enable the user to short the coin. That was a very clever breakthrough insight (simple but still insightful).
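
As a rough back-of-envelope sketch (my own illustrative numbers, not figures from the design), the probability that a single bad transaction slips past all of the users' random spot checks falls off exponentially with the number of users:

Code:
# Back-of-envelope sketch with made-up numbers: if a cheating validator hides
# one bad transaction in a block of B transactions, and each of U users
# independently spot-checks k randomly chosen transactions, the cheat survives
# one block with probability (1 - k/B)**U.
def prob_cheat_caught(B, k, U):
    return 1 - (1 - k / B) ** U

# Example: 10,000 txs per block, 1 spot check per user, 100,000 users.
print(prob_cheat_caught(10_000, 1, 100_000))   # ~0.99995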

Note my idea could also be applied to partitioning, so if centralized validators can't scale then I would still support partitioning (and even for scripts!), but I think partitioning and reputations of multiple validators muddle the design and also muddle the economics (i.e. I think the validators will end up centralized within a partition anyway due to economics).

Note also that users can't normally validate constraints from the UTXO set because they aren't full nodes. So they will be depending on a full node to relay to them the correct records from the UTXO set, but these can be verified (by the non-full-node users) to be correct from the Merkle trees.
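
For that Merkle verification step, here is a minimal sketch (illustrative only) of how a non-full-node user could check a relayed UTXO record against a block's Merkle root:

Code:
# Illustrative sketch: a light client that trusts a block's Merkle root can
# check a UTXO record relayed by a full node against a Merkle branch (proof)
# without holding the full UTXO set itself.
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def verify_merkle_proof(leaf, proof, root):
    """proof: list of (sibling_hash, sibling_is_left) pairs from leaf to root."""
    node = h(leaf)
    for sibling, sibling_is_left in proof:
        node = h(sibling + node) if sibling_is_left else h(node + sibling)
    return node == root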

So this is why I said Ethereum should perhaps try to hire me, but I also wrote a paragraph upthread saying basically "do not hire me". Hope readers don't think I am begging for a job. Geez.

monsterer
Legendary
Activity: 1008
Merit: 1007
February 17, 2016, 10:18:20 AM
#204

monsterer, w.r.t. the underlined, I grow very weary of you subjecting all of us to your pretending to be some sort of expert.

I have no idea why you can't comprehend what I already explained:

What you are saying makes no sense at all. There are two possibilities as I see it:

1. Invalid transactions arrive in the hands of the block producers. This implies the network as a whole has failed, because invalid transactions should not be propagated.

2. Block producers produce blocks containing invalid transactions due to lack of validation. Their blocks will be orphaned by block producers which do validate, which is a net loss for those who don't, forcing these SPV miners out of business and causing centralisation.

Note in case it wasn't clear from my upthread posts, strict partitioning (no cross-partition transactions) for a crypto coin (i.e. asset transfers) maintains the Nash equilibrium. But cross-partition transactions for asset transfers do not maintain the Nash equilibrium (unless using a statistical check as I am proposing for my design, and some may think this is dubious, but my white paper will make the argument for it). And strict partitioning for scripts can't exist, because the partitions are violated by external I/O.

I don't agree with this assessment either. In a partitioned system which requires a cross-partition transaction, this new transaction merges the two partitions; it places an ordering constraint on both partitions which forces the new transaction to be located after the two points of dependency (both parents).
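
To illustrate the ordering constraint with a toy example (the names are arbitrary), any valid order must place the cross-partition transaction after both of its parents:

Code:
# Toy model of the ordering constraint (names are arbitrary): the
# cross-partition transaction X depends on one parent in each partition, so
# any valid order must place X after both A2 and B2.
from graphlib import TopologicalSorter

deps = {
    "A1": set(), "A2": {"A1"},        # partition A
    "B1": set(), "B2": {"B1"},        # partition B
    "X":  {"A2", "B2"},               # cross-partition transaction
}
print(list(TopologicalSorter(deps).static_order()))
# e.g. ['A1', 'B1', 'A2', 'B2', 'X'] -- X always follows both parents
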
TPTB_need_war
Sr. Member
Activity: 420
Merit: 262
February 17, 2016, 10:40:15 AM
Last edit: February 17, 2016, 10:56:13 AM by TPTB_need_war
#205

monsterer, w.r.t. the underlined, I grow very weary of you subjecting all of us to your pretending to be some sort of expert.

I have no idea why you can't comprehend what I already explained:

What you are saying makes no sense at all.

That is an uncivil accusation against your Professor.

No. As usual when you fail to understand what has been written, you fill a thread with your nonsense and you refuse to grasp your mistake even after it has been explained to you over and over and over again.

Sorry I can't tolerate this any more monsterer.

I guess this is an example to me of why very smart people don't work on very complex projects with people who are not smart enough to work efficiently. I don't like to look down on others, because I hate when others do that to me. Normally I would prefer to respond pretending I am your peer and not desiring to gain any perception from readers otherwise. But I don't know how else to react to your repeated insistence on forcing your inability to comprehend new concepts on everyone else. I would think you might be embarrassed enough from the other times where you eventually realized I was correct, to go take some quiet time and try to figure out your mistake. The problem is that you think I am wrong, and your lack of respect for my superior expertise is where your noise problem originates. I don't mean that to be condescending nor uncivil. Rather I just wish you would get in touch with reality. Let me demonstrate this reality to you again as follows...

Btw, I like questions. But I also like it when the person asking actually tries to understand and think about what I meant, rather than just deciding to view it in only one way and declaring the Professor wrong.

There are two possibilities as I see it:

1. Invalid transactions arrive in the hands of the block producers. This implies the network as a whole has failed, because invalid transactions should not be propagated.

2. Block producers produce blocks containing invalid transactions due to lack of validation. Their blocks will be orphaned by block producers which do validate, which is a net loss for those who don't, forcing these SPV miners out of business and causing centralisation.

#1 does not apply because the design is such that validation is split between partitions. This has been explained to you numerous times! #1 would be a design without partitions and where every full node verifies every transaction, which obviously can't scale while remaining decentralized and which, due to the economics I explained, will always end up centralized. It is a dead design. Bitcoin and Nxt (which is now run by a dictator) have proven these designs end up centralized, as I predicted in 2013 (back when people like you said I was loony).

#2 is why the design for partitioning (or delegation of validation) has to maintain a Nash equilibrium, meaning the requirement that there exists no game theory advantage for partitions (or delegates) to lie about their validation. This point about Nash equilibrium has been explained to you numerous times!

Note in case it wasn't clear from my upthread posts, strict partitioning (no cross-partition transactions) for a crypto coin (i.e. asset transfers) maintains the Nash equilibrium. But cross-partition transactions for asset transfers do not maintain the Nash equilibrium (unless using a statistical check as I am proposing for my design, and some may think this is dubious, but my white paper will make the argument for it). And strict partitioning for scripts can't exist, because the partitions are violated by external I/O.

I don't agree with this assessment either. In a partitioned system which requires a cross-partition transaction, this new transaction merges the two partitions; it places an ordering constraint on both partitions which forces the new transaction to be located after the two points of dependency (both parents).

Are you fucking blind?

Let me quote again the part that renders your underlined complaint irrelevant:

Note in case it wasn't clear from my upthread posts, strict partition (no cross-partition transactions) for crypto coin (i.e. asset transfers) maintains Nash equilibrium. But cross-partition transactions for asset transfers does not maintain Nash equilibrium (unless using a statistical check as I am proposing for my design, and some may think this is dubious but my white paper will make the argument for it). And strict partitioning for scripts can't exist, because the partitions are violated by external I/O.

Why can't you read and comprehend what you are reading? Fuck, man. It is very frustrating to deal with someone like you who can't even add 1+1 together to realize what is being said.

The point is that in the cases where the Nash equilibrium is maintained, there is no lying which renders the block invalid.

The case of strict partitioning which I explained upthread does not cause the block to be invalidated if the partition lied (and I think I explained that in my video too)! Did you forget that again? Do I need to go quote that upthread statement of mine again? Because the partitions are independent, a partition can be invalidated without needing to invalidate the entire block (i.e. the next block corrects the partition in the prior block by providing a proof-of-cheating).
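
A rough sketch of that proof-of-cheating idea (illustrative only, not the actual protocol): a later block carries a fraud proof that invalidates only the offending partition, while the rest of the block stands:

Code:
# Illustrative sketch, not the actual protocol: a fraud proof in a later block
# invalidates only the offending partition; the rest of the block stands.
from dataclasses import dataclass

@dataclass
class Partition:
    txs: list
    valid: bool = True        # flipped if a fraud proof lands against it

def apply_fraud_proof(partitions, partition_id, bad_tx, reason):
    """partitions: dict of partition_id -> Partition within one block."""
    part = partitions[partition_id]
    if bad_tx not in part.txs:            # the proof must cite an included tx
        return "proof rejected: tx not in partition"
    part.valid = False                    # only this partition is rolled back
    return f"partition {partition_id} invalidated: {reason}"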

monsterer
Legendary
Activity: 1008
Merit: 1007
February 17, 2016, 10:59:25 AM
#206

#1 does not apply because the design is such that validation is split between partitions. This has been explained to you numerous times! #1 would be

What is 'the' design? This is a thread about Ethereum, where full nodes do validation and miners produce blocks.

a design without partitions and where every full node verifies every transaction, which obviously can't scale and which due to economics that I

A partitioned network operates exactly like two independent networks. If invalid transactions propagate throughout a network, that network is in trouble, regardless of whether there are partitions or not.

#2 is why the design for partitioning (or delegation of validation) has to maintain a Nash equilibrium, meaning the requirement that there exists no game theory advantage for partitions (or delegates) to lie about their validation. This point about Nash equilibrium has been explained to you numerous times!

My point is that if you allow block producers to produce invalid blocks, that will be gamed by those who do validate, leading to SPV miners being pushed out of business.

Are you fucking blind?

No need to take that tone.

You are clearly only thinking about your design here. Merging partitions increases the strength of the network; the only problem is providing the correct incentive to do the merge.
TPTB_need_war
Sr. Member
Activity: 420
Merit: 262
February 17, 2016, 11:01:42 AM
#207

#1 does not apply because the design is such that validation is split between partitions. This has been explained to you numerous times! #1 would be

What is 'the' design? This is a thread about Ethereum, where full nodes do validation and miners produce blocks.

The thread is about Ethereum's promised future version named Casper, which is supposed to introduce sharding (partitions). My gosh, how did you miss that?

monsterer
Legendary
Activity: 1008
Merit: 1007
February 17, 2016, 11:12:48 AM
#208

#1 does not apply because the design is such that validation is split between partitions. This has been explained to you numerous times! #1 would be

What is 'the' design? This is a thread about Ethereum, where full nodes do validation and miners produce blocks.

The thread is about Ethereum's promised future version named Casper which is supposed to introduce sharding (partitions). My gosh how did you miss that.

I think we're talking at cross purposes then. I was talking about Ethereum as it is now, extended to handle partitions. I maintain Casper is an all-around bad idea motivated by politics.
TPTB_need_war
Sr. Member
Activity: 420
Merit: 262
February 17, 2016, 11:32:19 AM
#209

a design without partitions and where every full node verifies every transaction, which obviously can't scale and which due to economics that I

A partitioned network operates exactly like two independent networks. If invalid transactions propagate throughout a network, that network is in trouble, regardless of whether there are partitions or not.

You apparently still haven't comprehended that for a strict partitioning that obeys the Nash equilibrium, i.e. for transactions that are never allowed to cross partitions, the partition doesn't need a separate PoW nor a separate block chain. It can be essentially merge-mined (but in the same block chain) without impacting the Nash equilibrium for the block producers. And my other point was that strict partitioning can't exist for scripting, yet it can exist for asset transfers (e.g. crypto coin transactions).

#2 is why the design for partitioning (or delegation of validation) has to maintain a Nash equilibrium, meaning the requirement that there exists no game theory advantage for partitions (or delegates) to lie about their validation. This point about Nash equilibrium has been explained to you numerous times!

My point is that if you allow block producers to produce invalid blocks, that will be gamed by those who do validate, leading to SPV miners being pushed out of business.

You apparently still haven't understood the point. The blocks are not invalid when a strict partition is invalid.

Of course one might argue that strict partitioning (which by definition is without cross-partition transactions) is not that flexible. But nevertheless the point remains that there is a design which refutes your assumption.

monsterer
Legendary
Activity: 1008
Merit: 1007
February 17, 2016, 11:50:39 AM
#210

You apparently still haven't comprehended that for a strict partitioning that obeys the Nash equilibrium, i.e. for transactions that are never allowed to cross partitions, the partition doesn't need a separate PoW nor a separate block chain. It can be essentially merge-mined (but in the same block chain) without impacting the Nash equilibrium for the block producers. And my other point was that strict partitioning can't exist for scripting, yet it can exist for asset transfers (e.g. crypto coin transactions).

I understand perfectly. Under this model, the incentive for block producers is to exclude partitions from their blocks, because every partition they include increases the chance of their block being orphaned due to double spends. In fact, this is analogous to the problem faced by Iota.
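
A quick back-of-envelope check of that claim (my own arithmetic, with an assumed per-partition conflict probability): a block containing N partitions survives only if none of them turns out to contain an invalidating conflict, so the orphan risk compounds with every partition included:

Code:
# Back-of-envelope check with an assumed per-partition conflict probability p:
# a block with N partitions survives only if none of them contains an
# invalidating conflict, so the orphan risk compounds with every partition.
def orphan_probability(p, n_partitions):
    return 1 - (1 - p) ** n_partitions

for n in (1, 4, 16, 64):
    print(n, round(orphan_probability(0.01, n), 4))
# -> 0.01, 0.0394, 0.1485, 0.4744 (approximately)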

You apparently still haven't understood the point. The blocks are not invalid when a strict partition is invalid.

Of course one might argue that strict partitioning (which by definition is without cross-partition transactions) is not that flexible. But nevertheless the point remains that there is a design which refutes your assumption.

A strict partition which is invalid serves no purpose that I can see?
TPTB_need_war
Sr. Member
Activity: 420
Merit: 262
February 17, 2016, 11:54:18 AM
#211

You apparently still haven't comprehended that for a strict partitioning that obeys the Nash equilibrium, i.e. for transactions that are never allowed to cross partitions, the partition doesn't need a separate PoW nor a separate block chain. It can be essentially merge-mined (but in the same block chain) without impacting the Nash equilibrium for the block producers. And my other point was that strict partitioning can't exist for scripting, yet it can exist for asset transfers (e.g. crypto coin transactions).

I understand perfectly. Under this model, the incentive for block producers is to exclude partitions from their blocks, because every partition they include increases the chance of their block being orphaned due to double spends. In fact, this is analogous to the problem faced by Iota.

No you don't. And you are slobbering all over the thread. I guess I have to put you on ignore again.

Excluding a partition means the coin dies and thus their block rewards become worthless. You seem to not even comprehend Nash equilibrium. Really this is getting to be too much. You constantly waste my time. And you feel no remorse.
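
A toy payoff comparison for that Nash-equilibrium point (my own illustrative numbers, not part of the design): excluding partitions may cut the orphan risk, but if exclusion undermines the coin, the reward being protected is worth less, so honest inclusion remains the better strategy:

Code:
# Toy payoff comparison with illustrative numbers: excluding partitions may
# cut the orphan risk to zero, but if exclusion undermines the coin, the
# reward being protected is worth less, so inclusion remains the better play.
def expected_reward(block_reward, orphan_risk, coin_value_multiplier):
    return block_reward * (1 - orphan_risk) * coin_value_multiplier

include = expected_reward(25, orphan_risk=0.05, coin_value_multiplier=1.0)
exclude = expected_reward(25, orphan_risk=0.00, coin_value_multiplier=0.5)
print(include, exclude)   # 23.75 vs 12.5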

You apparently still haven't understood the point. The blocks are not invalid when a strict partition is invalid.

Of course one might argue that strict partitioning (which by definition is without cross-partition transactions) is not that flexible. But nevertheless the point remains that there is a design which refutes your assumption.

A strict partition which is invalid serves no purpose that I can see?

You seem to be incapable of remembering anything I write:

The case of strict partitioning which I explained upthread does not cause the block to be invalidated if the partition lied (and I think I explained that in my video too)! Did you forget that again? Do I need to go quote that upthread statement of mine again? Because the partitions are independent, a partition can be invalidated without needing to invalidate the entire block (i.e. the next block corrects the partition in the prior block by providing a proof-of-cheating).

TPTB_need_war
Sr. Member
Activity: 420
Merit: 262
February 17, 2016, 12:06:29 PM
#212

Note I edited my post on the prior page of this thread and inserted the following. See my new comments below this quoted text.

[1]
Correct with regard to your first scenario where 2 partitions never talk to each other in the future: you don't need to consider it. If they do talk to each other in the future, and have to merge, this is where Bitcoin, blocks, POW and the longest chain rule fall on their arse. Only one partition can exist; there is no merge possibility, so the other has to be destroyed. Even if the 2 partitions have not existed for an extended period of time you are screwed, as they can never merge without a significant and possibly destructive impact to ALL historic transactions prior to the partition event, so you end up with an unresolvable fork. I feel this is a critical design issue which unfortunately for Bitcoin imposes a number of limitations.

CAP theorem certainly doesn't imply you can't ever fulfill C, A and P, as most of the time you can, at least enough to get the job done. What it does state is that you can't fulfill all 3 to any sufficient requirement 100% of the time, as there will always be some edge cases that require the temporary sacrifice of C, A or P. This isn't the end of the world though, as detecting an issue with P is possible once nodes with different partitions communicate, at which point you can sacrifice C or A for a period of time while you deal with the issue of P.

If you structure your data set in a flexible enough manner, then you can limit the impact of P further. Considering CAP theorem once again, there is no mandate that prohibits most of the network being in a state that fulfills C, A and P, with a portion of the network being in a state of partition conflict. For example, if there is a network of 100 nodes, and 1 of those nodes has a different set of data to everyone else and thus is on its own partition, the remaining 99 nodes can still be in a state of CAP fulfillment. The rogue node now has to sacrifice C or A in order to deal with P, while the rest of the network can continue on regardless.

All of this can be done without blocks quite easily; the difficulty is how to deal with P in the event of a failure, which is where consensus algorithms come into play.

Bitcoin's consensus of blocks and POW doesn't allow for merging, as stated, even if the transactions on both partitions are valid and legal.

DAGs and Tangles DO allow merging of partitions, but there are important gotchas to consider, as TPTB rightly suggests; they aren't as catastrophic as he imagines, and I'm sure that CfB has considered them and implemented functionality to resolve them.

Channels also allow merging of partitions (obviously that's why I'm here), but critically they allow a node to be in both states of CAP fulfillment simultaneously. For the channels in which it has P conflicts a node can sacrifice C or A; for the rest it can still fulfill CAP.


Let's rewind a bit and look at what's really going on under Bitcoin's hood.

Natural network partitions arise in BTC from 1 of 4 events happening:

1.  A node/nodes accept a block that has transactions which are double-spending an output present in another block
2.  A miner produces a block that conflicts with a block on the same chain height
3.  Network connectivity separates 2 parts of the network
4.  A miner has control of 51% or more

All 4 of these create a P inconsistency, and so the LCR (longest chain rule) kicks into action to resolve them. 

In the case of 1, miners can filter these against historic outputs and just reject the transaction.  If multiple transactions are presented in quick succession that spend the same output, miners pick one to include in a block, or they could reject all of them.  On the receipt of a valid block, the remaining double-spend transactions that are not in a block get dumped.  If a block with a higher POW then turns up, all nodes switch to that block, which may or may not include a different transaction of the double-spend set.

In the case of 2, this happens ALL the time.  Orphans cause temporary partitions in the network, but the duration between them is short enough that it doesn't cause any inconvenience.  Worst case you have to wait a little longer for your transaction to be included in the next block if the accepted block which negates the orphan block doesn't have yours in it.

In the case of 3, if the separation duration is short, see 2. If it's long and sustained, 1 of the partitions will have to be destroyed, undoing any actions performed, legal or otherwise, causing disruption and inconvenience.

In the case of 4, well, it's just a disaster. Blocks can potentially be replaced all the way back to the last checkpoint, and all transactions from that point could be destroyed.
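
For reference, a minimal sketch of the LCR selection being described here, in its most-cumulative-work form (illustrative only, not Bitcoin Core code):

Code:
# Minimal sketch of the LCR being described (illustrative, not Bitcoin Core):
# the tip with the most cumulative proof-of-work wins, and blocks on the
# losing tip are orphaned.
def best_tip(tips):
    """tips: list of (tip_id, [work_of_each_block, ...])."""
    return max(tips, key=lambda t: sum(t[1]))[0]

chains = [
    ("A", [100, 100, 100]),
    ("B", [100, 100, 100, 95]),   # more cumulative work
]
print(best_tip(chains))           # 'B' -- chain A's tip blocks get orphaned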

There can also be local partition inconsistencies, where a node has gone offline, and shortly after a block or blocks have been accepted by the network that invalidate one or more of the most recent blocks it has. Once that node comes back online it syncs to the rest of the network and does not fulfill CAP at all. The invalid blocks that it has prior to coming back online are destroyed and replaced.

You could argue that this node creates a network level partition issue also to some degree, as it has blocks that the network doesn't, but the network will already have resolved this P issue in the past as it would have triggered an orphan event, thus I deem it to be a local P issue.

So what's my point?

In the cases of 1 or 2 there does not need to be any merging of partitions.  Bitcoin handles these events perfectly well with blocks, POW and LCR with minimal inconvenience to honest participants providing that the partition duration of the network is short (a few blocks). 

In the case of 3, which is by far the most difficult to resolve, the partition tolerance reduces in proportion to the duration of the partitioned state, and it becomes more difficult to resolve without consequence in any system, as there may be conflicting actions which diverge the resulting states of the partitions further away from each other. These partition events will always become unsolvable at some point, no matter what data structure, consensus mechanism or other exotic method is employed, as it is an eventuality that one or more conflicts will occur.

The fact is that DAGs/Tangles and our channels have better partition resolution performance in the case of event 3, as the data structures are more granular. An inconsistency in P doesn't affect the entire data set, only a portion of it; thus it is resolvable without issue more frequently, as the chances of a conflict preventing resolution are reduced.

Now, you haven't provided any detail on exactly how you imagine a data structure that uses blocks that could merge non-conflicting partitions, let alone conflicting ones.  In fact I see no workable method to do this with blocks that may contain transactions across the entire domain.  Furthermore, who creates these "merge" blocks and what would be the consensus mechanism to agree on them?  In the event of a conflict, how do you imagine that would be resolved?

When it comes to partition management and resolution where block-based data structures are employed, Satoshi has already given you the best they can do in the simplest form. Trying to do it better with blocks is IMO a wild goose chase, and you'll get nowhere other than an extremely complicated and fragile system.

I believe I have figured out what Fuserleer's design is doing based on re-reading the descriptions above.

I believe what he is attempting is to define a data structure wherein he can partition double-spends so that they can not cross-partition each other. In other words, once there is a double-spend, instead of discarding it (and unrelated transactions in the same chain), he isolates those transactions which depend on the double-spend and prevents them from cross-pollinating each other in derivative transactions.

The problem with this of course is it ruins the incentive to converge. It becomes a divergent block chain where the incentive is to double-spend and create forks (within the same system).
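
Here is a sketch of how I read that isolation idea (my own interpretation and code, not Fuserleer's): walk the DAG forward from the conflicting spend and mark everything that depends on it, so untainted transactions keep processing normally:

Code:
# My own interpretation sketched in code (not Fuserleer's implementation):
# walk the DAG forward from a conflicting spend and mark everything that
# depends on it, so untainted transactions keep processing normally.
from collections import deque

def tainted_set(children, conflict_tx):
    """children: tx -> list of txs that spend its outputs."""
    tainted, queue = {conflict_tx}, deque([conflict_tx])
    while queue:
        for child in children.get(queue.popleft(), []):
            if child not in tainted:
                tainted.add(child)
                queue.append(child)
    return tainted

children = {"ds1": ["t2"], "t2": ["t3"], "t4": ["t5"]}   # 't4' branch is unrelated
print(tainted_set(children, "ds1"))   # {'ds1', 't2', 't3'}; t4 and t5 unaffected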

I'd be interested to hear Fuserleer's retort. I will PM him.

monsterer
Legendary
Activity: 1008
Merit: 1007
February 17, 2016, 12:08:19 PM
#213

No you don't. And you are slobbering all over the thread. I guess I have to put you on ignore again.

Excluding a partition means the coin dies and thus their block rewards become worthless. You seem to not even comprehend Nash equilibrium. Really this is getting to be too much. You constantly waste my time. And you feel no remorse.

Yes, exactly the point. Excluding partitions is against consensus and leads to a divergent mess. That's the dichotomy at hand; on the one hand including partitions is sub-optimal for the block producers, but on the other hand it is essential for the network.

You can put me on ignore if you like, but you will be doing yourself a disservice if you are unable to prove to yourself that including N partitions does not increase the likelihood of a block being orphaned by some factor of N.

The case of strict partitioning which I explained upthread does not cause the block to be invalidated if the partition lied (and I think I explained that in my video too)! Did you forget that again? Do I need to go quote that upthread statement of mine again? Because the partitions are independent, a partition can be invalidated without needing to invalidate the entire block (i.e. the next block corrects the partition in the prior block by providing a proof-of-cheating).

This is exactly why I don't like Casper; the theory is a total mess. Just do the validation inside the partition in the first place, rather than putting the cart before the horse like that.
TPTB_need_war
Sr. Member
Activity: 420
Merit: 262
February 17, 2016, 12:11:38 PM
#214

That's the dichotomy at hand

There is no dichotomy in the case of strict partitions for asset transfers. I will not repeat myself again.

monsterer
Legendary
Activity: 1008
Merit: 1007
February 17, 2016, 01:12:13 PM
#215

That's the dichotomy at hand

There is no dichotomy in the case of strict partitions for asset transfers. I will not repeat myself again.

Then let's agree to disagree.
monsterer
Legendary
Activity: 1008
Merit: 1007
February 17, 2016, 01:40:37 PM
#216

I believe I have figured out what Fuserleer's design is doing based on re-reading the descriptions above.

I believe what he is attempting is to define a data structure wherein he can partition double-spends so that they can not cross-partition each other. In other words, once there is a double-spend, instead of discarding it (and unrelated transactions in the same chain), he isolates those transactions which depend on the double-spend and prevents them from cross-pollinating each other in derivative transactions.

The problem with this of course is it ruins the incentive to converge. It becomes a divergent block chain where the incentive is to double-spend and create forks (within the same system).

I cannot speak for Fuserleer, but I can say that this is how the design I am writing up works. Double spends do not create orphaned branches; they simply become invalid, and subsequent, unrelated transactions process as normal. Therefore, you cannot create a divergent mess by double spending, because double spends can coexist within the DAG - this is only possible because of the eventual total order.
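
A minimal sketch of that resolution rule (illustrative only, assuming the total order is already agreed): the first spend of an output in the final order is accepted and any later spend of the same output is marked invalid, without orphaning anything else:

Code:
# Illustrative sketch, assuming the total order is already agreed: the first
# spend of an output in the final order is accepted; any later spend of the
# same output is marked invalid, and nothing else is orphaned.
def resolve(total_order):
    """total_order: list of (tx_id, spent_output) in their final order."""
    spent, accepted, invalid = set(), [], []
    for tx_id, output in total_order:
        if output in spent:
            invalid.append(tx_id)        # losing side of the double spend
        else:
            spent.add(output)
            accepted.append(tx_id)
    return accepted, invalid

print(resolve([("a", "utxo1"), ("b", "utxo1"), ("c", "utxo2")]))
# (['a', 'c'], ['b'])
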
Anima
Member
Activity: 63
Merit: 10
February 17, 2016, 01:57:55 PM
#217


I believe I have figured out what Fuserleer's design is doing based on re-reading the descriptions above.

I believe what he is attempting is to define a data structure wherein he can partition double-spends so that they can not cross-partition each other. In other words, once there is a double-spend, instead of discarding it (and unrelated transactions in the same chain), he isolates those transactions which depend on the double-spend and prevents them from cross-pollinating each other in derivative transactions.

The problem with this of course is it ruins the incentive to converge. It becomes a divergent block chain where the incentive is to double-spend and create forks (within the same system).

I'd be interested to hear Fuserleer's retort. I will PM him .


No need. Note the date.

https://twitter.com/eMunie_Currency/status/563728882415992832

Best regards from Anima - proud member of the Radix team.
TPTB_need_war
Sr. Member
Activity: 420
Merit: 262
February 17, 2016, 02:41:36 PM
#218


I believe I have figured out what Fuserleer's design is doing based on re-reading the descriptions above.

I believe what he is attempting is to define a data structure wherein he can partition double-spends so that they can not cross-partition each other. In other words, once there is a double-spend, instead of discarding it (and unrelated transactions in the same chain), he isolates those transactions which depend on the double-spend and prevents them from cross-pollinating each other in derivative transactions.

The problem with this of course is it ruins the incentive to converge. It becomes a divergent block chain where the incentive is to double-spend and create forks (within the same system).

I'd be interested to hear Fuserleer's retort. I will PM him .


No need. Note the date.

https://twitter.com/eMunie_Currency/status/563728882415992832

You can replace the words "block chain" with "consensus system" in my quoted text. The point about the design remains that even by making partitions strict and allowing double-spends to live in separate partitions, it creates afaics a divergent system that incentivizes double-spending, doesn't provide consensus over which double-spend is valid, and is thus a chaotic failure.

TPTB_need_war
Sr. Member
Activity: 420
Merit: 262
February 17, 2016, 02:46:09 PM
#219

I believe I have figured out what Fuserleer's design is doing based on re-reading the descriptions above.

I believe what he is attempting is to define a data structure wherein he can partition double-spends so that they can not cross-partition each other. In other words, once there is a double-spend, instead of discarding it (and unrelated transactions in the same chain), he isolates those transactions which depend on the double-spend and prevents them from cross-pollinating each other in derivative transactions.

The problem with this of course is it ruins the incentive to converge. It becomes a divergent block chain where the incentive is to double-spend and create forks (within the same system).

I cannot speak for Fuserleer, but I can say that this is how the design I am writing up works. Double spends do not create orphaned branches, they simply become invalid and subsequent, unrelated transactions process as normal. Therefore, you cannot create a divergent mess by double spending, because they can coexist within the DAG - this is only possible because of the eventual total order.

The system must have some means of converging on a consensus choice amongst competing double-spends.

Your thread is a discussion about that. We don't need to repeat that discussion here. I urge readers to click that link if they want to read what is being discussed about whether there could be an alternative to Iota/DAG, which is a mathematical model whose control, I assert, must be centralized in order not to diverge.

monsterer
Legendary
Activity: 1008
Merit: 1007
February 17, 2016, 02:48:35 PM
#220

The system must have some means of converging on a consensus choice amongst competing double-spends.

Your thread is a discussion about that. We don't need to repeat that discussion here.

Eventual total ordering.