I think ultimately you are perhaps concentrating on the issue of 2 or more partitions existing for an extended period of time.
You must be referring to monsterer's comments. I was always emphasizing that 2 or more temporary partitions can be tolerated if there is a resolution mechanism.
I was referring to you both.
No temporary partitions can be tolerated at all if the resolution of said partitions is the total destruction of one partition or another. This is the only resolution method possible if the data structure is block based with POW or POS.
If you think otherwise, explain how two blocks secured with POW (or POS) could be merged between two block chains without loss of data. You don't need details on my protocol, or anyone else's, to explain how this would be possible.
Temporary partitions can be merged if they don't contain conflicting double-spends. The advantage of creating temporary partitions in this context (what I contemplated in my design) is that, afaics, it can both enable a form of instant transactions and radically reduce the data that must be propagated as a transient spike on block announcement. That addresses the propagation delay which drives the orphan rate, one of the main issues with increasing the block size in Bitcoin and other Satoshi coins such as Monero. Note I didn't write that the temporary partitions have to use PoW or PoS; only the global resolution mechanism does.
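To make the merge condition concrete, here is a minimal, hypothetical sketch (not from any actual protocol; the transaction fields and function names are my own assumptions): two partitions can be combined only when no output is spent by different transactions on each side.

```python
# Hypothetical sketch: merge two temporary partitions' transaction sets
# only if they contain no conflicting double-spends.

def spends_by_outpoint(txs):
    """Map each spent outpoint (prev_txid, index) to the txid spending it."""
    return {inp: tx["txid"] for tx in txs for inp in tx["inputs"]}

def try_merge(partition_a, partition_b):
    a = spends_by_outpoint(partition_a)
    b = spends_by_outpoint(partition_b)
    # A conflict exists if the same outpoint is spent by *different* transactions.
    if any(a[op] != b[op] for op in a.keys() & b.keys()):
        return None  # conflicting double-spend; needs the global resolution mechanism
    merged = {tx["txid"]: tx for tx in partition_a + partition_b}
    return list(merged.values())
```

Anything this check rejects is exactly the case where some global ordering mechanism has to pick a winner.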
Let's rewind a bit and look at what's really going on under Bitcoin's hood.
Natural network partitions arise in BTC from 1 of 4 events:
1. A node or nodes accept a block containing transactions that double-spend an output present in another block
2. A miner produces a block that conflicts with a block at the same chain height
3. Network connectivity separates 2 parts of the network
4. A miner gains control of 51% or more of the hash rate
All 4 of these create a P inconsistency, and so the LCR (longest chain rule) kicks into action to resolve them.
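For readers unfamiliar with it, LCR in its simplest form just means every node independently adopts the chain with the most accumulated work. A toy illustration (the block fields are placeholders, not Bitcoin's actual structures):

```python
def best_chain(candidate_chains):
    """Longest chain rule in spirit: adopt the chain with the most
    accumulated proof-of-work, not literally the most blocks."""
    return max(candidate_chains, key=lambda chain: sum(block["work"] for block in chain))
```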
In the case of 1, miners can filter these against historic outputs and simply reject the transaction. If multiple transactions are presented in quick succession that spend the same output, miners pick one to include in a block, or they could reject all of them. On receipt of a valid block, the remaining double-spend transactions that are not in a block get dumped. If a block with higher POW then turns up, all nodes switch to that block, which may or may not include a different transaction from the double-spend set.
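A rough sketch of that filtering step, under the assumption that a miner tracks an unspent-output set plus the outputs already claimed by its pending transactions (field names are illustrative, not Bitcoin's actual code):

```python
def accept_to_mempool(tx, utxo_set, mempool_spent):
    """Reject a transaction that spends a missing/spent output or conflicts
    with a transaction already pending in the mempool."""
    for outpoint in tx["inputs"]:
        if outpoint not in utxo_set:      # output already spent in a block, or never existed
            return False
        if outpoint in mempool_spent:     # double-spends a pending transaction
            return False
    mempool_spent.update(tx["inputs"])    # reserve these outputs
    return True
```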
In the case of 2, this happens ALL the time. Orphans cause temporary partitions in the network, but their duration is short enough that they don't cause any inconvenience. Worst case, you have to wait a little longer for your transaction to be included in the next block, if the accepted block which negates the orphan block doesn't have yours in it.
In the case of 3, if the separation duration is short, see 2. If it's long and sustained, 1 of the partitions will have to be destroyed and any actions performed in it undone, legitimate or otherwise, causing disruption and inconvenience.
In the case of 4, well, it's just a disaster. Blocks can potentially be replaced all the way back to the last checkpoint, and all transactions from that point could be destroyed.
There can also be local partition inconsistencies, where a node has gone offline and, shortly after, a block or blocks have been accepted by the network that invalidate one or more of the most recent blocks it holds. Once that node comes back online it syncs to the rest of the network and does not fulfill CAP at all. The invalid blocks it had prior to coming back online are destroyed and replaced.
You could argue that this node creates a network-level partition issue too to some degree, as it has blocks that the network doesn't, but the network will already have resolved this P issue in the past, as it would have triggered an orphan event; thus I deem it to be a local P issue.
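As an illustrative sketch of that local resync (the chain representation is a placeholder, not Bitcoin's actual sync logic): the returning node finds where its chain diverges from the network's heavier chain and discards everything after the fork point.

```python
def resync(local_chain, network_chain):
    """Adopt the network's chain; return the local blocks that get replaced."""
    fork = 0
    while (fork < len(local_chain) and fork < len(network_chain)
           and local_chain[fork]["hash"] == network_chain[fork]["hash"]):
        fork += 1
    discarded = local_chain[fork:]   # the node's now-invalid blocks are destroyed
    return network_chain, discarded
```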
So what's my point?
In the cases of 1 or 2 there does not need to be any merging of partitions. Bitcoin handles these events perfectly well with blocks, POW and LCR, with minimal inconvenience to honest participants, provided that the partition duration of the network is short (a few blocks).
In the case of 3, which is by far the most difficult to resolve, the partition tolerance reduces in proportion to the duration of the partitioned state, and it becomes more difficult to resolve without consequence in any system, as there may be conflicting actions which diverge the resulting states of the partitions further away from each other. These partition events will always become unsolvable at some point, no matter what data structure, consensus mechanism or other exotic methods are employed, as it is an eventuality that one or more conflicts will occur.
The fact is that DAGs/Tangles and our channels have better partition resolution performance in the case of event 3, as the data structures are more granular. An inconsistency in P doesn't affect the entire data set, only a portion of it, so it is resolvable without issue more frequently, as the chance of a conflict preventing resolution is reduced.
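To illustrate the granularity point (purely a toy model, not any particular DAG design): when two entries conflict, only the entries that build on the losing side are affected, which can be computed as a reachability query rather than discarding a whole chain.

```python
def affected_by(losing_txid, children):
    """children maps txid -> txids that directly reference it as a parent.
    Returns the set of entries invalidated if losing_txid is discarded."""
    affected, stack = set(), [losing_txid]
    while stack:
        txid = stack.pop()
        if txid in affected:
            continue
        affected.add(txid)
        stack.extend(children.get(txid, []))
    return affected
```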
Now, you haven't provided any detail on exactly how you imagine a data structure that uses blocks that could merge non-conflicting partitions, let alone conflicting ones. In fact I see no workable method to do this with blocks that may contain transactions across the entire domain. Furthermore, who creates these "merge" blocks and what would be the consensus mechanism to agree on them? In the event of a conflict, how do you imagine that would be resolved?
When it comes to partition management and resolution where block-based data structures are employed, Satoshi has already given you the best they can do in the simplest form. Trying to do it better with blocks is IMO a wild goose chase, and you'll end up with nothing but an extremely complicated and fragile system.
As for your (and Iota's) idea of using a different global resolution method, I maintain that no design will be sound if it doesn't use blocks with PoW (or the inferior PoS). The reason is that there is no other way to gain Byzantine consistency on double-spends. I have thought about this in great detail for months and years. I will be very stunned if you or anyone else has found an exception to this rule. It seems to be fundamental to the Byzantine Generals Problem.

As I explained upthread, my understanding for Iota is that if competing cliques with less than 50% of the total hash rate each decide, for whatever reason, not to confirm each other's chains, then there is no mechanism in Iota's DAG without blocks to force a consensus, and thus the user of the currency has to contend with forks and multiple spends on multiple partitions. Although this may not occur in the early stages for Iota, if it really gains any significant value (not just insiders buying from themselves to pump the illusion of a large market cap and trade volume), that is when it will be tested conceptually. Satoshi's PoW design in Bitcoin (Monero, etc.) has already passed this crucial test, but PoS and other block chain consensus designs have not yet.

I suspect that what you are attempting to design for eMunie is some resolution based on propagation and different powers for different types of nodes in the network, and I am confident I will be able to point out to you how this is unsound once you release the details. If you want to waste your effort in this direction, who am I to discourage you. I personally don't want to waste any more effort on pie-in-the-sky failure delusion. I can't really nail down with 100% certainty whether your design is a delusion until I see all the details, but I strongly suspect it is. I don't say this to be unfriendly with you. I am concerned that we are wasting a lot of effort and resources. I am trying to be very frank with myself as well.
Apologies for misspelling your username upthread.
Note I have proposed that including PoW with each transaction may be a solution to the flaws in Satoshi's design which cause it to drive centralization, government takeover, and oligarchy. But I need to spend some more time going over the details of that idea. Note that apparently Iota may be implementing some variant of that idea, but as I explained upthread, I don't think it can be Byzantine fault tolerant on Consistency without blocks.
I've said this before and been laughed out of court, but I'm going to say it again.
Bitcoin is not Byzantine fault tolerant, for 2 reasons:
1. The nodes in the network are not agreeing on the data set; they are agreeing on the expenditure of a resource
2. Due to #1, someone with enough of that resource can undo and rewrite history
In a true Byzantine fault tolerant system, the data set is append-only and the rules used to determine the inclusion of new data rely totally on the existing data.
Bitcoin, or any crypto for that matter, does not consider the existing and historical data set for conflict resolution of any kind, and this is an absolutely critical failure.
To protect against this, Bitcoin has checkpoints to guard against the possible event that someone, at some time, could amass 51% or more of the hash rate (or less if you consider selfish mining and other tricks) and undo history. This is a fallacy though, as now we are all reliant on the developers who have permission to make code commits to set the checkpoints correctly, and not insert one that points to a hidden block chain which has been quietly built in the background by them or some other malicious party who has influence over the developers.
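For clarity, this is roughly what a hardcoded checkpoint does to chain selection (placeholder height and hash, not Bitcoin's real checkpoint table): any candidate chain that disagrees with the checkpoint is rejected no matter how much work it carries, which is exactly why the people who set that table must be trusted.

```python
# Placeholder values; illustrative only.
CHECKPOINTS = {100000: "000000000003ba2...placeholder"}  # height -> required block hash

def passes_checkpoints(chain):
    """Reject any chain whose block at a checkpointed height differs from
    the hardcoded hash, regardless of its accumulated work."""
    for height, expected_hash in CHECKPOINTS.items():
        if len(chain) > height and chain[height]["hash"] != expected_hash:
            return False
    return True
```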
Which leads to the frightening conclusion that no crypto-currency released so far is fully Byzantine fault tolerant, nor is any in development, other than eMunie.
eMunie meets the requirements of a truly Byzantine fault tolerant system. It is append-only, and history can absolutely NOT be changed. Historical data is considered in conflict resolution, and is also used to determine (among other things) who is eligible in the future to vote on which data set is valid in conflict situations.