thesmokingman
|
|
February 16, 2016, 09:10:15 PM |
|
Good luck with that. One of the things that made Bitcoin great is consensus via economics that's advantageous for the individual and the group.
Yeah, it is absolutely great that China's mining bloc controls 65% of Bitcoin's hashrate and has vetoed any block size increase, including Classic's proposed mere doubling to 2MB. Ostensibly they want to force transaction fees higher to fatten their profits. This is called an oligarchy, and it is great for individuals like us, since we get to pay through the nose to the oligarchy. Thanks again for your incredible wisdom, including your sage proclamation that Szabo is a crypto god and was/is Satoshi. Peachy has joined you on my very exclusive Ignore list, which I reserve for the wisest soothsayer salesmen.

Wow, what a thread. I have one question, TPTB, and it's an honest question, not a veiled attempt at sarcasm. Any miner I have had the pleasure of dealing with has to turn some sort of profit to justify, at a minimum, paying their electric bill. Since you say the PoW for your coin won't be profitable for miners, what would be our motivation to participate in the decentralized side of things? I know there are hobby miners who mine because they believe in the tech and want to secure the network, but surely there can't be enough people willing to mine at a loss to secure your network, can there? I'm willing to mine BTC and pay the electricity out of pocket, but only because I believe the value will be higher in the future.

You seem to have good ideas, but to a newcomer like me, expecting miners to mine for no profit looks like an Achilles heel for everything you've said. Unless you expect the value of the coins to be just enough to cover electrical expenses and no higher, but wouldn't that take market manipulation to keep the price from rising and attracting "whales"? Wouldn't that lead to centralization, or create the need for centralized control over both aspects of the network: centralization to verify, and centralization to control the price and deter large investors?

If anything this thread has made me decide to go short (PoW window only) vs long (PoW & PoS window) on ethereum, but not abandon it. I'll take my comments off the air lol...
|
|
|
|
tromp
Legendary
Offline
Activity: 990
Merit: 1110
|
|
February 16, 2016, 09:28:15 PM |
|
Since you say the POW for your coin won't be profitable for miners, what would be our motivation to participate in the decentralized side of things?
People spend $50B a year on lotteries. Perhaps the term lottery mining makes more sense than unprofitable mining?
|
|
|
|
hv_
Legendary
Offline
Activity: 2534
Merit: 1055
Clean Code and Scale
|
|
February 16, 2016, 09:49:07 PM |
|
Tone Vays at his best! Watch that starting at about 32 min: https://www.youtube.com/watch?v=UmNKd3w1k6Q

I wonder whether the first banks will really embrace ETH, and if it fails, then call the SEC crying SCAM! Anybody good at that game theory? Where is the Nash equilibrium in that?
|
Carpe diem - understand the White Paper and mine honest. Fix real world issues: Check out b-vote.com The simple way is the genius way - Satoshi's Rules: humana veris _
|
|
|
TPTB_need_war
|
|
February 16, 2016, 10:18:37 PM Last edit: February 16, 2016, 11:24:24 PM by TPTB_need_war |
|
Props to monsterer for facing the beast head on. I am sorry, but his response demonstrates that he didn't understand the point about modularity versus dependent typing, i.e. when programmability is also the objective. I feel no desire whatsoever to try to teach him and other readers some computer science that is, I guess, not comprehensible to mere mortals. I already tried to explain it a few times. He can learn about the effects of I/O, modularity, dependent typing, Turing completeness, and the Halting Problem from other sources if he is so inclined.

His vision of having scripts do dependent typing means he didn't pay attention to what I wrote about dependent typing. Or he somehow thinks what he wrote doesn't mean dependent typing. These issues have already been worked on by academics; he is apparently unaware of their findings. I don't know of a single canonical, comprehensive resource I could cite for him. Anyway, just forsake the partitioning and the issue is "resolved". Well, read below...

We shall wait for the rebuttal.. hehe
My time isn't free and I have expended years foruming. So that is the extent of my rebuttal.

One observation I make myself is that TPTB first implied pretty strongly that whatever the issue was (outside my understanding), it was so fundamentally flawed it was unsolvable, guaranteeing Ethereum's fall. However, as I understand it atm, it's more of a "the direction is wrong, maybe there is a solution, Ethereum should hire me to solve the problem"
I said:

* partitioning (of scriptable block chains) is flawed and is unsolvable.
* verification must be (or will be regardless) centralized in order to scale.

I have not changed my position on that. Call that failure or not, depending on your expectations. In other words, in the real world I don't think it is that scalable unless they forsake decentralization. I have an idea about how to keep decentralization in the face of those realities I allege.

If they wanted to expend some of their $millions on me, I might find it difficult to decline if the amount offered was high enough (given it would be guaranteed income). But I really have something more exciting to work on which, if I am successful, could generate potentially $billions, not $millions, so not only do I not expect them to be interested in my assistance, but I doubt I would really be interested either. For one reason, I think their company culture runs too much on hype, and that turns me off. The mcap is already $400m, so I am surely not interested in holding ETH for appreciation (although they may be able to hype it to a $billion or more on the next upgrade regardless of whether the tech works in the real world or not). I mean, I don't really believe in the project. I have no idea if programmable block chains will even be useful for anything real. Perhaps I could become convinced, but then I might just decide to make my own programmable block chain instead; starting from a $10,000 mcap is much more attractive than starting from a $400m mcap. They seem to be highly disorganized, and do they really produce a lot of code? I don't know. I would need to dig in, and anyway I am already working on something which I find interesting. So I guess I just wrote a paragraph which basically says, "don't hire me".

I feel an apology here is warranted in case I am completely wrong with my assumption. Clearly TPTB is a bright guy and surprisingly very pleasant, as evident from the video. Shocking but true

Thanks. Well, I really am laid back, but I guess I have limited patience, because a forum can consume all of my time, 24 hours a day, 7 days a week, 365 days a year, for 3 fucking years. I am trying to quit and it just goes on and on and on. I have programming I need to be doing. Not this.
|
|
|
|
thesmokingman
|
|
February 16, 2016, 10:31:08 PM |
|
Since you say the POW for your coin won't be profitable for miners, what would be our motivation to participate in the decentralized side of things?
People spend $50B a year on lotteries. Perhaps the term lottery mining makes more sense than unprofitable mining?

Wouldn't that still encourage large-scale miners to take part? The potential for a reward is the reason all the big players are in the game; just some games offer lower-variance rewards than others. GPU mining, from what I read, isn't incredibly profitable, but there are still people/farms with 150+ GPUs taking part. Even if a lottery system were set up, wouldn't it still be worth a large farm's time to add just enough hashing power to out-hash other miners and increase their chances of winning? And doesn't someone still have to control the lottery system that is used to reward miners?

Maybe it's my lack of brain cells, but I don't see anything short of a centralized system that would provide incentives for small miners while discouraging large mining farms without some sort of human control. And to your point, if people spend $50B a year on lotteries, why wouldn't a large-scale miner spend the same money on said lottery system? Not trying to knock anything at all, I'm just trying to figure out what incentive a miner would have to mine/secure a network and not receive some sort of profit for it? The only way I see this working is if the issuing authority and the mining/securing are all done under the same roof, or umbrella, but spread out over the globe.
|
|
|
|
TPTB_need_war
|
|
February 16, 2016, 10:32:24 PM Last edit: February 17, 2016, 12:44:00 AM by TPTB_need_war |
|
Since you say the POW for your coin won't be profitable for miners, what would be our motivation to participate in the decentralized side of things?
There won't be any miners in the traditional sense. Only payers of transactions, who must include a PoW share or else their transaction will not be accepted on the block chain. They mine at a loss; consider it a transaction fee. They have a Nash-equilibrium incentive to make sure they mine on the longest chain so their transactions get included on the chain, plus possibly a less-than-profitable block reward. So yes, this requires a good volume of transactions. (A minimal sketch of the idea follows at the end of this post.)

Yeah, I want to kill mining. Sorry if any with mining equipment hate me for that. I'd be quite pleased if I could turn Bitcoin mining farms into warehouses of expensive doorstops.

If anything this thread has made me decide to go short (POW window only) vs long (POW&POS window) on ethereum, but not abandon it.
Yeah, expect ETH to be pumped again with hype on the next upgrade. I tried my best to make it more difficult for them to hype the PoS(hit), but I assume they will invent some new technobabble (if they don't actually invent a true solution). I tried my best to force the price lower (by explaining the technological flaws) so they couldn't raise more funding. Perhaps I didn't succeed, although I see the price has plummeted to 0.01 BTC since I started posting today. When I called the double-top at 0.016, I was preparing to unleash this onslaught on their coin and marketcap. If you attribute the decline to me, then I guess I erased about $200+ million from their market cap. So that makes me feel like it wasn't a complete waste of my time. But I didn't stretch the truth, nor did I short ETH. I earned nothing on this except some satisfaction and perhaps some reputation (some haters too, I am sure).
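A minimal sketch of the transaction-level PoW share described above (all names and the difficulty target are illustrative assumptions, not the actual design): the payer grinds a nonce over their own transaction until its hash falls under a fixed per-transaction target, and nodes refuse to accept any transaction arriving without a valid share.

Code:
import hashlib
import os

SHARE_TARGET = 2 ** 240  # illustrative per-transaction difficulty, far easier than a block target

def pow_share(tx_bytes: bytes) -> bytes:
    # Grind a nonce until sha256(tx || nonce) falls below SHARE_TARGET.
    while True:
        nonce = os.urandom(8)
        digest = hashlib.sha256(tx_bytes + nonce).digest()
        if int.from_bytes(digest, "big") < SHARE_TARGET:
            return nonce

def accept_transaction(tx_bytes: bytes, nonce: bytes) -> bool:
    # A node only relays/includes transactions carrying a valid PoW share.
    digest = hashlib.sha256(tx_bytes + nonce).digest()
    return int.from_bytes(digest, "big") < SHARE_TARGET

tx = b"pay 1.0 to Alice"
assert accept_transaction(tx, pow_share(tx))

The payer bears a small, bounded cost per transaction (the "mine at a loss" above), which also doubles as spam resistance.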
|
|
|
|
yefi
Legendary
Offline
Activity: 2842
Merit: 1511
|
|
February 16, 2016, 11:24:51 PM |
|
is Vitalik an order of magnitude smarter than Satoshi?
no, but his balls are an order of magnitude bigger

When you are thinking about dropping a million dollars of your hard-earned cash into a crypto community, knowing that the lead developer has the huevos to show himself in public and stand behind his creation, instead of hiding like a coward or criminal, offers the potential investor a certain level of clarity that is devilishly absent from the bitcoin community.

I suppose that also makes Mark Twain a coward and like a criminal, then.
|
|
|
|
TPTB_need_war
|
|
February 17, 2016, 12:00:49 AM Last edit: February 17, 2016, 08:21:00 AM by TPTB_need_war |
|
In fact, I do believe that perhaps the same Nash equilibrium failure that applies to scripting (as stated above) may apply in the cross-partition design for asset transfers because there is a cascade of history. I need to think about this more. I will try to remember to comment on this point later.
I've touched on this before, but you've reminded me again; partitions are the antithesis of consensus. Taking things to the extreme is helpful to illustrate the problem: with infinite partitions, in bitcoin, you are left with the DAG of the UTXO set, and no blocks or any agreement on what the order of transactions should be; in other words, no consensus. The LCR in bitcoin constantly forces miners to choose between candidate potential partitions (orphan chains). The Nash equilibrium results in rational miners always choosing the longest branch to mine on to maximise their profits.

More completely stated, the Nash equilibrium is that there is no strategy superior to mining on the longest chain which is visible to all nodes, i.e. no strategy better than the one the nodes are already following and which is known to all nodes. Whereas, as I pointed out in my video, when a node (or colluding nodes) has > 33% of the hashrate, then for Satoshi's PoW design they can apply the selfish mining attack by withholding block solutions until the rest of the network catches up; thus the Nash equilibrium is destroyed by selfish mining in that case.

I have also pointed out, in my video and the follow-up posts in this thread, a meta issue that destroys the Nash equilibrium: when there are failures external to the block chain's perspective of itself, due to external actuation of cross-partition state (even if the block chain thinks it is enforcing a strict partitioning with no cross-partition state), the Nash equilibrium fails because the entire coin fails. Thus the validators of partitions can't trust the validators of other partitions, because although they get their block reward, the external market value of that reward fails. It remains under study whether this applies to asset transfers too (or just to partitioning of scripts), and whether it applies for asset transfers in the strictly partitioned block chain (which I argued in my video is immune to the problem) and/or in the cross-partitioning block chain (which I did not address in my video, and Fuserleer raised this point hence).

I hope readers don't get confused: I am making a distinction between cross-partitioned state occurring by design on the block chain and cross-partitioned state occurring externally because it can. For scripting it is impossible to enforce a strict partitioning, because external actuation can clearly inject state from one partition into another, and even though the block chain can't detect this, the external users can, and they can experience failure the block chain is entirely unaware of due to this external Turing completeness (a very deep, meta concept that apparently most people wouldn't think of... note smooth indicated to me in a PM that he had thought of this issue of external Turing completeness before too). For asset transfers (no scripting), it is not yet 100% clear to me. I need to think about it more.
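To put a number on the selfish-mining threshold mentioned above, here is the closed-form relative-revenue expression from Eyal & Sirer's selfish-mining analysis (a simplification of the point being made; the parameters below are illustrative). With gamma = 0, honest and selfish strategies break even at a 1/3 hashrate share; above that, withholding blocks earns more than the pool's fair share, so the longest-chain Nash equilibrium no longer holds.

Code:
def selfish_revenue(alpha: float, gamma: float = 0.0) -> float:
    # Relative revenue of a selfish-mining pool with hashrate share alpha.
    # gamma = fraction of honest hashrate that mines on the selfish block
    # during a tie (closed form from Eyal & Sirer, 2014).
    num = alpha * (1 - alpha) ** 2 * (4 * alpha + gamma * (1 - 2 * alpha)) - alpha ** 3
    den = 1 - alpha * (1 + (2 - alpha) * alpha)
    return num / den

for alpha in (0.25, 1 / 3, 0.40):
    print(f"hashrate {alpha:.2f}: honest share {alpha:.3f}, selfish share {selfish_revenue(alpha):.3f}")
# 0.25 -> selfish earns ~0.195 (honest is better)
# 0.33 -> break-even
# 0.40 -> selfish earns ~0.484 (withholding wins)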
Talking about partition unification for a moment; if two partitions are totally separate, merging them doesn't have any consequences for ordering, because the individual transactions in each partition have been separate from each other; you can order them however you like as long as you obey the parent/child relationship in each partition.

Yes, as long as the state from the two partitions did not leak into each other by any means (including the external meta case mentioned again above).

Following up on the bolded commitment quoted above, cross-partition transactions even with asset transfers (e.g. a crypto currency, not scriptable block chains) seem to destroy the Nash equilibrium also, because the cascade of derivative transactions infects across partitions, yet the validators did not validate all partitions (i.e. not all transactions). Thus if it is later discovered that a partition lied about a transaction being valid, then downstream transactions in other partitions would be invalidated (i.e. reverted). That would of course cause the coin to be considered a failure and the market price to plummet. So it is the same as the case of strict partitioning with scripting. Thus I have no idea what Fuserleer is doing for eMunie that might possibly work soundly. I will have to wait for his white paper.

In my design, I have cross-partition transactions, but the way I accomplish this and maintain the Nash equilibrium is that I entirely centralize verification, i.e. all transactions are validated by all centralized validators. This eliminates the problem that full nodes have unequal incomes but equal verification costs, and thus ameliorates the economics that drives full nodes to become centralized. The centralized validators would still have a potential incentive to lie and short the coin. So the users in the system (hopefully millions of them) are constantly verifying a subsample of the transactions, so that statistically a centralized validator is going to get caught fairly quickly, banned, and their hard-won reputation entirely blown. Since these validations are done in a non-organized manner (i.e. they are randomly chosen by each node), there is no viable way to collude to maintain a lie. (A sketch of this spot-checking follows below.)

In case anyone has forgotten, I believe I have convincingly shown that it is impossible to design a consensus algorithm that will not centralize verification (if not also mining control, in Satoshi's PoW and in PoS). So at least my design maintains decentralized control, while centralizing verification and statistically decentralizing the checking of that verification for lies.

For example, imagine that a million users are earning a good income doing business based on permissionless commerce the government would like to eliminate (such as the Big Pharma corruption I exampled upthread), and so they fork away from the masses' block chain when the governments are able to use their control of Coinbase et al (imagine a world-government level of cooperation). Then everyone can spend their coins on both forks. If there is this genuine Coasian barrier that forces the existence of a second fork, then the government can play Whack-A-Mole until they realize that the masses are catching on to the opportunities of freedom and individual empowerment. The point being that such a fork would be nearly infeasible in Satoshi's design, because all those who move in mass action are not going to be supplying PoW mining in Satoshi's design (thus the new fork can be easily attacked). The economics of Satoshi's design are not conducive to maintaining the fight for permissionless commerce. This is the sort of ideal I want to work on! If I can be convinced I am not working on bullshit, I will be more inspired.
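A minimal sketch of the spot-checking math behind the paragraph above (the names, the sampling rate, and the independence assumption are all my illustrative assumptions, not the actual design): if N users each independently re-verify a random fraction p of transactions, the chance that a bogus transaction is seen by nobody is (1-p)^N, which collapses quickly even for tiny p.

Code:
import random

def escape_probability(num_users: int, sample_fraction: float) -> float:
    # Probability that a single invalid transaction is checked by nobody,
    # assuming each user independently samples transactions at random.
    return (1 - sample_fraction) ** num_users

def spot_check(all_txs, sample_fraction, verify):
    # One user's round: re-verify a random subsample and report any liars found.
    sample = random.sample(all_txs, max(1, int(len(all_txs) * sample_fraction)))
    return [tx for tx in sample if not verify(tx)]

# e.g. 10,000 users each checking only 0.1% of transactions:
print(escape_probability(10_000, 0.001))  # ~4.5e-5 chance a forged tx escapes every check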
I think miners' interests are more aligned with users' interests than you think. After all, if the currency they are mining becomes worthless, their operation becomes worthless as well. So anything that hurts the value of their currency is neither in the interest of the miners nor in the interest of users. Of course there are other subjects where their interests do not align.
The professional miners' interests are aligned with paying back the loans they incurred to buy mining farms. Frankly I think your post is delusional. Get a grip on economics. Usury (debt) enables the banksters to take entire control of the economics of mining and charge the costs to the collective. This is the fairytale crap that leads so many of us to be ideological fools. I want to kill this. I am so tired of these lies.

I hope you realize that the costs of some of the mining farms running off 2-4 cent electricity in WA State, USA, are probably sub-$50 per BTC. And by aligning with government edicts and takeover, which the dumb masses (and socialism) will be on board with, they are not shooting their own foot; rather they are maximizing the sustainability of their income source.

Sorry if I am so forceful, but I have heard these sorts of rationalizations for the past 3 years and I think it is time we stop being delusional, don't you?
|
|
|
|
YarkoL
Legendary
Offline
Activity: 996
Merit: 1013
|
|
February 17, 2016, 07:08:07 AM |
|
In my design, I have cross-partition transactions, but the way I accomplish this and maintain the Nash equilibrium is I entirely centralized verification, i.e. all transactions are validated by all centralized validators. This eliminates the problem that full nodes have unequal incomes but equal verification costs, thus ameliorates the economics that drives full nodes to become centralized. The centralized validators would still have the potential incentive to lie and short the coin. So the users in the system (hopefully millions of them) are constantly verifying a subsample of the transactions so that statistically a centralized validator is going to get caught fairly quickly, banned, and their hard won reputation entirely blown. Since these validations are done in a non-organized manner (i.e. they are randomly chosen by each node), then there is no viable concept of colluding to maintain a lie.
Regarding the last sentence, I take it that it refers to the extra validation performed by the users? How do you ensure that the selection of the txs to be validated is done randomly? And what incentives do the user nodes have to perform the extra validation at all? Apologies if you have answered these somewhere else; if so, I'd be grateful for a link.

It seems to me that you are using the terms "validation" and "verification" interchangeably in the above paragraph. (Or does verification refer to the extra checking performed by the users?)
|
“God does not play dice"
|
|
|
monsterer
Legendary
Offline
Activity: 1008
Merit: 1007
|
|
February 17, 2016, 08:20:13 AM |
|
Following up on that bolded commitment quoted above, cross-partition transactions even with asset transfers (e.g. a crypto currency, not scriptable block chains) seems to destroy the Nash equilibrium also, because the cascade of derivative transactions infects across partitions, yet the validators did not validate all partitions (i.e. not all transactions).

I don't follow you. The network won't accept an invalid transaction, just as bitcoin doesn't accept an invalid block.
|
|
|
|
YarkoL
Legendary
Offline
Activity: 996
Merit: 1013
|
|
February 17, 2016, 08:24:27 AM |
|
Following up on that bolded commitment quoted above, cross-partition transactions even with asset transfers (e.g. a crypto currency, not scriptable block chains) seems to destroy the Nash equilibrium also, because the cascade of derivative transactions infects across partitions, yet the validators did not validate all partitions (i.e. not all transactions).

I don't follow you. The network won't accept an invalid transaction, just as bitcoin doesn't accept an invalid block.

The way I understand it (and that might be defective) is that the other partition has no way of validating the cross-partition tx. If it could do that, i.e. if there were a unified database, then there would not really be a partition.
|
“God does not play dice"
|
|
|
TPTB_need_war
|
|
February 17, 2016, 08:25:54 AM Last edit: February 17, 2016, 11:20:58 AM by TPTB_need_war |
|
Following up on that bolded commitment quoted above, cross-partition transactions even with asset transfers (e.g. a crypto currency, not scriptable block chains) seems to destroy the Nash equilibrium also, because the cascade of derivative transactions infects across partitions, yet the validators did not validate all partitions (i.e. not all transactions).

I don't follow you. The network won't accept an invalid transaction, just as bitcoin doesn't accept an invalid block.

The entire point of partitions is that not all full nodes are validating (verifying) all transactions. Thus of course the full node that wins a block (in PoW, and analogously in PoS or consensus-by-betting) is trusting the validators of the other partitions not to lie to him. If that full node had to validate every transaction in every partition, then there wouldn't be partitions any more. The entire reason to make partitions is that verification costs are too high when every full node has to verify every transaction. Partitions exist to aid scaling. Partitions can also enable other features such as instant confirmations, but that is a tangential discussion and I am not going to give away all of my design before I launch it.

Also, in case my other point got lost in the sea of words upthread: my other key point is that (in Satoshi's PoW) if every full node has to verify every transaction, then all full nodes have the same verification costs, but full nodes have various levels of income because they have various levels of hashrate. Thus over time, mining must become more centralized, because those with higher hashrate are more profitable relative to their verification costs (a toy calculation follows below). So eliminating verification costs for full nodes removes one of the economic reasons mining becomes more centralized over time. I had also mentioned some of the other reasons in my video, e.g. propagation costs, meaning not the cost of the bandwidth but the cost of mining on the wrong chain for longer periods of time relative to those pools with more hashrate who see the new block instantly when they produce it. In my design, I eliminate propagation costs in a clever way, somewhat similar to what Iota is doing (but without the aspect of Iota that I assert won't allow it to converge without centralized control and enforcement of the math model that payers and payees employ).

Ditto, I assume, for Fuserleer not wanting to give away his design for eMunie before he launches. Fuserleer has mentioned vaguely that he is using different data structures and that the one who commits a double-spend is then isolated into his own partition[1]. I don't know how he accomplishes this. It will be interesting to read his white paper. He has also said he is not using proof-of-work but rather some form of propagation and different nodes with different responsibilities. I await his white paper and can't pre-judge it, except to say I am very skeptical (but willing to be surprised).

Note, in case it wasn't clear from my upthread posts: strict partitioning (no cross-partition transactions) for a crypto coin (i.e. asset transfers) maintains the Nash equilibrium. But cross-partition transactions for asset transfers do not maintain the Nash equilibrium (unless using a statistical check as I am proposing for my design; some may think this is dubious, but my white paper will make the argument for it). And strict partitioning for scripts can't exist, because the partitions are violated by external I/O.
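To make the verification-cost economics above concrete, a toy calculation (every number here is hypothetical): income scales with hashrate share, while the cost of verifying every transaction is the same for every full node, so the smaller the miner, the larger the fraction of its income that verification eats.

Code:
def profit_margin(hashrate_share: float, network_reward: float, verification_cost: float) -> float:
    # Income scales with hashrate share; the verification cost is the same
    # for every full node regardless of its size.
    income = hashrate_share * network_reward
    return (income - verification_cost) / income

# hypothetical: $1,000,000 of rewards per period, $500 verification cost per node
for share in (0.30, 0.01, 0.001):
    print(share, round(profit_margin(share, 1_000_000, 500), 4))
# 0.30  -> 0.9983  (verification is negligible for the big pool)
# 0.01  -> 0.95
# 0.001 -> 0.5     (half the small node's income is eaten by verification)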
The way I understand it (and that might be defective) is that the other partition has no way of validating the cross-partition tx. If it could do that, ie. if there were an unified database, then there would not really be a partition.
We were writing our posts at the same time. When I clicked to post mine, yours had appeared. Yes it seems you understand the issue.
[1] | Correct with regard to your first scenario where 2 partitions never talk to each other in the future: you don't need to consider it. If they do talk to each other in the future, and have to merge, this is where Bitcoin, blocks, POW and the longest chain rule fall on their arse. Only one partition can exist; there is no merge possibility, so the other has to be destroyed. Even if the 2 partitions have not existed for an extended period of time you are screwed, as they can never merge without a significant and possibly destructive impact to ALL historic transactions prior to the partition event, so you end up with an unresolvable fork. I feel this is a critical design issue which unfortunately for Bitcoin imposes a number of limitations.
CAP theorem certainly doesn't imply you can't ever fulfill C, A and P, as most of the time you can at least enough to get the job done. What it does state is that you cant fulfill all 3 to any sufficient requirement 100% of the time, as there will always be some edge cases that requires the temporary sacrifice of C, A or P. This isn't the end of the world though, as detecting an issue with P is possible once nodes with different partitions communicate, at which point you can sacrifice C, or A for a period of time while you deal with the issue of P.
If you structure your data set in a flexible enough manner, then you can limit the impact of P further. Considering CAP theorem once again, there is no mandate that prohibits most of the network being in a state that fulfills C, A and P, with a portion of the network being in a state of partition conflict. For example, if there are a network of 100 nodes, and 1 of those nodes has a different set of data to everyone else and thus is on its own partition, the remaining 99 nodes can still be in a state of CAP fulfillment. The rogue node now has to sacrifice C or A, in order to deal with P while the rest of the network can continue on regardless.
All of this can be done without blocks quite easily, the difficulty is how to deal with P in the event of a failure, which is where consensus algorithms come into play.
Bitcoins consensus of blocks and POW doesn't allow for merging as stated, even if the transactions on both partitions are valid and legal.
DAGs and Tangles DO allow merging of partitions, but there are important gotchas to consider, as TPTB rightly suggests; they aren't as catastrophic as he imagines, though, and I'm sure that CfB has considered them and implemented functionality to resolve them.
Channels also allows merging of partitions (obviously thats why Im here), but critically it allows a node to be in both states of CAP fulfillment simultaneously. For the channels that it has P conflicts it can sacrifice C or A to those channels, for the rest it can still fulfill CAP.
Lets rewind a bit and look at whats really going on under Bitcoins hood.
Natural network partitions arise in BTC from 1 of 4 events happening:
1. A node/nodes accept a block that has transactions which are double-spending an output present in another block
2. A miner produces a block that conflicts with a block on the same chain height
3. Network connectivity separates 2 parts of the network
4. A miner has control of 51% or more
All 4 of these create a P inconsistency, and so the LCR (longest chain rule) kicks into action to resolve them.
In the case of 1, miners can filter these against historic outputs and just reject the transaction. If multiple transactions are presented in quick succession that spend the same output, miners pick one to include in a block, or they could reject all of them. On the receipt of a valid block, the remaining double-spend transactions that are not in a block get dumped. If a block with a higher POW then turns up, all nodes switch to that block, which may or may not include a different transaction of the double-spend set.
In the case of 2, this happens ALL the time. Orphans cause temporary partitions in the network, but the duration between them is short enough that it doesn't cause any inconvenience. Worst case you have to wait a little longer for your transaction to be included in the next block if the accepted block which negates the orphan block doesn't have yours in it.
In the case of 3, if the separation duration is short, see 2. If its long and sustained, 1 of the partitions will have to be destroyed and undo any actions performed, legal or otherwise causing disruption and inconvenience.
In the case of 4, well, its just a disaster. Blocks can be replaced all the way back to the last checkpoint potentially and all transactions from that point could be destroyed.
There can also be local partition inconsistencies, where a node has gone offline, and shortly after, a block or blocks have been accepted by the network that invalidate one or more of the most recent blocks it has. Once that node comes back online it syncs to the rest of the network and does not fulfill CAP at all. The invalid blocks that it has prior to coming back online are destroyed and replaced.
You could argue that this node creates a network level partition issue also to some degree, as it has blocks that the network doesn't, but the network will already have resolved this P issue in the past as it would have triggered an orphan event, thus I deem it to be a local P issue.
So whats my point?
In the cases of 1 or 2 there does not need to be any merging of partitions. Bitcoin handles these events perfectly well with blocks, POW and LCR with minimal inconvenience to honest participants providing that the partition duration of the network is short (a few blocks).
In the case of 3, which is by far the most difficult to resolve, the partition tolerance reduces proportional to the duration of the partitioned state, and becomes more difficult to resolve without consequence in any system, as there may be conflicting actions which diverge the resulting state of all partitions further away from each other. These partition events will always become unsolvable at some point, no matter what the data structure, consensus mechanisms or other exotic methods employed, as it is an eventuality that one or more conflicts will occur.
The fact is that DAGs/Tangles and our channels have a better partition resolution performance in the case of event 3 as the data structures are more granular. An inconsistency in P doesn't affect the entire data set, only a portion of it, thus it is resolvable without issue more frequently as the chances of a conflict preventing resolution is reduced.
Now, you haven't provided any detail on exactly how you imagine a data structure that uses blocks that could merge non-conflicting partitions, let alone conflicting ones. In fact I see no workable method to do this with blocks that may contain transactions across the entire domain. Furthermore, who creates these "merge" blocks and what would be the consensus mechanism to agree on them? In the event of a conflict, how do you imagine that would be resolved?
When it comes to partition management and resolution where block based data structures are employed, Satoshi has already given you the best they can do in the simplest form. Trying to do it better with blocks is IMO a goose chase and you'll get nowhere other than an extremely complicated and fragile system.
|
|
|
|
|
TPTB_need_war
|
|
February 17, 2016, 08:53:01 AM Last edit: February 17, 2016, 12:36:42 PM by TPTB_need_war |
|
I don't understand this... I don't understand that... I don't understand the other...

Yes, those guys made many errors or leaps of faith in their opinions. One point they forgot to make is that it doesn't matter that Ethereum did their ICO under Swiss law. US securities law says that if you advertise and market securities to US investors, then you are culpable under US law no matter where in the world you are. They will come after you. KimDotCom will soon learn that you can't run and you can't hide from the USA. Don't forget that Sweden was involved in trying to extradite Assange and probably turning him over to the USA. And Switzerland has been caving in to USA demands to turn over US citizens hiding wealth in Swiss banks. And besides, Martin Armstrong has pointed out that the G20 will start sharing information and cooperating on enforcement as of 2017 (when the global economy will collapse in earnest and capital controls will be ramped up significantly).

Here is something related to FinCEN, which is not the same as SEC regulation, but nevertheless the same principle applies of filtering out US residents/citizens:

You can avoid US customers, but it takes work

Plenty of businesses, some of my own clients included, have decided that the US market just isn't for them. They've either soured on the idea of servicing US clients altogether, or have decided to launch and wait it out in jurisdictions like Canada until the US sees regulatory reform. This can be both profitable and practical, but simply incorporating the overseas market isn't going to cut it. The smart business will develop a set of policies and procedures reasonably calculated to keep US residents out. A competent attorney can help guide you through this process, and I can give some very basic principles here.

Firstly, a pre-emptive response to a question I get asked weekly: geofiltering incoming IP addresses is only the beginning. The business itself should detect the jurisdiction of the customer's IP address, display that address, and ask the customer to confirm that this is his or her jurisdiction. Both customer and business can take affirmative steps: the customer can be required to click a button stating "I affirm that I am a resident of *country*," and the business can require verifying documentation, like a passport or utility bill. Several providers offer these kinds of onboarding services.

Your business should develop a risk profile for each of its customers in real time, setting forth the probability that the customer is a US resident. The risk profile should take into account different factors like: (i) whether the customer registers a US bank account with your business, (ii) how many transfers to US bank accounts the customer requests (if you offer such a service), and (iii) how many times the customer accesses your service from within the US after setting up a new account. The record shouldn't just show that your business followed its own policies, but that those policies worked. If push comes to shove, a judge and jury would probably like to see that, every once in a while, your procedures actually caught a US resident trying to use your service, and that you closed his or her account.

Finally, it should go without saying that your business should not advertise to US customers. This all might seem excessive for, or inapplicable to, your business, and indeed it might be. The proper set of procedures will depend heavily upon the details of your business model and your degree of risk tolerance.
For some, even crafting and implementing these policies may be just as unappetising as compliance. There is, in fact, a way to service US customers and avoid these burdens. Namely, you can become the agent of a bank or credit union, as existing MSB-certified agents of banks, credit unions and money services businesses are typically exempt from registration and licensure requirements. Functionally, becoming an agent means hiring an attorney to negotiate and execute an agreement with the bank, credit union or MSB (called the "principal") setting forth your relative rights and obligations.

Btw, I should mention I was awake all night in a long chat with jl777 (i.e. the SuperNet). He is working on a decentralized exchange (and decentralized games such as poker), and I want to make sure those will interoperate with the social network I am coding for the launch of my coin. That is your hint on how to find it. I won't be announcing it here.

I suggested to James that he support my "rainy day" suggestion for foiling jamming, by allowing users of the DE to choose a "Coin Days Destroyed". I asked him to see what TierNolan thinks of my idea. James is checking his atomic transfer protocol with TierNolan, who wrote the BIP for decentralized exchange. James is working on income models for the SuperNet, i.e. a very small fee on each DE trade. James is not a GUI programmer (I am, but I don't want to code game front-ends because I don't love playing games at age 50.7), so we are looking for GUI programmers who want to receive a % of the fees. We prefer these people be independent, i.e. neither of us wants to manage employees. I am very interested in doing the GUI programming for the social network.
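For reference, a minimal sketch of the standard Coin Days Destroyed metric mentioned above; how a DE order would be gated on it is my illustrative assumption, not the actual SuperNet design.

Code:
def coin_days_destroyed(inputs, current_time: float) -> float:
    # inputs: list of (amount_in_coins, time_output_was_created_in_seconds).
    # CDD = sum of amount * age-in-days over all spent outputs.
    seconds_per_day = 86_400
    return sum(amount * (current_time - created) / seconds_per_day
               for amount, created in inputs)

def order_allowed(inputs, current_time: float, min_cdd: float) -> bool:
    # Hypothetical anti-jamming gate: only accept an exchange order whose
    # funding inputs have destroyed at least min_cdd coin-days.
    return coin_days_destroyed(inputs, current_time) >= min_cdd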
|
|
|
|
monsterer
Legendary
Offline
Activity: 1008
Merit: 1007
|
|
February 17, 2016, 09:01:23 AM |
|
The entire point of partitions is that not all full nodes are validating (verifying) all transactions.
Thus of course the full node that wins a block (in PoW, and analogously ditto in PoS or consensus-by-betting) is trusting the validators of other partitions to not lie to him.
If that full node had to validate every transaction in every partition, then there wouldn't be partitions any more. The entire reason to make partitions is because verification costs are too high when every full node has to verify every transaction. Partitions exist to aid scaling.
Can we be clear on what you mean by validation? Validating a transaction (i.e. checking it is protocol valid) has no PoW cost associated with it; any full node can do this. Therefore any full node can reject an invalid transaction before it gets propagated around the network.
|
|
|
|
TPTB_need_war
|
|
February 17, 2016, 09:11:43 AM |
|
The entire point of partitions is that not all full nodes are validating (verifying) all transactions.
Thus of course the full node that wins a block (in PoW, and analogously ditto in PoS or consensus-by-betting) is trusting the validators of other partitions to not lie to him.
If that full node had to validate every transaction in every partition, then there wouldn't be partitions any more. The entire reason to make partitions is because verification costs are too high when every full node has to verify every transaction. Partitions exist to aid scaling.
Can we be clear on what you mean by validation? Validating a transaction (i.e. checking it is protocol valid) has no PoW cost associated with it; any full node can do this. Therefore any full node can reject an invalid transaction before it gets propagated around the network.

Validating (a.k.a. verifying) also means checking that it isn't a double-spend and that the funds exist (either via UTXO or account balance). In a partitioned design, only the full nodes (a.k.a. validators) for each partition would validate and propagate the transactions for that partition. So yes, you are correct to imply that partitioning means the P2P network is partitioned also (because otherwise DDoS spam amplification attacks would be plausible if peers relay that which they do not verify). I think all of that should have been clear just by thinking about the only way partitioning can work. I am just wondering why you can't deduce these sorts of things and instead need to ask?

Note that validators can be computing a PoW block based on a hash of their partition and a hash of all the other partitions. Don't forget the power of Merkle trees.
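A minimal sketch of that kind of commitment (purely illustrative, not the actual design): each partition's validators compute a Merkle root over their own transactions, and the block header then commits to a root over all the per-partition roots, so a block can reference partitions its producer did not itself verify.

Code:
import hashlib

def h(*parts: bytes) -> bytes:
    return hashlib.sha256(b"".join(parts)).digest()

def merkle_root(leaves):
    # Plain binary Merkle tree over a list of hashes.
    if not leaves:
        return h(b"")
    level = list(leaves)
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])  # duplicate the last node, Bitcoin-style
        level = [h(level[i], level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def partition_commitment(partitions):
    # partitions: list of transaction lists, one list per partition.
    partition_roots = [merkle_root([h(tx) for tx in txs]) for txs in partitions]
    return merkle_root(partition_roots), partition_roots

header_root, roots = partition_commitment([[b"tx-a", b"tx-b"], [b"tx-c"]])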
|
|
|
|
monsterer
Legendary
Offline
Activity: 1008
Merit: 1007
|
|
February 17, 2016, 09:18:13 AM |
|
Validating (a.k.a. verifying) also means checking that it isn't a double-spend and that the funds exist (either via UTXO or account balance).

Agreed. This is not very compute intensive, though, compared to PoW.

In a partitioned design, only the full nodes (a.k.a. validators) for each partition would validate and propagate the transactions for that partition. So yes, you are correct to imply that partitioning means the P2P network is partitioned also (because otherwise DDoS spam amplification attacks would be plausible if peers relay that which they do not verify).

I think all of that should have been clear just by thinking about the only way partitioning can work. I am just wondering why you can't deduce these sorts of things and instead need to ask?

Note that validators can be computing a PoW block based on a hash of their partition and a hash of all the other partitions. Don't forget the power of Merkle trees.

My point is that validators can't successfully lie without the entire network lying. That applies within partitions as well.
|
|
|
|
TPTB_need_war
|
|
February 17, 2016, 09:25:36 AM Last edit: February 17, 2016, 11:42:57 AM by TPTB_need_war |
|
Who is Kanye

Who cares. Synereo should be coding and stop trying to hype vaporware. Oh yeah, the AMPs exist, but the social network design is flawed and doesn't exist. And Greg Meredith, the main guy of Synereo, has been leeching off Ethereum, which is another hype-driven P&D. When will you speculators ever learn to just say "No!"?

Edit: Karma: http://www.mirror.co.uk/news/world-news/kanye-west-album-bitcoin-scam-7382496
|
|
|
|
TPTB_need_war
|
|
February 17, 2016, 09:32:08 AM Last edit: February 17, 2016, 09:43:16 AM by TPTB_need_war |
|
Validating (a.k.a. verifying) also means checking that it isn't a double-spend and that the funds exist (either via UTXO or account balance).

Agreed. This is not very compute intensive, though, compared to PoW.

That is why I said the problem is more acute for Ethereum, which must verify long-running scripts. That has been one of my main points about why Ethereum can't scale (at least not decentralized). Also realize that even in Bitcoin's case, with enough scaling the verification costs can eventually outrun the costs of PoW, especially as the block reward declines to 0 and assuming the block size is allowed to increase so that transaction fees don't skyrocket.

Of course Bitcoin is already broken, because the Chinese mining cartel controls 65% of the hashrate, and they lied about the Great Firewall of China being a problem[1] because they really want to veto block size increases so they can maximize their profits via spiraling transaction fees, which I predicted in 2013. And remember my point that on the next block reward halving (this year, I think) the lowest-cost miners will survive while the marginal miners lose profitability, and thus China's 65% share will increase significantly. I also believe Chinese miners are operating with near-zero-cost electricity, with a "wink and a handshake" charging the electricity cost to the collective society.

[1] | We know they are lying because they can put a pool abroad and send only a block hash across the GFW, thus bandwidth is not an issue. They are clearly lying! |
In a partitioned design, only the full nodes (a.k.a. validators) for each partition would validate and propagate the transactions for that partition. So yes, you are correct to imply that partitioning means the P2P network is partitioned also (because otherwise DDoS spam amplification attacks would be plausible if peers relay that which they do not verify).

I think all of that should have been clear just by thinking about the only way partitioning can work. I am just wondering why you can't deduce these sorts of things and instead need to ask?

Note that validators can be computing a PoW block based on a hash of their partition and a hash of all the other partitions. Don't forget the power of Merkle trees.

My point is that validators can't successfully lie without the entire network lying. That applies within partitions as well.

But that underlined part wasn't the problem. Did you forget the point about the Nash equilibrium and all validators needing to trust that the validators from other partitions didn't lie?
|
|
|
|
YarkoL
Legendary
Offline
Activity: 996
Merit: 1013
|
|
February 17, 2016, 09:35:06 AM |
|
Validating (a.k.a. verifying)
Here's a little off-topic excursion regarding these words, explaining why I brought this issue up... In the software testing community those mean different things (verified means that the software runs without apparent bugs; valid that it does what the specs say). Somehow I got it in my head that verification in Bitcoin means checking that the construction of the transaction is formally correct (has minimum tx fees, no two coinbases, etc.), whereas validation means checking that the inputs are unspent and evaluating the script. But it appears that there is no such distinction in the documentation, and validation indeed is the same as verification. Still, I think it would be useful to assign different meanings to these terms.
|
“God does not play dice"
|
|
|
TPTB_need_war
|
|
February 17, 2016, 09:39:49 AM |
|
Apparently Ethereum is attempting to correctly use the terminology of verification vs. validation then.
|
|
|
|
|