shunsaitakahashi (OP)
Member
Offline
Activity: 94
Merit: 16
Research, Analyze and Invent Crypto Systems
|
|
May 28, 2018, 03:39:38 PM Last edit: May 28, 2018, 05:55:07 PM by shunsaitakahashi |
|
The problem with existing PoS protocols is that they do not have weak subjectivity, as Vitalik calls it... "Weak subjectivity is the idea that subjectivity is unacceptable for short timespans, but acceptable for long timespans." https://ethereum.stackexchange.com/questions/15659/what-is-weak-subjectivity

Every stake-based system that I know of (including Steem, Tezos, EOS, Cardano, Dfinity) has the so-called "maximum rollback slot count." That means the algorithm is unable to determine the correct fork by itself, forcing rejection of a fork that would otherwise win. This results in different honest parties having different views of the chain depending upon how long they have been online. They would have to find someone "trustworthy" among supposedly "untrustworthy" parties. This is weak subjectivity, and it is highly undesirable. If one thinks that weak subjectivity is actually desirable, let's start another thread for that discussion. Proof-of-Approval doesn't need weak subjectivity - subjectivity should never be acceptable, in short timespans or long.

PoW uses a similar scheme to prevent long-range attacks: developer checkpoints. It's another form of non-algorithmic consensus (weak subjectivity).
Can you point me to a PoW system that does use developer checkpoints? They shouldn't need them, because the external resource consumption of PoW makes it physically impossible to build chains at a faster rate than an attacker's resources would permit. With selfish mining, one can stash away mined blocks for use in a later attack, but doing so gives up earnings now to mount a future attack. I'm not sure how incentivizing nodes toward certain behavior, which is the very basis of blockchains, can be considered only a "best-case scenario."
If parties are willing to forego a large income, significantly more than the cost, they are simply not rational. Proof-of-Approval, just like every single blockchain protocol, assumes that the parties are rational. If parties were not rational, what would motivate them to join the network and buy stake in the first place?

You are presuming that not creating or approving blocks happens only for irrational reasons. There are plenty of rational ones - like wanting to turn off your computer. And uncontrollable ones, like internet connectivity.

My quote above was in the context of a party running their node in the cloud. Cloud nodes do not sleep or turn off. They cost $5/month. If the benefit of additional earnings is significantly more than $5/month, a rational party would move their node to the cloud.

In any case, the design is very unscalable and can only be square-pegged into scalability by making huge assumptions about the money distribution of the network. The more assumptions you have to make, the less powerful your algorithm is.
The design is for 2018 (where cloud connectivity is at 10 Gbps), not for 2009 with a 100 Mbps connectivity assumption. One can design for 100 Mbps connectivity, but that solution would trade off something valuable. Regarding money distribution, I agree with Dan Larimer (https://steemit.com/cardamon/@dan/peer-review-of-cardano-s-ouroboros) that not designing for Pareto is the mistake, not the other way around. Proof-of-Approval does not require the distribution to be Pareto, but it does require the distribution not to be uniform across a large group (each party holding a small, equal amount of stake). No cryptocurrency today has a uniform distribution of stake. Proof-of-Approval is highly scalable with the stake distribution of any cryptocurrency in existence today.
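The "maximum rollback slot count" objection above can be made concrete. A minimal sketch, in Python, of the kind of fork-choice cutoff being criticized (all names and the default depth are hypothetical, not taken from any of the protocols mentioned):

```python
def accept_fork(current_height, fork_point_height, fork_weight,
                current_weight, max_rollback=1000):
    """Illustrative 'maximum rollback' rule: a competing fork that
    branches off more than max_rollback slots in the past is rejected
    outright, even if it would otherwise win on weight."""
    rollback_depth = current_height - fork_point_height
    if rollback_depth > max_rollback:
        return False  # long-range fork rejected without comparing weights
    return fork_weight > current_weight  # normal heaviest-fork rule
```

This is exactly the non-algorithmic cutoff the post objects to: a node that was offline for longer than `max_rollback` slots cannot distinguish a rejected-but-heavier fork from the honest chain on its own.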
|
Twitter @shunsatakahashi
|
|
|
monsterer2
|
|
May 28, 2018, 04:41:01 PM |
|
If you're presented with two identical-looking epochs, with the same blocks but different epoch signatories of equal stake, how do you pick between them?
If two competing forks (with epochs) have absolutely equal stake (even to the billionth fraction) at the first separating block, that may result in forking the chain itself. I can't think of a solution for such a situation other than forking the chain.

So why wouldn't the history attacker just do exactly that - present you with an identical-looking fork with a different epoch? Surely you have to pick the higher-stake epoch, which only has to be greater than 50%, not 99%?
|
|
|
|
shunsaitakahashi (OP)
Member
Offline
Activity: 94
Merit: 16
Research, Analyze and Invent Crypto Systems
|
|
May 28, 2018, 05:05:24 PM |
|
So why wouldn't the history attacker just do exactly that - present you with an identical-looking fork with a different epoch? Surely you have to pick the higher-stake epoch, which only has to be greater than 50%, not 99%?
This relates to section 2.2.25 of the paper, which describes the fork selection procedure. The attacker would fork from a block such that after it there are two possible successor blocks, one of which is "real" and the other the attack block. The fork selection procedure declares the winner by determining which of the forks has the higher amount of signatory stake. The real chain would have near 100% signatory stake (assuming the block is months in the past), while the attack chain has only the stake of the private keys the attacker owns. The attacker cannot copy transactions from the real chain into the attack chain, since transactions include the hash of a recent block.

Each fork must have >50% approval for its blocks to be valid - that condition has to be met. But even with >50% stake, the attacker needs to exceed the signatory stake that exists in the real chain. Note that signatory stake is the stake of all stakeholders in a fork who have (a) created any block, (b) approved any block, (c) approved any epoch, or (d) signed any transaction (transferred or spent their stake) - transactions are context sensitive and contain a recent block hash. For a history attack through spent keys, item (d) ensures that the signatories of the real chain hold a very high percentage of stake.
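The signatory-stake rule in (a)-(d) can be sketched as follows (a Python illustration; the record fields are hypothetical stand-ins for whatever an implementation actually stores):

```python
def signatory_stake(fork, stake_of):
    """Sum the stake of every party that has signed into a fork:
    block creators (a), block approvers (b), epoch approvers (c),
    and transaction signers (d)."""
    signers = set()
    for block in fork:
        signers.add(block["creator"])                      # (a)
        signers.update(block["approvers"])                 # (b)
        signers.update(block.get("epoch_approvers", ()))   # (c)
        for tx in block.get("transactions", ()):
            signers.add(tx["signer"])                      # (d)
    return sum(stake_of[p] for p in signers)
```

On a months-old fork point, nearly every active stakeholder has done at least one of (a)-(d) on the real chain, so its signatory stake approaches 100%, while the attacker's fork carries only the keys the attacker controls.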
|
Twitter @shunsatakahashi
|
|
|
Ix
|
|
May 28, 2018, 06:52:33 PM |
|
Proof-of-Approval doesn't need Weak Subjectivity - subjectivity should never be acceptable in short timespan or long.

You haven't proven that your protocol isn't subjective. In fact, you can't - or else you've proved that everyone has been looking at distributed systems incorrectly forever. You would have to ignore all manner of attacks to do so. You would also have to disprove relativity. The distinction Vitalik and I make is weak subjectivity: it isn't all that subjective, but nor is it positively deterministic - which no protocol can be anyway.

Can you point me to PoW system that does use developer checkpoint? They shouldn't need it because the external resource consumption of PoW makes it physically impossible to build chains at a faster rate than their resources would permit. With selfish mining, one can stash away mined blocks for use in attack at a later time but they would be giving up earnings now for mounting a future attack.

Bitcoin, for one.
|
|
|
|
shunsaitakahashi (OP)
Member
Offline
Activity: 94
Merit: 16
Research, Analyze and Invent Crypto Systems
|
|
May 29, 2018, 12:06:21 AM |
|
You would also have to disprove relativity.
I was attempting a serious and thoughtful discussion. Not sure if we are still having that.

Can you point me to PoW system that does use developer checkpoint? They shouldn't need it because the external resource consumption of PoW makes it physically impossible to build chains at a faster rate than their resources would permit. With selfish mining, one can stash away mined blocks for use in attack at a later time but they would be giving up earnings now for mounting a future attack.

Bitcoin, for one.

Here are some additional details on the developer checkpoint you provided: https://bitcoin.stackexchange.com/questions/1797/what-are-checkpoints/70824#70824

"It is a long term goal of removing the checkpoints entirely, because they are a source of confusion over the security model and power the developers have. But currently the role they serve is to prevent low difficulty header flooding attacks, and there has been no alternative solution proposed yet (that I know of)."

These checkpoints are clearly not needed by the protocol (otherwise the goal of removing them wouldn't make sense). They seem to be there because of some old software (which is planned for an update), not for protocol reasons. In other words, the Bitcoin protocol doesn't need developer checkpoints and will not have any in the future.
|
Twitter @shunsatakahashi
|
|
|
eli_lyd1
|
|
May 29, 2018, 05:48:14 AM |
|
Seems interesting - are there any projects using Proof-of-Approval?
|
|
|
|
Ix
|
|
May 29, 2018, 06:12:27 AM Last edit: May 29, 2018, 02:14:52 PM by Ix |
|
I was attempting for a serious and thoughtful discussion . Not sure if we are still having that.

Light can travel around the Earth about 7 times a second. That means the minimum latency between two peers on opposite sides of the Earth is roughly 1s/7/2, or 71 ms. This is only one order of magnitude less than your 1 s block confirmation times. Now add in the fact that real latency is not the speed of light, and that order of magnitude all but disappears. The point (maybe reductio ad absurdum, but maybe not) is that you have no control over any node's subjective view of the network. Weak subjectivity is a stronger principle than subjectivity. You cannot eliminate subjectivity, and my snarky point doesn't invalidate the rest of what I said. Fault tolerance can't be hidden away by rationalizing actors; it is at the forefront of distributed networking.

Here are some additional details on the developer checkpoint you provided - https://bitcoin.stackexchange.com/questions/1797/what-are-checkpoints/70824#70824 "It is a long term goal of removing the checkpoints entirely, because they are a source of confusion over the security model and power the developers have. But currently the role they serve is to prevent low difficulty header flooding attacks, and there has been no alternative solution proposed yet (that I know of)." These checkpoints are clearly not needed for the protocol (otherwise the goal of removing them wouldn't make sense). They seem to be there because of some old software (which is being planned for update), not for protocol reasons. In other words, Bitcoin protocol doesn't need developer checkpoints and will not have any in future.

The developers provided a solution to an attack that requires subjectivity. You asked, I provided. I'm not sure how you presume this applies to old software or that it isn't for protocol reasons, as both of these presumptions are incorrect.

If new hardware were created that was an order of magnitude faster than today's hardware, you can damn well bet the bitcoin devs would be adding another checkpoint to the software tout de suite, while still wondering what the solution to this problem is. The genesis block is hard-coded into the software and is therefore subjective itself, so the entire notion of any network starts with subjectivity. Extending that subjectivity to avoid simple but damaging attacks is hardly a crime.
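The latency arithmetic above checks out; here it is as a quick sanity calculation:

```python
# Light circles the Earth roughly 7 times per second (~40,000 km
# circumference at ~300,000 km/s), so a one-way trip to the opposite
# side of the globe is half a circumnavigation.
circumnavigations_per_second = 7
one_way_latency_s = 1 / circumnavigations_per_second / 2
print(f"{one_way_latency_s * 1000:.0f} ms")  # ~71 ms, as stated in the post
```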
|
|
|
|
sapotacoin
Copper Member
Newbie
Offline
Activity: 14
Merit: 0
|
|
May 29, 2018, 12:27:09 PM |
|
The developers provided a solution to an attack that requires subjectivity. You asked, I provided. I'm not sure how you presume this applies to old software or that it isn't for protocol reasons, as both of these presumptions are incorrect. If new hardware was created that was an order of magnitude faster than today's hardware, you can damn well bet the bitcoin devs would be adding another checkpoint to the software tout de suite, still wondering what the solution to this problem is. The genesis block is hard-coded into the software and is therefore subjective itself, so the entire notion of any network starts with subjectivity. Extending that subjectivity to avoid simple but damaging attacks is hardly a crime.
|
|
|
|
shunsaitakahashi (OP)
Member
Offline
Activity: 94
Merit: 16
Research, Analyze and Invent Crypto Systems
|
|
May 29, 2018, 03:40:05 PM |
|
If two competing forks (with epochs) have absolutely equal stake (even to the billionth fraction) at the first separating block, that may result in forking the chain itself. I can't think of a solution for such a situation other than forking the chain.
So why wouldn't the history attacker just do exactly that, present you with an identical looking fork with a different epoch? Surely you have to pick the higher stake epoch, which has to be greater than 50%, not 99%?

Long timespan scenario - some epochs are not common to the forks

Note that there are two separate thresholds in play here:
1. The >50% threshold for a block to be valid. If the attacker presents less stake than that, there is not even an attack fork - the blocks are simply invalid.
2. When the attacker presents an attack fork with valid blocks (each block having >50% approval), the fork selection procedure comes into play. That procedure must choose between the real chain and the attack fork.
3. The fork selection procedure chooses based on signatory stake. The signatory stake in the real chain would be >99% for long-timespan history attacks. In order to win, the attacker must present stake greater than the signatory stake in the real chain.

Short timespan scenario - epochs are common to the forks

If there are no epochs contained in the attack fork, the fork with the maximum approval stake at the top wins. By approving a block, parties are also approving the ancestry of that block. So the block containing the maximum stake on top wins.

Hope this explains it.
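The two thresholds can be sketched as one selection routine (a Python illustration under assumed data shapes; `signatory_stake_of` stands in for however a node computes a fork's signatory stake):

```python
def block_valid(block, stake_of, total_stake):
    """Threshold 1: a block needs approvals from >50% of total stake."""
    approved = sum(stake_of[a] for a in block["approvers"])
    return approved * 2 > total_stake

def select_fork(forks, signatory_stake_of, stake_of, total_stake):
    """Threshold 2: among forks whose blocks are all valid, the fork
    with the greater signatory stake wins."""
    valid = [f for f in forks
             if all(block_valid(b, stake_of, total_stake) for b in f["blocks"])]
    return max(valid, key=signatory_stake_of, default=None)
```

A fork that fails threshold 1 never reaches the comparison at all, which is why the attacker cannot win merely by scraping past 50% approval.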
|
Twitter @shunsatakahashi
|
|
|
shunsaitakahashi (OP)
Member
Offline
Activity: 94
Merit: 16
Research, Analyze and Invent Crypto Systems
|
|
May 29, 2018, 03:47:42 PM |
|
Hello Eli_lyd1,

Seems interesting - are there any projects using Proof-of-Approval?
We have just started the development process.
1. It's called Takanium - agreed, it's a bit cheesy, but we will go with it for now: https://github.com/Takanium
2. One super dev is on board. Looking for many, many more.
3. The core would likely use Golang - the performance of C++ with the developer productivity of Python.
4. A smart contract white paper is in process.
5. Smart contracts would likely use a NodeJS VM. It seems to be the best option at this time.
6. We believe in tested code. Code coverage is expected to be >90%.
Any advice you can give us is very much appreciated. Regards, Shunsai
|
Twitter @shunsatakahashi
|
|
|
shunsaitakahashi (OP)
Member
Offline
Activity: 94
Merit: 16
Research, Analyze and Invent Crypto Systems
|
|
May 30, 2018, 03:39:09 PM |
|
Hello Ix,

Light can travel around the Earth about 7 times a second. That means the minimum latency between two peers on opposite sides of the Earth is roughly 1s/7/2, or 71 ms. This is only one order of magnitude less than your 1 s block confirmation times. Now add in the fact that real latency is not the speed of light, and that order of magnitude all but disappears. The point (maybe reductio ad absurdum, but maybe not) is that you have no control over any node's subjective view of the network. Weak subjectivity is a stronger principle than subjectivity. You cannot eliminate subjectivity, and my snarky point doesn't invalidate the rest of what I said. Fault tolerance can't be hidden away by rationalizing actors; it is at the forefront of distributed networking.
You are absolutely correct that 1 sec did assume closer proximity of >50% of stakeholders. The slot period is expected to be small by design - to make it difficult for nodes not in the cloud to compete for block approvals.

The genesis block is hard-coded into the software and is therefore subjective itself, so the entire notion of any network starts with subjectivity.

Hard-coded means that each node sees an identical genesis block - they have no choice but to accept it. That is not subjective; it is purely objective.

The developers provided a solution to an attack that requires subjectivity. You asked, I provided.

Yes, I did not know of it before. I appreciate that.

I'm not sure how you presume this applies to old software ...

Git blame shows that code to be older than 4 years - it could actually be older still, since I didn't go all the way back.

...that it isn't for protocol reasons, as both of these presumptions are incorrect.

"It is a long term goal of removing the checkpoints entirely..." One can't remove the checkpoint if it is there for protocol reasons :-)

If new hardware was created that was an order of magnitude faster than today's hardware, you can damn well bet the bitcoin devs would be adding another checkpoint to the software tout de suite, still wondering what the solution to this problem is.

Agreed. If there were an unexpected breakthrough that made PoW dramatically easier to solve, all PoW protocols would have to take some preventive measures. This discussion is providing me new insights and I really appreciate that. Shunsai
|
Twitter @shunsatakahashi
|
|
|
shunsaitakahashi (OP)
Member
Offline
Activity: 94
Merit: 16
Research, Analyze and Invent Crypto Systems
|
|
May 30, 2018, 03:45:40 PM |
|
Hello Sapotacoin, The developers provided a solution to an attack that requires subjectivity. You asked, I provided. I'm not sure how you presume this applies to old software or that it isn't for protocol reasons, as both of these presumptions are incorrect. If new hardware was created that was an order of magnitude faster than today's hardware, you can damn well bet the bitcoin devs would be adding another checkpoint to the software tout de suite, still wondering what the solution to this problem is. The genesis block is hard-coded into the software and is therefore subjective itself, so the entire notion of any network starts with subjectivity. Extending that subjectivity to avoid simple but damaging attacks is hardly a crime.
See the answers in reply to Ix. Regards, Shunsai
|
Twitter @shunsatakahashi
|
|
|
Ix
|
|
May 30, 2018, 10:17:51 PM |
|
Hard-coded means that each node would see the genesis block identical - they have no choice but to accept the genesis block. That is not subjective, it is purely objective.

It's all just different types of subjectivity. Bitcoin Cash is a fork of bitcoin that its proponents argue, subjectively, is "bitcoin". The checkpoints were hardcoded from blocks a year in the past that no one who has watched the network would argue about. The weak subjectivity that you brought up is literally the exact same thing (except Bitcoin relies on centralized developers rather than a community), and it prevents the long-range attacks that are oft-heralded as the downfall of PoS.

The attacks they are trying to prevent are different: for bitcoin it is primarily just a spam attack, because the node can eventually objectively determine the longest chain (which is assumed to be the correct network - again, Bitcoin Cash proponents would disagree), and for PoS it is rewriting the history of the network. Now while that sounds scary, the fix for both is the same, simple checkpoint. The PoS attack also requires intrinsic information (stakeholder private keys), for which you have to devise some ridiculous scenario to create an attack, vs. the extrinsic PoW, which was incredibly weak at the start of Bitcoin and can be performed by anyone with a GPU.

EOS and Ripple or whatever might have different concepts of weak or regular subjectivity, but I am arguing from the perspective of my proposal, which is echoed by Vitalik's blog post, and is what you referenced. And most of the "nothing at stake" issues derived from the original concepts of PoS by QuantumMechanic as implemented by Sunny King in PPCoin. I am admittedly not familiar enough with the newer protocols to argue for or against how they do it, but I know there is *a* way to do things safely - one that asks no more of anyone than trusting the place you download the software from in the first place. Because if you can't do that, you can't ever trust that you are on the correct network. Things can only get absurd from there.

"It is a long term goal of removing the checkpoints entirely..." Can't remove the checkpoint if it is for the protocol reasons :-)

Funny enough, I was googling to make sure these checkpoints still existed in the same way and found this coindesk article: https://www.coindesk.com/bitcoins-security-model-deep-dive/

"On a related note, every blockchain system has its genesis block hard coded into the node software. You could argue that there is a social contract to the "shared history" that is the ledger – once a block is old enough, there is an understanding amongst everyone on the network that it will never be reverted. As such, when developers take a very old block and create a checkpoint out of it, it is done more so as an agreed-upon sanity check rather than as a dictation of history."

They apparently feel the same way I do and used the same argument, the rabid "this is centralization" whiners be damned. (Not that an appeal to authority wins my argument, but I'd like to think it's an appeal to common sense, which is emboldened by other people coming up with the exact same rationale.)

This discussion is providing me new insights and I really appreciate that.

Cool - I know I can be an overly argumentative and strongly opinionated ass, but I have been studying this stuff and devising alternative systems since 2011, so I've had a lot of time to work some things out. It's disheartening to me to see someone spend a lot of time attempting to fix things I don't see as problematic (or that have simpler solutions) which are entirely conceived of to validate proof of waste. There is *a lot* of propaganda regarding bitcoin when you start diving into the technical details. I am *finally* putting my money where my mouth is and working on my own protocol full-time, but based on some of what you've said, we might disagree on my economic notions more than technical ones.
|
|
|
|
monsterer2
|
|
May 31, 2018, 09:51:39 AM |
|
If two competing forks (with epochs) have absolutely equal stake (even to the billionth fraction) at the first separating block, that may result in forking the chain itself. I can't think of a solution for such a situation other than forking the chain.
So why wouldn't the history attacker just do exactly that, present you with an identical looking fork with a different epoch? Surely you have to pick the higher stake epoch, which has to be greater than 50%, not 99%?

Long timespan scenario - some epochs are not common to the forks. Note that there are two separate thresholds in play here. 1. The >50% threshold for a block to be valid. If the attacker presents less stake than that, there is not even an attack fork - the blocks are simply invalid. 2. When the attacker presents an attack fork with valid blocks (each block having >50% approval), the fork selection procedure comes into play. That procedure must choose between the real chain and the attack fork. 3. The fork selection procedure chooses based on signatory stake. The signatory stake in the real chain would be >99% for long-timespan history attacks. In order to win, the attacker must present stake greater than the signatory stake in the real chain. Short timespan scenario - epochs are common to the forks. If there are no epochs contained in the attack fork, the fork with the maximum approval stake at the top wins. By approving a block, parties are also approving the ancestry of that block. So the block containing the maximum stake on top wins. Hope this explains it.

Hi Shunsai,

Why is it not possible to present two different epochs representing the same time period? That's where my attack angle was coming from.

Cheers, Paul.
|
|
|
|
Traxo
|
|
May 31, 2018, 03:07:12 PM Last edit: May 31, 2018, 06:25:24 PM by Traxo |
|
If you're interested, you can check out my signature for a link to my whitepaper on the Decrits consensus algorithm, which is relatively similar to yours (with an identical long-range attack defense) and is 5+ years old.

Another message from @anonymint, which I received in private chat. He says that section 4.2.5, "Scenario: Voices Colluding to Fork the Network," is correct that with non-proof-of-work systems only the users who were online during the attack can detect malevolence, and this requires bounded/partial asynchrony (i.e., not fully asynchronous, as Byteball and Hashgraph are). So if those assumptions - a supermajority of users being online and bounded network asynchrony - are not fulfilled, then security deposits are insufficient for security, although they might or might not help rate-limit. Without any effective penalty on malevolence there is no cost to attacking, regardless of whether the validator set is permissionless or permissioned. He presumes the same vulnerability can be found in every non-proof-of-work design, including Proof-of-Approval.
|
|
|
|
Ix
|
|
May 31, 2018, 07:17:29 PM |
|
So if those assumptions for super majority of users being online and bounded network asynchrony are not fulfilled, then security deposits are insufficient for security, although they might or might not help rate limit. So without any effective penalty on malevolence there's no cost to attacking it regardless if the validator set is permissionless or permissioned.

I don't want to derail Shunsai's thread with my stuff unless it is a comparison with his, so after this feel free to respond in PMs or bump the Decrits thread. No supermajority is required to be online, because the order of records is determined in advance (something you went on for days about being a vulnerability - but it's not). Only some nodes are required to be online. Any nodes. The attacking nodes will be forced to fork as part of the protocol - permanently, if they keep the attack going long enough. Nodes that were not online will be required to pick a fork; my concession is that this will be big news in any kind of remotely popular network, so picking the correct network will be an easy, one-time event. Because after that, all the money of the attacker's fork is destroyed, and thus the attack can't be repeated without investing a lot of money again. (Contrary to PoW and most PoS, which can be repeatedly attacked.) I also have another whole side of making this process easier that is not documented in the whitepaper, but I'll save that for when I can alpha test.

He presumes the same vulnerability can be found in every non-proof-of-work design including Proof-of-Approval.
Probably, but I accept and embrace it.
|
|
|
|
Traxo
|
|
May 31, 2018, 08:04:14 PM Last edit: May 31, 2018, 09:03:54 PM by Traxo |
|
my concession is that this will be big news in any kind of remotely popular network, so picking the correct network will be an easy, one time event.
This response is applicable to all nothing-at-stake systems, so I will reply in this thread. Your reply belies the fact that @anonymint refuted it before you wrote it. Did you not see the linked Medium post I cited for you in the prior post: https://medium.com/@shelby_78386/the-caveat-though-is-that-when-the-attacker-can-fork-the-vested-interests-of-some-of-the-users-9340dd037a61

Public opinion can be manipulated. Just look at every fork war that has taken place already for evidence. There's no objectivity in public opinion. Just a lot of chest thumping and arguments about whose fork is longer and fatter.

Because after that, all the money of the attacker's fork is destroyed
Which one is the attacker's fork? Again, please read the Medium post that was cited. No supermajority is required to be online because the order of records is determined in advance
Which section of your white paper explains this?
|
|
|
|
d5000
Legendary
Offline
Activity: 4088
Merit: 7483
Decentralization Maximalist
|
|
June 01, 2018, 06:17:44 AM Last edit: June 01, 2018, 06:53:41 AM by d5000 |
|
Hi, I've now looked a bit into your whitepaper (the 2.0 version) and read your conversation with monsterer (the discussion with Ix is still missing; it's pretty long, but it may be worth it). First I want to clarify that while I read a lot about cryptocurrency consensus models (specifically PoS models, as a - possible or impossible? - "solution" to the N@S problem intrigues me, e.g. Vitalik/Vlad's blogs on Casper, anonymint's and monsterer's posts, etc.), I'm not a computer scientist nor a professional developer, so maybe I am wrong in some interpretations.

Well, what I like about your protocol is that you have created something like a DPoS/BFT model without a static "delegate" set (e.g. Bitshares, Tendermint or Casper). The problem of those systems is obviously 1) the "liveness" threshold and 2) the accepted centralization, which could lead to social engineering and cartel attacks. As in your system everybody can become an "approver" (of blocks or epochs), that doesn't apply. (However, you create another threshold as you pre-select some stakers to create blocks; "if all block creators go offline, the blockchain stops" is probably wrong, as simply the next slot would be chosen, but the problem persists in that in some periods with low participation block production will likely be slow.)

Some observations:

1) It seems that it will be very difficult to reach the quorum of 50% for block approvals. I think this was also mentioned by Ix. In Nxt, for example, which even has "stake leasing", about 30% participate in forging.

2) If I understand it right, "block creators" compete to create blocks after a new time slot has begun. Isn't this Proof of Work, as the fastest block creator with the best Internet connection would have the most chances to win? Wouldn't that lead to an arms race like in Bitcoin?

3) Time slots can only be approximate, as timestamps can easily be faked.
I like the "delayed" epoch approvals a bit, because this way high approval rates can be reached, which makes history attacks very, very difficult, although I still haven't grasped whether double-voting (what monsterer mentioned) may be a problem or not (if yes, maybe a "slashing" mechanism could help). I would however make epochs pretty long (e.g. the equivalent of a day), because otherwise participation would be lower, inflation would be high, and the coin would suffer from blockchain bloat and traffic issues (imagine millions of small stakers approving multiple epochs per day...).

Problem 1 could be mitigated with a "leasing" mechanism like in Nxt or Waves that lets small stakers lease coins to big stakers with good hardware, but this has the drawback that social-engineering/cartel attacks like in DPoS may then become possible too, only that "approval pools" would take the place of "delegates". An interesting idea could be to only let pools approve blocks with "leased" stake but not epochs, as epochs can be approved in a delayed manner and so the participation rate would be higher. Or you only let stakers approve epochs and use a more traditional PoS algorithm for the blocks.

I continued a bit in the whitepaper and found this sentence about the double-voting problem:

"Proof-of-Approval provides award for valid approvals, even on multiple blocks, as long as they are not in conflict, i.e., they share the same parent. An approver's award is maximized when they approve all non-conflicting blocks. But if they approve any conflicting blocks, the award vanishes."

This looks like the "duplicate stake detection" mechanisms in today's PoS coins, which are unfortunately not effective (or effective only for a small subset of problems, mainly "unintended" forks caused by network problems, which get orphaned earlier by this mechanism). The problem is the following: if you want to double spend, you will publish your "conflicting chain" with the double-spend several blocks/slots after you forked it from the "honest" chain. Now if the staker has already received the reward for the approved block on the main chain (and maybe has exchanged it for a good or another altcoin), then he can safely approve the attacker's second chain. And as timestamps can be faked, it's no issue to do that after some time. This seems only (somewhat) solvable with "slasher"-like designs. There may be some protection against this problem because of your "block creator selection" process, but I think an attacker with 50% stake could game it, or at least keep trying until he succeeds. Maybe I'm wrong here, however.

PS: People like @abdulkhaliq123 and @zgreenz are almost surely bots that want to achieve a higher forum rank; their posts seem to be copied and pasted from one of the early posts in this thread. It's not necessary to answer them, as machines (and more so the bots used in this forum) are still too dumb to understand consensus protocols.
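The quoted award rule can be expressed directly; the concern above is precisely that this check only bites while the approver still cares about the award (a Python sketch with hypothetical record fields):

```python
def approval_award(approvals, per_approval_award=1):
    """Per the quoted whitepaper rule: approving several candidate
    blocks is fine as long as they all share the same parent;
    approving blocks with different parents is a conflict and
    forfeits the entire award."""
    parents = {a["parent"] for a in approvals}
    if len(parents) > 1:
        return 0  # conflicting approvals: the award vanishes
    return per_approval_award * len(approvals)
```

d5000's point is that if the attacker's chain is only published after the approver has already collected (and spent) the main-chain reward, forfeiting that reward no longer deters anything, which is why slasher-style deposits are suggested instead.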
|
|
|
|
Traxo
|
|
June 01, 2018, 01:32:20 PM Last edit: June 01, 2018, 02:26:58 PM by Traxo |
|
Quote
In that case, why approve blocks at all?

Who are you replying to, and what specific issue are you attempting to take issue with? Let me presume you're trying to say that if public opinion can be ambiguous, then why approve blocks at all? If that is your point, then the answer is that proof-of-work is not ambiguous, because there is an objective longest chain, proven by the cumulative difficulty obtained by adding up the difficulty of all the blocks in the chain. IOW, to avoid making posts which are noise, it is helpful to learn Bitcoin 101 before commenting here. Anyway, educating is okay, but the problem is that if someone does not respond correctly, then incorrect ideas promulgate.
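The objective fork choice described above (heaviest cumulative difficulty, not most blocks) can be sketched as follows. Block shapes and names here are illustrative assumptions, not Bitcoin's actual data structures:

```python
# Sketch of PoW's objective fork choice: among competing chains, pick the
# one with the greatest total difficulty, not merely the most blocks.
def best_chain(chains):
    """Each chain is a list of blocks; each block carries its difficulty."""
    return max(chains, key=lambda c: sum(b["difficulty"] for b in c))

honest   = [{"difficulty": 10}, {"difficulty": 12}, {"difficulty": 11}]
attacker = [{"difficulty": 5}, {"difficulty": 5},
            {"difficulty": 5}, {"difficulty": 5}]

# The honest chain wins despite having fewer blocks (33 work vs. 20).
winner = best_chain([honest, attacker])
```

Because difficulty is backed by expended energy, any node evaluating this rule from genesis reaches the same answer without trusting anyone, which is the non-ambiguity being claimed.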
@d5000, I think you may remember the discussion you had last year with @anonymint (under one of his former pseudonyms) in Theymos' thread about altcoins. So it seems the points you are making here reiterate some of that discussion about nothing-at-stake. @anonymint wrote down his analysis of the nothing-at-stake issue, which seems to apply to all of these non-proof-of-work consensus systems: https://gist.github.com/shelby3/e0c36e24344efba2d1f0d650cd94f1c7#oligarchy-if-pos-is-functioning

He does not think there will be any non-proof-of-work design that escapes the nothing-at-stake problem except under the conditions he already mentioned: when a supermajority of the users are always online and the network remains within a bounded asynchrony. He is confident a nothing-at-stake vulnerability can be identified in Proof-of-Approval. But who has the time to find the nothing-at-stake flaw in all of these non-proof-of-work designs? It is as if everyone wants to reinvent the wheel of nothing-at-stake, finding some way to blind themselves to the fact that their design is also vulnerable.

Quote
Well, what I like about your protocol is that you have created something like a DPoS/BFT model without a static "delegate" set (e.g. Bitshares, Tendermint or Casper).
The inviolable rule is that 100% finality of transaction confirmations can only be obtained with a permissioned validator set. And then of course there's the liveness issue: the chain can get stuck and require a hardfork to unstick it. And of course there's what you wrote about the political corruption that results from that and/or from delegating stake. That was covered again in detail in the discussion of EOS/DPoS in @anonymint's latest blog: https://steemit.com/cryptocurrency/@anonymint/scaling-decentralization-security-of-distributed-ledgers
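The finality/liveness trade-off stated above can be sketched with the standard BFT quorum rule (as in Tendermint-style designs): a block is final only once more than 2/3 of a fixed, known validator set has signed it, so with a third or more of validators offline no quorum forms and the chain stalls. All names here are illustrative:

```python
# Sketch of BFT finality over a permissioned validator set: finality
# requires signatures from strictly more than 2/3 of the known validators.
def is_final(signers, validator_set):
    return len(set(signers) & set(validator_set)) * 3 > 2 * len(validator_set)

validators = {"v1", "v2", "v3", "v4"}          # fixed (permissioned) set

final_ok = is_final({"v1", "v2", "v3"}, validators)  # 3 of 4: quorum, final
stuck    = is_final({"v1", "v2"}, validators)        # 2 of 4: no quorum
```

This is why the validator set must be known in advance: the 2/3 threshold is only meaningful against a fixed denominator, which is exactly the "permissioned" requirement being argued.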
|
|
|
|
shunsaitakahashi (OP)
Member
Offline
Activity: 94
Merit: 16
Research, Analyze and Invent Crypto Systems
|
|
June 01, 2018, 03:52:32 PM |
|
Hello Ix,

Sorry for the delay in reply.

Hard-coded means that each node would see the genesis block as identical; they have no choice but to accept the genesis block. That is not subjective, it is purely objective.

Quote
It's all just different types of subjectivity.

Nomenclature aside, all blockchains are solutions to the Byzantine Generals Problem (https://people.eecs.berkeley.edu/~luca/cs174/byzantine.pdf). The goal of a solution is to not have to trust any single party, or at least to have as few instances as possible of having to trust one or more of them. Of all the public blockchains today, PoW blockchains place the fewest trust requirements on the parties joining the network. Most (all?) public stake-based systems require parties to trust others more often. I believe, for that reason, PoW is the winner today (although some of it may be historic).

Quote
I know I can be an overly argumentative and strongly opinionated ass, but I have been studying this stuff and devising alternative systems since 2011 so I've had a lot of time to work some things out. I am *finally* putting my money where my mouth is and working on my own protocol full-time, but based on some of what you've said, we might disagree on my economic notions more than technical ones.

The knowledge shows through and is appreciated. Looking forward to more discussions in the near future :-)

Regards,
Shunsai
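The "hard-coded genesis" point above can be made concrete with a small sketch. This is purely illustrative (the hash preimage and function names are made up): every node ships with the genesis hash baked into its code, so accepting the chain's root is an objective check rather than a trust decision.

```python
# Sketch of objective genesis acceptance: the genesis hash is compiled in,
# so every node rejects any chain with a different root, no trust needed.
import hashlib

HARDCODED_GENESIS_HASH = hashlib.sha256(b"genesis:2018-05-28").hexdigest()

def accept_chain(chain):
    """Reject any chain whose root does not match the compiled-in genesis."""
    root_hash = hashlib.sha256(chain[0]).hexdigest()
    return root_hash == HARDCODED_GENESIS_HASH

accepted = accept_chain([b"genesis:2018-05-28", b"block1"])  # matching root
rejected = accept_chain([b"fake-genesis", b"block1"])        # wrong root
```

Every honest node evaluates the same comparison against the same constant, which is why the genesis block carries no subjectivity at all.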
|
Twitter @shunsatakahashi
|
|
|
|