HunterMinerCrafter
|
|
October 14, 2015, 07:12:39 PM |
|
How can we assert that a "free" level is not duplicated without checking all prior blocks? The complexity of checking a block should grow linearly with the number of transactions in the block, and nothing else. If the time to check a block grows linearly with block depth then this creates a gradual slowdown of the network over time, which is not acceptable. The only operation that should be linear in block depth is resync!
What is the problem? Simple hash table is not enough for this? If you are afraid that it will not fit in RAM you can construct it in some file.
A hash table (probably we'd actually use a radix tree, but such details are neither here nor there) would do it; the problem is the growth of this table over time. The concern is not about the size of the hashtable itself, but the ever increasing time required to enumerate it for each block. Verifying a block (assuming the same number of transactions) should take the same amount of time whether it is block number 10, 1000, or 10000000. The complexity of block verification is supposed to be linear in the number of transactions in the block. It should not take any longer to verify a block with 5 transactions ten years from now than it takes to verify a block with 5 transactions today. Because this table would grow with each block, the time to validate a block today would be significantly less than the time to validate the same block five years from now. This is a problem; these times should be the same.
Having block validation complexity that is more than linear in the number of transactions implies that eventually the network would reach a point where clients could not validate blocks as quickly as they are produced. At such a point the network would cease to function. I suppose that we could resolve this by perpetually increasing the time between blocks, but this is undesirable for (hopefully) self-evident reasons.
Let's start with something simple: verifying your control over the bot. This could be done in a number of ways... signing a message or sending coins, including some predefined token message in the blockchain, etc. Your call.
LOL So increased or switched-off production will not convince you?
It would, if done correctly. However, done correctly it can only convince *me* and not the community at large. (The prearranged switch-off time would have to be privately arranged between us, and undisclosed.) I'd prefer that you do this in a way that can be demonstrated to everyone, not just to me... but I'll take what I can get. If this is the route that you'd like to take, I can arrange a switch-off time with you in PM.
EDIT: more typos
|
|
|
|
ElvenArcher
Newbie
Offline
Activity: 37
Merit: 0
|
|
October 14, 2015, 08:03:09 PM |
|
The concern is not about the size of the hashtable itself, but the ever increasing time required to enumerate it for each block.
Just rebuild it every few years and it will always give you O(1). There is no problem at all. And your radix tree will have constant search time too (beware of the 10 ms seek time if you want to keep the tree on an HDD). So the verification time will always be bounded.
|
|
|
|
HunterMinerCrafter
|
|
October 14, 2015, 08:43:10 PM |
|
Just rebuild it every few years and it will always give you O(1). There is no problem at all.
How do you figure? It can never be O(1); at best the lookup time will grow as O(log n) with depth. This is still strictly more than linear in the transaction count! As such, there will always be "some point" where the block validity check time grows to be too much. Maybe we can push this out along a significantly long curve, hundreds or thousands of years or so, but I'm quite skeptical that this could be made to work reasonably for even a decade. I think that until we can put some hard numbers behind the increase in block size and block validation time, we cannot just assume that it would be "no problem at all..."
|
|
|
|
e1ghtSpace
Legendary
Offline
Activity: 1540
Merit: 1001
Crypto since 2014
|
|
October 14, 2015, 09:02:53 PM |
|
I wrote all this and no one notices?? I don't follow what you're getting at here... In any case, it doesn't explain why the miners wouldn't simply *only* include their own transactions, and ignore anyone else's.
The bot miner would need to get a larger reward than if they simply provided a solution themselves, otherwise there is no incentive to take someone else's solutions... In other words, the reward for the "human" solution would need to be a net negative in order for the bot to see incentive to include it in a block instead of their own solution.
What I'm saying is: if the bot were to allow only its own solution into the block, it would be losing motocoins. This is because the reward for human miners is based on the average number of human solves in the last 10 blocks (regardless of how many humans play in the current round), not just the current block, and the bot gets 10% of each human's reward. So if 10 humans solve a block and the bot gets 10% of their rewards, then a bot that chooses not to allow human solves forgoes that 10% cut of the humans' rewards. This gives it an incentive to include the human solutions.
If the bot only accepts its own solution, and an average of 10 humans mined the last 10 blocks, then it will only get: 10 coins (from the bot's block) + 10 coins / 10 (average human players of the last 10 blocks) * 1 (number of human solutions in the block). But if it accepted, say, 10 human rewards, then its payout would look like this: 10 coins + 10 / 10 * 10 / 10 (10% of the humans' rewards). So in total the bot would get 11 coins by accepting human solutions, and if it also accepted its own solution along with the human solutions it would get 12 coins. This means that on average the bots will get 1 extra Motocoin if they accept all human solutions. I don't know if I explained that right, but I don't really have much time to check.
Human solutions should be averaged over n = 10 blocks to stop "premining" by the bot or someone else, because that stops them from suddenly releasing 100 solutions in one block to get lots of coins while penalising real human miners.
Edit: Can you also answer this please? http://www.qmole.uk Would it be possible to compile Motocoin for iOS on the Qmole app? PROGRAMMING LANGUAGES:
Common Lisp (ECL), C, C++ (gcc, g++, clang, clang++), Java (JamVM), Clojure, Lua, Scheme (Gambit), OCaml, Python, Perl, A+, APL
Edit 2: Wow, syncing is taking forever; 3+ hours and it's only synced 100,000 blocks. Will we ever get one of those blockchain torrents (I've forgotten the name) where you put the file in your %appdata%/Motocoin folder and the wallet imports the blocks? Maybe it's bottlenecked by my CPU... it's an i5-4670K @ 3.40 GHz.
Can human blocks be based off bot blocks, but only from the last 10 minutes, so they have 10 minutes to complete their level or a new level is generated?
|
|
|
|
HunterMinerCrafter
|
|
October 14, 2015, 09:17:45 PM |
|
I wrote all this and no one notices??
I still don't follow what you're getting at with this. Can you write out an overview of the proposed logic, defined in a more abstract form? (Formulas instead of numbers?) Edit: Can you also answer this please? I don't know much about qmole. Does it have SDL support? If so, it might possibly work, but is probably nontrivial. Can human blocks be based off bot blocks, but only from the last 10 minutes, so they have 10 minutes to complete their level or a new level is generated?
I've been mulling over something similar, but don't have an answer just yet. One thing that I think would be safe to say is that things are much simpler all around if we do not actually allow an indefinite time for solutions.... We can put a very long limit, but I'm increasingly thinking that any sustainable solution must put *some* limit! I'm still holding out hope for some solution that doesn't require any reduction in security, as well...
|
|
|
|
ElvenArcher
Newbie
Offline
Activity: 37
Merit: 0
|
|
October 14, 2015, 09:22:44 PM |
|
How do you figure? It can never be O(1); at best the lookup time will grow as O(log n) with depth.
Where did you get that log? You don't want to rebuild it completely and just want to make some kind of a tree? Or are you talking about collisions? Anyway, your log is naturally limited by the hash size, so I don't see any problem.
|
|
|
|
HunterMinerCrafter
|
|
October 14, 2015, 10:34:33 PM |
|
Where did you get that log? You don't want to rebuild it completely and just want to make some kind of a tree? Or are you talking about collisions? Anyway, your log is naturally limited by the hash size, so I don't see any problem.
?!? Let's consider some options.
First, the naive approach: simply keep a list of hashes for every level seen. Of course searching this list is O(N) in the number of levels seen, and of course a check whose cost grows with block depth at all is right out as unacceptable.
So let's imagine storing the hashes in a binary tree instead (it could be splay, radix, whatever...). Now we can search in O(log(N)). The value of N, the number of level hashes stored, grows (by up to 10) with each block. The limit is not the number of bits in the hash, but the total number of possible hash values, and this is quite large. This is sub-linear, but still growing.
Finally, we could use a hash table, as you point out. This allows for searching in O(1) in the average case, yes, but it still grows as O(N) in the worst case. We cannot assume the average case, because there is nothing that actually asserts uniformity of the data. We can say that it will be sub-linear, but also more than constant. Whether this would end up better or worse than the log tree in practice is hard to say, but we do know that it could be just as bad as the naive approach. Considering that the miner picks their seeds there is nothing to prevent degeneracy, so what we have here is primed for a potential DoS attack, as well, where a bot miner picks clustered level hashes to solve in order to drive this search time up toward O(N).
There is no actual O(1) option, here. This is the problem... no matter what you do, checking for the duplicate level will grow in time, slowing the block validity check bit by bit every block, until the network falls over. Block validity checking *really* does need to be strictly linear in the number of transactions, and *not* at *all* more than this. There's no escaping that. Your check would need to somehow be O(1) and it just simply can't be... If you have some magical way to search a set in less than O(log) worst case I'm sure the world would like to hear about it... This would be a groundbreaking accomplishment!
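To make the comparison concrete, here is a rough C++ sketch of the three options above. The types and names are purely illustrative (nothing here is from the actual Motocoin code); it only shows where the lookup costs come from:
Code:
#include <cstdint>
#include <iostream>
#include <set>
#include <unordered_set>
#include <vector>

using LevelHash = std::uint64_t;  // stand-in for the real level hash

int main() {
    std::vector<LevelHash> asList;          // naive list: lookup scans all N entries, O(N)
    std::set<LevelHash> asTree;             // ordered tree: O(log N) per lookup
    std::unordered_set<LevelHash> asTable;  // hash table: O(1) average, O(N) worst case

    // N grows by up to ~10 level hashes with every block, forever, so even the
    // O(log N) and "average O(1)" costs keep creeping up with chain depth.
    for (LevelHash h = 0; h < 1000; ++h) {
        asList.push_back(h);
        asTree.insert(h);
        asTable.insert(h);
    }

    const LevelHash query = 500;
    bool inList = false;
    for (LevelHash s : asList)  // the O(N) scan
        if (s == query) { inList = true; break; }

    std::cout << inList << " "
              << (asTree.count(query) != 0) << " "     // O(log N)
              << (asTable.count(query) != 0) << "\n";  // O(1) average
    return 0;
}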
|
|
|
|
e1ghtSpace
Legendary
Offline
Activity: 1540
Merit: 1001
Crypto since 2014
|
|
October 14, 2015, 11:02:05 PM Last edit: October 15, 2015, 09:19:44 PM by e1ghtSpace |
|
I wrote all this and no one notices??
I still don't follow what you're getting at with this. Can you write out an overview of the proposed logic, defined in a more abstract form? (Formulas instead of numbers?)
Ok, the human's reward is decided like this:
A human's payout = coin_reward * (number of blocks to average / number of human solutions in those blocks)
So, if 20 human solutions were included in the last 10 blocks (an average of 2 solutions per block), then the per-human reward in the next block (with coin_reward = 10 and 10 blocks to average) would be: 10 * (10 / 20) = 5. But the bot that solves the real block should receive 10% of each human player's reward, otherwise there is no incentive to include the human solutions. (A small worked example in code is at the end of this post.)
I'm still holding out hope for some solution that doesn't require any reduction in security, as well...
Well, you could argue that the security of this solution to the problem isn't reduced because no humans are playing right now anyway.
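Here is a rough worked example of that payout rule in code. The variable names and the 10% cut are just from my proposal above, not anything in the actual Motocoin source:
Code:
#include <iostream>

int main() {
    // Numbers from the example above (all illustrative).
    const double coin_reward = 10.0;             // base block reward
    const double blocks_to_average = 10.0;       // averaging window
    const double human_solves_in_window = 20.0;  // human solutions in the last 10 blocks

    // Per-human payout = coin_reward * (blocks_to_average / human_solves_in_window) = 5
    const double human_payout = coin_reward * (blocks_to_average / human_solves_in_window);

    // The bot keeps 10% of each human solution it includes in its block.
    const int included[] = {0, 1, 10};
    for (int n : included) {
        const double bot_total = coin_reward + 0.10 * human_payout * n;
        std::cout << "bot includes " << n << " human solves -> "
                  << bot_total << " MOTO\n";
    }
    return 0;
}
With these illustrative numbers, including more human solutions always increases the bot's payout.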
|
|
|
|
domob
Legendary
Offline
Activity: 1135
Merit: 1170
|
|
October 15, 2015, 06:10:50 AM |
|
There is no actual O(1) option, here. This is the problem... no matter what you do, checking for the duplicate level will grow in time, slowing the block validity check bit by bit every block, until the network falls over. Block validity checking *really* does need to be strictly linear in the number of transactions, and *not* at *all* more than this. There's no escaping that. Your check would need to somehow be O(1) and it just simply can't be...
If you have some magical way to search a set in less than O(log) worst case I'm sure the world would like to hear about it... This would be a groundbreaking accomplishment!
Actually there is (at least on average), which is a hash table (see the corresponding Wikipedia entry, for instance). That's what ElvenArcher has been saying above.
|
|
|
|
ElvenArcher
Newbie
Offline
Activity: 37
Merit: 0
|
|
October 15, 2015, 09:29:33 AM |
|
Considering that the miner picks their seeds there is nothing to prevent degeneracy, so what we have here is primed for a potential DoS attack, as well, where a bot miner picks clustered level hashes to solve in order to drive this search time up toward O(N).
So you want to use a hash table without calculating hashes at all? Of course every node will have a different random salt. You can attack only if you know it (and you will be able to attack only this particular node; it can detect this easily and change the salt, but if someone has access to another's moto folder, wouldn't it be saner to steal the wallet instead of this almost useless salt).
Your check would need to somehow be O(1) and it just simply can't be...
You can claim controversial things about something that nobody can really prove. But it is mindless to do that in this particular case, when everybody can go to Wikipedia and see that O(1). If you were able to write your bot and make patches for MOTO you obviously know how to program, and I'm pretty sure you are aware of all the basic algorithms.
But first you tell us that you don't see any solution at all except checking every block. Then you tell us that there are in fact some algorithms that can be used, but they will require O(log n) time. If you are uncomfortable with that hash table, or if you think it is an unnecessary waste of HDD space, you can simply tell us that. There is no need to invent sophisticated logical constructions with purposely embedded flaws; this flawed conversation could last forever. I don't like it when someone tries to manipulate me that way.
Just forget about that hash table; 24 hours would be plenty of time for a human to solve the level. Just check the last 1440 blocks.
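Something like this is all I mean; a rough sketch, where the type and function names are made up for illustration (not from the Motocoin sources) and 1440 blocks assumes roughly one-minute block spacing:
Code:
#include <cstddef>
#include <cstdint>
#include <deque>
#include <iostream>
#include <unordered_set>

using LevelHash = std::uint64_t;  // stand-in for the real level hash

// One set of level hashes per recent block, dropped again after 1440 blocks
// (about 24 hours at one-minute spacing), so the duplicate check is bounded
// by a constant window instead of growing with chain depth.
struct RecentLevels {
    static constexpr std::size_t kWindow = 1440;
    std::deque<std::unordered_set<LevelHash>> perBlock;

    void PushBlock(const std::unordered_set<LevelHash>& levels) {
        perBlock.push_back(levels);
        if (perBlock.size() > kWindow) perBlock.pop_front();
    }

    bool IsDuplicate(LevelHash h) const {
        for (const auto& levels : perBlock)
            if (levels.count(h)) return true;
        return false;
    }
};

int main() {
    RecentLevels recent;
    recent.PushBlock({111, 222});  // levels used in block N
    recent.PushBlock({333});       // levels used in block N+1
    std::cout << recent.IsDuplicate(222) << " "    // 1: reused inside the window
              << recent.IsDuplicate(444) << "\n";  // 0: free to use
    return 0;
}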
|
|
|
|
HunterMinerCrafter
|
|
October 15, 2015, 03:25:35 PM |
|
Actually there is (at least on average), which is a hash table (see the corresponding Wikipedia entry, for instance). That's what ElvenArcher has been saying above.
HI DOMOB! How've you been? How's life in HUC?
Of course I understand that a hash table allows searching a set in O(1) time on average, as I elaborated in my last post... The problem is that it is O(1) on average but grows to O(n) in the worst case. In practice it will land somewhere in between, depending upon the distribution of the hashes stored (which is up to the miners' selection of hashes). "Somewhere in between" is more than the bounded O(1) time that can be implicitly assumed as safe to add to the block validity check! If the block validity check is anything more than linear in the number of transactions in the block then there is, of course, a potential problem.
In order to say that such an added complexity is safe for the network, we would need to quantify a priori that the slowdown potentially introduced over time by the hashes added every block does not significantly impact the block validity check in any meaningful way, even if miners are selecting a degenerate distribution of hashes on average. So far, I have seen no such measure of the performance impact. I will not just make a change that could kill the network within a few years on blind faith that it shouldn't be "any problem"; we need to know that it has a negligibly low probability of being any problem. If it can be shown that the natural growth of this table, plus the degeneracy that malicious hash selection can introduce on average, is highly unlikely to add any significant time to the block validity check for at least many decades, I would be much more comfortable with the proposal.
|
|
|
|
HunterMinerCrafter
|
|
October 15, 2015, 03:45:16 PM |
|
Considering that the miner picks their seeds there is nothing to prevent degeneracy, so what we have here is primed for a potential DoS attack, as well, where a bot miner picks clustered level hashes to solve in order to drive this search time up toward O(N).
So you want to use a hash table without calculating hashes at all? Of course every node will have a different random salt. You can attack only if you know it (and you will be able to attack only this particular node; it can detect this easily and change the salt, but if someone has access to another's moto folder, wouldn't it be saner to steal the wallet instead of this almost useless salt).
Errr.... huh? I have no idea what you're on about here. The attack that I was referring to has nothing to do with targeting any specific miner. I was referring to the fact that when miners make their own hash selections, they could choose to use only hashes that increase the degeneracy of the set, to push the complexity of searching it toward the O(n) limit. They would not need to compromise any particular other miner to do this; it is an attack against the network as a whole. They would only need to churn hashes and ignore (not even try to solve) the ones that are well distributed relative to what is already in the set, working only on solving the ones that would increase the degeneracy of the set once added.
Your check would need to somehow be O(1) and it just simply can't be...
You can claim controversial things about something that nobody can really prove. But it is mindless to do that in this particular case, when everybody can go to Wikipedia and see that O(1). If you were able to write your bot and make patches for MOTO you obviously know how to program, and I'm pretty sure you are aware of all the basic algorithms.
Errr... proving this is not hard. Try it yourself on a testnet with a couple of nodes: just add a simple progressive slowdown to the block validity check, like usleep(blockdepth) or even usleep(log(blockdepth)), then run out a lot of low-difficulty blocks and see how many it takes before the time to check one block becomes large enough to start causing fragmentation of consensus. Then keep running it to see how many blocks it takes before the time to check one block exceeds the average block time.
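The arithmetic alone gives a feel for it; here is a trivial back-of-the-envelope sketch (the 60-second target spacing is just an assumed number for illustration, not an actual Motocoin parameter):
Code:
#include <cmath>
#include <iostream>

int main() {
    // Assumed target block spacing, for illustration only.
    const double blockIntervalUs = 60.0 * 1e6;  // 60 seconds, in microseconds

    // Linear padding: usleep(depth) microseconds exceeds one block interval
    // once depth passes this many blocks.
    std::cout << "usleep(depth) overtakes the block interval at depth "
              << static_cast<long long>(blockIntervalUs) << "\n";

    // Logarithmic padding stays negligible by comparison: even at depth 1e9
    // the added delay is only about log(1e9) microseconds.
    std::cout << "usleep(log(depth)) at depth 1e9 adds ~"
              << std::log(1e9) << " microseconds\n";
    return 0;
}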
But first you tell us that you don't see any solution at all except checking every block. Then you tell us that there are in fact some algorithms that can be used, but they will require O(log n) time.
This log solution and this "checking every block" solution are the same solution....
If you are uncomfortable with that hash table, or if you think it is an unnecessary waste of HDD space, you can simply tell us that.
As I said before, the concern is not with the space complexity or storage requirements.
There is no need to invent sophisticated logical constructions with purposely embedded flaws; this flawed conversation could last forever. I don't like it when someone tries to manipulate me that way.
I am not "inventing" anything, or attempting to manipulate. I have stated a fact about the complexity constraints of the blockchain algorithm, and challenged you to establish that your proposal falls within these constraints. I'll just add this to the ever-growing list of challenges that you refuse to rise to. Just forget about that hash table, 24 hours would be pretty enough for human to solve the level. Just check last 1440 blocks.
This is a simpler and much more sound proposal. I would (of course) still much prefer that the network not be paying for work that doesn't secure anything, but if that is the best we can do then we'll live with it. Can you outline the details for discussion?
|
|
|
|
ElvenArcher
Newbie
Offline
Activity: 37
Merit: 0
|
|
October 15, 2015, 04:16:19 PM |
|
Ok, if you really don't understand: all wallets will use some strong cryptographic hash to determine the index of an entry of that table. They will hash the item along with some big random salt (unique for each wallet).
They will NOT use the first bits of the plain item data as an entry index.
A miner will NOT be able to select specific levels for a degeneracy attack because he will in fact not know what hash function each node uses for that table.
The only problem is that those hash tables will not be interchangeable between different wallets. Is that a problem?
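For illustration, something like this; a rough sketch where std::hash only stands in for the strong cryptographic hash, and none of the names come from the Motocoin code:
Code:
#include <cstdint>
#include <iostream>
#include <random>
#include <string>
#include <unordered_set>

// Each wallet draws its own secret random salt and mixes it into the key
// before indexing, so an outsider cannot predict which bucket a given level
// hash lands in. std::hash is only a stand-in for a keyed cryptographic hash.
class SaltedLevelSet {
public:
    SaltedLevelSet() {
        std::random_device rd;
        salt_ = (static_cast<std::uint64_t>(rd()) << 32) | rd();  // per-wallet salt
    }
    void Insert(const std::string& levelHash) { seen_.insert(Mix(levelHash)); }
    bool Contains(const std::string& levelHash) const {
        return seen_.count(Mix(levelHash)) != 0;
    }

private:
    std::uint64_t Mix(const std::string& levelHash) const {
        return std::hash<std::string>{}(std::to_string(salt_) + levelHash);
    }
    std::uint64_t salt_;
    std::unordered_set<std::uint64_t> seen_;
};

int main() {
    SaltedLevelSet seen;
    seen.Insert("level-hash-aabbcc");
    std::cout << seen.Contains("level-hash-aabbcc") << " "    // 1
              << seen.Contains("level-hash-ddeeff") << "\n";  // 0
    return 0;
}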
Anyway, the limited-time (24 hours) solution will be simpler and sufficient, and I'm absolutely not insisting on using that hash table.
|
|
|
|
|
ElvenArcher
Newbie
Offline
Activity: 37
Merit: 0
|
|
October 15, 2015, 08:31:50 PM |
|
HMC, I've just reread your previous posts.
It seems that you completely forgot how a hash table works.
Sorry, I thought you just wanted to play funny games with my brain.
|
|
|
|
e1ghtSpace
Legendary
Offline
Activity: 1540
Merit: 1001
Crypto since 2014
|
|
October 16, 2015, 07:27:43 AM |
|
HMC, I've just reread your previous posts.
It seems that you completely forgot how a hash table works.
Sorry, I thought you just wanted to play funny games with my brain.
Did you stop your bot? Because I can't connect to any nodes.
|
|
|
|
domob
Legendary
Offline
Activity: 1135
Merit: 1170
|
|
October 16, 2015, 07:36:44 AM |
|
Something completely separate: I'm planning to write an academic paper describing human-mining and gaming on blockchains. This paper could be submitted to the Ledger journal or some other publication. I want to describe some ideas in a technical way, but there won't be too much mathematics or cryptography (more about ideas of system design). So far, I'm only aware of Motocoin's proof-of-play and Huntercoin's decentralised game world that would fall into this category.
Of course, my own perspective is mostly on Huntercoin. I also have ideas for a few new concepts I want to publish in the paper that are not yet implemented in Huntercoin but are interesting research. If someone from the Motocoin community is interested in a collaboration, I would welcome this. Ideally, you should write a section or two about Motocoin, proof-of-play and the issues as well as proposed solutions related to botting -- and, of course, be co-author of the paper. Is there any interest in that?
|
|
|
|
ElvenArcher
Newbie
Offline
Activity: 37
Merit: 0
|
|
October 16, 2015, 09:40:21 AM |
|
Did you stop your bot? Because I can't connect to any nodes.
Try 37.153.97.65:13107
Ideally, you should write a section or two about Motocoin, proof-of-play and the issues as well as proposed solutions related to botting -- and, of course, be co-author of the paper. Is there any interest in that?
I could check this article to ensure that it doesn't contain any serious mistakes about botting. But I'm not a native speaker, and I also don't know as much as HMC about target time calculation, the anti-warp feature, and the history of MOTO...
|
|
|
|
domob
Legendary
Offline
Activity: 1135
Merit: 1170
|
|
October 16, 2015, 10:57:44 AM |
|
Ideally, you should write a section or two about Motocoin, proof-of-play and the issues as well as proposed solutions related to botting -- and, of course, be co-author of the paper. Is there any interest in that?
I could check this article to ensure that it doesn't contain any serious mistakes about botting. But I'm not a native speaker and also do not know as much as HMC about target time calculation, anti-warp feature and the history of MOTO... Thanks for the offer - I've take a closer look, and apparently the journal restricts articles to max. 4,000 words. That's very tight -- so I'll have to see what to focus on. Maybe I just include the new ideas I have without going into detail for either Motocoin nor Huntercoin (although I'll be sure to mention them). I'll post a preview of the article in any case, of course. We could also try to work together on another, more review-like article describing the existing systems in addition. But that would be something for the future, not immediate for me.
|
|
|
|
HunterMinerCrafter
|
|
October 16, 2015, 04:34:16 PM |
|
Ok, if you really don't understand: all wallets will use some strong cryptographic hash to determine the index of an entry of that table. They will hash the item along with some big random salt (unique for each wallet).
I really do understand this. It doesn't make a difference, because the user chooses their salt, their address, and which hashes to generate levels from or not... so they control their *own* hash selection. It has nothing to do with other miners, other wallets, other anything except for the hashes the users mine into this list/tree/hashtable/whatever state history. They will NOT use first bits of plain item data as an entry index.
A miner will NOT be able to select specific levels for a degeneracy attack because he will in fact not know what hash function each node uses for that table.
They do know the state of the table at any given moment, and for any given hash they can determine whether adding that hash would make the state more or less degenerate, i.e. whether it would drive complexity up or not. If they can affect the distribution of the hashes along a sort order, then they can affect the complexity of future searches by selectively completing hashes that "fall into the sort" in a way that leads to degenerate complexity. (In other words, bail out of the work function after "just" the SHA hashing that generates the level if the level does not hash to something within some selected deviation from some desired statistical distribution.) They can choose to make their mining work over time "push" away from the average case by working toward building particular non-uniform distributions into their hashes, "building out" structure that tends toward the worst case.
As such, we *do* have to consider both the average case and the worst case for such an index. It is the only way we can answer "how bad *can* it get under attack?" and know whether it would likely kill the network after a day of attack, a year of attack, decades of attack, or more. Can we at least agree that there is some point at which the size of the set would overwhelm the average block time, even if we could only show that point to be eons from now? Can you understand why I would ask you to establish what that timeframe would be before enacting the change?
Of course, at the end of the day it is somewhat of a moot point. You have >51% of the network and can effectively do whatever you want with your chain... so make a patch, put it out there for the world, and start mining on it. We'll see what happens?
The only problem is that those hash tables will not be interchangeable between different wallets. Is that a problem?
Generating the table per address probably makes the problem slightly worse, because then the miner doesn't have to worry about anyone else affecting the distribution of his hash table's state as he works on extending it to be increasingly degenerate.
Anyway, the limited-time (24 hours) solution will be simpler and sufficient, and I'm absolutely not insisting on using that hash table.
Yes, I think this much we can agree on! Putting known finite bound(s) on things often obviates questions of complexity in this way. (Shameless plug for tauchainzennetcointhingy, heh...) So let us just go ahead and put the "check for duplicates" based approach behind us, and leave the complexity concern of level duplication checks as an academic exercise for the reader?
|
|
|
|
|