only one miner gets all the reward when he finds the right nonce, and all the other work done for the block is useless.
The "useless" work is what makes hashcash-style PoW secure.
It still needs a way to spread the work so that you get the same kind of reward distribution based on miner ID or address, except that a miner is simply idle until he is given work. The distribution depends on another algorithm that defines which address is going to do which part of the work for a given block; all the other miners just idle and do no work, so they don't 'lose' anything.
Then the main question is what provides the incentive for miners to cooperate (and suffer the network latency penalty) rather than mine selfishly 100% of the time. For example, if the block interval is 10 minutes, each stage takes 1 ms to calculate and the average network delay is 49 ms, we can support up to 12,000 cooperating miners on the network. However, a selfish miner can calculate 600,000 stages locally (with 600,000 different addresses) and win the whole block reward every time, because his blockchain contains more work than the cooperating blockchain.
OK, let's say that for a 10-minute block you would create chunks of 10 seconds of work: first generate the total ring chain to be computed, then break it down into a series of sub ring chains, such that each sub-chain has to hash its address or ID with the previous work. Now say the miner ID is not just the address, but an IP/address pair. Each time a new node appears on the network, it registers itself and is put on the global list of miner IDs. Each time a new block arrives, this address/IP pair is hashed with the new block signature, the miner IDs are sorted on this hash, and the first 60 are selected for the next block. The IP is used to send the work to the miner and then on to the next one, so IPs can be checked in and out, even if that wouldn't prevent three IPs from colluding to steal work.
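As a sketch of how that deterministic selection could look (all names here are hypothetical, and SHA-256 is just an assumed choice of sorting hash):

```python
import hashlib

def select_miners(miner_ids, block_sig, n=60):
    """Rank registered (ip, address) pairs by hashing them together with
    the latest block signature; the first n become the next block's workers."""
    def rank(pair):
        ip, addr = pair
        return hashlib.sha256(f"{ip}|{addr}|{block_sig}".encode()).digest()
    return sorted(miner_ids, key=rank)[:n]

# Every node can recompute the same list from public data,
# so the schedule needs no coordinator.
miners = [("10.0.0.%d" % i, "addr%d" % i) for i in range(100)]
chosen = select_miners(miners, "blocksig1")
```

Since the ordering is keyed by the previous block signature, the selection is unpredictable before the block arrives but verifiable by everyone afterwards.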
It could be made stronger if all nodes run a traceroute on miners and a consensus can be reached on the topography of IPs. I tend to think it's a problem with a degree of Byzantine fault tolerance, as any node can check the traceroute of other nodes and deduce whether the traceroute sent by another node is incoherent. I think it's a classic graph-theory problem with Byzantine fault tolerance, similar to "Techniques for Detection of Malicious Packet Drops in Networks", taking into account that the topography doesn't have to be 100% accurate; it only has to give sufficient probability that two nodes are not located too close to each other, using some connectivity testing along the path with a technique similar to the link. Some 'hard' consensus could be added if there is too much conflict, above the Byzantine fault tolerance of the system. It would be a long shot, but wouldn't this guarantee a certain degree of decentralisation?
|
|
|
the distribution of time between solutions must be exponential (since it's the only memoryless distribution), i.e. the probability of finding a solution at time t < T is 1 - e^(-λT), where λ is the solution rate.
This is OK, but you need to see the bigger picture of why these properties make it an ideal proof of work. In the model you describe, the reward distribution is based on an equal chance to win the reward for each unit of work done, with 99% of the work not participating in the final solution: only one miner gets all the reward when he finds the right nonce, and all the other work done for the block is useless.
The OP's idea for reward distribution is clearly different, because each unit of work done participates in the elaboration of the final proof and earns a reward. The problem is how to force the work to be shared between different miners in order to distribute the reward. Since each unit of work has a 100% chance of being rewarded, and the total amount of work is fixed for a given block target time, the distribution must use another mechanism. If a miner is never allocated a "work slot", then he just idles and it costs him nothing; when he is allocated a "work slot", he computes the proof using the previous one from another miner and earns the reward.
It still needs a way to spread the work so that you get the same kind of reward distribution based on miner ID or address, except that a miner is simply idle until he is given work, and the distribution depends on another algorithm that defines which address is going to do which part of the work for a given block; all the other miners just idle and do no work, so they don't 'lose' anything. You could still keep the idea that the probability of being given work to do within a time T is evenly distributed between all miners, even if a single person could spawn many miners. It would still end up with the same sort of calculation, except that the C for core becomes a miner ID, and a miner is idle until his ID is selected to mine a block. It's not the same principle as Bitcoin's PoW, but I think it could be viable, or at least not completely impossible to solve; maybe I'm missing something.
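A minimal sketch of that chained, per-unit reward idea (hypothetical scheme; SHA-256 stands in for whatever work function is actually used): each selected miner folds its ID into the running proof in turn, so every unit of work both contributes to the final proof and is attributable for a reward.

```python
import hashlib

def run_work_slots(block_header, slot_miners):
    """Chain the work sequentially: each slot's output depends on the
    previous slot, and each contribution earns its miner one reward unit."""
    proof = hashlib.sha256(block_header.encode()).digest()
    rewards = {}
    for miner_id in slot_miners:
        proof = hashlib.sha256(proof + miner_id.encode()).digest()
        rewards[miner_id] = rewards.get(miner_id, 0) + 1
    return proof.hex(), rewards

proof, rewards = run_work_slots("header", ["alice", "bob", "carol", "alice"])
```

Verification simply replays the chain, which confirms that every slot's work was actually done and by whom.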
|
|
|
My main point is that you cannot have a permissionless proof of work system that's not parallelizable.
Really? Do you have a link to a book or paper explaining this?
|
|
|
Then it's not progress-free. The miner who starts first will always solve the block, which is not what you want in a decentralized cryptocurrency.
It's not progress-free, but the work can still be distributed across different addresses, selected evenly for each block. In the end it still has the same property as progress-free work for the decentralisation of mining, but it needs another system than the PoW itself to distribute the work.
|
|
|
If you make the unit of work small, you will achieve progress-freeness, but you will lose non-parallelizability.
Why? If each unit of work depends on the result of the previous work, isn't it still non-parallelizable?
"Proof of miner ID" is susceptible to sybil attacks, since you can't limit the generation of new addresses without some central authority.
In short, there is no way to distinguish many small miners from a single entity mining in parallel with many addresses.
I spent a fair bit of time researching this myself and I'm pretty sure it's a dead end.
Even if it only gets to the point of not being able to distinguish many small miners from a big entity, they will all still mine with equal work-cost, which is already a win as I see it. And it's possible to find ways to prevent it to a degree, or to make it harder to do. The principle in itself could be close to onion routing, as each node re-encrypts the previous message, except that it cycles back to the original point after a certain number of rounds. It would need to establish a route through the nodes to compute all the work sequentially, each adding its address to the computation. But maybe the OP has a solution to this; I need to wait a bit for updates.
|
|
|
When you play 1000 times, you need to consider how much money you will use, and I am not sure you can win the games even if you bet big money. As for the chances, I don't think the opportunity to win gets bigger either, because no matter what strategy you use, you need a lucky streak to win. I agree that we need to quit while we can, so we don't risk more money on gambling.
The more money you can bet, the higher the chances to win. If you play 1000 rolls of 1 on the dice, you are still going to win a certain number of times; there are statistically good chances that you will be ahead at some point. The whole trick is to see how much you can reasonably expect to gain over the 1000 rolls and to stop when you get close to that.
Not really: the more money you bet, the higher the chances you will lose in the long run. Yes, you can win a certain number of times, but you don't know how much you will win or how much money you can get from gambling. As for the losses, I think they will be bigger than your winnings, so you need to think twice before betting more money. But if you can accept the risk and the consequences, then you can go that way.
I'm also talking more from the point of view of statistics, for a fair game where there is no house edge, and the distribution you can expect from a fair gambling game; if the game is rigged you shouldn't be playing it (unless you can exploit the trick to your advantage).
|
|
|
The more money you can bet, the higher the chances to win.
If you play 1000 rolls of 1 on the dice, you are still going to win a certain number of times; there are statistically good chances that you will be ahead at some point. The whole trick is to see how much you can reasonably expect to gain over the 1000 rolls and to stop when you get close to that.
How much money do you have and how much are you betting on each roll? Let's say you have $1000 and bet $1 at a time, to make sure that you can last at least 1000 rolls. Statistically you will have lost $10 by the end of this game, assuming a typical 1% house edge - that's what you can reasonably expect. Your chances to be "up" at any point during this game are not higher than the chances of winning any single bet; in fact they're getting lower because of the house edge slowly decreasing your bankroll. If you bet more than $1 you'll be losing even more and increase your chances to go bankrupt before you reach 1000 rolls.
If you keep playing until you have no money left, then yes, you lose; you need to stop playing once you have been on a luckier streak than what can be expected. There will still be a certain chance of rolling a 1 three times in six rolls, which puts you on a lucky streak, and there are still sequences that will keep you ahead, but the chance of the same number coming up again decreases exponentially each time. There will always be more 1s per six rolls at some points than at others, and on average it comes out to the house edge. If you consider that playing 1000 times will always cost you money because of the house edge, then why do you even play?
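That expected $10 loss is easy to check by simulation (a sketch; a win probability of 0.495 on an even-money bet models the 1% house edge):

```python
import random

def final_bankroll(bankroll=1000, bet=1, rolls=1000, edge=0.01, seed=0):
    """Play even-money bets with win probability (1 - edge) / 2,
    stopping early if the bankroll can no longer cover the bet."""
    rng = random.Random(seed)
    for _ in range(rolls):
        if bankroll < bet:
            break
        if rng.random() < (1 - edge) / 2:
            bankroll += bet
        else:
            bankroll -= bet
    return bankroll

# Average over many runs approaches 1000 - rolls * bet * edge = 990.
avg = sum(final_bankroll(seed=s) for s in range(2000)) / 2000
```

Individual runs swing widely (some finish well ahead), which is exactly the "lucky streak" effect; only the average settles at the house-edge loss.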
|
|
|
You are always going to end up with gains at some point if you play long enough; the whole trick is to know when you are on a "lucky streak" and when you have more chances to lose than to win over the number of rolls you can still play, compared to what you have gained so far.
Simply "quit while you're ahead"? Yes, that would be a smart move for any gambler. However it's incorrect that you will always end up with gains. Due to the house edge and not having unlimited funds, it's more likely that you will end up with losses.
If you want to play 1000 times and you're only ahead by 2%, you can still have chances to do better; but if you just did x3 after 2 rolls, it's very unlikely you're going to beat that unless you want to play for a very long time.
|
|
|
they will all take the same time to compute
Then it's not progress-free, which is a fundamental requirement for decentralized proof of work. PoW that is not progress-free will not work in practice because it encourages selfish mining, i.e. each miner will mine their own chain to avoid the network latency delay.
It's a problem I pointed out before, but I'm not sure it cannot be solved. I haven't seen a full solution for this, but in theory it should still be possible to force a breakdown of the work, like forcing each ring to be computed with a different address: the work can still be broken into small pieces, the proof still needs all the work to be done, and the minimal unit of work is very small, so maybe it can still be forced into small bits even accounting for network latency. I still don't see a perfect solution for this, but I think it can be worked around.
If a way can be found to make a Poisson-like distribution over the addresses or miner IDs, it can replace progress-freeness upstream by forcing a breakdown of the work: it requires zero initialisation time, and the minimal unit of work can be very small, so I think it would be equivalent. It still has the property that any miner ID has an equal chance to get a reward even with small computational power, even if it needs another mechanism than the PoW itself to distribute the work.
It could be a system similar in some respects to Ouroboros in Cardano, even if the difference is that Cardano works with PoS, so the reward is related to the stake of an address, while here the reward distribution would be based more on a "proof of miner ID" than on the PoW itself. Then anyone with even small CPU power can participate with an equal chance of reward, and it's less vulnerable than PoS to long-range attacks and the nothing-at-stake problem. Still, I can't see a good way to avoid penalising miners with high network latency without running into problems when a node in the mining chain stops responding or takes too long to compute its part of the work.
Another potential problem I see is that the total PoW is not going to scale with the number of miners: the amount of work needed to solve a full block stays constant for a given block target, which limits the maximum number of miners that can get a reward for a given block, so on a large network with lots of miners there is a lower probability for any one miner to get a reward in a short time. There are a certain number of issues to study carefully, but I don't think it's completely unsolvable, even if it would need more thinking and a few more mechanisms to work well in practice.
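That limit can be made concrete (a sketch, reusing the 60-slots-per-block and 12,000-miner figures from earlier in the thread): with uniform selection, a miner's chance per block is slots/N, so the expected wait for a reward grows linearly with the network size.

```python
def expected_blocks_until_reward(n_miners, slots_per_block=60):
    """Expected number of blocks before a given miner is selected,
    assuming slots are drawn uniformly over all registered miners."""
    p = min(1.0, slots_per_block / n_miners)
    return 1.0 / p

wait = expected_blocks_until_reward(12000)  # 200 blocks, i.e. ~33 h at 10 min/block
```

So unlike hashcash, where more hashpower buys more lottery tickets per block, here a bigger network simply stretches the payout interval per miner.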
|
|
|
2. Even if you remove the timestamp and nonce from the block header, any miner can generate as many wallet addresses as they want to get unlimited parallelization.
Only one address will get the reward, and they can't work in parallel on the same chain; no ring chain can advance faster through parallelisation. They can compute different ring chains in parallel with different addresses, but they will all take the same time to compute, and there is only one reward, so there is no huge benefit to that (if I get it right). Address spamming can become a problem for distributing the work across different miners, but even with address spamming, each address still has the same cost per hash to compute its ring, and everyone can spam addresses as well, so I think it's solvable. In any case it comes back closer to one-CPU-one-vote: even if someone with many addresses could get more work and reward in pooled work, even with a single CPU, the number of CPUs doesn't matter.
This idea is not new. Search for the "RSA timelock puzzle" - this is the most famous non-parallelizable proof of work. However, it only works when some central authority is generating the work. It is absolutely useless for decentralized consensus in cryptocurrencies.
There are many problems that can only be solved through recursion, but they are not cyclic like this, so they need the solution to be known first and the puzzle built from it. For systems like that, where the solution is known first, there are many ways to have proof of work; but here the solution is not known first, and yet the work can still be verified in one step.
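For reference, the RSA timelock puzzle mentioned above reduces to repeated squaring, which is inherently sequential unless you know the factorization of the modulus (toy parameters here, far too small to be secure):

```python
def timelock(x, t, n):
    """Compute x^(2^t) mod n by t sequential squarings.
    Without the factorization of n there is no known shortcut."""
    for _ in range(t):
        x = (x * x) % n
    return x

# The trapdoor: whoever knows n = p*q can reduce the exponent
# mod lambda(n) and skip the sequential work entirely.
p, q = 101, 103
n = p * q          # 10403
lam = 5100         # lcm(p - 1, q - 1)
slow = timelock(5, 1000, n)
fast = pow(5, pow(2, 1000, lam), n)
```

That trapdoor is exactly why it needs a central puzzle generator, which is the objection raised in the quote.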
|
|
|
I made some code to find all possible rings, storing them in a CSV file; these are all the combinations I found: https://github.com/NodixBlockchain/rbf/blob/master/keylst.csv
I also made a thing to generate loop paths randomly and run the forward/backward thing; it seems to work. I did some more testing, trying more combinations of rings, and it doesn't seem too bad, but there are probably some combinations that work better than others.
Why did you spend time on this? This has already been done by me, as can be seen in my video. In addition, this is not necessary, because this code will not use 32-bit numbers; 256-bit numbers will be used, and completely different keys are needed for them.
Just to do some testing, but it seems OK; I'll wait for the full code. It's not very long to compute: it took less than a minute to scan most possible values, even doing it four times with four different numbers to make sure there were no false positives. Tested with many different combinations of rings, up to 1024 picked randomly; it seems to work.
|
|
|
SHA-256 has very high entropy, zero regression or anything; it has non-linear components to cascade and amplify all the source entropy, capped with a mod.
Good. I think I understand what you don't understand. Let's do this - I'll finish the code today and shoot the last video, then I will post the video and my code. You will take one ring from this code, BUT also take my hash function Mystique and use it to mask the starting number. That is, your sequence should be like this: take a weak starting number, for example 0x1 (256-bit, like in SHA-256); hash it through Mystique; compute one ring through RBF. You then pass this combination through your tests and see what they show you. OK?
OK, I can do more testing with the hash added; I'm always up for cracking a good number sequence.
|
|
|
If it's to make a full coin, I can get into this, but it also needs a block explorer, wallet, etc., and the existing software for this will probably need to be modified as well to take into account the block signature format.
I'm not sure how difficult it would be to adapt all the bitcore code for this algorithm; as blocks are also indexed by the block header hash, it probably needs changes in many places, but I can look into it. I already got familiar with some versions of bitcore.
SHA-256 has very high entropy, zero regression or anything; it has non-linear components to cascade and amplify all the source entropy, capped with a mod.
I understand the problem with finding working keys, as the number of rounds has to be found by brute force.
I'm still putting my number-crunching neurons to work on this one to find a simple solution.
An S-box is essentially just a table lookup, so it's not going to use a lot of CPU power. I'm not sure this can keep the cyclic property with the cipher implementation, but I believe a similar technique can be used to increase entropy. With the SPARX concept of a large weak S-box, it means you can change it yourself, and any weak S-box is supposed to keep the non-linearity.
Adding entropy to break linear regression (linear as in a linear system) is not necessarily complex and doesn't need a lot of CPU power; the problem is keeping it cyclic, otherwise a simple algorithm can work well.
From the little research I have done so far, it doesn't seem that a non-linear function can be cyclic, because a determined cycle period would mean it's not non-linear. Maybe it's possible to find a non-linear function that at least has a bounded cycle period, but I haven't found one so far.
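One hopeful note on the bounded-cycle question: any invertible mapping on a finite set, however non-linear, decomposes into cycles, so iterating it always returns to the starting point after a bounded number of steps. A small sketch with the (non-linear) 4-bit PRESENT S-box:

```python
# The 4-bit S-box from the PRESENT block cipher: a non-linear permutation.
PRESENT_SBOX = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
                0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]

def cycle_length(perm, x0):
    """Number of applications of the permutation before returning to x0."""
    x, steps = perm[x0], 1
    while x != x0:
        x = perm[x]
        steps += 1
    return steps
```

The difficulty, as noted above, is that the cycle period depends on the starting point and is hard to control in advance, which is what a ring construction would need.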
|
|
|
I need to think about this more, but it's possible there is still a hidden property in the last numbers that are going to be used for the signature, as they are only one step away from the original, and a lot of the entropy has been removed at this point. There are ways to scramble it more, but it also needs to keep the cyclic property, which is the more difficult part. Maybe using something like an onion cipher, where each round encrypts the last one exploiting the same bit-rotation property; with a certain algorithm it might still cycle back, and then you would have something with strong entropy. I'm looking into a simple cipher like this: https://www.cryptolux.org/index.php/SPARX
SPARX is a family of ARX-based 64- and 128-bit block ciphers. Only addition modulo 2^16, 16-bit XOR and 16-bit rotations are needed to implement any version. SPARX-n/k denotes the version encrypting an n-bit block with a k-bit key.
The SPARX ciphers have been designed according to the Long Trail Strategy put forward by its authors in the same paper. It can be seen as a counterpart of the Wide-Trail Strategy suitable for algorithms built using a large and weak S-box rather than a small strong one. This method allows the designers to bound the differential and linear trail probabilities, unlike for all other ARX-based designs. Non-linearity is provided by SPECKEY, a 32-bit block cipher identical to SPECK-32 except for its key addition. The linear layer is very different from that of, say, the AES, as it consists simply in a linear Feistel round for all versions.
The designers claim that no attack using fewer than 2^k operations exists against SPARX-n/k in either the single-key or the related-key setting. They also faithfully declare that they have not hidden any weakness in these ciphers. SPARX is free for use and its source code is available in the public domain (it can be obtained below).
It doesn't need to be very strong, just to resist a 10-minute brute-force attack, even using all the CPU power in the world; even 64 bits of "true entropy" out of a 256-bit space would be enough, meaning that even if it's 75% broken it's still OK for this purpose. I will rename the files to remove the confusion between RBF and hash.
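For concreteness, the SPECK-32-style ARX round that SPARX builds its non-linearity on is only a handful of operations, and every round is exactly invertible (a sketch of one round only, not the full SPECKEY key schedule):

```python
MASK16 = 0xFFFF  # all arithmetic on 16-bit halves

def rol(v, r):
    return ((v << r) | (v >> (16 - r))) & MASK16

def ror(v, r):
    return ((v >> r) | (v << (16 - r))) & MASK16

def arx_round(x, y, k):
    """One SPECK-32-style round: rotate, add mod 2^16, xor key, mix."""
    x = ((ror(x, 7) + y) & MASK16) ^ k
    y = rol(y, 2) ^ x
    return x, y

def arx_round_inv(x, y, k):
    """Exact inverse of arx_round, undoing the operations in reverse order."""
    y = ror(x ^ y, 2)
    x = rol(((x ^ k) - y) & MASK16, 7)
    return x, y
```

Invertibility is what keeps the round a permutation on the state, which is the minimum needed if the cyclic property is to survive at all.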
|
|
|
Why do constant hard forks? My PoW algorithm cannot be used advantageously on ASIC/GPU/FPGA. There is no need to change it. It will work without changes throughout the life of New Bitcoin. Check out the gist of my algorithm: https://bitcointalk.org/index.php?topic=5207996.0
Interesting. I don't understand the technical details well enough to judge, but I'm inherently skeptical of the claim that the algorithm cannot be parallelized and thus that ASICs would be impossible. I'd like to see what some of the experts in Development & Technical Discussion have to say.
It cannot be parallelized, and it's hard to make specialized hardware that can beat a 1-cycle instruction; that would really be splitting hairs, and the clock frequency is pretty much capped at common hardware levels. There can be other problems with it, but I think they can be solved with good security. Not sure if it can all work out together, but it's worth a try.
|
|
|
Luck is not something that just pops up; luck is also created. There are a lot of things that should be considered when gambling: you should have enough skill to gamble your money and avoid losses. Proper risk management can help us avoid losses; most of my wins in gambling are because of my luck but also enough skill and knowledge.
I wonder how skills and knowledge could contribute greatly to your earnings and winnings, because for me the only way I think I could win when betting is pure luck. Even if we inspect the gambling platforms, it is hardly possible to spot any software issue we could take advantage of to win (maybe that is based on my skill, because I am into computers). Most of the time they generate truly random figures that are impossible for us to predict.
Not sure if this is a really good idea; a casino is like the insurances: when you start to win too much they kick you out :)
|
|
|
Ring Bit Function, part 3. https://www.youtube.com/watch?v=9-7NmuZXbdU&feature=youtu.be
An explanation of the new PoW algorithm, which I call the Ring Bit Function (RBF), with C++ code examples. In this part we will be masking our chain of RBF rings.
I will do some testing to check the sequence you get from numbers like 0x00010000, numbers with low entropy, and put that through a regression test or check the distribution. If a regression can be found, it means you can compute round N in a single step. It wouldn't surprise me if with certain numbers some regression can be found on the sequence; if not, then good. Roughly speaking, if the number can be compressed a lot, it means the "entropy" is low, and the security would be related to how much you can compress it. If you have only zeroes, it can be compressed to just 0, and it doesn't matter whether you have 256 zeroes or one.
I have long since solved this problem. The answer is in this video...
Even if a large part of the numbers is eliminated from the brute force, it can still be OK because the attack time is short, essentially the target time between blocks, so the 10 minutes for Bitcoin; but it would still need something a bit stronger.
|
|
|
But it's just in case there are too many weak numbers on a long ring; maybe it could improve things, especially since the brute force can be run on parallel cores.
Maybe even simple Huffman coding could remove some problems in case there are a lot of zeroes or repetitive bit sequences, to make the number "more compact" so to speak. Or maybe only as a test: if the number can be easily compressed with Huffman coding, it means it has low entropy and should be changed.
I just don't understand what you're talking about... Really. What are the weak numbers? How do you want to crack them? Take any of the small rings already generated by me (in the pictures) and try to crack it in any way known to you. If you succeed, I will start to think about what to do with it. For now I think that you do not understand what you are saying, because you rely on experience that cannot be compared with this case. Show in practice what you mean by weak rings and the possibilities of direct attacks on them.
I will do some testing to check the sequence you get from numbers like 0x00010000, numbers with low entropy, and put that through a regression test or check the distribution. If a regression can be found, it means you can compute round N in a single step. It wouldn't surprise me if with certain numbers some regression can be found on the sequence; if not, then good. Roughly speaking, if the number can be compressed a lot, it means the "entropy" is low, and the security would be related to how much you can compress it. If you have only zeroes, it can be compressed to just 0, and it doesn't matter whether you have 256 zeroes or one. The actual size of the key under a smart brute force is related to the entropy of the key: if you have 255 zeroes and 1 one in a 256-bit key, the brute force is not over 2^256. It means the sequence will be predictable and the brute force will only need a small number of tests to find the signature. In any case it shouldn't be too hard to strengthen it, just to avoid degenerate numbers that lead to a predictable sequence. What makes the algorithm hard to reverse is the same principle as cipher algorithms, which can still have weaknesses in their simple form (without an S-box or anything else), especially with low-entropy input. A hash normally gives good entropy, but after many rings the signature could lose entropy and the rings become easy to predict.
But maybe it doesn't matter too much; I would need to be sure, and not wait for an attack to show it's broken.
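A cheap version of that "compressibility as entropy" test could look like this (a sketch; zlib stands in for plain Huffman coding, and the 0.9 threshold is an arbitrary choice for 32-byte keys):

```python
import zlib

def looks_degenerate(key: bytes, threshold=0.9):
    """A key that compresses well has low entropy; flag it so it can be
    rejected as a starting number before it produces a predictable ring."""
    ratio = len(zlib.compress(key, 9)) / len(key)
    return ratio < threshold

# A 256-bit number that is almost all zero bits, like 0x00010000.
weak = (0x00010000).to_bytes(32, "big")
```

Random 32-byte keys do not compress (the zlib framing even makes them slightly larger), so they pass, while near-all-zero numbers like the ones discussed above get flagged.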
|
|
|
|