qdoop (OP)
Member
Offline
Activity: 119
Merit: 112
_copy_improve_
|
|
October 14, 2014, 03:56:54 PM Last edit: October 15, 2014, 01:35:39 AM by qdoop |
|
Currently, for a block hash to be accepted (proof of work) it must have this form:

XXX....YYYYYYYYYYYYYYYYYYYYYYYYYYY.....

where the X's are zeros and the Y's are don't-cares; the actual pattern depends on the difficulty. The difficulty target demands at least a minimum number of zero X's. This requirement does not adapt well to sudden changes in network hash power, and the time required to find a hash may vary significantly.

We propose a different proof-of-work algorithm that adapts well and works without specifying a difficulty target. Instead of treating the Y's as don't-cares, we count the bit changes (0-to-1 transitions) within them. So we force a block hash to satisfy two contradictory requirements, due to the fixed bit length of the hash (256 bits). Counting 0-to-1 transitions, a hash can have at most 128, something really rare. Given two hashes with the same number of transitions, the one with the most X's being zero wins. To be exact: starting from the left and comparing the hashes bit by bit, the first hash having a 0 where the other hash has a 1 wins (dominant 0 bit).

How does the network develop consensus?
A) All participating nodes try to maximize the hash's 0-to-1 transitions.
B) At given time intervals, nodes publish their best result so far.
C) A node receiving two or more hashes always prefers the one with the most transitions and, if equal, the one with the most dominant 0 bits.

*A node that finds a really rare block and publishes it on time radically improves the security of the network.

This is preliminary work and we would like your comments, or suggestions of similar work by others.
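As a concrete illustration (our own sketch, not reference code from the proposal), the preference rule in (C) could look like this in Python, where `count_01_transitions` scans from the most significant bit:

```python
# Illustrative sketch of rule (C): prefer the hash with more 0->1 transitions;
# on a tie, prefer the one whose first differing bit is a 0 where the other
# has a 1 (the "dominant 0 bit").

def count_01_transitions(h: int, bits: int = 256) -> int:
    """Count 0->1 transitions scanning the value from MSB to LSB."""
    count, prev = 0, (h >> (bits - 1)) & 1
    for i in range(bits - 2, -1, -1):
        cur = (h >> i) & 1
        count += (prev == 0 and cur == 1)
        prev = cur
    return count

def preferred(a: int, b: int) -> int:
    """Return the hash a node should keep under the proposed rules."""
    ta, tb = count_01_transitions(a), count_01_transitions(b)
    if ta != tb:
        return a if ta > tb else b
    # Equal transition counts: bit-by-bit comparison from the left is exactly
    # integer comparison, so the smaller value has the dominant 0 bit.
    return min(a, b)
```

Note that the alternating pattern 0101...01 is the unique 256-bit value achieving the maximum of 128 transitions.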
|
|
|
|
|
|
|
|
|
|
|
|
|
|
btchris
|
|
October 14, 2014, 04:54:31 PM |
|
A hash is just a big integer (in this context); there's no need to do something this complicated if all you're after is changing the hash-target scheme to an exactly-one-block-per-unit-time scheme. Just compare the hashes' integer values, and the value closest to zero wins.
However, do you think it's a good idea for the Bitcoin network to flood itself with one or two hundred fully solved blocks every 10 minutes? How would you implement an anti-DoS scheme to prevent an attacker from flooding the network with thousands or millions of solved-with-low-difficulty blocks?
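The "closest to zero wins" comparison is easy to sketch (an illustration with helper names of our own; the hash is interpreted here as a big-endian integer for simplicity, whereas Bitcoin's own byte ordering differs in detail):

```python
# Sketch: comparing two candidate block hashes as big integers,
# as suggested above -- the hash closest to zero "wins".
import hashlib

def hash_as_int(data: bytes) -> int:
    """SHA-256d (double SHA-256) of the data, read as a big-endian integer."""
    digest = hashlib.sha256(hashlib.sha256(data).digest()).digest()
    return int.from_bytes(digest, "big")

def better_hash(a: bytes, b: bytes) -> bytes:
    """Return whichever candidate hashes closer to zero."""
    return a if hash_as_int(a) < hash_as_int(b) else b
```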
|
|
|
|
qdoop (OP)
Member
Offline
Activity: 119
Merit: 112
_copy_improve_
|
|
October 14, 2014, 05:36:09 PM Last edit: October 14, 2014, 06:10:06 PM by qdoop |
|
Thanks for replying.
Flooding with fake blocks or transactions does not depend on the form of the hash. What prevents me from generating thousands of fake blocks starting with zeros?
Making a node also count bit transitions slows down desired hash generation and may help against an ASIC monopoly.
If the hash is OK, instead of going straight to Merkle tree verification you also have to check the transition count.
[edit] If counting bit transitions is simpler than hash evaluation, then once you have a really 'good' valid block it becomes faster to drop fake blocks by simply counting bit transitions.
|
|
|
|
btchris
|
|
October 14, 2014, 06:54:02 PM Last edit: October 23, 2014, 12:29:22 PM by btchris |
|
Quote from qdoop: Flooding with fake blocks or transactions does not depend on the form of the hash. What prevents me generating 1000's of fake blocks starting with zeros?

A node does not send a recently received block (or rather, it does not notify connected nodes that it has a new block to send) until after it has fully verified the block, including the hash target, so the only way to flood the network with blocks is to actually generate them (at the current difficulty). As I understand your proposal, every X minutes all nodes will need to see all potential blocks before a consensus can be achieved, which seems open to DoS.

Quote from qdoop: Putting a node to also count bit transitions slows down desired hash generation and may help against ASICs monopoly.

Adding a new restriction to the target hash would slow down hash generation, but we already have a way to slow down hash generation: increase the difficulty. Keep in mind that difficulty is not a bit count; it's more finely granular than that.

Quote from qdoop: [edit] If counting bit transitions is simpler than hash evaluation then once you have a really 'good' valid block it makes it faster to drop fake blocks by simple bit transitions counting

I'm afraid I don't understand what you're saying here.
|
|
|
|
Rannasha
|
|
October 14, 2014, 07:49:43 PM |
|
How the network develops consciousness? A) All participating nodes try to maximize hash 0 to 1 transitions B) At given time intervals nodes publish their best so far C) A node receiving two or more hashes always prefers the one with most transitions and if equal the one with most dominant 0 bits. *A node that finds a really rare block and publishes it on time radically improves the security of the network.
The problem here lies with "at given time intervals". How do you ensure that all miners have synchronized clocks? And how will you deal with network delays, when a miner submits his block but it takes considerable time to cross the network? How long do other nodes wait to collect hashes from miners before they determine a winner? And how do you enforce that all nodes make their decision at the same time? The current system works because it doesn't depend on all players having their clocks properly synced, and there are clear rules that establish what happens when, for example due to poor network connectivity or whatever other reason, there are multiple competing branches of the blockchain.
|
|
|
|
qdoop (OP)
Member
Offline
Activity: 119
Merit: 112
_copy_improve_
|
|
October 14, 2014, 08:12:29 PM |
|
First, I want to make it clear that I agree with you that counting the dominant zeros / integer size of a hash is much simpler and faster.

Point A) It is obvious that you do not forward anything received before you double-check it and, in the case of a block hash, compare it with your own best result. So a modest-performance node can only flood lower-performance nodes. Sorry if I did not mention that, but I was aware of this practice and do not intend to change it.

Point B) Yes, you can set a minimum difficulty in both cases, but only to set a minimum performance level so you don't have too much spam. I don't consider it a problem to have each node verify a few blocks each round; in fact it may be beneficial for the whole system, instead of spending all processing power chasing the best block.

Point C) What is the maximum number of possible bit transitions? 128. Is it easy to generate them? Not so straightforward. Is it easy to count them? Yes, simpler than generating a random hash having them, or calculating a hash.
|
|
|
|
qdoop (OP)
Member
Offline
Activity: 119
Merit: 112
_copy_improve_
|
|
October 14, 2014, 08:23:20 PM |
|
Quote from Rannasha: The problem here lies with "at given time intervals". How do you ensure that all miners have synchronized clocks? [...]

I think there is a free time service publicly available at http://time.gov/. Many things on the internet depend on it, so you CANNOT shut it down so easily. Modern PC clocks drift only a few seconds daily, so even if the service goes down, once synced with it a node can continue functioning for several hours or days without problems.
|
|
|
|
Rannasha
|
|
October 14, 2014, 08:29:49 PM |
|
Quote from qdoop: I think there is a free time service publicly available at http://time.gov/ [...] So even if the service goes down once synced with it a node can continue functioning for several hours or days without problem.

And what if someone willingly doesn't use that specific NTP server, in order to stage some sort of attack or to disrupt the network? Or if someone for some reason gains control of time.gov? Your solution requires trust in a central entity, which is the antithesis of Bitcoin.
|
|
|
|
qdoop (OP)
Member
Offline
Activity: 119
Merit: 112
_copy_improve_
|
|
October 14, 2014, 08:56:03 PM |
|
You know the CAP theorem: it says that a PARTITIONED system cannot be simultaneously CONSISTENT and AVAILABLE.
So the Bitcoin network has to periodically enforce some rule for nodes to become CONSISTENT, like agreeing on a common time. Otherwise the system will eventually split into two highly AVAILABLE but not CONSISTENT parts (a blockchain fork).
Bitcoin is just fine, but I think it always helps to discuss ways to improve it by relaxing some requirements in favor of simpler behavior (such as using external time references widely available on the internet). For example, there is an ongoing discussion on increasing transaction volume.
|
|
|
|
cr1776
Legendary
Offline
Activity: 4032
Merit: 1299
|
|
October 14, 2014, 09:27:23 PM |
|
Quote from qdoop: Bitcoin is just fine but talking about ways to improve it by relaxing some requirements in favor of a simpler behavior (such as using some external time references widely available over the internet) i think always helps. [...]
I think many would disagree with the premise of the statement that using a central server is an improvement, whether it be for time or anything else. Likewise, most enjoy discussing well thought-out proposals that could be used. Centralization opens up more attack vectors and is the antithesis of the protocol design, so I think you'll see a lot of resistance to anything that centralizes things and many people wouldn't switch to a fork that includes centralization by design. :-)
|
|
|
|
btchris
|
|
October 14, 2014, 09:31:19 PM |
|
Quote from qdoop: Bitcoin is just fine but talking about ways to improve it by relaxing some requirements in favor of a simpler behavior (such as using some external time references widely available over the internet) i think always helps.

I have to disagree with you, but this part is a matter of opinion. I think Bitcoin should strive to be as decentralized as possible. It's not perfectly decentralized now, but I see no reason to add additional potential points of failure or points of control. Rather, I'd prefer the opposite: remove what few centralized points are left.

Quote from qdoop: It is obvious that you do not forward anything received before you double check it and compare it in case of block hash with your own best result. So a modest performance node can only flood minor performance nodes.

I suspect that the majority of nodes on the network are not miners, and therefore have no concept of best-block-so-far-in-my-pool, and remain susceptible to a DoS.

Quote from qdoop: What is the max range of possible bit transitions? 128 Is it easy to generate them? not so straight Is it easy to count them? yes simpler than generating a random one of them or calculating a hash

I think you just described a PoW system... hard to generate and easy to verify. How is this add-on PoW any improvement over the PoW already in use?
|
|
|
|
deepceleron
Legendary
Offline
Activity: 1512
Merit: 1028
|
|
October 14, 2014, 10:51:25 PM |
|
Quote from qdoop: Currently for a block hash to be accepted (proof of work) should have this form XXX....YYY..... [...] This is preliminary work an we would like your comments or suggest similar works from others.

This is a wealth of nonsense that shows a complete lack of understanding of how mining, target, difficulty, or even binary math work, and which then conjures up a nonexistent issue.

A SHA-256 hash is 256 bits long. 256 bits can represent a number between 0 and 115792089237316195423570985008687907853269984665640564039457584007913129639935. Hashing arbitrary data returns a value within this interval, seemingly at random, with uniform distribution.
Difficulty specifies that a lower number threshold is required to "win": a found hash value must be significantly smaller than the maximum. If we make the threshold 100 times smaller, only one in 100 hashes will win. The starting point, Bitcoin at difficulty 1, requires a hash value 4295032833.000015 times smaller than the maximum, meaning only 1 in ~4.3 billion hashes will meet the difficulty-1 challenge.

The actual difference between steps can be easily calculated. Bitcoin encodes the difficulty target with nearly six significant figures in hex (I won't try to explain how it is actually encoded). Here we show that the next possible increment after difficulty 1 is difficulty 1.000015259254738:

>>> (0xffff * 2.0**208) / (0xfffe * 2.0**208)
1.000015259254738

The current difficulty target is
0x00000000000000001F6973000000000000000000000000000000000000000000
which is difficulty 35002482026.13323. The next possible difficulty target increment is
0x00000000000000001F6972000000000000000000000000000000000000000000
which is difficulty 35002499029.10224. The ratio between these two is 1.0000004857646665, which is enough accuracy that even a one-second difference in the two-week mining period measurement will result in a different difficulty.
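The compact encoding deepceleron declines to explain can be sketched briefly (a standard decoding of Bitcoin's nBits field; the helper names are our own):

```python
# Sketch: decoding Bitcoin's compact "nBits" target encoding and relating it
# to difficulty. The high byte of nBits is a base-256 exponent and the low
# three bytes are a mantissa; difficulty 1 corresponds to target 0xffff * 2**208.

DIFF1_TARGET = 0xFFFF * 2**208

def bits_to_target(nbits: int) -> int:
    exponent = nbits >> 24
    mantissa = nbits & 0x007FFFFF  # top mantissa bit is a sign flag, unused here
    return mantissa * 256 ** (exponent - 3)

def difficulty(target: int) -> float:
    # Difficulty is how many times smaller the target is than the difficulty-1 target.
    return DIFF1_TARGET / target

# Example: the genesis block's nBits of 0x1d00ffff decodes to the difficulty-1
# target, and the smallest step above difficulty 1 is
# DIFF1_TARGET / (0xFFFE * 2**208), matching the 1.000015259254738 shown above.
```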
|
|
|
|
qdoop (OP)
Member
Offline
Activity: 119
Merit: 112
_copy_improve_
|
|
October 14, 2014, 10:54:26 PM |
|
Quote from cr1776: I think many would disagree with the premise of the statement that using a central server is an improvement, whether it be for time or anything else. [...] many people wouldn't switch to a fork that includes centralization by design.

How do you discover available nodes to connect to? By using some standard locations and well-known nodes. Isn't that the same as, or perhaps more centralized than, picking the time from various places around the internet?
|
|
|
|
qdoop (OP)
Member
Offline
Activity: 119
Merit: 112
_copy_improve_
|
|
October 14, 2014, 11:22:39 PM |
|
Quote from deepceleron: This is a wealth of nonsense that shows a complete lack of understanding of how mining, target, difficulty, or even binary math work [...] even a one-second difference in the two-week mining period measurement will result in a different difficulty.

btchris does not find any problem with using the number of initial zero bits as a measure of proof of work. I already agreed with him that counting the bit transitions might be too complicated. That's why we asked for comments: we wanted to spot the implications of our approach.

Let's suppose someone finds, by chance, a hash with many transitions or initial zeros. How can we take it into consideration? There is a possibility that the system might fork. If, as a simple rule, we require that the branch with the most transitions is preferred, then all nodes know what to do.

Finally, yes, asking for the most transitions may be overkill, but a system would perform just fine without any mention of difficulty. That's my personal belief.
|
|
|
|
PenAndPaper
|
|
October 14, 2014, 11:34:58 PM |
|
Quote from deepceleron: This is a wealth of nonsense that shows a complete lack of understanding of how mining, target, difficulty, or even binary math work [...]

This. Also, a word of advice to anyone who thinks he can come up with a better, fundamentally different, or even slightly different PoW algorithm: learn the very basics!
|
|
|
|
qdoop (OP)
Member
Offline
Activity: 119
Merit: 112
_copy_improve_
|
|
October 14, 2014, 11:46:26 PM |
|
Quote from PenAndPaper: Also an advice to anyone who thinks he can come up with a better or fundamentally different or even slightly different pow algorithm... Learn the very basics!

Thanks for repeating things I already know.
If you claim that there is never going to exist a better PoW algorithm, then I can assure you that you are wrong.
PS. https://en.wikipedia.org/wiki/Hamming_distance
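For reference, the Hamming distance linked above reduces to a population count of an XOR; a minimal sketch (illustrative, not from the thread):

```python
# Sketch: Hamming distance between two equal-width bit strings is the number
# of positions where they differ, i.e. the popcount of their XOR.

def hamming_distance(a: int, b: int) -> int:
    return bin(a ^ b).count("1")
```

For example, `hamming_distance(0b1010, 0b0110)` is 2, since the two values differ in their middle two bits.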
|
|
|
|
btchris
|
|
October 14, 2014, 11:54:43 PM |
|
Quote from qdoop: btchris does not find any problem with using the number of initial zero bits as a measure of proof of work.

Please don't paraphrase what I said. What I said was: "we already have a way to slow down hash generation: increase the difficulty. Keep in mind that difficulty is not a bit count [emphasis added], it's more finely granular than that." In other words, I completely agree with deepceleron.
|
|
|
|
qdoop (OP)
Member
Offline
Activity: 119
Merit: 112
_copy_improve_
|
|
October 15, 2014, 12:14:29 AM Last edit: October 15, 2014, 12:49:31 AM by qdoop |
|
Quote from btchris: Please don't paraphrase what I said... [...] In other words, I completely agree with deepceleron.

This is clear, and thank you for clarifying! A bit shift changes the probability by a factor of 2 (halves or doubles it). Just a question, if you know for sure: when calculating the total difficulty of a chain, do we count the values of the actual hashes, or the difficulty targets stored in the blocks?
|
|
|
|
PenAndPaper
|
|
October 15, 2014, 12:14:42 AM |
|
Quote from qdoop: If you claim than there is never going to exist a better POW algorithm then i can ensure you that you are wrong PS. https://en.wikipedia.org/wiki/Hamming_distance

The concept of "a better PoW algorithm" is relative. Some may argue that "memory-hard" PoW algorithms are better for cryptocurrencies. The problem is that you don't understand the very fundamental concept on top of which PoW algorithms are built. And that concept can't get any better, because it is AS SIMPLE (and elegant) AS IT GETS.
|
|
|
|
qdoop (OP)
Member
Offline
Activity: 119
Merit: 112
_copy_improve_
|
|
October 15, 2014, 12:24:58 AM |
|
Quote from PenAndPaper: The problem is that you don't understand the very fundamental concept on top of which pow algorithms are based. And that concept can't get any better because it is AS SIMPLE (and elegant) AS IT GETS.

OK, I do not understand it; I have to admit that. Just a question, if you know for sure: when calculating the total difficulty of a chain, do we count the values of the actual hashes, or the difficulty targets stored in the blocks?
|
|
|
|
|