valiron


May 02, 2015, 05:53:43 PM Last edit: May 02, 2015, 06:20:56 PM by valiron 

All 4 nonces very close by.
Not close at all. A difference of 300,000 is about one thirteenth of the maximum range, which means consecutive nonces will be this close together over 10 times a day.
A 4-byte nonce is between 0 and 2^32 - 1 = 4,294,967,295, right? Where is your 300,000 being 1/13th coming from?
I meant 300,000,000 (that's the closeness we're talking about, right?), but I misplaced a few zeros somewhere around the second glass of absinthe. This is why you shouldn't drink and derive.
Be careful with absinthe... Let's look closer at the nonces. We assume that nonces are uniformly distributed (not exactly true, since if miners start counting up from nonce 0 the nonces follow a Poisson law; but taking into account that the nonce cycles many times before a solution is found, the uniform distribution is a good approximation). We look at the distance mod 2^32:

nonce(354641) - nonce(354640) = 19,452,599; probability 19,452,599/(2^32 - 1) * 2 = 1.8%
nonce(354642) - nonce(354641) = 5,394,922; probability 5,394,922/(2^32 - 1) * 2 = 0.12%
nonce(354643) - nonce(354642) = 313,864,936; probability 313,864,936/(2^32 - 1) * 2 = 7.2%

Combined probability 0.000155%, that is 1 in 645,161. For me, this is just evidence that these blocks are not mined the usual way with repetitive trials.

EDIT: Corrected the 645,161. Thanks to jl2012.
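The distance-mod-2^32 argument above can be sketched in a few lines of Python. This is a rough sketch assuming independent, uniform 32-bit nonces; note that the exact percentages depend on the counting convention (one-sided vs. two-sided distance), so they will not match the quoted figures digit for digit.

```python
# Sketch of the "distance mod 2^32" closeness argument, assuming nonces are
# independent and uniform on 0 .. 2^32 - 1.
N = 2**32

def circular_distance(a, b, n=N):
    """Shortest distance between a and b around a circle of n points."""
    d = abs(a - b) % n
    return min(d, n - d)

def closeness_probability(d, n=N):
    """P(a uniform nonce lands within circular distance d of a fixed point)."""
    return 2 * d / n

# The three consecutive gaps quoted in the thread (blocks 354640-354643):
gaps = [19_452_599, 5_394_922, 313_864_936]
probs = [closeness_probability(g) for g in gaps]
combined = probs[0] * probs[1] * probs[2]
print(["%.3f%%" % (p * 100) for p in probs])
print("combined: about 1 in %.0f" % (1 / combined))
```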












valiron


May 02, 2015, 05:56:43 PM 

Sure, but at current difficulty miners cycle many times through the whole range. I did take this into account. If you want to be more accurate you have to average the translated Poisson distribution. To a first approximation it is uniform, in particular in the range considered, with nonces around 2.1 billion.
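The claim that cycling through the nonce range many times washes out the start-from-zero bias can be checked numerically. This is a toy model with a made-up small range and cycle count (not actual miner behaviour): when the number of attempts is roughly exponential with a mean many times the range, the final counter value mod the range is nearly uniform.

```python
import random

# Toy check: attempts ~ exponential with mean 50x the nonce range, so the
# counter cycles ~50 times before a hit; the final value mod N is close to
# uniform. The range and cycle count are tiny, made-up values for speed.
rng = random.Random(0)
N = 1_000                     # toy nonce range
mean_attempts = 50 * N        # the counter cycles ~50 times on average

samples = [int(rng.expovariate(1 / mean_attempts)) % N for _ in range(100_000)]

# Bucket into 10 bins; a uniform distribution puts ~10% in each.
bins = [0] * 10
for s in samples:
    bins[s * 10 // N] += 1
print([round(b / len(samples), 3) for b in bins])
```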




valiron


May 02, 2015, 06:02:19 PM Last edit: May 03, 2015, 03:26:51 PM by valiron 

It is clear that someone found a trick for fast mining. I kind of happen to know what it might be...
Is it IntelliHash?
It is premining to some extent. Won't disclose more for the moment.
You should expose it now, bro. If true, it needs to be addressed sooner rather than later.
valiron, on pins and needles here. Seriously, you start the thread asking like you have no idea, realistically only to puff yourself up in the end... Waiting for the "PM me and for 100 BTC I will tell you" message next...
I don't understand your message.




jl2012
Legendary
Offline
Activity: 1792
Merit: 1010


May 02, 2015, 06:05:53 PM 

Combined probability 0.000155% that is 1 in 64.5 million of times.
Are you trolling? 0.000155% is 1 in 645,161.
And this is nonsense, just some made-up data:
nonce(1) - nonce(0) = 5%
nonce(2) - nonce(1) = 20%
nonce(3) - nonce(2) = 10%
nonce(4) - nonce(3) = 1%
nonce(5) - nonce(4) = 5%
nonce(6) - nonce(5) = 10%
Combined probability 0.000005%, that is 1 in 20 million times. Bitcoin is broken!!!

Donation address: 374iXxS4BuqFHsEwwxUuH3nvJ69Y7Hqur3 (Bitcoin ONLY) LRDGENPLYrcTRssGoZrsCT1hngaH3BVkM4 (LTC) PGP: D3CC 1772 8600 5BB8 FF67 3294 C524 2A1A B393 6517



valiron


May 02, 2015, 06:11:52 PM 

Are you trolling? 0.000155% is 1 in 645,161. [...] Combined probability 0.000005%, that is 1 in 20 million times. Bitcoin is broken!!!
I just did a rough approximation, only valid for small probabilities and few events. You are welcome to do the exact computation.
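The "exact computation" can also be approached by brute force. This is a hypothetical Monte-Carlo check: draw windows of three independent gaps (each the circular distance between two uniform 32-bit nonces) and count how often all three are simultaneously at most the observed gaps. With a per-window probability on the order of a few in a million, a million trials should yield only a handful of hits.

```python
import random

# Monte-Carlo estimate of the combined closeness probability, assuming
# independent uniform 32-bit nonces.
N = 2**32
GAPS = [19_452_599, 5_394_922, 313_864_936]
rng = random.Random(12345)

def random_gap():
    u = rng.randrange(N)      # difference of two independent uniforms, mod N
    return min(u, N - u)      # minor-arc distance

trials = 1_000_000
hits = sum(all(random_gap() <= g for g in GAPS) for _ in range(trials))
print(hits, "windows as close as observed, out of", trials)
```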




Foxpup
Legendary
Offline
Activity: 3276
Merit: 2211
Vile Vixen


May 02, 2015, 06:14:41 PM 

Are you trolling? 0.000155% is 1 in 645,161.
I was just about to say that. Guess I'm not the only one who needs to be careful.
And this is nonsense, just some made-up data:
nonce(1) - nonce(0) = 5%
nonce(2) - nonce(1) = 20%
nonce(3) - nonce(2) = 10%
nonce(4) - nonce(3) = 1%
nonce(5) - nonce(4) = 5%
nonce(6) - nonce(5) = 10%
Combined probability 0.000005%, that is 1 in 20 million times. Bitcoin is broken!!!
It is premining to some extent. Won't disclose more for the moment.

Will pretend to do unspeakable things (while actually eating a taco) for bitcoins: 1K6d1EviQKX3SVKjPYmJGyWBb1avbmCFM4
I am not on the scammers' paradise known as Telegram! Do not believe anyone claiming to be me off-forum without a signed message from the above address! Accept no excuses and make no exceptions!



valiron


May 02, 2015, 06:18:37 PM 

Are you trolling? 0.000155% is 1 in 645,161.
You are right on this one, of course. I corrected it, thanks.




smolen


May 02, 2015, 06:47:06 PM 

Nonces are not uniformly distributed because miners always start scanning from 0.
Also, Bitfury ASICs (and maybe others too) don't scan the full nonce range, due to hardware optimizations.

Of course I gave you bad advice. Good one is way out of your price range.



jl2012
Legendary
Offline
Activity: 1792
Merit: 1010


May 02, 2015, 06:52:24 PM 

I just did a rough approximation, only valid for small probabilities and few events. You are welcome to do the exact computation.
You calculated it in a wrong way. You should define the meaning of "close" a priori. That could be 20%, 10%, or 1%. Say you choose 10%: then P(1.8%, 0.12%, 7.2%) should be 1/1000, not 1/645,161. And say you choose 2%: then P(1.8%, 0.12%, 7.2%) should be 1/2551 (0.02 * 0.02 * 0.98). Therefore, one event of this kind is expected about every 2 weeks.
Please stop here (and edit your misleading topic) unless you find something really statistically significantly deviated from the theoretical distribution.
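jl2012's a-priori-threshold calculation can be reproduced in a short sketch. The threshold for "close" is fixed before looking at the data; a gap under the threshold contributes a factor of the threshold, and one over it contributes (1 - threshold). The 144 blocks/day figure is the usual average.

```python
# Sketch of the a-priori threshold calculation from the post above.
BLOCKS_PER_DAY = 144

def pattern_probability(observed, threshold):
    """Probability of the observed under/over-threshold pattern of the gaps."""
    p = 1.0
    for prob in observed:
        p *= threshold if prob <= threshold else (1 - threshold)
    return p

observed = [0.018, 0.0012, 0.072]   # the three gap probabilities quoted
for t in (0.10, 0.02):
    p = pattern_probability(observed, t)
    print("threshold %.0f%%: 1 in %.0f, about %.1f days"
          % (t * 100, 1 / p, 1 / (p * BLOCKS_PER_DAY)))
```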




valiron


May 02, 2015, 07:50:07 PM 

Please stop here (and edit your misleading topic) unless you find something really statistically significantly deviated from the theoretical distribution.
I don't understand what you mean. OK, let me do the computation and explain things carefully. You can tell me which point you disagree with.

(0) Put your 2^32 - 1 integer values on a circle of perimeter 2. This geometrical representation will help.

(1) We assume a uniform distribution of nonces. This is correct as a first approximation, but not totally accurate, as pointed out before by several people. We could extract the historical distribution and use that instead.

(2) The probability that two consecutive nonces are as close as nonce(354641) and nonce(354640) is 1.8%. It is the minor arc length between the two nonces on the circle. The same holds for nonce(354642) and nonce(354641), and for nonce(354643) and nonce(354642). Please correct me if you disagree.

(3) We assume independence of nonces with respect to previous nonces, i.e. we treat nonces as independent random variables. This implies that the distance between nonce(n+2) and nonce(n+1) is independent of the distance between nonce(n+1) and nonce(n).

(4) Thus the probability of having three consecutive events of the sort described is just the product of the probabilities: 1 in 645,161. At an average production of one block (nonce) every 10 minutes, this is seen on average once every 12.27 years.

My conclusion is that the nonces produced by this miner are likely not independent, and that the mining procedure is not the usual one: it reuses previous block computations or doesn't make much use of the nonce variable. But this is just one piece of evidence.

The second piece, the block size, also points to the same miner having mined all four blocks. 731 kB blocks are quite common, as noted earlier by someone else, but it is not very likely to find them consecutively either. Moreover, I bet that they cluster more often than expected, and this can be checked by running statistics on the blockchain.

The third piece of evidence is how close in time these blocks are. The probability is not alarmingly small and can be computed from the Poisson distribution that times between blocks follow.

The fourth piece of evidence is the non-chronological timestamps, which suggest that timestamp malleability is also being used as a nonce (this was already noted for blocks with only one transaction).

The fifth piece of evidence is that the first block was mined by AntPool and the next three by anonymous miners. It is not so common to have consecutive anonymous blocks; this indicates that the miner is trying to hide that the same entity mined them. All these "coincidences" are extremely unlikely and point to something going on there.
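The recurrence claim in step (4) above can be checked arithmetically, taking the post's own numbers at face value: one block every 10 minutes, and a per-window probability equal to the product of the three quoted arc lengths.

```python
# Waiting-time arithmetic for step (4), under the post's own assumptions.
combined = 0.018 * 0.0012 * 0.072     # product of the three quoted arc lengths
one_in = 1 / combined                 # roughly the "1 in 645,161" quoted
blocks_per_year = 6 * 24 * 365.25     # one block per 10 minutes
years = one_in / blocks_per_year
print("1 in %.0f windows, i.e. about once every %.1f years" % (one_in, years))
```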




gmaxwell
Moderator
Legendary
Offline
Activity: 3416
Merit: 5148


May 02, 2015, 08:00:10 PM 

Also, a lot of hardware only searches a subset of nonces. The size is irrelevant; it's just roughly the soft target most miners use. The size isn't even available to the mining algorithm, which works only on the block header; it enters only as the amount of data that, after a dozen layers of SHA256, feeds into the tree root hash in the header.




jl2012
Legendary
Offline
Activity: 1792
Merit: 1010


May 02, 2015, 08:07:28 PM 

(4) Thus the probability of having three consecutive events of the sort described is just the product of the probabilities: 1 in 645,161. At an average production of one block (nonce) every 10 minutes, this is seen on average once every 12.27 years.
If you can't see why you are committing an elementary statistics fallacy, just consider this:
1. P is a uniformly distributed variable from 0 to 1, with mean = 0.5.
2. There are 144 blocks per day.
3. The probability calculated, in the way you suggest, is about 0.5^144 = 4*10^(-44), which should NEVER happen.
As for the consecutive 731 kB blocks, it just shows there were too many unconfirmed transactions and miners had to use the maximum size.
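The reductio in the post above is one line of arithmetic: multiplying one per-block probability (about 0.5 on average) for every block in a day makes any specific, perfectly typical day look impossible.

```python
# jl2012's reductio: the multiply-everything method gives even a typical
# day of 144 blocks a combined probability that "should never happen".
p_typical_day = 0.5 ** 144
print("combined probability of a typical day: %.1e" % p_typical_day)
```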




valiron


May 02, 2015, 08:13:15 PM 

Also, a lot of hardware only searches a subset of nonces. The size is irrelevant; it's just roughly the soft target most miners use...
The distribution of nonces is maybe not uniform in a 10% range around nonce 2,148,000,000. Indeed, 2,147,483,648 = 2^31, so the nonces of these blocks are, in binary, 10000....0000 plus or minus some small stuff. Also, are there technical descriptions of the algorithms used by different ASICs?
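The binary observation can be made concrete: 2,147,483,648 is exactly 2^31, a one followed by 31 zeros in 32-bit binary. The two offsets used below are hypothetical, for illustration only (the thread quotes differences between nonces, not the nonce values themselves).

```python
# Values near 2^31 are the pattern 1000...0 plus or minus "small stuff".
base = 2**31
print(format(base, "032b"), "= 2^31")
for offset in (19_452_599, -5_394_922):   # hypothetical offsets
    print(format(base + offset, "032b"), "= 2^31 %+d" % offset)
```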




valiron


May 02, 2015, 08:17:46 PM 

3. The probability calculated, in the way you suggest, is about 0.5^144 = 4*10^(-44), which should NEVER happen.
The probability of having 144 independent random variables all smaller than their mean value is what you computed.
It is the same as computing the probability of throwing a coin 144 times and seeing heads every time. You are right: it is extremely unlikely and should never happen. Where is the fallacy?





valiron


May 02, 2015, 08:36:28 PM 

Can you explain the mathematical reason why nonces produced by ASICs are not uniform? The references you provide obviously do not explain that (nice paper, by the way). Also, the point here is that the distribution around 2^31 may not be uniform. Is there any mathematical reason for that? From the pure hashing point of view, all nonces should have the same probability of success. If they appear with a non-uniform distribution, it is because the mining algorithm does not treat all of them equally, which is quite possible but must have a mathematical reason behind it. Anyway, the fact that all 4 blocks have a nonce close to 2^31 is more evidence that they were mined by the same miner. There are many other nonces that are not near 2^31. Too many similarities between the numbers of the 4 blocks...




gmaxwell
Moderator
Legendary
Offline
Activity: 3416
Merit: 5148


May 02, 2015, 08:41:20 PM 

FWIW, I think Valiron is engaging in misconduct here. At first there is an "innocent" observational question, and then, after people point out that the observation is expected (because of hardware that only uses a limited set of nonces, and because of the block soft-target maximum), he has adopted a position of "secret knowing" that substantiates his position, and yet he will not explain it.
Of course, it's possible for someone to be innocently ignorant, even likely (especially considering Valiron's posting history; there are plenty of optimizations you could be unaware of, or structure about mining that lay people misunderstand and could mistake for some advantage), but there is no reason to play secrecy games here, and secrecy is actively poisonous to having your understanding corrected. Likewise, it's possible to actually know secrets, but then you don't go hinting about them on the forum. One possible gain from the pattern of posts here would be to manipulate the market with FUD about the security of the hashing algorithm; another would be to try to scam a greedy buyer into buying these "premining" secrets. So these are my working theories, and I've negatively rated valiron accordingly.
(I'd debated instead locking the thread, as a thread of "Oh, what's this?" / "It's that" / "Oh no it's not, it's something else, but I won't tell you" / "We told you it's this" / "Bad math, bad math" isn't a good use of the forum; but I thought giving a chance for a correction would be more useful.)




gmaxwell
Moderator
Legendary
Offline
Activity: 3416
Merit: 5148


May 02, 2015, 08:48:29 PM 

Can you explain the mathematical reason why nonces produced by ASICs are not uniform? The references you provide obviously do not explain that.
Because mining ASICs use a "sea of hashers": they take one midstate work unit and broadcast it to hundreds (or even thousands) of SHA256 engines, each of which tries a different nonce for the same work. You only have a finite number of engines, so only a subset of nonces will get used; also, some engines will fail (sometimes the same engine on every chip of a particular make), adding additional gaps. The allocation schemes differ from device to device (e.g. some hardware only produces even nonces or multiples of 64, some hardware only produces nonces in a range 0-1024, etc.). There is also an optimization where you actually hardwire the engines for given nonces and grind the first half, though I don't know if anyone bothers with it.
Anyway, the fact that all 4 blocks have a nonce close to 2^31 is more evidence that they were mined by the same miner.
Same miner or similar hardware, perhaps, sure. And so what? It's not uncommon for a large miner (or a hardware type with a large share of the hashrate) to find four blocks consecutively; there is effectively a calculator for that in the bitcoin whitepaper.
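The "sea of hashers" allocation described above can be sketched as a toy model: each engine is assigned a fixed offset and stride, so the nonces one device can ever return form a strict subset of the 32-bit range. The engine count here is made up for illustration.

```python
# Toy sketch of offset/stride nonce allocation across SHA256 engines.
N_ENGINES = 256   # hypothetical number of engines sharing one midstate

def engine_nonces(engine_id, limit=2**32, stride=N_ENGINES):
    """Nonces tried by this engine: engine_id, engine_id + stride, ..."""
    return range(engine_id, limit, stride)

# Everything engine 3 can return is congruent to 3 mod 256; if that engine
# is dead on every chip of a make, a whole residue class never appears in
# that hardware's blocks.
residues = {n % N_ENGINES for n in engine_nonces(3, limit=100_000)}
print(residues)
```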




valiron


May 02, 2015, 08:55:13 PM Last edit: May 04, 2015, 09:53:42 AM by valiron 

FWIW, I think Valiron is engaging in misconduct here. [...] I thought giving a chance for a correction would be more useful.
Sorry, I didn't mean any kind of misconduct or ulterior motive. As a disclaimer, I don't participate in active trading. If you want, I can edit and erase all hints at what I know (deleted). I don't mind discussing this openly, or, if you prefer, we can discuss it in a separate thread, but I don't think this is material to be exposed through posts in a forum; it would be better to discuss it in detail after a research paper is published. I am only interested in discussing the mathematical/computational aspects. As said, I can edit and remove everything that could sound alarming. It is not my intention to spread any kind of FUD about Bitcoin.




inBitweTrust


May 02, 2015, 08:56:06 PM 

FWIW, I think Valiron is engaging in misconduct here. At first there is an "innocent" observational question, and then, after people point out that the observation is expected (because of hardware that only uses a limited set of nonces, and because of the block soft-target maximum), he has adopted a position of "secret knowing" that substantiates his position, and yet he will not explain it.
I suspect as much as well, but to give Valiron the benefit of the doubt, and so that other lurkers can potentially learn something, I will try explaining it as simply as possible.
Can you explain the mathematical reason why nonces produced by ASICs are not uniform? The references you provide obviously do not explain that (nice paper, by the way).
The papers do explain biases towards certain numbers: why certain sets of numbers appear more often than others, and how these probabilistic biases can mislead you into drawing erroneous conclusions. Based on the quickness of your reply, you obviously didn't read the papers, so I will provide a video for you to understand this principle: https://www.youtube.com/watch?v=4UgZ5FqdYIQ
In the video, the bias is created because the sampled numbers are not random, but selected based on our human bias to start at 0 or 1 and work in a linear manner. The reason the nonces produced aren't uniformly random is that ASICs search through nonces in a linear, non-random manner within certain ranges. There are many potential nonces that could satisfy the block hash at a given difficulty, but since ASICs search for them linearly within a given range, it greatly increases the probability that similar nonces (relative to the full range of possibilities) will be found for each block. This is further emphasized by the fact that there are now very large mining pools running mostly the same hardware for most of their hashrate, with exactly the same characteristics in how it searches for valid nonces.
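The linear-scanning bias described above can be illustrated with a toy model: if a miner tests nonces 0, 1, 2, ... and each attempt succeeds with a tiny independent probability, the winning nonce is geometrically distributed, so low values dominate instead of being spread uniformly. The success probability and range below are made up for illustration.

```python
import random

# Toy model: linear scan from 0 with a tiny per-attempt success probability
# produces geometrically distributed (low-biased) winning nonces.
def winning_nonce(rng, success_prob=1e-4, limit=2**16):
    for nonce in range(limit):
        if rng.random() < success_prob:
            return nonce
    return None   # exhausted the range without a hit

rng = random.Random(7)
wins = sorted(w for w in (winning_nonce(rng) for _ in range(50)) if w is not None)
print("smallest five winners:", wins[:5])
print("median winner:", wins[len(wins) // 2], "out of a range of", 2**16)
```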




