WanderingPhilospher
Sr. Member
  
Offline
Activity: 1428
Merit: 274
Shooters Shoot...
|
 |
January 28, 2025, 04:13:53 PM |
|
A small-size sample test on prefixes, using this code:

import base58

min = base58.b58decode('1BY8GQb111111111111111111111111111').hex()
max = base58.b58decode('1BY8GQbzzzzzzzzzzzzzzzzzzzzzzzzzzz').hex()

# Get total possibilities, and remove the 4 checksum bytes
n = (int(max, 16) - int(min, 16) + 1) >> 32

# Get the AVERAGE count inside any 41-bit range
n = n / (2**(160 - 41))

print(n)
So for the 6-character prefix of "1BY8GQb" (not counting the leading version character "1"), the code says there should be an average of 143.55 matches inside a 41-bit range. I ran two 41-bit ranges:

1st range = 155 found 1BY8GQb prefixes
2nd range = 145 found 1BY8GQb prefixes

So if one were using some step/increment size to try to gauge where the next 1BY8GQb prefix lands in subsequent ranges, based on some average distance between the prefixes, how would they decide? Which sample range above would they use?

Let's take it one step further, same code, different input:

import base58

min = base58.b58decode('1BY8GQbn11111111111111111111111111').hex()
max = base58.b58decode('1BY8GQbnzzzzzzzzzzzzzzzzzzzzzzzzzz').hex()

# Get total possibilities, and remove the 4 checksum bytes
n = (int(max, 16) - int(min, 16) + 1) >> 32

# Get the AVERAGE count inside any 41-bit range
n = n / (2**(160 - 41))

print(n)
So for the 7-character prefix of "1BY8GQbn" (again not counting the leading "1"), the code says there should be an average of 2.47 matches inside a 41-bit range. Results:

1st range = 4 found 1BY8GQbn prefixes
2nd range = 3 found 1BY8GQbn prefixes

So again, how does one determine a good step/increment size? Take the average distance between found prefixes and then subtract some arbitrary amount, hoping not to skip over a prefix...the prefix that is actually the full address they are looking for? That's the gamble with this strategy. Is it right or wrong? No. Is it a surefire way to narrow down the search space and find the address? Also no.

The other thing about RIPEMD-160 hashes: you can have the same leading bytes and still get two different address prefixes. Example:

739437BB
3Dxxxxxx

This can create a single difference, or an entire difference. So to me, that's why it's hard to truly get an idea of how many times a prefix may appear in a range of a certain size. We can use some tools and get an idea, but as has been said, it's just an idea, an average based off of x, y, and z.
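The estimate from the snippets above can be reproduced without the third-party base58 package. The sketch below reimplements the base58 decode with the stdlib only; pad_to=34 and range_bits=41 are assumptions mirroring the examples in this post, not fixed constants:

```python
# Stdlib-only sketch of the prefix-count estimate above. The base58
# decode is reimplemented here; pad_to=34 and range_bits=41 mirror the
# thread's examples and are assumptions, not fixed constants.
B58 = '123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz'

def b58_to_int(s: str) -> int:
    n = 0
    for c in s:
        n = n * 58 + B58.index(c)
    return n

def expected_matches(prefix: str, pad_to: int = 34, range_bits: int = 41) -> float:
    lo = b58_to_int(prefix + '1' * (pad_to - len(prefix)))  # '1' encodes 0
    hi = b58_to_int(prefix + 'z' * (pad_to - len(prefix)))  # 'z' encodes 57
    span = (hi - lo + 1) >> 32       # drop the 4 checksum bytes
    return span / 2 ** (160 - range_bits)

print(round(expected_matches('1BY8GQb'), 2))   # ~143.55, as in the post
print(round(expected_matches('1BY8GQbn'), 2))  # ~2.47
```

Note that each extra base58 character divides the expected count by roughly 58, which is exactly the ratio between the two figures quoted above.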
|
|
|
|
|
|
mcdouglasx
|
 |
January 28, 2025, 04:16:46 PM |
|
The probability of whatever pair of two keys you want to pick having identical / semi-identical / totally different hashes (pick whatever characteristic and whatever degree you like) is always the same, no matter how many times you repeat the test. The chances do not change just because you are doing the extractions (hashing). Just because you flipped a coin heads 100 times in a row does not mean there are greater chances of it flipping tails on the 101st flip. At most, you can suspect the coin is rigged, but you can never be sure, unless you repeat the 100 flips many, many times, and each 100-flip sequence ends up the same. That is a pattern, and if you can't get a pattern, you definitely can't get statistics to predict the future.
You are talking about each coin flip having a 50% probability of landing on "heads" and a 50% probability of landing on "tails," with no relation to previous results. However, in the scenario you propose, there is a connection. Although each coin flip has a 50% (0.5) probability of landing on "heads" and a 50% (0.5) probability of landing on "tails," when we want to calculate the probability of getting "heads" in consecutive flips, we multiply the individual probabilities: the probability of getting "heads" in one flip is 0.5, in two consecutive flips it is 0.5 × 0.5 = 0.25, and in 100 consecutive flips it is 0.5^100, an extremely small number. So it is not, as you think, that the probability is always the same. This is known as compound probability, which refers to the probability of multiple independent events occurring in sequence.

I think you are missing the point, the bigger picture of what he is posting and saying and what others counter with.
In that sense, you are right; he cannot ensure whether a prefix is or is not in a given place based on statistics alone.
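The compound-probability arithmetic above can be spelled out as a quick check (a trivial sketch, assuming nothing beyond a fair coin):

```python
# Compound probability of consecutive heads with a fair coin:
p = 0.5
print(p * p)     # 0.25: two heads in a row
print(p ** 100)  # about 7.9e-31: a hundred heads in a row
```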
|
|
|
|
|
bibilgin
Newbie
Offline
Activity: 262
Merit: 0
|
 |
January 28, 2025, 04:44:11 PM |
|
Would you like to put a wager on it? 0.1 BTC ?
Also, there is no right or wrong way to search for 67...well, the right way to scan is to do a 100% scan. I think what some people have said to you is that you can do what you are doing, but more than likely, probabilistically, you will skip over some prefixes; and one of those could be the actual address you are looking for. That's all. But you double down and talk about how your way is a surefire way. But I know for a fact you have missed at least one key.
But I will say this, keep doing what you are doing. You will find the key or you won't.
I see your 0.1 BTC bet and I say let's increase it. Let's say 0.2 BTC. Because you are not looking for proof. But I also know you are bluffing. You can only skip some wallets this way. You want to give out information in case you lose your chance. I know that. But compared to random scanning and sequential scanning (wasting time), I am aware that my chances are higher.

A small size sample test on prefixes. Using this code:

import base58

min = base58.b58decode('1BY8GQb111111111111111111111111111').hex()
max = base58.b58decode('1BY8GQbzzzzzzzzzzzzzzzzzzzzzzzzzzz').hex()

# Get total possibilities, and remove the 4 checksum bytes
n = (int(max, 16) - int(min, 16) + 1) >> 32

# Get the AVERAGE count inside any 41-bit range
n = n / (2**(160 - 41))

print(n)
In such tests, the wallet character length is important. A wallet (address) 33 characters long is significantly different from one 34 characters long. So when you calculate in bits, you don't know how many 33- or 34-character wallets are in the bit range you are searching.
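The 33-versus-34 character point can be made concrete: address length depends on the numeric size of the versioned hash160 payload. Below is a stdlib-only Base58Check sketch; the two hash160 values are hypothetical, chosen only to straddle the length boundary:

```python
import hashlib

# Minimal Base58Check encoder (stdlib only), to show how P2PKH address
# length depends on the hash160 value. The two sample hash160 values
# below are hypothetical, chosen to straddle the 33/34-char boundary.
B58 = '123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz'

def b58check(payload: bytes) -> str:
    chk = hashlib.sha256(hashlib.sha256(payload).digest()).digest()[:4]
    data = payload + chk
    n = int.from_bytes(data, 'big')
    s = ''
    while n:
        n, r = divmod(n, 58)
        s = B58[r] + s
    pad = len(data) - len(data.lstrip(b'\x00'))  # each leading 0x00 -> '1'
    return '1' * pad + s

# A large hash160 yields a 34-char address, a smaller one only 33 chars:
for h160 in (1 << 159, 1 << 155):
    addr = b58check(b'\x00' + h160.to_bytes(20, 'big'))
    print(len(addr), addr)
```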
|
|
|
|
|
WanderingPhilospher
Sr. Member
  
Offline
Activity: 1428
Merit: 274
Shooters Shoot...
|
 |
January 28, 2025, 04:51:23 PM |
|
Would you like to put a wager on it? 0.1 BTC ?
Also, there is no right or wrong way to search for 67...well, the right way to scan is to do a 100% scan. I think what some people have said to you is that you can do what you are doing, but more than likely, probabilistically, you will skip over some prefixes; and one of those could be the actual address you are looking for. That's all. But you double down and talk about how your way is a surefire way. But I know for a fact you have missed at least one key.
But I will say this, keep doing what you are doing. You will find the key or you won't.
I see your 0.1 BTC bet and I say let's increase it. Let's say 0.2 BTC. Because you are not looking for proof. But I also know you are bluffing. You can only skip some wallets this way. You want to give out information in case you lose your chance. I know that. But compared to random scanning and sequential scanning (wasting time), I am aware that my chances are higher.

A small size sample test on prefixes. Using this code:

import base58

min = base58.b58decode('1BY8GQb111111111111111111111111111').hex()
max = base58.b58decode('1BY8GQbzzzzzzzzzzzzzzzzzzzzzzzzzzz').hex()

# Get total possibilities, and remove the 4 checksum bytes
n = (int(max, 16) - int(min, 16) + 1) >> 32

# Get the AVERAGE count inside any 41-bit range
n = n / (2**(160 - 41))

print(n)
In such tests, the wallet character length is important. A wallet (address) 33 characters long is significantly different from one 34 characters long. So when you calculate in bits, you don't know how many 33- or 34-character wallets are in the bit range you are searching.

Who on here can be a trusted agent to hold his 0.2 BTC in escrow, and then send it to me once I show the private key? Anyone?
|
|
|
|
|
|
kTimesG
|
 |
January 28, 2025, 05:41:42 PM |
|
You are talking about each coin flip having a 50% probability of landing on "heads" and a 50% probability of landing on "tails," with no relation to previous results. However, in the scenario you propose, there is a connection. Although each coin flip has a 50% (0.5) probability of landing on "heads" and a 50% (0.5) probability of landing on "tails," when we want to calculate the probability of getting "heads" in consecutive flips, we multiply the individual probabilities:
The probability of getting "heads" in one flip is 0.5, in two consecutive flips is (0.5 \times 0.5 = 0.25), in 100 consecutive flips is (0.5^100), which is an extremely small number.
So, it is not like you think that the probability is always the same. This is known as compound probability, which refers to the probability of multiple independent events occurring in sequence.
In one sequence of 100 coin flips, landing heads 100 times in a row has the exact same probability as landing 100 tails in a row, and the exact same probability as landing the sequence: HHTHHTTTHHTTHTH.... THTT

For flip #101, no matter what sequence you got up to that point (all heads, all tails, anything in between), the probability for the next flip to land heads is 50%, and the probability for it to land tails is 50%. By your logic, a 50% probability (of success) is different from a 50% probability (of failure). So what are you compounding, since the result is always the same regardless of the sequence? If you deny these laws, you can always run a simulation in Python (for lower lengths) and extrapolate.
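The suggested simulation, sketched for a shorter run (5 heads instead of 100, since a 100-head run is astronomically rare to sample; the run length and trial count are arbitrary choices):

```python
import random

# After a run of heads, is the next flip still 50/50? Condition on a
# 5-head run (a 100-head run is far too rare to sample) and tally the
# following flip. Run length and trial count are arbitrary choices.
random.seed(1)

RUN, TRIALS = 5, 200_000
runs_seen = next_heads = 0
for _ in range(TRIALS):
    flips = [random.random() < 0.5 for _ in range(RUN + 1)]
    if all(flips[:RUN]):          # the first RUN flips were all heads
        runs_seen += 1
        next_heads += flips[RUN]  # outcome of the flip after the run
print(runs_seen, next_heads / runs_seen)  # the ratio stays near 0.5
```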
|
Off the grid, training pigeons to broadcast signed messages.
|
|
|
Kelvin555
Jr. Member
Offline
Activity: 63
Merit: 1
|
 |
January 28, 2025, 05:44:41 PM |
|
I see your 0.1 BTC bet and I say let's increase it. Let's say 0.2 BTC. Because you are not looking for proof. But I also know you are bluffing. Who on here can be a trusted agent to hold his 0.2 BTC in escrow, and then send it to me once I show the private key? Anyone?
There are people on this forum who run professional escrow services; reach out to them, or let bib choose someone who is at least a senior member of this forum.
|
|
|
|
|
|
mcdouglasx
|
 |
January 28, 2025, 06:01:44 PM |
|
You are talking about each coin flip having a 50% probability of landing on "heads" and a 50% probability of landing on "tails," with no relation to previous results. However, in the scenario you propose, there is a connection. Although each coin flip has a 50% (0.5) probability of landing on "heads" and a 50% (0.5) probability of landing on "tails," when we want to calculate the probability of getting "heads" in consecutive flips, we multiply the individual probabilities:
The probability of getting "heads" in one flip is 0.5, in two consecutive flips is (0.5 \times 0.5 = 0.25), in 100 consecutive flips is (0.5^100), which is an extremely small number.
So, it is not like you think that the probability is always the same. This is known as compound probability, which refers to the probability of multiple independent events occurring in sequence.
In one sequence of 100 coin flips, landing heads 100 times in a row has the exact same probability as landing 100 tails in a row, and the exact same probability as landing the sequence: HHTHHTTTHHTTHTH.... THTT For flip #101, no matter what sequence you got up to that point (all heads, all tails, anything in between), the probability for the next flip to land heads is 50%, and the probability for it to land tails is 50%. By your logic, a 50% probability (of success) is different from a 50% probability (of failure). So what are you compounding, since the result is always the same regardless of the sequence? If you deny these laws, you can always run a simulation in Python (for lower lengths) and extrapolate.

The coin toss is always 50% as an individual value, so flip #101 is individually 50% likely, but as a whole the sequence multiplies exponentially. That's why I repeat that the nature of statistics and probabilities is essentially counterintuitive: you're right, and so am I. I'm not making anything up; there are plenty of studies on this. Look up research on compound probability on the internet. Your perception is just that you ignore the compound and see simple probabilities as your only truth.
|
|
|
|
|
bnbguru
Newbie
Offline
Activity: 5
Merit: 0
|
 |
January 28, 2025, 06:10:03 PM |
|
Is this puzzle solved? 
|
|
|
|
|
|
kTimesG
|
 |
January 28, 2025, 06:29:55 PM |
|
The coin toss is always 50% as an individual value, so flip #101 is individually 50% likely, but as a whole the sequence multiplies exponentially. That's why I repeat that the nature of statistics and probabilities is essentially counterintuitive: you're right, and so am I. I'm not making anything up; there are plenty of studies on this. Look up research on compound probability on the internet. Your perception is just that you ignore the compound and see simple probabilities as your only truth.
So to be clear, are you saying there are higher chances to get a "tail" for flip #101, if you had a sequence of 100 heads? Assuming the coin is a fair coin (50% individual flip chances).
|
Off the grid, training pigeons to broadcast signed messages.
|
|
|
|
mcdouglasx
|
 |
January 28, 2025, 07:03:51 PM Last edit: January 28, 2025, 07:37:12 PM by mcdouglasx |
|
The coin toss is always 50% as an individual value, so flip #101 is individually 50% likely, but as a whole the sequence multiplies exponentially. That's why I repeat that the nature of statistics and probabilities is essentially counterintuitive: you're right, and so am I. I'm not making anything up; there are plenty of studies on this. Look up research on compound probability on the internet. Your perception is just that you ignore the compound and see simple probabilities as your only truth.
So to be clear, are you saying there are higher chances to get a "tail" for flip #101, if you had a sequence of 100 heads? Assuming the coin is a fair coin (50% individual flip chances). I told you that we are both right: the individual probabilities will be 50%, and the probability of getting 101 heads in a row is very small. Is a bet that requires 2 consecutive "heads" tosses just as likely to win as one that requires 1? No, right? Winning the first toss is just as likely as winning the second toss individually, but you are less likely to win both in a row, because there are 4 possible scenarios: you win the first and the second; you win the first and lose the second; you lose the first and win the second; you lose both. So you want to argue that compound probabilities don't exist, and ignore the scientific data in your favor, just to be right?

Update: That is, your argument assumes you have already guessed the previous hundred heads, and it skips over the probability that such a feat entails. It is a long way of saying that a coin toss has a 50/50 probability. In terms of prefixes it is similar, but with more variables added; that is why it is easier to guess 1 prefix than 10. That is why you can estimate after approximately how many steps a prefix becomes more probable. It is not exact, but the probabilities favor it. If I tell you that I found a 15-character prefix at private key 1 and another at 10000000000, would you bet that within that range there is one more such prefix? No, of course not, because it is improbable, but at the same time possible.
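The four scenarios listed above can be enumerated directly (a trivial sketch, fair coin assumed):

```python
from itertools import product

# All equally likely outcomes of two fair coin flips:
outcomes = list(product('HT', repeat=2))
print(outcomes)  # 4 scenarios: HH, HT, TH, TT

# Compound probability of winning both tosses:
print(sum(o == ('H', 'H') for o in outcomes) / len(outcomes))  # 0.25

# Yet, given the first flip was heads, the second is still 50/50:
after_h = [o for o in outcomes if o[0] == 'H']
print(sum(o[1] == 'H' for o in after_h) / len(after_h))  # 0.5
```

Both statements hold at once, which is the sense in which each side of this exchange is right about a different question.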
|
|
|
|
|
Kelvin555
Jr. Member
Offline
Activity: 63
Merit: 1
|
 |
January 28, 2025, 07:34:25 PM |
|
The probability of whatever pair of two keys you want to pick having identical / semi-identical / totally different hashes (pick whatever characteristic and whatever degree you like) is always the same, no matter how many times you repeat the test. The chances do not change just because you are doing the extractions (hashing). Just because you flipped a coin heads 100 times in a row does not mean there are greater chances of it flipping tails on the 101st flip. At most, you can suspect the coin is rigged, but you can never be sure, unless you repeat the 100 flips many, many times, and each 100-flip sequence ends up the same. That is a pattern, and if you can't get a pattern, you definitely can't get statistics to predict the future.
You are talking about each coin flip having a 50% probability of landing on "heads" and a 50% probability of landing on "tails," with no relation to previous results. However, in the scenario you propose, there is a connection. Although each coin flip has a 50% (0.5) probability of landing on "heads" and a 50% (0.5) probability of landing on "tails," when we want to calculate the probability of getting "heads" in consecutive flips, we multiply the individual probabilities: the probability of getting "heads" in one flip is 0.5, in two consecutive flips it is 0.5 × 0.5 = 0.25, and in 100 consecutive flips it is 0.5^100, an extremely small number. So it is not, as you think, that the probability is always the same. This is known as compound probability, which refers to the probability of multiple independent events occurring in sequence.

Yes, there is compound probability, but in context he meant just the probability of the individual 101st flip, nothing more. He didn't say the 101st and 102nd flips consecutively; if he had referred to two or more consecutive flips, that's when we would have compound probability.
|
|
|
|
|
|
mcdouglasx
|
 |
January 28, 2025, 07:47:18 PM |
|
Yes, there is compound probability, but in context he meant just the probability of the individual 101st flip, nothing more. He didn't say the 101st and 102nd flips consecutively; if he had referred to two or more consecutive flips, that's when we would have compound probability.
The context is the puzzle, and the probability of averaging how many steps apart a prefix repeats is a matter of compound probability, because you have to guess a certain number of prefixes, and the statistics change depending on the size of the prefix. In one sequence of 100 coin flips, landing heads 100 times in a row has the exact same probability as landing 100 tails in a row, and the exact same probability as landing the sequence:
HHTHHTTTHHTTHTH.... THTT
Do you see this in red as logical?
|
|
|
|
|
Kelvin555
Jr. Member
Offline
Activity: 63
Merit: 1
|
 |
January 28, 2025, 08:13:50 PM |
|
The context is the puzzle, and the probability of averaging how many steps apart a prefix repeats is a matter of compound probability, because you have to guess a certain number of prefixes, and the statistics change depending on the size of the prefix. In one sequence of 100 coin flips, landing heads 100 times in a row has the exact same probability as landing 100 tails in a row, and the exact same probability as landing the sequence:
HHTHHTTTHHTTHTH.... THTT
Do you see this in red as logical? Yes, it's logical: the same probability for any sequence of heads and tails with a fair coin. For the prefix, what do you mean by size: the number of characters, or bits?
|
|
|
|
|
|
mcdouglasx
|
 |
January 28, 2025, 08:48:49 PM |
|
Yes it's logical, same probability to find any sequence of heads and tails in a fair coin.
For the prefix, what do you mean by size, number of characters or bits ?
Exactly, because it is synonymous with saying that it is equally likely for heads or tails to come up in a single toss. He says it like that just to argue, but it makes no sense as an argument for debating the probability between prefixes, because that requires a data set. Therefore, there is a compound probability, because it is unlikely that one prefix sits next to another identical one.
|
|
|
|
|
Kelvin555
Jr. Member
Offline
Activity: 63
Merit: 1
|
 |
January 28, 2025, 09:41:27 PM |
|
Exactly, because it is synonymous with saying that it is equally likely for heads or tails to come up in a single toss. He says it like that just to argue, but it makes no sense as an argument for debating the probability between prefixes, because that requires a data set. Therefore, there is a compound probability, because it is unlikely that one prefix sits next to another identical one.
Yes, well, it's true. Before I continue I need to say this: I don't in any way support the users trying to impose rules on how the keys are to be found or how to participate in the challenge. I am more of the libertarian view that anyone is allowed to participate or search in any way he/she deems fit. As long as it's your time, money, and hardware, you are allowed to test your idea extensively, regardless of how foolish the idea sounds to others. If you think someone is a crackpot, you can ignore the user and move on with your life.
|
|
|
|
|
bibilgin
Newbie
Offline
Activity: 262
Merit: 0
|
 |
January 28, 2025, 09:48:08 PM |
|
Who on here can be a trusted agent to hold his .2 BTC in escrow, and then send it to me, once I show the private key? Anyone?
Frankly, I CANNOT TRUST anyone on the forum. It's just hard to believe you. If you had said something like that the first day we talked, you would have been believable. But writing it now just seems like a post made to show me that I was wrong.

Exactly, because it is synonymous with saying that it is equally likely for heads or tails to come up in a single toss. He says it like that just to argue, but it makes no sense as an argument for debating the probability between prefixes, because that requires a data set. Therefore, there is a compound probability, because it is unlikely that one prefix sits next to another identical one.
@mcdouglasx Dude, don't stress yourself out so much. You can never change the minds of people who have fixed ideas. If we write it briefly, maybe it will be understandable: the decimal difference between 2 different wallets with the prefix 1BY8GQbnue is PROBABLY lower than the decimal difference between 2 different wallets with the longer prefix 1BY8GQbnueY. If 1BY8GQbnueaxxXX - 1BY8GQbnuebxxXX = 1000, then 1BY8GQbnueYaxXX - 1BY8GQbnueYbxXX = 2000 is the more PROBABLE decimal difference, for example. I'm not saying my scanning system is 100% accurate. But I do jump scans with a certain number combination.
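The "probable gap" idea can be tested on a toy model: uniform random 160-bit values standing in for hash160 outputs (the hex prefixes and the sample count below are arbitrary choices, not taken from the puzzle itself):

```python
import random

# Toy model: uniform random "hash160" values; measure the gaps between
# successive values that start with a given hex prefix. The prefixes
# and the sample count are arbitrary. A longer prefix widens the
# AVERAGE gap (roughly 16x per extra hex character here), but the
# individual gaps still swing wildly, which is exactly why a fixed
# step size can overshoot a match.
random.seed(7)

def prefix_gaps(prefix_hex, samples=300_000):
    want, bits = int(prefix_hex, 16), 4 * len(prefix_hex)
    gaps, last = [], None
    for i in range(samples):
        if random.getrandbits(160) >> (160 - bits) == want:
            if last is not None:
                gaps.append(i - last)
            last = i
    return gaps

for p in ('7', '73'):
    g = prefix_gaps(p)
    print(p, 'avg gap', sum(g) / len(g), 'min', min(g), 'max', max(g))
```

The min/max spread in the output is the point of contention in this thread: the average gap is predictable in aggregate, but it says little about where the next individual match lands.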
|
|
|
|
|
|
kTimesG
|
 |
January 28, 2025, 10:20:17 PM Last edit: January 28, 2025, 10:32:49 PM by kTimesG |
|
Yes it's logical, same probability to find any sequence of heads and tails in a fair coin.
For the prefix, what do you mean by size, number of characters or bits ?
Exactly, because it is synonymous with saying that it is equally likely for heads or tails to come up in a single toss. He says it like that just to argue, but it makes no sense as an argument for debating the probability between prefixes, because that requires a data set. Therefore, there is a compound probability, because it is unlikely that one prefix sits next to another identical one.

Exactly, it requires a data set; that data set is, for example, the entire 66-bit range of Puzzle 67, end-to-end, hashed and analyzed. Just as 100 coin flips will most likely never have any "average distances" no matter how you split them, encode them, or read them diagonally, any 66-bit set of hashes will never be the same as other 66-bit sets; it will have its own particularities that make it impossible to predict, as a whole or in parts.

Without that data set, we are working with the "fair coin" concept, where every next bit of every next hash has an equal 50% chance of appearing. This implies there cannot be a pattern, and assuming hashes are more or less "averaged", or in whatever other relation you can think of (such as a common prefix), only works if the data set has infinite size and the standard deviation approaches the real deviation.

So, why is it so unlikely in your mind that two consecutive keys have the same hash? Because I can't think of any reason. The probability for the "next hash", or "any other hash", to be identical to the "current hash" is exactly the same as the probability of the second hash being any other hash, or the same hash. This is why I don't believe it is right to say "there are less chances for the next key to have the same hash" - because nothing changed.
|
Off the grid, training pigeons to broadcast signed messages.
|
|
|
|
mcdouglasx
|
 |
January 28, 2025, 10:55:06 PM |
|
So, why is it so unlikely in your mind that two consecutive keys have the same hash?
Because compound probability comes into play when trying to guess two identical hashes, just like trying to flip heads twice, but with exponentially greater difficulty.
|
|
|
|
|
|
kTimesG
|
 |
January 28, 2025, 11:15:50 PM |
|
So, why is it so unlikely in your mind that two consecutive keys have the same hash?
Because compound probability comes into play when trying to guess two identical hashes, just like trying to flip heads twice, but with exponentially greater difficulty.

So to be clear, are you saying that if you "flip" (hash) a "coin" (ripemd160(sha256(pubKey))) once to a "head" (the target hash of Puzzle 67), then the next "flip" has less chance of being a "head" as well? Assuming the "coin" (RIPEMD-160) is a "fair coin" (any bit of any hash has an equal 50% chance)?

Also, why (or what entity determines this) is it important to compare two consecutive "flips", and not every other flip, or maybe a rule of using only every 38757th flip?

Events do not have memory; they do not care about order, history, or results. That is why they are called independent events. I don't think it matters whether the identical hashes are next to each other or separated by a prime number, or whether the bits of the hashed messages are XORed with "0xdeadcafe" or negated relative to each other. Sets are sets; the positions you use to create them, or their order, do not matter an inch longer than what a brain perceives.
|
Off the grid, training pigeons to broadcast signed messages.
|
|
|
bibilgin
Newbie
Offline
Activity: 262
Merit: 0
|
 |
January 28, 2025, 11:26:30 PM |
|
So to be clear, are you saying that if you "flip" (hash) a "coin" (ripemd160(sha256(pubKey))) once to a "head" (the target hash of Puzzle 67), then the next "flip" has less chance of being a "head" as well? Assuming the "coin" (RIPEMD-160) is a "fair coin" (any bit of any hash has an equal 50% chance)?

Also, why (or what entity determines this) is it important to compare two consecutive "flips", and not every other flip, or maybe a rule of using only every 38757th flip?

Events do not have memory; they do not care about order, history, or results. That is why they are called independent events. I don't think it matters whether the identical hashes are next to each other or separated by a prime number, or whether the bits of the hashed messages are XORed with "0xdeadcafe" or negated relative to each other. Sets are sets; the positions you use to create them, or their order, do not matter an inch longer than what a brain perceives.
Let's be more clear. Think of a die with numbers from 1 to 6, and imagine rolling it in software. After the first throw, the number 3 comes up; the probability of the number 3 coming up again is low, even if the software resets itself (applies the hashing process) on the second throw. There are many reasons for this, as you know. For example, the concept of time is the biggest factor.
|
|
|
|
|
|