Author Topic: Re: Bitcoin puzzle transaction ~32 BTC prize to who solves it  (Read 41 times)
nordwork313 (OP)
Newbie
Activity: 2
Merit: 0
May 08, 2026, 09:53:18 AM
 #1

Eleven years have passed and people are still trying to solve Puzzle 71, never mind Puzzle 72.

Most people are approaching the problem as pure brute force.

I don't think the problem is only computational power.

The problem is structure.

Suppose you have 256 cubes with different colors.

You only need 9 correct cubes and they must also be in the correct order.

Testing every possible arrangement directly is impractical.

The logical move is to reduce the active search domain.

Take the 256 cubes and split them into smaller groups.

For example, groups of 50.

Then generate many different grouped combinations and test each group independently.

The same idea can be applied to keyspace search.

Instead of operating on the full space at once, operate on smaller structured subsets.

If one subset fails, try another grouping.

If none work, reshuffle the groups and repeat.

Eventually one grouping may contain all required bytes together inside a smaller searchable region.
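
For concreteness, here is a minimal Python sketch of that regroup-and-test loop. The function search_group is a hypothetical placeholder for whatever search actually runs inside one group; nothing here is a working solver.

Code:
# Sketch of the regroup-and-test loop: shuffle the 256 "cubes", cut them into
# groups of about 50, hand each group to a search routine, reshuffle on failure.
import random

DICTIONARY = list(range(256))   # the 256 cubes / byte values
GROUP_SIZE = 50

def search_group(group):
    # Hypothetical placeholder: run the (GPU) search restricted to this group.
    # Returns a key on success, None otherwise. Not implemented here.
    return None

def regroup_and_search(max_regroupings):
    for _ in range(max_regroupings):
        random.shuffle(DICTIONARY)                        # one fresh regrouping
        groups = [DICTIONARY[i:i + GROUP_SIZE]            # last group is smaller than 50
                  for i in range(0, len(DICTIONARY), GROUP_SIZE)]
        for group in groups:                              # test each group independently
            key = search_group(group)
            if key is not None:
                return key
    return None                                           # no grouping worked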

At that point GPU search becomes much more practical because the effective entropy is reduced.

The objective is not breaking cryptography.

The objective is reorganizing the search topology.

The probability increases with each regrouping attempt:

[
P_n = 1-(1-p)^n
]

where p is the probability that the required bytes exist inside one subset and n is the number of regrouping attempts.

The bottleneck may not be hash speed itself.

The bottleneck may be how the search space is organized.
ldabasim
Newbie
Activity: 18
Merit: 0
May 08, 2026, 01:30:12 PM
 #2

Can you give an example? Let's say we're searching for a number between 10 and 99: how can we do it in fewer than 89 tries without using Grover's algorithm? The fact that we don't need a double-digit number but two 0-9 digits in the right order doesn't help. Or do you mean that jumping between different ranges when we have multiple puzzles will somehow help?
flapduck
Full Member
Activity: 142
Merit: 167
May 08, 2026, 01:59:17 PM
 #3

I think you're giving "structure" more credit than it has here. Splitting the space into cute little boxes does not reduce entropy unless the boxes are built from information correlated with the actual key. Otherwise it's just brute force wearing a nicer hat.

There are no "9 correct cubes" you can isolate independently. A private key is one scalar, and the address check gives you basically one bit of useful feedback: match or no match. You don't get to learn that byte 3 was good, byte 7 was bad, or that some subgroup is warmer than another. secp256k1 plus HASH160 has no mercy for partial guesses. Avalanche does its job, annoyingly well.

Chunking the search space is still useful, but only for engineering reasons: distributing work, avoiding overlap, checkpointing, GPU scheduling, keeping logs sane, etc. But reshuffling failed chunks and trying new groupings does not make the key more likely to fall into your net. If you searched X candidates, your probability is X divided by the remaining valid interval. The topology does not add fairy dust.

The only real reductions come from actual leakage or constraints: bad RNG, known generator pattern, revealed public key where a different attack becomes relevant, or some mistake in how the puzzle was made. Without that, Puzzle 71 is just a very large haystack where every straw looks cryptographically identical until one of them bites.

flapduck reporting for duty
nordwork313 (OP)
Newbie
Activity: 2
Merit: 0
May 08, 2026, 04:21:35 PM
 #4

I want to clarify the idea more mathematically.

I am not saying that grouping 256 bytes into groups of 50 magically breaks secp256k1 or HASH160.

I agree that if the private key is uniformly random in the full interval, then no grouping method gives an advantage.

For a uniform key space of size N, after testing X candidates, the probability is simply:

[
P_{\text{full}}=\frac{X}{N}
]

That is true.

But my point is about early-hit probability under a structured prior.

If the key was not generated uniformly, but instead has some dictionary-like or byte-structured origin, then the search problem is different.

In that case, we should not search the full interval in blind order first. We should search smaller structured subsets first.

Let:

[
|D| = 256
]

be the size of the byte dictionary D.

Let:

[
g = 50
]

be the size of one active group.

Let:

[
r
]

be the number of required bytes that must be present together in one group before the internal search becomes useful.

The probability that one random group of 50 bytes contains all r required bytes from a dictionary of 256 bytes is:

[
p=\frac{\binom{50}{r}}{\binom{256}{r}}
]

For example, if the key depends on 9 required bytes, then:

[
p=\frac{\binom{50}{9}}{\binom{256}{9}}
]

Numerically:

[
p \approx 2.22 \times 10^{-7}
]
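
As a sanity check, the same number drops out of Python's math.comb (a quick sketch, nothing puzzle-specific):

Code:
# Exact evaluation of p = C(50, r) / C(256, r) for r = 9 required bytes.
from math import comb

group_size, dict_size, r = 50, 256, 9
p = comb(group_size, r) / comb(dict_size, r)
print(f"{p:.3e}")   # prints 2.219e-07, i.e. about 2.22e-7 as above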

So one group is not enough.

But the idea is not to test only one group.

The idea is to keep generating different regroupings and testing each group independently.

After n independent regrouping attempts, the probability that at least one group contains all required bytes is:

[
P_n = 1-(1-p)^n
]

This is the important part.

The chance grows with every regrouping.

For r=9:

[
p \approx 2.22 \times 10^{-7}
]

If we test 256 independent groups:

[
P_{256}=1-(1-p)^{256}
]

[
P_{256}\approx 0.0000568
]

That is about:

[
0.00568\%
]

or roughly:

[
1 \text{ chance in } 17,600
]

That is not a guarantee.

But it is a measurable early-hit probability.

If we test one million different groups:

[
P_{1,000,000}=1-(1-p)^{1,000,000}
]

[
P_{1,000,000}\approx 0.199
]

That is about:

[
19.9\%
]

If we test five million different groups:

[
P_{5,000,000}=1-(1-p)^{5,000,000}
]

[
P_{5,000,000}\approx 0.670
]

That is about:

[
67\%
]

If we test ten million different groups:

[
P_{10,000,000}=1-(1-p)^{10,000,000}
]

[
P_{10,000,000}\approx 0.891
]

That is about:

[
89.1\%
]
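
All four figures above can be reproduced from the two formulas in a few lines (a sketch; the values agree up to rounding):

Code:
# Reproduce P_n = 1 - (1 - p)^n for the regrouping counts quoted above.
from math import comb

p = comb(50, 9) / comb(256, 9)              # ≈ 2.22e-7
for n in (256, 1_000_000, 5_000_000, 10_000_000):
    P_n = 1 - (1 - p) ** n
    print(f"n = {n:>10,}   P_n ≈ {P_n:.3g}")
# n =        256   P_n ≈ 5.68e-05   (about 0.00568%)
# n =  1,000,000   P_n ≈ 0.199
# n =  5,000,000   P_n ≈ 0.67
# n = 10,000,000   P_n ≈ 0.891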

So the method is not based on one fixed group.

It is based on repeated regrouping of the dictionary.

The goal is to eventually create a group of 50 bytes that contains all required bytes.

Once this happens, the internal active search space becomes much smaller than the full key interval.

This is where the early-hit advantage comes from.

The full brute-force range can still be searched separately.

But searching the full range blindly gives:

[
P_{\text{full}}=\frac{X}{N}
]

where N is extremely large.

In the structured method, if a successful group G contains the required bytes, then the search is temporarily concentrated inside a much smaller space:

[
|S_G| \ll N
]

The probability model becomes:

[
P_{\text{structured}} = P(k \in S_G)\cdot \frac{X}{|S_G|}
]

The structured search is better than full blind search when:

[
P(k \in S_G)\cdot \frac{X}{|S_G|} > \frac{X}{N}
]

After cancelling X:

[
P(k \in S_G) > \frac{|S_G|}{N}
]

This is the mathematical condition.

If the subset has more probability mass than its size would have under a uniform distribution, then searching it first is rational.
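
That condition is easy to state as code. The 1% prior below is a purely hypothetical placeholder for "how strongly we believe the key has this dictionary structure"; the subset size assumes, only for illustration, the internal space of a 50-byte group with 9 byte positions, and N is the Puzzle 71 interval.

Code:
# Structured search beats blind search exactly when the prior mass assigned to
# the subset exceeds the subset's relative size:  P(k in S_G) > |S_G| / N.
def structured_is_better(prior_mass, subset_size, full_size):
    return prior_mass > subset_size / full_size

N   = 2 ** 70                    # size of the Puzzle 71 interval [2^70, 2^71)
S_G = 50 ** 9                    # illustrative internal space: 9 positions, 50 byte values each
print(f"{S_G / N:.1e}")          # ≈ 1.7e-06: the subset is a tiny slice of N
print(structured_is_better(prior_mass=0.01, subset_size=S_G, full_size=N))
# True: even a 1% prior (an assumed number) far exceeds the subset's relative size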

That is exactly the idea.

The method does not claim that every group is better.

It claims that if the real key has dictionary structure, then some groups have much higher value than random full-range scanning.

The full range treats every key as equally likely:

[
P(k)=\frac{1}{N}
]

The dictionary method assumes that the key may come from a smaller generation process:

[
k = f(b_1,b_2,\ldots,b_r)
]

where:

[
b_i \in D
]

If that assumption is correct, then the effective entropy is not the entropy of the full interval.

It is closer to:

[
H_{\text{eff}} = \log_2 |S_G|
]

instead of:

[
H_{\text{full}} = \log_2 N
]
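
With the same illustrative sizes as before (|S_G| = 50^9, N = 2^70 for Puzzle 71), the two entropies compare like this:

Code:
# Effective entropy under the dictionary assumption vs. the full interval.
from math import log2

H_full = log2(2 ** 70)    # 70.0 bits for the Puzzle 71 interval
H_eff  = log2(50 ** 9)    # ≈ 50.8 bits if the key really is 9 positions over 50 byte values
print(H_full, H_eff)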

That is why this method can have a better chance of an early hit.

Not because it defeats cryptography.

Not because HASH160 gives partial feedback.

Not because a wrong key tells us which byte is correct.

It gives no such feedback.

The advantage exists only in the search ordering.

The full range is blind.

The dictionary-group method gives priority to subsets that may carry higher prior probability if the key was generated with structure.

For a simple analogy, suppose we search for a number from 10 to 99.

If the number is uniform random, there is no better method than testing candidates.

But if we know the number was created from a pattern, for example both digits drawn from a set of only three values, then testing that smaller set first gives a higher chance of an early hit.

Full range:

[
90 \text{ candidates}
]

Structured subset:

[
9 \text{ candidates}
]

If the real number is likely to be inside those 9 candidates, searching those first is better.
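
A tiny simulation of that analogy (assuming, purely for illustration, that both digits come from the set {1, 2, 3}, which is where the 9 candidates above come from):

Code:
# Toy analogy: the target is built from digits in {1, 2, 3}, so it sits inside a
# 9-candidate subset of the 90-candidate range 10..99. Searching the subset
# first finds it within 9 tries; a blind scan of the full range averages ~45.
import random

digits = [1, 2, 3]
target = random.choice(digits) * 10 + random.choice(digits)

subset_first = [d1 * 10 + d2 for d1 in digits for d2 in digits]   # 9 candidates
blind_order  = list(range(10, 100))                               # 90 candidates
random.shuffle(blind_order)

print("structured tries:", subset_first.index(target) + 1)   # at most 9
print("blind tries:     ", blind_order.index(target) + 1)    # anywhere from 1 to 90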

The same logic applies here.

The question is not:

“Does grouping reduce entropy for a uniform key?”

No, it does not.

The real question is:

“Does the puzzle key have any non-uniform generation structure?”

If yes, then grouping and regrouping the byte dictionary can give a better early-hit probability than blind full-range scanning.

If no, then the method becomes ordinary brute force.

So my claim is limited and mathematical:

1. 256 bytes can be regrouped into many 50-byte active subsets.

2. A single group succeeds if it contains all required bytes.

3. The probability for one group is:

[
p=\frac{\binom{50}{r}}{\binom{256}{r}}
]

4. The probability after n regroupings is:

[
P_n=1-(1-p)^n
]

5. This probability increases with n.

6. Once a good group exists, the GPU search inside that group covers a much smaller space than a blind full-range search.

7. Therefore, if the key has dictionary structure, this method has a higher chance of an early hit.

8. If the key is perfectly uniform, then there is no advantage.

That is the exact mathematical distinction.