Bitcoin Forum
January 11, 2026, 07:57:01 AM *
 
Pages: « 1 2 3 4 [5] 6 »  All
Author Topic: BITCOIN PUZZLE: THE PREFIX DILEMMA - A follow up  (Read 1272 times)
This is a self-moderated topic. If you do not want to be moderated by the person who started this topic, create a new topic.
mcdouglasx
Hero Member
*****
Offline Offline

Activity: 882
Merit: 504



View Profile WWW
January 05, 2026, 10:38:54 PM
 #81

snip

Using @nomachine's code with:

PREFIX_LENGTH = 3
simulations = 20000



Code:
Methodology          | Found    | Total Checks    | ROI (Success/Mh)
--------------------------------------------------------------------
Linear 77%           | 15394    | 94,565,205      |        162.7871
Prefix Jump          | 16354    | 95,821,569      |        170.6714


PREFIX_LENGTH = 4
simulations = 20000

Code:
Methodology          | Found    | Total Checks    | ROI (Success/Mh)
--------------------------------------------------------------------
Linear 77%           | 15406    | 94,340,070      |        163.3028
Prefix Jump          | 19729    | 99,612,895      |        198.0567

@kTimesG You can keep appealing to your elementary school teachers, but I suggest you ask a high school teacher to explain what a statistical trend means on a sample of 20,000 iterations.

I've upped the ante: 20,000 simulations with PREFIX_LENGTH = 4.

The results are devastating for your theory:

Linear 77%: ROI of 163.3

Prefix Jump: ROI of 198.0

With the same infrastructure and time, the "smart jump" found 4,323 extra keys. In the real world of cryptographic analysis, that's not a "counting error," it's a 21% efficiency gain.

Your mistake is trying to analyze a search heuristic as if it were a closed probability equation.

The probability of each hash is uniform (no one denies that).

But the hedging strategy doesn't have to be.

By jumping, my method "cleans up" the noise and positions itself faster in fresh areas of the search space. Your "common sense" says it's impossible; my 20,000 tests prove it's inevitable.

Keep the urn theory; those of us who seek results keep the rake.

Regarding your 23% fallacy, how do you intend to calculate ROI if you don't count all the scanned keys? Doesn't that effort count in the search, or do we omit the electricity and time so it fits with your statistics?
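
For clarity, the ROI column is simply keys found per million checks - a minimal sketch of the arithmetic (plain Python, using the PREFIX_LENGTH = 4 numbers above):

Code:
# ROI as used in this thread: keys found per million checked keys
def roi(found, total_checks):
    return found / total_checks * 1_000_000

print(roi(15_406, 94_340_070))   # ~163.30 (Linear 77%)
print(roi(19_729, 99_612_895))   # ~198.06 (Prefix Jump)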

kTimesG
Full Member
***
Offline Offline

Activity: 714
Merit: 221


View Profile
January 06, 2026, 12:37:23 AM
 #82

Regarding your 23% fallacy, how do you intend to calculate ROI if you don't count all the scanned keys? Doesn't that effort count in the search, or do we omit the electricity and time so it fits with your statistics?

It's not omitted: the work for that 23% is already accounted for, because the success rate is 77%.

That's what normal people call common sense, and high school teachers call correctly computing averages. You call it inevitable strategic hedging that breaks the laws of a uniform distribution running over a set so small that it's indistinguishable from a list of distinct numbers (which was the entire point of my code - no need for anything remotely close to crypto to demonstrate such a fundamental concept).
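
Here's a minimal sketch of that point (toy code, not my earlier script): scan any fixed fraction of a shuffled list of distinct numbers, in whatever order you like, and the hit rate converges to exactly that fraction - nothing more, nothing less.

Code:
import random

N, TRIALS, FRACTION = 10_000, 2_000, 0.77
hits = 0
for _ in range(TRIALS):
    keys = list(range(N))
    random.shuffle(keys)                 # uniform placement, like the hashes
    target = random.randrange(N)         # the single number we want
    scanned = keys[: int(N * FRACTION)]  # any 77% of the list, order irrelevant
    hits += target in scanned
print(hits / TRIALS)                     # converges to ~0.77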

Off the grid, training pigeons to broadcast signed messages.
mcdouglasx
Hero Member
*****
Offline Offline

Activity: 882
Merit: 504



View Profile WWW
January 06, 2026, 01:51:57 AM
Last edit: January 06, 2026, 04:22:27 AM by mcdouglasx
 #83

Regarding your 23% fallacy, how do you intend to calculate ROI if you don't count all the scanned keys? Doesn't that effort count in the search, or do we omit the electricity and time so it fits with your statistics?

It's not omitted: the work for that 23% is already accounted for, because the success rate is 77%.

That's what normal people call common sense, and high school teachers call correctly computing averages. You call it inevitable strategic hedging that breaks the laws of a uniform distribution running over a set so small that it's indistinguishable from a list of distinct numbers (which was the entire point of my code - no need for anything remotely close to crypto to demonstrate such a fundamental concept).

@kTimesG, let me explain it to you in simple terms (with coins and sand) to see if it helps your 'common sense' kick in:

Imagine there are coins buried evenly on a beach. Your strategy is to dig very deep every meter, but because you get tired, you only manage to check 77% of the beach. If the gold coin is in the 23% you didn't walk, you've lost.

My strategy is different: I dig until I find any coin. As soon as I find one, I assume the probability of there being another one directly below it is low, so I move on to the next meter. Thanks to these jumps, I cover the entire beach.

Have I changed the distribution of the coins? No. Have I broken the laws of probability? No, I haven't either. I've simply decided not to waste time digging an endless hole in one spot when I can find the gold coin by walking further with the same effort.

That's why my ROI is 198 and yours is 163. I found the coins in the last stretch of the beach, the one you didn't even set foot on because you were busy digging where there was nothing left.

I think this will help you understand the difference between probability of success and coverage efficiency. In each hole (block), I'll have a probability of finding the gold coin first, while you'll leave clusters of unexcavated land because you're tired.

Since you like abstract probability, I'll give you a slap of intelligence.




If you can't understand this analogy now and apply it to my search, it's because you're playing dumb so you don't lose the debate.


GinnyBanzz
Jr. Member
*
Offline Offline

Activity: 162
Merit: 5


View Profile
January 06, 2026, 09:45:45 AM
 #84

Regarding your 23% fallacy, how do you intend to calculate ROI if you don't count all the scanned keys? Doesn't that effort count in the search, or do we omit the electricity and time so it fits with your statistics?

It's not omitted: the work for that 23% is already accounted for, because the success rate is 77%.

That's what normal people call common sense, and high school teachers call correctly computing averages. You call it inevitable strategic hedging that breaks the laws of a uniform distribution running over a set so small that it's indistinguishable from a list of distinct numbers (which was the entire point of my code - no need for anything remotely close to crypto to demonstrate such a fundamental concept).

@kTimesG, let me explain it to you in simple terms (or coins and sand) to see if it helps your 'common sense' kick in:

Imagine there are coins buried evenly on a beach. Your strategy is to dig very deep every meter, but because you get tired, you only manage to check 77% of the beach. If the gold coin is in the 23% you didn't walk, you've lost.

My strategy is different: I dig until I find any coin. As soon as I find one, I assume the probability of there being another one directly below it is low, so I move on to the next meter. Thanks to these jumps, I cover the entire beach.



How does this relate to puzzle 71, though? The analogy surely doesn't work. There is just a single key in the entire space that you want, so how does the "dig until I find any coin" approach make any sense in this context?
kTimesG
Full Member
***
Offline Offline

Activity: 714
Merit: 221


View Profile
January 06, 2026, 12:34:58 PM
Last edit: January 06, 2026, 12:59:42 PM by kTimesG
 #85

My strategy is different: I dig until I find any coin. As soon as I find one, I assume the probability of there being another one directly below it is low, so I move on to the next meter. Thanks to these jumps, I cover the entire beach.

No, you don't cover the entire beach; you cover exactly the amount you intended to cover even before you started digging. And your assumption about the probability being lower contradicts the meaning of a uniform distribution (is this the 1,000,003rd time you've heard this?).

There's only one coin. Yes, there's a chance it appears twice (because it's a 160-bit uniform distribution), but in a 70-bit space the chances of that occurring are basically zero (with very, very, VERY many zeros in the fractional part). At most, let's say there's some chance you might find two values that share at most 140 of the 160 bits.

The chances that one of those is even the value you're looking for are infinitesimally smaller still.

Conclusion: you can treat it as a set of distinct numbers, and run your nonsense prefix algorithm on a simple shuffled list, to compute whatever stats you want.

And the shuffling is simply for fun; you might as well use ordered numbers. It does not matter at all - it was just to illustrate that the "location bias" does not exist as a concept. If it had existed, the shuffling would have messed up the results, wouldn't it? Do we then move on to a location bias of the location indices?

But now at least I know what your username stands for: MagiCianD. Because the only way to ever guess a secret number more efficiently than trying them all is via magic. Or telepathy, which is a good analogy for what you're attempting to do: cheating the Universe out of what it serves back, and forcing it to provide you with what YOU want to get back. It doesn't work that way, unfortunately.

Off the grid, training pigeons to broadcast signed messages.
mcdouglasx
Hero Member
*****
Offline Offline

Activity: 882
Merit: 504



View Profile WWW
January 06, 2026, 02:21:34 PM
Last edit: January 06, 2026, 03:02:28 PM by mcdouglasx
 #86

How does this relate to puzzle 71, though? The analogy surely doesn't work. There is just a single key in the entire space that you want, so how does the "dig until I find any coin" approach make any sense in this context?

This is how algorithm efficiency is tested in engineering. Yes, the key is unique in puzzle 71, but by running tests in small ranges we learn how efficient an algorithm is for the real search; the one that finds more keys with the same effort has, on paper, a better chance of finding the real target.



snip

Every time you argue semantics, ignore the data, and fail to refute why I get my ROI, or why I get more keys than you with the same effort (which obviously means that prefix searches offer more opportunities, and that it does matter how you search, since the central debate of the thread is "strategy for search engines with limited resources"), you come across as a purist trapped in infinite statistics and, at the same time, as someone without arguments, discrediting yourself on the forum as a reductionist, because you ignore what a heuristic search is and confuse probability with strategy. Your only defense is:

Uniform distribution.
Insults.
Sarcasm.
Poorly focused textbook theory.
Magic.


While these things are true (in the infinite world of statistics) - something no one here denies - you're trying to deflect from the main topic of the thread, which is more opportunity for every million hashes. Ironically, the prefix gives me more keys on average. You assume the searcher wants to cover 100% of the range, when in reality the searcher wants to find the key before covering 100%, so 4,000 extra keys is a huge advantage.

And of course, you're ignoring the main points:

Local Density
Opportunity Cost
Optimization Engineering
Strategic Coverage


In short: you're solving a textbook probability problem, while I'm solving a search engineering problem (which is the topic of the thread, by the way).

It's curious that you mention the uniform distribution for the millionth time. No one here denies that the probability of each hash is the same; what you're denying is that there are smarter ways to spend energy chasing that probability.

By ignoring the data from the 40,000 simulations (where the prefix method found 4,323 more keys with the same effort), you've stopped debating and started preaching.

kTimesG
Full Member
***
Offline Offline

Activity: 714
Merit: 221


View Profile
January 06, 2026, 03:13:31 PM
Last edit: January 06, 2026, 04:11:54 PM by kTimesG
 #87

My strategy is different: I dig until I find any coin. As soon as I find one, I assume the probability of there being another one directly below it is low, so I move on to the next meter. Thanks to these jumps, I cover the entire beach.

Are you OK?

It's curious that you mention the uniform distribution for the millionth time. No one here denies that the probability of each hash is the same; what you're denying is that there are smarter ways to spend energy chasing that probability.

By ignoring the data from the 40,000 simulations (where the prefix method found 4,323 more keys with the same effort), you've stopped debating and started preaching.

What YOU are ignoring is that when one decomposes your ROI magic formula, it's simply a function that depends solely on the success rate. And the success rate depends on the number of checked keys (which is, on average, half of the maximum number of keys to be checked).


We are going around in circles (literally), and there is no end in sight.

It would be embarrassing to have to actually apply your stupid nonsense to a list of numbers, and pretend that because of this we can magically find stuff faster than expected (because this will be the obvious conclusion). If you really want to go down that path, it would mean we all have to join the land of absurdities where proving a contradiction somehow makes it legitimate!

Off the grid, training pigeons to broadcast signed messages.
mcdouglasx
Hero Member
*****
Offline Offline

Activity: 882
Merit: 504



View Profile WWW
January 07, 2026, 04:24:39 PM
Last edit: January 07, 2026, 04:49:35 PM by mcdouglasx
 #88

It's fascinating to see how theory yields to data. @kTimesG hides behind the N/2 rule to avoid explaining why my ROI is 21% higher, while @allianzatech admits my method is 'faster' but tries to downplay it by saying it doesn't change quantum probability. For a real searcher, 'being faster' is the only advantage that matters, since we don't have infinite time and energy for 71 bits. If my method finds 4,323 more keys using roughly the same resources, it means I have 21% more lottery tickets for the same price.

the expected probability of finding a specific private key remains unchanged under standard cryptographic assumptions (random oracle model).

It's like telling someone who invented an engine 21% more efficient that "the total energy of the universe remains the same." True, but irrelevant to the person who has to pay the electricity bill.


We can't ignore ROI just because theory doesn't know how to explain it - not in a thread where it is the most important thing, because it gives an advantage to the searcher with few resources.


kTimesG
Full Member
***
Offline Offline

Activity: 714
Merit: 221


View Profile
January 07, 2026, 05:58:24 PM
Last edit: January 08, 2026, 02:30:08 PM by kTimesG
 #89

It's fascinating to see how theory yields to data. @kTimesG hides behind the N/2 rule to avoid explaining why my ROI is 21% higher

Are you blind? Or can't you read? Again: your ROI formula depends only on the success rate.

If you can't figure out why your ROI only depends on the success rate, that's really your problem, maybe you missed math in elementary school?

Your ROI = successes / total_checks

But total_checks is directly computable from the number of successes.

What does this tell you? Really, please, what does this tell you, when you see that you have computed some number directly from another number?

Here's even a better ROI for you:

Scan all keys. You'll get a 100% success rate, and an ROI of 200.0000 (yeah, with "roughly the same effort" as the nonsense champion CPU-cycle burner you are so proud of).

You are fantasizing about a better ROI with "the same effort", but in reality you simply scanned more keys, which yielded a higher success rate, and compared it to something that scanned FEWER KEYS, and obviously had a lower success rate, and hence a lower "ROI".

Here's the worst ROI:

Scan a single key. You'll get a 1/N success rate, and an ROI of 100.005.

You are using first-grade math tricks to try to convince a cryptographic community that your nonsense comparisons make any real sense. They do not: they simply show that you continue to fail to understand fundamental arithmetic.

X axis: dataset size
Blue line: success rate == max keys to check
Red line: non-sensical "ROI" (e.g. 1M * finds / total_checks)
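
A minimal numeric sketch of that dependence (toy code; it assumes the target is uniformly placed in the N = 10,000 dataset, exactly as the benchmark constructs it):

Code:
# A trial that scans at most f*N keys out of N has success rate f and an
# expected f*N*(1 - f/2) checks, so
#   ROI = 1e6 * f / (f*N*(1 - f/2)) = 1e6 / (N * (1 - f/2))
# i.e. a function of the coverage fraction f alone.
N = 10_000

def expected_roi(f):
    return 1_000_000 / (N * (1 - f / 2))

print(round(expected_roi(0.77), 1))  # 162.6 - matches the "Linear 77%" benchmarks
print(round(expected_roi(1.00), 1))  # 200.0 - the full-scan figure above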


Off the grid, training pigeons to broadcast signed messages.
nomachine
Full Member
***
Offline Offline

Activity: 798
Merit: 134



View Profile
January 08, 2026, 02:24:12 PM
Merited by mcdouglasx (5), vapourminer (1)
 #90

Upgrade Over the Original Script

Code:
import secp256k1 as ice
import random
import time
import math

# --- Configuration ---
PUZZLE = 71
LOWER_BOUND = 2 ** (PUZZLE - 1)
UPPER_BOUND = 2 ** PUZZLE - 1

TOTAL_SIZE = 10_000
BLOCK_REF = 4_096
PREFIX_LENGTH = 3
SIMULATIONS = 500
FIXED_JUMP_RATE = 0.23

TARGET_HEX = "f6f5431d25bbf7b12e8add9af5e3475c44a0a5b8".lower()

# --- Helpers ---
def generate_h160_real(priv_key):
    return ice.privatekey_to_h160(0, True, priv_key).hex()

def prefix_match_len(a: str, b: str) -> int:
    i = 0
    for x, y in zip(a, b):
        if x != y:
            break
        i += 1
    return i

def linear_scan_77(hashes, target_hash):
    limit = int(len(hashes) * 0.77)
    checks = 0
    for i in range(limit):
        checks += 1
        if hashes[i] == target_hash:
            return {"checks": checks, "found": True}
    return {"checks": limit, "found": False}

def prefix_jump_scan(hashes, block_ref, prefix_len, target_hash):
    target_prefix = target_hash[:prefix_len]
    checks = 0
    i = 0
    dataset_len = len(hashes)
    while i < dataset_len:
        checks += 1
        current_hash = hashes[i]
        if current_hash == target_hash:
            return {"checks": checks, "found": True}
        if current_hash.startswith(target_prefix):
            i += int(block_ref * FIXED_JUMP_RATE)
        else:
            i += 1
    return {"checks": checks, "found": False}

# --- Statistics ---
def log_scale_prefix(prefix_hist):
    return {k: math.log10(v) for k, v in prefix_hist.items() if v > 0}

def shannon_entropy(prefix_hist):
    total = sum(prefix_hist.values())
    entropy = 0.0
    for count in prefix_hist.values():
        p = count / total
        entropy -= p * math.log2(p)
    return entropy

def ks_test_uniform(prefix_hist):
    values = []
    total = sum(prefix_hist.values())

    for k in sorted(prefix_hist):
        values.append(prefix_hist[k] / total)

    n = len(values)
    uniform = 1 / n

    cdf_emp = 0.0
    cdf_uni = 0.0
    D = 0.0

    for v in values:
        cdf_emp += v
        cdf_uni += uniform
        D = max(D, abs(cdf_emp - cdf_uni))

    return D

# --- Experiment ---
def run_experiment():
    results = {
        "Linear 77%": {"found": 0, "total_checks": 0},
        "Prefix Jump": {"found": 0, "total_checks": 0},
        "Target Stats": {
            "max_prefix": 0,
            "prefix_hist": {}
        }
    }

    print(f"--- STARTING SIMULATION: {SIMULATIONS} ITERATIONS ---")
    start_time = time.time()

    for s in range(1, SIMULATIONS + 1):
        dataset = [random.randint(LOWER_BOUND, UPPER_BOUND) for _ in range(TOTAL_SIZE)]
        hashes = []

        for k in dataset:
            h = generate_h160_real(k)
            hashes.append(h)

            pfx = prefix_match_len(h, TARGET_HEX)
            ts = results["Target Stats"]

            ts["max_prefix"] = max(ts["max_prefix"], pfx)
            ts["prefix_hist"][pfx] = ts["prefix_hist"].get(pfx, 0) + 1

            if h == TARGET_HEX:
                print("\n" + "!" * 90)
                print("TARGET FOUND")
                print(f"Private Key : {k}")
                print(f"HASH160     : {h}")
                print("!" * 90)
                return

        target_idx = random.randint(0, TOTAL_SIZE - 1)
        target_hash = hashes[target_idx]

        res_l = linear_scan_77(hashes, target_hash)
        if res_l["found"]:
            results["Linear 77%"]["found"] += 1
        results["Linear 77%"]["total_checks"] += res_l["checks"]

        res_p = prefix_jump_scan(hashes, BLOCK_REF, PREFIX_LENGTH, target_hash)
        if res_p["found"]:
            results["Prefix Jump"]["found"] += 1
        results["Prefix Jump"]["total_checks"] += res_p["checks"]

        if s % 10 == 0:
            print(f"Progress: {s}/{SIMULATIONS}")

    total_time = time.time() - start_time

    print("\n" + "=" * 85)
    print(f"FINAL BENCHMARK | Iterations: {SIMULATIONS} | Time: {total_time:.2f}s")
    print("=" * 85)

    for name, data in results.items():
        if "total_checks" in data:
            roi = (data["found"] / data["total_checks"]) * 1_000_000
            print(f"{name:<15} Found={data['found']:<6} Checks={data['total_checks']:<12,} ROI={roi:.4f}")

    # --- Advanced Analysis ---
    ts = results["Target Stats"]

    print("\n" + "=" * 85)
    print("ADVANCED PREFIX ANALYSIS")
    print("=" * 85)

    print(f"Max prefix matched (hex chars): {ts['max_prefix']}")

    log_scaled = log_scale_prefix(ts["prefix_hist"])
    print("\nLog10 Prefix Frequencies:")
    for k in sorted(log_scaled):
        print(f"Prefix {k:2d}: log10(freq) = {log_scaled[k]:.6f}")

    entropy = shannon_entropy(ts["prefix_hist"])
    max_entropy = math.log2(len(ts["prefix_hist"]))

    print("\nShannon Entropy:")
    print(f"Observed : {entropy:.6f}")
    print(f"Maximum  : {max_entropy:.6f}")
    print(f"Ratio    : {entropy / max_entropy:.6f}")

    D = ks_test_uniform(ts["prefix_hist"])
    print("\nKolmogorov–Smirnov Test:")
    print(f"D statistic: {D:.6f}")

# --- Main ---
if __name__ == "__main__":
    run_experiment()

I want to clarify what was actually done in this modified script, because it’s easy to miss the point if you only skim the code.

The original script was a search performance benchmark. It generated random private keys, converted them to HASH160 values, and compared two search strategies (a partial linear scan and a prefix-based jump heuristic). The only thing it really measured was how many checks were required to find a randomly chosen target that was guaranteed to exist inside the dataset.

That’s an important distinction: the target was always present by construction. So success was never in question, only speed.

The modified version does not replace any of the original logic. The same heuristics, same parameters, same random generation remain intact. What was added is an instrumentation layer that observes the behavior of the hash space itself.

The biggest change is the introduction of a fixed external target hash:

Code:
f6f5431d25bbf7b12e8add9af5e3475c44a0a5b8


This target is not selected from the dataset and does not change per iteration. That alone changes the nature of the experiment. Instead of asking “how fast can we find something that must exist?”, we are now testing how randomly generated HASH160 values relate to a single, fixed point in the hash space.

For every generated hash, the script now measures how many leading hex characters match the fixed target. This produces a prefix-length distribution rather than a simple found/not-found result.

Prefix matches decay exponentially in a well-behaved hash function. Plotting frequencies on a log scale reveals whether that decay is smooth and linear (as expected) or if there are steps, plateaus, or clustering artifacts. Without log scaling, these deviations are effectively invisible.

Entropy measures how close the observed distribution is to ideal randomness. High entropy indicates a uniform, information-dense distribution. Reduced entropy indicates structure or bias.

The Kolmogorov–Smirnov test compares the empirical distribution of prefix matches against an ideal uniform distribution. This provides a single statistic that captures the cumulative deviation across the entire dataset. It's a way to formally ask: "Does this distribution behave like random noise, or not?"
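
For reference, here's what the ideal uniform case predicts for these statistics (a minimal sketch, not part of the script above, assuming each hex character of an ideal HASH160 matches the fixed target independently with probability 1/16):

Code:
import math

# Ideal model: P(exactly k leading hex chars match) = (15/16) * (1/16)**k
probs = [(15 / 16) * (1 / 16) ** k for k in range(9)]

# Log10 frequencies should drop by log10(16) ~ 1.204 per extra matched character,
# which is the step visible in the benchmark output below.
print(round(math.log10(16), 4))          # 1.2041

# Shannon entropy of the ideal prefix-length distribution, in bits.
h = -sum(p * math.log2(p) for p in probs)
print(round(h, 4))                       # ~0.3598

So a ~1.20 step between consecutive log10 frequencies and an entropy around 0.36 bits are exactly what a perfectly uniform hash should produce here.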

Quote
--- STARTING SIMULATION: 500 ITERATIONS ---
Progress: 10/500
Progress: 20/500
Progress: 30/500
Progress: 40/500
Progress: 50/500
Progress: 60/500
Progress: 70/500
Progress: 80/500
Progress: 90/500
Progress: 100/500
Progress: 110/500
Progress: 120/500
Progress: 130/500
Progress: 140/500
Progress: 150/500
Progress: 160/500
Progress: 170/500
Progress: 180/500
Progress: 190/500
Progress: 200/500
Progress: 210/500
Progress: 220/500
Progress: 230/500
Progress: 240/500
Progress: 250/500
Progress: 260/500
Progress: 270/500
Progress: 280/500
Progress: 290/500
Progress: 300/500
Progress: 310/500
Progress: 320/500
Progress: 330/500
Progress: 340/500
Progress: 350/500
Progress: 360/500
Progress: 370/500
Progress: 380/500
Progress: 390/500
Progress: 400/500
Progress: 410/500
Progress: 420/500
Progress: 430/500
Progress: 440/500
Progress: 450/500
Progress: 460/500
Progress: 470/500
Progress: 480/500
Progress: 490/500
Progress: 500/500

=====================================================================================
FINAL BENCHMARK | Iterations: 500 | Time: 75.30s
=====================================================================================
Linear 77%      Found=393    Checks=2,270,158    ROI=173.1157
Prefix Jump     Found=409    Checks=2,306,366    ROI=177.3353

=====================================================================================
ADVANCED PREFIX ANALYSIS
=====================================================================================
Max prefix matched (hex chars): 5

Log10 Prefix Frequencies:
Prefix  0: log10(freq) = 6.670978
Prefix  1: log10(freq) = 5.466074
Prefix  2: log10(freq) = 4.265926
Prefix  3: log10(freq) = 3.043755
Prefix  4: log10(freq) = 1.913814
Prefix  5: log10(freq) = 0.698970

Shannon Entropy:
Observed : 0.359533
Maximum  : 2.584963
Ratio    : 0.139086

Kolmogorov–Smirnov Test:
D statistic: 0.770912


One more important detail: if the fixed target hash is ever matched exactly, the script immediately halts and prints the corresponding private key. There’s no post-processing, no interpretation, no statistical hand-waving.


BTC: bc1qdwnxr7s08xwelpjy3cc52rrxg63xsmagv50fa8
Akito S. M. Hosana
Jr. Member
*
Offline Offline

Activity: 420
Merit: 8


View Profile
January 08, 2026, 02:41:48 PM
 #91

Wait… so you’re saying the computer is like… checking these random numbers against one magic code, and then it also counts how many letters match before it finds it? And then it… does math on those counts? I don’t get half of this, but it sounds like it’s measuring how ‘surprised’ the numbers are?  Tongue
GinnyBanzz
Jr. Member
*
Offline Offline

Activity: 162
Merit: 5


View Profile
January 08, 2026, 02:53:25 PM
Merited by vapourminer (1)
 #92

I can't tell if there are some real geniuses in this thread, or if they are just utterly deluded. My understanding is there is absolutely no shortcut here and brute force + pure luck is your only chance.
kTimesG
Full Member
***
Offline Offline

Activity: 714
Merit: 221


View Profile
January 08, 2026, 03:01:48 PM
 #93

I can't tell if there are some real geniuses in this thread, or if they are just utterly deluded. My understanding is there is absolutely no shortcut here and brute force + pure luck is your only chance.

Geniuses (if you consider people who fail at math geniuses).

Code:
def linear_scan_100(hashes, target_hash):
    checks = 0
    for h in hashes:
        checks += 1
        if h == target_hash:
            return {"checks": checks, "found": True}
    return {"checks": len(hashes), "found": False}

Code:
=====================================================================================
FINAL BENCHMARK | Iterations: 50000 | Time: 650.30s
=====================================================================================
Linear 100%     Found=50000  Checks=250,982,876  ROI=199.2168
Prefix Jump     Found=40940  Checks=240,170,621  ROI=170.4621

Off the grid, training pigeons to broadcast signed messages.
Akito S. M. Hosana
Jr. Member
*
Offline Offline

Activity: 420
Merit: 8


View Profile
January 08, 2026, 03:26:52 PM
 #94

I ran a 10,000-iteration HASH160 benchmark and asked GROK to analyze it.

Quote


=====================================================================================
FINAL BENCHMARK | Iterations: 10000 | Time: 1348.54s
=====================================================================================
Linear 77%      Found=7704   Checks=47,313,993   ROI=162.8271
Prefix Jump     Found=8165   Checks=47,864,694   ROI=170.5850

=====================================================================================
ADVANCED PREFIX ANALYSIS
=====================================================================================
Max prefix matched (hex chars): 7

Log10 Prefix Frequencies:
Prefix  0: log10(freq) = 7.971985
Prefix  1: log10(freq) = 6.767627
Prefix  2: log10(freq) = 5.563703
Prefix  3: log10(freq) = 4.360726
Prefix  4: log10(freq) = 3.148911
Prefix  5: log10(freq) = 1.986772
Prefix  6: log10(freq) = 0.778151
Prefix  7: log10(freq) = 0.301030

Shannon Entropy:
Observed : 0.359659
Maximum  : 3.000000
Ratio    : 0.119886

Kolmogorov–Smirnov Test:
D statistic: 0.812530


Linear 77%: Found 7,704 / 47,313,993 checks

Prefix Jump: Found 8,165 / 47,864,694 checks

Max prefix with target: 7 hex chars

Shannon entropy ratio: 0.12 / 3.0

KS D-statistic: 0.81

Analysis: The log-scale histogram shows an exponential decay in prefix matches. Entropy is low, KS confirms a strongly non-uniform but expected distribution for this 71-bit test space.

Conclusion: Both heuristics perform similarly. Any ROI difference is statistical noise. For this random HASH160 setup, neither method offers a practical advantage — the hash space behaves effectively uniform.
mcdouglasx
Hero Member
*****
Offline Offline

Activity: 882
Merit: 504



View Profile WWW
January 08, 2026, 04:59:31 PM
 #95

Upgrade Over the Original Script

snip

Thanks for the code @nomachine, this should clarify many points:

my tests:

PREFIX_LENGTH = 3
SIMULATIONS = 20000


Code:
=====================================================================================
FINAL BENCHMARK | Iterations: 20000 | Time: 3016.63s
=====================================================================================
Linear 77%      Found=15455  Checks=94,730,863   ROI=163.1464
Prefix Jump     Found=16411  Checks=95,683,151   ROI=171.5140

=====================================================================================
ADVANCED PREFIX ANALYSIS
=====================================================================================
Max prefix matched (hex chars): 8

Log10 Prefix Frequencies:
Prefix  0: log10(freq) = 8.273017
Prefix  1: log10(freq) = 7.068642
Prefix  2: log10(freq) = 5.864462
Prefix  3: log10(freq) = 4.660524
Prefix  4: log10(freq) = 3.461499
Prefix  5: log10(freq) = 2.240549
Prefix  6: log10(freq) = 1.204120
Prefix  7: log10(freq) = 0.000000
Prefix  8: log10(freq) = 0.000000

Shannon Entropy:
Observed : 0.359629
Maximum  : 3.169925
Ratio    : 0.113450

Kolmogorov–Smirnov Test:
D statistic: 0.826424

PREFIX_LENGTH = 4
SIMULATIONS = 20000


Code:
=====================================================================================
FINAL BENCHMARK | Iterations: 20000 | Time: 2932.67s
=====================================================================================
Linear 77%      Found=15417  Checks=94,720,809   ROI=162.7625
Prefix Jump     Found=19736  Checks=99,767,615   ROI=197.8197

=====================================================================================
ADVANCED PREFIX ANALYSIS
=====================================================================================
Max prefix matched (hex chars): 7

Log10 Prefix Frequencies:
Prefix  0: log10(freq) = 8.273005
Prefix  1: log10(freq) = 7.068802
Prefix  2: log10(freq) = 5.865262
Prefix  3: log10(freq) = 4.658603
Prefix  4: log10(freq) = 3.457579
Prefix  5: log10(freq) = 2.264818
Prefix  6: log10(freq) = 0.698970
Prefix  7: log10(freq) = 0.477121

Shannon Entropy:
Observed : 0.359755
Maximum  : 3.000000
Ratio    : 0.119918

Kolmogorov–Smirnov Test:
D statistic: 0.812508


If you look at the log10 frequencies of my 40,000 iterations, you'll see a perfect exponential decay (from 8.2 to 7.0, to 5.8...). This confirms that HASH160 is uniform, but it also validates my strategy:

1- Each prefix encountered is a "low-quality collision".

2- My algorithm uses this signal to clean up the noise and jump to a virgin area of the space.

3- I'm not trying to change the probability; I'm trying to optimize navigation.

The observed entropy (0.35) and the K-S test (0.81) demonstrate that, even if the hash is uniform, the navigation strategy doesn't necessarily have to be.

I think the data still speaks for itself, showing that this is the best option for people with limited computing resources.




snip

We've been in agreement for ages, if you search 100% of the range, you'll find 100% of the keys. Congratulations on proving that 1=1.

Good luck scanning 100% of puzzle 71!



snip

The proper use of AI is to give it context; that's why prompts exist: for example, copy the last page of the thread, and then pass it your data... otherwise it will give you general, decontextualized answers.

kTimesG
Full Member
***
Offline Offline

Activity: 714
Merit: 221


View Profile
January 08, 2026, 06:49:44 PM
 #96

We've been in agreement for ages, if you search 100% of the range, you'll find 100% of the keys. Congratulations on proving that 1=1.

Well yeah, except that you can replace that "100%" with any percent you want. This is what you refuse to accept, and it's also what you have failed to prove. Whatever strategy you choose, once you compare it to any other strategy that simply scans the exact same number of keys (no matter in what order), the results will be identical. This is a direct consequence of the definition of a uniform distribution.

Using non-proportional comparisons (like your fake nonsense ROI, which is derived directly from the success rate, which in turn is derived directly from the maximum number of keys to scan) is not science; it's a fake sense of accomplishment.

Fortunately, Bitcoin is safe as we speak, which is the ultimate validation of your nonexistent optimized search.
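
If anyone wants a toy check of that claim (hypothetical code, not the thread's benchmark): two strategies that scan the SAME number of keys, in completely different orders, hit the target equally often.

Code:
import random

N, M, TRIALS = 10_000, 7_700, 2_000   # scan M = 77% of N keys per trial
hits_front, hits_random = 0, 0
for _ in range(TRIALS):
    target = random.randrange(N)
    hits_front += target < M                              # the first M keys, in order
    hits_random += target in random.sample(range(N), M)   # M keys in a random order
print(hits_front / TRIALS, hits_random / TRIALS)          # both ~0.77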

Off the grid, training pigeons to broadcast signed messages.
mcdouglasx
Hero Member
*****
Offline Offline

Activity: 882
Merit: 504



View Profile WWW
January 08, 2026, 07:52:50 PM
 #97

We've been in agreement for ages, if you search 100% of the range, you'll find 100% of the keys. Congratulations on proving that 1=1.

Well yeah, except that you can replace that "100%" with any percent you want. This is what you refuse to accept, and it's also what you have failed to prove. Whatever strategy you choose, once you compare it to any other strategy that simply scans the exact same number of keys (no matter in what order), the results will be identical. This is a direct consequence of the definition of a uniform distribution.

Using non-proportional comparisons (like your fake nonsense ROI, which is derived directly from the success rate, which in turn is derived directly from the maximum number of keys to scan) is not science; it's a fake sense of accomplishment.

Fortunately, Bitcoin is safe as we speak, which is the ultimate validation of your nonexistent optimized search.

No one here is invalidating mathematics; you're the only one here debating pure mathematics, on which we all agree 100%. But this thread isn't about statistics; that's your mistake.

It's a thread about finding the best way to cover space with a stochastic search. In other words, what you omit on paper in a linear search, we omit statistically by pruning with prefixes.

The data is plentiful, with graphs that demonstrate which method is better for "search engines with limited resources." You're arguing about pure mathematics (which no one here is invalidating), but you're using it to criticize a search method.

To give you an example of something we all know:

If we have a block of 10k keys, and you linearly examine 77% of them, and I examine the same but create the keys one by one randomly, clearly both will end up averaging the same; the purist statistics hold true. But in terms of search efficiency, is it equally efficient? No, right? Because while you search linearly, you take advantage of scalar multiplication, double, and add, while I'll end up spending more time generating key by key.

As you can see, the search method doesn't affect the probabilities, but it can give you time and energy advantages depending on which one you choose. So you're going around in circles theorizing in a thread about searches.

kTimesG
Full Member
***
Offline Offline

Activity: 714
Merit: 221


View Profile
January 08, 2026, 08:01:41 PM
 #98

If we have a block of 10k keys, and you linearly examine 77% of them, and I examine the same but create the keys one by one randomly, clearly both will end up averaging the same; the purist statistics hold true. But in terms of search efficiency, is it equally efficient? No, right? Because while you search linearly, you take advantage of scalar multiplication, double, and add, while I'll end up spending more time generating key by key.

As you can see, the search method doesn't affect the probabilities, but it can give you time and energy advantages depending on which one you choose. So you're going around in circles theorizing in a thread about searches.

Are you now suggesting that prefix jumping is more efficient (computing wise) than a linear iteration? And agree that they both end up with the same average results?

If so, we clearly live in different universes, because the truth is the exact opposite. Oh, and BTW, there's no scalar mul in iterating EC pubKeys.
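
For scale, a rough cost model of that difference (hypothetical numbers, not a benchmark): stepping to the next consecutive public key takes one point addition (P + G), while every independent random key needs a full scalar multiplication - a few hundred group operations with double-and-add.

Code:
# Rough order-of-magnitude cost per candidate key, in EC group operations.
SEQUENTIAL_OPS = 1    # next pubkey = previous pubkey + G (one point addition)
RANDOM_OPS = 384      # ~256 doublings + ~128 additions for a fresh 256-bit scalar

KEYS = 10_000
print("sequential iteration:   ", KEYS * SEQUENTIAL_OPS, "group ops")
print("independent random keys:", KEYS * RANDOM_OPS, "group ops")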

Off the grid, training pigeons to broadcast signed messages.
mcdouglasx
Hero Member
*****
Offline Offline

Activity: 882
Merit: 504



View Profile WWW
January 08, 2026, 08:23:19 PM
 #99

If we have a block of 10k keys, and you linearly examine 77% of them, and I examine the same but create the keys one by one randomly, clearly both will end up averaging the same; the purist statistics hold true. But in terms of search efficiency, is it equally efficient? No, right? Because while you search linearly, you take advantage of scalar multiplication, double, and add, while I'll end up spending more time generating key by key.

As you can see, the search method doesn't affect the probabilities, but it can give you time and energy advantages depending on which one you choose. So you're going around in circles theorizing in a thread about searches.

Are you now suggesting that prefix jumping is more efficient (computing wise) than a linear iteration? And agree that they both end up with the same average results?

If so, we clearly live in different universes, because the truth is the exact opposite. Oh, and BTW, there's no scalar mul in iterating EC pubKeys.

You, like the rest of the readers, clearly understood what I meant; you're simply grasping at straws to reject the search method and the previously published data. But I don't understand your personal vendetta against me. Get off my back; we're debating two different things. The healthiest thing is to let it go and let others give their own opinions. That way your dogma doesn't get in the way of my search, and my search doesn't get in the way of your dogma. We've discussed the same thing enough, and the idea isn't to turn this WP thread into a personal debate.

kTimesG
Full Member
***
Offline Offline

Activity: 714
Merit: 221


View Profile
January 08, 2026, 08:35:13 PM
 #100

Are you now suggesting that prefix jumping is more efficient (computing wise) than a linear iteration? And agree that they both end up with the same average results?

You, like the rest of the readers, clearly understood what I meant

No, I did not understand what you meant. Personal opinions are not facts, so if you make an absurd statement, the next decent thing is to present arguments. The alternative is to let it fly and pretend people understand. Maybe I'm too stupid and didn't get what you were trying to say there, so I simply asked for clarifications.

If you do agree that any two methods produce identical average results, then the next differentiating factor is their efficiency (since their efficacy is the same, right?), so please clarify further Smiley

Off the grid, training pigeons to broadcast signed messages.
Pages: « 1 2 3 4 [5] 6 »  All