Author Topic: BITCOIN PUZZLE: THE PREFIX DILEMMA - A follow up  (Read 577 times)
mcdouglasx
December 28, 2025, 03:05:15 PM
Last edit: December 28, 2025, 05:02:44 PM by mcdouglasx
 #41

It's regrettable...
a desperate maneuver....

Understand this...

There's no magical difference in GPU effort between processing a large block with prefix stops and cutting off at 65%. The workload is the same. What changes is that you're throwing away 35% of your probability with an arbitrary cutoff, while we're reinvesting it in mobility.

Stop painting the "Dynamic Scheduler" as something impossible or as something that degrades performance tenfold. @WP is already doing it. Their tests show that it's feasible and efficient. You can't argue that something "can't be done" when someone has already published the results of doing it.

The prefix method is the best strategy for "trying your luck" because the "failure" isn't a dead end at the end of each block. It's a calculated statistical failure (less than 1/N).
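As a rough sanity check of the "same workload" claim, here is a minimal sketch (assuming a 3-hex-character prefix and 4,096-key blocks, the same parameters used in the simulation later in the thread) comparing the expected number of keys checked per block by a fixed 65% cutoff and by a prefix-triggered early stop:

Code:
# Back-of-the-envelope workload comparison per block (assumed parameters).
BLOCK_SIZE = 4096      # keys per block (assumption, matches the simulation below)
PREFIX_LEN = 3         # hex characters, so a 16**-3 match probability per key
CUTOFF = 0.65          # fixed cutoff fraction

q = 16 ** -PREFIX_LEN  # chance that a single H160 starts with the target prefix

# Fixed cutoff: always checks the same number of keys per block.
fixed_checks = CUTOFF * BLOCK_SIZE

# Prefix stop: checks keys until the first prefix hit or the end of the block.
# Expected checks = sum over i of P(no hit among the first i-1 keys) = (1-(1-q)**B)/q.
prefix_checks = (1 - (1 - q) ** BLOCK_SIZE) / q

print(f"fixed 65% cutoff : {fixed_checks:.0f} checks per block")
print(f"prefix early stop: {prefix_checks:.0f} checks per block (expected)")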

Quote
The last proton in existence will decay itself before the mythological unicorns you are bragging about will prefix their way into breaking three layers of cryptography and de-contaminate the location biases of the freaking data entropy-filled blocks.

Since you are obviously right, as usual, you should take Bram's advice and publish your remarkable ideas in a serious environment, where the world's cryptographers can take note of your astonishing discoveries.

After that, you should present the implementation that breaks the laws of basic computing: running a sequential algorithm in parallel.

Lastly, maybe you should also present your "dynamic scheduler" that works just as fast as... not having any. Because "workloads". LMFAO is the best I can react, since it's clear you're totally clueless about why H160 even runs fast on a GPU (hint: it's because it isn't bloated with bullshit ideas).

Wow, you've turned a technical thread into a circus of nonsense, sarcasm, and things taken out of context, just because you can't accept that blind cutting is strategically absurd compared to the dynamic probability of prefixes... congratulations, you've earned the respect of the forum trolls!

kTimesG
December 29, 2025, 11:29:09 PM
 #42

There's absolutely nothing technical here in any aspect at all. The circus restarted the moment you brought up range contamination. So bringing up unicorns, mushrooms, or "blah-blah" in return is simply the appropriate thing to do.

Independent events do not care about any order; it's a waste of time to repeat the same thing a million times only to get back the same broken record, one that completely ignores the basic definition of a uniform distribution. Everything you are defending about your magic theorem can be reproduced by swapping the H160 inputs for any others, ending up with a perfectly continuous X% scan that is equivalent and identical to whatever the magic theorem ends up scanning.

A perfectly continuous scan to which all of your arguments apply just as well. And no, it is not, as you claim, X% of each "block". It's simply whatever X% you want it to be: beginning, end, middle, or picked from random inputs - they will all end up working exactly the same, and the same arguments you present apply to every one of them.

This conclusion is obvious to anyone who spends more than 60 seconds thinking about the subject, but you will keep dragging this fallacy along with you, when in fact all you are actually doing is proving that H160 is, indeed, a uniform distribution. If it wasn't, your theory would actually have gains. However, since you don't have even the slightest proof that H160 is NOT a uniform distribution, by the same logic your theory cannot be true either.
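For what it's worth, a minimal Monte Carlo sketch of that point (with made-up range and trial sizes): against a uniformly placed target, scanning the first 65% of the keys, a middle 65%, or a random 65% subset all succeed at the same rate.

Code:
import random

RANGE_SIZE = 4096              # toy subrange size (assumption)
CUT = int(RANGE_SIZE * 0.65)   # 65% of the keys, however they are chosen
TRIALS = 10_000

hits = {"first 65%": 0, "middle 65%": 0, "random 65% subset": 0}
for _ in range(TRIALS):
    target = random.randrange(RANGE_SIZE)          # uniform target position
    if target < CUT:
        hits["first 65%"] += 1
    if (RANGE_SIZE - CUT) // 2 <= target < (RANGE_SIZE - CUT) // 2 + CUT:
        hits["middle 65%"] += 1
    if target in set(random.sample(range(RANGE_SIZE), CUT)):
        hits["random 65% subset"] += 1

for name, n in hits.items():
    print(f"{name:<18}: {n / TRIALS:.3f}")          # all hover around 0.65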

Your attempts to use basic logic and math in the opposite order (working from a fake conclusion back to a fake hypothesis), with no proof other than "contaminated ranges" and "location biases", which do not exist, can ultimately only yield two consequences:

1. Sarcasm.
2. A silent ignore from everyone. Probably the reason you've long since exhausted the patience of most of the forum members.

Maybe you'll think about that tattoo after all. It would be cool, and you'd have plenty of people curious about what independent events actually are, instead of polluting the forum further.

mcdouglasx
Today at 01:13:46 AM
Last edit: Today at 04:56:30 AM by mcdouglasx
 #43

snip

You keep reciting the definition of "uniform distribution" like a mantra, but you don't understand that this is precisely the argument that's sinking you.

Nobody questions that the H160 is uniform. What I question is your lack of strategic vision. In a uniform distribution, each key has the same probability. Therefore, there's no logical reason to choose a sequential scan that prevents you from looking at the last 35% of the range. If everything is uniform, skipping based on a signal (prefix) is a way to optimize the scan, not to change the key's probability.

You say that a continuous scan of X% is "identical" to my prefix scan. False. In your continuous 65% scan, if the key is in the remaining 35%, your probability is ZERO.

In my prefix scan, I cover 100% of the range in a skipping fashion. If the key lies within that 35% you're ignoring, my method can find it because it doesn't have an arbitrary exclusion zone.

You talk about "basic computer science" but ignore the fact that the world's most advanced search systems, from Google's search to modern AI, don't use flat sequential scans; they use heuristics. My method isn't magic; it's resource optimization. Your insistence that sequential is better because it's simpler is why you're stuck in theory while others are advancing in practice.

So you can keep trying to lecture on independent events, but the reality of the puzzle is simple: coverage is key. Your "guillotine" blinds you to the 35% potential success rate because of an obsession with continuity that contributes nothing to finding the key.

If my "magic theory" bothers you so much, it's because it exposes that your clean method is simply a quick way to give up on a third of the puzzle. The forum isn't polluted by technical debate; it's tainted by the arrogance of those who prefer to be right on paper rather than have a real chance of finding the key.

Keep your perfect scan; I'll stick with the method that doesn't force me to ignore the treasure if it falls outside your comfort zone.

I know it's a uniform distribution and I know what independent events are, but this is about search heuristics, not a statistics exam built on the assumption of infinitely many averaged trials. Remember, you're looking for a single, discrete event (the private key), not counting how many prefixes exist in 2^256.

edit:

Since you like AI, as you demonstrated in other threads, I asked it to modify the code for a comparison of the 3 methods: one with a fixed cutoff, one with a random stop, and another with prefixes (before you cry "AI fallacy": the math is the same no matter where the code came from, it's just for demonstration purposes):

Code:
import hashlib
import random
import math
import statistics

# === CONFIGURATION ===
TOTAL_SIZE = 100_000
RANGE_SIZE = 4_096
PREFIX_LENGTH = 3
SIMULATIONS = 2000

# Bitcoin secp256k1 order
SECP_ORDER = int("0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141", 16)

def generate_h160(data):
    """Generates RIPEMD160(SHA256(data)) hash."""
    h = hashlib.new('ripemd160', str(data).encode('utf-8'))
    return h.hexdigest()

def get_shuffled_blocks(total_blocks):
    blocks = list(range(total_blocks))
    random.shuffle(blocks)
    return blocks

# --- SEARCH METHODOLOGIES ---

def fixed_65_search(dataset, block_size, target_hash, block_order):
    """Simulates a static 65% cutoff strategy."""
    checks = 0
    cutoff = int(block_size * 0.65)
    for block_idx in block_order:
        start = block_idx * block_size
        for i in range(start, start + cutoff):
            if i >= len(dataset): break
            checks += 1
            if generate_h160(dataset[i]) == target_hash:
                return {"checks": checks, "found": True}
    return {"checks": checks, "found": False}

def random_stop_search(dataset, block_size, target_hash, block_order):
    """Simulates a random stop strategy within each block."""
    checks = 0
    for block_idx in block_order:
        start = block_idx * block_size
        stop_point = random.randint(1, block_size)
        for i in range(start, start + stop_point):
            if i >= len(dataset): break
            checks += 1
            if generate_h160(dataset[i]) == target_hash:
                return {"checks": checks, "found": True}
    return {"checks": checks, "found": False}

def prefix_heuristic_search(dataset, block_size, prefix_len, target_hash, block_order):
    """Simulates the dynamic prefix-based jumping strategy (Heuristic)."""
    target_prefix = target_hash[:prefix_len]
    checks = 0
    for block_idx in block_order:
        start = block_idx * block_size
        end = min(start + block_size, len(dataset))
        for i in range(start, end):
            checks += 1
            current_hash = generate_h160(dataset[i])
            if current_hash == target_hash:
                return {"checks": checks, "found": True}
            if current_hash.startswith(target_prefix):
                break
    return {"checks": checks, "found": False}

def run_experiment():
    results = {
        "fixed_65": {"found": 0, "dead_zone_hits": 0, "checks": []},
        "random_stop": {"found": 0, "dead_zone_hits": 0, "checks": []},
        "prefix_heuristic": {"found": 0, "dead_zone_hits": 0, "checks": []}
    }
    
    total_blocks = math.ceil(TOTAL_SIZE / RANGE_SIZE)
    dead_zone_threshold = int(RANGE_SIZE * 0.65)
    total_keys_in_dead_zone = 0

    print(f"Starting {SIMULATIONS} simulations...")

    for _ in range(SIMULATIONS):
        offset = random.randint(0, SECP_ORDER - TOTAL_SIZE)
        dataset = [offset + i for i in range(TOTAL_SIZE)]
        
        # Select a hidden target key
        target_idx = random.randint(0, TOTAL_SIZE - 1)
        target_hash = generate_h160(dataset[target_idx])
        
        # Verify if key is located in the 35% trailing 'Dead Zone'
        is_in_dead_zone = (target_idx % RANGE_SIZE) >= dead_zone_threshold
        if is_in_dead_zone:
            total_keys_in_dead_zone += 1

        block_order = get_shuffled_blocks(total_blocks)

        # Run Search Models
        f65 = fixed_65_search(dataset, RANGE_SIZE, target_hash, block_order)
        rnd = random_stop_search(dataset, RANGE_SIZE, target_hash, block_order)
        pre = prefix_heuristic_search(dataset, RANGE_SIZE, PREFIX_LENGTH, target_hash, block_order)

        # Record Statistics
        for res, key in zip([f65, rnd, pre], results.keys()):
            if res["found"]:
                results[key]["found"] += 1
                results[key]["checks"].append(res["checks"])
                if is_in_dead_zone:
                    results[key]["dead_zone_hits"] += 1

    print("\n" + "="*60)
    print("SEARCH STRATEGY ANALYSIS: THE DEAD ZONE CHALLENGE")
    print("="*60)
    print(f"Total Simulations: {SIMULATIONS}")
    print(f"Keys located in the 35% Dead Zone: {total_keys_in_dead_zone}")
    print("-" * 60)
    
    header = f"{'Method':<18} | {'Total Success':<15} | {'Dead Zone Hits':<15} | {'DZ Efficiency'}"
    print(header)
    print("-" * len(header))
    
    for name, data in results.items():
        dz_eff = (data["dead_zone_hits"] / total_keys_in_dead_zone * 100) if total_keys_in_dead_zone > 0 else 0
        print(f"{name:<18} | {data['found']:<15} | {data['dead_zone_hits']:<15} | {dz_eff:>12.2f}%")

    print("\n[Resource Efficiency]")
    for name, data in results.items():
        avg_checks = statistics.mean(data["checks"]) if data["checks"] else 0
        print(f"{name:<18}: {avg_checks:,.0f} average checks per find")

if __name__ == '__main__':
    run_experiment()


result

test 1

Code:
SEARCH STRATEGY ANALYSIS: THE DEAD ZONE CHALLENGE
============================================================
Total Simulations: 1000
Keys located in the 35% Dead Zone: 344
------------------------------------------------------------
Method             | Total Success   | Dead Zone Hits  | DZ Efficiency
----------------------------------------------------------------------
fixed_65           | 656             | 0               |         0.00%
random_stop        | 503             | 73              |        21.22%
prefix_heuristic   | 618             | 139             |        40.41%

[Resource Efficiency]
fixed_65          : 33,868 average checks per find
random_stop       : 26,736 average checks per find
prefix_heuristic  : 32,257 average checks per find

test 2

Code:
============================================================
SEARCH STRATEGY ANALYSIS: THE DEAD ZONE CHALLENGE
============================================================
Total Simulations: 1000
Keys located in the 35% Dead Zone: 346
------------------------------------------------------------
Method             | Total Success   | Dead Zone Hits  | DZ Efficiency
----------------------------------------------------------------------
fixed_65           | 654             | 0               |         0.00%
random_stop        | 535             | 74              |        21.39%
prefix_heuristic   | 654             | 151             |        43.64%

[Resource Efficiency]
fixed_65          : 32,656 average checks per find
random_stop       : 26,068 average checks per find
prefix_heuristic  : 32,430 average checks per find

test 3
Code:
============================================================
SEARCH STRATEGY ANALYSIS: THE DEAD ZONE CHALLENGE
============================================================
Total Simulations: 2000
Keys located in the 35% Dead Zone: 697
------------------------------------------------------------
Method             | Total Success   | Dead Zone Hits  | DZ Efficiency
----------------------------------------------------------------------
fixed_65           | 1303            | 0               |         0.00%
random_stop        | 1001            | 126             |        18.08%
prefix_heuristic   | 1239            | 305             |        43.76%

[Resource Efficiency]
fixed_65          : 33,029 average checks per find
random_stop       : 26,330 average checks per find
prefix_heuristic  : 31,815 average checks per find

test 4

Code:
============================================================
SEARCH STRATEGY ANALYSIS: THE DEAD ZONE CHALLENGE
============================================================
Total Simulations: 5000
Keys located in the 35% Dead Zone: 1721
------------------------------------------------------------
Method             | Total Success   | Dead Zone Hits  | DZ Efficiency
----------------------------------------------------------------------
fixed_65           | 3279            | 0               |         0.00%
random_stop        | 2559            | 296             |        17.20%
prefix_heuristic   | 3165            | 789             |        45.85%

[Resource Efficiency]
fixed_65          : 33,069 average checks per find
random_stop       : 25,639 average checks per find
prefix_heuristic  : 32,178 average checks per find

The 65% cutoff method is a guillotine that leaves you blind to 35% of the puzzle. The random stop is inefficient chaos. The prefix method is the only engineering strategy that balances coverage speed with the real possibility of finding the key anywhere within the range.

As you can see, statistically they may seem similar, but that doesn't change the fact that the prefix method doesn't leave a gap due to an arbitrary cutoff, nor does it leave it to the chaos of randomness.

Is it clear that the prefix method is the most effective for searching when we have limited resources?

What do you prefer now: relying on the fixed cutoff and risking that the target is in the unexplored 35%?

Leaving it to chance, with a random stop?

Or using prefixes, with a margin of error of less than 1/N distributed across 100% of the range?

As you can see, my method doesn't break the rules of mathematics; it doesn't affect the uniform distribution or the independence of events.

It's simply an intelligent approach, which should be valued instead of attacked by egocentric purists who don't grasp the essence of what I'm saying. They only see statistics and numbers, forgetting that WE'RE LOOKING FOR A SINGLE TARGET.

kTimesG
Today at 11:03:37 AM
Last edit: Today at 11:30:17 AM by kTimesG
 #44

Quote
The 65% cutoff method is a guillotine that leaves you blind to 35% of the puzzle. The random stop is inefficient chaos. The prefix method is the only engineering strategy that balances coverage speed with the real possibility of finding the key anywhere within the range.

As you can see, statistically they may seem similar, but that doesn't change the fact that the prefix method doesn't leave a gap due to an arbitrary cutoff, nor does it leave it to the chaos of randomness.

Is it clear that the prefix method is the most effective for searching when we have limited resources?

What do you prefer now: relying on the fixed cutoff and risking that the target is in the unexplored 35%?

Except that the prefix method also leaves a 35% blind gap. You keep forgetting about it, as if it didn't exist. Fix the AI nonsense and repeat your tests to also check the cases where the magic theorem failed. Both the continuous and the random versions will find the target when the prefix method doesn't, and vice versa, making all things equivalent in terms of results, successes, and so on.

So it's not more or less effective than a plain old continuous scan with the same number of operations; however, due to the fact that computing consecutive public keys (which are the inputs to the H160) is orders of magnitude faster than arbitrarily stopping and restarting, it CAN NEVER be more efficient. This is the entire point people are trying to make you understand.
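A back-of-the-envelope check of that equivalence, under the same assumptions as the simulation above (3-hex-character prefix, 4,096-key blocks): the prefix stop covers, in expectation, roughly 63% of each block, and against a uniformly placed target its per-block hit probability is that same ~63%, essentially the same blind share as a ~65% cutoff, only spread around differently.

Code:
# Expected per-block coverage of a prefix-triggered early stop (assumed parameters).
B = 4096          # block size (assumption, as in the simulation above)
q = 16 ** -3      # per-key probability of matching a 3-hex-character prefix

# The scan reaches position i only if none of the i earlier keys matched the prefix,
# so the expected fraction of the block scanned is the average of (1 - q)**i.
expected_fraction = sum((1 - q) ** i for i in range(B)) / B

# For a uniformly placed target, P(reach it) is that same average, since a target
# at position i is reached with probability (1 - q)**i.
print(f"expected coverage / per-block hit probability: {expected_fraction:.3f}")  # ~0.632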

You keep confusing the uniformity of H160 with the uniformity of its inputs. Again: order does not matter, so the concept of "continuous blocks" does not exist, because the inputs have nothing to do with the uniformity of H160. As a consequence, the concept of "range contamination" also does not exist, because the ranges do not exist, because the concept of "order" does not exist. Which means the magic theorem also cannot be true, since it relies on the existence of "order". Capisci?

The uniformity of H160 is about the H160 outputs, which have no relation whatsoever to what they hash (whether they're SHA hashes of a public key, words, sentences,  or terabytes of data).

So whether you do your tests with these inputs:

RIPEMD(SHA256(pubKey(correctTargetPrivateKey)))

together with 2**70 - 1 other RIPEMD(whateverRandomData) inputs - you'll still find the exact same number of prefixes, and have the same 1 in 2**70 chance of hitting the key (i.e. the desired INPUT, whether it's some input of a SHA256, or maybe the 3 gigabytes of some known blob, or maybe some Shakespeare dramaturgy), at each and every attempt, however you want to arrange your "blocks", in all possible permutations, combinations, and setups. Relevance to finding what we're looking for? ZERO.

mcdouglasx
Today at 02:30:00 PM
Last edit: Today at 02:58:17 PM by mcdouglasx
 #45

snip

It's like telling a child that having 2 ice creams is the same whether you give them 2 or take 2 away from the 4 they had. The final number is the same, but the operational reality is completely different.

Your method 'removes' 35% of the equation's range, leaving you blind. My method 'distributes' the effort so that I never lose the opportunity for success. kTimesG, you're stuck looking at the final number on paper; I'm sticking with the method that truly has the flexibility to find the key no matter where it is along the way.

If the order didn't matter and it was all blind luck, the random stop should have performed just as well as my heuristic.

I think I've already made my point about search heuristics versus purist statistics clear. Let other users read for themselves and draw their own conclusions. I have things to do and money to try to earn; I can't keep wasting time on this technical-purist-engineering cycle.

edit:

Regarding efficiency, the statistics indicate that it requires almost the same effort as your block-based method. Your cutoff at 65% works the same way: you would query the target by key, while I would query the prefix. The target is only queried if the prefix is found, and since the ratio is 1/N, a large block is used because the method is scalable, avoiding bottlenecks caused by small blocks.

In other words, it only requires one extra step: querying the target after a prefix is found, and the cost of that step is negligible.
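For context, a minimal sketch of what that per-candidate check could look like (illustrative only, not any particular tool's code): the hash has to be computed first in either approach, and both the prefix test and the full comparison are cheap string operations on the finished digest.

Code:
def check_candidate(h160_hex: str, target_hex: str, target_prefix: str) -> str:
    """Illustrative per-key check: both tests operate on the already-computed hash."""
    if h160_hex == target_hex:              # full match: the key has been found
        return "found"
    if h160_hex.startswith(target_prefix):  # prefix-only match: signal to skip ahead
        return "prefix_hit"
    return "miss"                           # no match: keep scanning sequentially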

Happy new year.

kTimesG
Today at 04:13:19 PM
Last edit: Today at 04:25:29 PM by kTimesG
 #46

Quote
Your method 'removes' 35% of the equation's range, leaving you blind. My method 'distributes' the effort so that I never lose the opportunity for success. kTimesG, you're stuck looking at the final number on paper; I'm sticking with the method that truly has the flexibility to find the key no matter where it is along the way.

Let's try for the 1,000,001st time: your magic method removes 35% of the equation's success. A simple scan distributes the effort so that it never loses the opportunity for success (i.e. it touches parts of your missing 35%, just like the magic method touches parts of the normal scan's missing 35%).

Quote
If the order didn't matter and it was all blind luck...

# 1,000,002: there is no concept of order when we are talking about independent events of a uniform distribution. When analyzing multiple independent events taken together, THEIR ORDER DOES NOT MATTER. There is absolutely no formula on Earth that uses their order; otherwise it is not a UNIFORM DISTRIBUTION OF INDEPENDENT EVENTS - it is SOMETHING ELSE. There are no "locations" to speak of; those are INPUT parameters for the stuff that goes into computing the H160's own input. Again: you might as well hash human DNA for all it matters, and your prefix method will generate the exact same results, even though, oops, the "location" point of reference doesn't exist. What to do, what to do...

Quote
Regarding efficiency, the statistics indicate that it requires almost the same effort as your block-based method. Your cutoff at 65% works the same way: you would query the target by key, while I would query the prefix. The target is only queried if the prefix is found, and since the ratio is 1/N, a large block is used because the method is scalable, avoiding bottlenecks caused by small blocks.

You can take your statistical significance nonsense elsewhere. There is zero effort in distributing a fixed X% amount of work over N workers. Instead, it's the magic theorem that breaks arbitrarily, not the fixed-work method. You have a very deep misunderstanding of basic computer science. And WTF is "query the target by key"? There is no prefix to check for until AFTER the H160 is already FINISHED. Jesus Christ, are you OK? It's not the first time you make such a blatant statement. Still, I guess the entire codebase of human intelligence isn't of help here, let alone simply digging into the code of whatever vanity search tool you prefer.

Quote
In other words, it only requires one extra step: querying the target after a prefix is found, and the cost of that step is negligible.

It's stuff like this that clearly ensures there is nothing technical to speak about with you. Doing things in exact reverse is your trademark anyway. You must be from the future!

HNY.

mcdouglasx
Today at 04:41:08 PM
 #47

snip

10000 simulations.

Code:
============================================================
SEARCH STRATEGY ANALYSIS: THE DEAD ZONE CHALLENGE
============================================================
Total Simulations: 10000
Keys located in the 35% Dead Zone: 3349
------------------------------------------------------------
Method             | Total Success   | Dead Zone Hits  | DZ Efficiency
----------------------------------------------------------------------
fixed_65           | 6651            | 0               |         0.00%
random_stop        | 5191            | 599             |        17.89%
prefix_heuristic   | 6392            | 1467            |        43.80%

[Resource Efficiency]
fixed_65          : 32,706 average checks per find
random_stop       : 25,793 average checks per find
prefix_heuristic  : 32,191 average checks per find

Bitcoin hashes have an order, even though they are a uniform distribution. The order is dictated by the private keys within the range (you see them as "balls in an urn" that get mixed up every time you reach in). Whether you like it or not, this gives the hashes a fixed order. Private key 1 will always produce the same hash; it's not as if the hashes change every time you pass through that indexed range.

So, by cutting off at 65%, you're always using the same first 65% of the private keys in that block; it will never be equivalent. And again, you're trying to distort the reality of the search with textbook theories that we all understand perfectly.

And yes, my method can fail if the target sits after a false-positive prefix, but that's an acceptable statistical error, as you can see in the results. Since we don't know whether the target sits after a prefix in a block or not, it's better to bet on that than to bet on nothing: in your dead zone, my proposal finds the target more than 40% of the time.
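For what it's worth, that 40%+ figure lines up with a quick expected-value estimate under the same assumptions as the simulation (3-hex-character prefix, 4,096-key blocks):

Code:
# Estimated chance that the prefix-stop scan still reaches a target sitting in the
# last 35% of a block (assumed parameters: 4096-key blocks, 3-hex-character prefix).
B = 4096
q = 16 ** -3                          # per-key probability of a false-positive prefix hit
dead_zone = range(int(B * 0.65), B)   # positions past the 65% cutoff

# A target at position i is reached only if none of the i earlier keys matched the prefix.
p = sum((1 - q) ** i for i in dead_zone) / len(dead_zone)
print(f"P(reach a dead-zone target) ≈ {p:.1%}")   # ≈ 44%, close to the simulated 43-46%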

WanderingPhilospher (OP)
Today at 04:46:58 PM
 #48

I think we got off track when we started comparing scanning 65% of a range (beginning, end, or middle), then stopping, and saying it had the EXACT same % chance to find the key as the prefix method. And that could have been my fault.

If we are searching for a specific h160, the 65% stop method would NOT be the same as the prefix method; AND we all agree that 100% scan is the superior method, so no need to bring it up.

We need to bring back to the forefront, that we are actually searching for a target hash...i.e. we are actually looking for something specific and not just quantities of prefixes.

If there are 0 or 1 prefix in a subrange we are searching:
the prefix method will know 100% of the time. It will scan 100% and find 0 or it will keep searching and find the 1, even if it takes 99% searching of the subrange.
the 65% stop method will only know whether there was 0 or 1 in the 65% of the range scanned (b, m, e). But if the prefix is in the 66%-100%, it will not find the prefix.

If there are 1+ prefixes in a subrange we are searching:
the prefix method will find at least 1 of the prefixes.
the 65% stop method will find 0 to all prefixes, depending on where the prefixes are in the subrange.

So, when searching for a specific target, the prefix method has a slight advantage because it will always find at least 1 prefix or do a 100% scan and conclude that none exist in the subrange.

BUT, if we are trying to find quantities of prefixes and not necessarily looking for a specific target, then both, on average, will find the same amount of prefixes.

Prefix method will always win, or at worse, tie, if there are 0 to 1 prefixes in a subrange.
If there are 2 prefixes in the subrange, the prefix method has a 50% chance of finding the target.
If there are 3 prefixes in the subrange, the prefix method has a 33% chance of finding the target.
etc.

65% method will always win, or at worse, tie, if ALL prefixes are within the 65% of the subrange searched.
If there are 1/2 prefixes within the 65% of the subrange searched = 50%.
If there are 0/2 prefixes within the 65% of the subrange searched = 0%.
If there are 2/3 prefixes within the 65% of the subrange searched = 66%.
If there are 1/3 prefixes within the 65% of the subrange searched = 33%.
If there are 0/3 prefixes within the 65% of the subrange searched = 0%.

Did I miss anything?
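A small Monte Carlo sketch of those conditional odds (toy parameters; it assumes the target is equally likely to be any of the prefix-matching keys in the subrange, and that the fixed method scans a contiguous 65%):

Code:
import random

B = 4096                 # toy subrange size (assumption)
CUT = int(B * 0.65)      # keys covered by the fixed 65% scan
TRIALS = 100_000

for n in (1, 2, 3, 4):   # number of prefix-matching keys in the subrange
    prefix_wins = fixed_wins = 0
    for _ in range(TRIALS):
        positions = random.sample(range(B), n)   # where the matching keys sit
        target = random.choice(positions)        # the real key is one of them
        if min(positions) == target:             # prefix method stops at the first match
            prefix_wins += 1
        if target < CUT:                         # fixed method only sees the first 65%
            fixed_wins += 1
    print(f"{n} prefix match(es): prefix ≈ {prefix_wins / TRIALS:.2f}, "
          f"fixed 65% ≈ {fixed_wins / TRIALS:.2f}")

The prefix column reproduces the 1/n odds listed above, while the fixed 65% column stays at about 0.65 no matter how many matches the subrange holds.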
mcdouglasx
Today at 05:24:56 PM
Last edit: Today at 06:08:47 PM by mcdouglasx
 #49

Quote
I think we got off track when we started comparing scanning 65% of a range (beginning, end, or middle), then stopping, and saying it had the EXACT same % chance to find the key as the prefix method. And that could have been my fault.

If we are searching for a specific h160, the 65% stop method would NOT be the same as the prefix method; AND we all agree that 100% scan is the superior method, so no need to bring it up.

We need to bring back to the forefront, that we are actually searching for a target hash...i.e. we are actually looking for something specific and not just quantities of prefixes.

If there are 0 or 1 prefix in a subrange we are searching:
the prefix method will know 100% of the time. It will scan 100% and find 0 or it will keep searching and find the 1, even if it takes 99% searching of the subrange.
the 65% stop method will only know whether there was 0 or 1 in the 65% of the range scanned (b, m, e). But if the prefix is in the 66%-100%, it will not find the prefix.

If there are 1+ prefixes in a subrange we are searching:
the prefix method will find at least 1 of the prefixes.
the 65% stop method will find 0 to all prefixes, depending on where the prefixes are in the subrange.

So, when searching for a specific target, the prefix method has a slight advantage because it will always find at least 1 prefix or do a 100% scan and conclude that none exist in the subrange.

BUT, if we are trying to find quantities of prefixes and not necessarily looking for a specific target, then both, on average, will find the same amount of prefixes.

Prefix method will always win, or at worse, tie, if there are 0 to 1 prefixes in a subrange.
If there are 2 prefixes in the subrange, the prefix method has a 50% chance of finding the target.
If there are 3 prefixes in the subrange, the prefix method has a 33% chance of finding the target.
etc.

65% method will always win, or at worse, tie, if ALL prefixes are within the 65% of the subrange searched.
If there are 1/2 prefixes within the 65% of the subrange searched = 50%.
If there are 0/2 prefixes within the 65% of the subrange searched = 0%.
If there are 2/3 prefixes within the 65% of the subrange searched = 66%.
If there are 1/3 prefixes within the 65% of the subrange searched = 33%.
If there are 0/3 prefixes within the 65% of the subrange searched = 0%.

Did I miss anything?

Everything you've said is exactly correct. I know that the overall statistics of the methods will tend to equalize in an infinite world, but setting pure statistics aside and focusing on the search for a unique target in a finite indexed range, prefixes are the best strategy because they always have a statistical chance, whether there are 1, 2, 3, or more prefixes.

Even with more than one prefix, we will always have the possibility that the target is the first one we find, but with a cut-off limit of 65%, if the target is at 66%, it has zero chance. And I say it's the best strategy because a 100% search isn't viable with today's limited resources; it's already too much of a waste of time and money even with GPU farms.

kTimesG
Today at 05:34:55 PM
 #50

Quote
Bitcoin hashes have an order, even though they are a uniform distribution. The order is dictated by the private keys within the range (you see them as "balls in an urn" that get mixed up every time you reach in). Whether you like it or not, this gives the hashes a fixed order. Private key 1 will always produce the same hash; it's not as if the hashes change every time you pass through that indexed range.

The only "order" of any hash is the hash value itself. Again: you are using the root input of three layers of crypto algorithms as some basis of "order" in the final output (the uniform distribution).

So, yes, Bitcoin hashes do have an order indeed: it's the order of the H160's value, not the value of the private key that generated the public key that was hashed via SHA256, which ultimately resulted in the H160.

You also misunderstood the balls thing. The balls are not private keys, the balls are the H160 values.

So there are 2**160 balls (all possible balls) in the urn. There is no notion of private keys to speak of in this context. There is, however, the notion that "once we extract an H160 ball, we put it back in, and we do this 2**70 times". So: where do you see the "signaling"? Do the balls talk to each other? Does the one who extracts the balls forget to put the ball back in? Or is the magic unicorn whispering to the balls: "hey, this guy's at the next private key, watch out what ball comes next!"?

The simple fact that there are ~2**256 distinct-by-definition public keys in secp256k1, but AT MOST 2**160 uniquely different Bitcoin hashes, is enough to classify "Bitcoin hashes have an order, based on their private key" as total nonsense.

WanderingPhilospher (OP)
Today at 05:44:11 PM
 #51

Quote
Even with more than 1 prefix, we will always have the chance that the target will be the first one we find, but at 65%.
How did you arrive at 65%?

To me, it would be based on how many prefixes are in the range, unless you are just doing an average.

If there are:
4 prefixes = 25%
3 prefixes = 33%
2 prefixes = 50%

So I would say that, on average, there's a 36% chance we find the target hash if there is more than 1 prefix in a subrange. It would jump to 42% if there were never 4 prefixes in a subrange, but we can't conclude that with 100% certainty.

During puzzle 69, when I ran a lot of real-world tests (actually searching for 69, but trying to come up with a good prefix method), I found that skipping ahead by 1 up to 2^x (depending on how many bits I was trying to match: 40, 44 or 48) found prefixes more quickly, with fewer subranges returning 0 finds.

So if I found a prefix with a private key of 0x100, I would add some value to that private key, and that would be my new range start, plus whatever bit size I was looking for. So if I was looking for 44-bit matches:

0x100 + 2^28 = 0x10000100:0x100000000000 = new search range (just as an example; all real ranges were in the 69-bit range). So I never broke the range into a predetermined number of subranges; I let the found hashes' private keys dictate the next subrange.
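A rough sketch of that adaptive scheduling, with placeholder numbers (the 2^28 jump and the 44-bit width below are only illustrative, not the exact values used):

Code:
# Illustrative adaptive subrange scheduling: each found prefix-match key decides
# where the next subrange starts, instead of pre-splitting the whole range.
JUMP = 2 ** 28           # offset added after a hit (placeholder value)
WIDTH_BITS = 44          # nominal subrange width in bits (placeholder value)

def next_subrange(found_key, range_end):
    """Start the next scan just past the key that produced the last prefix match."""
    start = found_key + JUMP
    end = min(start + 2 ** WIDTH_BITS, range_end)
    return start, end

# Example: a prefix match found at private key 0x100, inside a 69-bit puzzle range.
start, end = next_subrange(0x100, 2 ** 69)
print(hex(start), hex(end))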
mcdouglasx
Today at 06:36:40 PM
 #52

Quote
Even with more than 1 prefix, we will always have the chance that the target will be the first one we find, but at 65%.
How did you arrive at 65%?

To me, it would be based on how many prefixes are in the range, unless you are just doing an average.

If there are:
4 prefixes = 25%
3 prefixes = 33%
2 prefixes = 50%

So I would say that, on average, there's a 36% chance we find the target hash if there is more than 1 prefix in a subrange. It would jump to 42% if there were never 4 prefixes in a subrange, but we can't conclude that with 100% certainty.

Exactly, I just phrased that badly; what I meant to say was this:

Quote
Even with more than one prefix, we will always have the possibility that the target is the first one we find, but with a cut-off limit of 65%, if the target is at 66%, it has zero chance.



Quote
During puzzle 69, when I ran a lot of real-world tests (actually searching for 69, but trying to come up with a good prefix method), I found that skipping ahead by 1 up to 2^x (depending on how many bits I was trying to match: 40, 44 or 48) found prefixes more quickly, with fewer subranges returning 0 finds.

So if I found a prefix with a private key of 0x100, I would add some value to that private key, and that would be my new range start, plus whatever bit size I was looking for. So if I was looking for 44-bit matches:

0x100 + 2^28 = 0x10000100:0x100000000000 = new search range (just as an example; all real ranges were in the 69-bit range). So I never broke the range into a predetermined number of subranges; I let the found hashes' private keys dictate the next subrange.

We would need to run tests to see whether fixed blocks or adaptive blocks are more cost-effective, but if you saw that you were finding more prefixes, that's a good sign it's worth comparing both strategies, using the same unique targets in both tests, since you were achieving a denser, more efficient coverage by hitting fewer empty blocks.

Similarly, it could be used for creating vanity addresses: if you find prefixes faster, you generate the vanity address faster. But that's another topic for now.
