mcdouglasx
December 28, 2025, 03:05:15 PM Last edit: December 28, 2025, 05:02:44 PM by mcdouglasx
It's regrettable... a desperate maneuver.

Understand this: there's no magical difference in GPU effort between processing a large block with prefixes and stopping at 65%. The workload is the same. What changes is that you're throwing away 35% of your probability with an arbitrary stop, while we're reinvesting it in mobility.

Stop dismissing the "Dynamic Scheduler" as something impossible, or as something that degrades performance tenfold. @WP is already doing it, and his tests show that it's feasible and efficient. You can't argue that something "can't be done" when someone has already published the results of doing it.

The prefix method is the best strategy for "trying your luck" because the "failure" isn't a dead end at the end of each block; it's a calculated statistical failure (less than 1/N).
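One way to put a number on that per-block failure mode (a minimal sketch added for illustration; the block size, prefix length, and the 1/16^L prefix-hit model are assumptions, not figures from this thread):

Code:
import random

B = 4096        # assumed block size
L = 3           # assumed hex-prefix length
p = 1 / 16**L   # chance a uniform hash starts with the target prefix
TRIALS = 100_000

missed = 0
for _ in range(TRIALS):
    target_pos = random.randrange(B)  # target uniform within its block
    # The block is abandoned early iff some earlier key fires the prefix;
    # that happens with probability 1 - (1 - p)**target_pos.
    if random.random() > (1 - p) ** target_pos:
        missed += 1

print(f"empirical per-block miss rate: {missed / TRIALS:.3f}")

With these toy numbers (p * B is about 1), the miss rate lands near 0.37, which is consistent with the roughly 63% success rates the simulations later in this thread report for the prefix method.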
Quote from: kTimesG
The last proton in existence will decay before the mythological unicorns you are bragging about prefix their way into breaking three layers of cryptography and de-contaminating the location biases of the freaking entropy-filled blocks. Since you are obviously right, as usual, you should take Bram's advice and publish your remarkable ideas in a serious venue, where the world's cryptographers can take note of your astonishing discoveries. After that, you should present the implementation that breaks the laws of basic computing: running a sequential algorithm in parallel. Finally, maybe you should also present your "dynamic scheduler" that works just as fast as... not having one. Because "workloads". LMFAO is the best reaction I can offer, since it's clear you're totally clueless about why H160 even runs fast on a GPU (hint: it's because it isn't bloated with bullshit ideas).

Wow, you've turned a technical thread into a circus of nonsense, sarcasm, and things taken out of context, just because you can't accept that blind cutting is strategically absurd compared to the dynamic probability of prefixes... congratulations, you've earned the respect of the forum trolls!
kTimesG
December 29, 2025, 11:29:09 PM
There's absolutely nothing technical here in any aspect at all. The circus restarted the moment you brought up range contamination, so bringing up unicorns, mushrooms, or "blah-blah" in return is simply the appropriate thing to do.

Independent events do not care about any order. It's a waste of time to repeat the same thing a million times and get back the same broken record, which completely ignores the basic definition of a uniform distribution. Absolutely everything you are defending about your magic theorem is replaceable by swapping the H160 inputs with others, ending up with a perfectly continuous X% scan that is equivalent and identical to whatever the magic theorem ends up scanning.

A perfectly continuous scan to which all of your arguments also apply equally. And no, it is not, as you mention, X% of each "block". It's simply whatever X% you want it to be: beginning, end, middle, or picked from random inputs; they will all work exactly the same, and the same arguments you present apply to each of them.

This conclusion is obvious to anyone who spends more than 60 seconds thinking about the subject, but not to you: you'll keep dragging this fallacy along with you, when in fact all you are actually doing is proving that H160 is, indeed, a uniform distribution. If it weren't, then your theory would actually have gains. However, since you don't have even the slightest proof that H160 is NOT a uniform distribution, by immediate logic the theory cannot be true either.
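A toy illustration of this equivalence (a sketch added for concreteness; the sizes and the four layouts are arbitrary choices): for a target placed uniformly at random, any fixed set of M scanned positions out of N hits with probability M/N, regardless of how those positions are arranged.

Code:
import random

N = 10_000  # toy keyspace
M = 6_500   # positions scanned (65%)
TRIALS = 200_000

layouts = {
    "first 65%":        set(range(M)),
    "last 65%":         set(range(N - M, N)),
    "head of each 100": {i for i in range(N) if i % 100 < 65},
    "random 65%":       set(random.sample(range(N), M)),
}

for name, positions in layouts.items():
    hits = sum(random.randrange(N) in positions for _ in range(TRIALS))
    print(f"{name:>16}: {hits / TRIALS:.4f} (expected {M / N:.4f})")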
Your attempt to use basic logic and math in the opposite direction (from a fake conclusion back to a fake hypothesis), with no proof other than "contaminated ranges" and "location biases", which do not exist, can ultimately yield only two consequences:

1. Sarcasm.
2. A silent ignore from everyone, which is probably why you long ago exhausted the patience of most forum members.

Maybe you'll think about that tattoo after all. It would be cool, and you'd have plenty of people curious about what independent events are, instead of polluting the forum further.
mcdouglasx
December 30, 2025, 01:13:46 AM Last edit: December 30, 2025, 04:56:30 AM by mcdouglasx
snip
You keep reciting the definition of "uniform distribution" like a mantra, but you don't understand that this is precisely the argument that's sinking you. Nobody questions that H160 is uniform. What I question is your lack of strategic vision. In a uniform distribution, each key has the same probability; therefore, there's no logical reason to choose a sequential scan that prevents you from ever looking at the last 35% of the range. If everything is uniform, skipping based on a signal (a prefix) is a way to optimize the scan, not a change to any key's probability.

You say that a continuous scan of X% is "identical" to my prefix scan. False. In your continuous 65% scan, if the key is in the remaining 35%, your probability is ZERO. In my prefix scan, I cover 100% of the range in a skipping fashion. If the key lies within that 35% you're ignoring, my method can still find it, because it has no arbitrary exclusion zone.

You talk about "basic computer science" but ignore the fact that the world's most advanced search systems, like Google's or AI's, don't use flat sequential scans; they use heuristics. My method isn't magic; it's resource optimization. Your insistence that sequential is better because it's simpler is why you're stuck in theory while others are advancing in practice.

So you can keep lecturing on independent events, but the reality of the puzzle is simple: coverage is key. Your "guillotine" blinds you to 35% of the potential success out of an obsession with continuity that contributes nothing to finding the key. If my "magic theory" bothers you so much, it's because it exposes that your clean method is simply a quick way to give up on a third of the puzzle. The forum isn't polluted by technical debate; it's tainted by the arrogance of those who prefer to be right on paper rather than have a real chance of finding the key. Keep your perfect scan; I'll stick with the method that doesn't force me to ignore the treasure if it falls outside your comfort zone.

I know it's a uniform distribution, and I know what independent events are, but this is about search heuristics, not a statistics exam built on an assumption of infinite trials where outcomes are averaged. Remember, you're looking for a single, discrete event (the private key), not counting how many prefixes there are in 2^256.

edit: Since you like AI, as you've demonstrated in other threads, I asked it to modify the code for a comparison of the 3 methods: one with a fixed cutoff, one with a random stop, and one with prefixes (before you reach for the AI fallacy: the math is the same no matter where the code came from; it's just for demonstration purposes).

Code:
import hashlib
import random
import time
import math
import statistics

# === CONFIGURATION ===
TOTAL_SIZE = 100_000
RANGE_SIZE = 4_096
PREFIX_LENGTH = 3
SIMULATIONS = 2000

# Bitcoin secp256k1 order
SECP_ORDER = int("0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141", 16)


def generate_h160(data):
    """RIPEMD160 hash of the key's decimal string (a stand-in for the full
    RIPEMD160(SHA256(pubkey)) pipeline; only uniformity matters here)."""
    h = hashlib.new('ripemd160', str(data).encode('utf-8'))
    return h.hexdigest()


def get_shuffled_blocks(total_blocks):
    blocks = list(range(total_blocks))
    random.shuffle(blocks)
    return blocks


# --- SEARCH METHODOLOGIES ---

def fixed_65_search(dataset, block_size, target_hash, block_order):
    """Simulates a static 65% cutoff strategy."""
    checks = 0
    cutoff = int(block_size * 0.65)
    for block_idx in block_order:
        start = block_idx * block_size
        for i in range(start, start + cutoff):
            if i >= len(dataset):
                break
            checks += 1
            if generate_h160(dataset[i]) == target_hash:
                return {"checks": checks, "found": True}
    return {"checks": checks, "found": False}


def random_stop_search(dataset, block_size, target_hash, block_order):
    """Simulates a random stop strategy within each block."""
    checks = 0
    for block_idx in block_order:
        start = block_idx * block_size
        stop_point = random.randint(1, block_size)
        for i in range(start, start + stop_point):
            if i >= len(dataset):
                break
            checks += 1
            if generate_h160(dataset[i]) == target_hash:
                return {"checks": checks, "found": True}
    return {"checks": checks, "found": False}


def prefix_heuristic_search(dataset, block_size, prefix_len, target_hash, block_order):
    """Simulates the dynamic prefix-based jumping strategy (heuristic)."""
    target_prefix = target_hash[:prefix_len]
    checks = 0
    for block_idx in block_order:
        start = block_idx * block_size
        end = min(start + block_size, len(dataset))
        for i in range(start, end):
            checks += 1
            current_hash = generate_h160(dataset[i])
            if current_hash == target_hash:
                return {"checks": checks, "found": True}
            if current_hash.startswith(target_prefix):
                break
    return {"checks": checks, "found": False}


def run_experiment():
    results = {
        "fixed_65": {"found": 0, "dead_zone_hits": 0, "checks": []},
        "random_stop": {"found": 0, "dead_zone_hits": 0, "checks": []},
        "prefix_heuristic": {"found": 0, "dead_zone_hits": 0, "checks": []}
    }
    total_blocks = math.ceil(TOTAL_SIZE / RANGE_SIZE)
    dead_zone_threshold = int(RANGE_SIZE * 0.65)
    total_keys_in_dead_zone = 0

    print(f"Starting {SIMULATIONS} simulations...")

    for _ in range(SIMULATIONS):
        offset = random.randint(0, SECP_ORDER - TOTAL_SIZE)
        dataset = [offset + i for i in range(TOTAL_SIZE)]
        # Select a hidden target key
        target_idx = random.randint(0, TOTAL_SIZE - 1)
        target_hash = generate_h160(dataset[target_idx])
        # Check whether the key lies in the trailing 35% 'Dead Zone'
        is_in_dead_zone = (target_idx % RANGE_SIZE) >= dead_zone_threshold
        if is_in_dead_zone:
            total_keys_in_dead_zone += 1

        block_order = get_shuffled_blocks(total_blocks)

        # Run search models
        f65 = fixed_65_search(dataset, RANGE_SIZE, target_hash, block_order)
        rnd = random_stop_search(dataset, RANGE_SIZE, target_hash, block_order)
        pre = prefix_heuristic_search(dataset, RANGE_SIZE, PREFIX_LENGTH, target_hash, block_order)

        # Record statistics
        for res, key in zip([f65, rnd, pre], results.keys()):
            if res["found"]:
                results[key]["found"] += 1
                results[key]["checks"].append(res["checks"])
                if is_in_dead_zone:
                    results[key]["dead_zone_hits"] += 1

    print("\n" + "=" * 60)
    print("SEARCH STRATEGY ANALYSIS: THE DEAD ZONE CHALLENGE")
    print("=" * 60)
    print(f"Total Simulations: {SIMULATIONS}")
    print(f"Keys located in the 35% Dead Zone: {total_keys_in_dead_zone}")
    print("-" * 60)
    header = f"{'Method':<18} | {'Total Success':<15} | {'Dead Zone Hits':<15} | {'DZ Efficiency'}"
    print(header)
    print("-" * len(header))
    for name, data in results.items():
        dz_eff = (data["dead_zone_hits"] / total_keys_in_dead_zone * 100) if total_keys_in_dead_zone > 0 else 0
        print(f"{name:<18} | {data['found']:<15} | {data['dead_zone_hits']:<15} | {dz_eff:>12.2f}%")

    print("\n[Resource Efficiency]")
    for name, data in results.items():
        avg_checks = statistics.mean(data["checks"]) if data["checks"] else 0
        print(f"{name:<18}: {avg_checks:,.0f} average checks per find")


if __name__ == '__main__':
    run_experiment()
Results:

test 1
============================================================
SEARCH STRATEGY ANALYSIS: THE DEAD ZONE CHALLENGE
============================================================
Total Simulations: 1000
Keys located in the 35% Dead Zone: 344
------------------------------------------------------------
Method            | Total Success | Dead Zone Hits | DZ Efficiency
------------------------------------------------------------
fixed_65          | 656           | 0              | 0.00%
random_stop       | 503           | 73             | 21.22%
prefix_heuristic  | 618           | 139            | 40.41%

[Resource Efficiency]
fixed_65         : 33,868 average checks per find
random_stop      : 26,736 average checks per find
prefix_heuristic : 32,257 average checks per find

test 2
============================================================
SEARCH STRATEGY ANALYSIS: THE DEAD ZONE CHALLENGE
============================================================
Total Simulations: 1000
Keys located in the 35% Dead Zone: 346
------------------------------------------------------------
Method            | Total Success | Dead Zone Hits | DZ Efficiency
------------------------------------------------------------
fixed_65          | 654           | 0              | 0.00%
random_stop       | 535           | 74             | 21.39%
prefix_heuristic  | 654           | 151            | 43.64%

[Resource Efficiency]
fixed_65         : 32,656 average checks per find
random_stop      : 26,068 average checks per find
prefix_heuristic : 32,430 average checks per find

test 3
============================================================
SEARCH STRATEGY ANALYSIS: THE DEAD ZONE CHALLENGE
============================================================
Total Simulations: 2000
Keys located in the 35% Dead Zone: 697
------------------------------------------------------------
Method            | Total Success | Dead Zone Hits | DZ Efficiency
------------------------------------------------------------
fixed_65          | 1303          | 0              | 0.00%
random_stop       | 1001          | 126            | 18.08%
prefix_heuristic  | 1239          | 305            | 43.76%

[Resource Efficiency]
fixed_65         : 33,029 average checks per find
random_stop      : 26,330 average checks per find
prefix_heuristic : 31,815 average checks per find

test 4
============================================================
SEARCH STRATEGY ANALYSIS: THE DEAD ZONE CHALLENGE
============================================================
Total Simulations: 5000
Keys located in the 35% Dead Zone: 1721
------------------------------------------------------------
Method            | Total Success | Dead Zone Hits | DZ Efficiency
------------------------------------------------------------
fixed_65          | 3279          | 0              | 0.00%
random_stop       | 2559          | 296            | 17.20%
prefix_heuristic  | 3165          | 789            | 45.85%

[Resource Efficiency]
fixed_65         : 33,069 average checks per find
random_stop      : 25,639 average checks per find
prefix_heuristic : 32,178 average checks per find

The 65% cutoff method is a guillotine that leaves you blind to 35% of the puzzle. The random stop is inefficient chaos. The prefix method is the only engineering strategy that balances coverage speed with a real possibility of finding the key anywhere within the range.

As you can see, statistically the methods may look similar, but that doesn't change the fact that the prefix method leaves no gap due to an arbitrary cutoff, nor does it surrender to the chaos of randomness. Is it clear now that the prefix method is the most effective for searching with limited resources? What do you prefer: relying on a fixed cutoff and risking that the target is in the unexplored 35%? Leaving it to chance with a random stop? Or using prefixes, with a margin of error of less than 1/N distributed across 100% of the range?

As you can see, my method doesn't break the rules of mathematics; it affects neither the uniform distribution nor the independence of events. It's simply an intelligent approach, which should be valued instead of attacked by egocentric purists who don't grasp the essence of what I'm saying. They only see statistics and numbers, forgetting that WE'RE LOOKING FOR A SINGLE TARGET.
kTimesG
December 30, 2025, 11:03:37 AM Last edit: December 30, 2025, 11:30:17 AM by kTimesG
Quote from: mcdouglasx
The 65% cutoff method is a guillotine that leaves you blind to 35% of the puzzle. The random stop is inefficient chaos. The prefix method is the only engineering strategy that balances coverage speed with a real possibility of finding the key anywhere within the range.
As you can see, statistically the methods may look similar, but that doesn't change the fact that the prefix method leaves no gap due to an arbitrary cutoff, nor does it surrender to the chaos of randomness.
Is it clear now that the prefix method is the most effective for searching with limited resources?
What do you prefer: relying on a fixed cutoff and risking that the target is in the unexplored 35%?
Except that the prefix method also leaves a 35% blind gap. You keep forgetting about it, as if it never exists. Fix the AI nonsense and repeat your tests to also check the cases where the magic theorem failed. Both the continuous and the random versions will find the target when the prefix method doesn't, and vice versa, making all approaches, obviously, equivalent in terms of results, successes, and so on. So it's not more or less effective than a plain old continuous scan with the same number of operations. However, because computing continuous public keys (the inputs to H160) is orders of magnitude faster than arbitrarily stopping and restarting, it CAN NEVER be more efficient. This is the entire point people are trying to make you understand.

You keep confusing the uniformity of H160 with the uniformity of its inputs. Again: order does not matter, so the concept of "continuous blocks" does not exist, because the inputs have nothing to do with the uniformity of H160. As a consequence, the concept of "range contamination" also does not exist, because the ranges do not exist, because the concept of "order" does not exist. Which means the magic theorem also cannot be true, since it relies on the existence of "order". Capisci?

The uniformity of H160 is about the H160 outputs, which bear no relation whatsoever to what they hash (whether SHA hashes of a public key, words, sentences, or terabytes of data). So if you ran your tests with these inputs: RIPEMD(SHA256(pubKey(correctTargetPrivateKey))) together with 2**70 - 1 other RIPEMD(whateverRandomData), you'd still find the exact same number of prefixes, and have the same 1 in 2**70 chance of hitting the key (that is, the desired INPUT, whether it's the input of a SHA256, three gigabytes of some known blob, or a piece of Shakespeare dramaturgy) at each and every attempt, for every way you might arrange your "blocks" in all possible permutations, combinations, and setups. Relevance to finding what we're looking for? ZERO.
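That input-independence is easy to spot-check (a sketch added here; the prefix, sample size, and input choices are arbitrary, and hashlib's ripemd160 may be unavailable on some OpenSSL builds): hashing consecutive integers and hashing random blobs produce prefix hits at the same 1/16^3 rate.

Code:
import hashlib
import os

PREFIX = "abc"  # arbitrary 3-hex-char prefix
N = 200_000

def h160(data: bytes) -> str:
    # full pipeline: RIPEMD160(SHA256(x))
    return hashlib.new("ripemd160", hashlib.sha256(data).digest()).hexdigest()

seq = sum(h160(i.to_bytes(8, "big")).startswith(PREFIX) for i in range(N))
rnd = sum(h160(os.urandom(8)).startswith(PREFIX) for _ in range(N))

print(f"sequential inputs: {seq / N:.6f}")
print(f"random inputs:     {rnd / N:.6f}")
print(f"expected rate:     {1 / 16**3:.6f}")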
mcdouglasx
December 30, 2025, 02:30:00 PM Last edit: December 30, 2025, 02:58:17 PM by mcdouglasx
snip
It's like telling a child that having 2 ice creams is the same whether you give them 2 or take 2 away from the 4 they had. The final number is the same, but the operational reality is completely different. Your method 'removes' 35% of the range from the equation, leaving you blind. My method 'distributes' the effort so that I never lose the opportunity for success. kTimesG, you're stuck looking at the final number on paper; I'm sticking with the method that truly has the flexibility to find the key wherever it is along the way.

If order didn't matter and it were all blind luck, the random stop would have performed just as well as my heuristic. I think I've made my point about search heuristics versus purist statistics clear. Let other users see for themselves and draw their own conclusions. I have things to do and money to earn; I can't keep wasting time on this technical-purist-engineering cycle.

edit: Regarding efficiency, the statistical significance indicates that it requires almost the same effort as your block-based method. Your cutoff at 65% is the same; you would need to query the target by key, while I would query the prefix. The target is only queried if the prefix is found, and since the ratio is 1/N, a large block is used because the method is scalable, avoiding bottlenecks caused by small blocks. In other words, it only requires one more step: querying the target after finding a prefix, and this action is negligible.

Happy new year.
kTimesG
December 30, 2025, 04:13:19 PM Last edit: December 30, 2025, 04:25:29 PM by kTimesG
Quote from: mcdouglasx
Your method 'removes' 35% of the range from the equation, leaving you blind. My method 'distributes' the effort so that I never lose the opportunity for success. kTimesG, you're stuck looking at the final number on paper; I'm sticking with the method that truly has the flexibility to find the key wherever it is along the way.

Let's try for the 1,000,001st time: your magic method removes 35% of the equation's success. A simple scan distributes the effort so that it never loses the opportunity for success (e.g. it touches parts of your missing 35%, just as the magic method touches parts of the normal scan's missing 35%).

Quote from: mcdouglasx
If order didn't matter and it were all blind luck...

#1,000,002: there is no concept of order when we are talking about independent events of a uniform distribution. When analyzing multiple independent events taken together, THEIR ORDER DOES NOT MATTER. There is absolutely no formula on Earth that uses their order; otherwise it is not a UNIFORM DISTRIBUTION OF INDEPENDENT EVENTS, it is SOMETHING ELSE. There are no "locations" to speak of; those are INPUT parameters for the stuff that goes into computing H160's own input. Again: you might as well hash human DNA for all it matters, and your prefix method will generate the exact same results, even though, oops, the "location" point of reference doesn't exist. What to do, what to do...

Quote from: mcdouglasx
Regarding efficiency, the statistical significance indicates that it requires almost the same effort as your block-based method. Your cutoff at 65% is the same; you would need to query the target by key, while I would query the prefix. The target is only queried if the prefix is found, and since the ratio is 1/N, a large block is used because the method is scalable, avoiding bottlenecks caused by small blocks.

You can take your statistical-significance nonsense elsewhere. There is zero effort in evenly distributing X% of fixed work over N workers. Instead, the magic theorem is the one that breaks arbitrarily, not the fixed-work method. You are in a very deep misunderstanding of basic things in computer science. And WTF is "query the target by key"? There is no prefix to check until AFTER the H160 is already FINISHED. Jesus Christ, are you OK? It's not the first time you've made such a blatant statement. I guess the entire codebase of human intelligence is of no help here, let alone simply digging into the code of whatever vanity searcher you like.

Quote from: mcdouglasx
In other words, it only requires one more step: querying the target after finding a prefix, and this action is negligible.

It's stuff like this that makes it clear there is nothing technical to discuss with you. Doing things in exact reverse is your trademark anyway. You must be from the future! HNY.
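The "prefix is only visible after the full hash" point fits in a few lines (a trivial sketch added for clarity; names are illustrative): the digest must be fully computed before any part of it, prefix or whole, can be compared, so a prefix check saves no hashing work.

Code:
import hashlib

def h160_hex(data: bytes) -> str:
    # The complete RIPEMD160(SHA256(x)) digest exists before ANY comparison.
    return hashlib.new("ripemd160", hashlib.sha256(data).digest()).hexdigest()

digest = h160_hex(b"example")
prefix_match = digest.startswith("abc")  # cheap string compare, after hashing
full_match = digest == "f" * 40          # equally cheap, after the same hashing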
mcdouglasx
December 30, 2025, 04:41:08 PM
snip
10,000 simulations:

============================================================
SEARCH STRATEGY ANALYSIS: THE DEAD ZONE CHALLENGE
============================================================
Total Simulations: 10000
Keys located in the 35% Dead Zone: 3349
------------------------------------------------------------
Method            | Total Success | Dead Zone Hits | DZ Efficiency
------------------------------------------------------------
fixed_65          | 6651          | 0              | 0.00%
random_stop       | 5191          | 599            | 17.89%
prefix_heuristic  | 6392          | 1467           | 43.80%

[Resource Efficiency]
fixed_65         : 32,706 average checks per find
random_stop      : 25,793 average checks per find
prefix_heuristic : 32,191 average checks per find

Bitcoin hashes have an order, even though they form a uniform distribution. The order is dictated by the private keys within the range (you see them as "balls in an urn" that get mixed up every time you reach in). Whether you like it or not, this gives the hashes a fixed order: private key 1 will always produce the same hash; it's not as if the hashes change every time you pass through that indexed range. So, by cutting off at 65%, you're using the first 65% of the private keys in that block; therefore, it will never be the same. And again, you're trying to distort the reality of the search with textbook theories that we all understand perfectly.

And yes, my method can fail if the target comes after a false-positive prefix, but that's an acceptable statistical error, as you'll see in the results. Because it's acceptable, and because we don't know whether the target sits after a prefix in a block or not, it's better to bet on that than to bet on nothing: as the results show, in your dead zone my proposal finds the target more than 40% of the time.
WanderingPhilospher (OP)
December 30, 2025, 04:46:58 PM
I think we got off track when we started comparing searching 65% of a range (beginning, end, or middle), stopping, and saying it had the EXACT same % chance to find the key as the prefix method. And that could have been my fault.

If we are searching for a specific H160, the 65% stop method would NOT be the same as the prefix method; AND we all agree that a 100% scan is the superior method, so no need to bring it up.

We need to bring back to the forefront that we are actually searching for a target hash, i.e. we are looking for something specific, not just quantities of prefixes.

If there are 0 or 1 prefixes in a subrange we are searching: the prefix method will know 100% of the time. It will scan 100% and find 0, or it will keep searching and find the 1, even if that takes searching 99% of the subrange. The 65% stop method will only know whether there was 0 or 1 in the 65% of the range scanned (beginning, middle, or end); if the prefix is in the 66%-100% portion, it will not find it.

If there are 1+ prefixes in a subrange we are searching: the prefix method will find at least 1 of the prefixes. The 65% stop method will find 0 to all of the prefixes, depending on where they sit in the subrange.

So, when searching for a specific target, the prefix method has a slight advantage, because it will always either find at least 1 prefix or do a 100% scan and conclude that none exist in the subrange.

BUT, if we are trying to find quantities of prefixes, and not necessarily a specific target, then both, on average, will find the same number of prefixes.

The prefix method will always win, or at worst tie, if there are 0 or 1 prefixes in a subrange. If there are 2 prefixes in the subrange, the prefix method has a 50% chance of finding the target; with 3 prefixes, 33%; and so on.

The 65% method will always win, or at worst tie, if ALL prefixes are within the 65% of the subrange searched. If 1 of 2 prefixes is within the 65% searched = 50%. If 0 of 2 = 0%. If 2 of 3 = 66%. If 1 of 3 = 33%. If 0 of 3 = 0%.
Did I miss anything?
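Those per-count figures are easy to check numerically (a sketch added here, under the simplifying assumptions that k prefix-matching positions fall uniformly in the subrange and exactly one of them is the target):

Code:
import random

N = 4096                # assumed subrange size
CUTOFF = int(N * 0.65)  # 65% stop point
TRIALS = 100_000

for k in (1, 2, 3, 4):
    first_hit = in_65 = 0
    for _ in range(TRIALS):
        spots = random.sample(range(N), k)  # k prefix-matching positions
        target = random.choice(spots)       # one of them is the real target
        if target == min(spots):            # prefix scan stops at first match
            first_hit += 1
        if target < CUTOFF:                 # 65% scan sees it iff before cutoff
            in_65 += 1
    print(f"k={k}: prefix finds target {first_hit / TRIALS:.3f} (~1/{k}), "
          f"65% scan {in_65 / TRIALS:.3f} (~0.65)")

The 1/k column reproduces the 25%/33%/50% figures above; the 65% scan stays at 0.65 regardless of k, because it never stops early on a prefix.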
mcdouglasx
December 30, 2025, 05:24:56 PM Last edit: December 30, 2025, 06:08:47 PM by mcdouglasx
snip
Everything you've said is exactly correct. I know the overall statistics of the methods tend to equalize in an infinite world, but setting pure statistics aside and focusing on the search for a unique target in a finite indexed range, prefixes are the best strategy because they always retain a statistical chance, whether there are 1, 2, 3, or more prefixes. Even with more than one prefix, there is always the possibility that the target is the first one we find; with a cutoff limit of 65%, if the target is at 66%, it has zero chance. And I say it's the best strategy because a 100% search is simply not viable for searchers with limited resources today; it's already too great a waste of time and money, even with GPU farms.
kTimesG
December 30, 2025, 05:34:55 PM
Quote from: mcdouglasx
Bitcoin hashes have an order, even though they form a uniform distribution. The order is dictated by the private keys within the range (you see them as "balls in an urn" that get mixed up every time you reach in). Whether you like it or not, this gives the hashes a fixed order. Private key 1 will always produce the same hash; it's not as if the hashes change every time you pass through that indexed range.
The only "order" of any hash is the hash value itself. Again: you are using the root input of three layers of crypto algorithms as some basis of "order" in the final output (the uniform distribution). So, yes, Bitcoin hashes do have an order indeed: it's the order of the H160's value, not the value of the private key that generated the public key that was hashed via SHA256, which ultimately resulted in the H160. You also misunderstood the balls thing. The balls are not private keys, the balls are the H160 values. So there are 2**160 balls (all possible balls) in the urn. There is no notion of private keys to speak of in this context. There is however, the notion that "once we extract a H160 ball, we put it back in, and we do this 2**70 times". So: where do you see the "signaling"? Do the balls talk to each other? Does the one who extracts the balls forget to put the ball back in? Or is the magic unicorn whispering to the balls: "hey, this guy's at the next private key, watch out what ball comes next!". The simple fact that there are ~ 2**256 distinct by definition public keys in secp256k1, but only AT MOST 2**160 uniquely different Bitcoin hashes, is enough to classify "Bitcoin hashes have an order, based on their private key", as total non-sense.
WanderingPhilospher (OP)
December 30, 2025, 05:44:11 PM
Quote from: mcdouglasx
Even with more than 1 prefix, we will always have the chance that the target will be the first one we find, but at 65%.

How did you arrive at 65%? To me, it would be based on how many prefixes are in the range, unless you are just taking an average. If there are: 4 prefixes = 25%; 3 prefixes = 33%; 2 prefixes = 50%. So I would say on average a 36% chance we find the target hash if there is more than 1 prefix in a subrange. It would jump to 42% if there were never 4 prefixes in a subrange, but we can't conclude that 100%.

During puzzle 69, when I ran a lot of real-world tests (actually searching for 69 while trying to come up with a good prefix method), I found that skipping ahead by anywhere from 1 up to 2^x (depending on how many bits I was trying to match: 40, 44, or 48) found prefixes more quickly, with fewer zero-found subranges.

So if I found a prefix with a private key of 0x100, I would add some value to that private key, and that would be my new start range, plus whatever size in bits I was looking for. So if I was looking for 44-bit matches: 0x100 + 2^28 = 0x10000100:0x100000000000 = new search range (just an example; all real ranges were in the 69-bit range). So I never broke the range into a predetermined number of subranges; I let the found hashes' private keys dictate the next subrange, roughly as in the sketch below.
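A minimal sketch of that restart rule as I read it (names and defaults are mine, purely illustrative): each prefix hit at key `found` opens a fresh window starting a fixed offset past the hit, rather than using predetermined subranges.

Code:
def next_window(found: int, step: int = 2**28, window: int = 2**44,
                upper: int = 2**69 - 1) -> tuple[int, int]:
    """New search window opened just past a prefix hit at `found`."""
    start = found + step
    return start, min(start + window, upper)

start, end = next_window(0x100)
print(hex(start), hex(end))  # 0x10000100 0x100010000100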
mcdouglasx
December 30, 2025, 06:36:40 PM Last edit: December 30, 2025, 07:56:29 PM by mcdouglasx
Quote from: WanderingPhilospher
How did you arrive at 65%? To me, it would be based on how many prefixes are in the range, unless you are just taking an average. If there are: 4 prefixes = 25%; 3 prefixes = 33%; 2 prefixes = 50%. So I would say on average a 36% chance we find the target hash if there is more than 1 prefix in a subrange. It would jump to 42% if there were never 4 prefixes in a subrange, but we can't conclude that 100%.

Exactly. I just misspelled what I meant to say, which was this: even with more than one prefix, we will always have the possibility that the target is the first one we find; but with a cutoff limit of 65%, if the target is at 66%, it has zero chance.

Quote from: WanderingPhilospher
snip

We would need to run tests to see whether it's more cost-effective to choose fixed blocks or to do it adaptively, but if you were finding more prefixes, that's a good sign it's worth comparing both strategies with unique targets in both tests, since you were achieving a more efficient coverage density by finding fewer empty blocks. Similarly, it could be used for creating vanity addresses: if you find prefixes faster, you create the vanity address faster. But that's another topic for now.

update:
I asked the AI to modify the code for the comparison you mentioned. I chose a 30% jump after each prefix; the results, although they need more checks on average, are more promising.

Code:
import hashlib
import random
import math
import statistics

# === CONFIGURATION ===
TOTAL_SIZE = 100_000
RANGE_SIZE = 4_096
PREFIX_LENGTH = 3
SIMULATIONS = 10000
ADAPTIVE_JUMP_RATE = 0.30

# Bitcoin secp256k1 order
SECP_ORDER = int("0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141", 16)


def generate_h160(data):
    """RIPEMD160 hash of the key's decimal string (stand-in for full H160)."""
    h = hashlib.new('ripemd160', str(data).encode('utf-8'))
    return h.hexdigest()


# --- SEARCH METHODOLOGIES ---

def prefix_fixed_block_search(dataset, block_size, prefix_len, target_hash):
    """Original heuristic: skip to the next fixed block upon prefix collision."""
    target_prefix = target_hash[:prefix_len]
    checks = 0
    i = 0
    while i < len(dataset):
        block_end = min(i + block_size, len(dataset))
        found_in_block = False
        for j in range(i, block_end):
            checks += 1
            current_hash = generate_h160(dataset[j])
            if current_hash == target_hash:
                return {"checks": checks, "found": True}
            if current_hash.startswith(target_prefix):
                i = block_end  # jump to the start of the next fixed block
                found_in_block = True
                break
        if not found_in_block:
            i = block_end
    return {"checks": checks, "found": False}


def prefix_adaptive_jump_search(dataset, block_size, prefix_len, target_hash):
    """WP-inspired heuristic: 30% dynamic jump forward from the collision point."""
    target_prefix = target_hash[:prefix_len]
    jump_size = int(block_size * ADAPTIVE_JUMP_RATE)
    checks = 0
    i = 0
    while i < len(dataset):
        checks += 1
        current_hash = generate_h160(dataset[i])
        if current_hash == target_hash:
            return {"checks": checks, "found": True}
        if current_hash.startswith(target_prefix):
            # On prefix collision, jump 30% of a block ahead of current position
            i += jump_size
        else:
            i += 1
    return {"checks": checks, "found": False}


def run_experiment():
    results = {
        "fixed_block": {"found": 0, "checks": []},
        "adaptive_jump": {"found": 0, "checks": []}
    }
    print(f"Starting {SIMULATIONS} simulations...")
    print(f"Setup: Range {RANGE_SIZE} | Adaptive Jump Size: {int(RANGE_SIZE * ADAPTIVE_JUMP_RATE)} units")

    for s in range(SIMULATIONS):
        offset = random.randint(0, SECP_ORDER - TOTAL_SIZE)
        dataset = [offset + i for i in range(TOTAL_SIZE)]
        target_idx = random.randint(0, TOTAL_SIZE - 1)
        target_hash = generate_h160(dataset[target_idx])

        # Test 1: Fixed block strategy
        res_fixed = prefix_fixed_block_search(dataset, RANGE_SIZE, PREFIX_LENGTH, target_hash)
        if res_fixed["found"]:
            results["fixed_block"]["found"] += 1
            results["fixed_block"]["checks"].append(res_fixed["checks"])

        # Test 2: Adaptive 30% jump strategy
        res_adapt = prefix_adaptive_jump_search(dataset, RANGE_SIZE, PREFIX_LENGTH, target_hash)
        if res_adapt["found"]:
            results["adaptive_jump"]["found"] += 1
            results["adaptive_jump"]["checks"].append(res_adapt["checks"])

    # --- FINAL REPORT ---
    print("\n" + "=" * 65)
    print("COMPARISON: FIXED BLOCKS VS ADAPTIVE JUMP (30%)")
    print("=" * 65)
    header = f"{'Methodology':<20} | {'Success':<10} | {'Avg Checks':<15} | {'Efficiency'}"
    print(header)
    print("-" * len(header))
    for name, data in results.items():
        avg_checks = statistics.mean(data["checks"]) if data["checks"] else 0
        success_rate = (data["found"] / SIMULATIONS) * 100
        print(f"{name:<20} | {data['found']:<10} | {avg_checks:,.0f} | {success_rate:>10.2f}%")


if __name__ == '__main__':
    run_experiment()
So far, 1,000 simulations:

Starting 1000 simulations...
Setup: Range 4096 | Adaptive Jump Size: 1228 units

=================================================================
COMPARISON: FIXED BLOCKS VS ADAPTIVE JUMP (30%)
=================================================================
Methodology          | Success    | Avg Checks      | Efficiency
-----------------------------------------------------------------
fixed_block          | 613        | 31,864          | 61.30%
adaptive_jump        | 755        | 38,899          | 75.50%

10,000 simulations:

Starting 10000 simulations...
Setup: Range 4096 | Adaptive Jump Size: 1228 units

=================================================================
COMPARISON: FIXED BLOCKS VS ADAPTIVE JUMP (30%)
=================================================================
Methodology          | Success    | Avg Checks      | Efficiency
-----------------------------------------------------------------
fixed_block          | 6300       | 32,010          | 63.00%
adaptive_jump        | 7692       | 38,629          | 76.92%
================================================================= COMPARISON: FIXED BLOCKS VS ADAPTATIVE JUMP (30%) ================================================================= Methodology | Success | Avg Checks | Efficiency ---------------------------------------------------------------- fixed_block | 6300 | 32,010 | 63.00% adaptive_jump | 7692 | 38,629 | 76.92% In my opinion, without going into too much detail, adaptive hopping is an upgrade to the fixed-block method, although it forces a single path, so if the target is at the end, it will always take longer than with a randomized-block method. However, randomizing the fixed block would also send the target to the last blocks. But what if we randomized every time a block is scanned when using fixed blocks? .... The benefit of adaptive hopping is clear in comparison.
JackMazzoni
December 30, 2025, 11:11:19 PM
I think the prefix method is great if it records the blocks that it skips.
kTimesG
December 31, 2025, 02:51:24 PM Last edit: December 31, 2025, 03:37:25 PM by kTimesG
Quote from: WanderingPhilospher
During puzzle 69, when I ran a lot of real-world tests (actually searching for 69 while trying to come up with a good prefix method), I found that skipping ahead by anywhere from 1 up to 2^x (depending on how many bits I was trying to match: 40, 44, or 48) found prefixes more quickly, with fewer zero-found subranges.
So if I found a prefix with a private key of 0x100, I would add some value to that private key, and that would be my new start range, plus whatever size in bits I was looking for. So if I was looking for 44-bit matches:
0x100 + 2^28 = 0x10000100:0x100000000000 = new search range (just an example; all real ranges were in the 69-bit range). So I never broke the range into a predetermined number of subranges; I let the found hashes' private keys dictate the next subrange.
Something doesn't add up in what you are stating. If you never broke the range into subranges, then why are you also stating that you had "fewer zero-found subranges"? Compared to what? Once you have a prefix hit, you no longer count the scanning of the rest of that specific subrange within its initial bounds. Your "new subranges" overlap with "old subranges" that already contained your prefix hit, which makes the statement really ambiguous, since each subrange's definition is conditioned on an earlier event having a specific outcome.

This is just a psychological bias, and it doesn't respect the mathematical definition of "a set of independent events", because your subrange depends on an earlier event. In short: your sets overlap in a dependent manner, so the arithmetic reflects this, but it's no longer arithmetic performed on a uniform distribution of independent events.

It's like saying that if we have 5-bit subranges and the sequence "1110000000", then we have 3 subranges that hit and 1 that didn't, when in reality we can only split 10 bits into two distinct sets, not 4 or more. You can replace each bit with some expansion of the desired bit length to apply the same bias to massive intervals.

I know you are smart enough to understand that having any such advantage (finding prefixes "faster" than the expected average per number of checks) would basically mean Bitcoin is busted, in which case we can all turn off the lights and leave the building!
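The 10-bit example runs in a few lines (a sketch added for concreteness; the restart policy is my reading of the description above):

Code:
bits = "1110000000"
W = 5  # window (subrange) size

# Disjoint windows: only two exist -> 1 hit, 1 miss.
print([bits[i:i + W] for i in range(0, len(bits), W)])  # ['11100', '00000']

# Restart-just-past-each-hit windows overlap and inflate the hit count.
i, outcomes = 0, []
while i + W <= len(bits):
    hit = bits[i:i + W].find("1")
    outcomes.append("hit" if hit >= 0 else "miss")
    i += hit + 1 if hit >= 0 else W
print(outcomes)  # ['hit', 'hit', 'hit', 'miss'] -> 3 "hits" from 2 real windows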
Akito S. M. Hosana
December 31, 2025, 08:41:33 PM Last edit: December 31, 2025, 09:09:09 PM by Akito S. M. Hosana
Quote from: mcdouglasx
The benefit of adaptive hopping is clear in the comparison.
Code:
import secp256k1 as ice
import multiprocessing as mp
import os

# ============================================================
# CONFIGURATION
# ============================================================

PUZZLE = 71
LOWER_BOUND = 2 ** (PUZZLE - 1)   # smallest key for puzzle 71
UPPER_BOUND = 2 ** PUZZLE - 1     # largest key for puzzle 71

BLOCK_SIZE = 4096                 # block size for prefix heuristic
ADAPTIVE_JUMP_RATE = 0.30         # jump 30% of block size on prefix match
PREFIX_LENGTH = 3                 # number of hex chars for prefix heuristic
CPU_WORKERS = mp.cpu_count()      # number of parallel processes
CHUNK_SIZE = 200_000              # keys per worker per chunk

# ============================================================
# TARGET HASH160
# ============================================================

TARGET_HEX = "f6f5431d25bbf7b12e8add9af5e3475c44a0a5b8"
TARGET_BINARY = bytes.fromhex(TARGET_HEX)
TARGET_PREFIX = TARGET_HEX[:PREFIX_LENGTH]  # heuristic prefix

# ============================================================
# PRIVATE KEY → HASH160 FUNCTION
# ============================================================

def privkey_to_h160(dec: int, compressed: bool = True) -> bytes:
    # Assumed argument order for the iceland secp256k1 library:
    # (addr_type, is_compressed, privkey_int); 0 = P2PKH.
    # The original post passed (dec, compressed, 0), which is reversed.
    return ice.privatekey_to_h160(0, compressed, dec)

# ============================================================
# SUCCESS HANDLER
# ============================================================

def handle_success(dec: int):
    """Called when the target HASH160 is found.
    Exits all workers immediately using os._exit(0)."""
    print("\n" + "=" * 72)
    print("🔥 PRIVATE KEY FOUND 🔥")
    print("=" * 72)
    print(f"PID     : {os.getpid()}")
    print(f"DEC KEY : {dec}")
    print(f"HEX KEY : {hex(dec)}")
    print("=" * 72)
    os._exit(0)

# ============================================================
# WORKER FUNCTION
# ============================================================

def worker(worker_id: int, start: int, end: int, found_event):
    k = start
    jump = int(BLOCK_SIZE * ADAPTIVE_JUMP_RATE)
    checks = 0

    while k <= end:
        if found_event.is_set():
            return  # exit if another worker found the key

        h160 = privkey_to_h160(k)
        h160_hex = h160.hex()

        # full HASH160 match
        if h160 == TARGET_BINARY:
            found_event.set()
            handle_success(k)

        # prefix heuristic jump
        if h160_hex.startswith(TARGET_PREFIX):
            k += jump
        else:
            k += 1

        checks += 1

        # periodic progress log
        if checks % 100_000 == 0:
            print(f"[PID {os.getpid()}] checked {checks:,} keys")

# ============================================================
# FULL RANGE DRIVER
# ============================================================

def run():
    """Drives the full keyspace traversal from LOWER_BOUND to UPPER_BOUND.
    Splits each chunk among CPU_WORKERS to avoid overlapping ranges."""
    print("=" * 72)
    print(f"Bitcoin Puzzle {PUZZLE} - Full Range Multiprocessing")
    print(f"Workers: {CPU_WORKERS}")
    print("=" * 72)

    found_event = mp.Event()

    # iterate over the entire keyspace in chunks
    for chunk_start in range(LOWER_BOUND, UPPER_BOUND + 1, CHUNK_SIZE * CPU_WORKERS):
        processes = []

        chunk_end = min(chunk_start + CHUNK_SIZE * CPU_WORKERS - 1, UPPER_BOUND)
        progress_percent = (chunk_start - LOWER_BOUND) / (UPPER_BOUND - LOWER_BOUND) * 100

        print(f"[INFO] Processing chunk: [{chunk_start} ({hex(chunk_start)}), "
              f"{chunk_end} ({hex(chunk_end)})], progress: {progress_percent:.6f}%")

        # assign a unique sub-range to each worker
        for i in range(CPU_WORKERS):
            start = chunk_start + i * CHUNK_SIZE
            if start > UPPER_BOUND:
                break

            end = min(start + CHUNK_SIZE - 1, UPPER_BOUND)

            p = mp.Process(target=worker, args=(i, start, end, found_event), daemon=True)
            p.start()
            processes.append(p)

            print(f"[+] Worker {i} → range [{start} ({hex(start)}), {end} ({hex(end)})]")

        # wait for all workers in this chunk to finish
        for p in processes:
            p.join()

        if found_event.is_set():
            break  # exit immediately if key found

    print("\n[!] Full range exhausted. Target not found.")

# ============================================================
# ENTRY POINT
# ============================================================

if __name__ == "__main__":
    run()
The prefix match is used only as a heuristic signal to guide traversal, while a full HASH160 match is still required for success. This implementation is intentionally written in Python for clarity and experimentation; a practical high-performance version would need to be implemented in C++.
mcdouglasx
December 31, 2025, 09:26:14 PM
code:import hashlib import random import math import statistics
# === CONFIGURATION === TOTAL_SIZE = 100_000 RANGE_SIZE = 4_096 PREFIX_LENGTH = 3 SIMULATIONS = 100 ADAPTIVE_JUMP_RATE = 0.30
# Bitcoin secp256k1 order SECP_ORDER = int("0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141", 16)
def generate_h160(data): """Generates RIPEMD160(SHA256(data)) hash.""" h = hashlib.new('ripemd160', str(data).encode('utf-8')) return h.hexdigest()
# --- SEARCH METHODOLOGIES ---
def prefix_fixed_block_search(dataset, block_size, prefix_len, target_hash): """Original Heuristic: Fixed block stepping upon prefix collision.""" target_prefix = target_hash[:prefix_len] checks = 0 i = 0 while i < len(dataset): block_end = min(i + block_size, len(dataset)) found_in_block = False for j in range(i, block_end): checks += 1 current_hash = generate_h160(dataset[j]) if current_hash == target_hash: return {"checks": checks, "found": True} if current_hash.startswith(target_prefix): i = block_end found_in_block = True break if not found_in_block: i = block_end return {"checks": checks, "found": False}
def prefix_adaptive_jump_search(dataset, block_size, prefix_len, target_hash): """WP-Inspired Heuristic: 30% dynamic jump forward from collision point.""" target_prefix = target_hash[:prefix_len] jump_size = int(block_size * ADAPTIVE_JUMP_RATE) checks = 0 i = 0 while i < len(dataset): checks += 1 current_hash = generate_h160(dataset[i]) if current_hash == target_hash: return {"checks": checks, "found": True} if current_hash.startswith(target_prefix): i += jump_size else: i += 1 return {"checks": checks, "found": False}
def run_experiment(): results = { "fixed_block": {"found": 0, "checks": []}, "adaptive_jump": {"found": 0, "checks": []} } print(f"Starting {SIMULATIONS} simulations...") print(f"Setup: Range {RANGE_SIZE} | Adaptive Jump Size: {int(RANGE_SIZE * ADAPTIVE_JUMP_RATE)} units")
for s in range(SIMULATIONS): offset = random.randint(0, SECP_ORDER - TOTAL_SIZE) dataset = [offset + i for i in range(TOTAL_SIZE)] target_idx = random.randint(0, TOTAL_SIZE - 1) target_hash = generate_h160(dataset[target_idx]) # Test 1: Fixed Block res_fixed = prefix_fixed_block_search(dataset, RANGE_SIZE, PREFIX_LENGTH, target_hash) results["fixed_block"]["checks"].append(res_fixed["checks"]) if res_fixed["found"]: results["fixed_block"]["found"] += 1
# Test 2: Adaptive Jump res_adapt = prefix_adaptive_jump_search(dataset, RANGE_SIZE, PREFIX_LENGTH, target_hash) results["adaptive_jump"]["checks"].append(res_adapt["checks"]) if res_adapt["found"]: results["adaptive_jump"]["found"] += 1
# --- FINAL REPORT --- print("\n" + "="*85) print("COMPARISON: FIXED BLOCKS VS ADAPTATIVE JUMP (30%)") print("="*85) # Añadida columna 'Total Checks' header = f"{'Methodology':<20} | {'Success':<8} | {'Avg Checks':<12} | {'Total Checks':<15} | {'Efficiency'}" print(header) print("-" * len(header)) for name, data in results.items(): total_checks = sum(data["checks"]) avg_checks = statistics.mean(data["checks"]) if data["checks"] else 0 success_rate = (data["found"] / SIMULATIONS) * 100 print(f"{name:<20} | {data['found']:<8} | {avg_checks:,.0f} | {total_checks:,.0f} | {success_rate:>10.2f}%")
if __name__ == '__main__':
    run_experiment()
Starting 10000 simulations...
Setup: Range 4096 | Adaptive Jump Size: 1228 units

=====================================================================================
COMPARISON: FIXED BLOCKS VS ADAPTIVE JUMP (30%)
=====================================================================================
Methodology          | Success  | Avg Checks   | Total Checks    | Efficiency
------------------------------------------------------------------------------
fixed_block          | 6356     | 42,756       | 427,563,401     |     63.56%
adaptive_jump        | 7745     | 46,899       | 468,988,364     |     77.45%
10,000 simulations over ranges of 100,000 keys each amount to a combined keyspace of 1,000,000,000 keys.
Your update to my method cuts the scanned area by approximately 53%. To put the adaptive method suggested by @WanderingPhilospher into context: less than 50% of each range is scanned, yet it achieves a 77% success rate. This not only shows that prefixes are a statistically valid checkpoint for target search, and confirms that prefixes are not random noise, but it also represents a milestone in key finding, since pure randomness and a full scan would require far more effort. By reaching roughly 77% success, you demonstrate that the prefix acts as an exclusion filter: you are "cleaning" the sterile areas of the range at a speed brute force has not matched. If the prefix were random noise, the success rate would be proportional to the scanned area (under 53%); at 77%, the efficiency is far higher than chance would predict. This confirms the prefix as a real filtering heuristic, dismantling the purist argument that everything is the same no matter how you look at it. Bravo @WanderingPhilospher, you have upgraded the method.
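For anyone wanting to recompute the baseline this paragraph compares against, a minimal sketch (an editor's addition, using only the figures reported above):

Code:
# Uniform baseline: if the target is uniformly placed in a range of N keys
# and a scan examines C of them, the expected success probability is C / N.
N = 100_000            # keys per simulated range (TOTAL_SIZE above)
avg_checks = 46_899    # reported average checks for adaptive_jump
print(f"Uniform-scan baseline: {avg_checks / N:.2%}")   # ~46.90%
print("Reported adaptive_jump success rate: 77.45%")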
nomachine
December 31, 2025, 10:26:33 PM Last edit: Today at 10:33:59 AM by nomachine
snip
This script does literally nothing.  Below is an example of another script that actually does something import secp256k1 as ice import random import time from multiprocessing import Process, Value, Lock, cpu_count
# ============================================================
# CONFIG
# ============================================================

PUZZLE = 71
LOWER_BOUND = 2 ** (PUZZLE - 1)
UPPER_BOUND = 2 ** PUZZLE - 1

TARGET_HEX = "f6f5431d25bbf7b12e8add9af5e3475c44a0a5b8"
TARGET_H160 = bytes.fromhex(TARGET_HEX)

PREFIX_BYTES = 2              # number of prefix bytes to trigger jump
BLOCK_SIZE = 4096
ADAPTIVE_JUMP_RATE = 0.30
INFO_EVERY = 100_000
NO_HIT_THRESHOLD = 50_000     # random restart after no hits

jump_size = int(BLOCK_SIZE * ADAPTIVE_JUMP_RATE)

# ============================================================
# SHARED STATE
# ============================================================

checks = Value('Q', 0)   # unsigned 64-bit counter; 'i' would overflow past 2**31 checks
hits = Value('i', 0)
lock = Lock()
# ============================================================
# WORKER FUNCTION
# ============================================================

def worker(worker_id):
    local_hits = 0
    local_no_hit_counter = 0
    k = random.randint(LOWER_BOUND, UPPER_BOUND)
    start_time = time.time()

    while True:
        h = ice.privatekey_to_h160(0, True, k)

        with lock:
            checks.value += 1
            total_checks = checks.value

        # Full hit
        if h == TARGET_H160:
            with lock:
                print(f"[FOUND] Worker {worker_id} | key={k} h160={h.hex()} after {total_checks:,} checks")
            break

        # Prefix hit → jump mode
        if h[:PREFIX_BYTES] == TARGET_H160[:PREFIX_BYTES]:
            local_hits += 1
            with lock:
                hits.value += 1
                print(f"[HIT #{hits.value}] Worker {worker_id} | key={k} h160={h.hex()} (jump +{jump_size})")
            k += jump_size
            local_no_hit_counter = 0
        else:
            k += 1
            local_no_hit_counter += 1

        if local_no_hit_counter > NO_HIT_THRESHOLD:
            k = random.randint(LOWER_BOUND, UPPER_BOUND)
            local_no_hit_counter = 0
            with lock:
                print(f"[INFO] Worker {worker_id} | No hits for {NO_HIT_THRESHOLD} keys → random restart")

        # wrap-around
        if k > UPPER_BOUND:
            k = random.randint(LOWER_BOUND, UPPER_BOUND)

        # periodic info
        if total_checks % INFO_EVERY == 0:
            elapsed = time.time() - start_time
            rate = total_checks / elapsed if elapsed > 0 else 0
            with lock:
                print(f"[INFO] Worker {worker_id} | Checked {total_checks:,} keys | hits={hits.value} | rate={rate:,.0f} keys/s")
# ============================================================
# MAIN
# ============================================================

if __name__ == "__main__":
    num_workers = cpu_count()
    processes = []

    print(f"[INFO] Starting {num_workers} worker processes...")

    for i in range(num_workers):
        p = Process(target=worker, args=(i,))
        p.start()
        processes.append(p)

    for p in processes:
        p.join()

This is so slow, you’d be waiting for the Sun to burn out and become a supernova before it finishes.
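For scale, a rough estimate of what "so slow" means here (an editor's sketch, assuming the roughly 10,000 keys/s CPU rate reported in the logs later in this thread):

Code:
# Puzzle 71's keyspace is 2^70 keys; at ~10,000 keys/s on CPU:
keyspace = 2 ** 70
rate = 10_000                  # keys per second (assumed from the later logs)
seconds_per_year = 31_557_600
print(f"{keyspace / rate / seconds_per_year:.1e} years")
# ≈ 3.7e9 years — on the order of the Sun's remaining main-sequence lifetime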
Akito S. M. Hosana
Today at 06:46:34 AM
Quote: This is so slow, you’d be waiting for the Sun to burn out and become a supernova before it finishes.

Can you modify this script to implement the following logic?

Progressive Prefix Matching: The script should start by matching only the first byte of the HASH160. After finding 3 matches at the 1-byte level, it should increase to 2 bytes and continue progressively until reaching the full 20-byte match.

Adaptive Jumping: When a prefix match is found, the script should jump forward in the keyspace instead of checking every key.
- Base jump size: 1,228 keys
- Adaptive multiplier: 2^(prefix_length - 1)

Example jumps:
- 1-byte match: 1,228 × 1 = 1,228 keys forward
- 2-byte match: 1,228 × 2 = 2,456 keys forward
- 3-byte match: 1,228 × 4 = 4,912 keys forward
- 4-byte match: 1,228 × 8 = 9,824 keys forward
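The requested schedule is just a doubling multiplier on a fixed base; a minimal sketch of that arithmetic (editor's illustration, names hypothetical):

Code:
BASE_JUMP = 1_228   # base jump size from the request above

def jump_for(prefix_bytes: int) -> int:
    # jump = base × 2^(prefix_length − 1)
    return BASE_JUMP * 2 ** (prefix_bytes - 1)

for n in range(1, 5):
    print(f"{n}-byte match: +{jump_for(n):,} keys")
# 1-byte match: +1,228 keys
# 2-byte match: +2,456 keys
# 3-byte match: +4,912 keys
# 4-byte match: +9,824 keys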
nomachine
Today at 09:43:56 AM Last edit: Today at 10:40:10 AM by nomachine
Quote:
1-byte match: 1,228 × 1 = 1,228 keys forward
2-byte match: 1,228 × 2 = 2,456 keys forward
3-byte match: 1,228 × 4 = 4,912 keys forward
4-byte match: 1,228 × 8 = 9,824 keys forward

Code:
"""
Bitcoin Puzzle 71 Search - Corrected Version
Matches actual target prefix bytes, not just prefix length
"""
import secp256k1 as ice
import random
import time
import math
from multiprocessing import Process, Manager, Lock, Event, cpu_count
import logging
import sys

# ============================================================
# CONFIGURATION
# ============================================================

PUZZLE = 71
LOWER_BOUND = 2 ** (PUZZLE - 1)
UPPER_BOUND = 2 ** PUZZLE - 1

TARGET_HEX = "f6f5431d25bbf7b12e8add9af5e3475c44a0a5b8"
TARGET_H160 = bytes.fromhex(TARGET_HEX)

# Target prefix bytes - we'll match these progressively
TARGET_PREFIXES = []
for i in range(1, len(TARGET_H160) + 1):
    TARGET_PREFIXES.append(TARGET_H160[:i])

INITIAL_PREFIX_INDEX = 0                     # Start with 1 byte: TARGET_H160[:1] = f6
MAX_PREFIX_INDEX = len(TARGET_PREFIXES) - 1  # Full 20-byte match

# Jump settings
BASE_JUMP_SIZE = 1228

# Performance settings
BASE_NO_HIT_THRESHOLD = 50000
PREFIX_UPGRADE_HITS = 3
CHECK_FLUSH_INTERVAL = 1000

# Adaptive settings
ENABLE_PERSISTENCE_MODE = True
PERSISTENCE_THRESHOLD = 6    # Prefix index threshold, not byte count

# ============================================================
# LOGGING SETUP
# ============================================================

logging.basicConfig(
    level=logging.INFO,
    format='[%(asctime)s] %(message)s',
    datefmt='%H:%M:%S'
)
logger = logging.getLogger(__name__)

# ============================================================
# SHARED STATE
# ============================================================

manager = Manager()
shared = manager.dict()
shared['checks'] = 0
shared['total_hits'] = 0
shared['current_prefix_index'] = INITIAL_PREFIX_INDEX
shared['hits_at_current_level'] = 0
shared['restart_count'] = 0
shared['found'] = False
shared['start_time'] = time.time()

lock = Lock()
stop_event = Event()
# ============================================================
# ADAPTIVE FUNCTIONS
# ============================================================

def get_adaptive_no_hit_threshold(prefix_index):
    """
    Get the threshold for random restarts based on prefix index.
    """
    if prefix_index >= PERSISTENCE_THRESHOLD and ENABLE_PERSISTENCE_MODE:
        return float('inf')
    multiplier = 2 ** prefix_index    # prefix_index = bytes - 1
    return BASE_NO_HIT_THRESHOLD * multiplier


def get_adaptive_upgrade_hits_needed(prefix_index):
    """
    Get number of hits needed to upgrade to next prefix level.
    """
    if prefix_index <= 2:      # 1-3 bytes
        return PREFIX_UPGRADE_HITS
    elif prefix_index <= 5:    # 4-6 bytes
        return 2
    else:
        return 1


def calculate_adaptive_jump(prefix_index):
    """
    Calculate jump size based on the MATCHED prefix index.

    Jump formula: BASE_JUMP_SIZE × 2^(prefix_index)
    where prefix_index = matched_bytes - 1

    Examples:
    - 1-byte match (index 0): 1228 × 2^0 = 1228 × 1 = 1,228 keys forward
    - 2-byte match (index 1): 1228 × 2^1 = 1228 × 2 = 2,456 keys forward
    - 3-byte match (index 2): 1228 × 2^2 = 1228 × 4 = 4,912 keys forward
    - 4-byte match (index 3): 1228 × 2^3 = 1228 × 8 = 9,824 keys forward
    """
    multiplier = 2 ** prefix_index
    return BASE_JUMP_SIZE * multiplier


def get_current_target_prefix(prefix_index):
    """
    Get the actual target prefix bytes for the current level.
    """
    return TARGET_PREFIXES[prefix_index]
# ============================================================
# WORKER FUNCTION
# ============================================================

def worker(worker_id, shared_state, stop_event):
    """
    Worker process that searches for the private key.
    """
    local_no_hit_counter = 0
    local_checks = 0

    # Start with random key
    k = random.randint(LOWER_BOUND, UPPER_BOUND)

    # Get initial prefix index
    with lock:
        current_prefix_idx = shared_state['current_prefix_index']

    logger.info(f"Worker {worker_id} started at key: {k}")

    while not stop_event.is_set():
        # Generate HASH160
        h = ice.privatekey_to_h160(0, True, k)
        local_checks += 1

        # Check for full solution
        if h == TARGET_H160:
            with lock:
                shared_state['found'] = True
                shared_state['checks'] += local_checks
                stop_event.set()
                logger.info("=" * 80)
                logger.info("SOLUTION FOUND")
                logger.info("=" * 80)
                logger.info(f"Worker ID: {worker_id}")
                logger.info(f"Private Key: {k}")
                logger.info(f"HASH160: {h.hex()}")
                logger.info(f"Total Checks: {shared_state['checks']:,}")
                logger.info("=" * 80)
            return

        # Get current prefix index and target prefix
        with lock:
            if shared_state['found']:
                return
            prefix_idx = shared_state['current_prefix_index']

        target_prefix = TARGET_PREFIXES[prefix_idx]
        matched_bytes = prefix_idx + 1    # prefix_idx = matched_bytes - 1

        # Calculate adaptive parameters
        adaptive_no_hit_threshold = get_adaptive_no_hit_threshold(prefix_idx)
        upgrade_hits_needed = get_adaptive_upgrade_hits_needed(prefix_idx)

        # Check for prefix match
        if h[:matched_bytes] == target_prefix:
            local_no_hit_counter = 0

            # Calculate jump based on the MATCHED prefix index
            adaptive_jump = calculate_adaptive_jump(prefix_idx)

            # Process hit
            with lock:
                shared_state['total_hits'] += 1
                shared_state['hits_at_current_level'] += 1
                total_hits = shared_state['total_hits']
                hits_at_level = shared_state['hits_at_current_level']

                # Check if we should upgrade prefix length
                should_upgrade = False
                if (hits_at_level >= upgrade_hits_needed
                        and shared_state['current_prefix_index'] < MAX_PREFIX_INDEX):
                    old_prefix_idx = shared_state['current_prefix_index']
                    new_prefix_idx = old_prefix_idx + 1
                    shared_state['current_prefix_index'] = new_prefix_idx
                    shared_state['hits_at_current_level'] = 0
                    should_upgrade = True

                # Log the hit with jump info
                logger.info(f"Hit #{total_hits:04d} | Worker {worker_id:2d} | "
                            f"Prefix: {matched_bytes:2d} bytes ({target_prefix.hex()}) | "
                            f"Key: {k:24d} | "
                            f"Jump: +{adaptive_jump:,}")

                # Log upgrade if it happened
                if should_upgrade:
                    new_matched_bytes = matched_bytes + 1
                    new_target_prefix = TARGET_PREFIXES[new_prefix_idx]
                    logger.info(f"Prefix upgrade: {matched_bytes} -> {new_matched_bytes} bytes")
                    logger.info(f"Now searching for: {new_target_prefix.hex()}")

            # Apply jump
            k += adaptive_jump

            # Handle wrap-around
            if k > UPPER_BOUND:
                overflow = k - UPPER_BOUND
                k = LOWER_BOUND + overflow - 1
                if overflow > 1000000:    # Only log significant wrap-arounds
                    logger.debug(f"Worker {worker_id} jumped {overflow:,} keys beyond range, wrapped to {k}")
        else:
            # No match - move to next key
            k += 1
            local_no_hit_counter += 1

            # Random restart logic
            should_restart = (
                prefix_idx < PERSISTENCE_THRESHOLD
                and local_no_hit_counter >= adaptive_no_hit_threshold
            )
            if should_restart:
                # Start from new random position
                k = random.randint(LOWER_BOUND, UPPER_BOUND)
                local_no_hit_counter = 0
                with lock:
                    shared_state['restart_count'] += 1
                    restart_num = shared_state['restart_count']
                    logger.info(f"Restart #{restart_num:04d} | Worker {worker_id:2d} | "
                                f"Prefix: {matched_bytes:2d} bytes | "
                                f"No hits for: {adaptive_no_hit_threshold:,} keys")
            # Handle wrap-around for linear search
            elif k > UPPER_BOUND:
                k = LOWER_BOUND
                overflow = UPPER_BOUND - LOWER_BOUND + 1
                if overflow > 100000000:    # Only log if we've searched a significant portion
                    logger.debug(f"Worker {worker_id} linear wrap ({overflow:,} keys)")

        # Periodic sync with shared state
        if local_checks >= CHECK_FLUSH_INTERVAL:
            with lock:
                shared_state['checks'] += local_checks
            local_checks = 0

        # Small sleep to prevent CPU overload
        if local_checks % 100 == 0:
            time.sleep(0.00001)

    # Final sync when stopping
    if local_checks > 0:
        with lock:
            shared_state['checks'] += local_checks
# ============================================================
# VALIDATION AND STARTUP
# ============================================================

def validate_target():
    """
    Validate target and display search parameters.
    """
    if len(TARGET_H160) != 20:
        logger.error(f"Invalid HASH160 length: {len(TARGET_H160)} bytes (expected 20)")
        return False

    logger.info("=" * 80)
    logger.info(f"BITCOIN PUZZLE {PUZZLE} SEARCH")
    logger.info("=" * 80)
    logger.info(f"Target HASH160: {TARGET_HEX}")
    logger.info("=" * 80)

    # Display progressive prefixes
    logger.info("PROGRESSIVE PREFIX MATCHING:")
    for i in range(min(6, len(TARGET_PREFIXES))):
        prefix_bytes = i + 1
        prefix_hex = TARGET_PREFIXES[i].hex()
        logger.info(f"  Level {i+1:2d}: {prefix_bytes:2d} bytes = {prefix_hex}")
    if len(TARGET_PREFIXES) > 6:
        logger.info(f"  ... up to 20 bytes = {TARGET_HEX}")
    logger.info("=" * 80)

    # Display jump examples
    logger.info("ADAPTIVE JUMPING CONFIGURATION:")
    logger.info(f"Base Jump Size: {BASE_JUMP_SIZE:,} keys")
    logger.info("Jump Formula: BASE_JUMP_SIZE × 2^(matched_bytes - 1)")
    for i in range(1, 6):
        jump = calculate_adaptive_jump(i - 1)    # i-1 is prefix index
        logger.info(f"  {i}-byte match: {BASE_JUMP_SIZE:,} × 2^{i-1} = {jump:,} keys")
    logger.info("=" * 80)

    # Display prefix progression settings
    logger.info("PREFIX PROGRESSION:")
    logger.info(f"Start: 1 byte ({TARGET_PREFIXES[0].hex()})")
    logger.info(f"Upgrade hits needed: {PREFIX_UPGRADE_HITS} hits (adaptive)")
    logger.info(f"Final target: {MAX_PREFIX_INDEX + 1} bytes (full match)")
    logger.info("=" * 80)

    # Display search space info
    logger.info("SEARCH SPACE:")
    total_keys = UPPER_BOUND - LOWER_BOUND + 1
    logger.info(f"Range: 2^{PUZZLE-1} to 2^{PUZZLE} - 1")
    logger.info(f"Lower bound: {LOWER_BOUND:,}")
    logger.info(f"Upper bound: {UPPER_BOUND:,}")
    logger.info(f"Total keyspace: {total_keys:.2e} keys")
    logger.info("=" * 80)

    # Display performance settings
    logger.info("PERFORMANCE SETTINGS:")
    logger.info(f"Workers: {cpu_count()} processes")
    logger.info(f"Base no-hit threshold: {BASE_NO_HIT_THRESHOLD:,} keys")
    logger.info(f"Persistence mode: {'Enabled' if ENABLE_PERSISTENCE_MODE else 'Disabled'}")
    if ENABLE_PERSISTENCE_MODE:
        logger.info(f"Persistence threshold: {PERSISTENCE_THRESHOLD}+ bytes")
    logger.info("=" * 80)
    logger.info("Starting search...")
    return True
# ============================================================
# MONITOR FUNCTION
# ============================================================

def monitor(shared_state, stop_event):
    """
    Monitor process to display progress statistics.
    """
    last_checks = 0
    last_time = time.time()

    while not stop_event.is_set():
        time.sleep(10)
        with lock:
            if shared_state['found']:
                return
            checks = shared_state['checks']
            hits = shared_state['total_hits']
            prefix_idx = shared_state['current_prefix_index']
            hits_at_level = shared_state['hits_at_current_level']
            restarts = shared_state['restart_count']
            elapsed = time.time() - shared_state.get('start_time', time.time())

        # Get current target prefix
        matched_bytes = prefix_idx + 1
        target_prefix = TARGET_PREFIXES[prefix_idx].hex()
        upgrade_needed = get_adaptive_upgrade_hits_needed(prefix_idx)

        # Calculate rate
        time_diff = time.time() - last_time
        checks_diff = checks - last_checks
        rate = checks_diff / time_diff if time_diff > 0 else 0

        # Display statistics
        logger.info(f"STATUS | Checks: {checks:,} | Rate: {rate:,.0f}/s | "
                    f"Hits: {hits} | Prefix: {matched_bytes}B ({target_prefix}) | "
                    f"Progress: {hits_at_level}/{upgrade_needed} | "
                    f"Restarts: {restarts} | Time: {elapsed/3600:.1f}h")

        last_checks = checks
        last_time = time.time()
# ============================================================
# MAIN FUNCTION
# ============================================================

def main():
    """
    Main function to coordinate the search.
    """
    if not validate_target():
        logger.error("Target validation failed.")
        sys.exit(1)

    num_workers = cpu_count()
    processes = []

    # Start monitor process
    monitor_proc = Process(target=monitor, args=(shared, stop_event))
    monitor_proc.daemon = True
    monitor_proc.start()

    # Start workers
    logger.info(f"Starting {num_workers} workers...")
    for i in range(num_workers):
        p = Process(target=worker, args=(i, shared, stop_event))
        p.start()
        processes.append(p)
        time.sleep(0.2)

    logger.info(f"All {num_workers} workers started.")
    logger.info("=" * 80)

    try:
        for p in processes:
            p.join()
    except KeyboardInterrupt:
        logger.info("\n" + "=" * 80)
        logger.info("SEARCH INTERRUPTED - Shutting down...")
        logger.info("=" * 80)
        stop_event.set()
        time.sleep(2)
        for p in processes:
            if p.is_alive():
                p.terminate()
            p.join(timeout=5)
        if monitor_proc.is_alive():
            monitor_proc.terminate()
            monitor_proc.join(timeout=2)
    finally:
        elapsed = time.time() - shared.get('start_time', time.time())
        logger.info("=" * 80)
        logger.info("SEARCH COMPLETED")
        logger.info("=" * 80)
        logger.info("Final Status:")
        logger.info(f"  Found: {'YES' if shared['found'] else 'NO'}")
        logger.info(f"  Total Checks: {shared['checks']:,}")
        logger.info(f"  Total Hits: {shared['total_hits']}")
        if not shared['found']:
            prefix_idx = shared['current_prefix_index']
            matched_bytes = prefix_idx + 1
            target_prefix = TARGET_PREFIXES[prefix_idx].hex()
            logger.info(f"  Final Prefix: {matched_bytes} bytes ({target_prefix})")
            logger.info(f"  Hits at current level: {shared['hits_at_current_level']}")
        logger.info(f"  Total Restarts: {shared['restart_count']}")
        logger.info(f"  Elapsed Time: {elapsed:.2f} seconds ({elapsed/3600:.2f} hours)")
        if elapsed > 0:
            avg_rate = shared['checks'] / elapsed
            logger.info(f"  Average Rate: {avg_rate:,.0f} keys/second")
            logger.info(f"  Average Rate: {avg_rate * 3600:,.0f} keys/hour")
        logger.info("=" * 80)
# ============================================================
# ENTRY POINT
# ============================================================

if __name__ == "__main__":
    try:
        main()
    except Exception as e:
        logger.error(f"Fatal error: {e}")
        import traceback
        traceback.print_exc()
        sys.exit(1)
Akito S. M. Hosana
Today at 10:54:51 AM
Code:
[11:50:48] Hit #0004 | Worker  6 | Prefix:  2 bytes (b907) | Key: 850674 | Jump: +2,456
[11:50:50] Hit #0005 | Worker  4 | Prefix:  2 bytes (b907) | Key: 594205 | Jump: +2,456
[11:50:51] Hit #0006 | Worker  4 | Prefix:  2 bytes (b907) | Key: 597762 | Jump: +2,456
[11:50:51] Prefix upgrade: 2 -> 3 bytes
[11:50:51] Now searching for: b907c3
[11:50:54] STATUS | Checks: 95,000 | Rate: 9,498/s | Hits: 6 | Prefix: 3B (b907c3) | Progress: 0/3 | Restarts: 0 | Time: 0.0h
[11:50:57] ================================================================================
[11:50:57] SOLUTION FOUND
[11:50:57] ================================================================================
[11:50:57] Worker ID: 6
[11:50:57] Private Key: 863317
[11:50:57] HASH160: b907c3a2a3b27789dfb509b730dd47703c272868
[11:50:57] Total Checks: 124,283
[11:50:57] ================================================================================
[11:50:57] ================================================================================
[11:50:57] SEARCH COMPLETED
[11:50:57] ================================================================================
[11:50:57] Final Status:
[11:50:57]   Found: YES
[11:50:57]   Total Checks: 124,283
[11:50:57]   Total Hits: 6
[11:50:57]   Total Restarts: 0
[11:50:57]   Elapsed Time: 12.69 seconds (0.00 hours)
[11:50:57]   Average Rate: 9,790 keys/second
[11:50:57]   Average Rate: 35,245,004 keys/hour
[11:50:57] ================================================================================

Works on small puzzles.
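The full match in this log can be re-checked independently (a minimal verification sketch, assuming the same `secp256k1` iceland library the scripts above import):

Code:
import secp256k1 as ice

# Recompute the compressed-key HASH160 for the private key the log reports.
h = ice.privatekey_to_h160(0, True, 863317)
assert h.hex() == "b907c3a2a3b27789dfb509b730dd47703c272868"
print("log verified:", h.hex())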