kTimesG
|
 |
April 26, 2025, 11:00:19 PM |
|
Unveiling the Scooby Doo: Origins method. Changelog: revert the reversion in order to beat the 0-day prefix method.

Code:
import hashlib
import random
import time

# Configuration
TOTAL_SIZE = 100_000
RANGE_SIZE = 5_000
PREFIX_LENGTH = 3
SIMULATIONS = 500
SECP256K1_ORDER = int("0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141", 16)

# ENABLE REPRODUCIBLE TESTING
# Note: this line can be safely kept for a release build
random.seed(int.from_bytes(b'mcdouglasx', 'big'))  # explicit byteorder for Python < 3.11

def ScoobyDoo_Origins(dataset, block, target_hash, order):
    # Plain sequential scan; 'block' and 'order' are accepted but unused
    checks = 0
    for k in dataset:
        checks += 1
        if generate_h160(k) == target_hash:
            return {"checks": checks, "found": True}
    raise Exception('OHNOES. Your Programming Language Is Rigged!')

def generate_h160(data):
    # 'ripemd160' requires OpenSSL with the legacy provider enabled
    h = hashlib.new('ripemd160', str(data).encode('utf-8'))
    return h.hexdigest()

def shuffled_range(n):
    arr = list(range(n + 1))
    random.shuffle(arr)
    return arr

def prefixes_search(dataset, block, prefix, target_hash, order):
    prefix_hash = target_hash[:prefix]
    checks = 0
    skipped_indices = []
    skip_window = 4096
    skip_percentage = 0.3
    i = len(dataset) - 1
    while i >= 0:
        checks += 1
        h = generate_h160(dataset[i])
        if h == target_hash:
            return {"checks": checks, "found": True}
        if h.startswith(prefix_hash):
            # On a prefix hit, jump ahead and remember what was skipped
            skip_count = int(skip_window * skip_percentage)
            for j in range(i - 1, max(i - 1 - skip_count, -1), -1):
                skipped_indices.append(j)
            i -= skip_count + 1  # +1 for the current index
        else:
            i -= 1
    # Second pass: re-check everything that was skipped
    for idx in sorted(skipped_indices, reverse=True):
        checks += 1
        if generate_h160(dataset[idx]) == target_hash:
            return {"checks": checks, "found": True}
    return {"checks": checks, "found": False}

def compare_methods():
    results = {
        "Scooby_Doo": {"wins": 0, "total_checks": 0, "total_time": 0},
        "prefixes": {"wins": 0, "total_checks": 0, "total_time": 0},
        "ties": 0
    }
    for i in range(SIMULATIONS):
        max_range = SECP256K1_ORDER - TOTAL_SIZE - 1
        random_offset = random.randrange(max_range)
        R = 1 + random_offset
        dataset = [R + j for j in range(TOTAL_SIZE)]
        target_num = random.choice(dataset)
        target_hash = generate_h160(target_num)
        blocks = TOTAL_SIZE // RANGE_SIZE
        order = shuffled_range(blocks - 1)  # kept for interface parity; unused by both methods

        start_time = time.perf_counter()
        seq_result = ScoobyDoo_Origins(dataset, RANGE_SIZE, target_hash, order)
        seq_time = time.perf_counter() - start_time

        start_time = time.perf_counter()
        pre_result = prefixes_search(dataset, RANGE_SIZE, PREFIX_LENGTH, target_hash, order)
        pre_time = time.perf_counter() - start_time

        results["Scooby_Doo"]["total_checks"] += seq_result["checks"]
        results["prefixes"]["total_checks"] += pre_result["checks"]
        results["Scooby_Doo"]["total_time"] += seq_time
        results["prefixes"]["total_time"] += pre_time

        if seq_result["checks"] < pre_result["checks"]:
            results["Scooby_Doo"]["wins"] += 1
        elif seq_result["checks"] > pre_result["checks"]:
            results["prefixes"]["wins"] += 1
        else:
            results["ties"] += 1

        print(f"Simulation {i + 1}: Scooby_Doo = {seq_result['checks']} checks in {seq_time:.6f}s"
              f" | Prefix = {pre_result['checks']} checks in {pre_time:.6f}s")

    avg_success_rate_Scooby_Doo = (results["Scooby_Doo"]["total_checks"] / results["Scooby_Doo"]["wins"]
                                   if results["Scooby_Doo"]["wins"] > 0 else float('inf'))
    avg_success_rate_prefixes = (results["prefixes"]["total_checks"] / results["prefixes"]["wins"]
                                 if results["prefixes"]["wins"] > 0 else float('inf'))
    avg_time_Scooby_Doo = (results["Scooby_Doo"]["total_time"] / results["Scooby_Doo"]["wins"]
                           if results["Scooby_Doo"]["wins"] > 0 else float('inf'))
    avg_time_prefixes = (results["prefixes"]["total_time"] / results["prefixes"]["wins"]
                         if results["prefixes"]["wins"] > 0 else float('inf'))

    print(f"""
=== FINAL RESULTS ===
Wins:
  Scooby_Doo: {results['Scooby_Doo']['wins']}
  Prefix: {results['prefixes']['wins']}
  Ties: {results['ties']}

Total Checks:
  Scooby_Doo: {results['Scooby_Doo']['total_checks']}
  Prefix: {results['prefixes']['total_checks']}

Total Time:
  Scooby_Doo: {results['Scooby_Doo']['total_time']:.6f} seconds
  Prefix: {results['prefixes']['total_time']:.6f} seconds

Averages (Total Time / Wins):
  Scooby_Doo: {avg_time_Scooby_Doo:.6f} seconds/victory
  Prefix: {avg_time_prefixes:.6f} seconds/victory

Checks per Win:
  Scooby_Doo: {avg_success_rate_Scooby_Doo:.2f} checks/win
  Prefix: {avg_success_rate_prefixes:.2f} checks/win
""")

    print(f"""
=== Configuration ===
Total numbers: {TOTAL_SIZE:,}
Block size: {RANGE_SIZE:,}
Prefix: {PREFIX_LENGTH} characters (16^{PREFIX_LENGTH} combinations)
Simulations: {SIMULATIONS}
secp256k1 Order: {SECP256K1_ORDER}
""")

if __name__ == '__main__':
    compare_methods()
=== FINAL RESULTS ===
Wins:
  Scooby_Doo: 256
  Prefix: 244
  Ties: 0

Total Checks:
  Scooby_Doo: 25374425
  Prefix: 24558252

Total Time:
  Scooby_Doo: 24.387615 seconds
  Prefix: 24.969044 seconds

Averages (Total Time / Wins):
  Scooby_Doo: 0.095264 seconds/victory
  Prefix: 0.102332 seconds/victory

Checks per Win:
  Scooby_Doo: 99118.85 checks/win
  Prefix: 100648.57 checks/win
Note: future versions of Scooby Doo reserve the right to become closed-source, in order to protect against reverse engineering.

On another note: this is a never-ending story. If Scooby Doo does random picks, someone will want to know the exact order of those random picks. If Scooby Doo does quantum entanglement, someone will want to know the exact quantum state of the entanglement. This is not acceptable. In effect, future versions of Scooby Doo will only ever accept as input the dataset that is to be searched, and the target to be found. Nothing more than the correct output solution (position of target) will be disclosed - no number of checks, no nothing. Users that wish to compare Scooby Doo with other methods have a very simple metric for comparison: the time it takes to find the correct solution.

Everyone: the Scooby Doo methods work just as well as the prefix method does. Just check out the results for yourselves. Do not listen to trolls, use your brains.

Like I said before, you guys are kind of extreme. I think there's enough proof to show that using prefixes is pretty much always the better move when searching for hashes, or at least, that's how it looks.

Yes, the truth is an extreme. What do you mean by "pretty much always"? Looks more like "never". Also, where is the proof you are talking about? And lastly, to whom does it look that way? The Scooby Doo methods clearly show they find the key faster, or at least as fast, as any prefix method. Do you want a link to the Scooby Doo methods?
|
Off the grid, training pigeons to broadcast signed messages.
|
|
|
pbies
|
 |
April 26, 2025, 11:13:24 PM |
|
For similar prefixes - is there a constant difference between the private key integers?
Just curious...
Anyone checked that?
|
BTC: bc1qmrexlspd24kevspp42uvjg7sjwm8xcf9w86h5k
|
|
|
fixedpaul
Jr. Member
Offline
Activity: 51
Merit: 16
|
 |
April 26, 2025, 11:30:08 PM |
|
For similar prefixes - is there a constant difference between the private key integers?
Just curious...
Anyone checked that?
You can easily check; the answer is no: RIPEMD-160 generates uniformly distributed numbers. Anyone saying otherwise, anyone who believes in the "prefix method", is either messing with you or never finished mandatory schooling.
|
|
|
|
fecell
Jr. Member
Offline
Activity: 159
Merit: 2
|
 |
April 27, 2025, 03:07:15 AM |
|
Did U know... G = 115792089237316195423570985008687907852837564279074904382605163141518161494338 * G or simply G = (SECP256k1.order+1) * G
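fecell's identity is easy to verify instead of taking on faith: scalars on secp256k1 act modulo the group order n, so (n + 1) * G lands back on G. Below is a minimal, non-constant-time pure-Python sketch using the standard published secp256k1 constants; point_add and scalar_mult are illustrative helpers, not a library API.

```python
# Sketch: verify that secp256k1 scalars act modulo the group order n,
# so (n + 1) * G == G. Affine coordinates, pure Python, illustration only.
p = 2**256 - 2**32 - 977  # field prime
n = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141
G = (0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798,
     0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8)

def point_add(P, Q):
    # None plays the role of the point at infinity (the group identity)
    if P is None:
        return Q
    if Q is None:
        return P
    if P[0] == Q[0] and (P[1] + Q[1]) % p == 0:
        return None  # P + (-P)
    if P == Q:
        lam = 3 * P[0] * P[0] * pow(2 * P[1], -1, p) % p  # tangent slope (a = 0)
    else:
        lam = (Q[1] - P[1]) * pow(Q[0] - P[0], -1, p) % p  # chord slope
    x = (lam * lam - P[0] - Q[0]) % p
    y = (lam * (P[0] - x) - P[1]) % p
    return (x, y)

def scalar_mult(k, P):
    # Basic double-and-add
    R = None
    while k:
        if k & 1:
            R = point_add(R, P)
        P = point_add(P, P)
        k >>= 1
    return R

assert scalar_mult(n, G) is None   # n * G is the identity...
assert scalar_mult(n + 1, G) == G  # ...so (n + 1) * G wraps back to G
```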

|
|
|
|
kTimesG
|
 |
April 27, 2025, 05:52:09 AM |
|
What I’m seeing here is you’re tweaking the search method. According to mcd, his method should work better no matter how you run it. If you wanna prove him wrong, you gotta find a way where, even with the same search strategy, the prefix method still messes up.
Otherwise, it's not a fair fight, ya know?
The tweaking was bringing it back to its original form: a simple sequential search. Which leads to a very obvious conclusion: if the magic method beats sequential search but loses to reverse sequential, and the magic method is then tweaked to beat reverse sequential but loses to normal sequential, then we have a draw. I'm not the one pushing a better method here. A better method should work better no matter what it is compared against. That is a fair fight. Tweaking the prefix method endlessly based on how the other method works, and calling it "the same method", is like nuking an enemy when you already know his exact GPS location.
|
Off the grid, training pigeons to broadcast signed messages.
|
|
|
Bram24732
Member

Offline
Activity: 154
Merit: 15
|
 |
April 27, 2025, 06:11:21 AM |
|
We reached the point where it’s no longer fun or interesting for me to debate. The dude is either trolling or too stubborn to even try and understand what we’re trying to explain. I’ll step off the forums for a bit. Good luck kTimesG, keep fighting the good fight 
|
|
|
|
nomachine
|
 |
April 27, 2025, 08:28:42 AM |
|
For similar prefixes - is there a constant difference between the private key integers?
Just curious...
Anyone checked that?
You can easily check; the answer is no: RIPEMD-160 generates uniformly distributed numbers. Anyone saying otherwise, anyone who believes in the "prefix method", is either messing with you or never finished mandatory schooling.

The whole debate’s fueled by folks who either don’t get how cryptographic hashing works or are straight-up wishin’ for some ‘shortcut’ to crack these puzzles. Listen: uniform distributions don’t spit out predictable prefixes unless the input’s rigged (and with random/pubkey data? Forget about it). Bitcoin addresses? They’re just base58-encoded RIPEMD-160 hashes with checksums slapped on. The prefix, like ‘1,’ ‘3,’ or ‘bc1,’ comes from the script type (P2PKH, P2SH, ya know?), not the private key. The ‘puzzle’ hype? That’s ‘cause these private keys are generated randomly (think: 1, 2, … up to 2^160), and the game is brute-forcing the one that matches a funded address. It’s a grind, not some crypto voodoo. For me, the real juice is in the private key structure (WIF); that’s what matters in puzzles like the ‘Bitcoin Puzzle.’ When keys get searched incrementally, their WIFs all start with the same ol’ ‘KwDiBf89QgGbjEhKnhXJuH7LrciVrZi3q…’
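The point that the address prefix comes from the version byte (script type), not the private key, is easy to demonstrate. A hedged sketch of Base58Check encoding follows; the 20-byte hash160 here is a stand-in (a truncated SHA-256, since real P2PKH uses RIPEMD160(SHA256(pubkey))). The same hash gets a '1...' address under version byte 0x00 (P2PKH) and a '3...' address under 0x05 (P2SH).

```python
import hashlib

# Sketch: the leading character of a legacy address comes from the version
# byte (script type), not from the private key. hash160 below is a
# placeholder (truncated SHA-256), not a real pubkey hash.
B58 = "123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz"

def base58check(payload: bytes) -> str:
    # Append first 4 bytes of double-SHA256 as checksum, then base58-encode
    full = payload + hashlib.sha256(hashlib.sha256(payload).digest()).digest()[:4]
    num = int.from_bytes(full, 'big')
    out = ""
    while num:
        num, rem = divmod(num, 58)
        out = B58[rem] + out
    pad = len(full) - len(full.lstrip(b"\x00"))  # each leading 0x00 byte -> '1'
    return "1" * pad + out

hash160 = hashlib.sha256(b"demo").digest()[:20]  # placeholder 20-byte hash
p2pkh = base58check(b"\x00" + hash160)  # version 0x00 -> address starts with '1'
p2sh = base58check(b"\x05" + hash160)   # version 0x05 -> address starts with '3'
```

Same payload, different version byte, different leading character: the prefix carries script-type information only.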
|
BTC: bc1qdwnxr7s08xwelpjy3cc52rrxg63xsmagv50fa8
|
|
|
Akito S. M. Hosana
Jr. Member
Offline
Activity: 364
Merit: 8
|
 |
April 27, 2025, 08:44:39 AM Last edit: April 27, 2025, 08:56:52 AM by Akito S. M. Hosana |
|
For me, the real juice is in the private key structure (WIF)
That juice only works up to 13 characters max—after that, it’s mission impossible, no cap. Or is there some checksum hack we don’t even know about? I don’t even get how that Python script cracks it at 13 characters… seems straight-up impossible. The only ‘solvable’ part of this puzzle is the sequential private key grind, and that’s straight-up brute force. But like you said, you’d need, like, 3000 GPUs, fam. And that’s where the whole convo just dies. 
|
|
|
|
nomachine
|
 |
April 27, 2025, 09:12:53 AM |
|
I don’t even get how that Python script cracks it at 13 characters… seems straight-up impossible.
Yo, you believe me when I say I can’t even make a C++ function that fails quickly enough on invalid checksums? 
|
BTC: bc1qdwnxr7s08xwelpjy3cc52rrxg63xsmagv50fa8
|
|
|
Akito S. M. Hosana
Jr. Member
Offline
Activity: 364
Merit: 8
|
 |
April 27, 2025, 09:42:19 AM |
|
I don’t even get how that Python script cracks it at 13 characters… seems straight-up impossible.
Yo, you believe me when I say I can’t even make a C++ function that fails quickly enough on invalid checksums?

The current Python code does this: [START_BYTES + CHARS[indices[i, -MISSING_CHARS:]].tobytes() for i in range(batch_size)]. START_WIF is like the prefix, ya feel? The script’s generating all possible WIFs that start with that prefix and slaps on MISSING_CHARS (which is 52 - len(START_WIF)) more characters. CHARS is a NumPy array of uint8 (basically, the ASCII values). indices[:, -MISSING_CHARS:] is a 2D array (batch_size × MISSING_CHARS), where each spot is an index into CHARS. Then, hstack throws it together with suffix_chars. START_BYTES is a 1D array, so when it gets tiled, it becomes (batch_size, len(START_BYTES)). Then, hstack with suffix_chars (batch_size × MISSING_CHARS) gives you (batch_size, len(START_BYTES) + MISSING_CHARS). Finally, each row gets converted to bytes.

Each process generates a batch of WIF keys, converts ’em to private keys (ASCII values), checks if they’re in range, and then computes the hash. NumPy’s doing all the heavy lifting: vectorized ops, smooth memory handling, all that good stuff. But in C? Nah, you ain’t got that luxury. Or maybe you could, but the code would be so gnarly and huge it ain’t even worth it.
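For readers following along, here is a minimal sketch of the batching described above. START_WIF, batch_size, and the random indices are made-up stand-ins, not the actual script's values; only the tile / fancy-index / hstack shape mechanics match the description.

```python
import numpy as np

# Sketch of the tile / fancy-index / hstack batching pattern.
# START_WIF and batch_size are hypothetical stand-ins for illustration.
CHARS = np.frombuffer(
    b"123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz",
    dtype=np.uint8)                              # base58 alphabet as ASCII codes
START_WIF = "KwDiBf89QgGbjEhKnhXJuH7LrciVrZi3"   # hypothetical 32-char prefix
MISSING_CHARS = 52 - len(START_WIF)              # suffix length to fill in

batch_size = 4
rng = np.random.default_rng(0)
indices = rng.integers(0, len(CHARS), size=(batch_size, MISSING_CHARS))

START_BYTES = np.frombuffer(START_WIF.encode(), dtype=np.uint8)
prefix = np.tile(START_BYTES, (batch_size, 1))   # (batch_size, len(START_BYTES))
suffix_chars = CHARS[indices]                    # (batch_size, MISSING_CHARS), uint8
wifs = [row.tobytes() for row in np.hstack([prefix, suffix_chars])]
```

Note that fancy-indexing CHARS keeps the dtype uint8, so each stacked row converts straight to a 52-byte candidate WIF with tobytes().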
|
|
|
|
nomachine
|
 |
April 27, 2025, 09:51:59 AM Last edit: April 27, 2025, 10:49:15 AM by nomachine |
|
But in C? Nah, you ain’t got that luxury. Or maybe you could, but the code would be so gnarly and huge it ain’t even worth it.  This is just the beginning of the problem.  P.S. I just realized that 'Akito S. M. Hosana' is an anagram or deliberate variation of 'Satoshi Nakamoto.' 
|
BTC: bc1qdwnxr7s08xwelpjy3cc52rrxg63xsmagv50fa8
|
|
|
POD5
Member

Offline
Activity: 323
Merit: 10
Keep smiling if you're loosing!
|
 |
April 27, 2025, 11:21:10 AM Last edit: April 27, 2025, 11:37:24 AM by POD5 |
|
P.S. I just realized that 'Akito S. M. Hosana' is an anagram or deliberate variation of 'Satoshi Nakamoto.'
Yes, I had that thought too before... But no, Satoshi has dedicated himself to other things and he wouldn't even read Bitcointalk.
|
bc1qtmtmhzp54yvkz7asnqxc9j7ls6y5g93hg08msa
|
|
|
kTimesG
|
 |
April 27, 2025, 12:33:14 PM |
|
What I understood is that it's not a search function,it's a feature that, when added to whatever search system you choose, boosts its success rate.
The only way to prove that prefixes don't work is to find a search method where, if you apply the same logic with prefixes, it actually makes things worse. Otherwise, it's doing something right.
1. The Scooby Doo method (the original version) boosts the success rate of the corresponding sequential-order method just as well as the prefix method does, without ever needing to check for any prefixes. In other words, if one has to choose between a method that does pretty much nothing except increase the statistical variance, and the prefix method, there is a valid choice that works just as well.

2. Are you OK? The Scooby Doo: The Mystery Returns! method proved that reverse sequential search is faster than the prefix method. In other words, if one wants to find a key faster, the Scooby Doo: The Mystery Returns! method is the one that a sane person would choose.

3. In the same way, the Scooby Doo: Origins method proved that a sequential search is faster than the Tweaked Prefix Method That Beats Scooby Doo 2. In other words, if one wants to find a key faster, and can choose between these 2 methods, the one to choose would be Scooby Doo: Origins.

QED, pretty much.
|
Off the grid, training pigeons to broadcast signed messages.
|
|
|
fixedpaul
Jr. Member
Offline
Activity: 51
Merit: 16
|
 |
April 27, 2025, 01:04:30 PM |
|
The only way to prove that prefixes don't work is to find a search method where, if you apply the same logic with prefixes, it actually makes things worse. Otherwise, it's doing something right.
Just replace, in any prefix-method script, the prefix "puzzle" with any other prefix, even a random one. You will see that absolutely nothing changes; you can try it yourself. If the prefix method really does work, why does it perform the same way with any prefix as with the puzzle prefix?
|
|
|
|
fixedpaul
Jr. Member
Offline
Activity: 51
Merit: 16
|
 |
April 27, 2025, 01:31:12 PM |
|
The only way to prove that prefixes don't work is to find a search method where, if you apply the same logic with prefixes, it actually makes things worse. Otherwise, it's doing something right.
Just replace, in any prefix-method script, the prefix "puzzle" with any other prefix, even a random one. You will see that absolutely nothing changes; you can try it yourself. If the prefix method really does work, why does it perform the same way with any prefix as with the puzzle prefix?

What you're talking about was already brought up here (random.randint(0, 5000) == 0), and it doesn't give the same results. ... Let's just drop it; like I said, not my battle.

Try 4095 instead of 5000. I already explained this a few posts ago. Same results. Maybe generating a hash and looking at the first 3 hex digits (12 bits) is basically the same as generating a random number between 1 and 4096?
Maybe it's all rigged
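The equivalence claimed here is easy to check empirically. The sketch below uses SHA-256 as a stand-in for a uniform hash (the thread's 'ripemd160' needs OpenSSL legacy support) and counts how often the first 3 hex digits match a few arbitrary prefixes: every prefix lands near 1 in 16^3 = 4096, i.e. a 3-digit prefix hit is the same coin flip as random.randint(0, 4095) == 0.

```python
import hashlib

# Sketch: a 3-hex-digit prefix hit on a uniform hash is a plain
# 1-in-4096 event, and no particular prefix is special.
# SHA-256 stands in here for RIPEMD-160.
TRIALS = 200_000
prefixes = ["abc", "000", "fff"]          # arbitrary 3-hex-digit prefixes
hits = {pre: 0 for pre in prefixes}
for i in range(TRIALS):
    h = hashlib.sha256(str(i).encode()).hexdigest()
    for pre in prefixes:
        if h.startswith(pre):
            hits[pre] += 1
# Each count lands near TRIALS / 4096 (about 49), whatever the prefix.
```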
|
|
|
|
kTimesG
|
 |
April 27, 2025, 01:45:55 PM |
|
At first, I doubted it too, just like you. But after seeing the evidence, I think you haven’t really read the whole thread. Anyway, this ain’t my fight, so I’ll respect your take.
I'm afraid I haven't really seen the evidence you're referring to. If it's about the "winnings" when compared with the same-traversal-order sibling method, then you are a victim of not understanding some basic statistics, and of actually believing the psychedelic explanations, which lack any sort of theoretical basis or proof whatsoever, let alone actual empirical results.

It is totally irrelevant even if the so-called "better" method has 100% wins and solves stuff a billion times faster than some other method, if it only takes 5 seconds to come up with a counter-method that not only isn't so easy to beat, but actually wins more. Isn't that something that, maybe, uhm... would make someone wonder why it happens? Perhaps because there was a linkage between how the methods were compared, when such a linkage should never exist?

All the tests / experiments / results / scripts so far that the troller claims to "not have proven anything" have demonstrated that a uniform distribution doesn't give a shit about what strategy you use to attack it. It's absurd to think that a search method which can be completely obliterated by 3 lines of code somehow broke cryptography. The only thing broken about it is common sense.
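The statistics point is worth a tiny demonstration: pit two strategies with zero edge against each other on the same hidden target, and one of them still "wins" nearly every trial, simply because exact ties are rare. Win counts alone prove nothing without an actual asymmetry. A minimal sketch, with arbitrary parameters:

```python
import random

# Sketch: two edge-free strategies search the same hidden target, each in
# its own independent shuffled order. One of them still "wins" (fewer
# checks) in almost every trial - win counts alone prove nothing.
random.seed(42)
N, TRIALS = 2_000, 1_000
wins_a = wins_b = ties = 0
for _ in range(TRIALS):
    target = random.randrange(N)
    order_a = list(range(N))
    order_b = list(range(N))
    random.shuffle(order_a)
    random.shuffle(order_b)
    checks_a = order_a.index(target) + 1  # checks until method A finds it
    checks_b = order_b.index(target) + 1
    if checks_a < checks_b:
        wins_a += 1
    elif checks_b < checks_a:
        wins_b += 1
    else:
        ties += 1
# wins_a and wins_b hover around TRIALS / 2 each, by pure symmetry.
```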
|
Off the grid, training pigeons to broadcast signed messages.
|
|
|
Bram24732
Member

Offline
Activity: 154
Merit: 15
|
 |
April 27, 2025, 05:36:25 PM |
|
I’m no mathematician
Then maybe, just maybe, trust what mathematicians have to say and not a random dude who likely has zero math background? (Happy to be corrected on that if McD has any math credentials, which I really doubt)
|
|
|
|
fixedpaul
Jr. Member
Offline
Activity: 51
Merit: 16
|
 |
April 27, 2025, 05:47:00 PM |
|
Try 4095 instead of 5000. I already explained this a few posts ago. Same results.
OK, I'll check.

Edit: Interesting results. Looks like when you tweak the parameters like you said, both methods end up working similarly. But you're kinda proving mcd right here: even though you're not directly hunting for a single prefix, you're still using a prefix-based strategy with that 4095 pick. You're just not targeting something specific. So this doesn't wreck his theory... if anything, it backs it up. I'm just going by what each side posts here, not trying to mess with anyone's opinions. I'm no mathematician, so I can't be sure of anything.

Exactly, I'm replicating what the mcd code does, which is stopping/pausing the range search, on average, every 4096 (16^3 prefix combinations) keys. Nothing more, and it doesn't provide any advantage compared to a sequential or random search. It's just a way to show that the analysis claiming more "wins" is actually flawed. Anyway, you don't need to be a math expert to understand the concept of uniform distribution and independent events. Just a bit of common sense and logic is enough.
|
|
|
|
WanderingPhilospher
Sr. Member
  
Offline
Activity: 1386
Merit: 269
Shooters Shoot...
|
 |
April 27, 2025, 05:47:16 PM |
|
SCOOBY SCOOBY DOO! LOL

Search is on autopilot, and all dirt has been moved to its new home, so I had some time. Looking at all the test variants out there, and seeing this Scooby Doo method being used recently and tweaked by both sides, I thought I would look at it and do a little "tweaking".

First, to me, there are flaws on both sides. I know we agreed a few weeks back that the only reason you would not start at the beginning of a range and just go sequentially is because the "competition" would know where/what you have already searched... even though Bram's search status is not publicly known. We still agree that straight sequential versus random subrange + sequential both average out to a 50% find average, correct?

So in these tests being run and tweaked, 100,000 seems to be the default total size, with a block/sub-range size of 5,000. For whatever reason, brevity maybe? OK, we can still go with that. But it is crazy to compare that size to a size we are actually searching in, such as now, a 2^68 range. So I tweaked the tests to create a total size of 2^17, closest to the 100,000 quantity mostly used in these tests. But I changed it to only 1 block / 1 range. And if we all agree that straight sequential versus random subrange + sequential both average out to a 50% find average, then we start both methods at key 1 and search sequentially until key 2^17. Anything wrong with that so far?

I do not like the shuffle method, unless for each sub-test the shuffle is the same for both methods; if not, then it really comes down to which random shuffle got its method to the actual subrange quicker, which is just dumb and not what we should be testing here. That is basically luck of the draw IF we shuffle the sub-ranges differently for the two methods.

OK, so any flaws yet in my tweaks? To summarize: 1 range; both methods start at key 1 and work sequentially. Any issues? I will continue if no one objects to the tweaks so far. (Bear with me here)

Edit: these comments crack me up lol:

Exactly, I'm replicating what the mcd code does, which is stopping/pausing the range search, on average, every 4096 (16^3 prefix combinations) keys. Nothing more, and it doesn't provide any advantage compared to a sequential or random search. It's just a way to show that the analysis claiming more "wins" is actually flawed. Anyway, you don't need to be a math expert to understand the concept of uniform distribution and independent events. Just a bit of common sense and logic is enough.

Then maybe, just maybe, trust what mathematicians have to say and not a random dude who likely has zero math background? (Happy to be corrected on that if McD has any math credentials, which I really doubt)

If you really think about x, y and z, you would see something... and the "math" would talk to you lol.
|
|
|
|
WanderingPhilospher
Sr. Member
  
Offline
Activity: 1386
Merit: 269
Shooters Shoot...
|
 |
April 27, 2025, 06:27:15 PM |
|
Any issues? I will continue if no one objects to the tweaks so far. (Bear with me here)

No one will answer lol?! That's cool. No one wants to be "wrong" or even partially wrong.

So I have told the flaws with the one side's view. The other side's flaw is the skip window and the skip window % (skip_count = int(skip_window * skip_percentage)). Maybe not a flaw, but I am not sure any tests with data results were run, or at least, not enough data/tests collected. The ones I have seen in the tests are way too large for the small range size and prefix length.

With our 2^17 range size, it is easy and quick to run 1,000 tests and collect the data: how often 3-character prefixes occur, their average distance apart, and what skip count really skips some keys to speed up the search without being so large that you are constantly skipping over the key you are looking for. So you run those numbers and come up with a skip count threshold that you are comfortable with, risk versus reward. The lower the skip count, the more often prefix search will win versus sequential. It's just basic math, and no, I am not a math expert. The larger the skip count, the more the results will start to shift back to 50/50.

500 tests each.

Smaller skip count:
=== FINAL RESULTS (Sequential, Full Range) ===
Wins: Scooby_Doo: 2 Prefix: 486 Ties: 12
Total Checks: Scooby_Doo: 33938958 Prefix: 33952675

Larger skip count:
=== FINAL RESULTS (Sequential, Full Range) ===
Wins: Scooby_Doo: 22 Prefix: 466 Ties: 12
Total Checks: Scooby_Doo: 33938958 Prefix: 34341946

The only time the prefix method would lose is if the skip count was larger than the distance between a found prefix and the actual key/prefix we were looking for. That is why running tests and analyzing the data for the skip count, based on range size and prefix length, is the most critical aspect, IMO.
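The suggestion to measure how often a 3-character prefix occurs and the average distance between occurrences can be sketched in a few lines. For a uniform hash the gaps between matches are geometric with mean 16^3 = 4096, which the simulation lands near. SHA-256 stands in for RIPEMD-160 here, and the prefix choice is arbitrary.

```python
import hashlib

# Sketch: average distance between 3-hex-digit prefix matches over a key
# range. For a uniform hash the gaps are geometric with mean 16^3 = 4096.
# SHA-256 stands in for RIPEMD-160; the prefix is an arbitrary choice.
PREFIX = "abc"
N = 800_000
positions = [i for i in range(N)
             if hashlib.sha256(str(i).encode()).hexdigest().startswith(PREFIX)]
gaps = [b - a for a, b in zip(positions, positions[1:])]
mean_gap = sum(gaps) / len(gaps)
# mean_gap comes out near 4096; matches are memoryless, so a skip window
# sized from this average trades missed keys against skipped work.
```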
|
|
|
|
|