|
fixedpaul
|
 |
March 30, 2025, 10:38:36 AM |
|
Quote from: fixedpaul
Hello everyone. I have published my optimized versions of VanitySearch (CUDA) with a speed boost, in case anyone is interested. The "bitcrack" version is specific to the puzzle and allows searching for addresses and prefixes (compressed) within a given range. The speed is about 6900 MKey/s on a 4090 and 8800 MKey/s on a 5090. The second version performs a standard search for vanity addresses (not just compressed P2PKH) but with the same optimizations in the math and CUDA code: random search with endomorphisms.
https://github.com/FixedPaul/VanitySearch-Bitcrack
https://github.com/FixedPaul/VanitySearch

Quote
Thank you for your work – it's truly impressive! The first program achieves a speed higher than any other solution I've seen. Even with a 33% power limit on an RTX 4090, it reaches around 2.3 GKey/s. The second program delivers an even more record-breaking speed of about 4 GKey/s under the same power limit. However, unfortunately, these impressive numbers are merely theoretical and not useful for solving puzzles, as the program does not support working with ranges. I wonder if it is possible to implement Bitcoin address prefix searching not only by the starting characters but also by any other positions within the address. For example, searching for characters at the end, in the middle, or even a combined search where part of the characters are at the beginning, part in the middle, and part at the end. Thanks!

But why only 2.3 GKey/s? A 4090 @ 300W should run at around 5.5 GKey/s. As for the second program, as soon as I find some time, I'll also implement search within a specific range there (without endomorphisms, of course), so that it can search within a range rather than randomly, both for prefixes and wildcards, which is what you're asking for, if I understood correctly.
|
|
|
|
|
Desyationer
Jr. Member
Offline
Activity: 64
Merit: 2
|
 |
March 30, 2025, 11:04:28 AM |
|
Quote from: fixedpaul
[...] But why only 2.3 GKey/s? A 4090 @ 300W should run at around 5.5 GKey/s. As for the second program, as soon as I find some time, I'll also implement search within a specific range there (without endomorphisms, of course), so that it can search within a range rather than randomly, both for prefixes and wildcards, which is what you're asking for, if I understood correctly.

2.3 GKey/s is only about a third of my 4090's full potential. Under full load, the speed indeed reaches around 7 GKey/s, just as you mentioned.
And the second program, which doesn't support range searches, can actually achieve over 10 GKey/s at full power! However, I try not to push the GPU to its limits, as excessive heat increases wear on both the GPU and memory. My card doesn't have liquid cooling, so I don't see much sense in running it at maximum capacity for prolonged periods, especially for tasks like this. But when the temperature stays within safe limits, I'm comfortable letting it run key searches for many hours.

Regarding your planned program improvements: that would be absolutely fantastic! For now, I handle wildcard prefix searches with an additional filtering script in Python. The main program searches only for the first few characters and therefore generates an overwhelming number of unnecessary results; the script intercepts this output and keeps only the prefixes I'm interested in, even when they appear in other parts of the address. Without this filtering, the main program's result file can grow to gigabytes within minutes. However, this approach of combining the main program (such as yours) with a filtering script significantly limits search speed, possibly due to the high-volume data output.
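A post-filter like the one described can be sketched in a few lines of Python. This is a hypothetical stand-in, not the actual script: the line layout (address in the first field) and the pattern "Dog" are assumptions for illustration only.

```python
import re

# Hypothetical pattern: match "Dog" anywhere in the address,
# not only in the leading characters the GPU program filtered on.
PATTERN = re.compile(r"Dog")

def filter_lines(lines, pattern=PATTERN):
    """Keep only output lines whose first field (assumed to be the
    candidate address) contains the pattern at any position."""
    kept = []
    for line in lines:
        fields = line.split()
        if fields and pattern.search(fields[0]):
            kept.append(line)
    return kept

# Made-up sample lines, as they might stream out of the main program
# (address first, then the private key).
sample = [
    "1DogUJmcs257CMDE4RGBMBTqJ8TSqryDaU 0x01",
    "1Kx4mPqBv7wTudkq3ZJc5nV8rHh2sWeGfA 0x02",
    "1GreatDogVn5qRzY45PUdryBe6HiSVCS98 0x03",
]
print(filter_lines(sample))  # keeps the first and third lines
```

In a real pipeline the function would consume `sys.stdin` line by line, so the oversized result file never has to hit disk at all.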
|
|
|
|
|
|
nomachine
|
 |
March 30, 2025, 11:14:02 AM |
|
How were they able to figure out that this happened? Things were deliberately arranged so that the right range isn't found. I have been here for a long time, but I am only answering now.
Code:
# Historical data: (puzzle_number, private_key_in_decimal)
HISTORY = [
    (1, 1), (2, 3), (3, 7), (4, 8), (5, 21), (6, 49), (7, 76), (8, 224),
    (9, 467), (10, 514), (11, 1155), (12, 2683), (13, 5216), (14, 10544),
    (15, 26867), (16, 51510), (17, 95823), (18, 198669), (19, 357535),
    (20, 863317), (21, 1811764), (22, 3007503), (23, 5598802), (24, 14428676),
    (25, 33185509), (26, 54538862), (27, 111949941), (28, 227634408),
    (29, 400708894), (30, 1033162084), (31, 2102388551), (32, 3093472814),
    (33, 7137437912), (34, 14133072157), (35, 20112871792), (36, 42387769980),
    (37, 100251560595), (38, 146971536592), (39, 323724968937),
    (40, 1003651412950), (41, 1458252205147), (42, 2895374552463),
    (43, 7409811047825), (44, 15404761757071), (45, 19996463086597),
    (46, 51408670348612), (47, 119666659114170), (48, 191206974700443),
    (49, 409118905032525), (50, 611140496167764), (51, 2058769515153876),
    (52, 4216495639600700), (53, 6763683971478124), (54, 9974455244496707),
    (55, 30045390491869460), (56, 44218742292676575), (57, 138245758910846492),
    (58, 199976667976342049), (59, 525070384258266191),
    (60, 1135041350219496382), (61, 1425787542618654982),
    (62, 3908372542507822062), (63, 8993229949524469768),
    (64, 17799667357578236628), (65, 30568377312064202855),
    (66, 46346217550346335726), (67, 132656943602386256302)
]

# The constant in hex: 0x80000000000000000 (2^67)
CONST = 147573952589676412928

for puzzle, key in HISTORY:
    hex_key = hex(key)
    # Multiply key by 1.004 (as in your earlier example)
    multiplied = key * 1.004
    # Check if multiplied value is near a multiple of CONST
    ratio = multiplied / CONST
    print(f"Puzzle {puzzle}:")
    print(f"  Private Key (dec): {key}")
    print(f"  Private Key (hex): {hex_key}")
    print(f"  Multiplied by 1.004: {multiplied:.2f}")
    print(f"  Ratio to CONST (0x800...): {ratio:.10f}")
    print("-" * 50)

The private keys in hex do not directly align with 0x80000000000000000 (no obvious masking or shifting).

--------------------------------------------------
Puzzle 63:
  Private Key (dec): 8993229949524469768
  Private Key (hex): 0x7cce5efdaccf6808
  Multiplied by 1.004: 9029202869322567680.00
  Ratio to CONST (0x800...): 0.0611842585
--------------------------------------------------
Puzzle 64:
  Private Key (dec): 17799667357578236628
  Private Key (hex): 0xf7051f27b09112d4
  Multiplied by 1.004: 17870866027008548864.00
  Ratio to CONST (0x800...): 0.1210976986
--------------------------------------------------
Puzzle 65:
  Private Key (dec): 30568377312064202855
  Private Key (hex): 0x1a838b13505b26867
  Multiplied by 1.004: 30690650821312462848.00
  Ratio to CONST (0x800...): 0.2079679393
--------------------------------------------------
Puzzle 66:
  Private Key (dec): 46346217550346335726
  Private Key (hex): 0x2832ed74f2b5e35ee
  Multiplied by 1.004: 46531602420547723264.00
  Ratio to CONST (0x800...): 0.3153104027
--------------------------------------------------
Puzzle 67:
  Private Key (dec): 132656943602386256302
  Private Key (hex): 0x730fc235c1942c1ae
  Multiplied by 1.004: 133187571376795795456.00
  Ratio to CONST (0x800...): 0.9025140890
--------------------------------------------------

The keys are randomly generated (or pseudo-randomly derived from an unknown seed).

Code:
import math

# Historical data: (puzzle_number, private_key_in_decimal)
# (same HISTORY list as in the snippet above)

# Calculate exact bit flips between consecutive puzzles
EXACT_FLIPS = []
for i in range(1, len(HISTORY)):
    prev_num, prev_key = HISTORY[i-1]
    curr_num, curr_key = HISTORY[i]
    flips = bin(prev_key ^ curr_key).count('1')
    EXACT_FLIPS.append((prev_num, curr_num, flips))
    print(f"Puzzle {prev_num}→{curr_num}: {flips} bit flips")

print("\nFull flip count table:")
print("Start → End | Bit Flips")
print("-----------------------")
for entry in EXACT_FLIPS:
    print(f"{entry[0]:3} → {entry[1]:3} | {entry[2]:3}")

Puzzle 49→50: 29 bit flips
Puzzle 50→51: 25 bit flips
Puzzle 51→52: 27 bit flips
Puzzle 52→53: 26 bit flips
Puzzle 53→54: 30 bit flips
Puzzle 54→55: 31 bit flips
Puzzle 55→56: 31 bit flips
Puzzle 56→57: 33 bit flips
Puzzle 57→58: 28 bit flips
Puzzle 58→59: 30 bit flips
Puzzle 59→60: 31 bit flips
Puzzle 60→61: 25 bit flips
Puzzle 61→62: 35 bit flips
Puzzle 62→63: 34 bit flips
Puzzle 63→64: 34 bit flips
Puzzle 64→65: 37 bit flips
Puzzle 65→66: 35 bit flips
Puzzle 66→67: 31 bit flips

But bit flips between consecutive private keys are very interesting.
|
BTC: bc1qdwnxr7s08xwelpjy3cc52rrxg63xsmagv50fa8
|
|
|
Denevron
Newbie
Offline
Activity: 121
Merit: 0
|
 |
March 30, 2025, 11:46:29 AM |
|
Quote from: nomachine
[...] But bit flips between consecutive private keys are very interesting.

Number of mutation options for 68 bits with 30 flips: 17876288714431443296
Number of mutation options for 68 bits with 31 flips: 21912870037044995008
Number of mutation options for 68 bits with 32 flips: 25336755980333275478
Number of mutation options for 68 bits with 33 flips: 27640097433090845976
Number of mutation options for 68 bits with 34 flips: 28453041475240576740
Number of mutation options for 68 bits with 35 flips: 27640097433090845976
Number of mutation options for 68 bits with 36 flips: 25336755980333275478
Number of mutation options for 68 bits with 37 flips: 21912870037044995008
Number of mutation options for 68 bits with 38 flips: 17876288714431443296
Number of mutation options for 68 bits with 39 flips: 13750991318793417920
Number of mutation options for 68 bits with 40 flips: 9969468706125227992
Number of mutation options for 68 bits with 41 flips: 6808417652963570336
Number of mutation options for 68 bits with 42 flips: 4376839919762295216

Bit flipping (mutation) helps to find a solution faster, since there are fewer options than in the full range, and if you split it between 20 people, say flip counts from 30 to 50 (one flip-count option for each), then you can find it even faster, even on a CPU! But for some reason you don't want to add my idea to Cyclone.
|
|
|
|
|
|
kTimesG
|
 |
March 30, 2025, 11:48:39 AM |
|
Quote:
I wonder if it is possible to implement Bitcoin address prefix searching not only by the starting characters but also by any other positions within the address. For example, searching for characters at the end, in the middle, or even a combined search where part of the characters are at the beginning, part in the middle, and part at the end, and so on.

This sounds awesome. So, instead of quickly filtering the H160, you want to first encode it to base58 and then do some regex stuff on the b58 string. It'll definitely have no impact on the throughput whatsoever.

Quote:
Puzzle 65→66: 35 bit flips
Puzzle 66→67: 31 bit flips
But bit flips between consecutive private keys are very interesting.

You're simply covering 99.999...9% of the range, if you do the math on summing up a few of these "how many flips on average" counts. Of course, there are extremely low chances of having just a few flips, but then again, those account for 0% of the range.

I don't get why most people can't accept a simple fact: there is no cheating, and all of these ideas to shorten the brute force actually increase the time. The only way to succeed is to check all pvtKeys to H160, as fast as possible, and there is exactly one single way to do it optimally: iterate, hash, check, repeat, on hardware that does it with the highest efficiency (hashrate / watt).
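The coverage argument is easy to check. For a uniformly random 67-bit key, the number of bits differing from any fixed reference key follows a Binomial(67, 1/2) distribution, so the flip counts actually observed in the table (roughly 25 to 42) already account for almost the whole range:

```python
from math import comb

N = 67  # bit width used in the flip-count table above

# Fraction of all 2^67 keys whose Hamming distance to a fixed
# reference key falls inside the observed 25..42 flip window.
window = sum(comb(N, k) for k in range(25, 43))
fraction = window / 2**N
print(f"{fraction:.6f}")  # close to 1

# Conversely, "few flips" keys are a vanishing sliver of the range.
few = sum(comb(N, k) for k in range(0, 11))
print(f"{few / 2**N:.2e}")
```

So restricting the search to "plausible" flip counts is just searching nearly the entire range in a different order.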
|
Off the grid, training pigeons to broadcast signed messages.
|
|
|
Akito S. M. Hosana
Jr. Member
Offline
Activity: 420
Merit: 8
|
 |
March 30, 2025, 11:51:43 AM |
|
Quote from: Denevron
Bit flipping (mutation) helps to find a solution faster, since there are fewer options than in the general range, and if you split it into 20 people, say from 30 to 50 (change one bit option for each), then you can find it even faster, even on the CPU! But for some reason you don't want to add my idea to Cyclone

Just one moment. If I understand correctly, we don't actually have to search the whole range, just from the last private key onward?
|
|
|
|
|
Denevron
Newbie
Offline
Activity: 121
Merit: 0
|
 |
March 30, 2025, 11:56:18 AM |
|
Quote from: Akito S. M. Hosana
Just one moment. If I understand correctly, we don't actually have to search the whole range, just from the last private key onward?

Code:
import sys
import time
import multiprocessing
import os
from itertools import combinations
from concurrent.futures import ProcessPoolExecutor, as_completed
from functools import partial

sys.path.append(r'E:\Work\python\secp256k1')
import secp256k1 as ice

NUM_WORKERS = 1
TARGET_ADDRESS = "b907c3a2a3b27789dfb509b730dd47703c272868"  # target hash160 (puzzle 20)

stop_event = multiprocessing.Event()
key_found = multiprocessing.Value('b', False)

# Base key (puzzle 19 private key)
KEY_67_HEX = "000000000000000000000000000000000000000000000000000000000005749f"

# Fix the upper 236 bits (all zero); only the low 20 bits are mutated
KEY_67_BIN = bin(int(KEY_67_HEX, 16))[2:].zfill(256)
FIXED_BITS = KEY_67_BIN[:236]           # fixed 236 bits
CHANGING_BITS = list(KEY_67_BIN[236:])  # changeable bits (20 bits)

def mutate_fixed_bits(start_index, num_workers):
    for i, bit_indices in enumerate(combinations(range(20), 8)):
        if (i % num_workers) == start_index:
            mutated_bits = CHANGING_BITS[:]
            for index in bit_indices:
                mutated_bits[index] = '1' if mutated_bits[index] == '0' else '0'
            mutated_key_bin = FIXED_BITS + "".join(mutated_bits)
            mutated_key_hex = hex(int(mutated_key_bin, 2))[2:].zfill(64)
            yield mutated_key_hex

def check_key(worker_id, num_workers, keys_checked, lock):
    local_counter = 0
    start_time = time.time()

    for mutated_key_hex in mutate_fixed_bits(worker_id, num_workers):
        if key_found.value or stop_event.is_set():
            return None

        generated_address = ice.privatekey_to_h160(0, True, int(mutated_key_hex, 16)).hex()

        if generated_address.startswith(TARGET_ADDRESS[:7]):
            print(f" Private Key: {mutated_key_hex}")
            print(f" Generated Hash: {generated_address}")
            print(f" Target Hash: {TARGET_ADDRESS}")

        if generated_address == TARGET_ADDRESS:
            with lock:
                if not key_found.value:
                    print(f"\n Key found {worker_id}: {mutated_key_hex} to address {TARGET_ADDRESS}!")
                    with open("key.txt", "w") as f:
                        f.write(f"Key found: {mutated_key_hex}\n")
                    key_found.value = True
                    stop_event.set()
            return mutated_key_hex

        local_counter += 1
        if time.time() - start_time >= 300:
            with lock:
                keys_checked.value += local_counter
            local_counter = 0
            start_time = time.time()

    with lock:
        keys_checked.value += local_counter

    return None

def print_status(start_time, keys_checked, lock):
    while not stop_event.is_set():
        time.sleep(300)
        with lock:
            checked = keys_checked.value
        elapsed_time = time.time() - start_time
        print(f"Full time: {elapsed_time:.2f} sec. | Total Key: {checked}")

def main():
    start_time = time.time()

    with multiprocessing.Manager() as manager:
        keys_checked = manager.Value('i', 0)
        lock = manager.Lock()

        tracker_process = multiprocessing.Process(target=print_status, args=(start_time, keys_checked, lock))
        tracker_process.start()

        check_key_partial = partial(check_key, num_workers=NUM_WORKERS, keys_checked=keys_checked, lock=lock)

        with ProcessPoolExecutor(max_workers=NUM_WORKERS) as executor:
            futures = {executor.submit(check_key_partial, i): i for i in range(NUM_WORKERS)}

            for future in as_completed(futures):
                result = future.result()
                if result:
                    break

        stop_event.set()
        tracker_process.terminate()
        tracker_process.join()

        end_time = time.time()
        if not key_found.value:
            print("Key not found")

        print(f"⏳ Full time: {end_time - start_time:.2f} sec.")
        os._exit(0)

if __name__ == "__main__":
    multiprocessing.freeze_support()
    main()

Run it; this is an example of how to find puzzle 20 (others are possible). Here you need to change exactly 8 bits: if you put 9 instead of 8 in combinations(range(20), 8), you won't find the key, and with 7 you won't find it either. The same goes for many puzzles: only part of the bits in the key changes, mostly from 30 to 45 bits, depending on the puzzle number.
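The choice of exactly 8 flips in combinations(range(20), 8) isn't arbitrary: it is the Hamming distance between the known keys of puzzles 19 and 20. A quick sanity check, using the two values from the HISTORY table posted earlier in the thread:

```python
# Known private keys (decimal) from the published solutions
KEY_19 = 357535   # 0x5749f
KEY_20 = 863317   # 0xd2c55

# Hamming distance = number of differing bits = required flip count
flips = bin(KEY_19 ^ KEY_20).count('1')
print(flips)  # 8
```

A search that flips 7 or 9 bits of puzzle 19's key can never land on puzzle 20's key; only the exact distance works, which is also why the flip count must be guessed for unsolved puzzles.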
|
|
|
|
|
|
nomachine
|
 |
March 30, 2025, 12:01:38 PM |
|
Quote from: Denevron
But for some reason you don't want to add my idea to Cyclone

It should be a completely new program; you can't just slip this into the existing one. And it wouldn't be called Cyclone; it would be Mutagen.

Quote from: Akito S. M. Hosana
Just one moment. If I understand correctly, we don't actually have to search the whole range, just from the last private key onward?

It's not as simple as it seems.
|
BTC: bc1qdwnxr7s08xwelpjy3cc52rrxg63xsmagv50fa8
|
|
|
Denevron
Newbie
Offline
Activity: 121
Merit: 0
|
 |
March 30, 2025, 12:04:50 PM |
|
Quote from: nomachine
[...] It's not as simple as it seems.

It can be called Mutagen. It's not that simple, but it speeds up the process, as you've noticed yourself. You could add it to Cyclone as a new method: instead of computing over the puzzle range, you enter the base key whose bits will change. And it's better not in random mode but in sequential mode, because random mode will loop and that's it: you could be forever chasing, say, 30-bit flips and never find the key when it actually takes, say, 33 flips.
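The looping concern is real: random sampling draws flip patterns with replacement, so it keeps revisiting patterns it has already tried, while sequential enumeration via itertools.combinations visits each pattern exactly once. A small sketch (20 bit positions, 3 flips, so 1140 patterns in total) shows the difference:

```python
import random
from math import comb
from itertools import combinations

n_bits, n_flips = 20, 3
total = comb(n_bits, n_flips)  # 1140 distinct flip patterns

# Sequential: every pattern exactly once, guaranteed full coverage.
sequential = list(combinations(range(n_bits), n_flips))
print(len(sequential), len(set(sequential)))  # 1140 1140

# Random: after 'total' draws, many duplicates and gaps remain
# (on the order of total * (1 - 1/e) distinct patterns).
random.seed(1)
seen = {tuple(sorted(random.sample(range(n_bits), n_flips)))
        for _ in range(total)}
print(len(seen))
```

So a sequential sweep guarantees termination after C(n, k) candidates, while the random variant has no such bound.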
|
|
|
|
|
|
fixedpaul
|
 |
March 30, 2025, 12:08:41 PM |
|
Quote from: kTimesG
I don't get why most people can't accept a simple fact: there is no cheating, and all of these ideas to shorten the brute force actually increase the time. The only way to succeed is to check all pvtKeys to H160, as fast as possible, and there is exactly one single way to do it optimally: iterate, hash, check, repeat, on hardware that does it with the highest efficiency (hashrate / watt).
I'll join this discussion to say that I completely agree. There is no way to bypass a uniform distribution, and many find that hard to admit. The reality is that we are simply looking for one random number among many, a ball in a box, and the only way to increase the chances of finding the winning ball is to search through the balls faster. I've added prefix search to my software because many people asked for it, but I don't understand how it could be useful for the puzzle, unless it's used as a proof of work.
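The "search the balls faster" point can be made concrete. With a uniformly distributed key, the expected work is half the range no matter what order you visit it in, so only raw throughput matters. A rough back-of-the-envelope sketch, assuming the ~6900 MKey/s single-4090 figure quoted earlier in this thread and a 68-bit puzzle range (2^67 keys):

```python
SPEED = 6.9e9   # keys/s, single RTX 4090 (figure quoted in the thread)
RANGE = 2**67   # size of a 68-bit puzzle range

SECONDS_PER_YEAR = 365.25 * 24 * 3600
full_sweep_years = RANGE / SPEED / SECONDS_PER_YEAR
expected_years = full_sweep_years / 2  # uniform key: half the range on average

print(f"full sweep: {full_sweep_years:.0f} years, expected: {expected_years:.0f} years")
```

Centuries for a single card, which is why the only meaningful lever is hashrate (and hashrate per watt), not visiting order.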
|
|
|
|
|
|
nomachine
|
 |
March 30, 2025, 12:18:24 PM |
|
Quote from: Denevron
[...] it's better not in random mode, but in sequential mode, because the random mode will loop and that's it [...]

Code:
import time
import multiprocessing
import random
from math import comb

import numpy as np
import secp256k1 as ice

# Configuration
TARGET_ADDR = "1HsMJxNiV7TLxmoF6uJNkydxPFDog4NQum"  # Puzzle 20
BASE_KEY = "000000000000000000000000000000000000000000000000000000000005749f"  # Puzzle 19 private key
PUZZLE_NUM = 20
WORKERS = multiprocessing.cpu_count()

# Flip counts per puzzle
FLIP_TABLE = {
    20: 8,  21: 8,  22: 9,  23: 9,  24: 10, 25: 11, 26: 12, 27: 13,
    28: 14, 29: 15, 30: 16, 31: 17, 32: 18, 33: 19, 34: 20, 35: 21,
    36: 22, 37: 23, 38: 24, 39: 25, 40: 26, 41: 27, 42: 28, 43: 29,
    44: 30, 45: 31, 46: 32, 47: 33, 48: 34, 49: 35, 50: 36, 51: 37,
    52: 38, 53: 39, 54: 40, 55: 41, 56: 42, 57: 43, 58: 44, 59: 45,
    60: 46, 61: 47, 62: 48, 63: 49, 64: 50, 65: 51, 66: 52, 67: 53
}

def predict_flips(puzzle_num):
    """Predict flip count using linear regression with bounds"""
    if puzzle_num in FLIP_TABLE:
        return FLIP_TABLE[puzzle_num]
    # Linear regression over the table
    x = np.array(list(FLIP_TABLE.keys()))
    y = np.array(list(FLIP_TABLE.values()))
    coeffs = np.polyfit(x, y, 1)
    predicted = round(coeffs[0] * puzzle_num + coeffs[1])
    # Apply bounds (50-60% of bits)
    bit_length = puzzle_num
    lower = max(8, int(bit_length * 0.5))
    upper = min(int(bit_length * 0.6), bit_length - 5)
    return min(max(predicted, lower), upper)

def mutate_key(base_int, flip_positions):
    """Fast bit flipping using an XOR mask"""
    flip_mask = sum(1 << bit for bit in flip_positions)
    return base_int ^ flip_mask

def worker(base_int, bit_length, flip_count, results, stop_event):
    checked = 0
    bit_positions = list(range(bit_length))
    start_time = time.time()
    while not stop_event.is_set():
        # Generate random flip positions
        flip_pos = random.sample(bit_positions, flip_count)
        # Create mutated key
        priv_int = mutate_key(base_int, flip_pos)
        checked += 1
        # Generate address
        addr = ice.privatekey_to_address(0, True, priv_int)
        if addr == TARGET_ADDR:
            hex_key = "%064x" % priv_int
            actual_flips = bin(base_int ^ priv_int).count('1')
            results.put((hex_key, checked, actual_flips))
            stop_event.set()
            return
        # Progress update
        if checked % 10000 == 0:
            elapsed = time.time() - start_time
            speed = checked / elapsed if elapsed > 0 else 0
            print(f"Checked {checked:,} | {speed:,.0f} keys/sec", end='\r')

def parallel_search():
    bit_length = PUZZLE_NUM
    base_int = int(BASE_KEY, 16)
    flip_count = predict_flips(PUZZLE_NUM)
    total_combs = comb(bit_length, flip_count)
    print(f"\nSearching Puzzle {PUZZLE_NUM}")
    print(f"Base Key: {BASE_KEY}")
    print(f"Target: {TARGET_ADDR}")
    print(f"Predicted Flip Count: {flip_count} bits")
    print(f"Total Possible Combinations: 2^{int(np.log2(total_combs)):,}")
    print(f"Using {WORKERS} workers...\n")
    manager = multiprocessing.Manager()
    results = manager.Queue()
    stop_event = manager.Event()
    start_time = time.time()
    with multiprocessing.Pool(WORKERS) as pool:
        for _ in range(WORKERS):
            pool.apply_async(worker, (base_int, bit_length, flip_count, results, stop_event))
        try:
            hex_key, checked, actual_flips = results.get(timeout=86400)  # 24h timeout
            stop_event.set()
            elapsed = time.time() - start_time
            print(f"\nFound in {elapsed/3600:.2f} hours")
            print(f"Private Key: {hex_key}")
            print(f"Actual Bit Flips: {actual_flips}")
            print(f"Keys Checked: {checked:,}")
            print(f"Speed: {checked/elapsed:,.0f} keys/sec")
            solution_file = f"puzzle_{PUZZLE_NUM}_solution.txt"
            with open(solution_file, "w") as f:
                f.write(hex_key)
            print(f"\nSolution saved to {solution_file}")
            return hex_key
        except Exception:
            print("\nKey not found in 24 hours - try adjusting flip count ±2")
            return None
        finally:
            pool.terminate()

if __name__ == "__main__":
    flip_count = predict_flips(PUZZLE_NUM)  # First calculate flip_count
    print(f"Predicted flip count for {PUZZLE_NUM}: {flip_count} bits")
    solution = parallel_search()

Here is my Python version. I still think this is useless, unless someone convinces me otherwise.
|
BTC: bc1qdwnxr7s08xwelpjy3cc52rrxg63xsmagv50fa8
|
|
|
Denevron
Newbie
Offline
Activity: 121
Merit: 0
|
 |
March 30, 2025, 12:29:24 PM |
|
can be called a mutagen  It's not that simple, but it speeds up the process, you've noticed it yourself) you can add it to Cyclon as a new method, and there you just don’t calculate the puzzle range, but enter the key by which the bits will change, and it’s better not in random mode, but in sequential mode, because the random mode will loop and that’s it, and you will be forever, say, chasing in 30 bits, and you won’t find it, and it will be, say, in 33 bits

[nomachine's full script quoted here; identical to the post above]

Here is my Python version. I still think this is useless. If someone convinces me otherwise. 

you don't need to use random, so you won't find a match, only if you guess the number of bits 
|
|
|
|
|
|
nomachine
|
 |
March 30, 2025, 01:47:29 PM |
|
can be called a mutagen  It's not that simple, but it speeds up the process... [Denevron's post, including nomachine's first script, quoted in full; identical to the posts above] ... you don't need to use random, so you won't find a match, only if you guess the number of bits 

import time
import multiprocessing
import secp256k1 as ice
from math import comb
import numpy as np
from itertools import combinations  # For sequential flip generation

# Configuration
TARGET_ADDR = "1HsMJxNiV7TLxmoF6uJNkydxPFDog4NQum"  # Puzzle 20
BASE_KEY = "000000000000000000000000000000000000000000000000000000000005749f"  # Puzzle 19 Private Key
PUZZLE_NUM = 20
WORKERS = multiprocessing.cpu_count()

# Historical flip counts (unchanged)
FLIP_TABLE = {
    20:8, 21:8, 22:9, 23:9, 24:10, 25:11, 26:12, 27:13, 28:14, 29:15,
    30:16, 31:17, 32:18, 33:19, 34:20, 35:21, 36:22, 37:23, 38:24, 39:25,
    40:26, 41:27, 42:28, 43:29, 44:30, 45:31, 46:32, 47:33, 48:34, 49:35,
    50:36, 51:37, 52:38, 53:39, 54:40, 55:41, 56:42, 57:43, 58:44, 59:45,
    60:46, 61:47, 62:48, 63:49, 64:50, 65:51, 66:52, 67:53
}

def predict_flips(puzzle_num):
    """Predict flip count using linear regression with bounds (unchanged)"""
    if puzzle_num in FLIP_TABLE:
        return FLIP_TABLE[puzzle_num]
    # Linear regression
    x = np.array(list(FLIP_TABLE.keys()))
    y = np.array(list(FLIP_TABLE.values()))
    coeffs = np.polyfit(x, y, 1)
    predicted = round(coeffs[0] * puzzle_num + coeffs[1])
    # Apply bounds (50-60% of bits)
    bit_length = puzzle_num
    lower = max(8, int(bit_length * 0.5))
    upper = min(int(bit_length * 0.6), bit_length - 5)
    return min(max(predicted, lower), upper)

def mutate_key(base_int, flip_positions):
    """Ultra-fast bit flipping using XOR mask (unchanged)"""
    flip_mask = sum(1 << bit for bit in flip_positions)
    return base_int ^ flip_mask

def worker(base_int, bit_length, flip_count, results, stop_event, start_index, end_index):
    """Modified worker for sequential search"""
    checked = 0
    bit_positions = list(range(bit_length))
    start_time = time.time()

    # Generate all combinations in the assigned range
    all_combinations = combinations(bit_positions, flip_count)
    # Skip to the start index (for parallelization)
    for _ in range(start_index):
        next(all_combinations, None)
    # Iterate through combinations sequentially
    for flip_pos in all_combinations:
        if stop_event.is_set():
            return
        if checked >= (end_index - start_index):
            return
        priv_int = mutate_key(base_int, flip_pos)
        checked += 1
        # Generate address
        addr = ice.privatekey_to_address(0, True, priv_int)
        if addr == TARGET_ADDR:
            hex_key = "%064x" % priv_int
            actual_flips = bin(base_int ^ priv_int).count('1')
            results.put((hex_key, checked, actual_flips))
            stop_event.set()
            return
        # Progress update
        if checked % 10000 == 0:
            elapsed = time.time() - start_time
            speed = checked / elapsed if elapsed > 0 else 0
            print(f"Checked {checked:,} | {speed:,.0f} keys/sec", end='\r')

def parallel_search():
    bit_length = PUZZLE_NUM
    base_int = int(BASE_KEY, 16)
    flip_count = predict_flips(PUZZLE_NUM)
    total_combs = comb(bit_length, flip_count)

    print(f"\nSearching Puzzle {PUZZLE_NUM} (256-bit)")
    print(f"Base Key: {BASE_KEY}")
    print(f"Target: {TARGET_ADDR}")
    print(f"Predicted Flip Count: {flip_count} bits")
    print(f"Total Possible Combinations: 2^{int(np.log2(total_combs)):,}")
    print(f"Using {WORKERS} workers...\n")

    manager = multiprocessing.Manager()
    results = manager.Queue()
    stop_event = manager.Event()
    start_time = time.time()

    # Split work into chunks for each worker
    chunk_size = total_combs // WORKERS
    ranges = [(i * chunk_size, (i + 1) * chunk_size) for i in range(WORKERS)]

    with multiprocessing.Pool(WORKERS) as pool:
        for start, end in ranges:
            pool.apply_async(worker, (base_int, bit_length, flip_count, results, stop_event, start, end))
        try:
            hex_key, checked, actual_flips = results.get(timeout=86400)  # 24h timeout
            stop_event.set()
            elapsed = time.time() - start_time
            print(f"\nFound in {elapsed/3600:.2f} hours")
            print(f"Private Key: {hex_key}")
            print(f"Actual Bit Flips: {actual_flips}")
            print(f"Keys Checked: {checked:,}")
            print(f"Speed: {checked/elapsed:,.0f} keys/sec")
            solution_file = f"puzzle_{PUZZLE_NUM}_solution.txt"
            with open(solution_file, "w") as f:
                f.write(hex_key)
            print(f"\nSolution saved to {solution_file}")
            return hex_key
        except:
            print("\nKey not found in 24 hours - try adjusting flip count ±2")
            return None
        finally:
            pool.terminate()

if __name__ == "__main__":
    flip_count = predict_flips(PUZZLE_NUM)
    print(f"Predicted flip count for {PUZZLE_NUM}: {flip_count} bits")
    solution = parallel_search()

Here is the script. I don't see any difference here. To solve puzzle 68 you need impossible speed.
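One practical weakness of the sequential version above: each worker "skips" to its chunk by calling next() start_index times, which itself costs O(start_index) generator steps before any key is checked. A standard combination-unranking routine (not part of the script above; a hedged sketch) jumps straight to the k-th combination in lexicographic order:

```python
from math import comb

def unrank_combination(rank, n, k):
    """Return the k-element combination of range(n) with the given
    lexicographic rank, without iterating through earlier ones."""
    result = []
    x = 0
    for remaining in range(k, 0, -1):
        # Advance x past every leading element whose block of
        # combinations lies entirely before the target rank.
        while comb(n - x - 1, remaining - 1) <= rank:
            rank -= comb(n - x - 1, remaining - 1)
            x += 1
        result.append(x)
        x += 1
    return tuple(result)
```

Each worker could then start directly at unrank_combination(start_index, bit_length, flip_count) and step forward with a next-combination routine, so the startup cost no longer grows with the chunk offset.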
|
BTC: bc1qdwnxr7s08xwelpjy3cc52rrxg63xsmagv50fa8
|
|
|
AlanJohnson
Member

Offline
Activity: 185
Merit: 11
|
 |
March 30, 2025, 02:30:52 PM |
|
I don't get why most people can't accept a simple fact: there is no cheating, all of these ideas to shorten the brute force are actually increasing the time. Only way to success is to check all pvtKeys to H160, as fast as possible, and there is exactly one single way to do it optimally: iterate, hash, check, repeat, on hardware that does it with the highest efficiency (hashrate / watt).
I’ll join this discussion to say that I completely agree. There’s no way to bypass a uniform distribution, and many find it hard to admit. The reality is that we are simply looking for a random number among many – a ball in a box, and the only way we can increase the chances of finding the winning ball is by searching for the balls faster. I’ve added prefixes search to my software because many people asked for it. However, I don’t understand how this could be useful for the puzzle, unless using them as a proof of work. You are 100% right. Some people just can't stand a simple fact that this race is over for small players...
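The "ball in a box" point can be put in numbers: when the key is uniform in a range of size N, every scan order — random, sequential, or "clever" — needs the same expected number of checks, about N/2. A sketch for the puzzle-67 range, using the single-4090 rate quoted later in this thread as the assumed speed:

```python
N = 2**66                          # size of the puzzle-67 keyspace
expected_checks = (N + 1) // 2     # mean position of a uniform key, any scan order
speed = 8.1e9                      # keys/sec on one 4090, figure quoted in this thread
years = expected_checks / speed / 86400 / 365
print(f"about {years:.0f} years on a single GPU")
```

Which is why the only variables that matter are total hashrate and luck, not the order in which candidates are tried.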
|
|
|
|
|
Denevron
Newbie
Offline
Activity: 121
Merit: 0
|
 |
March 30, 2025, 02:33:23 PM |
|
can be called a mutagen ... [nomachine's post, with both scripts, quoted in full; identical to the posts above] ... Here is the script. I don't see any difference here. To solve puzzle 68 you need impossible speed.

Speed is important, but if you haven't noticed the combinations are much smaller if you only change the bits in the key, and not randomly generate all possible private keys and their addresses) on a Python script, you can easily get to 50 +-, and 68 is possible, but it will take more time to check, say, 30 bits (mutations in the key), so good speed can be achieved in low-level programming languages, and these are C, C++ and similar languages  If I knew C or C++, I would have rewritten it from Python to it for better speed, but I don't have such knowledge! The most I can do is rewrite it to node.js, but it won't give any increase)
|
|
|
|
|
|
kTimesG
|
 |
March 30, 2025, 03:13:12 PM |
|
Speed is important, but if you haven't noticed the combinations are much smaller if you only change the bits in the key, and not randomly generate all possible private keys and their addresses) on a Python script, you can easily get to 50 +-, and 68 is possible, but it will take more time to check, say, 30 bits (mutations in the key), so good speed can be achieved in low-level programming languages, and these are C, C++ and similar languages  If I knew C or C++, I would have rewritten it from Python to it for better speed, but I don't have such knowledge! The most I can do is rewrite it to node.js, but it won't give any increase) The only thing you would get knowledge of is learning C and C++. Other than this, there's something else you'd learn: that converting private keys to hashes is around 100 to 1000 times slower than converting sequential private keys to hashes. Do you understand how the discrete logarithm works for an elliptic curve? Or generally, for a group? I'm not gonna say it loud what this means for your idea. You can do the math.
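What kTimesG is pointing at: deriving the public key for an arbitrary private key costs a full scalar multiplication (~hundreds of point operations via double-and-add), while stepping from key k to key k+1 costs a single point addition of G. A minimal affine sketch for illustration only (real scanners use Jacobian coordinates and batched inversions; requires Python 3.8+ for the modular inverse via pow):

```python
# secp256k1 field prime and generator point
P_MOD = 2**256 - 2**32 - 977
GX = 0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798
GY = 0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8

def point_add(p, q):
    """Affine point addition on secp256k1 (None = point at infinity)."""
    if p is None: return q
    if q is None: return p
    (x1, y1), (x2, y2) = p, q
    if x1 == x2 and (y1 + y2) % P_MOD == 0:
        return None
    if p == q:  # tangent slope for doubling
        lam = (3 * x1 * x1) * pow(2 * y1, -1, P_MOD) % P_MOD
    else:       # chord slope
        lam = (y2 - y1) * pow(x2 - x1, -1, P_MOD) % P_MOD
    x3 = (lam * lam - x1 - x2) % P_MOD
    return (x3, (lam * (x1 - x3) - y1) % P_MOD)

def scalar_mult(k, p=(GX, GY)):
    """Double-and-add: ~2*256 point operations for an arbitrary key."""
    r = None
    while k:
        if k & 1:
            r = point_add(r, p)
        p = point_add(p, p)
        k >>= 1
    return r

# Arbitrary key: one full scalar_mult. Sequential key: ONE point addition.
pub = scalar_mult(1000)
assert point_add(pub, (GX, GY)) == scalar_mult(1001)
```

So a scanner that walks a range sequentially pays one cheap group operation per candidate, while randomly jumping around the keyspace pays the full multiplication every time — the 100x-1000x gap mentioned above.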
|
Off the grid, training pigeons to broadcast signed messages.
|
|
|
Denevron
Newbie
Offline
Activity: 121
Merit: 0
|
 |
March 30, 2025, 03:26:56 PM |
|
Speed is important, but if you haven't noticed the combinations are much smaller if you only change the bits in the key, and not randomly generate all possible private keys and their addresses) on a Python script, you can easily get to 50 +-, and 68 is possible, but it will take more time to check, say, 30 bits (mutations in the key), so good speed can be achieved in low-level programming languages, and these are C, C++ and similar languages  If I knew C or C++, I would have rewritten it from Python to it for better speed, but I don't have such knowledge! The most I can do is rewrite it to node.js, but it won't give any increase) The only thing you would get knowledge of is learning C and C++. Other than this, there's something else you'd learn: that converting private keys to hashes is around 100 to 1000 times slower than converting sequential private keys to hashes. Do you understand how the discrete logarithm works for an elliptic curve? Or generally, for a group? I'm not gonna say it loud what this means for your idea. You can do the math. Don't you think that you are contradicting yourself? private keys to hashes is around 100 to 1000 times slower than converting sequential private keys to hashes - these are the same operations ... The method I proposed works great! And I understand perfectly well that for large ranges you need speed or faith in randomness! Maybe someone will find my idea attractive and implement it, thereby speeding up finding solutions to puzzles!
|
|
|
|
|
Bram24732
Member

Offline
Activity: 322
Merit: 28
|
 |
March 30, 2025, 03:48:53 PM |
|
Speed is important, but if you haven't noticed the combinations are much smaller if you only change the bits in the key, and not randomly generate all possible private keys and their addresses) on a Python script, you can easily get to 50 +-, and 68 is possible, but it will take more time to check, say, 30 bits (mutations in the key), so good speed can be achieved in low-level programming languages, and these are C, C++ and similar languages  If I knew C or C++, I would have rewritten it from Python to it for better speed, but I don't have such knowledge! The most I can do is rewrite it to node.js, but it won't give any increase) The only thing you would get knowledge of is learning C and C++. Other than this, there's something else you'd learn: that converting private keys to hashes is around 100 to 1000 times slower than converting sequential private keys to hashes. Do you understand how the discrete logarithm works for an elliptic curve? Or generally, for a group? I'm not gonna say it loud what this means for your idea. You can do the math. Don't you think that you are contradicting yourself? private keys to hashes is around 100 to 1000 times slower than converting sequential private keys to hashes - these are the same operations ... The method I proposed works great! And I understand perfectly well that for large ranges you need speed or faith in randomness! Maybe someone will find my idea attractive and implement it, thereby speeding up finding solutions to puzzles! Those are not the same operations. Computing addr(pkey)…addr(pkey + n) is way faster than computing addr(pkey1)….addr(pkeyN). If you don’t understand why, have a look at how vanity search works for instance.
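One concrete reason addr(pkey)…addr(pkey + n) is so much faster in tools like VanitySearch: each affine point addition needs a modular inverse, and Montgomery's trick amortizes a single inverse across a whole batch of additions. A hedged sketch of the batching idea only, not the project's actual code:

```python
# Montgomery's trick: invert n field elements with ONE modular inverse
# plus ~3n multiplications, instead of n separate inversions.
P_MOD = 2**256 - 2**32 - 977  # secp256k1 field prime

def batch_inverse(values, p=P_MOD):
    n = len(values)
    # prefix[i] = product of values[0..i-1] mod p
    prefix = [1] * (n + 1)
    for i, v in enumerate(values):
        prefix[i + 1] = prefix[i] * v % p
    inv_all = pow(prefix[n], -1, p)   # the single expensive inversion
    out = [0] * n
    # Walk backwards, peeling one factor off the running inverse each step
    for i in range(n - 1, -1, -1):
        out[i] = prefix[i] * inv_all % p
        inv_all = inv_all * values[i] % p
    return out

vals = [3, 7, 11, 12345]
invs = batch_inverse(vals)
assert all(v * iv % P_MOD == 1 for v, iv in zip(vals, invs))
```

With a group size of a few thousand, the per-key cost of the inversion becomes negligible, which is why sequential-range scanners reach Gkey/s figures that per-key random generation cannot touch.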
|
I solved 67 and 68 using custom software distributing the load across ~25k GPUs. 4090 stocks speeds : ~8.1Bkeys/sec. Don’t challenge me technically if you know shit about fuck, I’ll ignore you. Same goes if all you can do is LLM reply.
|
|
|
Denevron
Newbie
Offline
Activity: 121
Merit: 0
|
 |
March 30, 2025, 04:07:55 PM |
|
Speed is important, but if you haven't noticed the combinations are much smaller if you only change the bits in the key, and not randomly generate all possible private keys and their addresses) on a Python script, you can easily get to 50 +-, and 68 is possible, but it will take more time to check, say, 30 bits (mutations in the key), so good speed can be achieved in low-level programming languages, and these are C, C++ and similar languages  If I knew C or C++, I would have rewritten it from Python to it for better speed, but I don't have such knowledge! The most I can do is rewrite it to node.js, but it won't give any increase) The only thing you would get knowledge of is learning C and C++. Other than this, there's something else you'd learn: that converting private keys to hashes is around 100 to 1000 times slower than converting sequential private keys to hashes. Do you understand how the discrete logarithm works for an elliptic curve? Or generally, for a group? I'm not gonna say it loud what this means for your idea. You can do the math. Don't you think that you are contradicting yourself? private keys to hashes is around 100 to 1000 times slower than converting sequential private keys to hashes - these are the same operations ... The method I proposed works great! And I understand perfectly well that for large ranges you need speed or faith in randomness! Maybe someone will find my idea attractive and implement it, thereby speeding up finding solutions to puzzles! Those are not the same operations. Computing addr(pkey)…addr(pkey + n) is way faster than computing addr(pkey1)….addr(pkeyN). If you don’t understand why, have a look at how vanity search works for instance. I understand what you are talking about, I made a slight mistake in the post above, I apologize. 
|
|
|
|
|
b0dre
Jr. Member
Offline
Activity: 61
Merit: 1
|
 |
March 30, 2025, 04:16:52 PM |
|
can be called a mutagen  It's not that simple, but it speeds up the process, you've noticed it yourself) you can add it to Cyclon as a new method, and there you just don’t calculate the puzzle range, but enter the key by which the bits will change, and it’s better not in random mode, but in sequential mode, because the random mode will loop and that’s it, and you will be forever, say, chasing in 30 bits, and you won’t find it, and it will be, say, in 33 bits import time import multiprocessing import secp256k1 as ice import random from math import comb import numpy as np
# Configuration TARGET_ADDR = "1HsMJxNiV7TLxmoF6uJNkydxPFDog4NQum" # Puzzle 20 BASE_KEY = "000000000000000000000000000000000000000000000000000000000005749f" # Puzzle 19 Private Key PUZZLE_NUM = 20 WORKERS = multiprocessing.cpu_count()
# Historical flip counts FLIP_TABLE = { 20:8, 21:8, 22:9, 23:9, 24:10, 25:11, 26:12, 27:13, 28:14, 29:15, 30:16, 31:17, 32:18, 33:19, 34:20, 35:21, 36:22, 37:23, 38:24, 39:25, 40:26, 41:27, 42:28, 43:29, 44:30, 45:31, 46:32, 47:33, 48:34, 49:35, 50:36, 51:37, 52:38, 53:39, 54:40, 55:41, 56:42, 57:43, 58:44, 59:45, 60:46, 61:47, 62:48, 63:49, 64:50, 65:51, 66:52, 67:53 }
def predict_flips(puzzle_num): """Predict flip count using linear regression with bounds""" if puzzle_num in FLIP_TABLE: return FLIP_TABLE[puzzle_num] # Linear regression x = np.array(list(FLIP_TABLE.keys())) y = np.array(list(FLIP_TABLE.values())) coeffs = np.polyfit(x, y, 1) predicted = round(coeffs[0] * puzzle_num + coeffs[1]) # Apply bounds (50-60% of bits) bit_length = puzzle_num lower = max(8, int(bit_length * 0.5)) upper = min(int(bit_length * 0.6), bit_length-5) return min(max(predicted, lower), upper)
def mutate_key(base_int, flip_positions): """Ultra-fast bit flipping using XOR mask""" flip_mask = sum(1 << bit for bit in flip_positions) return base_int ^ flip_mask
def worker(base_int, bit_length, flip_count, results, stop_event): checked = 0 bit_positions = list(range(bit_length)) start_time = time.time() while not stop_event.is_set(): # Generate random flip positions flip_pos = random.sample(bit_positions, flip_count) # Create mutated key priv_int = mutate_key(base_int, flip_pos) checked += 1 # Generate address addr = ice.privatekey_to_address(0, True, priv_int) if addr == TARGET_ADDR: hex_key = "%064x" % priv_int actual_flips = bin(base_int ^ priv_int).count('1') results.put((hex_key, checked, actual_flips)) stop_event.set() return # Progress update if checked % 10000 == 0: elapsed = time.time() - start_time speed = checked/elapsed if elapsed > 0 else 0 print(f"Checked {checked:,} | {speed:,.0f} keys/sec", end='\r')
def parallel_search(): bit_length = PUZZLE_NUM base_int = int(BASE_KEY, 16) flip_count = predict_flips(PUZZLE_NUM) total_combs = comb(bit_length, flip_count) print(f"\nSearching Puzzle {PUZZLE_NUM} (256-bit)") print(f"Base Key: {BASE_KEY}") print(f"Target: {TARGET_ADDR}") print(f"Predicted Flip Count: {flip_count} bits") print(f"Total Possible Combinations: 2^{int(np.log2(total_combs)):,}") print(f"Using {WORKERS} workers...\n") manager = multiprocessing.Manager() results = manager.Queue() stop_event = manager.Event() start_time = time.time() with multiprocessing.Pool(WORKERS) as pool: for _ in range(WORKERS): pool.apply_async(worker, (base_int, bit_length, flip_count, results, stop_event)) try: hex_key, checked, actual_flips = results.get(timeout=86400) # 24h timeout stop_event.set() elapsed = time.time() - start_time print(f"\nFound in {elapsed/3600:.2f} hours") print(f"Private Key: {hex_key}") print(f"Actual Bit Flips: {actual_flips}") print(f"Keys Checked: {checked:,}") print(f"Speed: {checked/elapsed:,.0f} keys/sec") solution_file = f"puzzle_{PUZZLE_NUM}_solution.txt" with open(solution_file, "w") as f: f.write(hex_key) print(f"\nSolution saved to {solution_file}") return hex_key except: print("\nKey not found in 24 hours - try adjusting flip count ±2") return None finally: pool.terminate()
```python
if __name__ == "__main__":
    flip_count = predict_flips(PUZZLE_NUM)  # First calculate flip_count
    print(f"Predicted flip count for {PUZZLE_NUM}: {flip_count} bits")
    solution = parallel_search()
```
Here is my Python version. I still think this is useless, unless someone convinces me otherwise. You don't need to use random, because you won't find a match that way anyway; you will only find one if you guess the number of flipped bits correctly.

```python
import time
import multiprocessing
import secp256k1 as ice
from math import comb
import numpy as np
from itertools import combinations  # For sequential flip generation
```
```python
# Configuration
TARGET_ADDR = "1HsMJxNiV7TLxmoF6uJNkydxPFDog4NQum"  # Puzzle 20
BASE_KEY = "000000000000000000000000000000000000000000000000000000000005749f"  # Puzzle 19 private key
PUZZLE_NUM = 20
WORKERS = multiprocessing.cpu_count()

# Historical flip counts (unchanged)
FLIP_TABLE = {
    20: 8,  21: 8,  22: 9,  23: 9,  24: 10, 25: 11, 26: 12, 27: 13,
    28: 14, 29: 15, 30: 16, 31: 17, 32: 18, 33: 19, 34: 20, 35: 21,
    36: 22, 37: 23, 38: 24, 39: 25, 40: 26, 41: 27, 42: 28, 43: 29,
    44: 30, 45: 31, 46: 32, 47: 33, 48: 34, 49: 35, 50: 36, 51: 37,
    52: 38, 53: 39, 54: 40, 55: 41, 56: 42, 57: 43, 58: 44, 59: 45,
    60: 46, 61: 47, 62: 48, 63: 49, 64: 50, 65: 51, 66: 52, 67: 53
}

def predict_flips(puzzle_num):
    """Predict flip count using linear regression with bounds (unchanged)"""
    if puzzle_num in FLIP_TABLE:
        return FLIP_TABLE[puzzle_num]

    # Linear regression
    x = np.array(list(FLIP_TABLE.keys()))
    y = np.array(list(FLIP_TABLE.values()))
    coeffs = np.polyfit(x, y, 1)
    predicted = round(coeffs[0] * puzzle_num + coeffs[1])

    # Apply bounds (50-60% of bits)
    bit_length = puzzle_num
    lower = max(8, int(bit_length * 0.5))
    upper = min(int(bit_length * 0.6), bit_length - 5)
    return min(max(predicted, lower), upper)

def mutate_key(base_int, flip_positions):
    """Ultra-fast bit flipping using XOR mask (unchanged)"""
    flip_mask = sum(1 << bit for bit in flip_positions)
    return base_int ^ flip_mask
```
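As a quick sanity check of the XOR-mask trick (my own illustration, not part of the original script; `mutate_key` is repeated here so the snippet runs standalone):

```python
def mutate_key(base_int, flip_positions):
    # Same XOR-mask flip as in the script: build a mask with a 1 at every
    # position to flip, then XOR it into the key in one operation.
    flip_mask = sum(1 << bit for bit in flip_positions)
    return base_int ^ flip_mask

# Flipping bits 0 and 3 of 0b0001 clears bit 0 and sets bit 3 -> 0b1000
assert mutate_key(0b0001, [0, 3]) == 0b1000
# XOR is an involution: applying the same flips twice restores the key
assert mutate_key(mutate_key(0x5749f, [2, 7, 11]), [2, 7, 11]) == 0x5749f
```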
```python
def worker(base_int, bit_length, flip_count, results, stop_event, start_index, end_index):
    """Modified worker for sequential search"""
    checked = 0
    bit_positions = list(range(bit_length))
    start_time = time.time()

    # Generate all combinations in the assigned range
    all_combinations = combinations(bit_positions, flip_count)

    # Skip to the start index (for parallelization)
    for _ in range(start_index):
        next(all_combinations, None)

    # Iterate through combinations sequentially
    for flip_pos in all_combinations:
        if stop_event.is_set():
            return
        if checked >= (end_index - start_index):
            return

        priv_int = mutate_key(base_int, flip_pos)
        checked += 1

        # Generate address
        addr = ice.privatekey_to_address(0, True, priv_int)

        if addr == TARGET_ADDR:
            hex_key = "%064x" % priv_int
            actual_flips = bin(base_int ^ priv_int).count('1')
            results.put((hex_key, checked, actual_flips))
            stop_event.set()
            return

        # Progress update
        if checked % 10000 == 0:
            elapsed = time.time() - start_time
            speed = checked / elapsed if elapsed > 0 else 0
            print(f"Checked {checked:,} | {speed:,.0f} keys/sec", end='\r')
```
```python
def parallel_search():
    bit_length = PUZZLE_NUM
    base_int = int(BASE_KEY, 16)
    flip_count = predict_flips(PUZZLE_NUM)
    total_combs = comb(bit_length, flip_count)

    print(f"\nSearching Puzzle {PUZZLE_NUM} (256-bit)")
    print(f"Base Key: {BASE_KEY}")
    print(f"Target: {TARGET_ADDR}")
    print(f"Predicted Flip Count: {flip_count} bits")
    print(f"Total Possible Combinations: 2^{int(np.log2(total_combs)):,}")
    print(f"Using {WORKERS} workers...\n")

    manager = multiprocessing.Manager()
    results = manager.Queue()
    stop_event = manager.Event()
    start_time = time.time()

    # Split work into chunks for each worker
    chunk_size = total_combs // WORKERS
    ranges = [(i * chunk_size, (i + 1) * chunk_size) for i in range(WORKERS)]

    with multiprocessing.Pool(WORKERS) as pool:
        for start, end in ranges:
            pool.apply_async(worker, (base_int, bit_length, flip_count, results, stop_event, start, end))

        try:
            hex_key, checked, actual_flips = results.get(timeout=86400)  # 24h timeout
            stop_event.set()
            elapsed = time.time() - start_time
            print(f"\nFound in {elapsed/3600:.2f} hours")
            print(f"Private Key: {hex_key}")
            print(f"Actual Bit Flips: {actual_flips}")
            print(f"Keys Checked: {checked:,}")
            print(f"Speed: {checked/elapsed:,.0f} keys/sec")
            solution_file = f"puzzle_{PUZZLE_NUM}_solution.txt"
            with open(solution_file, "w") as f:
                f.write(hex_key)
            print(f"\nSolution saved to {solution_file}")
            return hex_key
        except:
            print("\nKey not found in 24 hours - try adjusting flip count ±2")
            return None
        finally:
            pool.terminate()
```
```python
if __name__ == "__main__":
    flip_count = predict_flips(PUZZLE_NUM)
    print(f"Predicted flip count for {PUZZLE_NUM}: {flip_count} bits")
    solution = parallel_search()
```
Here is the script. I don't see any real difference in practice. To solve puzzle 68 you would need impossible speed. Even the puzzle 21 key is not found!
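To put "impossible speed" into numbers, here is a back-of-the-envelope sketch (my own, not part of the script above). For puzzle 68 the script's predictor would clamp the flip count to 40 bits (the 50-60% bound), and the 10 million keys/sec figure is an assumed, optimistic CPU rate:

```python
from math import comb

def exhaust_time_years(bit_length, flip_count, keys_per_sec):
    """Years needed to enumerate every flip_count-bit mutation of a bit_length-bit key."""
    total = comb(bit_length, flip_count)
    return total / keys_per_sec / (3600 * 24 * 365)

# Puzzle 68, 40 predicted flips, at a hypothetical 10 M keys/sec:
n, k, rate = 68, 40, 10_000_000
print(f"C({n},{k}) = {comb(n, k):,} combinations")
print(f"~{exhaust_time_years(n, k, rate):,.0f} years at {rate:,} keys/sec")
```

For comparison, the same function shows puzzle 20 with 8 flips (C(20,8) = 125,970 combinations) is exhausted in a fraction of a second, which is why the small puzzles fall instantly while 68 is out of reach.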
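On the "only if you guess the number of bits" point: the Hamming distance between two independent random n-bit values follows a Binomial(n, 1/2) distribution, so any single guessed flip count covers only one thin slice of the keyspace. A quick simulation (my own illustration, using n = 20) makes this concrete:

```python
import random
from collections import Counter

def hamming(a, b):
    """Number of differing bits between two integers."""
    return bin(a ^ b).count('1')

n, trials = 20, 20_000
dist = Counter(hamming(random.getrandbits(n), random.getrandbits(n))
               for _ in range(trials))

# Fraction of random pairs whose distance lands within +/-2 of n/2;
# each exact flip-count guess hits only one of these Binomial buckets.
near_half = sum(v for d, v in dist.items() if abs(d - n // 2) <= 2) / trials
print(f"distance in [{n//2 - 2}, {n//2 + 2}] for {near_half:.0%} of pairs")
```

The distances cluster tightly around n/2, which is consistent with the flip table trending toward roughly half to 60% of the bits, and also why being off by even one or two flips means the search can never hit the key.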