|
nomachine
|
 |
April 21, 2025, 05:28:40 AM |
|
Could someone explain the "logic" behind WIF generation? I know it's random. But to check a solution you would need to do: WIF -> PVK -> ECC -> PUB -> SHA -> RIPE, no? Why add another step when you can start from the PVK and go forward?

Making a WIF:

import hashlib
import base58

# Private key in hexadecimal format
private_key_hex = "00000000000000000000000000000000000000000000000354d62e5f7a0d2eb2"

# Step 1: Convert private key from hex to bytes
private_key_bytes = bytes.fromhex(private_key_hex)

# Step 2: Add the WIF prefix byte (0x80 for mainnet)
extended_key = b'\x80' + private_key_bytes

# Step 3: Append the compression flag (0x01)
extended_key += b'\x01'

# Step 4: Calculate the checksum (double SHA-256)
checksum = hashlib.sha256(hashlib.sha256(extended_key).digest()).digest()[:4]

# Step 5: Append the checksum to the extended key
wif_bytes = extended_key + checksum

# Step 6: Encode in Base58
wif_compressed = base58.b58encode(wif_bytes).decode()

print("Compressed WIF:", wif_compressed)

Reverse process:

import base58
import hashlib

# Step 1: Decode WIF from Base58
wif = "KwDiBf89QgGbjEhKnhXJuH7LrciVrZi3qZxidKgPHongbFoiMWNX"
decoded_wif = base58.b58decode(wif)

# Step 2: Extract private key (remove prefix 0x80 and suffix 0x01 + checksum)
private_key_bytes = decoded_wif[1:-5]
private_key_hex = private_key_bytes.hex()

# Step 3 (Optional): Verify checksum
data_for_checksum = decoded_wif[:-4]
computed_checksum = hashlib.sha256(hashlib.sha256(data_for_checksum).digest()).digest()[:4]
is_valid = computed_checksum == decoded_wif[-4:]

# Results
print("Private Key (Hex):", private_key_hex)
print("Checksum Valid:", is_valid)

You need the fastest hashlib and Base58 implementation in the world, written in C/CUDA C, capable of achieving 35M WIFs/second or 100x higher. Optimization techniques:
- AVX2/SHA-NI acceleration (if available) or GPU-accelerated SHA-256 for maximum throughput.
- Precomputed lookup tables (LUTs) for Base58 encoding/decoding to minimize runtime calculations.
- Warp-level shuffles to reduce global memory access and improve GPU efficiency.
- Fused SHA-256 + Base58 kernel to eliminate intermediate memory transfers.

I've worked on this and developed a solution.
|
BTC: bc1qdwnxr7s08xwelpjy3cc52rrxg63xsmagv50fa8
|
|
|
Akito S. M. Hosana
Jr. Member
Offline
Activity: 420
Merit: 8
|
 |
April 21, 2025, 05:34:22 AM Last edit: April 21, 2025, 05:54:02 AM by Akito S. M. Hosana |
|
You need the fastest hashlib and Base58 implementation in the world ... Warp-level ... I've worked on this and developed a solution
Holy moly!  Wait, we don't need a public key here? A BTC address?
|
|
|
|
|
WanderingPhilospher
Sr. Member
  
Offline
Activity: 1498
Merit: 286
Shooters Shoot...
|
 |
April 21, 2025, 05:36:16 AM |
|
Your script has a problem. It's running all simulations on the same numbers (1 -> 100000) - you can see that at line 17.
Where at?
|
|
|
|
|
Bram24732
Member

Offline
Activity: 322
Merit: 28
|
 |
April 21, 2025, 05:47:56 AM |
|
Where at?
Sorry, I meant the mcd script.
Ok, now in Python for better understanding.
Thanks, there is still one flaw: by counting only "wins" you miss one very important piece of data: how fast was one method compared to the other on each simulation? A win 5x faster does not have the same value as a win 1.2x faster. This can be changed by summing the number of checks made over all the simulations, like this:

results = {"sequential": {"wins": 0, "checks": 0}, "precise": {"wins": 0, "checks": 0}, "ties": 0}
...
results["sequential"]["checks"] += seq_result["checks"]
results["precise"]["checks"] += pre_result["checks"]
...
Sequential: {results['sequential']['checks']}
Prefix: {results['precise']['checks']}
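A toy illustration of why the totals matter (the per-simulation check counts below are made up for the example): win counting and check summing can point in opposite directions.

```python
# Toy illustration: win counts vs. total checks can tell different stories.
# Made-up per-simulation check counts for two methods.
sims = [
    {"sequential": 120, "precise": 100},  # precise wins, 1.2x faster
    {"sequential": 110, "precise": 100},  # precise wins, 1.1x faster
    {"sequential": 100, "precise": 500},  # sequential wins, 5x faster
]

results = {"sequential": {"wins": 0, "checks": 0},
           "precise": {"wins": 0, "checks": 0},
           "ties": 0}

for sim in sims:
    # Fewer checks = that method found the target first
    if sim["sequential"] < sim["precise"]:
        results["sequential"]["wins"] += 1
    elif sim["precise"] < sim["sequential"]:
        results["precise"]["wins"] += 1
    else:
        results["ties"] += 1
    # Sum checks so the *size* of each win/loss is counted too
    results["sequential"]["checks"] += sim["sequential"]
    results["precise"]["checks"] += sim["precise"]

# Precise "wins" 2 of 3 simulations, yet sequential does fewer total checks
print(results["precise"]["wins"], results["sequential"]["wins"])   # 2 1
print(results["sequential"]["checks"], results["precise"]["checks"])  # 330 700
```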
|
I solved 67 and 68 using custom software distributing the load across ~25k GPUs. 4090 stocks speeds : ~8.1Bkeys/sec. Don’t challenge me technically if you know shit about fuck, I’ll ignore you. Same goes if all you can do is LLM reply.
|
|
|
WanderingPhilospher
Sr. Member
  
Offline
Activity: 1498
Merit: 286
Shooters Shoot...
|
 |
April 21, 2025, 05:57:54 AM |
|
Where at?
Sorry I meant mcd script Ok, now in Python for better understanding.
Thanks, there is still one flaw: by counting only "wins" you miss one very important piece of data: how fast was one method compared to the other on each simulation? A win 5x faster does not have the same value as a win 1.2x faster. This can be changed by summing the number of checks made over all the simulations, like this:

results = {"sequential": {"wins": 0, "checks": 0}, "precise": {"wins": 0, "checks": 0}, "ties": 0}
...
results["sequential"]["checks"] += seq_result["checks"]
results["precise"]["checks"] += pre_result["checks"]
...
Sequential: {results['sequential']['checks']}
Prefix: {results['precise']['checks']}
=== Configuration ===
Total numbers: 100,000
Block size: 5,000
Prefix: 3 characters (16^3 combinations)
Simulations: 1000

=== FINAL RESULTS ===
Wins:
Sequential: 421 (Average Win Margin: 42.18%)
Prefix: 536 (Average Win Margin: 38.79%)
Ties: 43

=== Configuration ===
Total numbers: 1,048,576
Block size: 4,096
Prefix: 3 characters (16^3 combinations)
Simulations: 1000

=== FINAL RESULTS ===
Wins:
Sequential: 358 (Average Win Margin: 41.54%)
Prefix: 639 (Average Win Margin: 36.54%)
Ties: 3
|
|
|
|
|
|
nomachine
|
 |
April 21, 2025, 05:58:39 AM |
|
Holy moly!  Wait, we don't need a public key here? A BTC address?
No. In my case, I only verify whether the WIF (Wallet Import Format) is correct. The output displays only checksum-validated WIFs. The second script computes the corresponding public key and address.
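That checksum-only filter (no EC math at all) can be sketched in pure Python. The helper name is mine and this is a slow reference version, not the GPU kernel:

```python
import hashlib

B58 = "123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz"

def wif_checksum_ok(wif: str) -> bool:
    # Base58-decode into a big integer (stdlib only, no base58 package)
    n = 0
    for ch in wif:
        if ch not in B58:
            return False
        n = n * 58 + B58.index(ch)
    raw = n.to_bytes((n.bit_length() + 7) // 8, "big")
    # Leading '1' characters encode leading zero bytes
    raw = b"\x00" * (len(wif) - len(wif.lstrip("1"))) + raw
    # Valid iff double-SHA256 of the payload starts with the trailing 4 bytes
    payload, checksum = raw[:-4], raw[-4:]
    return hashlib.sha256(hashlib.sha256(payload).digest()).digest()[:4] == checksum

print(wif_checksum_ok("KwDiBf89QgGbjEhKnhXJuH7LrciVrZi3qZxidKgPHongbFoiMWNX"))  # True
```

Only candidates that pass this 32-bit check need to go on to the second script for key/address derivation.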
|
BTC: bc1qdwnxr7s08xwelpjy3cc52rrxg63xsmagv50fa8
|
|
|
Bram24732
Member

Offline
Activity: 322
Merit: 28
|
 |
April 21, 2025, 06:06:20 AM |
|
-- Sim results
If you sum the number of checks over 10k simulations you get this:

=== FINAL RESULTS ===
Sequential: 495816995
Prefix: 496059807
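Those totals are what a uniformly random target predicts: with N candidates, any search that never re-checks a key needs (N + 1) / 2 checks on average, regardless of visit order. A quick sanity check against the numbers above:

```python
# Expected checks for a uniformly random target among N candidates:
# the target is equally likely at each position, so the mean number of
# checks is (1 + 2 + ... + N) / N = (N + 1) / 2.
N = 100_000
SIMULATIONS = 10_000

expected_per_sim = (N + 1) / 2
expected_total = expected_per_sim * SIMULATIONS
print(f"Expected total checks: {expected_total:,.0f}")  # 500,005,000

# Both reported totals sit within ~1% of that expectation
for name, observed in [("Sequential", 495_816_995), ("Prefix", 496_059_807)]:
    print(f"{name}: {observed:,} ({observed / expected_total:.4f}x expected)")
```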
|
I solved 67 and 68 using custom software distributing the load across ~25k GPUs. 4090 stocks speeds : ~8.1Bkeys/sec. Don’t challenge me technically if you know shit about fuck, I’ll ignore you. Same goes if all you can do is LLM reply.
|
|
|
WanderingPhilospher
Sr. Member
  
Offline
Activity: 1498
Merit: 286
Shooters Shoot...
|
 |
April 21, 2025, 06:06:25 AM |
|
one more data point added:

=== Configuration ===
Total numbers: 1,048,576
Block size: 4,096
Prefix: 3 characters (16^3 combinations)
Simulations: 1000

=== FINAL RESULTS ===
Wins:
Sequential: 375 (Average Win Margin: 43.34%)
Prefix: 624 (Average Win Margin: 36.66%)
Ties: 1

Total Checks:
Sequential: 520,179,738
Prefix: 527,377,365

=== Configuration ===
Total numbers: 1,048,576
Block size: 4,096
Prefix: 4 characters (16^4 combinations)
Simulations: 1000

=== FINAL RESULTS ===
Wins:
Sequential: 25 (Average Win Margin: 37.39%)
Prefix: 907 (Average Win Margin: 3.35%)
Ties: 68

Total Checks:
Sequential: 517,804,613
Prefix: 511,880,745
I dunno, maybe my code is messed up lol.

import hashlib
import random
import multiprocessing as MP

TOTAL_SIZE = 2**20
RANGE_SIZE = 2**12
PREFIX_LENGTH = 4
SIMULATIONS = 1000

def generate_h160(data):
    return hashlib.new('ripemd160', str(data).encode()).hexdigest()

def shuffled_range(n):
    arr = list(range(n + 1))
    random.shuffle(arr)
    return arr

def sequential_search(size, block, target_hash, order):
    checks = 0
    for idx in order:
        start = idx * block
        end = start + block
        for num in range(start, end):
            checks += 1
            if generate_h160(num) == target_hash:
                return {'checks': checks, 'found': True}
    return {'checks': checks, 'found': False}

def precise_search(size, block, prefix_len, target_hash, order):
    prefix_hash = target_hash[:prefix_len]
    checks = 0
    ranges = []
    for idx in order:
        start = idx * block
        end = start + block
        found_prefix = False
        for num in range(start, end):
            checks += 1
            h = generate_h160(num)
            if h == target_hash:
                return {'checks': checks, 'found': True}
            if not found_prefix and h.startswith(prefix_hash):
                found_prefix = True
                ranges.append({'start': num + 1, 'end': end})
                break
    # Second pass: revisit the skipped block tails in reverse order
    for r in ranges:
        for num in range(r['end'] - 1, r['start'] - 1, -1):
            checks += 1
            if generate_h160(num) == target_hash:
                return {'checks': checks, 'found': True}
    return {'checks': checks, 'found': False}

def single_simulation(_):
    blocks = TOTAL_SIZE // RANGE_SIZE
    order = shuffled_range(blocks - 1)
    target_num = random.randint(0, TOTAL_SIZE - 1)
    target_hash = generate_h160(target_num)

    seq_result = sequential_search(TOTAL_SIZE, RANGE_SIZE, target_hash, order)
    pre_result = precise_search(TOTAL_SIZE, RANGE_SIZE, PREFIX_LENGTH, target_hash, order)

    return seq_result['checks'], pre_result['checks']

def compare_methods_parallel():
    print(f"""
=== Configuration ===
Total numbers: {TOTAL_SIZE:,}
Block size: {RANGE_SIZE:,}
Prefix: {PREFIX_LENGTH} characters (16^{PREFIX_LENGTH} combinations)
Simulations: {SIMULATIONS}
""")

    cpu_count = max(MP.cpu_count() - 2, 1)
    print(f"Using {cpu_count} worker processes...\n")

    with MP.Pool(cpu_count) as pool:
        results = pool.map(single_simulation, range(SIMULATIONS))

    sequential_wins = 0
    precise_wins = 0
    ties = 0
    sequential_win_percentages = []
    precise_win_percentages = []
    total_seq_checks = 0
    total_pre_checks = 0

    for i, (seq_checks, pre_checks) in enumerate(results):
        total_seq_checks += seq_checks
        total_pre_checks += pre_checks
        if seq_checks < pre_checks:
            sequential_wins += 1
            win_percent = ((pre_checks - seq_checks) / pre_checks) * 100
            sequential_win_percentages.append(win_percent)
        elif seq_checks > pre_checks:
            precise_wins += 1
            win_percent = ((seq_checks - pre_checks) / seq_checks) * 100
            precise_win_percentages.append(win_percent)
        else:
            ties += 1
        print(f"Simulation {i + 1}: Sequential = {seq_checks} | Prefix = {pre_checks}")

    avg_seq_win_pct = sum(sequential_win_percentages) / len(sequential_win_percentages) if sequential_win_percentages else 0
    avg_pre_win_pct = sum(precise_win_percentages) / len(precise_win_percentages) if precise_win_percentages else 0

    print(f"""
=== FINAL RESULTS ===
Wins:
Sequential: {sequential_wins} (Average Win Margin: {avg_seq_win_pct:.2f}%)
Prefix: {precise_wins} (Average Win Margin: {avg_pre_win_pct:.2f}%)
Ties: {ties}

Total Checks:
Sequential: {total_seq_checks:,}
Prefix: {total_pre_checks:,}
""")

if __name__ == "__main__":
    MP.freeze_support()  # Important for Windows
    compare_methods_parallel()
|
|
|
|
|
Akito S. M. Hosana
Jr. Member
Offline
Activity: 420
Merit: 8
|
 |
April 21, 2025, 06:13:07 AM |
|
No. In my case, I only verify whether the WIF (Wallet Import Format) is correct. The output displays only checksum-validated WIFs. The second script computes the corresponding public key and address.
How many verified WIFs do you have in the output generated by the GPU? 
|
|
|
|
|
|
nomachine
|
 |
April 21, 2025, 06:19:27 AM |
|
No. In my case, I only verify whether the WIF (Wallet Import Format) is correct. The output displays only checksum-validated WIFs. The second script computes the corresponding public key and address.
How many verified WIFs do you have in the output generated by the GPU?
It depends on how many characters are missing in the WIF and their exact positions. For example, if 10 characters are missing at the beginning, the recovery speed would be approximately 2,000 valid WIFs per minute.
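The recovery idea can be sketched in plain Python: enumerate candidate characters for the missing positions and keep only the checksum-valid strings. This is a hypothetical, unoptimized illustration (two missing characters rather than ten; the search space grows as 58^missing, which is why the real thing needs GPUs). Function names are mine:

```python
import hashlib
from itertools import product

B58 = "123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz"

def checksum_ok(wif: str) -> bool:
    # Base58-decode and verify the 4-byte double-SHA256 checksum
    n = 0
    for ch in wif:
        n = n * 58 + B58.index(ch)
    raw = n.to_bytes((n.bit_length() + 7) // 8, "big")
    return hashlib.sha256(hashlib.sha256(raw[:-4]).digest()).digest()[:4] == raw[-4:]

def recover(template: str):
    # '?' marks an unknown character; try all 58^k fills
    holes = [i for i, c in enumerate(template) if c == "?"]
    chars = list(template)
    for combo in product(B58, repeat=len(holes)):
        for i, c in zip(holes, combo):
            chars[i] = c
        candidate = "".join(chars)
        if checksum_ok(candidate):
            yield candidate

# The known WIF for private key 0x...01 with its last two characters blanked;
# the original should be among the (very few) checksum-valid candidates.
template = "KwDiBf89QgGbjEhKnhXJuH7LrciVrZi3qZxidKgPHongbFoiMW??"
print(list(recover(template)))
```

Each surviving candidate would then go to the second script for key and address derivation.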
|
BTC: bc1qdwnxr7s08xwelpjy3cc52rrxg63xsmagv50fa8
|
|
|
Bram24732
Member

Offline
Activity: 322
Merit: 28
|
 |
April 21, 2025, 06:22:41 AM |
|
-- quoted sim script and results
That looks ok, you have the same number of checks roughly with both methods, that's what I would expect
|
I solved 67 and 68 using custom software distributing the load across ~25k GPUs. 4090 stocks speeds : ~8.1Bkeys/sec. Don’t challenge me technically if you know shit about fuck, I’ll ignore you. Same goes if all you can do is LLM reply.
|
|
|
Akito S. M. Hosana
Jr. Member
Offline
Activity: 420
Merit: 8
|
 |
April 21, 2025, 06:31:41 AM |
|
No. In my case, I only verify whether the WIF (Wallet Import Format) is correct. The output displays only checksum-validated WIFs. The second script computes the corresponding public key and address.
How many verified WIFs do you have in the output generated by the GPU?
It depends on how many characters are missing in the WIF and their exact positions. For example, if 10 characters are missing at the beginning, the recovery speed would be approximately 2,000 valid WIFs per minute.
So you need 30 - 50 GPUs to get 1,000 valid WIFs/s?
|
|
|
|
|
Bram24732
Member

Offline
Activity: 322
Merit: 28
|
 |
April 21, 2025, 06:33:04 AM |
|
Thanks, there is still one flaw : By counting only "wins" you miss one very important piece of data : how fast was a method compared to the other on each simulation ? A win 5x faster does not have the same value as a win 1.2x faster. This can be changed by summing the number of checks made over all the simulations, like this : results = {"sequential": {"wins": 0, "checks": 0}, "precise": {"wins": 0, "checks": 0}, "ties": 0} ... results["sequential"]["checks"] += seq_result["checks"] results["precise"]["checks"] += pre_result["checks"] ..... Sequential: {results['sequential']['checks']} Prefix: {results['precise']['checks']}
Just as the script commonly handles it, it's fine, because the important thing here was to demonstrate that prefixes are more efficient in the majority of attempts. It does not include computational load, because that's unfair, as we omit the entire Bitcoin process; besides, omitting 1,000 keys out of 5,000 is not the same as using a 16^12 setup, to give an example... but the basic point has already been demonstrated, which was the probabilistic success rate.
I think counting who finishes first without taking into account how much faster it was does not reflect the statistical reality. But it's ok to disagree
|
I solved 67 and 68 using custom software distributing the load across ~25k GPUs. 4090 stocks speeds : ~8.1Bkeys/sec. Don’t challenge me technically if you know shit about fuck, I’ll ignore you. Same goes if all you can do is LLM reply.
|
|
|
|
nomachine
|
 |
April 21, 2025, 06:41:06 AM |
|
So you need 30 - 50 GPUs to get 1,000 valid WIFs/s?
Yes, but you'll need much more if you want to solve it in a reasonable amount of time, though still less than what's required for Puzzle 69.
|
BTC: bc1qdwnxr7s08xwelpjy3cc52rrxg63xsmagv50fa8
|
|
|
Akito S. M. Hosana
Jr. Member
Offline
Activity: 420
Merit: 8
|
 |
April 21, 2025, 06:57:57 AM |
|
So you need 30 - 50 GPUs to get 1,000 valid WIFs/s?
Yes, but you'll need much more if you want to solve it in a reasonable amount of time, though still less than what's required for Puzzle 69.
What will you do if you solve this?
|
|
|
|
|
|
nomachine
|
 |
April 21, 2025, 07:02:01 AM |
|
What will you do if you solve this?
Hahaha, take it easy... It's not that simple... But... Every member in this topic will get 0.2 BTC, if they have a BTC address in their signature. Satisfied?
|
BTC: bc1qdwnxr7s08xwelpjy3cc52rrxg63xsmagv50fa8
|
|
|
fantom06
Jr. Member
Offline
Activity: 49
Merit: 1
|
 |
April 21, 2025, 07:02:34 AM |
|
=== FINAL RESULTS ===
Wins:
Sequential: 2105
Prefix: 2688
Ties: 207
|
|
|
|
|
Bram24732
Member

Offline
Activity: 322
Merit: 28
|
 |
April 21, 2025, 07:09:55 AM |
|
-- Sim results
If you sum the number of checks over 10k simulations you get this:
=== FINAL RESULTS ===
Sequential: 495816995
Prefix: 496059807

This is a bias, since the prefix method wins most of the time, meaning it is the best choice. When the prefix method loses, it obviously generates more keys, because it is assumed that the target was omitted. And the goal is to find your best option to win, not who loses worse.

I think being first without taking into account how much faster you are does not reflect the statistical reality. But it's ok to disagree

The times when the prefix method wins, it traverses fewer keys than the sequential method; therefore, by common sense, it saves computational power. Statistically, prefixes are the best option most of the time. Similarly, the overall statistics are more or less equal when considering total traversals, both won and lost. However, prefixes still yield the highest success rate. There's no need to overcomplicate it.

It's not complicated. Nor is it a bias. Those are actual numbers out of your script. On average, both methods require the same number of steps to reach a solution.
|
I solved 67 and 68 using custom software distributing the load across ~25k GPUs. 4090 stocks speeds : ~8.1Bkeys/sec. Don’t challenge me technically if you know shit about fuck, I’ll ignore you. Same goes if all you can do is LLM reply.
|
|
|
Akito S. M. Hosana
Jr. Member
Offline
Activity: 420
Merit: 8
|
 |
April 21, 2025, 07:19:06 AM |
|
Hahaha, take it easy... It’s not that simple... But... Every member in this topic will get 0.2 BTC—if they have a BTC address in their signature. Satisfied?  I'm not a full member yet. 
|
|
|
|
|
fantom06
Jr. Member
Offline
Activity: 49
Merit: 1
|
 |
April 21, 2025, 07:28:15 AM |
|
-- Sim results
If you sum the number of checks over 10k simulations you get this:
=== FINAL RESULTS ===
Sequential: 495816995
Prefix: 496059807

This is a bias since the prefix method wins most of the time, meaning it is the best choice. When the prefix method loses, it obviously generates more keys because it is assumed that the target was omitted. And the goal is to find your best option to win and not who loses worse.

I think being first without taking into account how much faster you are does not reflect the statistical reality. But it's ok to disagree

The times when the prefix method wins, it traverses fewer keys than the sequential method; therefore, by common sense, it saves computational power. Statistically, prefixes are the best option most of the time. Similarly, the overall statistics are more or less equal when considering total traversals, both won and lost. However, prefixes still yield the highest success rate. There's no need to overcomplicate it.

Wins:
Sequential: 39 (Average Win Margin: 55.95%)
Prefix: 899 (Average Win Margin: 3.37%)
Ties: 62

Total Checks:
Sequential: 494,958,197
Prefix: 502,727,060
|
|
|
|
|
|