Bitcoin Forum
Author Topic: Bitcoin puzzle transaction ~32 BTC prize to who solves it  (Read 380857 times)
nomachine
Full Member
***
Offline Offline

Activity: 812
Merit: 134



View Profile
April 21, 2025, 05:28:40 AM
 #9281

Could someone explain the "logic" behind WIF generation to me? I know it's random. But to check for the solution you would need to do:

WIF -> PVK -> ECC -> PUB -> SHA -> RIPE. No?

Why add another step if you can start from the PVK and go forward?  Huh




Making WIF

Code:
import hashlib
import base58

# Private key in hexadecimal format
private_key_hex = "00000000000000000000000000000000000000000000000354d62e5f7a0d2eb2"

# Step 1: Convert private key from hex to bytes
private_key_bytes = bytes.fromhex(private_key_hex)

# Step 2: Add the WIF prefix byte (0x80 for mainnet)
extended_key = b'\x80' + private_key_bytes

# Step 3: Append the compression flag (0x01)
extended_key += b'\x01'

# Step 4: Calculate the checksum (double SHA-256)
checksum = hashlib.sha256(hashlib.sha256(extended_key).digest()).digest()[:4]

# Step 5: Append the checksum to the extended key
wif_bytes = extended_key + checksum

# Step 6: Encode in Base58
wif_compressed = base58.b58encode(wif_bytes).decode()

print("Compressed WIF:", wif_compressed)


Reverse process

Code:
import base58
import hashlib

# Step 1: Decode WIF from Base58
wif = "KwDiBf89QgGbjEhKnhXJuH7LrciVrZi3qZxidKgPHongbFoiMWNX"
decoded_wif = base58.b58decode(wif)

# Step 2: Extract private key (remove prefix 0x80 and suffix 0x01 + checksum)
private_key_bytes = decoded_wif[1:-5]
private_key_hex = private_key_bytes.hex()

# Step 3 (Optional): Verify checksum
data_for_checksum = decoded_wif[:-4]
computed_checksum = hashlib.sha256(hashlib.sha256(data_for_checksum).digest()).digest()[:4]
is_valid = computed_checksum == decoded_wif[-4:]

# Results
print("Private Key (Hex):", private_key_hex)
print("Checksum Valid:", is_valid)
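For anyone wanting to sanity-check the two snippets above without installing the `base58` package, here is a self-contained round trip (a minimal Base58 codec is inlined, stdlib only) showing that encode followed by decode recovers the original hex key:

```python
import hashlib

B58 = "123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz"

def b58encode(data: bytes) -> str:
    # Each leading zero byte maps to a leading '1' in Base58
    pad = len(data) - len(data.lstrip(b"\x00"))
    n = int.from_bytes(data, "big")
    out = ""
    while n:
        n, r = divmod(n, 58)
        out = B58[r] + out
    return "1" * pad + out

def b58decode(s: str) -> bytes:
    # Each leading '1' maps back to a leading zero byte
    pad = len(s) - len(s.lstrip("1"))
    n = 0
    for c in s:
        n = n * 58 + B58.index(c)
    return b"\x00" * pad + n.to_bytes((n.bit_length() + 7) // 8, "big")

def hex_to_wif(private_key_hex: str) -> str:
    # 0x80 mainnet prefix + key + 0x01 compression flag + 4-byte checksum
    extended = b"\x80" + bytes.fromhex(private_key_hex) + b"\x01"
    checksum = hashlib.sha256(hashlib.sha256(extended).digest()).digest()[:4]
    return b58encode(extended + checksum)

def wif_to_hex(wif: str) -> str:
    raw = b58decode(wif)
    # Verify the checksum before trusting the payload
    assert hashlib.sha256(hashlib.sha256(raw[:-4]).digest()).digest()[:4] == raw[-4:]
    return raw[1:-5].hex()  # strip 0x80 prefix, 0x01 flag and checksum
```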


You need the fastest hashlib and Base58 implementation in the world, written in C/CUDA C, capable of 35M WIFs/second or even 100x that.

Optimization Techniques:
AVX2/SHA-NI acceleration (if available) or GPU-accelerated SHA-256 for maximum throughput.

Precomputed lookup tables (LUTs) for Base58 encoding/decoding to minimize runtime calculations.

Warp-level shuffles to reduce global memory access and improve GPU efficiency.

Fused SHA-256 + Base58 kernel to eliminate intermediate memory transfers.
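The LUT bullet can be illustrated even at Python level (the real gains obviously need the C/CUDA version): decoding replaces repeated alphabet scans with a precomputed 256-entry table indexed by byte value. A minimal sketch:

```python
B58 = "123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz"

# Precomputed lookup table: byte value -> Base58 digit value, -1 for invalid chars.
# Built once, so the hot loop never calls str.index().
LUT = [-1] * 256
for i, c in enumerate(B58):
    LUT[ord(c)] = i

def b58decode_lut(s: str) -> bytes:
    n = 0
    for c in s:
        d = LUT[ord(c)]
        if d < 0:
            raise ValueError(f"invalid Base58 character: {c!r}")
        n = n * 58 + d
    # Leading '1's encode leading zero bytes
    pad = len(s) - len(s.lstrip("1"))
    return b"\x00" * pad + n.to_bytes((n.bit_length() + 7) // 8, "big")
```

In C/CUDA the same table lives in constant memory, so every thread resolves a character in one load instead of a 58-entry scan.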

I’ve worked on this and developed a solution Grin

BTC: bc1qdwnxr7s08xwelpjy3cc52rrxg63xsmagv50fa8
Akito S. M. Hosana
Jr. Member
*
Offline Offline

Activity: 420
Merit: 8


View Profile
April 21, 2025, 05:34:22 AM
Last edit: April 21, 2025, 05:54:02 AM by Akito S. M. Hosana
 #9282

You need the fastest hashlib and Base58 implementation in the world,
Warp-level
I’ve worked on this and developed a solution Grin

holy moly Tongue

Wait, we don't need a public key here? BTC address? Roll Eyes

WanderingPhilospher
Sr. Member
****
Offline Offline

Activity: 1498
Merit: 286

Shooters Shoot...


View Profile
April 21, 2025, 05:36:16 AM
 #9283

Your script has a problem. It's running all simulations on the same numbers (1 -> 100000). You can see that at line 17.
Where at?
Bram24732
Member
**
Offline Offline

Activity: 322
Merit: 28


View Profile
April 21, 2025, 05:47:56 AM
 #9284

Where at?
Sorry, I meant the mcd script.

Ok, now in Python for better understanding.

Thanks, there is still one flaw: by counting only "wins" you miss one very important piece of data: how fast was one method compared to the other on each simulation? A win 5x faster does not have the same value as a win 1.2x faster.
This can be changed by summing the number of checks made over all the simulations, like this:

Code:
results = {"sequential": {"wins": 0, "checks": 0}, "precise": {"wins": 0, "checks": 0}, "ties": 0}
...
results["sequential"]["checks"] += seq_result["checks"]
results["precise"]["checks"] += pre_result["checks"]
.....
Sequential: {results['sequential']['checks']}
Prefix: {results['precise']['checks']}
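Filled out as a self-contained sketch of the suggested tally (the simulation results below are made-up stand-ins for the elided `seq_result` / `pre_result` values):

```python
# Made-up (seq_result, pre_result) pairs standing in for real simulation output
simulations = [
    ({"checks": 120}, {"checks": 95}),
    ({"checks": 300}, {"checks": 410}),
    ({"checks": 210}, {"checks": 210}),
]

results = {"sequential": {"wins": 0, "checks": 0},
           "precise": {"wins": 0, "checks": 0},
           "ties": 0}

for seq_result, pre_result in simulations:
    # Summing raw checks makes the *margin* of each win count, not just the win
    results["sequential"]["checks"] += seq_result["checks"]
    results["precise"]["checks"] += pre_result["checks"]
    if seq_result["checks"] < pre_result["checks"]:
        results["sequential"]["wins"] += 1
    elif pre_result["checks"] < seq_result["checks"]:
        results["precise"]["wins"] += 1
    else:
        results["ties"] += 1

print(f"Sequential: {results['sequential']['checks']}")  # 630
print(f"Prefix: {results['precise']['checks']}")         # 715
```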

I solved 67 and 68 using custom software distributing the load across ~25k GPUs. 4090 stocks speeds : ~8.1Bkeys/sec. Don’t challenge me technically if you know shit about fuck, I’ll ignore you. Same goes if all you can do is LLM reply.
WanderingPhilospher
Sr. Member
****
Offline Offline

Activity: 1498
Merit: 286

Shooters Shoot...


View Profile
April 21, 2025, 05:57:54 AM
 #9285

Where at?
Sorry I meant mcd script

Ok, now in Python for better understanding.

Thanks, there is still one flaw: by counting only "wins" you miss one very important piece of data: how fast was one method compared to the other on each simulation? A win 5x faster does not have the same value as a win 1.2x faster.
This can be changed by summing the number of checks made over all the simulations, like this:

Code:
results = {"sequential": {"wins": 0, "checks": 0}, "precise": {"wins": 0, "checks": 0}, "ties": 0}
...
results["sequential"]["checks"] += seq_result["checks"]
results["precise"]["checks"] += pre_result["checks"]
.....
Sequential: {results['sequential']['checks']}
Prefix: {results['precise']['checks']}

Code:
=== Configuration ===
Total numbers: 100,000
Block size: 5,000
Prefix: 3 characters (16^3 combinations)
Simulations: 1000

Wins:
Sequential: 421 (Average Win Margin: 42.18%)
Prefix: 536 (Average Win Margin: 38.79%)
Ties: 43

Code:
=== Configuration ===
Total numbers: 1,048,576
Block size: 4,096
Prefix: 3 characters (16^3 combinations)
Simulations: 1000

=== FINAL RESULTS ===
Wins:
Sequential: 358 (Average Win Margin: 41.54%)
Prefix: 639 (Average Win Margin: 36.54%)
Ties: 3
nomachine
Full Member
***
Offline Offline

Activity: 812
Merit: 134



View Profile
April 21, 2025, 05:58:39 AM
 #9286

You need the fastest hashlib and Base58 implementation in the world,
Warp-level
I’ve worked on this and developed a solution Grin

holy moly Tongue

Wait, we don't need a public key here? BTC address? Roll Eyes



No. In my case, I only verify whether the WIF (Wallet Import Format) is correct. The output displays only checksum-validated WIFs. The second script computes the corresponding public key and address.
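For context on why checksum-only filtering prunes so aggressively: the 4-byte checksum means a random candidate passes with probability 2^-32, so nearly every wrong guess is rejected before any EC math is needed. A minimal validity check (stdlib only, with the Base58 decode inlined for the sketch):

```python
import hashlib

B58 = "123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz"

def wif_checksum_ok(wif: str) -> bool:
    # Base58-decode with a plain int accumulator (no external package)
    n = 0
    for c in wif:
        n = n * 58 + B58.index(c)
    raw = n.to_bytes((n.bit_length() + 7) // 8, "big")
    payload, checksum = raw[:-4], raw[-4:]
    # A candidate survives only if its double-SHA256 checksum matches;
    # a random string does so with probability 2**-32
    return hashlib.sha256(hashlib.sha256(payload).digest()).digest()[:4] == checksum
```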

BTC: bc1qdwnxr7s08xwelpjy3cc52rrxg63xsmagv50fa8
Bram24732
Member
**
Offline Offline

Activity: 322
Merit: 28


View Profile
April 21, 2025, 06:06:20 AM
 #9287

-- Sim results

If you sum the number of checks over 10k simulations you get this :

Code:
=== FINAL RESULTS ===
Sequential: 495816995
Prefix: 496059807

I solved 67 and 68 using custom software distributing the load across ~25k GPUs. 4090 stocks speeds : ~8.1Bkeys/sec. Don’t challenge me technically if you know shit about fuck, I’ll ignore you. Same goes if all you can do is LLM reply.
WanderingPhilospher
Sr. Member
****
Offline Offline

Activity: 1498
Merit: 286

Shooters Shoot...


View Profile
April 21, 2025, 06:06:25 AM
 #9288

one more data point added:

Code:
=== Configuration ===
Total numbers: 1,048,576
Block size: 4,096
Prefix: 3 characters (16^3 combinations)
Simulations: 1000

=== FINAL RESULTS ===
Wins:
Sequential: 375 (Average Win Margin: 43.34%)
Prefix: 624 (Average Win Margin: 36.66%)
Ties: 1

Total Checks:
Sequential: 520,179,738
Prefix: 527,377,365

Code:
=== Configuration ===
Total numbers: 1,048,576
Block size: 4,096
Prefix: 4 characters (16^4 combinations)
Simulations: 1000

=== FINAL RESULTS ===
Wins:
Sequential: 25 (Average Win Margin: 37.39%)
Prefix: 907 (Average Win Margin: 3.35%)
Ties: 68

Total Checks:
Sequential: 517,804,613
Prefix: 511,880,745

I dunno, maybe my code is messed up lol.


Code:
import hashlib
import random
import multiprocessing as MP

TOTAL_SIZE = 2**20
RANGE_SIZE = 2**12
PREFIX_LENGTH = 4
SIMULATIONS = 1000

def generate_h160(data):
    return hashlib.new('ripemd160', str(data).encode()).hexdigest()

def shuffled_range(n):
    arr = list(range(n + 1))
    random.shuffle(arr)
    return arr

def sequential_search(size, block, target_hash, order):
    checks = 0
    for idx in order:
        start = idx * block
        end = start + block
        for num in range(start, end):
            checks += 1
            if generate_h160(num) == target_hash:
                return {'checks': checks, 'found': True}
    return {'checks': checks, 'found': False}

def precise_search(size, block, prefix_len, target_hash, order):
    prefix_hash = target_hash[:prefix_len]
    checks = 0
    ranges = []
    for idx in order:
        start = idx * block
        end = start + block
        found_prefix = False
        for num in range(start, end):
            checks += 1
            h = generate_h160(num)
            if h == target_hash:
                return {'checks': checks, 'found': True}
            if not found_prefix and h.startswith(prefix_hash):
                found_prefix = True
                ranges.append({'start': num + 1, 'end': end})
                break
    for r in ranges:
        for num in range(r['end'] - 1, r['start'] - 1, -1):
            checks += 1
            if generate_h160(num) == target_hash:
                return {'checks': checks, 'found': True}
    return {'checks': checks, 'found': False}

def single_simulation(_):
    blocks = TOTAL_SIZE // RANGE_SIZE
    order = shuffled_range(blocks - 1)
    target_num = random.randint(0, TOTAL_SIZE - 1)
    target_hash = generate_h160(target_num)

    seq_result = sequential_search(TOTAL_SIZE, RANGE_SIZE, target_hash, order)
    pre_result = precise_search(TOTAL_SIZE, RANGE_SIZE, PREFIX_LENGTH, target_hash, order)

    return seq_result['checks'], pre_result['checks']

def compare_methods_parallel():
    print(f"""
=== Configuration ===
Total numbers: {TOTAL_SIZE:,}
Block size: {RANGE_SIZE:,}
Prefix: {PREFIX_LENGTH} characters (16^{PREFIX_LENGTH} combinations)
Simulations: {SIMULATIONS}
""")

    cpu_count = max(MP.cpu_count() - 2, 1)
    print(f"Using {cpu_count} worker processes...\n")

    with MP.Pool(cpu_count) as pool:
        results = pool.map(single_simulation, range(SIMULATIONS))

    sequential_wins = 0
    precise_wins = 0
    ties = 0

    sequential_win_percentages = []
    precise_win_percentages = []

    total_seq_checks = 0
    total_pre_checks = 0

    for i, (seq_checks, pre_checks) in enumerate(results):
        total_seq_checks += seq_checks
        total_pre_checks += pre_checks

        if seq_checks < pre_checks:
            sequential_wins += 1
            win_percent = ((pre_checks - seq_checks) / pre_checks) * 100
            sequential_win_percentages.append(win_percent)
        elif seq_checks > pre_checks:
            precise_wins += 1
            win_percent = ((seq_checks - pre_checks) / seq_checks) * 100
            precise_win_percentages.append(win_percent)
        else:
            ties += 1

        print(f"Simulation {i + 1}: Sequential = {seq_checks} | Prefix = {pre_checks}")

    avg_seq_win_pct = sum(sequential_win_percentages) / len(sequential_win_percentages) if sequential_win_percentages else 0
    avg_pre_win_pct = sum(precise_win_percentages) / len(precise_win_percentages) if precise_win_percentages else 0

    print(f"""
=== FINAL RESULTS ===
Wins:
Sequential: {sequential_wins} (Average Win Margin: {avg_seq_win_pct:.2f}%)
Prefix: {precise_wins} (Average Win Margin: {avg_pre_win_pct:.2f}%)
Ties: {ties}

Total Checks:
Sequential: {total_seq_checks:,}
Prefix: {total_pre_checks:,}
""")

if __name__ == "__main__":
    MP.freeze_support()  # Important for Windows
    compare_methods_parallel()

Akito S. M. Hosana
Jr. Member
*
Offline Offline

Activity: 420
Merit: 8


View Profile
April 21, 2025, 06:13:07 AM
 #9289

No. In my case, I only verify whether the WIF (Wallet Import Format) is correct. The output displays only checksum-validated WIFs. The second script computes the corresponding public key and address.

How many verified WIFs do you have in the output generated by the GPU?  Tongue
nomachine
Full Member
***
Offline Offline

Activity: 812
Merit: 134



View Profile
April 21, 2025, 06:19:27 AM
 #9290

No. In my case, I only verify whether the WIF (Wallet Import Format) is correct. The output displays only checksum-validated WIFs. The second script computes the corresponding public key and address.

How many verified WIFs do you have in the output generated by the GPU?  Tongue


It depends on how many characters are missing in the WIF and their exact positions. For example, if 10 characters are missing at the beginning, the recovery speed would be approximately 2,000 valid WIFs per minute.
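A toy version of that search, assuming the positions of the missing characters are known (here: brute-forcing 2 unknown characters and keeping only checksum-valid candidates; names are illustrative, stdlib only). Each extra unknown multiplies the work by 58, which is why 10 missing characters needs GPUs:

```python
import hashlib
from itertools import product

B58 = "123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz"

def checksum_ok(wif: str) -> bool:
    n = 0
    for c in wif:
        n = n * 58 + B58.index(c)
    raw = n.to_bytes((n.bit_length() + 7) // 8, "big")
    return hashlib.sha256(hashlib.sha256(raw[:-4]).digest()).digest()[:4] == raw[-4:]

def recover(template: str):
    """Yield checksum-valid completions of a WIF template with '?' placeholders."""
    holes = [i for i, c in enumerate(template) if c == "?"]
    chars = list(template)
    for combo in product(B58, repeat=len(holes)):  # 58**len(holes) candidates
        for i, c in zip(holes, combo):
            chars[i] = c
        candidate = "".join(chars)
        if checksum_ok(candidate):
            yield candidate
```

Survivors still have to be narrowed down by deriving their address, which is what the second script handles.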

BTC: bc1qdwnxr7s08xwelpjy3cc52rrxg63xsmagv50fa8
Bram24732
Member
**
Offline Offline

Activity: 322
Merit: 28


View Profile
April 21, 2025, 06:22:41 AM
 #9291

one more data point added:

-- quoted results + script from #9288 snipped --


That looks ok, you get roughly the same number of checks with both methods, which is what I would expect

I solved 67 and 68 using custom software distributing the load across ~25k GPUs. 4090 stocks speeds : ~8.1Bkeys/sec. Don’t challenge me technically if you know shit about fuck, I’ll ignore you. Same goes if all you can do is LLM reply.
Akito S. M. Hosana
Jr. Member
*
Offline Offline

Activity: 420
Merit: 8


View Profile
April 21, 2025, 06:31:41 AM
 #9292

No. In my case, I only verify whether the WIF (Wallet Import Format) is correct. The output displays only checksum-validated WIFs. The second script computes the corresponding public key and address.

How many verified WIFs do you have in the output generated by the GPU?  Tongue


It depends on how many characters are missing in the WIF and their exact positions. For example, if 10 characters are missing at the beginning, the recovery speed would be approximately 2,000 valid WIFs per minute.

So you need 30-50 GPUs to get 1,000 valid WIFs/s?    Sad
Bram24732
Member
**
Offline Offline

Activity: 322
Merit: 28


View Profile
April 21, 2025, 06:33:04 AM
 #9293

Thanks, there is still one flaw: by counting only "wins" you miss one very important piece of data: how fast was one method compared to the other on each simulation? A win 5x faster does not have the same value as a win 1.2x faster.
This can be changed by summing the number of checks made over all the simulations, like this:

Code:
results = {"sequential": {"wins": 0, "checks": 0}, "precise": {"wins": 0, "checks": 0}, "ties": 0}
...
results["sequential"]["checks"] += seq_result["checks"]
results["precise"]["checks"] += pre_result["checks"]
.....
Sequential: {results['sequential']['checks']}
Prefix: {results['precise']['checks']}

As the script handles it, it's fine, because the important thing here was to demonstrate that prefixes are more efficient in the majority of attempts. It doesn't weigh computational load, because that would be unfair, as we omit the entire Bitcoin pipeline; besides, skipping 1,000 keys out of 5,000 is not the same as a 16**12 setup, to give an example... but the basic point has already been demonstrated, which was the probabilistic success rate.

I think counting who is first without taking into account how much faster you are does not reflect the statistical reality.
But it's ok to disagree Smiley

I solved 67 and 68 using custom software distributing the load across ~25k GPUs. 4090 stocks speeds : ~8.1Bkeys/sec. Don’t challenge me technically if you know shit about fuck, I’ll ignore you. Same goes if all you can do is LLM reply.
nomachine
Full Member
***
Offline Offline

Activity: 812
Merit: 134



View Profile
April 21, 2025, 06:41:06 AM
 #9294

So you need 30-50 GPUs to get 1,000 valid WIFs/s?    Sad

Yes, but you'll need much more if you want to solve it in a reasonable amount of time—though still less than what's required for Puzzle 69.  Grin

BTC: bc1qdwnxr7s08xwelpjy3cc52rrxg63xsmagv50fa8
Akito S. M. Hosana
Jr. Member
*
Offline Offline

Activity: 420
Merit: 8


View Profile
April 21, 2025, 06:57:57 AM
 #9295

So you need 30-50 GPUs to get 1,000 valid WIFs/s?    Sad

Yes, but you'll need much more if you want to solve it in a reasonable amount of time—though still less than what's required for Puzzle 69.  Grin


What will you do if you solve this? Tongue
nomachine
Full Member
***
Offline Offline

Activity: 812
Merit: 134



View Profile
April 21, 2025, 07:02:01 AM
 #9296

So you need 30-50 GPUs to get 1,000 valid WIFs/s?    Sad

Yes, but you'll need much more if you want to solve it in a reasonable amount of time—though still less than what's required for Puzzle 69.  Grin


What will you do if you solve this? Tongue

Hahaha, take it easy... It’s not that simple... But... Every member in this topic will get 0.2 BTC—if they have a BTC address in their signature. Satisfied?  Grin

BTC: bc1qdwnxr7s08xwelpjy3cc52rrxg63xsmagv50fa8
fantom06
Jr. Member
*
Offline Offline

Activity: 49
Merit: 1


View Profile
April 21, 2025, 07:02:34 AM
 #9297

=== FINAL RESULTS ===
Wins:
Sequential: 2105
Prefix: 2688
Ties: 207
Bram24732
Member
**
Offline Offline

Activity: 322
Merit: 28


View Profile
April 21, 2025, 07:09:55 AM
 #9298

-- Sim results

If you sum the number of checks over 10k simulations you get this :

Code:
=== FINAL RESULTS ===
Sequential: 495816995
Prefix: 496059807


That comparison is biased, since the prefix method wins most of the time, meaning it is the best choice. When the prefix method loses, it obviously checks more keys, because it assumes the target was skipped. And the goal is to find your best option to win, not to see who loses worse.

I think counting who is first without taking into account how much faster you are does not reflect the statistical reality.
But it's ok to disagree Smiley

The times when the prefix method wins, it traverses fewer keys than the sequential method; therefore, by common sense, it saves computational power.

Statistically, prefixes are the best option most of the time.


Similarly, the overall statistics are more or less equal when considering total traversals, both won and lost. However, prefixes still yield the highest success rate. There's no need to overcomplicate it.




It's not complicated. Nor is it a bias. Those are the actual numbers from your script.
On average, both methods require the same number of steps to reach a solution.
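That claim can be checked numerically: if the target is uniform over N candidates, any fixed visiting order is just a permutation of the same N checks, so the expected number of checks is (N+1)/2 regardless of ordering. A quick illustrative simulation (not the thread's script):

```python
import random

def expected_checks(order_fn, n=1000, trials=5000, seed=7):
    """Average checks until a uniformly random target is hit,
    visiting candidates in the order produced by order_fn."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        order = order_fn(n, rng)
        target = rng.randrange(n)
        total += order.index(target) + 1  # position of the target in the visit order
    return total / trials

sequential = lambda n, rng: list(range(n))         # fixed ascending order
shuffled = lambda n, rng: rng.sample(range(n), n)  # random block-style order

# Both land near (n + 1) / 2 = 500.5 checks, whatever the visiting order.
```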

I solved 67 and 68 using custom software distributing the load across ~25k GPUs. 4090 stocks speeds : ~8.1Bkeys/sec. Don’t challenge me technically if you know shit about fuck, I’ll ignore you. Same goes if all you can do is LLM reply.
Akito S. M. Hosana
Jr. Member
*
Offline Offline

Activity: 420
Merit: 8


View Profile
April 21, 2025, 07:19:06 AM
Merited by nomachine (3)
 #9299

Hahaha, take it easy... It’s not that simple... But... Every member in this topic will get 0.2 BTC—if they have a BTC address in their signature. Satisfied?  Grin

I'm not a full member yet.  Undecided
fantom06
Jr. Member
*
Offline Offline

Activity: 49
Merit: 1


View Profile
April 21, 2025, 07:28:15 AM
 #9300

-- Sim results

If you sum the number of checks over 10k simulations you get this :

Code:
=== FINAL RESULTS ===
Sequential: 495816995
Prefix: 496059807


That comparison is biased, since the prefix method wins most of the time, meaning it is the best choice. When the prefix method loses, it obviously checks more keys, because it assumes the target was skipped. And the goal is to find your best option to win, not to see who loses worse.

I think counting who is first without taking into account how much faster you are does not reflect the statistical reality.
But it's ok to disagree Smiley

The times when the prefix method wins, it traverses fewer keys than the sequential method; therefore, by common sense, it saves computational power.

Statistically, prefixes are the best option most of the time.


Similarly, the overall statistics are more or less equal when considering total traversals, both won and lost. However, prefixes still yield the highest success rate. There's no need to overcomplicate it.




Wins:
Sequential: 39 (Average Win Margin: 55.95%)
Prefix: 899 (Average Win Margin: 3.37%)
Ties: 62

Total Checks:
Sequential: 494,958,197
Prefix: 502,727,060