Author Topic: Probabilistic search of prefixes vs random+sequential-Part II  (Read 359 times)
mcdouglasx (OP)
Sr. Member
Activity: 770
Merit: 408
June 13, 2025, 11:14:30 PM
Merited by vapourminer (4)
#1

Previously, it was concluded that there was no statistically significant advantage of prefix over sequential searching.

But, to be fair, I don't think the issue is settled there, as this can be expanded a bit further to give a more elaborate idea of what could be achieved.

Far from wanting to demonstrate which method is more efficient in a second run, this post only aims to highlight the positive aspects of the prefix method over the sequential method.

Breaking this down into two steps:

Case 1 - There may be improvements, but these could be minimal in terms of significance.
Case 2 - Used as a tool to improve the chances of finding the target, but without expecting a 100% certain result (leaving it to chance).


Let's start with Case 1:

I've added a second phase of prefix detection that slightly improves on the method used in the previous post. To refresh our memory: we divided the scan range into blocks, and the size of these blocks was defined by the probability of a 3-character hexadecimal prefix appearing in a data set, which is 1/4096. Following that logic, the blocks were set to an equal or similar size. Whenever we found a 3-character prefix, we skipped the rest of the block and left it for later.
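
As a quick sanity check of that block-size choice, here is a minimal sketch (illustrative only, not part of the simulation) of the numbers behind it:

Code:
# A 3-hex-char prefix matches with probability 16**-3 = 1/4096, so a
# geometric model puts the expected wait at 4096 keys: roughly one hit
# per block of RANGE_SIZE = 4096.
p3 = 16 ** -3
print(f"P(3-char prefix) = 1/{16**3}")
print(f"Expected keys until first hit: {1/p3:,.0f}")
# Chance that a 4096-key block contains at least one 3-char hit:
print(f"P(block has a hit) = {1 - (1 - p3) ** 4096:.3f}")  # ~0.632 = 1 - 1/e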

The new part below is based on the same principle, but instead of skipping the rest of the block immediately, we now keep scanning the following keys for a prefix of 2 hexadecimal characters, i.e., one character fewer than our chosen prefix. This way, we take advantage of compound probability and mitigate possible omissions of the target on the first pass.
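
Back-of-the-envelope, the second phase looks like this (my own rough numbers, separate from the script below):

Code:
# After a 3-char hit, the scan keeps going until a 2-char hit, which
# matches with probability 16**-2 = 1/256. Both events must occur before
# the target's position for the target to be deferred, so on average the
# deferred tail is ~256 keys shorter than with an immediate skip.
p2 = 16 ** -2
print(f"P(2-char prefix) = 1/{16**2}")
print(f"Expected extra keys scanned after a 3-char hit: {1/p2:,.0f}")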


code

Code:
import hashlib
import random
import time
import math
import statistics
import scipy.stats as stats
import statsmodels.stats.power as smp
from math import ceil

# Configuration
TOTAL_SIZE = 100_000
RANGE_SIZE = 4_096
PREFIX_LENGTH = 3
SIMULATIONS = 10000

SECP256K1_ORDER = int("0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141", 16)

print(f"""
=== Configuration ===
Total numbers: {TOTAL_SIZE:,}
Block size: {RANGE_SIZE:,}
Total blocks needed: {ceil(TOTAL_SIZE/RANGE_SIZE)}
Prefix: {PREFIX_LENGTH} characters (16^{PREFIX_LENGTH} = {16**PREFIX_LENGTH:,} combinations)
Simulations: {SIMULATIONS}
secp256k1 order: {SECP256K1_ORDER}
""")

def generate_h160(data):
    # Simulation stand-in: hash the decimal string of the key instead of a
    # real public key; the prefix statistics behave the same either way.
    h = hashlib.new('ripemd160', str(data).encode('utf-8'))
    return h.hexdigest()

def shuffled_blck(total_blocks):
    blocks = list(range(total_blocks))
    random.shuffle(blocks)
    return blocks

def sequential_search(dataset, block_size, target_hash, block_order):
    checks = 0
    for block_idx in block_order:
        start = block_idx * block_size
        end = min(start + block_size, len(dataset))
        for i in range(start, end):
            checks += 1
            if generate_h160(dataset[i]) == target_hash:
                return {"checks": checks, "found": True, "index": i}
    return {"checks": checks, "found": False}

def prefix_search(dataset, block_size, prefix_len, target_hash, block_order):
    prefix = target_hash[:prefix_len]
    checks = 0
    omitted_ranges = []
    
    for block_idx in block_order:
        start = block_idx * block_size
        end = min(start + block_size, len(dataset))
        found_3_prefix = False
        found_2_prefix = False
        
        for i in range(start, end):
            checks += 1
            current_hash = generate_h160(dataset[i])
            
            if current_hash == target_hash:
                return {"checks": checks, "found": True, "index": i}
            
            # Phase 1: wait for a 3-char prefix hit. Phase 2: after that,
            # wait for a 2-char hit before deferring the rest of the block.
            if not found_3_prefix:
                if current_hash.startswith(prefix):
                    found_3_prefix = True
            else:
                if not found_2_prefix and current_hash.startswith(prefix[:2]):
                    found_2_prefix = True
            
            if found_3_prefix and found_2_prefix:
                omitted_ranges.append((i + 1, end))  # defer the tail for later
                break
    
    # Second pass: revisit the deferred tails, most recent first
    for start_omit, end_omit in reversed(omitted_ranges):
        for j in range(end_omit - 1, start_omit - 1, -1):
            checks += 1
            current_hash = generate_h160(dataset[j])
            if current_hash == target_hash:
                return {"checks": checks, "found": True, "index": j}
    
    return {"checks": checks, "found": False}


def comp_cohens_d(list1, list2):
    if len(list1) < 2 or len(list2) < 2:
        return float('nan')
    n1, n2 = len(list1), len(list2)
    m1, m2 = statistics.mean(list1), statistics.mean(list2)
    s1, s2 = statistics.stdev(list1), statistics.stdev(list2)
    
    pooled_std = math.sqrt(((n1-1)*s1**2 + (n2-1)*s2**2) / (n1+n2-2))
    if pooled_std == 0:
        return float('nan')
    return (m1 - m2) / pooled_std

def coeff_variation(data):
    if not data or statistics.mean(data) == 0:
        return float('nan')
    return (statistics.stdev(data) / statistics.mean(data)) * 100

def longest_streak(outcomes, letter):
    max_streak = current = 0
    for o in outcomes:
        current = current + 1 if o == letter else 0
        max_streak = max(max_streak, current)
    return max_streak

def ascii_bar(label, value, max_value, bar_length=50):
    bar_count = int((value / max_value) * bar_length) if max_value > 0 else 0
    return f"{label:12}: {'#' * bar_count} ({value})"

def conf_interval(data, confidence=0.95):
    if len(data) < 2:
        return (0, 0)
    try:
        return stats.t.interval(
            confidence=confidence,
            df=len(data)-1,
            loc=statistics.mean(data),
            scale=stats.sem(data)
        )
    except Exception:
        return (statistics.mean(data), statistics.mean(data))
    
def statistical_analysis(seq_checks, pre_checks, seq_success, pre_success):
    analysis = {}
    
    analysis['seq_mean'] = statistics.mean(seq_checks) if seq_checks else 0
    analysis['pre_mean'] = statistics.mean(pre_checks) if pre_checks else 0
    analysis['seq_ci'] = conf_interval(seq_checks)
    analysis['pre_ci'] = conf_interval(pre_checks)
    
    if len(seq_checks) > 1 and len(pre_checks) > 1:
        analysis['t_test'] = stats.ttest_ind(seq_checks, pre_checks, equal_var=False)
        analysis['mann_whitney'] = stats.mannwhitneyu(seq_checks, pre_checks)
        analysis['cohen_d'] = comp_cohens_d(seq_checks, pre_checks)
        
        effect_size = abs(analysis['cohen_d'])
        if effect_size > 0:
            analysis['power'] = smp.tt_ind_solve_power(
                effect_size=effect_size,
                nobs1=len(seq_checks),
                alpha=0.05,
                ratio=len(pre_checks)/len(seq_checks)
            )
        else:
            analysis['power'] = 0
    else:
        analysis['t_test'] = None
        analysis['mann_whitney'] = None
        analysis['cohen_d'] = 0
        analysis['power'] = 0
    
    analysis['risk_ratio'] = (seq_success/SIMULATIONS) / (pre_success/SIMULATIONS) if pre_success > 0 else 0
    
    return analysis

def compare_methods():
    results = {
        "sequential": {"wins": 0, "success": 0, "checks": [], "times": []},
        "prefix": {"wins": 0, "success": 0, "checks": [], "times": []},
        "ties": 0
    }
    outcome_history = []
    total_blocks = ceil(TOTAL_SIZE / RANGE_SIZE)

    for _ in range(SIMULATIONS):
        max_offset = SECP256K1_ORDER - TOTAL_SIZE - 1
        offset = random.randint(0, max_offset)
        dataset = [offset + i for i in range(TOTAL_SIZE)]
        target_num = random.choice(dataset)
        target_hash = generate_h160(target_num)
        block_order = shuffled_blck(total_blocks)

        start = time.perf_counter()
        seq_res = sequential_search(dataset, RANGE_SIZE, target_hash, block_order)
        seq_time = time.perf_counter() - start

        start = time.perf_counter()
        pre_res = prefix_search(dataset, RANGE_SIZE, PREFIX_LENGTH, target_hash, block_order)
        pre_time = time.perf_counter() - start

        if seq_res["found"]:
            results["sequential"]["success"] += 1
            results["sequential"]["checks"].append(seq_res["checks"])
            results["sequential"]["times"].append(seq_time)
            
        if pre_res["found"]:
            results["prefix"]["success"] += 1
            results["prefix"]["checks"].append(pre_res["checks"])
            results["prefix"]["times"].append(pre_time)
        
        if seq_res["found"] and pre_res["found"]:
            if seq_res["checks"] < pre_res["checks"]:
                results["sequential"]["wins"] += 1
                outcome_history.append("S")
            elif pre_res["checks"] < seq_res["checks"]:
                results["prefix"]["wins"] += 1
                outcome_history.append("P")
            else:
                results["ties"] += 1
                outcome_history.append("T")
        elif seq_res["found"]:
            results["sequential"]["wins"] += 1
            outcome_history.append("S")
        elif pre_res["found"]:
            results["prefix"]["wins"] += 1
            outcome_history.append("P")
        else:
            results["ties"] += 1
            outcome_history.append("T")

    def get_stats(data):
        if not data:
            return {"mean": 0, "min": 0, "max": 0, "median": 0, "stdev": 0}
        return {
            "mean": statistics.mean(data),
            "min": min(data),
            "max": max(data),
            "median": statistics.median(data),
            "stdev": statistics.stdev(data) if len(data) > 1 else 0
        }

    seq_stats = get_stats(results["sequential"]["checks"])
    pre_stats = get_stats(results["prefix"]["checks"])
    seq_time_stats = get_stats(results["sequential"]["times"])
    pre_time_stats = get_stats(results["prefix"]["times"])

    seq_success_rate = results["sequential"]["success"] / SIMULATIONS
    pre_success_rate = results["prefix"]["success"] / SIMULATIONS

    total_comparisons = results["sequential"]["wins"] + results["prefix"]["wins"] + results["ties"]
    seq_win_rate = results["sequential"]["wins"] / total_comparisons if total_comparisons > 0 else 0
    pre_win_rate = results["prefix"]["wins"] / total_comparisons if total_comparisons > 0 else 0

    cv_seq = coeff_variation(results["sequential"]["checks"])
    cv_pre = coeff_variation(results["prefix"]["checks"])

    stats_analysis = statistical_analysis(
        seq_checks=results["sequential"]["checks"],
        pre_checks=results["prefix"]["checks"],
        seq_success=results["sequential"]["success"],
        pre_success=results["prefix"]["success"]
    )

    print(f"""
=== FINAL ANALYSIS ===

[Success Rates]
Sequential: {seq_success_rate:.1%} ({results['sequential']['success']}/{SIMULATIONS})
Prefix:    {pre_success_rate:.1%} ({results['prefix']['success']}/{SIMULATIONS})

[Performance Metrics]
               | Sequential          | Prefix
---------------+---------------------+--------------------
Checks (mean)  | {seq_stats['mean']:>12,.1f} ± {seq_stats['stdev']:,.1f} | {pre_stats['mean']:>12,.1f} ± {pre_stats['stdev']:,.1f}
Time (mean ms) | {seq_time_stats['mean']*1000:>12.2f} ± {seq_time_stats['stdev']*1000:.2f} | {pre_time_stats['mean']*1000:>12.2f} ± {pre_time_stats['stdev']*1000:.2f}
Min checks     | {seq_stats['min']:>12,} | {pre_stats['min']:>12,}
Max checks     | {seq_stats['max']:>12,} | {pre_stats['max']:>12,}
Coef. Variation| {cv_seq:>11.1f}% | {cv_pre:>11.1f}%

[Comparison When Both Succeed]
Sequential wins: {results['sequential']['wins']} ({seq_win_rate:.1%})
Prefix wins:    {results['prefix']['wins']} ({pre_win_rate:.1%})
Ties:          {results['ties']}

=== ADVANCED STATISTICS ===

[Confidence Intervals 95%]
Checks Sequential: {seq_stats['mean']:.1f} ({stats_analysis['seq_ci'][0]:.1f} - {stats_analysis['seq_ci'][1]:.1f})
Checks Prefix:    {pre_stats['mean']:.1f} ({stats_analysis['pre_ci'][0]:.1f} - {stats_analysis['pre_ci'][1]:.1f})

[Statistical Tests]
Welch's t-test: {'t = %.3f, p = %.4f' % (stats_analysis['t_test'].statistic, stats_analysis['t_test'].pvalue) if stats_analysis['t_test'] else 'N/A'}
Mann-Whitney U: {'U = %.1f, p = %.4f' % (stats_analysis['mann_whitney'].statistic, stats_analysis['mann_whitney'].pvalue) if stats_analysis['mann_whitney'] else 'N/A'}
Effect Size (Cohen's d): {stats_analysis['cohen_d']:.3f}

[Power Analysis]
Statistical Power: {stats_analysis['power']:.1%}

[Risk/Benefit Ratio]
Success Ratio (Seq/Pre): {stats_analysis['risk_ratio']:.2f}:1
""")

    non_tie_outcomes = [o for o in outcome_history if o != "T"]
    streak_analysis = f"""
=== STREAK ANALYSIS ===
Longest Sequential streak: {longest_streak(outcome_history, 'S')}
Longest Prefix streak:    {longest_streak(outcome_history, 'P')}
Expected max streak:      {math.log(len(non_tie_outcomes), 2):.1f} (for {len(non_tie_outcomes)} trials)
"""
    print(streak_analysis)

    max_wins = max(results["sequential"]["wins"], results["prefix"]["wins"], results["ties"])
    print("=== WIN DISTRIBUTION ===")
    print(ascii_bar("Sequential", results["sequential"]["wins"], max_wins))
    print(ascii_bar("Prefix", results["prefix"]["wins"], max_wins))
    print(ascii_bar("Ties", results["ties"], max_wins))

if __name__ == '__main__':
    compare_methods()


result


Code:
=== FINAL ANALYSIS ===

[Success Rates]
Sequential: 100.0% (10000/10000)
Prefix:    100.0% (10000/10000)

[Performance Metrics]
               | Sequential          | Prefix
---------------+---------------------+--------------------
Checks (mean)  |     50,326.6 ± 28,808.3 |     49,908.5 ± 28,648.4
Time (mean ms) |       126.15 ± 72.73 |       129.80 ± 74.85
Min checks     |            7 |            7
Max checks     |       99,966 |       99,998
Coef. Variation|        57.2% |        57.4%

[Comparison When Both Succeed]
Sequential wins: 2385 (23.8%)
Prefix wins:    7149 (71.5%)
Ties:          466

=== ADVANCED STATISTICS ===

[Confidence Intervals 95%]
Checks Sequential: 50326.6 (49761.9 - 50891.3)
Checks Prefix:    49908.5 (49346.9 - 50470.1)

[Statistical Tests]
Welch's t-test: t = 1.029, p = 0.3035
Mann-Whitney U: U = 50420196.5, p = 0.3034
Effect Size (Cohen's d): 0.015

[Power Analysis]
Statistical Power: 17.7%

[Risk/Benefit Ratio]
Success Ratio (Seq/Pre): 1.00:1

=== STREAK ANALYSIS ===
Longest Sequential streak: 6
Longest Prefix streak:    30
Expected max streak:      13.2 (for 9534 trials)

=== WIN DISTRIBUTION ===
Sequential  : ################ (2385)
Prefix      : ################################################## (7149)
Ties        : ### (466)

Case 1.2:

In this revision, we simply add a way to counteract the fact that skipped ranges are left until last: after every 4 skipped ranges, those ranges are rescanned immediately, limiting the worst-case scenarios pointed out by newsecurity. A minimal sketch of the policy follows, ahead of the full script.
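
Here is the flush policy in isolation (a sketch only; scan_block and rescan_tail are hypothetical stand-ins for the hash-check loops in the full script):

Code:
FLUSH_EVERY = 4

def prefix_scan_with_flush(blocks, scan_block, rescan_tail):
    """Sketch: scan blocks; every FLUSH_EVERY deferred tails, rescan them."""
    pending = []
    for blk in blocks:
        tail = scan_block(blk)        # (start, end) left unscanned, or None
        if tail is not None:
            pending.append(tail)
            if len(pending) >= FLUSH_EVERY:
                for t in reversed(pending):   # rescan now: bounds the worst case
                    rescan_tail(t)
                pending.clear()
    for t in reversed(pending):       # whatever is still deferred at the end
        rescan_tail(t)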


code

Code:
import hashlib
import random
import time
import math
import statistics
import scipy.stats as stats
import statsmodels.stats.power as smp
from math import ceil

# Configuration
TOTAL_SIZE = 100_000
RANGE_SIZE = 4_096
PREFIX_LENGTH = 3
SIMULATIONS = 10000

SECP256K1_ORDER = int("0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141", 16)

print(f"""
=== Configuration ===
Total numbers: {TOTAL_SIZE:,}
Block size: {RANGE_SIZE:,}
Total blocks needed: {ceil(TOTAL_SIZE/RANGE_SIZE)}
Prefix: {PREFIX_LENGTH} characters (16^{PREFIX_LENGTH} = {16**PREFIX_LENGTH:,} combinations)
Simulations: {SIMULATIONS}
secp256k1 order: {SECP256K1_ORDER}
""")

def generate_h160(data):
    h = hashlib.new('ripemd160', str(data).encode('utf-8'))
    return h.hexdigest()

def shuffled_blck(total_blocks):
    blocks = list(range(total_blocks))
    random.shuffle(blocks)
    return blocks

def sequential_search(dataset, block_size, target_hash, block_order):
    checks = 0
    for block_idx in block_order:
        start = block_idx * block_size
        end = min(start + block_size, len(dataset))
        for i in range(start, end):
            checks += 1
            if generate_h160(dataset[i]) == target_hash:
                return {"checks": checks, "found": True, "index": i}
    return {"checks": checks, "found": False}

def prefix_search(dataset, block_size, prefix_len, target_hash, block_order):
    prefix = target_hash[:prefix_len]
    checks = 0
    pending_omissions = []  # deferred tails awaiting the batched rescan
    
    for block_idx in block_order:
        start = block_idx * block_size
        end = min(start + block_size, len(dataset))
        found_3_prefix = False
        found_2_prefix = False
        
        for i in range(start, end):
            checks += 1
            current_hash = generate_h160(dataset[i])
            
            if current_hash == target_hash:
                return {"checks": checks, "found": True, "index": i}
            
            if not found_3_prefix:
                if current_hash.startswith(prefix):
                    found_3_prefix = True
            else:
                if not found_2_prefix and current_hash.startswith(prefix[:2]):
                    found_2_prefix = True
            
            if found_3_prefix and found_2_prefix:
                pending_omissions.append((i + 1, end))
                
                # Every 4 deferred tails, rescan them immediately instead of
                # leaving them all for the very end (bounds the worst case).
                if len(pending_omissions) >= 4:
                    for r_start, r_end in reversed(pending_omissions[-4:]):
                        for j in range(r_end - 1, r_start - 1, -1):
                            checks += 1
                            if generate_h160(dataset[j]) == target_hash:
                                return {"checks": checks, "found": True, "index": j}
                    pending_omissions = []
                break
    
    for r_start, r_end in reversed(pending_omissions):
        for j in range(r_end - 1, r_start - 1, -1):
            checks += 1
            if generate_h160(dataset[j]) == target_hash:
                return {"checks": checks, "found": True, "index": j}
    
    return {"checks": checks, "found": False}


def comp_cohens_d(list1, list2):
    if len(list1) < 2 or len(list2) < 2:
        return float('nan')
    n1, n2 = len(list1), len(list2)
    m1, m2 = statistics.mean(list1), statistics.mean(list2)
    s1, s2 = statistics.stdev(list1), statistics.stdev(list2)
    
    pooled_std = math.sqrt(((n1-1)*s1**2 + (n2-1)*s2**2) / (n1+n2-2))
    if pooled_std == 0:
        return float('nan')
    return (m1 - m2) / pooled_std

def coeff_variation(data):
    if not data or statistics.mean(data) == 0:
        return float('nan')
    return (statistics.stdev(data) / statistics.mean(data)) * 100

def longest_streak(outcomes, letter):
    max_streak = current = 0
    for o in outcomes:
        current = current + 1 if o == letter else 0
        max_streak = max(max_streak, current)
    return max_streak

def ascii_bar(label, value, max_value, bar_length=50):
    bar_count = int((value / max_value) * bar_length) if max_value > 0 else 0
    return f"{label:12}: {'#' * bar_count} ({value})"

def conf_interval(data, confidence=0.95):
    if len(data) < 2:
        return (0, 0)
    try:
        return stats.t.interval(
            confidence=confidence,
            df=len(data)-1,
            loc=statistics.mean(data),
            scale=stats.sem(data)
        )
    except Exception:
        return (statistics.mean(data), statistics.mean(data))
    
def statistical_analysis(seq_checks, pre_checks, seq_success, pre_success):
    analysis = {}
    
    analysis['seq_mean'] = statistics.mean(seq_checks) if seq_checks else 0
    analysis['pre_mean'] = statistics.mean(pre_checks) if pre_checks else 0
    analysis['seq_ci'] = conf_interval(seq_checks)
    analysis['pre_ci'] = conf_interval(pre_checks)
    
    if len(seq_checks) > 1 and len(pre_checks) > 1:
        analysis['t_test'] = stats.ttest_ind(seq_checks, pre_checks, equal_var=False)
        analysis['mann_whitney'] = stats.mannwhitneyu(seq_checks, pre_checks)
        analysis['cohen_d'] = comp_cohens_d(seq_checks, pre_checks)
        
        effect_size = abs(analysis['cohen_d'])
        if effect_size > 0:
            analysis['power'] = smp.tt_ind_solve_power(
                effect_size=effect_size,
                nobs1=len(seq_checks),
                alpha=0.05,
                ratio=len(pre_checks)/len(seq_checks)
            )
        else:
            analysis['power'] = 0
    else:
        analysis['t_test'] = None
        analysis['mann_whitney'] = None
        analysis['cohen_d'] = 0
        analysis['power'] = 0
    
    analysis['risk_ratio'] = (seq_success/SIMULATIONS) / (pre_success/SIMULATIONS) if pre_success > 0 else 0
    
    return analysis

def compare_methods():
    results = {
        "sequential": {"wins": 0, "success": 0, "checks": [], "times": []},
        "prefix": {"wins": 0, "success": 0, "checks": [], "times": []},
        "ties": 0
    }
    outcome_history = []
    total_blocks = ceil(TOTAL_SIZE / RANGE_SIZE)

    for _ in range(SIMULATIONS):
        max_offset = SECP256K1_ORDER - TOTAL_SIZE - 1
        offset = random.randint(0, max_offset)
        dataset = [offset + i for i in range(TOTAL_SIZE)]
        target_num = random.choice(dataset)
        target_hash = generate_h160(target_num)
        block_order = shuffled_blck(total_blocks)

        start = time.perf_counter()
        seq_res = sequential_search(dataset, RANGE_SIZE, target_hash, block_order)
        seq_time = time.perf_counter() - start

        start = time.perf_counter()
        pre_res = prefix_search(dataset, RANGE_SIZE, PREFIX_LENGTH, target_hash, block_order)
        pre_time = time.perf_counter() - start

        if seq_res["found"]:
            results["sequential"]["success"] += 1
            results["sequential"]["checks"].append(seq_res["checks"])
            results["sequential"]["times"].append(seq_time)
            
        if pre_res["found"]:
            results["prefix"]["success"] += 1
            results["prefix"]["checks"].append(pre_res["checks"])
            results["prefix"]["times"].append(pre_time)
        
        if seq_res["found"] and pre_res["found"]:
            if seq_res["checks"] < pre_res["checks"]:
                results["sequential"]["wins"] += 1
                outcome_history.append("S")
            elif pre_res["checks"] < seq_res["checks"]:
                results["prefix"]["wins"] += 1
                outcome_history.append("P")
            else:
                results["ties"] += 1
                outcome_history.append("T")
        elif seq_res["found"]:
            results["sequential"]["wins"] += 1
            outcome_history.append("S")
        elif pre_res["found"]:
            results["prefix"]["wins"] += 1
            outcome_history.append("P")
        else:
            results["ties"] += 1
            outcome_history.append("T")

    def get_stats(data):
        if not data:
            return {"mean": 0, "min": 0, "max": 0, "median": 0, "stdev": 0}
        return {
            "mean": statistics.mean(data),
            "min": min(data),
            "max": max(data),
            "median": statistics.median(data),
            "stdev": statistics.stdev(data) if len(data) > 1 else 0
        }

    seq_stats = get_stats(results["sequential"]["checks"])
    pre_stats = get_stats(results["prefix"]["checks"])
    seq_time_stats = get_stats(results["sequential"]["times"])
    pre_time_stats = get_stats(results["prefix"]["times"])

    seq_success_rate = results["sequential"]["success"] / SIMULATIONS
    pre_success_rate = results["prefix"]["success"] / SIMULATIONS

    total_comparisons = results["sequential"]["wins"] + results["prefix"]["wins"] + results["ties"]
    seq_win_rate = results["sequential"]["wins"] / total_comparisons if total_comparisons > 0 else 0
    pre_win_rate = results["prefix"]["wins"] / total_comparisons if total_comparisons > 0 else 0

    cv_seq = coeff_variation(results["sequential"]["checks"])
    cv_pre = coeff_variation(results["prefix"]["checks"])

    stats_analysis = statistical_analysis(
        seq_checks=results["sequential"]["checks"],
        pre_checks=results["prefix"]["checks"],
        seq_success=results["sequential"]["success"],
        pre_success=results["prefix"]["success"]
    )

    print(f"""
=== FINAL ANALYSIS ===

[Success Rates]
Sequential: {seq_success_rate:.1%} ({results['sequential']['success']}/{SIMULATIONS})
Prefix:    {pre_success_rate:.1%} ({results['prefix']['success']}/{SIMULATIONS})

[Performance Metrics]
               | Sequential          | Prefix
---------------+---------------------+--------------------
Checks (mean)  | {seq_stats['mean']:>12,.1f} ± {seq_stats['stdev']:,.1f} | {pre_stats['mean']:>12,.1f} ± {pre_stats['stdev']:,.1f}
Time (mean ms) | {seq_time_stats['mean']*1000:>12.2f} ± {seq_time_stats['stdev']*1000:.2f} | {pre_time_stats['mean']*1000:>12.2f} ± {pre_time_stats['stdev']*1000:.2f}
Min checks     | {seq_stats['min']:>12,} | {pre_stats['min']:>12,}
Max checks     | {seq_stats['max']:>12,} | {pre_stats['max']:>12,}
Coef. Variation| {cv_seq:>11.1f}% | {cv_pre:>11.1f}%

[Comparison When Both Succeed]
Sequential wins: {results['sequential']['wins']} ({seq_win_rate:.1%})
Prefix wins:    {results['prefix']['wins']} ({pre_win_rate:.1%})
Ties:          {results['ties']}

=== ADVANCED STATISTICS ===

[Confidence Intervals 95%]
Checks Sequential: {seq_stats['mean']:.1f} ({stats_analysis['seq_ci'][0]:.1f} - {stats_analysis['seq_ci'][1]:.1f})
Checks Prefix:    {pre_stats['mean']:.1f} ({stats_analysis['pre_ci'][0]:.1f} - {stats_analysis['pre_ci'][1]:.1f})

[Statistical Tests]
Welch's t-test: {'t = %.3f, p = %.4f' % (stats_analysis['t_test'].statistic, stats_analysis['t_test'].pvalue) if stats_analysis['t_test'] else 'N/A'}
Mann-Whitney U: {'U = %.1f, p = %.4f' % (stats_analysis['mann_whitney'].statistic, stats_analysis['mann_whitney'].pvalue) if stats_analysis['mann_whitney'] else 'N/A'}
Effect Size (Cohen's d): {stats_analysis['cohen_d']:.3f}

[Power Analysis]
Statistical Power: {stats_analysis['power']:.1%}

[Risk/Benefit Ratio]
Success Ratio (Seq/Pre): {stats_analysis['risk_ratio']:.2f}:1
""")

    non_tie_outcomes = [o for o in outcome_history if o != "T"]
    streak_analysis = f"""
=== STREAK ANALYSIS ===
Longest Sequential streak: {longest_streak(outcome_history, 'S')}
Longest Prefix streak:    {longest_streak(outcome_history, 'P')}
Expected max streak:      {math.log(len(non_tie_outcomes), 2):.1f} (for {len(non_tie_outcomes)} trials)
"""
    print(streak_analysis)

    max_wins = max(results["sequential"]["wins"], results["prefix"]["wins"], results["ties"])
    print("=== WIN DISTRIBUTION ===")
    print(ascii_bar("Sequential", results["sequential"]["wins"], max_wins))
    print(ascii_bar("Prefix", results["prefix"]["wins"], max_wins))
    print(ascii_bar("Ties", results["ties"], max_wins))

if __name__ == '__main__':
    compare_methods()


result


Code:
=== FINAL ANALYSIS ===

[Success Rates]
Sequential: 100.0% (10000/10000)
Prefix:    100.0% (10000/10000)

[Performance Metrics]
               | Sequential          | Prefix
---------------+---------------------+--------------------
Checks (mean)  |     49,712.4 ± 28,876.0 |     49,644.4 ± 28,930.7
Time (mean ms) |       124.20 ± 72.81 |       127.68 ± 74.95
Min checks     |           10 |           10
Max checks     |       99,993 |       99,986
Coef. Variation|        58.1% |        58.3%

[Comparison When Both Succeed]
Sequential wins: 2180 (21.8%)
Prefix wins:    5939 (59.4%)
Ties:          1881

=== ADVANCED STATISTICS ===

[Confidence Intervals 95%]
Checks Sequential: 49712.4 (49146.4 - 50278.5)
Checks Prefix:    49644.4 (49077.3 - 50211.5)

[Statistical Tests]
Welch's t-test: t = 0.166, p = 0.8679
Mann-Whitney U: U = 50070425.5, p = 0.8630
Effect Size (Cohen's d): 0.002

[Power Analysis]
Statistical Power: 5.3%

[Risk/Benefit Ratio]
Success Ratio (Seq/Pre): 1.00:1


=== STREAK ANALYSIS ===
Longest Sequential streak: 6
Longest Prefix streak:    15
Expected max streak:      13.0 (for 8119 trials)

=== WIN DISTRIBUTION ===
Sequential  : ################## (2180)
Prefix      : ################################################## (5939)
Ties        : ############### (1881)

Now, let's move on to Case 2:

This is the case that, in my opinion, most puzzle seekers would resort to, or should, since they're just trying their luck and aren't searching for the target with the goal of finding it no matter what. They simply want to perform a partial search that offers them the best chances.

For this use case, the prefix method is clearly the better option.

The following comparison is based on the number of times both methods found the target. The entire range isn't considered, because this is a direct comparison of probabilistic efficiency. For the prefix method at this point, you don't need to store the skipped ranges, since the goal isn't to perform a complete search, but rather to give randomness its best chance, with obvious statistical improvements.
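
As a rough closed-form check (my own back-of-the-envelope estimate, separate from the script below): under this rule the target is found only if no other 3-character prefix hit lands before it inside its own block.

Code:
# P(found) = average, over a uniform target position k within its block,
# of P(no decoy prefix hit in the first k keys), with p = 1/4096 per key.
p, block = 1 / 4096, 4096
p_found = sum((1 - p) ** k for k in range(block)) / block
print(f"P(target survives its own block) = {p_found:.3f}")  # ~0.632 = 1 - 1/e

This ~63% lines up with the valid-case rate reported in the results (6336/10000).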


code

Code:
import hashlib
import random
import time
import math
import statistics
import scipy.stats as stats
import statsmodels.stats.power as smp
from math import ceil

# Configuration
TOTAL_SIZE = 100_000
RANGE_SIZE = 4_096
PREFIX_LENGTH = 3
SIMULATIONS = 10000

SECP256K1_ORDER = int("0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141", 16)

print(f"""
=== Configuration ===
Total numbers: {TOTAL_SIZE:,}
Block size: {RANGE_SIZE:,}
Total blocks needed: {ceil(TOTAL_SIZE/RANGE_SIZE)}
Prefix: {PREFIX_LENGTH} characters (16^{PREFIX_LENGTH} = {16**PREFIX_LENGTH:,} combinations)
Simulations: {SIMULATIONS}
secp256k1 order: {SECP256K1_ORDER}
""")

def generate_h160(data):
    h = hashlib.new('ripemd160', str(data).encode('utf-8'))
    return h.hexdigest()

def shuffled_blck(total_blocks):
    blocks = list(range(total_blocks))
    random.shuffle(blocks)
    return blocks

def sequential_search(dataset, block_size, target_hash, block_order):
    checks = 0
    for block_idx in block_order:
        start = block_idx * block_size
        end = min(start + block_size, len(dataset))
        for i in range(start, end):
            checks += 1
            if generate_h160(dataset[i]) == target_hash:
                return {"checks": checks, "found": True, "index": i}
    return {"checks": checks, "found": False}

def prefix_search(dataset, block_size, prefix_len, target_hash, block_order):
    prefix = target_hash[:prefix_len]
    checks = 0
    
    for block_idx in block_order:
        start = block_idx * block_size
        end = min(start + block_size, len(dataset))
        
        for i in range(start, end):
            checks += 1
            current_hash = generate_h160(dataset[i])
            
            if current_hash == target_hash:
                return {"checks": checks, "found": True, "index": i}
            
            # Case 2: abandon the block at the first prefix hit; deferred
            # tails are never revisited in this variant.
            if current_hash.startswith(prefix):
                break
    
    return {"checks": checks, "found": False}

def comp_cohens_d(list1, list2):
    if len(list1) < 2 or len(list2) < 2:
        return float('nan')
    n1, n2 = len(list1), len(list2)
    m1, m2 = statistics.mean(list1), statistics.mean(list2)
    s1, s2 = statistics.stdev(list1), statistics.stdev(list2)
    
    pooled_std = math.sqrt(((n1-1)*s1**2 + (n2-1)*s2**2) / (n1+n2-2))
    if pooled_std == 0:
        return float('nan')
    return (m1 - m2) / pooled_std

def coeff_variation(data):
    if not data or statistics.mean(data) == 0:
        return float('nan')
    return (statistics.stdev(data) / statistics.mean(data)) * 100

def longest_streak(outcomes, letter):
    max_streak = current = 0
    for o in outcomes:
        current = current + 1 if o == letter else 0
        max_streak = max(max_streak, current)
    return max_streak

def ascii_bar(label, value, max_value, bar_length=50):
    bar_count = int((value / max_value) * bar_length) if max_value > 0 else 0
    return f"{label:12}: {'#' * bar_count} ({value})"

def conf_interval(data, confidence=0.95):
    if len(data) < 2:
        return (0, 0)
    try:
        return stats.t.interval(
            confidence=confidence,
            df=len(data)-1,
            loc=statistics.mean(data),
            scale=stats.sem(data)
        )
    except Exception:
        return (statistics.mean(data), statistics.mean(data))
    
def statistical_analysis(seq_checks, pre_checks):
    analysis = {}
    
    analysis['seq_mean'] = statistics.mean(seq_checks) if seq_checks else 0
    analysis['pre_mean'] = statistics.mean(pre_checks) if pre_checks else 0
    analysis['seq_ci'] = conf_interval(seq_checks)
    analysis['pre_ci'] = conf_interval(pre_checks)
    
    if len(seq_checks) > 1 and len(pre_checks) > 1:
        analysis['t_test'] = stats.ttest_ind(seq_checks, pre_checks, equal_var=False)
        analysis['mann_whitney'] = stats.mannwhitneyu(seq_checks, pre_checks)
        analysis['cohen_d'] = comp_cohens_d(seq_checks, pre_checks)
        
        effect_size = abs(analysis['cohen_d'])
        if effect_size > 0:
            analysis['power'] = smp.tt_ind_solve_power(
                effect_size=effect_size,
                nobs1=len(seq_checks),
                alpha=0.05,
                ratio=len(pre_checks)/len(seq_checks)
            )
        else:
            analysis['power'] = 0
    else:
        analysis['t_test'] = None
        analysis['mann_whitney'] = None
        analysis['cohen_d'] = 0
        analysis['power'] = 0
    
    return analysis

def compare_methods():
    results = {
        "sequential": {"wins": 0, "checks": [], "times": []},
        "prefix": {"wins": 0, "checks": [], "times": []},
        "ties": 0,
        "both_failed": 0
    }
    outcome_history = []
    total_blocks = ceil(TOTAL_SIZE / RANGE_SIZE)
    valid_cases = 0

    for _ in range(SIMULATIONS):
        max_offset = SECP256K1_ORDER - TOTAL_SIZE - 1
        offset = random.randint(0, max_offset)
        dataset = [offset + i for i in range(TOTAL_SIZE)]
        target_num = random.choice(dataset)
        target_hash = generate_h160(target_num)
        block_order = shuffled_blck(total_blocks)

        start = time.perf_counter()
        seq_res = sequential_search(dataset, RANGE_SIZE, target_hash, block_order)
        seq_time = time.perf_counter() - start

        start = time.perf_counter()
        pre_res = prefix_search(dataset, RANGE_SIZE, PREFIX_LENGTH, target_hash, block_order)
        pre_time = time.perf_counter() - start

        if seq_res["found"] and pre_res["found"]:
            valid_cases += 1
            results["sequential"]["checks"].append(seq_res["checks"])
            results["prefix"]["checks"].append(pre_res["checks"])
            results["sequential"]["times"].append(seq_time)
            results["prefix"]["times"].append(pre_time)
            
            if seq_res["checks"] < pre_res["checks"]:
                results["sequential"]["wins"] += 1
                outcome_history.append("S")
            elif pre_res["checks"] < seq_res["checks"]:
                results["prefix"]["wins"] += 1
                outcome_history.append("P")
            else:
                results["ties"] += 1
                outcome_history.append("T")
        elif not seq_res["found"] and not pre_res["found"]:
            results["both_failed"] += 1
        else:
            continue

    def get_stats(data):
        if not data:
            return {"mean": 0, "min": 0, "max": 0, "median": 0, "stdev": 0}
        return {
            "mean": statistics.mean(data),
            "min": min(data),
            "max": max(data),
            "median": statistics.median(data),
            "stdev": statistics.stdev(data) if len(data) > 1 else 0
        }

    seq_stats = get_stats(results["sequential"]["checks"])
    pre_stats = get_stats(results["prefix"]["checks"])
    seq_time_stats = get_stats(results["sequential"]["times"])
    pre_time_stats = get_stats(results["prefix"]["times"])

    total_comparisons = results["sequential"]["wins"] + results["prefix"]["wins"] + results["ties"]
    seq_win_rate = results["sequential"]["wins"] / total_comparisons if total_comparisons > 0 else 0
    pre_win_rate = results["prefix"]["wins"] / total_comparisons if total_comparisons > 0 else 0

    cv_seq = coeff_variation(results["sequential"]["checks"])
    cv_pre = coeff_variation(results["prefix"]["checks"])

    stats_analysis = statistical_analysis(
        seq_checks=results["sequential"]["checks"],
        pre_checks=results["prefix"]["checks"]
    )

    print(f"""
=== FINAL ANALYSIS ===
Valid cases (both found target): {valid_cases}/{SIMULATIONS}

[Performance Metrics]
               | Sequential          | Prefix
---------------+---------------------+--------------------
Checks (mean)  | {seq_stats['mean']:>12,.1f} ± {seq_stats['stdev']:,.1f} | {pre_stats['mean']:>12,.1f} ± {pre_stats['stdev']:,.1f}
Time (mean ms) | {seq_time_stats['mean']*1000:>12.2f} ± {seq_time_stats['stdev']*1000:.2f} | {pre_time_stats['mean']*1000:>12.2f} ± {pre_time_stats['stdev']*1000:.2f}
Min checks     | {seq_stats['min']:>12,} | {pre_stats['min']:>12,}
Max checks     | {seq_stats['max']:>12,} | {pre_stats['max']:>12,}
Coef. Variation| {cv_seq:>11.1f}% | {cv_pre:>11.1f}%

[Comparison Results]
Sequential wins: {results['sequential']['wins']} ({seq_win_rate:.1%})
Prefix wins:    {results['prefix']['wins']} ({pre_win_rate:.1%})
Ties:          {results['ties']}
Both failed:    {results['both_failed']}

=== STATISTICAL ANALYSIS ===

[Confidence Intervals]
Checks Sequential: {seq_stats['mean']:.1f} ({stats_analysis['seq_ci'][0]:.1f} - {stats_analysis['seq_ci'][1]:.1f})
Checks Prefix:    {pre_stats['mean']:.1f} ({stats_analysis['pre_ci'][0]:.1f} - {stats_analysis['pre_ci'][1]:.1f})

[Statistical Tests]
Welch's t-test: {'t = %.3f, p = %.4f' % (stats_analysis['t_test'].statistic, stats_analysis['t_test'].pvalue) if stats_analysis['t_test'] else 'N/A'}
Mann-Whitney U: {'U = %.1f, p = %.4f' % (stats_analysis['mann_whitney'].statistic, stats_analysis['mann_whitney'].pvalue) if stats_analysis['mann_whitney'] else 'N/A'}
Effect Size (Cohen's d): {stats_analysis['cohen_d']:.3f}

[Power Analysis]
Statistical Power: {stats_analysis['power']:.1%}
""")

    if outcome_history:
        non_tie_outcomes = [o for o in outcome_history if o != "T"]
        streak_analysis = f"""
=== STREAK ANALYSIS ===
Longest Sequential streak: {longest_streak(outcome_history, 'S')}
Longest Prefix streak:    {longest_streak(outcome_history, 'P')}
Expected max streak:      {math.log(len(non_tie_outcomes), 2):.1f} (for {len(non_tie_outcomes)} trials)
"""
        print(streak_analysis)

        max_wins = max(results["sequential"]["wins"], results["prefix"]["wins"], results["ties"])
        print("=== WIN DISTRIBUTION ===")
        print(ascii_bar("Sequential", results["sequential"]["wins"], max_wins))
        print(ascii_bar("Prefix", results["prefix"]["wins"], max_wins))
        print(ascii_bar("Ties", results["ties"], max_wins))

if __name__ == '__main__':
    compare_methods()


result... continued in the next post

mcdouglasx (OP)
Sr. Member
Activity: 770
Merit: 408
June 13, 2025, 11:15:58 PM
Last edit: June 14, 2025, 09:26:05 AM by mcdouglasx
#2

result

Code:
=== FINAL ANALYSIS ===
Valid cases (both found target): 6336/10000

[Performance Metrics]
               | Sequential          | Prefix
---------------+---------------------+--------------------
Checks (mean)  |     49,619.5 ± 28,835.7 |     32,096.8 ± 18,938.4
Time (mean ms) |       127.66 ± 74.88 |        84.95 ± 50.71
Min checks     |           23 |           23
Max checks     |       99,981 |       84,112
Coef. Variation|        58.1% |        59.0%

[Comparison Results]
Sequential wins: 0 (0.0%)
Prefix wins:    5934 (93.7%)
Ties:          402
Both failed:    0

=== STATISTICAL ANALYSIS ===

[Confidence Intervals]
Checks Sequential: 49619.5 (48909.3 - 50329.7)
Checks Prefix:    32096.8 (31630.4 - 32563.2)

[Statistical Tests]
Welch's t-test: t = 40.430, p = 0.0000
Mann-Whitney U: U = 27088422.0, p = 0.0000
Effect Size (Cohen's d): 0.718

[Power Analysis]
Statistical Power: 100.0%


=== STREAK ANALYSIS ===
Longest Sequential streak: 0
Longest Prefix streak:    92
Expected max streak:      12.5 (for 5934 trials)

=== WIN DISTRIBUTION ===
Sequential  :  (0)
Prefix      : ################################################## (5934)
Ties        : ### (402)

At this point, you might think I'm being unfair to the sequential method, since the comparison only counts the runs in which both methods found the target.

But that's not what I want to reflect. What I want to highlight is that when the prefix method wins (which happens in most cases), it wins in a big way, and it's ideal for searches where the goal isn't a guaranteed, 100% exhaustive result.

smracer
Donator
Legendary
Activity: 1065
Merit: 1040
July 01, 2025, 12:30:27 AM
#3

Say for puzzle 71 you started searching for the prefix 1PWo3JeB9jr. There are ~3,150 prefixes in the keyspace that start with 1PWo3JeB9jr, and let's say you started searching in the middle and found the first 1PWo3JeB9jr around 50%.

Break the puzzle-71 keyspace into 3,150 blocks of 750 quadrillion keys each.

What if you made approximately 1,570 (+750 quadrillion) jump points going up from the prefix 1PWo3JeB9jr and 1,570 (-750 quadrillion) jump points going down from it, and then searched all those points simultaneously from the midpoints outwards?
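
A toy sketch of that ordering with made-up small numbers (the real puzzle-71 span is astronomically larger):

Code:
# Visit blocks alternating above/below the midpoint, stepping outward.
NUM_BLOCKS = 9            # toy stand-in for the ~3,150 real blocks
mid = NUM_BLOCKS // 2
order = [mid]
for step in range(1, mid + 1):
    order += [mid + step, mid - step]
print(order)              # [4, 5, 3, 6, 2, 7, 1, 8, 0]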
mcdouglasx (OP)
Sr. Member
Activity: 770
Merit: 408
July 01, 2025, 01:22:18 AM
#4

Quote from: smracer on July 01, 2025, 12:30:27 AM
Say for puzzle 71 you started searching for the prefix 1PWo3JeB9jr. There are ~3,150 prefixes in the keyspace that start with 1PWo3JeB9jr, and let's say you started searching in the middle and found the first 1PWo3JeB9jr around 50%.

Break the puzzle-71 keyspace into 3,150 blocks of 750 quadrillion keys each.

What if you made approximately 1,570 (+750 quadrillion) jump points going up from the prefix 1PWo3JeB9jr and 1,570 (-750 quadrillion) jump points going down from it, and then searched all those points simultaneously from the midpoints outwards?


What I would do is divide the search range into 3,150 block subranges, organize them randomly, and look for the h160 prefix equivalent to the prefix in question (the hash160 that produces an address starting with "1PWo3JeB9jr"). I search the blocks sequentially, and each time I find a prefix match I save the remaining range of the block to a .txt and discard that block. In 60-80% of cases, you will never need to review the .txt. The worst case is when the target sits in a block that statistically should contain only one "1PWo3JeB9jr" match, and it happens to fall after a first, non-target match within the same block; only then do you have to go to the .txt. As you can see, a series of unlikely things must happen before you need the .txt, so the probability of finding the target with less effort than traditional sequential brute force is higher.

But the effort you would make if the worst-case scenario happened would, as a consequence, be equal to searching the entire range the traditional way.
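
A minimal sketch of that .txt bookkeeping (the file name and helpers are my own illustration):

Code:
def defer_range(start, end, path="skipped_ranges.txt"):
    # append the unscanned remainder of a discarded block, one range per line
    with open(path, "a") as f:
        f.write(f"{start:x},{end:x}\n")

def load_deferred(path="skipped_ranges.txt"):
    # read the saved ranges back for the (unlikely) second pass
    with open(path) as f:
        return [tuple(int(x, 16) for x in line.split(",")) for line in f]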

mcdouglasx (OP)
Sr. Member
Activity: 770
Merit: 408
July 05, 2025, 01:19:49 PM
Merited by vapourminer (1)
#5

I've added a script that simulates a real Bitcoin environment in a low bit range. It still reflects real-world probabilities, since the range size doesn't matter: the statistics stay the same as long as the following rule is used (for example, MIN_PREFIX = 3 gives block_size = 16**3 = 4096, which is what the script below uses):
Code:
MIN_PREFIX=any 
block_size=16**MIN_PREFIX


Code:
import random
import secp256k1 as ice
import concurrent.futures

START = 131071
END = 262143
BLOCK_SIZE = 4096
print("Block Size:",BLOCK_SIZE)
MIN_PREFIX = 3
MAX_WORKERS = 4
TOTAL_RUNS = 100

def generate_blocks(start, end, block_size):
    total_keys = end - start + 1
    num_blocks = (total_keys + block_size - 1) // block_size
    blocks = [
        (start + i * block_size,
         min(start + (i + 1) * block_size - 1, end))
        for i in range(num_blocks)
    ]
    random.shuffle(blocks)
    return blocks

def scan_block(b0, b1, target):
    full_pref = target[:MIN_PREFIX]
    half_pref = target[:MIN_PREFIX - 1]
    keys_tested = 0

    # Phase 1: scan until the target or a decoy 3-char prefix hit
    for key in range(b0, b1 + 1):
        keys_tested += 1
        addr = ice.privatekey_to_h160(0, 1, key).hex()
        if addr == target:
            return True, key, keys_tested
        if addr.startswith(full_pref):   # decoy hit: switch to phase 2
            next_key = key + 1
            break
    else:
        # no 3-char prefix hit at all: the block was scanned in full
        return False, None, keys_tested

    # Phase 2: keep scanning until a 2-char prefix hit, then abandon the block
    for key in range(next_key, b1 + 1):
        keys_tested += 1
        addr = ice.privatekey_to_h160(0, 1, key).hex()
        if addr == target:
            return True, key, keys_tested
        if addr.startswith(half_pref):
            break

    return False, None, keys_tested

def worker(block_chunk, target):
    local_count = 0
    for b0, b1 in block_chunk:
        found, key, used = scan_block(b0, b1, target)
        local_count += used
        if found:
            return key, local_count
    return None, local_count

def parallel_scan(blocks, target):
    chunks = [blocks[i::MAX_WORKERS] for i in range(MAX_WORKERS)]
    with concurrent.futures.ProcessPoolExecutor(max_workers=MAX_WORKERS) as executor:
        futures = [executor.submit(worker, chunk, target) for chunk in chunks]
        for future in concurrent.futures.as_completed(futures):
            key, used = future.result()
            if key is not None:
                for f in futures:
                    f.cancel()   # best-effort; already-running workers finish anyway
                return key, used
        return None, sum(f.result()[1] for f in futures if f.done())

if __name__ == "__main__":
    found_count = 0
    not_found_count = 0
    total_keys_tested = 0

    for run in range(1, TOTAL_RUNS + 1):
        target_key = random.randint(START, END)
        target = ice.privatekey_to_h160(0, 1, target_key).hex()

        blocks = generate_blocks(START, END, BLOCK_SIZE)
        key, used = parallel_scan(blocks, target)
        total_keys_tested += used

        if key is not None:
            found_count += 1
        else:
            not_found_count += 1

    total_range = (END - START + 1) * TOTAL_RUNS
    savings = 100 * (1 - total_keys_tested / total_range)

    print(f"\nOf {TOTAL_RUNS} runs, found: {found_count}, not found: {not_found_count}")
    print(f"Total keys tested: {total_keys_tested}")
    print(f"Percentage savings vs. full linear scan: {savings:.2f}%")

test 1
Code:
Block Size: 4096

Of 100 runs, found: 72, not found: 28
Total keys tested: 3320708
Percentage savings vs. full linear scan: 74.67%

test 2
Code:
Block Size: 4096

Of 100 runs, found: 61, not found: 39
Total keys tested: 4135859
Percentage savings vs. full linear scan: 68.45%


test 3

Code:
Block Size: 4096

Of 100 runs, found: 65, not found: 35
Total keys tested: 3805137
Percentage savings vs. full linear scan: 70.97%

test 4

Code:
Block Size: 4096

Of 100 runs, found: 59, not found: 41
Total keys tested: 4191999
Percentage savings vs. full linear scan: 68.02%


The iceland2k14 secp256k1 module is required:

https://github.com/iceland2k14/secp256k1

The library files must be in the same location where you saved this script. On some operating systems, the Visual Studio Redistributables must be installed for it to work.

You can adjust the number of tests via TOTAL_RUNS = 100.

Increasing the number of tests gives the same consistency in the results, since the statistics remain the same.


