Bitcoin Forum
Author Topic: BITCOIN PUZZLE: THE PREFIX DILEMMA  (Read 951 times)
fixedpaul
Member
Activity: 86
Merit: 27
November 18, 2025, 12:49:48 AM
 #21

Why use the prefix method with hash160? I could do the same using ripemd160(hash160), or even ripemd160(ripemd160(hash160)). I could also use 1,000,000 chained hash functions, right? Would it work just the same? The probability of a prefix match is always 1/16^L.

So there are infinitely many different ways to stop the search within the range to obtain a statistical advantage. But if all of them give me a statistical advantage and all of them are different, how can one be better than the others?

You could try implementing in your Python script what I would call the ultra-deep prefix method, because it goes deep into the hash functions to uncover patterns!

Or maybe we’ll simply discover that there are infinite stupid ways to generate a random number between 1 and 4096.
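fixedpaul's chained-hash point is easy to check empirically. A minimal sketch, with SHA-256 standing in for RIPEMD-160 for portability (hashlib's ripemd160 is missing from some OpenSSL builds; any uniform hash gives the same rate):

Code:
```python
import hashlib

def chained_hash(data: bytes, depth: int) -> str:
    """Apply the hash `depth` times; SHA-256 stands in for RIPEMD-160."""
    for _ in range(depth):
        data = hashlib.sha256(data).digest()
    return data.hex()

def prefix_rate(depth: int, prefix: str, trials: int = 200_000) -> float:
    """Fraction of inputs whose chained hash starts with `prefix`."""
    hits = sum(
        chained_hash(i.to_bytes(8, 'big'), depth).startswith(prefix)
        for i in range(trials)
    )
    return hits / trials

# However deep the chain, a 3-hex-char prefix matches ~1/4096 of the time.
for depth in (1, 2, 5):
    print(f"depth={depth}: {prefix_rate(depth, 'f6f'):.6f}")
print(f"theoretical: {1 / 16**3:.6f}")
```

The match rate does not depend on the chain depth, only on the prefix length L.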
mcdouglasx (OP)
Hero Member
Activity: 952
Merit: 532
November 18, 2025, 12:55:55 PM
 #22

Quote from: fixedpaul on November 18, 2025, 12:49:48 AM
Why use the prefix method with hash160? I could do the same using ripemd160(hash160), or even ripemd160(ripemd160(hash160)). I could also use 1,000,000 chained hash functions, right? Would it work just the same? The probability of a prefix match is always 1/16^L.

So there are infinitely many different ways to stop the search within the range to obtain a statistical advantage. But if all of them give me a statistical advantage and all of them are different, how can one be better than the others?

You could try implementing in your Python script what I would call the ultra-deep prefix method, because it goes deep into the hash functions to uncover patterns!

Or maybe we’ll simply discover that there are infinite stupid ways to generate a random number between 1 and 4096.

You're confusing 'infinite hash functions' with 'infinite useful strategies'. The hash160 prefix isn't arbitrary.

You're clinging to kTimesG's absurd logic; it's not the same thing. For this one reason alone: by using prefixes and stopping when you find the prefix, you're reducing, with complete certainty, the probability of finding that prefix in the rest of the block (you're looking for the exact prefix, not just any prefix). Stopping randomly, on the other hand, doesn't.

When prefixes find their signal, it's because there really is a key with the same prefix as the target. When randomness 'hits the mark,' it's pure coincidence. Prefixes provide information correlated with the target. Randomness provides meaningless noise.


kTimesG, I remind you that this is a technical area; it's not about comparing apples and oranges, but about getting to the point, understanding the thread, and refuting it mathematically. If you think it makes no sense to create biased code, you're just trying to invalidate the idea of prefixes without directly addressing the logic of the thread, not refute it. Your messages contain more than five fallacies commonly used in flat-Earth debates. Get to the point.
kTimesG
Full Member
Activity: 784
Merit: 242
November 18, 2025, 04:47:57 PM
Merited by NotFuzzyWarm (1)
 #23

Quote from: mcdouglasx on November 18, 2025, 12:55:55 PM
Get to the point.

Sure.

1. Did you miss the part where "parallelizing" your proposal cancels out the math illusion that scanning less than X% of whatever range would ever yield a larger than X% probability of success? I guess you did.

Truth: scanning X% of any range yields X% probability of success.

2. Did you miss the part where the 3 first bytes of any hash are, by design, a random value, and hence checking a prefix (any prefix) or whatever other set of whatever 24 bits of whatever random data you want (like the current hash, or a new hash, or a random hash, or the result of flipping a coin 24 times in a row), is the same identical operation, hence resulting in an identical effect?

Truth: A hash is uniform, hence equivalent to a random variable. They both exhibit the same properties; whatever properties you're investigating, it's the same on both of them.

3. Did you miss the part where target is checked BEFORE the prefix? Hence, whatever "shortcut" or imagined magnetization or pattern is long gone immediately, because, well, if the target matched, what the hell are we even talking about here? Everything related to your grand prefix-related magnetization idea happens post factum, so?!

Translation: zero relevance between prefix and target, since we're always in the case of "target not fully matched".

What kind of mathematical proof do you want? You've had around a hundred of those so far. Can't understand them? Well then, accept the empirical results: run the Scooby Doo version. Or check for a random hash, or a random prefix, or whatever you feel like doing. Why do you get the same awesome results that you are getting with prefixes? Because maybe you failed to understand that it's not the prefixes that magnetize anything whatsoever - it's simply statistical random noise doing its thing in all the cases.
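The "check a random value instead of the prefix" experiment suggested above can be sketched in a few lines. SHA-256 is used as the uniform hash, and 3 hex chars (12 bits) so matches are frequent enough to measure; both checks land on the same 1/4096 rate:

Code:
```python
import hashlib
import random

TRIALS = 500_000
target_prefix = 'f6f'          # 3 hex chars = 12 bits, so P(match) = 1/4096

random.seed(1)

# (a) check the prefix of a uniform hash against a fixed target prefix
hash_hits = sum(
    hashlib.sha256(i.to_bytes(8, 'big')).hexdigest().startswith(target_prefix)
    for i in range(TRIALS)
)

# (b) check 12 freshly drawn random bits against a fixed 12-bit value
rand_hits = sum(random.getrandbits(12) == 0xf6f for _ in range(TRIALS))

# Both rates hover around 1/4096 ~ 0.000244: the two checks are interchangeable.
print(hash_hits / TRIALS, rand_hits / TRIALS)
```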

First I gave you the benefit of the doubt, as clearly stated, just to make you understand that your illusion doesn't even work when trying to run it as two separate threads, let alone on 50,000 CUDA threads.

But then you became annoying, so I ended up proving that you're like a broken record: you state the same things on repeat, over and over again. And when you do that, what you get in return is exactly the same thing as well: check the results for yourself, because clearly:

1. You don't understand basic math.
2. You have no clues on basic concepts in algorithmic theory.

Yeah - this is the technical area. Welcome!

Off the grid, training pigeons to broadcast signed messages.
mcdouglasx (OP)
Hero Member
Activity: 952
Merit: 532
November 18, 2025, 05:15:41 PM
 #24

Quote from: kTimesG on November 18, 2025, 04:47:57 PM
1. Did you miss the part where "parallelizing" your proposal cancels out the math illusion that scanning less than X% of whatever range would ever yield a larger than X% probability of success? I guess you did.

You're confusing parallelization with statistics. The relationship 16^L ≈ N is hardware-independent; it works the same on CPUs, GPUs, or paper and pencil.

Quote from: kTimesG on November 18, 2025, 04:47:57 PM
2. Did you miss the part where the 3 first bytes of any hash are, by design, a random value, and hence checking a prefix (any prefix) or whatever other set of whatever 24 bits of whatever random data you want (like the current hash, or a new hash, or a random hash, or the result of flipping a coin 24 times in a row), is the same identical operation, hence resulting in an identical effect?

Truth: A hash is uniform, hence equivalent to a random variable. They both exhibit the same properties; whatever properties you're investigating, it's the same on both of them.

You're making a beginner's mistake, because you're confusing statistical uniformity with a lack of information.


Quote from: kTimesG on November 18, 2025, 04:47:57 PM
3. Did you miss the part where target is checked BEFORE the prefix? Hence, whatever "shortcut" or imagined magnetization or pattern is long gone immediately, because, well, if the target matched, what the hell are we even talking about here? Everything related to your grand prefix-related magnetization idea happens post factum, so?!

Translation: zero relevance between prefix and target, since we're always in the case of "target not fully matched".
...

How long will you continue to pretend that a blind search is the same as a cryptographically valid prefixed search?

Quote from: kTimesG on November 18, 2025, 04:47:57 PM
1. You don't understand basic math.
2. You have no clues on basic concepts in algorithmic theory.

More fallacies of contempt, which have nothing to do with a technical debate.



The point of the thread: Prove mathematically why the function P(N) = 1 - (1 - 1/16^L)^N does not have an optimum when N ≈ 16^L.

Because from what I see so far, you've only offered opinions, trying to invalidate my argument with fallacies or code designed to deceive. Your proofs don't invalidate the purpose of my thread; they only confirm it. That is not mathematics.

The debate ends here until you present real mathematics, not empty analogies.
kTimesG
Full Member
Activity: 784
Merit: 242
November 18, 2025, 06:20:05 PM
 #25

At this point you're so far off the reality-check, that I can only consider you're intentionally trolling, at best.

You have all the proofs of everything under your nose. Also not my problem at all you can't grasp basic computational theory. Or, actual common sense.

I'm giving up (again) and will just say what you want to hear the entire time: you are right. Good luck with the Nobel or whatever.

fixedpaul
Member
Activity: 86
Merit: 27
November 18, 2025, 06:42:51 PM
 #26

Quote from: mcdouglasx on November 18, 2025, 12:55:55 PM
You're confusing 'infinite hash functions' with 'infinite useful strategies'. The hash160 prefix isn't arbitrary.

You're clinging to Ktimesg's absurd logic; it's not the same thing. For this one reason only, by using prefixes and stopping when you find the hash, you're reducing the probability of finding the prefix in the block to less than one (you're looking for an exact prefix, not just any prefix), with complete certainty. Stopping randomly, on the other hand, doesn't.

When prefixes find their signal, it's because there really is a key with the same prefix as the target. When randomness 'hits the mark,' it's pure coincidence. Prefixes provide information correlated with the target. Randomness provides meaningless noise.

I think you didn’t understand what I meant, so I’ll rephrase it. Let’s talk only about mathematics and statistics, so we won’t consider computational time, only the number of checks I need to perform.

Let’s take puzzle 71, address 1PWo3JeB9jrGwfHDNpdGK54CRas7fsVzXU. The hash160 is:

f6f5431d25bbf7b12e8add9af5e3475c44a0a5b8

So you start scanning your range and stop when you find a prefix match, meaning a hash160 that starts with f6f. This gives you an advantage, right? If the prefix method works, yes.

Good, now let’s compute ripemd160(f6f5431d25bbf7b12e8add9af5e3475c44a0a5b8):

65804ce824a7c0d9b37ee97dc840a5dc2b20ce23

Now I change my target, which becomes
T = 65804ce824a7c0d9b37ee97dc840a5dc2b20ce23.
So for every hash160 in my range I compute ripemd160(hash160). If I find T, I’ve solved puzzle 71, correct? I hope we agree on this.

So at this point I could proceed using the prefix method: I split the range and start searching until I find a prefix match, meaning a ripemd160(hash160) that starts with 658.
We’re not looking for just any prefix; we’re looking for the TARGET's prefix.

If I understood correctly, it should work the same way, right? By stopping when I find a prefix match with 658, I increase my probabilities, is that correct?

If yes, why?
If not, why?
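fixedpaul's re-targeting step can be sketched in a few lines. SHA-256 stands in for the second RIPEMD-160 pass here (hashlib's ripemd160 depends on the OpenSSL build), so the digest printed will NOT equal his 65804c... value; the structure of the argument is unchanged:

Code:
```python
import hashlib

# Puzzle 71's hash160, from the thread.
h160_hex = "f6f5431d25bbf7b12e8add9af5e3475c44a0a5b8"

# Re-target: T = H(hash160). A key whose re-hashed hash160 equals T still
# solves the puzzle, and T[:3] becomes the new prefix filter.
T = hashlib.sha256(bytes.fromhex(h160_hex)).hexdigest()
new_prefix = T[:3]

def hits_new_target(candidate_h160_hex: str) -> bool:
    """Does this candidate's hash160, re-hashed, equal the new target T?"""
    return hashlib.sha256(bytes.fromhex(candidate_h160_hex)).hexdigest() == T

print(new_prefix)
print(hits_new_target(h160_hex))   # True by construction
```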
mcdouglasx (OP)
Hero Member
Activity: 952
Merit: 532
November 19, 2025, 02:08:51 AM
 #27

Quote from: fixedpaul on November 18, 2025, 06:42:51 PM
You're confusing 'infinite hash functions' with 'infinite useful strategies'. The hash160 prefix isn't arbitrary.

You're clinging to Ktimesg's absurd logic; it's not the same thing. For this one reason only, by using prefixes and stopping when you find the hash, you're reducing the probability of finding the prefix in the block to less than one (you're looking for an exact prefix, not just any prefix), with complete certainty. Stopping randomly, on the other hand, doesn't.

When prefixes find their signal, it's because there really is a key with the same prefix as the target. When randomness 'hits the mark,' it's pure coincidence. Prefixes provide information correlated with the target. Randomness provides meaningless noise.

I think you didn’t understand what I meant, so I’ll rephrase it. Let’s talk only about mathematics and statistics, so we won’t consider computational time, only the number of checks I need to perform.

Let’s take puzzle 71, address 1PWo3JeB9jrGwfHDNpdGK54CRas7fsVzXU. The hash160 is:

f6f5431d25bbf7b12e8add9af5e3475c44a0a5b8

So you start scanning your range and stop when you find a prefix match, meaning a hash160 that starts with f6f. This gives you an advantage, right? If the prefix method works, yes.

Good, now let’s compute ripemd160(f6f5431d25bbf7b12e8add9af5e3475c44a0a5b8):

65804ce824a7c0d9b37ee97dc840a5dc2b20ce23

Now I change my target, which becomes
T = 65804ce824a7c0d9b37ee97dc840a5dc2b20ce23.
So for every hash160 in my range I compute ripemd160(hash160). If I find T, I’ve solved puzzle 71, correct? I hope we agree on this.

So at this point I could proceed using the prefix method: I split the range and start searching until I find a prefix match, meaning a ripemd160(hash160) that starts with 658.
We’re not looking for just any prefix; we’re looking for the TARGET's prefix.

If I understood correctly, it should work the same way, right? By stopping when I find a prefix match with 658, I increase my probabilities, is that correct?

If yes, why?
If not, why?

If you intend to search for the T2 hash in the key space of bit 71 by following the normal Bitcoin address generation process, without doing double work, you're looking for a different address. Conversely, if you apply RIPEMD-160 again for each key, searching for the T2 hash, it yields the same result. However, the difference between the three methods is this: kTimesG's method is a blind search; prefixes are a search based on information related to the objective; and although yours also follows the same logic, it suffers from unnecessary resource expenditure.

If I had to compare what's been presented here, I would say that:

kTimesG:
It doesn't have a valid justification for discarding part of the work; it only does it randomly. It's an unfounded sample.

My prefix method: it's the most efficient because it obtains the maximum informational value (the valid justification for discarding) with the same efficiency as the random method, making it the best cryptographic strategy when searching for a unique hash.

Your proposal: It's an acceptable method; the prefix is still a valid filter. However, the expense of calculating an additional hash in each iteration makes it inefficient compared to the other options.

But you don't increase your odds by doing a double hash; you increase the computational work, because you're just adding one step of computational difficulty. Doing the hash twice in each iteration is the same as doing it once: statistically the ratio remains at 1/4096, so you would be paying double for the same statistical benefit.
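The "paying double" claim is about cost per candidate, not match rate. A rough timing sketch, again with SHA-256 as the stand-in hash:

Code:
```python
import hashlib
import time

def scan_time(depth: int, n: int = 200_000) -> float:
    """Seconds to process n candidates with `depth` chained hash passes each."""
    start = time.perf_counter()
    for i in range(n):
        d = i.to_bytes(8, 'big')
        for _ in range(depth):
            d = hashlib.sha256(d).digest()
    return time.perf_counter() - start

t_single, t_double = scan_time(1), scan_time(2)
# Same 1/4096 prefix-match rate either way, but roughly twice the hash work.
print(f"single pass: {t_single:.3f}s, double pass: {t_double:.3f}s")
```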
kTimesG
Full Member
Activity: 784
Merit: 242
November 19, 2025, 08:50:37 AM
Last edit: November 19, 2025, 01:21:27 PM by kTimesG
 #28

BITCOIN PUZZLE: THE POSTFIX DILEMMA

Abstract

1. This is the only rational cryptographically data-driven search that one can ever use.
2. This is the most efficient option one would use when he only has a couple CPUs or GPUs at disposal.
3. The math is unbeatable: the equation itself cannot be attacked, so don't bother.
4. Hardware-independent relationships ensure maximum probability.

Many thanks to mcdouglasx for his Prefix Search work.

Based on his groundbreaking work and thorough explanations over all the past debates, I have managed to evolve an alternative algorithm that does not suffer from any of the previous problems, such as:

- blindly discarding based on random values, instead of correct statistical correlation
- unnecessary alterations such as changing the user-defined block order: because obviously, such changes modify the test conditions, so they are definitely a big no-no

Important note: if you have ANYTHING to debate over this proposal, first make sure you present a solid proof on why the hardware-independent probabilistic equation does not hold. OK? Thanks.

ANYTHING ELSE THAT RELATES TO ATTACKING THIS METHOD should be directly addressed to the mastermind of the original method, which is: the Prefix Search method. Because ANYTHING that seems unfit to you about the Postfix Search is also an identical issue in the Prefix Search.

If you are a simple user, and are confused about which method to choose:

a) the Prefix Method
b) the Postfix Method (identical results as the Prefix Method)

do NOT worry: I can also present another comb(160, 24) - 2 "Middlefix Methods" that have 100% the same exact benefits as these two methods. All of them are the superior, rational, most efficient methods!
That's 20767445112335933694764617900 different separate methods, and each of them is the best one and the only one that should be ever used!

Main proposal

----------


I've decided to provide a detailed explanation of why, for "lucky hunters" the best option right now is postfix searches, as they offer a significant advantage when luck is involved.

It was already concluded in this thread that in an exhaustive search comparing sequential searches versus prefix searches, there are no notable statistical advantages. However, this often confuses "treasure hunters" since most of them don't have access to GPU or CPU farms, making this comparison useless in home use. They need to change their mindset from needing to find the key at all costs when trying their luck, since exhaustively searching the entire space is pointless in both cases. With the limited resources of home computing, it's theoretically impossible to scan the entire range.

This is where probabilistic search strategy comes in to increase your chances of success, since the postfix strategy is faster and consumes fewer resources, statistically increasing your probability of success in this "lottery".

If your hardware is limited to a home PC or a couple of GPUs, the goal cannot be "brute force" or "sequential search". Your goal must be probabilistic efficiency.
The point is not the feasibility of finding the solution, but rather that between trying sequentially versus using postfixes, postfixes are statistically superior and are THE ONLY RATIONAL OPTION.

Focusing on a limited space like 71 bits, it would take home hardware many years to cover the entire range. "Sequential" searching (checking the entire range) is therefore a guaranteed path to failure due to lack of time.

The puzzle finder must maximize the number of "lucky hits" per unit of time and computation.

The Optimal Postfix-Block Relationship

The essence of the postfix search strategy lies not in magic numbers, but in maintaining an optimal probabilistic relationship between two key variables:

The Critical Variables:

Postfix Length (L): Number of hexadecimal characters of the target hash used as a filter
Block Size (N): Number of private keys we search for in each segment

The Key Equation:

16^L ≈ N

Where:

16^L represents the total space of possible postfixes (16 hexadecimal characters per position)
N is the number of keys in each block

Why Is This Relationship Optimal?

• If 16^L ≫ N: Too many possible postfixes, low probability of finding matches → you lose efficiency
• If 16^L ≪ N: Too many postfix matches, high risk of missing the target block → you lose accuracy

The Ideal Probability:

When 16^L ≈ N, the probability that at least one key in the block has the target postfix is approximately:

P(match) ≈ 1 - (1 - 1/16^L)^N ≈ 63.2%

This is the optimal range where the method reaches its maximum efficiency.
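The "lose accuracy" side of that trade-off can be estimated directly: if the target sits at a uniform position inside its block, how often does a decoy postfix match fire first and cause the block to be skipped? A minimal simulation, assuming uniform hashes and L = 3, N = 4096:

Code:
```python
import math
import random

random.seed(7)
N = 4096                # block size
p = 1 / 4096            # per-key chance of a decoy postfix match
TRIALS = 100_000

misses = 0
for _ in range(TRIALS):
    k = random.randrange(N)   # target's position within its block
    # Position of the first decoy match, drawn from a geometric distribution.
    u = 1.0 - random.random()               # in (0, 1], avoids log(0)
    first_decoy = math.floor(math.log(u) / math.log(1 - p))
    if first_decoy < k:       # decoy fires before the target: block skipped
        misses += 1

# ~0.368: with 16^L = N, roughly 1/e of target-containing blocks get skipped.
print(misses / TRIALS)
```

This ~37% skip rate is consistent with the script's own output below, where only 63,761 of 100,000 runs had both methods find the target.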



test script

Code:
import hashlib
import random
import time
import math
import statistics
import scipy.stats as stats
import statsmodels.stats.power as smp
from math import ceil

# Configuration
TOTAL_SIZE = 100_000
RANGE_SIZE = 4_096
POSTFIX_LENGTH = 3
SIMULATIONS = 100000

SECP256K1_ORDER = int("0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141", 16)

print(f"""
=== Configuration ===
Total numbers: {TOTAL_SIZE:,}
Block size: {RANGE_SIZE:,}
Total blocks needed: {ceil(TOTAL_SIZE/RANGE_SIZE)}
Postfix: {POSTFIX_LENGTH} characters (16^{POSTFIX_LENGTH} = {16**POSTFIX_LENGTH:,} combinations)
Simulations: {SIMULATIONS}
secp256k1 order: {SECP256K1_ORDER}
""")

def generate_h160(data):
    h = hashlib.new('ripemd160', str(data).encode('utf-8'))
    return h.hexdigest()

def shuffled_blck(total_blocks):
    blocks = list(range(total_blocks))
    random.shuffle(blocks)
    return blocks

def sequential_search(dataset, block_size, target_hash, block_order):
    checks = 0
    for block_idx in block_order:
        start = block_idx * block_size
        end = min(start + block_size, len(dataset))
        for i in range(start, end):
            checks += 1
            if generate_h160(dataset[i]) == target_hash:
                return {"checks": checks, "found": True, "index": i}
    return {"checks": checks, "found": False}

def postfix_search(dataset, block_size, postfix_len, target_hash, block_order):
    postfix = target_hash[-postfix_len:]
    checks = 0
    
    for block_idx in block_order:
        start = block_idx * block_size
        end = min(start + block_size, len(dataset))
        found_postfix = False
        
        for i in range(start, end):
            checks += 1
            current_hash = generate_h160(dataset[i])
            
            if current_hash == target_hash:
                return {"checks": checks, "found": True, "index": i}
            
            if not found_postfix and current_hash.endswith(postfix):
                found_postfix = True
                break
    
    return {"checks": checks, "found": False}

def comp_cohens_d(list1, list2):
    if len(list1) < 2 or len(list2) < 2:
        return float('nan')
    n1, n2 = len(list1), len(list2)
    m1, m2 = statistics.mean(list1), statistics.mean(list2)
    s1, s2 = statistics.stdev(list1), statistics.stdev(list2)
    
    pooled_std = math.sqrt(((n1-1)*s1**2 + (n2-1)*s2**2) / (n1+n2-2))
    if pooled_std == 0:
        return float('nan')
    return (m1 - m2) / pooled_std

def coeff_variation(data):
    if not data or statistics.mean(data) == 0:
        return float('nan')
    return (statistics.stdev(data) / statistics.mean(data)) * 100

def longest_streak(outcomes, letter):
    max_streak = current = 0
    for o in outcomes:
        current = current + 1 if o == letter else 0
        max_streak = max(max_streak, current)
    return max_streak

def ascii_bar(label, value, max_value, bar_length=50):
    bar_count = int((value / max_value) * bar_length) if max_value > 0 else 0
    return f"{label:12}: {'#' * bar_count} ({value})"

def conf_interval(data, confidence=0.95):
    if len(data) < 2:
        return (0, 0)
    try:
        return stats.t.interval(
            confidence=confidence,
            df=len(data)-1,
            loc=statistics.mean(data),
            scale=stats.sem(data)
        )
    except:
        return (statistics.mean(data), statistics.mean(data))
    
def statistical_analysis(seq_checks, pre_checks):
    analysis = {}
    
    analysis['seq_mean'] = statistics.mean(seq_checks) if seq_checks else 0
    analysis['pre_mean'] = statistics.mean(pre_checks) if pre_checks else 0
    analysis['seq_ci'] = conf_interval(seq_checks)
    analysis['pre_ci'] = conf_interval(pre_checks)
    
    if len(seq_checks) > 1 and len(pre_checks) > 1:
        analysis['t_test'] = stats.ttest_ind(seq_checks, pre_checks, equal_var=False)
        analysis['mann_whitney'] = stats.mannwhitneyu(seq_checks, pre_checks)
        analysis['cohen_d'] = comp_cohens_d(seq_checks, pre_checks)
        
        effect_size = abs(analysis['cohen_d'])
        if effect_size > 0:
            analysis['power'] = smp.tt_ind_solve_power(
                effect_size=effect_size,
                nobs1=len(seq_checks),
                alpha=0.05,
                ratio=len(pre_checks)/len(seq_checks)
            )
        else:
            analysis['power'] = 0
    else:
        analysis['t_test'] = None
        analysis['mann_whitney'] = None
        analysis['cohen_d'] = 0
        analysis['power'] = 0
    
    return analysis

def compare_methods():
    results = {
        "sequential": {"wins": 0, "checks": [], "times": []},
        "postfix": {"wins": 0, "checks": [], "times": []},
        "ties": 0,
        "both_failed": 0
    }
    outcome_history = []
    total_blocks = ceil(TOTAL_SIZE / RANGE_SIZE)
    valid_cases = 0

    for _ in range(SIMULATIONS):
        max_offset = SECP256K1_ORDER - TOTAL_SIZE - 1
        offset = random.randint(0, max_offset)
        dataset = [offset + i for i in range(TOTAL_SIZE)]
        target_num = random.choice(dataset)
        target_hash = generate_h160(target_num)
        block_order = shuffled_blck(total_blocks)

        start = time.perf_counter()
        seq_res = sequential_search(dataset, RANGE_SIZE, target_hash, block_order)
        seq_time = time.perf_counter() - start

        start = time.perf_counter()
        pre_res = postfix_search(dataset, RANGE_SIZE, POSTFIX_LENGTH, target_hash, block_order)
        pre_time = time.perf_counter() - start

        if seq_res["found"] and pre_res["found"]:
            valid_cases += 1
            results["sequential"]["checks"].append(seq_res["checks"])
            results["postfix"]["checks"].append(pre_res["checks"])
            results["sequential"]["times"].append(seq_time)
            results["postfix"]["times"].append(pre_time)
            
            if seq_res["checks"] < pre_res["checks"]:
                results["sequential"]["wins"] += 1
                outcome_history.append("S")
            elif pre_res["checks"] < seq_res["checks"]:
                results["postfix"]["wins"] += 1
                outcome_history.append("P")
            else:
                results["ties"] += 1
                outcome_history.append("T")
        elif not seq_res["found"] and not pre_res["found"]:
            results["both_failed"] += 1
        else:
            continue

    def get_stats(data):
        if not data:
            return {"mean": 0, "min": 0, "max": 0, "median": 0, "stdev": 0}
        return {
            "mean": statistics.mean(data),
            "min": min(data),
            "max": max(data),
            "median": statistics.median(data),
            "stdev": statistics.stdev(data) if len(data) > 1 else 0
        }

    seq_stats = get_stats(results["sequential"]["checks"])
    pre_stats = get_stats(results["postfix"]["checks"])
    seq_time_stats = get_stats(results["sequential"]["times"])
    pre_time_stats = get_stats(results["postfix"]["times"])

    total_comparisons = results["sequential"]["wins"] + results["postfix"]["wins"] + results["ties"]
    seq_win_rate = results["sequential"]["wins"] / total_comparisons if total_comparisons > 0 else 0
    pre_win_rate = results["postfix"]["wins"] / total_comparisons if total_comparisons > 0 else 0

    cv_seq = coeff_variation(results["sequential"]["checks"])
    cv_pre = coeff_variation(results["postfix"]["checks"])

    stats_analysis = statistical_analysis(
        seq_checks=results["sequential"]["checks"],
        pre_checks=results["postfix"]["checks"]
    )

    print(f"""
=== FINAL ANALYSIS ===
Valid cases (both found target): {valid_cases}/{SIMULATIONS}

[Performance Metrics]
               | Sequential          | Postfix
---------------+---------------------+--------------------
Checks (mean)  | {seq_stats['mean']:>12,.1f} ± {seq_stats['stdev']:,.1f} | {pre_stats['mean']:>12,.1f} ± {pre_stats['stdev']:,.1f}
Time (mean ms) | {seq_time_stats['mean']*1000:>12.2f} ± {seq_time_stats['stdev']*1000:.2f} | {pre_time_stats['mean']*1000:>12.2f} ± {pre_time_stats['stdev']*1000:.2f}
Min checks     | {seq_stats['min']:>12,} | {pre_stats['min']:>12,}
Max checks     | {seq_stats['max']:>12,} | {pre_stats['max']:>12,}
Coef. Variation| {cv_seq:>11.1f}% | {cv_pre:>11.1f}%

[Comparison Results]
Sequential wins: {results['sequential']['wins']} ({seq_win_rate:.1%})
Postfix wins:    {results['postfix']['wins']} ({pre_win_rate:.1%})
Ties:          {results['ties']}
Both failed:    {results['both_failed']}

=== STATISTICAL ANALYSIS ===

[Confidence Intervals]
Checks Sequential: {seq_stats['mean']:.1f} ({stats_analysis['seq_ci'][0]:.1f} - {stats_analysis['seq_ci'][1]:.1f})
Checks Postfix:    {pre_stats['mean']:.1f} ({stats_analysis['pre_ci'][0]:.1f} - {stats_analysis['pre_ci'][1]:.1f})

[Statistical Tests]
Welch's t-test: {'t = %.3f, p = %.4f' % (stats_analysis['t_test'].statistic, stats_analysis['t_test'].pvalue) if stats_analysis['t_test'] else 'N/A'}
Mann-Whitney U: {'U = %.1f, p = %.4f' % (stats_analysis['mann_whitney'].statistic, stats_analysis['mann_whitney'].pvalue) if stats_analysis['mann_whitney'] else 'N/A'}
Effect Size (Cohen's d): {stats_analysis['cohen_d']:.3f}

[Power Analysis]
Statistical Power: {stats_analysis['power']:.1%}
""")

    if outcome_history:
        non_tie_outcomes = [o for o in outcome_history if o != "T"]
        streak_analysis = f"""
=== STREAK ANALYSIS ===
Longest Sequential streak: {longest_streak(outcome_history, 'S')}
Longest Postfix streak:    {longest_streak(outcome_history, 'P')}
Expected max streak:      {math.log(len(non_tie_outcomes), 2):.1f} (for {len(non_tie_outcomes)} trials)
"""
        print(streak_analysis)

        max_wins = max(results["sequential"]["wins"], results["postfix"]["wins"], results["ties"])
        print("=== WIN DISTRIBUTION ===")
        print(ascii_bar("Sequential", results["sequential"]["wins"], max_wins))
        print(ascii_bar("Postfix", results["postfix"]["wins"], max_wins))
        print(ascii_bar("Ties", results["ties"], max_wins))

if __name__ == '__main__':
    compare_methods()

results

Code:
=== FINAL ANALYSIS ===
Valid cases (both found target): 63761/100000

[Performance Metrics]
               | Sequential          | Postfix
---------------+---------------------+--------------------
Checks (mean)  |     49,583.6 ± 28,856.7 |     32,119.8 ± 19,013.3
Time (mean ms) |       118.70 ± 70.17 |        79.24 ± 47.81
Min checks     |            1 |            1
Max checks     |       99,997 |       84,519
Coef. Variation|        58.2% |        59.2%

[Comparison Results]
Sequential wins: 0 (0.0%)
Postfix wins:    59696 (93.6%)
Ties:          4065
Both failed:    0

=== STATISTICAL ANALYSIS ===

[Confidence Intervals]
Checks Sequential: 49583.6 (49359.6 - 49807.6)
Checks Postfix:    32119.8 (31972.2 - 32267.4)

[Statistical Tests]
Welch's t-test: t = 127.607, p = 0.0000
Mann-Whitney U: U = 2742363164.5, p = 0.0000
Effect Size (Cohen's d): 0.715

[Power Analysis]
Statistical Power: 100.0%


=== STREAK ANALYSIS ===
Longest Sequential streak: 0
Longest Postfix streak:    148
Expected max streak:      15.9 (for 59696 trials)

=== WIN DISTRIBUTION ===
Sequential  :  (0)
Postfix      : ################################################## (59696)
Ties        : ### (4065)

interpretation of the results

The analysis of 100,000 simulations is conclusive: Postfix is 35.3% more efficient than Sequential for each attempt.

Postfix Search sacrifices reliability for speed.

Loss Minimization: Most of the computation time in Sequential Search is wasted checking thousands of numbers in an incorrect block. Postfix Search avoids this by using a probability of 1 in 4,096 (for 3 characters) as a skip mechanism.

Advantage Accumulation: By quickly skipping incorrect blocks, the method accumulates a massive savings in checks (approximately 17,500 per successful attempt). This maximizes the number of "lucky" attempts your hardware can make.

Result: Total Dominance: The effort reduction is so significant that Postfix Search wins 93.6% of the time in a direct efficiency comparison.

If you don't own a GPU farm, you have no other rational option.

Sequential search is a failed brute-force strategy that exhausts your resources on useless ranges.

Postfix search is an optimized luck strategy that gives you a 93.6% probability of making the most efficient use of your computation time.

You must accept the risk (approximately 36% failure per partial attempt due to the "jump") in exchange for the possibility of success (approximately 64%) with minimal effort. This is the only way to tackle the vastness of the keyspace with limited resources.

You sacrifice: 100% theoretical coverage (impossible to achieve anyway)

You gain: 35% greater efficiency, 50% more attempts, 93% higher probability of success per resource invested.

In a search where time is the scarcest resource, sacrificing a theoretical guarantee for a practical advantage of 35% is not an option... it is an obligation.

In short, postfixes influence probabilities by speeding up the search process, allowing for more attempts per unit of time. They don't change the probability of an individual attempt, but they do increase the overall probability of success by increasing the frequency of attempts. It's the rational strategy for those who rely on luck!
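The skip mechanism described above can be sanity-checked with a small Monte Carlo sketch. This is a toy model of my own, not the benchmark script used for the numbers above: each non-target key triggers a spurious prefix match with independent probability 1/n, and the block is abandoned on the first such match. A scaled-down block of n = 256 keys keeps it fast; the win rate still lands near 1 − 1/e ≈ 63.2%, the figure quoted throughout the thread.

```python
import random

def run_block(n=256, rng=random):
    """One block of n keys, target at a uniform position in [0, n).

    Scan in order: reaching the target is a win; before that, each key
    triggers a spurious prefix match with probability 1/n, which aborts
    the block (the 'skip'). Returns (won, keys_checked).
    """
    target = rng.randrange(n)
    p = 1 / n
    for i in range(n):
        if i == target:
            return True, i + 1       # target found before any spurious match
        if rng.random() < p:
            return False, i + 1      # spurious match: block abandoned
    return False, n                  # not reached: target is always in-range

rng = random.Random(42)
trials = 50_000
wins = checks = 0
for _ in range(trials):
    won, c = run_block(rng=rng)
    wins += won
    checks += c
win_rate = wins / trials
mean_checks = checks / trials
print(f"win rate ≈ {win_rate:.3f} (theory: 1 - 1/e ≈ 0.632)")
print(f"mean checks per block ≈ {mean_checks:.1f}")
```

The win rate matches the ~63% "statistical potential" per block; whether that constitutes an advantage over scanning the block to the end is exactly what the rest of the thread disputes.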

Off the grid, training pigeons to broadcast signed messages.
fixedpaul
Member
**
Offline Offline

Activity: 86
Merit: 27


View Profile WWW
November 19, 2025, 10:23:51 AM
Last edit: November 19, 2025, 11:19:57 AM by fixedpaul
 #29



Your proposal: It's an acceptable method; the prefix is still a valid filter. However, the expense of calculating an additional hash in each iteration makes it inefficient compared to the other options.

But you don't increase your odds by doing a double hash; you only increase the computational work. Adding a hash adds one step of computational difficulty, and doing the hash twice in each iteration is statistically the same as doing it once: the ratio remains 1/4096, so you would be paying double for the same statistical benefit.

Certainly, increasing the number of computed hashes obviously adds computational cost. But here we’re only talking about the statistical advantage based on the number of steps, so let’s assume that the computational cost of computing RIPEMD-160 is negligible.

As you said, you agree that using the prefix method with T2, that is, ripemd160(hash160) gives me the same statistical advantage.

Therefore, we can extend the same reasoning to T3: ripemd160(ripemd160(hash160)), and also to T4, and so on up to Tn. So the prefix method works with any composition of n RIPEMD-160 functions that I want to use, as long as I look at the prefix of the Tn-th target.

If each of these methods gives me a statistical advantage, it’s also true that before each range I could choose n randomly between 1 and 4096. So before analyzing each range, I choose a random number between 1 and 4096, and then I use my prefix method with Tn, and this gives me the same statistical advantage. Again, assuming we’re neglecting computational time.

Doesn’t this result seem a bit absurd? If I randomly choose a strategy for the prefix method using Tn, then I get a statistical advantage. What naturally comes to mind is: how is it possible that each of these strategies gives me a statistical advantage? Which one is statistically the best? Are they all equivalent? (Again, we’re ignoring computational time.)

If you think about it, choosing n at random for the prefix method with Tn simply means the search stops at a random point in the range, yet supposedly still gives me a statistical advantage, precisely because the strategy was chosen at random among strategies that should all give that advantage. The stopping point would be random and uniformly distributed, right? After all, n is generated uniformly at random, and RIPEMD-160 returns uniformly distributed hashes. So the search in the range stops at some random number k, and by definition of the uniform distribution, k can be any value with the same probability.

So maybe, I could simply generate a random number k between 1 and 4096 before starting to scan the range, and stop exactly after scanning k values in the range. Wouldn’t I get the exact same statistical result? If not, why?

I’m actually wondering whether you’ve ever asked yourself why your script gives the same results whether you choose a random number at which to stop the search, or you look at any prefix instead of using the target’s prefix.
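The question above, whether a prefix match is statistically distinguishable from a pre-chosen random stopping point, can be tested directly. The following sketch is mine, not from either script in the thread: it uses plain SHA-256 as a stand-in for ripemd160(sha256(K*G)) and a 2-hex-character prefix (p = 1/256) to keep the runtime small, and compares the number of keys scanned before the first real prefix match against a plain coin-flip stopper. Both follow the same geometric law with the same mean.

```python
import hashlib
import random

def stop_via_hash(start, prefix="ab"):
    """Scan keys upward from `start`; stop at the first key whose
    SHA-256 hex digest begins with `prefix`. Returns keys scanned."""
    k = start
    while True:
        if hashlib.sha256(k.to_bytes(16, "big")).hexdigest().startswith(prefix):
            return k - start + 1
        k += 1

def stop_via_coin(p=1 / 256, rng=random):
    """Stop by flipping a p-coin per key: the 'random number k' strategy."""
    n = 1
    while rng.random() >= p:
        n += 1
    return n

rng = random.Random(7)
trials = 2000
hash_mean = sum(stop_via_hash(rng.randrange(1 << 62)) for _ in range(trials)) / trials
coin_mean = sum(stop_via_coin(rng=rng) for _ in range(trials)) / trials
print(hash_mean, coin_mean)   # both means sit near 256: same geometric law
```

That the two stoppers are statistically indistinguishable is the point of the question: the hash-prefix stopper is an elaborate way of drawing the same geometric random variable.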
mcdouglasx (OP)
Hero Member
*****
Offline Offline

Activity: 952
Merit: 532



View Profile WWW
November 19, 2025, 02:12:46 PM
 #30

snip...

You finally admit that my mathematical theory is correct. Your 'Postfix Search' is simply my Prefix Search with a different but mathematically equivalent verification.

The equation 16^L ≈ N, the 63.2% probability, the 35.3% advantage, I originally proved all of this. Now you're just copying it and renaming it.

If postfixes work, which I haven't tested, but I'll trust your data, it's because my original mathematical theory is sound, not because you've discovered something new. Changing "prefixes" to "postfixes" to make it seem "new" is the same concept with identical implementation.

It's the most epic admission of defeat imaginable!



So maybe, I could simply generate a random number k between 1 and 4096 before starting to scan the range, and stop exactly after scanning k values in the range. Wouldn’t I get the exact same statistical result? If not, why?

I’m actually wondering whether you’ve ever asked yourself why your script gives the same results whether you choose a random number at which to stop the search, or you look at any prefix instead of using the target’s prefix.

With randomness, the decision to discard a block is not tied to the cryptographic reality of the target, but to a random value.

This leads to an unacceptable risk:

Risk with Prefixes: If the Prefixes method fails on a block, you know it's because the key wasn't there (or the probability of it being there is minuscule, tied to 36.8%). The work you discard wasn't the one you needed.

Risk with Randomness: There will be times when the random number generator tells you to "skip" and discard the work right on the block that contained the target private key, throwing "all the work to the wind." You would lose the key you had already generated, just because of a bad roll of the dice.

In a block search where success is based on certainty (cryptography), you can never allow pure chance to make a decision that costs you the victory.

For a cryptographically rigorous method, there must be verifiable certainty to justify such a crucial action as discarding billions of checks. Random stopping lacks this rigor.

Therefore, you are falling into ktimesg's trap. Demonstrating that the average number of checks is similar to the number of prefixes leads people to believe that randomness is a valid substitute.
kTimesG
Full Member
***
Offline Offline

Activity: 784
Merit: 242


View Profile
November 19, 2025, 02:23:12 PM
 #31

You finally admit that my mathematical theory is correct. Your 'Postfix Search' is simply my Prefix Search with a different but mathematically equivalent verification.

The equation 16^L ≈ N, the 63.2% probability, the 35.3% advantage, I originally proved all of this. Now you're just copying it and renaming it.

If postfixes work, which I haven't tested, but I'll trust your data, it's because my original mathematical theory is sound, not because you've discovered something new. Changing "prefixes" to "postfixes" to make it seem "new" is the same concept with identical implementation.

It's the most epic admission of defeat imaginable!

Yes! As I said: you were right! But is it really an identical implementation? Postfix Search checks a different set of bits!

So, to settle this once and for all, which one is better? Prefix Search, or Postfix Search? Which one is the ultimate best single-option rational choice to use when we have limited resources and we want the 35.3% statistical edge? Because they definitely skip different keys, so I'm confused on whether I should go with your version, my version, or one of the other astronomically-large number of equivalent variants! Which one of these has the statistical edge over the others? My brain hurts, help!

Waiting for your suggestion. Regards.

Off the grid, training pigeons to broadcast signed messages.
mcdouglasx (OP)
Hero Member
*****
Offline Offline

Activity: 952
Merit: 532



View Profile WWW
November 19, 2025, 03:02:59 PM
 #32

You finally admit that my mathematical theory is correct. Your 'Postfix Search' is simply my Prefix Search with a different but mathematically equivalent verification.

The equation 16^L ≈ N, the 63.2% probability, the 35.3% advantage, I originally proved all of this. Now you're just copying it and renaming it.

If postfixes work, which I haven't tested, but I'll trust your data, it's because my original mathematical theory is sound, not because you've discovered something new. Changing "prefixes" to "postfixes" to make it seem "new" is the same concept with identical implementation.

It's the most epic admission of defeat imaginable!

Yes! As I said: you were right! But is it really an identical implementation? Postfix Search checks a different set of bits!

So, to settle this once and for all, which one is better? Prefix Search, or Postfix Search? Which one is the ultimate best single-option rational choice to use when we have limited resources and we want the 35.3% statistical edge? Because they definitely skip different keys, so I'm confused on whether I should go with your version, my version, or one of the other astronomically-large number of equivalent variants! Which one of these has the statistical edge over the others? My brain hurts, help!

Waiting for your suggestion. Regards.

Hahaha, I see you're trying to sow confusion with absurd (technically equivalent) variants where none exists. Even though I used a prefix for my method, the conclusion is the same whether you use the end or the middle, statistically speaking. Your intention is to invalidate the word "prefixes" to overshadow my work.

What a sweeping technical and mathematical move you're presenting!

Anyway, even if you try to invalidate the label by calling it whatever you want, you're accepting that the directed sampling strategy is superior, and all its filter variants (prefix, postfix, etc.) are technically equivalent in terms of statistical benefit. Random sampling, while efficient, poses a cryptographic risk that, in a serious search, is neither tolerable, logical, nor intelligent, since you can use a directed filter to be certain.
fixedpaul
Member
**
Offline Offline

Activity: 86
Merit: 27


View Profile WWW
November 19, 2025, 03:08:59 PM
 #33


Therefore, you are falling into ktimesg's trap. Demonstrating that the average number of checks is similar to the number of prefixes leads people to believe that randomness is a valid substitute.

There is no trap, and we’re not trying to tell you that skipping keys randomly is better than any other method, only that no method can increase your probabilities. I wanted to try, for the umpteenth time, to make you understand that this idea is absurd, but you refuse to get it.

There is no better way to search for a uniformly distributed random number K, BY DEFINITION, precisely because every number has the same probability of appearing. Stop. If you understand this simple statement, you should already realize that a better method cannot exist.

And it makes no sense to think that a number K has to be far away from some number A just because ripemd160(sha256(K*G)) starts with the same L bits as ripemd160(sha256(A*G)).

You probably don’t even realize how huge the claim you’re making actually is, because if it were true, it would deserve a Nobel Prize. You are essentially claiming that, by using cryptographic functions, you can transform a uniform distribution into a non-uniform one; that through these functions you can extract information telling you that a number may appear with higher or lower probability.
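The uniformity claim is easy to probe empirically. In this sketch (my own, over a toy range, using plain SHA-256 as a stand-in for the puzzle's hash chain and a 2-hex-character prefix), we collect every key whose digest shares the target's prefix and look at how the matches spread across the range: they land evenly in every quarter, i.e. sharing a prefix with the target carries no information about where a key sits.

```python
import hashlib

def h(k):
    return hashlib.sha256(k.to_bytes(16, "big")).hexdigest()

N = 1 << 18                 # a small toy range
target = 123_456
prefix = h(target)[:2]      # 2 hex chars: match probability 1/256

# All positions whose hash shares the target's prefix
matches = [k for k in range(N) if h(k).startswith(prefix)]

# If the prefix carried location information, matches would cluster.
# Instead each quarter of the range holds about len(matches)/4 of them.
quarters = [0, 0, 0, 0]
for k in matches:
    quarters[k * 4 // N] += 1
print(len(matches), quarters)
```

Roughly N/256 ≈ 1024 matches appear, split evenly among the quarters up to ordinary sampling noise, which is what "uniformly distributed" predicts.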
kTimesG
Full Member
***
Offline Offline

Activity: 784
Merit: 242


View Profile
November 19, 2025, 03:28:15 PM
 #34

Hahaha, I see you're trying to cause confusion with absurd (technically equivalent) topics where it no longer exists. Even though I used a prefix for my method, the conclusion is the same whether you use the end or the middle, statistically speaking. Your intention is to invalidate the word "prefixes" to overshadow my work.

Anyway, even if you try to invalidate the label by calling it whatever you want, you're accepting that the directed sampling strategy is superior, and all its filter variants (prefix, postfix, etc.) are technically equivalent in terms of statistical benefit.

Cool, we're on common ground!

Now: since all the technically equivalent methods work differently, they will each find the target key after a different number of steps (only the average stays the same for all).

This is not something you can ever refute, since it's a direct observation: the methods work on different sets of bits, so each of the technically equivalent methods will, on average, cover every possible key an equal number of times.

Considering this fact, which one of these technically equivalent methods is the better one? Because one wants to make the rational choice of picking a single method - the best method - which one should they pick?

Off the grid, training pigeons to broadcast signed messages.
mcdouglasx (OP)
Hero Member
*****
Offline Offline

Activity: 952
Merit: 532



View Profile WWW
November 19, 2025, 03:56:46 PM
Last edit: November 19, 2025, 04:08:42 PM by mcdouglasx
 #35


Therefore, you are falling into ktimesg's trap. Demonstrating that the average number of checks is similar to the number of prefixes leads people to believe that randomness is a valid substitute.

There is no trap, and we’re not trying to tell you that skipping keys randomly is better than any other method, only that no method can increase your probabilities. I wanted to try, for the umpteenth time, to make you understand that this idea is absurd, but you refuse to get it.

There is no better way to search for a uniformly distributed random number K, BY DEFINITION, precisely because every number has the same probability of appearing. Stop. If you understand this simple statement, you should already realize that a better method cannot exist.

And it makes no sense to think that a number K has to be far away to some number A, just because ripemd160(sha256(K*G)) starts with the same L bits as ripemd160(sha256(A*G)).

You probably don’t even realize how huge the claim you’re making actually is, because if it were true, it would deserve a Nobel prize. You are essentially claiming that by using cryptographic functions, you can transform a uniform distribution into a non-uniform one, claiming that through these cryptographic functions you are able to extract information that can tell you that a number may appear with higher or lower probability

Your argument is based on a conceptual fallacy by confusing a law of probability (uniformity) with a law of optimization (search strategy). You claim that "no method can increase your odds," but the reality is that you're confusing Probability per Key with Search Efficiency. The Probability per Key is always 1/N (immutable).

I've never claimed that this changes. Search Efficiency (the number of valid attempts per second) is variable and depends on the strategy.

My method doesn't violate uniformity; it cleverly exploits it. So you have a Uniformity Fallacy because your argument is that, since the key distribution (K) is uniform, all search strategies must be equally efficient. This is incorrect since I don't claim that the distribution changes. I claim that I can use the structure of the correlated hash (the prefix) to optimize my search strategy.

This is because the uniformity of the distribution doesn't imply that all search strategies are equivalent. It implies that all keys have the same probability, but not all require the same costly verification.

And I see that your debate logic is on par with ktimesg's. You're presenting the "Nobel Prize" fallacy and Optimal Search, since the idea of using correlated information to reduce the search space isn't a radical claim; it's called Optimal Search Theory and is used throughout computer science (algorithms, database indexes, etc.).

You don't need a Nobel Prize to understand efficiency. Your mistake is assuming the cost should be the same in each iteration, ignoring the value of a low-cost, high-precision filter. In other words, your argument is based on another Straw Man fallacy: attributing to me the claim of changing the distribution.

You create probabilistic confusion by ignoring that efficiency is maximized with an informed strategy. While you propose statistical blindness and kTimesG proposes random noise, my method offers efficient intelligence. My method is the most efficient because it obtains the maximum informational value (the valid justification for discarding) with the same cost efficiency as the blind method, making it the only rational cryptographic strategy.



kTimesG, stop trying to move the goalposts. You're clearly resorting to rhetorical tricks because you've missed the main point about efficiency.
fixedpaul
Member
**
Offline Offline

Activity: 86
Merit: 27


View Profile WWW
November 19, 2025, 04:33:01 PM
 #36


My method doesn't violate uniformity; it cleverly exploits it. So you have a Uniformity Fallacy because your argument is that, since the key distribution (K) is uniform..


You are taking a range of keys, and as soon as you find a prefix match you stop, right? Why do you stop? Because you are saying that since you found a prefix match in the range, you believe you now have information indicating that the next keys in that range have a lower probability of being the winning one. Didn’t you say that 1/N is immutable?
mcdouglasx (OP)
Hero Member
*****
Offline Offline

Activity: 952
Merit: 532



View Profile WWW
November 19, 2025, 05:13:48 PM
 #37


My method doesn't violate uniformity; it cleverly exploits it. So you have a Uniformity Fallacy because your argument is that, since the key distribution (K) is uniform..


You are taking a range of keys, and as soon as you find a prefix match you stop, right? Why do you stop? Because you are saying that since you found a prefix match in the range, you believe you now have information indicating that the next keys in that range have a lower probability of being the winning one. Didn’t you say that 1/N is immutable?

You're right that I'm taking a calculated risk; that's the main point of the thread. But it's not about 'believing the odds have changed'; it's about optimizing resources under uncertainty.

It's not as you assume, but rather making a rational calculation that the expected benefit justifies the risk. However, you insist on saying things I haven't stated.

It's as simple as the thread itself says: it's an optimized search "for search engines with limited resources." And I don't need to repeat in every reply that it's an optimized search based on real cryptographic principles. I'M NOT ASSUMING THE ODDS HAVE CHANGED, ONLY THAT THE PREFIX IS THE CRYPTOGRAPHICALLY JUSTIFIED AND LOGICAL METHOD.

If you can't understand the difference between 'risk management' and 'probability change', that's your conceptual problem, not a flaw in my method.
kTimesG
Full Member
***
Offline Offline

Activity: 784
Merit: 242


View Profile
November 19, 2025, 05:26:52 PM
 #38

kTimesG, stop trying to move the goalposts. You're clearly resorting to rhetorical tricks because you've missed the main point about efficiency.

I didn't miss anything, thanks.

Since all the technically equivalent methods have equal efficiency (because they ALL find solutions faster than usual), how can these two propositions hold simultaneously?

1. There is more than one method that finds the solution faster than normal, on average.

AND

2. Each of these methods returns the solution after a different number of operations, for every instance of a specific problem to be solved.

If this doesn't sound like a logical contradiction to you, I don't know what does.

In essence, for whatever problem you're solving, there will be one technically equivalent method that finds the solution sooner, and another that finds it later.

So, again - which one is the better variant? Keep in mind - all the methods have the same statistical advantage, so which one should we pick, for, e.g., Puzzle 71?

If your answer is: ANY method out of all those technically equivalent methods, then how come some of those methods find the solution faster than the others, for ANY problem you are trying to solve?

This can all be proven empirically as well, just as everything up to this point.

So, for any problem where the Prefix works as an advantage, I can find some other technically equivalent variant, with the same statistical advantage, that finds the solution FASTER than the Prefix.

So, again - which one is the superior method after all? Or, you don't really have an answer, since the dilemma is in your garden at this point?

Off the grid, training pigeons to broadcast signed messages.
fixedpaul
Member
**
Offline Offline

Activity: 86
Merit: 27


View Profile WWW
November 19, 2025, 06:35:12 PM
 #39


It's as simple as the thread itself says: it's an optimized search "for search engines with limited resources" And I don't need to repeat in every reply that it's an optimized search based on real cryptographic principles. I'M NOT ASSUMING THE ODDS HAVE CHANGED, ONLY THAT THE PREFIX IS THE CRYPTOGRAPHICALLY JUSTIFIED AND LOGICAL METHOD.

If you can't understand the difference between 'risk management' and 'probability change', that's your conceptual problem, not a flaw in my method.

I wonder what the definition of cryptographically justified even is.

Anyway, try to help me understand better because my tiny brain just can’t get it.

With your method you skip keys in a range, and on average you stop after checking about 63% of the keys. The prefix match tells you when to stop, right? But you said it’s not telling you to skip the next keys in the range because they have lower probability, is it? So what kind of information is it giving you?

Or in the end, are you basically just taking a risk by randomly skipping some keys, on average after checking around 63% of them?
Where’s the actual advantage of using the prefix if it doesn’t give you any information about the probabilities?
mcdouglasx (OP)
Hero Member
*****
Offline Offline

Activity: 952
Merit: 532



View Profile WWW
November 19, 2025, 07:02:19 PM
 #40


It's as simple as the thread itself says: it's an optimized search "for search engines with limited resources" And I don't need to repeat in every reply that it's an optimized search based on real cryptographic principles. I'M NOT ASSUMING THE ODDS HAVE CHANGED, ONLY THAT THE PREFIX IS THE CRYPTOGRAPHICALLY JUSTIFIED AND LOGICAL METHOD.

If you can't understand the difference between 'risk management' and 'probability change', that's your conceptual problem, not a flaw in my method.

I wonder what the definition of cryptographically justified even is.

Anyway, try to help me understand better because my tiny brain just can’t get it.

With your method you skip keys in a range, and on average you stop after checking about 63% of the keys. The prefix match tells you when to stop, right? But you said it’s not telling you to skip the next keys in the range because they have lower probability, is it? So what kind of information is it giving you?

Or in the end, are you basically just taking a risk by randomly skipping some keys, on average after checking around 63% of them?
Where’s the actual advantage of using the prefix if it doesn’t give you any information about the probabilities?

I don't stop because the probability of the next key has changed (it's always 1/N), but because a prefix match with no target found indicates that the block has already reached its statistical potential for success (63% probability). The cost of searching the remaining 37% of the block doesn't justify it compared to the throughput gain I get by skipping to the next block.

The prefix does provide valid information because it tells me when a block has likely already reached its statistical potential based on correlation with the actual target, not pure randomness.

What information does the prefix give me?

It doesn't tell me that the following keys have a lower probability; what it tells me is, "I've already found a promising candidate in this block, and statistically it's optimal to continue searching in another block."

The prefix is the cryptographically justified method for making that stopping decision, because it's based on the deterministic property of the target, not on an uncorrelated random number as a purely random search would use.



Regarding ktimesg, again, if you can't see the contradictions in your post, I can't do anything for you, because the purpose of the post is lost in debate. You've gone from CUDA to random, to postfixes... what's next?

The prefix uses information correlated with the specific objective, while your random method uses statistical noise. In cryptography, that difference is enormous.