bcchanger
Newbie
Offline
Activity: 18
Merit: 0
|
 |
May 10, 2025, 12:25:52 AM |
|
Something I had forgotten to mention: while the prefix method does not represent a global improvement over the sequential method, this only applies in cases where 100% of the ranges are being scanned. However, if one is testing luck rather than exhaustive searching, the prefix method is always the better choice. If we exclude the option of scanning omitted ranges in extreme cases, the prefix method will achieve the objective faster, with fewer checks, and a 90% success rate.
The times when sequential search statistically matches the prefix method occur in its worst-case scenarios, which only represent 10% of instances. Therefore, since most users are not attempting to scan the full range 71, the best option remains the prefix method, as it provides the greatest statistical advantages with just a 10% risk.
In the unlikely event that one reaches that point in the process, those omitted ranges could always be saved for future reference in a text file.
do you have a script of your method? What script do you need?! Generating prefixes means handling ≈60 million prefixes and brute-forcing them one by one until you get it. The game isn’t about scripts... it’s about hardware. You need a ton of GPUs. You have to find the key inside this number: 1,180,591,620,717,411,303,424. Read the number twice.
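For a sense of why hardware dominates the discussion, here is a minimal back-of-the-envelope sketch in Python of what that number means for a full sweep of the 71-bit range; the 1 Gkey/s throughput is an assumed figure for illustration only, not a benchmark of any particular rig.

# Rough scale check for puzzle 71: keys in [2^70, 2^71).
# The throughput below is an assumption for illustration only.
RANGE_KEYS = 2**71 - 2**70            # = 2**70 = 1,180,591,620,717,411,303,424 keys
ASSUMED_KEYS_PER_SEC = 1_000_000_000  # hypothetical 1 Gkey/s

seconds = RANGE_KEYS / ASSUMED_KEYS_PER_SEC
years = seconds / (3600 * 24 * 365.25)
print(f"Keys in range: {RANGE_KEYS:,}")
print(f"Full sweep   : ~{years:,.0f} years at 1 Gkey/s (half that, on average, for a random key)")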
|
|
|
|
farou9
Newbie
Offline
Activity: 65
Merit: 0
|
 |
May 10, 2025, 12:45:05 AM |
|
Something I had forgotten to mention: while the prefix method does not represent a global improvement over the sequential method, this only applies in cases where 100% of the ranges are being scanned. However, if one is testing luck rather than exhaustive searching, the prefix method is always the better choice. If we exclude the option of scanning omitted ranges in extreme cases, the prefix method will achieve the objective faster, with fewer checks, and a 90% success rate.
The times when sequential search statistically matches the prefix method occur in its worst-case scenarios, which only represent 10% of instances. Therefore, since most users are not attempting to scan the full range 71, the best option remains the prefix method, as it provides the greatest statistical advantages with just a 10% risk.
In the unlikely event that one reaches that point in the process, those omitted ranges could always be saved for future reference in a text file.
do you have a script of your method ? What script do you need?! Generating prefixes is like handling ≈60 million prefixes and brute-forcing them one by one until you get it. The game isn’t about scripts... it’s about hardware. You need a ton of GPUs. You have to find the key inside this number: 1,180,591,620,717,411,303,424 Read the number twice. Easy: study quantum mechanics, computer science, etc., then build a fault-tolerant quantum computer with 71 real qubits, and voilà!! You'll find it faster than the blink of an eye.
|
|
|
|
nomachine
|
 |
May 10, 2025, 03:50:24 AM |
|
From github repositories I saw a lot of interesting code and concepts
I checked your Git repository. I saw exactly which secp256k1 implementation you're using in your project. I don’t need to be on the forum at all to find or share good ideas; GitHub is enough 
|
BTC: bc1qdwnxr7s08xwelpjy3cc52rrxg63xsmagv50fa8
|
|
|
Akito S. M. Hosana
Jr. Member
Offline
Activity: 350
Merit: 8
|
 |
May 10, 2025, 04:10:57 AM |
|
From github repositories I saw a lot of interesting code and concepts
I checked your Git repository. I saw exactly which secp256k1 implementation you're using in your project. I don’t need to be on the forum at all to find or share good ideas; GitHub is enough  Which one exactly? I can’t find it. Damn. Why is everyone so secretive? 
|
|
|
|
kTimesG
|
 |
May 10, 2025, 06:54:45 AM |
|
the prefix method is always the better choice
You're right! I finally cracked the puzzle (of what's happening here and why this is my last post indeed). - I know why McD is pushing fwd with the magic theory, even after being debated and refuted for 50 pages - I know why nomachine publishes dozens of scripts from his code folder. - I know why Akito is too bored and sees conspiracies everywhere (remember when you PM'ed me when 130 was snitched to tell me that RC is the creator dude? bad day indeed) - I know why real facts are refuted so intensely in this thread. It's dead simple: considering that what happens here would make zero sense in the real world, my best guess is that at least part of you are straight-up mass manipulators (not even trolls). And I know why you do it, it's not at all hard to guess. After all, what's the best way of increasing your own chances than to convince everyone else of using some better methods, that actually are the worst ones imaginable? GG to the guy(s) pulling the strings. You won. Since this thread is the go-to destination of anyone new to the puzzle, your mission will continue successfully. Others, like me, gave up fighting the disinformation wave. It's simply not worth it.
|
Off the grid, training pigeons to broadcast signed messages.
|
|
|
Bram24732
Member

Offline
Activity: 112
Merit: 14
|
 |
May 10, 2025, 08:02:01 AM |
|
the prefix method is always the better choice
You're right! I finally cracked the puzzle (of what's happening here and why this is my last post indeed). - I know why McD is pushing fwd with the magic theory, even after being debated and refuted for 50 pages - I know why nomachine publishes dozens of scripts from his code folder. - I know why Akito is too bored and sees conspiracies everywhere (remember when you PM'ed me when 130 was snitched to tell me that RC is the creator dude? bad day indeed) - I know why real facts are refuted so intensely in this thread. It's dead simple: considering that what happens here would make zero sense in the real world, my best guess is that at least part of you are straight-up mass manipulators (not even trolls). And I know why you do it, it's not at all hard to guess. After all, what's the best way of increasing your own chances than to convince everyone else of using some better methods, that actually are the worst ones imaginable? GG to the guy(s) pulling the strings. You won. Since this thread is the go-to destination of anyone new to the puzzle, your mission will continue successfully. Others, like me, gave up fighting the disinformation wave. It's simply not worth it. I think you're reading too much into it. Reality is probably way sadder. People are desperate, and this puzzle pulls ambient-temperature-IQ folks like a magnet. There's likely not much more to it.
|
|
|
|
fantom06
Jr. Member
Offline
Activity: 49
Merit: 1
|
 |
May 10, 2025, 09:59:54 AM |
|
the prefix method is always the better choice
You're right! I finally cracked the puzzle (of what's happening here and why this is my last post indeed). - I know why McD is pushing fwd with the magic theory, even after being debated and refuted for 50 pages - I know why nomachine publishes dozens of scripts from his code folder. - I know why Akito is too bored and sees conspiracies everywhere (remember when you PM'ed me when 130 was snitched to tell me that RC is the creator dude? bad day indeed) - I know why real facts are refuted so intensely in this thread. It's dead simple: considering that what happens here would make zero sense in the real world, my best guess is that at least part of you are straight-up mass manipulators (not even trolls). And I know why you do it, it's not at all hard to guess. After all, what's the best way of increasing your own chances than to convince everyone else of using some better methods, that actually are the worst ones imaginable? GG to the guy(s) pulling the strings. You won. Since this thread is the go-to destination of anyone new to the puzzle, your mission will continue successfully. Others, like me, gave up fighting the disinformation wave. It's simply not worth it. I think you're reading too much into it. Reality is probably way more sad. People are desperate and this puzzle pulls ambiant temperature IQ folks like a magnet. There's likely not much more to it. To be honest, when I came, I had a good opinion of both of you Bram24732 and kTimesG. And now it has changed. SHAME ON YOU!
|
|
|
|
Bram24732
Member

Offline
Activity: 112
Merit: 14
|
 |
May 10, 2025, 10:11:48 AM |
|
the prefix method is always the better choice
You're right! I finally cracked the puzzle (of what's happening here and why this is my last post indeed). - I know why McD is pushing fwd with the magic theory, even after being debated and refuted for 50 pages - I know why nomachine publishes dozens of scripts from his code folder. - I know why Akito is too bored and sees conspiracies everywhere (remember when you PM'ed me when 130 was snitched to tell me that RC is the creator dude? bad day indeed) - I know why real facts are refuted so intensely in this thread. It's dead simple: considering that what happens here would make zero sense in the real world, my best guess is that at least part of you are straight-up mass manipulators (not even trolls). And I know why you do it, it's not at all hard to guess. After all, what's the best way of increasing your own chances than to convince everyone else of using some better methods, that actually are the worst ones imaginable? GG to the guy(s) pulling the strings. You won. Since this thread is the go-to destination of anyone new to the puzzle, your mission will continue successfully. Others, like me, gave up fighting the disinformation wave. It's simply not worth it. I think you're reading too much into it. Reality is probably way more sad. People are desperate and this puzzle pulls ambiant temperature IQ folks like a magnet. There's likely not much more to it. To be honest, when I came, I had a good opinion of both of you Bram24732 and kTimesG. And now it has changed. SHAME ON YOU! I'm very curious about what changed your mind. I don't think I changed my tune very much since I arrived here
|
|
|
|
cctv5go
Newbie
Offline
Activity: 43
Merit: 0
|
 |
May 10, 2025, 11:27:49 AM |
|
Good programmers are excellent wherever they go; there are always some bad programmers who just talk big.
|
|
|
|
mcdouglasx
|
 |
May 10, 2025, 12:02:11 PM |
|
the prefix method is always the better choice
You're right! I finally cracked the puzzle (of what's happening here and why this is my last post indeed). - I know why McD is pushing fwd with the magic theory, even after being debated and refuted for 50 pages - I know why nomachine publishes dozens of scripts from his code folder. - I know why Akito is too bored and sees conspiracies everywhere (remember when you PM'ed me when 130 was snitched to tell me that RC is the creator dude? bad day indeed) - I know why real facts are refuted so intensely in this thread. It's dead simple: considering that what happens here would make zero sense in the real world, my best guess is that at least part of you are straight-up mass manipulators (not even trolls). And I know why you do it, it's not at all hard to guess. After all, what's the best way of increasing your own chances than to convince everyone else of using some better methods, that actually are the worst ones imaginable? GG to the guy(s) pulling the strings. You won. Since this thread is the go-to destination of anyone new to the puzzle, your mission will continue successfully. Others, like me, gave up fighting the disinformation wave. It's simply not worth it. There is no conspiracy. If you treat everyone the way you do, sooner or later, you end up getting what you deserve. Do not distort, cut, or misrepresent my words.
Something I had forgotten to mention: while the prefix method does not represent a global improvement over the sequential method, this only applies in cases where 100% of the ranges are being scanned. However, if one is testing luck rather than exhaustive searching, the prefix method is always the better choice. If we exclude the option of scanning omitted ranges in extreme cases, the prefix method will achieve the objective faster, with fewer checks, and a 90% success rate.
There is no statistical advantage between the methods if the goal is to scan the entire range and global metrics are analyzed. Although the success rate is lower, it remains high, and the method is superior to sequential search for our specific need, which will never be to scan the entire range. If we avoid exploring omitted ranges, the probability of failure stays between 10% and 20%.

import hashlib
import random
import time
import math
import statistics
import scipy.stats as st
from math import ceil

# Configuration
TOTAL_SIZE = 100_000
RANGE_SIZE = 4_096
PREFIX_LENGTH = 3
SIMULATIONS = 1000
SECP256K1_ORDER = int("0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141", 16)

print(f"""
=== Configuration ===
Total numbers: {TOTAL_SIZE:,}
Block size: {RANGE_SIZE:,}
Total blocks needed: {ceil(TOTAL_SIZE/RANGE_SIZE)}
Prefix: {PREFIX_LENGTH} characters (16^{PREFIX_LENGTH} = {16**PREFIX_LENGTH:,} combinations)
Simulations: {SIMULATIONS}
secp256k1 order: {SECP256K1_ORDER}
""")

def generate_h160(data):
    h = hashlib.new('ripemd160', str(data).encode('utf-8'))
    return h.hexdigest()

def shuffled_block_order(total_blocks):
    blocks = list(range(total_blocks))
    random.shuffle(blocks)
    return blocks

def sequential_search(dataset, block_size, target_hash, block_order):
    checks = 0
    for block_idx in block_order:
        start = block_idx * block_size
        end = min(start + block_size, len(dataset))
        for i in range(start, end):
            checks += 1
            if generate_h160(dataset[i]) == target_hash:
                return {"checks": checks, "found": True, "index": i}
    return {"checks": checks, "found": False}

def prefix_search(dataset, block_size, prefix_len, target_hash, block_order):
    prefix_hash = target_hash[:prefix_len]
    checks = 0
    ranges_to_scan = []
    skip_counter = 0
    scan_increment = 1

    for block_idx in block_order:
        start = block_idx * block_size
        end = min(start + block_size, len(dataset))  # safe upper bound
        found_prefix = False

        for i in range(start, end):
            checks += 1
            h = generate_h160(dataset[i])
            if h == target_hash:
                return {"checks": checks, "found": True, "index": i}
            if not found_prefix and h.startswith(prefix_hash):
                found_prefix = True
                ranges_to_scan.append({"start": i + 1, "end": end})
                skip_counter += 1
                break

        if skip_counter >= 4 and ranges_to_scan:
            for _ in range(min(scan_increment, len(ranges_to_scan))):
                r = ranges_to_scan.pop(0)
                for i in range(r["start"], r["end"]):
                    checks += 1
                    if generate_h160(dataset[i]) == target_hash:
                        return {"checks": checks, "found": True, "index": i}
            skip_counter = 0
            scan_increment += 1

    # Optional final pass over the omitted ranges (disabled):
    # for r in ranges_to_scan:
    #     for i in range(r["start"], r["end"]):
    #         checks += 1
    #         if generate_h160(dataset[i]) == target_hash:
    #             return {"checks": checks, "found": True, "index": i}

    return {"checks": checks, "found": False}

def compute_cohens_d(list1, list2):
    if len(list1) < 2 or len(list2) < 2:
        return float('nan')
    n1, n2 = len(list1), len(list2)
    m1, m2 = statistics.mean(list1), statistics.mean(list2)
    s1, s2 = statistics.stdev(list1), statistics.stdev(list2)
    pooled_std = math.sqrt(((n1-1)*s1**2 + (n2-1)*s2**2) / (n1+n2-2))
    if pooled_std == 0:
        return float('nan')
    return (m1 - m2) / pooled_std

def correct_coefficient_of_variation(data):
    if not data or statistics.mean(data) == 0:
        return float('nan')
    return (statistics.stdev(data) / statistics.mean(data)) * 100

def longest_streak(outcomes, letter):
    max_streak = current = 0
    for o in outcomes:
        current = current + 1 if o == letter else 0
        max_streak = max(max_streak, current)
    return max_streak

def ascii_bar(label, value, max_value, bar_length=50):
    bar_count = int((value / max_value) * bar_length) if max_value > 0 else 0
    return f"{label:12}: {'#' * bar_count} ({value})"

def compare_methods():
    results = {
        "sequential": {"wins": 0, "success": 0, "checks": [], "times": []},
        "prefix": {"wins": 0, "success": 0, "checks": [], "times": []},
        "ties": 0
    }
    outcome_history = []
    total_blocks = ceil(TOTAL_SIZE / RANGE_SIZE)

    for _ in range(SIMULATIONS):
        max_offset = SECP256K1_ORDER - TOTAL_SIZE - 1
        offset = random.randint(0, max_offset)
        dataset = [offset + i for i in range(TOTAL_SIZE)]
        target_num = random.choice(dataset)
        target_hash = generate_h160(target_num)
        block_order = shuffled_block_order(total_blocks)

        start = time.perf_counter()
        seq_res = sequential_search(dataset, RANGE_SIZE, target_hash, block_order)
        seq_time = time.perf_counter() - start

        start = time.perf_counter()
        pre_res = prefix_search(dataset, RANGE_SIZE, PREFIX_LENGTH, target_hash, block_order)
        pre_time = time.perf_counter() - start

        for method, res, t in [("sequential", seq_res, seq_time), ("prefix", pre_res, pre_time)]:
            if res["found"]:
                results[method]["success"] += 1
                results[method]["checks"].append(res["checks"])
                results[method]["times"].append(t)

        if seq_res["found"] and pre_res["found"]:
            if seq_res["checks"] < pre_res["checks"]:
                results["sequential"]["wins"] += 1
                outcome_history.append("S")
            elif pre_res["checks"] < seq_res["checks"]:
                results["prefix"]["wins"] += 1
                outcome_history.append("P")
            else:
                results["ties"] += 1
                outcome_history.append("T")

    def get_stats(data):
        if not data:
            return {"mean": 0, "min": 0, "max": 0, "median": 0, "stdev": 0}
        return {
            "mean": statistics.mean(data),
            "min": min(data),
            "max": max(data),
            "median": statistics.median(data),
            "stdev": statistics.stdev(data) if len(data) > 1 else 0
        }

    seq_stats = get_stats(results["sequential"]["checks"])
    pre_stats = get_stats(results["prefix"]["checks"])
    seq_time_stats = get_stats(results["sequential"]["times"])
    pre_time_stats = get_stats(results["prefix"]["times"])

    seq_success_rate = results["sequential"]["success"] / SIMULATIONS
    pre_success_rate = results["prefix"]["success"] / SIMULATIONS

    total_comparisons = results["sequential"]["wins"] + results["prefix"]["wins"] + results["ties"]
    seq_win_rate = results["sequential"]["wins"] / total_comparisons if total_comparisons > 0 else 0
    pre_win_rate = results["prefix"]["wins"] / total_comparisons if total_comparisons > 0 else 0

    cv_seq = correct_coefficient_of_variation(results["sequential"]["checks"])
    cv_pre = correct_coefficient_of_variation(results["prefix"]["checks"])

    effect_size = compute_cohens_d(results["sequential"]["checks"], results["prefix"]["checks"])
    if len(results["sequential"]["checks"]) > 1 and len(results["prefix"]["checks"]) > 1:
        t_test = st.ttest_ind(results["sequential"]["checks"], results["prefix"]["checks"], equal_var=False)
    else:
        t_test = None

    print(f"""
=== FINAL ANALYSIS ===

[Success Rates]
Sequential: {seq_success_rate:.1%} ({results['sequential']['success']}/{SIMULATIONS})
Prefix:     {pre_success_rate:.1%} ({results['prefix']['success']}/{SIMULATIONS})

[Performance Metrics]
               |     Sequential      |       Prefix
---------------+---------------------+--------------------
Checks (mean)  | {seq_stats['mean']:>12,.1f} ± {seq_stats['stdev']:,.1f} | {pre_stats['mean']:>12,.1f} ± {pre_stats['stdev']:,.1f}
Time (mean ms) | {seq_time_stats['mean']*1000:>12.2f} ± {seq_time_stats['stdev']*1000:.2f} | {pre_time_stats['mean']*1000:>12.2f} ± {pre_time_stats['stdev']*1000:.2f}
Min checks     | {seq_stats['min']:>12,} | {pre_stats['min']:>12,}
Max checks     | {seq_stats['max']:>12,} | {pre_stats['max']:>12,}
Coef. Variation| {cv_seq:>11.1f}% | {cv_pre:>11.1f}%

[Comparison When Both Succeed]
Sequential wins: {results['sequential']['wins']} ({seq_win_rate:.1%})
Prefix wins:     {results['prefix']['wins']} ({pre_win_rate:.1%})
Ties:            {results['ties']}

[Statistical Significance]
Cohen's d: {effect_size:.3f}
Welch's t-test: {'t = %.3f, p = %.4f' % (t_test.statistic, t_test.pvalue) if t_test else 'Insufficient data'}
""")

    non_tie_outcomes = [o for o in outcome_history if o != "T"]
    streak_analysis = f"""
=== STREAK ANALYSIS ===
Longest Sequential streak: {longest_streak(outcome_history, 'S')}
Longest Prefix streak:     {longest_streak(outcome_history, 'P')}
Expected max streak:       {math.log(len(non_tie_outcomes), 2):.1f} (for {len(non_tie_outcomes)} trials)
"""
    print(streak_analysis)

    max_wins = max(results["sequential"]["wins"], results["prefix"]["wins"], results["ties"])
    print("=== WIN DISTRIBUTION ===")
    print(ascii_bar("Sequential", results["sequential"]["wins"], max_wins))
    print(ascii_bar("Prefix", results["prefix"]["wins"], max_wins))
    print(ascii_bar("Ties", results["ties"], max_wins))

if __name__ == '__main__':
    compare_methods()
test #1

=== Configuration ===
Total numbers: 100,000
Block size: 4,096
Total blocks needed: 25
Prefix: 3 characters (16^3 = 4,096 combinations)
Simulations: 1000
secp256k1 order: 115792089237316195423570985008687907852837564279074904382605163141518161494337

=== FINAL ANALYSIS ===

[Success Rates]
Sequential: 100.0% (1000/1000)
Prefix:     85.1% (851/1000)

[Performance Metrics]
               |     Sequential       |       Prefix
---------------+----------------------+---------------------
Checks (mean)  | 49,871.6 ± 28,366.8  | 42,476.4 ± 23,538.4
Time (mean ms) | 124.07 ± 71.32       | 107.76 ± 60.40
Min checks     | 106                  | 106
Max checks     | 99,737               | 89,445
Coef. Variation| 56.9%                | 55.4%

[Comparison When Both Succeed]
Sequential wins: 214 (25.1%)
Prefix wins:     603 (70.9%)
Ties:            34

[Statistical Significance]
Cohen's d: 0.282
Welch's t-test: t = 6.129, p = 0.0000

=== STREAK ANALYSIS ===
Longest Sequential streak: 6
Longest Prefix streak:     12
Expected max streak:       9.7 (for 817 trials)

=== WIN DISTRIBUTION ===
Sequential  : ################# (214)
Prefix      : ################################################## (603)
Ties        : ## (34)
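As a quick sanity check on the win split above (603 prefix wins vs. 214 sequential wins out of 817 decided trials), a simple binomial test against a fair 50/50 null can be run separately. This is a small sketch using SciPy (binomtest requires scipy >= 1.7) and is not part of the script above.

from scipy.stats import binomtest

# 603 prefix wins out of 817 non-tied trials, tested against p = 0.5
result = binomtest(603, n=817, p=0.5)
print(f"p-value: {result.pvalue:.3e}")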
|
|
|
|
Dapud0886
Newbie
Offline
Activity: 2
Merit: 0
|
 |
May 10, 2025, 02:26:12 PM |
|
Here is a tutorial on how to use Mara Slipstream and avoid bots stealing your prize:
1. Choose the wallet you want to transfer to.
2. Hit Max button to transfer all the amount available.
3. Hit Pay button.
4. See the minimum fee Mara needs to mine your transaction (it’s not the same all the time).
5. Choose a fee slightly above the minimum fee of Mara (If minimum at current time is 20, choose 50 so you have your transaction included and prioritized at the next block).
6. Hit the Preview button (DON'T EVER PRESS THE “OK” BUTTON).
7. Sign your transaction with sign button (BE CAREFUL TO NOT HIT "BROADCAST” BUTTON BY MISTAKE).
8. Go to share button and export your transaction with Save to file and save the file wherever you want.
9. Close the preview window to avoid making a mistake you'll regret forever.
10. Open your file in notes and copy the Hex transaction.
11. Paste it in slipstream Mara and activate your transaction.
If this tutorial has helped you, feel free to throw me a coin, and I wish you all good luck! -------------------------------------------------------------------------------- bc1qc3k9aefcam26plsq2j4wscpy6wt3s6y9ua2j66
Is this for electrum? If ever found, i would disconnect before this Yes, it is for Electrum, and keeping it offline can help if you broadcast the transaction by mistake; in that case you would need to copy the hex and activate it through another device until it gets mined, before turning your connection back on. I think if you enter the WIF on an offline laptop you can't send the BTC, because the wallet balance isn't synced (not enough balance). I think the best way to avoid bots or some hidden virus on your laptop is to create a watch-only wallet on your online laptop with the address 1PWo3JeB9jrGwfHDNpdGK54CRas7fsVzXU, put in the receive address (yours, or mine, I don't mind) with the max balance and the right fee, press Pay, then save the unsigned transaction to a file by pressing Export. After that, cut the internet from your laptop, create a new wallet with the found WIF, import the unsigned transaction, and sign it; you are safe because the Broadcast button is disabled (the laptop is offline). Then press Export again, select "Show as QR code", scan it with your online mobile phone, and paste it into the Mara pool. Can anyone give this method a thought and see if it's the right way to do this safely? Because if something goes wrong, people will end up hanging themselves from some tree behind this shi..t
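Whichever wallet you use, one extra precaution that costs nothing: before pasting the hex into Slipstream, sanity-check the exported transaction offline. Here is a minimal, stdlib-only Python sketch; it only confirms the hex decodes and shows a couple of basic fields, it does not verify signatures or outputs.

def quick_tx_check(tx_hex: str) -> None:
    # Offline sanity check: confirms the hex decodes and prints basic fields.
    raw = bytes.fromhex(tx_hex.strip())           # raises ValueError on invalid hex
    version = int.from_bytes(raw[0:4], "little")  # first 4 bytes, little-endian
    locktime = int.from_bytes(raw[-4:], "little") # last 4 bytes, little-endian
    print(f"size     : {len(raw)} bytes")
    print(f"version  : {version}")
    print(f"locktime : {locktime}")

# Example with a placeholder hex copied from the exported file:
# quick_tx_check("0200000001...")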
|
|
|
|
Benjade
Jr. Member
Offline
Activity: 35
Merit: 1
|
 |
May 10, 2025, 03:14:25 PM |
|
It's here! The first "breaking the elliptic curve" quantum computer cryptography competition is here! The QDay Prize is the first truly global quantum cryptanalysis competition with a 1 BTC prize. Entries are open; will anyone enter? https://www.qdayprize.org/ What's clear is that a major breakthrough in data decryption will occur very soon, and the 160 puzzle will certainly be decrypted this way.
|
|
|
|
analyticnomad
Newbie
Offline
Activity: 44
Merit: 0
|
 |
May 10, 2025, 07:06:05 PM |
|
It's here! the first "breaking the elliptic curve" quantum computer cryptography competition is here! The QDay Prize is the first truly global quantum cryptanalysis competition with a 1 BTC prize. Entries are open, will anyone enter? https://www.qdayprize.org/What's clear is that a major breakthrough in data decryption will occur very soon, and the 160 puzzle will certainly be decrypted this way. huh.... the silence in the chat indicates people are feverishly trying their own version of this
|
|
|
|
nomachine
|
 |
May 10, 2025, 08:11:35 PM Last edit: May 10, 2025, 08:25:56 PM by nomachine |
|
It's here! the first "breaking the elliptic curve" quantum computer cryptography competition is here! The QDay Prize is the first truly global quantum cryptanalysis competition with a 1 BTC prize. Entries are open, will anyone enter? https://www.qdayprize.org/ What's clear is that a major breakthrough in data decryption will occur very soon, and the 160 puzzle will certainly be decrypted this way. huh.... the silence in the chat indicates people are feverishly trying their own version of this A specialized laboratory is required for such a quantum computer. Not only does it need laser-generated (radiated) random numbers, but it also requires a quantum computer with a specific type of qubit optimized for Shor's algorithm, high-efficiency power supplies, sub-zero cooling with liquid nitrogen, and a fully controlled environment including air humidity. The power required is about 3 MW (like a train at full speed). However, laser-generated random numbers are not directly tied to the security of ECDSA or most other cryptographic algorithms. Random number generation is a separate component of cryptography, essential for key generation and other cryptographic operations. For example: https://arxiv.org/pdf/2302.06639.pdf This paper discusses related concepts. Current IBM Quantum systems and other publicly available quantum computers do not possess the necessary hardware (such as arithmetic circuits) for "126+ logical qubits with error correction" (e.g., "Cat Qubits"). If someone were to achieve this, three-letter agencies would immediately recognize who accomplished it and they would know exactly where those researchers are located.
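For anyone who wants to put rough numbers on "logical qubits with error correction", here is a toy overhead sketch in Python. Both input figures are assumptions chosen for illustration only (published resource estimates for attacking 256-bit ECC vary widely); they are not taken from the paper linked above.

# Toy estimate of the physical-qubit overhead of an error-corrected ECC attack.
# ASSUMED values are illustrative; real requirements depend on code distance,
# gate fidelities, and the exact algorithm variant.
ASSUMED_LOGICAL_QUBITS = 2_500        # assumed order of magnitude for 256-bit ECC
ASSUMED_PHYSICAL_PER_LOGICAL = 1_000  # assumed surface-code-style overhead

physical_qubits = ASSUMED_LOGICAL_QUBITS * ASSUMED_PHYSICAL_PER_LOGICAL
print(f"~{physical_qubits:,} physical qubits under these assumptions")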
|
BTC: bc1qdwnxr7s08xwelpjy3cc52rrxg63xsmagv50fa8
|
|
|
hoanghuy2912
Newbie
Offline
Activity: 39
Merit: 0
|
 |
May 10, 2025, 08:19:27 PM |
|
maybe the current security threat will stop at 72 bits and finding the 135 puzzle will be easier than finding the 71 now?
|
|
|
|
Denevron
Newbie
Offline
Activity: 103
Merit: 0
|
 |
May 10, 2025, 10:28:05 PM |
|
maybe the current security threat will stop at 72 bits and finding the 135 puzzle will be easier than finding the 71 now?
Try it and then tell us if you managed to find 135 faster than 71)
|
|
|
|
yxlm2003
Newbie
Offline
Activity: 8
Merit: 0
|
 |
May 10, 2025, 10:51:19 PM |
|
It's here! the first "breaking the elliptic curve" quantum computer cryptography competition is here! The QDay Prize is the first truly global quantum cryptanalysis competition with a 1 BTC prize. Entries are open, will anyone enter? https://www.qdayprize.org/What's clear is that a major breakthrough in data decryption will occur very soon, and the 160 puzzle will certainly be decrypted this way. huh.... the silence in the chat indicates people are feverishly trying their own version of this A specialized laboratory is required for such a quantum computer. Not only does it need laser-generated (radiated) random numbers, but it also requires a quantum computer with a specific type of qubit optimized for Shor's algorithm, high-efficiency power supplies, sub-zero cooling with liquid nitrogen, and a fully controlled environment including air humidity.. The power required is about 3MW (like a train at full speed). However, laser-generated random numbers are not directly tied to the security of ECDSA or most other cryptographic algorithms. Random number generation is a separate component of cryptography, essential for key generation and other cryptographic operations. For example: https://arxiv.org/pdf/2302.06639.pdfThis paper discusses related concepts. Current IBM Quantum systems and other publicly available quantum computers do not possess the necessary hardware (such as arithmetic circuits) for "126+ logical qubits with error correction" (e.g., "Cat Qubits"). If someone were to achieve this, three-letter agencies would immediately recognize who accomplished it and they would know exactly where those researchers are located.  fbi?
|
|
|
|
Benjade
Jr. Member
Offline
Activity: 35
Merit: 1
|
 |
May 10, 2025, 11:11:21 PM Last edit: May 10, 2025, 11:31:35 PM by Benjade |
|
Someone asked me for a full random version of Dookoo2's original Cyclone avx512 (https://github.com/Dookoo2/Cyclone), so I'm sharing it here if anyone else is interested. I'm not going to make a Git just for this. Just replace the old Cyclone.cpp with this version if you want it. Cheers. //g++ -std=c++17 -w -Ofast -ffast-math -funroll-loops -ftree-vectorize -fstrict-aliasing -fno-semantic-interposition -fvect-cost-model=unlimited -fno-trapping-math -fipa-ra -mavx512f -mavx512vl -mavx512bw -mavx512dq -fipa-modref -flto -fassociative-math -fopenmp -mavx2 -mbmi2 -madx -o Cyclone Cyclone.cpp SECP256K1.cpp Int.cpp IntGroup.cpp IntMod.cpp Point.cpp ripemd160_avx2.cpp p2pkh_decoder.cpp sha256_avx2.cpp ripemd160_avx512.cpp sha256_avx512.cpp
//The software is developed for solving Satoshi's puzzles; any use for illegal purposes is strictly prohibited. The author is not responsible for any actions taken by the user when using this software for unlawful activities. #include <immintrin.h> #include <iostream> #include <iomanip> #include <string> #include <cstring> #include <chrono> #include <vector> #include <sstream> #include <stdexcept> #include <algorithm> #include <omp.h> #include <array> #include <random> #include <utility> #include <mutex> #include <atomic>
// Adding program modules #include "p2pkh_decoder.h" #include "sha256_avx2.h" #include "ripemd160_avx2.h" #include "sha256_avx512.h" #include "ripemd160_avx512.h" #include "SECP256K1.h" #include "Point.h" #include "Int.h" #include "IntGroup.h"
//------------------------------------------------------------------------------ // Batch size: ±256 public keys (512), hashed in groups of 16 (AVX512). static constexpr int POINTS_BATCH_SIZE = 512; static constexpr int HASH_BATCH_SIZE = 16;
// Status output and progress saving frequency static constexpr double statusIntervalSec = 5.0; static std::string g_lastKeyGlobal; static std::atomic<bool> matchFound(false);
//------------------------------------------------------------------------------ //Converts a HEX string into a large number (a vector of 64-bit words, little-endian).
std::vector<uint64_t> hexToBigNum(const std::string& hex) { std::vector<uint64_t> bigNum; const size_t len = hex.size(); bigNum.reserve((len + 15) / 16); for (size_t i = 0; i < len; i += 16) { size_t start = (len >= 16 + i) ? len - 16 - i : 0; size_t partLen = (len >= 16 + i) ? 16 : (len - i); uint64_t value = std::stoull(hex.substr(start, partLen), nullptr, 16); bigNum.push_back(value); } return bigNum; }
//Reverse conversion to a HEX string (with correct leading zeros within blocks).
std::string bigNumToHex(const std::vector<uint64_t>& num) { std::ostringstream oss; for (auto it = num.rbegin(); it != num.rend(); ++it) { if (it != num.rbegin()) oss << std::setw(16) << std::setfill('0'); oss << std::hex << *it; } return oss.str(); }
std::vector<uint64_t> singleElementVector(uint64_t val) { return { val }; }
std::vector<uint64_t> bigNumAdd(const std::vector<uint64_t>& a, const std::vector<uint64_t>& b) { std::vector<uint64_t> sum; sum.reserve(std::max(a.size(), b.size()) + 1); uint64_t carry = 0; for (size_t i = 0, sz = std::max(a.size(), b.size()); i < sz; ++i) { uint64_t x = (i < a.size()) ? a[i] : 0ULL; uint64_t y = (i < b.size()) ? b[i] : 0ULL; __uint128_t s = ( __uint128_t )x + ( __uint128_t )y + carry; carry = (uint64_t)(s >> 64); sum.push_back((uint64_t)s); } if (carry) sum.push_back(carry); return sum; }
std::vector<uint64_t> bigNumSubtract(const std::vector<uint64_t>& a, const std::vector<uint64_t>& b) { std::vector<uint64_t> diff = a; uint64_t borrow = 0; for (size_t i = 0; i < b.size(); ++i) { uint64_t subtrahend = b[i]; if (diff[i] < subtrahend + borrow) { diff[i] = diff[i] + (~0ULL) - subtrahend - borrow + 1ULL; // eqv diff[i] = diff[i] - subtrahend - borrow borrow = 1ULL; } else { diff[i] -= (subtrahend + borrow); borrow = 0ULL; } } for (size_t i = b.size(); i < diff.size() && borrow; ++i) { if (diff[i] == 0ULL) { diff[i] = ~0ULL; } else { diff[i] -= 1ULL; borrow = 0ULL; } } // delete leading zeros while (!diff.empty() && diff.back() == 0ULL) diff.pop_back(); return diff; }
std::pair<std::vector<uint64_t>, uint64_t> bigNumDivide(const std::vector<uint64_t>& a, uint64_t divisor) { std::vector<uint64_t> quotient(a.size(), 0ULL); uint64_t remainder = 0ULL; for (int i = (int)a.size() - 1; i >= 0; --i) { __uint128_t temp = ((__uint128_t)remainder << 64) | a[i]; uint64_t q = (uint64_t)(temp / divisor); uint64_t r = (uint64_t)(temp % divisor); quotient[i] = q; remainder = r; } while (!quotient.empty() && quotient.back() == 0ULL) quotient.pop_back(); return { quotient, remainder }; }
long double hexStrToLongDouble(const std::string &hex) { long double result = 0.0L; for (char c : hex) { result *= 16.0L; if (c >= '0' && c <= '9') result += (c - '0'); else if (c >= 'a' && c <= 'f') result += (c - 'a' + 10); else if (c >= 'A' && c <= 'F') result += (c - 'A' + 10); } return result; }
//------------------------------------------------------------------------------ static inline std::string padHexTo64(const std::string &hex) { return (hex.size() >= 64) ? hex : std::string(64 - hex.size(), '0') + hex; } static inline Int hexToInt(const std::string &hex) { Int number; char buf[65] = {0}; std::strncpy(buf, hex.c_str(), 64); number.SetBase16(buf); return number; } static inline std::string intToHex(const Int &value) { Int temp; temp.Set((Int*)&value); return temp.GetBase16(); } static inline bool intGreater(const Int &a, const Int &b) { std::string ha = ((Int&)a).GetBase16(); std::string hb = ((Int&)b).GetBase16(); if (ha.size() != hb.size()) return (ha.size() > hb.size()); return (ha > hb); } static inline bool isEven(const Int &number) { return ((Int&)number).IsEven(); }
static inline std::string intXToHex64(const Int &x) { Int temp; temp.Set((Int*)&x); std::string hex = temp.GetBase16(); if (hex.size() < 64) hex.insert(0, 64 - hex.size(), '0'); return hex; }
static inline std::string pointToCompressedHex(const Point &point) { return (isEven(point.y) ? "02" : "03") + intXToHex64(point.x); } static inline void pointToCompressedBin(const Point &point, uint8_t outCompressed[33]) { outCompressed[0] = isEven(point.y) ? 0x02 : 0x03; Int temp; temp.Set((Int*)&point.x); for (int i = 0; i < 32; i++) { outCompressed[1 + i] = (uint8_t)temp.GetByte(31 - i); } }
//------------------------------------------------------------------------------ inline void prepareShaBlock(const uint8_t* dataSrc, size_t dataLen, uint8_t* outBlock) { std::fill_n(outBlock, 64, 0); std::memcpy(outBlock, dataSrc, dataLen); outBlock[dataLen] = 0x80; const uint32_t bitLen = (uint32_t)(dataLen * 8); outBlock[60] = (uint8_t)((bitLen >> 24) & 0xFF); outBlock[61] = (uint8_t)((bitLen >> 16) & 0xFF); outBlock[62] = (uint8_t)((bitLen >> 8) & 0xFF); outBlock[63] = (uint8_t)( bitLen & 0xFF); } inline void prepareRipemdBlock(const uint8_t* dataSrc, uint8_t* outBlock) { std::fill_n(outBlock, 64, 0); std::memcpy(outBlock, dataSrc, 32); outBlock[32] = 0x80; const uint32_t bitLen = 256; outBlock[60] = (uint8_t)((bitLen >> 24) & 0xFF); outBlock[61] = (uint8_t)((bitLen >> 16) & 0xFF); outBlock[62] = (uint8_t)((bitLen >> 8) & 0xFF); outBlock[63] = (uint8_t)( bitLen & 0xFF); }
// Computing hash160 using avx512 (16 hashes per try) static void computeHash160BatchBinSingle(int numKeys, uint8_t pubKeys[][33], uint8_t hashResults[][20]) { std::array<std::array<uint8_t, 64>, HASH_BATCH_SIZE> shaInputs; std::array<std::array<uint8_t, 32>, HASH_BATCH_SIZE> shaOutputs; std::array<std::array<uint8_t, 64>, HASH_BATCH_SIZE> ripemdInputs; std::array<std::array<uint8_t, 20>, HASH_BATCH_SIZE> ripemdOutputs;
const size_t totalBatches = (numKeys + (HASH_BATCH_SIZE - 1)) / HASH_BATCH_SIZE; for (size_t batch = 0; batch < totalBatches; batch++) { const size_t batchCount = std::min<size_t>(HASH_BATCH_SIZE, numKeys - batch * HASH_BATCH_SIZE); for (size_t i = 0; i < batchCount; i++) { const size_t idx = batch * HASH_BATCH_SIZE + i; prepareShaBlock(pubKeys[idx], 33, shaInputs[i].data()); } for (size_t i = batchCount; i < HASH_BATCH_SIZE; i++) { std::memcpy(shaInputs[i].data(), shaInputs[0].data(), 64); } const uint8_t* inPtr[HASH_BATCH_SIZE]; uint8_t* outPtr[HASH_BATCH_SIZE]; for (int i = 0; i < HASH_BATCH_SIZE; i++) { inPtr[i] = shaInputs[i].data(); outPtr[i] = shaOutputs[i].data(); } sha256_avx512_16B( inPtr[0], inPtr[1], inPtr[2], inPtr[3], inPtr[4], inPtr[5], inPtr[6], inPtr[7], inPtr[8], inPtr[9], inPtr[10], inPtr[11], inPtr[12], inPtr[13], inPtr[14], inPtr[15], outPtr[0], outPtr[1], outPtr[2], outPtr[3], outPtr[4], outPtr[5], outPtr[6], outPtr[7], outPtr[8], outPtr[9], outPtr[10], outPtr[11], outPtr[12],outPtr[13], outPtr[14], outPtr[15] );
for (size_t i = 0; i < batchCount; i++) { prepareRipemdBlock(shaOutputs[i].data(), ripemdInputs[i].data()); } for (size_t i = batchCount; i < HASH_BATCH_SIZE; i++) { std::memcpy(ripemdInputs[i].data(), ripemdInputs[0].data(), 64); } for (int i = 0; i < HASH_BATCH_SIZE; i++) { inPtr[i] = ripemdInputs[i].data(); outPtr[i] = ripemdOutputs[i].data(); } ripemd160avx512::ripemd160avx512_32( (unsigned char*)inPtr[0], (unsigned char*)inPtr[1], (unsigned char*)inPtr[2], (unsigned char*)inPtr[3], (unsigned char*)inPtr[4], (unsigned char*)inPtr[5], (unsigned char*)inPtr[6], (unsigned char*)inPtr[7], (unsigned char*)inPtr[8], (unsigned char*)inPtr[9], (unsigned char*)inPtr[10], (unsigned char*)inPtr[11], (unsigned char*)inPtr[12], (unsigned char*)inPtr[13], (unsigned char*)inPtr[14], (unsigned char*)inPtr[15], outPtr[0], outPtr[1], outPtr[2], outPtr[3], outPtr[4], outPtr[5], outPtr[6], outPtr[7], outPtr[8], outPtr[9], outPtr[10], outPtr[11], outPtr[12], outPtr[13], outPtr[14], outPtr[15] ); for (size_t i = 0; i < batchCount; i++) { const size_t idx = batch * HASH_BATCH_SIZE + i; std::memcpy(hashResults[idx], ripemdOutputs[i].data(), 20); } } }
//------------------------------------------------------------------------------ static void printUsage(const char* programName) { std::cerr << "Usage: " << programName << " -a <Base58_P2PKH> -r <START:END>\n"; }
static std::string formatElapsedTime(double seconds) { int hrs = (int)seconds / 3600; int mins = ((int)seconds % 3600) / 60; int secs = (int)seconds % 60; std::ostringstream oss; oss << std::setw(2) << std::setfill('0') << hrs << ":" << std::setw(2) << std::setfill('0') << mins << ":" << std::setw(2) << std::setfill('0') << secs; return oss.str(); }
static void printStatsBlock(int numCPUs, const std::string &targetAddr, const std::string &rangeStr, double mkeysPerSec, unsigned long long totalChecked, double elapsedTime, long double progressPercent) { static bool firstPrint = true; const int total_lines = 9; if (!firstPrint) { std::cout << "\033[" << total_lines << "A"; } else { firstPrint = false; }
std::cout << "\033[2K" << "================= WORK IN PROGRESS =================\n" << "\033[2K" << "Target Address: " << targetAddr << "\n" << "\033[2K" << "CPU Threads : " << numCPUs << "\n" << "\033[2K" << "Mkeys/s : " << std::fixed << std::setprecision(2) << mkeysPerSec << "\n" << "\033[2K" << "Total Checked : " << totalChecked << "\n" << "\033[2K" << "Elapsed Time : " << formatElapsedTime(elapsedTime) << "\n" << "\033[2K" << "Range : " << rangeStr << "\n" << "\033[2K" << "Progress : " << std::fixed << std::setprecision(10) << progressPercent << " %\n";
std::cout << "\033[2K" << "Last Key : " << g_lastKeyGlobal << "\n";
std::cout.flush(); }
//------------------------------------------------------------------------------ struct ThreadRange { std::string startHex; std::string endHex; };
static std::vector<ThreadRange> g_threadRanges;
// Returns a random Int uniformly in [start,end] static Int getRandomKeyInRange(Int start, Int end, std::mt19937_64 &gen) { Int rangeSize = end; Int one; one.SetInt32(1); rangeSize.Sub(&start); rangeSize.Add(&one);
// Construct a random big-int of the right size size_t nBytes = rangeSize.GetBase16().size()/2 + 1; std::vector<uint64_t> randWords((nBytes+7)/8); for (auto &w : randWords) w = gen(); Int offset = hexToInt(bigNumToHex(randWords)); offset.Mod(&rangeSize);
Int result = start; result.Add(&offset); return result; }
//------------------------------------------------------------------------------ int main(int argc, char* argv[]) { std::cout << "\033[2J\033[H"; bool addressProvided = false, rangeProvided = false; std::string targetAddress, rangeInput; std::vector<uint8_t> targetHash160;
for (int i = 1; i < argc; i++) { if (!std::strcmp(argv[i], "-a") && i + 1 < argc) { targetAddress = argv[++i]; addressProvided = true; try { targetHash160 = P2PKHDecoder::getHash160(targetAddress); if (targetHash160.size() != 20) throw std::invalid_argument("Invalid hash160 length."); } catch (const std::exception &ex) { std::cerr << "Error parsing address: " << ex.what() << "\n"; return 1; } } else if (!std::strcmp(argv[i], "-r") && i + 1 < argc) { rangeInput = argv[++i]; rangeProvided = true; } else { std::cerr << "Unknown parameter: " << argv[i] << "\n"; printUsage(argv[0]); return 1; } } if (!addressProvided || !rangeProvided) { std::cerr << "Both -a <Base58_P2PKH> and -r <START:END> are required!\n"; printUsage(argv[0]); return 1; }
const size_t colonPos = rangeInput.find(':'); if (colonPos == std::string::npos) { std::cerr << "Invalid range format. Use <START:END> in HEX.\n"; return 1; } const std::string rangeStartHex = rangeInput.substr(0, colonPos); const std::string rangeEndHex = rangeInput.substr(colonPos + 1);
auto rangeStart = hexToBigNum(rangeStartHex); auto rangeEnd = hexToBigNum(rangeEndHex);
bool validRange = false; if (rangeStart.size() < rangeEnd.size()) { validRange = true; } else if (rangeStart.size() > rangeEnd.size()) { validRange = false; } else { validRange = true; for (int i = (int)rangeStart.size() - 1; i >= 0; --i) { if (rangeStart[i] < rangeEnd[i]) { break; } else if (rangeStart[i] > rangeEnd[i]) { validRange = false; break; } } } if (!validRange) { std::cerr << "Range start must be <= range end.\n"; return 1; }
auto rangeSize = bigNumSubtract(rangeEnd, rangeStart); rangeSize = bigNumAdd(rangeSize, singleElementVector(1ULL));
const std::string rangeSizeHex = bigNumToHex(rangeSize); const long double totalRangeLD = hexStrToLongDouble(rangeSizeHex); Int rangeSizeInt = hexToInt(rangeSizeHex); const int numCPUs = omp_get_num_procs();
auto [chunkSize, remainder] = bigNumDivide(rangeSize, (uint64_t)numCPUs); g_threadRanges.resize(numCPUs);
std::vector<uint64_t> currentStart = rangeStart; for (int t = 0; t < numCPUs; t++) { auto currentEnd = bigNumAdd(currentStart, chunkSize); if (t < (int)remainder) { currentEnd = bigNumAdd(currentEnd, singleElementVector(1ULL)); } currentEnd = bigNumSubtract(currentEnd, singleElementVector(1ULL));
g_threadRanges[t].startHex = bigNumToHex(currentStart); g_threadRanges[t].endHex = bigNumToHex(currentEnd);
currentStart = bigNumAdd(currentEnd, singleElementVector(1ULL)); } const std::string displayRange = g_threadRanges.front().startHex + ":" + g_threadRanges.back().endHex;
unsigned long long globalComparedCount = 0ULL; double globalElapsedTime = 0.0; double mkeysPerSec = 0.0;
const auto tStart = std::chrono::high_resolution_clock::now(); auto lastStatusTime = tStart;
std::string foundPrivateKeyHex, foundPublicKeyHex, foundWIF;
Secp256K1 secp; secp.Init();
// PARALLEL COMPUTING BLOCK #pragma omp parallel num_threads(numCPUs) \ shared(globalComparedCount, globalElapsedTime, mkeysPerSec, matchFound, \ foundPrivateKeyHex, foundPublicKeyHex, foundWIF, \ tStart, lastStatusTime) { const int threadId = omp_get_thread_num(); std::mt19937_64 gen(std::random_device{}() + threadId); std::uniform_int_distribution<uint64_t> dist64(0, ~0ULL); Int rangeStartInt = hexToInt(g_threadRanges[threadId].startHex); Int rangeEndInt = hexToInt(g_threadRanges[threadId].endHex); Int privateKey;
// Precomputing +i*G and -i*G for i=0..255 std::vector<Point> plusPoints(POINTS_BATCH_SIZE); std::vector<Point> minusPoints(POINTS_BATCH_SIZE); for (int i = 0; i < POINTS_BATCH_SIZE; i++) { Int tmp; tmp.SetInt32(i); Point p = secp.ComputePublicKey(&tmp); plusPoints[i] = p; p.y.ModNeg(); minusPoints[i] = p; }
// Arrays for batch-adding std::vector<Int> deltaX(POINTS_BATCH_SIZE); IntGroup modGroup(POINTS_BATCH_SIZE);
// Save 512 publickeys const int fullBatchSize = 2 * POINTS_BATCH_SIZE; std::vector<Point> pointBatch(fullBatchSize);
// Buffers for hashing uint8_t localPubKeys[fullBatchSize][33]; uint8_t localHashResults[HASH_BATCH_SIZE][20]; int localBatchCount = 0; int pointIndices[HASH_BATCH_SIZE];
// Local count unsigned long long localComparedCount = 0ULL;
// Load the target (hash160) into an __m128i for fast comparison __m128i target16 = _mm_loadu_si128(reinterpret_cast<const __m128i*>(targetHash160.data()));
// main while (true) { privateKey = getRandomKeyInRange(rangeStartInt, rangeEndInt, gen); #pragma omp critical(stats) { g_lastKeyGlobal = padHexTo64(intToHex(privateKey)); }
// startPoint = privateKey * G Int currentBatchKey; currentBatchKey.Set(&privateKey); Point startPoint = secp.ComputePublicKey(&currentBatchKey);
// Divide the batch of 512 keys into 2 blocks of 256 keys, count +256 and -256 from the center G-point of the batch // First pointBatch[0..255] + for (int i = 0; i < POINTS_BATCH_SIZE; i++) { deltaX[i].ModSub(&plusPoints[i].x, &startPoint.x); } modGroup.Set(deltaX.data()); modGroup.ModInv(); for (int i = 0; i < POINTS_BATCH_SIZE; i++) { Point tempPoint = startPoint; Int deltaY; deltaY.ModSub(&plusPoints[i].y, &startPoint.y); Int slope; slope.ModMulK1(&deltaY, &deltaX[i]); Int slopeSq; slopeSq.ModSquareK1(&slope);
Int tmpX; tmpX.Set(&startPoint.x); tmpX.ModNeg(); tmpX.ModAdd(&slopeSq); tmpX.ModSub(&plusPoints[i].x); tempPoint.x.Set(&tmpX);
Int diffX; diffX.Set(&startPoint.x); diffX.ModSub(&tempPoint.x); diffX.ModMulK1(&slope); tempPoint.y.ModNeg(); tempPoint.y.ModAdd(&diffX);
pointBatch[i] = tempPoint; }
// Second pointBatch[256..511] - for (int i = 0; i < POINTS_BATCH_SIZE; i++) { Point tempPoint = startPoint; Int deltaY; deltaY.ModSub(&minusPoints[i].y, &startPoint.y); Int slope; slope.ModMulK1(&deltaY, &deltaX[i]); Int slopeSq; slopeSq.ModSquareK1(&slope);
Int tmpX; tmpX.Set(&startPoint.x); tmpX.ModNeg(); tmpX.ModAdd(&slopeSq); tmpX.ModSub(&minusPoints[i].x); tempPoint.x.Set(&tmpX);
Int diffX; diffX.Set(&startPoint.x); diffX.ModSub(&tempPoint.x); diffX.ModMulK1(&slope); tempPoint.y.ModNeg(); tempPoint.y.ModAdd(&diffX);
pointBatch[POINTS_BATCH_SIZE + i] = tempPoint; }
// Construct local buffer for (int i = 0; i < fullBatchSize; i++) { pointToCompressedBin(pointBatch[i], localPubKeys[localBatchCount]); pointIndices[localBatchCount] = i; localBatchCount++;
// 8 keys are ready - time to use avx512 if (localBatchCount == HASH_BATCH_SIZE) { computeHash160BatchBinSingle(localBatchCount, localPubKeys, localHashResults); // Results check for (int j = 0; j < HASH_BATCH_SIZE; j++) { __m128i cand16 = _mm_loadu_si128(reinterpret_cast<const __m128i*>(localHashResults[j])); __m128i cmp = _mm_cmpeq_epi8(cand16, target16); if (_mm_movemask_epi8(cmp) == 0xFFFF) { // Checking last 4 bytes (20 - 16) if (!matchFound && std::memcmp(localHashResults[j], targetHash160.data(), 20) == 0) { #pragma omp critical(stats) { if (!matchFound) { matchFound = true; auto tEndTime = std::chrono::high_resolution_clock::now(); globalElapsedTime = std::chrono::duration<double>(tEndTime - tStart).count(); mkeysPerSec = (double)(globalComparedCount + localComparedCount) / globalElapsedTime / 1e6;
// Recovering private key Int matchingPrivateKey; matchingPrivateKey.Set(&currentBatchKey); int idx = pointIndices[j]; if (idx < POINTS_BATCH_SIZE) { Int offset; offset.SetInt32(idx); matchingPrivateKey.Add(&offset); } else { Int offset; offset.SetInt32(idx - POINTS_BATCH_SIZE); matchingPrivateKey.Sub(&offset); } foundPrivateKeyHex = padHexTo64(intToHex(matchingPrivateKey));
Point matchedPoint = pointBatch[idx]; foundPublicKeyHex = pointToCompressedHex(matchedPoint); foundWIF = P2PKHDecoder::compute_wif(foundPrivateKeyHex, true); } } #pragma omp cancel parallel } localComparedCount++; } else { localComparedCount++; } } localBatchCount = 0; } }
// Time to show status (once every statusIntervalSec seconds, light locking) auto now = std::chrono::high_resolution_clock::now(); double dt = std::chrono::duration<double>(now - lastStatusTime).count(); if (dt >= statusIntervalSec) { #pragma omp atomic globalComparedCount += localComparedCount; localComparedCount = 0ULL;
globalElapsedTime = std::chrono::duration<double>(now - tStart).count(); mkeysPerSec = (double)globalComparedCount / globalElapsedTime / 1e6; long double p = (long double)globalComparedCount / totalRangeLD * 100.0L; long double progressPercent = (p > 100.0L ? 100.0L : p);
if (threadId == 0) { #pragma omp critical(stats) { printStatsBlock(numCPUs, targetAddress, displayRange, mkeysPerSec, globalComparedCount, globalElapsedTime, progressPercent); } lastStatusTime = now; } }
if (matchFound) { break; } }
} // end of parallel section
// Main results auto tEnd = std::chrono::high_resolution_clock::now(); globalElapsedTime = std::chrono::duration<double>(tEnd - tStart).count();
if (!matchFound) { mkeysPerSec = (double)globalComparedCount / globalElapsedTime / 1e6; std::cout << "\nNo match found.\n"; std::cout << "Total Checked : " << globalComparedCount << "\n"; std::cout << "Elapsed Time : " << formatElapsedTime(globalElapsedTime) << "\n"; std::cout << "Speed : " << mkeysPerSec << " Mkeys/s\n"; return 0; }
// If the key was found std::cout << "================== FOUND MATCH! ==================\n"; std::cout << "Private Key : " << foundPrivateKeyHex << "\n"; std::cout << "Public Key : " << foundPublicKeyHex << "\n"; std::cout << "WIF : " << foundWIF << "\n"; std::cout << "P2PKH Address : " << targetAddress << "\n"; std::cout << "Total Checked : " << globalComparedCount << "\n"; std::cout << "Elapsed Time : " << formatElapsedTime(globalElapsedTime) << "\n"; std::cout << "Speed : " << mkeysPerSec << " Mkeys/s\n"; return 0; }
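For anyone compiling this: per printUsage() in the code above, the program takes a Base58 P2PKH address and a hex key range, so a run against the 71-bit range discussed in this thread would look roughly like the line below (the address is the one mentioned earlier in the thread, and the range values are shown for illustration; adjust both to your own target).

./Cyclone -a 1PWo3JeB9jrGwfHDNpdGK54CRas7fsVzXU -r 400000000000000000:7FFFFFFFFFFFFFFFFF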
|
|
|
|
nomachine
|
 |
May 11, 2025, 05:06:20 AM |
|
It's here! the first "breaking the elliptic curve" quantum computer cryptography competition is here! The QDay Prize is the first truly global quantum cryptanalysis competition with a 1 BTC prize. Entries are open, will anyone enter? https://www.qdayprize.org/What's clear is that a major breakthrough in data decryption will occur very soon, and the 160 puzzle will certainly be decrypted this way. huh.... the silence in the chat indicates people are feverishly trying their own version of this A specialized laboratory is required for such a quantum computer. Not only does it need laser-generated (radiated) random numbers, but it also requires a quantum computer with a specific type of qubit optimized for Shor's algorithm, high-efficiency power supplies, sub-zero cooling with liquid nitrogen, and a fully controlled environment including air humidity.. The power required is about 3MW (like a train at full speed). However, laser-generated random numbers are not directly tied to the security of ECDSA or most other cryptographic algorithms. Random number generation is a separate component of cryptography, essential for key generation and other cryptographic operations. For example: https://arxiv.org/pdf/2302.06639.pdfThis paper discusses related concepts. Current IBM Quantum systems and other publicly available quantum computers do not possess the necessary hardware (such as arithmetic circuits) for "126+ logical qubits with error correction" (e.g., "Cat Qubits"). If someone were to achieve this, three-letter agencies would immediately recognize who accomplished it and they would know exactly where those researchers are located.  fbi? Did you read the title of the scientific paper? They claim that they can crack 256-bit in 9 hours. All agencies that exist are interested in this. 
|
BTC: bc1qdwnxr7s08xwelpjy3cc52rrxg63xsmagv50fa8
|
|
|
Akito S. M. Hosana
Jr. Member
Offline
Activity: 350
Merit: 8
|
 |
May 11, 2025, 05:24:54 AM |
|
It's here! the first "breaking the elliptic curve" quantum computer cryptography competition is here! The QDay Prize is the first truly global quantum cryptanalysis competition with a 1 BTC prize. Entries are open, will anyone enter? https://www.qdayprize.org/What's clear is that a major breakthrough in data decryption will occur very soon, and the 160 puzzle will certainly be decrypted this way. huh.... the silence in the chat indicates people are feverishly trying their own version of this A specialized laboratory is required for such a quantum computer. Not only does it need laser-generated (radiated) random numbers, but it also requires a quantum computer with a specific type of qubit optimized for Shor's algorithm, high-efficiency power supplies, sub-zero cooling with liquid nitrogen, and a fully controlled environment including air humidity.. The power required is about 3MW (like a train at full speed). However, laser-generated random numbers are not directly tied to the security of ECDSA or most other cryptographic algorithms. Random number generation is a separate component of cryptography, essential for key generation and other cryptographic operations. For example: https://arxiv.org/pdf/2302.06639.pdfThis paper discusses related concepts. Current IBM Quantum systems and other publicly available quantum computers do not possess the necessary hardware (such as arithmetic circuits) for "126+ logical qubits with error correction" (e.g., "Cat Qubits"). If someone were to achieve this, three-letter agencies would immediately recognize who accomplished it and they would know exactly where those researchers are located.  fbi? Did you read the title of the scientific paper? They claim that they can crack 256-bit in 9 hours. All agencies that exist are interested in this.  Could this be weaponized? 
|
|
|
|
|