kTimesG
Member
Offline
Activity: 257
Merit: 44
|
|
October 11, 2024, 04:20:49 PM |
|
That's what I preach, you can't mess with the birthday paradox without losing efficiency, if your software doesn't do this right for you, at least you're not looking for a fish in the sky. My comments are directed towards @ktimesg's crazy proposals. If you read the context you'll understand. In fact, it is wrong to increase the jumps, you only lose efficiency, it is an exact science, I demonstrate it in my previous script.

What do you understand by "efficiency"? It's not the same thing as "speed" or "complexity"; it's a more overall indicator that accounts for a multitude of factors, one of which is the practical techniques being used, and another is the problem you are trying to solve.

Why is it wrong to increase the jump (I guess you meant the average jump size)? You're pretty much dismissing all existing research with this statement, and the math looks pretty legit IMHO (with well-defined proofs for why it's optimal to use a specific average jump size to minimize the expected number of operations, etc.). By your (assumed) logic, continued in reverse, we should basically run a brute-force search, one point after the other, right? So as not to mess with the damn birthday paradox, losing efficiency... am I wrong?

But the joke's on you: messing around with the way you intend to use some theory only ends up decreasing the efficiency. If you want better "birthday paradox" results, then you lose efficiency, because you do more operations. It does not matter how you split the interval, whether you make the jump sizes smaller or larger, whether you increase the DP to an abnormal magnitude, whether you store trillions of points in the cloud, or whether you randomize starting points instead of having a central database of working state; the end result is the same: sub-optimal. And this is an objective, measurable metric, not some personal opinion.

Maybe you can enlighten us about your exact science. Let's talk theory, not "you'll never find 135 using this or that"; that is not the main point here anymore. Let's have some fun in the realm of exact science!
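For what it's worth, here is a toy experiment that shows why the average jump size is not a free parameter. It uses plain integer walks, no EC math, and every constant is made up for illustration: with too small a mean jump the catch-up phase dominates, with too large a one the collision phase dominates, and the sweet spot sits around sqrt(interval)/2.

import random

def jumps_until_collision(N, t, rng, cap=500_000):
    """Toy tame/wild walk over plain integers (no EC math). Jump sizes are the
    classic powers of two 2^0..2^(t-1); the size is picked by a deterministic
    hash of the current position, so the two walks coalesce once they touch."""
    table = [1 << i for i in range(t)]                  # mean jump ~ (2^t - 1) / t
    f = lambda x: table[((x * 0x9E3779B97F4A7C15) >> 32) % t]

    k = rng.randrange(N)             # the "unknown key": wild start
    tame, wild = N, k                # tame starts at the end of the interval
    seen_tame, seen_wild = {tame}, {wild}
    for ops in range(1, cap):
        if wild <= tame:             # always advance whichever walker is behind
            wild += f(wild)
            if wild in seen_tame:
                return ops
            seen_wild.add(wild)
        else:
            tame += f(tame)
            if tame in seen_wild:
                return ops
            seen_tame.add(tame)
    return cap                       # safety cap, practically never reached

if __name__ == "__main__":
    rng = random.Random(1)
    N = 1 << 20                      # toy interval; sqrt(N) = 1024
    for t in (8, 13, 18):            # mean jump ~32, ~630, ~14563
        mean = (2 ** t - 1) / t
        avg = sum(jumps_until_collision(N, t, rng) for _ in range(100)) / 100
        print(f"mean jump ~{mean:7.0f}: avg jumps until collision ~{avg:,.0f}")

The expected pattern is that the middle setting needs by far the fewest jumps, and the optimum tracks sqrt(N)/2 as you change N; both smaller and larger means only add work.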
|
|
|
|
Woz2000
Jr. Member
Offline
Activity: 85
Merit: 2
|
|
October 11, 2024, 04:36:02 PM |
|
TTD #1 for 64, TTD #1 for 66, yet 'someone else' has found the key both times. Have you ever wondered why? Fastest client? How? Secure server? Has it been audited? Startup scripts. Nice! But, either way you go, you need to plan ahead and think about items such as: the cracking program to use, the server, the client, how you will load the client and cracking program on all of those machines and hit "start"; will you have access to be able to SSH into each machine?
ttdpool and ttdclient already have this. Fastest client, secure server and startup scripts.
|
|
|
|
Akito S. M. Hosana
Jr. Member
Offline
Activity: 96
Merit: 2
|
|
October 11, 2024, 05:29:50 PM |
|
Here's another kicker: Kangaroo doesn't rely on the birthday paradox.
The Pollard Kangaroo algorithm, also known as Pollard's Lambda method, does not rely on the birthday paradox. The birthday paradox is relevant to the Rho method because Rho depends on finding a collision (i.e., two identical function evaluations). Gaudry and Schost's algorithm is a related approach for the interval discrete logarithm problem: like Rho, it uses pseudo-random walks and a birthday-paradox style collision argument, but with separate "tame" and "wild" search sets restricted to the interval. I prefer the Pollard Lambda method due to its methodical nature, low memory requirements, and efficiency in cases where the range of possible solutions is known and bounded.
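To keep the terminology concrete, here is a minimal, textbook-style lambda (kangaroo) sketch, over the multiplicative group mod a small prime rather than secp256k1, and with toy constants throughout; it stores the whole tame trail instead of a single trap just to keep the code short.

import math

def kangaroo(p, g, h, a, b):
    """Toy Pollard lambda over the multiplicative group mod p (not secp256k1):
    look for x in [a, b] with pow(g, x, p) == h. Returns x, or None on a miss."""
    N = b - a
    target = max(1, math.isqrt(N) // 2)               # aim for mean jump ~ sqrt(N)/2
    t = next(t for t in range(2, 64) if (2 ** t - 1) / t >= target)
    jumps = [1 << i for i in range(t)]                # powers-of-two jump set
    f = lambda y: jumps[y % t]

    # Tame kangaroo: start at the known top of the interval and record its trail.
    trail = {}                                        # group element -> distance travelled
    y, d = pow(g, b, p), 0
    for _ in range(4 * math.isqrt(N) + 4):
        trail[y] = d
        step = f(y)
        y = y * pow(g, step, p) % p
        d += step

    # Wild kangaroo: start at h = g^x and hop until it lands on the tame trail.
    y, d = h, 0
    limit = (b - a) + max(trail.values())
    while d <= limit:
        if y in trail:
            x = b + trail[y] - d                      # tame exponent == wild exponent
            if pow(g, x, p) == h:
                return x
        step = f(y)
        y = y * pow(g, step, p) % p
        d += step
    return None                                       # rare miss; retry with another jump set

if __name__ == "__main__":
    p, g = 1_000_000_007, 5                           # toy prime and generator
    a, b = 1 << 19, 1 << 20                           # "known and bounded" range
    h = pow(g, 777_777, p)                            # plant a secret exponent in the range
    print(kangaroo(p, g, h, a, b))                    # expected output: 777777

Memory here is just the tame trail (a few thousand entries), which is exactly why the method is attractive when the range is known and bounded.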
|
|
|
|
nomachine
Member
Offline
Activity: 490
Merit: 35
|
|
October 11, 2024, 05:46:01 PM |
|
I prefer the Pollard Lambda method due to its methodical nature, low memory requirements, and efficiency in cases where the range of possible solutions is known and bounded.
BS. Pollard Rho is preferred for general-purpose large-scale GPU ECDLP solvers, especially for SECP256K1.
|
bc1qdwnxr7s08xwelpjy3cc52rrxg63xsmagv50fa8
|
|
|
kTimesG
Member
Offline
Activity: 257
Merit: 44
|
|
October 12, 2024, 08:28:29 AM Last edit: October 12, 2024, 08:48:26 AM by kTimesG |
|
That's what I preach, you can't mess with the birthday paradox without losing efficiency, if your software doesn't do this right for you, at least you're not looking for a fish in the sky. My comments are directed towards @ktimesg's crazy proposals. If you read the context you'll understand. In fact, it is wrong to increase the jumps, you only lose efficiency, it is an exact science, I demonstrate it in my previous script.

What do you understand by "efficiency"? It's not the same thing as "speed" or "complexity"; it's a more overall indicator that accounts for a multitude of factors, one of which is the practical techniques being used, and another is the problem you are trying to solve. Why is it wrong to increase the jump (I guess you meant the average jump size)? You're pretty much dismissing all existing research with this statement, and the math looks pretty legit IMHO (with well-defined proofs for why it's optimal to use a specific average jump size to minimize the expected number of operations, etc.). By your (assumed) logic, continued in reverse, we should basically run a brute-force search, one point after the other, right? So as not to mess with the damn birthday paradox, losing efficiency... am I wrong? But the joke's on you: messing around with the way you intend to use some theory only ends up decreasing the efficiency. If you want better "birthday paradox" results, then you lose efficiency, because you do more operations. It does not matter how you split the interval, whether you make the jump sizes smaller or larger, whether you increase the DP to an abnormal magnitude, whether you store trillions of points in the cloud, or whether you randomize starting points instead of having a central database of working state; the end result is the same: sub-optimal. And this is an objective, measurable metric, not some personal opinion. Maybe you can enlighten us about your exact science. Let's talk theory, not "you'll never find 135 using this or that"; that is not the main point here anymore. Let's have some fun in the realm of exact science!

ok, explain to me why here the first script is more efficient than the second, and we will talk about math. 1 - same resources. 2 - same range, only difference x10 in jumps.

Yes, let's talk about math and resources! Can you replace this line with:

i += jump
time.sleep(0.000001)  # time required to perform a jump

and then maybe you'll understand why the first program is 10x less efficient than the second. Also, it is hard to figure out what problem the scripts are trying to solve, but it is definitely not a benchmark that proves the birthday paradox, since you shrink down the remaining options after every loop. Also, kangaroos don't stop after going past the end of the interval. They continue their walk indefinitely. That is not the same thing as the birthday paradox.

Execution time: 7.475505113601685 seconds Total matches: 46
Execution time: 0.8068211078643799 seconds Total matches: 7
|
|
|
|
mabdlmonem
Jr. Member
Offline
Activity: 35
Merit: 1
|
|
October 12, 2024, 01:26:15 PM |
|
How many GPUs are needed to find a key in the 119-bit range with Kangaroo, and how much time would it take? Is there any calculator?
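The closest thing to a calculator is a back-of-envelope estimate. The sketch below assumes the usual ~2*sqrt(N) expected group operations for an interval of N keys, and a made-up rate of 1e9 jumps per second per GPU; plug in whatever rate your own hardware and software actually reach.

import math

# Back-of-envelope only; both the 2*sqrt(N) figure and the GPU rate are assumptions.
N = 1 << 118                        # puzzle 119 interval [2^118, 2^119) has 2^118 keys
expected_ops = 2 * math.isqrt(N)    # rough expected number of group operations
jumps_per_gpu = 1e9                 # hypothetical per-GPU jump rate (jumps/second)
for gpus in (1, 100, 10_000):
    days = expected_ops / (gpus * jumps_per_gpu) / 86_400
    print(f"{gpus:>6} GPUs -> ~{days:,.1f} days")

With those made-up numbers it comes out to roughly 13,000 GPU-days in total, so the real question is only how many GPUs you can run in parallel and at what actual jump rate.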
|
|
|
|
kTimesG
Member
Offline
Activity: 257
Merit: 44
|
|
October 12, 2024, 05:04:55 PM |
|
Yes, let's talk about math and resources! Can you replace this line with:

i += jump
time.sleep(0.000001)  # time required to perform a jump

and then maybe you'll understand why the first program is 10x less efficient than the second. Also, it is hard to figure out what problem the scripts are trying to solve, but it is definitely not a benchmark that proves the birthday paradox, since you shrink down the remaining options after every loop. Also, kangaroos don't stop after going past the end of the interval. They continue their walk indefinitely. That is not the same thing as the birthday paradox.

Execution time: 7.475505113601685 seconds Total matches: 46
Execution time: 0.8068211078643799 seconds Total matches: 7
If you respond with questions, you are not answering, you are just diverting the topic.

Also, kangaroos don't stop after going past the end of the interval. They continue their walk indefinitely. That is not the same thing as the birthday paradox.

Even if we make infinite cycles, the first one will be more efficient. I'm just trying to make you understand that the difficulty of the puzzles is exponential; Kangaroo has already fulfilled its useful life. "Kangaroo is better at long distances" is a myth: by increasing the jumps and taking shortcuts to do fewer operations you only lose the probability of getting it right. It's not just a birthday paradox, it's common sense. BSGS must improve its DB to be more efficient. Kangaroo must improve its computing power to be more efficient.

Dude, you come here, post some junk code, junk ideas, and junk principles of thought, and you expect us to take precious time out of our lives to prove you are just raving complete nonsense. After you have all the answers on the table (there were no questions in my last post, only direct answers for you) you still insist on the same nonsense, and do not even address, or completely forget, the problems and questions that you got answers to! I am personally done with dealing with you; it is of no use for either of us, or the rest of this forum.
|
|
|
|
Akito S. M. Hosana
Jr. Member
Offline
Activity: 96
Merit: 2
|
|
October 12, 2024, 05:15:47 PM |
|
This is my king. Digaran.
|
|
|
|
ElonMusk_ia
Newbie
Offline
Activity: 23
Merit: 2
|
|
October 12, 2024, 07:16:32 PM |
|
Flat-earthers, who can beat their madness.
|
|
|
|
kTimesG
Member
Offline
Activity: 257
Merit: 44
|
|
October 12, 2024, 08:45:41 PM |
|
Flat-earthers, who can beat their madness.
Rather, people who don't understand the meaning of what they are expressing. Some examples (yeah, I'm bored).

Statement: "BSGS must improve its DB"

Fact 1: You cannot improve something below the known optimal complexity, unless you discover something of a lower complexity. However, in information theory there are informational bounds that cannot be overcome, so how exactly would one improve on something that is proven to be already optimal?

Fact 2: BSGS does not use a database; it uses what is referred to as "fast memory", which is ideally a data structure that has the lowest possible access time, with seek and search complexity of O(1) (a single fundamental operation at the information implementation layer). So any attempt to optimize something that is already as fast as possible, and beyond what the information-based paradigm allows, seems unlikely.

Fact 3: There are limiting factors in the actual physical resources, and here we are talking about the real implementation of facts 1 & 2 above, of which the best known is RAM (random access memory). RAM allows the usage of the information theory principle of "fast memory". So, any attempt to optimize on Fact 3 is limited by the available technology. If we analyze the requirements of solving a specific problem, we are going to be quickly slapped in the face by the realization that one cannot simply add many magnitudes of RAM to some computer system (or even a super-computer system), due to a lack of technological availability.

Statement: "Kangaroo must improve its computing power to be more efficient."

Fact 1: Kangaroo is an algorithm; it does not have "computing power", which is something in the realm of practical, concrete implementation.

Fact 2: Efficiency is also something that correlates to a practical implementation.

Fact 3: We can always add more computing power. In contrast to "we cannot always add more and more RAM for our fast-memory-based algorithm".

So, these facts make that statement pretty bleak. Why? Because we can always add more computing power; it is not a "must improve", it is rather "this is something that can already be done and profited from in our implementation". In contrast, can we arbitrarily add more computing power to BSGS? No, because the fast-memory real-life implementation (RAM) is only fast because it's physically connected to a single computing system, not many. Can we arbitrarily add more fast memory to BSGS? No, because we have technology limitations. Can we fit more data in the same fast-memory RAM in BSGS? Well, sure we can, buddy, but the information theory bounds will again hit you very hard in the face, because now you need some form of processing. These problems are not something that you can simply cheat on and call it revolutionary or miraculous. These are well-defined problems rooted in the information field itself, before any other kind of talk about opinions.

Dude, you come here, post some junk code, junk ideas, and junk principles of thought, and you expect us to take precious time out of our lives to prove you are just raving complete nonsense. After you have all the answers on the table (there were no questions in my last post, only direct answers for you) you still insist on the same nonsense, and do not even address, or completely forget, the problems and questions that you got answers to! I am personally done with dealing with you; it is of no use for either of us, or the rest of this forum.

Haha, this is an ad hominem fallacy. Instead of addressing the argument, you attack to discredit and divert attention from the main issue.

If I were you, I would delete that code you posted, which I proved is running 10x slower than what you were bragging about as being 10x more efficient. If 10x slower means "10x more efficient" for you, then yeah, my bad.
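To make the "fast memory" point above concrete, here is a minimal BSGS sketch over the multiplicative group mod a toy prime (again, not secp256k1). The whole algorithm is one table fill plus a single O(1) dictionary probe per giant step, and that table is exactly the part that stops scaling once it no longer fits in RAM.

import math

def bsgs(p, g, h, n):
    """Toy baby-step giant-step over the multiplicative group mod p: find x in
    [0, n) with pow(g, x, p) == h, or return None. The dict is the 'fast memory'."""
    m = math.isqrt(n) + 1
    baby = {}                                   # g^j -> j  (the baby-step table)
    e = 1
    for j in range(m):
        baby.setdefault(e, j)
        e = e * g % p
    inv_gm = pow(pow(g, p - 2, p), m, p)        # g^(-m) via Fermat's little theorem
    y = h
    for i in range(m):                          # giant steps: one O(1) probe each
        if y in baby:
            return i * m + baby[y]
        y = y * inv_gm % p
    return None

if __name__ == "__main__":
    p, g = 1_000_000_007, 5                     # toy prime and generator
    secret = 123_456_789
    print(bsgs(p, g, pow(g, secret, p), 1 << 30))   # expected output: 123456789

The table needs about sqrt(n) entries, so every doubling of the range multiplies the memory by roughly 1.4; that is the wall you hit long before you run out of compute.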
|
|
|
|
kTimesG
Member
Offline
Activity: 257
Merit: 44
|
|
October 12, 2024, 10:00:35 PM |
|
If I were you, I would delete that code you posted, which I proved is running 10x slower than what you were bragging about as being 10x more efficient. If 10x slower means "10x more efficient" for you, then yeah, my bad.
You mean this: Execution time: 7.475505113601685 seconds Total matches: 46
Execution time: 0.8068211078643799 seconds Total matches: 7
That clearly shows my point: the first is more efficient, with 46 coincidences, and the second, although it is faster, obtains a low coincidence rate. Doesn't the second seem similar to what you propose with Kangaroo?

Your "point" makes sense only in the context of the code you provided. The code you provided has no resemblance, structure, or respect to the Kangaroo algorithm. It also has no correlation to the birthday paradox. You are the only one who actually knows what exactly your point was about in there. And even if you had actually explained what that code is trying to do (which you haven't), the second code is still observably faster (and hence more efficient) when it finds the same amount of matches as the first one does. But again, I cannot understand what your code was attempting to prove. However, you stated that they both solve the same problem (whatever that is), so the comparison is fair if the comparing factor is "how many matches does it find", again in the absence of a concrete statement of what problem it's trying to solve. Maybe if you come up with something that actually has to do with the Kangaroo algorithm (or, of course, with the birthday paradox, though that would be something that has nothing to do with Kangaroo), maybe someone will bother to check it. I can promise it won't be me.

And one last thing I have to address to you, since it's my last one: I am not, and never was, either digaran or any other user on this forum. Good luck mate.
|
|
|
|
WanderingPhilospher
Full Member
Offline
Activity: 1204
Merit: 237
Shooters Shoot...
|
|
October 13, 2024, 07:38:59 AM |
|
Important info stored in here, related to the wallet challenges / puzzles: 50BA1F083DE4F022B32996C8070B71F7D27A73E439AE20E5B87B85F3064835EDDB98AFF04FA09B4D66EA70436C44D927B48408D85D4AB69E57CD466CF922E9A7
More to come...
|
|
|
|
citb0in
|
|
October 13, 2024, 08:20:02 AM |
|
Important info stored in here, related to the wallet challenges / puzzles: 50BA1F083DE4F022B32996C8070B71F7D27A73E439AE20E5B87B85F3064835EDDB98AFF04FA09B4D66EA70436C44D927B48408D85D4AB69E57CD466CF922E9A7
More to come...
good luck with TX
|
|
|
|
kTimesG
Member
Offline
Activity: 257
Merit: 44
|
|
October 13, 2024, 08:56:57 AM Last edit: October 13, 2024, 09:15:51 AM by kTimesG |
|
Seriously? You say the second one has better or equal success rate?
Nah, I just mean that if you run the second one as many times as needed to get the same number of matches as the first one, then it runs in less time, on average, on the same system, under the same resources and the same conditions. You know, in order to compare them, by bringing both to a common denominator. But again, the code is not a demonstration of either kang or the b-day paradox, so God knows what you were trying to prove... It might as well count apples in Satoshi's bag in the story above, as far as I'm concerned.

My script is the simplest demonstration of Kangaroo's obsolescence.

Your script samples some arbitrary number of items and then traverses some interval pseudo-randomly; it is neither a demonstration of Kangaroo, nor is it using the b-day paradox.

If you want to talk about Kangaroo, just try bit 66, 67, 68... 125.

If you want to talk about Kangaroo, then understand it correctly, code a proper implementation, and you'll be amazed - it works at the same speed and the same efficiency no matter what interval you run it on. This is why we have notions like "algorithm complexity" and big O notation, which are relative to the problem size and to fundamental time concepts, not to random opinions.

That way you see that the higher the puzzle, the worse it performs, and there comes a point where getting the necessary power would be more expensive than the prize itself.

That is generic to any problem and any algorithm. However, some algorithms depend more on space than on time, and the main issue here is that the upper bound on the space you can use to solve the problem is much, much lower than the upper bound on the time, which you can always decrease (translation: faster speed, due to more compute power).

And I won't touch on the subject of BSGS or databases because you don't understand it, and you don't even try to understand in order to speak properly.

Yeah, sure, I don't understand that even the mere fact of using a database slows down any algorithm that requires fast memory in the algorithm itself. I mean, fast memory assumes O(1) steps to do a read/write, while a database, guess what, works in a logarithmic number of steps relative to the number of entries, which is slower than O(1), so it must be taken into account if you intend to analyze the performance of an algorithm. Cool. Do you realize that a database is actually a practical-side emulation of the fast-memory concept?

You just spout fallacies with huge texts without any code. And I'm supported by the fact that all my posts, whether considered good or bad, have code to support the idea.

What fallacies are you referring to, or is it just your opinion? The fact that you actually post code is working against you, when there is no correlation between your idea, the problem, and the code. Seriously, do you believe that your two scripts in any way support the text that was in front of them, you know, the one that starts with "The correct option is..."? Your code does not dismiss anything you were given arguments for, since it has nothing to do with the said arguments, and rather with some dubious problem: let's take some random numbers, traverse an interval pseudo-randomly, and see what happens. Zero relevance to either kang or the b-day paradox; it's just some arbitrarily created problem.

One other thing you should meditate on: there is no need to make any changes to Kangaroo to benefit from pre-computation, which is what you strongly suggest, probably because you didn't understand correctly. You might as well have some already-compiled binary executable, and if you simply save results from one run to the next, then the next run will solve the next problem (same key, or another key) in less and less time. There is no change in the computing speed, only a logarithmic slowdown at the collision-check layer, which, if it's a database, means you need to double the amount of stored items before you ever need an additional step to get to a stored value. So overall, the efficiency grows, but it never decreases, no matter if you run it once, twice, or a quadrillion times. It will simply asymptotically approach the point where a single run needs to do only the minimal possible amount of computing to land on some already-computed point.
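To illustrate that last point about pre-computation, here is a toy sketch (plain integer walks again, every constant invented): tame trails from known starting points are reduced to distinguished points, and the median cost of solving a fresh "key" drops as the stored DP set grows. In a real setup the DP set would simply be persisted to disk or a database between runs.

import random

DP_MASK = (1 << 6) - 1                         # "distinguished": low 6 bits are zero
JUMPS = [1 << i for i in range(13)]            # toy jump set, mean ~630

def jump(x):
    # deterministic pseudo-random jump size for position x
    return JUMPS[((x * 0x9E3779B97F4A7C15) >> 32) % len(JUMPS)]

def add_tame_trail(dps, start, steps):
    """Pre-computation: walk from a KNOWN start, keep only distinguished points.
    In the real algorithm each DP carries its known offset; in this integer toy
    the position itself is that offset, so a plain set is enough."""
    pos = start
    for _ in range(steps):
        if pos & DP_MASK == 0:
            dps.add(pos)
        pos += jump(pos)

def solve(dps, k, give_up=50_000):
    """Walk from the 'unknown' key k until landing on a stored DP; return the cost."""
    pos = k
    for ops in range(give_up):
        if pos & DP_MASK == 0 and pos in dps:
            return ops                          # hit: in the real thing, k falls out here
        pos += jump(pos)
    return give_up                              # rare miss in this toy

if __name__ == "__main__":
    rng = random.Random(3)
    N = 1 << 20
    dps = set()
    for batch in range(1, 6):
        for _ in range(40):                     # pre-compute 40 more tame trails per batch
            add_tame_trail(dps, rng.randrange(2 * N), 500)
        costs = sorted(solve(dps, rng.randrange(N)) for _ in range(51))
        print(f"{len(dps):6d} stored DPs -> median jumps per solve: {costs[25]:6d}")

Nothing about the walk itself changes between batches; only the stored table grows, and the per-solve cost shrinks, which is the "run it a quadrillion times and it only gets better" behaviour described above.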
|
|
|
|
ronin445
Newbie
Offline
Activity: 12
Merit: 0
|
|
October 13, 2024, 09:43:38 AM |
|
A further note for everyone: unless something drastically improves with being able to use a public key, such as BSGS, Kangaroo, etc., the 67, 68, and 69 wallets are all easier reached (time-wise) now versus 135.
The lower puzzles are now very risky ... Even if someone manages to crack one of them, they could still be easily stolen. It's better not to find a solution than to get it and then be robbed right away.

Please stop this wallet stealing talk. Even if unintentionally, you are creating an invitation and letting more people know. By the way, the difficulty level of puzzles 69 and 66 is the same: in puzzle 69 the first digit is 1 and known, and so on. This does make it 66-bit.
|
|
|
|
Anonymous User
Newbie
Offline
Activity: 28
Merit: 0
|
|
October 13, 2024, 09:56:34 AM |
|
A further note for everyone: unless something drastically improves with being able to use a public key, such as BSGS, Kangaroo, etc., the 67, 68, and 69 wallets are all easier reached (time-wise) now versus 135.
The lower puzzles are now very risky ... Even if someone manages to crack one of them, they could still be easily stolen. It's better not to find a solution than to get it and then be robbed right away. Please stop this wallet stealing talk. Even if unintentionally, you are creating an invitation and letting more people know. By the way, the difficulty level of puzzles 69 and 66 is the same: in puzzle 69 the first digit is 1 and known, and so on. This does make it 66-bit.

Everyone needs to be aware of this risk. Attackers can intercept and change transactions before they get mined, which is a serious issue. People should know about tools like https://slipstream.mara.com/ that help secure transactions without exposing public keys, so they don't make any mistakes in the future.
|
|
|
|
AlanJohnson
Member
Offline
Activity: 126
Merit: 11
|
|
October 13, 2024, 11:19:42 AM |
|
A further note for everyone: unless something drastically improves with being able to use a public key, such as BSGS, Kangaroo, etc., the 67, 68, and 69 wallets are all easier reached (time-wise) now versus 135.
The lower puzzles are now very risky ... Even if someone manages to crack one of them, they could still be easily stolen. It's better not to find a solution than to get it and then be robbed right away. Please stop this wallet stealing talk. Even if unintentionally, you are creating an invitation and letting more people know. By the way, the difficulty level of puzzles 69 and 66 is the same: in puzzle 69 the first digit is 1 and known, and so on. This does make it 66-bit.

I guess everyone who is trying to solve one of the lower puzzles would like to know about the risk. I don't get your point that my previous post was an "invitation" to stealing. Even if there are only a few people hunting for that public key to be revealed, the risk exists and it's better to know that.

BTW, how can 66-bit and 69-bit be at the same difficulty level? Visualize puzzle 66 like you have 66 light switches on a wall. Every switch can be only in position on or off (one or zero). To solve the puzzle you must set every switch to a certain position (and of course in a certain order)... now... how can adding three more switches keep the same level of difficulty?
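For the record, here is the arithmetic behind the switch analogy, under the usual assumption that puzzle n covers [2^(n-1), 2^n), i.e. only the top bit is known:

# Range sizes under the usual puzzle convention: puzzle n spans [2**(n-1), 2**n).
for n in (66, 69):
    keys = (1 << n) - (1 << (n - 1))
    print(f"puzzle {n}: 2^{n - 1} = {keys:_} candidate keys")
print("69 vs 66 ratio:", (1 << 68) // (1 << 65))    # -> 8, i.e. 8x more keys, not equal

So knowing that the top bit of 69 is 1 removes exactly one switch, not three; the remaining search space is still 8 times the size of puzzle 66's.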
|
|
|
|
b0dre
Newbie
Offline
Activity: 7
Merit: 0
|
|
October 13, 2024, 06:43:26 PM |
|
Simple program for puzzle 66 below. If you want to search from a specific private key number both ways up and down, just use it:

#!/usr/bin/env python3
from hdwallet import HDWallet
from hdwallet.symbols import BTC
from tqdm import tqdm
from tqdm.contrib.concurrent import process_map

hdwallet = HDWallet(symbol=BTC)

p = 0x000000000000000000000000000000000000000000000001a838b13505b26867
middle = p * 2
mid = 1000000

def go(i):
    b = hex(i)
    b = '0' * (66 - len(b)) + b[2:]
    hdwallet.from_private_key(private_key=b)
    a = hdwallet.p2pkh_address()
    if a == '13zb1hQbWVsc2S7ZTZnP2G4undNNpdh5so':
        print('private key: 0x' + b + '\a')
        exit()

process_map(go, [x for i in range(mid) for x in {middle - i: 0, middle + i: 0}], max_workers=10, chunksize=10000)

import sys
print('\a', end='', file=sys.stderr)

Here I set p for the previous puzzle's pvk and double it. middle is the variable from where the search will start.

I think your version has some errors; this version works for me:

#!/usr/bin/env python3
from hdwallet import HDWallet
from hdwallet.symbols import BTC
from tqdm.contrib.concurrent import process_map
import sys
import concurrent.futures

# Initialize HDWallet for BTC
hdwallet = HDWallet(symbol=BTC)

# Define constants
p = 0x000000000000000000000000000000000000000000000001a838b13505b26867
middle = p * 2
mid = 1000000

# Function to process private keys and check addresses
def go(i):
    # Calculate the private key from the iteration index
    private_key = middle - i if i % 2 == 0 else middle + i
    # Convert the private key to a hex string and ensure it's 64 characters
    b = hex(private_key)[2:].zfill(64)
    # Initialize wallet from private key
    hdwallet.from_private_key(private_key=b)
    # Get address
    address = hdwallet.p2pkh_address()
    # Check if the address matches the target
    if address == '13zb1hQbWVsc2S7ZTZnP2G4undNNpdh5so':
        print('Private key: 0x' + b + '\a')
        sys.exit()

# Main script execution with multiprocessing guard
if __name__ == "__main__":
    try:
        # Use process_map to parallelize the work
        process_map(go, range(mid), max_workers=10, chunksize=10000)
    except concurrent.futures.process.BrokenProcessPool:
        print("A process in the pool was terminated abruptly.", file=sys.stderr)
    except KeyboardInterrupt:
        print("Process interrupted.", file=sys.stderr)

    # Optional alert on script completion (in stderr)
    print('\a', end='', file=sys.stderr)
|
|
|
|
mabdlmonem
Jr. Member
Offline
Activity: 35
Merit: 1
|
|
October 13, 2024, 07:38:07 PM |
|
Simple program for puzzle 66 below. If you want to search from a specific private key number both ways up and down, just use it:

#!/usr/bin/env python3
from hdwallet import HDWallet
from hdwallet.symbols import BTC
from tqdm import tqdm
from tqdm.contrib.concurrent import process_map

hdwallet = HDWallet(symbol=BTC)

p = 0x000000000000000000000000000000000000000000000001a838b13505b26867
middle = p * 2
mid = 1000000

def go(i):
    b = hex(i)
    b = '0' * (66 - len(b)) + b[2:]
    hdwallet.from_private_key(private_key=b)
    a = hdwallet.p2pkh_address()
    if a == '13zb1hQbWVsc2S7ZTZnP2G4undNNpdh5so':
        print('private key: 0x' + b + '\a')
        exit()

process_map(go, [x for i in range(mid) for x in {middle - i: 0, middle + i: 0}], max_workers=10, chunksize=10000)

import sys
print('\a', end='', file=sys.stderr)

Here I set p for the previous puzzle's pvk and double it. middle is the variable from where the search will start.

I think your version has some errors; this version works for me:

#!/usr/bin/env python3
from hdwallet import HDWallet
from hdwallet.symbols import BTC
from tqdm.contrib.concurrent import process_map
import sys
import concurrent.futures

# Initialize HDWallet for BTC
hdwallet = HDWallet(symbol=BTC)

# Define constants
p = 0x000000000000000000000000000000000000000000000001a838b13505b26867
middle = p * 2
mid = 1000000

# Function to process private keys and check addresses
def go(i):
    # Calculate the private key from the iteration index
    private_key = middle - i if i % 2 == 0 else middle + i
    # Convert the private key to a hex string and ensure it's 64 characters
    b = hex(private_key)[2:].zfill(64)
    # Initialize wallet from private key
    hdwallet.from_private_key(private_key=b)
    # Get address
    address = hdwallet.p2pkh_address()
    # Check if the address matches the target
    if address == '13zb1hQbWVsc2S7ZTZnP2G4undNNpdh5so':
        print('Private key: 0x' + b + '\a')
        sys.exit()

# Main script execution with multiprocessing guard
if __name__ == "__main__":
    try:
        # Use process_map to parallelize the work
        process_map(go, range(mid), max_workers=10, chunksize=10000)
    except concurrent.futures.process.BrokenProcessPool:
        print("A process in the pool was terminated abruptly.", file=sys.stderr)
    except KeyboardInterrupt:
        print("Process interrupted.", file=sys.stderr)

    # Optional alert on script completion (in stderr)
    print('\a', end='', file=sys.stderr)
It would work only if the private key for the next puzzle is even, not odd.
|
|
|
|
karrask
Newbie
Offline
Activity: 11
Merit: 0
|
|
October 13, 2024, 07:57:31 PM |
|
Important info stored in here, related to the wallet challenges / puzzles: 50BA1F083DE4F022B32996C8070B71F7D27A73E439AE20E5B87B85F3064835EDDB98AFF04FA09B4D66EA70436C44D927B48408D85D4AB69E57CD466CF922E9A7
More to come...
Hello! If it's not difficult, can you explain what it is?
|
|
|
|
|