aby3er
Newbie
Offline
Activity: 5
Merit: 0
March 12, 2025, 03:09:56 PM
It looks so close 🙃😅...
Any hints for Puzzle 68's first 3-digit (hex) range? Hehe. I'm working on it; for now I'm not even sure about the first digit, lol.
WanderingPhilospher
Sr. Member
  
Offline
Activity: 1498
Merit: 286
Shooters Shoot...
March 12, 2025, 03:10:20 PM
Quote:
1MVDYgVaSN6Qy3d7W5nqGAUziqwJpKyhGu
I don't know the hex code of the wallet you shared. I think it starts with D, because when I calculate the hash160 and the probability with what I found, it shows that it starts with D. But if I try to guess the 2nd hex digit with low probability, I think it is between D6 and DC. True or false?

It starts with 0xE. I get what some are trying to do, I really do. I see nothing wrong with doing it this way if you are outgunned and outmatched. Maybe you get lucky before 3243294932794 (lol) GPUs beat you to it. I just don't want people to think that you can apply a set range/jump and not miss the actual address you are looking for.

- Smallest difference: 194903573833
- Largest difference: 1946984192923367

That is a huge difference between matching prefixes. One could go with a very low jump/exclusion range, but then that doesn't really provide a big speedup, IMO.
mcdouglasx
March 12, 2025, 03:14:15 PM
To verify: if we search for, say, a 3-character hex prefix, we can skip the probability space of the 2-character prefix (256) once we find an 'abc'. Clearly, in the space of the next 256, the probability of finding another 'abc' is minimal. That's why my method searches the most probable zones first and then, if necessary, reduces the percentage in the database to continue exploring without retracing steps, always focusing on the most probable place. Is it so difficult for AI experts to do this?

```python
import secrets

def generate_hex():
    # three hex characters per sample, so that the substring 'abc' can actually occur
    return ''.join(secrets.choice('0123456789abcdef') for _ in range(3))

def calculate_probabilities(attempts):
    count_ab = 0
    count_abc = 0
    for _ in range(attempts):
        hex_val = generate_hex()
        if "ab" in hex_val:
            count_ab += 1
        if "abc" in hex_val:
            count_abc += 1
    return count_ab / attempts, count_abc / attempts

attempts = 256
samples = 1000
sum_probability_ab = 0
sum_probability_abc = 0

for _ in range(samples):
    probability_ab, probability_abc = calculate_probabilities(attempts)
    sum_probability_ab += probability_ab
    sum_probability_abc += probability_abc

average_probability_ab = sum_probability_ab / samples
average_probability_abc = sum_probability_abc / samples

print(f"Average probability of finding 'ab' in {attempts} attempts "
      f"(based on {samples} samples): {average_probability_ab:.6f}")
print(f"Average probability of finding 'abc' in {attempts} attempts "
      f"(based on {samples} samples): {average_probability_abc:.6f}")
```
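As a cross-check (my own illustration, assuming the 3-character sampling described above), exact enumeration over all 16^3 three-character hex strings gives the same probabilities without sampling noise:

```python
from itertools import product

HEX = '0123456789abcdef'

# Enumerate every 3-character hex string exactly once: 16**3 = 4096 strings.
strings = [''.join(p) for p in product(HEX, repeat=3)]

count_ab = sum('ab' in s for s in strings)    # 'ab' can sit at offset 0 or 1
count_abc = sum('abc' in s for s in strings)  # only the single string 'abc'

print(f"P('ab')  = {count_ab}/{len(strings)} = {count_ab / len(strings):.6f}")
print(f"P('abc') = {count_abc}/{len(strings)} = {count_abc / len(strings):.6f}")
```

The two possible positions of 'ab' cannot overlap (that would need a character to be both 'a' and 'b'), so the exact values are 32/4096 = 1/128 for 'ab' and 1/4096 for 'abc'.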
WanderingPhilospher
Sr. Member
  
Offline
Activity: 1498
Merit: 286
Shooters Shoot...
March 12, 2025, 03:26:06 PM
Quote from: mcdouglasx
To verify: if we search for, say, a 3-character hex prefix, we can skip the probability space of the 2-character prefix (256) once we find an 'abc'. Clearly, in the space of the next 256, the probability of finding another 'abc' is minimal. That's why my method searches the most probable zones first and then, if necessary, reduces the percentage in the database to continue exploring without retracing steps, always focusing on the most probable place. Is it so difficult for AI experts to do this?

```python
import secrets

def generate_hex():
    # three hex characters per sample, so that the substring 'abc' can actually occur
    return ''.join(secrets.choice('0123456789abcdef') for _ in range(3))

def calculate_probabilities(attempts):
    count_ab = 0
    count_abc = 0
    for _ in range(attempts):
        hex_val = generate_hex()
        if "ab" in hex_val:
            count_ab += 1
        if "abc" in hex_val:
            count_abc += 1
    return count_ab / attempts, count_abc / attempts

attempts = 256
samples = 1000
sum_probability_ab = 0
sum_probability_abc = 0

for _ in range(samples):
    probability_ab, probability_abc = calculate_probabilities(attempts)
    sum_probability_ab += probability_ab
    sum_probability_abc += probability_abc

average_probability_ab = sum_probability_ab / samples
average_probability_abc = sum_probability_abc / samples

print(f"Average probability of finding 'ab' in {attempts} attempts "
      f"(based on {samples} samples): {average_probability_ab:.6f}")
print(f"Average probability of finding 'abc' in {attempts} attempts "
      f"(based on {samples} samples): {average_probability_abc:.6f}")
```
Instead of using "hypothetical" data, just use real data gathered from 67. It is a ton of data, over 50% of the entire range. Pick a subset and run the numbers. That is the data I used to come up with:

First run:
- Average difference: 282602011632656.06
- Smallest difference: 194903573833
- Largest difference: 1946984192923367

Second run (excluding smallest and largest differences):
- Average difference: 281241799946404.22

Again: can someone do this and find the key? Maybe. Could they do this and skip/miss the key? 100%.
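The shape of those gap statistics can be reproduced in miniature on synthetic data. A sketch, with random 160-bit integers standing in for real hash160 values and a 12-bit prefix standing in for the quoted prefix length (both assumptions, scaled well down from puzzle 67):

```python
import random

random.seed(1)

PREFIX_BITS = 12        # assumed prefix length, scaled down for the demo
N_KEYS = 1_000_000      # sequential toy keyspace

target = random.getrandbits(PREFIX_BITS)

# Keys whose random 160-bit "hash" shares its leading PREFIX_BITS with the target.
matches = [k for k in range(N_KEYS)
           if random.getrandbits(160) >> (160 - PREFIX_BITS) == target]
gaps = [b - a for a, b in zip(matches, matches[1:])]

print("matches     :", len(matches))
print("average gap :", sum(gaps) / len(gaps))
print("smallest gap:", min(gaps))
print("largest gap :", max(gaps))
```

The smallest and largest gaps come out orders of magnitude apart, which mirrors the point being made: consecutive prefix matches are wildly irregular, so no fixed jump size is safe.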
gygy
Newbie
Offline
Activity: 24
Merit: 0
March 12, 2025, 03:28:39 PM
Quote from: mcdouglasx
To verify: if we search for, say, a 3-character hex prefix, we can skip the probability space of the 2-character prefix (256) once we find an 'abc'. Clearly, in the space of the next 256, the probability of finding another 'abc' is minimal. That's why my method searches the most probable zones first and then, if necessary, reduces the percentage in the database to continue exploring without retracing steps, always focusing on the most probable place.
There are no more-probable or less-probable ranges. There are no ranges that you can safely skip. I can assure you it is not in the lower half of the range, with a 50% chance. The thing you are trying to sell is the same as with any other probabilities: it is random. From the fact that you found some address, you cannot draw any conclusions about addresses near that private key. What if you check postfixes instead of prefixes? Is that the same? If it is, then what if you check in different bases instead of 16? This is just superstition, and sadly, for a while, this is what this forum topic has been about. This, and bad Python code.
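That randomness claim is checkable in a toy model. Under the standard assumption that hash160 outputs behave like uniform random bits, a match tells you nothing about where the next match is; the sketch below uses 8-bit values as a stand-in for hash160 prefixes:

```python
import random

random.seed(7)

P_BITS = 8            # an 8-bit prefix stands in for a hash160 prefix
N = 1_000_000         # length of the sequential scan

target = random.getrandbits(P_BITS)

# Positions in the scan whose pseudo-hash equals the target prefix.
positions = [i for i in range(N) if random.getrandbits(P_BITS) == target]
gaps = [b - a for a, b in zip(positions, positions[1:])]

mean_gap = sum(gaps) / len(gaps)
print(f"observed mean gap: {mean_gap:.1f}   theoretical 1/p: {2 ** P_BITS}")
# Very short gaps keep occurring at exactly the rate independence predicts:
# a match does not push the next match away, nor pull it closer.
print("gaps of 10 or less:", sum(g <= 10 for g in gaps), "of", len(gaps))
```

The observed mean gap sits at 1/p regardless of where the previous match was, and matches do cluster by chance; there is no zone you can exclude after seeing one.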
mcdouglasx
March 12, 2025, 03:44:05 PM
Quote from: WanderingPhilospher
Again: can someone do this and find the key? Maybe. Could they do this and skip/miss the key? 100%.
With this, I see that you don't understand my logic. There's no way to miss the target, since the search is self-adjusting, designed to work from the most probable range down to the least probable. It covers the entire range. I thought you understood it last time. Maybe you are confusing it with Bibilgin's. Honestly, I don't see Bibilgin's logic as viable in my head, but this one is, because I only do what you call a 'full random' search. Instead of dividing into subranges, I focus on avoiding less probable ranges, and then, if necessary, I narrow down the path. Honestly, I don't understand why there is so much fuss about this; it's just probabilities.
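For what it's worth, one possible reading of that description (my interpretation only; the keyhunt mod itself has not been published) is a staged random search whose exclusion fraction shrinks to zero, which is what would guarantee full coverage in the worst case:

```python
import random

random.seed(3)

SPACE = 1 << 16                     # toy keyspace (assumption: tiny stand-in)
target = random.randrange(SPACE)    # the unknown key

def search(steps=(0.75, 0.5, 0.25, 0.0)):
    """Staged random search.

    At each stage, the top `frac` of the space is treated as 'less probable'
    and excluded; the final 0.0 stage excludes nothing, so every key is
    eventually checked and the target cannot be permanently missed.
    """
    tried = set()
    for frac in steps:
        limit = SPACE - int(SPACE * frac)   # allowed region is [0, limit)
        while len(tried) < limit:
            k = random.randrange(limit)
            if k in tried:
                continue                    # never retrace a step
            tried.add(k)
            if k == target:
                return k, len(tried)
    return None, len(tried)

found, checked = search()
print("found:", found == target, "after", checked, "checks of", SPACE)
```

The worst case degenerates to scanning everything, exactly as described; the open question raised in the replies is only whether the early stages buy any real speedup.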
WanderingPhilospher
Sr. Member
  
Offline
Activity: 1498
Merit: 286
Shooters Shoot...
March 12, 2025, 03:56:53 PM
Quote from: mcdouglasx
With this, I see that you don't understand my logic. There's no way to miss the target, since the search is self-adjusting, designed to work from the most probable range down to the least probable. It covers the entire range. I thought you understood it last time. Maybe you are confusing it with Bibilgin's. Instead of dividing into subranges, I focus on avoiding less probable ranges, and then, if necessary, I narrow down the path. Honestly, I don't understand why there is so much fuss about this; it's just probabilities.

Then yes, I would need to see more, or see it in application. Maybe a further explanation of what happens before the search (if anything), and during the search.
bibilgin
Newbie
Offline
Activity: 280
Merit: 0
March 12, 2025, 04:50:54 PM
Quote from: WanderingPhilospher
It starts with 0xE. I get what some are trying to do, I really do. I see nothing wrong with doing it this way if you are outgunned and outmatched. Maybe you get lucky before 3243294932794 (lol) GPUs beat you to it. I just don't want people to think that you can apply a set range/jump and not miss the actual address you are looking for. That is a huge difference between matching prefixes. One could go with a very low jump/exclusion range, but then that doesn't really provide a big speedup, IMO.

There is also a possibility of skipping the wallet; you are right. (We eat the first candy we like.) I can say that you kept the similar prefix short. I sent you a PM; can you answer?
DmitryMerk
Newbie
Offline
Activity: 4
Merit: 0
March 12, 2025, 04:56:58 PM
I have 15 private keys (hex) for prefix e0b8a2baee1b, and 50 private keys (hex) for prefix e0b8a2baee1.
Maybe someone wants to trade with me? :)
kTimesG
March 12, 2025, 05:13:02 PM
Quote from: gygy
This is just superstition, and sadly, for a while, this is what this forum topic has been about. This, and bad Python code.

I guess you haven't seen COBRAS's. What's wrong with having code that explodes in complexity? Most people here have zero clue about algorithms, and they complicate things terribly instead of simplifying them. Mostly because things can't really be simplified, so desperate solutions appear. Here's a quick way to compute the prefix size, in number of bits, without having to loop across strings searching for hex nonsense:

```python
h160_c = int.from_bytes(h160, 'big')  # this is a CONSTANT integer

# Magical XOR. Read it up on Wikipedia.
prefix_len = 160 - (h160_c ^ int.from_bytes(h160_d, 'big')).bit_length()
```

This is what people did, well, IDK, before AI made us all experts in bad coding. It's called using a brain.
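The one-liner can be sanity-checked with two hand-made 20-byte values whose shared prefix length is known in advance:

```python
# Two 20-byte "hash160" values that agree on exactly the first 8 bits (0xab)
# and then differ immediately in the next byte.
h160   = b'\xab\x00' + b'\x00' * 18
h160_d = b'\xab\xff' + b'\x00' * 18

h160_c = int.from_bytes(h160, 'big')  # the constant target, as an integer
prefix_len = 160 - (h160_c ^ int.from_bytes(h160_d, 'big')).bit_length()
print(prefix_len)  # 8 shared leading bits

# Same idea, now agreeing on the first 12 bits (0xab0...):
h160_d2 = b'\xab\x0f' + b'\x00' * 18
prefix_len2 = 160 - (h160_c ^ int.from_bytes(h160_d2, 'big')).bit_length()
print(prefix_len2)  # 12 shared leading bits
```

XOR zeroes every bit the two values share, so the position of the highest set bit in the result (via `bit_length`) marks the first disagreement; subtracting from 160 yields the matched prefix in bits, with no string conversion at all.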
Off the grid, training pigeons to broadcast signed messages.
mcdouglasx
March 12, 2025, 05:36:07 PM
Quote from: WanderingPhilospher
Then yes, I would need to see more, or see it in application. Maybe a further explanation of what happens before the search (if anything), and during the search.

I understand that they need to know more about what I mean, but I haven't even published the keyhunt mod yet, and I almost receive mocking attacks every 10 minutes. It seems easier for people to attack these days, even based on assumptions, without even knowing the idea they're attacking. It's funny to me. But come on, you know what I mean. I don't see a problem with probabilistic software for a huge space, especially if there's no risk of losing the key. In the worst-case scenario, it just adjusts a smaller percentage until it reaches 0% (which means scanning everything). I don't think this is so hard to understand.
WanderingPhilospher
Sr. Member
  
Offline
Activity: 1498
Merit: 286
Shooters Shoot...
March 12, 2025, 06:24:56 PM
Quote from: mcdouglasx
I understand that they need to know more about what I mean, but I haven't even published the keyhunt mod yet, and I almost receive mocking attacks every 10 minutes. I don't see a problem with probabilistic software for a huge space, especially if there's no risk of losing the key. In the worst-case scenario, it just adjusts a smaller percentage until it reaches 0% (which means scanning everything). I don't think this is so hard to understand.

Are you talking about the script you made where it pads each side of a found key so that area is skipped if landed on again? Or something else? If we are talking about the same script, yes, I see problems with it. If you are talking about something else, then I can't say. Sorry if we are talking about two different things, lol.
mcdouglasx
March 12, 2025, 08:08:40 PM
Quote from: WanderingPhilospher
Are you talking about the script you made where it pads each side of a found key so that area is skipped if landed on again? Or something else? If we are talking about the same script, yes, I see problems with it. If you are talking about something else, then I can't say. Sorry if we are talking about two different things, lol.

The idea of prefixes is complete and carries no risk of losing the key, with better statistics among other things. That's why I say it's at least absurd to receive criticism based on assumptions.

From the beginning, I talked about creating a mod for Keyhunt. I've uploaded photos here, and in each of my posts I include a disclaimer that the script is merely a representation of an idea. Then I see people complaining, and I think: didn't they read that this was just a concept, a kind of draft that serves as the basis for the idea? Anyway, I explained my idea and I know it works; I have been testing it. But I can anticipate what will happen: they will focus on possible code flaws and criticize those, because it's fashionable here to feel superior. They will push the idea into the background. For example, if I leave the database in plain text so they can understand it, they'll say it's better to optimize it with bytes, etc. In short, another day at the puzzle-bots office. Lol. Everyone counters what they assume with basic probability; I demonstrate with a basic script, and that bothers them. They only see what they want to see, blind in their own ignorance.
teguh54321
Jr. Member
Offline
Activity: 144
Merit: 1
March 12, 2025, 08:46:43 PM
Quote from: WanderingPhilospher
It starts with 0xE. I get what some are trying to do, I really do. I see nothing wrong with doing it this way if you are outgunned and outmatched. Maybe you get lucky before 3243294932794 (lol) GPUs beat you to it. I just don't want people to think that you can apply a set range/jump and not miss the actual address you are looking for.
- Smallest difference: 194903573833
- Largest difference: 1946984192923367
That is a huge difference between matching prefixes. One could go with a very low jump/exclusion range, but then that doesn't really provide a big speedup, IMO.

Maybe add a matching-prefix check to the brute-force loop: if it matches, then jump, skipping 120,000,000,000 keys. If it jumps 20 times, that means skipping 20 × 120,000,000,000. 🤔 But the matching check might also slow down the entire brute-force process. Hmmm 🙃
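That trade-off can be mocked up in miniature. In the sketch below, sha256 stands in for the real hash160 pipeline and the jump is scaled from 120,000,000,000 down to 10,000 (both assumptions); it counts how many keys the jump rule silently skips:

```python
import hashlib

JUMP = 10_000        # scaled-down stand-in for the proposed 120,000,000,000 jump
PREFIX = b'\x00'     # stand-in target prefix: hash starting with a zero byte
LIMIT = 1_000_000    # keys in the toy range

def toy_hash(k: int) -> bytes:
    # sha256 of the key bytes stands in for hash160(pubkey(k)) here
    return hashlib.sha256(k.to_bytes(32, 'big')).digest()

k = checked = skipped = 0
while k < LIMIT:
    checked += 1
    if toy_hash(k).startswith(PREFIX):
        k += JUMP                # prefix matched: jump ahead...
        skipped += JUMP - 1      # ...silently skipping JUMP - 1 keys
    else:
        k += 1

print(f"checked {checked} keys, skipped {skipped}")
```

With a one-byte prefix (p = 1/256) nearly the whole range ends up inside skipped stretches, which is exactly the objection raised above: the jump buys speed only by accepting a real chance of leaping over the key.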
WanderingPhilospher
Sr. Member
  
Offline
Activity: 1498
Merit: 286
Shooters Shoot...
March 12, 2025, 08:57:28 PM
Quote from: mcdouglasx
The idea of prefixes is complete and carries no risk of losing the key, with better statistics among other things.

So point me to this new idea of yours; maybe I missed it. Again, if it is the database padding you wrote, there are flaws in it, and finding the key is not guaranteed without possibly going back and tweaking the walls around the keys, monitoring how many keys are left to check to make sure it's not continuing to randomly jump around finding no matches, etc. I like the idea, but it is still flawed, even with the tweaks I made. I have one db that has over 8 million keys found (now padded ranges) in it, lol, with 8 matches up to 14/15. Yes, over 8 million, supplemented by GPU search obviously. But in doing so, I knew there are still flaws: we can guesstimate a padding, but it could be too large and skip the key on the first iteration, or too small and waste time vs. plain brute force. I'm not critiquing; I am wondering if you are talking about some other project of yours. If so, point me in the right direction.
zahid888
Member

Offline
Activity: 335
Merit: 24
the right steps towards the goal
March 12, 2025, 09:10:35 PM
Quote:
you can speed up this Python code 100 times in Cyclone.
How do you think prefixes can be generated in Cyclone? There is already a search in the script using AVX2 instructions to compare two 128-bit values. However, the search focuses on the last 4 bytes of the hashes rather than just the beginning.

```cpp
// 8 keys are ready - time to use avx2
if (localBatchCount == HASH_BATCH_SIZE) {
    computeHash160BatchBinSingle(localBatchCount, localPubKeys, localHashResults);
    // Results check
    for (int j = 0; j < HASH_BATCH_SIZE; j++) {
        __m128i cand16 = _mm_loadu_si128(reinterpret_cast<const __m128i*>(localHashResults[j]));
        __m128i cmp = _mm_cmpeq_epi8(cand16, target16);
        if (_mm_movemask_epi8(cmp) == 0xFFFF) { // Checking last 4 bytes (20 - 16)
            // A Bitcoin address is derived from a RIPEMD-160 hash (20 bytes).
            // To match 1MVDYgVaSN, you need to match the first 10 bytes.
            // Modify the comparison logic to check only the first 10 bytes:
            if ((_mm_movemask_epi8(cmp) & 0x03FF) == 0x03FF) {
                // Check the first 10 bytes of the hash160 result
                if (!matchFound && std::memcmp(localHashResults[j], targetHash160.data(), 10) == 0) {
                    #pragma omp critical
                    {
                        if (!matchFound) {
                            matchFound = true;
                            auto tEndTime = std::chrono::high_resolution_clock::now();
                            globalElapsedTime = std::chrono::duration<double>(tEndTime - tStart).count();
                            mkeysPerSec = (double)(globalComparedCount + localComparedCount) / globalElapsedTime / 1e6;

                            // Recovering private key
                            Int matchingPrivateKey;
                            matchingPrivateKey.Set(&currentBatchKey);
                            int idx = pointIndices[j];
                            if (idx < 256) {
                                Int offset; offset.SetInt32(idx);
                                matchingPrivateKey.Add(&offset);
                            } else {
                                Int offset; offset.SetInt32(idx - 256);
                                matchingPrivateKey.Sub(&offset);
                            }
                            foundPrivateKeyHex = padHexTo64(intToHex(matchingPrivateKey));

                            // Print the partial match and private key
                            std::string first10(reinterpret_cast<char*>(localHashResults[j]), 10);
                            std::cout << "Partial Match Found! First 10 bytes: " << first10 << "\n";
                            std::cout << "Private Key: " << foundPrivateKeyHex << "\n";
                        }
                    }
                    #pragma omp cancel parallel
                }
                localComparedCount++;
            } else {
                localComparedCount++;
            }
```

Quote:
Something like this, but I don't believe in prefixes anymore either. I did that in the beginning, and I don't even know what I haven't tried. The pattern doesn't exist. This is just useless fun.

Can you help me with this function? I wrote it and compiled it in the same repo, but it is not working properly:

```
Modified_Cyclone.exe -a 1FRoHA9xewq7DjrZ1psWJVeTer8gHRqEvR -r 1:ffffffff --prefix 1MVD --stride ff -O found.txt
================= WORK IN PROGRESS =================
Target Address: 1FRoHA9xewq7DjrZ1psWJVeTer8gHRqEvR
CPU Threads   : 6
Mkeys/s       : 24.12
Total Checked : 1447131648
Elapsed Time  : 00:01:00
Range         : 1:ffffffff
Progress      : 33.6937 %
Progress Save : 0
================== FOUND MATCH! ==================
Private Key   : 00000000000000000000000000000000000000000000000000000000B862A62E
Public Key    : 0209C58240E50E3BA3F833C82655E8725C037A2294E14CF5D73A5DF8D56159DE69
WIF           : KwDiBf89QgGbjEhKnhXJuH7LrciVrZi3qYjgd9MACNivtz8yMYTd
P2PKH Address : 1FRoHA9xewq7DjrZ1psWJVeTer8gHRqEvR
Total Checked : 1487326208
Elapsed Time  : 00:01:01
Speed         : 23.4595 Mkeys/s
```

Here is my modification:

```cpp
#include <immintrin.h>
#include <iostream>
#include <iomanip>
#include <string>
#include <cstring>
#include <chrono>
#include <vector>
#include <sstream>
#include <stdexcept>
#include <algorithm>
#include <fstream>
#include <omp.h>
#include <array>
#include <utility>
#include "p2pkh_decoder.h"
#include "sha256_avx2.h"
#include "ripemd160_avx2.h"
#include "SECP256K1.h"
#include "Point.h"
#include "Int.h"
#include "IntGroup.h"
```
```cpp
// Batch size: ±256 public keys (512), hashed in groups of 8 (AVX2).
static constexpr int POINTS_BATCH_SIZE = 256;
static constexpr int HASH_BATCH_SIZE = 8;

// Status output and progress saving frequency
static constexpr double statusIntervalSec = 5.0;
static constexpr double saveProgressIntervalSec = 300.0;

static int g_progressSaveCount = 0;
static std::vector<std::string> g_threadPrivateKeys;

//------------------------------------------------------------------------------
void saveProgressToFile(const std::string &progressStr) {
    std::ofstream ofs("progress.txt", std::ios::app);
    if (ofs) {
        ofs << progressStr << "\n";
    } else {
        std::cerr << "Cannot open progress.txt for writing\n";
    }
}

//------------------------------------------------------------------------------
// Converts a HEX string into a large number (a vector of 64-bit words, little-endian).
std::vector<uint64_t> hexToBigNum(const std::string& hex) {
    std::vector<uint64_t> bigNum;
    const size_t len = hex.size();
    bigNum.reserve((len + 15) / 16);
    for (size_t i = 0; i < len; i += 16) {
        size_t start = (len >= 16 + i) ? len - 16 - i : 0;
        size_t partLen = (len >= 16 + i) ? 16 : (len - i);
        uint64_t value = std::stoull(hex.substr(start, partLen), nullptr, 16);
        bigNum.push_back(value);
    }
    return bigNum;
}

// Reverse conversion to a HEX string (with correct leading zeros within blocks).
std::string bigNumToHex(const std::vector<uint64_t>& num) {
    std::ostringstream oss;
    for (auto it = num.rbegin(); it != num.rend(); ++it) {
        if (it != num.rbegin()) oss << std::setw(16) << std::setfill('0');
        oss << std::hex << *it;
    }
    return oss.str();
}

std::vector<uint64_t> singleElementVector(uint64_t val) { return { val }; }

std::vector<uint64_t> bigNumAdd(const std::vector<uint64_t>& a, const std::vector<uint64_t>& b) {
    std::vector<uint64_t> sum;
    sum.reserve(std::max(a.size(), b.size()) + 1);
    uint64_t carry = 0;
    for (size_t i = 0, sz = std::max(a.size(), b.size()); i < sz; ++i) {
        uint64_t x = (i < a.size()) ? a[i] : 0ULL;
        uint64_t y = (i < b.size()) ? b[i] : 0ULL;
        __uint128_t s = (__uint128_t)x + (__uint128_t)y + carry;
        carry = (uint64_t)(s >> 64);
        sum.push_back((uint64_t)s);
    }
    if (carry) sum.push_back(carry);
    return sum;
}

std::vector<uint64_t> bigNumSubtract(const std::vector<uint64_t>& a, const std::vector<uint64_t>& b) {
    std::vector<uint64_t> diff = a;
    uint64_t borrow = 0;
    for (size_t i = 0; i < b.size(); ++i) {
        uint64_t subtrahend = b[i];
        if (diff[i] < subtrahend + borrow) {
            diff[i] = diff[i] + (~0ULL) - subtrahend - borrow + 1ULL; // i.e. diff[i] - subtrahend - borrow
            borrow = 1ULL;
        } else {
            diff[i] -= (subtrahend + borrow);
            borrow = 0ULL;
        }
    }
    for (size_t i = b.size(); i < diff.size() && borrow; ++i) {
        if (diff[i] == 0ULL) {
            diff[i] = ~0ULL;
        } else {
            diff[i] -= 1ULL;
            borrow = 0ULL;
        }
    }
    // delete leading zeros
    while (!diff.empty() && diff.back() == 0ULL) diff.pop_back();
    return diff;
}

std::pair<std::vector<uint64_t>, uint64_t> bigNumDivide(const std::vector<uint64_t>& a, uint64_t divisor) {
    std::vector<uint64_t> quotient(a.size(), 0ULL);
    uint64_t remainder = 0ULL;
    for (int i = (int)a.size() - 1; i >= 0; --i) {
        __uint128_t temp = ((__uint128_t)remainder << 64) | a[i];
        uint64_t q = (uint64_t)(temp / divisor);
        uint64_t r = (uint64_t)(temp % divisor);
        quotient[i] = q;
        remainder = r;
    }
    while (!quotient.empty() && quotient.back() == 0ULL) quotient.pop_back();
    return { quotient, remainder };
}

long double hexStrToLongDouble(const std::string &hex) {
    long double result = 0.0L;
    for (char c : hex) {
        result *= 16.0L;
        if (c >= '0' && c <= '9') result += (c - '0');
        else if (c >= 'a' && c <= 'f') result += (c - 'a' + 10);
        else if (c >= 'A' && c <= 'F') result += (c - 'A' + 10);
    }
    return result;
}

//------------------------------------------------------------------------------
static inline std::string padHexTo64(const std::string &hex) {
    return (hex.size() >= 64) ? hex : std::string(64 - hex.size(), '0') + hex;
}

static inline Int hexToInt(const std::string &hex) {
    Int number;
    char buf[65] = {0};
    std::strncpy(buf, hex.c_str(), 64);
    number.SetBase16(buf);
    return number;
}

static inline std::string intToHex(const Int &value) {
    Int temp;
    temp.Set((Int*)&value);
    return temp.GetBase16();
}

static inline bool intGreater(const Int &a, const Int &b) {
    std::string ha = ((Int&)a).GetBase16();
    std::string hb = ((Int&)b).GetBase16();
    if (ha.size() != hb.size()) return (ha.size() > hb.size());
    return (ha > hb);
}

static inline bool isEven(const Int &number) {
    return ((Int&)number).IsEven();
}

static inline std::string intXToHex64(const Int &x) {
    Int temp;
    temp.Set((Int*)&x);
    std::string hex = temp.GetBase16();
    if (hex.size() < 64) hex.insert(0, 64 - hex.size(), '0');
    return hex;
}

static inline std::string pointToCompressedHex(const Point &point) {
    return (isEven(point.y) ? "02" : "03") + intXToHex64(point.x);
}

static inline void pointToCompressedBin(const Point &point, uint8_t outCompressed[33]) {
    outCompressed[0] = isEven(point.y) ? 0x02 : 0x03;
    Int temp;
    temp.Set((Int*)&point.x);
    for (int i = 0; i < 32; i++) {
        outCompressed[1 + i] = (uint8_t)temp.GetByte(31 - i);
    }
}

//------------------------------------------------------------------------------
inline void prepareShaBlock(const uint8_t* dataSrc, size_t dataLen, uint8_t* outBlock) {
    std::fill_n(outBlock, 64, 0);
    std::memcpy(outBlock, dataSrc, dataLen);
    outBlock[dataLen] = 0x80;
    const uint32_t bitLen = (uint32_t)(dataLen * 8);
    outBlock[60] = (uint8_t)((bitLen >> 24) & 0xFF);
    outBlock[61] = (uint8_t)((bitLen >> 16) & 0xFF);
    outBlock[62] = (uint8_t)((bitLen >> 8) & 0xFF);
    outBlock[63] = (uint8_t)(bitLen & 0xFF);
}

inline void prepareRipemdBlock(const uint8_t* dataSrc, uint8_t* outBlock) {
    std::fill_n(outBlock, 64, 0);
    std::memcpy(outBlock, dataSrc, 32);
    outBlock[32] = 0x80;
    const uint32_t bitLen = 256;
    outBlock[60] = (uint8_t)((bitLen >> 24) & 0xFF);
    outBlock[61] = (uint8_t)((bitLen >> 16) & 0xFF);
    outBlock[62] = (uint8_t)((bitLen >> 8) & 0xFF);
    outBlock[63] = (uint8_t)(bitLen & 0xFF);
}

// Computing hash160 using avx2 (8 hashes per try)
static void computeHash160BatchBinSingle(int numKeys, uint8_t pubKeys[][33], uint8_t hashResults[][20]) {
    std::array<std::array<uint8_t, 64>, HASH_BATCH_SIZE> shaInputs;
    std::array<std::array<uint8_t, 32>, HASH_BATCH_SIZE> shaOutputs;
    std::array<std::array<uint8_t, 64>, HASH_BATCH_SIZE> ripemdInputs;
    std::array<std::array<uint8_t, 20>, HASH_BATCH_SIZE> ripemdOutputs;

    const size_t totalBatches = (numKeys + (HASH_BATCH_SIZE - 1)) / HASH_BATCH_SIZE;
    for (size_t batch = 0; batch < totalBatches; batch++) {
        const size_t batchCount = std::min<size_t>(HASH_BATCH_SIZE, numKeys - batch * HASH_BATCH_SIZE);
        for (size_t i = 0; i < batchCount; i++) {
            const size_t idx = batch * HASH_BATCH_SIZE + i;
            prepareShaBlock(pubKeys[idx], 33, shaInputs[i].data());
        }
        for (size_t i = batchCount; i < HASH_BATCH_SIZE; i++) {
            std::memcpy(shaInputs[i].data(), shaInputs[0].data(), 64);
        }
        const uint8_t* inPtr[HASH_BATCH_SIZE];
        uint8_t* outPtr[HASH_BATCH_SIZE];
        for (int i = 0; i < HASH_BATCH_SIZE; i++) {
            inPtr[i] = shaInputs[i].data();
            outPtr[i] = shaOutputs[i].data();
        }
        // SHA256 (avx2)
        sha256avx2_8B(inPtr[0], inPtr[1], inPtr[2], inPtr[3], inPtr[4], inPtr[5], inPtr[6], inPtr[7],
                      outPtr[0], outPtr[1], outPtr[2], outPtr[3], outPtr[4], outPtr[5], outPtr[6], outPtr[7]);

        // Preparing Ripemd160
        for (size_t i = 0; i < batchCount; i++) {
            prepareRipemdBlock(shaOutputs[i].data(), ripemdInputs[i].data());
        }
        for (size_t i = batchCount; i < HASH_BATCH_SIZE; i++) {
            std::memcpy(ripemdInputs[i].data(), ripemdInputs[0].data(), 64);
        }
        for (int i = 0; i < HASH_BATCH_SIZE; i++) {
            inPtr[i] = ripemdInputs[i].data();
            outPtr[i] = ripemdOutputs[i].data();
        }
        // Ripemd160 (avx2)
        ripemd160avx2::ripemd160avx2_32(
            (unsigned char*)inPtr[0], (unsigned char*)inPtr[1], (unsigned char*)inPtr[2], (unsigned char*)inPtr[3],
            (unsigned char*)inPtr[4], (unsigned char*)inPtr[5], (unsigned char*)inPtr[6], (unsigned char*)inPtr[7],
            outPtr[0], outPtr[1], outPtr[2], outPtr[3], outPtr[4], outPtr[5], outPtr[6], outPtr[7]);
        for (size_t i = 0; i < batchCount; i++) {
            const size_t idx = batch * HASH_BATCH_SIZE + i;
            std::memcpy(hashResults[idx], ripemdOutputs[i].data(), 20);
        }
    }
}

//------------------------------------------------------------------------------
static void printUsage(const char* programName) {
    std::cerr << "Usage: " << programName
              << " -a <Base58_P2PKH> -r <START:END> [--prefix <prefix>] [--stride <stride>] [-O <output_file>]\n";
}

static std::string formatElapsedTime(double seconds) {
    int hrs = (int)seconds / 3600;
    int mins = ((int)seconds % 3600) / 60;
    int secs = (int)seconds % 60;
    std::ostringstream oss;
    oss << std::setw(2) << std::setfill('0') << hrs << ":"
        << std::setw(2) << std::setfill('0') << mins << ":"
        << std::setw(2) << std::setfill('0') << secs;
    return oss.str();
}

//------------------------------------------------------------------------------
static void printStatsBlock(int numCPUs, const std::string &targetAddr, const std::string &rangeStr,
                            double mkeysPerSec, unsigned long long totalChecked, double elapsedTime,
                            int progressSaves, long double progressPercent) {
    static bool firstPrint = true;
    if (!firstPrint) {
        std::cout << "\033[9A";
    } else {
        firstPrint = false;
    }
    std::cout << "================= WORK IN PROGRESS =================\n";
    std::cout << "Target Address: " << targetAddr << "\n";
    std::cout << "CPU Threads : " << numCPUs << "\n";
    std::cout << "Mkeys/s : " << std::fixed << std::setprecision(2) << mkeysPerSec << "\n";
    std::cout << "Total Checked : " << totalChecked << "\n";
    std::cout << "Elapsed Time : " << formatElapsedTime(elapsedTime) << "\n";
    std::cout << "Range : " << rangeStr << "\n";
    std::cout << "Progress : " << std::fixed << std::setprecision(4) << progressPercent << " %\n";
    std::cout << "Progress Save : " << progressSaves << "\n";
    std::cout.flush();
}

//------------------------------------------------------------------------------
struct ThreadRange {
    std::string startHex;
    std::string endHex;
};

static std::vector<ThreadRange> g_threadRanges;

//------------------------------------------------------------------------------
int main(int argc, char* argv[]) {
    bool addressProvided = false, rangeProvided = false;
    std::string targetAddress, rangeInput;
    std::vector<uint8_t> targetHash160;
    std::string prefixFilter;
    uint64_t stride = 1;
    std::string outputFile;

    for (int i = 1; i < argc; i++) {
        if (!std::strcmp(argv[i], "-a") && i + 1 < argc) {
            targetAddress = argv[++i];
            addressProvided = true;
            try {
                targetHash160 = P2PKHDecoder::getHash160(targetAddress);
                if (targetHash160.size() != 20) throw std::invalid_argument("Invalid hash160 length.");
            } catch (const std::exception &ex) {
                std::cerr << "Error parsing address: " << ex.what() << "\n";
                return 1;
            }
        } else if (!std::strcmp(argv[i], "-r") && i + 1 < argc) {
            rangeInput = argv[++i];
            rangeProvided = true;
        } else if (!std::strcmp(argv[i], "--prefix") && i + 1 < argc) {
            prefixFilter = argv[++i];
        } else if (!std::strcmp(argv[i], "--stride") && i + 1 < argc) {
            stride = std::stoull(argv[++i], nullptr, 16);
        } else if (!std::strcmp(argv[i], "-O") && i + 1 < argc) {
            outputFile = argv[++i];
        } else {
            std::cerr << "Unknown parameter: " << argv[i] << "\n";
            printUsage(argv[0]);
            return 1;
        }
    }
    if (!addressProvided || !rangeProvided) {
        std::cerr << "Both -a <Base58_P2PKH> and -r <START:END> are required!\n";
        printUsage(argv[0]);
        return 1;
    }

    const size_t colonPos = rangeInput.find(':');
    if (colonPos == std::string::npos) {
        std::cerr << "Invalid range format. Use <START:END> in HEX.\n";
        return 1;
    }
    const std::string rangeStartHex = rangeInput.substr(0, colonPos);
    const std::string rangeEndHex = rangeInput.substr(colonPos + 1);

    auto rangeStart = hexToBigNum(rangeStartHex);
    auto rangeEnd = hexToBigNum(rangeEndHex);

    bool validRange = false;
    if (rangeStart.size() < rangeEnd.size()) {
        validRange = true;
    } else if (rangeStart.size() > rangeEnd.size()) {
        validRange = false;
    } else {
        validRange = true;
        for (int i = (int)rangeStart.size() - 1; i >= 0; --i) {
            if (rangeStart[i] < rangeEnd[i]) {
                break;
            } else if (rangeStart[i] > rangeEnd[i]) {
                validRange = false;
                break;
            }
        }
    }
    if (!validRange) {
        std::cerr << "Range start must be <= range end.\n";
        return 1;
    }
```
auto rangeSize = bigNumSubtract(rangeEnd, rangeStart); rangeSize = bigNumAdd(rangeSize, singleElementVector(1ULL));
const std::string rangeSizeHex = bigNumToHex(rangeSize); const long double totalRangeLD = hexStrToLongDouble(rangeSizeHex);
const int numCPUs = omp_get_num_procs(); g_threadPrivateKeys.resize(numCPUs, "0");
auto [chunkSize, remainder] = bigNumDivide(rangeSize, (uint64_t)numCPUs); g_threadRanges.resize(numCPUs);
std::vector<uint64_t> currentStart = rangeStart; for (int t = 0; t < numCPUs; t++) { auto currentEnd = bigNumAdd(currentStart, chunkSize); if (t < (int)remainder) { currentEnd = bigNumAdd(currentEnd, singleElementVector(1ULL)); } currentEnd = bigNumSubtract(currentEnd, singleElementVector(1ULL));
g_threadRanges[t].startHex = bigNumToHex(currentStart); g_threadRanges[t].endHex = bigNumToHex(currentEnd);
currentStart = bigNumAdd(currentEnd, singleElementVector(1ULL)); } const std::string displayRange = g_threadRanges.front().startHex + ":" + g_threadRanges.back().endHex;
unsigned long long globalComparedCount = 0ULL; double globalElapsedTime = 0.0; double mkeysPerSec = 0.0;
const auto tStart = std::chrono::high_resolution_clock::now(); auto lastStatusTime = tStart; auto lastSaveTime = tStart;
bool matchFound = false; std::string foundPrivateKeyHex, foundPublicKeyHex, foundWIF;
Secp256K1 secp; secp.Init();
// PARALLEL COMPUTING BLOCK #pragma omp parallel num_threads(numCPUs) \ shared(globalComparedCount, globalElapsedTime, mkeysPerSec, matchFound, \ foundPrivateKeyHex, foundPublicKeyHex, foundWIF, \ tStart, lastStatusTime, lastSaveTime, g_progressSaveCount, \ g_threadPrivateKeys) { const int threadId = omp_get_thread_num();
Int privateKey = hexToInt(g_threadRanges[threadId].startHex); const Int threadRangeEnd = hexToInt(g_threadRanges[threadId].endHex);
#pragma omp critical { g_threadPrivateKeys[threadId] = padHexTo64(intToHex(privateKey)); }
// Precomputing +i*G and -i*G for i=0..255 std::vector<Point> plusPoints(POINTS_BATCH_SIZE); std::vector<Point> minusPoints(POINTS_BATCH_SIZE); for (int i = 0; i < POINTS_BATCH_SIZE; i++) { Int tmp; tmp.SetInt32(i); Point p = secp.ComputePublicKey(&tmp); plusPoints[i] = p; p.y.ModNeg(); minusPoints[i] = p; }
// Arrays for batch-adding std::vector<Int> deltaX(POINTS_BATCH_SIZE); IntGroup modGroup(POINTS_BATCH_SIZE);
// Storage for 512 public keys const int fullBatchSize = 2 * POINTS_BATCH_SIZE; std::vector<Point> pointBatch(fullBatchSize);
// Buffers for hashing uint8_t localPubKeys[fullBatchSize][33]; uint8_t localHashResults[HASH_BATCH_SIZE][20]; int localBatchCount = 0; int pointIndices[HASH_BATCH_SIZE];
// Local count unsigned long long localComparedCount = 0ULL;
// Load the target hash160 into an __m128i for fast comparison __m128i target16 = _mm_loadu_si128(reinterpret_cast<const __m128i*>(targetHash160.data()));
// main loop while (true) { if (intGreater(privateKey, threadRangeEnd)) { break; } // startPoint = privateKey * G Int currentBatchKey; currentBatchKey.Set(&privateKey); Point startPoint = secp.ComputePublicKey(&currentBatchKey);
#pragma omp critical { g_threadPrivateKeys[threadId] = padHexTo64(intToHex(privateKey)); }
// Split the batch of 512 keys into 2 blocks of 256: offsets +i and -i from the batch's center G-point // First half: pointBatch[0..255], +i for (int i = 0; i < POINTS_BATCH_SIZE; i++) { deltaX[i].ModSub(&plusPoints[i].x, &startPoint.x); } modGroup.Set(deltaX.data()); modGroup.ModInv(); for (int i = 0; i < POINTS_BATCH_SIZE; i++) { Point tempPoint = startPoint; Int deltaY; deltaY.ModSub(&plusPoints[i].y, &startPoint.y); Int slope; slope.ModMulK1(&deltaY, &deltaX[i]); Int slopeSq; slopeSq.ModSquareK1(&slope);
Int tmpX; tmpX.Set(&startPoint.x); tmpX.ModNeg(); tmpX.ModAdd(&slopeSq); tmpX.ModSub(&plusPoints[i].x); tempPoint.x.Set(&tmpX);
Int diffX; diffX.Set(&startPoint.x); diffX.ModSub(&tempPoint.x); diffX.ModMulK1(&slope); tempPoint.y.ModNeg(); tempPoint.y.ModAdd(&diffX);
pointBatch[i] = tempPoint; }
// Second pointBatch[256..511] - for (int i = 0; i < POINTS_BATCH_SIZE; i++) { Point tempPoint = startPoint; Int deltaY; deltaY.ModSub(&minusPoints[i].y, &startPoint.y); Int slope; slope.ModMulK1(&deltaY, &deltaX[i]); Int slopeSq; slopeSq.ModSquareK1(&slope);
Int tmpX; tmpX.Set(&startPoint.x); tmpX.ModNeg(); tmpX.ModAdd(&slopeSq); tmpX.ModSub(&minusPoints[i].x); tempPoint.x.Set(&tmpX);
Int diffX; diffX.Set(&startPoint.x); diffX.ModSub(&tempPoint.x); diffX.ModMulK1(&slope); tempPoint.y.ModNeg(); tempPoint.y.ModAdd(&diffX);
pointBatch[POINTS_BATCH_SIZE + i] = tempPoint; }
// Construct local buffer for (int i = 0; i < fullBatchSize; i++) { pointToCompressedBin(pointBatch[i], localPubKeys[localBatchCount]); pointIndices[localBatchCount] = i; localBatchCount++;
// 8 keys are ready - time to use avx2 if (localBatchCount == HASH_BATCH_SIZE) { computeHash160BatchBinSingle(localBatchCount, localPubKeys, localHashResults); // Results check for (int j = 0; j < HASH_BATCH_SIZE; j++) { __m128i cand16 = _mm_loadu_si128(reinterpret_cast<const __m128i*>(localHashResults[j])); __m128i cmp = _mm_cmpeq_epi8(cand16, target16); if (_mm_movemask_epi8(cmp) == 0xFFFF) { // Checking last 4 bytes (20 - 16) if (!matchFound && std::memcmp(localHashResults[j], targetHash160.data(), 20) == 0) { #pragma omp critical { if (!matchFound) { matchFound = true; auto tEndTime = std::chrono::high_resolution_clock::now(); globalElapsedTime = std::chrono::duration<double>(tEndTime - tStart).count(); mkeysPerSec = (double)(globalComparedCount + localComparedCount) / globalElapsedTime / 1e6;
// Recovering private key Int matchingPrivateKey; matchingPrivateKey.Set(&currentBatchKey); int idx = pointIndices[j]; if (idx < 256) { Int offset; offset.SetInt32(idx); matchingPrivateKey.Add(&offset); } else { Int offset; offset.SetInt32(idx - 256); matchingPrivateKey.Sub(&offset); } foundPrivateKeyHex = padHexTo64(intToHex(matchingPrivateKey)); Point matchedPoint = pointBatch[idx]; foundPublicKeyHex = pointToCompressedHex(matchedPoint); foundWIF = P2PKHDecoder::compute_wif(foundPrivateKeyHex, true); } } #pragma omp cancel parallel } localComparedCount++; } else { localComparedCount++; } } localBatchCount = 0; } }
// Next step { Int step; step.SetInt32(fullBatchSize - 2); // 510 privateKey.Add(&step); }
// Time to show status auto now = std::chrono::high_resolution_clock::now(); double secondsSinceStatus = std::chrono::duration<double>(now - lastStatusTime).count(); if (secondsSinceStatus >= statusIntervalSec) { #pragma omp critical { globalComparedCount += localComparedCount; localComparedCount = 0ULL; globalElapsedTime = std::chrono::duration<double>(now - tStart).count(); mkeysPerSec = (double)globalComparedCount / globalElapsedTime / 1e6;
long double progressPercent = 0.0L; if (totalRangeLD > 0.0L) { progressPercent = ((long double)globalComparedCount / totalRangeLD) * 100.0L; } printStatsBlock(numCPUs, targetAddress, displayRange, mkeysPerSec, globalComparedCount, globalElapsedTime, g_progressSaveCount, progressPercent); lastStatusTime = now; } }
// Save progress periodically auto nowSave = std::chrono::high_resolution_clock::now(); double secondsSinceSave = std::chrono::duration<double>(nowSave - lastSaveTime).count(); if (secondsSinceSave >= saveProgressIntervalSec) { #pragma omp critical { if (threadId == 0) { g_progressSaveCount++; std::ostringstream oss; oss << "Progress Save #" << g_progressSaveCount << " at " << std::chrono::duration<double>(nowSave - tStart).count() << " sec: " << "TotalChecked=" << globalComparedCount << ", " << "ElapsedTime=" << formatElapsedTime(globalElapsedTime) << ", " << "Mkeys/s=" << std::fixed << std::setprecision(2) << mkeysPerSec << "\n"; for (int k = 0; k < numCPUs; k++) { oss << "Thread Key " << k << ": " << g_threadPrivateKeys[k] << "\n"; } saveProgressToFile(oss.str()); lastSaveTime = nowSave; } } }
if (matchFound) { break; } } // while(true)
// Adding local count #pragma omp atomic globalComparedCount += localComparedCount; } // end of parallel section
// Main results auto tEnd = std::chrono::high_resolution_clock::now(); globalElapsedTime = std::chrono::duration<double>(tEnd - tStart).count();
if (!matchFound) { mkeysPerSec = (double)globalComparedCount / globalElapsedTime / 1e6; std::cout << "\nNo match found.\n"; std::cout << "Total Checked : " << globalComparedCount << "\n"; std::cout << "Elapsed Time : " << formatElapsedTime(globalElapsedTime) << "\n"; std::cout << "Speed : " << mkeysPerSec << " Mkeys/s\n"; return 0; }
// If the key was found std::cout << "================== FOUND MATCH! ==================\n"; std::cout << "Private Key : " << foundPrivateKeyHex << "\n"; std::cout << "Public Key : " << foundPublicKeyHex << "\n"; std::cout << "WIF : " << foundWIF << "\n"; std::cout << "P2PKH Address : " << targetAddress << "\n"; std::cout << "Total Checked : " << globalComparedCount << "\n"; std::cout << "Elapsed Time : " << formatElapsedTime(globalElapsedTime) << "\n"; std::cout << "Speed : " << mkeysPerSec << " Mkeys/s\n";
// Write results to output file if specified if (!outputFile.empty()) { std::ofstream ofs(outputFile, std::ios::app); // Open in append mode if (ofs) { ofs << "Private Key : " << foundPrivateKeyHex << "\n"; ofs << "Public Key : " << foundPublicKeyHex << "\n"; ofs << "WIF : " << foundWIF << "\n"; ofs << "P2PKH Address : " << targetAddress << "\n"; ofs << "Total Checked : " << globalComparedCount << "\n"; ofs << "Elapsed Time : " << formatElapsedTime(globalElapsedTime) << "\n"; ofs << "Speed : " << mkeysPerSec << " Mkeys/s\n"; } else { std::cerr << "Cannot open output file for writing\n"; } }
return 0; } Thanks in Advance 
|
1BGvwggxfCaHGykKrVXX7fk8GYaLQpeixA
|
|
|
|
mcdouglasx
|
 |
March 12, 2025, 11:50:02 PM |
|
point me to this new idea of yours, maybe I missed it.
I will try to explain it to you, and I apologize in advance if something is unclear:

1. Selection of Percentages with the Flags -pct, -top and -pm

-pm: minimum prefix length to consider.
-pct (initial percentage): You start the process by defining a percentage (for example, 30%) that sets the omission range relative to a found prefix. For instance, if a 10-character prefix of an h160 address is found, a value, call it "Aval", is calculated as 30% of 16**9. This gives you an interval pk-Aval:pk+Aval, which is then marked to be omitted from future searches. The idea is to exploit the low probability (1/16**9) of finding another identical prefix nearby, and thus avoid wasting resources on redundant calculations.
-top (maximum percentage): This parameter acts as a threshold. If the sum of omitted keys reaches 60% of the total search range, it is assumed (by the birthday paradox) that conditions have fallen into the worst-case statistical scenario. At that moment, the algorithm automatically reduces -pct by 5% (for example, from 30% to 25%), and the ranges are recalculated from the new parameters and the prefixes already found.

2. Scanning and Storage Process

Complete random block scanning: Random blocks are selected and scanned in full, whether or not prefixes are found. Each scanned block is recorded in the DB so it is never revisited.
Storage of prefixes and ranges: A file, for example named p_target, stores the keys of found prefixes along with their lengths (example: "1000:12"). Another file, say ranges, stores both the omitted ranges and the ranges traversed without matches. To avoid duplication or overlap, a hashtable (HT) is used to organize the ranges. For instance, if you initially have the ranges 1:100, 50:500, and 499:1000, the HT will merge them into a single consolidated range (1:1000).

3.
DB Recalibration and Range Handling

Calculation of omission ranges: When a prefix is detected at some key "pk", an omission range is computed from the current percentage. With -pct at 30%, the range might be something like 70:130 (assuming "70" and "130" are derived from "Aval"). If -top is then triggered because 60% of the total search space has been omitted, everything is recalculated with the new percentage (in this case, 25%), which shrinks that omitted range to, for example, 75:125.

Hashtable (HT) operation and orphan ranges: The HT centralizes the omitted ranges. If the omitted range was initially 70:130 (30%), the HT removes that block from the total, leaving "valid" ranges such as:
- from 1 to 70
- from 130 onward
When the new calculation (25%) is applied, it produces the omitted range 75:125. This creates "gaps" or orphan ranges: the intervals left between the previous interval and the new one (70 to 75 and 125 to 130). If these gaps are smaller than a threshold (defined, for example, with the -n flag), they are prioritized for re-scanning and, once verified, are "reinserted" to reconstruct the complete range (for example, from 1 to 1000).

Automatic adjustment of search blocks: The system also adapts dynamically. If a block, say 800:950, is scheduled but part of it has already been explored (for example, 900:3000), the block automatically shrinks to the unexplored region (here, 800:900). Similarly, if there is an overlap with an already scanned range (for example, 200:800), the uncovered part is treated as orphaned and scanned, so no possible target is missed.
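A minimal sketch of the -pct/-top logic described above, purely illustrative: the names omit_interval and recalibrate are mine, not mcdouglasx's actual code, and percentages are kept as integers to avoid float drift in the interval endpoints.

```python
def omit_interval(pk: int, prefix_len: int, pct: int):
    # Aval = pct% of 16**(prefix_len - 1), the expected spacing of
    # hex prefixes of this length; the interval [pk-Aval, pk+Aval]
    # is marked for omission after a prefix hit at key pk.
    aval = pct * 16 ** (prefix_len - 1) // 100
    return pk - aval, pk + aval


def recalibrate(hits, space_size, pct, top=60, step=5):
    """hits: list of (pk, prefix_len) pairs already found.
    Apply the -top rule: once omitted keys cover top% of the search
    space, drop pct by step and recompute every omission interval."""
    intervals = [omit_interval(pk, n, pct) for pk, n in hits]
    omitted = sum(hi - lo + 1 for lo, hi in intervals)
    if omitted * 100 >= top * space_size:
        pct -= step
        intervals = [omit_interval(pk, n, pct) for pk, n in hits]
    return pct, intervals
```

Shrinking the intervals this way is what opens the orphan gaps (the slivers between the old and new interval edges) that then get queued for re-scanning.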
|
|
|
|
WanderingPhilospher
Sr. Member
  
Offline
Activity: 1498
Merit: 286
Shooters Shoot...
|
 |
March 13, 2025, 01:41:57 AM |
|
I will try to explain it to you and I apologize in advance if something is not understood:
Ahh, so it is your original thought/script. But I see you have implemented a few things beyond what I did: no single random draws (because of possible endless loops before the db fills up to x percent; I added a print statement so I could gauge whether it was happening a lot), plus block sizes of 2^24 (16 threads x 2^24 blocks, each running its own random block/range), marking whether a key was found or not and still adding those to the db. I did swap the .bin file for a db file (quicker lookup time, among other reasons) since I created a central server of sorts to receive matches found and ranges run. But if the prefix length were higher, you could use just about anything lol. I like where you are going with the automation parts. Mine is not automated; I have to merge overlapped ranges myself (simple python script). Seems like you have a solid plan laid out, and a fairly powerful script, if you can get it all worked out... kudos.
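The range-merge step mentioned above (and the HT merge in mcdouglasx's example, where 1:100, 50:500 and 499:1000 collapse into 1:1000) is the classic sort-and-sweep interval merge. A hedged sketch, assuming inclusive integer ranges:

```python
def merge_ranges(ranges):
    """Merge overlapping or adjacent inclusive [lo, hi] ranges
    into the minimal set of disjoint ranges."""
    merged = []
    for lo, hi in sorted(ranges):
        # Touching counts too: [5,10] and [11,15] cover 5..15 with no gap.
        if merged and lo <= merged[-1][1] + 1:
            merged[-1][1] = max(merged[-1][1], hi)
        else:
            merged.append([lo, hi])
    return [tuple(r) for r in merged]
```

For example, merge_ranges([(1, 100), (50, 500), (499, 1000)]) collapses to a single (1, 1000), matching the consolidated range in the post above.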
|
|
|
|
|
|
nomachine
|
 |
March 13, 2025, 06:34:50 AM |
|
Thanks in Advance  I see you've started on the modification. Just to let you know in advance, I don't use #include "p2pkh_decoder.h" or p2pkh_decoder. Using them costs around 10-15 Mkeys/s in performance. Comparing HASH160 directly is much faster than decoding to a P2PKH address and then comparing, and it is slower still if the comparison is based on the decoded address. The --prefix argument is not currently used in the script to filter addresses by a specific prefix. // You would need a function like this to extract the prefix from a P2PKH address std::string extractPrefix(const std::string& address, size_t prefixLength) { return address.substr(0, prefixLength); }
// Inside the parallel section std::string generatedAddress = P2PKHDecoder::getAddressFromHash160(localHashResults[j]); std::string generatedPrefix = extractPrefix(generatedAddress, prefixFilter.length());
if (generatedPrefix == prefixFilter) { // Prefix matches, now check if the address matches the target __m128i cand16 = _mm_loadu_si128(reinterpret_cast<const __m128i*>(localHashResults[j])); __m128i cmp = _mm_cmpeq_epi8(cand16, target16); if (_mm_movemask_epi8(cmp) == 0xFFFF) { // Checking last 4 bytes (20 - 16) if (!matchFound && std::memcmp(localHashResults[j], targetHash160.data(), 20) == 0) { #pragma omp critical { if (!matchFound) { matchFound = true; auto tEndTime = std::chrono::high_resolution_clock::now(); globalElapsedTime = std::chrono::duration<double>(tEndTime - tStart).count(); mkeysPerSec = (double)(globalComparedCount + localComparedCount) / globalElapsedTime / 1e6;
// Recovering private key Int matchingPrivateKey; matchingPrivateKey.Set(&currentBatchKey); int idx = pointIndices[j]; if (idx < 256) { Int offset; offset.SetInt32(idx); matchingPrivateKey.Add(&offset); } else { Int offset; offset.SetInt32(idx - 256); matchingPrivateKey.Sub(&offset); } foundPrivateKeyHex = padHexTo64(intToHex(matchingPrivateKey)); Point matchedPoint = pointBatch[idx]; foundPublicKeyHex = pointToCompressedHex(matchedPoint); foundWIF = P2PKHDecoder::compute_wif(foundPrivateKeyHex, true); } } #pragma omp cancel parallel } localComparedCount++; } else { localComparedCount++; } } Implementing the stride option is even more challenging in such a complex AVX2 implementation, especially with so many if statements and batch operations.
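To illustrate nomachine's point about why raw hash160 comparison beats address-based filtering: a P2PKH address is Base58Check(0x00 || hash160), so any prefix test on the address forces a double-SHA256 checksum plus a base58 encode per candidate, while a prefix test on the raw bytes is a plain memcmp. A minimal pure-Python sketch; p2pkh_address and fast_match are illustrative names, not part of the actual tool.

```python
import hashlib

B58 = "123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz"

def p2pkh_address(h160: bytes) -> str:
    """Slow path: Base58Check-encode 0x00 || hash160 into an address."""
    payload = b"\x00" + h160
    chk = hashlib.sha256(hashlib.sha256(payload).digest()).digest()[:4]
    data = payload + chk
    n = int.from_bytes(data, "big")
    out = ""
    while n:
        n, r = divmod(n, 58)
        out = B58[r] + out
    pad = len(data) - len(data.lstrip(b"\x00"))  # each leading 0x00 -> '1'
    return "1" * pad + out

def fast_match(h160: bytes, target: bytes, nbytes: int = 4) -> bool:
    """Fast path: filter on the first nbytes of the raw hash160,
    no hashing or base58 work at all."""
    return h160[:nbytes] == target[:nbytes]
```

The AVX2 code in the tool does exactly the fast path: a 16-byte SIMD compare on the hash160, with a full 20-byte memcmp only on a hit.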
|
BTC: bc1qdwnxr7s08xwelpjy3cc52rrxg63xsmagv50fa8
|
|
|
|