WanderingPhilospher
Sr. Member
  
Offline
Activity: 1456
Merit: 275
Shooters Shoot...
|
 |
April 27, 2025, 07:43:18 PM Last edit: April 27, 2025, 08:00:52 PM by WanderingPhilospher |
|
Quote from: kTimesG
Yes. You're just dragging along the bias. If I bring up Scooby Doo 4.0 that does random picks, and your prefix method runs along the exact same sequence of random picks - prefix will win. But that happens because you are simply integrating the bias into the comparison, which is kind of non-rational. It has nothing to do with RANGES or subranges or "proximity bias". I think you are getting a little fooled by a natural consequence of traversing the exact same sequence of values using two different strategies, while not acknowledging that a third one exists that beats your "winner" on that same exact dataset. Where's the logic in that?

How do you want me to run the tests, lol... I thought we were comparing two methods: sequential versus prefix. You set up the Scooby Doo search method however you like. But both must be run the same way, whether that be the same shuffle order, the same start-to-end or end-to-start direction, etc. Because if not, then the test is flawed and we are merely getting the luck of the draw on shuffle order or on where the random key was generated, and that goes for both methods. So tell me what this is:

Quote from: kTimesG
a third one exists that beats your "winner" on that same exact dataset.

and I will run against it.

Quote from: kTimesG
If I bring up Scooby Doo 4.0 that does random picks, and your prefix method runs along the exact same sequence of random picks - prefix will win

I think that is false, and if we are just randomly picking single random points and there is no size to it, then it's not even a test lol. Unless I am misunderstanding what you are saying. If your script randomly picks point x, and it is a 3-length prefix match, then prefix will pad some value to it, so if the next random pick is within that padded value, the prefix search would not even hash it. So I'm not sure how that helps. Yeah, I dunno what you are wanting now lol.

500 simulations at 2^20 range size:

=== FINAL RESULTS (Sequential, Full Range) ===
Wins: Scooby_Doo: 1  Prefix: 498  Ties: 1
Total Checks: Scooby_Doo: 252778674  Prefix: 252814878
Total Time: Scooby_Doo: 358.475484 seconds  Prefix: 734.161429 seconds
Averages (Total Time / Wins): Scooby_Doo: 358.475484 seconds/victory  Prefix: 1.474220 seconds/victory
Checks per Win: Scooby_Doo: 252778674.00 checks/win  Prefix: 507660.40 checks/win
I may need to look at your script there too, because some of those numbers just seem wrong/abnormal or backwards. I need to see what each is actually measuring and not just the print statement.

Added a new print statement (500 simulations, back to 2^17 range size):

=== FINAL RESULTS (Sequential, Full Range) ===
Wins: Scooby_Doo: 1  Prefix: 487  Ties: 12
Total Checks: Scooby_Doo: 31597542  Prefix: 31485133
Total Time: Scooby_Doo: 42.913778 seconds  Prefix: 52.088803 seconds
Averages (Total Time / Wins): Scooby_Doo: 42.913778 seconds/victory  Prefix: 0.106959 seconds/victory
Checks per Win: Scooby_Doo: 31597542.00 checks/win  Prefix: 64651.20 checks/win
Average Checks per Simulation: Scooby_Doo: 63,195.08 checks/simulation  Prefix: 62,970.27 checks/simulation

The new statement is this one: Scooby_Doo: 63,195.08 checks/simulation  Prefix: 62,970.27 checks/simulation
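The "backwards"-looking averages above are explainable from the totals alone: each "per victory" figure is just a total divided by that method's win count, so a method with a single win has its entire run attributed to that one win. A minimal Python sketch reproducing the printed figures (numbers copied from the 2^17 results above):

```python
# Each "per victory" line is a total divided by the method's win count.
# Figures below are copied from the printed 2^17 results.
totals = {
    "Scooby_Doo": {"checks": 31_597_542, "time": 42.913778, "wins": 1},
    "Prefix":     {"checks": 31_485_133, "time": 52.088803, "wins": 487},
}
for name, t in totals.items():
    print(f"{name}: {t['time'] / t['wins']:.6f} s/victory, "
          f"{t['checks'] / t['wins']:,.2f} checks/win")
```

Dividing by 1 win versus 487 wins is what makes Scooby_Doo's per-victory numbers look enormous even though the totals are nearly identical.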
kTimesG
|
 |
April 27, 2025, 08:00:37 PM |
|
Quote from: WanderingPhilospher
But both must be run the same way, whether that be the same shuffle order, the same start to end or end to start, etc. Because if not, then that is flawed and we are merely getting the luck of the draw shuffle order or where the random key was generated, and that goes for both methods.

Mhm.... ?!

- dataset is the same
- target hash is the same
- position of target hash is the same
- data does not change while processing a simulation
- hashes of X return the same value every time

Why does Scooby Doo need to USE something called "shuffled block order" or "direction of traversal"? Where is the "luck" involved here? Everything is 100% fixed when either method is being called to solve the damn target hash, isn't it?

I think by "luck" you mean: who solves it faster? The magic prefix method, or Scooby Doo? Answer: they will both solve it equivalently well, since we are dealing with a uniform distribution. What is this, fascism or freedom to solve a problem however we wish? The point of Scooby Doo was to prove that the Magic Method does not have any advantages. Those "wins" that you see have two amazing properties:

1. They don't really matter at all - they are offset by an equally large quantity of "loss" ops.
2. They exist because the same order of elements is traversed, which introduces bias.

I think maybe the point that you're not getting is this introduced "bias": it is an interesting thing - but not a magic trick. It all adds up equally when you run the numbers between risk and reward here. Does it allow you to find the target sooner? Maybe yes, but then again, when it doesn't - you're gonna have a huge problem. The sick thing about it? If you do the exact opposite thing, it runs just as well. So what to choose? Or, more exactly, what would be the conclusion? Maybe run an actual 3-way / 4-way comparison, heck, even a 5-way comparison, just to understand that they will all line up equally.

Quote from: WanderingPhilospher
I may need to look at your script there too because some of those numbers just seem wrong/abnormal or backwards. I need to see what each is actually measuring and not just the print statement.

Please don't blame me for the fucked up statistical computation of results. I only touched how Scooby Doo finds the target and that's all.
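The uniform-distribution argument above can be checked directly: for a uniformly random target, the expected number of checks is (N+1)/2 for any fixed visiting order, whether sequential, reversed, or shuffled. A small Python sketch (function names and parameters are illustrative, not taken from either script):

```python
import random

def avg_checks(order, trials, rng):
    """Average number of checks to hit a uniformly random target when
    keys are visited in the given fixed order."""
    pos = {k: i for i, k in enumerate(order)}   # key -> visit index
    total = 0
    for _ in range(trials):
        total += pos[rng.randrange(len(order))] + 1
    return total / trials

rng = random.Random(7)
n = 1 << 12
orders = {
    "sequential": list(range(n)),
    "reversed": list(range(n - 1, -1, -1)),
    "shuffled": random.Random(3).sample(range(n), n),
}
results = {name: avg_checks(o, 20_000, rng) for name, o in orders.items()}
for name, avg in results.items():
    print(f"{name:10s} avg checks = {avg:.1f} (theory: {(n + 1) / 2})")
```

All three orders land on roughly the same average, which is the "they will all line up equally" claim in numeric form.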
|
Off the grid, training pigeons to broadcast signed messages.
WanderingPhilospher
Sr. Member
  
Offline
Activity: 1456
Merit: 275
Shooters Shoot...
|
 |
April 27, 2025, 08:05:06 PM |
|
Quote from: kTimesG
Why does Scooby Doo need to USE something called "shuffled block order" or "direction of traversal"? Where is the "luck" involved here? Everything is 100% fixed when either method is being called to solve the damn target hash, isn't it? [...] Maybe run an actual 3-way / 4-way comparison, heck, even a 5-way comparison, just to understand that they will all line up equally.

If we are comparing two methods, we need them to use the same everything. If we use different orders of checking blocks, how does that help us at all? It then is just which method landed on the correct block first, which is 100% pure luck of the draw. I dunno, we can agree to disagree on that one. And ok, if my method loses, it loses big / I have a huge problem. Cool.

That one loss out of 500... probably a risk I would take TBH.
fixedpaul
|
 |
April 27, 2025, 08:23:23 PM |
|
Since, from a mathematical point of view, it has already been explained that it makes no sense - but apparently many choose to ignore it or just don't get it, I don't know - properly prove it if you claim that the prefix method works. Proving that there is a method that actually works better than any other is very simple, as kTimesG already said. Take any range of size N, run many simulations, and calculate the number of operations/keys checked. If avg(ops)/N is significantly less than 0.5, then you have empirically proven it. By "significantly," I mean a p-value << 1 - I don't know, the smaller the better. But it's enough to share the method so that it's repeatable and anyone can test it. That way, we can finally put this topic to rest once and for all. The burden of proof lies with those who claim to have a better method; at most, we can just mock some scripts using a cartoon dog, which is probably more fun.
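The p-value test described above can be sketched with a normal approximation: under the null hypothesis (no edge), checks-to-find is uniform on 1..N, so the sample mean has expectation (N+1)/2 and variance (N^2-1)/12/sims. This is an illustrative Python sketch, not anyone's actual script:

```python
import math

def p_value(avg_ops, n_range, n_sims):
    """One-sided p-value for 'mean checks is below (N+1)/2' under the
    null hypothesis that checks-to-find is uniform on 1..N (no edge)."""
    mean0 = (n_range + 1) / 2
    se = math.sqrt((n_range ** 2 - 1) / 12 / n_sims)  # std error of the mean
    z = (avg_ops - mean0) / se
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))     # normal CDF at z

# Example with the thread's own figures: 500 sims over a 2^17 range,
# Prefix averaged 62,970.27 checks per simulation.
p = p_value(62_970.27, 1 << 17, 500)
print(f"p = {p:.4f}")   # ~0.065: below average, but nowhere near p << 1
```

On the thread's own numbers, the observed average is below (N+1)/2 but not significantly so by the standard fixedpaul asks for.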
WanderingPhilospher
Sr. Member
  
Offline
Activity: 1456
Merit: 275
Shooters Shoot...
|
 |
April 27, 2025, 08:46:07 PM |
|
Quote from: fixedpaul
Proving that there is a method that actually works better than any other is very simple, as kTimesG already said. Take any range of size N, run many simulations, and calculate the number of operations/keys checked. If avg(ops)/N is significantly less than 0.5, then you have empirically proven it. [...] The burden of proof lies with those who claim to have a better method.

Frankly, I find this post "lazy". You say "as kTimesG already said" but don't ever refute anything he says, lol?! That the methods should basically be compared via some luck of the random draw, lol. The numbers for the last test I ran, in case you missed it:

=== FINAL RESULTS (Sequential, Full Range) ===
Wins: Scooby_Doo: 1  Prefix: 487  Ties: 12
Total Checks: Scooby_Doo: 31597542  Prefix: 31485133
Total Time: Scooby_Doo: 42.913778 seconds  Prefix: 52.088803 seconds
Averages (Total Time / Wins): Scooby_Doo: 42.913778 seconds/victory  Prefix: 0.106959 seconds/victory
Checks per Win: Scooby_Doo: 31597542.00 checks/win  Prefix: 64651.20 checks/win
Average Checks per Simulation: Scooby_Doo: 63,195.08 checks/simulation  Prefix: 62,970.27 checks/simulation

Now, that is a vastly small range size compared to what we are working on, as far as the puzzles. So one should take that into consideration. But the numbers would hold true if due diligence was done on skip count size. If you want to run a full-blown 68-bit test, go for it bud. I will await your results. And you don't even have to call it the prefix method. You can blindly select a number for skip count, run the numbers yourself, and just call it the skip-keys method lol. So no, from a mathematical point of view, it has not been explained; rather the opposite. But I for one would not blindly do this if I were investing hundreds, thousands, or even hundreds of thousands of dollars to solve a puzzle. I would want some tests and data to back up my skip count number. And the only way I can do that is via h160 bits. Well, that is the way I would choose. I am sure there are other ways, just not explored.

Burden of proof?! The proof and scripts are out there, or create your own. Don't be lazy. I told you everything I changed with the Scooby Doo method. It's not brain science, nor rocket surgery. Create it and run your tests. You should be able to merely read what I wrote and understand how it finds a match faster, based on, well, math, of all things lol. Or better yet, create your own script from scratch.

First method, call it method A: it starts at the start and checks for a match until the range is exhausted.

Second method, call it method B: it starts at the start and, if it finds a "DP" (prefix-length match), it stores the keys it checked prior to finding the prefix match, then skips ahead in the range by the skip count value, and again goes sequentially until the next prefix match is found; rinse and repeat. If it gets to the end of the range and the key was not found, it goes back and searches the keys it skipped, sequentially, until it finds the key.

How is that hard to understand, or to see why, on average, it will find the key faster?!

Want to be super lazy? Don't look for prefix matches, just skip x amount of keys - meaning just randomly come up with a skip count number. I bet it will still win more than sequential. Matter of fact, I will test this alongside sequential, prefix match, and now, a random skip count number. It will be glorious lol.
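Method A and method B, as described, can be sketched in a few lines of Python. The match rule and the block/skip sizes (256/64) below are arbitrary stand-ins, not the actual parameters anyone in the thread used:

```python
import random

def run_comparison(range_size=1 << 17, sims=200, seed=1):
    """Toy version of the two methods described above. Method A scans
    sequentially; method B ("prefix") skips ahead after a match and
    revisits skipped keys at the end if the range is exhausted."""
    rng = random.Random(seed)
    wins = {"sequential": 0, "prefix": 0, "tie": 0}
    for _ in range(sims):
        target = rng.randrange(range_size)
        seq_ops = target + 1                 # method A: checks until hit
        pre_ops, pos, skipped, found = 0, 0, [], False
        while pos < range_size:              # method B: first pass
            pre_ops += 1
            if pos == target:
                found = True
                break
            if pos % 256 == 255:             # stand-in for a prefix match
                skipped.extend(range(pos + 1, min(pos + 65, range_size)))
                pos += 65                    # skip 64 keys ahead
            else:
                pos += 1
        if not found:                        # second pass: skipped keys
            for k in skipped:
                pre_ops += 1
                if k == target:
                    break
        if pre_ops < seq_ops:
            wins["prefix"] += 1
        elif seq_ops < pre_ops:
            wins["sequential"] += 1
        else:
            wins["tie"] += 1
    return wins

result = run_comparison()
print(result)
```

In this toy form the skip method wins most runs (it is ahead whenever the target was not in a skipped gap) and loses the runs where the target was skipped, matching the pattern in the posted results.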
kTimesG
|
 |
April 27, 2025, 08:49:13 PM |
|
Quote from: WanderingPhilospher
If we are comparing two methods, we need them to use the same everything. [...] And ok, if my method loses, it loses big / I have a huge problem. Cool. That one loss out of 500... probably a risk I would take TBH.

They do use the same everything: same data, same target. Anything else except those is a conditional constraint, not at all part of what is being asked from the algorithm. Sorry, but it does not make any sense at all. Maybe it does for you though. You're closer to your answer though - you're spot on that it's because the blocks / whatever are traversed in the same order; that introduces the "bias". Once you freely let go of this constraint, you will notice that both methods act identically, which is the expected thing to happen.

Did you look at the histogram? Maybe this detail will change your mind:

- When Prefix wins, it wins by a very small margin (just a few ops ahead).
- When Prefix loses, it loses by a very high margin (proportional to range size).

I wouldn't take that risk just because I have more chances to find some key sooner by a handful of ops, while risking descending into scanning N times more keys in vain - but of course everyone is free to do what they want, at the end of the day. And completely ignore the fact that simply scanning sequentially in the exact opposite order of what the magic method does yields the exact same performance in all aspects... I think the subject was dissected way too much for anything more substantial to ever be added.
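The win-small / lose-big asymmetry can be reproduced with a toy skip rule in Python. The block and skip sizes are arbitrary stand-ins, and the "margin" is simply sequential ops minus skip-method ops for the same target:

```python
import random

def margin(target, n, block=256, skip=64):
    """Signed margin (sequential ops - skip-method ops) for one target.
    Positive: the skip method won by that many checks; negative: it lost."""
    seq = target + 1
    ops, pos, skipped = 0, 0, []
    while pos < n:                     # first pass with skip-ahead
        ops += 1
        if pos == target:
            return seq - ops
        if pos % block == block - 1:   # stand-in prefix match: skip ahead
            skipped.extend(range(pos + 1, min(pos + skip + 1, n)))
            pos += skip + 1
        else:
            pos += 1
    for k in skipped:                  # second pass over skipped keys
        ops += 1
        if k == target:
            return seq - ops
    return 0

rng = random.Random(11)
n = 1 << 17
margins = [margin(rng.randrange(n), n) for _ in range(300)]
win_margins = [m for m in margins if m > 0]
loss_margins = [-m for m in margins if m < 0]
print(f"avg win margin:  {sum(win_margins) / len(win_margins):,.0f} checks")
print(f"avg loss margin: {sum(loss_margins) / len(loss_margins):,.0f} checks")
```

Wins are frequent but small (the keys skipped so far), while losses are rare but large (a full extra pass), so the two columns roughly offset each other in total ops.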
|
Off the grid, training pigeons to broadcast signed messages.
WanderingPhilospher
Sr. Member
  
Offline
Activity: 1456
Merit: 275
Shooters Shoot...
|
 |
April 27, 2025, 09:01:35 PM Last edit: April 27, 2025, 09:12:44 PM by WanderingPhilospher |
|
Quote from: kTimesG
When Prefix wins - it wins by a very small margin (by just a few ops ahead). When Prefix loses - it loses by a very high margin (proportional to range size).

Well, we are working with very tiny ranges lol...

Quote from: kTimesG
Once you freely let go of this constraint - you will notice that both methods act identical, which is the expected thing to happen.

For grins and giggles I will use random starting points (since I am just using 1 block size). Who needs prefixes or straight-line sequential order?

(2^17 range size, 500 simulations; RandomSkip checked 4096 consecutive keys then skipped 64; if the range was exhausted, it went back to the first key skipped and checked them sequentially until the key was found)

=== FINAL RESULTS (Sequential, Full Range) ===
Wins: ScoobyDoo: 0  Prefix: 9  RandomSkip: 482  Ties: 9
Total Checks: ScoobyDoo: 31597542  Prefix: 31485133  RandomSkip: 31429414
Total Time: ScoobyDoo: 43.892759 seconds  Prefix: 53.199185 seconds  RandomSkip: 46.203749 seconds
Averages (Total Time / Wins): ScoobyDoo: inf seconds/victory  Prefix: 5.911021 seconds/victory  RandomSkip: 0.095858 seconds/victory
Checks per Win: ScoobyDoo: inf checks/win  Prefix: 3,498,348.11 checks/win  RandomSkip: 65,206.25 checks/win
Average Checks per Simulation: ScoobyDoo: 63,195.08 checks/simulation  Prefix: 62,970.27 checks/simulation  RandomSkip: 62,858.83 checks/simulation

Ok, 2^17 range size, 500 simulations, each method started from a random point inside the range; if the key was not found, it went to whatever keys were not checked and started checking those.

=== FINAL RESULTS (Sequential, Full Range with Random Starts) ===
Wins: ScoobyDoo: 133  Prefix: 241  RandomSkip: 126  Ties: 0
Total Checks: ScoobyDoo: 31775601  Prefix: 22520018  RandomSkip: 32676939
Total Time: ScoobyDoo: 45.047694 seconds  Prefix: 42.106104 seconds  RandomSkip: 49.145924 seconds
Averages (Total Time / Wins): ScoobyDoo: 0.338704 seconds/victory  Prefix: 0.174714 seconds/victory  RandomSkip: 0.390047 seconds/victory
Checks per Win: ScoobyDoo: 238,914.29 checks/win  Prefix: 93,444.06 checks/win  RandomSkip: 259,340.79 checks/win
Average Checks per Simulation: ScoobyDoo: 63,551.20 checks/simulation  Prefix: 45,040.04 checks/simulation  RandomSkip: 65,353.88 checks/simulation
Prefix whooped some azzzzz....
fantom06
Jr. Member
Offline
Activity: 49
Merit: 1
|
 |
April 27, 2025, 09:52:27 PM |
|
Quote from: WanderingPhilospher
For grins and giggles I will use random starting points (since I am just using 1 block size). Who needs prefixes or straight line sequential order? [...] Prefix whooped some azzzzz....

Simulation 5000: Scooby_Doo = 87634 checks in 0.406732s | Prefix = 70515 checks in 0.261866s

=== FINAL RESULTS ===
Wins: Scooby_Doo: 2423  Prefix: 2379  Ties: 198
Total Checks: Scooby_Doo: 248193832  Prefix: 251283717
Total Time: Scooby_Doo: 1133.652252 seconds  Prefix: 949.231911 seconds
Averages (Total Time / Wins): Scooby_Doo: 0.467871 seconds/victory  Prefix: 0.399005 seconds/victory
Checks per Win: Scooby_Doo: 102432.45 checks/win  Prefix: 105625.77 checks/win
Menowa*
Newbie
Offline
Activity: 52
Merit: 0
|
 |
April 27, 2025, 10:20:54 PM |
|
Quote
It could be anywhere, but if you're thinking it's at hex 4321, go ahead and scan it yourself. Puzzle 69:
19vkiEajfhuZ8bs8Zu2jgmC6oqZbWqhxhG
Start Hex: 176DAEDFC76AE2EE58
End Hex: 17733BF5A8E10AEE69
...why I'm 70% sure the key is in this range. Try it, you'll thank me later.

It's not there!
lunixtlt
Newbie
Offline
Activity: 2
Merit: 0
|
 |
April 27, 2025, 11:06:56 PM |
|
Hello everyone. Guys, please help me with a script. I made a Python script that checks addresses in order, but it runs on the CPU at a speed of 200 Kkeys/s. I have already tried everything. Can someone point me to a script that would run on the GPU? Please, someone.
fantom06
Jr. Member
Offline
Activity: 49
Merit: 1
|
 |
April 27, 2025, 11:22:00 PM |
|
Quote from: lunixtlt
Can someone point me to a script that would run on the GPU? Please, someone.

a hundred for the script?
lunixtlt
Newbie
Offline
Activity: 2
Merit: 0
|
 |
April 27, 2025, 11:38:26 PM |
|
Quote from: fantom06
a hundred for the script?

I am communicating through a translator. I didn't quite understand you.
farou9
Newbie
Offline
Activity: 81
Merit: 0
|
 |
April 27, 2025, 11:38:37 PM |
|
Quote from: lunixtlt
Can someone point me to a script that would run on the GPU? Please, someone.

Tell ChatGPT to make you a CUDA script.
Bram24732
Member

Offline
Activity: 224
Merit: 22
|
 |
April 28, 2025, 05:47:05 AM |
|
I think I’ve approached all of this the wrong way.
I’m offering a 0.1 BTC bounty for the formal proof of any traversal method that provides a statistical edge over a linear scan for puzzle 69. By statistical edge I mean that this new traversal method running on a statistically significant number of executions requires significantly fewer checks (let’s put the threshold at 5%) to find the key.
Conditions:
- Has to be written using math semantics. Not "where does John live" metaphors.
- Has to be empirically validated using a Python / NodeJS script.
- First one posting it to this thread will be the recipient of the bounty.
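The empirical half of the bounty can be sketched as a small Python harness: plug in any traversal (as a function returning a visiting order) and measure average checks divided by N. Names and parameters below are illustrative:

```python
import random

def avg_fraction(make_order, n, sims, rng):
    """Average (checks to find target) / N over many simulations, for a
    traversal supplied as a function returning a visiting order."""
    total = 0
    for _ in range(sims):
        target = rng.randrange(n)
        for ops, key in enumerate(make_order(n), start=1):
            if key == target:
                total += ops
                break
    return total / sims / n

rng = random.Random(5)
n, sims = 1 << 10, 2000
methods = {
    "linear": lambda n: range(n),
    "shuffled": lambda n: rng.sample(range(n), n),
}
fracs = {name: avg_fraction(m, n, sims, rng) for name, m in methods.items()}
for name, f in fracs.items():
    print(f"{name}: avg checks / N = {f:.3f}")   # both hover around 0.5
```

A method claiming the bounty would have to drive this fraction significantly below 0.5 over a statistically meaningful number of runs.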
kTimesG
|
 |
April 28, 2025, 06:21:55 AM |
|
For anyone who still has more than 2 living neurons:
Block size: 5000; same traverse order. No need to bother even checking for an actual prefix - or do it, it's basically the same thing:

- key < 4096: on average, a tie; this results in 4.1% ties overall
- key < 5000: on average, seq wins; +1%
- key < 9096: prefix wins because it skipped 904 keys in block 1
- ... rinse and repeat according to what happened so far

etc, etc, etc... but instead of anyone actually doing this analysis to explain why the bias appears when the SAME ORDER is used... we get Doodles. At least it's an upgrade from TicTacToe, I guess.
If anyone still thinks the magic method works, and that it's a fallacy to ever have the atrocity to compare it to a method that goes from the other end (sorry, I meant breaking the relativity theory), I can only have a few recommendations:
1. Write to MIT / NASA / your local secret services agency.
2. Publish the paper. I have a good title: "How to Read People's Minds. Prefix Theory for Dummies."
3. You might want to stay off the internet for a while. Forget the forums! Those haters who don't get math will be all over you.
4. Since you now have too much free time, maybe plot a few graphs of your magic method and observe that the CDF is straight up identical in all cases. CDF stands for "Cute Doodle Figures".
|
Off the grid, training pigeons to broadcast signed messages.
viceversas
Newbie
Offline
Activity: 8
Merit: 0
|
 |
April 28, 2025, 07:41:05 AM |
|
Quote from: Bram24732
I'm offering a 0.1 BTC bounty for the formal proof of any traversal method that provides a statistical edge over a linear scan for puzzle 69.
In theory, modular traversal is bijective and randomized. So, generally speaking, if the target keys are randomly distributed across the keyspace, modular traversal can be as efficient as linear traversal. But for clustered targets, like in the 2^69 puzzle, modular traversal is not efficient, because the targets were clustered near the starting point (within a small range of the keyspace). So we need a "new" linear traversal. Just my 2 cents.
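"Modular traversal is bijective" can be shown in a couple of lines: stepping through the range as (a*i + b) mod N visits every key exactly once whenever the stride a is coprime with N. A minimal Python sketch (the constants are arbitrary):

```python
import math

def modular_order(n, a, b):
    """Visit keys as (a*i + b) mod n; this is a bijection iff gcd(a, n) == 1."""
    assert math.gcd(a, n) == 1, "stride must be coprime with range size"
    return [(a * i + b) % n for i in range(n)]

order = modular_order(16, a=5, b=3)
print(order)
print(sorted(order) == list(range(16)))  # True: every key visited exactly once
```

The order looks scrambled but covers the full range, which is why it performs like any other full-coverage scan on uniformly placed targets.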
kTimesG
|
 |
April 28, 2025, 08:15:22 AM |
|
Quote
Here is a direct proof on puzzle 17:

Code:
def get_filter_candidate(x: int, c: int) -> int:
    h2 = hash160_pubkey(x)
    val = int.from_bytes(h2, 'big')
    return val >> (160 - c)

def prefilter_scan(start: int, end: int, target: str, c: int):
    t1 = get_filter_target(target, c)
    ops = 0
    for x in range(start, end + 1):
        if get_filter_candidate(x, c) != t1:
            continue
        ops += 1
        if derive_address(x) == target:
            return x, ops
    return None, ops

You have an error in your script: you do the H160 and continue before you increment ops, so the heavy op is not always accounted for. Also, if the key isn't found, the method simply fails. I also believe that the actual HEAVY operation that ever matters is getting the public key of the private key, because that is the step where you go from the scalar domain into the discrete-log, injective-only domain.
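A version with the accounting fixed as the critique describes: the hash runs for every candidate, so the op is counted before the prefix test, and a miss returns explicitly. The `hash160_pubkey` helper here is a truncated-SHA256 stand-in (not the real pubkey-to-HASH160) just so the sketch is self-contained and runnable:

```python
import hashlib

def hash160_pubkey(x: int) -> bytes:
    # Stand-in for pubkey -> HASH160 (sha256 truncated to 20 bytes),
    # used only so this sketch runs without an EC library.
    return hashlib.sha256(x.to_bytes(32, 'big')).digest()[:20]

def prefilter_scan(start: int, end: int, target_h160: bytes, c: int):
    t1 = int.from_bytes(target_h160, 'big') >> (160 - c)
    ops = 0
    for x in range(start, end + 1):
        ops += 1                      # the hash below runs for EVERY x,
        h = hash160_pubkey(x)         # so the heavy op counts unconditionally
        if int.from_bytes(h, 'big') >> (160 - c) != t1:
            continue
        if h == target_h160:          # full comparison only on filter hits
            return x, ops
    return None, ops                  # explicit miss instead of silent failure

found, ops = prefilter_scan(1000, 2000, hash160_pubkey(1500), 16)
print(found, ops)
```

With the counting fixed, ops equals the number of hashes actually performed, so the filter no longer appears to save work it didn't save.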
|
Off the grid, training pigeons to broadcast signed messages.
kTimesG
|
 |
April 28, 2025, 08:34:24 AM |
|
Quote
My C-bit prefilter is a winner, my friend. Convinced? ⇒ bc1qvmw6hxf7fhatxstf7vg53cd3n2a4jfa8at9wa6

I'm not. Maybe Bram is, who knows? You still do the same amount of hashing, so there's zero saving. We don't need to derive an address - the H160 is always enough, and it's also always necessary. You still might not find a key, either. Nice try though.
|
Off the grid, training pigeons to broadcast signed messages.
nomachine
|
 |
April 28, 2025, 08:46:11 AM |
|
I'm not going to prove anything to anyone, but here is the millisecond Puzzle 30 solver:

Code:
import random
import os
import time
import secp256k1 as ice

puzzle = 30
target = "d39c4704664e1deb76c9331e637564c257d68a08"
lower_range_limit = 2 ** (puzzle - 1)
upper_range_limit = (2 ** puzzle) - 1

start_time = time.time()

for x in range(10000000):
    # Random seed config
    # constant_prefix = b''  # back to no constant
    constant_prefix = b'yx\xcb\x08\xb70l'
    prefix_length = len(constant_prefix)
    length = 8
    ending_length = length - prefix_length
    ending_bytes = os.urandom(ending_length)
    random_bytes = constant_prefix + ending_bytes
    random.seed(random_bytes)
    dec = random.randint(lower_range_limit, upper_range_limit)
    h160 = ice.privatekey_to_h160(0, True, dec).hex()
    if h160 == target:
        HEX = "%064x" % dec
        caddr = ice.privatekey_to_address(0, True, dec)
        wifc = ice.btc_pvk_to_wif(HEX)
        print("Bitcoin address Compressed: " + caddr)
        print("Private Key (decimal): " + str(dec))
        print("Private key (wif) Compressed : " + wifc)
        print(f"Random seed: {random_bytes}")
        break

end_time = time.time()
execution_time_ms = (end_time - start_time) * 1000
print("Execution Time (ms):", execution_time_ms)

Output:
Bitcoin address Compressed: 1LHtnpd8nU5VHEMkG2TMYYNUjjLc992bps
Private Key (decimal): 1033162084
Private key (wif) Compressed : KwDiBf89QgGbjEhKnhXJuH7LrciVrZi3qYjgd9M8diLSC5MyERoW
Random seed: b'yx\xcb\x08\xb70l\xf1'
Execution Time (ms): 2.977609634399414

There are no rules for how any puzzle should be solved - the rule is that there are no rules.
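What makes the solver above finish in milliseconds appears to be the fixed seed prefix: with only one byte left to `os.urandom`, the seed space is tiny, and re-seeding the PRNG with the same bytes deterministically reproduces the same draw. A two-line Python demonstration of that determinism:

```python
import random

# Re-seeding with identical bytes reproduces the exact same "random" key,
# so a seed found after the fact "solves" the puzzle instantly.
random.seed(b'yx\xcb\x08\xb70l\xf1')     # the seed the solver printed
a = random.randint(2 ** 29, 2 ** 30 - 1)
random.seed(b'yx\xcb\x08\xb70l\xf1')
b = random.randint(2 ** 29, 2 ** 30 - 1)
print(a == b)   # True: same seed, same draw - nothing is being searched
```

Whether the printed seed reproduces the known key is a property of the PRNG state, not of any search strategy.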
|
BTC: bc1qdwnxr7s08xwelpjy3cc52rrxg63xsmagv50fa8
kTimesG
|
 |
April 28, 2025, 08:55:16 AM |
|
Quote
My C-bit prefilter is a winner, my friend.

Quote from: kTimesG
You still do the same amount of hashing, so there's zero saving. We don't need to derive an address. The H160 is always enough. And it's always also necessary.

Quote
That comment misses the real win: we're not paying the full cost of address derivation + Base58 on every candidate, only on the tiny fraction that passes the c-bit filter.

I think you're either trolling, or simply don't get what is being searched.
|
Off the grid, training pigeons to broadcast signed messages.