RetiredCoder
Full Member
 
Offline
Activity: 152
Merit: 140
No pain, no gain!
|
 |
November 07, 2024, 08:17:12 AM |
|
Yesterday I finally decided to take the trick that @arulbero came up with 4 years ago off the dusty shelf. I decided to try to expand 80-bit points to the 85-bit range. The expected solution time is 5 hours 35 minutes on a GTX 1660s. 5 hours passed and nothing... I thought this trick really wouldn't work. But today I changed the jump table (as you hinted) and here is the result: on the first attempt the key in the 85-bit range was found in 6 minutes, and on the second in 14 minutes.
Yes, old DPs can be reused if you keep old jump table, but they don't help much. It's a good idea to use them for one next puzzle to improve chances a bit, but that's all.
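The scaling behind the trick can be sketched in a toy group (plain modular exponentiation standing in for secp256k1 points; all values here are illustrative, not anyone's actual solver code): raising every stored point to a power c, the analogue of multiplying an EC point by c, turns a valid (point, distance) pair into another valid pair with distance c*d. That is why an old DP database plus a c-multiplied jump table carries over to the larger range.

```python
# Toy analogue in a multiplicative group instead of secp256k1: a stored DP
# is a pair (point, distance) with point = g^distance.  Raising every stored
# point to the power c (the EC analogue of multiplying the point by c) gives
# a new valid pair (point^c, c*distance mod group_order), so an old DP
# database for a b-bit range can be reinterpreted in the c-times-larger
# range, provided new walks use a jump table also multiplied by c.
p = 2 ** 13 - 1          # 8191, a Mersenne prime; group order is p - 1
g = 17                    # fixed base element, fine for a demo
c = 2 ** 5                # expansion factor: 80-bit -> 85-bit

db = [(pow(g, d, p), d) for d in (1234, 777, 4096)]    # old "DPs"
scaled = [(pow(pt, c, p), (c * d) % (p - 1)) for pt, d in db]

for pt, d in scaled:
    # scaled entries are still valid (point, log) pairs
    assert pt == pow(g, d, p)
```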
|
|
|
|
|
Etar
|
 |
November 07, 2024, 10:42:38 AM |
|
What trick is that?
https://bitcointalk.org/index.php?topic=5244940.msg54629413#msg54629413
Yes, old DPs can be reused if you keep old jump table, but they don't help much. It's a good idea to use them for one next puzzle to improve chances a bit, but that's all.
Not only the next one.. 80 bit DPs extended to 90 bit range:
Start: 0
Stop : 3FFFFFFFFFFFFFFFFFFFFFF
Keys : 1
KeyX : 672DDE17A8F345C04D6C0B5C53750E107907313ECF5B3FFDB122868515ECD171
KeyY : B151B8521AA206F51FE685A231C5FAED10D903DA70F15EDDE3C09CB2AC41AA08
LoadWork: [HashTable 7449.0/9315.7MB] [35s]
Number of CPU thread: 0
NB_RUN: 64
GPU_GRP_SIZE: 128
NB_JUMP: 32
Range width: 2^90
Jump Avg distance: 2^40.03
Number of kangaroos: 2^20.46
Suggested DP: 24
Expected operations: 2^46.08
Expected RAM: 2725.0MB
DP size: 20 [0xFFFFF00000000000]
GPU: GPU #0 NVIDIA GeForce GTX 1660 SUPER (22x64 cores) Grid(88x128) (141.0 MB used)
SolveKeyGPU Thread GPU#0: creating kangaroos...
SolveKeyGPU Thread GPU#0: 2^20.46 kangaroos [10.5s]
[711.66 MK/s][GPU 711.66 MK/s][Count 2^49.63][Dead 0][32:00 (Avg 1.2d)][7483.8/9361.2MB]
Cur td: 8F8EDB4B5B318F159493
Mult td: 23E3B6D2D6CC63C56524C00
Key# 0 [1S]Pub: 0x02672DDE17A8F345C04D6C0B5C53750E107907313ECF5B3FFDB122868515ECD171
Priv: 0x387599B938E25FF1A07C51E
Done: Total time 32:40
But of course there is a limitation here, and it depends on the number of accumulated DPs. Say, I will not be able to solve 95 bits with the number of DPs that I have.
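The limitation can be made concrete: the number of stored DPs needed grows with the square root of the range, so a DB accumulated for 80 bits falls further behind at each expansion step. A rough sketch (`dps_needed` is a hypothetical helper, not Etar's code, and the 2.076 constant is the estimate used later in this thread):

```python
import math

def dps_needed(range_bits, dp_bits, k=2.076):
    """Rough number of distinguished points needed: total expected group
    operations (~ k * sqrt(range)) thinned by the 1-in-2^dp_bits DP mask."""
    ops = k * math.sqrt(2 ** range_bits)
    return ops / 2 ** dp_bits

# A DB sized for the 80-bit range covers a shrinking fraction of what
# larger ranges require: ratios are 2^2.5, 2^5, 2^7.5 (~5.66x, 32x, ~181x).
have = dps_needed(80, 20)
for bits in (85, 90, 95):
    print(bits, dps_needed(bits, 20) / have)
```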
|
|
|
|
|
RetiredCoder
Full Member
 
Offline
Activity: 152
Merit: 140
No pain, no gain!
|
 |
November 07, 2024, 11:46:26 AM |
|
Not only the next one.. 80 bit DPs extended to 90 bit range: [log snipped]
But of course there is a limitation here and it depends on the number of accumulated DPs. Let's say, 95 bits I will not be able to solve with the number of DPs that I have.
Exactly. If you solved #80, DB with DPs for it will help only like 5-10% to solve #85. But if you have huge x10 DB for #80 you can solve 80-bit range very quickly and also solve #85 much faster, and even #90 will be solved a bit faster. But for high-number puzzles probably you don't have x10 DB.
|
|
|
|
|
Etar
|
 |
November 07, 2024, 02:09:44 PM |
|
So the trick is to spread out existing DPs into the higher interval, by changing the perspective on the generator.. But because jump rules need to be identical, this means the jumps are also multiplied as well, to make the same jumps, like this:
You are absolutely right, the jump table should be multiplied. I can't say anything about the optimal distance. I just wanted to check again whether it works, because 4 years ago I missed the point about the jump multiplier. I'm not sure about the practical benefit of the method, but a boost of some 12% can be obtained when moving from puzzle to puzzle.
|
|
|
|
|
RetiredCoder
Full Member
 
Offline
Activity: 152
Merit: 140
No pain, no gain!
|
 |
November 07, 2024, 02:56:03 PM |
|
If we have some optimal average jump distance = m * sqrt(b) / 4 where m = number of kangaroos
Why do you think so? From my experience, optimal average jump distance does not depend on the number of kangs (at least if you have many of them), it's always about sqrt(range).
|
|
|
|
|
Etar
|
 |
November 07, 2024, 05:31:45 PM Last edit: November 07, 2024, 05:47:37 PM by Etar |
|
The optimal value of the jump range is when the kangaroos do not run further than 2 ranges during the solving:
expected_op = 2.076 * math.sqrt(range) + Nkangaroo
optimal_jmp_distance = range * 2 // expected_op
or simply math.sqrt(range)
But the trick probably won't work for puzzles... For example, you solved the 120-bit range and accumulated points. The optimal jump distance for this range is 2^59.95. When you expand these points to the next range, 125 bits, they will all be in the range of 2^125, but the average jump distance will be multiplied by 32 and will be 2^64.95. At the same time, for a 125-bit range the optimal jump distance is 2^62.44. This means that by the time you accumulate the required number of points for the solution, they will go 5.7 times further out of range. There is no problem that they go out of range, kangaroos always run forward, but there will be no benefit from the points that remain in the range of 2^125.
Most likely the trick works well when you have a huge base of accumulated points and can use it to expand the range, knowing that the number of points is enough for the solution.
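Plugging numbers into the formulas above reproduces the figures in this post (a quick check, assuming a hypothetical 2^20 kangaroos for the Nkangaroo term):

```python
import math

def expected_ops(range_bits, n_kang=2 ** 20):
    # estimate from above: 2.076 * sqrt(range) + number of kangaroos
    return 2.076 * math.sqrt(2 ** range_bits) + n_kang

def optimal_jump_bits(range_bits):
    # optimal average jump distance = range * 2 / expected_op, in bits
    return math.log2(2 ** range_bits * 2 / expected_ops(range_bits))

inherited = optimal_jump_bits(120) + 5   # 120-bit table scaled by 2^5 -> ~64.95
native = optimal_jump_bits(125)          # optimal for 125 bits -> ~62.44
overshoot = 2 ** (inherited - native)    # ~5.66, the "5.7 times further"
```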
|
|
|
|
|
RetiredCoder
Full Member
 
Offline
Activity: 152
Merit: 140
No pain, no gain!
|
 |
November 07, 2024, 08:49:07 PM |
|
The average jump distance is the one that needs to be multiplied by m, not every individual jump distance! The average jump distance = sum(jump_distances) / len(jump_distances) ... Tame = random between b/2 and b; Wild = random between 1 and b/2; alpha = sqrt(b); 21 jump points. Results:
If I need to increase the average distance I generate larger jumps, but I don't change the number of jumps in the jump table. But it seems you do: you added 8 more points to get the x256 average distance somehow. So my next question is: how do you generate the jump table?
|
|
|
|
madogss
Newbie
Offline
Activity: 53
Merit: 0
|
 |
November 07, 2024, 09:30:54 PM |
|
I might be thinking about this wrong, but if you had a DB of DPs for, say, the 70-bit range, then you could use Albert's ecctools to divide #135 down to that range and check for collisions. Would that be faster than running BSGS over and over on that range for each pubkey?
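On the scalar side the division idea looks like this (a toy with a made-up stand-in "private key"; the real ecctools operates on public points, but the arithmetic is the same): stripping k bits means trying up to 2^k subtract-then-divide candidates, only one of which lands in the reduced range, so the saved range is paid back in candidate pubkeys to check.

```python
# secp256k1 group order
n = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141
k = 65                            # reduce a ~135-bit key to a ~70-bit range
priv = (1 << 134) | 0xDEADBEEF    # hypothetical stand-in for the #135 key

# Of the 2^k offsets r, only r = priv mod 2^k makes (P - r*G) / 2^k land
# in the small range; on scalars this is (priv - r) * inv(2^k) mod n.
r = priv % (1 << k)
cand = (priv - r) * pow(1 << k, -1, n) % n
assert cand == priv >> k          # the one lucky candidate...
assert cand < 1 << 70             # ...really falls in the 70-bit range
```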
|
|
|
|
|
RetiredCoder
Full Member
 
Offline
Activity: 152
Merit: 140
No pain, no gain!
|
 |
November 08, 2024, 09:17:40 AM |
|
Powers of two, and adjust the last element to the value needed to have the desired average. This jump set was proven optimal here (e.g. minimizes total number of operations): Kangaroos, Monopoly and Discrete Logarithms, J. M. Pollard, 2000
Well, ok. But I use fixed length table for all tests, it's more practical for implementation, also I get better results for longer list than using powers of two. I will try your approach to see results. Question for you: do you prefer fewer kangaroos that jump faster, or lots of kangaroos that jump slower?
I prefer faster kangs because of high DP bits that I have to use to solve high puzzles, to get smaller overhead. But even so, the number of kangs is crazy because there are many GPUs and every GPU has a lot of kangs anyway. How can you explain case #3 (the awful case with runtime 172 sqrt)? When the Tame and Wild are separated by a distance of b/2, and the average jump size is much too small, it will take a lot of jumps for them to ever meet. In the random case, it's a little better than that, but still too far from the optimal case (e.g. a correct larger average jump size).
Main question here is how many times do you solve a point to calculate average result value? In my tests I solve at least 1000 times.
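A sketch of the construction described in the quote above, powers of two with the last entry adjusted to hit the desired average (an illustrative helper under that description, not anyone's actual generator):

```python
def pollard_jump_table(avg_target, n):
    """Jump set in the style of Pollard (2000): powers of two, with the
    last element chosen so the mean equals the desired average distance."""
    jumps = [2 ** i for i in range(n - 1)]
    last = avg_target * n - sum(jumps)
    assert last > 0, "avg_target too small for this table length"
    jumps.append(last)
    return jumps

# e.g. a 32-entry table with average jump distance 2^40
table = pollard_jump_table(2 ** 40, 32)
assert sum(table) // len(table) == 2 ** 40
```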
|
|
|
|
turardarmen
Newbie
Offline
Activity: 1
Merit: 0
|
 |
November 08, 2024, 05:08:34 PM Last edit: November 09, 2024, 03:13:39 PM by hilariousandco |
|
Hello everyone, maybe someone will be interested: if this puzzle is a deterministic wallet, and each subsequent private key becomes 2x harder while the bitcoin at that address also doubles, then could its seed phrase be 1, 2, 4, 8, 16, 32, 64, 128, 256, 512, 1024, 2048 (2^0, 2^1, 2^2, ... 2^11), i.e. "abandon ability about abstract acid advance among avocado cable divide lend zoo"? It might just be nonsense. If it turns out as I said, I think that the person who finds the solution will give me a refund. Sorry for bad English.
I know that the creator of the puzzle said that there is no pattern. What does the creator say to this? Can you comment?
|
|
|
|
|
damiankopacz87
Newbie
Offline
Activity: 16
Merit: 0
|
 |
November 08, 2024, 07:16:55 PM |
|
Hello everyone, maybe someone will be interested: if this puzzle is a deterministic wallet,
Hi, as far as I have studied Bitcoin, it's highly unlikely for these addresses to come from one deterministic wallet. Please correct me if I am wrong. When creating a deterministic wallet you have no influence on the final shape of the private keys, so you cannot make them 1-bit, 2-bit, 3-bit, etc. BR Damian
|
|
|
|
|
gygy
Newbie
Offline
Activity: 24
Merit: 0
|
 |
November 08, 2024, 09:18:17 PM |
|
Hello everyone, maybe someone will be interested: if this puzzle is a deterministic wallet,
Hi, as far as I have studied Bitcoin, it's highly unlikely for these addresses to come from one deterministic wallet. Please correct me if I am wrong. When creating a deterministic wallet you have no influence on the final shape of the private keys, so you cannot make them 1-bit, 2-bit, 3-bit, etc. BR Damian
You can have a deterministic wallet and get the first 160 addresses. Take those private keys, mask them as you need (flip ones and zeros as you wish), and create addresses for these modified private keys.
|
|
|
|
|
WanderingPhilospher
Sr. Member
  
Offline
Activity: 1484
Merit: 285
Shooters Shoot...
|
 |
November 09, 2024, 05:36:06 AM |
|
Powers of two, and adjust the last element to the value needed to have the desired average. This jump set was proven optimal here (e.g. minimizes total number of operations): Kangaroos, Monopoly and Discrete Logarithms, J. M. Pollard, 2000
Well, ok. But I use fixed length table for all tests, it's more practical for implementation, also I get better results for longer list than using powers of two. I will try your approach to see results. Question for you: do you prefer fewer kangaroos that jump faster, or lots of kangaroos that jump slower?
I prefer faster kangs because of high DP bits that I have to use to solve high puzzles, to get smaller overhead. But even so, the number of kangs is crazy because there are many GPUs and every GPU has a lot of kangs anyway. How can you explain case #3 (the awful case with runtime 172 sqrt)? When the Tame and Wild are separated by a distance of b/2, and the average jump size is much too small, it will take a lot of jumps for them to ever meet. In the random case, it's a little better than that, but still too far from the optimal case (e.g. a correct larger average jump size).
Main question here is how many times do you solve a point to calculate average result value? In my tests I solve at least 1000 times.
I never understood why JLP and the clones leaned towards squeezing more kangaroos into GPU memory; it slows everything down instead of speeding things up. Lower throughput, lower speed per kang, and a really high DP overhead.
Like, what are y'all even talking about here? You prefer faster kangaroos versus more, slower kangaroos... ok, what's the sweet spot? What bit range? What DP is used? All of these play a factor... I don't think you can say x y z is always better.
Isn't this a super easy task to test? Give me the same program, and I will run it with a GPU and then with a CPU, and let's see which solves the key first. Let's make it an 80-bit range. 1 GPU versus a single core, or do you want to use as many cores as the CPU has? Any bets on which one finds the key first? Also, RetiredCoder, make mods to the program to create fewer "kangs" when using a GPU, if it's too crazy for you... it's super easy to do.
And another question: how does the speed of "kangs" impact the finding of high DP bits? Does a CPU (where the individual kangs are faster) find high DP bits faster? Or do the GPU's slow, but many, kangs find more DP bits faster?
And the last question: which "high puzzles" have you solved and what did you use to solve them (CPU, GPU, DP, etc.)?
|
|
|
|
|
Kelvin555
Jr. Member
Offline
Activity: 63
Merit: 1
|
 |
November 09, 2024, 05:48:35 AM |
|
Hi,
As far as I have studied bitcoin, it's highly impossible for this addresses to came from one deterministic wallet. Please, correct me if I am wrong. Creating deterministic wallet You have no influence on final shape of priv keys so Yo can not made them,1bit, 2bit, 3bit etc.
BR Damian
You can have a deterministic wallet and get the first 160 address. Get those private keys, mask them for you need (flip ones and zeros as you wish), and create addresses for these modified private keys.
I think these keys are from a deterministic wallet, but not just 256 keys; he created like 20,000 keys or more. Puzzle 65's private key might be the 10,000th key masked down, 66's the 1,500th key masked down, and 67's the 7,600th key masked down. Just in case some people try to use the keys to derive the seed, they will always fail because the keys are not in the order they were generated. He said he spent two years creating this challenge; if he had just generated 256 keys and masked them down, it would not have taken 48 hours.
|
|
|
|
|
Akito S. M. Hosana
Jr. Member
Offline
Activity: 420
Merit: 8
|
 |
November 09, 2024, 12:53:08 PM |
|
I won't post its source, but soon I will post some software for K (examining K*sqrt required ops) - from classic 2.1 to best 1.23 so people who are interested can learn some things. I solved #120, #125 and #130 for now; for #130 the DB with DPs was about 1TB. About big DP overhead in JLP you can check my old posts, I'm glad that it's super easy for you to make a smaller number of faster kangaroos to reduce this overhead 
How did you manage and access the DP database efficiently, given its 1TB size? Did you employ any specific data structures or caching strategies to minimize access time?
You mentioned achieving a reduction in the required operations from classic 2.1 to best 1.23 in K * sqrt terms. Could you explain what methods or optimizations contributed most to this efficiency? How did you implement these changes in your software?
When transitioning from JLP's code to your custom implementation, what metrics did you use to benchmark efficiency improvements? Were there any specific areas where you saw the most significant speed gains?
|
|
|
|
|
RetiredCoder
Full Member
 
Offline
Activity: 152
Merit: 140
No pain, no gain!
|
 |
November 09, 2024, 01:22:29 PM |
|
I won't post its source, but soon I will post some software for K (examining K*sqrt required ops) [...] How did you manage and access the DP database efficiently, given its 1TB size? [...]
Most answers are in the sources here: https://bitcointalk.org/index.php?topic=5517607
About the DB - I stored it in RAM directly 
|
|
|
|
GR Sasa
Member

Offline
Activity: 200
Merit: 14
|
 |
November 09, 2024, 01:49:37 PM |
|
I have the feeling that RetiredCoder is the creator of the puzzles himself.
|
|
|
|
|
Anonymous User
Newbie
Offline
Activity: 30
Merit: 0
|
 |
November 09, 2024, 02:11:40 PM |
|
I have the feeling that RetiredCoder is the creator of the puzzles himself.
No, he’s not the creator of this puzzle. The creator removed puzzle #120 to make the total prize 1000 BTC. RetiredCoder shared the private key for puzzle #120 after a year, and in the thread below he clearly mentions he has 20 PCs with RTX 4090s (maybe he's lying).
https://bitcointalk.org/index.php?topic=5512304.msg64643001#msg64643001
If you look at the timing of puzzles #125 and #130, there’s just a two-month gap between them. So, do you still believe that in just two months RetiredCoder created a 1TB distinguished-points table with 20 GPUs and also solved puzzle #130?
|
|
|
|
|
|
nomachine
|
 |
November 09, 2024, 02:28:31 PM |
|
20 GPUs solved puzzle #130?
I also do something similar to this. 20 GPUs are not enough; this speed is impossible. It must be a three-digit number of graphics cards, even if you change the BIOS on the GPUs and use a different CUDA kernel. 
|
BTC: bc1qdwnxr7s08xwelpjy3cc52rrxg63xsmagv50fa8
|
|
|
WanderingPhilospher
Sr. Member
  
Offline
Activity: 1484
Merit: 285
Shooters Shoot...
|
 |
November 09, 2024, 03:18:00 PM |
|
For DP overhead, it was discussed in JLP's Kangaroo thread. His program tries to give a user the best DP to use, the one that will create the least amount of overhead. The issue comes when you have multiple machines, like if you are using the server; it does not account for this. But I am pretty sure it was discussed how to come up with a more "optimal" DP, to decrease DP overhead, based on how many kangaroos are expected to be used.
Imagine a 10-lane highway, and you have 1000 cars, and they all need to go from point A to point B before moving from point B to point C.
Yeah, I don't understand this. But it's ok, I don't need to. I don't understand how the speed is tied to this... the same would be true for the fastest of fastest kangaroos, using the fastest of fastest CPUs. I know that more but slower kangaroos will solve faster than fewer but faster kangaroos, when dealing with higher bit ranges. Speed versus "efficiency / optimization" (DP overhead) are two different things, in my opinion.
I solved #120, #125 and #130 for now
I still do not believe you solved these, or were the first to solve #120. You could easily sign a message for #125 and #130 to prove this. Maybe you did and I missed it? Or are you still waiting for your merit points to be "bumped" up before doing so?
|
|
|
|
|
|