Bitcoin Forum
November 29, 2025, 08:18:53 AM *
News: Latest Bitcoin Core release: 30.0 [Torrent]
 
Pages: « 1 ... [333] ... 605 »
Author Topic: Bitcoin puzzle transaction ~32 BTC prize to who solves it  (Read 353386 times)
bitcurve
Member
**
Offline Offline

Activity: 76
Merit: 11


View Profile
November 16, 2024, 04:09:09 PM
 #6641

When I try this trick, it just doesn't find the collision. Even when it should take just a second, it never finds it. Can you confirm these are the steps you took?
Choose a range (say, 48-bit).
Choose any key that is not in that range.
Run Kangaroo on it for some time (with -m 3, to match your example exactly) and have it save the work into a workfile.
Then load the workfile into a new session with an actual key inside the range.
This is what I did; it didn't work.
Before finding the public key in the range, did you change it in the workfile using a script?
P.S. 48-bit...? Maybe you are using a CPU and not a GPU; then it won't work. I remade the JLP program for GPU only, I don't know who uses a CPU now.

I didn't touch the workfile or change it using a script. I did try using a CPU; I'll try now using a GPU.

Edit - it doesn't work with a GPU either. What did you change from JLP's version?
albert0bsd
Hero Member
*****
Offline Offline

Activity: 1120
Merit: 718



View Profile
November 16, 2024, 04:34:32 PM
 #6642

Edit - it doesn't work with a GPU either. What did you change from JLP's version?

Judging by the things he has shared here on the forum and on GitHub, it isn't a single line or a few of them; I think he rewrote a lot of the code and added new code to it.

If he doesn't share it, I doubt you can replicate it.
bitcurve
Member
**
Offline Offline

Activity: 76
Merit: 11


View Profile
November 16, 2024, 04:46:44 PM
 #6643

Edit - it doesn't work with a GPU either. What did you change from JLP's version?

Judging by the things he has shared here on the forum and on GitHub, it isn't a single line or a few of them; I think he rewrote a lot of the code and added new code to it.

If he doesn't share it, I doubt you can replicate it.

I think the issue is that we need to overwrite the key in the workfile; working on a script to do it now...
Etar
Sr. Member
****
Offline Offline

Activity: 654
Merit: 316


View Profile
November 16, 2024, 05:24:13 PM
Merited by vapourminer (1)
 #6644

I didn't touch the workfile or change it using a script. I did try using a CPU; I'll try now using a GPU.
Edit - it doesn't work with a GPU either. What did you change from JLP's version?
I downloaded the KangarooOT and KangarooOW versions from GitHub to make sure we were using the same tools.
For the experiment we will look for the public key 02d2779258710a6fcd4e978335698e5c1b20795f8c3aae524714e0e40ebacdb213
whose private key is 0x7989031fda5ba3bf5 in the 67-bit range from 0 to 7ffffffffffffffff
Step 0: create a 67bitWrong.txt file with the following content:
Code:
0
7ffffffffffffffff
03633CBE3EC02B9401C5EFFA144C5B4D22F87940259634858FC7E59B1C09937852
Step 1: For the accumulation of tame DPs, create a step1.bat file with the following content and launch it:
Code:
kangarooOT -t 0 -gpu -gpuId 0 -g 88,128 -m 2.5 -d 13  -wi 120 -w testwork 67bitWrong.txt
for /l %%i in (1,1,6) do (
    echo Iteration %%i    
    kangarooOT -t 0 -gpu -gpuId 0 -g 88,128 -m 2.5 -d 13  -wi 120 -w testwork -i testwork
)
pause
Step 2: To change the public key in the workfile, make a step2.bat file and launch it:
Code:
py changewf.py -f testwork -pub 02d2779258710a6fcd4e978335698e5c1b20795f8c3aae524714e0e40ebacdb213 -rb 0 -re 7ffffffffffffffff
pause
Step 3: To find the public key, make a step3.bat file and launch it:
Code:
kangarooOW -t 0 -gpu -gpuId 0 -g 88,128 -i testwork
pause
P.S. If you like it, I can add a version with a bit more speed (GTX 1660 Super - 1.1 Gk/s)
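Editor's note: Etar's steps work because of a property of the kangaroo method itself — the tame herd walks from known offsets inside the range and never touches the target public key, so a saved tame hashtable can be replayed against any key in the same range. A minimal, self-contained sketch of that property (a toy discrete log modulo a small prime, NOT the real secp256k1 workfile format; all names and parameters here are illustrative):

```python
import random

P = 2**31 - 1            # small prime modulus (toy group, not secp256k1)
G = 7                    # generator of a large subgroup mod P
RANGE = 2**20            # toy search interval [0, RANGE)
DP_MASK = (1 << 8) - 1   # "distinguished point" = low 8 bits are zero
JUMPS = [(1 << e, pow(G, 1 << e, P)) for e in range(16)]  # (distance, G^distance)

def build_tame_traps(n_walks=128, steps=8000):
    """Tame kangaroos: walk from KNOWN exponents; the target is never used."""
    traps = {}
    for _ in range(n_walks):
        d = random.randrange(RANGE)      # known starting distance
        x = pow(G, d, P)
        for _ in range(steps):
            if (x & DP_MASK) == 0:
                traps[x] = d             # remember: this point = G^d
            j, gj = JUMPS[x % 16]        # jump chosen by current point only
            x = x * gj % P
            d += j
    return traps

def solve(target, traps, max_steps=2_000_000):
    """Wild kangaroo: walk from the target until it lands in a tame trap."""
    x, d = target, 0
    for _ in range(max_steps):
        if (x & DP_MASK) == 0 and x in traps:
            return (traps[x] - d) % (P - 1)  # G^(secret+d) = G^traps[x]
        j, gj = JUMPS[x % 16]
        x = x * gj % P
        d += j
    return None

random.seed(1)
traps = build_tame_traps()
# the SAME trap table cracks two different keys:
for secret in (123456, 987654):
    k = solve(pow(G, secret, P), traps)
    print(secret, k, k is not None and pow(G, k, P) == pow(G, secret, P))
```

Note that `build_tame_traps` never sees the target, which is exactly why swapping the public key in a saved workfile (what changewf.py does to the header) lets the accumulated tame DPs be reused.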
bitcurve
Member
**
Offline Offline

Activity: 76
Merit: 11


View Profile
November 16, 2024, 05:28:29 PM
 #6645

Quote from: Etar
I downloaded the KangarooOT and KangarooOW versions from GitHub to make sure we were using the same tools. [...full steps 0-3 quoted above...]

Thanks for the details, I already got it to work using my own version.
If you can share changewf.py, though, that'd be great.
Etar
Sr. Member
****
Offline Offline

Activity: 654
Merit: 316


View Profile
November 16, 2024, 05:31:46 PM
 #6646

If you can share changewf.py though, that'd be great.
https://bitcointalk.org/index.php?topic=1306983.msg64701973#msg64701973
bitcurve
Member
**
Offline Offline

Activity: 76
Merit: 11


View Profile
November 16, 2024, 05:35:07 PM
 #6647


P.S. If you like it, I can add a version with a bit more speed (GTX 1660 Super - 1.1 Gk/s)

I wonder what exactly you mean by 1.1 Gk/s. Is only one type of kangaroo included in that? Using my 1080 Ti, I get about 580 Mk/s with my version.
Etar
Sr. Member
****
Offline Offline

Activity: 654
Merit: 316


View Profile
November 16, 2024, 05:43:05 PM
Last edit: November 16, 2024, 06:37:06 PM by Etar
 #6648

I wonder what exactly you mean by 1.1 Gk/s. Is only one type of kangaroo included in that? Using my 1080 Ti, I get about 580 Mk/s with my version.
The original JLP Kangaroo version 1.7 gives a speed of about 700 Mk/s on a GTX 1660 Super; a few changes in the code and the speed becomes 1.1 Gk/s, and around 2.2 Gk/s on a 2080 Ti.
The 1080 Ti is an old card; I don't know what speed you can get.
P.S. It doesn't matter whether it's one type of kangaroo or both; the speed is the same.
Code:
Kangaroo v1.7Gfix (Only Wild)
Gx=79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798
Gy=483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8
G Multipler: 0x1
JMP Multipler: 0x1
Loading: testwork
Start:0
Stop :7FFFFFFFFFFFFFFFF
Keys :1
KeyX :D2779258710A6FCD4E978335698E5C1B20795F8C3AAE524714E0E40EBACDB213
KeyY :D973566FFD3D6F79192827E1F93CCBF7E7F2EAF48F762C72C37578EA8154D978
LoadWork: [HashTable 1665.5/2088.3MB] [07s]
Number of CPU thread: 0
NB_RUN: 128
GPU_GRP_SIZE: 128
NB_JUMP: 32
Range width: 2^67
JMP bits DEC: 34
Jump Avg distance min: 2^32.95
Jump Avg distance max: 2^33.05
Jump multipled by: 0x1
Jump Avg distance: 2^32.97 [96]
Number of kangaroos: 2^20.46
Suggested DP: 13
Expected operations: 2^35.11
Expected RAM: 184.5MB
DP size: 13 [0xFFF8000000000000]
GPU: GPU #0 NVIDIA GeForce GTX 1660 SUPER (22x64 cores) Grid(88x128) (141.0 MB used)
SolveKeyGPU Thread GPU#0: creating kangaroos...
SolveKeyGPU Thread GPU#0: 2^20.46 kangaroos [9.4s]
[1132.46 MK/s][GPU 1132.46 MK/s][Count 2^38.71][Dead 0][04s (Avg 32s)][1677.2/2103.0MB]
Key# 0 [1S]Pub:  0x02D2779258710A6FCD4E978335698E5C1B20795F8C3AAE524714E0E40EBACDB213
       Priv: 0x7989031FDA5BA3BF5
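Editor's note: for readers wondering where lines like "Expected operations: 2^35.11" in the log above come from, kangaroo's cost model is roughly 2*sqrt(range width) group operations to collide, plus a distinguished-point overhead of (number of kangaroos) x 2^dp, since each kangaroo walks up to 2^dp extra steps before reporting its DP. A back-of-envelope check with the values printed in the log (the exact constant in the real code differs slightly):

```python
from math import log2, sqrt

range_bits = 67            # Range width: 2^67
n_kangaroos = 2 ** 20.46   # Number of kangaroos: 2^20.46
dp = 13                    # DP size: 13

# ~2*sqrt(N) jumps for the tame/wild collision, plus DP reporting overhead
expected = 2 * sqrt(2 ** range_bits) + n_kangaroos * 2 ** dp
print(f"2^{log2(expected):.2f}")   # close to the 2^35.11 the tool reports
```

This also shows why DP choice is a tradeoff: a larger dp shrinks the hashtable but inflates the n_kangaroos * 2^dp term.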
b0dre
Jr. Member
*
Offline Offline

Activity: 61
Merit: 1


View Profile
November 16, 2024, 08:14:19 PM
 #6649

Quote from: Etar
The original JLP Kangaroo version 1.7 gives a speed of about 700 Mk/s for GTX 1660 Super, a few changes in the code and the speed will be 1.1 Gk/s and around 2.2Gk/s for 2080ti. [...log quoted above...]

I would like to test this version with a 3060; can you share it?
kTimesG
Full Member
***
Offline Offline

Activity: 672
Merit: 210


View Profile
November 17, 2024, 01:40:53 AM
 #6650


The original JLP Kangaroo version 1.7 gives a speed of about 700 Mk/s for GTX 1660 Super, a few changes in the code and the speed will be 1.1 Gk/s and around 2.2Gk/s for 2080ti.

What grid size are you using with the "original JLP Kangaroo version 1.7" to see 700 Mk/s on a GTX 1660S? Are you sure that speed is real?

Because with no code changes and using the 1.7 release tag, I can't go beyond 300 Mk/s (or rather ~250 in reality, since the stats display is kinda broken) on a card that is both newer and superior to that model. And looking at the nvcc compile stats, I have some doubts that it would even be capable of triple the speed on an inferior card with 25% fewer CUDA cores.

Off the grid, training pigeons to broadcast signed messages.
b0dre
Jr. Member
*
Offline Offline

Activity: 61
Merit: 1


View Profile
November 17, 2024, 03:14:59 AM
 #6651


Quote from: kTimesG
What grid size are you using with the "original JLP Kangaroo version 1.7" in order to see it at 700 Mk/s on a GTX 1660S? Are you sure about that speed being real?

I got 1308.29 Mk/s with 2x GTX 1660S, like 700 each.
Etar
Sr. Member
****
Offline Offline

Activity: 654
Merit: 316


View Profile
November 17, 2024, 05:52:55 AM
Last edit: November 17, 2024, 10:11:11 AM by Etar
Merited by vapourminer (1)
 #6652

What grid size are you using with the "original JLP Kangaroo version 1.7" to see 700 Mk/s on a GTX 1660S? Are you sure that speed is real?

Because with no code changes and using the 1.7 release tag, I can't go beyond 300 Mk/s (or rather ~250 in reality, since the stats display is kinda broken) on a card that is both newer and superior to that model. And looking at the nvcc compile stats, I have some doubts that it would even be capable of triple the speed on an inferior card with 25% fewer CUDA cores.
You probably configured the grid incorrectly. Nsight shows that 4 blocks run simultaneously; that's why I use a grid of 88x128. And yes, the speed is roughly correct, since it matches the amount of DPs accumulated over a given period of time.
Or you used a small DP value; then of course the speed will be much lower. For example, with DP 13 the speed is only 950 Mk/s (on the patched version) versus 1100 Mk/s with DP 20.
P.S. But you are right, there is a glitch with the speed calculation: JLP forgot to reset the average values before calculating the speed (thread.cpp).
After making the changes, the speed became 950 Mk/s:
Code:
GPU: GPU #0 NVIDIA GeForce GTX 1660 SUPER (22x64 cores) Grid(88x128) (141.0 MB used)
SolveKeyGPU Thread GPU#0: creating kangaroos...
SolveKeyGPU Thread GPU#0: 2^20.46 kangaroos [7.6s]
[952.59 MK/s][GPU 952.59 MK/s][Count 2^35.49][Dead 0][50s (Avg 01:06:23)][3.4/10.6MB]

@kTimesG, you have written more than once that your program is several times faster than JLP's and the other clones. Can you really run a 1660 Super at 2 Gk/s?

I would like to test this version with a 3060; can you share it?
The release is available; you can try it.
Akito S. M. Hosana
Jr. Member
*
Offline Offline

Activity: 392
Merit: 8


View Profile
November 17, 2024, 12:44:18 PM
 #6653

Quote from: Etar
[...] the release is available you can try.


This is a pain in the *ss to install on Windows 11 with a 3060 - Visual Studio 2022 + CUDA 12

 make gpu=1 ccap=86 all

I've been trying for several hours without success. I only managed to compile it for the CPU.

opel323
Newbie
*
Offline Offline

Activity: 4
Merit: 0


View Profile
November 17, 2024, 01:32:26 PM
 #6654


Hello. The new LPKangaroo_OW_OT version is not working on puzzle 135? I see 2.0/4.0 MB and nothing gets added when saving when I use puzzle 135. I saw the same behavior when I tried the original from JLP.
kTimesG
Full Member
***
Offline Offline

Activity: 672
Merit: 210


View Profile
November 17, 2024, 04:54:33 PM
 #6655

You probably configured the grid incorrectly. The Nsight shows that 4 blocks are working simultaneously. That's why I have a grid of 88*128. And yes, the speed is +/- correct since it matches the amount of DP accumulated over a certain period of time.

Nah. After looking more closely at the specs of the 1660S, I see it has a higher SM clock frequency and a larger memory bus width than what I was comparing against, so that explains some things.

@kTimesG  you have written more than once that your program is several times faster than JLP and other clones. Can you really run 1660 Super at 2Gk/s?

I have no idea; I can't test on that card. I can currently squeeze out 6.2 Gk/s on an RTX 4090, but some users here claim they can get 8 Gk/s or more. I think RetiredCoder might have an even faster version. Technically it is plausible; it really depends on how well the kernel is implemented. And as I said some time ago, if someone manages to fully parallelize the inversion, we can get a doubling in speed :) That b***h is really time-consuming.

Off the grid, training pigeons to broadcast signed messages.
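Editor's note: the inversion kTimesG refers to is the modular inverse inside each batched point addition. JLP-style programs amortize it with Montgomery's batch-inversion trick — one real inversion plus about 3(n-1) multiplications — but the prefix-product chain is inherently serial, which is why fully parallelizing it on a GPU is hard. A minimal sketch over the secp256k1 field prime (illustrative, not the program's actual code):

```python
P = 2**256 - 2**32 - 977   # secp256k1 field prime

def batch_inverse(values):
    """Invert every element mod P with ONE real inversion (Montgomery's trick)."""
    n = len(values)
    prefix = [0] * n
    acc = 1
    for i, v in enumerate(values):     # serial prefix products: this is the
        acc = acc * v % P              # hard-to-parallelize dependency chain
        prefix[i] = acc
    inv_acc = pow(acc, P - 2, P)       # the single real inversion (Fermat)
    out = [0] * n
    for i in range(n - 1, 0, -1):      # peel inverses off one by one
        out[i] = inv_acc * prefix[i - 1] % P
        inv_acc = inv_acc * values[i] % P
    out[0] = inv_acc
    return out

vals = [3, 5, 0xDEADBEEF, 2**200 + 1]
invs = batch_inverse(vals)
print(all(v * iv % P == 1 for v, iv in zip(vals, invs)))   # True
```

With group size 128 (the GPU_GRP_SIZE in the logs above), 128 point additions cost one inversion instead of 128, which is where most of the kernel's speed comes from.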
k3ntINA
Newbie
*
Offline Offline

Activity: 27
Merit: 0


View Profile
November 17, 2024, 08:47:33 PM
Last edit: November 18, 2024, 11:09:20 AM by k3ntINA
 #6656

Does anyone have anything to say? Very simple math! Only I don't know what to do for the next keys. My brain is full of ideas and methods, but I can't implement them.
https://postimg.cc/bdmjP5wx
mjojo
Newbie
*
Offline Offline

Activity: 79
Merit: 0


View Profile
November 18, 2024, 10:32:06 AM
 #6657

I just got the information, but don't tell me where i got them from.

The solvers of #120 and #125 were the same persons and they were miners, they always had a mining farm which they minted dogecoins and Ethereum and other low altcoins.

They had access to huge GPU power, and most probably used Jean's Kangaroo for bruteforcing.

Most importantly; They are CURRENTLY NOW running and trying to hunt for #130 again! I hope we can be able to stop them and gain the 130's reward before them and before the end of the year. Based on calculations from the miners 3Emiwzxme7Mrj4d89uqohXNncnRM15YESs The estimated time for #130 to be hunted from the miners, should be around January 2024, February 2024, and if lucky already in December 2023!

So summary of the story: You have guys time before February 2024 to get #130's reward before the miners solve the third puzzle and get #120, #125, and #130.

Comment: this guy is correct.

As @Etar said, we need to unite to stop the miners or he will take all prizes for himself.

And what kind of uniting?
k3ntINA
Newbie
*
Offline Offline

Activity: 27
Merit: 0


View Profile
November 18, 2024, 11:32:51 AM
 #6658

You are my hero! We need more magic circles! :D
Are you kidding, hero?! If you really mean it, come and help me! Not by donating, but by telling me whether the magic circle actually helped you!
I have enhanced the magic circles by combining shapes. The things I see in the discovered keys are amazing, but I don't know how to extrapolate to the unknown keys to find the full key.
I would appreciate it if I could send them to you and you could tell me whether they really give you a clue. We would split the rewards; how we split depends on how much my magic rings helped. For example, if they helped you 20%, then my share is 20%.
kTimesG
Full Member
***
Offline Offline

Activity: 672
Merit: 210


View Profile
November 18, 2024, 12:05:00 PM
 #6659

You are my hero! We need more magic circles! :D
Are you kidding, hero?! If you really mean it, come and help me! Not by donating, but by telling me whether the magic circle actually helped you!
I have enhanced the magic circles by combining shapes. The things I see in the discovered keys are amazing, but I don't know how to extrapolate to the unknown keys to find the full key.

Let me summarize your amazing discoveries, and also help you find the full keys:

In every single puzzle the position of a leading bit "1" is hard-coded into the key. This information creates a pattern.

This explains pretty much all of your discoveries, but unfortunately this is something we already know.

Maybe subtract the range start from your decimal keys so that you actually have unbiased data to work with. Let us know how well your circles, shapes, and columns of digits work for you after that.

Off the grid, training pigeons to broadcast signed messages.
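Editor's note: kTimesG's point can be checked numerically. Every puzzle #N key is drawn from [2^(N-1), 2^N), so the top bit — and hence the leading hex digit — is forced by the range, and any "pattern" there says nothing about the key itself. Subtracting the range start leaves uniform bits. A small illustration for the 67-bit range:

```python
import random

N = 67
lo = 1 << (N - 1)   # range start: every puzzle #67 key is >= 2^66

keys = [random.randrange(lo, 1 << N) for _ in range(5)]
for k in keys:
    # raw key: leading hex digit is always 4..7 (forced top bit)
    # after subtracting the range start: uniform, no forced digits
    print(f"raw {k:017x}   unbiased {k - lo:017x}")
```

Any analysis done on the raw hex digits will "discover" the 4..7 column in every solved key, which is exactly the already-known range information and nothing more.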
tmar777
Newbie
*
Offline Offline

Activity: 33
Merit: 0


View Profile
November 18, 2024, 12:08:59 PM
 #6660

Hi guys!
Is there any central place where ALL SEARCHED ranges are recorded?
I am interested in ALL SEARCHED ranges across the whole puzzle, NOT only the unsolved ones.

If they are not in any central place, are there any log files from people who have already searched some ranges?

Thanks