Bitcoin Forum
March 20, 2026, 03:41:58 PM *
News: Latest Bitcoin Core release: 30.2 [Torrent]
 
Author Topic: Bitcoin puzzle transaction ~32 BTC prize to who solves it  (Read 374541 times)
Jorge54PT
Newbie
*
Offline Offline

Activity: 50
Merit: 0


View Profile
March 19, 2026, 03:27:04 AM
 #12821

mara is safe dude go first find the key  Grin Grin Grin Grin

It's not easy  Grin
brainless
Member
**
Offline Offline

Activity: 477
Merit: 35


View Profile
March 19, 2026, 09:07:02 AM
 #12822

Once Cyclone is updated to load a file listing ranges:
start:end
start:end
etc.
Has anyone seen BitCrack or any other CUDA app with this feature?
Please post the app name and a link.
Thanks
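The range-file feature being asked for is simple to sketch. Below is a minimal, hypothetical parser for a file of `start:end` hex lines (the format implied above; the range values are made up), the kind of validation any such app would want before dispatching work to the GPU:

```python
# Sketch of the requested feature: read a list of search ranges, one
# "start:end" hex pair per line, validating each before use.
# The example range values below are invented for illustration.

def parse_ranges(text):
    """Parse lines of the form 'start:end' (hex) into (int, int) tuples."""
    ranges = []
    for lineno, line in enumerate(text.splitlines(), 1):
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blank lines and comments
        start_s, end_s = line.split(":")
        start, end = int(start_s, 16), int(end_s, 16)
        if start >= end:
            raise ValueError(f"line {lineno}: start must be below end")
        ranges.append((start, end))
    return ranges

ranges = parse_ranges("""
400000000000000000:41ffffffffffffffff
420000000000000000:43ffffffffffffffff
""")
```

Each tuple can then be handed to a worker as an independent search interval.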

13sXkWqtivcMtNGQpskD78iqsgVy9hcHLF
anyone_future_again
Newbie
*
Offline Offline

Activity: 14
Merit: 0


View Profile
March 19, 2026, 12:32:36 PM
 #12823

Once Cyclone is updated to load a file listing ranges:
start:end
start:end
etc.
Has anyone seen BitCrack or any other CUDA app with this feature?
Please post the app name and a link.
Thanks

Nothing on GitHub. As I said, I have built the most complex vanity searcher, with all features included and new features added on request. I struggled with timing the CPU and GPUs at full speed, both with and without endomorphism. Syncing all the GPUs and CPUs was not easy, but I did it.
Just mention which features you need and I will add them to the software.
The software also scales up to 1024 CPU threads and 16 GPUs.
Extra features added: searching by multiple prefixes, by any word within a BTC address, splitting the BTC address, splitting the BTC public key... and so on.

Example of full power:
8x RTX 4090 + 192 CPU threads: 90,000 MK/s
8x RTX 5090: 72,000-80,000 MK/s

The software is not published anywhere because of the haters...
brainless
Member
**
Offline Offline

Activity: 477
Merit: 35


View Profile
March 19, 2026, 01:19:56 PM
 #12824

Once Cyclone is updated to load a file listing ranges:
start:end
start:end
etc.
Has anyone seen BitCrack or any other CUDA app with this feature?
Please post the app name and a link.
Thanks

Nothing on GitHub. As I said, I have built the most complex vanity searcher, with all features included and new features added on request. I struggled with timing the CPU and GPUs at full speed, both with and without endomorphism. Syncing all the GPUs and CPUs was not easy, but I did it.
Just mention which features you need and I will add them to the software.
The software also scales up to 1024 CPU threads and 16 GPUs.
Extra features added: searching by multiple prefixes, by any word within a BTC address, splitting the BTC address, splitting the BTC public key... and so on.

Example of full power:
8x RTX 4090 + 192 CPU threads: 90,000 MK/s
8x RTX 5090: 72,000-80,000 MK/s

The software is not published anywhere because of the haters...
Without an uploaded link to check, how could I say which features are inside?

13sXkWqtivcMtNGQpskD78iqsgVy9hcHLF
SecretAdmirere
Newbie
*
Offline Offline

Activity: 6
Merit: 1


View Profile
March 19, 2026, 04:00:31 PM
 #12825

FixedPaul's uses only one GPU at full power. For example, with one 4090 card I get 7.9 MK/s and with a 5090 I get 9.5 MK/s.
You also cannot use the CPU, and it does not double the power of the cards... for example, with 8x 4090 cards I have almost 90,000 MK/s. If you can get that power with what you find on GitHub, good luck!
Everything on GitHub, if you use the CPU, is limited to 256 threads by construction.
Also, what is there works only on RTX 3000, 4000 and 5000.
My code also works on RTX 6000 Ada and Blackwell, up to the H200, with up to 1024 CPU threads.
Plus, extra: I search not only by prefix; I search by any text I want inside or at the end of the address.

I have a few questions to ask:

1. Are those speeds you claim, with or without endomorphism?
2. Did you change or tweak JLP inversion, multiply, add, subtract, etc. functions?
3. How many registers do your kernels consume, and how many spills/loads do you have when compiling code for 8.9 and 12.0 compute capability?
4. What do you mean by "up to 1024 threads on CPU"?
5. Did you change or tweak the hashing, how the rounds are handled, processed, rotations, etc.? Sha256, Ripemd160?
anyone_future_again
Newbie
*
Offline Offline

Activity: 14
Merit: 0


View Profile
March 19, 2026, 06:00:39 PM
 #12826

I have a few questions to ask:

1. Are those speeds you claim, with or without endomorphism?
2. Did you change or tweak JLP inversion, multiply, add, subtract, etc. functions?
3. How many registers do your kernels consume, and how many spills/loads do you have when compiling code for 8.9 and 12.0 compute capability?
4. What do you mean by "up to 1024 threads on CPU"?
5. Did you change or tweak the hashing, how the rounds are handled, processed, rotations, etc.? Sha256, Ripemd160?
1. Are those speeds you claim, with or without endomorphism?
My code supports endomorphism and can use it for a speedup. In Vanity.cpp and main.cpp there are explicit switches and function calls (e.g., ModMulK1order(&lambda), ModMulK1order(&lambda2)) that apply the endomorphism. The actual speed depends on whether the endomorphism option is enabled at runtime. In my build it is not active by default, but it can be activated: you need to add the 256 length and enable the random mode plus the special flag.
2. Did you change or tweak JLP inversion, multiply, add, subtract, etc. functions?
Yes, my code implements custom versions of these functions. Files like Int.cpp, IntMod.cpp, and Point.cpp contain custom modular arithmetic, inversion, multiplication, addition, and subtraction routines, including Montgomery multiplication and specialized secp256k1 routines (e.g., ModMulK1, ModAddK1order). These are not simple wrappers around standard libraries.
3. How many registers do your kernels consume, and how many spills/loads do you have when compiling code for 8.9 and 12.0 compute capability?
The codebase does not record register usage or spills/loads directly. To find this, you must compile the CUDA code (e.g., GPU/GPUEngine.cu) with nvcc --ptxas-options=-v -arch=sm_89 and -arch=sm_120; the compiler output will show register usage and spills/loads for each kernel. My compiled version includes all architectures.
4. What do you mean by "up to 1024 threads on CPU"?
All versions on GitHub limit CPU use to 256 threads. I unlocked it and synced the timers, and it works with up to 1024 threads (depending on how many CPU cores your machine has... 384 threads, for example).
5. Did you change or tweak the hashing, how the rounds are handled, processed, rotations, etc.? Sha256, Ripemd160?
Yes, my code contains custom implementations of SHA256 and RIPEMD160 in hash/sha256.cpp and hash/ripemd160.cpp. The round functions, rotations, and processing are implemented directly, not just wrappers around system libraries. There are also SSE-optimized versions and custom macros for round operations.

I understand that those guys were the brains who built those versions, but reading the project, compiling it, and then upgrading and improving it is not so hard...
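For readers unfamiliar with the endomorphism referenced above (the ModMulK1order(&lambda) calls): on secp256k1, multiplying a scalar by lambda maps the resulting point to (beta * x mod p, y), so one EC multiplication yields a second candidate almost for free. A pure-Python sanity check of that identity, using the widely published curve constants (an educational sketch, nothing like production kernel code):

```python
# secp256k1 parameters and the endomorphism constants (standard values).
p  = 2**256 - 2**32 - 977  # field prime
n  = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141
Gx = 0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798
Gy = 0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8
LAM  = 0x5363AD4CC05C30E0A5261C028812645A122E22EA20816678DF02967C1B23BD72
BETA = 0x7AE96A2B657C07106E64479EAC3434E99CF0497512F58995C1396C28719501EE

def ec_add(P, Q):
    """Affine point addition on y^2 = x^3 + 7 over GF(p)."""
    if P is None: return Q
    if Q is None: return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return None                              # point at infinity
    if P == Q:
        m = (3 * x1 * x1) * pow(2 * y1, -1, p) % p
    else:
        m = (y2 - y1) * pow(x2 - x1, -1, p) % p
    x3 = (m * m - x1 - x2) % p
    return (x3, (m * (x1 - x3) - y1) % p)

def ec_mul(k, P):
    """Double-and-add scalar multiplication."""
    R = None
    while k:
        if k & 1:
            R = ec_add(R, P)
        P = ec_add(P, P)
        k >>= 1
    return R

# lambda and beta are cube roots of unity mod n and mod p respectively.
assert pow(LAM, 3, n) == 1 and pow(BETA, 3, p) == 1

x, y = ec_mul(LAM, (Gx, Gy))
# y is unchanged; x is the original x scaled by beta (or beta^2, depending
# on which cube-root convention a codebase picks).
assert y == Gy and x in (BETA * Gx % p, BETA * BETA * Gx % p)
```

This is exactly why an endomorphism-enabled searcher can check extra addresses per EC operation: the mapped point costs one field multiplication instead of a full scalar multiply.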
0xastraeus
Newbie
*
Offline Offline

Activity: 30
Merit: 0


View Profile
March 19, 2026, 06:36:25 PM
 #12827

I am convinced this guy is absolutely certifiable. Are you okay? Don't act like you did all that when it was already in the original JLP. All you did was change how the input file was handled and add extra GPU support...

Quit misleading people.

1. Are those speeds you claim, with or without endomorphism?
My code supports endomorphism and can use it for a speedup. In Vanity.cpp and main.cpp there are explicit switches and function calls (e.g., ModMulK1order(&lambda), ModMulK1order(&lambda2)) that apply the endomorphism. The actual speed depends on whether the endomorphism option is enabled at runtime. In my build it is not active by default, but it can be activated: you need to add the 256 length and enable the random mode plus the special flag.
2. Did you change or tweak JLP inversion, multiply, add, subtract, etc. functions?
Yes, my code implements custom versions of these functions. Files like Int.cpp, IntMod.cpp, and Point.cpp contain custom modular arithmetic, inversion, multiplication, addition, and subtraction routines, including Montgomery multiplication and specialized secp256k1 routines (e.g., ModMulK1, ModAddK1order). These are not simple wrappers around standard libraries.
3. How many registers do your kernels consume, and how many spills/loads do you have when compiling code for 8.9 and 12.0 compute capability?
The codebase does not record register usage or spills/loads directly. To find this, you must compile the CUDA code (e.g., GPU/GPUEngine.cu) with nvcc --ptxas-options=-v -arch=sm_89 and -arch=sm_120; the compiler output will show register usage and spills/loads for each kernel. My compiled version includes all architectures.
4. What do you mean by "up to 1024 threads on CPU"?
All versions on GitHub limit CPU use to 256 threads. I unlocked it and synced the timers, and it works with up to 1024 threads (depending on how many CPU cores your machine has... 384 threads, for example).
5. Did you change or tweak the hashing, how the rounds are handled, processed, rotations, etc.? Sha256, Ripemd160?
Yes, my code contains custom implementations of SHA256 and RIPEMD160 in hash/sha256.cpp and hash/ripemd160.cpp. The round functions, rotations, and processing are implemented directly, not just wrappers around system libraries. There are also SSE-optimized versions and custom macros for round operations.

I understand that those guys were the brains who built those versions, but reading the project, compiling it, and then upgrading and improving it is not so hard...
GinnyBanzz
Jr. Member
*
Offline Offline

Activity: 184
Merit: 6


View Profile
March 19, 2026, 07:59:28 PM
 #12828

Why are people so obsessed with vanity searches? Unless I'm missing something it makes absolutely 0 difference to your chance of finding the key.
0xastraeus
Newbie
*
Offline Offline

Activity: 30
Merit: 0


View Profile
March 19, 2026, 08:09:01 PM
 #12829

Why are people so obsessed with vanity searches? Unless I'm missing something it makes absolutely 0 difference to your chance of finding the key.

Yeah, it doesn't. But there are two groups of people who care:

1. Those who are rooted in reality and just find it fun to collect addresses similar to the target.
2. Those who are convinced that prefixes indicate how far you are from the target, or something like that.
SecretAdmirere
Newbie
*
Offline Offline

Activity: 6
Merit: 1


View Profile
March 19, 2026, 08:58:06 PM
Last edit: March 19, 2026, 09:35:14 PM by SecretAdmirere
 #12830

Yeah it doesn't. But there are 2 groups of people who care.

1. Those who are rooted in reality and just find it fun to collect addresses similar to the target.
2. Those who are convinced that prefixes will indicate how far you are from the target or something like that.

Maybe there is a psychological element to finding something that resembles what you are searching for, when something pops up on the screen and looks like the targeted address. But the prefix search doesn't help with anything, period. The hashing involved is there to scramble any resemblance to the original input, and a small change in input equals a big change in output.

Who knows, maybe one day, when ChatGPT actually becomes smart or evolves into Skynet, it might show us that SHA256+RIPEMD160 are flawed and have a relation between input and output that can be reversed.

One quick note I forgot to mention: a prefix check does help with the speed of comparing a computed hash160 to the target hash160, by quickly rejecting candidates before the full check. So it turns out the prefix does help with something.
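Both points (the avalanche effect, and the prefix as a cheap reject) can be illustrated in a few lines. One caveat: real hash160 is RIPEMD160(SHA256(pubkey)), but RIPEMD160 is missing from some OpenSSL 3 builds of Python's hashlib, so this sketch substitutes a truncated double-SHA256; the behaviour being illustrated is identical.

```python
import hashlib

# Stand-in for hash160 (real one is RIPEMD160(SHA256(pubkey)); see note
# above). Produces a 20-byte digest like the real thing.
def fake_hash160(data: bytes) -> bytes:
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()[:20]

a = fake_hash160(b"\x02" + b"\x11" * 32)            # made-up 33-byte "pubkey"
b = fake_hash160(b"\x02" + b"\x11" * 31 + b"\x10")  # same input, one bit flipped

# Avalanche: roughly half of the 160 output bits differ after a 1-bit change.
diff_bits = sum(bin(x ^ y).count("1") for x, y in zip(a, b))

# Prefix quick-reject: compare 4 bytes first; only a prefix hit (chance
# ~2^-32 for a random non-match) triggers the full 20-byte comparison.
def matches(candidate: bytes, target: bytes) -> bool:
    if candidate[:4] != target[:4]:
        return False
    return candidate == target
```

So the prefix carries no information about the key, but it does make the inner comparison loop cheap.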

I am convinced this guy is absolutely certifiable. Are you okay? Don't act like you did all that when it was already in the original JLP. All you did was change how the input file was handled and add extra GPU support...

Quit misleading people.

I was genuinely interested in @anyone_future_again's code, and I still am; not to "steal" it or anything, but to see how it compares against the code I'm working on.

I'm getting a stable 2,040 Mkey/s (hash160 compares per second, to clarify the meaning of Mkey/s) on a 3070 Ti GPU. Of all the publicly available programs, mine is faster; the closest to me is FixedPaul's BitCrack at 1,940 Mkey/s on the same GPU (my 3070 Ti). That is with minimal "overhead": CPU-GPU transfers, driver work, kernel launches, scalar preparation, etc. Every new kernel launch gives the GPU a fresh set of scalars to process, because it's a random-like search, so I can't reuse the last processed points of the previous launch for the next one.

It compiles to 128 registers with 0 spills/loads on both the 8.6 and 8.9 compute capabilities; on 12.0 it compiles to 128 registers and around 250 spills/loads (I forget the exact number). Profiling shows it 85% compute bound and 9% memory bound, with achieved occupancy a little higher than 33% and little to no thread divergence. Occupancy hurts at 128 registers, but lowering register usage and increasing spills hurts more than the lower occupancy does, so I don't know how to raise occupancy without dropping registers that all the functions need. Everything is fully inlined: the compiler sees everything, no function calls, no range boundaries. One more note: moving the precomputed table from cmem to gmem didn't affect kernel execution speed at all, even though that table is used inside the hot loop, but it did let me store more points than the 512 I could fit in cmem.

So I'm genuinely interested in whether the code can really process 7.9B+ keys/s on a 4090 or 9.5B+ on a 5090. And of course, when someone claims those numbers, I want to know where I went wrong in my own code and what wall I kept hitting my head against.

I don't know, I don't know what to do..
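The "128 registers, ~33% occupancy" figure above is consistent with published Ampere (sm_86) limits: a 65,536-register file and up to 1,536 resident threads per SM. A rough occupancy-limit calculation under those assumptions (the 256-registers-per-warp allocation granularity is also an assumption, typical for this generation):

```python
# Register-limited occupancy estimate for an sm_86 SM (RTX 3070 Ti class).
# Assumed hardware limits; check the CUDA Occupancy Calculator for your GPU.
REGS_PER_SM = 65536        # 32-bit registers per SM on sm_86
MAX_THREADS_PER_SM = 1536  # max resident threads per SM on sm_86

def occupancy_limit(regs_per_thread, alloc_granularity=256):
    """Fraction of max resident threads allowed by register pressure alone."""
    regs_per_warp = regs_per_thread * 32            # 32 threads per warp
    # registers are allocated per warp, rounded up to the granularity
    regs_per_warp = -(-regs_per_warp // alloc_granularity) * alloc_granularity
    warps = REGS_PER_SM // regs_per_warp
    threads = min(warps * 32, MAX_THREADS_PER_SM)
    return threads / MAX_THREADS_PER_SM

occ = occupancy_limit(128)  # 128 regs/thread -> 512 threads -> ~0.33
```

Which also shows the trade-off mentioned: halving registers to 64 would double resident threads, but only if the kernel survives the extra spills.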
kTimesG
Full Member
***
Offline Offline

Activity: 784
Merit: 237


View Profile
March 19, 2026, 09:21:17 PM
 #12831

I understand that those guys were the brain and built those versions, but to upgrade and improve is not so hard to read the project and compile it...

Bro, it is clear from an airplane that you have zero clue about coding, or about what this puzzle actually implies. Otherwise you would not reply, basically WORD BY WORD, as if you were an AI agent that looked over JLP's project and gave it some dumb review, hallucinating all kinds of fantasies about speedups and complex useless "upgrades", while being unable to answer questions on the actual point. The guy asked how many registers your secret dupe kernel has, but it's clear you don't even know what those are. Perhaps the AI wasn't instructed to do a compile and read the build logs / stdout Smiley

Note: I don't give a crap about your super kernel at all (as in, 0% curiosity), so it's OK to call me a hater, if that's everything you are able to comprehend about the subject.

Off the grid, training pigeons to broadcast signed messages.
anyone_future_again2
Newbie
*
Offline Offline

Activity: 2
Merit: 0


View Profile
March 19, 2026, 10:11:52 PM
 #12832

"BRO" you have no ideea...for you everything that is online is done with AI...grow up...
Did you tried to code? Do you have any ideea how to code and how to iterate some sequence?

Why to give a solution if the owner of the original code didn't wanted to make public to everyone?
Sorry but some informations are not destined to all people. If you are smart enough as you speak built your own programs.

Now to give a hint to the one who is interested: rebuilt all code with 256 instead of 32 and in some cases with 128. So uint32 to uint256.
big-int math can panic or allocate; handle errors cleanly, cap batch sizes, and ensure progress telemetry so you don’t “run forever” without measurable yield.

For details ask your friend GPT because i am not capable to answer more as this smart guy said.
jonematt
Newbie
*
Offline Offline

Activity: 11
Merit: 3


View Profile
March 19, 2026, 10:54:24 PM
 #12833

  1PWo3JeB9jdvp56gzG8navJjUMFQxxb997   
  1PWo3JeB9jgLYkm8cEj6yywPn3kk41BFaY 
  1PWo3JeB9jRLz2NjTHsZ8uTBnqcjnu66Ak 
  1PWo3JeB9jSVntJs1FpYM2iTNAoADNv2YD 
  1PWo3JeB9jeFuotNeYDrWkEhqyZ359abB2 
  1PWo3JeB9jgFbdXaZG1h8ng9hpYzg3CAPw 
  1PWo3JeB9jT926gmLys26TBwr6pwStgwLb
Jorge54PT
Newbie
*
Offline Offline

Activity: 50
Merit: 0


View Profile
Today at 01:01:49 AM
 #12834

I'll share a tip I've been chasing myself for over a year: it's not about raw key/s speed, but about having the best and most efficient algorithm possible (which doesn't exist yet), along with monstrous luck.  Grin  That's why I'll add this: no GitHub program will find 71, but it will be great support to work with until you achieve something amazing  Grin
ldabasim
Newbie
*
Offline Offline

Activity: 5
Merit: 0


View Profile
Today at 05:34:01 AM
 #12835

It seems down to pure luck. I ran a few dozen 72-bit kangaroo tests that should average about 10 minutes, and many times a run took about an hour, while not once did one finish in under 2 minutes. If you're unlucky enough to stumble into such a 5-6x run, you can easily spend millions of dollars on GPUs with zero result. Same with a consumer machine: you can optimize down to exactly sqrt(n), and your kangaroo run could still be pre-destined to solve it in the 80-thousandth year. Unless someone builds an amazing secp256k1 ASIC, or an analog computer with 160-bit accuracy, it comes down to many people with few resources trying their luck and someone eventually stumbling on the key. So maybe the only way is to ignore efficiency, leave a low-power machine running, and see if you're the lucky one. That's why someone here jokingly said a while ago that he's sure any remaining puzzles will be solved on a CPU, not a GPU.
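The run-to-run spread described above is inherent to collision-based searches, not bad luck with one particular tool. A toy Monte Carlo (M is a made-up space size, nothing puzzle-specific) shows how widely individual collision times scatter around the sqrt-scale average:

```python
import random
import statistics

# Draw uniformly from a space until a repeat occurs; the draw count is the
# "run time" of one collision search (the mechanism behind kangaroo-style
# methods, whose expected cost also scales like sqrt of the space).
def draws_until_collision(space, rng):
    seen = set()
    while True:
        v = rng.randrange(space)
        if v in seen:
            return len(seen) + 1
        seen.add(v)

rng = random.Random(1)   # fixed seed for reproducibility
M = 1_000_000            # toy space size
runs = [draws_until_collision(M, rng) for _ in range(300)]

mean = statistics.fmean(runs)   # expected ~ sqrt(pi*M/2) ~ 1253
spread = max(runs) / min(runs)  # individual runs scatter wildly around it
```

The mean lands near the textbook sqrt-scale value, yet the fastest and slowest runs routinely differ by an order of magnitude, which is exactly the 5-6x (or worse) variance complained about above.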
GinnyBanzz
Jr. Member
*
Offline Offline

Activity: 184
Merit: 6


View Profile
Today at 10:17:08 AM
 #12836

It seems down to pure luck. I ran a few dozen 72-bit kangaroo tests that should average about 10 minutes, and many times a run took about an hour, while not once did one finish in under 2 minutes. If you're unlucky enough to stumble into such a 5-6x run, you can easily spend millions of dollars on GPUs with zero result. Same with a consumer machine: you can optimize down to exactly sqrt(n), and your kangaroo run could still be pre-destined to solve it in the 80-thousandth year. Unless someone builds an amazing secp256k1 ASIC, or an analog computer with 160-bit accuracy, it comes down to many people with few resources trying their luck and someone eventually stumbling on the key. So maybe the only way is to ignore efficiency, leave a low-power machine running, and see if you're the lucky one. That's why someone here jokingly said a while ago that he's sure any remaining puzzles will be solved on a CPU, not a GPU.

This is essentially what I'm doing; I have access to essentially free compute power. I've done hundreds of trillions of key attempts and, naturally, found absolutely nothing so far lol.
0xastraeus
Newbie
*
Offline Offline

Activity: 30
Merit: 0


View Profile
Today at 02:28:19 PM
 #12837

So I'm not going to begin to describe how inherently wrong it is to change all uints to 256-bit, especially when dealing with SHA and RIPEMD. You're just such a god-tier programmer that you don't need that information, right?

You do you... Let's just end this discussion and move on.

"BRO" you have no ideea...for you everything that is online is done with AI...grow up...
Did you tried to code? Do you have any ideea how to code and how to iterate some sequence?

Why to give a solution if the owner of the original code didn't wanted to make public to everyone?
Sorry but some informations are not destined to all people. If you are smart enough as you speak built your own programs.

Now to give a hint to the one who is interested: rebuilt all code with 256 instead of 32 and in some cases with 128. So uint32 to uint256.
big-int math can panic or allocate; handle errors cleanly, cap batch sizes, and ensure progress telemetry so you don’t “run forever” without measurable yield.

For details ask your friend GPT because i am not capable to answer more as this smart guy said.
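On the SHA/RIPEMD point: SHA-256 is specified over 32-bit words (FIPS 180-4), so its rotations are width-dependent; silently widening the word type changes the function into something that is no longer SHA-256. A minimal demonstration:

```python
# Rotate-right at a given word width. SHA-256's ROTR is defined at width=32;
# the same operation at another width yields a different value, so "just
# make everything uint256" breaks the hash rather than speeding it up.
def rotr(x, r, width=32):
    mask = (1 << width) - 1
    return ((x >> r) | (x << (width - r))) & mask

x = 0x6A09E667  # SHA-256's initial hash word h0 (fractional part of sqrt(2))

# 32-bit rotation behaves as the spec defines it...
assert rotr(x, 2) == ((x >> 2) | (x << 30)) & 0xFFFFFFFF
# ...but the "same" rotation at 256-bit width gives a different result.
assert rotr(x, 2, width=256) != rotr(x, 2)
```

The widened value differs because the bits shifted out the bottom wrap around to position 254 instead of position 30, so every round of a widened "SHA-256" diverges from the real digest.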