Author Topic: Bitcoin puzzle transaction ~32 BTC prize to who solves it  (Read 358609 times)
Torin Keepler
Newbie
*
Offline Offline

Activity: 22
Merit: 0


View Profile
December 12, 2025, 07:41:14 PM
Last edit: December 13, 2025, 11:28:11 AM by Mr. Big
 #12221

..for 135.. If we increase the distance between the jumps, could that make it possible to obtain a collision that is “close” to the target key?
We can get a collision that doesn’t match the key we’re looking for, but it can be near it — and maybe that could allow us to “narrow down” the range in which the actual key might be located?

No, from the coordinates of a point on the secp256k1 elliptic curve,
it is not possible to determine whether that point is “close” to a target point or far away in any meaningful sense.
Additionally, the very notion of a collision implies that two points coincide with each other,
not that either of them must be equal to the target public key.
The collision only indicates equality between those two derived points.
If the offset (difference) between the corresponding scalars of these two points is known,
that offset can be used to compute the private key associated with the target public key.
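
To make that last sentence concrete, here is a minimal scalar-only sketch (toy made-up values, no elliptic-curve code): a tame walk sits at the point (b + d_tame)*G and a wild walk at (k + d_wild)*G, and if those two points collide, the scalars agree mod n, so the known distances recover k.

Code:
n = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141  # secp256k1 group order

k_secret = 0x5A7D3   # the "unknown" private key (toy value)
b        = 0x50000   # known starting scalar of the tame walk (toy value)
d_wild   = 0x01234   # wild distance travelled at the moment of collision (toy value)
d_tame   = (k_secret + d_wild - b) % n   # tame distance that lands on the same point

recovered = (b + d_tame - d_wild) % n    # the offset between the walks gives the key
assert recovered == k_secret
print(hex(recovered))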



The idea of the Kangaroo algorithm is that once two points collide at least once,
they will collide again at every subsequent iteration.
This allows us to perform an expensive collision check not on every iteration,
but only in rare, specific cases.
Another major advantage of this algorithm is that it does not matter which particular points collide.
The more such points there are, the higher the probability of a collision.
At this stage, the birthday paradox comes into play. It states that in a group of 23 people,
there is already about a 50% probability that at least two people share the same birthday,
which is counterintuitive.
The same principle applies to the Kangaroo algorithm: the larger the group,
the higher the probability that at least one pair of points will have the same (x)-coordinate - that is, a collision will occur.
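
For what it's worth, the 23-people figure can be checked in a few lines, under the standard assumption of 365 equally likely birthdays:

Code:
# Probability that at least two of n people share a birthday.
def shared_birthday_prob(n, days=365):
    p_distinct = 1.0
    for i in range(n):
        p_distinct *= (days - i) / days
    return 1.0 - p_distinct

print(round(shared_birthday_prob(23), 4))   # ~0.5073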
JackMazzoni
Jr. Member
*
Offline Offline

Activity: 165
Merit: 6


View Profile
December 12, 2025, 08:59:02 PM
 #12222

The idea of the Kangaroo algorithm is that once two points collide at least once,
they will collide again at every subsequent iteration.
This allows us to perform an expensive collision check not on every iteration,
but only in rare, specific cases.
Another major advantage of this algorithm is that it does not matter which particular points collide.
The more such points there are, the higher the probability of a collision.
At this stage, the birthday paradox comes into play. It states that in a group of 23 people,
there is already about a 50% probability that at least two people share the same birthday,
which is counterintuitive.
The same principle applies to the Kangaroo algorithm: the larger the group,
the higher the probability that at least one pair of points will have the same (x)-coordinate - that is, a collision will occur.


If it collides once does we already know the location?

Need Wallet Recovery? PM ME. 100% SAFE
kTimesG
Full Member
***
Offline Offline

Activity: 700
Merit: 220


View Profile
December 12, 2025, 09:02:47 PM
 #12223

The more such points there are, the higher the probability of a collision.
At this stage, the birthday paradox comes into play. It states that in a group of 23 people,
there is already about a 50% probability that at least two people share the same birthday,
which is counterintuitive.
The same principle applies to the Kangaroo algorithm: the larger the group,
the higher the probability that at least one pair of points will have the same (x)-coordinate - that is, a collision will occur.

Do you have a fetish to propagate stupidity that AI spit out? I'll try one last time to make you understand. If not you, then at least so that others don't actually believe the nonsense above.

The points that Kangaroo traverses are not sampled from the entire interval: they start off from very well-established positions (if they don't, it's not a Kangaroo algorithm), they jump off into a single direction (if they don't, it's not a Kangaroo algorithm), and they easily go out of the range itself.

For Tames: they definitely go out of the interval, because you know where they go, and by definition they need to continue walking for at least up to 1.5 N (this applies no matter if you have a single Tame walk or a billion parallel Tame walks).

For Wilds it's even worse: we can't know at all if and when one went out of the interval until we find its key.

This is the ELI5 basic explanation of why Kangaroo has nothing to do with the birthday paradox. The only thing they have in common is that they're both probability-based, but nothing more. However, the probability of Kangaroo is based on a totally different set of parameters than the Birthday Paradox, that is:

- jumps start off from exact positions (this averages the expected delta)
- the jump table is chosen in such a way as to fit the exact average jump size (if that is not how it is created, it's not a Kangaroo algorithm); a toy sketch of both properties follows below.
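
A toy sketch of those two properties (illustration only: it runs in a multiplicative group mod a 64-bit prime rather than on secp256k1, the base, interval and jump-table sizing are made-up demo choices, and it stores every tame point instead of only distinguished ones):

Code:
import math
import random

p = 18446744073709551557          # 2^64 - 59, a prime (demo modulus, not secp256k1)
g = 3                             # demo base
A = 1 << 30
B = A + (1 << 20)                 # search interval [A, B] for the exponent
N = B - A

x_secret = random.randrange(A, B) # exponent we pretend not to know
h = pow(g, x_secret, p)           # public target: h = g^x

k = 12                            # jump table size; mean jump is (2^k - 1) / k
jumps = [1 << i for i in range(k)]

def step(pos, dist):
    j = jumps[pos % k]            # jump chosen from the current group element only
    return (pos * pow(g, j, p)) % p, dist + j

# Tame walk: starts at the known top of the interval and records visited points.
visited = {}
pos, dist = pow(g, B, p), 0
for _ in range(16 * math.isqrt(N)):
    visited[pos] = dist
    pos, dist = step(pos, dist)

# Wild walk: starts at the target and walks until it lands on a tame point.
pos, dist = h, 0
for _ in range(16 * math.isqrt(N)):
    if pos in visited:            # collision: g^(B + d_tame) == g^(x + d_wild)
        x = B + visited[pos] - dist
        print("found:", x, "matches secret:", x == x_secret)
        break
    pos, dist = step(pos, dist)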

Off the grid, training pigeons to broadcast signed messages.
Torin Keepler
Newbie
*
Offline Offline

Activity: 22
Merit: 0


View Profile
December 12, 2025, 10:14:18 PM
Last edit: December 12, 2025, 10:31:02 PM by Torin Keepler
 #12224

The more such points there are, the higher the probability of a collision.
At this stage, the birthday paradox comes into play. It states that in a group of 23 people,
there is already about a 50% probability that at least two people share the same birthday,
which is counterintuitive.
The same principle applies to the Kangaroo algorithm: the larger the group,
the higher the probability that at least one pair of points will have the same (x)-coordinate - that is, a collision will occur.

Do you have a fetish to propagate stupidity that AI spit out? I'll try one last time to make you understand. If not you, then at least so that others don't actually believe the nonsense above.

The points that Kangaroo traverses are not sampled from the entire interval: they start off from very well-established positions (if they don't, it's not a Kangaroo algorithm), they jump off into a single direction (if they don't, it's not a Kangaroo algorithm), and they easily go out of the range itself.

For Tames: they definitely go out of the interval, because you know where they go, and by definition they need to continue walking for at least up to 1.5 N (this applies no matter if you have a single Tame walk or a billion parallel Tame walks).

For Wilds it's even worse: we can't know at all if and when one went out of the interval until we find its key.

This is the ELI5 basic explanation of why Kangaroo has nothing to do with the birthday paradox. The only thing they have in common is that they're both probability-based, but nothing more. However, the probability of Kangaroo is based on a totally different set of parameters than the Birthday Paradox, that is:

- jumps start off from exact positions (this averages the expected delta)
- the jump table is chosen in such a way as to fit the exact average jump size (if that is not how it is created, it's not a Kangaroo algorithm).


The Dunning–Kruger effect states that “low competence deprives a person of the ability to adequately
assess both their own knowledge and the knowledge of others.” I have also noticed that you
communicate with many people here in an arrogant and rude manner; therefore,
I conclude that your responses are more emotional in nature rather than logical and consistently reasoned.
As for the connection between the Kangaroo algorithm and the birthday paradox, you are attempting to refute not only my words,
but also the scientific paper to which I previously sent you a link.

Therefore, you are arguing not with me, but with a professor and head of the Department of Mathematical Sciences,
Ravi Montenegro (University of Massachusetts Lowell), and Prasad Tetali (Georgia Institute of Technology).


The connection between the Kangaroo algorithm and the birthday paradox is as follows.
The efficiency analysis of the algorithm is based on the probability that two independent random sequences intersect in a finite space.
This type of reasoning about intersections is conceptually close to the birthday paradox,
where a large number of possible pairs leads to a surprisingly high probability of a coincidence.
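
The estimate that this style of analysis rests on is the standard two-set collision heuristic, sketched below under a uniform-sampling assumption; whether Kangaroo walks actually satisfy that assumption is exactly the point in dispute here.

Code:
# With t points from one walk and w from another, scattered roughly uniformly
# over a space of size N, the chance the two sets intersect is about
# 1 - exp(-t*w / N). This is the "birthday-type" estimate, nothing more.
import math

def intersect_prob(t, w, N):
    return 1.0 - math.exp(-t * w / N)

N = 2 ** 40                                             # toy space size
print(round(intersect_prob(2 ** 20, 2 ** 20, N), 3))    # ~0.632 when t*w is about N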
Niekko
Newbie
*
Offline Offline

Activity: 23
Merit: 3


View Profile
December 12, 2025, 10:42:02 PM
 #12225

The Dunning–Kruger effect states that “low competence deprives a person of the ability to adequately
assess both their own knowledge and the knowledge of others.” I have also noticed that you
communicate with many people here in an arrogant and rude manner; therefore,
I conclude that your responses are more emotional in nature rather than logical and consistently reasoned.
As for the connection between the Kangaroo algorithm and the birthday paradox, you are attempting to refute not only my words,
but also the scientific paper to which I previously sent you a link.

Therefore, you are arguing not with me, but with a professor and head of the Department of Mathematical Sciences,
Ravi Montenegro (University of Massachusetts Lowell), and Prasad Tetali (Georgia Institute of Technology).


The connection between the Kangaroo algorithm and the birthday paradox is as follows.
The efficiency analysis of the algorithm is based on the probability that two independent random sequences intersect in a finite space.
This type of reasoning about intersections is conceptually close to the birthday paradox,
where a large number of possible pairs leads to a surprisingly high probability of a coincidence.



oh god, again. No !

Repeating this over and over does not make it true. This is not a matter of interpretation.

Pollard’s Kangaroo does not use the Birthday Paradox. Kangaroo is an interval-based method with two guided walks and known offsets. The Birthday Paradox is about random collisions.

At this point, please stop inventing explanations and confusing speculation with facts.


kTimesG
Full Member
***
Offline Offline

Activity: 700
Merit: 220


View Profile
December 12, 2025, 10:47:36 PM
 #12226

Therefore, you are arguing not with me, but with a professor and head of the Department of Mathematical Sciences,
Ravi Montenegro (University of Massachusetts Lowell), and Prasad Tetali (Georgia Institute of Technology).

The connection between the Kangaroo algorithm and the birthday paradox is as follows.
The efficiency analysis of the algorithm is based on the probability that two independent random sequences intersect in a finite space.
This type of reasoning about intersections is conceptually close to the birthday paradox,
where a large number of possible pairs leads to a surprisingly high probability of a coincidence.

Yeah, except Kangaroo isn't about independent random sequences inside a finite range, and they don't intersect because of the BP, because neither the space, nor the events, nor the sequence of events that occur are inside a BP paradigm. So you're either abusing the naming or, more likely, you still have no idea WTF you're talking about...

If you're a math teacher but can't see this obvious difference, well, I'm D.J. Bernstein's mentor.

Off the grid, training pigeons to broadcast signed messages.
whistle307194
Copper Member
Newbie
*
Offline Offline

Activity: 19
Merit: 0


View Profile
December 12, 2025, 11:16:11 PM
 #12227

...It should take only a few tenths of a second even with powerful GPUs and weak CPUs.

That would be fantastic to see it work like that. I guess You're not willing to share Your version?

You also say that "each thread analyzes a subrange and must have a starting point", and I agree. That is exactly what I tried: I set my desired target address far from the starting point, knowing which starting point has to be used, so after N additions, let's say 30 seconds of work, it must hit my target, but it misses!

My idea is quite simple, but putting it to work seems impossible.

If each single thread is responsible for keeping those starting points, then why is it not executing this properly?

For instance: say I initially have '125 000 001' starting points in total, created from the range start counting upward by +1. If I use any of these starting points as a target address (so I simply add the decimal index of the starting point to the decimal private key of the range start), the program hits it, which is quite expectable.

Let me "visualize": I know that I have '125 000 001' starting points, i.e. the integer values from 1 to 125 000 001 in decimal. I chose a target address that requires exactly X forward stride additions, which will take about 30 seconds of work. From another Python calculator I use, I know that N additions from the desired starting point, which should be covered because it lies between '1' and '125 000 001', plus my desired stride, HAS TO hit the target. Guess what: the program does not hit it, it just keeps going until the end of the range.

Yes, I apply my stride in hex format. Choosing as the target address a value taken from a starting point applied to the range start will always hit, but anything further ahead, after N additions, doesn't work.
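
(As a sanity check on that expectation: with a fixed stride, a scan only ever visits keys on the stride grid; the start and stride values below are made up, not the ones from this setup.)

Code:
# A scan with stride s from key `start` visits start, start + s, start + 2s, ...
# so a target key is hit only if (target - start) is an exact multiple of s,
# and then it is hit at step (target - start) // s.
start  = 0x200000000000000000     # hypothetical starting key
s      = 0x100                    # hypothetical stride
target = start + 12345 * s        # a target that lies on the stride grid

offset = target - start
if offset % s == 0:
    print("hit at step", offset // s)   # -> 12345
else:
    print("never hit: target is not on the stride grid from this start")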

So, if I understand correctly, it should create starting points sequentially this way:
200000000000000000
200000000000000001
200000000000000002
...

For a range [A, B] the fastest way to compute it entirely is starting from the middle of it and adding all the constant points (half of the range size), using the shared inverse to get each (left & right) points, and finally moving off to the next middle point of a new range (for example, but not necessarily, the immediate next range). Rinse and repeat as many times as needed.


I do agree, starting from a not completely flat initial range private key decreases the total amount of work needed, very smart. However, what if the actual private key is lower than the desired start? ;)

I have the feeling again that I may have something that does not yet exist on this forum, but let's see who will be first, us or the bots.
kTimesG
Full Member
***
Offline Offline

Activity: 700
Merit: 220


View Profile
December 12, 2025, 11:54:28 PM
 #12228

I do agree, starting from a not completely flat initial range private key decreases the total amount of work needed, very smart. However, what if the actual private key is lower than the desired start? ;)

[A, B] is the subrange you scan in one-shot (program "arguments"), not a full-blown giant range.

If key < A then it's not in [A, B], so it won't be found; choose a new [A, B], maybe key > B :) Starting points are scattered in [A, B] based on your GPU grid size.

If you feel brave you can always go into "rabbit hole mode" and pick 16384 different, smaller [A, B] ranges (one for each thread) and scan those, instead of the larger range. The speed will be identical, because the chips don't know or care whether they cooperate on a contiguous larger range or not, or whether they all compute the exact same keys. But managing overall progress at such granularity might turn into a nightmare.
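
A scalar-level sketch of that layout (no GPU or EC code; the range below is the 71-bit puzzle interval and 16384 threads is just the figure from the example above):

Code:
# Split [A, B] into one slice per thread and start each thread at the middle of
# its slice, so it can walk symmetrically (mid - i, mid + i) and cover the slice.
A = 0x400000000000000000          # 2^70
B = 0x7FFFFFFFFFFFFFFFFF          # 2^71 - 1
threads = 16384
width = (B - A + 1) // threads    # keys per thread (remainder ignored here)

def thread_plan(t):
    lo = A + t * width
    mid = lo + width // 2
    hi = lo + width - 1
    return lo, mid, hi            # thread t scans mid +/- i for i in range(width // 2)

print([hex(v) for v in thread_plan(0)])
print([hex(v) for v in thread_plan(threads - 1)])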

Off the grid, training pigeons to broadcast signed messages.
fixedpaul
Member
**
Offline Offline

Activity: 83
Merit: 26


View Profile WWW
December 13, 2025, 01:37:11 PM
 #12229


That would be fantastic to see it work like that. I guess You're not willing to share Your version?


You can find the BitCrack version on my GitHub. As for the rest, I’m honestly having a bit of trouble following you
whistle307194
Copper Member
Newbie
*
Offline Offline

Activity: 19
Merit: 0


View Profile
December 13, 2025, 04:05:10 PM
 #12230


That would be fantastic to see it work like that. I guess You're not willing to share Your version?


You can find the BitCrack version on my GitHub. As for the rest, I’m honestly having a bit of trouble following you

Oh, that's great. I have already tested it, and yeah, the initialization takes microseconds and the process then starts with even higher speed and lower GPU VRAM usage compared to BitCrack, nice to see. However, I cannot find a stride option anywhere... is it possible that You could add one by any chance?
fixedpaul
Member
**
Offline Offline

Activity: 83
Merit: 26


View Profile WWW
December 13, 2025, 06:18:16 PM
 #12231


That would be fantastic to see it work like that. I guess You're not willing to share Your version?


You can find the BitCrack version on my GitHub. As for the rest, I’m honestly having a bit of trouble following you

Oh, that's great. I have already tested it, and yeah, the initialization takes microseconds and the process then starts with even higher speed and lower GPU VRAM usage compared to BitCrack, nice to see. However, I cannot find a stride option anywhere... is it possible that You could add one by any chance?

No, unfortunately I don't have much time anymore to update the repository. But you can easily do a kind of stride by changing the constant points in GPUgroup.h. You just need to ecmultiply them by the number you want, and maybe shift the starting points if you need to.
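
At the scalar level, that modification amounts to the following (conceptual sketch only: it does not touch GPUgroup.h or any actual point code, and the stride and start values are made up):

Code:
# If the constant points correspond to G, 2G, 3G, ..., then multiplying each of
# them by a stride s makes consecutive candidates differ by s instead of 1:
# the scan start, start+1, start+2, ... becomes start, start+s, start+2s, ...
s     = 0x100                             # hypothetical stride
start = 0x200000000000000000              # hypothetical starting key
plain   = [start + i for i in range(4)]
strided = [start + i * s for i in range(4)]
print([hex(x) for x in plain])
print([hex(x) for x in strided])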
penguinEV
Newbie
*
Offline Offline

Activity: 3
Merit: 0


View Profile
December 13, 2025, 06:41:12 PM
 #12232


That would be fantastic to see it work like that. I guess You're not willing to share Your version?


You can find the BitCrack version on my GitHub. As for the rest, I’m honestly having a bit of trouble following you


can you post your github here for us
Cricktor
Legendary
*
Offline Offline

Activity: 1344
Merit: 3341



View Profile
December 13, 2025, 08:16:06 PM
Last edit: December 13, 2025, 08:26:42 PM by Cricktor
 #12233

can you post your github here for us
Are you too stupid to do a very basic search on your own? Gosh, learn to use https://bitlist.co/search ...
You could even have clicked on fixedpaul's name, which opens his account profile page, and clicked on the link that lists all posts by him. It would've taken you minimal effort to find a pretty recent post by him where he mentions his Github account. Sheesh, what kind of toddlers are posting here?

As you asked for spoon-feeding: https://github.com/FixedPaul/
Find someone else to change your diapers.

k2laci
Member
**
Offline Offline

Activity: 183
Merit: 10


View Profile
December 14, 2025, 12:05:48 PM
 #12234

Please help me !

I tested two BTC puzzle key-finding alternatives:

1. Bitcrackrandomiser (BTC Puzzle official program) (40 MKeys/sec)
2. Honestpool open source program.  (115 Mkeys/sec)

Tested with the same graphics card. (GTX-1050Ti 4GB)

https://ibb.co/ZzhbtC5D

realnewuser
Newbie
*
Offline Offline

Activity: 11
Merit: 0


View Profile
December 14, 2025, 12:39:04 PM
 #12235

Please help me !

I tested two BTC puzzle key-finding alternatives:

1. Bitcrackrandomiser (BTC Puzzle official program) (40 MKeys/sec)
2. Honestpool open source program.  (115 Mkeys/sec)

Tested with the same graphics card. (GTX-1050Ti 4GB)

https://ibb.co/ZzhbtC5D


It's simple: Honestpool uses a modification of VanitySearch by FixedPaul, which is several times faster than Bitcrack. In any case, both Honestpool and BTCpuzzle have APIs, so you can use any software you like for key searching
zahid888
Member
**
Offline Offline

Activity: 343
Merit: 24

the right steps towards the goal


View Profile
December 15, 2025, 05:22:26 AM
 #12236

can you post your github here for us
Find someone else to change your diapers.

LoL... Lazy puzzle solver roasted brutally :D


1BGvwggxfCaHGykKrVXX7fk8GYaLQpeixA
Kermits Froggy
Newbie
*
Offline Offline

Activity: 6
Merit: 0


View Profile
December 15, 2025, 07:44:31 PM
 #12237

So, if I understand correctly, it should create starting points sequentially this way:
200000000000000000
200000000000000001
200000000000000002
...

For a range [A, B] the fastest way to compute it entirely is starting from the middle of it and adding all the constant points (half of the range size), using the shared inverse to get each (left & right) points, and finally moving off to the next middle point of a new range (for example, but not necessarily, the immediate next range). Rinse and repeat as many times as needed.

So those starting points make no sense, but I'm not surprised that it's something ChatGPT would gladly suggest. They should all be middle points of distinct partitions inside some larger range (your target range). There are various valid options to make this choice, in such a way that the entire target range is covered efficiently, eventually.

I was more thinking about how to manage renting on various platforms and their API specificities, the renting latency, etc… I’m quite sure you’ve experienced that their API design / rate limiting gets annoying quite fast ? That’s what I was referring to.

Never needed more than 50 or so actual instances at once, so my only interaction with such APIs was a script that destroyed all instances automatically once they were no longer needed. I kinda hunted the cheapest bid rates manually, so yeah, I had to do some clicks once in a while. I'm sure the bidding can be automated as well, but it seems to be a big problem to find a lot of cheap computing power... and at the end of the day it all comes down to what budget is allocated.

Oh ok, I might have overestimated your automation then. I was running thousands for 67 and 68, so automation and finding the cheapest programmatically made a lot of sense. If ever you need to do that, hit me up, I'll share what I learned the long (and mostly annoying) way.

You were renting 1000s of GPUs on what platforms? On vast or clore going beyond 200 GPUs will mess up our initial budget/estimation of $0.15/hr/5090 GPU. Any tips?
kTimesG
Full Member
***
Offline Offline

Activity: 700
Merit: 220


View Profile
December 15, 2025, 08:59:57 PM
 #12238

You were renting 1000s of GPUs on what platforms? On vast or clore going beyond 200 GPUs will mess up our initial budget/estimation of $0.15/hr/5090 GPU. Any tips?

I have one: going with 5090 is far from optimal at this time. You should look at minimizing $ / work, not the number of GPUs. Unless of course you can find a way to make a single 5090 work faster than 2 or 3 4090s, to justify the higher price.
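
To make that metric concrete, a small sketch (every number below is a placeholder, not a measured speed or a real rental price; only the ratio matters):

Code:
# $ per fixed amount of work is the quantity to minimize, not $ per GPU.
def usd_per_1e15_keys(usd_per_hour, gkeys_per_sec):
    keys_per_hour = gkeys_per_sec * 1e9 * 3600
    return usd_per_hour / keys_per_hour * 1e15

print(usd_per_1e15_keys(0.30, 8.0))   # hypothetical "card A": ~10.4 $ per 10^15 keys
print(usd_per_1e15_keys(0.15, 5.0))   # hypothetical "card B": ~8.3 $ per 10^15 keys, better value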

Off the grid, training pigeons to broadcast signed messages.
Kermits Froggy
Newbie
*
Offline Offline

Activity: 6
Merit: 0


View Profile
December 16, 2025, 02:56:29 AM
 #12239

You were renting 1000s of GPUs on what platforms? On vast or clore going beyond 200 GPUs will mess up our initial budget/estimation of $0.15/hr/5090 GPU. Any tips?

I have one: going with 5090 is far from optimal at this time. You should look at minimizing $ / work, not the number of GPUs. Unless of course you can find a way to make a single 5090 work faster than 2 or 3 4090s, to justify the higher price.

Spot on. I agree. But for us, we can't scale beyond 150-200 GPUs within our budget constraints, and we can't break even if we increase our budget. With 150 GPUs it'll take forever.

I'd say Puzzle 71 will never be solved with the current bitcoin price and the 7.1 BTC reward.
Bram24732
Member
**
Offline Offline

Activity: 224
Merit: 22


View Profile
December 16, 2025, 07:24:08 AM
 #12240

So, if I understand correctly, it should create starting points sequentially this way:
200000000000000000
200000000000000001
200000000000000002
...

For a range [A, B] the fastest way to compute it entirely is starting from the middle of it and adding all the constant points (half of the range size), using the shared inverse to get each (left & right) points, and finally moving off to the next middle point of a new range (for example, but not necessarily, the immediate next range). Rinse and repeat as many times as needed.

So those starting points make no sense, but I'm not surprised that it's something ChatGPT would gladly suggest. They should all be middle points of distinct partitions inside some larger range (your target range). There are various valid options to make this choice, in such a way that the entire target range is covered efficiently, eventually.

I was more thinking about how to manage renting on various platforms and their API specificities, the renting latency, etc… I’m quite sure you’ve experienced that their API design / rate limiting gets annoying quite fast ? That’s what I was referring to.

Never needed more than 50 or so actual instances at once, so my only interaction with such APIs was a script that destroyed all instances automatically once they were no longer needed. I kinda hunted the cheapest bid rates manually, so yeah, I had to do some clicks once in a while. I'm sure the bidding can be automated as well, but it seems to be a big problem to find a lot of cheap computing power... and at the end of the day it all comes down to what budget is allocated.

Oh ok, I might have overestimated your automation then. I was running thousands for 67 and 68, so automation and finding the cheapest programmatically made a lot of sense. If ever you need to do that, hit me up, I'll share what I learned the long (and mostly annoying) way.

You were renting 1000s of GPUs on what platforms? On vast or clore going beyond 200 GPUs will mess up our initial budget/estimation of $0.15/hr/5090 GPU. Any tips?

I have API integration for every provider out there.
Whenever something is within budget (I had a $/trillion threshold), rent it.
I also have integration with direct SSH connection to farms for private deals.

For 67 / 68 it made a lot of sense.
It was barely break even for 69.
For 71 there’s no way it’s profitable, even with perfect code and lowest energy costs in the world.