Bitcoin Forum
Author Topic: Bitcoin puzzle transaction ~32 BTC prize to who solves it  (Read 368782 times)
kTimesG
Full Member
***
Offline Offline

Activity: 742
Merit: 224


View Profile
January 28, 2025, 11:42:40 PM
 #7241

Think of a die: there are numbers from 1 to 6. Imagine doing this in software. After the first throw, the number 3 comes up; the probability of the number 3 coming up again is low, even if the software resets itself (applies the hashing process) on the second throw.

OK, it's clear you are either trolling or haven't understood anything from the last few pages. Just spitting out the above nonsense junk is a great summary of your research, and you can be sure it helps everyone understand you perfectly by now.

Off the grid, training pigeons to broadcast signed messages.
mcdouglasx
Hero Member
*****
Offline Offline

Activity: 910
Merit: 515



View Profile WWW
January 28, 2025, 11:54:18 PM
 #7242

So to be clear, are you saying that if you "flip" (hash) a "coin" (ripemd(sha256(pubKey))) once to a "head" (target hash of Puzzle 67), then the next "flip" has a lower chance of being "heads" as well? Assuming the "coin" (RIPEMD) is a "fair coin" (any bit of any hash has an equal 50% chance)?

I think everyone understands the concept; you're just rambling. If you evaluate a single toss, it's the same individual probability. If you flip a coin a thousand times, the 1001st flip will still be 50/50, because you ignore the effort made to get the first 1000 (basically you're evaluating a single toss; the previous assumption is just fluff). However, getting 1001 heads in a row changes everything; that's why finding longer prefixes is more difficult.
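The two cases separated above (one more flip vs. a whole run) can be written out numerically; a minimal Python sketch, assuming a fair coin:

```python
from fractions import Fraction

p_heads = Fraction(1, 2)

# Independence: the probability of heads on the NEXT flip does not depend
# on how many heads have already been seen.
p_next = p_heads

# Compound probability: one specific run of 1001 heads, evaluated up front,
# is (1/2)**1001 -- astronomically small, even though each flip stays 50/50.
p_run = p_heads ** 1001

print(p_next)               # 1/2
print(float(p_heads ** 3))  # 0.125 -- a short run, as a small sanity check
```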

kTimesG
Full Member
***
Offline Offline

Activity: 742
Merit: 224


View Profile
January 29, 2025, 12:37:15 AM
 #7243

So to be clear, are you saying that if you "flip" (hash) a "coin" (ripemd(sha256(pubKey))) once to a "head" (target hash of Puzzle 67), then the next "flip" has a lower chance of being "heads" as well? Assuming the "coin" (RIPEMD) is a "fair coin" (any bit of any hash has an equal 50% chance)?

I think everyone understands the concept; you're just rambling. If you evaluate a single toss, it's the same individual probability. If you flip a coin a thousand times, the 1001st flip will still be 50/50, because you ignore the effort made to get the first 1000 (basically you're evaluating a single toss; the previous assumption is just fluff). However, getting 1001 heads in a row changes everything; that's why finding longer prefixes is more difficult.

"Ignore the effort"? How does that come into play in an abstract manner? In what equation does it appear? Who cares about that effort, or that there were some 1000 consecutive throws done without any coffee breaks, by someone somewhere in the Universe?

The issue I brought forward was something completely different: who or what says that it is important to give special treatment to comparing two consecutive (or close, or "average"-spaced) events/hashes? No one has answered this yet. But you mentioned this exact method as being a valid "foundational theory".

Now, once it is (hopefully) clear that for some two keys sharing the same hash (or the same prefix, or whatever) positions A and B exist, and it is (hopefully) clear that A and B are irrelevant (because SETS do not have order, and neither do subsets), then the big question arises:

If we find some hash at position A, why is it less likely that B == A + 1,

and why is it less likely that B is in some range [A + 1, A + whatever]?

Look, let's make a parallel: if you use Kangaroo to find DPs of 32 bits (just an example), sometimes you will find more DPs after N jumps than what the theory predicts (1 in 4.2 billion keys), and sometimes you will find far fewer DPs in the same number of jumps. Sometimes a lot of DPs will be found in a very short range, and sometimes far fewer after a longer series of jumps. So we are talking about N = hundreds or thousands of billions of keys, and already the deviation is this huge? Wow, how could that ever be possible? I mean, the space is so huge that surely we would have to find some semi-exact number of DPs at every semi-exact number of jumps, correct? Because... difficulty?
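The deviation described above can be reproduced with a toy simulation; a sketch assuming a simple Bernoulli model, with a DP probability far larger than 2**-32 so it runs quickly (all parameters here are illustrative):

```python
import random

random.seed(1)
P_DP = 1 / 1000        # toy "DP probability" -- far larger than 2**-32, for speed
WINDOW = 10_000        # jumps per window
N_WINDOWS = 100

# count DP hits in each consecutive window of jumps
hits = [sum(random.random() < P_DP for _ in range(WINDOW))
        for _ in range(N_WINDOWS)]

print("expected hits per window:", P_DP * WINDOW)   # 10.0
print("observed min/max:", min(hits), max(hits))    # typically spread far apart
```

The per-window counts are roughly Poisson-distributed, so individual windows swing far from the expectation even though the overall average converges to it.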

Kelvin555
Jr. Member
*
Offline Offline

Activity: 63
Merit: 1


View Profile
January 29, 2025, 03:54:44 AM
 #7244


Let's be more clear.

Think of a die: there are numbers from 1 to 6. Imagine doing this in software. After the first throw, the number 3 comes up; the probability of the number 3 coming up again is low, even if the software resets itself (applies the hashing process) on the second throw.

There are many reasons for this, you know.
For example; The concept of time is the biggest factor.

Are you for real?
You started with "Let's be more clear" and then you dropped a joke on us... Go read up about dice and probabilities.

But still, you do you. I believe that however anyone is searching, as long as you are doing it in the 67-bit space, you have the same probability of finding the right key as anyone else, regardless of how stupid your method is.
bibilgin
Newbie
*
Offline Offline

Activity: 276
Merit: 0


View Profile
January 29, 2025, 09:54:36 AM
 #7245

Are you for real?
You started with "Let's be more clear" and then you dropped a joke on us... Go read up about dice and probabilities.

But still, you do you. I believe that however anyone is searching, as long as you are doing it in the 67-bit space, you have the same probability of finding the right key as anyone else, regardless of how stupid your method is.

Yes man, I had to make a joke. lol

Because no one knows the basics of PROBABILITY or the answers to open-ended questions. That's why everyone writes down their theory and possible probability results. As you said, the PROBABILITY of finding it is the same for everyone.

That's why everyone may have a difference of opinion on the subject; they may have different thoughts.

I repeat: "Every man eats yogurt differently."
kTimesG
Full Member
***
Offline Offline

Activity: 742
Merit: 224


View Profile
January 29, 2025, 12:58:10 PM
 #7246

Any working theory can be successfully backed up by practical experiments, if the theoretical proofs are being denied or questioned. However, a non-working theory is usually impossible to back up in practice. bibilgin, your 2 weeks have been up for a long time; what more are you trying to convince us of? That there aren't actually 20,000 of your prefixes in the range (on average; maybe there are 60,000 of them, or maybe just 5,000), that WP hasn't found some prefix closer than yours, and so on? How can you prove any of that?

Code:
import itertools

N = 100     # set size
r = 3       # ASSUMED success findings in the set

avg_deltas = []
sd_deltas = []
for c in itertools.combinations(range(N), r):
    # for every POSSIBLE outcome, get the AVERAGE distance
    # between ALL "this and next" pairs of found positions
    deltas = []
    for posAB in range(r - 1):
        deltas.append(c[posAB + 1] - c[posAB])
    delta_avg = sum(deltas) / (r - 1)
    avg_deltas.append(delta_avg)

    # compute StdDev of observed deltas, for this sample only
    sample_std_dev = pow(sum((x - delta_avg) ** 2 for x in deltas) / len(deltas), 1 / 2)
    sd_deltas.append(sample_std_dev)

# compute deviation of the average deltas, of all possible outcomes
mean_ab = sum(avg_deltas) / len(avg_deltas)
ideal_std_dev = pow(sum([(x - mean_ab) ** 2 for x in avg_deltas]) / len(avg_deltas), 1 / 2)

print(f"Mean delta: {mean_ab}")
print(f"StdDev: {ideal_std_dev}")

# Attempt to use the ideal values to predict positions
# Compute result errors for every POSSIBLE outcome
prediction_errors = []
for sample_sd in sd_deltas:
    error_delta = abs(sample_sd - ideal_std_dev)
    prediction_errors.append(error_delta)

mean_error = sum(prediction_errors) / len(prediction_errors)
print(f"Average expected StdDev error: {mean_error} = {1 - mean_error/ideal_std_dev:%}")

prediction_errors = []
for sample_avg in avg_deltas:
    error_delta = abs(sample_avg - mean_ab)
    prediction_errors.append(error_delta)

mean_error = sum(prediction_errors) / len(prediction_errors)
print(f"Average expected Mean error: {mean_error} = {1 - mean_error/mean_ab:%}")

Code:
Mean delta: 25.25
StdDev: 11.066277603602758
Average expected StdDev error: 7.686000404305985 = 30.545747%
Average expected Mean error: 9.280303030303031 = 63.246325%

mcdouglasx
Hero Member
*****
Offline Offline

Activity: 910
Merit: 515



View Profile WWW
January 29, 2025, 01:35:29 PM
 #7247

"Ignore the effort"? How does that come into play in an abstract manner? In what equation does it appear? Who cares about that effort, or that there were some 1000 consecutive throws done without any coffee breaks, by someone somewhere in the Universe?

The issue I brought forward was something completely different: who or what says that it is important to give special treatment to comparing two consecutive (or close, or "average"-spaced) events/hashes? No one has answered this yet. But you mentioned this exact method as being a valid "foundational theory".

Now, once it is (hopefully) clear that for some two keys sharing the same hash (or the same prefix, or whatever) positions A and B exist, and it is (hopefully) clear that A and B are irrelevant (because SETS do not have order, and neither do subsets), then the big question arises:

If we find some hash at position A, why is it less likely that B == A + 1,

and why is it less likely that B is in some range [A + 1, A + whatever]?

In short, it applies compound probability to prefix search in a given range because secp256k1 has already flipped the coin for a set of 2**256 samples. This set is already constituted and is immutable.

Compound probability refers to the probability of two or more independent events occurring one after the other. To calculate it, you multiply the probabilities of each individual event.

For example, if you flip a coin and want to know the probability of getting two consecutive heads, you would multiply the probability of getting heads on the first flip (1/2) by the probability of getting heads on the second flip (1/2).
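The multiplication rule for two independent flips can be verified by brute-force enumeration; a minimal sketch:

```python
from itertools import product

# all equally likely outcomes of two fair coin flips: HH, HT, TH, TT
outcomes = list(product("HT", repeat=2))

p_two_heads = sum(o == ("H", "H") for o in outcomes) / len(outcomes)

# multiplication rule for independent events: 1/2 * 1/2
assert p_two_heads == 0.5 * 0.5
print(p_two_heads)   # 0.25
```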

bibilgin
Newbie
*
Offline Offline

Activity: 276
Merit: 0


View Profile
January 29, 2025, 01:50:58 PM
 #7248

Any working theory can be successfully backed up by practical experiments, if the theoretical proofs are being denied or questioned. However, a non-working theory is usually impossible to back up in practice. bibilgin, your 2 weeks have been up for a long time; what more are you trying to convince us of? That there aren't actually 20,000 of your prefixes in the range (on average; maybe there are 60,000 of them, or maybe just 5,000), that WP hasn't found some prefix closer than yours, and so on? How can you prove any of that?


Okay, calm down. We can't change your mind. We get it.

You can eat your yogurt.
Now take everything you asked ChatGPT and go back to your room.

By the way, were you going to do something in 2 weeks? I don't get it.
I think you should shut up now, with your personality that doesn't like what people do, thinks only you know best, and an EGO that is skyrocketing.
kTimesG
Full Member
***
Offline Offline

Activity: 742
Merit: 224


View Profile
January 29, 2025, 01:53:37 PM
 #7249

In short, it applies compound probability to prefix search in a given range because secp256k1 has already flipped the coin for a set of 2**256 samples. This set is already constituted and is immutable.

Compound probability refers to the probability of two or more independent events occurring one after the other. To calculate it, you multiply the probabilities of each individual event.

For example, if you flip a coin and want to know the probability of getting two consecutive heads, you would multiply the probability of getting heads on the first flip (1/2) by the probability of getting heads on the second flip (1/2).

What reference do you have that uses this particular statement:

one after the other

Let's say we have a coin that flipped heads twice, in 100 flips.

What is the difference between the probability of getting heads at flips #1 and #99, and the probability of getting heads at flips #27 and #28?

If you write down all the possible combinations of sequences (2 heads, 100 flips), they all have the exact same probability (because any of the possible combinations has the same exact equal chance of occurring), and they all have the exact same compound probability. Because we are working with the SET of all possible outcomes, the calculation of any probability you want to compute does not involve the POSITION where the coin flipped heads. That sort of thing would make them dependent events, not independent random events.

It works exactly the same with hashes of length 160 bits, dice, you name it. What formula do you know of that introduces the position, or the order, of the compounded events into the calculation of the probability?
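A small sketch of the (2 heads, 100 flips) case: every fully specified sequence has the same probability, and placements like flips (1, 99) and flips (27, 28) are just two of the equally weighted arrangements:

```python
from math import comb

n, k = 100, 2

# any ONE fully specified sequence of 100 flips has probability (1/2)**100,
# regardless of where its two heads sit
p_specific_sequence = 0.5 ** n

# number of distinct placements of the two heads among 100 flips
placements = comb(n, k)
print(placements)           # 4950
print(p_specific_sequence)  # each placement carries this same probability
```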

mcdouglasx
Hero Member
*****
Offline Offline

Activity: 910
Merit: 515



View Profile WWW
January 29, 2025, 02:53:07 PM
Last edit: January 29, 2025, 03:08:55 PM by mcdouglasx
 #7250

If you write down all the possible combinations of sequences (2 heads, 100 flips), they all have the exact same probability (because any of the possible combinations has the same exact equal chance of occurring), and they all have the exact same compound probability.

And it is for this reason that, with these same probabilities, you can average how many steps away the next prefix might be. It is not exact; these are just probabilities, and they cannot accurately predict the next specific sequence.

In hexadecimal, the chance of finding 'f' is 1/16 and the chance of finding 'ff' is 1/16 * 1/16, which is equivalent to 1/256, that is, a compound probability calculation.

Although it seems that the probability of being close is the same, it is less likely to find a prefix close to another due to the uniform distribution of the hashes.
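The prefix arithmetic above can be checked exhaustively over a single byte; a minimal sketch:

```python
# exhaustive check over all 256 one-byte values
hits_f = sum(1 for b in range(256) if f"{b:02x}".startswith("f"))
hits_ff = sum(1 for b in range(256) if f"{b:02x}" == "ff")

print(hits_f / 256)    # 0.0625      == 1/16
print(hits_ff / 256)   # 0.00390625  == 1/256
```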

kTimesG
Full Member
***
Offline Offline

Activity: 742
Merit: 224


View Profile
January 29, 2025, 03:53:50 PM
 #7251

If you write down all the possible combinations of sequences (2 heads, 100 flips), they all have the exact same probability (because any of the possible combinations has the same exact equal chance of occurring), and they all have the exact same compound probability.

And it is for this reason that, with these same probabilities, you can average how many steps away the next prefix might be. It is not exact; these are just probabilities, and they cannot accurately predict the next specific sequence.

In hexadecimal, the chance of finding 'f' is 1/16 and the chance of finding 'ff' is 1/16 * 1/16, which is equivalent to 1/256, that is, a compound probability calculation.

Although it seems that the probability of being close is the same, it is less likely to find a prefix close to another due to the uniform distribution of the hashes.

No, you cannot average anything this way, because the average works over the entire set of possible combinations. This is like trying to claim that you are somehow affecting the results because they should respect some rule. Like the Monty Hall problem.

So, finding prefixes next to each other has the same chance as finding prefixes after ANY possible distance. You can verify this by enumerating all the possibilities that can occur: you will discover that the probabilities you end up with are all identical. Whatever distance you choose between the prefixes, they all have the same exact probability of occurring.
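The enumeration suggested here is easy to carry out on a toy set (10 positions, 2 hits; numbers chosen only for illustration):

```python
from collections import Counter
from itertools import combinations

N, K = 10, 2   # toy set: 10 positions, 2 hits

pairs = list(combinations(range(N), K))
p_each = 1 / len(pairs)     # every specific placement is equiprobable

print(len(pairs), p_each)   # 45 placements, 1/45 each

# Aggregated by distance, a gap of d has N - d placements, so short gaps
# appear more often in total, even though each individual placement
# carries the same 1/45.
gap_counts = Counter(b - a for a, b in pairs)
print(sorted(gap_counts.items()))   # [(1, 9), (2, 8), ..., (9, 1)]
```

This is the crux of the disagreement in the thread: "each specific placement is equally likely" and "some gap sizes occur more often in aggregate" are both true at once.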

mcdouglasx
Hero Member
*****
Offline Offline

Activity: 910
Merit: 515



View Profile WWW
January 29, 2025, 04:21:30 PM
 #7252

No, you cannot average anything this way, because the average works over the entire set of possible combinations. This is like trying to claim that you are somehow affecting the results because they should respect some rule. Like the Monty Hall problem.

So, finding prefixes next to each other has the same chance as finding prefixes after ANY possible distance. You can verify this by enumerating all the possibilities that can occur: you will discover that the probabilities you end up with are all identical. Whatever distance you choose between the prefixes, they all have the same exact probability of occurring.
So you say that finding 2 prefixes of 5 characters is just as likely as 2 prefixes of 10 characters, because according to you it is impossible to average.

kTimesG
Full Member
***
Offline Offline

Activity: 742
Merit: 224


View Profile
January 29, 2025, 04:50:01 PM
 #7253

No, you cannot average anything this way, because the average works over the entire set of possible combinations. This is like trying to claim that you are somehow affecting the results because they should respect some rule. Like the Monty Hall problem.

So, finding prefixes next to each other has the same chance as finding prefixes after ANY possible distance. You can verify this by enumerating all the possibilities that can occur: you will discover that the probabilities you end up with are all identical. Whatever distance you choose between the prefixes, they all have the same exact probability of occurring.
So you say that finding 2 prefixes of 5 characters is just as likely as 2 prefixes of 10 characters, because according to you it is impossible to average.


No. What I'm saying is that if you estimated that there are 1000 hashes with whatever prefix in some whatever bit-range, it does not matter whether those 1000 hashes are obtained from consecutive keys or from spaced-apart keys, because the AVERAGE number of prefixes per subrange is identical in all cases.

Just because there are countless more possibilities for them to be spaced apart does not change anything, can't you see? Each of those countless "other" options occurs with an equal chance. But once you take them to be analyzed (one possibility at a time), the averages are all the same, while each individual possibility will not respect any average. Sounds absurd, but this is how probability works: short term = can't estimate anything; long term = averages converge.
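The "short term vs. long term" point is the law of large numbers; a toy demonstration:

```python
import random

random.seed(0)
flips = [random.randrange(2) for _ in range(100_000)]   # 0/1 fair flips

# running averages: short runs wander, long runs settle near 0.5
for n in (10, 100, 10_000, 100_000):
    print(n, sum(flips[:n]) / n)
```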

Bill buffalo
Newbie
*
Offline Offline

Activity: 50
Merit: 0


View Profile
January 30, 2025, 03:43:58 AM
 #7254

lol, I have no reason to lie.

I did not scan anything, well, recently. I found it on a rig that hasn't ran the 67 challenge in a while. I finally consolidated my list and spotted that one.

You can do what you will with the info. I was just informing you that there is an address closer.

Thank you for not giving me information = (Nothing).
If you wanted me to write this, I wrote it.

Now it's up to you to prove it's a LIE or TRUE.
You won't be the first or the last person to say you're closer than me. Wink

My MESSAGE to everyone ;

I really don't understand most of you.

You say you are doing it wrong?
I say prove it. Everyone says things they have memorized.
But you don't think there is a difference of opinion or that we are wrong.

Everyone has calculated how many 10- or 11-character prefixes there are in the 66-67 bit space. Nobody knows the right answer; everyone quotes an average PROBABILITY result.

When I say something about PROBABILITY, I am declared the person who is doing it WRONG.

I know all of these, friends. But don't forget that there will be solutions.


------------------------------------------------------------
Private Key:
Public Key:
Address: 1BY8GQbnueYofwSuFAT3USAhArAdMsz8Qy
Hash160: 739437bb3dd6d1983e66629c5f08c70e51db53af
------------------------------------------------------------
Private Key:
Public Key:
Address: 1BY8GQbnueYofwSuFAT3USAhFEzGfW5Jd5
Hash160: 739437bb3dd6d1983e66629c5f08c70e524f436c
------------------------------------------------------------
Private Key:
Public Key:
Address: 1BY8GQbnueYofwSuFAT3USAhBXaa9CSSoz
Hash160: 739437bb3dd6d1983e66629c5f08c70e51ed4234
------------------------------------------------------------
Private Key:
Public Key:
Address: 1BY8GQbnueYofwSuFAT3USAhBREEgoMzYo
Hash160: 739437bb3dd6d1983e66629c5f08c70e51ea5e87
------------------------------------------------------------
Private Key:
Public Key:
Address: 1BY8GQbnueYofwSuFAT3USAhGm5ehSbYxs
Hash160: 739437bb3dd6d1983e66629c5f08c70e527757d3
Hello Geshma, I would like to ask you for the private keys and public keys of the addresses listed below. You posted them but didn't add the private keys and public keys. Please could you also post the private keys?
Private Key:
Public Key:
Address: 1BY8GQbnueYofwSuFAT3USAhArAdMsz8Qy
Hash160: 739437bb3dd6d1983e66629c5f08c70e51db53af
------------------------------------------------------------
Private Key:
Public Key:
Address: 1BY8GQbnueYofwSuFAT3USAhFEzGfW5Jd5
Hash160: 739437bb3dd6d1983e66629c5f08c70e524f436c
------------------------------------------------------------
Private Key:
Public Key:
Address: 1BY8GQbnueYofwSuFAT3USAhBXaa9CSSoz
Hash160: 739437bb3dd6d1983e66629c5f08c70e51ed4234
------------------------------------------------------------
Private Key:
Public Key:
Address: 1BY8GQbnueYofwSuFAT3USAhBREEgoMzYo
Hash160: 739437bb3dd6d1983e66629c5f08c70e51ea5e87
------------------------------------------------------------
Private Key:
Public Key:
Address: 1BY8GQbnueYofwSuFAT3USAhGm5ehSbYxs
Hash160: 739437bb3dd6d1983e66629c5f08c70e527757d3
Bill buffalo
Newbie
*
Offline Offline

Activity: 50
Merit: 0


View Profile
January 30, 2025, 09:26:49 AM
 #7255


Currently no, but once I finalize the code I will post a link to GitHub; it's a work in progress.

Hello Geshma, I know it is a work in progress, but I really need this now; it will help and go a long way in what I am doing. You don't have to release the private keys for all the addresses; two private keys should be alright. And you don't have to post them here, you can send them to me personally. I don't want the code; all I need is just two private keys for any of the addresses above. Thank you.
damiankopacz87
Newbie
*
Offline Offline

Activity: 16
Merit: 0


View Profile
January 30, 2025, 01:16:50 PM
 #7256

Hi,

Lately I was wondering whether it is possible to modify JLPKangaroo_OW_OT to write DPs to an M.2 NVMe SSD instead of RAM. Do you have any experience or theoretical knowledge about such an issue?

I know that RAM is different from an SSD, but considering that the main task in Kangaroo is to calculate points, find DPs and search for collisions, is there a chance to store DPs on an NVMe SSD without losing performance?

Best Regards
Damian
WanderingPhilospher
Sr. Member
****
Offline Offline

Activity: 1470
Merit: 284

Shooters Shoot...


View Profile
January 30, 2025, 03:23:12 PM
 #7257

Hi,

Lately I was wondering whether it is possible to modify JLPKangaroo_OW_OT to write DPs to an M.2 NVMe SSD instead of RAM. Do you have any experience or theoretical knowledge about such an issue?

I know that RAM is different from an SSD, but considering that the main task in Kangaroo is to calculate points, find DPs and search for collisions, is there a chance to store DPs on an NVMe SSD without losing performance?

Best Regards
Damian
Yes and no.

Yes, you can modify the code to write to output files.

If you use a low DP, yes, it will lose speed/performance.

If you use a high DP, no, it will not lose speed/performance.

The OG of GPU Kangaroo programs uses this method, printing points and distances to files and then checking for collisions every X seconds.
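A rough sketch of that file-backed approach (the DP criterion and all names here are illustrative, not the actual code of any program mentioned): a point counts as "distinguished" when its x-coordinate has a fixed number of trailing zero bits, and qualifying (x, distance) pairs are appended to a file for an offline collision check.

```python
DP_BITS = 20
DP_MASK = (1 << DP_BITS) - 1

def is_distinguished(x: int) -> bool:
    # ~1 in 2**DP_BITS points qualifies; a higher DP_BITS means rarer writes,
    # which is why a high DP keeps file I/O off the hot path
    return (x & DP_MASK) == 0

def store_dp(path: str, x: int, dist: int) -> None:
    # append-only: collisions are found later by scanning/sorting the file
    with open(path, "a") as f:
        f.write(f"{x:064x} {dist:x}\n")
```

The trade-off WanderingPhilospher describes falls out of the mask width: with a low DP (small DP_BITS), nearly every point triggers a write and disk latency dominates; with a high DP, writes are rare and throughput is unaffected.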
brainless
Member
**
Offline Offline

Activity: 467
Merit: 35


View Profile
January 30, 2025, 04:33:29 PM
 #7258

Hi,

Lately I was wondering whether it is possible to modify JLPKangaroo_OW_OT to write DPs to an M.2 NVMe SSD instead of RAM. Do you have any experience or theoretical knowledge about such an issue?

I know that RAM is different from an SSD, but considering that the main task in Kangaroo is to calculate points, find DPs and search for collisions, is there a chance to store DPs on an NVMe SSD without losing performance?

Best Regards
Damian
Yes, look at GitHub.com/Telariust. There were 2 versions, hybrid and multi; they print DPs to a file.

13sXkWqtivcMtNGQpskD78iqsgVy9hcHLF
WanderingPhilospher
Sr. Member
****
Offline Offline

Activity: 1470
Merit: 284

Shooters Shoot...


View Profile
January 30, 2025, 07:05:56 PM
 #7259

Hi,

Lately I was wondering whether it is possible to modify JLPKangaroo_OW_OT to write DPs to an M.2 NVMe SSD instead of RAM. Do you have any experience or theoretical knowledge about such an issue?

I know that RAM is different from an SSD, but considering that the main task in Kangaroo is to calculate points, find DPs and search for collisions, is there a chance to store DPs on an NVMe SSD without losing performance?

Best Regards
Damian
Yes find at
GitHub.com/Telariust
There were 2 ver
Hybrid and multi
It's print dp to file


Telariust only came up with a CPU-only version.

Alek76 was the OG who developed a working GPU Kangaroo program, the hybrid and multi.
brainless
Member
**
Offline Offline

Activity: 467
Merit: 35


View Profile
January 30, 2025, 08:02:09 PM
 #7260

Hi,

Lately I was wondering whether it is possible to modify JLPKangaroo_OW_OT to write DPs to an M.2 NVMe SSD instead of RAM. Do you have any experience or theoretical knowledge about such an issue?

I know that RAM is different from an SSD, but considering that the main task in Kangaroo is to calculate points, find DPs and search for collisions, is there a chance to store DPs on an NVMe SSD without losing performance?

Best Regards
Damian
Yes, look at GitHub.com/Telariust. There were 2 versions, hybrid and multi; they print DPs to a file.


Telariust only came up with a CPU-only version.

Alek76 was the OG who developed a working GPU Kangaroo program, the hybrid and multi.
Yes, you're right.
I read the README from his git download, where he references Telariust and JLP.
Alek76's is the right program for saving DPs to a file; a Python comparator is then used to find the solution.
