Bitcoin Forum
November 04, 2024, 02:25:15 PM *
News: Latest Bitcoin Core release: 28.0 [Torrent]
 
Author Topic: Bitcoin puzzle transaction ~32 BTC prize to who solves it  (Read 222964 times)
kTimesG
Member
**
Offline Offline

Activity: 251
Merit: 39


View Profile
August 30, 2024, 11:55:39 AM
 #5741

Can someone please give a technical explanation of how they find the private key (how this search works) after someone has already found it and started a transaction? I know mining means finding a part of the private key, and these private keys are in a narrow window. But there are so many pools that I would expect 1 or 2 confirmations "quickly". Even if my transaction got, say, 1 confirmation, are they still allowed to cancel the transaction once they have the private key?

Is it a naive approach to create a bot to make ~60 transactions of 0.1 BTC each?
This was discussed to death dozens of times, why don't you read the last pages of the thread?

Once the private key is found, it is inevitable that its public key becomes, well... public when you create the transaction, so that the signature can be verified. It doesn't matter if you create 1 output or 100 outputs. Since the public key gets, well... public, everyone will know it. Knowing the public key of a "small" private key (< 100 bits, give or take, depending on current technology) allows finding the corresponding private key in anywhere between less than a second and 10 minutes, before your original transaction gets confirmed in a mined block. Knowing the private key allows creating a new valid (signed via the private key) overriding transaction with a higher fee, which replaces your transaction, regardless of what it looked like or of its content, flags, etc.
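For intuition, here is a hedged sketch of why a "small" exposed key falls so fast: baby-step/giant-step needs only about sqrt(b) group operations to search an interval of size b. The sketch below runs over a toy multiplicative group mod a prime rather than secp256k1 (the function name and toy parameters are illustrative, not anyone's actual attack code), but the same idea applies with elliptic-curve point additions:

```python
from math import isqrt

def bsgs(g, q, b, p):
    """Return k in [0, b) with pow(g, k, p) == q, or None (toy BSGS)."""
    m = isqrt(b) + 1
    baby = {pow(g, j, p): j for j in range(m)}       # baby steps: g^j for j < m
    giant = pow(g, (p - 1 - m) % (p - 1), p)         # g^(-m) via Fermat's little theorem
    y = q
    for i in range(m):
        if y in baby:                                # q * g^(-i*m) matched a baby step
            return i * m + baby[y]
        y = (y * giant) % p                          # strip m from the exponent each round
    return None
```

With b around 2^80 this is roughly 2^40 group operations, which is why a sub-100-bit key lasts minutes at best once its public key is visible in the mempool.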
madogss
Newbie
*
Offline Offline

Activity: 37
Merit: 0


View Profile
August 30, 2024, 07:45:40 PM
 #5742

Number of numbers with unequal bits: 431910
Number of numbers with equal bits: 92378
Number of numbers differing by 1 bit: 0
Number of numbers differing by 2 bits: 167960
Number of numbers differing by 3 bits: 0
Number of numbers differing by 4 bits: 125970
Number of numbers differing by 5 bits: 0
Maximum deviation from 50/50: 20

[Program finished]

so in your search range it's not 2^20 - 2^19 = 524288 calculations, but 92378 )) about 5 times smaller, yes? ))

the private key of puzzle 2^19-2^20 is 11010010110001010101, so, my darlings Smiley, you are spending 75 percent of your time for free when you brute-force the whole range



I wonder how you convert the 92378 calculations into a valid search range? And do you have any plan to rewrite it in a more efficient language, so it could possibly be run on higher bit ranges?
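For reference, the counts quoted above drop straight out of binomial coefficients: the range 2^19..2^20 holds 2^19 numbers of 20 bits whose leading bit is fixed at 1, leaving 19 free bits. A quick sketch (my own verification, not the original poster's script):

```python
from math import comb

# 20-bit numbers in [2^19, 2^20): leading bit is 1, so 19 bits are free.
free = 19
equal   = comb(free, 9)                    # 10 ones / 10 zeros overall -> 92378
diff2   = comb(free, 10) + comb(free, 8)   # 11/9 or 9/11 split         -> 167960
diff4   = comb(free, 11) + comb(free, 7)   # 12/8 or 8/12 split         -> 125970
unequal = 2 ** free - equal                # everything not 10/10       -> 431910
print(equal, diff2, diff4, unequal)
```

Note the trade-off: the puzzle-20 key quoted above does happen to have a 10/10 bit split, but only 92378/524288 ≈ 17.6% of keys in the range do, so filtering on that property misses the solution about 82% of the time.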
COBRAS
Member
**
Offline Offline

Activity: 1014
Merit: 23


View Profile
August 30, 2024, 09:33:43 PM
Last edit: September 01, 2024, 08:45:46 PM by Mr. Big
 #5743

Number of numbers with unequal bits: 431910
Number of numbers with equal bits: 92378
Number of numbers differing by 1 bit: 0
Number of numbers differing by 2 bits: 167960
Number of numbers differing by 3 bits: 0
Number of numbers differing by 4 bits: 125970
Number of numbers differing by 5 bits: 0
Maximum deviation from 50/50: 20

[Program finished]

so in your search range it's not 2^20 - 2^19 = 524288 calculations, but 92378 )) about 5 times smaller, yes? ))

the private key of puzzle 2^19-2^20 is 11010010110001010101, so, my darlings Smiley, you are spending 75 percent of your time for free when you brute-force the whole range



I wonder how you convert the 92378 calculations into a valid search range? And do you have any plan to rewrite it in a more efficient language, so it could possibly be run on higher bit ranges?

I am not searching a range; I am trying to research how many 0 and 1 bits there are in the range, and how many 0s and 1s are in the keys at https://privatekeys.pw/puzzles/bitcoin-puzzle-tx

After that, remove the invalid generations. Yes, calculating the whole range is not needed, as far as I'm concerned; for the other guys, I don't know.



Number of numbers with unequal bits: 431910
Number of numbers with equal bits: 92378
Number of numbers differing by 1 bit: 0
Number of numbers differing by 2 bits: 167960
Number of numbers differing by 3 bits: 0
Number of numbers differing by 4 bits: 125970
Number of numbers differing by 5 bits: 0
Maximum deviation from 50/50: 20

[Program finished]

so in your search range it's not 2^20 - 2^19 = 524288 calculations, but 92378 )) about 5 times smaller, yes? ))

the private key of puzzle 2^19-2^20 is 11010010110001010101, so, my darlings Smiley, you are spending 75 percent of your time for free when you brute-force the whole range



I wonder how you convert the 92378 calculations into a valid search range? And do you have any plan to rewrite it in a more efficient language, so it could possibly be run on higher bit ranges?

Not a range; the random generator does 92378 candidates, not the full range.

gygy
Newbie
*
Offline Offline

Activity: 19
Merit: 0


View Profile
August 31, 2024, 08:04:43 PM
 #5744

If it is not a secret, how many keys have you checked in total, and per second?

I started four months ago with an obsolete GPU, and it only runs when I am home and I feel like it. I have checked 576_006_654_001_152 keys at about 150 MKeys/sec.

I know these are rookie numbers. Smiley
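For scale, the figures above work out as follows (pure arithmetic, assuming the quoted average speed was sustained):

```python
keys_checked = 576_006_654_001_152   # total keys reported checked
speed = 150e6                        # 150 MKeys/sec
days = keys_checked / speed / 86_400 # seconds per day
print(round(days, 1))                # ≈ 44.4 days of equivalent full-time search
```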
kTimesG
Member
**
Offline Offline

Activity: 251
Merit: 39


View Profile
September 02, 2024, 04:11:51 PM
 #5745

Can an actual expert in EC confirm or deny this simple statement?

Considering the secp256k1 curve, and the existence of symmetry and the lambda endomorphism, there cannot exist any algorithm that solves the ECDLP over an interval of width b in fewer than sqrt(b) group operations (with no precomputation required).

From what I "know", the complexity constant factors are something like: BSGS: 1, Rho: 1.25, normal kangaroo: 2, four-kangaroo: 1.71, Gaudry-Schost: 1.36.

If the answer is no, what can happen if it is proven otherwise? If the answer is yes, is there a known lower bound? And maybe a research paper?
gygy
Newbie
*
Offline Offline

Activity: 19
Merit: 0


View Profile
September 02, 2024, 04:56:00 PM
 #5746

Can an actual expert in EC confirm or deny this simple statement?

Considering the secp256k1 curve, and the existence of symmetry and the lambda endomorphism, there cannot exist any algorithm that solves the ECDLP over an interval of width b in fewer than sqrt(b) group operations (with no precomputation required).

From what I "know", the complexity constant factors are something like: BSGS: 1, Rho: 1.25, normal kangaroo: 2, four-kangaroo: 1.71, Gaudry-Schost: 1.36.

If the answer is no, what can happen if it is proven otherwise? If the answer is yes, is there a known lower bound? And maybe a research paper?

Since we don't know whether P = NP, the most we can state is a weak "probably not".
Henark
Member
**
Offline Offline

Activity: 132
Merit: 10


View Profile WWW
September 04, 2024, 11:18:06 AM
Last edit: September 04, 2024, 12:05:43 PM by Henark
 #5747

KwDiBf89QgGbjEhKnhXJuH7LrciVrZi3qYjgd9M7rFUZQ9KKCuDw = 3070 = 0000000000000000000000000000000000000000000000000000000000000BFE
Biginteger PVK value: 3070
Hex PVK value: 0BFE
Private Key Base64 (44 characters):
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAC/4=
kTimesG
Member
**
Offline Offline

Activity: 251
Merit: 39


View Profile
September 05, 2024, 12:08:36 PM
 #5748

While playing around with trying to achieve the theoretical running time (in number of group operations) of several known IDLP methods, I stumbled upon a method that solves it in 1.55 sqrt(b) ops, and it is not based on Gaudry-Schost (e.g. all walks go forward, and restarting walks is not necessary unless a point was already visited by the same type of walk). Fruitless cycles are not possible.

The method only requires 3 starting points, one of them fixed, and a translation of the interval, but not around 0. I need to see if it can be parallelized while keeping total ops low.

It also works with DP > 0, so this pretty much beats anything currently known, since the time/memory trade-off is much, much better. I tested it with several experiments of 5000 runs each, over misc. interval sizes and DP exponents.

I need to get this reviewed by some professors I know, but if it checks out, it would be a speedup of 10% over Pollard's four-kangaroo algorithm (1.71 / 1.55) and 29% over the classic 2*sqrt(b) kangaroo algorithm (2 / 1.55).

Note: this is not a joke. Also, this only works on groups with fast inversion (P ~ -P equivalence class), like EC.

Why did I do this? Well, the best known ECDLP algorithm is improved Gaudry-Schost jumping over equivalence classes, but that method is an absolute pain to run in practice, since it involves detecting fruitless cycles and restarting from another random point when a DP is hit (e.g. lots of group multiplications), which ruins the actual runtime, and it is also a nightmare to implement on a GPU.
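For readers following along, here is a minimal sketch of the classic two-kangaroo (lambda) baseline that the 2*sqrt(b) and 1.71*sqrt(b) figures are measured against. It is a toy over a multiplicative group mod a prime, not the poster's method; the jump-table choices and constants are illustrative:

```python
from math import isqrt

def pollard_kangaroo(g, h, b, p):
    """Find x in [0, b] with pow(g, x, p) == h; expected ~2*sqrt(b) jumps."""
    k = max(b.bit_length() // 2 + 2, 4)     # jump table size
    jumps = [2 ** i for i in range(k)]      # mean jump on the order of sqrt(b)
    f = lambda y: jumps[y % k]              # deterministic pseudo-random jump size

    # Tame kangaroo: start at the top of the interval (g^b) and set a trap.
    yt, dt = pow(g, b, p), 0
    for _ in range(4 * isqrt(b) + 10):
        step = f(yt)
        yt = yt * pow(g, step, p) % p
        dt += step
    trap = yt

    # Wild kangaroo: start at h = g^x and jump until it lands in the trap.
    yw, dw = h % p, 0
    while dw <= b + dt:
        if yw == trap:
            return b + dt - dw              # g^(x + dw) == g^(b + dt)
        step = f(yw)
        yw = yw * pow(g, step, p) % p
        dw += step
    return None                             # rare miss: retry with another jump table
```

Once the wild walk steps onto any point the tame walk visited, the two walks merge (same deterministic jump function) and the wild one inevitably reaches the trap; the distance bookkeeping then yields x. The 1.55 sqrt(b) claim above concerns a different walk structure, not this bookkeeping.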
WanderingPhilospher
Full Member
***
Offline Offline

Activity: 1204
Merit: 237

Shooters Shoot...


View Profile
September 05, 2024, 02:50:25 PM
Last edit: September 05, 2024, 03:10:04 PM by WanderingPhilospher
 #5749

Quote
Fruitless cycles are not possible.
Define:
a cycle
a fruitless cycle

Based on this:
Quote
restarting walks is not necessary unless a point was visited already by same type of walk

That is a fruitless cycle, or at least a partially fruitless cycle.

With your "new found" method, does it require knowing how many "processors" will be used up front, like one of Pollard's methods, to avoid fruitless / useless cycles?

Quote
and is also a nightmare to implement on a GPU

Why do you say this? If a tame lands on a point already stored by another tame, just "rekey" that one.
Agreed that useless collisions slow down / impact the overall run time, but I don't think it's a nightmare to implement on a GPU. I'm pretty sure it's standard in JLP's Kangaroo version, if I am understanding you correctly.

Interesting find. Keep us posted.



COBRAS
Member
**
Offline Offline

Activity: 1014
Merit: 23


View Profile
September 05, 2024, 04:06:31 PM
 #5750


https://github.com/bobmitch23/discrete-log/tree/master

"


They include the Pohlig-Hellman Algorithm (practically the fastest), the Baby Step Giant Step Algorithm and the Rho Algorithm.

"
Code:

import discrete_log

rho_metadata = discrete_log.rho.solve(5, 222137199848, 4889427811007)
print(rho_metadata.solution, rho_metadata.secondsTaken)

pohlig_hellman_metadata = discrete_log.pohlig_hellman.solve(5, 222137199848, 4889427811007)
print(pohlig_hellman_metadata.solution, pohlig_hellman_metadata.secondsTaken)

baby_giant_metadata = discrete_log.baby_giant.solve(5, 222137199848, 4889427811007)
print(baby_giant_metadata.solution, baby_giant_metadata.secondsTaken)

Results:


971597696122 16.105365991592407
971597696122 0.024655580520629883
971597696122 37.2728009223938


Kangaroo and Rho look like they are not the fastest.

kTimesG
Member
**
Offline Offline

Activity: 251
Merit: 39


View Profile
September 05, 2024, 04:39:16 PM
 #5751

Quote
Fruitless cycles are not possible.
Define:
a cycle
a fruitless cycle

If we only jump forward, cycles are only possible if we end up back at the beginning of the interval, or if somehow an incredibly long jump ends up on a visited point and no DP was found during the entire walk. These can't occur for bit sizes < 254 or so. Otherwise, at some point, a self-collision with a DP ends the cycle.

Based on this:
Quote
restarting walks is not necessary unless a point was visited already by same type of walk

That is a fruitless cycle, or at least a partial fruitless cycle.

I would not be so sure about that; I said "same type of walk", not "same walk". And restarting simply means moving forward by a little bit, not a full-blown multiplication again.


With your "new found" method, does it require knowing how many "processors" will be used up front, like one of Pollard's methods, to avoid fruitless / useless cycles?

Knowing the number of processors changes some parameters as usual (optimal average jump size, etc.), but if you compare expected results across misc. variants of kangaroo parallelization using a higher or lower number of processors (m), there isn't really a huge difference in actual runtime. Those are idealistic theoretical averages, under a very probabilistic reality.

The Pollard variant fails if the node that was supposed to give the solution fails.
The van Oorschot variant can cause same-type collisions, but they are extremely unlikely, and it's not an issue to move one of the walks along when that happens.


Quote
and is also a nightmare to implement on a GPU

Why do you say this? If tame lands on a previously point already stored by another tame, just "rekey" the one.
Agreed that useless collisions slow down / impact the overall run time, but I don't think it's a nightmare to implement on a GPU. I'm pretty sure it's standard in JLP's Kangaroo version. If I am understanding your correctly.

Interesting find. Keep us posted.

So simple to say these... "lands on", "already stored", "another tame", "rekey"... lol

If you do it on a CPU + RAM, then jumping over equivalence classes with Gaudry-Schost solves the ECDLP in 1.46 to 1.49 sqrt(b) (theoretically 1.36), which is faster than any kangaroo will ever be. Pollard himself advises using it for elliptic curves, instead of his kangaroo. Why has no one done it (or have they)? Because it's complex, and the recreation of new walks from fresh points, the detection of cycles, and the overhead of all the other logic seem like too much of a headache.

Sure, it can be done on a GPU; nothing is impossible. But a GPU doesn't have access to "already stored", "lands on", "another tame", or "rekey"-ing. It should really just compute, millions of times in a row, the next jump for a small batch of items (the fewer the better), with minimal (preferably zero) need to access any data during that entire process. Otherwise it's just another slow CPU, losing clock cycles waiting for memory reads/writes.

And that doesn't bode well with algorithms that need to break in the middle of the jump sequence to "rekey". Lots of jumps for fewer kangaroos run at max speed if not interrupted.

They include the Pohlig-Hellman Algorithm (practically the fastest), the Baby Step Giant Step Algorithm and the Rho Algorithm.

Kangaroo and Rho look like they are not the fastest.

Buddy, that only works if the curve order is not prime.
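To unpack that remark: Pohlig-Hellman's cost is governed by the largest prime factor of the group order, which is why it wins in the toy benchmark above. The order of secp256k1's base point, however, is itself prime, so there is nothing to decompose. A quick check (Miller-Rabin with fixed bases; deterministic only below roughly 3.3e24, but it never mislabels an actual prime, so the result below is reliable):

```python
def is_prime(n):
    """Miller-Rabin with the first twelve prime bases."""
    if n < 2:
        return False
    d, s = n - 1, 0
    while d % 2 == 0:
        d, s = d // 2, s + 1
    for a in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        if a % n == 0:
            continue                      # base coincides with n (tiny n)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False                  # a is a witness: n is composite
    return True

# Order of secp256k1's generator (a standardized constant):
N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141
print(is_prime(N))   # True -> Pohlig-Hellman degenerates to the full-size problem
```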
WanderingPhilospher
Full Member
***
Offline Offline

Activity: 1204
Merit: 237

Shooters Shoot...


View Profile
September 05, 2024, 05:37:13 PM
 #5752

Quote
I would not be so sure about that; I said "same type of walk", not "same walk". And restarting simply means moving forward by a little bit, not a full-blown multiplication again.

Could be a misunderstanding, but if any tame lands on a point already visited by a tame, the latter tame's walk was "fruitless". Same if wild. You are saying that a tame and a tame are the same type of walk, correct? That is what I am saying: if a tame lands on an already-visited tame point, one of the walks is useless.

Quote
So simple to say these... "lands on", "already stored", "another tame", "rekey"... lol

Yes, it is easy to say. If a tame lands on a point already stored / visited by another tame, rekey it, i.e. generate a new random starting point. That's what happens in JLP's version.

I made a mod to it where, once a tame or wild found a DP, it would auto-generate a new starting point, and thus a new walk. There are always debates on paths / walks / randomness lol. I never ran enough tests with it to know how it compared to traditional paths.

Quote
Knowing number of processors changes some parameters as usual

I was really referring to (avoiding useless cycles/collisions) Pollard's variant: "...a way to parallelize the kangaroo method while avoiding the problem of useless collisions. The trade-off of this method is that the number of available processors needs to be both fixed and known ahead of time." (John M. Pollard. Kangaroos, monopoly and discrete logarithms. Journal of Cryptology, 13(4):437–447, 2000.)

kTimesG
Member
**
Offline Offline

Activity: 251
Merit: 39


View Profile
September 05, 2024, 07:51:03 PM
Last edit: September 06, 2024, 04:58:19 PM by kTimesG
 #5753

Quote
I would not be so sure about that; I said "same type of walk", not "same walk". And restarting simply means moving forward by a little bit, not a full-blown multiplication again.

Could be a misunderstanding, but if any tame lands on a point already visited by a tame, the latter tame's walk was "fruitless". Same if wild. You are saying that a tame and a tame are the same type of walk, correct? That is what I am saying: if a tame lands on an already-visited tame point, one of the walks is useless.

You are talking about the case where we have herds. I was talking about the case of using 3 kangaroos, in which a point can be reached by two kangaroos, the two that aren't of the Tame type. Usually this would resolve the DLP, but the distances might also cancel each other out, in which case one of the walks should divert, or they would duplicate each other.

In the "herd" case, you can't really call self-collisions useless, since you had two branches that, before reaching a common DP, individually produced different other DPs up to that point. So, not really fruitless. You know what may be fruitless? Starting another kangaroo from some position that was already tried before. Randomly creating a new one without keeping good track of your kangaroo collection history (globally) can hurt. The chance of choosing an already-tried one increases as you create more. This paradox shows up in lottery analysis as well (if you randomly pick numbers to play from a set of size N, after N random picks you have covered only around 60-70% of all numbers, and have played a lot of numbers two or more times). That isn't what we want to happen.

Another complete deal-breaker for this method is that, even if you choose a new starting position that was not the initial position of some kangaroo, it may well be somewhere among the non-DP points visited earlier by a kangaroo you have no idea about. So, some wasted cycles creating the kangaroo, only to find that the first DP it hits was already found!


Quote
So simple to say these... "lands on", "already stored", "another tame", "rekey"... lol

Yes, it is easy to say. If a tame lands on a point already stored / visited by another tame, rekey it, i.e. generate a new random starting point. That's what happens in JLP's version.

I made a mod to it where, once a tame or wild found a DP, it would auto-generate a new starting point, and thus a new walk. There are always debates on paths / walks / randomness lol. I never ran enough tests with it to know how it compared to traditional paths.

Quote
Knowing number of processors changes some parameters as usual

I was really referring to (avoiding useless cycles/collisions) Pollard's variant: "...a way to parallelize the kangaroo method while avoiding the problem of useless collisions. The trade-off of this method is that the number of available processors needs to be both fixed and known ahead of time." (John M. Pollard. Kangaroos, monopoly and discrete logarithms. Journal of Cryptology, 13(4):437–447, 2000.)

A good rule I learned is to forget everything that I "think" can work better when reading some math paper's paragraph, title, or content. Also, to drop the entire context and remember that it's all abstract unless explicitly noted otherwise to apply to some problem.

Sure, randomizing the starting points, restarting fresh after some DP (even though the analysis of doing these things is non-existent, probably because it's wrong), etc. - it works, but is it really optimal? Without a proper demonstration, these may (only sometimes) "seem" to work better due to a placebo effect or special circumstances.

So, creating a fresh walk after a DP is found - doesn't that simply increase the running time? What's the point of doing this? You can just create more kangaroos at the beginning and shorten the limit of their max travel distance or of the total steps.

Why try so hard to avoid same-type collisions? It's basically a no-op to advance one of the walks and continue. There's no other benefit, and you risk missing the solution if you don't do node management correctly (start from the last checkpoint after a node fails).

Otherwise, strictly speaking, follow exactly what the papers say (especially the one about parallelization, by van Oorschot), stick to these rules, and don't improvise around them:

- compute the jump table correctly (no, its size is NOT sqrt(n) - ALL the papers specify that it needs to have a specific mean value)
- Tame starts at b/2 + i * v
- Wild starts at k + i * v
- jump until a collision is found; if T/W -> solved; otherwise, push one of the kangaroos forward by a small value.

No "max travel limit" of any sort, no random starting points mentioned anywhere. Not sure who came up with those ideas, or why.
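As a concrete reading of the two starting-point rules above, a tiny sketch (the function name and the spacing value v are illustrative; the papers tie v to the kangaroo count m rather than fixing a specific number):

```python
# Starting-point layout for m tame + m wild kangaroos over an interval [0, b],
# target exponent k: tame i starts at b/2 + i*v, wild i starts at k + i*v.
def starting_offsets(b, m, v):
    tame = [b // 2 + i * v for i in range(m)]   # tame herd: centered at b/2
    wild = [i * v for i in range(m)]            # wild herd: offsets added to k
    return tame, wild

tame, wild = starting_offsets(2**20, 8, 9)
print(tame[:3], wild[:3])   # [524288, 524297, 524306] [0, 9, 18]
```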
Kelvin555
Newbie
*
Offline Offline

Activity: 9
Merit: 0


View Profile
September 06, 2024, 02:05:56 PM
 #5754

========================================================================================
I spent over a year searching for #66 and even made an Nvidia kernel to shuffle using nvcc and OpenSSL-Win64.
I have abandoned the search due to the withdrawal front-runners, and I have been selling all my RTX GPUs anyway.
Let me show you something, if you focus only on puzzles divisible by 6:

======================================================

Can you share the code? Let me give this a try.
WanderingPhilospher
Full Member
***
Offline Offline

Activity: 1204
Merit: 237

Shooters Shoot...


View Profile
September 06, 2024, 05:38:34 PM
 #5755

First:
Quote
In the "herd" case, you can't really call self-collisions useless, since you had two branches that, before reaching a common DP, individually produced different other DPs up to that point. So, not really fruitless.

It's like I have to ask a lot of questions first when talking with you. Are you talking about any DP, or just DP 0? I normally never talk in terms of DP 0, because it's unrealistic for the range I am working in (the one this thread is about: the challenge / puzzle wallets, the 130-bit wallet).

Yeah, I can call them fruitless. You assume that every branch found DPs before reaching a common DP.

DP 32 - If thread 1 started at random point xx and finally found its first DP after 2^30 "jumps", and thread 2 started at point xx2 and landed on the same DP after 2^32 jumps, then yes, thread 2's first jumps out of the gate were fruitless. It did not find any DPs prior to finding the same one as thread 1.

At today's speed, let's just say a single CPU core can do 30 MOp/s, and each core has 1,024 kangaroos hopping about. Then each kangaroo is hopping around at 30k hops/s. So thread 2 was wasted, on a fruitless path, for over 39 hours.
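Sanity-checking the figure at the end of that paragraph, under its stated assumptions (DP 32, one kangaroo at ~30k hops/s):

```python
hops_needed = 2 ** 32      # expected hops for one kangaroo to hit a DP-32 point
hops_per_sec = 30_000      # a 30 MOp/s core shared by ~1,024 kangaroos
hours = hops_needed / hops_per_sec / 3600
print(round(hours, 1))     # ≈ 39.8 hours
```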

Also, I was asking if the method you were talking about (with the improved run time) used
Quote
Pollards variant that, "...a way to parallelize the kangaroo method while avoiding the problem of useless collisions. The trade off of this method is that the number of available processors needs to be both fixed and known ahead of time."
That was a direct question. I wasn't trying to tell you what is right or wrong, or the gospel; I was just asking: is the method you speak of using that, or a variant of it, to avoid fruitless cycles/paths? Or does it not bother or care about fruitless paths?

Again, hard to understand what you are saying or asking, a lot of the time:
Quote
No "max travel limit" of any sort, no random starting points mentioned anywhere. Not sure who came up with those ideas, or why.
What do you mean no one has ever mentioned the above?? Are you speaking of a specific variant or that none of the variants mention the above or just that van Oorschot's variant doesn't mention it?

The Gaudry-Schost algorithm mentions both. I thought Oorschot and Wiener's did too. But I'd have to reread theirs.

Again, I was asking questions about your newly found variant that is x times faster than previous variants.
I have my own version, a working / running version, in use. But I always try to learn something new.


They are letting you know, they have arrived! lol.
kTimesG
Member
**
Offline Offline

Activity: 251
Merit: 39


View Profile
September 06, 2024, 06:57:43 PM
 #5756

First:
Quote
In the "herd" case, you can't really call self-collisions useless, since you had two branches that, before reaching a common DP, individually produced different other DPs up to that point. So, not really fruitless.

It's like I have to ask a lot of questions first when talking with you. Are you talking about any DP, or just DP 0? I normally never talk in terms of DP 0, because it's unrealistic for the range I am working in (the one this thread is about: the challenge / puzzle wallets, the 130-bit wallet).

Yeah, I can call them fruitless. You assume that every branch found DPs before reaching a common DP.

DP 32 - If thread 1 started at random point xx and finally found its first DP after 2^30 "jumps", and thread 2 started at point xx2 and landed on the same DP after 2^32 jumps, then yes, thread 2's first jumps out of the gate were fruitless. It did not find any DPs prior to finding the same one as thread 1.

... is the method you speak of using that, or a variant of it, to avoid fruitless cycles/paths? Or does it not bother or care about fruitless paths?

You forgot the fact that the "fruitless" one started from a known position that (if it is not taken randomly) somehow represents a portion of the interval (with respect to the probabilities, which are equal). If you choose starting points randomly, then the analysis of what "fruitless" means will end up being random as well. What exact thought process are you applying when you compute the probability of success, when the points are taken at random?

The only paper I've read so far that did this was Bernstein's precomputation research, and even that one states that "we are not claiming this to be optimal". And even in that case, it was specifically used for precomputing lots of DPs when the problem was "solve more than one DLP in the same range". So, do you have any actual paper or proof that using random start points results in an improved (lower) runtime?

From a probability point of view, one walk = one representative -> keep it running until the problem is solved. On collision with a buddy, take a step forward and continue; don't start over from some other beginning, because you are "stepping" over some other representative's walk if you do that, and don't be surprised if what you call "fruitless" (first DP = death, ouch) happens because of exactly this, behind the scenes.

I hope that makes sense; the DP is irrelevant for this explanation.

Quote
No "max travel limit" of any sort, no random starting points mentioned anywhere. Not sure who came up with those ideas, or why.
What do you mean no one has ever mentioned the above?? Are you speaking of a specific variant or that none of the variants mention the above or just that van Oorschot's variant doesn't mention it?

The Gaudry-Schost algorithm mentions both. I thought Oorschot and Wiener's did too. But I'd have to reread theirs.

I mean that no one has mentioned using random starting points as the strategy. What is unclear? Or do you have an actual paper by someone who actually recommended doing this for some reason?...

Gaudry-Schost IS NOT kangaroo and has nothing to do with it. It uses the birthday paradox, so it obviously wants random uniform sampling from the set. I thought the difference should be clear; they don't really have anything in common.
WanderingPhilospher
Full Member
***
Offline Offline

Activity: 1204
Merit: 237

Shooters Shoot...


View Profile
September 07, 2024, 12:56:56 AM
Last edit: September 07, 2024, 03:38:06 AM by WanderingPhilospher
 #5757

Quote
The main difference between the Gaudry-Schost algorithm and the kangaroo
algorithm is that when a distinguished point is hit, Gaudry and Schost restart
the walk from a random starting point in a certain range, whereas the kangaroos
keep on running. The theoretical analysis is different too: Gaudry and Schost
use a variant of the birthday paradox whereas Pollard and van Oorschot and
Wiener use a different probabilistic argument

Do you agree with the above statement? (You first mentioned them in your original post; I thought you were comparing them, or maybe just the run time and the pain it is to program?)

Quote
You forgot the fact that the "fruitless" one started from a known position that (if it is not taken randomly)

Yes, I am saying the starting points are random. I thought you were too; I guess not.

In your Kangaroo "variant" using 100s of GPUs, are you placing each thread on an exact / known starting point? If so, how are you doing it, on the fly? It can be done; I am curious whether you are doing it.

Are you using pseudorandom walks or something else as well?

Quote
using random start points result in an improved lower runtime

Where did I state this?

Again, I am asking YOU questions about your new lower run time variant that you stumbled upon.

Ignore everything else. How, or by what means, does the variant lower the average runtime? Is having only 3 starting points feasible for a 130-bit range? Or would you break it up into sub-ranges?

EDIT: I do agree with you about the fruitless walks, if the starting point is known.

These algos and these challenges are always fun and fascinating. It may take reading 10 different papers or posts, but eventually one learns something new or sees something from a different perspective.

COBRAS
Member
**
Offline Offline

Activity: 1014
Merit: 23


View Profile
September 07, 2024, 01:11:11 AM
 #5758

Quote
The main difference between the Gaudry-Schost algorithm and the kangaroo
algorithm is that when a distinguished point is hit, Gaudry and Schost restart
the walk from a random starting point in a certain range, whereas the kangaroos
keep on running. The theoretical analysis is different too: Gaudry and Schost
use a variant of the birthday paradox whereas Pollard and van Oorschot and
Wiener use a different probabilistic argument

Do you agree with the above statement? (You first mentioned them in your original post; I thought you were comparing them, or maybe just the run time and the pain it is to program?)

Quote
You forgot the fact that the "fruitless" one started from a known position that (if it is not taken randomly)

Yes, I am saying the starting points are random. I thought you were too; I guess not.

In your Kangaroo "variant" using 100s of GPUs, are you placing each thread on an exact / known starting point? If so, how are you doing it, on the fly? It can be done; I am curious whether you are doing it.

Are you using pseudorandom walks or something else as well?

Quote
using random start points result in an improved lower runtime

Where did I state this?

Again, I am asking YOU questions about your new lower run time variant that you stumbled upon.

Ignore everything else. How, or by what means, does the variant lower the average runtime? Is having only 3 starting points feasible for a 130-bit range? Or would you break it up into sub-ranges?





If the starting point is not random, does it take fewer operations than with a random one? I think it will be equal, or not?

No method with non-random operations will beat random ones; they will be equal.

kTimesG
Member
**
Offline Offline

Activity: 251
Merit: 39


View Profile
September 07, 2024, 08:12:20 AM
 #5759

Do you agree with the above statement? (You first mentioned them in your original post; I thought you were comparing them, or maybe just the run time and the pain it is to program?)
Of course, since it's correct. I never referred to GS as a kangaroo variant. Read the three/four-kangaroo paper by the same authors & Pollard to better understand the challenges of implementing this correctly (mainly, restarts are not counted as group operations, but on an EC a restart is a very heavy operation compared to the simple additive jumps).

In your Kangaroo "variant" using 100s of GPUs, are you placing each thread on an exact / known starting point? If so, how are you doing this, on the fly? It can be done, I am curious if you are doing it?

Well, my kangaroos are kept in a central database that tracks each one's next starting point and how many times it has jumped. The ones that have jumped the least are scheduled to go in the next round when requested. I only extract X/Y after a lot of jumps (hundreds of thousands of jumps per thread, with a thread having several kangaroos) and verify they are correct (they should match the delta distance traveled, which is also counted during the jumps). This ensures nothing sketchy was done in the GPU kernel, like a bad field operation or other invisible bugs that could compromise the entire computation chain. The same goes for DPs: the only pieces of info needed from the GPU are the distance it's at (relative to where the computation started - fewer bits to write and transfer) and the kangaroo that found it; there is no need to ever store the X coordinate and absolute distance as in JLP.

I haven't discovered anything ground-breaking, to be honest. If we simply combine secp256k1's curve properties with a specific variant using 3 types of walks, the complexity is lower than expected; that is all. This happens because kangaroo is a generic algorithm, and the paper presenting it made no assumption that an element and its inverse might differ by only a single bit (in our case, the sign of Y).

To be clearer than ever: the best known kangaroo method uses 4 kangaroos and an expected number of 1.71 sqrt(b) operations on a generic group. In a generic group, an element and its inverse might be two totally different values with nothing in common.

Quote
using random start points result in an improved lower runtime

Where did I state this?

You didn't, and it doesn't: random start positions and adding more kangaroos without seriously thinking about the consequences just result in an increased number of operations and a greater runtime. You can verify this over a large number of runs on small intervals.

The number of kangaroos directly influences the choice of the optimal jump table length. If this is messed with after the algorithm has started, you cannot expect the same or better results, only worse ones Smiley
stamun
Newbie
*
Offline Offline

Activity: 5
Merit: 0


View Profile
September 08, 2024, 07:57:42 PM
 #5760

Which software will be the most performant for attacking the #130 challenge public key?
The CPU version of keyhunt is giving me around ~35 Pkeys/sec.
Is there a more optimal solution for GPUs?