181  Bitcoin / Development & Technical Discussion / Re: Pollard's kangaroo ECDLP solver on: May 09, 2020, 12:35:58 PM
Yes, the jumps have to be fixed; otherwise the paths differ and the work files become incompatible.
Hope the cafe will be good :)
182  Bitcoin / Development & Technical Discussion / Re: Pollard's kangaroo ECDLP solver on: May 09, 2020, 04:15:39 AM
@MrFreeDragon:

I'm looking at your test.

@HardwareCollector:

I did a test over 100 trials, 40-bit range, 2^12 kangaroos, dp=10 (roughly the square root of your config, as dp=9.5 is not possible ;) ).
The 100 keys are uniformly distributed in the range.

[ 98] 2^21.962 Dead:0 Avg:2^22.016 DeadAvg:1.4 (2^22.603)
[ 99] 2^21.276 Dead:0 Avg:2^22.010 DeadAvg:1.4 (2^22.603)
[100] 2^20.427 Dead:0 Avg:2^22.001 DeadAvg:1.4 (2^22.603)

The overhead calculation gives 2^22.603 while the actual average is 2^22.001; the exact average for dp=0 is 2^21.056.

In that case it overestimates, because I don't know the exact analytic expression for the time complexity of the DP method.
I know that it converges to ~cbrt( 16 * numberOfKangaroo * N * 2^dp ) when numberOfKangaroo >> sqrt(N)/2^dp, and that nbKangaroo * 2^dp is an asymptote when numberOfKangaroo << sqrt(N)/2^dp, where N is the range size and 2^dp is lower than sqrt(N).
Here we are a bit between the two cases, where the approximation is not really good.
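
As a reference point, here is a rough back-of-the-envelope check (my own sketch, not the estimator used inside the program) that simply plugs this test's configuration into the two limits above, treating nbKangaroo * 2^dp as an additive overhead on top of the dp=0 average:

Code:
#include <cstdio>
#include <cmath>

int main() {
  const double log2N  = 40.0;  // range width N = 2^40
  const double log2K  = 12.0;  // number of kangaroos = 2^12
  const double dpBits = 10.0;  // distinguished point bits

  // Regime numberOfKangaroo >> sqrt(N)/2^dp : ~cbrt( 16 * K * N * 2^dp )
  const double log2CubeRoot = (4.0 + log2K + log2N + dpBits) / 3.0;        // = 22.000

  // Other regime: dp=0 average (2^21.056 here) plus the K * 2^dp overhead
  const double log2WithOverhead =
      log2(pow(2.0, 21.056) + pow(2.0, log2K + dpBits));                   // ~ 22.604

  printf("sqrt(N)/2^dp = 2^%.1f vs K = 2^%.1f -> between the two regimes\n",
         log2N / 2.0 - dpBits, log2K);
  printf("cube-root estimate        : 2^%.3f (measured average 2^22.001)\n",
         log2CubeRoot);
  printf("average + K*2^dp overhead : 2^%.3f (overhead figure above 2^22.603)\n",
         log2WithOverhead);
  return 0;
}

The cube-root limit lands almost exactly on the measured 2^22.001, while the additive-overhead form reproduces the ~2^22.6 overestimate quoted above.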

To compute the exact expression, it is like the birthday paradox with 2 tables, but drawing batches of 2^dp random numbers, nbKangaroo times, alternately into the 2 tables. Quite a nightmare, and never detailed in any of the papers I have read.
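
Out of curiosity, here is a toy Monte Carlo of that batched two-table picture (uniform random values stand in for the walk positions, so it is only a rough model, not the real tame/wild paths):

Code:
#include <cmath>
#include <cstdint>
#include <cstdio>
#include <random>
#include <unordered_set>

int main() {
  const uint64_t N = 1ULL << 40;      // range size
  const uint64_t batch = 1ULL << 10;  // 2^dp values drawn per batch
  const int trials = 20;

  std::mt19937_64 rng(12345);
  std::uniform_int_distribution<uint64_t> dist(0, N - 1);

  double totalDraws = 0.0;
  for (int t = 0; t < trials; t++) {
    std::unordered_set<uint64_t> table[2];
    uint64_t draws = 0;
    bool hit = false;
    for (int side = 0; !hit; side ^= 1) {      // alternate between the 2 tables
      for (uint64_t i = 0; i < batch && !hit; i++) {
        uint64_t v = dist(rng);
        draws++;
        hit = table[side ^ 1].count(v) != 0;   // cross-table collision?
        table[side].insert(v);
      }
    }
    totalDraws += (double)draws;
  }
  printf("average draws to first cross-table collision: ~2^%.2f\n",
         log2(totalDraws / trials));
  return 0;
}

With N=2^40 and batches of 2^10 it finishes in a few seconds and gives averages a bit below 2^21, the same ballpark as the dp=0 figure above; it is only meant to illustrate the combinatorial shape of the problem, not to replace the exact analysis.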

You can also see that the last 3 trials were under the average. This is because the expected number of operations also depends on where the private key lies in the range, and the deviation is large.

To run proper tests there are 2 requirements: a large number of trials, and keys uniformly distributed in the range. If you always test with the same key, it is not representative.

Edit: correction, it was log2(numberOfKangaroo) >> log2(sqrt(N)) - log2(2^dp), so numberOfKangaroo >> sqrt(N)/2^dp.
183  Bitcoin / Development & Technical Discussion / Re: Pollard's kangaroo ECDLP solver on: May 08, 2020, 07:29:10 PM
Two times more kangaroos with the same dp means an overhead two times larger, and it solved in 2^42 (no luck here), plus 6GB written to disk.
During the file save the GPUs are waiting, because the table is locked.
I'll have a detailed look at your results tomorrow ;)
Thanks for the test...
184  Bitcoin / Development & Technical Discussion / Re: Pollard's kangaroo ECDLP solver on: May 08, 2020, 05:53:00 PM
@MrFreeDragon
You have to take into consideration that on this first test it wrote a ~2GB file and a ~4GB one, and created 2^24 kangaroos.
On the second one, with DP 18 and 2^24 kangaroos, this gives an overhead of ~2^42 (2^24 * 2^18), so more than the average; however it was solved in only 2^41, more than 2 times less than expected, so he was lucky :)
This is the problem with multiple powerful GPUs: with 2^24 kangaroos, an 80-bit range is too small a range.
The client/server mode is a good idea as it will avoid memory consumption on the client side, but the server will need to be well tuned; the overhead due to DP still applies.

@HardwareCollector

I agree
185  Bitcoin / Development & Technical Discussion / Re: Pollard's kangaroo ECDLP solver on: May 08, 2020, 03:09:42 PM
Thanks again for the tests ;)

What you can try is to store the work file on an NFS mount and write a script on another host that checks for collisions using the -wm option.
That should be more or less equivalent to what you suggest.
Of course, using a centralized server would be much more convenient.
Interesting: I know sockets rather well and I have good experience with server coding, so it should not be too hard to write an efficient distributed solution.
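
As a very rough sketch of that "script on another host" idea (the file names, the interval and the log parsing are placeholders I made up, not something shipped with the program):

Code:
#include <chrono>
#include <cstdlib>
#include <thread>

int main() {
  for (;;) {
    // Merge the two work files sitting on the NFS mount; -wm also solves the
    // key if a collision is present (the merge output prints "Priv:" then,
    // as shown in the -wm merge example elsewhere in this thread).
    int rc = std::system(
        "./kangaroo -wm save.work save2.work merged.work > merge.log 2>&1");
    if (rc != 0) return 1;                                // merge failed, give up
    if (std::system("grep -q \"Priv:\" merge.log") == 0)  // key found during merge
      break;
    std::this_thread::sleep_for(std::chrono::minutes(10));
  }
  return 0;
}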

Edit: Don't forget to update your repository ;)

186  Bitcoin / Development & Technical Discussion / Re: Pollard's kangaroo ECDLP solver on: May 08, 2020, 12:34:29 PM
Release 1.4 is ready.
Available on GitHub:

https://github.com/JeanLucPons/Kangaroo/releases

Quote
-  Added load/save/merge work file
-  Memory usage improvements
-  Added max step option

Thanks for testing it ;)
187  Bitcoin / Development & Technical Discussion / Re: Pollard's kangaroo ECDLP solver on: May 08, 2020, 09:03:13 AM
I did a test:

Starting from scratch on a 64-bit range, saving the work file every 10 seconds. It solved the key a bit above the average (2^33.34 actual vs 2^33.18 expected), and the last saved work file was at 2^33.27.

Code:
C:\C++\Kangaroo\VC_CUDA10>x64\Release\Kangaroo.exe -d 11 -t 0 -w work1.wrk -wi 10 -gpu ..\VC_CUDA8\in64.txt
Kangaroo v1.4notready
Start:5B3F38AF935A3640D158E871CE6E9666DB862636383386EE0000000000000000
Stop :5B3F38AF935A3640D158E871CE6E9666DB862636383386EEFFFFFFFFFFFFFFFF
Keys :1
Number of CPU thread: 0
Range width: 2^64
Jump Avg distance: 2^31.99
Number of kangaroos: 2^18.58
Suggested DP: 13
Expected operations: 2^33.18
Expected RAM: 193.1MB
DP size: 11 [0xFFE0000000000000]
GPU: GPU #0 GeForce GTX 1050 Ti (6x128 cores) Grid(12x256) (45.0 MB used)
SolveKeyGPU Thread GPU#0: creating kangaroos...
SolveKeyGPU Thread GPU#0: 2^18.58 kangaroos [1.9s]
[183.89 MK/s][GPU 183.89 MK/s][Count 2^29.99][Dead 0][08s (Avg 52s)][17.9/47.6MB]
SaveWork: work1.wrk...............done [18.0 MB] [00s] Fri May  8 09:37:01 2020
[151.40 MK/s][GPU 151.40 MK/s][Count 2^31.17][Dead 0][18s (Avg 01:04)][37.9/71.6MB]
SaveWork: work1.wrk...............done [38.0 MB] [00s] Fri May  8 09:37:12 2020
[149.85 MK/s][GPU 149.85 MK/s][Count 2^31.80][Dead 0][28s (Avg 01:04)][57.8/91.8MB]
SaveWork: work1.wrk...............done [57.9 MB] [00s] Fri May  8 09:37:22 2020
[148.78 MK/s][GPU 148.78 MK/s][Count 2^32.24][Dead 0][39s (Avg 01:05)][77.7/111.9MB]
SaveWork: work1.wrk...............done [77.7 MB] [00s] Fri May  8 09:37:33 2020
[147.04 MK/s][GPU 147.04 MK/s][Count 2^32.58][Dead 0][49s (Avg 01:06)][97.5/132.9MB]
SaveWork: work1.wrk...............done [97.6 MB] [00s] Fri May  8 09:37:44 2020
[145.74 MK/s][GPU 145.74 MK/s][Count 2^32.85][Dead 0][01:00 (Avg 01:06)][117.2/155.0MB]
SaveWork: work1.wrk...............done [117.2 MB] [00s] Fri May  8 09:37:55 2020
[144.31 MK/s][GPU 144.31 MK/s][Count 2^33.07][Dead 1][01:11 (Avg 01:07)][136.8/178.2MB]
SaveWork: work1.wrk...............done [136.9 MB] [01s] Fri May  8 09:38:06 2020
[143.51 MK/s][GPU 143.51 MK/s][Count 2^33.27][Dead 2][01:22 (Avg 01:07)][156.4/202.3MB]
SaveWork: work1.wrk...............done [156.5 MB] [01s] Fri May  8 09:38:17 2020
[136.98 MK/s][GPU 136.98 MK/s][Count 2^33.31][Dead 2][01:26 (Avg 01:10)][160.3/207.0MB]
Key# 0 [1S]Pub:  0x03BB113592002132E6EF387C3AEBC04667670D4CD40B2103C7D0EE4969E9FF56E4
       Priv: 0x5B3F38AF935A3640D158E871CE6E9666DB862636383386EE510F18CCC3BD72EB

[  0] 2^33.338 Dead:2 Avg:2^33.338 DeadAvg:2.0 (2^33.179)


work file info:

Code:
Loading: work1.wrk
Version: 0
DP bits: 11
Start  :5B3F38AF935A3640D158E871CE6E9666DB862636383386EE0000000000000000
Stop   :5B3F38AF935A3640D158E871CE6E9666DB862636383386EEFFFFFFFFFFFFFFFF
Key    :03BB113592002132E6EF387C3AEBC04667670D4CD40B2103C7D0EE4969E9FF56E4
Count  : 10355736576 2^33.270
Time   :01:22
DP Size: 156.5/202.4MB
DP Cnt : 5063157 2^22.272
DP Max : 41 [@ 006CDC]
DP Min : 4 [@ 003037]
DP Avg : 19.31

I took this work file and restarted the search from it 10 times:

Code:
[  0] 2^31.073 Dead:0 Avg:2^31.073 DeadAvg:0.0 (2^33.179)
[  0] 2^32.051 Dead:2 Avg:2^32.051 DeadAvg:2.0 (2^33.179)
[  0] 2^30.842 Dead:0 Avg:2^30.842 DeadAvg:0.0 (2^33.179)
[  0] 2^31.480 Dead:1 Avg:2^31.480 DeadAvg:1.0 (2^33.179)
[  0] 2^31.268 Dead:0 Avg:2^31.268 DeadAvg:0.0 (2^33.179)
[  0] 2^30.116 Dead:0 Avg:2^30.116 DeadAvg:0.0 (2^33.179)
[  0] 2^30.999 Dead:0 Avg:2^30.999 DeadAvg:0.0 (2^33.179)
[  0] 2^32.083 Dead:1 Avg:2^32.083 DeadAvg:1.0 (2^33.179)
[  0] 2^30.306 Dead:0 Avg:2^30.306 DeadAvg:0.0 (2^33.179)
[  0] 2^33.031 Dead:4 Avg:2^33.031 DeadAvg:4.0 (2^33.179)

So an average of 2^31.583 (~3 times less than the expected average).

The dp overhead is about 2^29.58 = 2^31.583 / 4 (which matches nbKangaroo * 2^dp = 2^18.58 * 2^11 = 2^29.58).
I would expect 2^31.583 / 3: one dp overhead when starting the first time, one when writing the file, and one when restarting the search...

If you save the kangaroos (with -ws), you get rid of this extra overhead (only the one when starting the first time), but you then need to continue on the same hardware, or on hardware that can handle all the kangaroos.

Need more averages and tests to fully understand this...

188  Bitcoin / Development & Technical Discussion / Re: Pollard's kangaroo ECDLP solver on: May 08, 2020, 07:30:50 AM
The statistics will not be present in the final release.
They are more for developers.
You can remove them by removing the #define STATS in Kangaroo.cpp:796
189  Bitcoin / Development & Technical Discussion / Re: Pollard's kangaroo ECDLP solver on: May 08, 2020, 04:12:28 AM
Many thanks for these tests, @HardwareCollector and @MrFreeDragon ;)

Yes, this is a bit tricky, and this extra delay can be due to the DP overhead. I will add a note about that in the README.

First, the suggested dp is a bad approximation and needs improvement; you have to tune it manually according to the available RAM and the expected number of operations. The goal is to stay as close as possible to 2*sqrt(N).
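
To make that tuning more concrete, here is a small sketch of the trade-off (my own helper, not the program's suggestion logic). It assumes ~2*sqrt(N) useful operations, one stored point per 2^dp operations, an nbKangaroo*2^dp overhead, and roughly 32 bytes per stored DP (inferred from the ~156 MB for ~2^22.3 DPs in the 64-bit run above):

Code:
#include <cstdio>
#include <cmath>

int main() {
  const double log2N = 80.0;       // range width in bits (example value)
  const double log2K = 24.0;       // number of kangaroos (example value)
  const double bytesPerDP = 32.0;  // rough in-memory cost per stored DP

  const double log2Useful = 1.0 + log2N / 2.0;   // ~2*sqrt(N) useful operations

  printf(" dp | expected ops | stored DPs |     RAM\n");
  for (int dp = 10; dp <= 30; dp += 2) {
    // total work = useful work + nbKangaroo*2^dp overhead
    double log2Ops = log2(pow(2.0, log2Useful) + pow(2.0, log2K + dp));
    double log2DPs = log2Ops - dp;               // one DP stored per 2^dp ops
    double ramGB   = pow(2.0, log2DPs) * bytesPerDP / (1024.0 * 1024.0 * 1024.0);
    printf(" %2d |   2^%6.2f   |  2^%6.2f  | %8.2f GB\n",
           dp, log2Ops, log2DPs, ramGB);
  }
  return 0;
}

With 2^24 kangaroos on an 80-bit range, for example, it shows how DP 18 already pushes the expected count to ~2^42.6, consistent with the ~2^42 overhead discussed in the earlier posts, while smaller dp values quickly blow up the memory.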

When you continue a job on a different configuration, the overhead will differ depending on how many kangaroos you have: each kangaroo needs ~2^dp operations (on average) to reach a distinguished point. So if you continue on a configuration with many more kangaroos, the overhead will be much higher, and during the merge it will add the full count but only count/(nbKangaroo*2^dp) distinguished points to the hashtable.

Each time you do a merge, the count recorded in the merged file gets an error of ~nbKangaroo*2^dp, which is the counting granularity of actual distinguished points; this impacts the estimation...

Another thing: when you merge 2 work files with different dp bits, the lower one is saved.

If you override the dp bits by hand using -d and choose a larger value (say +2), only 1 distinguished point out of 4 will end up in the hashtable.


Quote
I tried to continue the job 7 times - for the 1st time 2080ti solved the key for the extra 1 minute (with total operations 2^41.9), but all other 6 attempts I stopped while they reach 2^42.3 group operations (actually 2 times more than the expected).

However, I'm also a bit surprised here; I would need more info to try to understand, especially the number of kangaroos in each configuration and the evolution of the number of distinguished point bits in the work files...

190  Bitcoin / Development & Technical Discussion / Re: Pollard's kangaroo ECDLP solver on: May 07, 2020, 03:15:49 PM
No, the jumps now use a fixed seed in order to avoid incompatibilities between work files.
191  Bitcoin / Development & Technical Discussion / Re: Pollard's kangaroo ECDLP solver on: May 07, 2020, 02:50:17 PM
I committed the mods. Linux users can try them. (Edit: or Windows users who compile; I updated the project files.)

./kangaroo -w save.work -wi 10 in.txt (Save work file every 10 sec)
./kangaroo -w save2.work -wi 10 in.txt (Save work file every 10 sec)

Merge 2 files:

Code:
pons@linpons:~/Kangaroo$ ./kangaroo -wm save.work save2.work save3.work
Kangaroo v1.4notready
Loading: save.work
MergeWork: [HashTalbe1 2.3/5.7MB] [00s]
Loading: save2.work
MergeWork: [HashTalbe2 2.3/5.7MB] [00s]
Merging...
Range width: 2^56

SaveWork: save3.work...............done [2.5 MB] [00s] Thu May  7 16:38:03 2020
Dead kangaroo: 0
Total f1+f2: count 2^29.03 [30s]

During the merge the key can also be solved, so you can share your work files!

Code:
pons@linpons:~/Kangaroo$ ./kangaroo -wm save.work save2.work save3.work
Kangaroo v1.4notready
Loading: save.work
MergeWork: [HashTalbe1 2.5/6.5MB] [00s]
Loading: save2.work
MergeWork: [HashTalbe2 2.5/6.5MB] [00s]
Merging...
Range width: 2^56

Key# 0 [1S]Pub:  0x02E9F43F810784FF1E91D8BC7C4FF06BFEE935DA71D7350734C3472FE305FEF82A
       Priv: 0x378ABDEC51BC5D
Dead kangaroo: 1
Total f1+f2: count 2^29.78 [50s]


Restart a work file:
pons@linpons:~/Kangaroo$ ./kangaroo -i save.work

If you use the -ws option, it saves the full kangaroo state, so you restart exactly where you were and you avoid the kangaroo creation.

Hope I have not added too many bugs :)

Thanks for testing.
192  Bitcoin / Development & Technical Discussion / Re: Pollard's kangaroo ECDLP solver on: May 07, 2020, 01:08:04 PM
Quote
Did you read the article? I can't, did you pay for reading it?

No, just the abstract.

@cobras: Check your grid size, it may be too large...

@MrFreeDragon:

For the first step I added:

Code:
 -w workfile: Specify file to save work into (current processed key only)
 -i workfile: Specify file to load work from (current processed key only)
 -wi workInterval: Periodic interval (in seconds) for saving work
 -ws: Save kangaroos in the work file
 -wm file1 file2 destfile: Merge work file

I'm almost done; a few checks and the Linux port are still needed...

Everything concerning multiple keys, tame arrays, advanced merge, ... will come later...

The current modification is quite heavy. I hope not to introduce bugs...
193  Bitcoin / Development & Technical Discussion / Re: Pollard's kangaroo ECDLP solver on: May 07, 2020, 06:37:06 AM
Thanks for the test.
And what about the average?
Did you test this new jump table on in40_1000.txt to see if it improves anything?
194  Bitcoin / Development & Technical Discussion / Re: Pollard's kangaroo ECDLP solver on: May 07, 2020, 05:11:16 AM
Hi,

Thanks for the reading ;)

Yes, doing a symmetry class switch after a certain distance in order to limit cycles is a good idea, but:

- We have only a small jump table, so the probability of a cycle is high, and for a large range the random walks are long.
  The switch distance should be chosen as low as possible in order not to lose the benefit of symmetry (41% max).

- This trick does not prevent cycles of length 2 at all, so we also have to keep the last-jump trick.

The last jump check and the symmetry class switch have an impact of ~10% on performance (global memory accesses and extra operations).
Implementing this means an extra global memory access, an extra branch, and an extra local array...

Still, it is interesting to test ;)

Anyway, the save/load/merge work file operations are almost done; commit probably tomorrow...
195  Bitcoin / Development & Technical Discussion / Re: Pollard's kangaroo ECDLP solver on: May 05, 2020, 01:41:58 PM
OK, if you want to test a large range with symmetry, put 512 in NB_JUMP...

I'm currently working on the save work options:
Code:
 -w workfile: Specify file to save work into
 -i workfile: Specify file to load work from
 -wi workInterval: Periodic interval (in seconds) for saving work

Concerning the optimization of the complexity, I will investigate the wild compression and the tame/wild ratio, which seem more promising to me...
196  Bitcoin / Development & Technical Discussion / Re: Pollard's kangaroo ECDLP solver on: May 05, 2020, 01:20:57 PM
Thanks for the tests ;)
Stop your program: it will never end if you have left NB_JUMP at 32, and even with 512 I don't know if it will end ;)
The second test, with the CPU, is not displaying "Kangaroo v1.4gamma (with symmetry)"?
Did you git pull after the correction I made concerning this (commit at 13:22 UTC+2)?

197  Bitcoin / Development & Technical Discussion / Re: Pollard's kangaroo ECDLP solver on: May 05, 2020, 11:30:51 AM
I committed the code for the GPU.
Symmetry is not enabled by default; to enable it, edit the Constants.h file and uncomment the USE_SYMMETRY define.

Code:
// Use symmetry
//#define USE_SYMMETRY

You can change the jump table size in the same file, but high values decrease GPU performance.

Code:
// Number of random jumps
// Max 512 for the GPU
#define NB_JUMP 32

Unfortunately, using symmetry on a large range without killing GPU performance seems difficult, as lots of kangaroos end up in infinite cycles.
After all, the title of the article is "Using Equivalence Classes to Accelerate Solving the Discrete Logarithm Problem in a Short Interval".
They do mention this cycle problem in the article.

The trick of saving the last jump avoids cycles of length 2, but not the others.
Maybe a solution could be to precompute a random jump table in order to minimize cycles, but that seems difficult for large random walks of ~2^dp steps. And if we recreate kangaroos every ~2^dp iterations, kangaroo creation is expensive.
Symmetry can bring a maximum gain of sqrt(2) = ~1.41, so to win we have to find a way to do all this extra work for less than 41% added cost per iteration.


198  Bitcoin / Development & Technical Discussion / Re: Pollard's kangaroo ECDLP solver on: May 05, 2020, 05:57:07 AM
Hi,

Thanks for this detailed report, very interesting!
I did a few quick tests with a logic slightly different from yours: still only positive jumps in the table, and I take the next jump if it is equal to the last one.

Code:
#ifdef USE_SYMMETRY
        // Limit cycles: never repeat the previous jump index (avoids the
        // length-2 cycles mentioned above); deterministically take the next one
        if( jmp == herd[g].lastJump ) jmp = (herd[g].lastJump + 1) % NB_JUMP;
#endif

After 100 trials, DP=5, 40-bit search, 2048 kangaroos:

With symmetry: here, only cycles or natural path collisions that have a distinguished point on them are detected.
The average walk length is Avg/2048 = 2^20.614/2048 ≈ 783.

NB_JUMP = 32
[100] 2^20.263 Dead:44 Avg:2^20.823 DeadAvg:71.7 (2^20.614)

NB_JUMP = 64
[100] 2^19.985 Dead:17 Avg:2^20.662 DeadAvg:28.4 (2^20.614)

NB_JUMP = 128
[100] 2^19.346 Dead:4 Avg:2^20.581 DeadAvg:19.6 (2^20.614)

NB_JUMP = 256
[100] 2^20.938 Dead:20 Avg:2^20.510 DeadAvg:15.1 (2^20.614)

Above 256, the dead-kangaroo average seems to converge to ~15, which should be the natural collision rate when using symmetry with this configuration.

Without symmetry: here we only have natural collisions within the same herd, which are independent of the table size.

NB_JUMP = 64
[100] 2^22.203 Dead:5 Avg:2^21.098 DeadAvg:1.3 (2^21.097)

NB_JUMP = 128
[100] 2^19.432 Dead:0 Avg:2^21.044 DeadAvg:1.2 (2^21.097)

NB_JUMP = 256
[100] 2^20.805 Dead:1 Avg:2^20.959 DeadAvg:1.4 (2^21.097)

199  Bitcoin / Development & Technical Discussion / Re: Pollard's kangaroo ECDLP solver on: May 04, 2020, 05:41:25 PM
OK, thanks, I'll try tomorrow.
The 1.4gamma is still under development...
200  Bitcoin / Development & Technical Discussion / Re: Pollard's kangaroo ECDLP solver on: May 04, 2020, 04:54:06 PM
Many thanks.
I will try this tomorrow. I hope the last-jump test and the symmetry will not kill GPU performance.
We have to find a deterministic logic to get another jump when we get the opposite one.
Thanks ;)