If you, or someone who can program, can work on exporting the hashtable to text files (like your export option configuration), then there would be no need for save files, merging, etc. If you give the user the option to export tame and wild files in any amount they wish (for example 20 tame and 20 wild files), then no merging or saving is needed: just compare the different tame and wild files for a collision/solution. This also eliminates any RAM issues.
Once you know the structure of a workfile, it is not difficult to work with this type of file. I have created a small script in Python as an example; anyone can modify it to suit their needs: https://gist.github.com/PatatasFritas/a0409df4306fb1bb81f9a53e70151ddc
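As a sketch of the idea above (assuming the two-column "x-coord distance" text format shown later in this thread; `find_collisions` and `load_dps` are hypothetical helpers, not part of any fork):

```python
def load_dps(path):
    """Load an exported DP text file: one 'x-coord distance' pair per line."""
    dps = {}
    with open(path) as f:
        for line in f:
            parts = line.split()
            if len(parts) == 2:
                dps[parts[0]] = parts[1]  # x-coord -> signed distance
    return dps

def find_collisions(tame_file, wild_file):
    """A collision is the same x-coordinate appearing in both tables."""
    tame = load_dps(tame_file)
    wild = load_dps(wild_file)
    return [(x, tame[x], wild[x]) for x in tame.keys() & wild.keys()]
```

Because only the x-coordinate column is compared, any number of exported tame/wild files can be checked pairwise without ever loading a full hashtable into RAM.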
|
|
|
Congratulations to Jean_Luc and zielar! I had been thinking for several days about removing the hashtables from the merging process, and when I finally finished it I found that it had already been implemented by Jean_Luc. Now I have applied the same method to merge several files in one pass, reducing the total time considerably. My approach is to read all the files in the directory at the same time inside the HASH_ENTRY loop. Although my code works, it is still in testing and it may have some bugs. https://github.com/PatatasFritas/FriedKangaroo/blob/merge/Merge.cpp
|
|
|
I used the Linux version, compiled from the latest GitHub code.

65.txt
------
10000000000000000
1FFFFFFFFFFFFFFFF
0230210c23b1a047bc9bdbb13448e67deddc108946de6de639bcc75d47c0216b1b
# Kangaroo github version (Date: Sun Jun 7 06:43:00 2020 +0200)
./kangaroo -wi 1 -w 65.save 65.txt
# My fork with 'wexport'
./FriedKangaroo -wexport 65.save
head tame.txt wild.txt

==> tame.txt <==
00007860796862b77663def37eaec012748f8 0000000000000000d1a2b6873ad16061
000082e81e4c4a33d0c1b8b49c3e00a4be836 000000000000000026cad0b165912746
000437e07afd45cef7944ae5b161f1df0ba4e 0000000000000000601ac5f0134d69bd
000464166e33cbd94e3ccf5272b4df1b11469 000000000000000051e7b44b64751df0
0004c94946d338d9f706cbae1860e0a4a362c 000000000000000056f0b43bc155a22c
0005d51d5b66f09c891a5af6efbb9a0f51aae 000000000000000042d2568962f26df1
00061a3590ca99c26549e8c8ee1275516bcab 0000000000000000d80aad8f08314744
000630f0c3542f10acc88362e11df849bc98b 00000000000000008c93cfb5218c6182
000813b89d1350e9740af0bed4224911401f3 0000000000000000ec870953b96ff89b
0008bd33af12d0587acb72a758667760f0780 0000000000000000a8e43affff9d308e
==> wild.txt <==
00011ee12c4fe000037449166c9680be93bf3 0000000000000000732a93a7c0625d29
000158515f890c49b608f23ec7717ed42e650 000000000000000020dee9654893f1d9
00025559376457b3d94c0c6493fb1fd893a15 00000000000000001730d21498fcc363
0003f80f12b0f73ad8483e3d63bb37bd1043a -00000000000000005af81793c032e216
00042e9863823886282176066c4bcc3020c60 -00000000000000004da91f00d7ef1261
00055b5545e9b32f5d9f4b26500d6126f0962 -000000000000000068762260d7829363
0006895b5122bf0a599cc2f4e7d3de2d0180d 00000000000000004ad469542989e05b
00070258142e4c4fddf93edec87781bc43be8 00000000000000006c9d275e1a58bc14
00072dfc9270c8b69f77119b9830b4243e089 00000000000000004a61321922a283ef
00092eb91a1ebe9c45a746bb43928db973eca -0000000000000000342d7af2532ab7c0
# Without 'wexport', using hexdump or xxd.
# Skip 156 bytes of savefile header and show a hexdump
dd if=65.save bs=1 count=256 skip=156 | xxd -c 32 -g 16 -e
...
00000040: 860796862b77663def37eaec012748f8 0000000000000000d1a2b6873ad16061
...
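For anyone who prefers to read the savefile programmatically rather than with dd/xxd, here is a heavily hedged Python sketch. The exact on-disk layout may differ between Kangaroo versions; this assumes the structure described later in this thread: 2^18 buckets, and for each bucket two uint32 counters followed by 32-byte entries (16 raw bytes of x-coord, 16 raw bytes of distance). Verify against your own hexdump before trusting it.

```python
import struct

HASH_SIZE = 1 << 18  # 2^18 buckets, indexed by 18 bits of the x-coordinate

def read_hashtable(f):
    """Parse the hashtable portion of a savefile (after the header).
    ASSUMED layout per bucket: uint32 nbItem, uint32 maxItem, then
    nbItem entries of 16 raw bytes of x-coord + 16 raw bytes of distance."""
    dps = []
    for group in range(HASH_SIZE):
        nb, _max_items = struct.unpack("<II", f.read(8))
        for _ in range(nb):
            x = f.read(16)
            d = f.read(16)
            dps.append((group, x.hex(), d.hex()))
    return dps
```

Note the raw bytes are dumped as-is; depending on how the int128s are written, the hex may need byte-order correction before comparing with the exported text files.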
|
|
|
Patatas... How/where did you get this export text?
TAME: 01f7fe7bfd3dc4c604b3e708c2fb4bdd2ed46 0000000000000000f862dc916dfc4479
WILD: 01f7fe7bfd3dc4c604b3e708c2fb4bdd2ed46 0000000000000000502a2b5c6849dc12

It's from the 65-bit key; tame goes from 0 to 2**64 (0xffffffffffffffff).
DP bits : 22
Start : 10000000000000000
Stop : 1FFFFFFFFFFFFFFFF
Key : 0230210C23B1A047BC9BDBB13448E67DEDDC108946DE6DE639BCC75D47C0216B1B
|
|
|
In Jean_Luc's implementation of Kangaroo, there is an offset in the tame/wild calculation. The offset is determined by the "START" value in the config file.
|
|
|
Does your export strip any information out of the original DP files, or are you extracting everything minus the headers?
It only exports DPs; the headers can be "extracted" with -winfo. In the SaveFile/HashTable, DPs are grouped by 18 bits (from 0x00000 to 0x3ffff), and inside every group the distance and x-coordinate are stored:
Distance: 1 bit sign + 1 bit type (tame/wild) + 126 bits of distance
X-Coord: 128 bits
With the 18 bits of the group plus the 128 bits of x-coord we have 146 bits, and we "lose" 110 bits (the leftmost N bits are zeroed as DP, -d option). Example:
Distance: 0000000000000000f862dc916dfc4479
X-Coord: 000002af00818a0d3e8923d211901f7fe7bfd3dc4c604b3e708c2fb4bdd2ed46

00000 2af00818a0d3e8923d2119 01f7f e7bfd3dc4c604b3e708c2fb4bdd2ed46
|     |                      |     |__ 128 bits
|     |                      |__ 18 bits (first 2 bits of the first hex char lost)
|     |__ lost bits (110-DP)
|__ DP
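The decomposition above can be reproduced with a few bit operations (values are the thread's 65-bit example, DP = 22):

```python
# Decompose the example 256-bit x-coordinate into the fields described above.
DP = 22
x = 0x000002af00818a0d3e8923d211901f7fe7bfd3dc4c604b3e708c2fb4bdd2ed46

assert x >> (256 - DP) == 0        # leading DP bits are zero: that makes it a DP
stored = x & ((1 << 146) - 1)      # 18-bit group + 128-bit x-coord: what the file keeps
group  = stored >> 128             # hashtable bucket index (0x00000..0x3ffff)
xc128  = stored & ((1 << 128) - 1) # the 128-bit stored x-coord
lost   = x >> 146                  # everything above the stored part (DP zeros + lost bits)

print(f"{stored:037x}")  # 01f7fe7bfd3dc4c604b3e708c2fb4bdd2ed46, the exported column
```

The 37 printed hex chars (148 bits, top two always zero) match the x-coordinate column of the exported tame.txt/wild.txt files.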
|
|
|
Where on that page do you go, and what do you enter exactly?
Put the tame "distance" in Secret Exponent, and select compressed to view only the X coordinate in the pubkey (after the 02/03 prefix):
0000000000000000f862dc916dfc4479 -> 02:000002af00818a0d3e8923d211901f7fe7bfd3dc4c604b3e708c2fb4bdd2ed46

I see there is a match in the LSBs: 01f7fe7bfd3dc4c604b3e708c2fb4bdd2ed46
So I take it this was a solved key?
TAME: 01f7fe7bfd3dc4c604b3e708c2fb4bdd2ed46 0000000000000000f862dc916dfc4479
WILD: 01f7fe7bfd3dc4c604b3e708c2fb4bdd2ed46 0000000000000000502a2b5c6849dc12
SECRET_PRIV_KEY = TAME_DIST - WILD_DIST + START
0xf862dc916dfc4479 - 0x502a2b5c6849dc12 + 2**64 = 0x1a838b13505b26867
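The formula can be checked directly (all values are from the posts above):

```python
# Recover the private key from the tame/wild collision on the 65-bit key.
START     = 2**64              # 0x10000000000000000, start of the search range
tame_dist = 0xf862dc916dfc4479
wild_dist = 0x502a2b5c6849dc12

priv = tame_dist - wild_dist + START
print(hex(priv))  # 0x1a838b13505b26867
```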
|
|
|
Some examples from the 65-bit address. You can use https://brainwalletx.github.io/ to check G**p tame points.

TAME
01f57df72a208ecb0dff27e1836f190850728 0000000000000000c6b916f827ecc764
01f7fe7bfd3dc4c604b3e708c2fb4bdd2ed46 0000000000000000f862dc916dfc4479
01f84404e1bbaf74e5e86ddf86fa2e96f165a 0000000000000000d190f29c6c0596f6
34f64327b6e6129a2af1cc6a106f2cec5fcf0 00000000000000001f9e0d582f7942fd
34f6889b8abd66be32f13ebe2cf24d5ab6b62 00000000000000009b0984d32bf64396
34f6fdd77d4d6a532866be4fdfa5923a09ead 0000000000000000bc3de9e206c157db
--
WILD
34f44f024c8997abff6b5f1ee364c414c3016 0000000000000000344f3351ce019232
34f6889b8abd66be32f13ebe2cf24d5ab6b62 -00000000000000000d2f2c61d9bc24d1
34f738fed0a483f198a7a44d04e073c45b962 -0000000000000000602f88bbfaa6b336
01f7a2ae1656a659818a81f9a87204c29f1e8 -000000000000000048ea5f42dfb5bcc4
01f7fe7bfd3dc4c604b3e708c2fb4bdd2ed46 0000000000000000502a2b5c6849dc12
01f8143c41d86963bf9df718f9d0ba2e80cd7 00000000000000000a350d95bb55cb79
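The brainwalletx check can also be done locally with a minimal pure-Python secp256k1 sketch: multiply G by a tame distance and compare the x-coordinate with the stored value. Affine double-and-add is far too slow for the search itself; this is only for spot-checking single points from the tables above.

```python
# Minimal affine secp256k1 scalar multiplication, enough to spot-check
# that G * tame_distance reproduces the x-coordinate stored in the table.
P = 2**256 - 2**32 - 977  # secp256k1 field prime
G = (0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798,
     0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8)

def ec_add(a, b):
    if a is None: return b
    if b is None: return a
    (x1, y1), (x2, y2) = a, b
    if x1 == x2 and (y1 + y2) % P == 0:
        return None                                  # point at infinity
    if a == b:
        lam = 3 * x1 * x1 * pow(2 * y1, -1, P) % P   # tangent slope
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, P) % P    # chord slope
    x3 = (lam * lam - x1 - x2) % P
    return (x3, (lam * (x1 - x3) - y1) % P)

def ec_mul(k, pt=G):
    acc = None                                       # double-and-add
    while k:
        if k & 1:
            acc = ec_add(acc, pt)
        pt = ec_add(pt, pt)
        k >>= 1
    return acc

x, _ = ec_mul(0xf862dc916dfc4479)  # a tame distance from the table above
print(f"{x:064x}")                 # low bits match the stored 01f7fe7b...2ed46
```

Note the same x-coordinate falls out of the wild side too: priv - start + wild_dist = 0x1a838b13505b26867 - 2**64 + 0x502a2b5c6849dc12 is the same exponent 0xf862dc916dfc4479, which is exactly why the collision reveals the key.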
|
|
|
I was testing the "directory merge" function and RAM is quickly exhausted. I thought I had forgotten to free the temporary HashTable on each reading iteration, but I changed the code and the problem remains. The merged saveFile is 5 GB, and the merge process takes up about 14 GB of RAM. The most obvious solution is to sort the files from biggest to smallest when they are merged, or to use only small saveFiles.

On the other hand, I think the -ws flag is problematic when using -wsplit, generating larger files than necessary. Do you think it would be interesting to separate the DPs and the kangaroos into different save files?

As next improvements, I will work on improving the export of the DPs and on the possibility of modifying the DP bits in a save file, to reduce its size if we have chosen too low a DP value. It could also be interesting to remove the distances from a save file so it can be shared without giving away the prize.
|
|
|
In "wsplit" mode, should the time and count be reset after every save? When I merged all the split files, the total time was incorrect.
kangaroo -ws -d 22 -w 70/70-22 -wsplit -wi 30 70.txt
kangaroo -winfo 70/70-22_29May20_0
Count : 17936959488 2^34.062
Time : 20s

kangaroo -winfo 70/70-22_29May20_084011
Count : 42612829184 2^35.311
Time : 50s

kangaroo -winfo 70/70-22_29May20_092015
Count : 704108078080 2^39.357
Time : 14:12
What is the best way to combine all the save files? I added a "Directory Merge" option to merge more than two files at once (https://github.com/PatatasFritas/FriedKangaroo/commit/f36c560135710f9956677eb3dcda78ffd1ccb0a4).
|
|
|
Example with a 40-bit key:
pubkey: 03a2efa402fd5268400c77c20e574ba86409ededee7c4020e4b9f0edbee53de0d4
priv: 0xe9ae4933d6

We get a wild DP from the savefile, x-coord and distance:
2bff14d4321506319af572ab7604e22f2d172 0000000000000000000000100d3439c4
Distance: 0x100d3439c4
Starting point (offset): 2^39 = 0x8000000000

0xe9ae4933d6 + 0x100d3439c4 - 2^39 = 0x79bb7d6d9a

The tame distance 0x79bb7d6d9a returns the x-coord 0000157f1985299f1bda6646dd76bff14d4321506319af572ab7604e22f2d172.
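The same conversion as a couple of lines of Python (values are from the 40-bit example above):

```python
# 40-bit example: convert a wild distance to the equivalent tame distance.
priv      = 0xe9ae4933d6
wild_dist = 0x100d3439c4
start     = 2**39            # 0x8000000000, the range offset

# wild point = (pub - start*G) + wild_dist*G = (priv - start + wild_dist)*G,
# so the tame distance that lands on the same point is:
tame_dist = priv + wild_dist - start
print(hex(tame_dist))  # 0x79bb7d6d9a
```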
|
|
|
All wilds become tames once you have found the private key. I wrote above how to retrieve the distance for all wilds; it is just one more group operation.
Good point. I will try to code a merger.

All small pk110 solvers: I have just uploaded some calculated distinguished points (DP=28) for pk #110. There are 1.4 million DPs (2^20.42). The table includes the x-coordinate (DP 28) together with the kangaroo type (0 for tame, 1 for wild). Here is the zipped txt file: https://gofile.io/d/zD2RHe Have a look at your tables, and if you have the same x-coordinate but with a different type (i.e. in my table wild, but in yours tame), then we could immediately retrieve the private key for #110. PS. The table includes only x-coordinates. Without the distance it is useless, but it is helpful for finding cross-collisions with others.

What is the best way to compare TXT lists? I'm going to separate wild and tame into two different files, merge+sort with my own files, and use uniq -d to find duplicates.
grep 'type:0' DP28.txt | cut -d " " -f 1 > remote_tame.txt
grep 'type:1' DP28.txt | cut -d " " -f 1 > remote_wild.txt
cat local_tame.txt remote_wild.txt | sort | uniq -d
cat local_wild.txt remote_tame.txt | sort | uniq -d
# Inserting a remote wild in local tame for testing
echo 000d4230841b31f02659885bd7f6c72c >> local_tame_testing.txt
cat local_tame_testing.txt remote_wild.txt | sort | uniq -d
000d4230841b31f02659885bd7f6c72c
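The same cross-collision check as the sort/uniq pipeline, sketched in Python with set intersection (`cross_collisions` is a hypothetical helper; it assumes the x-coordinate is the first whitespace-separated column of each list):

```python
def load_xcoords(path):
    """Load the x-coordinate column of a DP list into a set."""
    with open(path) as f:
        return {line.split()[0] for line in f if line.strip()}

def cross_collisions(local_file, remote_file):
    """x-coordinates present in both tables, e.g. local tame vs remote wild."""
    return load_xcoords(local_file) & load_xcoords(remote_file)
```

For tables of a few million entries this fits comfortably in RAM and avoids re-sorting on every comparison.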
|
|
|
Jean_Luc, I want to look at the table which is saved to the workfile. But as I understand it, only 128 bits are saved for the x-coordinate and 126 bits for the distance (together with 1 bit for the sign and 1 bit for the kangaroo type).
Anyway, what is the easiest way to get the whole table in txt format? I can easily read the header, dp, start/stop ranges, and the x/y coordinates of the key from the binary file. After that, the hash table is saved with a lot of 0 bytes... Can you briefly describe the hash table structure which is saved to the binary file?
If you still haven't managed to export the workfile, here are the changes I made to the code to export the DPs to txt. The x-coord is 128+18 bits and the distance 126 bits. The x-coordinate can be re-generated from the distance if necessary. https://github.com/PatatasFritas/FriedKangaroo/commit/1669df5f91ef2cc8a7619b21f66883fa164ab602
|
|
|
Works fine. Tested with the 65-bit key and 3 clients on the same network.
[Client 3][DP Count 2^14.39/2^13.05][Dead 0][01:50][2.7/7.2MB]
Key# 0 [1S]Pub: 0x0230210C23B1A047BC9BDBB13448E67DEDDC108946DE6DE639BCC75D47C0216B1B
Priv: 0x1A838B13505B26867
Closing connection with X.X.X.X:47611

I have a GPU client outside the Kangaroo server network. For the moment I'm using an SSH tunnel, but the connection isn't perfect:
ssh user@kangarooServer -L 17403:127.0.0.1:17403
It would be interesting if the client could operate without a permanent connection to the Kangaroo server.
|
|
|
I did a small patch to try to speed up the kangaroo transfer, and also changed the timeout to 3 sec. Could you try it and tell me if it improves anything?
I had the same problem. Also, when the save fails, the GPU drops from ~500M to ~20M. The 3 sec patch fixed it. Thanks!