Bitcoin Forum
Pages: « 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 [22] 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 ... 142 »
Author Topic: Pollard's kangaroo ECDLP solver  (Read 55727 times)
filo1992
Newbie
*
Offline

Activity: 32
Merit: 0


View Profile
May 26, 2020, 06:21:17 PM
 #421

I have an error: GPUEngine: Launch: the launch timed out and was terminated

GPU: Tesla V100
COBRAS
Member
**
Offline

Activity: 850
Merit: 22

$$P2P BTC BRUTE.JOIN NOW ! https://uclck.me/SQPJk


View Profile
May 26, 2020, 06:24:59 PM
 #422

Quote
I have an error: GPUEngine: Launch: the launch timed out and was terminated

GPU: Tesla V100

Power up your brain, man!  Wink

$$$ P2P NETWORK FOR BTC WALLET.DAT BRUTE F ORCE .JOIN NOW=GET MANY COINS NOW !!!
https://github.com/phrutis/LostWallet  https://t.me/+2niP9bQ8uu43MDg6
Jean_Luc (OP)
Sr. Member
****
Offline

Activity: 462
Merit: 696


View Profile
May 26, 2020, 06:45:46 PM
 #423

Quote
My RAM allowed me to launch with DP=22 when I divided it by 2.
Was that good? Smiley

Yes, launch with -d 23, that should be enough.
MrFreeDragon
Sr. Member
****
Offline

Activity: 443
Merit: 350


View Profile
May 26, 2020, 06:50:48 PM
 #424

Jean_Luc, I want to look at the table which is saved to the work file. As I understand it, only 128 bits are saved for the X coordinate and 126 bits for the distance (together with 1 bit for the sign and 1 bit for the kangaroo type).

Anyway, what is the easiest way to get the whole table in txt format? I could easily read the header, dp, start/stop ranges and the x/y coordinates of the key from the binary file. After that the hash table is saved with a lot of 0 bytes...
Can you briefly describe the hash table structure which is saved to the binary file?
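The bit layout described in the question (128-bit X coordinate, then a 128-bit field holding sign, kangaroo type and a 126-bit distance) could be sketched like this. This is purely my reading of the numbers above; the struct and function names are hypothetical and not from the Kangaroo sources:

```cpp
#include <cassert>
#include <cstdint>

// Hypothetical packing of the 128-bit distance field:
// bit 127 = sign, bit 126 = kangaroo type (TAME=0, WILD=1), bits 125..0 = distance.
struct PackedDist {
  uint64_t lo;   // bits 0..63 of the packed value
  uint64_t hi;   // bits 64..127 (top two bits hold sign and type)
};

inline PackedDist pack(uint64_t distHi, uint64_t distLo, bool negative, bool wild) {
  PackedDist p;
  p.lo = distLo;
  p.hi = distHi & 0x3FFFFFFFFFFFFFFFULL;        // keep only the 62 high distance bits
  if (negative) p.hi |= 0x8000000000000000ULL;  // bit 127: sign
  if (wild)     p.hi |= 0x4000000000000000ULL;  // bit 126: type
  return p;
}

inline bool isWild(const PackedDist &p)     { return (p.hi >> 62) & 1; }
inline bool isNegative(const PackedDist &p) { return (p.hi >> 63) & 1; }
```

With this layout, dumping the table to text is just a matter of unpacking those two top bits before printing the distance.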

COBRAS
Member
**
Offline

Activity: 850
Merit: 22



View Profile
May 26, 2020, 07:38:16 PM
 #425

Quote
Anyway, what is the easiest way to receive the whole table in txt format.

Bro, I don't think you need a .txt format, Bro.

schurpert
Newbie
*
Offline

Activity: 1
Merit: 0


View Profile
May 26, 2020, 07:50:49 PM
 #426

The expected number of DPs is just an average.

Here's the problem though: each kangaroo on the GPU makes maybe 20 hops per second. It is going to take a very long time (e.g. 2^28 / 20 seconds) before a DP actually represents ~2^28 points.

This is why zielar's run is taking so long (barring any bugs). They have many DPs, but the average walk is far less than 2^28.

To find the optimal DP bits you need to consider your total MKeys/s, the number of kangaroos, and the number of hops each kangaroo can make per second.

Also, full disclosure: I have been working on puzzle 110 since March 31st using my own implementation, but will probably not finish by Sunday.
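The trade-off described above can be put into a rough back-of-the-envelope model. This is my own sketch, not code from the solver; the 2*sqrt(N) hop estimate is the classic kangaroo figure, and treating the DP overhead as roughly nKangaroos * 2^dpBits is an approximation:

```cpp
#include <cassert>
#include <cmath>

// expected hops ~ 2*sqrt(N)              (classic kangaroo estimate for an N-wide range)
// DP overhead   ~ nKangaroos * 2^dpBits  (each walk wastes ~2^dp hops on average
//                                         before reaching its next distinguished point)
double expectedHops(double rangeBits, double nKangaroos, double dpBits) {
  double sqrtN = std::pow(2.0, rangeBits / 2.0);
  double overhead = nKangaroos * std::pow(2.0, dpBits);
  return 2.0 * sqrtN + overhead;
}

// dpBits such that the overhead is a chosen fraction of the 2*sqrt(N) baseline.
double suggestDpBits(double rangeBits, double nKangaroos, double frac) {
  return std::log2(2.0 * std::pow(2.0, rangeBits / 2.0) * frac / nKangaroos);
}
```

Under this model, a 110-bit range with ~2^33 kangaroos lands around DP23, which matches the value being used in this thread; larger dpBits waste more hops per kangaroo, smaller ones flood the server with DPs.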


Nice. Hope you get it against all odds. You deserve it!
patatasfritas
Newbie
*
Offline

Activity: 17
Merit: 0


View Profile
May 26, 2020, 08:44:52 PM
 #427

Jean_Luc, I want to look at the table which is saved to workfile. But as I understood, only 128bit are saved for X-coordinate and 126bit for distance (together with 1 bit for sign and 1 bit for kangaroo type).

Anyway, what is the easiest way to receive the whole table in txt format. I easily could read from binary file the head, dp, start/stop ranges, x/y coordinates for key. After that the hash table is saved with a lot of 0 bytes....
Can you just briefly describe the hash table structure which is saved to binary file?

The MergeWork and LoadTable functions can help you understand the save file structure. I'm trying to export the DPs to txt (or another format) and synchronize only the new data over the network.

https://github.com/JeanLucPons/Kangaroo/blob/36d165d346b1b4fe52325a07bbf9039ee5439d31/Backup.cpp#L409
https://github.com/JeanLucPons/Kangaroo/blob/36d165d346b1b4fe52325a07bbf9039ee5439d31/HashTable.cpp#L247
Etar
Sr. Member
****
Offline

Activity: 616
Merit: 312


View Profile
May 26, 2020, 09:25:31 PM
Last edit: May 26, 2020, 09:46:33 PM by Etar
 #428

Jean_Luc, I want to look at the table which is saved to workfile. But as I understood, only 128bit are saved for X-coordinate and 126bit for distance (together with 1 bit for sign and 1 bit for kangaroo type).

Anyway, what is the easiest way to receive the whole table in txt format. I easily could read from binary file the head, dp, start/stop ranges, x/y coordinates for key. After that the hash table is saved with a lot of 0 bytes....
Can you just briefly describe the hash table structure which is saved to binary file?
After the header the hash table is located.
The hash table consists of blocks; in total there are 262144 blocks.
Each block has the following structure:

Code:
nbItem  = 4 bytes
maxItem = 4 bytes
then an array of nbItem elements:
  X coordinate = 16 bytes
  d            = 16 bytes (bit 127: sign, bit 126: type (TAME 0, WILD 1), bits 125-0: distance)
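Walking the buckets of that layout to produce a txt dump could look something like the sketch below. The struct and function names are mine and the sizes simply follow the description above; the real reference is still HashTable.cpp:

```cpp
#include <cassert>
#include <cstdint>
#include <cstdio>

// 16-byte X coordinate followed by the 16-byte packed distance, as described above.
struct Entry {
  uint8_t x[16];   // 128-bit X coordinate
  uint8_t d[16];   // packed distance (sign and type in the top bits)
};

// Read all 262144 buckets that follow the file header; returns false on a short read.
bool dumpTable(FILE *f) {
  const uint32_t HASH_SIZE = 262144;
  for (uint32_t h = 0; h < HASH_SIZE; h++) {
    uint32_t nbItem, maxItem;
    if (fread(&nbItem, 4, 1, f) != 1) return false;
    if (fread(&maxItem, 4, 1, f) != 1) return false;
    for (uint32_t i = 0; i < nbItem; i++) {
      Entry e;
      if (fread(&e, sizeof(e), 1, f) != 1) return false;
      // ... print e.x / e.d as hex here to get a txt dump
    }
  }
  return true;
}
```

An almost-empty table is mostly nbItem/maxItem pairs of zero, which explains the long runs of 0 bytes mentioned earlier in the thread.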
zielar
Full Member
***
Offline

Activity: 277
Merit: 106


View Profile
May 26, 2020, 09:51:59 PM
 #429

I switched all clients to the new scan with DP 23.
I saw the number of kangaroos in the counter (probably 2^33.08), but I don't remember exactly, because after turning off the server to change the save file I again see only 2^inf; so I have no exact figure, but it was about that value.

Two hours of operation (while connecting more clients) resulted in:

If you want - you can send me a donation to my BTC wallet address 31hgbukdkehcuxcedchkdbsrygegyefbvd
BitCrack
Jr. Member
*
Offline

Activity: 30
Merit: 122


View Profile
May 27, 2020, 12:11:28 AM
 #430

A dozen out of a few hundred machines? Smiley
Jean_Luc (OP)
Sr. Member
****
Offline

Activity: 462
Merit: 696


View Profile
May 27, 2020, 03:03:23 AM
 #431

I saw the number of kangaroos in the counter (probably 2^33.08), but I do not remember, because after turning off the server to change the save file - again I see 2^inf only, so I

Wow, 2^33.08 kangaroos! With DP23 the overhead is still a bit large.
Why do you turn off the server? Is the work file too big?
If I were you, I would reduce the grid size of the GPUs and/or reduce GPU_GRP_SIZE to 64.
By reducing gridx and gridy by 2 and GPU_GRP_SIZE to 64 you will have 2^30 kangaroos, which will work nicely with DP23.
You will probably lose some performance. Run a test on a single GPU of each type to see the performance with the reduced grid and GPU_GRP_SIZE.
You can also engage fewer machines and see what the best trade-off is.
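The arithmetic behind this suggestion can be sanity-checked with a one-liner. The grid dimensions below are purely illustrative (not the actual defaults of any particular GPU); the point is that halving gridx, halving gridy and halving GPU_GRP_SIZE divides the kangaroo count by 8, i.e. roughly 2^33 down to 2^30:

```cpp
#include <cassert>
#include <cstdint>

// Kangaroos per GPU = gridX * gridY * GPU_GRP_SIZE (illustrative model).
uint64_t kangaroosPerGpu(uint32_t gridX, uint32_t gridY, uint32_t grpSize) {
  return (uint64_t)gridX * gridY * grpSize;
}
```

For example, going from a hypothetical 136x256 grid with GPU_GRP_SIZE 128 to a 68x128 grid with GPU_GRP_SIZE 64 is exactly an 8x reduction per GPU.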

Yes, if you turn off the server, the kangaroos are not counted at reconnection; I will fix this.

Jean_Luc, I want to look at the table which is saved to workfile. But as I understood, only 128bit are saved for X-coordinate and 126bit for distance (together with 1 bit for sign and 1 bit for kangaroo type).

Anyway, what is the easiest way to receive the whole table in txt format. I easily could read from binary file the head, dp, start/stop ranges, x/y coordinates for key. After that the hash table is saved with a lot of 0 bytes....
Can you just briefly describe the hash table structure which is saved to binary file?

Yes, the 0s are the hash entry headers; if you see lots of 0s, the hash table is not very full.
As mentioned above, you can have a look at the HashTable::SaveTable() function to understand the format.
zielar
Full Member
***
Offline

Activity: 277
Merit: 106


View Profile
May 27, 2020, 04:01:20 AM
 #432

Yeah, I turned it off because the work file visibly grows in real time.
I will try your suggestion and give you my opinion later.
I am at 2^30.08/2^32.55 now, so it is a bad moment to make changes in the source again, but I will try with -g. On 10 machines my hashrate dropped to 200 MKeys/s with no workload activity on the GPUs in nvidia-smi... still connected... what can be the reason? The real hashrate should be ~13000 MKeys/s, not 200 😁

zielar
Full Member
***
Offline

Activity: 277
Merit: 106


View Profile
May 27, 2020, 04:16:54 AM
 #433

The problem is gone after closing the connection from one machine that had 1251 dead kangaroos after two hours. Since then I have seen 0 dead, and everything works perfectly.

Jean_Luc (OP)
Sr. Member
****
Offline

Activity: 462
Merit: 696


View Profile
May 27, 2020, 04:17:39 AM
 #434

Yeah, I turned it off because the work file visibly grows in real time.
I will try your suggestion and give you my opinion later.
I am at 2^30.08/2^32.55 now, so it is a bad moment to make changes in the source again, but I will try with -g. On 10 machines my hashrate dropped to 200 MKeys/s with no workload activity on the GPUs in nvidia-smi... still connected... what can be the reason? The real hashrate should be ~13000 MKeys/s, not 200 😁

OK, I'm adding a -wsplit option to the server: it resets the hash table at each backup and saves to fileName + timestamp, e.g. save39.work_27May20_061427.
This will decrease RAM usage and improve server insertion performance. The merge can be done offline without stopping the server.
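Building a timestamped name in that style could look like the sketch below. The strftime format string is my guess reverse-engineered from the example filename, not taken from the source:

```cpp
#include <cassert>
#include <cstdio>
#include <ctime>
#include <string>

// Append a timestamp in the save39.work_27May20_061427 style to a base file name.
// The "%d%b%y_%H%M%S" format is an assumption inferred from the example above.
std::string timestampedName(const std::string &base, time_t t) {
  char buf[32];
  struct tm *lt = localtime(&t);
  strftime(buf, sizeof(buf), "%d%b%y_%H%M%S", lt);
  return base + "_" + buf;
}
```

Note that %b is locale dependent; in the default "C" locale it yields abbreviations like "May", matching the example.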
zielar
Full Member
***
Offline

Activity: 277
Merit: 106


View Profile
May 27, 2020, 04:34:22 AM
 #435

Many thanks... but unfortunately my problem with low hashrate is back again. I see that this problem did not occur one release back from your release yesterday, so your latest changes must be the reason.

Jean_Luc (OP)
Sr. Member
****
Offline

Activity: 462
Merit: 696


View Profile
May 27, 2020, 04:58:31 AM
 #436

I uploaded the new release with the -wsplit option.
IMHO, this is a great option.
It does not prevent solving the key even if the hash table is reset at each backup, because the paths continue and a collision may still occur in the small hash table.
Of course, merging offline should solve it sooner.

In the little test I did (reset every 10 seconds, DP10), the server solved the 64-bit key in 1:41.
The merge solved it after 1:12.

Code:
[Client 0][Kang 2^-inf][DP Count 2^-inf/2^23.05][Dead 0][04s][2.0/4.0MB]
New connection from 127.0.0.1:58358
[Client 1][Kang 2^18.58][DP Count 2^-inf/2^23.05][Dead 0][08s][2.0/4.0MB]
New connection from 172.24.9.18:52090
[Client 2][Kang 2^18.61][DP Count 2^16.17/2^23.05][Dead 0][10s][4.2/14.1MB]
SaveWork: save.work_27May20_063455...............done [4.2 MB] [00s] Wed May 27 06:34:55 2020
[Client 2][Kang 2^18.61][DP Count 2^20.25/2^23.05][Dead 0][20s][40.1/73.9MB]
SaveWork: save.work_27May20_063505...............done [40.1 MB] [00s] Wed May 27 06:35:06 2020
[Client 2][Kang 2^18.61][DP Count 2^20.17/2^23.05][Dead 0][30s][37.9/71.5MB]
SaveWork: save.work_27May20_063516...............done [37.9 MB] [00s] Wed May 27 06:35:16 2020
[Client 2][Kang 2^18.61][DP Count 2^20.55/2^23.05][Dead 0][41s][48.9/82.8MB]
SaveWork: save.work_27May20_063526...............done [48.9 MB] [00s] Wed May 27 06:35:27 2020
[Client 2][Kang 2^18.61][DP Count 2^20.29/2^23.05][Dead 0][51s][41.1/74.9MB]
SaveWork: save.work_27May20_063537...............done [41.1 MB] [00s] Wed May 27 06:35:37 2020
[Client 2][Kang 2^18.61][DP Count 2^20.30/2^23.05][Dead 0][01:02][41.5/75.2MB]
SaveWork: save.work_27May20_063547...............done [41.5 MB] [00s] Wed May 27 06:35:48 2020
[Client 2][Kang 2^18.61][DP Count 2^20.28/2^23.05][Dead 0][01:12][40.9/74.6MB]
SaveWork: save.work_27May20_063558...............done [40.9 MB] [00s] Wed May 27 06:35:58 2020  <= offline merge solved there
[Client 2][Kang 2^18.61][DP Count 2^20.19/2^23.05][Dead 0][01:22][38.5/72.2MB]
SaveWork: save.work_27May20_063608...............done [38.5 MB] [00s] Wed May 27 06:36:08 2020
[Client 2][Kang 2^18.61][DP Count 2^20.55/2^23.05][Dead 0][01:33][48.8/82.7MB]
SaveWork: save.work_27May20_063618...............done [48.8 MB] [00s] Wed May 27 06:36:19 2020
[Client 2][Kang 2^18.61][DP Count 2^19.98/2^23.05][Dead 0][01:41][33.5/66.8MB]
Key# 0 [1S]Pub:  0x03BB113592002132E6EF387C3AEBC04667670D4CD40B2103C7D0EE4969E9FF56E4
       Priv: 0x5B3F38AF935A3640D158E871CE6E9666DB862636383386EE510F18CCC3BD72EB

Concerning your hashrate problem, I don't see any trivial reason. Do you see timeouts or error messages at the client level? Maybe the server is a bit overloaded?
I'm investigating...
Jean_Luc (OP)
Sr. Member
****
Offline

Activity: 462
Merit: 696


View Profile
May 27, 2020, 05:04:25 AM
 #437

I switched to CUDA 10.2 in version 1.6. Could that be the reason for your keyrate issue? A wrong driver?
zielar
Full Member
***
Offline

Activity: 277
Merit: 106


View Profile
May 27, 2020, 06:25:05 AM
 #438

The server is very overloaded with this DP. My file is 45 GB after 5 hours 🙄 CUDA 10.2 is not the problem, because I used that version with all previous releases. Maybe -wsplit will finally solve my problem at this hard level. Best of all, I set the server up on Google specifically to get the right amount of RAM and performance, yet all 80 processors are at 100% utilization and the uptime since launch resets almost every few minutes.

Jean_Luc (OP)
Sr. Member
****
Offline

Activity: 462
Merit: 696


View Profile
May 27, 2020, 06:40:49 AM
 #439

Yes, try -wsplit.
Clients are still blocked during the backup and hash table cleanup, but it should solve the overload.
Etar
Sr. Member
****
Offline

Activity: 616
Merit: 312


View Profile
May 27, 2020, 07:06:00 AM
 #440

-snip-
This will decrease RAM usage, improve server performance for insertion. The merge can be done offline without stopping the server.

But when you merge the files offline, don't you still need a lot of memory (the same amount of RAM as without -wsplit)? The app has to load the huge work file and then add the new small work file.