filo1992
Newbie
Offline
Activity: 32
Merit: 0
|
|
May 26, 2020, 06:21:17 PM |
|
I have an error: GPUEngine: Launch: the launch timed out and was terminated
GPU: Tesla V100
|
|
|
|
COBRAS
Member
Offline
Activity: 864
Merit: 22
$$P2P BTC BRUTE.JOIN NOW ! https://uclck.me/SQPJk
|
|
May 26, 2020, 06:24:59 PM |
|
I have an error: GPUEngine: Launch: the launch timed out and was terminated
GPU: Tesla V100
Power up your brain, Man!
|
|
|
|
Jean_Luc (OP)
|
|
May 26, 2020, 06:45:46 PM |
|
My RAM value allows me to launch DP=22 when I divide it by 2. Is that good?
Yes, launch with -d 23, that should be enough.
|
|
|
|
MrFreeDragon
|
|
May 26, 2020, 06:50:48 PM |
|
Jean_Luc, I want to look at the table which is saved to the work file. As I understand it, only 128 bits are saved for the X coordinate and 126 bits for the distance (together with 1 bit for the sign and 1 bit for the kangaroo type).
Anyway, what is the easiest way to dump the whole table in txt format? I could easily read the header, dp, start/stop ranges, and the x/y coordinates of the key from the binary file. After that the hash table is saved, with a lot of 0 bytes... Can you briefly describe the hash table structure which is saved to the binary file?
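For illustration, a minimal C++ sketch of unpacking the 128-bit d field with the bit layout described above (bit 127 = sign, bit 126 = kangaroo type, bits 125-0 = distance); the struct and names are illustrative assumptions, not taken from the Kangaroo source.
Code:
#include <cstdint>
#include <cstdio>

// Hypothetical unpacking of the packed 128-bit distance field:
// bit 127 = sign, bit 126 = kangaroo type (0 = TAME, 1 = WILD),
// bits 125..0 = distance. Names are illustrative, not from the Kangaroo source.
struct Packed128 { uint64_t lo, hi; }; // two little-endian 64-bit halves

int main() {
  Packed128 d = { 0x0123456789ABCDEFULL, 0x4000000000000001ULL }; // sample value
  int sign = (int)((d.hi >> 63) & 1);              // bit 127
  int wild = (int)((d.hi >> 62) & 1);              // bit 126
  uint64_t distHi = d.hi & 0x3FFFFFFFFFFFFFFFULL;  // bits 125..64
  printf("sign=%d type=%s distance=0x%016llx%016llx\n",
         sign, wild ? "WILD" : "TAME",
         (unsigned long long)distHi, (unsigned long long)d.lo);
  return 0;
}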
|
|
|
|
COBRAS
Member
Offline
Activity: 864
Merit: 22
$$P2P BTC BRUTE.JOIN NOW ! https://uclck.me/SQPJk
|
|
May 26, 2020, 07:38:16 PM |
|
Jean_Luc, I want to look at the table which is saved to the work file. As I understand it, only 128 bits are saved for the X coordinate and 126 bits for the distance (together with 1 bit for the sign and 1 bit for the kangaroo type).
Anyway, what is the easiest way to dump the whole table in txt format? I could easily read the header, dp, start/stop ranges, and the x/y coordinates of the key from the binary file. After that the hash table is saved, with a lot of 0 bytes... Can you briefly describe the hash table structure which is saved to the binary file?
Anyway, what is the easiest way to dump the whole table in txt format? Bro, I think you don't need a .txt format, Bro.
|
|
|
|
schurpert
Newbie
Offline
Activity: 1
Merit: 0
|
|
May 26, 2020, 07:50:49 PM |
|
The expected number of DPs is just an average.
Here's the problem though: each kangaroo on the GPU makes maybe 20 hops per second. It is going to take a very long time (e.g. 2^28 / 20 seconds) before a DP actually represents ~2^28 points.
This is why zielar's run is taking so long (barring any bugs). They have many DPs, but the average walk is far shorter than 2^28 points.
To find the optimal DP bits you need to consider your total MKeys/sec, the number of kangaroos, and the number of hops each kangaroo can make per second.
Also full disclosure: I have been working on puzzle 110 since March 31st using my own implementation but will probably not finish by Sunday.
Nice. Hope you get it against all odds. You deserve it!
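As a back-of-the-envelope illustration of the DP trade-off described above, here is a small C++ sketch; the speed and kangaroo counts are illustrative assumptions loosely based on figures mentioned elsewhere in the thread, not measurements.
Code:
#include <cmath>
#include <cstdio>

// Rough estimate of how long one kangaroo walks before emitting a DP.
// totalSpeed and nbKangaroo are illustrative assumptions.
int main() {
  double totalSpeed = 13000e6;       // total speed in hops/s (13000 MKeys/s)
  double nbKangaroo = pow(2, 33.0);  // number of kangaroos
  int dpBits = 23;                   // distinguished point bits (-d)

  double hopsPerKangarooPerSec = totalSpeed / nbKangaroo;
  double hopsPerDP = pow(2, (double)dpBits);  // average hops between DPs
  double secondsPerDP = hopsPerDP / hopsPerKangarooPerSec;

  printf("hops per kangaroo per second: %.2f\n", hopsPerKangarooPerSec);
  printf("avg time for one kangaroo to reach a DP: %.0f s (~%.1f days)\n",
         secondsPerDP, secondsPerDP / 86400.0);
  return 0;
}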
|
|
|
|
|
Etar
|
|
May 26, 2020, 09:25:31 PM Last edit: May 26, 2020, 09:46:33 PM by Etar |
|
Jean_Luc, I want to look at the table which is saved to the work file. As I understand it, only 128 bits are saved for the X coordinate and 126 bits for the distance (together with 1 bit for the sign and 1 bit for the kangaroo type).
Anyway, what is the easiest way to dump the whole table in txt format? I could easily read the header, dp, start/stop ranges, and the x/y coordinates of the key from the binary file. After that the hash table is saved, with a lot of 0 bytes... Can you briefly describe the hash table structure which is saved to the binary file?
After the header the hash table is located. It is organised as buckets; in total there are 262144 buckets, and each bucket has the following structure:
nbItem = 4 bytes
maxItem = 4 bytes
then an array of nbItem entries, each entry being: X coordinate = 16 bytes, d = 16 bytes (bit 127 = sign, bit 126 = type (TAME = 0, WILD = 1), bits 125-0 = distance)
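For illustration, a minimal C++ sketch of a reader for the bucket layout described above; skipping the file header and the little-endian layout are assumptions to verify against HashTable::SaveTable() in the Kangaroo source.
Code:
#include <cstdint>
#include <cstdio>

#define HASH_SIZE 262144  // number of buckets, as described above

int main(int argc, char* argv[]) {
  if (argc < 2) { fprintf(stderr, "usage: %s workfile\n", argv[0]); return 1; }
  FILE* f = fopen(argv[1], "rb");
  if (!f) { perror("fopen"); return 1; }
  // NOTE: the header must be skipped first; its exact size is not shown here.
  for (uint32_t h = 0; h < HASH_SIZE; h++) {
    uint32_t nbItem, maxItem;
    if (fread(&nbItem, 4, 1, f) != 1) break;   // entries used in this bucket
    if (fread(&maxItem, 4, 1, f) != 1) break;  // allocated capacity
    for (uint32_t i = 0; i < nbItem; i++) {
      uint8_t x[16], d[16];
      if (fread(x, 16, 1, f) != 1) break;  // 128-bit X coordinate
      if (fread(d, 16, 1, f) != 1) break;  // packed sign/type/distance
      // decode d as described above and print in txt format as needed...
    }
  }
  fclose(f);
  return 0;
}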
|
|
|
|
zielar
|
|
May 26, 2020, 09:51:59 PM |
|
I switched all clients to the new scan with dp 23. I saw the number of kangaroos in the counter (probably 2^33.08), but I do not remember exactly, because after turning off the server to change the save file I again see only 2^inf. So I have no idea, but it was something around this value. Two hours of operation (while connecting more clients) resulted in:
|
If you want - you can send me a donation to my BTC wallet address 31hgbukdkehcuxcedchkdbsrygegyefbvd
|
|
|
BitCrack
Jr. Member
Offline
Activity: 30
Merit: 122
|
|
May 27, 2020, 12:11:28 AM |
|
A dozen out of a few hundred machines?
|
|
|
|
Jean_Luc (OP)
|
|
May 27, 2020, 03:03:23 AM |
|
I saw the number of kangaroos in the counter (probably 2^33.08), but I do not remember exactly, because after turning off the server to change the save file I again see only 2^inf.
Wow, 2^33.08 kangaroos! With DP 23 the overhead is still a bit large. Why do you turn off the server? Is the work file too big?
If I were you, I would reduce the grid size of the GPUs and/or reduce GPU_GRP_SIZE to 64. By dividing gridx and gridy by 2 and setting GPU_GRP_SIZE to 64 you will get 2^30 kangaroos, which will work nicely with DP 23. You will probably lose some performance, so run a test on a single GPU of each type to see what the performance is with the reduced grid and GPU_GRP_SIZE. You can also engage fewer machines and see what the best trade-off is. Yes, if you turn off the server, the kangaroos are not counted at reconnection; I will fix this.
Jean_Luc, I want to look at the table which is saved to the work file. As I understand it, only 128 bits are saved for the X coordinate and 126 bits for the distance (together with 1 bit for the sign and 1 bit for the kangaroo type).
Anyway, what is the easiest way to dump the whole table in txt format? I could easily read the header, dp, start/stop ranges, and the x/y coordinates of the key from the binary file. After that the hash table is saved, with a lot of 0 bytes... Can you briefly describe the hash table structure which is saved to the binary file?
Yes, the 0s are the hash entry headers; if you have a lot of 0s, the hash table is not very full. As mentioned above, you can have a look at the HashTable::SaveTable() function to understand the format.
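For reference, the kangaroo-count arithmetic behind this suggestion, as a quick C++ sketch; the machine and grid numbers are illustrative assumptions, not zielar's actual configuration.
Code:
#include <cmath>
#include <cstdio>

// total kangaroos = machines * gridX * gridY * GPU_GRP_SIZE.
// Halving gridX, gridY and GPU_GRP_SIZE gives 8x (3 bits) fewer kangaroos.
// All values below are illustrative assumptions.
int main() {
  double machines = 1000;
  double gridX = 256, gridY = 256, grpSize = 128;

  double before = machines * gridX * gridY * grpSize;
  double after  = machines * (gridX / 2) * (gridY / 2) * (grpSize / 2);

  printf("before: 2^%.2f kangaroos\n", log2(before));  // ~2^32.97
  printf("after : 2^%.2f kangaroos\n", log2(after));   // ~2^29.97
  return 0;
}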
|
|
|
|
zielar
|
|
May 27, 2020, 04:01:20 AM |
|
Yeah, I turned it off because the work file is visibly growing in real time. I will try your suggestion and give you my opinion later. I have 2^30.08/2^32.55 now, so it is a bad moment to make changes in the source again, but I will try with -g. On 10 machines my hashrate went down to 200 MKeys/s with no workload activity on the GPUs in nvidia-smi... still connected... what can be the reason? The real hashrate should be ~13000 MKeys/s, not 200 😁
|
If you want - you can send me a donation to my BTC wallet address 31hgbukdkehcuxcedchkdbsrygegyefbvd
|
|
|
zielar
|
|
May 27, 2020, 04:16:54 AM |
|
The problem is gone after closing the connection from one machine that had 1251 dead kangaroos after two hours. For two hours now I have seen 0 dead; since then everything works perfectly.
|
If you want - you can send me a donation to my BTC wallet address 31hgbukdkehcuxcedchkdbsrygegyefbvd
|
|
|
Jean_Luc (OP)
|
|
May 27, 2020, 04:17:39 AM |
|
Yeah, I turned it off because the work file is visibly growing in real time. I will try your suggestion and give you my opinion later. I have 2^30.08/2^32.55 now, so it is a bad moment to make changes in the source again, but I will try with -g. On 10 machines my hashrate went down to 200 MKeys/s with no workload activity on the GPUs in nvidia-smi... still connected... what can be the reason? The real hashrate should be ~13000 MKeys/s, not 200 😁
OK, I'm adding a -wsplit option to the server: it will reset the hash table at each backup and save to fileName + timestamp, e.g. save39.work_27May20_061427. This will decrease RAM usage and improve server insertion performance. The merge can be done offline without stopping the server.
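A minimal sketch of how such a timestamped backup name could be produced; the strftime format is an assumption that merely reproduces the example name above, not the actual Kangaroo code.
Code:
#include <cstdio>
#include <ctime>

int main() {
  char stamp[64], name[256];
  time_t now = time(NULL);
  // Produces e.g. 27May20_061427 for May 27 2020, 06:14:27
  strftime(stamp, sizeof(stamp), "%d%b%y_%H%M%S", localtime(&now));
  snprintf(name, sizeof(name), "%s_%s", "save39.work", stamp);
  printf("%s\n", name);  // e.g. save39.work_27May20_061427
  return 0;
}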
|
|
|
|
zielar
|
|
May 27, 2020, 04:34:22 AM |
|
Many thanks... but my problem with the low hashrate is back again. I see that this problem did not occur one release back from your release yesterday, so your latest changes must be the reason for the problem.
|
If you want - you can send me a donation to my BTC wallet address 31hgbukdkehcuxcedchkdbsrygegyefbvd
|
|
|
Jean_Luc (OP)
|
|
May 27, 2020, 04:58:31 AM |
|
I uploaded the new release with the -wsplit option. IMHO, this is a great option. It does not prevent solving the key even if the hash table is reset at each backup, because the paths continue and a collision may still occur in the small hash table; of course, merging offline should solve it earlier. On the little test I did (reset every 10 seconds, DP 10), the server solved the 64-bit key in 1:41; the merge solved it after 1:12.
Code:
[Client 0][Kang 2^-inf][DP Count 2^-inf/2^23.05][Dead 0][04s][2.0/4.0MB]
New connection from 127.0.0.1:58358
[Client 1][Kang 2^18.58][DP Count 2^-inf/2^23.05][Dead 0][08s][2.0/4.0MB]
New connection from 172.24.9.18:52090
[Client 2][Kang 2^18.61][DP Count 2^16.17/2^23.05][Dead 0][10s][4.2/14.1MB]
SaveWork: save.work_27May20_063455...............done [4.2 MB] [00s] Wed May 27 06:34:55 2020
[Client 2][Kang 2^18.61][DP Count 2^20.25/2^23.05][Dead 0][20s][40.1/73.9MB]
SaveWork: save.work_27May20_063505...............done [40.1 MB] [00s] Wed May 27 06:35:06 2020
[Client 2][Kang 2^18.61][DP Count 2^20.17/2^23.05][Dead 0][30s][37.9/71.5MB]
SaveWork: save.work_27May20_063516...............done [37.9 MB] [00s] Wed May 27 06:35:16 2020
[Client 2][Kang 2^18.61][DP Count 2^20.55/2^23.05][Dead 0][41s][48.9/82.8MB]
SaveWork: save.work_27May20_063526...............done [48.9 MB] [00s] Wed May 27 06:35:27 2020
[Client 2][Kang 2^18.61][DP Count 2^20.29/2^23.05][Dead 0][51s][41.1/74.9MB]
SaveWork: save.work_27May20_063537...............done [41.1 MB] [00s] Wed May 27 06:35:37 2020
[Client 2][Kang 2^18.61][DP Count 2^20.30/2^23.05][Dead 0][01:02][41.5/75.2MB]
SaveWork: save.work_27May20_063547...............done [41.5 MB] [00s] Wed May 27 06:35:48 2020
[Client 2][Kang 2^18.61][DP Count 2^20.28/2^23.05][Dead 0][01:12][40.9/74.6MB]
SaveWork: save.work_27May20_063558...............done [40.9 MB] [00s] Wed May 27 06:35:58 2020 <= offline merge solved there
[Client 2][Kang 2^18.61][DP Count 2^20.19/2^23.05][Dead 0][01:22][38.5/72.2MB]
SaveWork: save.work_27May20_063608...............done [38.5 MB] [00s] Wed May 27 06:36:08 2020
[Client 2][Kang 2^18.61][DP Count 2^20.55/2^23.05][Dead 0][01:33][48.8/82.7MB]
SaveWork: save.work_27May20_063618...............done [48.8 MB] [00s] Wed May 27 06:36:19 2020
[Client 2][Kang 2^18.61][DP Count 2^19.98/2^23.05][Dead 0][01:41][33.5/66.8MB]
Key# 0 [1S]Pub: 0x03BB113592002132E6EF387C3AEBC04667670D4CD40B2103C7D0EE4969E9FF56E4
Priv: 0x5B3F38AF935A3640D158E871CE6E9666DB862636383386EE510F18CCC3BD72EB
Concerning your hashrate problem, I don't see any trivial reason. Do you see a timeout or error message at the client level? Maybe the server is a bit overloaded? I'm investigating...
|
|
|
|
Jean_Luc (OP)
|
|
May 27, 2020, 05:04:25 AM |
|
I switched to CUDA 10.2 in 1.6. Can it be the reason for your keyrate issue? Wrong driver?
|
|
|
|
zielar
|
|
May 27, 2020, 06:25:05 AM |
|
The server is very overloaded using this DP. My file after 5h is 45 GB 🙄 CUDA 10.2 is not the problem because I used that version in all previous releases. Maybe -wsplit will finally solve my problem and this hard level. Best of all, I set up the server specifically on Google to get the right amount of RAM and performance, yet all 80 processors are at 100% usage and the time since launch resets almost every few minutes.
|
If you want - you can send me a donation to my BTC wallet address 31hgbukdkehcuxcedchkdbsrygegyefbvd
|
|
|
Jean_Luc (OP)
|
|
May 27, 2020, 06:40:49 AM |
|
Yes, try -wsplit. It will still block clients during backup and hash table cleanup, but it should solve the overload.
|
|
|
|
Etar
|
|
May 27, 2020, 07:06:00 AM |
|
-snip- This will decrease RAM usage and improve server insertion performance. The merge can be done offline without stopping the server.
But when you merge files offline, you still need a lot of memory to merge the work files, no? (The same amount of RAM as without -wsplit, because the app has to load the huge work file and add the new small work file.)
|
|
|
|
|