Author Topic: Pollard's kangaroo ECDLP solver  (Read 56542 times)
Etar (Sr. Member, Activity: 616, Merit: 312)
May 24, 2020, 08:15:10 PM  #301

Quote
All I use is Windows...using the original server/server code from Jean Luc.

I can say the same. I use Windows 10 and the release from GitHub compiled by JeanLuc, and the server crashed.
There should be a reason why it works fine for some while it breaks down for others.
It does not matter how much RAM or which processor; in no case was there any anomaly in resource consumption.
WanderingPhilospher (Full Member, Activity: 1078, Merit: 219)
May 24, 2020, 08:20:09 PM  #302

Quote
All I use is Windows...using the original server/server code from Jean Luc.
I can say the same. I use Windows 10 and the release from GitHub compiled by JeanLuc, and the server crashed.
There should be a reason why it works fine for some while it breaks down for others.
It does not matter how much RAM or which processor; in no case was there any anomaly in resource consumption.

Like I said earlier, mine may pause/crash twice a day. But I did catch one instance where one of the clients had a GPU issue, and maybe that caused it to stop.

I have had up to 39 clients working at one time. Now, once the file gets so big, I stop the server, merge, and start a new work file.
WanderingPhilospher (Full Member, Activity: 1078, Merit: 219)
May 24, 2020, 08:22:37 PM  #303

Jean Luc - or anyone:

Precomp table!

I want to do a precomputation of the table, where only tame kangaroos are stored.

I tried tinkering with the current source code, but to no avail.

Do you know/can you tell me what I need to change in the code, or does it require a code overhaul?
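
To show what I mean, the save step would keep only the tame herd - roughly this kind of filter (a sketch only: as far as I can tell from the source, the tame/wild flag is packed into the high bits of the stored distance, so the entry layout below is made up):
Code:
#include <cstdint>
#include <vector>

// Hypothetical DP entry; the real HashTable packs the herd (tame/wild)
// into the distance's high bits rather than keeping a separate field.
struct DpEntry {
  uint64_t x[2];  // distinguished point (x coordinate, 128 bits)
  uint64_t d[2];  // travelled distance
  bool tame;      // true if produced by a tame kangaroo
};

// Keep only the tame entries when writing a precomputed table.
std::vector<DpEntry> tameOnly(const std::vector<DpEntry>& table) {
  std::vector<DpEntry> out;
  for (const auto& e : table)
    if (e.tame) out.push_back(e);
  return out;
}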
Etar (Sr. Member, Activity: 616, Merit: 312)
May 24, 2020, 08:23:25 PM  #304

Quote
All I use is Windows...using the original server/server code from Jean Luc.
I can say the same. I use Windows 10 and the release from GitHub compiled by JeanLuc, and the server crashed.
There should be a reason why it works fine for some while it breaks down for others.
It does not matter how much RAM or which processor; in no case was there any anomaly in resource consumption.

Like I said earlier, mine may pause/crash twice a day. But I did catch one instance where one of the clients had a GPU issue, and maybe that caused it to stop.

I have had up to 39 clients working at one time. Now, once the file gets so big, I stop the server, merge, and start a new work file.

Maybe your clients are on the same local network and you do not have any internet issues.
All my clients are in different countries, so even with only 4 clients connected I get a server crash from time to time.
zielar (Full Member, Activity: 277, Merit: 106)
May 24, 2020, 08:23:45 PM  #305

Quote
Arg,
Hope there is no problem with a file bigger than 4GB!
The last work save printed a wrong size.
Do not restart the clients, and restart the server without the -w option.

Quote
I use my own version of the server/client app and do not get server shutdowns.
I'm not sure it will be useful to you with your DP size.
I use DP=31 and, for example, a rig of 8x2080Ti sends a 1.6 GB file every 2 h and a rig of 6x2070 around 700 MB every 2 h.
With your DP=28 the files should be about 8 times larger (three fewer DP bits means 2^3 = 8 times more distinguished points).
Anyway, if somebody is interested in the app I can publish my PureBasic code here (for Windows x64).

I will be grateful for the code, because I don't think I will finish it :-(

Quote
Did you use the original server or a modified version ?
Could you also do a -winfo on the save28.work ?

Yes, I use the original one. I will add an answer from -winfo work28.save.

WanderingPhilospher (Full Member, Activity: 1078, Merit: 219)
May 24, 2020, 08:32:11 PM  #306

Quote
All I use is Windows...using the original server/server code from Jean Luc.
I can say the same. I use Windows 10 and the release from GitHub compiled by JeanLuc, and the server crashed.
There should be a reason why it works fine for some while it breaks down for others.
It does not matter how much RAM or which processor; in no case was there any anomaly in resource consumption.

Like I said earlier, mine may pause/crash twice a day. But I did catch one instance where one of the clients had a GPU issue, and maybe that caused it to stop.

I have had up to 39 clients working at one time. Now, once the file gets so big, I stop the server, merge, and start a new work file.

Maybe your clients are on the same local network and you do not have any internet issues.
All my clients are in different countries, so even with only 4 clients connected I get a server crash from time to time.

That's what I was wondering. Yes, I own all of my clients. They all reside in my house, but none pay rent as of late :)
Etar (Sr. Member, Activity: 616, Merit: 312)
May 24, 2020, 08:38:11 PM  #307

Quote
-snip-
That's what I was wondering. Yes, I own all of my clients. They all reside in my house, but none pay rent as of late :)

So the problem most likely appears when the clients are not on the local network. Namely, the problem is caused by an error in the socket handling.
zielar (Full Member, Activity: 277, Merit: 106)
May 24, 2020, 08:39:02 PM  #308

Here is my -winfo dump:
Code:
Kangaroo v1.5
Loading: save28.work
Version   : 0
DP bits   : 28
Start     : 2000000000000000000000000000
Stop      : 3FFFFFFFFFFFFFFFFFFFFFFFFFFF
Key       : 0309976BA5570966BF889196B7FDF5A0F9A1E9AB340556EC29F8BB60599616167D
Count     : 0 2^-inf
Time      : 00s
DP Size   : 4872.4/6097.0MB
DP Count  : 159593967 2^27.250
HT Max    : 729 [@ 011F52]
HT Min    : 497 [@ 0287B2]
HT Avg    : 608.80
HT SDev   : 24.65
Kangaroos : 0 2^-inf

Disabling the -w option meant progress was no longer recorded, but it did not produce any results either.

WanderingPhilospher (Full Member, Activity: 1078, Merit: 219)
May 24, 2020, 09:08:29 PM  #309

Quote
so we all got no chance again,
can close this thread now, guess after all finding money is a job.

Quote
This topic is not about making money, but about cutting down a difficult task like finding the key to puzzle 110, which until now was extremely difficult to solve.
Moreover, I'm sure JeanLuc has a couple more tricks in store to improve his brainchild.
After all, no one is forbidden from joining the search for a key. Why should each of us separately spend a pile of electricity and money when we can combine our work and do it much more efficiently?
We could create a common pool, solve the problem together, and divide the prize in proportion to the work performed.

I like the idea. How would the division of the prize work?

If nothing else, settle on a DP size to use and share equal-sized work files each day. So if work files were 10 MB each day and it took 100 MB to solve, then anyone who submitted a 10 MB work file would get 10 percent of the prize. If they submitted more, say 20 MB, they would get 20 percent. The only problem with this method is trust.

But I like the concept and would be willing to join.
WanderingPhilospher (Full Member, Activity: 1078, Merit: 219)
May 24, 2020, 09:10:12 PM  #310

Quote
Here is my -winfo dump:
Code:
Kangaroo v1.5
Loading: save28.work
Version   : 0
DP bits   : 28
Start     : 2000000000000000000000000000
Stop      : 3FFFFFFFFFFFFFFFFFFFFFFFFFFF
Key       : 0309976BA5570966BF889196B7FDF5A0F9A1E9AB340556EC29F8BB60599616167D
Count     : 0 2^-inf
Time      : 00s
DP Size   : 4872.4/6097.0MB
DP Count  : 159593967 2^27.250
HT Max    : 729 [@ 011F52]
HT Min    : 497 [@ 0287B2]
HT Avg    : 608.80
HT SDev   : 24.65
Kangaroos : 0 2^-inf

Disabling the -w option meant progress was no longer recorded, but it did not produce any results either.

Why don't you just start a new save/work file, so it doesn't take so long to save/read/rewrite, and then just merge your work files? This is your best option.
BitCrack (Jr. Member, Activity: 30, Merit: 122)
May 24, 2020, 09:18:35 PM  #311

Quote
so we all got no chance again,
can close this thread now, guess after all finding money is a job.

This topic is not about making money, but about cutting down a difficult task like finding the key to puzzle 110, which until now was extremely difficult to solve.
Moreover, I'm sure JeanLuc has a couple more tricks in store to improve his brainchild.
After all, no one is forbidden from joining the search for a key. Why should each of us separately spend a pile of electricity and money when we can combine our work and do it much more efficiently?
We could create a common pool, solve the problem together, and divide the prize in proportion to the work performed.

I like the idea. How would the division of the prize work?

If nothing else, settle on a DP size to use and share equal-sized work files each day. So if work files were 10 MB each day and it took 100 MB to solve, then anyone who submitted a 10 MB work file would get 10 percent of the prize. If they submitted more, say 20 MB, they would get 20 percent. The only problem with this method is trust.

But I like the concept and would be willing to join.

A distributed project like this is probably necessary if we want to solve the puzzles in the 130-160 bit range.
Etar (Sr. Member, Activity: 616, Merit: 312)
May 24, 2020, 09:20:24 PM  #312

Quote
so we all got no chance again,
can close this thread now, guess after all finding money is a job.

This topic is not about making money, but about cutting down a difficult task like finding the key to puzzle 110, which until now was extremely difficult to solve.
Moreover, I'm sure JeanLuc has a couple more tricks in store to improve his brainchild.
After all, no one is forbidden from joining the search for a key. Why should each of us separately spend a pile of electricity and money when we can combine our work and do it much more efficiently?
We could create a common pool, solve the problem together, and divide the prize in proportion to the work performed.

I like the idea. How would the division of the prize work?

If nothing else, settle on a DP size to use and share equal-sized work files each day. So if work files were 10 MB each day and it took 100 MB to solve, then anyone who submitted a 10 MB work file would get 10 percent of the prize. If they submitted more, say 20 MB, they would get 20 percent. The only problem with this method is trust.

But I like the concept and would be willing to join.

Not the size of the work files, but the size of the DP counter:
check master file DP counter > merge client job > check master file DP counter again > save the difference to the client's account.
When the key is solved, client % = client account DP counter * 100 / master file DP counter.
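
In rough C++, the server-side bookkeeping could look like this (a sketch only - the names are invented and the merge step itself is left out):
Code:
#include <cstdint>
#include <cstdio>
#include <map>
#include <string>

// Credit each client only with the DPs that were new to the master file,
// so duplicated or replayed work files earn nothing.
std::map<std::string, uint64_t> account;  // DPs credited per client

void creditClient(const std::string& client,
                  uint64_t masterDpBefore, uint64_t masterDpAfter) {
  account[client] += masterDpAfter - masterDpBefore;
}

// When the key is solved, print each client's percentage of the prize.
void printShares(uint64_t masterDpTotal) {
  for (const auto& kv : account)
    std::printf("%s: %.2f%%\n", kv.first.c_str(),
                100.0 * (double)kv.second / (double)masterDpTotal);
}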
WanderingPhilospher (Full Member, Activity: 1078, Merit: 219)
May 24, 2020, 09:42:34 PM  #313

Quote
so we all got no chance again,
can close this thread now, guess after all finding money is a job.

This topic is not about making money, but about cutting down a difficult task like finding the key to puzzle 110, which until now was extremely difficult to solve.
Moreover, I'm sure JeanLuc has a couple more tricks in store to improve his brainchild.
After all, no one is forbidden from joining the search for a key. Why should each of us separately spend a pile of electricity and money when we can combine our work and do it much more efficiently?
We could create a common pool, solve the problem together, and divide the prize in proportion to the work performed.

I like the idea. How would the division of the prize work?

If nothing else, settle on a DP size to use and share equal-sized work files each day. So if work files were 10 MB each day and it took 100 MB to solve, then anyone who submitted a 10 MB work file would get 10 percent of the prize. If they submitted more, say 20 MB, they would get 20 percent. The only problem with this method is trust.

But I like the concept and would be willing to join.

Not the size of the work files, but the size of the DP counter:
check master file DP counter > merge client job > check master file DP counter again > save the difference to the client's account.
When the key is solved, client % = client account DP counter * 100 / master file DP counter.

Makes sense. What do we need to set it up?
zielar (Full Member, Activity: 277, Merit: 106)
May 24, 2020, 09:53:56 PM  #314

Quote
Here is my -winfo dump:
Code:
Kangaroo v1.5
Loading: save28.work
Version   : 0
DP bits   : 28
Start     : 2000000000000000000000000000
Stop      : 3FFFFFFFFFFFFFFFFFFFFFFFFFFF
Key       : 0309976BA5570966BF889196B7FDF5A0F9A1E9AB340556EC29F8BB60599616167D
Count     : 0 2^-inf
Time      : 00s
DP Size   : 4872.4/6097.0MB
DP Count  : 159593967 2^27.250
HT Max    : 729 [@ 011F52]
HT Min    : 497 [@ 0287B2]
HT Avg    : 608.80
HT SDev   : 24.65
Kangaroos : 0 2^-inf

Disabling the -w option meant progress was no longer recorded, but it did not produce any results either.

Why don't you just start a new save/work file, so it doesn't take so long to save/read/rewrite, and then just merge your work files? This is your best option.

I don't quite understand...
A total of ~90 separate operating systems are working on the solution, and they are all connected to one server whose progress save interval is set to 5 minutes.
In the current phase:
- the file with saved progress takes 5 GB
- reading/writing the progress file takes ~30 seconds each, so one minute in total
- the running server restarts at random times, every 3-15 minutes

I don't see how, in my case, any combination of merging files is the best option.

P.S. It just dawned on me :-) - does this problem also occur on LINUX?

WanderingPhilospher (Full Member, Activity: 1078, Merit: 219)
May 24, 2020, 10:31:45 PM  #315

Quote
Here is my -winfo dump:
Code:
Kangaroo v1.5
Loading: save28.work
Version   : 0
DP bits   : 28
Start     : 2000000000000000000000000000
Stop      : 3FFFFFFFFFFFFFFFFFFFFFFFFFFF
Key       : 0309976BA5570966BF889196B7FDF5A0F9A1E9AB340556EC29F8BB60599616167D
Count     : 0 2^-inf
Time      : 00s
DP Size   : 4872.4/6097.0MB
DP Count  : 159593967 2^27.250
HT Max    : 729 [@ 011F52]
HT Min    : 497 [@ 0287B2]
HT Avg    : 608.80
HT SDev   : 24.65
Kangaroos : 0 2^-inf

Disabling the -w option meant progress was no longer recorded, but it did not produce any results either.

Why don't you just start a new save/work file, so it doesn't take so long to save/read/rewrite, and then just merge your work files? This is your best option.

I don't quite understand...
A total of ~90 separate operating systems are working on the solution, and they are all connected to one server whose progress save interval is set to 5 minutes.
In the current phase:
- the file with saved progress takes 5 GB
- reading/writing the progress file takes ~30 seconds each, so one minute in total
- the running server restarts at random times, every 3-15 minutes

I don't see how, in my case, any combination of merging files is the best option.

P.S. It just dawned on me :-) - does this problem also occur on LINUX?

Maybe I'm not following you. Earlier you said you had the file saving every 10 minutes and the server restarting more often than that, so it made sense why your work file was not growing with saved DPs. One minute of read/write is a lot when you are getting higher up. If you start a new save file and reduce the read/write to near zero, you are getting more work done. For example, if you are saving every 5 minutes and it takes 1 minute to read/write, then you are losing 12 minutes of work time every hour (12 saves/hour x 1 minute each), as opposed to less than a minute with a smaller work file. I don't know; again, maybe I'm not tracking.
Also, if your server restarts every 3-15 minutes, who's to say it doesn't restart just prior to saving?
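
Spelling out the arithmetic (the 5-minute interval and 1-minute I/O are just the figures above; if I remember right the save interval flag is -wi, in seconds):
Code:
#include <cstdio>

// Work time lost per hour to periodic save I/O.
int main() {
  double saveIntervalMin = 5.0;  // e.g. -wi 300
  double ioPerSaveMin    = 1.0;  // observed read+write time per save
  double lost = (60.0 / saveIntervalMin) * ioPerSaveMin;  // 12 min
  std::printf("lost: %.0f min/hour (%.0f%% of work time)\n",
              lost, 100.0 * lost / 60.0);
  return 0;
}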
zielar (Full Member, Activity: 277, Merit: 106)
May 24, 2020, 10:51:07 PM  #316

It just so happens that it restarts BEFORE saving - and that's why this is a PROBLEM. Maybe I don't understand something here, because to my logic, using a new work file means starting the work from the beginning...?

WanderingPhilospher (Full Member, Activity: 1078, Merit: 219)
May 24, 2020, 11:03:20 PM  #317

Quote
It just so happens that it restarts BEFORE saving - and that's why this is a PROBLEM. Maybe I don't understand something here, because to my logic, using a new work file means starting the work from the beginning...?

Maybe I don't understand the solver. I thought a DP was a DP regardless of how/where/when it was found (as long as the DP size is the same, i.e. both work files use a DP size of 28). Whether a DP is found at the end of a 6 GB file or at the beginning of a 1 GB file doesn't matter. That's the point of the merge: to check all DPs and see if there is a collision.

Think about this: say you have zero issues with the server and clients and they run straight for 50 days, but then a power outage happens. You restart your program, and new DPs found from different kangaroo starting points go into the saved work file. Do you think those DPs found from new starting points don't count, or make you start from scratch? All kangaroos are different anyway, i.e. different starting points...

I have merged many files into one, with restarts. If this doesn't act the same as one long-running file, I am screwed and will never find the key :)

But to answer you: for smaller-range keys (56-80 bits), I have stopped and started and merged many files, and it still found the key. So hopefully the same will be true for 110 bits.
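
The merge itself is a single command with the -wm option (syntax from memory - double-check it against the README):
Code:
Kangaroo.exe -wm save1.work save2.work merged.work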
MrFreeDragon (Sr. Member, Activity: 443, Merit: 350)
May 25, 2020, 12:33:31 AM  #318

Quote
-snip-
- the running server restarts at random times, every 3-15 minutes
-snip-

Is your server on a local net or open to the worldwide internet?
If open, try to check all the connections and be sure that all of them are made from your IPs.

The GPU solver's server has no authentication, and there are a lot of IP/port scanner web tools. So some other users (not you) or scanners may be causing your server to restart.
This is just an assumption; I am not sure.

zielar (Full Member, Activity: 277, Merit: 106)
May 25, 2020, 02:56:36 AM  #319

Quote
Is your server on a local net or open to the worldwide internet?
If open, try to check all the connections and be sure that all of them are made from your IPs.

The GPU solver's server has no authentication, and there are a lot of IP/port scanner web tools. So some other users (not you) or scanners may be causing your server to restart.
This is just an assumption; I am not sure.

My server is secure. I also checked the logs and I don't have any unknown IPs. Everyone else has this problem too, so that's not it.

----
And now another question:

[DP COUNT 2^27.60 / 2^27.55] - I don't understand this again... (?)

WanderingPhilospher (Full Member, Activity: 1078, Merit: 219)
May 25, 2020, 03:21:18 AM  #320

Quote
Is your server on a local net or open to the worldwide internet?
If open, try to check all the connections and be sure that all of them are made from your IPs.

The GPU solver's server has no authentication, and there are a lot of IP/port scanner web tools. So some other users (not you) or scanners may be causing your server to restart.
This is just an assumption; I am not sure.

My server is secure. I also checked the logs and I don't have any unknown IPs. Everyone else has this problem too, so that's not it.

----
And now another question:

[DP COUNT 2^27.60 / 2^27.55] - I don't understand this again... (?)


Remember, it's an "expected" value, an estimation... you could solve it at 2^25 or 2^29... and there is also the chance of not finding it at all. I can't remember the probability of not finding it at all, but it's in the code somewhere.

This isn't straightforward like BitCrack.

Hopefully you solve it soon!
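
For what it's worth, the expected figure lines up with the textbook estimate: on average the kangaroo method needs about 2*sqrt(N) group operations over an N-key range, and each operation stores a DP with probability 2^-dp. For puzzle 110 (a 2^109-key interval) at DP=28 that gives roughly 2^27.5 DPs; the program's slightly different constant would account for the 2^27.55 it prints:
Code:
#include <cstdio>

int main() {
  double rangeBits = 109.0;  // puzzle 110 interval holds 2^109 keys
  double dpBits    = 28.0;   // run with -d 28
  // log2(expected DPs) = log2(2 * sqrt(2^rangeBits)) - dpBits
  double expected = rangeBits / 2.0 + 1.0 - dpBits;
  std::printf("expected DP count ~ 2^%.2f\n", expected);  // 2^27.50
  return 0;
}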