Bitcoin Forum
Author Topic: [ARCHIVE] Bitcoin challenge discusion  (Read 28953 times)
bulleteyedk
Jr. Member
*
Offline Offline

Activity: 84
Merit: 1


View Profile
August 27, 2019, 09:13:12 PM
Last edit: August 27, 2019, 10:14:36 PM by bulleteyedk
 #221

Yeah, it's a massive range; the grid was mostly made to show people how big it actually is.

Making this grid was also a chance to catch up on a little Processing coding, which I haven't done in a long time.
I made the grid show the whole range from #61's hex value +1 to #63's hex value -1, then found the total range in decimal and divided that by the number of pixels at 3 pixels wide.

But with the range being so massive, each pixel had to represent quite a big decimal value, so the visualisation is only approximate and of course only shows the reported ranges. I know a lot more will have been scanned; I remember you writing that about 85% had been scanned  Grin


EDIT: but I get what you state about the real range being
0x2000000000000000 to 0x4000000000000000; I should probably make another one with those actual ranges
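A minimal sketch of the grid arithmetic described here (the range endpoints are the ones from the EDIT above; the pixel dimensions are invented placeholders):

```python
# Keys-per-pixel for a grid visualising the #62 range.
# Range endpoints are from the post; GRID_W/GRID_H are hypothetical.
START = 0x2000000000000000            # first 62-bit key
END   = 0x3FFFFFFFFFFFFFFF           # last 62-bit key
TOTAL = END - START + 1               # 2**61 keys in the range

GRID_W, GRID_H = 768, 256             # assumed pixel dimensions of the image
keys_per_pixel = TOTAL // (GRID_W * GRID_H)

print(f"{TOTAL} keys total, ~{keys_per_pixel:.3e} keys per pixel")
```

Even at millions of pixels, each pixel still stands for trillions of keys, which is why the visualisation can only ever be approximate.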
racminer
Member
**
Offline Offline

Activity: 242
Merit: 17


View Profile
August 27, 2019, 10:17:19 PM
 #222

Maybe the creator of the "puzzle" will read all this, and hopefully he'll make a statement after checking that there is no mistake in case #62.
Fingers crossed  Grin
zielar (OP)
Full Member
***
Offline Offline

Activity: 277
Merit: 106


View Profile
August 27, 2019, 10:22:04 PM
 #223

Maybe the creator of the "puzzle" will read all this, and hopefully he'll make a statement after checking that there is no mistake in case #62.
Fingers crossed  Grin


And by the way, let him send me a PW dump with the keys to the rest of the addresses, 62-160 ... just to look, out of curiosity ... ;-D

If you want - you can send me a donation to my BTC wallet address 31hgbukdkehcuxcedchkdbsrygegyefbvd
racminer
Member
**
Offline Offline

Activity: 242
Merit: 17


View Profile
August 27, 2019, 10:37:07 PM
 #224

Maybe the creator of the "puzzle" will read all this, and hopefully he'll make a statement after checking that there is no mistake in case #62.
Fingers crossed  Grin


And by the way, let him send me a PW dump with the keys to the rest of the addresses, 62-160 ... just to look, out of curiosity ... ;-D

LOL  Cheesy
Raj Lassi
Newbie
*
Offline Offline

Activity: 17
Merit: 1


View Profile
August 28, 2019, 01:52:26 AM
 #225

IMHO
ranges should go from blah...blah...00000000 to blah...blah...FFFFFFFF
just to keep things neat and tidy.  maybe i am OCD.

why are all these bizarre ranges being scanned?
like 2b87eefd01e43a40 - 2b87f0798ea43a40
it makes no sense to me.

is something being converted from decimal?
that makes even less sense.

i just don't get it.  enlighten me.

Yeah so that range is within the 62-bit range, what is the problem?

i am just saying that, for me, for reasons of accounting and clarity, in my messed up mind (and probably in the perfectly good minds of others), i prefer a block to start on a 00 and end on FF. 

for example:
A600000000 - A6FFFFFFFF
3B92C70000000000 - 3B92C7FFFFFFFFFF
0000 - FFFF

like that.  it makes sense to me and is easy to understand.

what i do not understand, is how people come up with these ranges with some arbitrary collection of least significant bytes.
(no example given)

and i wonder where these come from.  are hands mashed down on a hex keyboard?  is it somebody's birthday?  their name in ascii?  is it just random?  is it some nice round decimal number that converts to god-awful looking hex.  Why is decimal even mentioned in this forum?

just wondering why.  not saying it is right or wrong.  it just does not align with my thought process.
MeBender
Jr. Member
*
Offline Offline

Activity: 114
Merit: 5


View Profile WWW
August 29, 2019, 03:45:50 AM
 #226

IMHO
ranges should go from blah...blah...00000000 to blah...blah...FFFFFFFF
just to keep things neat and tidy.  maybe i am OCD.

why are all these bizarre ranges being scanned?
like 2b87eefd01e43a40 - 2b87f0798ea43a40
it makes no sense to me.

is something being converted from decimal?
that makes even less sense.

i just don't get it.  enlighten me.

Yeah so that range is within the 62-bit range, what is the problem?

i am just saying that, for me, for reasons of accounting and clarity, in my messed up mind (and probably in the perfectly good minds of others), i prefer a block to start on a 00 and end on FF.  

for example:
A600000000 - A6FFFFFFFF
3B92C70000000000 - 3B92C7FFFFFFFFFF
0000 - FFFF

like that.  it makes sense to me and is easy to understand.

what i do not understand, is how people come up with these ranges with some arbitrary collection of least significant bytes.
(no example given)

and i wonder where these come from.  are hands mashed down on a hex keyboard?  is it somebody's birthday?  their name in ascii?  is it just random?  is it some nice round decimal number that converts to god-awful looking hex.  Why is decimal even mentioned in this forum?

just wondering why.  not saying it is right or wrong.  it just does not align with my thought process.


Well, the way I have been getting my starting points is: take the #61 hex key, convert it to decimal, multiply it by anything from 1.1 to 2.9, and convert it back to hex. The ending key is then that starting decimal value plus however many addresses I'll search, converted back to hex.

But now I won't be posting any more ranges, because I'm using random..
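The derivation above can be sketched in a few lines (KEY_61 here is a placeholder stand-in, not the real #61 key; note that multipliers toward 2.9 can overshoot the 62-bit range):

```python
import random

KEY_61 = 0x1000000000000000 + 12345    # hypothetical 61-bit stand-in value

def make_range(keys_to_search=2**32):
    m = random.uniform(1.1, 2.9)       # multiplier from the post
    start = int(KEY_61 * m)            # hex -> decimal -> multiply -> back to int
    end = start + keys_to_search       # start plus how many keys will be scanned
    return hex(start), hex(end)

lo, hi = make_range()
print(lo, hi)
```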

CryptoCoin - The latest Cuckoo Cycle coin - https://crypt-o-coin.cash
Github: https://github.com/GonzoTheDev
paniker
Newbie
*
Offline Offline

Activity: 49
Merit: 0


View Profile
August 29, 2019, 09:35:12 AM
 #227

IMHO
ranges should go from blah...blah...00000000 to blah...blah...FFFFFFFF
just to keep things neat and tidy.  maybe i am OCD.

why are all these bizarre ranges being scanned?
like 2b87eefd01e43a40 - 2b87f0798ea43a40
it makes no sense to me.

is something being converted from decimal?
that makes even less sense.

i just don't get it.  enlighten me.

Yeah so that range is within the 62-bit range, what is the problem?

i am just saying that, for me, for reasons of accounting and clarity, in my messed up mind (and probably in the perfectly good minds of others), i prefer a block to start on a 00 and end on FF.  

for example:
A600000000 - A6FFFFFFFF
3B92C70000000000 - 3B92C7FFFFFFFFFF
0000 - FFFF

like that.  it makes sense to me and is easy to understand.

what i do not understand, is how people come up with these ranges with some arbitrary collection of least significant bytes.
(no example given)

and i wonder where these come from.  are hands mashed down on a hex keyboard?  is it somebody's birthday?  their name in ascii?  is it just random?  is it some nice round decimal number that converts to god-awful looking hex.  Why is decimal even mentioned in this forum?

just wondering why.  not saying it is right or wrong.  it just does not align with my thought process.


Well, the way I have been getting my starting points is: take the #61 hex key, convert it to decimal, multiply it by anything from 1.1 to 2.9, and convert it back to hex. The ending key is then that starting decimal value plus however many addresses I'll search, converted back to hex.

But now I won't be posting any more ranges, because I'm using random..

I was testing BitCrack in random mode... I've concluded it will never find anything: it was looking for 1 address in a ~3 billion key range for 6 hours)) lol
Raj Lassi
Newbie
*
Offline Offline

Activity: 17
Merit: 1


View Profile
August 29, 2019, 01:34:37 PM
 #228

Distributed Random Brute Force

I don't have the GPU power to make any progress with sequential brute force.
I also found, by experiment, that guessing random numbers can take much longer.

So, to maximize the fun, I am doing both. 
I distribute the scan over the 20-3F keyspace, pick 3 random bytes, and brute force the rest.

My ranges look like this: (where XXXXXX is 3 random bytes)

[20-3F][XXXXXX]00000000 - [20-3F][XXXXXX]FFFFFFFF

My old card can try fifteen 3-byte randoms per scan, every 13 hours, at 44Mkey/s.  Plus about 2 million really random randoms with the leftover starting points.

What does that get me?

15 random blocks of 4.3 billion keys in each of 32 sub-ranges [20-3F] per scan = 2 trillion.  4T/day.  Pffft.

so, every 26-hour day, scanning the following:
128 B keys in 20XXXXXX00000000-20XXXXXXFFFFFFFF
128 B keys in 21XXXXXX00000000-21XXXXXXFFFFFFFF
128 B keys in 22XXXXXX00000000-22XXXXXXFFFFFFFF
.
.
.
128 B keys in 3DXXXXXX00000000-3DXXXXXXFFFFFFFF
128 B keys in 3EXXXXXX00000000-3EXXXXXXFFFFFFFF
128 B keys in 3FXXXXXX00000000-3FXXXXXXFFFFFFFF


i just need to get lucky with 3 bytes.  how hard can that be? Huh
or, get lucky with 2 bytes, but wait a week to find out.  that shouldn't take much more than 65535 weeks.
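The block layout described above can be sketched as a range generator (a hypothetical sketch, not the poster's actual tooling):

```python
import os

def daily_ranges():
    """One [lead][XXXXXX]00000000-FFFFFFFF block per leading byte 20..3F."""
    ranges = []
    for lead in range(0x20, 0x40):                  # 32 sub-ranges
        mid = int.from_bytes(os.urandom(3), "big")  # XXXXXX: 3 random bytes
        start = (lead << 56) | (mid << 32)          # [lead][XXXXXX]00000000
        ranges.append((start, start | 0xFFFFFFFF))  # ..[lead][XXXXXX]FFFFFFFF
    return ranges

for s, e in daily_ranges()[:3]:                     # show a few
    print(f"{s:016X} - {e:016X}")
```

Each block covers 2^32 keys, so one pass over all 32 leading bytes is 32 × 4.3 billion ≈ 137 billion keys, matching the per-scan arithmetic in the post when 15 such passes are done.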
racminer
Member
**
Offline Offline

Activity: 242
Merit: 17


View Profile
August 29, 2019, 05:33:43 PM
 #229

Distributed Random Brute Force

I don't have the GPU power to make any progress with sequential brute force.
I also found, by experiment, that guessing random numbers can take much longer.

So, to maximize the fun, I am doing both. 
I distribute the scan over the 20-3F keyspace, pick 3 random bytes, and brute force the rest.

My ranges look like this: (where XXXXXX is 3 random bytes)

[20-3F][XXXXXX]00000000 - [20-3F][XXXXXX]FFFFFFFF

My old card can try fifteen 3-byte randoms per scan, every 13 hours, at 44Mkey/s.  Plus about 2 million really random randoms with the leftover starting points.

What does that get me?

15 random blocks of 4.3 billion keys in each of 32 sub-ranges [20-3F] per scan = 2 trillion.  4T/day.  Pffft.

so, every 26-hour day, scanning the following:
128 B keys in 20XXXXXX00000000-20XXXXXXFFFFFFFF
128 B keys in 21XXXXXX00000000-21XXXXXXFFFFFFFF
128 B keys in 22XXXXXX00000000-22XXXXXXFFFFFFFF
.
.
.
128 B keys in 3DXXXXXX00000000-3DXXXXXXFFFFFFFF
128 B keys in 3EXXXXXX00000000-3EXXXXXXFFFFFFFF
128 B keys in 3FXXXXXX00000000-3FXXXXXXFFFFFFFF


i just need to get lucky with 3 bytes.  how hard can that be? Huh
or, get lucky with 2 bytes, but wait a week to find out.  that shouldn't take much more than 65535 weeks.




So, speaking of case 61:

In my case, what I do is mix randomness with a full-range scan. I keep each range small enough to get a reasonable waiting time (say 10 min).
This is an example:

1) generate all 16 possible 5-bit headers (hex): 10 11 12 13 14 15 16 17 18 19 1A 1B 1C 1D 1E 1F

2) pick a random 5-hex-digit (20-bit) number, for instance CADE9
we now have 16 possible headers for our key

10CADE9
11CADE9
12CADE9
13CADE9
14CADE9
15CADE9
16CADE9
17CADE9
18CADE9
19CADE9
1ACADE9
1BCADE9
1CCADE9
1DCADE9
1ECADE9
1FCADE9

3) launch 16 instances of BitCrack, fully scanning the following 16 ranges

10CADE9000000000:10CADE9FFFFFFFFF
11CADE9000000000:11CADE9FFFFFFFFF
12CADE9000000000:12CADE9FFFFFFFFF
13CADE9000000000:13CADE9FFFFFFFFF
14CADE9000000000:14CADE9FFFFFFFFF
15CADE9000000000:15CADE9FFFFFFFFF
16CADE9000000000:16CADE9FFFFFFFFF
17CADE9000000000:17CADE9FFFFFFFFF
18CADE9000000000:18CADE9FFFFFFFFF
19CADE9000000000:19CADE9FFFFFFFFF
1ACADE9000000000:1ACADE9FFFFFFFFF
1BCADE9000000000:1BCADE9FFFFFFFFF
1CCADE9000000000:1CCADE9FFFFFFFFF
1DCADE9000000000:1DCADE9FFFFFFFFF
1ECADE9000000000:1ECADE9FFFFFFFFF
1FCADE9000000000:1FCADE9FFFFFFFFF
 
go to step 2 and repeat.

each cycle will take 16 * 10 min = 160 min if you have one GPU.

This trick increases your likelihood of finding the key as time goes on, rather than waiting 180 years (1 GPU) or 5 years (36 GPUs). LOL

a random 5-hex-digit number is one of 16^5 = 1,048,576; if you get it right, you have the key in 160 min with only one GPU  Wink
 
 


This looks like a suggestion I made a while ago  Wink
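The cycle above can be sketched as follows (illustrative only; it emits the 16 range strings in the post's START:END format, with the actual BitCrack invocation left out):

```python
import random

def make_ranges():
    """One 16-range batch: random 20-bit tail, all 16 headers 10..1F."""
    tail = random.randrange(16**5)           # e.g. 0xCADE9
    ranges = []
    for head in range(0x10, 0x20):           # headers 10, 11, ..., 1F
        prefix = (head << 20) | tail         # 7 hex digits: [head][tail]
        start = prefix << 36                 # pad with 9 hex zeros
        end = start | (16**9 - 1)            # ... and 9 hex F's
        ranges.append(f"{start:016X}:{end:016X}")
    return ranges

for r in make_ranges():                      # feed each to its own instance
    print(r)
```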



bulleteyedk
Jr. Member
*
Offline Offline

Activity: 84
Merit: 1


View Profile
August 29, 2019, 05:55:44 PM
 #230

yeah... being the one who claims this and the following wallets will take some luck (besides GPU power), but sticking to a plan is never a bad idea  Wink
MrFreeDragon
Sr. Member
****
Offline Offline

Activity: 443
Merit: 350


View Profile
August 29, 2019, 06:11:19 PM
 #231

Does anyone know whether it is important for BitCrack to have Visual Studio 2017 and CUDA Toolkit 9.2? Will it work with Visual Studio 2019 and CUDA Toolkit 10.1 (the newest versions)?

PrivatePerson
Member
**
Offline Offline

Activity: 173
Merit: 12


View Profile
August 29, 2019, 07:17:40 PM
 #232

Let's discuss Pollard's kangaroo script from 57fe, since it is not possible to register on their forum. Has anyone managed to increase the speed above 150k? And can someone who understands it explain what lines 5-8 mean:

modulo = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEFFFFFC2F
order  = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141
Gx = 0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798
Gy = 0x483ada7726a3c4655da4fbfc0e1108a8fd17b448a68554199c47d08ffb10d4b8



Does anyone know whether it is important for BitCrack to have Visual Studio 2017 and CUDA Toolkit 9.2? Will it work with Visual Studio 2019 and CUDA Toolkit 10.1 (the newest versions)?

If I understand correctly, there is a new version upgraded to Visual Studio 2019 / CUDA 10.1, but it is not in the releases:
https://github.com/brichard19/BitCrack
MrFreeDragon
Sr. Member
****
Offline Offline

Activity: 443
Merit: 350


View Profile
August 29, 2019, 07:33:02 PM
Last edit: August 29, 2019, 07:44:06 PM by MrFreeDragon
 #233

Let's discuss Pollard's kangaroo script from 57fe, since it is not possible to register on their forum. Has anyone managed to increase the speed above 150k? And can someone who understands it explain what lines 5-8 mean:

modulo = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEFFFFFC2F
order  = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141
Gx = 0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798
Gy = 0x483ada7726a3c4655da4fbfc0e1108a8fd17b448a68554199c47d08ffb10d4b8


This is the easiest part  Grin All these integers are fixed by bitcoin's ECDSA mathematics (the secp256k1 curve). Gx and Gy are the x, y coordinates of the base point G, which is multiplied by the private key to produce the public key. Modulo is the prime modulus of the field (its boundary). Order is the order of the group, i.e. the total number of curve points generated by G.
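As a quick sanity check (using only the four standard secp256k1 values quoted above), the base point can be verified against the curve equation y^2 = x^3 + 7 (mod p):

```python
p  = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEFFFFFC2F
n  = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141
Gx = 0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798
Gy = 0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8

assert (Gy * Gy - (Gx**3 + 7)) % p == 0   # G satisfies y^2 = x^3 + 7 over F_p
assert n < p                              # the group order sits just below p
print("G is on secp256k1")
```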

MrFreeDragon
Sr. Member
****
Offline Offline

Activity: 443
Merit: 350


View Profile
August 29, 2019, 11:15:04 PM
 #234

On my 1080 Ti the maximum I reached is 380 Mkey/s. This is probably normal relative to the 1070 and AMD. I wonder why the 2080 Ti is ~3 times more powerful than the 1080 Ti, when it should be only about ~30% faster (500 Mkey/s). What explains such a gap, any thoughts?

Can you please share your configuration for the 1080 Ti? (-b, -t, -p and any other relevant info). Which CUDA do you use (9.2 or the newest 10.1)?
This week I tried to test a 1080 Ti (Gigabyte Gaming OC 11Gb) and got a result of 150 Mkey/s (as another guy posted earlier) with -b 56 -t 512 -p 2048.

vimp666
Jr. Member
*
Offline Offline

Activity: 37
Merit: 1


View Profile
August 30, 2019, 11:28:39 AM
Last edit: August 30, 2019, 01:21:12 PM by vimp666
 #235

On my 1080 Ti the maximum I reached is 380 Mkey/s. This is probably normal relative to the 1070 and AMD. I wonder why the 2080 Ti is ~3 times more powerful than the 1080 Ti, when it should be only about ~30% faster (500 Mkey/s). What explains such a gap, any thoughts?

Can you please share your configuration for the 1080 Ti? (-b, -t, -p and any other relevant info). Which CUDA do you use (9.2 or the newest 10.1)?
This week I tried to test a 1080 Ti (Gigabyte Gaming OC 11Gb) and got a result of 150 Mkey/s (as another guy posted earlier) with -b 56 -t 512 -p 2048.
Experiment. I think you need to gradually increase the -b parameter. You can also raise -p (this uses your card's memory). Try, for example, -b 80 -t 512 -p 4000, or -b 88 -t 256 -p 4500, or -b 60 -t 512 -p 6000, etc. Change the parameters and... experiment... experiment... experiment...
MeBender
Jr. Member
*
Offline Offline

Activity: 114
Merit: 5


View Profile WWW
August 30, 2019, 01:44:54 PM
 #236

On my 1080 Ti the maximum I reached is 380 Mkey/s. This is probably normal relative to the 1070 and AMD. I wonder why the 2080 Ti is ~3 times more powerful than the 1080 Ti, when it should be only about ~30% faster (500 Mkey/s). What explains such a gap, any thoughts?

Can you please share your configuration for the 1080 Ti? (-b, -t, -p and any other relevant info). Which CUDA do you use (9.2 or the newest 10.1)?
This week I tried to test a 1080 Ti (Gigabyte Gaming OC 11Gb) and got a result of 150 Mkey/s (as another guy posted earlier) with -b 56 -t 512 -p 2048.
Experiment. I think you need to gradually increase the -b parameter. You can also raise -p (this uses your card's memory). Try, for example, -b 80 -t 512 -p 4000, or -b 88 -t 256 -p 4500, or -b 60 -t 512 -p 6000, etc. Change the parameters and... experiment... experiment... experiment...

Just want to add that you should increase -p in multiples of 512
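For intuition on how these flags interact: BitCrack works on roughly -b × -t × -p points in flight per batch, so the products of the suggested settings can be compared directly (the ~64-bytes-per-point figure below is a rough assumption for scale, not a documented number):

```python
def points_in_flight(b, t, p):
    """Keys processed per batch: blocks * threads/block * points/thread."""
    return b * t * p

for b, t, p in [(56, 512, 2048), (80, 512, 4000), (88, 256, 4500)]:
    total = points_in_flight(b, t, p)
    mib = total * 64 / 2**20          # ~64 bytes per (x, y) point, assumed
    print(f"-b {b} -t {t} -p {p}: {total:,} points (~{mib:,.0f} MiB)")
```

This is why raising -p eats card memory: every extra point per thread is another curve point that has to live on the GPU.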

CryptoCoin - The latest Cuckoo Cycle coin - https://crypt-o-coin.cash
Github: https://github.com/GonzoTheDev
PrivatePerson
Member
**
Offline Offline

Activity: 173
Merit: 12


View Profile
August 30, 2019, 03:49:30 PM
 #237

Let's discuss Pollard's kangaroo script from 57fe, since it is not possible to register on their forum. Has anyone managed to increase the speed above 150k? And can someone who understands it explain what lines 5-8 mean:

modulo = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEFFFFFC2F
order  = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141
Gx = 0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798
Gy = 0x483ada7726a3c4655da4fbfc0e1108a8fd17b448a68554199c47d08ffb10d4b8


This is the easiest part  Grin All these integers are fixed by bitcoin's ECDSA mathematics (the secp256k1 curve). Gx and Gy are the x, y coordinates of the base point G, which is multiplied by the private key to produce the public key. Modulo is the prime modulus of the field (its boundary). Order is the order of the group, i.e. the total number of curve points generated by G.


if I want to look for any other bitcoin key (I guess that would take a very long time), I do not need to change these lines? Only lines 168-176 and 178?
MrFreeDragon
Sr. Member
****
Offline Offline

Activity: 443
Merit: 350


View Profile
August 30, 2019, 08:15:34 PM
 #238

if I want to look for any other bitcoin key (I guess that would take a very long time), I do not need to change these lines? Only lines 168-176 and 178?

These are constants; they are always the same for bitcoin.

Raj Lassi
Newbie
*
Offline Offline

Activity: 17
Merit: 1


View Profile
August 31, 2019, 01:41:32 AM
 #239

Distributed Random Brute Force

I don't have the GPU power to make any progress with sequential brute force.
I also found, by experiment, that guessing random numbers can take much longer.

So, to maximize the fun, I am doing both. 
I distribute the scan over the 20-3F keyspace, pick 3 random bytes, and brute force the rest.

My ranges look like this: (where XXXXXX is 3 random bytes)

[20-3F][XXXXXX]00000000 - [20-3F][XXXXXX]FFFFFFFF

My old card can try fifteen 3-byte randoms per scan, every 13 hours, at 44Mkey/s.  Plus about 2 million really random randoms with the leftover starting points.

What does that get me?

15 random blocks of 4.3 billion keys in each of 32 sub-ranges [20-3F] per scan = 2 trillion.  4T/day.  Pffft.

so, every 26-hour day, scanning the following:
128 B keys in 20XXXXXX00000000-20XXXXXXFFFFFFFF
128 B keys in 21XXXXXX00000000-21XXXXXXFFFFFFFF
128 B keys in 22XXXXXX00000000-22XXXXXXFFFFFFFF
.
.
.
128 B keys in 3DXXXXXX00000000-3DXXXXXXFFFFFFFF
128 B keys in 3EXXXXXX00000000-3EXXXXXXFFFFFFFF
128 B keys in 3FXXXXXX00000000-3FXXXXXXFFFFFFFF


i just need to get lucky with 3 bytes.  how hard can that be? Huh
or, get lucky with 2 bytes, but wait a week to find out.  that shouldn't take much more than 65535 weeks.




So, speaking of case 61:

In my case, what I do is mix randomness with a full-range scan. I keep each range small enough to get a reasonable waiting time (say 10 min).
This is an example:

1) generate all 16 possible 5-bit headers (hex): 10 11 12 13 14 15 16 17 18 19 1A 1B 1C 1D 1E 1F

2) pick a random 5-hex-digit (20-bit) number, for instance CADE9
we now have 16 possible headers for our key

10CADE9
11CADE9
12CADE9
13CADE9
14CADE9
15CADE9
16CADE9
17CADE9
18CADE9
19CADE9
1ACADE9
1BCADE9
1CCADE9
1DCADE9
1ECADE9
1FCADE9

3) launch 16 instances of BitCrack, fully scanning the following 16 ranges

10CADE9000000000:10CADE9FFFFFFFFF
11CADE9000000000:11CADE9FFFFFFFFF
12CADE9000000000:12CADE9FFFFFFFFF
13CADE9000000000:13CADE9FFFFFFFFF
14CADE9000000000:14CADE9FFFFFFFFF
15CADE9000000000:15CADE9FFFFFFFFF
16CADE9000000000:16CADE9FFFFFFFFF
17CADE9000000000:17CADE9FFFFFFFFF
18CADE9000000000:18CADE9FFFFFFFFF
19CADE9000000000:19CADE9FFFFFFFFF
1ACADE9000000000:1ACADE9FFFFFFFFF
1BCADE9000000000:1BCADE9FFFFFFFFF
1CCADE9000000000:1CCADE9FFFFFFFFF
1DCADE9000000000:1DCADE9FFFFFFFFF
1ECADE9000000000:1ECADE9FFFFFFFFF
1FCADE9000000000:1FCADE9FFFFFFFFF
 
go to step 2 and repeat.

each cycle will take 16 * 10 min = 160 min if you have one GPU.

This trick increases your likelihood of finding the key as time goes on, rather than waiting 180 years (1 GPU) or 5 years (36 GPUs). LOL

a random 5-hex-digit number is one of 16^5 = 1,048,576; if you get it right, you have the key in 160 min with only one GPU  Wink

 


This looks like a suggestion I made a while ago  Wink


hahaha cool.  I made some tweaks to BitCrack so I can set whatever starting points I want. I have it count to FFFF and then restart with a new batch of points, and it writes everything it has tried to a file. It has not picked the same random 3 bytes yet, nor has any scan overlapped any ranges posted here, so I will be lazy and wait until it does before coding anything to prevent that. Tongue
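The bookkeeping described above might look something like this (a hypothetical sketch, not the poster's actual patch; the file name is made up, and unlike the post it actively reports repeats):

```python
import os

LOG = "tried_prefixes.txt"            # hypothetical log of 3-byte prefixes

def next_prefix():
    """Draw a random 3-byte prefix, log it, and say whether it is a repeat."""
    tried = set()
    if os.path.exists(LOG):
        with open(LOG) as f:
            tried = {line.strip() for line in f}
    prefix = os.urandom(3).hex().upper()
    with open(LOG, "a") as f:
        f.write(prefix + "\n")
    return prefix, prefix in tried     # membership checked against prior tries

p, repeat = next_prefix()
print(p, "(repeat!)" if repeat else "(fresh)")
```

With 16,777,216 possible 3-byte prefixes, birthday statistics say the first repeat is expected after only a few thousand draws, so the lazy approach will get tested fairly soon.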
Telariust
Jr. Member
*
Offline Offline

Activity: 38
Merit: 18


View Profile
August 31, 2019, 10:12:18 AM
Last edit: October 09, 2019, 01:17:08 PM by Telariust
Merited by th3nolo (1)
 #240

original link
https://fe57.org/forum/thread.php?board=4&thema=1#1

..Let's discuss Pollard's kangaroo script from 57fe, since it is not possible to register on their forum. Has anyone managed to increase the speed above 150k?..
Let's!

I will conduct a lot of tests and update this post.
Last update: Oct 09 2019, edit #34

#################
A note about power notation; these are all the same:

2^123 == 2**123 == pow(2,123) == pow2(123) == 1<<123
sqrt(2) == 2^(1/2) == 2**0.5

superscript (2¹²³) - only works on this forum, outside of quote/code tags
2** - Python-specific
pow() - popular in many programming languages
2^ - the classic math text format, so I use it 99% of the time; not beautiful, but it works

here x^y is the text notation for a power, not the bit operation XOR [x xor y] - do not confuse them!
#################
# gmpy2

When I ported the code to the coincurve lib, I was surprised to find that gmpy2 is faster.
Until then I had not seen an implementation faster than coincurve's; my respect!
1core i5-2540 python37x64 win7x64
Code:
coincurve:	30528.2 j/s
coincurve+cffi: 41165.6 j/s
gmpy2: 90194.8 j/s
(with python27x64 -10%)
(on linux +20% and more!)

how to install gmpy2 on windows:
 1) download the .whl file from https://www.lfd.uci.edu/~gohlke/pythonlibs/
 2) put it in the root dir of your Python install
 3) install it:
Python27>python -m pip install gmpy2-2.0.8-cp27-cp27m-win_amd64.whl
Python37>python -m pip install gmpy2-2.0.8-cp37-cp37m-win_amd64.whl
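The gain comes almost entirely from modular inversion, the dominant cost of affine point addition. A minimal sketch of the swap (falling back to Python 3.8+'s `pow(x, -1, p)` when gmpy2 is absent):

```python
p = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEFFFFFC2F

try:
    import gmpy2
    def inv(x):
        return int(gmpy2.invert(x, p))   # fast C implementation
except ImportError:
    def inv(x):
        return pow(x, -1, p)             # pure-Python fallback (3.8+)

x = 0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798
assert x * inv(x) % p == 1
print("modular inverse OK")
```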

#################
# speed cpu

Quote
..Tests with C++ (VS2013) lead to only x2..x3 faster calculations (single-threaded), but much more complicated code..
..because this is the limit of computation on the CPU.
Take break_short for example:
1 core i5-2540 = 0.16 Mkeys/s (C secp256k1 x64, without calculating Y) (A + J -> J; xJ -> xA)
break_short does an add(points) on every loop iteration;
kangaroo does an add(points) on every jump;
therefore their speeds are equivalent.

#################
# about kangaroos algo by 57fe

Quote
..Now we give the improved algorithm of [11] (for a serial computer).
The two kangaroos now make alternate jumps, and T starts at the middle of the interval, i.e. x0 = rM , where M = [(L + U)/2].
Whenever xi falls in D, we store the pair (xi, di) together with the name of the kangaroo (T or W).
This information may be stored in a hash table, hashing on xi.
When some xi is stored for the second time, having been reached by both kangaroos, we can compute the solution.
Thus, if xi(forT)=x'j(forW), we have z = M + di - d'j .
In this version, unlike [5], we do not know which kangaroo is in the lead,..
this is based on [5]

Quote
Launch m/2 tame kangaroos with starting points g(b/2)+iv, for 0<=i<m, where v is a small constant
(avoid a power of 2 for v so that the probability of tame kangaroos following the same path is not abnormally high).
At the same time, launch m/2 wild kangaroos with starting points ygb+iv, for 0<=i<m.
Whenever a kangaroo lands on a distinguished point, store it in a list common to all processors.
With each distinguished point, store a flag indicating the kangaroo type (tame or wild) and the distance
from g0 (for tame kangaroos) or from y (for wild kangaroos).
Suppose that a kangaroo reaches a distinguished point that is already in the list.
If the kangaroos are of the same type, then move the trailing kangaroo forward by some small random distance so that it will not repeat the other kangaroo’s path.
If the distinguished point was produced by kangaroos of different type, then subtract the distances to get the desired logarithm and we are done.
and this is the parallelism of [11]

[5] J. M. Pollard, Monte Carlo methods for index computation (mod p), Math. Comp., #32 (1978)
[11] P. C. van Oorschot and M. J. Wiener, Parallel collision search with cryptanalytic applications, J. Cryptology, #12 (1999)

I studied the source:
 - it uses a pre-calculated set of jumps with power-of-2 sizes (by Pollard)
 - it creates 8 tame kangaroos + 8 wild kangaroos (the 1T+1W idea of Oorschot & Wiener [11]);
..but why 8/8? 1/1 should be the norm.. if set to 1/1, the speed drops: the jump-size and start-range calculations are not optimal..
..I think the author tried to fix the low efficiency (see "anomaly detected" below), or it is a trick to average the runtime; but the fact is, it is 30% slower
(answer: the variable kangoo_power (number of kangaroos in the herd) affects the step size, so the illusion is created that T8+W8 is optimal, although in fact it should not differ from T1+W1)
 - it gradually adds new distinguished points to the T and W sets, filtered by DP_rarity (distinguished points to economize RAM, by Oorschot & Wiener);
hmm... there is a problem:
Quote
..where 0 <= uj,vj < r are chosen uniformly at random..
...
Launch m/2 tame kangaroos with starting points g(b/2)+iv, for 0 <= i < m, where v is a small constant
(avoid a power of 2 for v so that the probability of tame kangaroos following the same path is not abnormally high).
...
We analyse a variation of their method, which avoids thepossibility of “collisions” between members of the same herd.
The numbers of kangaroos in the herds will be coprime integers u and v, nearly equal, with u+v=P.
Thus if P=4p, we could take u=2p+1 and v=2p-1. Their jumps will all be multiples of uv..
coprime integers! or 2p±1 integers! not an arbitrary random.randint(1,(1<<(problem-1))) !
and the uniqueness of new points is not checked.
some kangaroos can fall into the trap of identical sequences (producing no new distinguished points);
Quote
..If the kangaroos are of the same type, then move the trailing kangaroo forward by some small random distance so that it will not repeat the other kangaroo’s path..
this reduces speed: the jump counters indicate normal progress, but in reality many calculations are wasted on identical sequences.
Quote
..If there is a collision between two kangaroos of the same herd then it will eventually be detected when the second one lands on the same distinguished point as the first.
In [473] it is suggested that in this case the server should instruct the second kangaroo to take a jump of random length so that it no longer follows the path of the front kangaroo.
Note that Teske [606] has shown that the expected number of collisions within the same herd is 2, so this issue can probably be ignored in practice.
omg, but ok
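To make the tame/wild mechanics above concrete, here is a from-scratch serial kangaroo on a toy group: the multiplicative group mod the Mersenne prime 2^127 - 1, a 16-bit interval, and no DP filter (every tame footprint is stored). All parameters are illustrative; this is not the 57fe script:

```python
p = 2**127 - 1                       # Mersenne prime; work in (Z/pZ)*
g = 3
W = 2**16                            # secret known to lie in [0, W)
secret = 47291
h = pow(g, secret, p)                # "public key" we want the log of

k = 10
jumps = [2**i for i in range(k)]             # Pollard's power-of-2 jump set
jump_pts = [pow(g, j, p) for j in jumps]     # precomputed g^jump

def step(x, d):
    i = x % k                                # jump chosen pseudorandomly by x
    return x * jump_pts[i] % p, d + jumps[i]

# tame kangaroo: starts at g^W, every footprint remembered with its distance
tame = {}
x, d = pow(g, W, p), 0
for _ in range(5000):
    tame[x] = d
    x, d = step(x, d)

# wild kangaroo: starts at h = g^secret, walks until it lands on a tame print
x, d = h, 0
found = None
for _ in range(20000):
    if x in tame:
        found = W + tame[x] - d              # secret = W + d_tame - d_wild
        break
    x, d = step(x, d)

print("recovered:", found)
```

A collision means g^(secret + d_wild) = g^(W + d_tame), and since all distances are tiny compared to the group order, the exponents are equal as integers, which is where the recovery formula comes from.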

#################
# Most famous optimizations of Pollard
# about complexity, runtime and RAM

Quote
1) without distinguished points (the early method of [5])
..runtime of approximately 3.28·w^(1/2) group operations
minimal RAM

 2) with distinguished points
..The kangaroo method (of [11, p. 13]) can solve in about 2·w^(1/2) expected operations..
It is expected to be 1.64 times faster than the early method of [5].
needs more RAM for the saved points

why using distinguished points is more efficient:
Quote
..by keeping previous data, the number of collisions found grows as the square of the time spent..
(because the number of pairs of points grows as the square of the number of points produced)

 "three-kangaroo algorithm"
("Using Inversion"; I think it means symmetry in Y)
One can perform a version of the kangaroo method with three kangaroos:
one tame kangaroo starting from g^u for an appropriate value of u, and two wild kangaroos starting from h and h^(-1) respectively.
runtime (average-case expected) of 1.897·w^(1/2) group operations if m = 0.316·w^(1/2) (m is the average jump size)
runtime (average-case expected) of 1.818·w^(1/2) group operations if m = 0.375·w^(1/2)

 "four-kangaroo algorithm"
runtime (average-case expected) of 1.714·w^(1/2) group operations if m = 0.375·(2·w^(1/2))


other algorithms' average runtimes, for comparison:

 "Gaudry-Schost algorithm"
The basic idea of the Gaudry-Schost algorithm is as follows: one has pseudorandom walks in two (or more) subsets of the group, such that a collision between walks of different types leads to a solution of the discrete logarithm problem.
runtime (average-case expected) of 1.660·w^(1/2) group operations

 "Baby-step Giant-step" method of Shanks
runtime (average-case expected) of (3/2)·w^(1/2) group operations if you first compute the baby steps and save them in a hashtable
runtime (average-case expected) of (4/3)·w^(1/2) group operations if both sets are computed at the same time (double the RAM)
much more RAM: w^(1/2) stored points

pollard-rho
Theorem 1: 1.25·w^(1/2)
Quote
..Theorem 1 makes the assumption of true randomness.
However, it has been shown empirically that this assumption does not hold exactly for Pollard’s iteration function.
The actual performance is worse than the expected value given in Teske..
In prime order subgroups.. it is 1.55·w^(1/2), versus Teske's 1.27·w^(1/2)
..while in groups of points of elliptic curves over finite fields.. it is 1.60·w^(1/2) versus 1.29·w^(1/2)
1) Floyd's improvement: the y value is updated at each step, y = F(F(y));
the gcd is calculated for N and (y-x).
Floyd’s method needs on average 2.47(lt + lh) group operations (2.47 is three times the expected value of the epact).
2) Brent's improvement (improves on Floyd):
check only the pairs (xi, xj), where j = 2^k; k = 1,2,3..a; i in [2^k + 1; 2^(k+1)].
Brent gives an improved cycle-finding method that still only requires storage for two group elements but requires fewer group operations.
Brent’s method is 30-40% faster than Floyd’s. Minimal RAM.
3) Teske's improvement: use 20 kangaroos instead of 3 (rho), +20%.
..for the running time, the assumption is made that a random walk in the underlying group is simulated.
..this assumption does not hold for the walk originally suggested by Pollard: its performance is worse than in the random case.
..Teske introduces a class of walks that lead to the same performance as expected in the random case.

Quote
Thus the complexity of Pollard rho method is determined by the size of the base group G on which random walk is performed and the running time of the iterating function.
Reducing either one of these two factors will result in speedup of the Pollard rho
...
In fact, the result on generic algorithm complexity implies that, to achieve results better than normal Pollard rho, we should either exploit the group encoding or invent a method that mostly uses the fastest of the group operations.

#################
# Compare pollard-rho vs pollard-kangaroo(lambda method*)

*the parallel version of rho resembles the shape of the greek letter lambda - do not confuse it with the lambda method!
*the endomorphism constant is also called lambda - do not confuse them!

the pollard-rho method is effective over the full field size Fp;
the pollard-kangaroo method is effective over a part of the field Fp;
we can use rho over a partial range, but the runtime stays as for the full field;
we can use kangaroo over the full field, but it is about 1.60 times slower than rho; kangaroo becomes faster when the interval b < 0.39p;
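The 0.39p crossover follows directly from equating the two runtimes, 2·b^(1/2) = 1.25·p^(1/2); a quick sanity check:

```python
# kangaroo beats rho when 2*sqrt(b) < 1.25*sqrt(p)  <=>  b < (1.25/2)**2 * p
ratio = (1.25 / 2) ** 2
print(ratio)   # 0.390625, i.e. the ~0.39*p crossover
```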

#################
# batch inversion, affine/jacobian, endomorphism, symmetry Y

Batch inversion is not applicable.
The walk adds single points with pseudorandom jump sizes, one by one, instead of computing many independent additions in bulk.

Quote from: arulbero
"Are you using affine or jacobian coordinates for the points?"
its affine
Code:
# A + A -> A
# 1I, 2M, 1S
def add(P, Q, p=modulo):
    R = Point()
    dx = Q.x - P.x
    dy = Q.y - P.y
    if flag_gmpy2:  # 1I, 1M
        c = dy * gmpy2.invert(dx, p) % p
    else:
        c = dy * rev(dx, p) % p
    R.x = (c*c - P.x - Q.x) % p       # 1S
    R.y = (c*(P.x - R.x) - P.y) % p   # 1M
    return R
But it is of little use: without batch inversion we cannot save on the expensive inversions.
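For contrast, this is the Montgomery batch-inversion trick that algorithms with many independent additions (e.g. BSGS precomputation) can exploit; the kangaroo walk cannot, because each next point depends on the previous one:

```python
def batch_inverse(xs, p):
    """Montgomery trick: n modular inversions for 1 inversion + 3(n-1) mults."""
    n = len(xs)
    prefix = [1] * (n + 1)
    for i, x in enumerate(xs):           # prefix[i+1] = x0*...*xi mod p
        prefix[i + 1] = prefix[i] * x % p
    inv = pow(prefix[n], -1, p)          # the single real inversion
    out = [0] * n
    for i in range(n - 1, -1, -1):       # peel the inverses off backwards
        out[i] = prefix[i] * inv % p
        inv = inv * xs[i] % p
    return out
```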

..how about using jacobian coordinates for adding points?
(to save on inversions; 1I costs about 20M, it is very expensive)
Quote
One often uses projective coordinates to speed up elliptic curve arithmetic, so it is natural to use
projective coordinates when implementing these algorithms. But to define the pseudorandom walk one
needs a unique representation for points, so projective coordinates are not appropriate. See Remark 13.3.2.
tests showed that the jacobian X is not a unique representation, and distinguished points are not found.
after converting J->A (X only) everything works, but at -40% speed. conclusion: A+A->A is optimal. closed.

Endomorphism is not applicable.
Applying it to the X coordinate is the same as changing the discriminator (DP_rarity) - it gives no acceleration.
Speed depends only on the number and size of the jumps.

SymmetryY is used in the "three-kangaroo algorithm",
but 1.818·w^(1/2) vs 2·w^(1/2) is a very small speed-up. closed.
SymmetryY is used in the "four-kangaroo algorithm":
1.714·w^(1/2), i.e. 1.714/2 ≈ 0.86, about +15% speed. closed.


Quote from: st0ffl
The idea is of course to only make sqrt(2^(bit-2)) operations.
nice idea by st0ffl about x2 boost used SymmetryY
https://bitcointalk.org/index.php?topic=5173445.msg52616670#msg52616670


lol, in python x*x*x*... is faster than x**y

and the bit shift 1<<123 is much faster (about x10) than 2**123
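These micro-optimizations are easy to check with the stdlib timeit (exact speed-ups vary by interpreter and operand size; variables are used in the snippets so CPython cannot constant-fold them):

```python
import timeit

setup = "x = 12345678901234567890; b = 123"
t_pow   = timeit.timeit("x**3",   setup=setup, number=100_000)
t_mul   = timeit.timeit("x*x*x",  setup=setup, number=100_000)  # avoids generic pow
t_exp   = timeit.timeit("2**b",   setup=setup, number=100_000)
t_shift = timeit.timeit("1 << b", setup=setup, number=100_000)  # single shift
print(t_pow, t_mul, t_exp, t_shift)
```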

#################
# popular questions
# I carefully read your posts in the thread and reply to them here

Quote
Why does the work suddenly increase (by a factor of about 8 to 10, depending on time or jumps)
Quote
why do you need 10 Ntimeit(N_tests)?
Pollard's algorithm is probabilistic in nature (Birthday Paradox, Kruskal's card trick, etc).
The solution time is not stable; several attempts are needed to obtain the average runtime and jump count.
Quote
..(heuristic time rather than deterministic)..
######
Quote
pubkeys = {
   , 33: ('02ed949eaca31df5e8be9bf46adc1dfae1734b8900dcc303606831372955c728da', False) #0x01abcd1234
   ,105: ('03bcf7ce887ffca5e62c9cabbdb7ffa71dc183c52c04ff4ee5ee82e0c55c39d77b', False)
..what does 3 column mean? And why do 33 and 105 matter False?..
it is the private key, used to double-check the solution (demo/debug mode)
False if it is unknown (recovery mode)
######
Quote
what i don't understand is, if a 2^50 address can be found in 570 seconds,
then why can't we find the 2^105 puzzle address in 1140 seconds? or a bit more? why does it take so much more time than it seemingly should...
105/50 = 2.1? no, wrong logic!
2^50  = 1125899906842624
2^105 = 40564819207303340847894502572032
2^105 / 2^50 = 36028797018963968 times more?
..no, it is a square-root dependence: w^(1/2)
((2^105)^(1/2)) / ((2^50)^(1/2)) =
 6369051672525772 / 33554432 =
189812531 times more! that is the right result
and now, if 2^50 is found in 570 sec, then 2^105 takes
189812531 * 570 = 108193142670 sec = 3478y  5m  5d 10:44:30s
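The square-root scaling above can be checked with integer math (math.isqrt):

```python
from math import isqrt

# kangaroo runtime scales as the square root of the interval width
ratio = isqrt(2**105) // isqrt(2**50)
print(ratio)          # 189812531 times more work
print(ratio * 570)    # 108193142670 seconds, ~3478 years
```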
######
Quote
..multi pubkeys at once check inside bit range..
..this script check only 1 pubkey in bit range..
..but would it work does someone a test if multiple pub keys work?..
yes, only 1
a "multi" check comes at the cost of a "multi" drop in performance!
(if you don't know: break_short has the same problem in its Giant-step calc)
sorry, but no
######
Quote
The Pollard-Kangaroo ... seems to find keys even when the specified bit size doesn't match the key mask, but it does take longer. I guess the bits parameter is a hint at where to start searching, rather than a maximum search space limit.
Pollard answers
Quote
If z is outside the interval, the algorithm will still work but will take longer.

#################
# CPU build

Python implementation:
 - python2/3 compatibility (print/time/input/xrange/IntDiv)
 - auto adaptation under environment
 - raw python/coincurve+cffi/gmpy2
 - some checks, debug, more info, minor accelerate
 - (un)compress/custom/random pubkey
 - format time: 0y 0m 0d 00:00:00s 000ms
 - format prefix SI: Kilo/Mega/Giga/Tera/Peta/Exa
 - heuristic prognosis (avg)time and (avg)jump per bits
 - repair kangaroos that fall into the trap of identical sequences
 - location of the privkey in the keyspace shown on a strip
 - percent progress status, elapsed/left time
 - support arbitrary range (keyspace) start:end

Python multicore implementation:
1) naive parallelism: split the search range equally
 - warning! the lowest efficiency: (w/Ncores)^(1/2)
2) parallelism by van Oorschot&Wiener
 - normal efficiency: w^(1/2)/Ncores
 - has the problem of collisions between members of the same herd
3) parallelism by Pollard (best choice)
 - normal efficiency: w^(1/2)/Ncores
 - coprime/odd numbers U,V = 2p+/-1, U+V <= Ncores
 - no problem of collisions between members of the same herd
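A quick comparison of the two efficiencies for illustrative numbers (w = 2^64, 256 cores):

```python
w, n_cores = 2**64, 256
naive  = (w / n_cores) ** 0.5    # option 1: each core searches its own slice alone
shared = (w ** 0.5) / n_cores    # options 2/3: one search, shared distinguished points
print(naive / shared)            # 16.0 = sqrt(256): the factor the naive split wastes
```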


*you can already use option 1) by running the script in Ncores copies, manually splitting the search range

*acceleration across several devices (i.e. without shared RAM) for 2) and 3) (w^(1/2)/Ncores) can only be achieved using a pool.
Filtered distinguished points should be sent to the pool into a single hashtable.

*removed support for the coincurve lib; reason: if you can install coincurve, then you can install gmpy2


python multicore cpu, python2/3, raw/gmpy2
https://github.com/Telariust/pollard-kangaroo/blob/master/pollard-kangaroo-multi.py


my cpu benchmark of libs
Algo: 1 Tame + 1 Wild with distinguished points, expected 2·w^(1/2) group operations
1core i5-2540, win7x64
Code:
python2x64
[lib#raw] 7.1K j/s
[lib#coincurve] 24.5K j/s
[lib#coincurve+cffi] 35.4K j/s
[lib#gmpy2] 89.4K j/s

python3x64
[lib#raw] 8.9K j/s
[lib#coincurve] 31.2K j/s
[lib#coincurve+cffi] 43.2K j/s
[lib#gmpy2] 97.8K j/s

1core i7-6820
Code:
python3x64
[lib#gmpy2] 174.3K j/s;


my cpu heuristic prognosis
Algo: 1 Tame + 1 Wild with distinguished points, expected 2·w^(1/2) group operations
avg stats after 10 tests
1core i5-2540, python37x64 + gmpy2, win7x64
Code:
[i] 97.8K j/s;..
...
[prognose] expected of 2w^(1/2) group operations
-------|--------/--------|---------------------------------/---------------------------------|
   W   |jump avg/2w^(1/2)| time                         avg/2w^(1/2)                         |
-------|--------/--------|---------------------------------/---------------------------------|
..
 2^020 |    1.8K/   2.0K |              0d 00:00:00s 018ms /              0d 00:00:00s 020ms |
..
 2^030 |   57.3K/  65.5K |              0d 00:00:00s 586ms /              0d 00:00:00s 670ms |
>2^031 |   81.1K/  92.7K |              0d 00:00:00s 829ms /              0d 00:00:00s 948ms |
 2^032 |  114.6K/ 131.1K |              0d 00:00:01s 173ms /              0d 00:00:01s 341ms |
..
 2^040 |    1.8M/   2.1M |              0d 00:00:18s 777ms /              0d 00:00:21s 471ms |
..
 2^050 |   58.7M/  67.1M |              0d 00:10:00s 886ms /              0d 00:11:27s 086ms |
..
 2^060 |    1.9G/   2.1G |              0d 05:20:28s 355ms /              0d 06:06:26s 759ms |
..
 2^070 |   60.1G/  68.7G |              7d 02:55:07s 378ms /              8d 03:26:16s 292ms |
..
 2^080 |    1.9T/   2.2T |          7m 17d 21:23:56s 096ms /          8m 20d 14:00:41s 355ms |
..
 2^090 |   61.5T/  70.4T |     20y  3m  2d 12:45:55s 095ms /     23y  1m 28d 16:22:03s 390ms |
..
 2^100 |    2.0P/   2.3P |    648y  2m 21d 00:29:23s 067ms /    741y  2m 17d 19:45:48s 481ms |
..
 2^105 |   11.1P/  12.7P |   3.7Ky 10m 29d 06:43:07s 581ms /   4.2Ky 11m 12d 16:12:57s 514ms |
..
 2^110 |   63.0P/  72.1P |  20.7Ky  2m 12d 15:40:18s 166ms /  23.7Ky 11m  0d 08:25:51s 416ms |
..
 2^120 |    2.0E/   2.3E | 663.8Ky  5m 14d 21:29:41s 312ms / 759.0Ky  4m 11d 05:47:25s 328ms |
----------------------------------------------------------------------------------------------


C99 implementation
 - singlecore
 - bitcoin-core/secp256k1 library
 - A+A->A; 0.219M j/s (1core i5-2540)
https://bitcointalk.org/index.php?topic=5173445.msg52473992#msg52473992


C++ implementation
 - multicore
 - based on engine VanitySearch1.15, vs bignum lib
 - A+A->A; 0.175M j/s (1core i5-2540)
https://bitcointalk.org/index.php?topic=5173445.msg52698284#msg52698284


#################
# GPU build (cuda/opencl)

Quote from: TRINITY
Tank, I need a pilot program for a V-212 helicopter cuda/opencl.. Let’s go.

https://bitcointalk.org/index.php?topic=1306983.msg51796187#msg51796187
https://imgur.com/a/f3zTmTa
thank you, it helped a lot

i imagine j2002ba2 reading this topic and thinking
Quote
omg, he used python.. what a primitive primate..
Smiley

article cuda + pollard-rho
https://github.com/beranm14/pollard_rho/blob/master/report/main.pdf

rho, by brichard19, good candidate to rewrite
https://github.com/brichard19/ecdl

roadmap:
 - rewrite VanitySearch to pollard-kangaroo

in progress, wait..

#################
# parallelism problems

Quote
Maybe you can split equally the search range among threads ...

Pollard answers
Quote
6. The Kangaroo Method on a Parallel Computer
..A bad method is to divide the interval [L, U] into P equal parts, and use each of P processors (cores) to search one part, using the serial method. This will take time O((w/P)^(1/2))..
van Oorschot and Wiener [11] employ two herds of kangaroos in a method which runs in time O(w^(1/2)/P)..
i.e. if you have P CPUs and split the search range equally, the speed-up is only P^(1/2)

Quote
..The lambda method suffers the same parallelization problem as the rho method..
Note that here each parallel processor is producing its own sequence of points independently of the others and each particular processor
does not increase the probability of success of any other processor.
..m cores give an m^(1/2) speed-up..

Quote
..Suppose that there are P identical processors.
If we use P different sequences (that is, different polynomials F(x)), then the probability that the first k numbers in these sequences will be different modulo p will be approximately equal to exp((-k^2·P)/2p). Thus, the maximum acceleration can be estimated as P^(1/2)..

Quote
One important property of the lambda method is the fact that it is easily distributed on several computers. Each participant in distributed computing selects a random number r and begins to take pseudo-random steps from the number b^r, where b is the element of the group for which a discrete logarithm is sought. Each participant uses the same easily computable pseudorandom function f(G)->S, where S is a relatively small set of numbers with an average value comparable to the size of a group G of order n. The powers a^s for S are calculated in advance..
...
A small difficulty is that a match can occur within the same sequence. However, if the number of participants in the calculations is large enough, then the probability of a match between sequences is greater than the probability of a match within one sequence.
...
The central computer must track all sequences from all participants to identify matches. According to the birthday paradox, a match is expected when the number of elements in all sequences is of the order of O(N^(1/2)). Obviously, in the described form, this method requires a large expenditure of memory on the central computer. The following idea, described by van Oorschot, greatly reduces memory requirements and thus makes this method applicable to solving complex problems. The idea is to consider so-called distinguished points. It is assumed that the elements of the group are represented by integers (or, possibly, sets of integers). A distinguished binary field of length k in such a number will consist of zeros about (1/2)^k of the time. A random walk will pass through such marked points on average every 2^k steps. If two random sequences intersect somewhere, then they will intersect from that point on and together will reach the next distinguished point. So, the idea is to send only these distinguished points to the central computer, which reduces the required memory by a factor of 2^k.
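The distinguished-point filter is just a mask on the low bits; a sketch (DP_BITS is an illustrative value):

```python
DP_BITS = 20   # k: a point is "distinguished" if its k low bits are all zero

def is_distinguished(x, k=DP_BITS):
    """About one point in 2^k passes the filter; only these go to the pool."""
    return x & ((1 << k) - 1) == 0
```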

Quote
There are two versions of the distributed algorithm, one by van Oorschot and Wiener [473] and another by Pollard [488].
The difference is how they handle the possibility of collisions between kangaroos of the same herd.
The former has a mechanism to deal with this, which we will explain later. The latter paper elegantly ensures that there will not be collisions between individuals of the same herd.

 Van Oorschot and Wiener Version
..kangaroos are spaced a (small) distance s apart..
..If there is a collision between two kangaroos of the same herd then it will eventually be detected when the second one lands on the same distinguished point as the first.
In [473] it is suggested that in this case the server should instruct the second kangaroo to take a jump of random length so that it no longer follows the path of the front kangaroo.
Note that Teske [606] has shown that the expected number of collisions within the same herd is 2, so this issue can probably be ignored in practice.

 Pollard Version
Let N be the number of processors and suppose we can write N=U+V where gcd(U,V)=1 (i.e. U and V are coprime) and U,V ≈ N/2.
The number of tame kangaroos is U and the number of wild kangaroos is V.
The (super) kangaroos perform the usual pseudorandom walk with steps {U*V*u_0,...,U*V*u_(n-1)} having mean m = N·w^(1/2)/4
(this is UV times the mean step size for solving the DLP in an interval of length w/UV =~ 4w/N^2).
As usual we choose either u_j = 2^j or else random values between 0 and 2m/UV.
The U tame kangaroos start at g^(w/2 + iV) for 0 <= i < U.
The V wild kangaroos start at h·g^(jU)     for 0 <= j < V.

Lemma 14.6.5. Suppose the walks do not cover the whole group, i.e., 0 <= a_n < r (r is the order).
Then there is no collision between two tame kangaroos or two wild kangaroos.
There is a unique pair of tame and wild kangaroos who can collide.


#################
How Bitcoin works, a simple explanation.

The Private Key is a number N (very large, up to 2^256, so that it is impossible to guess or enumerate it).
The Public Key is a point (x, y) on the elliptic curve "secp256k1".
The privkey is primary; the pubkey is computed from it. They exist in pairs.

Knowing the pubkey, it is almost impossible to restore its privkey. This is what the security is based on.

There is a base point (i.e. a base pubkey, but don't say that); everyone knows it, and the privkey for this point is 1.
To compute pubkeyX from privkey = X, multiply the base point by X.
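The same one-way relationship can be sketched in a toy multiplicative group (illustrative parameters, not secp256k1); note the additive key property that the kangaroo method exploits:

```python
# toy analogue: pub = g^priv mod p is fast (square-and-multiply),
# but recovering priv from pub (the discrete log) is the hard direction
p = 2**127 - 1            # a Mersenne prime, stand-in for the real group
g = 3                     # stand-in for the base point
priv = 123456789
pub = pow(g, priv, p)

# shifting the privkey by s multiplies the pubkey by g^s - the kangaroo property
s = 5
assert pow(g, priv + s, p) == pub * pow(g, s, p) % p
```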

#################
# How pollard-kangaroo works, the Tame and Wild kangaroos, is a simple explanation.

Suppose there is pubkeyX with unknown privkeyX, but privkeyX is known to lie in the range w=[L..U].
The keys have a property: if we increase a pubkey by S, its privkey also increases by S.
We start increasing pubkeyX step by step by S(i), keeping the sum sumX(S(i)). This is the Wild kangaroo.
We select a random privkeyT from the range [L..U] and compute pubkeyT.
We start increasing pubkeyT step by step by S(i), keeping the sum sumT(S(i)). This is the Tame kangaroo.
The size of jump S(i) is determined by the x coordinate of the current point, so if the Wild and Tame kangaroos land on the same point, their paths merge.
(we are dealing with pseudorandom walks whose next step is determined by the current position)
Thanks to the Birthday Paradox (Kruskal's card trick), their paths will meet sooner or later.
Knowing each traveled path, privkeyX is computed.
The number of jumps is approximately 2·w^(1/2) group operations, which is much less than the full search over w.
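The walk above can be sketched end-to-end in a toy multiplicative group (illustrative parameters; the real script works on secp256k1 points, but the logic is the same: the tame kangaroo sets a trap, the wild one walks into it):

```python
def kangaroo(g, h, p, L, U, salt=0):
    """Toy serial Pollard kangaroo: find x in [L, U] with g^x = h (mod p)."""
    w = U - L
    k = max(2, w.bit_length() // 2)
    jumps = [1 << i for i in range(k)]           # jump sizes, mean ~ sqrt(w)
    f = lambda y: jumps[(y + salt) % k]          # jump chosen by current position
    # Tame kangaroo: start at the top of the range, jump forward, set a trap
    yT, dT = pow(g, U, p), 0
    for _ in range(4 * int(w ** 0.5) + 10):
        s = f(yT); dT += s; yT = yT * pow(g, s, p) % p
    # Wild kangaroo: start at the unknown pubkey, walk until it hits the trap
    yW, dW = h % p, 0
    while dW <= w + dT:
        if yW == yT:
            return U + dT - dW                   # paths merged: solve for x
        s = f(yW); dW += s; yW = yW * pow(g, s, p) % p
    return None                                  # rare miss: retry with another salt

# usage: secret x = 3000 hidden in [2048, 4096], p = 10007 (prime), g = 5
p, g, x = 10007, 5, 3000
h = pow(g, x, p)
found = None
for salt in range(32):
    found = kangaroo(g, h, p, 2048, 4096, salt)
    if found is not None:
        break
assert pow(g, found, p) == h
```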

#################
# articles
(links from other users, all in one place)

base
https://andrea.corbellini.name/2015/05/17/elliptic-curve-cryptography-a-gentle-introduction/
hand translate to ru/ua (recommend instead translate.google)
https://habr.com/ru/post/335906/

0) best of the best, all in 1, epic,  2012
Chapter 14. Factoring and Discrete Logarithms using Pseudorandom Walks
https://www.math.auckland.ac.nz/~sgal018/crypto-book/ch14.pdf

1) with reference to old
J. M. Pollard, “Kangaroos, monopoly and discrete logarithms,” Journal of Cryptology, #13 (2000).
https://web.northeastern.edu/seigen/11Magic/KruskalsCount/PollardKangarooMonopoly.pdf
(good dir web.northeastern.edu/seigen/11Magic/KruskalsCount/)

2) about parallelism problems
P. C. van Oorschot and M. J. Wiener, Parallel collision search with cryptanalytic applications, J. Cryptology, #12 (1999)
https://people.scs.carleton.ca/~paulv/papers/JoC97.pdf

advice acceleration tips by fernand77
https://github.com/JeanLucPons/VanitySearch/issues/25#issuecomment-509293732

#################
# polard implementations
(links from other users, all in one place)

kangaroo (now no info), by BurtW, (openssl,secp256k1?)
https://bitcointalk.org/index.php?topic=5173445.0

kangaroo (classic, not use distinguished points), by natmchugh, python
https://bitcointalk.org/index.php?topic=5166284.msg52233957#msg52233957

rho, by brichard19
github.com/brichard19/ecdl
python, rho
rho, by Andrea Corbellini
github.com/andreacorbellini/ecc/blob/master/logs/pollardsrho.py
rho, by qubd
raw.githubusercontent.com/qubd/mini_ecdsa/master/mini_ecdsa.py
rho, by David Cox 2016
gist.github.com/davidcox143/c68a477b8390f35cb9a19084edd2a1e5
rho, by ralphleon
github.com/ralphleon/Python-Algorithms/blob/master/Cryptology/pollard.py
rho, by thomdixon
gist.github.com/thomdixon/dd1e280681f16535fbf1

other lang
github.com/nikitaneo/rho_pollard
github.com/twjvdhorst/Parallel-Pollard-Rho/
github.com/beranm14/pollard_rho/
github.com/grouville/factrace/

#################
# motivation

..they don't want to share tools like a Pollard kangaroo script..

Quote from: BENDER
..Well, I’m gonna go build my own theme park, with blackjack and hookers!..

#################

why not use the endomorphism for pollard-rho?
is a point, after multiplying by beta, pseudorandom enough?
if I'm right, this would be a new superfast way to iterate pollard-rho at a cost of 1M per point
has no one thought of this before me? I need to test it

#################

to be continued..