Author Topic: Pollard's kangaroo ECDLP solver  (Read 58801 times)
WanderingPhilospher
Full Member
Activity: 1204
Merit: 237
May 30, 2020, 08:31:57 AM
#541

True or False?

Let's say that for any of the puzzles (#110, #115, #120, etc.), people can use various DP settings, meaning different people search for different DPs.

Just for conversation's sake, let's say the DPs range from 14 to 48.

For each of those DPs, 14 to 48, the key can be solved: true or false?
WanderingPhilospher
Full Member
Activity: 1204
Merit: 237
May 30, 2020, 09:29:46 AM
#542

Quote from: Jean_Luc
@MrFreeDragon
Thanks for the link, it was funny Wink

@patatasfritas
About the merger: yes, I know, I am currently working on a low-RAM-usage version and on integrating your features.
Concerning subgroups, I haven't had a look yet, but splitting the range is not a good way to parallelize; you only gain a factor of sqrt(n), where n is the number of subgroups.

Jean Luc,

All this and that with merging... there has to be a better way, with the comparator I spoke about earlier.

Break the code down so that it saves multiple tame and wild files, and build a comparator. Easy fix!

Currently, one has to merge files or base their DP setting on expected RAM usage. So if I only have 48GB of RAM, that dictates the lowest DP I can use, because now we have one huge master save file.

Let's say my master save file is 20GB and I need to merge it with another 4GB file (24GB total). My RAM will be nearing the 50 percent threshold that you mentioned.

But now let's say you save 2 tame files and 4 wild files. We have 6 files; each tame file is now only 6GB and each wild file only 3GB. When I compare to look for the solved key, my machine reads the 1st tame file (6GB) and the 1st wild file (3GB), so I am only consuming 9GB. The machine compares; if no collision is found, it compares the 1st tame file with the 2nd wild file, and repeats until the 1st tame file has been compared with each wild file and the 2nd tame file has been compared with each wild file. (A sketch of this pairwise comparator follows below.)

Would this not work?
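
For illustration, a minimal sketch of such a pairwise comparator in Python. The file format here (one "x-coordinate distance" hex pair per line) and the function names are hypothetical; Jean_Luc's actual work files are binary and structured differently.

Code:
# A minimal sketch of the pairwise tame/wild comparator described above.
# The file format (one "x_coordinate distance" hex pair per line) is
# hypothetical; Jean_Luc's actual save files are binary and differ.
import itertools

def load_dps(path):
    """Load one DP file into a dict mapping x-coordinate -> distance."""
    dps = {}
    with open(path) as f:
        for line in f:
            x, dist = line.split()
            dps[int(x, 16)] = int(dist, 16)
    return dps

def find_collision(tame_files, wild_files):
    # Compare one tame file against one wild file at a time, so peak RAM
    # usage is one tame table plus one wild table, never the whole data set.
    for t_path, w_path in itertools.product(tame_files, wild_files):
        tame = load_dps(t_path)
        for x, wild_dist in load_dps(w_path).items():
            if x in tame:                  # same DP reached by both herds
                return tame[x], wild_dist  # collision: key follows from the distances
    return None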
Jean_Luc (OP)
Sr. Member
Activity: 462
Merit: 696
May 30, 2020, 09:36:12 AM
#543

The newly committed merger splits the files and performs the merge block by block, so it does not need the full files in RAM.
I'm still working on it.

Separate tame and wild arrays are not yet supported, but that can be done.
At the moment I'm working on having a usable and stable program.
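
(For illustration, a block-by-block merge of sorted DP files can be sketched like this in Python 3.8+; the fixed-size record format here is hypothetical, not Kangaroo's actual on-disk format.)

Code:
# A minimal sketch of a block-by-block (external) merge. It assumes,
# hypothetically, fixed-size DP records sorted by x-coordinate; the real
# Kangaroo file format differs. The point is that merging sorted files
# streams record by record instead of loading both tables into RAM.
import heapq, struct

RECORD = struct.Struct("<16s16s")  # hypothetical: 16-byte x, 16-byte distance

def records(path):
    with open(path, "rb") as f:
        while chunk := f.read(RECORD.size):
            yield RECORD.unpack(chunk)

def merge_files(file_a, file_b, out_path):
    with open(out_path, "wb") as out:
        # heapq.merge keeps only one pending record per input in memory
        for x, dist in heapq.merge(records(file_a), records(file_b)):
            out.write(RECORD.pack(x, dist))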

COBRAS
Member
Activity: 1016
Merit: 23
May 30, 2020, 09:42:39 AM
#544

Quote from: Jean_Luc
The newly committed merger splits the files and performs the merge block by block, so it does not need the full files in RAM.
I'm still working on it.

Separate tame and wild arrays are not yet supported, but that can be done.
At the moment I'm working on having a usable and stable program.

@Jean_Luc, please make an option so that clients can connect to the server at any time, rather than all at the same time, and join a task already running on the server.

Jean_Luc, does the server need a GPU, or can the server work fine without GPU computation?
Thanks!
WanderingPhilospher
Full Member
Activity: 1204
Merit: 237
May 30, 2020, 09:42:53 AM
#545

Quote from: Jean_Luc
The newly committed merger splits the files and performs the merge block by block, so it does not need the full files in RAM.
I'm still working on it.

Separate tame and wild arrays are not yet supported, but that can be done.
At the moment I'm working on having a usable and stable program.

Ok.

I think it's pretty stable. The only issues you've had were when someone tried to solve a key at 80 billion * sqrt(160 gajillion) Mkey/s on your first server edition Smiley

You have developed an awesome program... extremely usable and stable.

I am sorry my coding skills are basically zero, or I would help you more with the different ideas.
HardwareCollector
Member
Activity: 144
Merit: 10
May 30, 2020, 02:30:07 PM
#546

Quote from: WanderingPhilospher
Hardware, how long are you running your precomp for? Do you have a set # of DPs or a percentage? 2/3 ops?

@WanderingPhilospher

The goal is ~512GiB worth of precomputed points (at 16 bytes/point).
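(That works out to 2^39 / 2^4 = 2^35 points, roughly 34 billion DPs.)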
HardwareCollector
Member
Activity: 144
Merit: 10
May 30, 2020, 02:34:43 PM
#547


Quote from: HardwareCollector
As relating to the wild kangaroos, [working_public_key] = [(original_public_key) - (beginning_range)*(secp256k1_generator_point)].
[distinguished_point] = [(+-traveled_distance)*(secp256k1_generator_point)] + [working_public_key]

You will need to add back the (beginning_range) when there's a collision, to solve for the (original_public_key).

Quote from: Etar
OK, so it is adding points, not adding key + distance:
(0xe6dabff2705a80acc23ae121956873c4ff9fd31cb0faca522c33624e23657e04, 0x125c04d29ea83874332ea8aef3b3467f22665a4970df415be756bcdf5675e569)
+ (0xfb12e2e7eba822db7582b91da81c0f1d991a6fec79d170733a1eceb039b3e1f9, 0xee2e79d5326d178c91ed36ca52f9be4f04c42e3cf7cabb3299e070bc1231bb05)
= (0xdcbae520622e89bd4c0062bb82400003c628f41e76f5bce8566c3dfa2c3fff0, 0xb6fdc18b5be9048e837759b86efa422511b717ed9e7bc2d7b1936c06a0620cfe)
The x coordinate is correct, but a tame will never reach this point because it is out of the range 0..fffffffffffff.
So not all wilds can become tames. We can tame every wild with sign +, but each wild with sign - should be verified against the range.
In that case we can add the tamed wilds to the experience.

@Etar

I am not sure if that's the case with Jean_Luc's software, but with mine all wild kangaroos can be converted to tame kangaroos after the DL has been solved. I ran a small test and it was true for all cases:

Code:
iterations = 65536 (2^16)
bmul  size = 22
batch size = 128
total keys = 8388608

main context =   0.01402114
bmul context =  10.28601294

Baseline verification passed
Serial verification passed

Starting...

fffffffffffffffffffffffffffffffebaaebd0cde863851b153c3b3e54b842a
2855b4681021cbc04373860592de141d7e8686574f6edb7de522baf3a8eb0309
000001e66aa3c3ccbbe8735d6af80000078097436d9053b4552d8bb2c6491b15

fffffffffffffffffffffffffffffffebaaebcf57871bba71cababca3cfc6ba5
3c9bf5c08830cbd65ac8c4d551ddd8b77e93b56ba917fe70a6f9a7093f11907c
00000145e24ee029218b666ef7bc00000977a6faaf969829b43524b39c1594ae

fffffffffffffffffffffffffffffffebaaebcfdf0ee232d9145f3fe9d38ff41
85aa570afa81ba04d4a5cb000a90468caf95078cfedd7ca8379c32a9e5f2d615
000000688f0aa57f0950015258ac00000b30373c83ed1236f3fb177730a2117e

fffffffffffffffffffffffffffffffebaaebd0598056b9fc923ecc885a3bc20
c4c7a3af557b96e726a9377f7a37ca9bae18d06e16f11f859a95cb4409780f47
000001820963965f9c34ad90266400000d412f5abe971869764e15e03f8f8ad8

fffffffffffffffffffffffffffffffebaaebd0ada254af6931585f77bafcf4d
dd3d0d37ea59b4bbae97dcef111398a86bfd249dbf0ab536457c23f634d92aaa
0000017bdcfa4edd88fdf04dfd0c00000f16e754bc105657863055dd948487a1

fffffffffffffffffffffffffffffffebaaebcf25b5a4b020e63b0ce136fa282
f4cb6a2aa4fdace911883c0aa733af8447e06b5c56526f43569168b91936cb19
000000a4bf27ad623515dc0de7f8000016c74bbaa3098d7aa92ed4f331f12c95

fffffffffffffffffffffffffffffffebaaebcf176baa77561285fff4e219ad5
1af3f603f6edd7b10349afcfa13bbd63c1bdb99801e491b682184924dfc5c3c4
00000022f465e6f8b16d0f1c1ea000001706ad24024dd9fa0ce11c0adce08018

fffffffffffffffffffffffffffffffebaaebd015682cd2ae25d864d4169574c
d64b5bcb9724ed9a02dae2dac8ec17686f4b6f765c0534bff11888b928d08ec2
000001c2d572bdebd0f8831fd3e800001831690f67a08f7da03346684b117560

fffffffffffffffffffffffffffffffebaaebcf3d017caaa8c6822877b7f8688
13c9f7a86c88bc696fc6da053acf6568110acb4677620fef3bc7d5fd3800c2bb
000000866db879683704712e878400001ba7908f67561bdc006981cbbdcb15b1

fffffffffffffffffffffffffffffffebaaebcf5eb02df76cdf1112b2f85aa76
32be189e1d49409560a3e0273d973965b5742407caba96942af32f0b239a4bdb
000001be084cae57d8293270a5b400001c8467b521322e3635adf277d28d460b

.
.
.
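
The conversion itself is just a relabeling once the discrete log is known; a minimal sketch of the idea (my notation here, not HardwareCollector's actual code):

Code:
# Sketch only (assumed notation, not HardwareCollector's actual code).
# A wild DP is the point (k + sign*d_wild)*G, where k is the now-solved
# discrete log of the working public key, so it equals the DP a tame
# kangaroo at distance k + sign*d_wild would have reported.
def wild_to_tame(d_wild, sign, k, range_width):
    d_tame = k + sign * d_wild      # sign is +1 or -1
    if 0 <= d_tame < range_width:   # minus-sign wilds can land outside the
        return d_tame               # range, which is Etar's exception above
    return None                     # this wild cannot be tamed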
WanderingPhilospher
Full Member
Activity: 1204
Merit: 237
May 30, 2020, 02:38:40 PM
#548

Quote from: WanderingPhilospher
Hardware, how long are you running your precomp for? Do you have a set # of DPs or a percentage? 2/3 ops?
Quote from: HardwareCollector
@WanderingPhilospher

The goal is ~512GiB worth of precomputed points (at 16 bytes/point).

Wow... a pretty good bit!
BitCrack
Jr. Member
Activity: 30
Merit: 122
May 30, 2020, 02:41:00 PM
#549

How did you manage to get it down to 16 bytes per point?
Etar
Sr. Member
Activity: 634
Merit: 312
May 30, 2020, 02:45:41 PM
Last edit: May 30, 2020, 02:59:49 PM by Etar
#550

About reusing the experience from previous work:
here is a test where all the possible DPs from the previous puzzle were fed into the next one,
going from puzzle #54 up to #79.
DP=20 for both variants.
Code:
puzzle#54
*original*
40000000000000
7fffffffffffff
0385a30d8413af4f8f9e6312400f2d194fe14f02e719b24c3f83bf1fd233a8f963
[Count 2^32.75]result>0x6ABE1F9B67E114
*shifted*
0
3fffffffffffff
04b1f8ba606a61026d1ecaa93d9e9ceb12260c33e92809ed1ecb0850a2856accc1ac46d892583d7fdd24a25a229b6ea8d17add77b1f410a9d30441e0acbec19c74
[Count 2^32.76]result>0x2ABE1F9B67E114 + 0x40000000000000 = 0x6abe1f9b67e114

puzzle#59
*original*
800000000000000
fffffffffffffff
0348e843dc5b1bd246e6309b4924b81543d02b16c8083df973a89ce2c7eb89a10d
[Count 2^34.63]result>0xFC07A1825367BBE
*shifted + experience*
0
7ffffffffffffff
042b90a8680d08cf81e15e209acadbb16b7bdff19787d6d13ca6a7852fee8d1013447a29425ea9773471ad9810d21a976cf5bd4bdb46d0c8ae537591dce43252f2
[Count 2^35.42]result>0x7C07A1825367BBE+0x800000000000000 = 0xfc07a1825367bbe

puzzle#64
*original*
10000000000000000
1ffffffffffffffff
0230210c23b1a047bc9bdbb13448e67deddc108946de6de639bcc75d47c0216b1b
[Count 2^34.49]result>0x1A838B13505B26867
*shifted + experience*
0
ffffffffffffffff
04e7daa14feea7f3e6b77aa3ff75989ac171d9b3f7a263e70eab001849f2f185449b51b0bc783313d9e201f1bc453d0e7c2bb94db007ab7a848a63b47d86f29416
[Count 2^36.70]result>0xA838B13505B26867+0x10000000000000000 = 0x1a838b13505b26867

puzzle#69
*original*
200000000000000000
3fffffffffffffffff
0290e6900a58d33393bc1097b5aed31f2e4e7cbd3e5466af958665bc0121248483
[Count 2^38.68]result>0x349B84B6431A6C4EF1
*shifted + experience*
0
1fffffffffffffffff
049ffc74f6136e60e9229a58544c0e9cffa5a9d955d949e75e4bccbe1c06b81c3402d4ba4d9f65b918376c0cbf6f8caa840a6261cf900f413fb5279bb243f20c4c
[Count 2^38.43]result>0x149B84B6431A6C4EF1+0x200000000000000000 = 0x349b84b6431a6c4ef1

puzzle#74
*original*
4000000000000000000
7ffffffffffffffffff
03726b574f193e374686d8e12bc6e4142adeb06770e0a2856f5e4ad89f66044755
[Count 2^38.56]result>0x4C5CE114686A1336E07
*shifted + experience*
0
3ffffffffffffffffff
04997bf5a3795f965c338f3739dae9e4cc202c2203100e6be8132a65bf2c3681d1fa67799ecfb697015037e44076c734ad2eef8c3b166e04b0a76d8c0de0e14dc2
[Count 2^40.32]result>0xC5CE114686A1336E07+0x4000000000000000000 = 0x4c5ce114686a1336e07

puzzle#79
*original*
80000000000000000000
ffffffffffffffffffff
037e1238f7b1ce757df94faa9a2eb261bf0aeb9f84dbf81212104e78931c2a19dc
[Count 2^39.25]result>0xEA1A5C66DCC11B5AD180
*shifted + experience*
0
7fffffffffffffffffff
043aeb4f818ca91912a3e50d1b3db196696f82713bae00ba2b53c09a23f1d284a085b2197137256def6c05a0f105e1b1eee9c10d23b7a4911040a23e891ebb3dc9
[Count 2^41.87]result>0x6A1A5C66DCC11B5AD180+0x80000000000000000000 = 0xea1a5c66dcc11b5ad180
So as you can see, the experience didn't help solve the next puzzle faster.
Most likely this is because the range is 32 times larger in the next puzzle, and all the DPs are concentrated at the very beginning of that range. Perhaps experience can help in solving the next range, but not one 5 bits further on.

And here are the results for 3 randomly generated test keys in the 64-bit range, using the experience from puzzle #79.
Code:
***test 64bit****
*original*
10000000000000000
1ffffffffffffffff
035492d5b1c9e271d393ace35b32ed922e1ad202ff0c0aa05f53ba39edcabb003c
[Count 2^36.63]result>0x151881A76D2AEC2D9
*shifted + experience79bit*
0
ffffffffffffffff
049959d0a41a11f42a64f2a86c8133089e7fa508e53fb4afc64c03e5b4b109edd7b9c955dd43bca51b6db320602101d10585ebf60330bd3ea9d445b6cbb6e18932
[Count 2^34.63]result>0x51881A76D2AEC2D9+0x10000000000000000 = 0x151881a76d2aec2d9

***test 64bit****
*original*
10000000000000000
1ffffffffffffffff
03a643b91c86b76a71ec6f9c6a6d0a9ba71fcf3c69c11b919a252b44b76bfe9b6f
[Count 2^35.93]result>0x1CC24D7785E54FD54
*shifted + experience79bit*
0
ffffffffffffffff
0443814ed8fe2a1d4a36993911354fe8f7bf6daddbfcc3a1f814a3d3d32c7772eaa448c10af7a7f4f94da0c35e087cfba5310c81f86c4b0aaca9250476960305b7
[Count 2^34.87]result>0xCC24D7785E54FD54+0x10000000000000000 = 0x1cc24d7785e54fd54

***test 64bit****
*original*
10000000000000000
1ffffffffffffffff
035267fc38fc04ef271b54f5be8e482c46392ba97ca171e68d2408dd0717bd87cf
[Count 2^37.10]result>0x1114527E6BCFD0EAD
*shifted + experience79bit*
0
ffffffffffffffff
041c4f6e5f89465a941bb4291cf0266fff79c333e4dc1b30e5a8671b15504f364f1e5be5ba0cb4f8b19769ac5904aa465dab6fb3b8ace9862c704f8ea5a916d061
[Count 2^35.56]result>0x114527E6BCFD0EAD+0x10000000000000000 = 0x1114527e6bcfd0ead
In all three cases, the experience helped me find the key faster.

The conclusion is that experience helps you find the key in the current range or in a previous one, but it is absolutely useless for searching the following range that is many times larger than the current one.

In other words, in order to solve the puzzles faster you need to move from the end to the beginning: first #125, then #120, then #115 and then #110 ))
MrFreeDragon
Sr. Member
Activity: 443
Merit: 350
May 30, 2020, 03:16:55 PM
#551

Quote from: WanderingPhilospher
Hardware, how long are you running your precomp for? Do you have a set # of DPs or a percentage? 2/3 ops?
Quote from: HardwareCollector
@WanderingPhilospher

The goal is ~512GiB worth of precomputed points (at 16 bytes/point).

How many bytes do you use for the x-coordinate and how many for the distance?

For the whole distance in a 109-bit range we need 109/8 ≈ 14 bytes. If you allocate less space, you have to brute-force the remaining bytes/bits once a collision is found.
If you have 16 bytes of space per point, you have to split it somehow between the x-coordinate and the distance.
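
One hypothetical way such a 16-byte split could look (purely illustrative; HardwareCollector describes his actual layout below):

Code:
# Purely illustrative 16-byte DP record: 8 bytes of truncated x-coordinate
# and 8 bytes of truncated distance. Any distance bits that don't fit must
# be brute-forced after a collision, exactly as noted above.
import struct

MASK64 = (1 << 64) - 1

def pack_dp(x, distance):
    return struct.pack("<QQ", x & MASK64, distance & MASK64)

def unpack_dp(record):
    x_frag, d_frag = struct.unpack("<QQ", record)
    return x_frag, d_frag  # the high bits of x are gone too, so a match on
                           # x_frag is only a candidate collision to verify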

sssergy2705
Copper Member
Jr. Member
Activity: 205
Merit: 1
May 30, 2020, 04:01:21 PM
#552


Quote from: HardwareCollector
As relating to the wild kangaroos, [working_public_key] = [(original_public_key) - (beginning_range)*(secp256k1_generator_point)].
[distinguished_point] = [(+-traveled_distance)*(secp256k1_generator_point)] + [working_public_key]

You will need to add back the (beginning_range) when there's a collision, to solve for the (original_public_key).

Quote from: Etar
(0xe6dabff2705a80acc23ae121956873c4ff9fd31cb0faca522c33624e23657e04, 0x125c04d29ea83874332ea8aef3b3467f22665a4970df415be756bcdf5675e569)
+ (0xfb12e2e7eba822db7582b91da81c0f1d991a6fec79d170733a1eceb039b3e1f9, 0xee2e79d5326d178c91ed36ca52f9be4f04c42e3cf7cabb3299e070bc1231bb05)
= (0xdcbae520622e89bd4c0062bb82400003c628f41e76f5bce8566c3dfa2c3fff0, 0xb6fdc18b5be9048e837759b86efa422511b717ed9e7bc2d7b1936c06a0620cfe)

Could you explain how you calculate the coordinates? Is there a tool for this? I know how to add and multiply points, but not the opposite.
arulbero
Legendary
Activity: 1938
Merit: 2080
May 30, 2020, 04:02:58 PM
Last edit: May 30, 2020, 04:30:07 PM by arulbero
#553

Quote from: HardwareCollector
The goal is ~512GiB worth of precomputed points (at 16 bytes/point).

I remember that the DPs are not equally useful:

https://eprint.iacr.org/2012/458.pdf
Quote
Choosing the most useful distinguished points.
Instead of randomly generating T distinguished points, we propose generating more distinguished points,
say 2T or 10T or 1000T, and then keeping the T most useful distinguished points.
(This presumably means storing 2T or 10T or 1000T points during the precomputation, but we follow standard practice in distinguishing between the space consumed during the precomputation and the space required for the output of the precomputation. As an illustrative example in support of this practice, consider the precomputed rainbow tables distributed by the A5/1 Cracking Project [26]; the cost of local RAM used temporarily by those computations is much less important than the network cost of distributing these tables to users and the long-term cost of storing these tables.)
The natural definition of “most useful” is “having the largest number of ancestors”. By definition the ancestors of a distinguished point are the group elements that walk to this point; the chance of a uniform random group element walking to this point is exactly the number of ancestors divided by 'L'.
Unfortunately, without taking the time to survey all 'L' group elements, one does not know the number of ancestors of a distinguished point. Fortunately, one has a statistical estimate of this number: a distinguished point found by many walks is very likely to be more useful than a distinguished point found
by fewer walks. This estimate is unreliable for a distinguished point found by very few walks, especially for distinguished points found by just one walk; we thus propose using the walk length as a secondary estimate.

I remember this post too:

Quote
For the range [a,b], if we have pre-calculated (b-a)^(1/3) distinguished points, we would need only (b-a)^(1/3) group operations in order to find the collision. But the precomputation requires (b-a)^(2/3) group operations (performed once as a preliminary stage).

EDIT:
source (see 2nd paragraph on page 6): https://eprint.iacr.org/2014/565.pdf


If we split 2^110 into 2^20 ranges, each of length 2^90, then for each range it would take only (2^90)^(1/3) = 2^30 steps to retrieve (on average) the key. In theory, 2^20 * 2^30 = 2^50 steps would be enough.

Even if we look in different ranges, we can share the same 2^30 DP points: each interval can be shifted into the range [0, 2^90] together with the public key P (see the sketch below).

It becomes very important to generate these 2^30 DP points (but that requires (2^90)^(2/3) = 2^60 steps).

If we split 2^110 into 2^29 ranges, each of length 2^81, for each range it would take only 2^27 steps to retrieve (on average) the key. In theory, 2^29 * 2^27 = 2^56 steps would be enough.

And we would first need to generate 2^27 DP points (which requires 2^54 steps).
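
A minimal sketch of the public-key shift (standard secp256k1 arithmetic; nothing here is specific to Jean_Luc's program, and it needs Python 3.8+ for pow(x, -1, p)):

Code:
# A minimal sketch of the public-key shift: to search for P in the subrange
# [offset, offset + 2^90], solve P' = P - offset*G in [0, 2^90] instead,
# so every subrange can reuse the same DP table.
p  = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEFFFFFC2F
Gx = 0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798
Gy = 0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8

def ec_add(P1, P2):
    if P1 is None: return P2              # None = point at infinity
    if P2 is None: return P1
    (x1, y1), (x2, y2) = P1, P2
    if x1 == x2 and (y1 + y2) % p == 0: return None
    if P1 == P2:
        m = 3 * x1 * x1 * pow(2 * y1, -1, p) % p   # tangent slope
    else:
        m = (y2 - y1) * pow(x2 - x1, -1, p) % p    # chord slope
    x3 = (m * m - x1 - x2) % p
    return (x3, (m * (x1 - x3) - y1) % p)

def ec_mul(k, P):
    R = None
    while k:                              # double-and-add
        if k & 1: R = ec_add(R, P)
        P = ec_add(P, P)
        k >>= 1
    return R

def shift(P, offset):
    """Return P - offset*G, the working public key for one subrange."""
    ox, oy = ec_mul(offset, (Gx, Gy))
    return ec_add(P, (ox, p - oy))

# check: a key k = offset + r is seen as r in [0, 2^90] after the shift
offset, r = 3 << 90, 123456789
P = ec_mul(offset + r, (Gx, Gy))
assert shift(P, offset) == ec_mul(r, (Gx, Gy))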




Quote from: Etar
About reusing the experience from previous work.
---
The conclusion is that experience helps you find the key in the current range or in a previous one, but it is absolutely useless for searching the following range that is many times larger than the current one.
In other words, in order to solve the puzzles faster you need to move from the end to the beginning: first #125, then #120, then #115 and then #110 ))


There is a small exception:

if you have found DPs in the interval [0, 2^109], then you have actually found DPs in the interval [-2^109, 2^109] (the points d*G and -d*G share the same x-coordinate), therefore you already have the DPs for a doubled range.
MrFreeDragon
Sr. Member
Activity: 443
Merit: 350
May 30, 2020, 04:09:33 PM
#554

Quote from: sssergy2705
-snip-
Could you explain how you calculate the coordinates? Is there a tool for this? I know how to add and multiply points, but not the opposite.

If you know how to add, it is enough to perform the subtraction:
M = P - Q = P + (-Q). Here the point M is point P minus point Q, which is the same as point P plus the point (-Q). So you perform an addition of the points P and (-Q).

If the point Q has coordinates (Qx, Qy), then the point (-Q) has coordinates (Qx, modulo - Qy), where the modulo is the field prime.

Now you can easily perform a subtraction:
M = P - Q = (Px, Py) - (Qx, Qy) = (Px, Py) + (Qx, modulo - Qy)


HardwareCollector
Member
Activity: 144
Merit: 10
May 30, 2020, 04:20:39 PM
Last edit: May 30, 2020, 04:47:24 PM by HardwareCollector
#555

Quote from: BitCrack
How did you manage to get it down to 16 bytes per point?

@BitCrack

I am storing only 8 bytes of the x-coordinate plus lookup information (8 bytes with the hash-table overhead included) on how to retrieve the rest of the information in the event of a collision. This is accomplished by using a custom version of Google's Sparse_Hash_Map in a distributed setup. The cost is network bandwidth, but all the clients are on a 1 GigE network.

 https://github.com/sparsehash/sparsehash
HardwareCollector
Member
Activity: 144
Merit: 10
May 30, 2020, 04:26:11 PM
#556

Quote from: WanderingPhilospher
Hardware, how long are you running your precomp for? Do you have a set # of DPs or a percentage? 2/3 ops?
Quote from: HardwareCollector
The goal is ~512GiB worth of precomputed points (at 16 bytes/point).
Quote from: MrFreeDragon
How many bytes do you use for the x-coordinate and how many for the distance?

For the whole distance in a 109-bit range we need 109/8 ≈ 14 bytes. If you allocate less space, you have to brute-force the remaining bytes/bits once a collision is found.
If you have 16 bytes of space per point, you have to split it somehow between the x-coordinate and the distance.

@MrFreeDragon

Please see my previous comment: only 8 bytes of the x-coordinate are needed, plus ~4 bytes of lookup information.
arulbero
Legendary
Activity: 1938
Merit: 2080
May 30, 2020, 04:32:05 PM
#557

Quote from: BitCrack
How did you manage to get it down to 16 bytes per point?
Quote from: HardwareCollector
@BitCrack
I am storing only 8 bytes of the x-coordinate plus lookup information (8 bytes with the hash-table overhead included) on how to retrieve the rest of the information in the event of a collision. This is accomplished by using a custom version of Google's Sparse_Hash_Map in a distributed setup. The cost is network bandwidth, but all the clients are on a 1 GigE network.

https://github.com/sparsehash/sparsehash


Maybe with a Bloom filter you can save more space ...
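
For what it's worth, a minimal Bloom-filter sketch (illustrative only, not from any program in this thread); the catch is that a Bloom filter only answers membership, so a hit gives no distance and may be a false positive, and the full record must still be fetched to confirm a collision:

Code:
# Minimal Bloom-filter sketch (illustrative only): membership test for DP
# x-coordinates at a few bits per point instead of 16 bytes. A positive hit
# may be a false positive and carries no distance, so the full record must
# still be fetched from slower storage to confirm a collision.
import hashlib

class Bloom:
    def __init__(self, n_bits, n_hashes):
        self.bits = bytearray((n_bits + 7) // 8)
        self.n_bits, self.n_hashes = n_bits, n_hashes

    def _positions(self, item):
        for i in range(self.n_hashes):
            h = hashlib.sha256(bytes([i]) + item).digest()
            yield int.from_bytes(h[:8], "big") % self.n_bits

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos >> 3] |= 1 << (pos & 7)

    def __contains__(self, item):
        return all(self.bits[pos >> 3] >> (pos & 7) & 1
                   for pos in self._positions(item))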
sssergy2705
Copper Member
Jr. Member
Activity: 205
Merit: 1
May 30, 2020, 04:58:46 PM
#558


Quote from: MrFreeDragon
If you know how to add, it is enough to perform the subtraction:
M = P - Q = P + (-Q). So you perform an addition of the points P and (-Q).

If the point Q has coordinates (Qx, Qy), then the point (-Q) has coordinates (Qx, modulo - Qy).

Now you can easily perform a subtraction:
M = P - Q = (Px, Py) - (Qx, Qy) = (Px, Py) + (Qx, modulo - Qy)

Perhaps there is an implementation of this in C++ or Python?
arulbero
Legendary
Activity: 1938
Merit: 2080
May 30, 2020, 05:12:19 PM
Last edit: May 30, 2020, 05:24:54 PM by arulbero
#559


Quote from: sssergy2705
-snip-
Perhaps there is an implementation of this in C++ or Python?

Python:
Code:
p = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEFFFFFC2F  # field prime
n = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141  # curve order


def inv(a, p):
    # modular inverse via the extended Euclidean algorithm
    u, v = a % p, p
    x1, x2 = 1, 0
    while u != 1:
        q = v // u
        r = v - q * u
        x = x2 - q * x1
        v = u
        u = r
        x2 = x1
        x1 = x
    return x1 % p


def add(ax, ay, bx, by):  # A + B -> C
    bxax_inv = inv(bx - ax, p)          # 1/(x2-x1)
    m = ((by - ay) * bxax_inv) % p      # m = (y2-y1)/(x2-x1)

    cx = (m * m - ax - bx) % p
    cy = (m * (ax - cx) - ay) % p

    return cx, cy


def sub(ax, ay, bx, by):  # A - B -> C
    bxax_inv = inv(bx - ax, p)          # 1/(x2-x1)
    m = ((p - by - ay) * bxax_inv) % p  # m = (-y2-y1)/(x2-x1)

    cx = (m * m - ax - bx) % p
    cy = (m * (ax - cx) - ay) % p

    return cx, cy
Etar
Sr. Member
Activity: 634
Merit: 312
May 30, 2020, 05:22:22 PM
#560

Quote from: sssergy2705
-snip-
Perhaps there is an implementation of this in C++ or Python?

A negative point is the same point, only with a negated Y coordinate.
In Python I use (operator.neg(point_y)) % p, and then the ordinary point addition.