farou9
Newbie
Offline
Activity: 89
Merit: 0
April 24, 2025, 10:56:20 PM
kTimesG, any insights on a solution for my problem?

Let's say you build that database with 1 billion points, from G to 1B * G (likely a 50 GB or so SQLite database, since you want to index by X[0:2]). Then what? You weren't clear what you want to do with it. If you want to break 135 with this, then you should forget this plan and think again.

I was crystal clear on the purpose of it, but I haven't tested my method (I know it's not practical) with the DB yet.
kTimesG
April 24, 2025, 11:06:13 PM
I was crystal clear on the purpose of it, but I haven't tested my method (I know it's not practical) with the DB yet.

So if it's not practical, why do you want to conduct it? For fun? I think there are more atoms in the Milky Way than the number of random tries you'd need to make when using a 1 billion window to find a key in Puzzle 135. Is it not a linear search, basically? But instead of having one key, you have a 1 B window? Do you think that helps?
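The scale of that objection is easy to sanity-check with a few lines of Python (a back-of-envelope sketch, not anyone's actual tooling): Puzzle 135's keys lie in [2^134, 2^135), so one random try with a 1-billion-key lookup window hits with probability about 10^9 / 2^134.

```python
# Back-of-envelope for Puzzle 135: the search interval holds 2**134 candidate keys.
interval = 2**134
window = 10**9                        # one DB lookup covers a 1-billion-key window

p_hit = window / interval             # chance one random try lands inside the key's window
expected_tries = interval // window   # expected tries of a memoryless random search

print(f"per-try probability ~ {p_hit:.3e}")
print(f"expected tries      ~ 2^{expected_tries.bit_length() - 1}")
```

With these numbers the expected count is on the order of 2^104 tries, which is what makes the plan a (very long) linear search.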
Off the grid, training pigeons to broadcast signed messages.
farou9
Newbie
Offline
Activity: 89
Merit: 0
April 24, 2025, 11:18:57 PM
I was crystal clear on the purpose of it, but I haven't tested my method (I know it's not practical) with the DB yet.

So if it's not practical, why do you want to conduct it? For fun? I think there are more atoms in the Milky Way than the number of random tries you'd need to make when using a 1 billion window to find a key in Puzzle 135. Is it not a linear search, basically? But instead of having one key, you have a 1 B window? Do you think that helps?

Answer me this one question: what is the probability of hitting a random number that is within a 1B range of half of the target, even though we are taking randoms until R > 2^132 or R > 2^133?
kTimesG
April 24, 2025, 11:29:03 PM
Answer me this one question: what is the probability of hitting a random number that is within a 1B range of half of the target, even though we are taking randoms until R > 2^132 or R > 2^133?

Yeah, sorry, you lost me. You can just divide by 1 billion to get the inverse of that probability. Not sure what you mean by a half point between the target and whatever other point. That one cannot be computed, otherwise ECC would be broken. Because k can be either odd or even, any halving may end up with a scalar on the other side of the curve.
farou9
Newbie
Offline
Activity: 89
Merit: 0
April 24, 2025, 11:31:45 PM
Answer me this one question: what is the probability of hitting a random number that is within a 1B range of half of the target, even though we are taking randoms until R > 2^132 or R > 2^133?

Yeah, sorry, you lost me. You can just divide by 1 billion to get the inverse of that probability. Not sure what you mean by a half point between the target and whatever other point. That one cannot be computed, otherwise ECC would be broken. Because k can be either odd or even, any halving may end up with a scalar on the other side of the curve.

I mean that Ps + Pr = P1 and Pt + (-Pr) = P2. What is the probability that P2 + (-P1) = Q, with Q in a range of 1 billion? And yes, I know how that dividing works, but something weird happens when you keep dividing: you could say we get new automatic subranges.
kTimesG
April 24, 2025, 11:49:18 PM
I mean that Ps + Pr = P1 and Pt + (-Pr) = P2. What is the probability that P2 + (-P1) = Q, with Q in a range of 1 billion?
And yes, I know how that dividing works, but something weird happens when you keep dividing: you could say we get new automatic subranges.
You don't know where Pt is, so when you make that "difference" you might as well go straight to the left of Ps, exiting the search interval, instead of "shrinking" the current range into a subrange.
farou9
Newbie
Offline
Activity: 89
Merit: 0
April 25, 2025, 12:07:12 AM
I mean that Ps + Pr = P1 and Pt + (-Pr) = P2. What is the probability that P2 + (-P1) = Q, with Q in a range of 1 billion?
And yes, I know how that dividing works, but something weird happens when you keep dividing: you could say we get new automatic subranges.

You don't know where Pt is, so when you make that "difference" you might as well go straight to the left of Ps, exiting the search interval, instead of "shrinking" the current range into a subrange.

Yes, but even if the two pass each other, if that interval of passing is within the 1B range we will still get the X value, because it will be its inverse. I am not looking to shrink anything or subrange it; rather than the collision the kangaroos look for, I am looking for a space I already have the scalar of.
kTimesG
April 25, 2025, 12:55:01 AM
Yes, but even if the two pass each other, if that interval of passing is within the 1B range we will still get the X value, because it will be its inverse.
I am not looking to shrink anything or subrange it; rather than the collision the kangaroos look for, I am looking for a space I already have the scalar of.

Got it. You look up by X, so you catch either Y or -Y: points of 2 billion scalars. But it still sounds like a chance of 1 billion in 2**133, since there's nothing that increases your chances from one try to the next. If you pick random scalars you'll have a greater and greater problem with already-tried scalars, more and more often after around 2**66 tries. You're better off with an actual BSGS, which sounds like what you're trying to emulate a little. Some variants already handle the Y/-Y optimization to lower the complexity. But you're still looking at 2**133 / 1 billion total steps; however, this is a guaranteed upper bound, unlike your random strategy.
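The "around 2**66 tries" remark is the birthday bound: with N equally likely values, repeats among independent random draws become likely after roughly sqrt(N) draws (sqrt(2**133) is about 2**66.5). A toy simulation on a much smaller N (an illustrative sketch with made-up parameters) shows the square-root scaling:

```python
import math
import random

def draws_until_repeat(n_buckets, rng):
    """Draw uniformly with replacement until the first repeat; return the draw count."""
    seen = set()
    while True:
        x = rng.randrange(n_buckets)
        if x in seen:
            return len(seen) + 1
        seen.add(x)

rng = random.Random(1)
N = 1_000_000
trials = [draws_until_repeat(N, rng) for _ in range(2000)]
mean = sum(trials) / len(trials)

# The expected first-repeat point is ~ sqrt(pi * N / 2), i.e. ~1253 for N = 10**6.
print(f"observed mean: {mean:.0f}, sqrt(pi*N/2) = {math.sqrt(math.pi * N / 2):.0f}")
```

Scaled up to N = 2**133, the same curve puts noticeable re-drawing of already-tried scalars around 2**66 picks, which is why sequential BSGS steps waste nothing while random picks eventually do.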
farou9
Newbie
Offline
Activity: 89
Merit: 0
April 25, 2025, 01:18:38 AM
Yes, but even if the two pass each other, if that interval of passing is within the 1B range we will still get the X value, because it will be its inverse.
I am not looking to shrink anything or subrange it; rather than the collision the kangaroos look for, I am looking for a space I already have the scalar of.

Got it. You look up by X, so you catch either Y or -Y: points of 2 billion scalars. But it still sounds like a chance of 1 billion in 2**133, since there's nothing that increases your chances from one try to the next. If you pick random scalars you'll have a greater and greater problem with already-tried scalars, more and more often after around 2**66 tries. You're better off with an actual BSGS, which sounds like what you're trying to emulate a little. Some variants already handle the Y/-Y optimization to lower the complexity. But you're still looking at 2**133 / 1 billion total steps; however, this is a guaranteed upper bound, unlike your random strategy.

Yes, exactly: Y or -Y. Well, I don't have the computing power to make 2**66 tries. I have nothing to lose; I can only try.
Bram24732
Member

Offline
Activity: 322
Merit: 28
April 25, 2025, 04:17:46 AM
Another mystery solved!
The ghost turned out to be an imposter.

What happens if in the Scooby Doo method you use 4095 instead of 5000? I said 4095 because it is 16^3 - 1, since 16^3 is the number of prefix combinations.

=== FINAL RESULTS ===
Wins: Scooby_Doo: 242 | Prefix: 245 | Ties: 13
Total Checks: Scooby_Doo: 24659497 | Prefix: 25177201
Total Time: Scooby_Doo: 319.437524 seconds | Prefix: 312.082084 seconds
Averages (Total Time / Wins): Scooby_Doo: 1.319990 seconds/victory | Prefix: 1.273804 seconds/victory
Checks per Win: Scooby_Doo: 101898.75 checks/win | Prefix: 102764.09 checks/win

=== FINAL RESULTS ===
Wins: Scooby_Doo: 242 | Prefix: 243 | Ties: 15
Total Checks: Scooby_Doo: 24255742 | Prefix: 24184105
Total Time: Scooby_Doo: 315.602728 seconds | Prefix: 303.996729 seconds
Averages (Total Time / Wins): Scooby_Doo: 1.304144 seconds/victory | Prefix: 1.251015 seconds/victory
Checks per Win: Scooby_Doo: 100230.34 checks/win | Prefix: 99523.07 checks/win

Maybe generating a hash and looking at the first 3 hex digits (12 bits) is basically the same as generating a random number between 1 and 4096?

Your rigged results are hilarious. Please let us know how they are rigged.
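The quoted question can be checked directly: if digests are uniform, the number of attempts until the first 3 hex digits of a hash match a fixed target is geometric with p = 1/4096, i.e. the same distribution as drawing numbers from 1..4096 until a fixed value appears. A quick sketch (hypothetical setup, not the Scooby Doo script itself):

```python
import hashlib
import random

def tries_until_prefix_match(target_prefix, rng):
    """Hash random inputs until the first 3 hex digits equal target_prefix."""
    tries = 0
    while True:
        tries += 1
        h = hashlib.sha256(rng.randbytes(16)).hexdigest()
        if h[:3] == target_prefix:
            return tries

rng = random.Random(7)
runs = [tries_until_prefix_match("abc", rng) for _ in range(600)]
mean = sum(runs) / len(runs)
# A geometric distribution with p = 1/4096 has mean 4096.
print(f"mean tries: {mean:.0f} (geometric expectation: 4096)")
```

If the observed mean sits near 4096, the 12-bit prefix test carries no information beyond a uniform draw from 1..4096, which is what the "rigged" results above are dancing around.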
I solved 67 and 68 using custom software distributing the load across ~25k GPUs. 4090 stocks speeds : ~8.1Bkeys/sec. Don’t challenge me technically if you know shit about fuck, I’ll ignore you. Same goes if all you can do is LLM reply.
nomachine
April 25, 2025, 06:36:23 AM
Got bored and trying to waste time.
Have no idea how to improve it!
Yo, ditch that OpenSSL SHA nonsense and hop on the AVX2 train—4x or 8x, baby! Now, imagine this: you're forcing some idiotic AI bro to crunch 4x or 8x parallel SHA-256 and RIPEMD-160 computations. At first, it's all chill, but then BAM! The AI starts glitchin' out like a caffeinated squirrel on a sugar rush. It’s trying to compute so hard, it forgets its own name, starts speaking in binary gibberish, and accidentally sends you memes instead of results. True chaos, man. You just turned a high-tech algorithm into a hot mess! 
BTC: bc1qdwnxr7s08xwelpjy3cc52rrxg63xsmagv50fa8
AlanJohnson
Member

Offline
Activity: 185
Merit: 11
April 25, 2025, 06:52:19 AM
caffeinated squirrel on a sugar rush.
A very dangerous being.
Akito S. M. Hosana
Jr. Member
Offline
Activity: 420
Merit: 8
April 25, 2025, 06:52:32 AM
True chaos, man. You just turned a high-tech algorithm into a hot mess!

I end up with a hot mess even with simpler Python scripts. When I use AI, it sometimes doesn't know what to do and throws out random code. It has even happened that it deleted the entire code.
kTimesG
April 25, 2025, 08:59:52 AM Last edit: April 25, 2025, 09:32:41 AM by kTimesG
Yes, exactly: Y or -Y.
Well, I don't have the computing power to make 2**66 tries.
I have nothing to lose; I can only try.

2**66 was just a quick estimation of around where repeats start to show up, since you said you pick random 2**133 numbers. You'd still need to make ~2**102 unique random selections (or, simply, sequential BSGS steps) until you find the key.

What happens if in the Scooby Doo method you use 4095 instead of 5000? Maybe generating a hash and looking at the first 3 hex digits (12 bits) is basically the same as generating a random number between 1 and 4096?
Your rigged results are hilarious. Please let us know how they are rigged.

H160 is rigged. If you mess with it, trying to prove it's uniform, you break the theory of relativity. Why do you even bother anymore? There are only a few checkboxes in a denialist's CV that weren't marked yet.
Benjade
Jr. Member
Offline
Activity: 40
Merit: 1
April 25, 2025, 10:47:59 AM
Hello all key hunters. For those interested, I just released KeyQuest V1 on GitHub. It's a random or hybrid brute-forcer created from Cyclone's idea and using some of its optimized includes. If you enjoy working with prefixes like me, then this will be perfect for you. More info at https://github.com/Benjade/KeyQuest. Happy hunting!
nomachine
April 25, 2025, 10:58:20 AM
Got bored and trying to waste time.
Have no idea how to improve it!
Yo, ditch that OpenSSL SHA nonsense and hop on the AVX2 train—4x or 8x, baby! Now, imagine this: you're forcing some idiotic AI bro to crunch 4x or 8x parallel SHA-256 and RIPEMD-160 computations. At first, it's all chill, but then BAM! The AI starts glitchin' out like a caffeinated squirrel on a sugar rush. It’s trying to compute so hard, it forgets its own name, starts speaking in binary gibberish, and accidentally sends you memes instead of results. True chaos, man. You just turned a high-tech algorithm into a hot mess!

The squirrel got speed loss and started giving random garbage after AVX2. The OpenSSL SHA "nonsense" still gives valid values at 1.5 G/s. Bro, you don't even know what you're talkin' about. libsecp256k1 wasn't made for low-level batched optimizations like that. You gotta use JeanLucPons' SECP256K1 if you ever wanna hit 50M keys per second on any CPU.
Akito S. M. Hosana
Jr. Member
Offline
Activity: 420
Merit: 8
April 25, 2025, 11:07:41 AM
OpenSSL SHA nonsense still gives valid values with 1.5G/s speed.
I have a script that shows 20 G/s speed. Made with AI, in Python.
kTimesG
April 25, 2025, 11:10:11 AM
Bro, you don't even know what you're talkin' about. libsecp256k1 wasn't made for low-level batched optimizations like that. You gotta use JeanLucPons' SECP256K1 if you ever wanna hit 50M keys per second on any CPU.
Actually, JLP is the slow one here. Like, at least 50% slower. libsecp256k1's primitives can be easily used to implement batched addition, or anything else. I already posted the code to do this a while ago.
nomachine
April 25, 2025, 11:16:49 AM Last edit: April 25, 2025, 11:32:09 AM by nomachine
Bro, you don't even know what you're talkin' about. libsecp256k1 wasn't made for low-level batched optimizations like that. You gotta use JeanLucPons' SECP256K1 if you ever wanna hit 50M keys per second on any CPU.
Actually, JLP is the slow one here. Like, at least 50% slower. libsecp256k1's primitives can be easily used to implement batched addition, or anything else. I already posted the code to do this a while ago.

Alright, can you hook this part up for me using libsecp256k1?

// Worker function
void worker(int threadId, __uint128_t threadRangeStart, __uint128_t threadRangeEnd) {
    alignas(32) uint8_t localPubKeys[HASH_BATCH_SIZE][33];
    alignas(32) uint8_t localHashResults[HASH_BATCH_SIZE][20];
    alignas(32) int pointIndices[HASH_BATCH_SIZE];

    __m256i target16 = _mm256_loadu_si256(reinterpret_cast<const __m256i*>(TARGET_HASH160_RAW.data()));

    alignas(32) Point plusPoints[POINTS_BATCH_SIZE];
    alignas(32) Point minusPoints[POINTS_BATCH_SIZE];
    for (int i = 0; i < POINTS_BATCH_SIZE; i++) {
        Int tmp; tmp.SetInt32(i);
        plusPoints[i] = secp->ComputePublicKey(&tmp);
        minusPoints[i] = plusPoints[i];
        minusPoints[i].y.ModNeg();
    }

    alignas(32) Int deltaX[POINTS_BATCH_SIZE];
    IntGroup modGroup(POINTS_BATCH_SIZE);
    alignas(32) Int pointBatchX[fullBatchSize];
    alignas(32) Int pointBatchY[fullBatchSize];

    secp256k1_context* ctx = secp256k1_context_create(SECP256K1_CONTEXT_SIGN);

    __uint128_t currentKey = threadRangeStart;
    while (currentKey <= threadRangeEnd) {
        int localBatchCount = 0;

        // Generate public keys in batches
        for (; localBatchCount < HASH_BATCH_SIZE && currentKey <= threadRangeEnd; ++localBatchCount, ++currentKey) {
            uint8_t priv[32];
            uint128ToPrivKey(currentKey, priv);

            uint8_t startPoint[65];
            if (!derivePubkey(ctx, priv, startPoint, true)) {
                std::cerr << "Failed to derive public key.\n";
                continue;
            }

            // Use custom arithmetic for batch processing.
            // NOTE (bug): a compressed key carries only X; copying the same bytes
            // into Y below is wrong -- Y would have to be recovered from X.
            Point startPointObj;
            startPointObj.x.SetBytes(startPoint + 1, 32);
            startPointObj.y.SetBytes(startPoint + 1, 32);

            for (int i = 0; i < POINTS_BATCH_SIZE; i += 4) {
                deltaX[i].ModSub(&plusPoints[i].x, &startPointObj.x);
                deltaX[i+1].ModSub(&plusPoints[i+1].x, &startPointObj.x);
                deltaX[i+2].ModSub(&plusPoints[i+2].x, &startPointObj.x);
                deltaX[i+3].ModSub(&plusPoints[i+3].x, &startPointObj.x);
            }
            modGroup.Set(deltaX);
            modGroup.ModInv();

            for (int i = 0; i < POINTS_BATCH_SIZE; i++) {
                Int deltaY; deltaY.ModSub(&plusPoints[i].y, &startPointObj.y);
                Int slope; slope.ModMulK1(&deltaY, &deltaX[i]);
                Int slopeSq; slopeSq.ModSquareK1(&slope);
                pointBatchX[i].Set(&startPointObj.x);
                pointBatchX[i].ModAdd(&slopeSq);
                pointBatchX[i].ModSub(&plusPoints[i].x);
                Int diffX; diffX.Set(&startPointObj.x);
                diffX.ModSub(&pointBatchX[i]);
                diffX.ModMulK1(&slope);
                pointBatchY[i].Set(&startPointObj.y);
                pointBatchY[i].ModNeg();
                pointBatchY[i].ModAdd(&diffX);
            }

            for (int i = 0; i < POINTS_BATCH_SIZE; i++) {
                Int deltaY; deltaY.ModSub(&minusPoints[i].y, &startPointObj.y);
                Int slope; slope.ModMulK1(&deltaY, &deltaX[i]);
                Int slopeSq; slopeSq.ModSquareK1(&slope);
                pointBatchX[POINTS_BATCH_SIZE + i].Set(&startPointObj.x);
                pointBatchX[POINTS_BATCH_SIZE + i].ModAdd(&slopeSq);
                pointBatchX[POINTS_BATCH_SIZE + i].ModSub(&minusPoints[i].x);
                Int diffX; diffX.Set(&startPointObj.x);
                diffX.ModSub(&pointBatchX[POINTS_BATCH_SIZE + i]);
                diffX.ModMulK1(&slope);
                pointBatchY[POINTS_BATCH_SIZE + i].Set(&startPointObj.y);
                pointBatchY[POINTS_BATCH_SIZE + i].ModNeg();
                pointBatchY[POINTS_BATCH_SIZE + i].ModAdd(&diffX);
            }

            for (int i = 0; i < fullBatchSize && localBatchCount < HASH_BATCH_SIZE; i++) {
                Point tempPoint;
                tempPoint.x.Set(&pointBatchX[i]);
                tempPoint.y.Set(&pointBatchY[i]);
                localPubKeys[localBatchCount][0] = tempPoint.y.IsEven() ? 0x02 : 0x03;
                for (int j = 0; j < 32; j++) {
                    localPubKeys[localBatchCount][1 + j] = pointBatchX[i].GetByte(31 - j);
                }
                pointIndices[localBatchCount] = i;
                localBatchCount++;
            }
        }

I really did try to get it working… but yeah, I totally biffed it.
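The trick the two inner loops above rely on: Q and -Q share the same x coordinate, so a single modular inverse of (x1 - x2) yields the slopes for both P+Q and P-Q. A plain-integer sketch of that identity on secp256k1 (illustrative only, standard curve constants, no library):

```python
# secp256k1 field prime and generator point (standard constants)
P = 2**256 - 2**32 - 977
Gx = 0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798
Gy = 0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8
G = (Gx, Gy)

def dbl(pt):
    """Affine point doubling (curve a = 0)."""
    x1, y1 = pt
    lam = (3 * x1 * x1) * pow(2 * y1, -1, P) % P
    x3 = (lam * lam - 2 * x1) % P
    return (x3, (lam * (x1 - x3) - y1) % P)

def add_sub(p1, p2):
    """Compute p1+p2 and p1-p2 with a single inversion: -p2 has the same x as p2."""
    (x1, y1), (x2, y2) = p1, p2
    inv = pow(x1 - x2, -1, P)            # the one expensive field inversion
    out = []
    for y2s in (y2, (-y2) % P):          # p2, then -p2 -- same denominator both times
        lam = (y1 - y2s) * inv % P
        x3 = (lam * lam - x1 - x2) % P
        out.append((x3, (lam * (x1 - x3) - y1) % P))
    return out

def on_curve(pt):
    x, y = pt
    return (y * y - x * x * x - 7) % P == 0

G2 = dbl(G)
s, d = add_sub(G2, G)                    # s = 2G + G, d = 2G - G
assert d == G and on_curve(s)
```

Combined with a batch inversion over all the deltaX values (as `modGroup.ModInv()` does above), each point in the batch costs only multiplications, which is where the throughput comes from.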
kTimesG
April 25, 2025, 01:02:06 PM
libsecp256k1's primitives can be easily used to implement batched addition, or anything else.
Alright, can you hook this part up for me using libsecp256k1?

Hooking up libsecp256k1

Disclaimer: some biffing may be included - also known as the "cat walked over the keyboard" effect. This code was definitely tested (as in - no Red Alerts in the IDE - definitely the best indicator of everything working perfectly).

1. Build the constant hook jumpers! What we want: happiness multiples of G up to whatever.

static void compute_const_points(
    secp256k1_ge * out,
    uint32_t numPoints
) {
    // Note: this uses a naive method, and should only be called once.
    // If numPoints is large, this method can be HEAVILY optimized.
    secp256k1_gej tmp;

    *out = secp256k1_ge_const_g;
    secp256k1_gej_set_ge(&tmp, out++);

    for (uint32_t i = 1; i < numPoints; i++) {
        secp256k1_gej_add_ge_var(&tmp, &tmp, &secp256k1_ge_const_g, NULL);
        secp256k1_ge_set_gej(out++, &tmp);
    }
}
2. Now we're already deep inside the secp256k1 inner guts. Let's delve deeper, but strategically, in order not to become obsessed maniacs over the calling conventions.

#define FE_INV(r, x)     secp256k1_fe_impl_inv_var(&(r), &(x))
#define FE_MUL(r, a, b)  secp256k1_fe_mul_inner((r).n, (a).n, (b).n)
#define FE_SQR(r, x)     secp256k1_fe_sqr_inner((r).n, (x).n)
#define FE_ADD(r, d)     secp256k1_fe_impl_add(&(r), &(d))
#define FE_NEG(r, a, m)  secp256k1_fe_impl_negate_unchecked(&(r), &(a), (m))
#define FE_IMUL(r, a)    secp256k1_fe_impl_mul_int_unchecked(&(r), (a))
Now we have the fastest paths to play with field mines elements.

3. Now we're already shit in into the ECC multiverse, but wait... what are we even trying to do? I hope nobody forgot - we're trying to do batched addition of group elements. And do it faster than light, if possible (not proven). Let's focus on doing it to a single point together with the constant points array we already cooked up earlier, just to demonstrate. Let's build up a really cool batch addition manager that handles the Space aspect (also called the RAM we require). Because the n00bz need to know: batched addition requires some additional buffers to hold stuff. Advanced professionals haters might say we're using too much memory here. And they're right. Except they're wrong, because more memory helps with doing some things faster, or even in parallel!

static void batch_addition_wrapper(
    secp256k1_ge * geMiddle,              // a single point
    const secp256k1_ge * const_points,
    U32 num_const_points,
    U32 num_repeats
) {
    size_t tree_sz = (num_const_points * 2 - 1) * sizeof(secp256k1_fe);
    // printf("Allocating %zu bytes for tree\n", tree_sz);

    secp256k1_fe * xz_1 = malloc(tree_sz);
    if (NULL == xz_1) return;

    secp256k1_fe * xz_2 = malloc(tree_sz);
    if (NULL == xz_2) return;             // NB: leaks xz_1 on this early-out path

    for (uint32_t loop = 0; loop < num_repeats; loop++) {
        batch_addition(geMiddle, const_points, xz_1, xz_2, num_const_points);
    }

    free(xz_1);
    free(xz_2);
}
Now we're talking code, apparently. So many stars! However the magic is missing. For who made it so far: you're a hero! Quick reminder: we're trying to add a lot of points to a single point, left and right, if not even up and down. The 3D version coming soon.

4. THE MAGIC METHOD

Phew, so far so good, but where's the actual addition? Maybe smth like this may work, who knows? We didn't get here just to get dumped with a "write your own, I never share code" lame comment, right?

static void batch_addition(
    secp256k1_ge * ge,            // a single point
    const secp256k1_ge * jp,
    secp256k1_fe * xz,            // product tree leafs + parent nodes
    secp256k1_fe * xzOut,
    U32 batch_size
) {
    secp256k1_fe t1, t2, t3;
    S64 i;

    for (i = 0; i < batch_size; i++) {
        xz[i] = ge[0].x;
        FE_NEG(t1, jp[i].x, 1);           // T1 = -x2
        FE_ADD(xz[i], t1);                // XZ[i] = x1 - x2
    }

    // up-sweep inversion tree [SIMD friendly]
    for (i = 0; i < batch_size - 1; i++) {
        FE_MUL(xz[batch_size + i], xz[i * 2], xz[i * 2 + 1]);
    }

    FE_INV(xzOut[batch_size * 2 - 2], xz[2 * batch_size - 2]);

    // down-sweep inversion tree
    for (i = batch_size - 2; i >= 0; i--) {
        FE_MUL(xzOut[i * 2], xz[i * 2 + 1], xzOut[batch_size + i]);
        FE_MUL(xzOut[i * 2 + 1], xz[i * 2], xzOut[batch_size + i]);
    }

    // TODO - this should be returned one by one, this is for demo only
    secp256k1_ge result;
    secp256k1_ge * _a = &result;
    const secp256k1_fe * _inv = xzOut;

    for (i = 0; i < batch_size; i++) {
        const secp256k1_ge * _b = &jp[i];

        // 1. do P + Q
        result = ge[0];
        FE_NEG(t1, _b->y, 1);             // T1 = -y2
        FE_ADD(_a->y, t1);                // Y1 = y1 - y2                 m = max_y + 2(1)
        FE_MUL(_a->y, _a->y, *_inv);      // Y1 = m = (y1 - y2) / (x1 - x2)  m = 1
        FE_SQR(t2, _a->y);                // T2 = m**2                    m = 1
        FE_NEG(t3, _b->x, 1);             // T3 = -x2
        FE_ADD(t2, t3);                   // T2 = m**2 - x2               m = 1 + 2(1) = 3(2)
        FE_NEG(_a->x, _a->x, 1);          // X1 = -x1                     m = max_x + 1
        FE_ADD(_a->x, t2);                // X1 = x3 = m**2 - x1 - x2     max_x = 3 + max_x + 1
        secp256k1_fe_normalize_weak(&_a->x);

        FE_NEG(t2, _a->x, 1);             // T2 = -x3                     m = 1 + 1 = 2
        FE_ADD(t2, _b->x);                // T2 = x2 - x3                 m = 2 + 1 = 3
        FE_MUL(_a->y, _a->y, t2);         // Y1 = m * (x2 - x3)           m = 1
        FE_ADD(_a->y, t1);                // Y1 = y3 = m * (x2 - x3) - y2 m = 1 + 2 = 3
        secp256k1_fe_normalize_weak(&_a->y);

        // TODO - consume first result = P + Q

        // 2. Do P - Q using the same inverse
        result = ge[0];
        FE_ADD(_a->y, _b->y);             // Y1 = y1 + y2                 m = max_y + 2(1)
        FE_MUL(_a->y, _a->y, *_inv);      // Y1 = m = (y1 + y2) / (x1 - x2)  m = 1
        FE_SQR(t2, _a->y);                // T2 = m**2                    m = 1
        FE_NEG(t3, _b->x, 1);             // T3 = -x2
        FE_ADD(t2, t3);                   // T2 = m**2 - x2               m = 1 + 2(1) = 3(2)
        FE_NEG(_a->x, _a->x, 1);          // X1 = -x1                     m = max_x + 1
        FE_ADD(_a->x, t2);                // X1 = x3 = m**2 - x1 - x2     max_x = 3 + max_x + 1
        secp256k1_fe_normalize_weak(&_a->x);

        FE_NEG(t2, _a->x, 1);             // T2 = -x3                     m = 1 + 1 = 2
        FE_ADD(t2, _b->x);                // T2 = x2 - x3                 m = 2 + 1 = 3
        FE_MUL(_a->y, _a->y, t2);         // Y1 = m * (x2 - x3)           m = 1
        FE_ADD(_a->y, _b->y);             // Y1 = y3 = m * (x2 - x3) + y2 m = 1 + 2 = 3
        secp256k1_fe_normalize_weak(&_a->y);

        // TODO - consume second result = P - Q

        ++_inv;
    }
}
Damn, this one looks like Chinese. WTF is even happening here? Basically a dry run useless cycle of operations, apparently. And you are 100% correct. In fact - do not even attempt to consider this code as production ready.

5. But... does it work? A: God knows. Maybe it does, maybe not. Maybe it regresses itself into Curve25519 by accident. You may also want to fully normalize the resulting GE back to, IDK... 33-byte public keys? Here's one way to do it, and I'm out.

secp256k1_fe_normalize_var(x);    // final normalization of X (or Y)

// Convert X to 32 bytes... just for fun, or when you're done with the arithmetical nonsense
secp256k1_fe_to_storage(...);

// Check Y parity (no need to convert it) to set the first byte of the final compressed public key

That's it, basically.
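The up-sweep/down-sweep pair above is a product-tree variant of Montgomery's batch-inversion trick: one real field inversion amortized over the whole batch. A small Python sketch of the same idea in its simpler sequential prefix-product form (illustrative, not the tree layout used above):

```python
P = 2**256 - 2**32 - 977   # secp256k1 field prime

def batch_inverse(xs, p=P):
    """Invert every nonzero element of xs with a single modular inversion."""
    n = len(xs)
    prefix = [1] * (n + 1)
    for i, x in enumerate(xs):              # up-sweep: running products
        prefix[i + 1] = prefix[i] * x % p
    inv_all = pow(prefix[n], -1, p)         # the only real inversion
    out = [0] * n
    for i in range(n - 1, -1, -1):          # down-sweep: peel off one factor at a time
        out[i] = prefix[i] * inv_all % p
        inv_all = inv_all * xs[i] % p
    return out

xs = [3, 7, 12345, 2**200 + 1]
invs = batch_inverse(xs)
assert all(x * ix % P == 1 for x, ix in zip(xs, invs))
```

The tree form trades this strictly sequential chain for log-depth levels of independent multiplications, which is why the C code calls it SIMD friendly.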