Author Topic: Bitcoin puzzle transaction ~32 BTC prize to who solves it  (Read 334658 times)
Benjade
Jr. Member
*
Offline

Activity: 40
Merit: 1


View Profile WWW
April 25, 2025, 10:47:59 AM
 #9461

Hello all key hunters,

For those interested, I just released KeyQuest V1 on GitHub. It's a random or hybrid Bruteforce created from Cyclone's idea and using some of its optimized includes. If you enjoy working with prefixes like me, then this will be perfect for you.

More info at https://github.com/Benjade/KeyQuest

Happy hunting!
nomachine
Full Member
***
Offline

Activity: 742
Merit: 110


View Profile
April 25, 2025, 10:58:20 AM
 #9462

Got bored and trying to waste time.

Have no idea how to improve it!

Yo, ditch that OpenSSL SHA nonsense and hop on the AVX2 train—4x or 8x, baby!

Now, imagine this: you're forcing some idiotic AI bro to crunch 4x or 8x parallel SHA-256 and RIPEMD-160 computations.

At first, it's all chill, but then BAM! The AI starts glitchin' out like a caffeinated squirrel on a sugar rush.

It’s trying to compute so hard, it forgets its own name, starts speaking in binary gibberish, and accidentally sends you memes instead of results. True chaos, man. You just turned a high-tech algorithm into a hot mess!  Grin

Squirrel got a speed loss and started giving random shit after AVX2.
OpenSSL SHA nonsense still gives valid values with 1.5G/s speed.

Bro, you don't even know what you're talkin' about. libsecp256k1 wasn't made for low-level batched optimizations like that. You gotta use JeanLucPons' SECP256K1 if you ever wanna hit 50M keys per second on any CPU.


BTC: bc1qdwnxr7s08xwelpjy3cc52rrxg63xsmagv50fa8
Akito S. M. Hosana
Jr. Member
*
Offline

Activity: 392
Merit: 8


View Profile
April 25, 2025, 11:07:41 AM
 #9463


OpenSSL SHA nonsense still gives valid values with 1.5G/s speed.


I have a script that shows 20G/s speed. Made with AI, in Python.  Tongue
kTimesG
Full Member
***
Offline

Activity: 574
Merit: 198


View Profile
April 25, 2025, 11:10:11 AM
 #9464

Bro, you don't even know what you're talkin' about. libsecp256k1 wasn't made for low-level batched optimizations like that. You gotta use JeanLucPons' SECP256K1 if you ever wanna hit 50M keys per second on any CPU.

Actually, JLP is the slow one here. Like, at least 50% slower.

libsecp256k1's primitives can be easily used to implement batched addition, or anything else. I already posted the code to do this a while ago.
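
For reference, the whole trick boils down to Montgomery batch inversion on top of the field primitives. A minimal sketch, assuming libsecp256k1's internal field API (secp256k1_fe_mul, secp256k1_fe_inv_var) is pulled in via the impl headers; fe_batch_inverse is just an illustrative name, not something the library ships:

Code:
static void fe_batch_inverse(secp256k1_fe *r, const secp256k1_fe *a, size_t n) {
    /* Montgomery's trick: invert n field elements at the cost of a single
       inversion plus 3*(n-1) multiplications. Inputs are assumed normalized. */
    secp256k1_fe acc, t;
    size_t i;

    acc = a[0];
    r[0] = acc;
    for (i = 1; i < n; i++) {                    /* r[i] = a[0] * ... * a[i] */
        secp256k1_fe_mul(&acc, &acc, &a[i]);
        r[i] = acc;
    }

    secp256k1_fe_inv_var(&acc, &acc);            /* the one and only inversion */

    for (i = n - 1; i > 0; i--) {
        secp256k1_fe_mul(&t, &acc, &r[i - 1]);   /* t = 1 / a[i] */
        secp256k1_fe_mul(&acc, &acc, &a[i]);     /* acc = 1 / (a[0] * ... * a[i-1]) */
        r[i] = t;
    }
    r[0] = acc;                                  /* acc is now 1 / a[0] */
}

The tree-based variant posted later in this thread does the same thing with a different memory layout; the math is identical.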

Off the grid, training pigeons to broadcast signed messages.
nomachine
Full Member
***
Offline

Activity: 742
Merit: 110


View Profile
April 25, 2025, 11:16:49 AM
Last edit: April 25, 2025, 11:32:09 AM by nomachine
 #9465

Bro, you don't even know what you're talkin' about. libsecp256k1 wasn't made for low-level batched optimizations like that. You gotta use JeanLucPons' SECP256K1 if you ever wanna hit 50M keys per second on any CPU.

Actually, JLP is the slow one here. Like, at least 50% slower.

libsecp256k1's primitives can be easily used to implement batched addition, or anything else. I already posted the code to do this a while ago.


Alright, can you hook this part up for me using libsecp256k1?

Code:
// Worker function
void worker(int threadId, __uint128_t threadRangeStart, __uint128_t threadRangeEnd) {
    alignas(32) uint8_t localPubKeys[HASH_BATCH_SIZE][33];
    alignas(32) uint8_t localHashResults[HASH_BATCH_SIZE][20];
    alignas(32) int pointIndices[HASH_BATCH_SIZE];

    __m256i target16 = _mm256_loadu_si256(reinterpret_cast<const __m256i*>(TARGET_HASH160_RAW.data()));

    alignas(32) Point plusPoints[POINTS_BATCH_SIZE];
    alignas(32) Point minusPoints[POINTS_BATCH_SIZE];

    for (int i = 0; i < POINTS_BATCH_SIZE; i++) {
        Int tmp;
        tmp.SetInt32(i);
        plusPoints[i] = secp->ComputePublicKey(&tmp);
        minusPoints[i] = plusPoints[i];
        minusPoints[i].y.ModNeg();
    }

    alignas(32) Int deltaX[POINTS_BATCH_SIZE];
    IntGroup modGroup(POINTS_BATCH_SIZE);
    alignas(32) Int pointBatchX[fullBatchSize];
    alignas(32) Int pointBatchY[fullBatchSize];

    secp256k1_context* ctx = secp256k1_context_create(SECP256K1_CONTEXT_SIGN);

    __uint128_t currentKey = threadRangeStart;
    while (currentKey <= threadRangeEnd) {
        int localBatchCount = 0;

        // Generate public keys in batches
        for (; localBatchCount < HASH_BATCH_SIZE && currentKey <= threadRangeEnd; ++localBatchCount, ++currentKey) {
            uint8_t priv[32];
            uint128ToPrivKey(currentKey, priv);

            uint8_t startPoint[65];
            if (!derivePubkey(ctx, priv, startPoint, true)) {
                std::cerr << "Failed to derive public key.\n";
                continue;
            }

            // Use custom arithmetic for batch processing
            Point startPointObj;
            startPointObj.x.SetBytes(startPoint + 1, 32);
            startPointObj.y.SetBytes(startPoint + 1, 32); // BUG: a compressed pubkey carries no Y bytes, so Y is set from the X bytes here

            for (int i = 0; i < POINTS_BATCH_SIZE; i += 4) {
                deltaX[i].ModSub(&plusPoints[i].x, &startPointObj.x);
                deltaX[i+1].ModSub(&plusPoints[i+1].x, &startPointObj.x);
                deltaX[i+2].ModSub(&plusPoints[i+2].x, &startPointObj.x);
                deltaX[i+3].ModSub(&plusPoints[i+3].x, &startPointObj.x);
            }
            modGroup.Set(deltaX);
            modGroup.ModInv();

            for (int i = 0; i < POINTS_BATCH_SIZE; i++) {
                Int deltaY;
                deltaY.ModSub(&plusPoints[i].y, &startPointObj.y);

                Int slope;
                slope.ModMulK1(&deltaY, &deltaX[i]);

                Int slopeSq;
                slopeSq.ModSquareK1(&slope);

                pointBatchX[i].Set(&startPointObj.x);
                pointBatchX[i].ModAdd(&slopeSq);
                pointBatchX[i].ModSub(&plusPoints[i].x);

                Int diffX;
                diffX.Set(&startPointObj.x);
                diffX.ModSub(&pointBatchX[i]);
                diffX.ModMulK1(&slope);

                pointBatchY[i].Set(&startPointObj.y);
                pointBatchY[i].ModNeg();
                pointBatchY[i].ModAdd(&diffX);
            }

            for (int i = 0; i < POINTS_BATCH_SIZE; i++) {
                Int deltaY;
                deltaY.ModSub(&minusPoints[i].y, &startPointObj.y);

                Int slope;
                slope.ModMulK1(&deltaY, &deltaX[i]);

                Int slopeSq;
                slopeSq.ModSquareK1(&slope);

                pointBatchX[POINTS_BATCH_SIZE + i].Set(&startPointObj.x);
                pointBatchX[POINTS_BATCH_SIZE + i].ModAdd(&slopeSq);
                pointBatchX[POINTS_BATCH_SIZE + i].ModSub(&minusPoints[i].x);

                Int diffX;
                diffX.Set(&startPointObj.x);
                diffX.ModSub(&pointBatchX[POINTS_BATCH_SIZE + i]);
                diffX.ModMulK1(&slope);

                pointBatchY[POINTS_BATCH_SIZE + i].Set(&startPointObj.y);
                pointBatchY[POINTS_BATCH_SIZE + i].ModNeg();
                pointBatchY[POINTS_BATCH_SIZE + i].ModAdd(&diffX);
            }

            for (int i = 0; i < fullBatchSize && localBatchCount < HASH_BATCH_SIZE; i++) {
                Point tempPoint;
                tempPoint.x.Set(&pointBatchX[i]);
                tempPoint.y.Set(&pointBatchY[i]);

                localPubKeys[localBatchCount][0] = tempPoint.y.IsEven() ? 0x02 : 0x03;
                for (int j = 0; j < 32; j++) {
                    localPubKeys[localBatchCount][1 + j] = pointBatchX[i].GetByte(31 - j);
                }
                pointIndices[localBatchCount] = i;
                localBatchCount++;
            }
        }

I really did try to get it working… but yeah, I totally biffed it.  Grin

BTC: bc1qdwnxr7s08xwelpjy3cc52rrxg63xsmagv50fa8
kTimesG
Full Member
***
Offline

Activity: 574
Merit: 198


View Profile
April 25, 2025, 01:02:06 PM
 #9466

libsecp256k1's primitives can be easily used to implement batched addition, or anything else.
Alright, can you hook this part up for me using libsecp256k1?

Hooking up libsecp256k1

Disclaimer: some biffing may be included - also known as "cat walked over the keyboard effect". This code was definitely tested (as in - no Red Alerts in the IDE - definitely the best indicator of everything working perfectly).

1. Build the constant hook jumpers!

What we want: multiples of G, up to whatever.

Code:
static void compute_const_points(
    secp256k1_ge * out,
    uint32_t numPoints
) {
    // Note: this uses a naive method, and should only be called once.
    // If numPoints is large, this method can be HEAVILY optimized.

    secp256k1_gej tmp;
    *out = secp256k1_ge_const_g;

    secp256k1_gej_set_ge(&tmp, out++);

    for (uint32_t i = 1; i < numPoints; i++) {
        secp256k1_gej_add_ge_var(&tmp, &tmp, &secp256k1_ge_const_g, NULL);
        secp256k1_ge_set_gej(out++, &tmp);
    }
}

2. Now we're already deep inside the secp256k1 inner guts. Let's delve deeper, but strategically, in order not to become obsessed maniacs over the calling conventions.

Code:
#define FE_INV(r, x)        secp256k1_fe_impl_inv_var(&(r), &(x))
#define FE_MUL(r, a, b)     secp256k1_fe_mul_inner((r).n, (a).n, (b).n)
#define FE_SQR(r, x)        secp256k1_fe_sqr_inner((r).n, (x).n)
#define FE_ADD(r, d)        secp256k1_fe_impl_add(&(r), &(d))
#define FE_NEG(r, a, m)     secp256k1_fe_impl_negate_unchecked(&(r), &(a), (m))
#define FE_IMUL(r, a)       secp256k1_fe_impl_mul_int_unchecked(&(r), (a))

Now we have the fastest paths to play with field elements.

3. Now we're already deep into the ECC multiverse, but wait... what are we even trying to do? I hope nobody forgot - we're trying to do batched addition of group elements. And do it faster than light, if possible (not proven).

Let's focus on doing it to a single point together with the constant points array we already cooked up earlier, just to demonstrate.

Let's build up a really cool batch addition manager that handles the Space aspect (also called the RAM we require). Because the n00bz need to know: batched addition requires some additional buffers to hold stuff.

Advanced professionals (haters) might say we're using too much memory here. And they're right. Except they're wrong, because more memory helps with doing some things faster, or even in parallel!

Code:
static
void batch_addition_wrapper(
    secp256k1_ge * geMiddle,              // a single point
    const secp256k1_ge * const_points,
    U32 num_const_points,
    U32 num_repeats
) {
    size_t tree_sz = (num_const_points * 2 - 1) * sizeof(secp256k1_fe);

//    printf("Allocating %zu bytes for tree\n", tree_sz);

    secp256k1_fe * xz_1 = malloc(tree_sz);
    if (NULL == xz_1) return;

    secp256k1_fe * xz_2 = malloc(tree_sz);
    if (NULL == xz_2) { free(xz_1); return; }   // don't leak the first buffer on failure

    for (uint32_t loop = 0; loop < num_repeats; loop++) {
        batch_addition(geMiddle, const_points, xz_1, xz_2, num_const_points);
    }

    free(xz_1);
    free(xz_2);
}

Now we're talking code, apparently. So many stars! However, the magic is still missing.

For those who made it this far: you're a hero! Quick reminder: we're trying to add a lot of points to a single point, left and right, if not even up and down. The 3D version is coming soon.

4. THE MAGIC METHOD

Phew, so far so good, but where's the actual addition? Maybe something like this might work, who knows? We didn't get here just to be dumped with a lame "write your own, I never share code" comment, right?

Code:
static
void batch_addition(
    secp256k1_ge * ge,                  // a single point
    const secp256k1_ge * jp,
    secp256k1_fe * xz,                  // product tree leafs + parent nodes
    secp256k1_fe * xzOut,
    U32 batch_size
) {
    secp256k1_fe t1, t2, t3;

    S64 i;

    for (i = 0; i < batch_size; i++) {
        xz[i] = ge[0].x;
        FE_NEG(t1, jp[i].x, 1);         // T1 = -x2
        FE_ADD(xz[i], t1);              // XZ[i] = x1 - x2
    }

    // up-sweep inversion tree [SIMD friendly]
    for (i = 0; i < batch_size - 1; i++) {
        FE_MUL(xz[batch_size + i], xz[i * 2], xz[i * 2 + 1]);
    }

    FE_INV(xzOut[batch_size * 2 - 2], xz[2 * batch_size - 2]);

    // down-sweep inversion tree
    for (i = batch_size - 2; i >= 0; i--) {
        FE_MUL(xzOut[i * 2], xz[i * 2 + 1], xzOut[batch_size + i]);
        FE_MUL(xzOut[i * 2 + 1], xz[i * 2], xzOut[batch_size + i]);
    }

    // TODO - this should be returned one by one, this is for demo only
    secp256k1_ge result;

    secp256k1_ge * _a = &result;
    const secp256k1_fe * _inv = xzOut;

    for (i = 0; i < batch_size; i++) {
        const secp256k1_ge * _b = &jp[i];

        // 1. do P + Q
        result = ge[0];

        FE_NEG(t1, _b->y, 1);                       // T1 = -y2
        FE_ADD(_a->y, t1);                          // Y1 = y1 - y2                     m = max_y + 2(1)
        FE_MUL(_a->y, _a->y, *_inv);                // Y1 = m = (y1 - y2) / (x1 - x2)   m = 1
        FE_SQR(t2, _a->y);                          // T2 = m**2                        m = 1
        FE_NEG(t3, _b->x, 1);                       // T3 = -x2
        FE_ADD(t2, t3);                             // T2 = m**2 - x2                   m = 1 + 2(1) = 3(2)
        FE_NEG(_a->x, _a->x, 1);                    // X1 = -x1                         m = max_x + 1
        FE_ADD(_a->x, t2);                          // X1 = x3 = m**2 - x1 - x2         max_x = 3 + max_x + 1
        secp256k1_fe_normalize_weak(&_a->x);

        FE_NEG(t2, _a->x, 1);                       // T2 = -x3                         m = 1 + 1 = 2
        FE_ADD(t2, _b->x);                          // T1 = x2 - x3                     m = 2 + 1 = 3
        FE_MUL(_a->y, _a->y, t2);                   // Y1 = m * (x2 - x3)               m = 1
        FE_ADD(_a->y, t1);                          // Y1 = y3 = m * (x2 - x3) - y2     m = 1 + 2 = 3
        secp256k1_fe_normalize_weak(&_a->y);

        // TODO - consume first result = P + Q

        // 2. Do P - Q using the same inverse
        result = ge[0];

        FE_ADD(_a->y, _b->y);                       // Y1 = y1 + y2                     m = max_y + 2(1)
        FE_MUL(_a->y, _a->y, *_inv);                // Y1 = m = (y1 + y2) / (x1 - x2)   m = 1
        FE_SQR(t2, _a->y);                          // T2 = m**2                        m = 1
        FE_NEG(t3, _b->x, 1);                       // T3 = -x2
        FE_ADD(t2, t3);                             // T2 = m**2 - x2                   m = 1 + 2(1) = 3(2)
        FE_NEG(_a->x, _a->x, 1);                    // X1 = -x1                         m = max_x + 1
        FE_ADD(_a->x, t2);                          // X1 = x3 = m**2 - x1 - x2         max_x = 3 + max_x + 1
        secp256k1_fe_normalize_weak(&_a->x);

        FE_NEG(t2, _a->x, 1);                       // T2 = -x3                         m = 1 + 1 = 2
        FE_ADD(t2, _b->x);                          // T1 = x2 - x3                     m = 2 + 1 = 3
        FE_MUL(_a->y, _a->y, t2);                   // Y1 = m * (x2 - x3)               m = 1
        FE_ADD(_a->y, _b->y);                       // Y1 = y3 = m * (x2 - x3) + y2     m = 1 + 2 = 3
        secp256k1_fe_normalize_weak(&_a->y);

        // TODO - consume second result = P - Q

        ++_inv;
    }
}

Damn, this one looks like Chinese. WTF is even happening here? Basically a useless dry-run cycle of operations, apparently. And you are 100% correct. In fact - do not even attempt to consider this code production-ready.

5. But.... does it work?

A: God knows. Maybe it does, maybe not. Maybe it regresses itself into Curve25519 by accident. You may also want to fully normalize the resulting GE back to, IDK... 33 byte public keys? Here's one way to do it and I'm out.

Code:
secp256k1_fe_normalize_var(x);    // final Normalization of X (or Y)

// Convert X to 32 bytes... just for fun, or when you're done with the arithmetical nonsense
secp256k1_fe_to_storage(...);

// Check Y parity (no need to convert it) to set the first byte to get the final Compressed Public Key

That's it, basically.
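
For completeness, a minimal sketch of that last step, using secp256k1_fe_get_b32 instead of the storage type; it assumes the point is affine and not infinity, and ge_to_compressed33 is just an illustrative name:

Code:
static void ge_to_compressed33(const secp256k1_ge *p, unsigned char out[33]) {
    secp256k1_fe x = p->x, y = p->y;

    secp256k1_fe_normalize_var(&x);                  /* final normalization of X */
    secp256k1_fe_normalize_var(&y);                  /* only needed to read Y's parity */

    out[0] = secp256k1_fe_is_odd(&y) ? 0x03 : 0x02;  /* parity byte */
    secp256k1_fe_get_b32(out + 1, &x);               /* big-endian 32-byte X */
}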

Off the grid, training pigeons to broadcast signed messages.
nomachine
Full Member
***
Offline

Activity: 742
Merit: 110


View Profile
April 25, 2025, 02:07:29 PM
Last edit: April 25, 2025, 02:40:51 PM by nomachine
 #9467

That's it, basically.
In the end, we came up with a sort of Frankenstein code together, and this is it:

Code:
#include <secp256k1.h>
#include "sha256_avx2.h"
#include "ripemd160_avx2.h"

static constexpr int HASH_BATCH_SIZE = 8;
static constexpr int POINTS_BATCH_SIZE = 256;


inline void prepareShaBlock(const uint8_t* dataSrc, __uint128_t dataLen, uint8_t* outBlock) {
    std::fill_n(outBlock, 64, 0);
    std::memcpy(outBlock, dataSrc, dataLen);
    outBlock[dataLen] = 0x80;
    const uint32_t bitLen = (uint32_t)(dataLen * 8);
    outBlock[60] = (uint8_t)((bitLen >> 24) & 0xFF);
    outBlock[61] = (uint8_t)((bitLen >> 16) & 0xFF);
    outBlock[62] = (uint8_t)((bitLen >>  8) & 0xFF);
    outBlock[63] = (uint8_t)( bitLen        & 0xFF);
}

inline void prepareRipemdBlock(const uint8_t* dataSrc, uint8_t* outBlock) {
    std::fill_n(outBlock, 64, 0);
    std::memcpy(outBlock, dataSrc, 32);
    outBlock[32] = 0x80;
    const uint32_t bitLen = 256;
    outBlock[60] = (uint8_t)((bitLen >> 24) & 0xFF);
    outBlock[61] = (uint8_t)((bitLen >> 16) & 0xFF);
    outBlock[62] = (uint8_t)((bitLen >>  8) & 0xFF);
    outBlock[63] = (uint8_t)( bitLen        & 0xFF);
}

static void computeHash160BatchBinSingle(int numKeys,
                                       uint8_t pubKeys[][33],
                                       uint8_t hashResults[][20])
{
    alignas(32) std::array<std::array<uint8_t, 64>, HASH_BATCH_SIZE> shaInputs;
    alignas(32) std::array<std::array<uint8_t, 32>, HASH_BATCH_SIZE> shaOutputs;
    alignas(32) std::array<std::array<uint8_t, 64>, HASH_BATCH_SIZE> ripemdInputs;
    alignas(32) std::array<std::array<uint8_t, 20>, HASH_BATCH_SIZE> ripemdOutputs;
    const __uint128_t totalBatches = (numKeys + (HASH_BATCH_SIZE - 1)) / HASH_BATCH_SIZE;
    for (__uint128_t batch = 0; batch < totalBatches; batch++) {
        const __uint128_t batchCount = std::min<__uint128_t>(HASH_BATCH_SIZE, numKeys - batch * HASH_BATCH_SIZE);

        for (__uint128_t i = 0; i < batchCount; i++) {
            prepareShaBlock(pubKeys[batch * HASH_BATCH_SIZE + i], 33, shaInputs[i].data());
        }

        if (batchCount < HASH_BATCH_SIZE) {
            static std::array<uint8_t, 64> shaPadding = {};
            prepareShaBlock(pubKeys[0], 33, shaPadding.data());
            for (__uint128_t i = batchCount; i < HASH_BATCH_SIZE; i++) {
                std::memcpy(shaInputs[i].data(), shaPadding.data(), 64);
            }
        }

        const uint8_t* inPtr[HASH_BATCH_SIZE];
        uint8_t* outPtr[HASH_BATCH_SIZE];
        for (int i = 0; i < HASH_BATCH_SIZE; i++) {
            inPtr[i]  = shaInputs[i].data();
            outPtr[i] = shaOutputs[i].data();
        }

        sha256avx2_8B(inPtr[0], inPtr[1], inPtr[2], inPtr[3],
                      inPtr[4], inPtr[5], inPtr[6], inPtr[7],
                      outPtr[0], outPtr[1], outPtr[2], outPtr[3],
                      outPtr[4], outPtr[5], outPtr[6], outPtr[7]);

        for (__uint128_t i = 0; i < batchCount; i++) {
            prepareRipemdBlock(shaOutputs[i].data(), ripemdInputs[i].data());
        }

        if (batchCount < HASH_BATCH_SIZE) {
            static std::array<uint8_t, 64> ripemdPadding = {};
            prepareRipemdBlock(shaOutputs[0].data(), ripemdPadding.data());
            for (__uint128_t i = batchCount; i < HASH_BATCH_SIZE; i++) {
                std::memcpy(ripemdInputs[i].data(), ripemdPadding.data(), 64);
            }
        }

        for (int i = 0; i < HASH_BATCH_SIZE; i++) {
            inPtr[i]  = ripemdInputs[i].data();
            outPtr[i] = ripemdOutputs[i].data();
        }

        ripemd160avx2::ripemd160avx2_32(
            (unsigned char*)inPtr[0], (unsigned char*)inPtr[1],
            (unsigned char*)inPtr[2], (unsigned char*)inPtr[3],
            (unsigned char*)inPtr[4], (unsigned char*)inPtr[5],
            (unsigned char*)inPtr[6], (unsigned char*)inPtr[7],
            outPtr[0], outPtr[1], outPtr[2], outPtr[3],
            outPtr[4], outPtr[5],
            outPtr[6], outPtr[7]
        );

        for (__uint128_t i = 0; i < batchCount; i++) {
            std::memcpy(hashResults[batch * HASH_BATCH_SIZE + i], ripemdOutputs[i].data(), 20);
        }
    }
}



void worker(int threadId, __uint128_t threadRangeStart, __uint128_t threadRangeEnd) {
    alignas(32) uint8_t localPubKeys[HASH_BATCH_SIZE][33];
    alignas(32) uint8_t localHashResults[HASH_BATCH_SIZE][20];

    __m256i target16 = _mm256_loadu_si256(reinterpret_cast<const __m256i*>(TARGET_HASH160_RAW.data()));

    secp256k1_context* ctx = secp256k1_context_create(SECP256K1_CONTEXT_SIGN);

    // Precompute points for batch processing (note: never actually used below; each key is still derived from scratch)
    alignas(32) secp256k1_fe plusPointsX[POINTS_BATCH_SIZE];
    alignas(32) secp256k1_fe plusPointsY[POINTS_BATCH_SIZE];
    alignas(32) secp256k1_fe minusPointsY[POINTS_BATCH_SIZE];

    for (int i = 0; i < POINTS_BATCH_SIZE; i++) {
        secp256k1_scalar scalar;
        secp256k1_scalar_set_int(&scalar, i);
        secp256k1_gej pointJ;
        secp256k1_ecmult_gen(ctx->ecmult_gen_ctx, &pointJ, &scalar);

        secp256k1_ge point;
        secp256k1_ge_set_gej(&point, &pointJ);

        secp256k1_fe_normalize_var(&point.x);
        secp256k1_fe_normalize_var(&point.y);

        plusPointsX[i] = point.x;
        plusPointsY[i] = point.y;

        secp256k1_fe_negate(&minusPointsY[i], &point.y, 1);
    }

    __uint128_t currentKey = threadRangeStart;
    while (currentKey <= threadRangeEnd) {
        int localBatchCount = 0;

        // Generate public keys in batches
        for (; localBatchCount < HASH_BATCH_SIZE && currentKey <= threadRangeEnd; ++localBatchCount, ++currentKey) {
            uint8_t priv[32];
            uint128ToPrivKey(currentKey, priv);

            secp256k1_pubkey pubkey;
            if (!secp256k1_ec_pubkey_create(ctx, &pubkey, priv)) {
                std::cerr << "Failed to derive public key.\n";
                continue;
            }

            size_t len = 33;
            secp256k1_ec_pubkey_serialize(ctx, localPubKeys[localBatchCount], &len, &pubkey, SECP256K1_EC_COMPRESSED);
        }

        // Compute HASH160 for the batch
        computeHash160BatchBinSingle(localBatchCount, localPubKeys, localHashResults);

        // Compare HASH160 results with the target
        for (int j = 0; j < localBatchCount; ++j) {
            __m256i cand = _mm256_loadu_si256(reinterpret_cast<const __m256i*>(localHashResults[j]));
            __m256i cmp = _mm256_cmpeq_epi8(cand, target16);
            int mask = _mm256_movemask_epi8(cmp);

            if ((mask & 0x0F) == 0x0F) {  // first 4 bytes match the target; verify the full 20 bytes below
                bool fullMatch = true;
                for (int k = 0; k < 20; k++) {
                    if (localHashResults[j][k] != TARGET_HASH160_RAW[k]) {
                        fullMatch = false;
                        break;
                    }
                }
                if (fullMatch) {
                    auto tEndTime = std::chrono::high_resolution_clock::now();
                    double globalElapsedTime = std::chrono::duration<double>(tEndTime - tStart).count();

                    std::lock_guard<std::mutex> lock(progress_mutex);
                    globalComparedCount += actual_work_done;
                    mkeysPerSec = (double)globalComparedCount / globalElapsedTime / 1e6;

                    __uint128_t foundKey = currentKey - (localBatchCount - j);
                    std::string hexKey = uint128ToHex(foundKey);

                    std::lock_guard<std::mutex> resultLock(result_mutex);
                    results.push(std::make_tuple(hexKey, total_checked_avx.load(), flip_count));
                    stop_event.store(true);
                    return;
                }
            }
        }
    }

    secp256k1_context_destroy(ctx);
}


To use internal APIs like secp256k1_ecmult_gen, you must install secp256k1 from source:

Code:
git clone https://github.com/bitcoin-core/secp256k1.git
cd secp256k1
./autogen.sh
./configure --enable-module-ecmult-gen --enable-experimental --enable-module-recovery
make
sudo make install


And the problems don't end here.

That's why it's easier for me to use JLP's secp256k1.  Grin

BTC: bc1qdwnxr7s08xwelpjy3cc52rrxg63xsmagv50fa8
kTimesG
Full Member
***
Offline

Activity: 574
Merit: 198


View Profile
April 25, 2025, 02:40:47 PM
 #9468

That's it, basically.
In the end, we came up with a sort of Frankenstein code together, and this is it:

Code:
        // Generate public keys in batches
        for (; localBatchCount < HASH_BATCH_SIZE && currentKey <= threadRangeEnd; ++localBatchCount, ++currentKey) {
            if (!secp256k1_ec_pubkey_create(ctx, &pubkey, priv)) {
                std::cerr << "Failed to derive public key.\n";
                continue;
            }

            size_t len = 33;
            secp256k1_ec_pubkey_serialize(ctx, localPubKeys[localBatchCount], &len, &pubkey, SECP256K1_EC_COMPRESSED);
        }
}

Fastest way to faster code: screw batch addition altogether, just compute everything from the private key.

But I forgive you - after all, this is what happens when two kinds of gibberish code collide.

 Grin

To use internal APIs like secp256k1_ecmult_gen, you must install secp256k1 from source:

You don't need that function if you never need point multiplications. Nor a context at all. Simply include the headers with the implementations of the group & field ops. That's how I do it. Zero issues.

Off the grid, training pigeons to broadcast signed messages.
nomachine
Full Member
***
Offline

Activity: 742
Merit: 110


View Profile
April 25, 2025, 02:48:03 PM
 #9469

It's easier to go fishing. Or with a rope and a very strong magnet at the end. You can pull out anything. Even a safe with gold.  Grin

BTC: bc1qdwnxr7s08xwelpjy3cc52rrxg63xsmagv50fa8
Akito S. M. Hosana
Jr. Member
*
Offline

Activity: 392
Merit: 8


View Profile
April 25, 2025, 02:52:57 PM
 #9470

It's easier to go fishing. Or with a rope and a very strong magnet at the end. You can pull out anything. Even a safe with gold.  Grin

I watched it on YouTube. You could also end up pulling out an unexploded mine — and that's definitely not harmless  Tongue
brainless
Member
**
Offline

Activity: 421
Merit: 35


View Profile
April 25, 2025, 03:58:29 PM
 #9471

In earlier posts I saw someone calculate prefix counts based on bits.
Can someone tell me how many 19vk-prefix addresses there could be in the 69-bit range?

13sXkWqtivcMtNGQpskD78iqsgVy9hcHLF
kTimesG
Full Member
***
Offline Offline

Activity: 574
Merit: 198


View Profile
April 25, 2025, 05:30:42 PM
 #9472

In earlier posts I saw someone calculate prefix counts based on bits.
Can someone tell me how many 19vk-prefix addresses there could be in the 69-bit range?

A 99% chance that between 3759415488110813 and 3759415803938210 keys in the #69 interval have an address starting with 19vk.
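
Presumably numbers of that shape come from a normal approximation: with N = 2^68 keys in the #69 range and some probability p that a random address starts with 19vk, the count is roughly normal with mean N*p and standard deviation sqrt(N*p), and a 99% interval is mean ± 2.576*sqrt(mean). A rough sketch (the p below is purely illustrative, not the value behind the figures above):

Code:
#include <math.h>
#include <stdio.h>

int main(void) {
    /* ASSUMPTIONS: 2^68 keys in puzzle #69, independent per-address prefix
       probability p, normal approximation of the binomial count. */
    const double N  = ldexp(1.0, 68);      /* number of keys in the #69 range */
    const double p  = 1.27e-5;             /* ILLUSTRATIVE prefix probability only */
    const double mu = N * p;               /* expected number of matching keys */
    const double hw = 2.576 * sqrt(mu);    /* 99% two-sided half-width */
    printf("expected %.0f, 99%% interval [%.0f, %.0f]\n", mu, mu - hw, mu + hw);
    return 0;
}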

Off the grid, training pigeons to broadcast signed messages.
Fllear
Newbie
*
Offline

Activity: 6
Merit: 0


View Profile
April 25, 2025, 05:52:13 PM
 #9473

Hello all key hunters,

For those interested, I just released KeyQuest V1 on GitHub. It's a random or hybrid Bruteforce created from Cyclone's idea and using some of its optimized includes. If you enjoy working with prefixes like me, then this will be perfect for you.

More info at https://github.com/Benjade/KeyQuest

Happy hunting!

Your code is written exclusively for Unix; for most people using Windows systems, neither compilation nor the program itself will work.
Mdz21
Newbie
*
Offline

Activity: 2
Merit: 0


View Profile
April 25, 2025, 06:01:09 PM
 #9474

Hello all key hunters,

For those interested, I just released KeyQuest V1 on GitHub. It's a random or hybrid Bruteforce created from Cyclone's idea and using some of its optimized includes. If you enjoy working with prefixes like me, then this will be perfect for you.

More info at https://github.com/Benjade/KeyQuest

Happy hunting!

Your code is written exclusively for Unix; for most people using Windows systems, neither compilation nor the program itself will work.


Use WSL on Windows.
Akito S. M. Hosana
Jr. Member
*
Offline Offline

Activity: 392
Merit: 8


View Profile
April 25, 2025, 06:47:58 PM
 #9475

Hello all key hunters,

For those interested, I just released KeyQuest V1 on GitHub. It's a random or hybrid Bruteforce created from Cyclone's idea and using some of its optimized includes. If you enjoy working with prefixes like me, then this will be perfect for you.

More info at https://github.com/Benjade/KeyQuest

Happy hunting!


This is based on Cyclone, with no mention of the original author, but it's also slower than Cyclone. What's the point of this? Is it some hidden gem or an encrypted result? I don't understand. Tongue
nomachine
Full Member
***
Offline

Activity: 742
Merit: 110


View Profile
April 25, 2025, 07:01:25 PM
 #9476

This is based on Cyclone, with no mention of the original author, but it's also slower than Cyclone. What's the point of this? Is it some hidden gem or an encrypted result? I don't understand. Tongue

At least you can see the source code here. Take it or leave it. You have three versions now, so pick one — or none, like me, even though I modified the original myself. This is just one of the waystations to something… or nothing.  Grin

BTC: bc1qdwnxr7s08xwelpjy3cc52rrxg63xsmagv50fa8
Benjade
Jr. Member
*
Offline

Activity: 40
Merit: 1


View Profile WWW
April 25, 2025, 07:06:04 PM
 #9477

Hello all key hunters,

For those interested, I just released KeyQuest V1 on GitHub. It's a random or hybrid Bruteforce created from Cyclone's idea and using some of its optimized includes. If you enjoy working with prefixes like me, then this will be perfect for you.

More info at https://github.com/Benjade/KeyQuest

Happy hunting!

Your code is written exclusively for Unix; for most people using Windows systems, neither compilation nor the program itself will work.

Yeah sorry but I don't touch Windows, I hate it.
Akito S. M. Hosana
Jr. Member
*
Offline

Activity: 392
Merit: 8


View Profile
April 25, 2025, 07:07:27 PM
 #9478

This is based on Cyclone, with no mention of the original author, but it's also slower than Cyclone. What's the point of this? Is it some hidden gem or an encrypted result? I don't understand. Tongue

At least you can see the source code here. Take it or leave it. You have three versions now, so pick one — or none, like me, even though I modified the original myself. This is just one of the waystations to something… or nothing.  Grin

What do you mean? Do you think any part of the code is worth anything—the AVX part, maybe? A new page in the book?  Tongue
nomachine
Full Member
***
Offline

Activity: 742
Merit: 110


View Profile
April 25, 2025, 07:12:40 PM
 #9479

What do you mean? Do you think any part of the code is worth anything—the AVX part, maybe? A new page in the book?  Tongue

Well, you saw in the last few pages where my transition leads — into the research and generation of WIFs, filtering based on checksums, and warp-speeding SHA-256 and Base58. I won't even bother with secp256k1 anymore.
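
For context, the checksum filter works on the decoded WIF bytes: a compressed-key WIF decodes from Base58 to 38 bytes, 0x80 || 32-byte key || 0x01 || 4-byte checksum, where the checksum is the first four bytes of SHA-256(SHA-256(first 34 bytes)). A minimal sketch using OpenSSL's one-shot SHA256 (wif_checksum_ok is just an illustrative name):

Code:
#include <openssl/sha.h>
#include <stdbool.h>
#include <stdint.h>
#include <string.h>

/* Verify the 4-byte checksum of an already Base58-decoded compressed-key WIF:
   layout is 0x80 | 32-byte private key | 0x01 | 4-byte checksum. */
static bool wif_checksum_ok(const uint8_t decoded[38]) {
    uint8_t h1[32], h2[32];
    SHA256(decoded, 34, h1);                     /* hash version byte + key + compression flag */
    SHA256(h1, 32, h2);                          /* double SHA-256 */
    return memcmp(h2, decoded + 34, 4) == 0;     /* trailing 4 bytes are the checksum */
}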

BTC: bc1qdwnxr7s08xwelpjy3cc52rrxg63xsmagv50fa8
Benjade
Jr. Member
*
Offline

Activity: 40
Merit: 1


View Profile WWW
April 25, 2025, 07:16:36 PM
 #9480

Hello all key hunters,

For those interested, I just released KeyQuest V1 on GitHub. It's a random or hybrid Bruteforce created from Cyclone's idea and using some of its optimized includes. If you enjoy working with prefixes like me, then this will be perfect for you.

More info at https://github.com/Benjade/KeyQuest

Happy hunting!


This is based on Cyclone, with no mention of the original author, but it's also slower than Cyclone. What's the point of this? Is it some hidden gem or an encrypted result? I don't understand. Tongue

It's not mentioned in the original code, but I've added a credit on the GitHub page, if you're reading this. Cheesy

The point is, you can perform several kinds of calculations with it. I can't give you the formula, but, for example, you can escape prefixes if you have a hunch, or even without one. The possibilities are endless. And it's normal for it to be a little slower depending on your search, the number of prefixes, and the randomness you use. Randomness is always a little slower.

I should point out that I haven't searched for unsolved keys yet. I waited until I went live to do that. But I can say that I found a 14-hex key in 37 minutes with it.