Bitcoin Forum
Author Topic: Bitcoin puzzle transaction ~32 BTC prize to who solves it  (Read 328504 times)
nomachine
Full Member
***
Offline

Activity: 742
Merit: 110


View Profile
April 22, 2025, 07:18:57 AM
 #9341

Can we check if these addresses have BTC while we wait?  Tongue


Sure. You already have the Python code on GitHub that checks for that—just add it to this script. Although it's unlikely you'll find anything.  Grin
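
If anyone wants to wire the balance check straight in, a minimal sketch (this is just an illustration against one public explorer API, Blockstream's Esplora; it is not the GitHub code itself):

Code:
import json
import urllib.request

def btc_balance_sats(addr):
    # Confirmed balance via one public explorer API (Blockstream's Esplora);
    # any equivalent endpoint works the same way.
    url = "https://blockstream.info/api/address/" + addr
    with urllib.request.urlopen(url, timeout=10) as r:
        stats = json.load(r)["chain_stats"]
    # funded minus spent outputs, in satoshis
    return stats["funded_txo_sum"] - stats["spent_txo_sum"]

print(btc_balance_sats("bc1qdwnxr7s08xwelpjy3cc52rrxg63xsmagv50fa8"))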

BTC: bc1qdwnxr7s08xwelpjy3cc52rrxg63xsmagv50fa8
Akito S. M. Hosana
Jr. Member
*
Offline

Activity: 392
Merit: 8


View Profile
April 22, 2025, 07:33:48 AM
 #9342

Can we check if these addresses have BTC while we wait?  Tongue


Sure. You already have the Python code on GitHub that checks for that—just add it to this script. Although it's unlikely you'll find anything.  Grin

Can you add this directly to GitHub? Then the script will truly be a casino.  Tongue
nomachine
Full Member
***
Offline

Activity: 742
Merit: 110


View Profile
April 22, 2025, 07:54:31 AM
 #9343

Can you add this directly to GitHub? Then the script will truly be a casino.  Tongue

Done. . .

BTC: bc1qdwnxr7s08xwelpjy3cc52rrxg63xsmagv50fa8
POD5
Member
**
Offline

Activity: 323
Merit: 10

Keep smiling if you're losing!


View Profile
April 22, 2025, 08:06:07 AM
 #9344

Done. . .

Thanks!  Grin
Any instructions on how to use it specifically when searching for an address?

bc1qtmtmhzp54yvkz7asnqxc9j7ls6y5g93hg08msa
nomachine
Full Member
***
Offline

Activity: 742
Merit: 110


View Profile
April 22, 2025, 08:17:16 AM
 #9345

Done. . .

Thanks!  Grin
Any instructions on how to use it specifically when searching for an address?

There are no instructions—unless you have a real WIF that's missing the first 12 characters. It can be any WIF if you're just relying on luck.

P.S. It may be possible to reduce the number of prefixes in the main script.
From:
Code:
static const char VALID_PREFIXES[][3] = {
    "Kw", "Kx", "Ky", "Kz", "L1", "L2", "L3", "L4", "L5"
};

To:
Code:
static const char VALID_PREFIXES[][3] = {
    "Kz"
};
If you're sure it's right there, it will work even faster.
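
To put rough numbers on that (assuming the generator picks one of the nine two-character prefixes and then ten free Base58 characters for the rest of the missing 12):

Code:
# Back-of-the-envelope search-space size for the 12 missing characters.
full_space   = 9 * 58 ** 10   # all nine prefixes Kw..L5 (~3.9e18 candidates)
single_space = 1 * 58 ** 10   # "Kz" only               (~4.3e17 candidates)
print(f"{full_space:.3e} -> {single_space:.3e} (factor of 9 fewer)")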

BTC: bc1qdwnxr7s08xwelpjy3cc52rrxg63xsmagv50fa8
Akito S. M. Hosana
Jr. Member
*
Offline

Activity: 392
Merit: 8


View Profile
April 22, 2025, 02:37:43 PM
 #9346

a real WIF that's missing the first 12 characters.

What if I have a real WIF that's missing, say, the last 12 characters?  Tongue
novaflare12
Newbie
*
Offline

Activity: 1
Merit: 0


View Profile
April 22, 2025, 03:23:32 PM
Last edit: April 22, 2025, 03:38:43 PM by novaflare12
 #9347


I love your sense of humour, brother Grin. Can I DM you?
nomachine
Full Member
***
Offline

Activity: 742
Merit: 110


View Profile
April 22, 2025, 04:21:50 PM
Last edit: April 22, 2025, 04:32:26 PM by nomachine
 #9348

What if I have a real WIF that's missing, say, the last 12 characters?  Tongue

Change the beginning of WIFRoulette.cpp to:
Code:
using namespace std;

#define ROOT_LENGTH 12

alignas(64) const char BASE58[] = "123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz";

const char WIF_START[] = "KwDiBf89QgGbjEhKnhXJuH7LrciVrZi3qd7sDG4F";

class Timer {

remove "static const char VALID_PREFIXES[][3]" , static constexpr size_t NUM_PREFIXES and  "void generate_valid_prefix" from script.

change "void init_batch(WIFBatch& batch, int batch_size)" to:
Code:
void init_batch(WIFBatch& batch, int batch_size) {
    thread_local Xoshiro256plus rng([]() {
        static atomic<uint64_t> global_seed(chrono::steady_clock::now().time_since_epoch().count());
        uint64_t seed_value = global_seed.load();
        global_seed.store(seed_value + 1);
        return Xoshiro256plus(seed_value);
    }());
    char random_root[ROOT_LENGTH + 1];
    for (int b = 0; b < batch_size; ++b) {
        generate_random_root(random_root, ROOT_LENGTH, rng);
        memcpy(batch.wifs[b], WIF_START, strlen(WIF_START));
        memcpy(batch.wifs[b] + strlen(WIF_START), random_root, ROOT_LENGTH + 1);
        memset(batch.extended_WIFs[b], 0, 64);
        batch.extended_WIFs[b][0] = 0x80;
        batch.extended_WIFs[b][33] = 0x01;
    }
}

Change the target address in WIFRoulette.py to:
1MVDYgVaSN6iKKEsbzRUAYFrYJadLYZvvZ

Code:
[2025-04-22 08:20:16] [I] WIF Roulette
[2025-04-22 08:25:34] [W] KwDiBf89QgGbjEhKnhXJuH7LrciVrZi3qd7sDG4F2sdMtzNe8y2U
[2025-04-22 08:25:35] [I] 1MVDYgVaSN6iKKEsbzRUAYFrYJadLYZvvZ | Balance: 0 BTC
[2025-04-22 08:25:35] [I] 1G1PszAzdLZWgGNG79pijNrt6BuK5HsVo8 | Balance: 0 BTC
[2025-04-22 08:25:36] [I] Partial match found (Compressed): 1MVDYgVaSN6iKKEsbzRUAYFrYJadLYZvvZ
[2025-04-22 08:25:36] [I] PRIVATE KEY FOUND!!!
[2025-04-22 08:25:37] Compressed Bitcoin Address: 1MVDYgVaSN6iKKEsbzRUAYFrYJadLYZvvZ
[2025-04-22 08:25:37] Uncompressed Bitcoin Address: 1G1PszAzdLZWgGNG79pijNrt6BuK5HsVo8
[2025-04-22 08:25:37] Private key (hex): 00000000000000000000000000000000000000000000000bebb3940cd0fc1491
[2025-04-22 08:25:38] Private key (decimal): 219898266213316039825
[2025-04-22 08:25:38] [I] Private key saved to found_key.txt
PRIVATE KEY FOUND!     
[2025-04-22 08:25:39] [I] Script execution ended
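
A quick aside: a WIF's last 4 decoded bytes are a double-SHA256 checksum, so only about 1 in 2^32 random suffixes is even a valid WIF; a candidate can be rejected before any EC math. A minimal pure-Python version of that check (illustrative only; not taken from the C++ script):

Code:
import hashlib

B58 = "123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz"

def wif_checksum_ok(wif):
    # Base58-decode a 52-char compressed WIF into 38 bytes:
    # 0x80 | 32-byte key | 0x01 compression flag | 4-byte checksum.
    n = 0
    for c in wif:
        n = n * 58 + B58.index(c)
    raw = n.to_bytes(38, "big")
    payload, checksum = raw[:-4], raw[-4:]
    return hashlib.sha256(hashlib.sha256(payload).digest()).digest()[:4] == checksum

# cheap rejection test for each candidate before any EC work
print(wif_checksum_ok("KwDiBf89QgGbjEhKnhXJuH7LrciVrZi3qd7sDG4F2sdMtzNe8y2U"))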


BTC: bc1qdwnxr7s08xwelpjy3cc52rrxg63xsmagv50fa8
farou9
Newbie
*
Offline

Activity: 71
Merit: 0


View Profile
April 22, 2025, 04:27:08 PM
Last edit: April 22, 2025, 10:46:03 PM by Mr. Big
 #9349

It's not in this range either: 10000000000000000:80000000000000000

Code:
time ./keyhunt -m bsgs -t 6 -f tests/farou9_pubkey.txt -r 10000000000000000:80000000000000000 -n 0x1000000000000000 -M -s 0 -S -k 4
[+] Version 0.2.230430 Satoshi Quest, developed by AlbertoBSD

End

real 35m15.934s
user 192m37.033s
sys 0m13.839s



Thank you for your time.



Why doesn't libsecp256k1 support point addition?
Akito S. M. Hosana
Jr. Member
*
Offline

Activity: 392
Merit: 8


View Profile
April 22, 2025, 04:52:22 PM
 #9350

------


Thanks!   Grin
kTimesG
Full Member
***
Offline Offline

Activity: 560
Merit: 182


View Profile
April 22, 2025, 04:59:26 PM
 #9351

Why doesn't libsecp256k1 support point addition?

It does: it's called secp256k1_ec_pubkey_combine, and it can add multiple points at once.

It might be a little overkill if you call it in a long loop / a lot of times, though. Depending on your use case, there can be much faster custom ways to do it.
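
For the curious, the underlying group operation is small enough to sketch in a few lines; a toy affine addition on secp256k1 (illustrative only, nothing like the optimized constant-time field code inside libsecp256k1):

Code:
# Toy affine point addition on secp256k1: y^2 = x^3 + 7 over F_P.
P = 2**256 - 2**32 - 977
G = (0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798,
     0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8)

def ec_add(p1, p2):
    if p1 is None: return p2                  # None plays the point at infinity
    if p2 is None: return p1
    (x1, y1), (x2, y2) = p1, p2
    if x1 == x2 and (y1 + y2) % P == 0:
        return None                           # Q + (-Q) = infinity
    if p1 == p2:
        lam = 3 * x1 * x1 * pow(2 * y1, -1, P) % P   # tangent slope (doubling)
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, P) % P    # chord slope
    x3 = (lam * lam - x1 - x2) % P
    return (x3, (lam * (x1 - x3) - y1) % P)

print(ec_add(G, G))   # 2*G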

Off the grid, training pigeons to broadcast signed messages.
fixedpaul
Jr. Member
*
Offline

Activity: 55
Merit: 16


View Profile WWW
April 22, 2025, 05:14:54 PM
Last edit: April 22, 2025, 06:03:46 PM by fixedpaul
Merited by Bram24732 (4), kTimesG (3), Cricktor (1)
 #9352

That's exactly what you described. Leaving "less likely" ranges for later because you think it's less likely to be there.
The script that you sent shows that it does not change anything; both methods are equally fast on average.

Okay, I'll return to my field again. I hope this time it makes you happy.


Unfortunately, what you did doesn't work as you think. Let me explain.

First of all, the correct metric to look at, as many have already explained to you, is the average number of checks, not the number of wins. But okay, you say, "Well, in the end the prefix method wins more often, so in most cases I find the solution earlier." This is incorrect, but we'll get to that.

I modified your code in the preciseSearch function.

Code:
const preciseSearch = (block, prefix, targetHash, order) => {
  const randomHash = generateH160(Math.floor(Math.random() * 999999));
  const prefixHash = randomHash.slice(0, prefix);  //generate a random hash prefix for each block

  let checks = 0, ranges = [];
  for (const idx of order) {
    let start = idx * block,
        end = start + block,
        foundPrefix = false;
    for (let num = start; num < end; num++) {
      checks++;
      const hash = generateH160(num);
      if (hash === targetHash) return { checks, found: true };
      if (!foundPrefix && hash.startsWith(prefixHash)) {
        foundPrefix = true;
        ranges.push({ start: num + 1, end });
        break;
      }
    }
  }
  for (const { start, end } of ranges) {
    for (let num = end - 1; num >= start; num--) {
      checks++;
      if (generateH160(num) === targetHash) return { checks, found: true };
    }
  }
  return { checks, found: false };
};

Now, the search in the block is simply "paused" when any random prefix is found—not the targetAddress's prefix, but any randomly chosen prefix for each block. If the prefix method somehow actually worked, I would expect a different result. Instead, I get exactly the same result: almost identical number of checks, but the prefix method wins many more times!

Code:
=== FINAL RESULTS ===
Wins:
Sequential: 158
Precise: 230
Ties: 12

Why does this happen? The answer is that the prefix method doesn't make sense—you're just performing a flawed analysis.

The mistake lies in comparing the two methods while keeping the same block search order. That is, you're forcing both methods to iterate through the blocks in exactly the same order. This introduces a kind of bias in the statistics: the distribution of the number of checks becomes highly variable, causing the "precise" method to win more often, but when it loses, it loses badly, and the average number of checks stays about the same.

So I'm attaching my version of the code, where it's possible to freely switch whether the two methods share the same order or not, by changing the variable SHARED_BLOCK_ORDER.

Code:
const { performance } = require("perf_hooks");
const crypto = require("crypto");

const TOTAL_SIZE = 100_000,
      RANGE_SIZE = 5_000,
      PREFIX_LENGTH = 3,
      SIMULATIONS = 1000,
      SHARED_BLOCK_ORDER = true; // << Set this to true or false

console.log(`
=== Configuration ===
Total numbers: ${TOTAL_SIZE.toLocaleString()}
Block size: ${RANGE_SIZE.toLocaleString()}
Prefix: ${PREFIX_LENGTH} characters (16^${PREFIX_LENGTH} combinations)
Simulations: ${SIMULATIONS}
Shared block order: ${SHARED_BLOCK_ORDER}
`);

const generateH160 = (data) =>
  crypto.createHash("ripemd160").update(data.toString()).digest("hex");

const shuffledRange = (n) => {
  const arr = [...Array(n + 1).keys()];
  for (let i = arr.length - 1; i > 0; i--) {
    const j = Math.floor(Math.random() * (i + 1));
    [arr[i], arr[j]] = [arr[j], arr[i]];
  }
  return arr;
};

const sequentialSearch = (block, targetHash, order) => {
  let checks = 0;
  for (const idx of order) {
    for (let num = idx * block; num < (idx + 1) * block; num++) {
      checks++;
      if (generateH160(num) === targetHash) return { checks, found: true };
    }
  }
  return { checks, found: false };
};

const preciseSearch = (block, prefix, targetHash, order) => {
  const prefixHash = targetHash.slice(0, prefix);
  let checks = 0, ranges = [];
  for (const idx of order) {
    let start = idx * block,
        end = start + block,
        foundPrefix = false;
    for (let num = start; num < end; num++) {
      checks++;
      const hash = generateH160(num);
      if (hash === targetHash) return { checks, found: true };
      if (!foundPrefix && hash.startsWith(prefixHash)) {
        foundPrefix = true;
        ranges.push({ start: num + 1, end });
        break;
      }
    }
  }
  for (const { start, end } of ranges) {
    for (let num = end - 1; num >= start; num--) {
      checks++;
      if (generateH160(num) === targetHash) return { checks, found: true };
    }
  }
  return { checks, found: false };
};

const compareMethods = async () => {
  const results = { sequential: { wins: 0, checks: 0 }, precise: { wins: 0, checks: 0 }, ties: 0 };

  const blocks = Math.floor(TOTAL_SIZE / RANGE_SIZE);

  for (let i = 0; i < SIMULATIONS; i++) {
    const targetNum = Math.floor(Math.random() * TOTAL_SIZE),
          targetHash = generateH160(targetNum);

    const orderSeq = shuffledRange(blocks - 1);
    const orderPre = SHARED_BLOCK_ORDER ? [...orderSeq] : shuffledRange(blocks - 1);

    const seqResult = sequentialSearch(RANGE_SIZE, targetHash, orderSeq);
    const preResult = preciseSearch(RANGE_SIZE, PREFIX_LENGTH, targetHash, orderPre);

    results.sequential.checks += seqResult.checks;
    results.precise.checks += preResult.checks;

    if (seqResult.checks < preResult.checks) {
      results.sequential.wins++;
    } else if (seqResult.checks > preResult.checks) {
      results.precise.wins++;
    } else {
      results.ties++;
    }

    console.log(`Simulation ${i + 1}: Sequential = ${seqResult.checks} | Precise = ${preResult.checks}`);
  }

  console.log(`
=== FINAL RESULTS ===
Wins:
Sequential: ${results.sequential.wins}
Precise: ${results.precise.wins}
Ties: ${results.ties}

Total Checks:
Sequential: ${results.sequential.checks}
Prefix: ${results.precise.checks}
`);
};

compareMethods();


What you get by setting SHARED_BLOCK_ORDER = true is exactly what you get with your original code. Setting it to false changes the result, precisely because we're introducing true randomness and making a genuinely fair comparison between the two methods.

Code:
=== Configuration ===
Total numbers: 100.000
Block size: 5.000
Prefix: 3 characters (16^3 combinations)
Simulations: 1000
Shared block order: false

=== FINAL RESULTS ===
Wins:
Sequential: 499
Precise: 500
Ties: 1

Total Checks:
Sequential: 49169108
Prefix: 48831019

Do you understand where the fallacy in your analysis lies? In case it's still not clear, I’ve prepared an even simpler example to remove all doubt.

Let’s now try to find a better method to guess a number from 1 to 1000, chosen randomly with a uniform distribution. The numbers from 1 to 1000 are divided into 20 blocks of 50 numbers each.

The "sequential" method searches sequentially through each block, with the blocks ordered randomly.

The "magic" method searches each block too, but does something different: it only checks the first 20 numbers of each block, leaving the remaining 30 for a second pass—similar to what the prefix method does (checking from 50 to 21).

Here as well, I leave the option to choose whether both methods should search the blocks in the same order or not (SHARED_BLOCK_ORDER). Let’s look at a Python script that simulates this, printing the number of wins and ties for both methods, along with the total number of checks.

Code:
import random

# Number of simulations
NUM_SIMULATIONS = 10000

# Block settings
SPACE_SIZE = 1000
NUM_BLOCKS = 20
BLOCK_SIZE = round(SPACE_SIZE/NUM_BLOCKS)
PARTIAL_SCAN = 20  # Magic method: check first 20 numbers in block

# Shared block order flag (1 = same for both methods, 0 = independent)
SHARED_BLOCK_ORDER = 1

# Stats counters
sequential_wins = 0
magic_wins = 0
ties = 0
total_sequential_checks = 0
total_magic_checks = 0

# Run simulations
for _ in range(NUM_SIMULATIONS):
    winning_number = random.randint(1, SPACE_SIZE)

    # Create blocks
    blocks = [list(range(i * BLOCK_SIZE + 1, (i + 1) * BLOCK_SIZE + 1)) for i in range(NUM_BLOCKS)]

    # Generate block orders
    block_order_seq = list(range(NUM_BLOCKS))
    block_order_magic = list(range(NUM_BLOCKS))
    random.shuffle(block_order_seq)
    if SHARED_BLOCK_ORDER:
        block_order_magic = block_order_seq.copy()
    else:
        random.shuffle(block_order_magic)

    # === Sequential Method ===
    sequential_checks = 0
    found_seq = False

    for block_idx in block_order_seq:
        block = blocks[block_idx]
        for number in block:
            sequential_checks += 1
            if number == winning_number:
                found_seq = True
                break
        if found_seq:
            break

    # === Magic Method ===
    magic_checks = 0
    found_magic = False

    # First pass: check first 20 numbers of each block
    for block_idx in block_order_magic:
        block = blocks[block_idx]
        for number in block[:PARTIAL_SCAN]:
            magic_checks += 1
            if number == winning_number:
                found_magic = True
                break
        if found_magic:
            break

    # Second pass: check 50 to 21 (reversed)
    if not found_magic:
        for block_idx in reversed(block_order_magic):
            block = blocks[block_idx]
            for number in reversed(block[PARTIAL_SCAN:]):
                magic_checks += 1
                if number == winning_number:
                    found_magic = True
                    break
            if found_magic:
                break

    # === Update stats ===
    total_sequential_checks += sequential_checks
    total_magic_checks += magic_checks

    if sequential_checks < magic_checks:
        sequential_wins += 1
    elif magic_checks < sequential_checks:
        magic_wins += 1
    else:
        ties += 1

# === Final Output ===
print("Results after", NUM_SIMULATIONS, "simulations:")
print("Sequential wins:", sequential_wins)
print("Magic wins:", magic_wins)
print("Ties:", ties)
print("Total sequential checks:", total_sequential_checks)
print("Total magic checks:", total_magic_checks)



Results with SHARED_BLOCK_ORDER = 1:

Code:
Results after 10000 simulations:
Sequential wins: 3704
Magic wins: 6098
Ties: 198
Total sequential checks: 5037847
Total magic checks: 5038341

OMG!!! Did I just find a better method to search for a random number in a uniform distribution? OBVIOUSLY NOT. It's just a flawed way of conducting this analysis that introduces a bias in the distribution of checks during the simulations.
Now let’s see what happens with SHARED_BLOCK_ORDER = 0:

Code:
Results after 10000 simulations:
Sequential wins: 4980
Magic wins: 4976
Ties: 44
Total sequential checks: 5013645
Total magic checks: 5004263

Well, luckily statistics still works. I hope it was appreciated that I took the time to explain to everyone why we were seeing those results. And I hope this general obsession with the prefix method—or any other so-called magical method—will soon be extinguished by logic and your ability to reason and think critically.

I'm sorry, but there is no better method to guess a random number. The ripemd160 function doesn’t somehow “understand” that it should generate less likely similar prefixes if I input the hash of public keys derived from nearby private keys. That would require consciousness, and it would need to be able to reverse sha256 and secp256k1—which I very much doubt it can do.





kTimesG
Full Member
***
Offline Offline

Activity: 560
Merit: 182


View Profile
April 22, 2025, 05:28:41 PM
 #9353

...

"You're reading the results wrong"

"Changing the traversal order destroys the same initial conditions"

Cheesy Cheesy Cheesy

PS: While messing around with the Python version, I also simply added a fresh shuffle of the block order between the two methods. Guess what happened? The results were almost a perfect 50-50, just as expected. But I didn't mention it, so as not to get "not the same conditions". Your experiment, using a totally random, useless prefix to get the same biased results, is much much better though.

Off the grid, training pigeons to broadcast signed messages.
farou9
Newbie
*
Offline Offline

Activity: 71
Merit: 0


View Profile
April 22, 2025, 05:33:02 PM
 #9354

Why doesn't libsecp256k1 support point addition?

It does: it's called secp256k1_ec_pubkey_combine, and it can add multiple points at once.

It might be a little overkill if you call it in a long loop / a lot of times, though. Depending on your use case, there can be much faster custom ways to do it.
I'm trying to write a C++ program to compute the points for scalars 1 to 1 billion and store the x-values and the scalars in text files bucketed by the first 4 characters of x, but I have two problems that make the process very slow.
1. libsecp256k1 can't keep adding G to itself repeatedly; it can't handle adding a point to itself.
2. The core problem that slows it down: we can't open 16^4 files at the same time (we're limited to 3200), and we don't know what the next point's prefix will be, so we can't choose which files to open. Opening only one file at a time is very, very slow; the speed is about 1000 points per 5 seconds.
kTimesG
Full Member
***
Offline

Activity: 560
Merit: 182


View Profile
April 22, 2025, 05:53:04 PM
 #9355

I'm trying to write a C++ program to compute the points for scalars 1 to 1 billion and store the x-values and the scalars in text files bucketed by the first 4 characters of x, but I have two problems that make the process very slow.
1. libsecp256k1 can't keep adding G to itself repeatedly; it can't handle adding a point to itself.
2. The core problem that slows it down: we can't open 16^4 files at the same time (we're limited to 3200), and we don't know what the next point's prefix will be, so we can't choose which files to open. Opening only one file at a time is very, very slow; the speed is about 1000 points per 5 seconds.

Depending on how many cores you have (1 to 100), and with fast storage (at least an SSD), this can be done efficiently in somewhere between 10 seconds and 3 minutes.

Can I ask what is the purpose of this? The code is kinda complex, to be honest, and involves dividing work across all cores, adding points in Jacobian form, inverting the Z product a single time, and then finishing up the individual additions by down-sweeping inverted Zs and normalizing back to affine points.

You'll also need a ton of RAM (or less, and split the whole range, in exchange for a longer time to finish).
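
The "inverting the Z product a single time" part is simultaneous (Montgomery) inversion; the trick in isolation looks like this (a sketch, detached from the point math):

Code:
# Montgomery's trick: n field inversions for the cost of ONE modular
# inversion plus about 3n multiplications.
P = 2**256 - 2**32 - 977

def batch_inverse(zs, p=P):
    prefix = [1] * (len(zs) + 1)
    for i, z in enumerate(zs):
        prefix[i + 1] = prefix[i] * z % p     # running products z0*z1*...*zi
    inv = pow(prefix[-1], -1, p)              # the single inversion
    out = [0] * len(zs)
    for i in range(len(zs) - 1, -1, -1):      # the down-sweep
        out[i] = prefix[i] * inv % p          # equals 1/zs[i]
        inv = inv * zs[i] % p                 # strip zs[i] from the total
    return out

zs = [3, 7, 11]
print(all(z * iz % P == 1 for z, iz in zip(zs, batch_inverse(zs))))   # True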

Off the grid, training pigeons to broadcast signed messages.
farou9
Newbie
*
Offline Offline

Activity: 71
Merit: 0


View Profile
April 22, 2025, 06:10:29 PM
 #9356

I'm trying to write a C++ program to compute the points for scalars 1 to 1 billion and store the x-values and the scalars in text files bucketed by the first 4 characters of x, but I have two problems that make the process very slow.
1. libsecp256k1 can't keep adding G to itself repeatedly; it can't handle adding a point to itself.
2. The core problem that slows it down: we can't open 16^4 files at the same time (we're limited to 3200), and we don't know what the next point's prefix will be, so we can't choose which files to open. Opening only one file at a time is very, very slow; the speed is about 1000 points per 5 seconds.

Depending on how many cores you have (1 to 100), and with fast storage (at least an SSD), this can be done efficiently in somewhere between 10 seconds and 3 minutes.

Can I ask what is the purpose of this? The code is kinda complex, to be honest, and involves dividing work across all cores, adding points in Jacobian form, inverting the Z product a single time, and then finishing up the individual additions by down-sweeping inverted Zs and normalizing back to affine points.

You'll also need a ton of RAM (or less, and split the whole range, in exchange for a longer time to finish).
I don't know how BSGS works, but I think it's the same core idea. I create a database of the first 1 billion points and store the x-values (the scalar doesn't really matter, but it can speed things up). Then we keep adding 1 billion to the start of the range and subtract that point from the target by adding its negation; if we are within 1 billion of the target, the point the subtraction yields will be in the DB. You understand this already. The purpose of dividing the points into different files is to speed up the lookup part: looking for a point in 1 billion lines takes 200 seconds, but if a file only contains 15k points it won't take a second.

I only have an Intel i5 (4 cores) and 8 GB of RAM.
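
On the "can't open 65536 files at once" problem: you don't have to. One workaround is to buffer lines per prefix in RAM and append each bucket in batches, so only one file is ever open at a time. A sketch (the db/ directory and flush threshold are made up for illustration):

Code:
import os
from collections import defaultdict

FLUSH_EVERY = 1_000_000            # buffered lines before touching disk; tune to RAM
buffers = defaultdict(list)
buffered = 0

def add_point(x_hex, scalar):
    # Bucket by the first 4 hex chars of x; nothing is written until a flush.
    global buffered
    buffers[x_hex[:4]].append(f"{x_hex} {scalar}\n")
    buffered += 1
    if buffered >= FLUSH_EVERY:
        flush()

def flush():
    # Append each bucket in turn: one open file at any moment.
    global buffered
    os.makedirs("db", exist_ok=True)
    for prefix, lines in buffers.items():
        with open(f"db/{prefix}.txt", "a") as f:
            f.writelines(lines)
    buffers.clear()
    buffered = 0

add_point("f4a9" + "0" * 60, 123456789)
flush()                            # writes db/f4a9.txt

A sorted binary format with an in-RAM index would still beat text files for the lookup side.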
brainless
Member
**
Online

Activity: 408
Merit: 35


View Profile
April 22, 2025, 06:44:20 PM
 #9357

@nomachine
I repeat my request to add a function to Cyclone
where we could check a list of hash160s instead of only one hash160.
Example: a switch like -f hash160.txt.
Also, the default -b 4 should be dropped: apply -b if needed; if not, the user simply doesn't pass the -b switch, for better build-up speed.
Thanks

13sXkWqtivcMtNGQpskD78iqsgVy9hcHLF
kTimesG
Full Member
***
Offline Offline

Activity: 560
Merit: 182


View Profile
April 22, 2025, 06:52:25 PM
Last edit: April 22, 2025, 07:11:45 PM by kTimesG
 #9358

Can I ask what is the purpose of this?
I don't know how BSGS works, but I think it's the same core idea.

A range of 1 billion can only efficiently solve up to maybe a 59-bit private key of a known public key, if that is what you are trying to do. But I think you got the strategy wrong: the target point needs to get subtracted in sqrt(rangeSize) increments of 1 billion and then looked up in your "database" (which should really sit behind a bloom filter first, and definitely not be 65536 text files).

Off the grid, training pigeons to broadcast signed messages.
farou9
Newbie
*
Offline Offline

Activity: 71
Merit: 0


View Profile
April 22, 2025, 07:11:59 PM
 #9359

Can I ask what is the purpose of this?
I don't know how BSGS works, but I think it's the same core idea.

A range of 1 billion can only efficiently solve up to maybe a 59-bit private key of a known public key, if that is what you are trying to do. But I think you got the strategy wrong: the target point needs to get subtracted in sqrt(rangeSize) increments of 1 billion and then looked up in your "database" (which should really sit behind a bloom filter first, and definitely not be 65536 text files).

We are not subtracting the target, we are subtracting from the target. Ps (start of range), Pt (target), Pb (point of scalar 1 billion):
Ps + Pb = Q, Pt + (-Q) = R, then we look for the x of R in the DB.
What do you mean by "the target point needs to get subtracted in sqrt(rangeSize) increments of 1 billion"?
I know this strategy can't help much with 135 bits; that is why I have another approach with that DB.
kTimesG
Full Member
***
Offline Offline

Activity: 560
Merit: 182


View Profile
April 22, 2025, 07:39:28 PM
 #9360

We are not subtracting the target, we are subtracting from the target. Ps (start of range), Pt (target), Pb (point of scalar 1 billion):
Ps + Pb = Q, Pt + (-Q) = R, then we look for the x of R in the DB.
What do you mean by "the target point needs to get subtracted in sqrt(rangeSize) increments of 1 billion"?
I know this strategy can't help much with 135 bits; that is why I have another approach with that DB.

It's faster to simply do Pt - j*Pb = R and look for x in the precomputed [startRange, startRange + 1 billion] table.

This is basically how BSGS works.

If the private key is beyond 58 bits, this gets exponentially more inefficient the larger the range is.

For 135 bits you'd need a 2**67 database and 2**67 subtraction & lookup steps. But due to space and time constraints this is not possible for the foreseeable future (e.g. our lifetime).
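
To make "this is basically how BSGS works" concrete, a toy end-to-end version in pure Python, with tiny made-up parameters and inline affine helpers (purely illustrative, not remotely fast):

Code:
# Toy BSGS: find k in [a, b) with k*G == target, using m baby steps.
P = 2**256 - 2**32 - 977
G = (0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798,
     0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8)

def ec_add(p1, p2):
    if p1 is None: return p2
    if p2 is None: return p1
    (x1, y1), (x2, y2) = p1, p2
    if x1 == x2 and (y1 + y2) % P == 0: return None
    lam = (3*x1*x1 * pow(2*y1, -1, P) if p1 == p2
           else (y2 - y1) * pow(x2 - x1, -1, P)) % P
    x3 = (lam*lam - x1 - x2) % P
    return (x3, (lam*(x1 - x3) - y1) % P)

def ec_mul(k, pt):
    acc = None
    while k:
        if k & 1: acc = ec_add(acc, pt)
        pt = ec_add(pt, pt)
        k >>= 1
    return acc

def bsgs(target, a, b, m):
    table, pt = {}, None
    for i in range(1, m + 1):                 # baby steps: x(i*G) -> i
        pt = ec_add(pt, G)
        table[pt[0]] = i
    giant = (pt[0], -pt[1] % P)               # -(m*G)
    Q = ec_add(target, ec_mul(a, (G[0], -G[1] % P)))   # target - a*G
    for j in range((b - a) // m + 1):
        if Q is None:
            return a + j * m
        i = table.get(Q[0])
        if i is not None:                     # x matches i*G or -(i*G)
            for k in (a + j*m + i, a + j*m - i):
                if k > 0 and ec_mul(k, G) == target:
                    return k
        Q = ec_add(Q, giant)                  # peel off another m*G

secret = 123456789
print(bsgs(ec_mul(secret, G), 1, 2**28, 2**14))   # 123456789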

Off the grid, training pigeons to broadcast signed messages.