I'm too lazy to type it all, so I had AI break down the script. The basic idea: use less memory in exchange for a little more compute. I was studying a different curve and found some cool things, and this might help someone out there. It should be curve agnostic. Below is the breakdown. Good luck.
Consecutive-Signature Filtering for BSGS-Style Searches
This method is a memory-efficient variation of baby-step/giant-step (BSGS) that replaces full point storage with small coordinate signatures and then verifies candidates using consecutive curve steps.
The goal is to dramatically reduce memory while still allowing reliable candidate detection.
Core Idea
Instead of storing entire elliptic-curve points in the baby table, store a small signature derived from the point coordinates.
Example signature:
sig(P) = (x mod 256, y mod 256)
This produces a 16-bit fingerprint of the point.
Many points will share the same fingerprint, so matches are only possible candidates.
False matches are eliminated by verifying consecutive points along the curve.
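The signature itself is just bit packing. A minimal Python sketch (illustrative, not from the original script) that treats the affine coordinates as plain integers:

```python
def signature(x: int, y: int, bits: int = 8) -> int:
    """Pack the low `bits` of each affine coordinate into one fingerprint.

    With bits=8 this is sig(P) = (x mod 256, y mod 256), a 16-bit value.
    """
    mask = (1 << bits) - 1
    return ((x & mask) << bits) | (y & mask)

# Distinct points can share a fingerprint, so a match is only a candidate:
print(hex(signature(0x1A2B3C, 0x4D5E6F)))  # 0x3c6f
print(hex(signature(0x00FF3C, 0x11116F)))  # 0x3c6f -- same signature, different point
```

The `bits` parameter is the knob discussed under "Signature Width" below: widening it shrinks the candidate rate at the cost of table size.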
Step 1 — Build the Baby Table
Compute a sequence of curve points:
P_i = iG
for:
i = 0 … m−1
For each point:
compute the signature
store the signature in a table
store the position i where it occurred
Only the signature is stored, not the full point.
Example stored entry:
signature → [i1, i2, ...]
Multiple positions may share the same signature.
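Step 1 can be sketched end to end on a tiny textbook curve (y² = x³ + 2x + 2 over F₁₇, generator G = (5, 1), group order 19). A real search would use a cryptographic curve and a wider signature; all names and the 2-bit-per-coordinate width here are illustrative only:

```python
P_MOD, A = 17, 2
G = (5, 1)

def ec_add(P, Q):
    """Affine point addition; None is the point at infinity."""
    if P is None:
        return Q
    if Q is None:
        return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % P_MOD == 0:
        return None
    if P == Q:
        lam = (3 * x1 * x1 + A) * pow(2 * y1, -1, P_MOD) % P_MOD
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, P_MOD) % P_MOD
    x3 = (lam * lam - x1 - x2) % P_MOD
    return (x3, (lam * (x1 - x3) - y1) % P_MOD)

def sig(P, bits=2):
    """Tiny 2-bit-per-coordinate signature (toy analogue of x, y mod 256)."""
    if P is None:
        return -1  # sentinel for 0*G, the point at infinity
    mask = (1 << bits) - 1
    return ((P[0] & mask) << bits) | (P[1] & mask)

m = 5                      # baby-table size
table = {}                 # signature -> list of positions i
P = None                   # running point i*G, starting at 0*G
for i in range(m):
    table.setdefault(sig(P), []).append(i)
    P = ec_add(P, G)       # advance to (i+1)*G

print(table)  # {-1: [0], 5: [1], 11: [2], 10: [3], 13: [4]}
```

Note that only `sig(P)` and `i` go into the table; the full points are thrown away as soon as the running point advances.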
Step 2 — Probe With Candidate Points
For a candidate point Q:
compute its signature
check if that signature exists in the baby table
If no match exists, discard immediately.
If matches exist, those positions become candidate alignments.
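Step 2 is just a dictionary lookup. A minimal sketch, using a small hypothetical table standing in for Step 1's output:

```python
# Hypothetical Step 1 output: signature -> positions where it occurred.
baby_table = {0x3C6F: [2, 7], 0x0102: [5]}

def probe(table: dict, q_sig: int) -> list:
    """Return candidate alignments for a probe signature ([] = discard)."""
    return table.get(q_sig, [])

print(probe(baby_table, 0x3C6F))  # [2, 7] -> two candidate alignments to verify
print(probe(baby_table, 0xBEEF))  # []     -> discarded immediately
```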
Step 3 — Consecutive Verification
To eliminate false matches, verify that the surrounding points also match.
Example for a 3-check verification:
Compare signatures for:
Q, Q + G, Q + 2G
against the stored sequence:
P_i, P_{i+1}, P_{i+2}
If all signatures match, the candidate survives verification.
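A sketch of the verification loop. To keep it self-contained, "points" are modeled as integers, the group step as adding a constant, and the signature as the low byte of a SHA-256 hash (so fingerprints behave randomly, like real curve coordinates); everything here is illustrative:

```python
import hashlib

def sig_fn(v) -> int:
    """8-bit fingerprint of a stand-in 'point' (here just an integer)."""
    return hashlib.sha256(str(v).encode()).digest()[0]

def step_fn(v):
    return v + 7           # stand-in for "add G"

def consecutive_match(Q, baby_sigs, i, checks=3):
    """Compare sig(Q + t*G) against stored sig(P_{i+t}) for t = 0..checks-1."""
    T = Q
    for t in range(checks):
        if i + t >= len(baby_sigs) or sig_fn(T) != baby_sigs[i + t]:
            return False
        T = step_fn(T)
    return True

# Stored signatures of P_0..P_9 (P_i modeled as i*7):
baby_sigs = [sig_fn(i * 7) for i in range(10)]

print(consecutive_match(3 * 7, baby_sigs, 3))  # True: Q really aligns with P_3
print(consecutive_match(22, baby_sigs, 3))     # False unless every 8-bit sig happens to collide
```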
Why Consecutive Verification Works
When you look only at small coordinate fragments, elliptic-curve points behave essentially like random values.
Matching a single signature has probability roughly:
1 / signature_space
Example:
16-bit signature → 1 / 65536
Matching multiple consecutive points multiplies the constraint:
(1 / signature_space)^checks
Example:
3 checks with 16-bit signatures ≈ 1 / 2^48
This makes accidental matches extremely unlikely.
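The arithmetic above checks out directly:

```python
sig_bits, checks = 16, 3
single = 1 / 2**sig_bits        # one 16-bit signature match: 1/65536
combined = single ** checks     # three consecutive matches must all hold
print(combined == 2**-48)       # True -- roughly 3.6e-15 per candidate
```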
Signature Width
The number of bits in the signature controls the trade-off between:
memory usage
candidate frequency
verification workload
Examples:
Signature    Bits   Storage
x8  + y8      16    2 bytes
x12 + y12     24    3 bytes
x16 + y16     32    4 bytes
Larger signatures reduce the number of candidate hits.
Verification Cost
Most candidate points are discarded immediately by the signature lookup.
Only a fraction proceed to verification.
Verification requires computing:
Q + G, Q + 2G, …
depending on the number of checks.
These operations are inexpensive compared with storing large tables.
Memory Characteristics
Traditional BSGS typically stores:
compressed point ≈ 33 bytes
This method stores:
signature only ≈ 2–4 bytes
The reduction in memory can exceed 90%.
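For the per-entry key alone the saving is easy to quantify (positions i must be stored in either scheme, so they are left out of the comparison):

```python
compressed_point = 33   # bytes per stored point in classic BSGS
sig_entry = 2           # bytes per 16-bit signature
reduction = 1 - sig_entry / compressed_point
print(f"{reduction:.1%}")  # 93.9%
```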
Accuracy
The method is probabilistic.
False positives are possible but become extremely unlikely when:
signatures are sufficiently large
multiple consecutive checks are used
final full-point verification is performed
Advantages
• very small memory footprint
• scalable to much larger tables
• fast rejection of non-matching candidates
• verification cost remains small
Limitations
• matches are probabilistic until fully verified
• additional verification steps are required
• signature collisions are inevitable at small widths
Practical Use
The method works best as a filtering layer:
use small signatures to identify potential matches
verify consecutive points
perform a final exact verification
This allows extremely large search spaces to be explored while keeping memory requirements manageable.
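The whole pipeline can be exercised on the same tiny textbook curve (y² = x³ + 2x + 2 over F₁₇, G = (5, 1), order 19): signatures filter, consecutive checks shrink the survivor set, and a final exact comparison confirms before the discrete log is reported. The toy parameters and names are illustrative, not from the original script:

```python
P_MOD, A = 17, 2
G, N = (5, 1), 19          # generator and group order of the toy curve

def ec_add(P, Q):
    if P is None: return Q
    if Q is None: return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % P_MOD == 0:
        return None        # P + (-P) = point at infinity
    if P == Q:
        lam = (3 * x1 * x1 + A) * pow(2 * y1, -1, P_MOD) % P_MOD
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, P_MOD) % P_MOD
    x3 = (lam * lam - x1 - x2) % P_MOD
    return (x3, (lam * (x1 - x3) - y1) % P_MOD)

def ec_mul(k, P):
    R = None
    while k:
        if k & 1:
            R = ec_add(R, P)
        P, k = ec_add(P, P), k >> 1
    return R

def sig(P, bits=2):
    if P is None:
        return -1          # sentinel for the point at infinity
    mask = (1 << bits) - 1
    return ((P[0] & mask) << bits) | (P[1] & mask)

# 1) Baby table: signature -> positions j for P_j = j*G.
m = 5                      # ceil(sqrt(N))
table, P = {}, None
for j in range(m):
    table.setdefault(sig(P), []).append(j)
    P = ec_add(P, G)

# 2)+3) Giant steps: filter by signature, verify consecutively, confirm exactly.
def survives(C, j, checks=3):
    T, R = C, ec_mul(j, G)          # recompute P_j; only signatures were stored
    for _ in range(checks):
        if sig(T) != sig(R):
            return False
        T, R = ec_add(T, G), ec_add(R, G)
    return True

k_secret = 13
Q = ec_mul(k_secret, G)    # pretend Q is given and k_secret is unknown

mG = ec_mul(m, G)
neg_mG = (mG[0], (-mG[1]) % P_MOD)

found, C = None, Q
for i in range(m + 1):
    for j in table.get(sig(C), []):
        if survives(C, j) and ec_mul(j, G) == C:   # final exact verification
            found = (i * m + j) % N
            break
    if found is not None:
        break
    C = ec_add(C, neg_mG)  # next giant step: Q - (i+1)*m*G

print(found)  # 13
```

With real parameters the only structural change is the signature width, the table size m, and swapping in fast field arithmetic; the control flow stays the same, which is what makes the approach curve agnostic.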
Summary
Consecutive-signature filtering replaces full point storage with small coordinate fingerprints and verifies matches using short sequences of curve steps.
The approach trades a small amount of extra computation for a dramatic reduction in memory, making it possible to scale BSGS-style searches much further than traditional implementations.