Bitcoin Forum
April 01, 2026, 09:49:28 PM
Author Topic: ECDSA projective structural decomposition attack  (Read 47 times)
kTimesG (OP)
Full Member
Activity: 784 | Merit: 242

Today at 12:44:36 AM
 #1

OK, this is something I've been working on for some months now, so bear with me.

I believe I've found a structural shortcut in ECDSA that, as far as I can tell, hasn't been explicitly documented.

While analyzing the signature equation:

Code:
s·k ≡ r·d + h (mod n)

and thinking of it as a 2D rectangle-area equation (one axis for scalars, one for the x-coordinate), we can explore a bidirectional decomposition of both axes.

Let
Code:
s = r + (s − r)
k = d + (k − d)

Expanding gives:

s·k = r·d + r(k − d) + (s − r)d + (s − r)(k − d)

Substituting s·k ≡ r·d + h (mod n) and canceling r·d on both sides yields a representation I haven't seen documented:

h = r(k − d) + (s − r)d + (s − r)(k − d)

This expresses h as the sum of three components, each isolating an interaction between the known signature values (r, s) and the unknowns k and d. Visually, it is the sum of three rectangles obtained by splitting each axis of the s·k rectangle at the unknowns d and k.
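For anyone who wants to sanity-check the identity numerically, here's a quick sketch (Python, secp256k1 group order assumed; r and s are just arbitrary residues mod n here, not tied to any curve point):

```python
# Numeric sanity check of the three-term identity over the secp256k1 group order.
import secrets

n = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141

# Arbitrary scalars: r and s are treated purely as residues mod n.
d = secrets.randbelow(n)
k = secrets.randbelow(n)
r = secrets.randbelow(n)
s = secrets.randbelow(n)

# Define h from the signature equation s*k ≡ r*d + h (mod n).
h = (s * k - r * d) % n

# The three-term decomposition must reproduce h exactly.
decomposed = (r * (k - d) + (s - r) * d + (s - r) * (k - d)) % n
assert decomposed == h  # holds for every (d, k, r, s): it's an algebraic identity
```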

What’s interesting is that this decomposition separates the dependency graph in a way that allows iterative candidate reduction:

For any candidate dᵢ, we can compute:

kᵢ = s⁻¹(r·dᵢ + h) mod n

But under the 3-term decomposition, k − d appears explicitly in two terms, meaning that incorrect guesses of d introduce correlated distortions across multiple components rather than a single scalar mismatch.

Empirically, this suggests that the solution space is not uniformly flat, but exhibits detectable structure when projected through this decomposition.

In other words, instead of verifying k via kG.x ≡ r, we can potentially detect inconsistencies earlier by analyzing the internal balance of the three terms.
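Here's a toy sketch of that candidate loop (hypothetical r, s, h values, secp256k1 order assumed). By construction every candidate dᵢ satisfies the identity, so the question is whether the individual terms carry detectable structure:

```python
# Candidate loop: for each guess d_i, derive k_i = s^-1 * (r*d_i + h) mod n
# and evaluate the three decomposition terms individually.
n = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141

# Toy signature values, purely illustrative (not from a real signature).
r, s, h = 0xCAFE, 0xF00D, 0x1234

s_inv = pow(s, -1, n)  # modular inverse; n is prime, s != 0
for d_i in range(1, 6):  # toy candidate interval for d
    k_i = (s_inv * (r * d_i + h)) % n
    t1 = r * (k_i - d_i) % n          # first rectangle
    t2 = (s - r) * d_i % n            # second rectangle
    t3 = (s - r) * (k_i - d_i) % n    # third rectangle
    # The sum always re-balances to h, for every candidate d_i.
    assert (t1 + t2 + t3) % n == h
```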

This hints at a possible reduction in search complexity when d is constrained (e.g., biased key generation, partial leakage scenarios, or non-uniform entropy sources). In particular, this search reduction fits the interval-ECDLP setting, such as the puzzle addresses with an outgoing TX (and hence an available signature).

Still early, but if this holds under deeper analysis, it could open an alternative angle on ECDSA key recovery that doesn’t rely directly on elliptic curve point validation.

In effect, this can mean that a viable alternative to HNP or lattice attacks is possible, with just a single ECDSA signature.

No elliptic curve group operations or scalar multiplications are required for this attack.

Curious if anyone has seen this formulation explored before. Thoughts?

Off the grid, training pigeons to broadcast signed messages.
Ryan MacQuarrie
Newbie
Activity: 3 | Merit: 0

Today at 05:57:08 AM
 #2

I thought this topic was interesting, so I did some digging. Finding a number k that satisfies the modular equation does not mean you have found the right k: the numbers satisfy the arithmetic, but ECDSA additionally requires that k produce a curve point kG whose x-coordinate equals r (https://csrc.nist.gov/pubs/fips/186-5/final, https://crypto.stackexchange.com/questions/67627/why-doesnt-the-formula-work-when-checking-two-ecdsa-signatures).

When we plug values into your three-component equation it works (as it should, since it's an algebraic identity), but the chance that you've actually guessed the private key is essentially zero. For a real curve like secp256k1 (n ≈ 2²⁵⁶), the probability of a random hit is about 1 in 1.16×10⁷⁷.

To put it simply: your equation checks numbers using only modular arithmetic, whereas ECDSA also involves points on a curve. That extra geometric constraint is what makes the scheme infeasible to break with this method (https://www.secg.org/sec1-v2.pdf, https://books.google.com/books/about/Guide_to_Elliptic_Curve_Cryptography.html?id=V5oACAAAQBAJ).
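To make the gap concrete, here is a minimal sketch (textbook affine double-and-add over secp256k1; toy scalars chosen to satisfy the modular equation by construction):

```python
# Satisfying the modular equation says nothing about (kG).x equalling r.
# Minimal affine secp256k1 arithmetic (curve y^2 = x^3 + 7 over F_p).
p = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEFFFFFC2F
n = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141
G = (0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798,
     0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8)

def ec_add(P, Q):
    """Affine point addition; None is the point at infinity."""
    if P is None: return Q
    if Q is None: return P
    if P[0] == Q[0] and (P[1] + Q[1]) % p == 0: return None
    if P == Q:
        lam = 3 * P[0] * P[0] * pow(2 * P[1], -1, p) % p  # tangent slope (a = 0)
    else:
        lam = (Q[1] - P[1]) * pow(Q[0] - P[0], -1, p) % p  # chord slope
    x = (lam * lam - P[0] - Q[0]) % p
    return (x, (lam * (P[0] - x) - P[1]) % p)

def ec_mul(k, P):
    """Scalar multiplication via double-and-add."""
    R = None
    while k:
        if k & 1: R = ec_add(R, P)
        P = ec_add(P, P)
        k >>= 1
    return R

# Toy scalars that satisfy s*k ≡ r*d + h (mod n) by construction...
d, k, r, s = 7, 11, 0xBEEF, 0xF00D
h = (s * k - r * d) % n
assert (s * k) % n == (r * d + h) % n   # modular equation holds
# ...but the geometric check fails: (kG).x mod n is not r.
assert ec_mul(k, G)[0] % n != r
```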

Best Regards, -Ryan MacQuarrie
kTimesG (OP)
Full Member
Activity: 784 | Merit: 242

Today at 12:30:46 PM
 #3

Quote from: Ryan MacQuarrie
ECDSA requires that K makes a curve point whose X-Coordinate equals R

Thanks for the detailed reply, really appreciate the references and the probability breakdown.

I fully agree with your main point: without the EC relation ((kG).x mod n = r), any scalar-only check is insufficient, and the probability of randomly hitting a valid key is effectively zero for secp256k1.

What I was trying to explore is slightly orthogonal to that.

Starting from the decomposition:

h = r(k − d) + (s − r)d + (s − r)(k − d)

this makes the interaction between k and d explicit across multiple coupled terms, rather than a single linear relation.

Now, in the standard view:

k = s⁻¹(r·d + h) mod n

an incorrect guess for d simply maps to a different k, with no immediate scalar-level inconsistency.

However, in the decomposed form, (k − d) appears in two separate terms, meaning that an incorrect d propagates into multiple components simultaneously. This raises the question of whether those deviations remain uniformly random, or whether they introduce detectable structure under certain constraints.

One scenario that seems worth thinking about is deterministic nonce generation (RFC 6979). In that case, k is no longer uniformly random, but a function of (d, h). If we assume d lies in a constrained interval of size 2ᵗ, then k is effectively drawn from at most 2ᵗ possible values as well, rather than the full group order n.

That doesn’t break the scheme, of course, but it changes the shape of the search space: instead of arbitrary (d, k) pairs, we’re looking at a structured subset where k and d are functionally linked.
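As a toy illustration of that collapse (a simplified hash-based stand-in for RFC 6979, which actually uses HMAC-DRBG; all values illustrative):

```python
# A deterministic nonce k = f(d, h) collapses the (d, k) space:
# a constrained key interval yields at most one nonce per candidate key.
import hashlib

n = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141
h = 0x1234  # fixed message hash (toy value)

def det_nonce(d, h):
    # NOT RFC 6979 -- just a deterministic function of (d, h) for illustration.
    digest = hashlib.sha256(d.to_bytes(32, "big") + h.to_bytes(32, "big")).digest()
    return int.from_bytes(digest, "big") % n

interval = range(1, 2**8 + 1)        # constrained key interval of size 2^8
nonces = {det_nonce(d, h) for d in interval}
# At most |interval| distinct nonces, versus ~n for a random k.
assert len(nonces) <= len(interval)
```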

In that context, the decomposition can be visualized as overlapping areas:
Code:
pubX

|   s    ┌───────────────┐   ← (k, s)
|        │      h        │
|        │   (difference)│
|   r ───┼───────┐       │
|        │ r·d   │       │
|        │       │       │
|        └───────┴───────┘──────── scalar
|        0       d       k  → x

Here, h is not a separate rectangle but the L-shaped region of (s·k) not covered by (r·d). When d is incorrect, both the horizontal (k − d) and vertical (s − r) offsets shift together.

So the question becomes: in a constrained or deterministic setting, do these coupled offsets remain indistinguishable from random noise, or could they serve as a weak pre-filter before EC validation?

This “multi-term coupling” view therefore seems most relevant in the context of structured nonce generation.
