Author Topic: [Crypto] Compact Confidential Transactions for Bitcoin  (Read 18400 times)
Mixles (OP)
Member

Activity: 63
Merit: 11

June 28, 2015, 11:43:41 AM
Last edit: June 28, 2015, 01:36:37 PM by Mixles
#21

Employing an extra proof per output similar to the smallness NIZKP in the first version of your whitepaper (but multiplying by r instead of adding), I believe it can be proven that the sum of the outputs minus the output being proven is some fraction < 1 of the input sum, without revealing anything about the magnitude of the inputs and outputs. If this is combined with input mixing, then to the extent that the anonymity set can't be unmasked, the relative values can't be traced back to the original coinbase magnitudes.

Can you elaborate further? I don't see how m = r*c*x would be secure.

But afaics in exchange for that extra complexity, the reduced bits issue is eliminated. Whereas, isn't the weakened commitment E2 explicit in your protocol?

E1 and E2 bit strengths are both well defined, and are both stronger than 128 bits on a 768 bit curve (i.e. not weak).

But as I said, doesn't ∆ introduce a new simultaneous hardness assumption alongside the assumption of discrete logarithm hardness, especially in conjunction with the explicitly weakened commitment? Whereas, in the sum of 3 squares approach, only the hardness of the discrete logarithm is assumed.

Of course, there is a reduction in cryptanalysis complexity given by the attacker knowing that F1 and F2 are squares, and knowing an encryption of their sum. This is already baked into the bit security reduction from 188 fuzzbits to 64 E2 bits, and why the scheme is hungry for a large curve.

CCT does not rely on additional assumptions such as those introduced in [38]'s reference [25] (Boudot, "Efficient Proofs that a Committed Number Lies in an Interval"), Appendix A or B. It's also a sum. There is no product for an attacker to factor.



TPTB_need_war
Sr. Member

Activity: 420
Merit: 257

June 28, 2015, 01:07:28 PM
Last edit: June 28, 2015, 01:21:12 PM by TPTB_need_war
#22

Employing an extra proof per output similar to the smallness NIZKP in the first version of your whitepaper (but multiplying by r instead of adding), I believe it can be proven that the sum of the outputs minus the output being proven is some fraction < 1 of the input sum, without revealing anything about the magnitude of the inputs and outputs. If this is combined with input mixing, then to the extent that the anonymity set can't be unmasked, the relative values can't be traced back to the original coinbase magnitudes.

Can you elaborate further? I don't see how m = r*c*x would be secure.

Probably I am incorrect, because I don't have much experience in this field, but why is it insecure? If the verifier doesn't know r, then how can the verifier know x?

I envisioned two proofs, one for m = r*c*(sum of inputs) and one for n = r*c*(sum of outputs - the output being proven non-negative), i.e. the same U and V used in both proofs. If n/m <= 1, then the output is non-negative (because also the sum of outputs * G = the sum of inputs * G).

Afaics, all that is revealed is the ratio of the values, not the magnitudes. But I did it only quickly in my head, so maybe I screwed up. I am very rusty; I haven't been doing higher-level maths for decades.


But afaics in exchange for that extra complexity, the reduced bits issue is eliminated. Whereas, isn't the weakened commitment E2 explicit in your protocol?

E1 and E2 bit strengths are both well defined, and are both stronger than 128 bits on a 768 bit curve (i.e. not weak).

But as I said, doesn't ∆ introduce a new simultaneous hardness assumption alongside the assumption of discrete logarithm hardness, especially in conjunction with the explicitly weakened commitment? Whereas, in the sum of 3 squares approach, only the hardness of the discrete logarithm is assumed.

Of course, there is a reduction in cryptanalysis complexity given by the attacker knowing that F1 and F2 are squares, and knowing an encryption of their sum. This is already baked into the bit security reduction from 188 fuzzbits to 64 E2 bits, and why the scheme is hungry for a large curve.

CCT does not rely on additional assumptions such as those introduced in [38]'s reference [25] (Boudot, "Efficient Proofs that a Committed Number Lies in an Interval"), Appendix A or B. It's also a sum. There is no product for an attacker to factor.

You are asserting that only the bit security has been weakened and thus can be compensated for with a larger curve. I am thinking the attacker also knows a relationship between the sum (fuzzedValue) and a revealed constant (∆) that is not baked into the bit security reduction, specifically the point #4 that you admitted defines a set (probably smaller than Zn) with attributes that are either not well understood or not yet explained.

Mixles (OP)
Member

Activity: 63
Merit: 11

June 28, 2015, 01:26:46 PM
Last edit: June 28, 2015, 02:03:16 PM by Mixles
#23

Probably I am incorrect, because I don't have much experience in this field, but why is it insecure? If the verifier doesn't know r, then how can the verifier know x?

Eve would multiply by the multiplicative inverse of the public c to find the product z = r*x mod N. Factor it. Maybe use some other equations too.
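A toy sketch of that attack (all parameters hypothetical; plain modular integers standing in for the group arithmetic, and Python 3.8+ assumed for the modular inverse):

# Why m = r*c*x leaks: the public challenge c cancels, leaving the product r*x,
# which is a factoring target instead of an ECDL one.
N = 10**9 + 7                  # toy prime modulus standing in for the group order
r, x, c = 1234, 567, 89        # secret blinding r, secret value x, public challenge c
m = (r * c * x) % N
z = (m * pow(c, -1, N)) % N    # multiply by the multiplicative inverse of c
assert z == (r * x) % N        # Eve now holds r*x and can factor it for candidate x
print(z)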

You are asserting that only the bit security has been weakened and thus can be compensated for with a larger curve. I am thinking the attacker also knows a relationship between the sum (fuzzedValue) and a revealed constant (∆) that is not baked into the bit security reduction, specifically the point #4 that you admitted defines a set (probably smaller than Zn) with attributes that are either not well understood or not yet explained.

Yes, the set is smaller. That's why the bit security is lower.

∆ is a random number near any sum of squares. There are provably many more sums of squares than there are squares. There are as many possible squares as bits in x2 allow. There are therefore many more possible fuzzedValue solutions for a single ∆ than iterations in brute forcing all the bits in E2=x2*G.

Knowing exactly how many more possible fuzzedValue solutions there are for a single ∆ doesn't change that. Having the mathematical beauty explained Gauss-style over Zn would be a nice-to-have for the sake of pure mathematics, but I don't see it as necessary.

Edit: Curiously, if I used the third square for ∆ instead of a random number, I would also be concerned about some other relationship between that and the other two squares.
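An illustrative density check of the counting claim above, over a small range (an editorial sketch, not from the paper):

# Compare how many integers below a limit are perfect squares versus
# sums of two (positive) squares; the latter are far denser.
LIMIT = 10000
squares = [i * i for i in range(1, int(LIMIT ** 0.5) + 1)]
sums_of_two = {a + b for a in squares for b in squares if a + b <= LIMIT}
print(len(squares), len(sums_of_two))   # 100 squares vs a few thousand sums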

TPTB_need_war
Sr. Member

Activity: 420
Merit: 257

June 28, 2015, 02:01:06 PM
Last edit: June 28, 2015, 02:13:55 PM by TPTB_need_war
#24

Probably I am incorrect, because I don't have much experience in this field, but why is it insecure? If the verifier doesn't know r, then how can the verifier know x?

I would multiply by the multiplicative inverse of the public c to find the product z = r*x mod N. Factor it. Maybe use some other equations too.

Afaik, the only purpose of c is so the prover doesn't cheat, so it is okay if it drops out for the verifier. I don't see how factoring can reduce the set of possibilities. There is no constraint on z other than r, which the verifier doesn't know. The set of possibilities appears to be the bit security of r?

You are asserting that only the bit security has been weakened and thus can be compensated for with a larger curve. I am thinking the attacker also knows a relationship between the sum (fuzzedValue) and a revealed constant (∆) that is not baked into the bit security reduction, specifically the point #4 that you admitted defines a set (probably smaller than Zn) with attributes that are either not well understood or not yet explained.

Yes, the set is smaller. That's why the bit security is lower.

∆ is a random number near any sum of squares. There are provably more sums of squares than there are squares. There are as many possible squares as bits in x2 allow. There are therefore more possible solutions to ∆ than iterations in brute forcing all the bits in E2=x2*G.

∆ has less entropy than fuzzbits because some of those degrees-of-freedom are constrained (discarded) by the smallness requirement on ∆, thus I don't see how you concluded that the lower bound of the entropy is only that imparted to x2. The relationship appears to me to be more complex.

Edit: Curiously, if I used the third square for ∆ instead of a random number, I would also be concerned about some other relationship between that and the other two squares.

The distinction afaics is that then you don't reveal it and the entropy is not knowingly constrained to fewer bits. That is why I am thinking of the 3 squares when the sender can find a suitable set, with a fallback to one of the other ideas if not. But anyway, I am not really qualified to do this and I am probably wasting your time.

Mixles (OP)
Member

Activity: 63
Merit: 11

June 28, 2015, 02:13:42 PM
Last edit: June 28, 2015, 02:32:46 PM by Mixles
#25

Afaik, the only purpose of c is so the prover doesn't cheat, so it is okay if it drops out for the verifier. I don't see how factoring can reduce the set of possibilities. There is no constraint on z other than r, which the verifier doesn't know. The set of possibilities appears to be the bit security of r?

x*r is discovered, which is much easier to factor to get [x, r, ...] as factors than doing ECDL. You're then relying on, as you put it earlier, a different hardness assumption.

∆ has less entropy than fuzzbits because some of those degrees-of-freedom are constrained by the smallness requirement on ∆, thus I don't see how you concluded that the lower bound of the entropy is only that imparted to x2. The relationship appears to me to be more complex.

Yes. The entropy in ∆ is specifically fuzzbits/2 bits. ∆ comes directly from the difference between a uniformly random number and a randomly selected sum of two squares (which are much more common than squares).

The distinction afaics is that then you don't reveal it and the entropy is not knowingly constrained to fewer bits. That is why I am thinking of the 3 squares when the sender can find a suitable set, with a fallback to one of the other ideas if not. But anyway, I am not really qualified to do this and I am probably wasting your time.

Qualification and pieces of paper are irrelevant. I don't have any certificates of cryptographic achievement myself. You challenge me to think about the assumptions, and that is a useful part of review.

TPTB_need_war
Sr. Member

Activity: 420
Merit: 257

June 28, 2015, 02:30:06 PM
#26

Afaik, the only purpose of c is so the prover doesn't cheat, so it is okay if it drops out for the verifier. I don't see how factoring can reduce the set of possibilities. There is no constraint on z other than r, which the verifier doesn't know. The set of possibilities appears to be the bit security of r?

x*r is discovered, which is much easier to factor than doing ECDL. You're then relying, as you put it earlier, on a different hardness assumption.

Why isn't that also the case for m = r + c * x? The verifier knows c, thus the possibilities for x are entirely determined by the possible values of r.

Isn't it still just a brute-force search of the entropy in r?

∆ has less entropy than fuzzbits because some of those degrees-of-freedom are constrained by the smallness requirement on ∆, thus I don't see how you concluded that the lower bound of the entropy is only that imparted to x2. The relationship appears to me to be more complex.

Yes. The entropy in ∆ is specifically fuzzbits/2 bits. ∆ comes directly from the difference between a uniformly random number and a randomly selected sum of two squares (which are much more common than squares).

The fuzz and the sum of squares are not independently random. And then there is the added constraint that all those ∆ > 2^(fuzzbits/2) are discarded. I thus speculate that the lower bound of entropy is potentially much less than you calculated.

Mixles (OP)
Member

Activity: 63
Merit: 11

June 28, 2015, 02:43:02 PM
Last edit: June 28, 2015, 02:59:31 PM by Mixles
#27

Why isn't that also the case for m = r + c * x? The verifier knows c, thus the possibilities for x are entirely determined by the possible values of r.

Because c*x is not modulo N, but some N-r, so you can't get the multiplicative inverse without the secret r, for which you need ECDL.
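For contrast, a minimal sketch of the additive response m = r + c*x under discussion (toy modular integers; in a scheme of this shape the verifier's check m*G == R + c*X lives on the curve, so recovering r from R costs an ECDL; Python 3.8+ assumed):

import hashlib, secrets

q = 2**255 - 19                 # stand-in group order (assumption)
x = secrets.randbelow(q)        # secret value
r = secrets.randbelow(q)        # one-time uniform blinding nonce
c = int.from_bytes(hashlib.sha256(b"U || V transcript").digest(), "big") % q
m = (r + c * x) % q
# Stripping the public c off m leaves x + r*c^-1 (mod q), still masked by the
# unknown uniform r, so nothing factors out the way it does for m = r*c*x.
masked = (m * pow(c, -1, q)) % q
assert masked == (x + r * pow(c, -1, q)) % q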

The fuzz and the sum of squares are not independently random. And then there is the added constraint that all those ∆ > 2^(fuzzbits/2) are discarded. I thus speculate that the lower bound of entropy is potentially much less than you calculated.

As I have proved in my earlier post, there are many more possible solutions for each ∆ than there are possible sums of squares. You cannot imply anything from ∆, which is a random number. If I picked ∆+1, ∆+2 or ∆+3, I could still pick the same sum of squares with each, or any one of more than 2^(fuzzbits/2) other sums of squares.

As uniform random numbers are picked and then their roots are taken, the distribution of the roots won't be uniform of course, while the distribution of the squares is. That's how squaring works. Fortunately, the roots will still be big enough to satisfy the bit security requirement.

Donations to 1SumKArxoEJ1HoGibmj8ygw1DZWYBvjmM
johoe
Full Member

Activity: 217
Merit: 238

June 28, 2015, 03:04:34 PM
#28

I think that your square-roots have too few bits.

In your current scheme x1 has 126 bits.  Then x2^2 < 2*x1, hence x2 has only 64 bits. Since you publish x2*G, the baby-step giant-step algorithm would only have 32 bits complexity. It is even worse, since you know that the highest 32 bits of fuzzvalue are probably 0.  So x2 is almost trivially breakable.  Even x1 can be broken with ~50 bit complexity if one has a good guess what the value is.

So I think you should make the square roots at least about 250 bits, so that using baby-step giant-step is hard enough.  This means that you need more than 512 bit ECC.  Also don't use the scheme where x2 has only half the bits of x1.

If you have multiple outputs you could distribute the squares over the outputs.  You could use a single square root x_i for each output, such that the first 64 bits of x_i^2 equal the satoshi amount.  Choose the lower bits of x_i at random except for the last one.  Then publish the square roots and the difference Δ = sum inputs - sum x_i^2.

BTW, how much information would leak if Δ = fuzzvalue - floor(sqrt(fuzzvalue))^2 is known (if there is only one output and the simple scheme is used)?  To avoid leaking the magnitude of x, one could make sure that Δ is not the smallest remainder but equally distributed with (fuzzbits+64)/2 bits.
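For reference, a minimal baby-step giant-step sketch over a toy multiplicative group (parameters hypothetical), showing the ~2^(n/2) time and memory cost being counted here for an n-bit exponent; Python 3.8+ assumed:

from math import isqrt

def bsgs(g, h, p):
    # Find x with g^x == h (mod p) in O(sqrt(p)) time and memory.
    m = isqrt(p) + 1
    baby = {pow(g, j, p): j for j in range(m)}   # baby steps g^0 .. g^(m-1)
    stride = pow(g, -m, p)                       # giant step g^(-m)
    y = h
    for i in range(m):
        if y in baby:
            return i * m + baby[y]               # x = i*m + j
        y = (y * stride) % p
    return None

p, g = 65537, 3      # toy prime; 3 generates the full multiplicative group mod 65537
x = 40000            # the "secret" exponent
assert bsgs(g, pow(g, x, p), p) == x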

TPTB_need_war
Sr. Member

Activity: 420
Merit: 257

June 28, 2015, 03:05:39 PM
Last edit: June 29, 2015, 12:21:55 AM by TPTB_need_war
#29

Why isn't that also the case for m = r + c * x? The verifier knows c, thus the possibilities for x are entirely determined by the possible values of r.

Because c*x is not modulo M, but some M-r, so you can't get the multiplicative inverse without the secret r, for which you need ECDL.

I don't know what this M is that you refer to. The only mod I saw was 2^t, on c only.

The fuzz and the sum of squares are not independently random. And then there is the added constraint that all those ∆ > 2^(fuzzbits/2) are discarded. I thus speculate that the lower bound of entropy is potentially much less than you calculated.

As I have proved in my earlier post, there are many more possible solutions for each ∆ than there are possible sums of squares. You cannot imply anything from ∆, which is a random number. If I picked ∆+1, ∆+2 or ∆+3, I could still pick the same sum of squares with each, or any one of more than 2^(fuzzbits/2) other sums of squares.

As uniform random numbers are picked and then their roots are taken, the distribution of the roots won't be uniform of course, while the distribution of the squares is. That's how squaring works. Fortunately, the roots will still be big enough to satisfy the bit security requirement.

I can see that is true when finding the squares by any possible algorithm, but afaics the algorithm you have suggested restricts the sums of squares to specific values with the entropy of x2. Yet this entropy of x2 is further constrained because only some of those will satisfy the requirement that ∆ < 2^(fuzzbits/2).

Mixles (OP)
Member

Activity: 63
Merit: 11

June 29, 2015, 12:08:51 AM
Last edit: June 29, 2015, 09:19:18 AM by Mixles
#30

I think that your square-roots have too few bits.

In your current scheme x1 has 126 bits.  Then x2^2 < 2*x1, hence x2 has only 64 bits. Since you publish x2*G, the baby-step giant-step algorithm would only have 32 bits complexity. It is even worse, since you know that the highest 32 bits of fuzzvalue are probably 0.  So x2 is almost trivially breakable.  Even x1 can be broken with ~50 bit complexity if one has a good guess what the value is.

So I think you should make the square roots at least about 250 bits, so that using baby-step giant-step is hard enough.  This means that you need more than 512 bit ECC.

Updated paper to 768 bit curve for now, which brings square roots up to x1@220 and x2@190.

Baby-step giant-step algorithm is memory intensive. Eve would be spending 2^(190/2)=2^95 on storage and also a lot of CPU for 768-bit ECC, just to see a single root (not even the full single output).

Also don't use the scheme where x2 has only half the bits of x1.

x2 doesn't get that low generally, it retains about 87% as many bits as x1. Note, I am not rooting x1 to get x2, I am rooting |x1^2 - fuzzvalue|.

If you have multiple outputs you could distribute the squares over the outputs.  You could use a single square root x_i for each output, such that the first 64 bits of x_i^2 equal the satoshi amount.  Choose the lower bits of x_i at random except for the last one.  Then publish the square roots and the difference Δ = sum inputs - sum x_i^2.

In your approach you have basically removed sum of squares and settled on a single square per output and Δ per txn?

I was also thinking of a less dramatic optimisation with a dominant x1 that is reused for each output (based on log mean), then only require additional x2 and Δ per output.

BTW, how much information would leak if Δ = fuzzvalue - floor(sqrt(fuzzvalue))^2 is known (if there is only one output and the simple scheme is used)?  To avoid leaking the magnitude of x, one could make sure that Δ is not the smallest remainder but equally distributed with (fuzzbits+64)/2 bits.

The bit lengths of everything will be revealed. I don't see a way to have Δ both evenly distributed and complete the exact output sum with a square x_i. If you mean to only publish one Δ for the whole transaction, then each output would be constrained to only ever spend square amounts. I guess you mean using multiple outputs to send the amount to save revealing a Δ per output?

TPTB_need_war
Sr. Member

Activity: 420
Merit: 257

June 29, 2015, 12:54:18 AM
Last edit: June 29, 2015, 01:07:11 AM by TPTB_need_war
#31

Why isn't that also the case for m = r + c * x? The verifier knows c, thus the possibilities for x are entirely determined by the possible values of r.

Because c*x is not modulo M, but some M-r, so you can't get the multiplicative inverse without the secret r, for which you need ECDL.

I don't know what this M is that you refer to. The only mod I saw was 2^t, on c only.

I think I may understand what you were referring to. In your NIZKP it is necessary that it not be easy to guess an r that is negative such that x > b and c * b < m < T. (And why did you remove "3.6 Optional repeated application of smallness proof" from the new version of the paper?)

However, that requirement (the inability to remove the crucial c from the Fiat-Shamir transform) appears to be unnecessary in my idea. I think this can be shown by noting that a NIZKP is not even required to construct my suggestion. The relative weights of the outputs can instead be explicitly stated by expressing each output as a multiple of some base magnitude commitment (mag * G), which proves each output is not negative (unless the input is, which it can't be, due to transitivity).

It is still required to construct your NIZKP of smallness from the first iteration of the Sumcoin (CCT) paper, and better yet afaics this can then be done for the entire sum of the outputs instead of for each output separately.

What am I missing? Afaics my suggestion is much more efficient and secure.

I paradigm-shifted away from the intractable (security assurance and indeterminism of the) sum-of-squares approach(es) by thinking more holistically about the anonymity and the risk of enabling inflatacoin.

TPTB_need_war
Sr. Member

Activity: 420
Merit: 257

June 29, 2015, 01:50:29 AM
Last edit: June 29, 2015, 03:02:55 AM by TPTB_need_war
#32

I think I may understand what you were referring to. In your NIZKP it is necessary that it not be easy to guess an r that is negative such that x > b and c * b < m < T. (And why did you remove "3.6 Optional repeated application of smallness proof" from the new version of the paper?)

Didn't I just break your smallness proof? The attacker can always use a negative r and then adjust the value of x accordingly. But the x * G has to match the value of the inputs (assuming non-negativity of outputs is assured), thus the attacker has to search for a match.

I see why you dropped that section, because in the NI (non-interactive) context the hacker can search for R proofs which succeed, so the security is not improved. The only way to increase the security of (both blinding and smallness of) x is to increase the size of the curve.

Afaics, there needs to be some sum-of-squares NIZKP on r which should be much easier (deterministic) and more secure than attempting to constrain a sum-of-squares to fuzzedValue x. Probably should use the 3 squares approach of [38] which is entirely blinded.

Edit: ah, thus the sum-of-squares smallness proof fails for the same reason recursively forever. Thus Sumcoin is broken and can't be fixed, i.e. [38]'s "ZK-proof of membership in a much-widened interval" only applies in the interactive context.

Quote from: [38]
(If we add to the protocol, as we really should, that Vera refuses to believe Peter if he employs more than S loop iterations, then there is an acceptably small probability ≈ 2^(-St) that the whole protocol will fail even if Peter is honest.)

TPTB_need_war
Sr. Member

Activity: 420
Merit: 257

June 29, 2015, 03:48:24 AM
Last edit: June 29, 2015, 04:20:20 AM by TPTB_need_war
#33

I think I see the solution and it avoids all zero-knowledge proofs. It is my former (unpublished because invalid) ridiculous simplification combined with my new idea.

Take my idea that the outputs are expressed as multiples of some base magnitude commitment for the transaction (not global for all transactions).

Then employ two curves to express the equality of the sum of the commitments of inputs and outputs.

So then in a simplified way of visualizing:

5 mod 5 = 15 mod 5 + 0 mod 5

5 mod 7 = 15 mod 7 + 4 mod 7

Before my recent idea to expose the relative weights of the outputs, the attacker could hide the unmatched excess (0 and 4) in the second output and spend only the matched inflation (15), but now he can't because the relative weights (15/0 != 15/4) have to be consistent across both curves, even though the public base magnitude commitment isn't (the private base magnitude is consistent).

TADA :)

Unless I've missed something obvious (which is likely!), this radically simplifies, increases security, and increases efficiency.

Breaking this appears to be related to factoring the discrete logarithm?
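A toy recheck of the two-modulus visualization above (an editorial sketch; the real construction would use two curve orders, not small integers):

n1, n2 = 5, 7                    # stand-ins for the two curve orders
inputs, out1, out2 = 5, 15, 0    # attacker tries to pass off 15 + 0 against an input of 5
assert (out1 + out2) % n1 == inputs % n1    # balances mod 5 ...
print((inputs - (out1 + out2)) % n2)        # ... but leaves a visible excess of 4 mod 7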

TPTB_need_war
Sr. Member

Activity: 420
Merit: 257

June 29, 2015, 04:47:46 AM
Last edit: June 29, 2015, 05:00:03 AM by TPTB_need_war
#34

Edit: ah, thus the sum-of-squares smallness proof fails for the same reason recursively forever. Thus Sumcoin is broken and can't be fixed, i.e. [38]'s "ZK-proof of membership in a much-widened interval" only applies in the interactive context.

Ah, maybe we can fix this: c_i = Hash(U_i | V_i | c_{i-1}) mod 2^t where c_0 = 0, or probably it instead requires c_i = Hash(c_{i-1}) mod 2^t where c_0 = U_i | V_i ∀ i. Set the required number of iterations of i high enough to be sure the attacker couldn't have guessed a negative r_i match on every iteration, no matter how many times he tries. The proofs get huge and the computational load explodes. This only has a chance to be viable if the chance of success on any try for r_i is greater when r_i is positive than when it is negative.
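A rough sketch of that challenge chaining (hypothetical encoding; u and v stand for the serialized per-iteration commitments U_i and V_i), following the first variant with c_0 = 0:

import hashlib

T = 128                                        # challenge width, t from the paper
def chained_challenges(uv_pairs):
    # c_i = Hash(U_i | V_i | c_{i-1}) mod 2^T, so iterations can't be ground independently
    c, out = 0, []
    for u, v in uv_pairs:
        digest = hashlib.sha256(u + v + c.to_bytes(16, "big")).digest()
        c = int.from_bytes(digest, "big") % (1 << T)
        out.append(c)
    return out

print(chained_challenges([(b"U1", b"V1"), (b"U2", b"V2")]))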

So the CCT which also hides relative weights of the outputs perhaps can be rescued, but with huge proofs.

Now we need to compare with Blockstream's CT.

Mixles (OP)
Member

Activity: 63
Merit: 11

June 29, 2015, 08:16:18 AM
Last edit: June 29, 2015, 09:17:26 AM by Mixles
#35

Didn't I just break your smallness proof?

Nope. This is a long-standing peer-reviewed proof; r cannot leak into x because of the equations the verifier checks. Please check the literature for this.

Thus Sumcoin is broken and can't be fixed, i.e. [38]'s "ZK-proof of membership in a much-widened interval" only applies in the interactive context.

Quote from: [38]
(If we add to the protocol, as we really should, that Vera refuses to believe Peter if he employs more than S loop iterations, then there is an acceptably small probability ≈ 2^(-St) that the whole protocol will fail even if Peter is honest.)

Such interactive proofs can be converted to non-interactive. Then the hash limits the prover. If you can see some specific technicality that I've missed in the conversion, please let me know.

And why did you remove "3.6 Optional repeated application of smallness proof" from the new version of the paper?

This is about soundness, which with t=128 in CCT is already high enough, so repeated application is not required.

Please spend more time to verify your work, as it takes me some time to get through the messages.

my recent idea to expose relative weights of the outputs

We don't want to reveal which output got most of the coins. That's kind of a big deal around here. Long-term network analysis would reveal everything.

johoe
Full Member

Activity: 217
Merit: 238

June 29, 2015, 09:18:24 AM
#36

So I think you should make the square roots at least about 250 bits, so that using baby-step giant-step is hard enough.  This means that you need more than 512 bit ECC.

Updated paper to 768 bit curve for now, which brings square roots up to x1@220 and x2@190.

Fine, 440 fuzzbits + 64 bit brings x1 to 252 bit, which is what I had in mind.

Quote
x2 doesn't get that low generally, it retains about 87% as many bits as x1. Note, I am not rooting x1 to get x2, I am rooting |x1^2 - fuzzvalue|.

But you choose x1 to be the square root rounded down.  Since x1^2 and (x1+1)^2 are only 2x1+1 away, this means x2^2 < 2x1+1.  Hence x2 is very small.

Quote
If you have multiple outputs you could distribute the squares over the outputs.  You could use a single square root x_i for each output, such that the first 64 bits of x_i^2 equal the satoshi amount.  Choose the lower bits of x_i at random except for the last one.  Then publish the square roots and the difference Δ = sum inputs - sum x_i^2.

In your approach you have basically removed sum of squares and settled on a single square per output and Δ per txn?

I was also thinking of a less dramatic optimisation with a dominant x1 that is reused for each output (based on log mean), then only require additional x2 and Δ per output.

BTW, how much information would leak if Δ = fuzzvalue - floor(sqrt(fuzzvalue))^2 is known (if there is only one output and the simple scheme is used)?  To avoid leaking the magnitude of x, one could make sure that Δ is not the smallest remainder but equally distributed with (fuzzbits+64)/2 bits.

The bit lengths of everything will be revealed. I don't see a way to have Δ both evenly distributed and complete the exact output sum with a square x_i. If you mean to only publish one Δ for the whole transaction, then each output would be constrained to only ever spend square amounts. I guess you mean using multiple outputs to send the amount to save revealing a Δ per output.

As I interpret it now, the fuzzvalue is really the value (as a very high precision number, 504 bits).  Only the first 64 bits are visible in the user interface and the lowest bits are chosen at random.  Rounding errors (in the range of a satoshi) could be displayed as part of the fee in the user interface.

True, every output must be a square in my scheme, but the square numbers are not that far apart.  Δ has at worst 253 bits. In the 440+64 bit scheme the rounding error due to choosing a square is only 10^-55 satoshi.  The added 440-bit fuzz value (which is needed for securing privacy) brings in much larger rounding errors.

In this scheme, you can add Δ to the high-precision fee and you can randomize the lowest 256 bits of the fee before computing the outputs (the fee must be public anyway).  This way you don't reveal Δ at all.

johoe
Full Member

Activity: 217
Merit: 238

June 29, 2015, 09:56:26 AM
#37

Updated paper to 768 bit curve for now, which brings square roots up to x1@220 and x2@190.


I should add that it is not true that x1 has 220 random bits.  This is because x1 is the square-root of fuzzvalue rounded down:

Assume you know that fuzzvalue is exactly 10 million bitcoins, i.e. it is a 440+50 bit number with the first 50 bits known. Then x1 = sqrt(fuzzvalue) is a 245-bit number with the first 50(!) bits known.  So x1 has "only" 195 random bits.  This is the worst case, though.

Essentially, every known bit of the transaction value except leading zeros removes half a bit of the 220 random bits.  My scheme has the same problem.  I don't see it as a big problem since 195 bits should be secure against baby-step giant-step.
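A quick numeric check of that halving effect (editorial sketch, Python 3.8+): pin the top 50 bits of a 440+50 bit fuzzvalue and compare the extreme possible square roots.

from math import isqrt
import secrets

a = secrets.randbits(50) | (1 << 49)            # the known top 50 bits of fuzzvalue
lo = isqrt(a << 440)                            # smallest possible root
hi = isqrt(((a + 1) << 440) - 1)                # largest possible root
print(lo.bit_length(), (lo ^ hi).bit_length())  # 245-bit roots differ only in ~195 low bits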

johoe
Full Member

Activity: 217
Merit: 238

June 29, 2015, 10:01:19 AM
Last edit: June 29, 2015, 10:11:50 AM by johoe
#38


#!/bin/python
fuzzvalue = 747474747474747474747474747474747474747474747474747474747474747474747474 # math.randint(0, 2**440)
x1 = int(n**0.5)randint(0, 2**440-1)
x2 = int(abs(fuzzvalue - x1**2)**0.5)
print x1
print x2

> 864566219253764036048470441025601536L
> 10613488258738423614016585728L # <- much better than sqrt(x1)

I assume the randint in the x1 line is just a typo.

Check the precision.  x1 should be 864566219253763970902429276239497915, and x2 should be 841252670382898896.

I assume python does not use full big integer precision since a float (0.5) is used.

Update: math.sqrt doesn't work, either.  Does any python guru know a way to compute square roots of a 500-bit number?
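One answer, noted editorially: an exact integer square root via integer-only Newton iteration, so no float ever touches the value (recent Python versions ship this as math.isqrt). This reproduces johoe's corrected values:

def isqrt(n):
    # Largest x with x*x <= n, via integer Newton iteration.
    if n < 0:
        raise ValueError("isqrt of a negative number")
    if n == 0:
        return 0
    x = 1 << ((n.bit_length() + 1) // 2)   # initial guess >= sqrt(n)
    while True:
        y = (x + n // x) // 2
        if y >= x:
            return x
        x = y

fuzzvalue = 747474747474747474747474747474747474747474747474747474747474747474747474
x1 = isqrt(fuzzvalue)
x2 = isqrt(abs(fuzzvalue - x1 ** 2))
print(x1)   # 864566219253763970902429276239497915
print(x2)   # 841252670382898896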

Mixles (OP)
Member

Activity: 63
Merit: 11

June 29, 2015, 10:11:39 AM
#39

I assume the randint in the x1 line is just a typo.

Check the precision.  x1 should be 864566219253763970902429276239497915, and x2 should be 841252670382898896.

I assume python does not use full big integer precision since a float (0.5) is used.


Yeah, the python lied; I verified your number in Sage. x2 is small, which makes the single-square scheme more appealing.

Mixles (OP)
Member

Activity: 63
Merit: 11

June 29, 2015, 12:05:21 PM
Last edit: June 29, 2015, 12:17:50 PM by Mixles
#40

With big curves the satoshi error is small. Paper revised to use single-square method. Will think further about 440+64.

I've added an Acknowledgements section, let me know if you're ok to be credited.
