Bitcoin Forum
April 15, 2021, 02:47:36 PM
Pages: « 1 2 3 4 [5] 6 »  All
Author Topic: [Crypto] Compact Confidential Transactions for Bitcoin  (Read 18294 times)
gmaxwell (Moderator, Legendary; Activity: 3388, Merit: 5084)
July 07, 2015, 10:01:23 PM  #81

I've been toying around with a 384-bit quadratic extension field curve that supports a four-dimensional endomorphism (GLV-GLS). The use of the quadratic extension avoids much of the bad scaling, but I'm not yet at a point where I can benchmark anything.
ABISprotocol (Sr. Member; Activity: 278, Merit: 250)
July 13, 2015, 08:16:34 AM  #82

Quote from: gmaxwell on July 07, 2015, 10:01:23 PM
I've been toying around with a 384-bit quadratic extension field curve that supports a four-dimensional endomorphism (GLV-GLS). The use of the quadratic extension avoids much of the bad scaling, but I'm not yet at a point where I can benchmark anything.

Very interesting stuff; I will keep my eyes on both threads (this one and the original Confidential Transactions thread).

ABISprotocol (Github/Gist)
http://abis.io
Mixles (Member; Activity: 63, Merit: 10)
October 15, 2015, 12:26:54 PM (last edit: October 18, 2015, 09:14:42 PM)  #83

I have had a little bit of time, so I can present up-to-date apples-to-oranges comparison results from my most recent CCT test:

Criteria              | Blockstream CT | Denis CCT | Improvement
value bits hidden     | 32             | 64        | 100%
blockchain space      | 2.55 kB        | 0.30 kB   | 850%
verifications per sec | 1300           | 800       | -38%

CCT was tested using OpenSSL (normalised by a factor of 1.82 from a Q9550 to an i7-4770R), whereas the published CT result used libsecp256k1, a much faster curve-specific library (roughly 20x faster than OpenSSL).

I believe that with a faster library and a convenient choice of curve, CCT can reach performance parity. Curve extensions and/or the point-halving multiplication algorithm could yield further performance gains.

Donations to 1SumKArxoEJ1HoGibmj8ygw1DZWYBvjmM
Mixles (Member; Activity: 63, Merit: 10)
October 18, 2015, 07:05:04 PM  #84

Thinking of replacing U and V in the hash with their sum W = U + V = r*G + r*E, then c = HASH(E||F||W).

Then the verification can be done in a single multiply-add step as m*G + (m-c)*E - c*F, instead of the current two steps.

Donations to 1SumKArxoEJ1HoGibmj8ygw1DZWYBvjmM
Come-from-Beyond (Legendary; Activity: 2142, Merit: 1009)
October 18, 2015, 08:12:10 PM  #85

Quote from: gmaxwell on July 07, 2015, 10:01:23 PM
I've been toying around with a 384-bit quadratic extension field curve that supports a four-dimensional endomorphism (GLV-GLS). The use of the quadratic extension avoids much of the bad scaling, but I'm not yet at a point where I can benchmark anything.

384 bits fit perfectly into 243 trits; is there an advantage to using the balanced trinary numeral system instead of binary?
Mixles (Member; Activity: 63, Merit: 10)
October 18, 2015, 09:01:18 PM (last edit: October 18, 2015, 09:11:33 PM)  #86

Quote from: Mixles on October 18, 2015, 07:05:04 PM
Thinking of replacing U and V in the hash with their sum W = U + V = r*G + r*E, then c = HASH(E||F||W).

Then the verification can be done in a single multiply-add step as m*G + (m-c)*E - c*F, instead of the current two steps.

Without this optimization, I'm up to 800 verifications per second now (implemented efficient inversions).

With this optimization, I'm up to 1400 verifications per second, which would make CCT faster than CT (even before we get into curve extensions).

Can anyone suggest an attack on the optimization, or maybe suggest a proof of its validity?

Donations to 1SumKArxoEJ1HoGibmj8ygw1DZWYBvjmM
Mixles (Member; Activity: 63, Merit: 10)
October 18, 2015, 09:33:30 PM  #87

Quote from: gmaxwell on July 07, 2015, 10:01:23 PM
I've been toying around with a 384-bit quadratic extension field curve that supports a four-dimensional endomorphism (GLV-GLS). The use of the quadratic extension avoids much of the bad scaling, but I'm not yet at a point where I can benchmark anything.

Quote from: Come-from-Beyond on October 18, 2015, 08:12:10 PM
384 bits perfectly fit into 243 trits, is there an advantage of using balanced trinary numeral system instead of binary?

This is sort of what is done for the scalars in Joint Sparse Form multiplication; see "Low-Weight Binary Representations for Pairs of Integers", http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.29.440

I don't know how it can be applied to curve representation. Do you have an idea?

Donations to 1SumKArxoEJ1HoGibmj8ygw1DZWYBvjmM
Come-from-Beyond (Legendary; Activity: 2142, Merit: 1009)
October 18, 2015, 10:22:01 PM  #88

Quote from: Mixles on October 18, 2015, 09:33:30 PM
This is sort of what is done for the scalars in Joint Sparse Form multiplication, see "Low-Weight Binary Representations for Pairs of Integers" http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.29.440

I don't know how it can be applied to curve representation. Do you have an idea?

Well, I'm asking because I'm not completely familiar with this stuff. I noticed that representing numbers in balanced trinary form does give some advantage (e.g. https://www.cdc.informatik.tu-darmstadt.de/~dahmen/papers/DOT05b.pdf); also, balanced trinary is the only numeral system where multiplication is as simple as in binary, and addition is even simpler (carry propagation happens less often). Even more, balanced trinary doesn't require a special "bit" to represent the sign of a number, i.e. it naturally stores the sign inside its digits. All this should speed up math beyond the inherent advantage of trinary being closer to e ≈ 2.718 than binary, which is why I asked the question.
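For readers unfamiliar with balanced trinary, the sign-in-the-digits property mentioned above is easy to demonstrate with a minimal sketch (just an illustration, not code from any paper discussed here):

```python
def to_balanced_ternary(n):
    """Encode an integer as balanced-ternary digits in {-1, 0, 1}, least significant first."""
    digits = []
    while n:
        r = n % 3
        if r == 2:            # digit 2 becomes -1 with a carry into the next trit
            digits.append(-1)
            n = n // 3 + 1
        else:
            digits.append(r)
            n //= 3
    return digits or [0]

def from_balanced_ternary(digits):
    return sum(d * 3**i for i, d in enumerate(digits))

# the sign lives in the digits themselves; no separate sign bit is needed
print(to_balanced_ternary(8))    # [-1, 0, 1]: -1 + 0*3 + 1*9 = 8
print(to_balanced_ternary(-8))   # [1, 0, -1]:  1 + 0*3 - 1*9 = -8
```

Note that negating a number is just negating every trit, which is part of the appeal for signed arithmetic.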
Mixles (Member; Activity: 63, Merit: 10)
October 18, 2015, 10:51:14 PM (last edit: October 18, 2015, 11:31:05 PM)  #89

Quote from: Mixles on October 18, 2015, 09:33:30 PM
This is sort of what is done for the scalars in Joint Sparse Form multiplication, see "Low-Weight Binary Representations for Pairs of Integers" http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.29.440

I don't know how it can be applied to curve representation. Do you have an idea?

Quote from: Come-from-Beyond on October 18, 2015, 10:22:01 PM
Well, I'm asking because I'm not completely familiar with this stuff. I noticed that representation of numbers in balanced trinary form does give some advantage (e.g. https://www.cdc.informatik.tu-darmstadt.de/~dahmen/papers/DOT05b.pdf), also balanced trinary is the only numeral system where multiplication is as simple as in binary, and addition is even simpler (carry propagation happens less often). Even more, balanced trinary doesn't require a special "bit" to represent the sign of a number, i.e. naturally stores the sign inside its digits. All this should speedup math beyond inherent advantage of trinary being closer to 2.718 than binary, this is why I asked the question.

Ah yes, I've seen this. It is also about scalar representation; it is an improvement on Joint Sparse Form with a speed-up of about 10%.

The reason I'm not doing stuff like this is basically the effort:improvement ratio. It would take months or years to get this 10% improvement (probably I would have to write a new crypto library, and still it would be neither mature nor well reviewed), whereas by improving the protocol (or my usage of OpenSSL) I could still get a 60% improvement in days. I don't implement GLV-GLS for the same reason: avoiding premature optimization. There could be a special curve that is fast enough to multiply by some other method.

Donations to 1SumKArxoEJ1HoGibmj8ygw1DZWYBvjmM
johoe (Full Member; Activity: 217, Merit: 160)
October 18, 2015, 11:39:38 PM  #90

Quote from: Mixles on October 18, 2015, 09:01:18 PM
Thinking of replacing U and V in the hash with their sum W = U + V = r*G + r*E, then c = HASH(E||F||W).

Then the verification can be done in a single multiply-add step as m*G + (m-c)*E - c*F, instead of the current two steps.

Without this optimization, I'm up to 800 verifications per second now (implemented efficient inversions).

With this optimization, I'm up to 1400 verifications per second, which would make CCT faster than CT (even before we get into curve extensions).

Can anyone suggest an attack on the optimization, or maybe suggest a proof of its validity?

The optimization doesn't work.

For E = a*G and F = (-a-2)*E, F corresponds to a negative number, but it can be "proved" with the new protocol:

  W = U + V = r*G + r*E,  c=HASH(E||F||W)
  m = r - c*a

satisfy m*G + (m-c)*E - c*F = W
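johoe's algebra can be sanity-checked numerically. Since the identity involved is pure scalar algebra, a toy additive group Z_p with G = 1 is enough to check it in place of a real elliptic curve (this stand-in is of course not secure, and the 128-byte hash encoding is an arbitrary choice for the sketch):

```python
import hashlib

p = 2**127 - 1   # toy group order, standing in for the curve; NOT secure
G = 1            # "generator" of the toy additive group Z_p

def mul(k, P):
    """Scalar multiplication in the toy group."""
    return (k * P) % p

def H(*elems):
    """Fiat-Shamir style challenge derived from the transcript."""
    data = b"".join(e.to_bytes(16, "big") for e in elems)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % p

# the forgery: E = a*G and F = (-a-2)*E, a commitment to a "negative" value
a, r = 123456789, 987654321
E = mul(a, G)
F = mul((-a - 2) % p, E)

W = (mul(r, G) + mul(r, E)) % p      # W = U + V = r*G + r*E
c = H(E, F, W)
m = (r - c * a) % p

# the collapsed single-equation check m*G + (m-c)*E - c*F = W accepts it
lhs = (mul(m, G) + mul((m - c) % p, E) - mul(c, F)) % p
print(lhs == W)  # True: the merged check is forgeable, as johoe showed
```

The left-hand side expands to (1+a)*(m + c*a)*G = (1+a)*r*G = r*G + r*E = W, which is exactly why collapsing the two equations into one loses soundness.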

Donations to 1CF62UFWXiKqFUmgQMUby9DpEW5LXjypU3
Mixles (Member; Activity: 63, Merit: 10)
October 19, 2015, 12:37:22 AM  #91

Quote from: johoe on October 18, 2015, 11:39:38 PM
The optimization doesn't work.

For E=a*G, F = (-a-2)*E,  F is a negative number but it can be "proved" with the new protocol:

  W = U + V = r*G + r*E,  c=HASH(E||F||W)
  m = r - c*a

satisfy m*G + (m-c)*E - c*F = W

That was quick, thanks! I can see it now.

Donations to 1SumKArxoEJ1HoGibmj8ygw1DZWYBvjmM
TPTB_need_war (Sr. Member; Activity: 420, Merit: 255)
November 12, 2015, 05:32:13 PM  #92

Denis, on quick glance this appears to be superior to the current Distribute algorithm that appears in Appendix A of your paper:

http://math.stackexchange.com/questions/1276206/method-of-generating-random-numbers-that-sum-to-100-is-this-truly-random

TPTB_need_war (Sr. Member; Activity: 420, Merit: 255)
November 13, 2015, 02:26:03 PM (last edit: November 14, 2015, 01:25:32 AM)  #93

I want to compute the probabilities of restarting the protocol in step 4 of the section 3.52 proof of small magnitude of Compact Confidential Transactions. I will be more verbose to attempt to make this comprehensible to those who didn't complete relevant courses in Calculus and Probability Theory. Please check my math below. Tangentially, am I correct that step 4 in the white paper is currently missing a required stipulation that the protocol must also be restarted if c = 0?

Joint Probability of Sum

In consideration of the relevant value r + c * x̄ (where x is renamed x̄ so it isn't confused with the x-axis below) from step 3 (of the proof of magnitude within a much larger interval T), then in general the probability that any sum (e.g. x + y) is greater (or separately less) than some chosen value k can be modeled by the area above (or respectively below) the line x + y = k within the rectangle containing the range of tuples (x, y), relative to the total area of said rectangle. For example, if the possible values of x and y are between 0 and 1, then the probability of x + y ≥ 1 (i.e. k = 1) is given by the area above the line y = 1 - x bounded by the unit square[1], divided by the area of the said square, as shown below:

[figure: the line y = 1 - x within the unit square, with the area above it shaded]

The area above the line y = 1 - x contained within the unit square can be computed by integrating the difference (distance) between the square's top line y = 1 and the lower line y = 1 - x over the interval where the line intersects and is within the said square, from x = 0 to x = 1:

area = ∫_0^1 [1 - (1 - x)] dx = x²/2 |_0^1 = 1²/2 - 0²/2 = ½

Since the total area of the unit square is 1 * 1 = 1, then P(X + Y ≥ 1) = ½ / 1 = ½.

Note this can also be computed as a double integral, directly computing the area between the unit square's top edge y = 1 (upper curve) and the line y = 1 - x (lower curve):

∫_0^1 ∫_{1-x}^1 dy dx = ∫_0^1 (y |_{1-x}^1) dx = ∫_0^1 [1 - (1 - x)] dx.


Note n00bs may skip this paragraph. The above is a 2D simplification that only applies where x and y are independent and uniformly distributed variables, which holds in our case. Were x and y independent but not uniformly distributed (even though this doesn't apply in our case, it is stated so as to be thorough), the joint probability distribution function (pdf) of x and y would have to be plotted along the z-axis to compute the 3D volume above the 3D plane x + y = k (which requires a double integral to compute, as we will see when we consider the joint probability of sums and products). The joint pdf is the convolution of the independent pdfs[1], as intuitively depicted below for the uniform distribution case. In the animation below, visualize that 1 + 1 = 2 (as the two distributions first touch) has a probability of ~0 and that ½ + ½ = 1 has the maximum probability (because the pdfs of x and y are maximally overlapped).

[animation: convolution of two uniform pdfs, producing a triangular pdf]

Joint Probability of Product

In general the probability that any product (e.g. x * y) is greater (or separately less) than some chosen value k can be modeled by the area above (or respectively below) the hyperbola x * y = k within the rectangle containing the range of tuples (x, y), relative to the total area of said rectangle. For example, if the possible values of x and y are between 0 and 2, then the probability of x * y ≥ 1 (i.e. k = 1) is given by the area above the hyperbola y = 1 / x bounded by the two-unit square[2], divided by the area of the said square, as shown below (note only the hyperbola is drawn below and the two-unit square is not shown, so picture it in your mind). Note that in this example a unit square (as opposed to a two-unit square) would contain none of the area above the hyperbola, because the mean of the product of two independent, uniformly distributed variables is the product of their means, i.e. a quarter of the maximum product: e.g. a maximum product of 16 arises from ranges with means 8 * ½ (for [0, 16] × [0, 1]), 4 * 1 (for [0, 8] × [0, 2]), or 2 * 2 (for [0, 4] × [0, 4]), each giving the same mean product:

[figure: the hyperbola y = 1 / x]

The area above the hyperbola y = 1 / x contained within the two-unit square can be computed by integrating the difference (distance) between the square's top line y = 2 and the lower hyperbola y = 1 / x over the interval where the hyperbola intersects and is within the said square, from x = ½ to x = 2:

area = ∫_{½}^2 (2 - 1/x) dx = (2x - ln|x|) |_{½}^2 = (4 - ln 2) - (1 - ln ½) = 3 - 2 ln 2 ≈ 1.6137

Since the total area of the two-unit square is 2 * 2 = 4, then P(X * Y ≥ 1) = 1.6137 / 4 = 0.40343.

Note this can also be computed as a double integral, directly computing the area between the two-unit square's top edge y = 2 (upper curve) and the hyperbola y = 1 / x (lower curve):

∫_{½}^2 ∫_{1/x}^2 dy dx = ∫_{½}^2 (y |_{1/x}^2) dx = ∫_{½}^2 (2 - 1/x) dx.
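Both closed-form answers are easy to sanity-check with a quick Monte Carlo simulation (an illustration only, not part of the derivation):

```python
import math
import random

random.seed(1)
N = 200_000

# P(x + y >= 1) for x, y uniform on [0, 1]: exact value 1/2
sum_est = sum(random.random() + random.random() >= 1 for _ in range(N)) / N

# P(x * y >= 1) for x, y uniform on [0, 2]: exact value (3 - 2 ln 2) / 4 ≈ 0.40343
prod_est = sum(2 * random.random() * 2 * random.random() >= 1 for _ in range(N)) / N

exact = (3 - 2 * math.log(2)) / 4
print(round(sum_est, 3), round(prod_est, 3))   # close to 0.5 and 0.403
```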

Joint Probability of Sum and Product

(Again assuming the simplification of independent and uniformly distributed variables x, y, and z.) In general the probability that any sum-and-product expression (e.g. z + x * y) is greater (or respectively less) than some chosen value k can be modeled by the 3D volume above (or respectively below) the 3D hyperbolic paraboloid z + x * y = k within the 3D cuboid (i.e. a box without the requirement of equal-area sides) containing the range of triples (x, y, z), relative to the total volume of said cuboid. A third axis z is required to model the joint probability of the two combined 2D joint (sum and product) probabilities.

Afaik, the closed form for the volume under a 3D surface constrained by another 3D surface is generally complex, because the definite integral needs the definite bounds of the intersection of the two surfaces on each axis, which can require, for example, rebasing the equation to a suitable coordinate system, such as the use of polar coordinates in Example 2 of [3].

The 3D hyperbolic paraboloid is a saddle shape[4] where the cross-sectional hyperbola x * y = k - z (with z constant for the cross-section) drifts lower moving away from the origin, as shown below. Note the equations for the hyperbola and hyperbolic paraboloid at Wikipedia and elsewhere contain x² and y² terms while mine above do not; this is because those equations have the center axis aligned with an axis of the coordinate system, as depicted below:

[figure: hyperbolic paraboloid (saddle surface)]

So, visualizing the above rotated the way the hyperbola was depicted in Joint Probability of Product (the z = 0 cross-section of the 3D hyperbolic paraboloid), and considering the relevant r + c * x̄ from the white paper with a cuboid defined by the 3D segment [(0,0,0), (b, 2^t, T)], I attempt to write the integral that computes the volume above the hyperbolic paraboloid contained within four faces of the cuboid:

volume = ∫k0(k-z)/2tb2t(k-z)/x dy dx dz = ∫k0(k-z)/2tb 2t - (k-z)/x dx dz = ∫k0 (2t * x - (k-z) * ln(|x|) |(k-z)/2tb) dz.

volume = ∫_0^k [(k-z)(1 - ln|(k-z)/2^t|) - 2^t·b + (k-z)·ln|b|] dz

volume = ¼·(-z·(b·2^t + 2 + 3z) - k² + 2(k-z)²·log(2^{-t}·(k-z)) + 6kz) + log(b)·(kz - z²/2) + constant |_0^k (as integrated by Wolfram Alpha)

Even if the above is correct so far (which I am not so confident of[5]), it assumes k < T and it doesn't subtract the volume above the hyperbolic paraboloid that is under the bottom two faces of the cuboid. So the correct probability would be smaller.

For the moment, I have expended my available time to further this.

Can anyone help finish this and compute some sample probabilities for the originally stated goal at the top of this post?
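Pending a closed form, the target probability can at least be estimated numerically. Below is a Monte Carlo sketch for P(z + x*y ≥ k) over a cuboid, with made-up toy parameters standing in for the paper's b, 2^t, and T (an illustration of the approach only, not the paper's values):

```python
import random

random.seed(2)

# toy stand-ins for the cuboid bounds (b, 2^t, T) and the threshold k
b, two_t, T, k = 4.0, 8.0, 16.0, 10.0

N = 200_000
hits = sum(
    random.uniform(0, T) + random.uniform(0, b) * random.uniform(0, two_t) >= k
    for _ in range(N)
)
print(hits / N)  # Monte Carlo estimate of P(z + x*y >= k) for these parameters
```

Checking a candidate closed form against estimates like this for several parameter choices would catch sign and bound errors in the integral above.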

[1] http://math.stackexchange.com/questions/285362/choosing-two-random-numbers-in-0-1-what-is-the-probability-that-sum-of-them

[2] http://math.stackexchange.com/questions/1292294/find-the-probability-of-the-product-of-two-random-variables

[3] http://mathinsight.org/double_integral_volume

[4] http://www.math.ubc.ca/~cwsei/math200/graphics/hyperbolic_paraboloid.html

[5] http://math.stackexchange.com/questions/581440/finding-the-volume-of-a-region-below-a-plane-and-above-a-paraboloid-triple-inte

Mixles (Member; Activity: 63, Merit: 10)
November 13, 2015, 07:59:54 PM  #94

Quote from: TPTB_need_war on November 12, 2015, 05:32:13 PM
Denis, on quick glance this appears to be superior to the current Distribute algorithm that appears in Appendix A of your paper:

http://math.stackexchange.com/questions/1276206/method-of-generating-random-numbers-that-sum-to-100-is-this-truly-random

1. As per the responses within the link: at first glance, this does not achieve a uniform distribution, and therefore is not a good choice to replace Distribute.

2. Following the sum-of-squares approach, the Distribute algorithm has become a minor (and optional) part, as it is now only used by the miners. I kept it in as it looks pretty, could easily be useful for related applications (and took some effort to test and typeset neatly...), so it would be a shame to leave it out.

3. The Distribute algorithm, if used, already runs in a trivial time compared to the elliptic curve operations.

Donations to 1SumKArxoEJ1HoGibmj8ygw1DZWYBvjmM
Mixles (Member; Activity: 63, Merit: 10)
November 13, 2015, 08:20:49 PM (last edit: November 13, 2015, 08:38:18 PM)  #95

Quote from: TPTB_need_war on November 13, 2015, 02:26:03 PM
I want to compute the probabilities of restarting the protocol in step 4 of the section 3.52 proof of small magnitude of Compact Confidential Transactions.

Sounds like a useful thing to calculate.

Quote from: TPTB_need_war on November 13, 2015, 02:26:03 PM
Tangentially, am I correct that step 4 in the white paper is currently missing a required stipulation that the protocol must also be restarted if c = 0?

The probability of this happening is 1 in 2^128, or about as likely as breaking 128-bit security. The impact is that the single output would be revealed, as then m=r. You're right in that the protocol, technically, should restart in this unlikely case.

Donations to 1SumKArxoEJ1HoGibmj8ygw1DZWYBvjmM
TPTB_need_war (Sr. Member; Activity: 420, Merit: 255)
November 14, 2015, 01:15:32 AM (last edit: November 14, 2015, 04:48:44 AM)  #96

Quote from: TPTB_need_war on November 13, 2015, 02:26:03 PM
I want to compute the probabilities of restarting the protocol in step 4 of the section 3.52 proof of small magnitude of Compact Confidential Transactions.

Quote from: Mixles on November 13, 2015, 08:20:49 PM
Sounds like a useful thing to calculate.

I believe I have more need to calculate this in my version of Compact Confidential Transactions, because I eliminate the use of squaring to prove x ∈ [0, T]. Thus the probability of restarting the protocol is greater in my version of your design. Afaics, in my design there is a trade-off between that probability and the performance of verification. This also ties into proving that values are not dust, which afaics will become critical for anti-DDoS (against botnets) when implementing micro-transactions (noting that Lightning Networks can't do end-to-end principled anonymity), which I will explain in a new thread I will create today in this Development & Technical Discussion forum (as the issue pertains to Bitcoin). This will become more clear once I have published my design. I am rushing to get to the point where I can publish.

Note I may have errors in my thinking because no one has checked my work; I haven't implemented my design yet; I am not a mathematician (I took mathematics courses in college, but 30 years ago, and hadn't employed higher maths much until recently, being an autodidact on crypto); my mind is scattered across many aspects of the holistic design for a crypto-currency; and I am still somewhat "foggy brain" from my ongoing battle with (an undiagnosed specificity of) autoimmune disease.


Quote from: TPTB_need_war on November 12, 2015, 05:32:13 PM
Denis, on quick glance this appears to be superior to the current Distribute algorithm that appears in Appendix A of your paper:

http://math.stackexchange.com/questions/1276206/method-of-generating-random-numbers-that-sum-to-100-is-this-truly-random

Quote from: Mixles on November 13, 2015, 07:59:54 PM
1. As per responses within the link, at first glance, this does not achieve uniform distribution, and therefore is not a good choice to replace Distribute.

Afaics the comments under Thomas Andrew's answer argue that it does achieve a more uniform distribution than just subtracting iteratively starting from 100 in a deterministic number of steps, because it doesn't favor the first value having a 50% chance of being greater than 50. Your Distribute sorts random values and adds differences, so perhaps yours is equivalently or more uniformly distributed; but since it is iterative in a non-deterministic number of steps, I think it may be slower than what Thomas Andrew proposed?

Perhaps the only way to get a perfectly uniform distribution is to select uniformly distributed random values until they sum as required, but this would be computationally prohibitive. Further analysis would examine how the non-uniformity of the algorithm employed impacts the security and/or the probability of restarting the protocol in step 4 (and any other impacts).
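For concreteness, the sort-and-difference construction discussed above (random cut points, sorted, then differenced) can be sketched as follows; this illustrates that general approach, not the paper's exact Distribute algorithm:

```python
import random

def distribute(total, parts):
    """Split `total` into `parts` non-negative integers summing to `total`,
    by sorting random cut points and taking successive differences."""
    cuts = sorted(random.randrange(total + 1) for _ in range(parts - 1))
    return [hi - lo for lo, hi in zip([0] + cuts, cuts + [total])]

shares = distribute(100, 5)
print(shares, sum(shares))   # five shares that always sum to 100
```

Analyzing the distribution of such shares (e.g. the marginal distribution of the first share) is exactly the kind of uniformity question raised in the linked math.stackexchange thread.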

Quote from: Mixles on November 13, 2015, 07:59:54 PM
2. Following the sum-of-squares approach, Distribute algorithm has become a minor (and optional) part, as it is now only used by the miners. I kept it in as it looks pretty, could easily be useful for related applications (and took some effort to test and typeset neatly...), so would be a shame to leave out.

Please consider not removing it, because my version of your design eliminates the use of sum-of-squares (so the huge 801-bit ECC curve isn't needed, among other impacts), and I cite your paper rather than repeating everything from it.

Quote from: Mixles on November 13, 2015, 07:59:54 PM
3. The Distribute algorithm, if used, already runs in a trivial time compared to the elliptic curve operations.

I hadn't thought of that. Nevertheless, I think it is useful to have two algorithms to compare, so with further analysis perhaps someone can determine which has better uniformity of distribution, if either does.

Quote from: TPTB_need_war on November 13, 2015, 02:26:03 PM
Tangentially, am I correct that step 4 in the white paper is currently missing a required stipulation that the protocol must also be restarted if c = 0?

Quote from: Mixles on November 13, 2015, 08:20:49 PM
The probability of this happening is 1 in 2^128, or about as likely as breaking 128-bit security [for the case the chosen t = 128]. The impact is that the single output would be revealed, as then m=r. You're right in that the protocol, technically, should restart in this unlikely case.

Agreed, and I have inserted into your comment that t is a configurable parameter.

Mixles (Member; Activity: 63, Merit: 10)
November 14, 2015, 11:49:06 AM (last edit: November 14, 2015, 12:01:08 PM)  #97

Quote from: TPTB_need_war on November 14, 2015, 01:15:32 AM
Please consider to not remove because my version of your design eliminates the use of sum-of-squares (so the huge 801-bit ECC curve isn't needed and other impacts), and I cite your paper rather than repeat everything from your paper.

Someone at UCL suggested a couple of ways to achieve smaller curves with sum-of-squares (maybe this would improve your idea too). I am very tired after a busy week (devcon1); I will be trying out the approaches next week.

Quote from: TPTB_need_war on November 14, 2015, 01:15:32 AM
This will become more clear once I have published my design. I am rushing to get to the point where I can publish.

Note I may have errors in my thinking because no one has checked my work, I haven't implemented my design yet, I am not a mathematician

I have found that peer review has been very useful for moving the work forward. There are many things that I had thought of previously but discarded too quickly (or assumed too hastily), and it is helpful when people point them out. Happy to take a critical look at your method, in public or in private.

Donations to 1SumKArxoEJ1HoGibmj8ygw1DZWYBvjmM
Mixles (Member; Activity: 63, Merit: 10)
November 19, 2015, 02:11:11 PM (last edit: November 19, 2015, 02:41:26 PM)  #98

Someone at UCL has pointed out the assumption of unknown factorisation of the modulo group that underlies the CFT proof. This means the CFT proof is not (automatically, under the ECDL assumption) secure in EC, as the curve order is possible to calculate and factor.

CCT is dead. Long live CCT.

I have re-based the proof of smallness on Warren's work (Cryptography meets voting), which seems not to rely on that assumption. Unfortunately, the new construction takes twice as much space and runtime (for now):

Criteria              | Blockstream CT | Denis CCT | Improvement
value bits hidden     | 32             | 64        | 100%
blockchain space      | 2.55 kB        | 0.71 kB   | 359%
verifications per sec | 1300           | 400       | -69%

Paper has been updated.

Donations to 1SumKArxoEJ1HoGibmj8ygw1DZWYBvjmM
TPTB_need_war (Sr. Member; Activity: 420, Merit: 255)
November 20, 2015, 06:59:11 PM (last edit: November 20, 2015, 08:17:08 PM)  #99

Quote from: Mixles on November 19, 2015, 02:11:11 PM
Someone at UCL has pointed out the assumption of unknown factorisation for the modulo group, that underlies the CFT proof. This means that CFT proof is not (automatically under ECDL assumption) secure in EC, as the curve order is possible to calculate and factor.

Does this apply because of the two base points G and E introduced when you switched to the proof-of-square?

In other words, does the original proof-of-smallness (which did not employ H, the decomposed hash of G) have this assumption if E is not employed (i.e. before you added proof-of-square to your white paper)? I think not (because the assumption of factorization is between G and E, correct?). I ask because my version of your paper does not use a proof-of-square.

Quote from: Mixles on November 19, 2015, 02:11:11 PM
Unfortunately, the new construction takes twice as much space and runtime (for now):

By space, you mean the size of the proofs (because of the extra parameters introduced with H), but not an increase in the bit width of the ECC curves employed?

Also, afair the proof-of-square had increased the space considerably too. So if my design is correct (and perhaps it isn't, since it seems quite implausible that so many mathematicians would have failed to see the simple method I used, so they must have dismissed my line of thought as insecure!), it should get back to the efficiency of your original design, except not entirely on performance, because in my design there is still extra computational overhead to substitute for the proof-of-square.

Mixles (Member; Activity: 63, Merit: 10)
November 20, 2015, 11:14:09 PM  #100

Quote from: TPTB_need_war on November 20, 2015, 06:59:11 PM
Does this apply because of the two base points G and E introduced when you switched to the proof-of-square?

In other words, does the original proof-of-smallness (that did not employ H, the decomposed hash of G) have this assumption if E is not employed (i.e. before you added proof-of-square to your white paper)? I think not (because the assumption of factorization is between G and E correct?). I ask because my version of your paper does not use a proof-of-square.

No. It's because the original CFT widened-interval proof depends on an unfactorable number (or, as Boudot and Cao put it, "n is a large composite number whose factorisation is unknown [to the prover]").

Quote from: TPTB_need_war on November 20, 2015, 06:59:11 PM
By space, you mean the size of the proofs (because of the extra parameters introduced with H) but not an increase in the bit width of the ECC curves employed.

Yes, the transaction payload.

Quote from: TPTB_need_war on November 20, 2015, 06:59:11 PM
Also afair the proof-of-square had increased the space considerably too. So if my design is correct (and perhaps it isn't since it seems quite implausible so many mathematicians would have failed to see the simple method I used so they must have dismissed my line of thought as insecure!), it should get back to efficiency of your original design, except not entirely on performance, because in my design there is still extra computational overhead to substitute for the proof-of-square.

My intuition says there's high probability of a mistake there.

Donations to 1SumKArxoEJ1HoGibmj8ygw1DZWYBvjmM