Bitcoin Forum
  Show Posts
Pages: « 1 2 [3] 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 »
41  Bitcoin / Development & Technical Discussion / Re: Why build on the longest chain? on: September 09, 2015, 04:42:05 PM
That said, we should try to deliver something that has the best security properties under the weakest possible assumptions.  

Fee delegation, with or without human intervention, has many problems. It has the huge drawback that nobody has come up with a proven strategy for miners that is optimal and stable (or a Schelling point).

The solution is to share the block reward using the DECOR+ protocol.

In a nutshell, DECOR+ stipulates that all competing blocks receive a share of the reward. For instance, if A finds a block with a 10 BTC fee, and B finds a competing block, then A gets 5 and B gets 5 (minus a small amount that is burnt and a small amount that is paid to the miner that includes the uncle header).

In practice, we can soft-fork so that the coinbase transaction pays to OP_TRUE, and specify an additional payload of a bitcoin address. Also the coinbase field can include a reference to an uncle header: UNCLE:<version,parent-height,time,merkle-root,nonce> (48 bytes). Or this can be embedded in an OP_RETURN <data>. 100 blocks later (when the coinbase matures), the miner must split the reward between all competing miners by spending the 100-block-old coinbase that was paid to OP_TRUE. If he does not, then the block becomes invalid.

The outcome when a high fee is paid is that the network slows down until the revenue from the high fee is split between many miners and the share value goes below the average fees (in the backlog).

The huge benefit is that during the time a huge fee is being split, the blockchain does not fork: there are no competing views of the block-chain, it's stable. There is no hidden strategy. The network is incentive compatible. All participants know that fee sharing is taking place. They can send their transactions as normal and they will be queued in the backlog. Transactions won't be confirmed and then rolled back. Just queued.

The drawback is that the block-chain does not move forward during the sharing time (*), as all miners are mining at the same block height. However, once all miners see the B competing blocks, and HighFeeReward/B < AverageBlockReward, they all start moving forward again as usual (this also requires that competing blocks are forwarded).

I hate to repeat myself over and over about DECOR+, but it's the solution to most of the problems/vulnerabilities I'm hearing about Bitcoin these days. I will present a paper about DECOR+ at the Scalability workshop in China. Because... it solves some scalability problems too :)

(*) This is not entirely true, as miners who already mined a competing block will stop competing with themselves earlier than the others.
The miner who has already mined K competing blocks will move forward if (K+1)*HighFeeReward/(B+1) - K*HighFeeReward/B < AverageBlockReward
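To make the resume rule concrete, here is a toy sketch of the marginal-gain decision described above (the function names and the k=0 simplification are mine, not part of any spec):

```python
def marginal_gain(high_fee_reward, b, k):
    """Extra reward a miner who already holds k of the b competing
    blocks (b >= 1) would earn by finding one more block at the
    contested height."""
    return (k + 1) * high_fee_reward / (b + 1) - k * high_fee_reward / b

def keeps_competing(high_fee_reward, b, avg_block_reward, k=0):
    """A miner keeps mining at the contested height while the marginal
    share still beats the average block reward; for k=0 this reduces
    to HighFeeReward/(B+1) >= AverageBlockReward."""
    return marginal_gain(high_fee_reward, b, k) >= avg_block_reward

# A 10 BTC fee with one competing block: still worth competing for an
# outside miner when the average reward is 1 BTC...
keeps_competing(10, 1, 1)    # True: a share of 10/2 = 5 beats 1
# ...but no longer once ten competing blocks exist:
keeps_competing(10, 10, 1)   # False: 10/11 < 1
```

Once `keeps_competing` turns False for everyone, the network moves forward again.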







42  Bitcoin / Development & Technical Discussion / Re: Exploiting Bitcoin Network Flaws on: September 05, 2015, 02:13:24 AM
Are there any bitcoin network flaws that can be exploited to gain money without losing other people's money in the process?

Yes, but probably you can't get much money out of them.

For instance, selfish mining is one (if you interpret that the remaining miners are not "losing money", just earning less money).

Certain design failures in the Bitcoin mining algorithm (double SHA-256 of the header) also allow non-obvious speedups that miners may be exploiting or may exploit in the future to obtain a small advantage. All commercial Bitcoin ASIC designs are confidential.

And some critical things don't leak so easily. As a comparison, Pieter Wuille held information about a critical bug in Bitcoin for a year until it was finally solved.

I think that the incentive to exploit weaknesses is low for many different reasons: low monetary gain, high initial cost to setup the exploit, high economic penalty if exploit discovered, personal reputation at stake, legal concerns, etc.

Bitcoin is fantastic for a lot of reasons, but it is not perfect.



 
43  Bitcoin / Development & Technical Discussion / Re: Block Size/Transaction Speed/Mainstream Adoption on: September 05, 2015, 01:46:36 AM
My opinion is that there is no well-founded technical reason not to reduce the average block interval instead of increasing the block size, if you accept the latter.

Every change to the Bitcoin protocol benefits some parties more than others. If you change Bitcoin's core constants, you will generate an imbalance.

But reducing the block interval actually benefits the whole community so much that any imbalance it creates becomes irrelevant.

I've been pushing for a block rate reduction to at least 5 minutes for a long time now.

With some technical changes, such as implementing the DECOR+ protocol, you could increase the block rate and even reduce the orphan rate, since you can prevent selfish mining.
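A rough back-of-the-envelope model (standard Poisson-arrival reasoning, not something from the post itself) shows why a naive interval reduction raises the orphan rate, which is exactly what DECOR+ compensates for:

```python
import math

def orphan_rate(propagation_delay_s, block_interval_s):
    """Probability that a competing block is found somewhere while a
    fresh block is still propagating, assuming Poisson block arrivals
    and a fixed effective propagation delay (a crude model)."""
    return 1.0 - math.exp(-propagation_delay_s / block_interval_s)

# With ~10 s effective propagation, halving the interval roughly
# doubles the orphan rate:
orphan_rate(10, 600)   # ~1.65% at 10-minute blocks
orphan_rate(10, 300)   # ~3.28% at 5-minute blocks
```

With DECOR+ the orphans are no longer lost revenue, so the extra natural forks matter much less.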

44  Alternate cryptocurrencies / Announcements (Altcoins) / Re: [AXIOM] AxiomMemHash and SHABAL-256 with Schnorr Signatures - POW / POS on: July 29, 2015, 05:44:59 PM
AXIOM developers pinged me to give an opinion on the GPU speedup of the AXIOM PoW function.

I wrote the draft of the paper about RandMemoHash back in 2013, after playing with RandMemoHash a bit. I tested it with a SINGLE GPU (an old one, even), and could not get any speedup. My paper does not show any proof, nor any real-world comparison over several GPU models, so it is definitely unfinished. The suggested parameters are not proven to be valid for all existing GPU models (much less for future models). People should never take for granted what a non-peer-reviewed, privately published draft paper says.

I would have liked the Axiom developers to ping me for an update on it before implementing it.

Having said this, I find a 5x GPU speedup to be quite good resistance, considering that a GPU is generally better than a CPU at almost every task. CPUs are good at executing large programs, doing paging, protecting memory, doing I/O. Nothing that a PoW can make use of.

A GPU's memory system is designed for throughput, and it generally has about 6 times the bandwidth available to a CPU. RandMemoHash's bottleneck is the DDR/GDDR memory bus, so that speedup can be expected.

Also, you must factor power usage into the equation: that may make GPU mining better or worse.

I don't know if tweaking the parameters would improve it. Maybe using 8 MB instead of 2 MB could prevent the use of the L2 cache on some boards. But the more space you require, the slower it is to verify the PoW. And this is a cat-and-mouse race, since future models will probably increase the cache size.
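For readers who want to experiment, here is a toy sequential memory-hard loop in the spirit of RandMemoHash. It is NOT the real AXIOM/RandMemoHash algorithm, just a sketch of the fill-then-walk structure whose memory size (the 2 MB vs 8 MB question above) you would tweak:

```python
import hashlib

def memohash(seed: bytes, n_cells: int = 1 << 14) -> bytes:
    """Toy sequential memory-hard hash: fill a table, then walk it
    with data-dependent reads and writes. Only a sketch of the
    structure, not the actual RandMemoHash/AXIOM algorithm."""
    h = lambda *parts: hashlib.sha256(b"".join(parts)).digest()
    # Phase 1: fill the memory sequentially (cheap).
    mem = [h(seed)]
    for i in range(1, n_cells):
        mem.append(h(mem[i - 1]))
    # Phase 2: data-dependent accesses keep the whole table "hot",
    # turning memory bandwidth into the bottleneck.
    x = mem[-1]
    for _ in range(n_cells):
        j = int.from_bytes(x[:4], "big") % n_cells
        x = h(x, mem[j])
        mem[j] = x  # overwrites make lazy recomputation expensive
    return x
```

Raising `n_cells` raises the memory footprint at the cost of a slower verification, which is exactly the trade-off discussed above.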

The Axiom developers & community could collect some real world numbers and try to find an improved parameter set (if it is really needed and if there is a better one).

I wish AXIOM users the best luck.

Sergio.
45  Bitcoin / Development & Technical Discussion / Re: Safe(r) Deterministic Tie Breaking on: July 16, 2015, 10:47:16 PM
Once the network can decide which block is the "right" one, the best strategy for fast short-term global consensus is to allow the block reward to be split between competing blocks. To reward both blocks you need to be able to reference the orphaned uncle header in a following block. To incentivize referencing uncles you need a monetary reward. To prevent back-mining you need a penalty (not all the money should be split; some must be burned).
A way to do it incentive-compatible is described here:

https://bitslog.wordpress.com/2014/05/02/decor/ (Decor section)

and here:

https://bitslog.wordpress.com/2014/05/07/decor-2/

The Decor+ strategy slightly INCREASES the chance of a 1-confirmation reversal during a short window of time (e.g. 10 seconds) after block arrival, but DECREASES considerably the chance of a >1-confirmation reversal (e.g. from 0.16% down to 0.06%). It also decreases the chance of a 1-confirmation reversal after the 10-second window has passed, from about 4% (assuming a 50%-50% split and a 4% orphan rate) to 0.06%.

So even 1-confirmation transactions after 10 seconds are much more secure with Decor+ than without it (taking into account only random reversals, not attacks).

And remember....

GHOST:   Fast Money Grows on Trees, Not Chains
DECOR+: Fast Money Grows on Cooperation, not Competition

What happens if you combine both?

Best regards!
 Sergio.
46  Bitcoin / Development & Technical Discussion / Re: Slowing down block propagation on: June 03, 2015, 11:47:24 PM
Regarding the original subject of this thread about block propagation. This is a prediction, although I have strong arguments supporting it.

If Bitcoin succeeds, it will go through 5 stages:

1. 2009-2014: O(N) propagation (the past stage). The cost of a transaction (in a block) for a miner is bounded by the time it takes to process each transaction and the time it takes to transmit it. When the signature cache was included (and other optimizations), block processing time stopped being a bottleneck.

2. 2015-2016: O(N) propagation (our current stage). The cost of a transaction for a miner is bounded by the time it takes to transmit each transaction (of a block).

3. 2017-2018: O(N) propagation with a multiplicative constant much, much lower than in the previous stage (the next stage, wrongly called O(1)). Each transaction costs less than 10 bytes of propagation time (the space needed to uniquely identify it), so transactions are really cheap for miners. We had better have a working market for fees during this stage.

4. 2019-2069: O(1) propagation (the real one). Miners mine on top of headers without even having the full transactions contained in them. Miners mine empty blocks until they either give up waiting or receive the missing transactions. The subsidy is still high, and Bitcoin is very valuable, so the risk of a good header with missing txs is low. Miners connect to each other over reliable backbones and transmit both blocks and transactions. The real cost of a transaction is close to ZERO, so the only defense from spam is a working market for fees and rationality. The cost of a transaction may be bounded by the storage cost in the blockchain, but miners may not even store the full block-chain, and only process the UTXO set.

5. O(?) propagation (50 years from now, maybe). Miners cannot mine on top of bare headers anymore because the subsidy is too low to give any advantage in doing so. Either all miners form a syndicate, or propagation becomes an issue again.
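To put stages 1-4 in numbers, here is a toy calculation of bytes on the wire per relayed block under each scheme (all sizes are illustrative, not measured; for comparison, the later BIP 152 compact blocks use 6-byte short IDs):

```python
HEADER_BYTES = 80

def relay_bytes(n_txs, avg_tx_bytes=400, short_id_bytes=8):
    """Rough bytes needed to relay one block under each propagation
    scheme. Sizes are illustrative assumptions."""
    full = HEADER_BYTES + n_txs * avg_tx_bytes         # stages 1-2
    short_ids = HEADER_BYTES + n_txs * short_id_bytes  # stage 3
    header_only = HEADER_BYTES                         # stage 4
    return full, short_ids, header_only

# A 2000-tx block: ~800 KB vs ~16 KB vs 80 bytes.
relay_bytes(2000)
```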

Enjoy!
47  Bitcoin / Development & Technical Discussion / Re: [Crypto] Borromean ringsig: Efficiently proving knowledge for monotone functions on: June 03, 2015, 11:12:11 PM
Some here may be interested in a new cryptosystem I've been working on which efficiently and privately proves the knowledge of secrets according to a policy defined by an AND/OR network:

https://github.com/Blockstream/borromean_paper/raw/master/borromean_draft_0.01_34241bb.pdf


Very interesting! I have several ideas on how to improve it, but I must think more.

One possibility to extend the logic operations one level deeper is to create signatures for additions of keys (P1+P2). Then, as long as the keys are linearly independent, there is no way to cheat (I think this would be the Representation Problem in the EC setting). User 2 may cheat by choosing his pubkey as (-P1+Q) so as to be able to prove the signature for both (while not having a private key for either of them). One way to prevent this cheating would be to require each public key to be accompanied by a non-interactive ZK proof of knowledge of the secret key. Of course, if two users collude to create two keys such that one is a multiple of the other, then there is a hidden key (the difference) that is neither one nor the other and that can be used to build the signature, but this seems not to be a practical concern.

So you can achieve circuits like  ( (P1 AND P2 AND P3) OR (P4 AND P5 AND P6) ) AND ( ....  ) with 3 levels of gates: AND-OR-AND
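The (-P1+Q) rogue-key cheat is easy to demonstrate in a toy group. Here multiplication mod p stands in for EC point addition, and all names and numbers are illustrative:

```python
# Multiplication mod p stands in for EC point addition below.
p = 2**127 - 1  # a Mersenne prime
g = 3

def pub(x):
    """'Public key' for secret exponent x."""
    return pow(g, x, p)

# Honest user 1 publishes P1:
x1 = 123456789
P1 = pub(x1)

# Cheating user 2 picks a target aggregate Q with KNOWN secret q and
# publishes P2 = Q * P1^(-1), i.e. "-P1 + Q" in additive notation:
q = 987654321
Q = pub(q)
P2 = (Q * pow(P1, -1, p)) % p

# The aggregate key P1*P2 ("P1+P2") equals Q, whose secret user 2
# knows -- even though user 2 knows no secret for P2 itself.
assert (P1 * P2) % p == Q
```

A mandatory proof of knowledge of each key's secret blocks this, since user 2 cannot prove knowledge of a discrete log for P2.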

PS: using an edge-to-vertex dual graph, where signatures are represented as nodes and edges are time implications, seems easier to reason about.

Regards



48  Bitcoin / Development & Technical Discussion / Re: WTF is this? Someone found a trick for fast mining? on: May 08, 2015, 07:14:51 PM
The only attack I was thinking of when I wrote the Bitcoin header post was all mining companies adopting tricks that give them some small advantage but degrade the performance of the network as a by-product. One such attack is mentioned in my post about using approximate adders, along with the danger that a monoculture of approximate ASICs could get stuck on a header that always generates a faulty addition.
Thinking out-of-the-box has both good and bad aspects:

+) on the positive side it allows novel and unusual solutions to enter the field, like your idea of intentionally breaking the topmost level in the carry-look-ahead logic of a 32-bit parallel adder, which you called "approximate addition"

-) on the negative side it disconnects one from the already known solutions in the field. Some EDA tools already can split a 32-bit adder in a critical path into a pipelined pair of 16-bit parallel adders. The general methodology is called "register balancing" or "delay balancing".

You've made far-reaching statements about the possibility or necessity of changing Bitcoin's hashing algorithm in the face of your discovery. Have you discussed your discovery in private with somebody knowledgeable in digital logic design? What did they say?


Not many people read my blog, so nothing I say is "far-reaching" :)

And I still think it would be better to change the Bitcoin header. But every bitcoiner wants to change Bitcoin in some way or another, so I'm not alone. I promise I will write why I still think so in less than a month. I don't have time now.

Regarding consulting about discoveries, I hadn't consulted with anybody regarding the approximate adders, and that was not a good idea. I received a call the next day from the CEO of a well-known Bitcoin ASIC company telling me that my idea combined with their own optimizations would make their chips a lot faster.

If I had consulted an expert about the advantages of approximate addition in some designs, I would have tried to sell the idea for some bucks instead of just publishing it :)
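For the curious, the flavor of the idea can be sketched in a few lines: drop the carry between the two 16-bit halves of a 32-bit add (a crude stand-in for breaking the top level of the carry-lookahead logic, not the actual carry-reduced design) and note that in mining a wrong sum merely wastes one candidate hash:

```python
import random

MASK16 = 0xFFFF

def approx_add32(a, b):
    """32-bit addition with the carry between the 16-bit halves
    dropped -- a crude model of a 'broken' top carry level."""
    lo = ((a & MASK16) + (b & MASK16)) & MASK16
    hi = ((a >> 16) + (b >> 16)) & MASK16
    return (hi << 16) | lo

# Roughly half of random additions lose a carry, yet a bad sum just
# yields a hash that is discarded like any other losing candidate.
random.seed(0)
pairs = [(random.getrandbits(32), random.getrandbits(32))
         for _ in range(10000)]
errors = sum(approx_add32(a, b) != ((a + b) & 0xFFFFFFFF)
             for a, b in pairs)
```

The shorter carry chain is what buys the speed; the monoculture danger above is that every such chip computes the SAME wrong sums.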

Best regards,
49  Bitcoin / Development & Technical Discussion / Re: WTF is this? Someone found a trick for fast mining? on: May 08, 2015, 04:42:42 AM
I think gmaxwell is right, there is not enough evidence in those 4 blocks to suggest that there has been a breakthrough in mining ASICs.

On the other hand, I guess valiron is also right: he knows (and probably can't disclose) that some mining companies may be testing new tricks to improve their ASIC hashing power. Maybe he hacked into a company computer, or was told by a friend who works at one of them and promised not to say anything.

Mining companies want to maximize their profits and they are not obligated to disclose their engineering achievements. Those are trade secrets, and nobody has ever complained about ASIC mining companies having closed designs. I don't think the Bitcoin community could even force ASIC companies to open their designs, because they are already using other companies' IP that they can't disclose, and because they can always go anonymous to avoid disclosing anything.

There is plenty of technical information put together in this long thread (given by gmaxwell explicitly, by DannyHamilton's analysis of which parts of the block header can be used as nonce, and by my very old posts about modifications to the Bitcoin header) to help you discover one such trick. Take some time to think about it, set aside all the posts with personal insults, and you'll probably find the solution right in front of your eyes.

I'm not that clever so there may be more tricks to discover.

However, a trick can only give you a certain speedup, say 20%, based on a reorganization of the SHA256D operations, or the pre-computation of some operations that change less often. Other changes (such as reducing the fabrication node) can give you much higher speedups. So this isn't alarming.

A completely different thing is to find a way to invert SHA256D, which I'm absolutely sure nobody will ever be able to do without some revolutionary quantum computer that does not exist even in theory.

The only attack I was thinking of when I wrote the Bitcoin header post was all mining companies adopting tricks that give them some small advantage but degrade the performance of the network as a by-product. One such attack is mentioned in my post about using approximate adders, along with the danger that a monoculture of approximate ASICs could get stuck on a header that always generates a faulty addition.
 
If such a problem ever arises, the community will probably find a way out by doing the right hard fork to prevent it.

IMHO the cryptographic security of SHA256D function of Bitcoin will never be seriously compromised.
However if there were a single mining company manufacturing ASICs being 200% faster than the competition, that would clearly hurt Bitcoin in a practical econo-socio-political way. The good news is that the accumulation of tricks probably will never reach such an improvement.

Best regards,
 Sergio.
 
50  Bitcoin / Development & Technical Discussion / Re: Theoretical minimum # of logic operations to perform double iterated SHA256? on: April 20, 2015, 04:03:59 AM
What is the theoretical minimum number of logical operations an ASIC needs to perform to compute double iterated SHA256, i.e., sha(sha(•))?

(cf. the Bitcoin StackExchange question)

Cryddit gave an estimation of the number of standard gate building blocks required for a Bitcoin ASIC (adders, logic gates).
However, adders require more space than OR gates, so generally the gate count will be dominated by the adders. Also, adders can be implemented in several ways, with different delay/space trade-offs, so even if there were a theoretical minimum number of gates, practically all implementations would use many more to reduce the delay.

More interesting, you can:

- Compute SHA^2 approximately, and get a practically better SHA^2 ASIC for mining.
See https://bitslog.wordpress.com/2015/02/17/faster-sha-256-asics-using-carry-reduced-adders.

- Compute SHA^2 asynchronously (e.g. using asynchronous adders)

Lastly, it has not been proven that a complete SHA^2 evaluation is required on average to check that a changing header has a SHA^2 hash below the target value. In fact, several widely known optimizations have disproved it.
51  Bitcoin / Development & Technical Discussion / Re: Low latency block distribution without validation on: March 19, 2015, 07:09:24 PM
I proposed something similar long ago and we implemented it in the NimbleCoinJ library (you can get it from github).
Basically we have a "newblock" command which sends only the header. And a "blockhashes" command which sends a list of hashes (partial hashes can also be sent). Nodes spread the header as fast as they can. Then they spread the hash list as fast as they can. Finally they validate block. Miners work on unverified blocks for a fixed amount of time, and then roll-back if they were unable to reconstruct the block. This works well if: the subsidy is high (so that creating empty blocks is no problem) or the fees are averaged (so even if you create an empty block you still get some fees) or if the block rate is so high that trying to withhold block transactions only reduce your chances of your block being accepted, and then there is no incentive to do so.

We implemented header-only propagation to achieve a 6-second block interval with an orphan rate similar to Bitcoin's. We tested it and it worked really well. Verifying 100x Bitcoin's transaction volume in real time was possible. It uses some additional magic, such as the DECOR+ protocol and a local route-optimization protocol. NimbleCoin will probably be a Bitcoin side-chain. We're waiting for Blockstream to take their first step.
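The mine-then-rollback policy can be sketched as a tiny state machine (the class and method names are mine, not the NimbleCoinJ API):

```python
import time

class HeaderFirstMiner:
    """Sketch of the policy above: mine an empty block on a new
    header, give up after a timeout if the body never reconstructs."""

    def __init__(self, timeout_s=2.0):
        self.timeout_s = timeout_s
        self.pending = {}  # header_hash -> arrival timestamp

    def on_new_header(self, header_hash):
        # Immediately start mining an empty block on the new tip.
        self.pending[header_hash] = time.monotonic()
        return ("mine-empty-on", header_hash)

    def on_block_reconstructed(self, header_hash, valid):
        # Body arrived: extend if valid, otherwise roll back.
        self.pending.pop(header_hash, None)
        return "extend" if valid else "rollback"

    def tick(self, now=None):
        """Roll back headers whose transactions never arrived."""
        now = time.monotonic() if now is None else now
        expired = [h for h, t in self.pending.items()
                   if now - t > self.timeout_s]
        for h in expired:
            del self.pending[h]
        return expired
```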

Regards


52  Bitcoin / Development & Technical Discussion / Re: SHA256 Compression? on: March 01, 2015, 10:03:48 PM
You could start reading Dadda's paper.
"The design of a high speed ASIC unit for the hash function SHA-256 (384, 512)"
 
53  Bitcoin / Development & Technical Discussion / Re: Individual Block Difficulty Based on Block Size on: February 18, 2015, 02:25:05 AM
I think Bitcoin with and without subsidy work very differently and have very different incentives and equilibria.

The idea of block difficulty based on block size is interesting (and old). It is one of several ideas for how to regulate, smooth or create a market we still know very little about, since we have our subsidy and we'll have it for a long time.
I suggest doing nothing before we actually run into the no-subsidy problem, if we ever do.

I recall some other similar ideas I had in the past:

- The Co-var fee restriction: https://bitcointalk.org/index.php?topic=147124.0

- Spreading the fee of transactions over future blocks (the first miner gets only a % of the fee, the following gets another %, in geometrical or linear steps)

- Creating an open market where miners FIRST announce their fee/kilobyte (e.g. in the coinbase field) and then are bound to that price (they cannot include transactions with a lower fee/kilobyte). This also requires miners to pre-announce a pseudo-identity.

- Pre-announcing transactions in blocks so everyone has the same view of the market (a global tx pool). Transactions would be included in a special part of the block for additional data and would not be "executed". The miner who first publishes a transaction would take 50% of the fees when the tx is executed. A following miner would specify the transactions to execute (by tx-ids, to save space) and claim the remaining 50% of the fees. Non-executed transactions would be dropped from the global tx pool after a fixed number of blocks.
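The fee-spreading idea can be sketched in a couple of lines (geometric variant; the split ratio is an arbitrary illustration):

```python
def spread_fee(fee, n_blocks, ratio=0.5):
    """Split a transaction fee over the next n_blocks miners in
    geometric steps; the last miner takes the remainder so the
    shares always sum to the full fee."""
    shares = [fee * ratio * (1 - ratio) ** i for i in range(n_blocks - 1)]
    shares.append(fee - sum(shares))
    return shares

# An 8 BTC fee over 4 blocks: [4.0, 2.0, 1.0, 1.0]
spread_fee(8.0, 4)
```

This removes the incentive to fork a block just to capture an unusually large fee, in the same spirit as DECOR+.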

Maybe someone wants to dig into them, or combine them with the dynamic-difficulty idea, and see whether they are worth a math paper or the trash.

Best regards, Sergio
54  Bitcoin / Development & Technical Discussion / Re: Why is difficulty a float number? on: February 18, 2015, 01:10:57 AM
Somebody posted it on bitcointalk some time ago. Search for the post. If you cannot find it, send me a direct message with your e-mail and I'll send it to you.
55  Bitcoin / Development & Technical Discussion / Re: Why is difficulty a float number? on: February 17, 2015, 03:02:43 AM
There is a curious fact about the "bits" field:
In the first private release of Bitcoin, the "bits" field actually counted the number of zero bits the hash would need to have as prefix, and that's why it got named "bits".

56  Bitcoin / Development & Technical Discussion / Re: Thoughts on type safety and crypto RNGs on: December 24, 2014, 02:31:14 AM
All coders make mistakes. In every language, in every library. Formal verification methods are generally too expensive. That's why peer review and audits exist: to detect those errors. And the more auditors, the better.
 
C++ code is generally more concise because of the higher versatility of its grammar (e.g. overloaded operators), but not as easy to understand for anyone but the programmer. C++ is very powerful, but can more easily hide information from the auditor. However, the programmer has greater control over timing side-channels and secret leakage.
 
Java code is generally more explicit and descriptive. It forces you to do things that make the auditor's work simpler, such as class-file separation.
Obviously you can program C++ as if it were Java, but that's not how C++ libraries are built, nor how C++ programmers have learned. Nobody changes a language's standard semantics.

Dynamically-typed languages are the worst, because you cannot fully understand the consequences of a function without looking at every existing call site to see the argument types (and sometimes you cannot infer those without going deeper into the call tree!)

One example I remember now is Python's strong pseudo-random generator seeding function. If you call the seeding function with a BigInt, it uses the BigInt as the seed, but if you call it with a hexadecimal or binary string (and I've seen this), it performs a 32-bit hash of the string and then seeds the generator with a 32-bit number. And this is allowed because a 32-bit hash is available by default for every object. You can write Python that does not make use of dynamic typing, but that requires checking the type of every argument received, which nobody does.
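A defensive wrapper that refuses non-integer seeds is trivial and would have caught the cases I've seen (this is my illustration, not a fix shipped in Python):

```python
import random

def safe_seed(rng, seed):
    """Accept only int seeds, so a hex/binary string is never
    silently collapsed through a small per-object hash."""
    if not isinstance(seed, int):
        raise TypeError("seed must be int, got %s" % type(seed).__name__)
    rng.seed(seed)

rng = random.Random()
safe_seed(rng, 0xDEADBEEFCAFEBABE)       # fine: full-width integer
try:
    safe_seed(rng, "deadbeefcafebabe")   # caught instead of degraded
except TypeError:
    pass
```

This is exactly the per-argument type checking that dynamic typing makes optional, and that nobody writes.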

I would prefer that low-level crypto code (key management, PRNG, signatures, encryption, authentication) be written in C/C++ (e.g. Sipa's secp256k1 library in Bitcoin) and every other layer be written in a more modern statically-typed language, such as Java. For most projects, that probably means that 90% of the code would be in Java and 10% would be in C/C++ (and that would probably be crypto library code).
The 90% Java code would be more secure not because Java code is more secure per se, but because it would be easier to audit. The 10% would be harder, but since it would be small you would be able to double the audit time for that part.
 
In the end, you get a more secure system for the same audit or peer-review time.
57  Bitcoin / Development & Technical Discussion / Re: A covert-channel-free black-box signer without ZNPs on: December 18, 2014, 01:15:26 AM

Do I get this right that the table only checks whether the signer's current behavior is consistent with the signer's past behavior? I don't get how the user could possibly know what “h” or “Q” is correct for a certain secret key (or its corresponding public key).

Yes, just that, only to verify consistency. And I think it's important to prevent the hardware wallet from using communication failures as a covert-channel.
The attack is the obvious one: You try executing the protocol but it fails (e.g. the hardware wallet never responds with the signature). The hardware fails on purpose because the resulting signature does not meet the side-channel requirement (e.g. it does not start with the bit of the private key it's trying to leak). When you retry the protocol, if the hardware wallet cannot send a different h, then the signature will be the same. So there is no point in aborting the protocol.
With all randomized protocols (e.g. the two-move protocol proposed), the hardware wallet may abort the protocol in order to get a different random from the user or from itself in order to create a signature with the required leakage properties.

Once the signature has been created, the TX table can be cleared. It should only store data for unfinished protocol runs.
58  Bitcoin / Development & Technical Discussion / Re: A covert-channel-free black-box signer without ZNPs on: December 16, 2014, 12:43:13 AM
This is a very nice protocol to tackle the problem of leakage, but it is not perfect either. It is practically the same as mine with ZK proofs or gmaxwell's based on Schnorr, because leakage can still happen via “u”.
The value u never leaves the user's computer, so it would be impossible for a backdoored hardware wallet to communicate u to the malicious party.

But it is better than mine because it is easily computable. It is better than gmaxwell's because it does not require a protocol change.
Could you provide a link to your method here?
59  Bitcoin / Development & Technical Discussion / Re: A covert-channel-free black-box signer without ZNPs on: December 15, 2014, 03:33:54 AM
To prevent any leakage due to simulated communication failures by the hardware wallet I propose making the whole protocol deterministic.
The user now stores a table TX[msg,pubk] of the h value received for the transaction with the message msg to sign and the public key pubk. This table is used to check that the signer is using a deterministic method to build h. Also the user has a private HMAC key s, that the signer does not know (it's not stored in the hardware wallet).

In bold are the modified steps.

1-2. These steps are similar to the standard protocol.
2.1. The user tells the signer which private key it should use, by sending the pubkey pubk.
3. The signer computes u = HMAC(privkey,msg). Where msg is the transaction hash to sign and privkey is the ECDSA private key.

3.1. The signer calculates Q=u * G.
3.2. The signer calculates h=HASH(Q). This is a commitment to Q.
3.3. The signer sends h to the user.
3.4. The user checks whether TX[msg,pubk] exists. If it exists, the user verifies that TX[msg,pubk] = h; if it does not match, the signer is cheating and he will never ever use this signer again. Then the user computes t = HMAC(s, msg | pubk).
3.5. The user sets TX[msg,pubk] = h and sends t to the signer.
3.6. The signer verifies that t is in [1, n-1]. The signer sends Q to the user.
3.7. The user verifies that HASH(Q)=h and that Q lies on the curve. If not then the signer is cheating.
3.8. The signer calculates k = t * u.
4-7. These steps are similar to the standard protocol.
8. The user calculates the curve point (x_2, y_2) = t * Q.
9. The user verifies that r = x_2 (mod n). If not equal, then the signer is cheating.

60  Bitcoin / Development & Technical Discussion / Re: A covert-channel-free black-box signer without ZNPs on: December 15, 2014, 03:17:14 AM

3.3 The user sends z, P = t*G to the signer
3.4 The signer selects k = u*P and performs ECDSA


It's not exactly the same. ECDSA security is based on the difficulty of the discrete log in the subgroup.
AFAIK, it does not require the additional assumption of the difficulty of DH on the curve.
But this does not seem to be a problem, since P is not published.

Another detail:

In 3.4 the signer should check that P lies on the curve.
Also in my protocol the user should check that Q lies on the curve.