Bitcoin Forum
May 28, 2015, 09:56:57 AM *
News: Latest stable version of Bitcoin Core: 0.10.2 [Torrent]
  Show Posts
1  Bitcoin / Development & Technical Discussion / Re: QoS for bitcoind on: May 27, 2015, 07:45:46 PM
If I'm understanding the OP correctly, you are trying to develop a way to prevent malicious nodes from spamming the network and essentially DoSing a home node, shutting it down because of the bandwidth cap. If this is the case, then something has already been implemented to prevent this kind of spam: the banscore. Each node is given a banscore by its peers, and if the score goes above a certain threshold, the peer will disconnect and refuse connections from that node. I'm not sure how the banscore is calculated, but it can stop spamming from one node.
This is not what banscore is for. Banscore is for gratuitous misbehavior.  It is not very effective against denial of service attacks, since the attacker can simply use a botnet or proxy network-- which can easily give them connections from tens or hundreds of thousands of addresses.
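As a rough illustration of the mechanism being discussed, here is a minimal sketch of a per-peer misbehavior score with a disconnect threshold, in the general style of bitcoind's ban logic (the class, method names, and point values here are illustrative, not the actual implementation; 100 was the historical `-banscore` default):

```python
# Hypothetical sketch of a ban-score mechanism: each peer accumulates a
# misbehavior score, and crossing the threshold triggers disconnection.
# Names and point values are illustrative, not Bitcoin Core's code.

BAN_THRESHOLD = 100  # bitcoind's historical -banscore default

class Peer:
    def __init__(self, addr):
        self.addr = addr
        self.ban_score = 0
        self.banned = False

    def misbehaving(self, howmuch):
        """Accumulate misbehavior; mark for disconnect/ban once over threshold."""
        self.ban_score += howmuch
        if self.ban_score >= BAN_THRESHOLD:
            self.banned = True
        return self.banned

peer = Peer("203.0.113.5")
peer.misbehaving(50)   # e.g. one gratuitously invalid message
assert not peer.banned
peer.misbehaving(50)   # a repeat offense crosses the threshold
assert peer.banned
```

The limitation described above falls directly out of the sketch: the score is keyed per address, so an attacker with a botnet or proxy network starts each connection with a fresh zero score.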
2  Bitcoin / Development & Technical Discussion / Re: help with finding issues in a modifcation of Bitcoin core on: May 27, 2015, 04:25:24 AM
So you think it's better to just ignore the issue and hope people don't modify the source?
I think I've arguably done more to advance privacy in Bitcoin than the next two people combined, and I've also done a lot of research towards reducing resource exhaustion attacks.   No one is advocating "just ignoring"; but the fact that we're not yet able to completely mitigate the risk of harm due to chimpanzees with firearms does not mean that it would be wise to start handing out uzis at the zoo or, especially, that we're somehow obligated to arm those primates who have failed to find any firearms on their own.
3  Bitcoin / Development & Technical Discussion / Re: Rounding error in calculation of chainwork on: May 27, 2015, 04:11:25 AM
Fair enough!   I will likely use this as an example implementation detail in the future, since its softer-than-softfork nature makes for an interesting discussion point around convergence behavior.

(Though, since we're on a pedantic tangent, involving various angles and angels on the heads of pins; I could question the classification of rounding as unbiased in an environment where the values might be adversarially chosen! Truncation is, I believe, the unique procedure which guarantees that an opponent could not choose values in a manner which would systematically overstate their work! Tongue )
4  Bitcoin / Development & Technical Discussion / Re: Rounding error in calculation of chainwork on: May 26, 2015, 11:20:34 PM
It's just mathematically inaccurate.
It's not clear to me that it's "mathematically incorrect".  If it isn't an infinite-precision rational number (which it isn't, since it's an integer) there has to be a precision loss somewhere. E.g. even if it rounded instead of truncating at each block, there would still eventually be a difference from the infinite-precision rational version. Chainwork is the sum of the integer work estimates (with truncation) of the blocks; this seems like a reasonable thing. It's not the only reasonable thing, but anything will have some approximation in it somewhere, unless what it returns is a rational number.
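The point can be made concrete with a short sketch: per-block work is roughly 2^256 / (target + 1) truncated to an integer, and summing those truncated values can only ever understate the exact rational sum, never overstate it. (The target value below is the standard difficulty-1 target; the comparison itself is the illustration, not Bitcoin Core's code.)

```python
# Illustrative sketch of the chainwork calculation: per-block work is
# ~2^256 / (target + 1), truncated to an integer, summed over all blocks.
# Comparing against exact rational arithmetic shows the precision loss.
from fractions import Fraction

def block_work_int(target):
    return (1 << 256) // (target + 1)       # truncating integer division

def block_work_exact(target):
    return Fraction(1 << 256, target + 1)   # infinite-precision rational

targets = [0xffff * (1 << 208)] * 3         # three difficulty-1 blocks
chainwork = sum(block_work_int(t) for t in targets)
exact = sum(block_work_exact(t) for t in targets)

# Truncation can only understate the exact value, never overstate it,
# and the error stays below one hash-count unit per block.
assert chainwork <= exact
assert exact - chainwork < len(targets)
```

This is also the "adversarially chosen values" point from the later post: floor division guarantees no one can pick targets that systematically overstate their work.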

5  Other / Meta / Re: Was TF finally given a scammer tag? on: May 26, 2015, 06:55:09 AM
Looking at TF on his face, he appears to be very honorable and to be someone that is very ethical.
Uhh.  I'm wondering what bizarre reality distortion field you've been hanging around. The first time I saw him there was some kerfuffle because he was selling stolen login credentials. Then there was the exploitation of the rotten-ripple flaw in the design of the ripple system, and a number of other small scams (e.g. safecoin); it was clear to many that he was bad news before the thing went explosively tits up.
6  Other / Meta / Re: Are these attacks preventable? on: May 26, 2015, 06:28:43 AM
"Please do not perform any actions on my account without a signed PGP message".
These instructions are routinely ignored; I've seen it myself many times.  They either don't see it at all and just follow the ordinary procedure, or are easily convinced by things like "but that key is _on_ that server and my other backup was erased, that's why I need in!" Of course, actual users do things like this-- which is part of why the social engineering works so well.  Even if it doesn't fail that way, they'll likely use the age-old xkcd method for verifying the signature.

Of course, maybe if someone can't even pull off sounding like a competent adult on the phone; then perhaps they'll have a harder time convincing a facilities operator to do the wrong thing.  I understand in this case the attacker(s) came off as barely literate. (but again, since plenty of legitimate customers are barely literate...)

Pretty much all attacks are preventable. The real question is was the recent attack reasonably able to be anticipated?
And at what cost. Without anyone knowing the details we can probably guess that having the equipment in its own isolated, secure facility, behind armed guards and with no remote administrative access, would likely have prevented this issue (and many others-- since the site would probably be down most of the time ... Smiley ) but that kind of cost is hardly justified for the forum.
7  Bitcoin / Development & Technical Discussion / Re: Rounding error in calculation of chainwork on: May 26, 2015, 06:11:16 AM
150,112 hash is nothing, of course. But I think this would affect the way we determine the longest chain and a fix would be a hardfork?
It has always been computed with precise integer arithmetic without rescaling. That it doesn't compute the same as an infinite-precision number you came up with shouldn't be surprising or important. (The calculation is the sum of the work, expressed as an integer hash count, of each block in the history.)

It would be a different kind of short term consensus inconsistency ("softest fork", perhaps) if there were different implementations and  a competing blockchain ever exposed it (which would be an odd corner case to get into).

Unless in the worst case you expect work to be computed and expressed as a bit count proportional to the total number of block hashes, there will be a limit to the precision. The precision is normative, but only in a very soft sense, since either side would happily reorganize onto the other.

[I am reasonably confident the precision has never changed in any version of the software; but I haven't bothered researching it, since it's not even clear if you're alleging that something is actually wrong, rather than that the result doesn't match an infinite-precision rational arithmetic value for it-- if you really think it's changed in some possibly inconsistent way, let me know and I'll go look.]
8  Bitcoin / Development & Technical Discussion / MOVED: If I download the wallet of some altcoin, how to compile and build in windows or on: May 25, 2015, 11:37:24 PM
This topic has been moved to Altcoin Discussion.
9  Bitcoin / Development & Technical Discussion / Re: Softfork cutoff on: May 25, 2015, 10:01:53 PM
The idea sounds like the rule activates for 75% and is locked in for 95%, but really 95% is the only one that matters.
Yup, it's just a point where you find out if your software was broken.
10  Bitcoin / Development & Technical Discussion / Re: Softfork cutoff on: May 25, 2015, 03:52:32 PM
Setting the thresholds is tricky.   First you must recognize that this process is emphatically _NOT_ a vote; it is a safe transition mechanism for behavior changes which are up-front believed to have (near universal) consensus. Miners are vendors for the network, providing a service and getting paid for it; their opinion counts for very little here compared to all the other users of Bitcoin, except in terms of avoiding a consensus fault which disrupts SPV clients that aren't enforcing the rules themselves-- it's just an objective question: how likely is a significant consensus divergence if the rule is enforced now?

If the feature activates when there is only a minor super-majority and an invalid-to-the-new-rules block gets created, then there will be a rather long reorganization opening up SPV clients to theft; effectively handing the non-upgraded hashpower over to an attacker.  If you think in terms of how much hashpower one is willing to hand to an attacker, 25% sure seems like a lot!  Of course-- it isn't quite that simple, because if the soft-fork is the only thing going on (e.g. no additional forking bug being exploited against non-updated nodes) then the forked-off nodes will reorg back when the other chain gets decisively ahead.  Keep in mind, even with soft-forks that only impact non-standard transactions we cannot be sure some miner won't go and mine one (invalid signatures incompatible with BIP66 were still being regularly mined every few days until an extensive manual effort was recently made to hunt down miners and get them to unpatch their software; in one case a miner had simply disabled all signature validation on his system).

Another factor is that the estimate is very noisy: you might observe 60% density when only 40% is actually enforcing. Activating in that case would potentially result in an enormous (e.g. tens of blocks) reorg with non-negligible probability. The actual hashpower levels also change, especially due to the extreme consolidation of hashpower-- you might have 60%, but then a single facility shuts off or loses power and again-- it's back to 40%, with a severe risk of a large reorg.
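A quick simulation shows how noisy the estimate is: each block is effectively a coin-flip weighted by the true enforcing hashpower share, so an observation window of ~100 blocks swings by many percentage points around the true rate. (The window size and rates below are illustrative choices, not parameters from any deployment.)

```python
# Simulate observing upgrade density over a window of blocks when the
# true enforcing share is 40%: the per-window estimate is binomially
# distributed and routinely reads far above or below the truth.
import random

random.seed(1)
true_share, window = 0.40, 100   # illustrative values

def observed_share():
    return sum(random.random() < true_share for _ in range(window)) / window

samples = [observed_share() for _ in range(200)]
spread = max(samples) - min(samples)

assert spread > 0.1                                  # double-digit swings
assert abs(sum(samples) / len(samples) - true_share) < 0.02  # unbiased, just noisy
```

With a standard deviation of roughly sqrt(0.4 * 0.6 / 100) ~ 5 points per window, readings in the high 50s from a true 40% are well within a few sigma, which is the scenario described above.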

The fact that the upper threshold isn't 100% is already a compromise to prevent being totally stuck due to absentee operators. One could debate how much risk of hold-up vs risk of consensus fault is tolerable; but whatever the thresholds are, they should be pretty conservative.

In terms of safety before lock-in: most soft-fork features are unsafe until locked in and enforced in all blocks, and permitting the rule to fall back out is not safe either. Consider CLTV as an example: CLTV is enforced, you think you can use it, you write some refunds, and then get completely ripped off when the txouts are spent via the refund path months before they were due, because the network momentarily went below threshold.
11  Bitcoin / Development & Technical Discussion / Re: help with finding issues in a modifcation of Bitcoin core on: May 21, 2015, 04:39:49 PM
Since there are people who are going to override this anyways I think its better for those users to use tested software than being forced to use their own fork.
I disagree. Most of the people who think doing this is okay are not technically competent enough to make the change-- this is why making the change is hard for them. The result is that the software crashes for them (as we see in this thread), and they adopt a new approach-- sometimes one which is less uncivilized. To whatever extent people do successfully do so, having more users of the software able to make modifications is a virtue. There are parts of the system where diversity is a hazard, but that's generally limited to the consensus parts.

you usually only get 20-30, these are available resources and using them is not something I consider to be wasting.
That's only true when you are on colocation subnets which are already saturated with nodes. A more common number is 80+, and we've had instances of widespread fullness in the past (resulting in problems with nodes getting online). If you would bother to do simple arithmetic you would conclude that even if only 20 were typically used (which isn't the case), it would only take about 100 parties total running "connect to everyone" settings to completely saturate it.  Moreover, what you "consider" or not is irrelevant. It's other people's resources you would be consuming, usually for reasons which are highly adverse to their interests (e.g. spying on them and other users of Bitcoin)... beyond the spying activity, the second most common reason for wanting to do this is also ill advised (some people erroneously believe more connections improve block propagation, when in fact too many slow it down).
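The saturation arithmetic can be spelled out in a few lines (all numbers here are rough illustrations of the argument, not measured values):

```python
# Back-of-envelope arithmetic for the saturation argument: a listening
# node has a bounded number of inbound slots, and a "connect to
# everyone" client occupies one slot on *every* listening node it can
# reach -- so per-node slot usage does not depend on network size.
inbound_slots_per_node = 100   # rough order of magnitude for a default node
greedy_clients = 100           # parties each connecting to every node

slots_consumed_per_node = greedy_clients   # one slot per greedy client, per node
assert slots_consumed_per_node >= inbound_slots_per_node   # fully saturated
```

The key observation is that adding more listening nodes doesn't help: each greedy client connects to the new nodes too, so roughly one hundred such parties fill every inbound slot on the whole network.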
12  Bitcoin / Development & Technical Discussion / Re: help with finding issues in a modifcation of Bitcoin core on: May 21, 2015, 04:43:43 AM
Having to maintain and compile a separate fork means they have to run code that is less well tested than it could be especially since the core developers actively refuse to provide any assistance and encourage others to not help people making these modifications.
Yes, we refuse to help people who are objectively attacking the network, wasting other users' resources, and harming their privacy.  Beyond it being harmful to Bitcoin and Bitcoin's users, knowingly assisting in this activity might make a participant a target of tort claims from harmed parties, or even subject to criminal prosecution in jurisdictions with expansive computer crime laws.

Why people think that it's casually okay to attack the network, and think there is any chance that standard Bitcoin software would come with a user-exposed switch to make a single node try to use a substantial fraction of the network's total capacity, is beyond me. That isn't going to happen; and users should be highly wary of the competence or ethics of anyone who ships software that does that (after all, if might makes right, then why not also have the software backdoor your computer?-- if it's possible to do, it's okay, by the logic presented here, no?). The fact that someone doesn't have any ethical qualms about the resources of other people that they waste, or the privacy harm most of these efforts are intended to create, nor the most basic software engineering experience to understand the requirements and ramifications of a software change, doesn't create any obligation on my part to compromise my own integrity and aid in these activities.

And sure, sufficiently software-competent people can technically modify the software or write their own which behaves aggressively; but fewer people doing it is an improvement (fewer resources wasted) even if it is not possible to prevent it completely. This isn't the only thing that is done with respect to this kind of abuse; as in many things, a layered response is important. The first layer is ensuring that thoughtful and ethical people do not accidentally abuse the network-- and making it clear what behavior is abusive-- then social, technical, and legal measures can be employed against those who remain. (All of which are continually being worked on by many people.)
13  Bitcoin / Development & Technical Discussion / Re: Can dynamic accumulators be used to store UTXOs on: May 20, 2015, 10:42:30 PM
Deletion is possible in this scheme.  It requires running the extended gcd algorithm (should be cheap) and two exponentiations for every participant to update their witnesses.  Deleting from the accumulator means just replacing it by the witness.
I'll have to work through it then. I wasn't aware of how one could remove an n-th root without knowing the group order, which seemed to be what prior schemes required. Hm. Maybe the fact that the value being removed can be public is what makes it work.

Yes, this would be using Merkle trees, right?  But if my math is correct, the witness proofs it requires are larger than the transactions.  I'm not sure if ordering the tree is necessary (and you would then need to rebalance it) or do you just propose to put the elements in the order they are inserted?  Instead of zeroing out the transactions one could also put the new outputs in the freed spaces to avoid the trees from growing too deep or becoming unbalanced, but still the proofs are quite large.
A hashtree constructed out of 32 byte hashes with 4 billion entries would be 32 levels deep-- so the size compares very favorably with an RSA accumulator + witnesses with equivalent security.

A significant portion of the proofs are also the first couple levels of the tree, which could be cached to avoid retransmitting them-- and such caching could be negotiated on a peer to peer basis.   For example, if nodes cached the first 16 levels of the tree (2 MBs of data) a tree membership/update proof for a 2^32 tree would be 512 bytes.

No balancing is required, which is why I mentioned insertion ordered.  You can incrementally build an insertion ordered tree with log(n) memory (you just remember the leading edge of the tree, and as levels fill you ripple up to the root).  A search tree can be used instead, but it would require proofs for both outputs and inputs.

If you don't think 2^32 coins is enough, you can pick numbers arbitrarily large... e.g. 2^51 entries, with a 500MB cache, requires 928-byte proofs.  Because of the log factor the cost is _essentially_ a constant in a physically limited world, so it's just a question of whether that constant is low enough.
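The size arithmetic from the last few paragraphs can be reproduced directly (this is just the arithmetic as stated above, with 32-byte hashes; the helper function is illustrative):

```python
# Reproducing the proof-size arithmetic: a membership proof for a
# 2^depth hash tree is one 32-byte hash per level, minus any top-of-tree
# levels the peer already caches.
HASH = 32  # bytes per hash

def proof_size(depth, cached_levels=0):
    """Membership-proof bytes for a 2^depth tree when the top
    cached_levels of the tree are already held by the peer."""
    return (depth - cached_levels) * HASH

assert proof_size(32) == 1024                    # 2^32 entries, no caching
assert proof_size(32, cached_levels=16) == 512   # with the top 16 levels cached
assert (1 << 16) * HASH == 2 * 1024 * 1024       # and that cache is 2 MiB
```

This is why the log-factor scheme compares so favorably with the ~40 KB zerocoin-style RSA membership witnesses mentioned below: even with no caching at all, a 2^32 tree proof is about a kilobyte.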

I don't see why Merkle Trees are smaller than the RSA accumulator, but probably you assumed that deletion is not possible and you need a list of spent TXO.  I think you just need the accumulator (fixed size, about 400 bytes) and as witness proof another value with all inputs deleted per block.   Per transaction you also need the witness for all inputs (another 400 bytes), but this doesn't have to be stored.  With Merkle-Trees you need a single hash for the accumulator, but the witness proofs of the transactions need also be included to check the updates and merging them doesn't save much.  Even assuming a depth of only 24 (16 million UTXOs; we currently have more) you need 768 bytes for a single witness of a single transaction input assuming 256 bit hashes.
See my figures above.  The zerocoin membership witnesses are about 40kbytes in size for 80 bit security, so I may have been overestimating the size of the accumulator updates... but even if we assume that they're 400 bytes then that puts them into the same ballpark as the TXO proofs with top of the tree caching.  And, of course, verifying hundreds of thousands per second on a conventional CPU is not a big deal...

As you wrote, instead of RSA you just need a group of unknown order.  But "unknown order" means that it is infeasible to compute the order.  Are there any other group besides RSA with this property?  AFAIK for elliptic curves it is expensive but not infeasible to compute the order.
For EC the order computation is quartic in the size of the field (via the SEA algorithm), unless the curve has special structure which allows it to be computed faster; so it's not really possible to get a reasonably efficient EC group where the order is cryptographically infeasible to compute.  Class groups of imaginary quadratic orders are believed strong in this respect, but there hasn't been that much work or analysis on cryptosystems based on ideals, so I'm unsure how useful they'd be...

Another way to address the trusted setup with the RSA schemes is to use a UFO, but this means you end up with a field tens of KB in size just to get 80 bit security.
14  Bitcoin / Development & Technical Discussion / Re: Reduce Block Size Limit on: May 20, 2015, 09:52:32 PM
There was a hard 256kb limit on maximum acceptable blocksize?  Are you sure about that?  I don't remember that.  Regardless, there's a significant difference in risk between increasing the block size limit and removing it.

There was a target on the size-- not a blockchain validation rule (this has created some confusion, because people go look back at old discussions about the temporary target and how easy it would be to increase, and think it was about the blocksize limit); but that was just local policy: by default, miners running stock software wouldn't create blocks over 250k, but all nodes would happily accept larger blocks up to the validation-rule limit. When that policy target was upped we saw a massive influx of things like unsolicited advertisement transactions, which increased again when it was raised further. The only actual limit on block sizes (beyond the message encoding behavior) has only ever been the million-byte limit.
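The policy-vs-consensus distinction can be sketched in a couple of lines (constant and function names here are illustrative, not Bitcoin Core's identifiers; the 1,000,000 and 250,000 figures are the ones stated above):

```python
# Sketch of the distinction described above: the consensus rule every
# node enforces is the million-byte limit, while the 250 KB figure was
# only a default *generation* target for miners' own blocks.
MAX_BLOCK_SIZE = 1_000_000        # consensus: larger blocks are invalid
DEFAULT_BLOCK_MAX_SIZE = 250_000  # old default miner policy target

def block_valid(size):            # enforced by every validating node
    return size <= MAX_BLOCK_SIZE

def miner_would_create(size):     # mere local policy, freely changeable
    return size <= DEFAULT_BLOCK_MAX_SIZE

# A 400 KB block exceeded the default target yet was perfectly valid:
assert block_valid(400_000) and not miner_would_create(400_000)
```

Changing the policy default needed no coordination at all, which is exactly why old discussions about raising it read so differently from discussions about the consensus limit.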

There is zero incentive for miners to not fill the blocks entirely; almost any non-zero fee would be sufficient.
There are physical limits and costs that would prevent this.  Each additional transaction increases the size of the block.  There are costs associated with increasing the size of a block.  At a minimum, there is a (very small) increase in the chance that the block will be orphaned.
The only _fundamental_ cost is communicating the discrepancy between the transactions included and the assumed included transactions.  This can be arbitrarily low, e.g. if miners delay a little to include only somewhat older, well-propagated transactions-- the cost then is not a question of "size" but of breaking rank with what other miners are doing (and, in fact, producing a smaller block would be more costly).

Even without optimal differential transmission, and only looking at techniques which are nearly _universally_ deployed by large miners today: with the relay network protocol, the marginal cost of including an already-relayed transaction is two bytes per transaction. I can no longer measure a correlation between block size and orphaning rate; though there was a substantial one a few years ago, before newer technology mostly eliminated size-related impact on orphaning.

Importantly, to whatever extent residual marginal cost exists, these costs can be completely eliminated by consolidating the control of mining into larger pools. We saw people intentionally centralizing pooling as a response to orphaning already (two years ago), which prompted the creation of the block relay network/protocol to try to remove some of that centralization pressure by reducing the cost of block relay, so there was less to gain from lowering the cost by centralizing. Moreover, any funds being spent coping with these costs (e.g. paying for faster connectivity to the majority of the hashpower) cannot be funds spent on POW security.  So I would refine DumbFruit's argument to point out that it isn't that "fees would naturally be priced at zero", but that the equilibrium is one where there is only a single full node in the network (whose bandwidth costs the fees pay for) and no POW security, because that is the most efficient configuration and there is no in-system control or pressure against it, and no ability to empower the users to choose another outcome except via the definition of the system.  I believe this is essentially the point he's making with "the most competitive configuration in a free market"-- even to the extent those costs exist at all, they are minimized through maximal centralization.  This is why it is my belief that it's essential that the cost of running a node be absolutely low and relatively insignificant compared to POW security, as otherwise centralizing is a dominant strategy for miners.

storage costs associated with holding the list of unconfirmed transactions in memory.
One does not need to store transactions in memory ever-- that Bitcoin Core currently does so is just an engineering artifact, and because there is currently no reason not to.  Technically a miner does not need to store a transaction they've verified in any way at all, beyond remembering that it successfully verified (and remembering that something verified doesn't even need to be reliable). Depending on people not getting around to writing more efficient software, or forming more efficient (e.g. more centralized) institutions, would be a weak protection indeed!
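The "doesn't even need to be reliable" point can be made concrete with a tiny sketch (names are illustrative): remember only that a txid verified, in a structure that is allowed to forget, since a miss merely costs re-verification and never accepts anything invalid.

```python
# Sketch: a miner only needs to remember *that* a transaction verified,
# not the transaction itself -- and the memory may be lossy, because a
# cache miss just forces re-verification, never a false acceptance.
verified_txids = set()           # could equally be a bounded/lossy cache

def tx_valid(txid, expensive_check):
    if txid in verified_txids:
        return True              # remembered result: skip signature checks
    ok = expensive_check()       # full (expensive) verification
    if ok:
        verified_txids.add(txid)
    return ok

calls = []
check = lambda: calls.append(1) or True        # counts how often we verify
assert tx_valid("aa" * 32, check) and len(calls) == 1
assert tx_valid("aa" * 32, check) and len(calls) == 1  # cached: no re-check
verified_txids.clear()                         # unreliable memory is fine...
assert tx_valid("aa" * 32, check) and len(calls) == 2  # ...we just re-verify
```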

or should be controlled by market forces based on the physical limitations and costs associated with increasing the block size.
That's a problem when the physical limitations largely do not exist, and to the extent that they exist, can be eliminated almost completely by configuring the ecosystem in a more centralized manner (and incrementally so: given an existing ecosystem with block-relay-related costs, you can always mitigate those costs by centralizing a little bit more).

15  Bitcoin / Development & Technical Discussion / Re: Regtest Consensus Forking Behavior Introduced in Bitcoin Core in May 2014 on: May 20, 2015, 08:19:17 AM
Sorry if this is a stupid question. Why would you need to test with a low difficulty like this, while you could have a difficulty of 46 with the now worthless 330MH/s USB Block Erupter?
To test on some 'cloud' server that doesn't have the worthless usb block erupter. Smiley  Also because you want to crank through thousands of simulated blocks in a few minutes.

I personally think its of fairly marginal value (thus the mention of testing with mainnet) but not worthless.

Perhaps I should start a collection effort for old ASIC miners for Bitcoin software developers?  Smiley  There are actually USB miners with a lot more than 330MH/s which should be worthless-ish now. 
16  Bitcoin / Technical Support / Re: How long would it take me to complete Bitcoin client upgrade? on: May 20, 2015, 08:14:47 AM
It should not, if your chain is not corrupted and you shut it down cleanly first. The databases are forward compatible.

If a reindex is taking 3 or 4 days your computer is very slow or there is something wrong with it. On a several year old desktop (3.2GHz quad core with SSD) I can download and fully sync in about 3 _hours_ with the current software; and reindexing is somewhat faster.
17  Bitcoin / Development & Technical Discussion / Re: Reduce Block Size Limit on: May 20, 2015, 08:11:22 AM
a socialist production quota that removes the incentive to improve the p2p network in order to avoid centralization.
Yes? And so?   The limited total number of coins is a socialist production quota used to create the scarcity needed for Bitcoin to function as a money-like good.   The enforcement of digital signatures is a socialist constraint on the spendability of coins that makes possible something akin to ownership.  Decentralization is no less a fundamental defining characteristic of Bitcoin than limited supply or ownership of coins-- that it must be protected shouldn't be up for debate anywhere; but reasonable people can easily disagree about the contours of trade-offs or the ramifications of decisions.

Following the general logic you suggest, nothing at all would be enforced: miners could publish whatever they wanted-- and the system would be worthless.  Bitcoin is a system that has any value at all because it enforces rules against behavior that would otherwise be permitted by the laws of nature.

This isn't to say that all limits are proper or good or well calibrated; but you cannot take a principled stance against any and all limits in general and then speak reasonably about Bitcoin at all.
18  Bitcoin / Development & Technical Discussion / Re: Can dynamic accumulators be used to store UTXOs on: May 20, 2015, 06:07:21 AM
AFAIK these accumulator schemes require a spent-coins list that grows forever, as they require a trusted-party trapdoor to efficiently delete; so you would need to keep a linear database to prevent double spends. Or have you found a scheme with an efficient trustless delete?

RSA-based accumulators also generally require trusted setup; violation of the trusted setup lets you make false membership proofs.  (The same protocols could be applied to non-trapdoor groups of unknown order, but the performance and security are much more questionable.)

So I'm not seeing how this can help with UTXO.

Though you actually do not need fancy number-theoretic cryptography: as has been proposed previously, if the utxos are stored in an insertion-ordered hash tree, appending to the end requires a log-sized proof (just the leading edge of the tree), and showing membership requires a log-sized proof. Membership proofs can be updated cheaply by observing other updates that intersect with your proofs. And spending requires just zeroing out a member, which can be done with the same proof path as was used to show membership.

So you can imagine a world with stateless miners and full nodes, and wallets either tracking their own coin proofs or outsourcing that work to archive nodes that help them form the proofs needed for their transactions.

If you go out and compute the concrete sizes for the above scheme you'll find two things: the results, even for gigantic databases, are _much_ smaller than RSA accumulator approaches due to constant factors, even though it scales with a log() (and of course it's much less CPU-intensive too); but the bandwidth required for processing transactions/blocks is increased massively.  So it isn't clear that it would actually be a win, as bandwidth tends to be the more scarce and slower-growing resource.
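The "leading edge" construction mentioned above (building an insertion-ordered tree with log(n) memory, rippling up to the root as levels fill) can be sketched in a few lines. This is a minimal illustration of the technique, not a production UTXO commitment:

```python
# Minimal sketch of incrementally building an insertion-ordered hash
# tree with log(n) memory: keep at most one pending subtree hash per
# level (the "leading edge") and ripple upward as each level fills.
import hashlib

def h(a, b):
    return hashlib.sha256(a + b).digest()

class IncrementalTree:
    def __init__(self):
        self.edge = []           # edge[i]: pending subtree root at level i

    def append(self, leaf):
        node, level = leaf, 0
        # A full level pairs with the incoming node and ripples upward.
        while level < len(self.edge) and self.edge[level] is not None:
            node = h(self.edge[level], node)
            self.edge[level] = None
            level += 1
        if level == len(self.edge):
            self.edge.append(None)
        self.edge[level] = node

tree = IncrementalTree()
for i in range(8):
    tree.append(hashlib.sha256(bytes([i])).digest())

# 8 = 2^3 leaves collapse into a single pending node at level 3,
# so only O(log n) state is ever held:
assert [e is not None for e in tree.edge] == [False, False, False, True]
```

Spending-as-zeroing and membership proofs would layer on top of this; the sketch only shows why append-time memory stays logarithmic in the number of entries.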
19  Bitcoin / Development & Technical Discussion / Re: Regtest Consensus Forking Behavior Introduced in Bitcoin Core in May 2014 on: May 19, 2015, 10:45:34 PM
I have to admit I'm disappointed that the answer is basically that Bitcoin Core doesn't feel like regression and simulation testing is important enough to warrant proper retarget behavior for it, but I appreciate the response nonetheless.
Testing is very important, but adding additional code to accommodate it makes the simulation even _less_ faithful, and more likely to miss real issues (or potentially even introduce real issues).  I do regression testing with an ASIC miner and the main network code (with checkpoints=0, of course). Testing with the actual production-time behavior is the gold standard and cannot be replaced with a shortcut version without compromise.  (FWIW, testnet-- which, in spite of its own stupid shortcuts, is somewhat closer to Bitcoin-- has test cases in the chain for adjustment extremes.)

The only reason you were able to make this comment at all is because regtest exists, and the only reason regtest mode exists is because it was specifically created for the block tester harness that runs on an externally hosted CI setup-- i.e., its intended use. For a long time the harness applied a patch to change the behavior to make the tests computationally cheaper-- at the expense of making them less accurate and faithful-- but maintaining the patch externally took work.

Use of it has expanded since then--  I think today a somewhat different approach would make more sense for the regtest shortcutting and would result in a smaller divergence from the normal network behavior (e.g. when in testing mode, mask out the highest bits of the block hashes before the target check).  So quite the opposite: testing is important enough that one should actually be testing the actual Bitcoin network code and not an altcoinified mockup that makes testing easier; or, to the extent a modified version for testability is used, great care should be taken to minimize the number of differences (and to not add risk to production code).  How you could extract "testing is not important" from my comments about a whole alternative network mode created specifically for testing is beyond me-- specifically given the amount of effort I previously put into convincing you that agreement testing was essential.
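The masking idea suggested above can be sketched concretely (the mask width and function names are illustrative; the point is that the ordinary target comparison and retarget code run unmodified, with only the hash value cheapened in testing mode):

```python
# Sketch of the suggested regtest shortcut: rather than a special
# minimum difficulty, zero the top bits of the block hash before the
# ordinary proof-of-work comparison, leaving the real code path intact.
REGTEST_MASK_BITS = 64  # illustrative: bits to zero in testing mode

def check_pow(block_hash, target, testing=False):
    value = int.from_bytes(block_hash, "big")
    if testing:
        value &= (1 << (256 - REGTEST_MASK_BITS)) - 1  # drop high bits
    return value <= target

hard_hash = (1 << 255).to_bytes(32, "big")   # fails any realistic target
target = (1 << 200) - 1
assert not check_pow(hard_hash, target)      # production check rejects it
assert check_pow(hard_hash, target, testing=True)  # masked check accepts it
```

Since the target itself is untouched, retargeting arithmetic behaves exactly as on mainnet; only the effective work needed per block shrinks.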

20  Bitcoin / Development & Technical Discussion / Re: Regtest Consensus Forking Behavior Introduced in Bitcoin Core in May 2014 on: May 19, 2015, 05:36:14 PM
This was actually pointed out at the time the change was made; but it seemed silly to change regtest's minimum just to make it fit (or worse, to add a lot of additional complexity and another number type just to handle values which _cannot_ occur in Bitcoin). Somewhat similar to how the testnet 20-minute rule exposes weird behavior, where if the penultimate block in the retargeting window is diff-1 the difficulty will jump from whatever it was back to ~1: the tests intentionally break the system in order to make it easier to test, and sometimes that has collateral damage. Needless generality exposes its own risks-- e.g. the OpenSSL bignum code was wrong in a platform-dependent way until fairly recently (and, irritatingly, we spent months with that discovery embargoed)-- any use of it carries its own risks.