Bitcoin Forum
2221  Alternate cryptocurrencies / Altcoin Discussion / Re: WTF happened to ripple? on: December 07, 2014, 06:20:28 AM
In this old thread I described how the Ripple consensus model was unsound: it could be expected to spontaneously break unless the topology met certain characteristics which were unlikely to be met by any graph except a centrally controlled one, and without additional unspecified functionality (perhaps hidden in assumed behaviour of users or via centralization) it couldn't resist Sybil attacks. Unfortunately Ripple's creators responded to these concerns-- to the extent that they responded at all-- with evasion and a seeming refusal to make a clear statement of the security assumptions that, coupled with their design, supported their security claims. And the media and finance industry largely swallowed their claims without much critical thinking, counting on an orgy of social proof that seems to have been ultimately backed by the same nothingness that backs their pre-mined currency.

I wasn't the only person to point out these issues. More recently Ripple Labs published a paper claiming the soundness of their model which made a number of clearly illogical arguments and rested on many unclear and unsubstantiated assumptions; it was also criticized by Andrew Miller, for many of the same reasons I criticized the design here.

(And in this thread I was handicapped by the fact that Ripple was closed source at the time: but even so its limitations were apparent simply from the seemingly impossible claims that its creators couldn't back up.)

On Tuesday at a Bitcoin event I was still being harangued by Ripple/Stellar advocates claiming the absolute soundness of the system. I care about the whole cryptocurrency ecosystem since, in the minds of the public, any failure is harmful to all of us, and I don't want to see anyone suffer losses, not even the gullible... But it makes no sense for me to spend my limited time providing free consulting for the impossible torrent of ill-advised, impossibility-claiming systems... especially when their creators aren't thankful and/or respond with obfuscation that makes their work unreviewable, or with hand-waving that doesn't admit their new assumptions. I don't want to see anyone get hurt, but... hey, I spoke up a bit and people continued on anyway without asking the kind of tough questions they should have been asking. I'm certainly not going to spend all my time correcting everyone who is wrong on the internet, especially when altcoin folks have been known to play pretty dirty toward their critics. No one should assume that other people are going to go out of their way to beg them not to use something broken.

So, when I found out that Stellar spontaneously split consensus state, apparently just as I described in this thread, without even an attacker (though any consensus split is easily exploited by attackers of opportunity once it exists)-- well, the only thing that surprised me was the burst of honesty in admitting that the system was unsound. But I was disappointed by the lack of frankness about how fundamental the limitations are in this space-- instead they advocated the hope of magical fixes sure to be found by a respected authority-- and disappointed that no mention was made of the fact that other experienced people in this space had warned of precisely these issues, going back several years. I was also saddened to see that no one noticed the dissonance in the 'temporary' solution of converting to a centralized model: if a system can be converted by some loss-correcting central bank into a centralized system... can we really say it was ever decentralized in the first place?

Perhaps in the future more people will ask the hard questions and demand better answers? If so, it would be worthwhile for experienced people to spend time reviewing other systems, and we could all benefit. Otherwise, perhaps those who aren't interested in standing up to the rigor we'd normally expect from a cryptosystem will stop calling their broken altcoins "cryptocurrencies". Those of us who actually want to build sound systems don't want our work sullied by these predictable failures, and being able to say "I told you so" is no consolation.
2222  Bitcoin / Development & Technical Discussion / Re: Bitcoin protocol standarization on: December 06, 2014, 07:31:12 PM
One of my longer term hopes around the refactoring of Bitcoin core into a separate library for consensus is that it allows us to compile the consensus parts into a simple bytecode with a narrow interface which can be executed by an easily implemented and testable virtual machine, so every implementation can just use the same bytecode and be confident that they'll be consistent.  Considering how hard it has been to get people to understand the unusual requirements of consensus, it may turn out to be hard to get people to accept the performance hit (and to avoid doing crazy JIT stuff which breaks the obvious-correctness of the approach).
2223  Bitcoin / Development & Technical Discussion / Re: Bitcoin 0.8.1 Clients vulnerable to easy bruteforce attack using RPC on: December 06, 2014, 04:49:47 PM
Yes, the Debian packaging of Bitcoin was broken. This was known and fixed years ago, you're linking to a two year old version of the files. People building for themselves or using the Bitcoin.org binaries were never exposed to it.

The RPC is also not exposed outside of the localhost unless you go and add additional configuration, and the additional configuration results in it still being limited to particular networks normally.
2224  Bitcoin / Development & Technical Discussion / Re: crypto software - writing the grotty bits. on: December 06, 2014, 01:17:50 PM
On the last bit, I have to take grau's side. Porting low-level code to a high-level language may need a bit of effort, but the amount of pointer-safety checks, fuzz tests and unit tests are tremendously reduced. Also, not every coder can write secure low-level code. If a project needs community contribution, being in a low-level language may be a burden for more novice programmers.
And certain kinds of safety, in particular the kinds of safety Cryddit was almost exclusively talking about, become impossible.  Meanwhile, in the kind of code Cryddit is talking about many of the concerns you'd be hoping to address are already structurally impossible, and provably so... e.g. because it does _no_ dynamic memory allocation at all, because there are no data-dependent memory accesses at all, or the accesses are very efficiently bounds-checked via masking, etc.  This also applies to kinds of software beyond what Cryddit was talking about, where worst-case memory or CPU complexity is a significant security constraint due to the need to resist attacks in the form of pathological inputs, or to suppress timing side-channels.
2225  Bitcoin / Development & Technical Discussion / Re: How Perfect Offline Wallets Can Still Leak Bitcoin Private Keys on: December 05, 2014, 07:31:54 PM
@gmaxwell:
I agree with you that ZK is very expensive and hard to secure. With “best I can imagine so far” I wanted to express that I am very unsatisfied with any proposed solution so far.

Btw: Can you point me to a text where you argue why your (second) attack is undetectable?

Update: Pardon, you never claimed that. You just said that only the attacker can decrypt the leaked data. Then this is a difference between our attacks, since my malicious signatures are provably indistinguishable from regular ones.
Mine are computationally indistinguishable to anyone but the attacker and the party that knows the key (or I had another version that was only distinguishable to the attacker or someone with the private key and a random nonce from the device; but it's not deterministic).   I think your definition of indistinguishable is a bit limited: emitting related nonce values is pretty distinguishable!
2226  Bitcoin / Development & Technical Discussion / Re: Decimal Places in Bitcoin? on: December 05, 2014, 07:24:22 PM
There are other ways to check the sanity of your operations... In monero, uint64 is used for coin units instead.
It is possible to do this, but it becomes much harder to make correct, overflow- and rounding-error-free computations of many formulas (i.e. more complexity for users of the system).  In Bitcoin the sum of any two valid values fits, naive comparison is safe, etc.
2227  Bitcoin / Development & Technical Discussion / Re: How Perfect Offline Wallets Can Still Leak Bitcoin Private Keys on: December 05, 2014, 06:15:24 PM
But a new problem arises: How to implement the proof in a way that ensures that we don't create new side channels for leakage?
Any system that has sound zero knowledge is going to have a random input.  E.g. the CRS SNARK construction, which is the only remotely practical implementation for NP available that I'm aware of, is freely rerandomizable. Maybe a unique proof is possible if you give up soundness on the ZK, but then a cryptographic break in the ZK system could make it leak your private key.

This complexity is part of why I'd previously proposed the alternative where the online requesting device blinds the signature request, then gives the signing device a ZKP that the blinded message being signed is the intended message...  The result is that the side channel is reduced to 1 bit (sign/don't sign) unless the requesting device and the offline device conspire. (Also the aforementioned fact that it's much easier to verify a proof than create one.)

From your writeup,
Quote
Another counter-measure would be to strictly not use any address more often than once
This doesn't solve it: The key can be leaked in a single signature, and the attacker can race the user in the network; and 'future' keys, if they're known to the device can also be leaked at the same time using the techniques I've described.
2228  Bitcoin / Development & Technical Discussion / Re: How Perfect Offline Wallets Can Still Leak Bitcoin Private Keys on: December 05, 2014, 05:58:02 PM
Deterministic choice of “k” unfortunately does not solve the issue, because you cannot verify that choice without knowledge of the private key.
We know and every message here is pointing this out.

Quote
Since the whole point of an offline/embedded wallet is that the key never leaves the wallet, there is no way for a user to verify that “k” has been chosen according to RFC6979 or anything alike.
That may be the point for _you_, but really the point is to be offline; transferring a key between two offline devices is an option for some... it's certainly an option for testing... and even simply testing increases your assurance, though it's not a complete assurance for the reasons enumerated.  E.g. without determinism every device off the factory line may be leaking your keys in every signature, and no one could tell short of a successful reverse engineering of the device.  With determinism, the evil could only be intermittent and escape discovery... since just a single user loading a static test key and checking the output would catch the case where the device leaks in every signature.

If your offline device is just an HSM that signs everything then there is an obvious solution: blind the signatures. It's trivial with Schnorr, but even for ECDSA there is a scheme which should work for this.  Otherwise, multisignature seems to be the only reasonable fix.   Any of the ZK proofs are too complex and expensive in the prover.  Your paper suggests the device create the proof, but that's likely out of the question complexity-wise for many signers (running a k*G under a ZKP is a very expensive computation); better would be to blind the signature and then use the ZK proof to prove to the device that the blinded signature is something it wants to sign... this works better because existing constructions have fast verifiers.


Quote
You have to know that any of them is properly doing their job
Yes, if you have no trustworthy devices you are simply out of luck. No system can save you.  If _all_ of your devices are concurrently bad they can just emit your key instead of a signature and your online hosts can simply call home to report it.

There must be some assumption of honesty.  A reasonable one is that you will use multiple devices and at least one of your devices will be honest, or at least not serving the same evil master.  Under that assumption secure systems can be built, without it no secure system can be built.

Understood - but the offline device does have the private key and presumably could display that, and if it can do that then it could also display the "k" value that could then be audited via another offline device.
Showing K doesn't seem prudent.  Better to just sign twice and compare the results: they should be identical.  If you really wanted to show K, better to show H(K).  Otherwise someone could just use the revealed K to immediately compromise the security if they could see the device's screen. Smiley
2229  Bitcoin / Development & Technical Discussion / Re: How Perfect Offline Wallets Can Still Leak Bitcoin Private Keys on: December 05, 2014, 05:10:59 PM
Perhaps a better RFC might help (or maybe a BIP)?
IMO having a "standard" is the key thing (provided it works as expected and being deterministic can be easily tested).
A standard is important. And yes, we could restate RFC6979 in a way more people would find easy to implement, as a BIP... and that might be wise. But really, in the Bitcoin space far far too many people are writing their own cryptographic code, and making little to no effort to implement best practises in other ways either.  E.g. prior to libsecp256k1 I am aware of no implementation for secp256k1 which had even attempted to close the timing sidechannels.  Ideally, there should exist a good, well reviewed library doing the systems work that other people are not going to bother making a best-practises implementation of...

From the discussion, it's not clear to me if I'm clearly explaining why I keep saying it's not enough: the problem is that your evil device can follow the RFC until asked to sign a transaction paying a particular destination, or every 1000th time at random and never in the first 100 signatures... and so you cannot test for non-evilness unless you test every output.  It's possible to test every output (assuming the implementation uses a standard), but basically no one will.   It ups the bar, narrows the exposure, etc... but without additional magic it doesn't eliminate the risk. This is also one reason that offline signing isn't a replacement for multi-signature.
2230  Bitcoin / Development & Technical Discussion / Re: How Perfect Offline Wallets Can Still Leak Bitcoin Private Keys on: December 05, 2014, 04:41:49 PM
Isn't that exactly what the point of the RFC is (i.e. every implementation should have exactly the same result if they have followed the RFC correctly)?
Yes.  But the RFC makes a (IMO) strategic error of using a convoluted explanation motivated by showing that the construction is identical to a pre-existing standardized CSPRNG.  As a result, some implementers in the Bitcoin space vomit all over it and just implement something adhoc like H(key || message). Arguably people who do this have no business authoring cryptographic software for other people to use, but we live in a world where-- for various reasons-- people who have no business writing cryptographic software for others to use sometimes do.

Still, the solution is not complete... since how many users are going to use two independent implementations and compare all their outputs just to check for side-channels?
2231  Bitcoin / Development & Technical Discussion / Re: How Perfect Offline Wallets Can Still Leak Bitcoin Private Keys on: December 05, 2014, 03:41:01 PM
I had thought that the idea of making k deterministic rather than random was a better solution all around (from memory there is already a RFC that describes how this can be done).
Just "deterministic" is not a complete fix, though it can help... because while it's deterministic, someone who doesn't know the private key cannot tell if the procedure has been followed. You can use two independently created signers that both know the private key and compare their outputs, however... which is part of why I was trying to get people to use a canonical implementation and not some ad hoc construction for derandomization.
2232  Bitcoin / Development & Technical Discussion / Re: How Perfect Offline Wallets Can Still Leak Bitcoin Private Keys on: December 05, 2014, 03:31:22 PM
I want to draw your attention to another attack, that (to my knowledge) has not been discussed in the context of Bitcoin yet, which also arrives from the fact that the wallet implementation freely chooses “k”:
It's been discussed several times. E.g. https://bitcointalk.org/index.php?topic=285142.msg3077694#msg3077694 and http://www.mail-archive.com/bitcoin-development@lists.sourceforge.net/msg02721.html

Because of it I pushed very hard for embedded hardware implementations (which make the ECDSA inherently somewhat unauditable) to use derandomized DSA... which helps, but not completely, since the device could choose to leak only very infrequently, such that a random audit wouldn't catch it.

To convince them, I created an implementation of ECDSA signing with the following fun properties:

(1) A single signature leaks the private key but only to the attacker and no one else even if they know about the back door.
(2) If you collect slightly more than 16 signatures, with different or identical keys, from a single victim the attacker (and no one else) can, with exponentially increasing probability, recover an additional 256 bit secret, such as a master private key.
(3) The signing was stateless (it didn't require additional memory outside of the ECDSA signing function) and deterministic-- it would always give the same signatures for the same (key, message) regardless of repetition or the order in which they were input.

An obvious path for improvement would be to use a blinded signature, but doing so would prohibit the signing device from verifying the message, which is something we usually want, without an additional ZKP.
2233  Bitcoin / Development & Technical Discussion / Re: Headers-first client implementation on: December 04, 2014, 12:27:00 AM
There is no change in behaviour for SPV lite wallets, as they are _headers-only_ for the most part.

This is just a change in Bitcoin core that improves performance, it's been our recommended process for full node synchronization for years; it just took time to fully prepare and test it for Bitcoin core.
2234  Bitcoin / Development & Technical Discussion / Re: crypto software - writing the grotty bits. on: December 03, 2014, 05:15:56 PM
I feel like this is misleading and is an anti-pattern. Not that 100% test coverage is a bad thing, but that it leads to a false sense of security with regards to your codebase quality. 100% coverage is a pretty low standard to be achieving.
Also, a focus on numbers is kind of silly; it's fine for the tool to not report 100% when it's really 100% of the actual code that runs... obsessing about the number is likely to lead to bad incentives, like removing error handling code. Also, 100% line coverage is way different from 100% branch coverage.  I mention coverage because it's insanely useful to just look at it, regardless of the numbers, and because some powerful testing approaches I mentioned are not possible unless you have basically complete coverage.

For things like small security-critical cryptographic kernels and other focused high risk work it's pretty reasonable to expect basically complete coverage; less so on other code bases.

(Basically complete:  Sometimes you may have error handling code which you cannot trigger. You may _believe_ it is impossible to hit, but you are unable to _prove_ that it is so you cannot remove the code and shouldn't even consider removing it. Or you have some harness goop that lowers your reported coverage. All these things should get careful review, but they shouldn't result in you feeling you failed to get complete coverage.)

Quote
I'm sure I'm not the only one that sometimes wonders how software manages to function at all. It often feels as if there are thousands of potential entry vectors on my machine and just one is enough. But that doesn't mean it's not worth trying our best.
It's pretty bad.   I mean, often programming tools are some of the most heavily tested and reliable pieces of software (after all, many of their users are well qualified to find, report, and fix bugs in them) ... and yet I don't consider my code well tested unless I've found a new toolchain or system library bug while testing it.

The priority comments are pretty good too.  The complete result is what counts.
2235  Bitcoin / Development & Technical Discussion / Re: crypto software - writing the grotty bits. on: December 03, 2014, 06:45:38 AM
I guess the most important thing to know is that in C there are no promises that the compiler will not undermine you with leaks... the compiler is only obligated to produce visibly identical behaviour to the C abstract machine.

Good techniques are good and I thank you for sharing ... I'm just also sharing some caution because it's easy to get caught up in the ritual of techniques and miss that there are limits.

Consider, the compiler is free to spill registers to random local variables when it can prove that their value is irrelevant (e.g. because it'll never be read again, or because it will just be overwritten).  They do this.  So now you have memory with your secret data that your zeroizing functions don't hit.

It gets worse: of course data in the registers ends up being sensitive... and you call another function, and it calls pusha (or friends) and pushes all the registers someplace else on the stack. ... even if this doesn't happen with your current toolchain, if you depend on it and you're not testing, ... well, you'd better not ever upgrade your toolchain.

And then when you're on a multitasking operating system the kernel can go copying around your current state at basically any point.

Use of volatile sounds interesting, but take care: until fairly recently volatile was fairly buggy in GCC and Clang/LLVM because it's infrequently used; beyond often doing nothing at all, it was sometimes causing miscompilation. I say "was" here primarily because more recent testing with randomly generated source code flushed out a lot of bugs... so I don't think I'd suggest this if you're targeting GCC prior to 4.8 (or any compiler which hasn't been subjected to the same extensive randomized testing that GCC and Clang have been).

On testing. BE SURE TO TEST THE CASES WHICH SHOULD NOT AND ESP. "CANNOT HAPPEN"  I have seen ... just dozens.. of programs fail to have any security at all because they were JUST NEVER TESTED WITH FAILING CASES.  agreed there.

Fuzz test too, when you write all the tests yourself you'll fail to discover your misconceptions, fuzzing can help.  Non-uniform randomness can be more useful, long runs of zeros and ones tend to trigger more behaviour in software. Whitebox fuzzers like AFL (and KLEE, though its a pain to run) can get you much more powerful fuzzing when testing a complete system, though for unit tests you don't need them generally.

Instrument your code for testability. If there is some part thats hard to reach, make sure you have a way to test it.  Use branch coverage analysis tools like lcov to make sure you're covering things.

Write _lots_ of assertions. The computer is not a mind reader, it won't know if your expectations have been violated unless you exposed them. The asserts can be used only during testing, if you really want (e.g. performance concerns or uptime matters more than security).

Assertions make all other testing more powerful, time spent on them has super-linear returns.

Test your tests by intentionally breaking the code, both manually and by just incrementally changing + to - or swapping 0 and 1 or > and <, etc.   You cannot use this kind of mutation testing successfully, however, until you have essentially 100% test coverage, since code the tests never execute is obviously safe to mutate.

I have found many one in a billion input scale bugs by mutating and improving tests until they catch all the mutations.

There are tools for 'testcase reduction' for finding compiler bugs, like: http://embed.cs.utah.edu/creduce/  You can run it on your normal code and add tests until it's unable to remove anything but formatting.

Coverity, clang-static-analysis, cppcheck, pc-lint, are useful informal static analysis tools. You should be using one or all of them at least sometimes.

If the codebase is small, consider using sound/formal static analysis tools like frama-c, at least on parts of it.

Valgrind is a life-saver. Learn it. Love it. (likewise for its asan cousin in GCC and clang).

Don't leave things constantly warning; figure out ways to fix or silence the harmless warnings or you'll miss serious ones.

Test on many platforms, even ones you don't intend to target... differences in execution environment can reveal bugs in code that would otherwise go undetected. Plus, portability is a good hedge against the uncertainty of the future.  I've found real bugs by building on ARM, PA-RISC, and Itanium which were latent on x86 but immediately detected on the other platforms because of small differences.

Unit tests are nice and important, but don't skimp on the system tests. A lot of bugs arise in the interactions of otherwise correct parts.

Don't shy away from exhaustive testing. Have an important function with <=32 bits of input state space? You can test every value.

Testing once isn't enough: All these things can be done from continuous integration tools. Every commit can be tested, and for the random tests they continue to add more testing... not just wasted cpu cycles.  I've spent time proving code correct or extensively fuzzing it, only to later accept a patch that adds an easily detected misbehaviour, if only I'd redone the testing. Automating it makes it harder to forget to do it or lazy out of it when the change was "obviously safe".  Obviously safe code seldom is.

Quote
but there is no way to get around it because AFAIK no other language allows me to absolutely control when and whether copies are made,
Yes, it's the norm everywhere other than C for any code you've not written to be effectively a non-deterministic black box. C++ can be less than awful if you subset it-- though by the time you've subset it enough, you're almost no better off than with C.

Quote
A lot of crypto software makes extensive use of global variables for sensitive values.  They are fast, never get deallocated or (accidentally) copied during runtime
They can be copied at runtime.

Quote
I always use unsigned integers.
Do take care, a lot of C coders cut themselves on the promotion rules around unsigned. Use of unsigned in loop counters results in lots of bugs in my experience, esp with less experienced developers. Take the time to really learn the promotion rules well.

Quote
You can't even check to see if undefined behavior has happened, because the compiler will go, "oh, that couldn't happen except for undefined behavior, and I can do whatever I want with undefined behavior.  I want to ignore it."
Yes, though you can check in advance whether it would happen, without causing it, and avoid it.  Though experience suggests that these tests are often themselves incorrect.   GCC and Clang now have -fsanitize=undefined, which instruments signed arithmetic and will make the program scream errors at runtime.  Not as good as being statically sure the undefined behaviour cannot happen.

Quote
I tend to use do/while loops.
I have used this same construct. Also with the masked variables.

I've intentionally made objects larger and null-filled so that masked accesses were guaranteed safe. e.g. char foo[27]; foo[i] = 3; becomes char foo[32]; foo[i&31] = 3;.

Some other things:

Avoid recursion, and use none at all unless you can statically prove its depth.  Running out of stack is no joke, and a source of many embarrassing (and even life-threatening) bugs. It's not worth it.

Avoid function pointers.  They give you _data_ that controls the flow of your program. ... which may be somehow writable to attackers.   When a function pointer can't be avoided completely try to make it a bit-masked index into a power of two size const array which is filled with zeros in the slack space.

Sidechannels are very hard to avoid and even more likely to be undermined by the compiler into leaks.  First assume you can't prevent them, and try to be safe regardless.  Bitops can get you constant-time behaviour in practise, including loads... e.g. AND the thing you want to load with ~0 and the things you don't want to load with 0, and OR the results; it's tedious and the compiler can still helpfully 'optimize' away your security. ... but the next option is writing things in assembly, which has its own risks.  Valgrind warns on any control flow change (branches!) on 'uninitialized data'; there are macros you can use to mark any bytes you want as uninitialized, so you can mark your secret data uninitialized and have Valgrind warn about branches on it (though it's not completely sound; some warnings get suppressed).

Get comfortable with gcc -S and objdump -d ... reading the assembly is the only way to know for sure what you're getting, and the only way that you're going to discover that your presumed branchless code has been filled with jumps by the helpful compiler. Likewise, you can make your secrets have a distinctive pattern and dump core when you think things should be clean, and confirm if they actually are or not.

It's possible to make wrapper functions that call your real function, and then a dummy function that uses as much stack as your real function and zeros it all. This is one way to stem the bleeding on stack data leaks.

More recently I learned that dynamic array accesses are not constant time on all hardware, even if you're sure to always hit the same cache-line pattern, necessitating masked loads when the indexes would be secret data.

In some cases it can be useful to make your memory map sparse, with your data surrounded by a sea of inaccessible pages, allowing your hardware MMU to do some of the boundary checking for free.  This class of approach is used by asm.js and the classic 'electric fence' memory debugging tool.

Data structures can have beginning and ending canary values. Set them after initializing the structure, check them whenever you use it, and zeroize them when the structure is freed... you'll catch cases where code accidentally uses a freed or non-allocated data structure much more often. Especially when crossing a boundary of who-wrote-this-code.

GCC has annotations for function arguments which must not be null and for functions whose results must not be ignored. These can be used to turn silent mistakes into warnings.  But take care: not-null teaches the optimizer that the argument cannot be null and _will_ result in it optimizing out your non-nullness checks, so use the annotations in your headers but don't compile your own code with them (see libopus or libsecp256k1 for examples).

Complexity is the enemy of security. Seek simplicity. They say that code is N times harder to debug than it is to write, so if you're coding at your limit you can't debug it.  When debugging, you at least have some positive evidence of the bad thing that happened... you know badness of some form was possible, at the very least. Making secure software is even harder, because nothing tells you that there was an issue-- until it's too late.

At times I suspect that it's (nearly) impossible to write secure code alone. Another set of eyes which understands the problem space but doesn't share all your misunderstandings and preconceptions can be incredibly powerful, if you're fortunate enough to find one. They can also help keep you honest about shortcuts you make but shouldn't, and test cases you skip writing. Embrace the nitpicking and be proud of what you're creating. It deserves the extra work to make it completely right, and the users who will depend on it deserve it too.

2236  Bitcoin / Development & Technical Discussion / Re: 0.10 status on: December 02, 2014, 04:45:02 PM
0.10 is coming along very nicely. I feel pretty good about this release, not everything I wanted in it made it in but there are many important improvements.

Wumpus will likely comment more, but my opinion is that we could move to RC basically any time, although there are still some smaller but important bug fixes being hammered out and since we know about them they'd be better to get in prior to RC.

But you don't have to ask here to find out the status, the development is all in the public... you can go look at the changes yourself now-- https://github.com/bitcoin/bitcoin/commits/master
2237  Bitcoin / Development & Technical Discussion / Re: bitcoind-ncurses: Terminal front-end for bitcoind on: December 01, 2014, 11:17:19 AM
gmaxwell,

I've missed being able to stay in touch with the field. Glad to see that bitcoin has not yet died. Smiley

getchaintips looks quite interesting. If I'm interpreting the source comments correctly (I can't compile and test right now) then it matches a feature I was intending to add to bitcoind-ncurses - monitoring forks and their eventual resolve.
Delete bitcoin-config.h and rerun autogen.sh.  Also make clean before building. Some of the changes we made have upset the build system for dirty build directories. Sorry for that. Hopefully this will get you going. You may also need to install libgmp-devel if you don't have it currently.
2238  Bitcoin / Development & Technical Discussion / Re: bitcoind-ncurses: Terminal front-end for bitcoind on: November 30, 2014, 11:40:33 PM
I'm sad to have not heard from you for a while...

Though I'd come by to point out: If you get some time and interest again; git master getchaintips rpc should be a pretty nice source of information for another screen in this tool.
2239  Bitcoin / Development & Technical Discussion / Re: Best way to handle multiple accounts (like a bank)? on: November 30, 2014, 11:19:33 AM
If you don't know what you are doing
To make this solid advice more useful, I'll add... how do you know when you don't know what you're doing?  If you think you know what you're doing, that's a strong indicator that you don't.
2240  Bitcoin / Development & Technical Discussion / Re: ECDSA math on: November 30, 2014, 09:33:30 AM
This also allows you to calculate the public key from the signature.
The public key is not completely unambiguous from the signature. (nor is R, technically)