Bitcoin Forum
Author Topic: [ANN] bitcoinj 0.7 released  (Read 1460 times)
Mike Hearn (OP) · Legendary
February 19, 2013, 10:29:03 PM · #1

I'm pleased to announce the release of version 0.7 of the bitcoinj Java library for working with Bitcoin. Bitcoinj forms the foundation of MultiBit, Bitcoin Wallet for Android, SatoshiDice and more.

To get bitcoinj 0.7, check out our source from git and then run git reset --hard a9bd8631b904. This will place you on the 0.7 release in a secure manner. This paragraph was written on Tuesday 19th February 2013 and is signed with the following key, which will be used in all release announcements in future: 16vSNFP5Acsa6RBbjEA7QYCCRDRGXRFH4m.

Signature for the last paragraph: IMvY1FsQobjU2t83ztQL3CTA+V+7WWKBFwMC+UWKCOMyTKA+73iSsFnCHdbFjAOEFMQH/NvJMTgGeVCSV/F9hfs=

If you want to, you can check that the original announcement mail sent to bitcoinj@googlegroups.com is correctly signed with the google.com DKIM key, to establish a full chain of trust.
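Even without a Bitcoin library, the signature above can be sanity-checked for shape: bitcoin-qt compatible message signatures (which this release adds support for) are 65-byte "compact" signatures, one recovery header byte (in the range 27–34) followed by the 32-byte r and s values. A plain-JDK sketch, using the signature string from the announcement:

```java
import java.util.Base64;

// Sanity-check the shape of the message signature quoted above. A
// bitcoin-qt compatible signature is 65 bytes: a recovery header byte
// (27..34, encoding the recovery id and key compression) followed by
// the 32-byte r and 32-byte s values.
public class CompactSigShape {
    static final String SIG =
        "IMvY1FsQobjU2t83ztQL3CTA+V+7WWKBFwMC+UWKCOMyTKA+73iSsFnCHdbFjAOEFMQH/NvJMTgGeVCSV/F9hfs=";

    public static byte[] decode() {
        return Base64.getDecoder().decode(SIG);
    }

    public static void main(String[] args) {
        byte[] sig = decode();
        System.out.println("length = " + sig.length);      // 65
        System.out.println("header = " + (sig[0] & 0xff)); // 32, within 27..34
    }
}
```

That recovery header byte is what makes the ECDSA key recovery feature mentioned in the release notes possible: the public key (and hence the signing address) can be derived from the signature itself.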

Release notes

  • Thanks to Matt Corallo, we now support a fully verifying mode in addition to simplified verification. This is a tremendous amount of work that wouldn't have happened without Matt! Right now, we strongly discourage anyone from using it for mining (which is not supported out of the box anyway). Use it in a production environment only if you know what you're doing and are willing to risk losing money. If you do use it, let us know so we can contact you when problems are discovered. Read the documentation carefully before you begin.
  • Also thanks to Matt, Bloom filtering is now implemented and activated by default. When bitcoinj connects to a peer that supports Bloom filtering, only transactions relevant to the wallet will be downloaded which makes bandwidth usage scale with the size of your wallet, not global system activity. A configurable false positive ratio allows you to trade off bandwidth vs privacy. App developers don't need to do anything to take advantage of this, it is enabled automatically.
  • PeerGroup now pings its peers and calculates moving averages of the ping times. Ping time, versions and block heights are taken into account when selecting the peer to download the chain from.
  • You can now customize which outputs the wallet uses to create spends. The new default coin selector object allows you to spend unconfirmed change as long as it's been seen propagating across the network, addressing a common end-user pain point in wallet apps.
  • Optimized networking code for faster startup.
  • A new PeerMonitor example app shows how to put properties of connected peers into a GUI.
  • The Wallet is now decoupled from the BlockChain using the new BlockChainListener interface. This will simplify the development of some apps that want to process transactions but not maintain an actual wallet.
  • The dependencies of broadcast transactions are now downloaded and risk analyzed. At the moment they are only checked for having a timelock; in future we may also analyze tree depth. The goal is to make certain kinds of protocol abuse harder. Wallets reject timelocked transactions by default; this can be overridden via a property.
  • You can now create timelocked transactions with WalletTool if you want to.
  • Compressed public keys are now used by default.
  • Support for testnet3.
  • Support for bitcoin-qt compatible message signing and verification.
  • ECDSA key recovery is now implemented and allows you to obtain the public key from an extended signature. If the signature is not extended, multiple candidate keys are returned.
  • Many bugfixes and minor improvements.
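To illustrate the bandwidth/privacy trade-off behind the Bloom filtering item: the wallet tells peers roughly which transactions it cares about via a probabilistic filter, and a higher false-positive rate makes peers send more irrelevant transactions, hiding which ones the wallet really wants at the cost of bandwidth. This is a toy sketch of the data structure, not bitcoinj's BIP 37 BloomFilter implementation (which uses murmur3 hashing and a wire format):

```java
import java.util.BitSet;

// Toy Bloom filter illustrating the bandwidth vs. privacy trade-off:
// inserted keys always match, and unrelated keys match at a rate set by
// the filter size and hash count (the false-positive rate). Simplified
// sketch; the hash function here is made up for illustration.
public class ToyBloom {
    private final BitSet bits;
    private final int size, hashes;

    ToyBloom(int size, int hashes) {
        this.bits = new BitSet(size);
        this.size = size;
        this.hashes = hashes;
    }

    private int index(byte[] key, int i) {
        int h = i * 0x9e3779b1;               // mix in the hash-function number
        for (byte b : key) h = h * 31 + b;
        return Math.floorMod(h, size);
    }

    void insert(byte[] key) {
        for (int i = 0; i < hashes; i++) bits.set(index(key, i));
    }

    boolean mightContain(byte[] key) {
        for (int i = 0; i < hashes; i++)
            if (!bits.get(index(key, i))) return false;
        return true;
    }

    public static void main(String[] args) {
        ToyBloom filter = new ToyBloom(1024, 4);
        filter.insert("my-wallet-address".getBytes());
        System.out.println(filter.mightContain("my-wallet-address".getBytes())); // true
    }
}
```

A smaller bit array or fewer hash functions raises the false-positive rate, which is exactly the knob the release notes describe as configurable.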

API changes:

  • ECKey.sign() now takes a Sha256Hash as an argument and returns an ECDSASignature object in response. To get DER encoded signatures, use the encodeToDER() method of ECDSASignature.
  • ECKey.publicKeyFromPrivate now takes an additional compressed parameter.
  • PeerGroup.start()/PeerGroup.shutDown() now run asynchronously and return futures you can use to wait for them. You cannot restart a PeerGroup once it has been shut down any more.
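To show what the DER encoding mentioned in the ECKey.sign() change looks like without pulling in bitcoinj, here is a sketch using the JDK's own EC provider. Note the assumptions: the JDK signs on the secp256r1 curve here rather than Bitcoin's secp256k1, but the DER envelope (an ASN.1 SEQUENCE of r and s) is the same format that ECDSASignature.encodeToDER() produces:

```java
import java.security.GeneralSecurityException;
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.Signature;

// Illustrates the DER byte encoding of an ECDSA signature using only the
// JDK (SHA256withECDSA emits DER by default). This is a stand-in for
// bitcoinj's ECDSASignature.encodeToDER(), not bitcoinj itself.
public class DerSignatureDemo {
    public static byte[] signDer(byte[] message) throws GeneralSecurityException {
        KeyPairGenerator gen = KeyPairGenerator.getInstance("EC");
        gen.initialize(256); // P-256; Bitcoin uses secp256k1, unavailable in the stock JDK
        KeyPair pair = gen.generateKeyPair();
        Signature signer = Signature.getInstance("SHA256withECDSA");
        signer.initSign(pair.getPrivate());
        signer.update(message);
        return signer.sign();
    }

    public static void main(String[] args) throws Exception {
        byte[] der = signDer("hello".getBytes());
        // DER ECDSA signatures always begin with the SEQUENCE tag 0x30.
        System.out.println("first byte = 0x" + Integer.toHexString(der[0] & 0xff)); // 0x30
    }
}
```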

Credits

Thanks to Matt Corallo (a.k.a. BlueMatt) for his huge contributions to this release.

As always, thanks to Andreas Schildbach for his thorough testing, ideas and high volume of quality bug reports. Also thanks to Jim Burton for the same reasons.

Finally, thanks to Ben (piuk) of blockchain.info for funding the ECDSA key recovery feature.
n8rwJeTt8TrrLKPa55eU · Hero Member
February 19, 2013, 10:46:00 PM · #2

    Quote from: Mike Hearn on February 19, 2013, 10:29:03 PM
    Credits

    Thanks to Matt Corallo (a.k.a. BlueMatt) for his huge contributions to this release.

    As always, thanks to Andreas Schildbach for his thorough testing, ideas and high volume of quality bug reports. Also thanks to Jim Burton for the same reasons.

    Finally thanks to Ben (piuk) of blockchain.info for funding the ECDSA key recovery feature.

    Very significant new features and performance improvements.  Thank all 5 of you for your efforts!
    TierNolan · Legendary
    February 20, 2013, 01:17:30 AM · #3

    You mention on your site that the new "full node" operation is very likely to have hard-fork bugs.  Do you think that is a permanent situation?

    Apparently, the official rule is that a chain is correct if the reference client says it is correct.

    I wonder if the creation of some block-chain serialization format would be appropriate.  This could be combined with a verifier.

    This would be a much shorter program than an entire client that needs to deal with networking.

    Maybe that could be vetted into some kind of semi-official spec.

    Probably all the blocks, one after another, in the same format as the network protocol, is sufficient, so maybe I am over-thinking it.
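For what it's worth, "all the blocks, one after another, in the same format as the network protocol" is essentially how the reference client's own blk*.dat files work: each raw block prefixed by the network magic and a little-endian length. A toy sketch of that record framing, with a placeholder payload standing in for a real serialized block:

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

// Sketch of the proposed serialization: network magic + little-endian
// length + raw block, repeated. This mirrors the reference client's
// blk*.dat layout; the payload here is a dummy, not a real block.
public class BlockDump {
    static final int MAINNET_MAGIC = 0xD9B4BEF9; // stored on disk as F9 BE B4 D9

    public static byte[] record(byte[] rawBlock) {
        ByteBuffer buf = ByteBuffer.allocate(8 + rawBlock.length).order(ByteOrder.LITTLE_ENDIAN);
        buf.putInt(MAINNET_MAGIC);
        buf.putInt(rawBlock.length);
        buf.put(rawBlock);
        return buf.array();
    }

    public static byte[] parse(byte[] rec) {
        ByteBuffer buf = ByteBuffer.wrap(rec).order(ByteOrder.LITTLE_ENDIAN);
        if (buf.getInt() != MAINNET_MAGIC) throw new IllegalArgumentException("bad magic");
        byte[] block = new byte[buf.getInt()];
        buf.get(block);
        return block;
    }

    public static void main(String[] args) {
        byte[] rec = record(new byte[]{1, 2, 3});
        System.out.println(rec.length); // 11: 4 magic + 4 length + 3 payload
    }
}
```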

    Mike Hearn (OP) · Legendary
    February 20, 2013, 10:50:45 AM · #4

    No, it's not a permanent situation. The level of effort taken to find and eliminate all hard-forking bugs is large but finite. If there's enough interest (and Matt seems very interested) then eventually we'll have a high degree of confidence in the correctness of the code, at least to the point where emergency security scrambles are no more common than for any other kind of software.
    TierNolan · Legendary
    February 20, 2013, 11:32:56 AM · #5

    Quote from: Mike Hearn on February 20, 2013, 10:50:45 AM
    No, it's not a permanent situation. The level of effort taken to find and eliminate all hard-forking bugs is large but finite. If there's enough interest (and Matt seems very interested) then eventually we'll have a high degree of confidence in the correctness of the code, at least to the point where emergency security scrambles are no more common than for any other kind of software.

    What would be your view of splitting off the "validator" as a separate project that clients could include, separate from the network code?

    That way all Java clients could use the same block chain validator.

    Jouke · Sr. Member
    February 20, 2013, 12:02:15 PM · #6

    Thanks for all the updates and great new features! :D

    Mike Hearn (OP) · Legendary
    February 20, 2013, 12:05:42 PM · #7

    Quote from: TierNolan on February 20, 2013, 11:32:56 AM
    What would be your view of splitting off the "validator" as a separate project that clients could include, separate from the network code?

    That way all Java clients could use the same block chain validator.

    I don't understand your proposal, I'm afraid. The networking code is already a separate group of classes. Switching from SPV mode to full mode means instantiating a couple of different classes and plugging them into the rest, but the code is modular enough already.
    TierNolan · Legendary
    February 20, 2013, 12:10:41 PM · #8

    Quote from: Mike Hearn on February 20, 2013, 12:05:42 PM
    I don't understand your proposal, I'm afraid. The networking code is already a separate group of classes. Switching from SPV mode to full mode means instantiating a couple of different classes and plugging them into the rest, but the code is modular enough already.

    Sounds like you already did it. I was thinking of a specific library that other projects could include in their Maven pom.xml files, so there is one central Java reference for how to do validation.

    Mike Hearn (OP) · Legendary
    February 20, 2013, 12:18:43 PM · #9

    Yeah, bitcoinj is that library. You can refer to it from your POM, instantiate a few objects and you're done. Look at the docs on the website for examples.

    Currently we don't upload to Maven Central. The problem is, it doesn't seem to have any security features, and compromising Maven Central then swapping out bitcoinj for a backdoored copy would be a superb way to steal people's wallets. Extending Maven to support specifying the hashes of dependencies along with their names and versions would be a good solution, but nobody has done it yet. Until then you need to use git and check out the code by hand.
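The dependency-pinning idea above can be sketched in a few lines: record the expected SHA-256 of each artifact next to its name and version, and refuse any downloaded jar whose hash doesn't match. This is a hypothetical helper illustrating the proposal, not an existing Maven feature:

```java
import java.security.MessageDigest;

// Sketch of hash-pinned dependencies: the build records a SHA-256 per
// artifact at release time and rejects anything that doesn't match,
// defeating a swapped-out backdoored copy. Hypothetical illustration.
public class DependencyPin {
    public static String sha256Hex(byte[] artifact) throws Exception {
        byte[] digest = MessageDigest.getInstance("SHA-256").digest(artifact);
        StringBuilder hex = new StringBuilder();
        for (byte b : digest) hex.append(String.format("%02x", b));
        return hex.toString();
    }

    // Accept the artifact only if its hash matches the pinned value.
    public static boolean verify(byte[] artifact, String pinnedSha256) throws Exception {
        return sha256Hex(artifact).equalsIgnoreCase(pinnedSha256);
    }

    public static void main(String[] args) throws Exception {
        byte[] jar = "pretend-jar-bytes".getBytes();
        String pin = sha256Hex(jar);                      // recorded at release time
        System.out.println(verify(jar, pin));             // true
        System.out.println(verify("tampered".getBytes(), pin)); // false
    }
}
```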
    Peter Todd · Legendary
    February 20, 2013, 01:00:18 PM (last edit 01:14:00 PM by retep) · #10

    Quote from: TierNolan on February 20, 2013, 01:17:30 AM
    You mention on your site that the new "full node" operation is very likely to have hard-fork bugs. Do you think that is a permanent situation?

    Apparently, the official rule is that a chain is correct if the reference client says it is correct.

    I wonder if the creation of some block-chain serialization format would be appropriate.  This could be combined with a verifier.

    This would be a much shorter program than an entire client that needs to deal with networking.

    Maybe that could be vetted into some kind of semi-official spec.

    Probably all the blocks, one after another, in the same format as the network protocol, is sufficient, so maybe I am over-thinking it.


    Well, you gotta look at the process by which the reference client determines a block is valid. First it's received from a peer in ProcessMessage(). That function almost immediately calls ProcessBlock(), which first calls CheckBlock() to do context-independent validation of the block: basic rules like "does it have a coinbase?" which must be true for any block. The real heavy lifting is the next step, AcceptBlock(), which does the context-dependent validation. This is where the transactions in the block are validated, and that requires the blockchain as well as full knowledge of the unspent transaction outputs (the UTXO set). Getting those rules right is very difficult: the scripting system is complex and depends on a huge amount of code. Like it or not, there is no way to turn it into a "short verifier program"; the reference implementation itself is your short verifier program.
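The two-stage structure described above can be sketched with toy types. The names mirror the reference client's CheckBlock()/AcceptBlock() split, but everything else here is a hypothetical simplification; real validation checks vastly more than this:

```java
import java.util.List;
import java.util.Set;

// Toy sketch of two-stage block validation: context-independent checks
// that any valid block must pass on its own, then context-dependent
// checks that need chain state (modelled here as a UTXO set of outpoint
// ids). Hypothetical simplification, not real consensus code.
public class TwoStageValidation {
    record Tx(String id, Set<String> spends) {}
    record Block(List<Tx> txs) {}

    // Context-independent: e.g. "does it have a coinbase?" (modelled as a
    // first transaction that spends nothing).
    static boolean checkBlock(Block b) {
        return !b.txs().isEmpty() && b.txs().get(0).spends().isEmpty();
    }

    // Context-dependent: every non-coinbase input must spend an output
    // that is currently unspent.
    static boolean acceptBlock(Block b, Set<String> utxoSet) {
        if (!checkBlock(b)) return false;
        for (Tx tx : b.txs().subList(1, b.txs().size()))
            for (String outpoint : tx.spends())
                if (!utxoSet.contains(outpoint)) return false; // missing or already spent
        return true;
    }
}
```

The point of the split is that checkBlock-style rules are cheap and self-contained, while acceptBlock-style rules drag in the whole chain state, which is where alternative implementations diverge from the reference client.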

    Thus right now we are far safer if all miners use the reference implementation to generate blocks and nothing else. However, we are also a lot safer if the vast majority of relay nodes continue to use the reference implementation, at least for now. The problem is that even if a block is valid by the validation rules, if for some reason it doesn't get relayed to the majority of hash power, you've caused a fork anyway. With the reference implementation this is really unlikely, because as I explained above the relay rules are the validation rules; alternate implementations of relay nodes might not have that property though.

    An interesting example of relay failure is how, without the block-size limit, any sequence of blocks too large for some minority of the hashing power to download and validate fast enough creates a fork. Specifically, the blocks need to be large enough that the hash power on the "smaller-block" fork still creates blocks at a faster rate than the large blocks can be downloaded. Technically this can happen with the block-size limit too, but the limit is so low that even most dial-up modems can keep up. Block "discouragement" rules can also have the same effect, for much the same reasons.
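The dial-up claim above is easy to check with back-of-the-envelope arithmetic: a node stays in sync as long as it downloads a block faster than blocks are produced (one every ~600 seconds on average). The figures below (1 MB limit, 56 kbit/s link) are illustrative assumptions:

```java
// Back-of-the-envelope check: does a slow link keep up with the chain?
// A node falls behind only if per-block download time exceeds the ~600 s
// average block interval.
public class ForkBandwidth {
    public static double downloadSeconds(long blockBytes, long linkBitsPerSec) {
        return blockBytes * 8.0 / linkBitsPerSec;
    }

    public static void main(String[] args) {
        double t = downloadSeconds(1_000_000, 56_000); // 1 MB block over 56k dial-up
        System.out.printf("%.0f seconds per block (interval: ~600 s)%n", t); // ~143 s
    }
}
```

So even dial-up has roughly a 4x margin at the current limit; remove the limit and that margin disappears for some fraction of the network, which is exactly the fork scenario described.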


    For merchants, a hard-fork bug leaves them vulnerable to double-spends by anyone with a lot of hashpower, but it'll cost the attacker roughly one block reward per confirmation required (though the attacker can amortize the attack across multiple merchants). Merchants should be running code that looks for unusually long block creation times and automatically shuts down their service if it looks like the hash rate has dropped significantly. Just doing this is probably good enough for the vast majority of merchants, who take at least 12 hours to process and ship an order.
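The cost model above is simple arithmetic: building a secret fork long enough to unwind N confirmations forfeits roughly N block rewards (25 BTC each at the time of writing). A trivial sketch:

```java
// Rough cost of the double-spend attack described above: roughly one
// forfeited block reward per confirmation the merchant requires,
// amortizable across simultaneous victims.
public class AttackCost {
    public static double costBtc(int confirmations, double blockReward) {
        return confirmations * blockReward;
    }

    public static void main(String[] args) {
        System.out.println(costBtc(6, 25.0)); // 150.0 BTC to unwind 6 confirmations
    }
}
```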

    Some merchants are more vulnerable. A really bad example would be a chaum token issuing bank: once you've accepted a deposit and given the customer the chaum token, you have absolutely no way of invalidating the token, because redemption is anonymous. Merchants like that should be running their own reference implementation nodes, double-checking those blockchains against other sources, and keeping their clocks accurate so they'll know when hashing power has dropped off mysteriously.

    For instance, you could run a service that would (ab)use DNS to publish each block header as a DNS record. Headers are just 80 bytes long, so they'd still fit in single-UDP-packet DNS responses I think. Caching at the ISP level would reduce load on the server (although ISPs that don't respect TTLs are a problem). The proof-of-work inherently authenticates the data, and parallel services should be run by multiple people with different versions of the reference client. I wouldn't want to only trust such a service, but it'd make for a good "WTF is going on, shut it all down" failsafe mechanism for detecting forks.
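A serialized Bitcoin block header is a fixed 80 bytes (4-byte version, 32-byte previous block hash, 32-byte merkle root, and 4 bytes each of time, bits, and nonce), which indeed sits comfortably inside the classic 512-byte UDP DNS payload limit. A toy serialization with dummy field values:

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

// Serializes a block header in the network's little-endian layout to
// show the fixed 80-byte size. Field values here are dummies.
public class HeaderSize {
    public static byte[] serializeHeader(int version, byte[] prevHash, byte[] merkleRoot,
                                         int time, int bits, int nonce) {
        ByteBuffer buf = ByteBuffer.allocate(80).order(ByteOrder.LITTLE_ENDIAN);
        buf.putInt(version);
        buf.put(prevHash);    // 32 bytes
        buf.put(merkleRoot);  // 32 bytes
        buf.putInt(time);
        buf.putInt(bits);
        buf.putInt(nonce);
        return buf.array();
    }

    public static void main(String[] args) {
        byte[] header = serializeHeader(2, new byte[32], new byte[32], 0, 0, 0);
        System.out.println(header.length); // 80: fits a 512-byte DNS answer easily
    }
}
```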
