grau (OP)
|
|
December 30, 2012, 12:21:40 AM |
|
"testnet3 memdb" failed 4 times in a row with the same IRC problem.
Attempting to redirect standard output gives 'Exception in thread "main" java.io.IOException: unable to obtain console', and the program exits.
May I also make one suggestion: configure the program to default to "-nolisten", because it pops up the firewall configuration dialog the first time it is run. I enabled the firewall exception, but I would really prefer that the program not ask for it on first run.
I will try disabling IPv6 for IRC and not listening by default. The exception is caused by the password prompt at startup; I will disable that too. "production leveldb" starts, but spews output too fast to really understand what's going on.
This is just a demo, and it is meant to demonstrate that it is doing a lot, fast. I will filter the console log to INFO and write the rest into a file. But it's too late for me today...
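The redirection failure described above is consistent with how Java obtains the console: `System.console()` returns null when the standard streams are redirected, and code that insists on a console then fails. A minimal defensive sketch (class and method names are invented here, not taken from the actual code):

```java
import java.io.BufferedReader;
import java.io.Console;
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;

// Sketch of reading a startup password without failing under redirection.
// System.console() returns null when stdin/stdout are redirected, which is
// the likely cause of the "unable to obtain console" exception; instead of
// throwing, fall back to reading a line from the (redirected) input stream.
public final class PasswordPrompt {
    static String readPassword(Console console, InputStream fallback) throws IOException {
        if (console != null) {
            // Interactive terminal: prompt without echoing the password.
            return new String(console.readPassword("Password: "));
        }
        // Redirected streams: no echo suppression available, read a plain line.
        return new BufferedReader(new InputStreamReader(fallback)).readLine();
    }

    public static void main(String[] args) throws IOException {
        String pw = readPassword(System.console(), System.in);
        System.out.println(pw == null ? "no input" : "read " + pw.length() + " chars");
    }
}
```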
|
|
|
|
2112
Legendary
Offline
Activity: 2128
Merit: 1073
|
|
December 30, 2012, 01:34:39 AM |
|
This is just a demo, and it is meant to demonstrate that it is doing a lot, fast. I will filter the console log to INFO and write the rest into a file.
Thanks. I don't mind that it spews everything at TRACE level, but I do mind that I cannot redirect and grep/tee/tail through it (on Windows that would be findstr.exe and tailf.exe). If the "no stdout redirection" issue is not easily fixable, then at least make the log file a complete log, not an "everything less than INFO" log. Thanks again and good night (or good morning).
|
|
|
|
grau (OP)
|
|
December 30, 2012, 11:25:15 AM |
|
Thanks. I don't mind that it spews everything at TRACE level, but I do mind that I cannot redirect and grep/tee/tail through it (on Windows that would be findstr.exe and tailf.exe). If the "no stdout redirection" issue is not easily fixable, then at least make the log file a complete log, not an "everything less than INFO" log. Thanks again and good night (or good morning).
I updated it so you can now redirect. Since it is just a preview, I will not invest more into it.
|
|
|
|
grau (OP)
|
|
December 31, 2012, 04:10:48 PM |
|
An update on testing progress.
The bitsofproof supernode now passes bitcoind's script- and transaction-related unit tests as stored in the Satoshi code:
base58_encode_decode.json, script_invalid.json, script_valid.json, tx_invalid.json, tx_valid.json
I think this is a promising sign of compatibility. Credit to the compiler of those tests; they really helped pinpoint very subtle differences.
In 2013 I will write more complex tests for blocks and chain events.
|
|
|
|
gmaxwell
Moderator
Legendary
Offline
Activity: 4270
Merit: 8805
|
|
January 01, 2013, 05:17:23 AM |
|
An update on testing progress. The bitsofproof supernode now passes bitcoind's script- and transaction-related unit tests as stored in the Satoshi code:
CONGRATULATIONS! These updates fixed the first of the issues I saw in code review. You also fixed a later issue I found in review, and several issues I did not notice. This is a big improvement, and even though I've been complaining that you're obligated to get all this right, I don't at all mean to imply that it's a small accomplishment. However, some validity issues remain. I'm surprised the tests in the reference codebase got you as far as they did here (also because I think I had previously thought you'd already consulted them). The next testing point would be BlueMatt's blockchain tester.
|
|
|
|
K1773R
Legendary
Offline
Activity: 1792
Merit: 1008
/dev/null
|
|
January 01, 2013, 05:35:42 AM |
|
Does an RPC interface like bitcoind's exist?
|
[GPG Public Key]BTC/DVC/TRC/FRC: 1 K1773RbXRZVRQSSXe9N6N2MUFERvrdu6y ANC/XPM A K1773RTmRKtvbKBCrUu95UQg5iegrqyeA NMC: N K1773Rzv8b4ugmCgX789PbjewA9fL9Dy1 LTC: L Ki773RBuPepQH8E6Zb1ponoCvgbU7hHmd EMC: E K1773RxUes1HX1YAGMZ1xVYBBRUCqfDoF BQC: b K1773R1APJz4yTgRkmdKQhjhiMyQpJgfN
|
|
|
grau (OP)
|
|
January 01, 2013, 07:49:31 AM |
|
Does an RPC interface like bitcoind's exist?
This will offer synchronous remote invocation for a few functions and an asynchronous message bus for notification and broadcast between known extensions or authenticated clients. The API (BCSAPI) is meant to isolate components that need the highest level of compatibility with bitcoind from the rest, where innovation should take place. The BCSAPI serves low-level data, such as validated transactions, block chain events, or outputs by address, eliminating the need to hack into core components or interpret binary storage. I do not plan to imitate bitcoind's RPC out of the box, but to make it possible to implement extensions/proxies connecting to the above API that create a compatibility layer if you need one. This is a project in development, so interfaces and the database schema will likely change. Keep watching this space for a ready-to-go announcement before using it for real or committing serious development on top of the API. Suggestions and wishes are welcome.
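As a rough illustration of the split described above (a few synchronous calls plus asynchronous notifications over a message bus), a consumer-facing API could look like the following sketch. All class and method names here are invented for illustration; none of them are the actual BCSAPI.

```java
import java.util.ArrayList;
import java.util.List;

// Invented sketch of a synchronous-query / asynchronous-listener API in the
// spirit of the design described above. The in-memory "bus" stands in for a
// real message bus; every name here is hypothetical.
public final class BusSketch {
    // Asynchronous notification callback for validated transactions.
    interface TransactionListener {
        void validated(String txHash);
    }

    static final class Bus {
        private final List<TransactionListener> listeners = new ArrayList<>();
        private final List<String> seen = new ArrayList<>();

        void addTransactionListener(TransactionListener l) {
            listeners.add(l);
        }

        // Called by the node side when a transaction has been validated;
        // fans the event out to all registered listeners.
        void publishValidated(String txHash) {
            seen.add(txHash);
            for (TransactionListener l : listeners) l.validated(txHash);
        }

        // Example of a synchronous remote invocation.
        int validatedCount() {
            return seen.size();
        }
    }

    public static void main(String[] args) {
        Bus bus = new Bus();
        bus.addTransactionListener(tx -> System.out.println("validated " + tx));
        bus.publishValidated("deadbeef");
        System.out.println(bus.validatedCount());
    }
}
```

The point of the split is that clients react to pushed events instead of polling, while still having simple synchronous queries for state.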
|
|
|
|
K1773R
Legendary
Offline
Activity: 1792
Merit: 1008
/dev/null
|
|
January 01, 2013, 10:11:21 AM |
|
Does an RPC interface like bitcoind's exist?
This will offer synchronous remote invocation for a few functions and an asynchronous message bus for notification and broadcast between known extensions or authenticated clients. The API (BCSAPI) is meant to isolate components that need the highest level of compatibility with bitcoind from the rest, where innovation should take place. The BCSAPI serves low-level data, such as validated transactions, block chain events, or outputs by address, eliminating the need to hack into core components or interpret binary storage. I do not plan to imitate bitcoind's RPC out of the box, but to make it possible to implement extensions/proxies connecting to the above API that create a compatibility layer if you need one. This is a project in development, so interfaces and the database schema will likely change. Keep watching this space for a ready-to-go announcement before using it for real or committing serious development on top of the API. Suggestions and wishes are welcome.
Thanks, I'll take a closer look ASAP.
|
|
|
|
Mike Hearn
Legendary
Offline
Activity: 1526
Merit: 1134
|
|
January 01, 2013, 04:00:43 PM |
|
It's great that you've made progress with testing. Unfortunately the first issue I saw is still there. As it's a chain-splitting bug, more testing is required.
|
|
|
|
|
Mike Hearn
Legendary
Offline
Activity: 1526
Merit: 1134
|
|
January 01, 2013, 05:40:59 PM |
|
As far as I can tell the issue is still there.
|
|
|
|
grau (OP)
|
|
January 13, 2013, 07:43:47 PM Last edit: January 15, 2013, 06:37:16 AM by grau |
|
An update on testing. The bitsofproof server validates the testnet3 chain, passes its own and bitcoind's script unit tests, and has run connected to the production net with 100 peers, in sync and without any exception, for the last week. I was pointed to the "blocktester" that was developed by BlueMatt for bitcoinj and adapted for bitcoind as part of the Jenkins build after each pull. My findings with the "blocktester" are:
- It uses the bitcoinj library to generate a series of sophisticated block chain test scenarios.
- The generator is part of bitcoinj but is packaged such that it cannot be used without modification by another product; it serves the bitcoinj Maven test.
- The chain that it generates is fed through P2P into bitcoind.
- There is a standalone Java class, running on BlueMatt's Jenkins build server, that downloads the test scenario chain from bitcoind and compares it with the expectation (which is simply the residual correct chain).
The above findings explain why the tester could not be used out of the box for bitsofproof, but I made the best of it in the following steps:
- Modified a copy of bitcoinj such that the block generator could be embedded into another project.
- Dumped the generated test chain into a JSON file that contains the behavior the tester expects and the raw block data as a wire dump in hex.
- Wrote a unit test for bitsofproof that sources the JSON-encapsulated wire dump as if it were sent to the validation and storage engine.
- Compared accept/reject of the blocks as expected by the test vs. as decided by bitsofproof.
Thereby I made the following findings:
- The tests allow spending of a coinbase earlier than allowed by bitcoind. The method by which this is fed to bitcoind must bypass that check.
- The accept/reject logic assumes that checks for double spends are not performed on branches of the chain until that branch becomes the longest.
Especially the second is a difference from bitsofproof's behavior, which is more eager to do this check. The result for the longest chain (which is what counts) is the same in both cases, but validations happen in a different order. Since the tests as they are depend on validation order, conversion of the test cases will remain partial. I believe that the way bitsofproof works is more elegant and easier to follow, but it comes at the cost of occasionally performing throw-away validation. The cost is low, since double-spend checks are done after cheaper checks like POW, so using this difference to DoS bitsofproof is not feasible. Edit: The described difference is not observable on the P2P interface.
Given the unsatisfactory state of test utilities for complex behavior, I began developing a tool similar to, but more advanced than, the block tester. I will explicitly develop it to be suitable for bitcoind, bitcoinj, or any other implementation that is P2P compatible, thereby addressing a problem that is not only mine. Edit: What is unsatisfactory is that test cases are programmed in a particular implementation and tightly coupled with its build. I would like to create test cases for complex behavior as editable text files, so they can be extended by the maintainer of any implementation.
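The replay-and-compare step described above can be sketched as follows. The record layout (a hex wire dump per block plus the accept/reject outcome the tester expects) matches the description in the post, but all class and method names, as well as the placeholder validation logic, are invented for this example.

```java
import java.util.List;

// Hypothetical sketch of replaying a dumped test chain against a validation
// engine and comparing accept/reject decisions with the tester's expectation.
public final class BlockReplayTest {
    // One test record: the block's raw wire dump in hex, and whether the
    // test scenario expects the engine to accept it.
    record TestBlock(String hexDump, boolean expectAccept) {}

    // Stand-in for the real validation engine; accepts any block whose dump
    // decodes to a non-empty byte array (placeholder logic only).
    static boolean validate(byte[] rawBlock) {
        return rawBlock.length > 0;
    }

    // Decode a hex wire dump back into raw bytes.
    static byte[] fromHex(String hex) {
        byte[] out = new byte[hex.length() / 2];
        for (int i = 0; i < out.length; i++) {
            out[i] = (byte) Integer.parseInt(hex.substring(2 * i, 2 * i + 2), 16);
        }
        return out;
    }

    // Feed each block to the engine and check the decision against the
    // expectation; any disagreement fails the replay.
    static boolean replay(List<TestBlock> chain) {
        for (TestBlock b : chain) {
            if (validate(fromHex(b.hexDump())) != b.expectAccept()) {
                return false;
            }
        }
        return true;
    }

    public static void main(String[] args) {
        System.out.println(replay(List.of(new TestBlock("f9beb4d9", true))));
    }
}
```

Because the comparison is per block in chain order, this harness inherits the validation-order dependence the post describes: a test that expects a double-spend check to be deferred will disagree with an engine that checks eagerly, even when both settle on the same longest chain.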
|
|
|
|
caveden
Legendary
Offline
Activity: 1106
Merit: 1004
|
|
January 14, 2013, 07:23:39 AM |
|
The tests allow spending of coinbase earlier than allowed by bitcoind.
IIRC, bitcoind has a soft (bitcoind's own) and a hard (Bitcoin protocol) limit for that. Meaning that it will neither spend nor build a block spending a coinbase less than 120 blocks old, but it will accept a block which spends it a bit earlier (at 100 blocks, I believe, but I'm not sure).
|
|
|
|
Pieter Wuille
|
|
January 14, 2013, 08:36:29 PM Last edit: February 24, 2013, 03:05:21 PM by Pieter Wuille |
|
IIRC, bitcoind has soft (bitcoind's) and hard (Bitcoin protocol) limits for that. Meaning that it will not spend nor build a block spending coinbase less than 120 blocks old, but it will accept a block which spends it a bit earlier (100 blocks I believe, not sure).
Hard limit: 101 confirmations, i.e. created 100 blocks before (network rule).
Soft limit: 120 confirmations, i.e. created 119 blocks before (client policy, which is probably overly conservative).
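The two maturity rules above can be written down as a small height arithmetic check; the class and method names are illustrative, not from any real codebase.

```java
// Minimal sketch of the coinbase maturity rules stated above.
// Names and structure are illustrative only.
public final class CoinbaseMaturity {
    // Network rule: spendable at 101 confirmations, i.e. the coinbase was
    // created at least 100 blocks before the spending block.
    static final int HARD_LIMIT = 100;
    // Client policy: the reference client itself waits for 120 confirmations,
    // i.e. the coinbase was created at least 119 blocks before.
    static final int SOFT_LIMIT = 119;

    // Would a block at spendingHeight spending this coinbase be valid?
    static boolean blockMaySpend(int coinbaseHeight, int spendingHeight) {
        return spendingHeight - coinbaseHeight >= HARD_LIMIT;
    }

    // Would the reference client itself spend this coinbase yet?
    static boolean clientWouldSpend(int coinbaseHeight, int spendingHeight) {
        return spendingHeight - coinbaseHeight >= SOFT_LIMIT;
    }

    public static void main(String[] args) {
        // A coinbase created at height 1000, spent at height 1100:
        System.out.println(blockMaySpend(1000, 1100));    // valid by the network rule
        System.out.println(clientWouldSpend(1000, 1100)); // but below the client's policy
    }
}
```

This is exactly the gap the blocktester exploits: a test chain spending between the hard and soft limits is valid on the network but would never be produced by the reference client's own policy.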
|
I do Bitcoin stuff.
|
|
|
runeks
Legendary
Offline
Activity: 980
Merit: 1008
|
|
February 24, 2013, 02:51:56 PM |
|
The problem is that the rules (as defined by Satoshi's implementation) simply pass data directly to OpenSSL, so the network rule effectively is "whatever cryptographic data OpenSSL accepts", which is bad. OpenSSL has every reason to try to accept as many encodings as possible, but we don't want every client to need to replicate the behaviour of OpenSSL. In particular, if they add another encoding in the future (which, again, is not necessarily bad from their point of view), we might get a block chain fork.
Considering cases like these, it occurs to me that it might be desirable, at some point, to split the Satoshi code into three parts: "legacy", "current" and "next".
The "legacy" part would handle the odd corner cases described in the above quote. It would basically pull all the relevant OpenSSL code into the legacy module (including the bugs in question), where it would stay untouched. This module would only be used to verify already-existing blocks in the chain; no new blocks would be verified with this code, as pulling OpenSSL code into Bitcoin and managing future patches against it is next to impossible. This is the part that future clients could choose not to implement. They would miss the part of the block chain that follows the rules defined by this module, but I reckon that we really don't need 5+ year old blocks for more than archival purposes.
The "current" module would handle verifying current blocks and be compatible with the "legacy" module. It would still depend on OpenSSL, and if changes are made to OpenSSL that break compatibility with "legacy", patches would need to be maintained against OpenSSL to work around this. This module cannot code-freeze OpenSSL, as vulnerabilities can be uncovered in OpenSSL, and no one must be able to produce blocks that exploit the newly uncovered attack vectors. Newly uncovered attack vectors aren't a problem for the "legacy" module, as it only verifies already-existing blocks, produced before the vulnerability in question was uncovered.
The "next" module would be backwards incompatible with the "legacy" and "current" modules. This module changes the verification rules to no longer accept, for example, the otherwise invalid signatures that OpenSSL accepts. The "next" module would have a block chain cut-off point in the future, from which point on a Bitcoin transaction would be considered invalid if, for example, it includes an invalid signature (was it a negative S-value?) even though it's accepted by the old OpenSSL code. It's sort of a staging module, where undesirable protocol behavior is weeded out. These protocol changes wouldn't take effect until some point in the future (a block number). The cut-off block number would be advertised well in advance, and from that point on, the "next" module would become the "current" module, and the old "current" module would move into the "legacy" module (and no new blocks would be verified using it). The new "next" module would then target fixing undesirable protocol behavior uncovered while running the previous "current" module, and would set a new cut-off time in the future, at which point new blocks would need to follow the improved protocol to get accepted.
Could this work? It would mean we could slowly (very slowly) clean up the protocol, while still maintaining backwards compatibility with clients not older than, say, 2 years, or however long into the future we choose the cut-off point for the "next" module's protocol to become mandatory.
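The cut-over scheme proposed above amounts to dispatching each block to a rule set by height. A purely illustrative sketch (the heights, enum values, and names are all invented for this example, not part of any real client):

```java
// Purely illustrative sketch of the legacy/current/next split proposed above:
// select a validation rule set by block height. Cut-off heights and module
// names are invented for this example.
public final class RuleDispatch {
    enum Rules { LEGACY, CURRENT, NEXT }

    static final int CURRENT_FROM = 200_000; // hypothetical past activation height
    static final int NEXT_FROM    = 300_000; // hypothetical future cut-off, advertised in advance

    static Rules rulesFor(int blockHeight) {
        if (blockHeight >= NEXT_FROM) return Rules.NEXT;       // tightened rules
        if (blockHeight >= CURRENT_FROM) return Rules.CURRENT; // today's rules
        return Rules.LEGACY; // frozen rules, archival verification only
    }

    public static void main(String[] args) {
        System.out.println(rulesFor(100_000)); // LEGACY
        System.out.println(rulesFor(250_000)); // CURRENT
        System.out.println(rulesFor(350_000)); // NEXT
    }
}
```

At each advertised cut-off, the mapping shifts one step: NEXT becomes CURRENT, CURRENT becomes LEGACY, and a new NEXT rule set is drafted.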
|
|
|
|
Mike Hearn
Legendary
Offline
Activity: 1526
Merit: 1134
|
|
February 24, 2013, 03:57:30 PM |
|
FYI the bug I mentioned way up thread has been found and fixed.
|
|
|
|
grau (OP)
|
|
February 24, 2013, 06:23:31 PM |
|
You might expect great progress on this project now that I work full time on it. bitsofproof now passes the block tester test cases that are also used to validate the Satoshi client. Mike Hearn was so kind as to notify me that the error he spotted has been fixed in the meantime. You can review a continuous integration test of it, similar to the official client's pull tests, at: https://travis-ci.org/bitsofproof/supernode
I plan to bring this to production quality definitely before the San Jose conference, where I plan to present and launch. Please experiment with it. It is a modern full validation implementation that loads the chain from scratch in about 8 hours if using the leveldb option. An instance of this server has been constantly working in sync with the network without exceptions for over a month now at bitsofproof.com; feel free to connect and challenge it. I am working on further tests and a merchant module that will support multiple wallets and payment notifications over its communication bus. The project is also considered a building block for bitcoinx.
|
|
|
|
grau (OP)
|
|
March 12, 2013, 07:06:59 PM |
|
The bitsofproof node sailed through the recent chain turbulence.
It accepted the forking block in sync with 0.8 and was also able to re-org the 25 blocks as the other branch became longer.
|
|
|
|
K1773R
Legendary
Offline
Activity: 1792
Merit: 1008
/dev/null
|
|
March 12, 2013, 08:48:25 PM |
|
It is a modern full validation implementation that loads the chain from scratch in about 8 hours if using the leveldb option.
8 hours is a long time; are the bottlenecks known (i.e. tested and published)?
|
|
|
|
grau (OP)
|
|
March 12, 2013, 09:03:34 PM |
|
It is a modern full validation implementation that loads the chain from scratch in about 8 hours if using the leveldb option.
8 hours is a long time; are the bottlenecks known (i.e. tested and published)? I'd say bootstrap duration is on par with the reference client. You would use this implementation to create functionality not offered by the reference implementation, not to compete with it within its own abilities.
|
|
|
|
|