I would not know how to use that tool, as I'm not a very big fan of Java, but I can say that Gocoin is being checked against the official core's test vectors for transactions and scripts - both valid and invalid:
https://github.com/bitcoin/bitcoin/tree/master/src/test/data
https://github.com/piotrnar/gocoin/tree/master/lib/test
And recently I added functionality that (optionally) uses the consensus lib (which came with core 0.12.0) to cross-check every transaction. I have been using it ever since and no mismatches have been observed so far. The biggest potential risk of an incompatibility is in verifying entire blocks - checks that are only executed at the block level. For instance, recently I found that I wasn't checking the maximum number of sigops, which bitcoin core limits to 20000 per block. Some other time I found that I wasn't checking that coinbase transactions were "mature" when verifying blocks that spend them. Then there was no check of the timestamp against the median time of the last 11 blocks. A few other things, some of which I don't even remember... The odds are that there are still some checks on new blocks that bitcoin core does, which gocoin doesn't (yet), but I am unaware of any specific cases. If anyone finds such, please report it.
|
|
|
Yes, it is possible.
You would put the text in the pk-script of your transaction output(s). The most common method is to use the OP_RETURN opcode, followed by the data (the text message) you want to notarize.
Just remember that if you put too much data after OP_RETURN, you might have problems getting the transaction confirmed. But the blockchain protocol itself allows a transaction to be as big as 100KB and you can fill it almost entirely with text. Apart from the limitations inside the official software, miners should not have problems including such a transaction in a block, as long as you pay them a sufficient fee.
A cheaper way however would be to only embed a hash of your message into the blockchain, while the message itself would be broadcast via some other channel.
|
|
|
If a transaction is broadcast and everyone receives it, why not put just a hash (256 bits) of each transaction in the block body? Propagation time of a block would be similar, and the number of transactions carried per block would increase by a factor of 8 or so.

The problem is that not everyone receives every transaction. And even if they do, they may not store it in their memory pool. Having learned about a new block (which these days usually contains over a thousand txs), you will most likely always be missing some of them - even if your node has been online for weeks. You need every single transaction to verify the new block, and that is why the protocol sends you all the txs along with each block.

There is obviously space for some optimization here, but the problem is actually quite complex and I think that is why nobody has implemented a solution for it yet. Just think of a specific scenario: Your node receives a notification announcing a new block. You know it is a new block, but you still don't know what transactions are inside it. So you need to ask a peer who already knows that block for the list of tx_ids inside it - best case scenario, it comes back after the ping time, plus the time needed to transfer the data (which would often exceed 100KB). Now, having the list of tx_ids, the odds are that you will be missing some of the txs and you will need to ask the peer for them. This requires another ping time, plus the time to transfer back the data - and that is the problem. What I am saying is that in this scenario you need two requests to acquire the content of the block: the first for the tx list, the second for the txs you didn't know. You cannot do these two in parallel - you need to process the answer to the first request in order to issue the second one. And currently you need only one request: asking for the entire block, with all the transactions.
It obviously depends on the specifics of the connection to the peer (ping vs bandwidth), but in general it is questionable whether the scenario with two requests would give you the entire content of the block faster (aka better block propagation speed). It would most likely require sending less data, but at the cost of an additional ping-long delay for the extra request.
|
|
|
Hi guys, I used bitaddress.org to generate a paper wallet and got an address and private key. I imported that private key on blockchain.info and got a different address there than the one on my paper wallet, so I checked bitaddress.org and saw it was the compressed address of the same uncompressed address that was on my paper wallet. So my question is: if I send bitcoins to the address on the paper wallet, will I get the same bitcoins on the address that got imported on blockchain.info? Are compressed and uncompressed addresses the same thing and interchangeable? Please can some technical guy explain this to me.
You have the private key, so you can spend it. Some of the tools I make can help you with this: https://sourceforge.net/projects/gocoin/files/?source=directory
First do .. this will download the balance of your problematic address. Then use the wallet command to sign a transaction that spends it. You will however first need to import your private key into it (put the "5..." type of string into a file named ".others"). Then broadcast the signed transaction to the network, e.g. using this form: https://blockchain.info/pushtx
Drop me a PM if you need some more help, or just ask here.
|
|
|
I'm about to release 1.6.0, which has some cool new features that you can briefly read about in the changelog. Upgrading is highly recommended, as all versions up to (and including) 1.5.0 have had a serious security issue, which I prefer not to talk about because it's just embarrassing.

As for the new features:
There is a basic RPC API, implementing the functionality required for mining. It is compatible with bitcoind's API and so far has been tested successfully on testnet3 with the ckpool mining software.
The Home, Network and Transactions pages of the WebUI show nice graphs: http://imgur.com/a/eq1AH
The node counts sigops inside transactions and blocks.
The new tool bdb allows you to play with the blocks database (extract/add blocks, check integrity). It also allows defragmenting the db without needing double the space on the disk.
The UTXO database can now work in a "volatile mode" where it only writes changes to disk when exiting. Use the -v switch for either client or downloader to trigger it.
The qdb engine has also changed a bit and now writes any pending changes to disk much faster.
You can also start the client with the "-undo <n>" switch, which will cause the node to undo the last n blocks (on the UTXO db) and exit.
The most recently added feature looks for "libbitcoinconsensus.so" or "libbitcoinconsensus-0.dll" (depending on the host OS) and, if found, uses the function from it to cross-verify each transaction. So far it hasn't found any mismatches. Use the TextUI command cons to see the stats of the cross-checking.
I would also like to mention that the recent versions of Gocoin have no external dependencies and should build out-of-the-box. Just make sure to have a 64-bit OS and at least 6GB of RAM (it swaps too much with less memory). In peaks (especially when rebuilding the UTXO db) it may need even more memory. But a normal node, working on a synchronized chain, should never use more than 4GB.
I also strongly advise using the downloader to sync up the block chain - it's really fast, as it doesn't verify transactions. Theoretically it is less secure, but there is no way to exploit this, as long as you check the hash of the last block it assumed as trusted.
|
|
|
As I learned only recently, one of the steps in a new block's verification is counting the total number of sigops inside the block - if it exceeded 20000, such a block is supposed to be rejected. So I've implemented this in my own software, but I cannot find any reliable reference to make sure that I'm counting these numbers properly. Does anyone know any block explorer that shows the number of sigops in each block? Or if not, can anyone confirm whether I've counted these numbers properly:

Block#  Block Hash                                                        Sigops
403908  0000000000000000067549f6a6a4590405161bd6773166cd00518f121e6961b5  5600
403907  0000000000000000019955303142ecd05a374a0657945b8b2d1b89fd10cb9f17  2635
403906  0000000000000000060d52b13b0cd892dbf986177c8f2d7fe3f652e9574fce04  4527
403905  0000000000000000026031da9dc81b633bf3c566ddfc85bef72216a33c2166ec  1516
403904  000000000000000004f2b65476bcf0b72d9846283b87b488dd6be0c5c8423d39  930
403903  000000000000000001ee250b7132b41ed068ab443465b1d2954350c2e034b93a  1170
403902  000000000000000004a983db2a27e573b2aa2501a904440a8a7a00058eabdbb9  1708
403901  000000000000000002f5b6d90cff1f4083271f5dfc607c72cc99b80355de01d3  723
403900  000000000000000001e5e0c849184c832968f220456731820107acde3c91ed6a  5709
403899  000000000000000001bf70c02bc4bc1c962e808d411b0a3a627a0acdabb7d4ba  3311
403898  00000000000000000388e1db2b93eb338a48eec7f5c554e7f25c5a260c164ffb  3733
403897  0000000000000000017bf7996829ac5939d24e5c23c50b14d5cc94e131a1d43a  4603
403896  000000000000000006670bd2ec8902308ac712eca9421538b631471b6b18e110  7197
403895  0000000000000000024767f99ac1028162b728debebde0864b5aa3d932855c69  8951
403894  000000000000000002b6b8f7de6485c5829622306197af41aa338f59b56d38eb  3049
403893  000000000000000000520b20ac8dfc6388e8afa071d55e8e8f338645cf3db7a3  2380
403892  000000000000000001f3b14dcbc09dfce73c77890f82f935dc5e90b9cea1f69a  1809
403891  000000000000000000c52f273c7382616fbc72ae9cd080bb4a2b52d8deb70d2f  2791
403890  0000000000000000022bb622accabc0b8739adafa425ed4a605b0164c5d81b7d  3029
403889  00000000000000000313a13e433822d39c9f7226bff7230f3d92e971faf1f46c  4483
403888  000000000000000004a4388ca15be3d5a49ef66960972e3baf9a9ed27a27ff20  5372
403887  000000000000000003dc28c943e06f97809194b0d5460d58267c92152c03fc70  5728
403886  00000000000000000453bf07533a0a66bb164571759d5cd289b1bea250be758d  6282
403885  000000000000000006675fdbd01884e44bb8a4c4636209709ee768d9990c3609  1094
403884  000000000000000003e477a5f58893daa99578a1dd87014661d7e85df8370ebe  602
403883  0000000000000000037398599c10b434976d5825244f3dfafda8dd851e9313cf  2682
403882  000000000000000002a7161e6ad70e45ca8eaaa97c9821e0edc51612e977119f  6604
|
|
|
So say, @james, you only have the blocks data - from 1 to 400000 - stored close by, on an SSD disk or whatever, say a ramdisk. How long does it take to "load", so that you have all the database content of block 400000?
|
|
|
this is my preferred expression. thank you.
I think it's nice and I will look into it more.
|
|
|
Assuming you already have the blockchain internally, it might be easier to just directly generate the bundle files. The key is the encoding, and if interop is not a constraint, then you can just use a similar model without making it exactly compatible.
Sounds good, but can you express it in a way that I can understand - what does it actually mean the software has to do?
|
|
|
James, can you please describe how you actually implemented your db? I'm especially interested in the index part that (according to you) fits into the CPU's cache. I've been trying to read through your code, but it's shitloads of it and I don't have that much patience. I once implemented sipa's secp256k1 functions in Go, so I do have some patience... but it has its limits. Say I'd like to implement the read-only database you made, in another language - what would it take?
|
|
|
This looks like the right way to go, but... 1) I failed to install the software on my Linux, following the docs. Or to build it from the sources... 2) I don't think a TCP-based API is suitable for the needs of the DB we're discussing here. How do you imagine browsing through the content of the current 35M+ UTXO records with this TCP-based API?
|
|
|
Once all the balances for all the addresses are verified at all the bundle boundaries, then "all" that is left is the realtime bundle being validated. The brute force way would just be to generate a bundle without all the blocks, as each new block comes in. I guess it won't be so bad, as both the bloom filter and the open hashtable were designed to be incrementally added to.
Although I'm not quite sure what you mean by "brute force", among other things, indeed the way to go is to keep just a blockchain-secured snapshot of the utxo db at a certain block. It's the only way to go. Everybody knows that, but nobody does anything about it
|
|
|
Sounds like good stuff. I'll give it a better look once I get sober. So, can you refer me to some specific code that implements this?
|
|
|
Not sure what sort of things "like to do anything on each single one of them" you would want to do across all UTXO. To my thinking, other than the total balance, the UTXO set does not span across all addresses, so if you can extract all the relevant tx per address efficiently, that is sufficient.

OK, so let me ask you this way: you have your utxo database, or whatever you call it. You get a new transaction and you have to verify it. How long does it take? I hope you know that you need to fetch each of the tx's inputs from the db, verify their consistency, and then put the changed records back into the db.
|
|
|
In its raw form, with all the data scattered across the 200 bundles, a simplistic search for a specific address took 2.5 milliseconds on a 1.4GHz i5 - not a parallel process, but serial. Using a faster CPU and 8 cores, I think times of 100 nanoseconds could do a complete search and sum for a specific address.

Yes, but I'm not interested in checking the balance of addresses. I just want to know how fast it is to go through each of the UTXO db records - like to do anything on each single one of them. As for the rest - the memory mapping, storing it on disk and stuff - I think you're on to something. But first I just want to know how fast your utxo db works compared to mine.
|
|
|
For utxo, my approach with iguana creates a parallel set of datasets in read-only memory-mapped files. Arguably, this is the ultimate DB, as it simply can't get any faster than this.
All things needed for block explorer level queries are precalculated and put into the read only files.
By having each bundle of 2000 blocks be mostly independent of all other blocks, this not only allows for parallel syncing of the blockchain, but also for using multiple cores for all queries regarding tx history.
I also calculate all address balances for each bundle, so by adding all the values across all the bundles you end up with the current balance as of the most recent bundle. Special case handling of the realtime bundle gets us up to date block explorer data for all blocks, tx, addresses (even multisig)
I don't quite have everything done, but the basic structure is showing very good results. I also remove as much redundancy as possible during the creation of the read-only files, so instead of taking more space than the raw blockchain, it ends up taking about half the space. About half of that is the signatures, which can be purged for non-relaying nodes after everything has been validated.
The format of the utxo vector would just be the location of the unspent that is being spent and the location of the spend is implicit in the order of the vins.
[u0, u1, u2, u3, ...] is a list of 6 byte locators for the unspent that the corresponding vin in the bundle is spending, where each ui contains the bundle id and the unspentind. Each unspentind is simply the order it appears in the bundle (starting at 1).
the above is not directly usable, but it is invariant and can be created once and put into the read-only file(system). using mksquashfs reduces the size of the index tables by almost 50%.
Given such a list for all bundles, then using a parallel multicore process, they can all be traversed to update a bitmap indicating which outputs are spent. So a 0 bit means unspent. Maybe 1 bit of space per utxo is good enough, as that is seemingly as good as it gets, but it is 1 bit per vout, so more like 30MB in size. However, I think this can be compressed pretty well, with most of the early vouts already being spent (ignoring the satoshi ones). I haven't had a chance to get an up-to-date utxo bitmap, but I hope to get one in the next week.
At the 30MB size or smaller, the entire utxo bitmap will fit into CPU L cache and provide 10x faster performance than RAM. RAM is actually quite slow compared to things that can be done totally inside the CPU.
James
I'm sorry, I don't have time to dig into your code, though I'm sure it's good stuff. So how do you actually store the db in memory, in some simple words? I use hashmaps and it's pretty fast, but also very primitive. Plus it takes loads of memory and ages to load from the disk. And writing it to disk is a separate problem. I'm sure there must be so much space to improve it. Can you say how many seconds you need to browse through all the current utxo records and calc, let's say, the sum of all the values? With my engine I cannot get below 4 seconds anymore. The UTXO db is over 35 million records now. It really needs some better shit than leveldb
|
|
|
It really is a pointless question. I think it's very important to be able to browse through all the records in the shortest possible time.
For whom is the speed so important? For people who treat finances responsibly, transactional integrity will be the most important specification, immediately followed by the capability to integrate with existing financial and accounting middleware. Speed would be secondary or even tertiary, especially since the UTxO set is widely replicated and six confirmations take one hour on average. You're mixing up things. Probably because you don't really understand what the UTXO database actually does inside the bitcoin software. Speed of accessing the records is important for obvious reasons. The traditional reason is that you want to verify new transactions/blocks quickly and efficiently. And then there are the new reasons, like hardware wallets or any kind of stealth payments. The current approach from satoshi's client, where the wallet is glued to the node and keeps track of its addresses, updating their balance as new blocks appear - IMHO this solution has proven to be unreliable. Plus it is totally unfeasible for private offline wallets. An efficient UTXO database that can quickly browse through all the records is definitely a must for the next generation of bitcoin node software. I am just not aware of any existing engine that would be ready to replace the currently used leveldb. I think one eventually needs to be (and will be) created explicitly for Bitcoin. Just like sipa's secp256k1 lib was created explicitly for Bitcoin, as the originally used openssl implementation wasn't good enough to handle a crucial part of the software.
|
|
|
No, I don't have one. I was just wondering, because I know the UTXO db is a very important component of the bitcoin system and it is obviously going to be somehow profitable in the future to have it performing well. How would you imagine a database engine designed specifically for bitcoin's UTXO db? I think it's very important to be able to browse through all the records in the shortest possible time.
|
|
|
You're going to vote with your bitcoins on implementing a wishing system for devs to write code which the miners then shall adopt?
And you don't see any problems convincing any of the parties involved to actually obey it?
It's not going to happen.
|
|
|
|