Is this interface just a modified 'getwork'?
My CPU miner is quite fast using the '4way' algorithm, and should provide superior khash/sec for pooled miners.
|
|
|
+1
Well done! Bitcoin needed better exchange options for Asian currencies.
|
|
|
I think accounts and labels are two separate needs.
+1 agreed
|
|
|
I timed two runs with clean data directories (no contents), -noirc, -addnode=10.10.10.1, Linux 64-bit. Hardware: SATA SSD
Mainline, no patches: 32 minutes to download 94660 blocks.
Mainline + TxnBegin/TxnCommit in AddToBlockIndex(): 25 minutes to download 94660 blocks.
|
|
|
This does not seem like the best time to be creating some sort of wikileaks-bitcoin connection.
|
|
|
I instrumented my import using the -initblocks=FILE patch posted last night, putting printf tracepoints in TxnBegin, TxnCommit, TxnAbort, Read and Write:

ProcessBlock: ACCEPTED
CDB::Write()
DB4: txn_begin
CDB::Write()
CDB::Write()
CDB::Write()
DB4: txn_commit
SetBestChain: new best=000000005b5c1859db19  height=1751  work=7524897523416
ProcessBlock: ACCEPTED
CDB::Write()
DB4: txn_begin
CDB::Write()
CDB::Write()
CDB::Write()
DB4: txn_commit
SetBestChain: new best=00000000f396ab6b62ba  height=1752  work=7529192556249
ProcessBlock: ACCEPTED
CDB::Write()
DB4: txn_begin
CDB::Write()
CDB::Write()
CDB::Write()
DB4: txn_commit
SetBestChain: new best=000000000c6bcf972117  height=1753  work=7533487589082
So it appears that we have a CDB::Write() that occurs outside of a transaction (vTxn is empty??). txnid==NULL is perfectly legal for db4, but it does mean that code path may be operating outside of the DB_TXN_NOSYNC flag that is set in ::TxnBegin(). Thus, a CDB::Write() outside of a transaction may have synchronous behavior (DB_TXN_SYNC), as governed by the DB_AUTO_COMMIT database flag.

EDIT: Wrapping WriteBlockIndex() inside a transaction does seem to speed up local disk import (-initblocks).

--- a/main.cpp
+++ b/main.cpp
@@ -1427,7 +1427,10 @@ bool CBlock::AddToBlockIndex(unsigned int nFile, unsigned
     pindexNew->bnChainWork = (pindexNew->pprev ? pindexNew->pprev->bnChainWork
     CTxDB txdb;
+    txdb.TxnBegin();
     txdb.WriteBlockIndex(CDiskBlockIndex(pindexNew));
+    if (!txdb.TxnCommit())
+        return false;
Of course that implies begin+commit+begin+commit in quick succession (SetBestChain), so maybe a less naive approach might be preferred (nested transactions, or wrap both db4 writes in the same transaction).
|
|
|
Hi all,
I just built the latest bitcoin-git source; it seems like the wallet or database format has changed:
************************
EXCEPTION: NSt8ios_base7failureE
CDataStream::read() : end of data
bitcoin in AppInit()

terminate called after throwing an instance of 'std::ios_base::failure'
  what():  CDataStream::read() : end of data
Could it be 84d7c981dc52cc738053 that broke something?
What are the lines in debug.log leading up to the end-of-file exception you pasted?
|
|
|
The basic mining algorithm looks like this: sha256(sha256(data)), where "data" is the "data" member returned by 'getwork' RPC.
However, one complication with standard sha256 library implementations is that they byte-swap the input data into big-endian format, which endian-neutral sha256 normally requires in order to work identically on all platforms.
To increase speed, bitcoin has already performed that byte-swapping for you.
That means that, to use a standard library (.net or whatever) implementation of sha256, you need to byte-swap your data again, back into little-endian, then let your .net sha256 swap it back into big-endian.
That's simple enough. What about the nonce? data&nonce? The nonce is a 32-bit (4-byte) value patched directly into 'data' at a particular offset. One more question for you now, though: what do I return to getwork[data]? If I just return the found hash that meets the requirements, how will it know what nonce was used?
You return 'data', modified with a nonce that results in a hash containing a sufficient number of leading zero bits.
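The byte-swapping and nonce-patching described above can be sketched as follows. This is a hedged illustration, not the exact layout bitcoin uses: the sample data_hex is made up, and the assumption that the nonce occupies the last 4 bytes of the 80-byte header (after the version, previous-hash, merkle-root, time, and bits fields) and that getwork's 'data' is pre-swapped in 32-bit words are stated here as assumptions.

```python
import hashlib
import struct

def byteswap32(data: bytes) -> bytes:
    """Reverse the byte order within each 32-bit word.
    Undoes the per-word byte-swap that bitcoin's getwork applies to 'data'."""
    return b"".join(data[i:i + 4][::-1] for i in range(0, len(data), 4))

def try_nonce(data_hex: str, nonce: int) -> bytes:
    """Patch a 32-bit nonce into the header and return sha256(sha256(header)).
    Assumes the nonce sits in the final 4 bytes of the 80-byte header."""
    header = byteswap32(bytes.fromhex(data_hex)[:80])   # undo getwork's swap
    header = header[:76] + struct.pack("<I", nonce)      # splice in the nonce
    return hashlib.sha256(hashlib.sha256(header).digest()).digest()

# Hypothetical data (a real getwork 'data' member is 128 hex-encoded bytes):
data_hex = "00" * 256
h = try_nonce(data_hex, 12345)
# A miner would scan nonces until the resulting hash has enough leading
# zero bits to satisfy the target, then return the patched 'data'.
```

In other words, the thing you hand back through getwork is not the hash itself but the original 'data' with the winning nonce spliced in; the server re-hashes it to verify.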
|
|
|
vb.net has a nice little json library for it at http://www.pozzware.com/pozzware/Corsi/Programmazione/VB.NET/JSON%20Library.aspx
Asp has some too: http://code.google.com/p/aspjson/
The server grabs the getwork json from bitcoind and passes it on to the client software. I'm building a sqlite backend (because that's what I use, not BerkeleyDB, sorry) and it tracks usage based on the bitcoin address for near hits.
.net already has the libraries needed for sha256:

Imports System.Security.Cryptography
hash = New SHA256Managed()
Dim hashinBytes As Byte()
hashinBytes = hash.ComputeHash(hashhere)

What would I take from the getwork and toss inside there?

The basic mining algorithm looks like this: sha256(sha256(data)), where "data" is the "data" member returned by 'getwork' RPC. However, one complication with standard sha256 library implementations is that they byte-swap the input data into big-endian format, which endian-neutral sha256 normally requires in order to work on all platforms. To increase speed, bitcoin has already performed that byte-swapping for you. That means that, to use a standard library (.net or whatever) implementation of sha256, you need to byte-swap your data again, back into little-endian, then let your .net sha256 swap it back into big-endian.

I'm looking at jgarzik's little c miner, and hash1, data, etc. are all fine, but what is the 'hash' variable he is passing in there as well?
'hash' is the output of sha256(sha256(data)), ie. the sha256 hash.
|
|
|
Building blkindex.dat is what causes all the disk activity. [...] Maybe Berkeley DB has some tweaks we can make to enable or increase cache memory.
The following code in AddToBlockIndex() (main.cpp) is horribly inefficient, and dramatically slows the initial block download:

    CTxDB txdb;
    txdb.WriteBlockIndex(CDiskBlockIndex(pindexNew));

    // New best
    if (pindexNew->bnChainWork > bnBestChainWork)
        if (!SetBestChain(txdb, pindexNew))
            return false;

    txdb.Close();
This makes it impossible to use a standard technique for loading large amounts of records into a database (db4 or SQL or otherwise): wrap multiple record insertions into a single database transaction. Ideally, bitcoin would only issue a TxnCommit() for each 1000 blocks or so during the initial block download. If a crash occurs, the database remains in a consistent state.

Furthermore, a database open + close for each new block is incredibly expensive. For each database-open and database-close operation, db4 must:
- diagnose the health of the database, to determine if recovery is needed; this test may require data copying
- re-init memory pools
- read database file metadata
- acquire file locks
- read and initialize b-tree or hash-specific metadata. build hash table / b-tree roots.
- force a sync, even if transactions were opened with DB_TXN_NOSYNC
- fsync memory pool
And, additionally, bitcoin forces a database checkpoint, pushing all transactions from the log into the main database.

That's right: that long list of operations is executed per-database (DB), not per-environment (DB_ENV), for each database close+open cycle. For bitcoin, that means we do all of this for every new block. Incredibly inefficient, and not how db4 was designed to be used.

Recommendations:
1) bitcoin should open its databases, not just the environment, at program startup, and close them at program shutdown. db4 is designed to handle crashes, if proper transactional use is maintained -- and bitcoin already uses db4 transactions properly.
2) For the initial block download, a txn commit should occur once every N records, not once per record. I suggest N=1000.

EDIT: Updated a couple minor details, and corrected some typos.
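The commit-every-N-records technique in recommendation 2 is generic to transactional databases. Here is a minimal sketch using Python's sqlite3 (chosen only because it is self-contained; db4's txn_begin/txn_commit API differs, but the principle of one commit per batch is the same, and the table name and batch size are illustrative):

```python
import sqlite3

def bulk_insert(conn, records, batch_size=1000):
    """Insert records, committing once per batch_size rows rather than
    once per row. Each commit costs a log flush/fsync, so batching cuts
    the synchronous-I/O count by a factor of batch_size."""
    cur = conn.cursor()
    for i, (key, value) in enumerate(records, 1):
        cur.execute("INSERT INTO blockindex VALUES (?, ?)", (key, value))
        if i % batch_size == 0:
            conn.commit()      # one durable commit per batch, not per record
    conn.commit()              # flush the final, possibly partial, batch

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE blockindex (hash TEXT, height INTEGER)")
bulk_insert(conn, ((f"hash{n}", n) for n in range(2500)))
```

A crash between commits loses only the uncommitted tail of the batch; the database itself stays consistent, which is exactly the property that makes N=1000 safe during initial block download.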
|
|
|
Just pushed out v0.2.1 to cpuminer.git, and updated the Windows installer: http://yyz.us/bitcoin/cpuminer-installer-0.2.1.zip

SHA1: d85390e1bb4da94b84f0968d0c98590f4be22f39  cpuminer-installer-0.2.1.zip
MD5:  02bcd0b22fa62499e96244ebd86efcd5  cpuminer-installer-0.2.1.zip

Windows users should see a slight increase in khash/sec, due to improved optimization.
|
|
|
This has happened to me every time a hash with zeros is found on my 64-bit Atom system:
Fixed in git commit 145e5fe141857c0757fdb5bb6909583aa67691b1, just pushed to cpuminer.git.
|
|
|
Patch updated to fix minor bug.
|
|
|
URL: http://yyz.us/bitcoin/patch.bitcoin-initblocks

This patch adds a "-initblocks=FILE" parameter to bitcoin, where FILE is the path to a blk0001.dat file downloaded off the Internet somewhere. With -initblocks, bitcoin will import, verify, index and store each block in this file, making it acceptable to obtain the file from untrusted sources. The import stops upon any error, preventing re-loads of the same data.

If nothing else, this patch permits benchmarking and comparison between local disk import and P2P network download.
|
|
|
I think miners should retry the connection every now and then if the master is not available, so you don't have to care if it is temporarily down (e.g. while updating).
'getwork' miners (such as mine) already do this. The basic algorithm looks like this: - 1. Connect to bitcoin via TCP, obtain work
- 2. Work on work (no TCP connection required, needed, or used)
- 3. If solution found, connect to bitcoin via TCP and submit solution
- 4. Go to step #1.
If bitcoin is down when a solution is found, presumably the solution will be invalid by the time bitcoin comes back up, so the "retry" with solution seems less than useful. A "retry" is really just going back to obtain more work (step #1).
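The four-step loop above, with "retry" meaning "go back to step 1", can be sketched like this. The get_work, solve, and submit callables are hypothetical stand-ins for the RPC calls and the hashing inner loop, and the rounds parameter merely bounds the loop for illustration; a real miner runs forever:

```python
import time

def mining_loop(get_work, solve, submit, rounds, retry_delay=0):
    """Getwork miner outer loop: fetch work, solve offline, submit, repeat.
    If the master is unreachable, sleep and retry step 1; a solution that
    cannot be submitted is abandoned, since it is likely stale anyway."""
    for _ in range(rounds):
        try:
            work = get_work()          # step 1: connect via TCP, obtain work
        except ConnectionError:
            time.sleep(retry_delay)    # master down: wait, then retry step 1
            continue
        solution = solve(work)         # step 2: hash offline, no TCP needed
        if solution is not None:
            try:
                submit(solution)       # step 3: connect, submit solution
            except ConnectionError:
                pass                   # stale by now; just fetch fresh work
```

Note the deliberate absence of a submit-retry queue: dropping an unsubmittable solution and looping back to step 1 is the whole retry strategy.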
|
|
|
Just added the Crypto++ implementation of SHA256 to cpuminer.git. C only, though I would eventually like to add asm for 32-bit.
sha.cpp, converted to C, seems slower than cpuminer's current "c" algorithm.
|
|
|
Platform: 64 bit Linux.
Strange. I wouldn't have thought the mainline implementation would be that much faster on 64-bit. It might be better to catch the kill signal, complete the current work, and thereby cover even this unlikely case. OTOH, it might not be worth the effort, since the chance of finding one is so small...
At some point you have to surrender to the fact that killing the miner means possibly missing a solution that would have been found, if you only gave it another microsecond to run...
|
|
|
|