Author Topic: Starting preliminary 0.94 testing - "Headless fullnode"  (Read 15098 times)
joel_ (Full Member)
July 07, 2015, 07:33:54 PM  #161

Just a quick question for the devs: will Armory 0.94 finally support importing compressed private keys?
Thank you very much for any reply.

And good luck with the testing; I hope that I and other less tech-savvy people might get our hands on a 0.94.x binary soon! Or is it already out there? Thanks.
doug_armory (Sr. Member)
Senior Developer - Armory
July 07, 2015, 09:15:47 PM  #162

Just a quick question for the devs: will Armory 0.94 finally support importing compressed private keys?
Thank you very much for any reply.

I don't think so. I believe that will come when the v2.0 wallets are ready. (We're still working on them! I'm guessing they'll be in the post-0.94 release. Sorry for the wait.)

Senior Developer -  Armory Technologies, Inc.
joel_ (Full Member)
July 08, 2015, 07:03:22 PM  #163

I don't think so. I believe that will come when the v2.0 wallets are ready. (We're still working on them! I'm guessing they'll be in the post-0.94 release. Sorry for the wait.)
Thank you very much for the reply. Never mind, it's certainly not one of the most important features. It's just that yesterday I was trying to import MultiBit private keys into an Armory wallet and failed miserably, and I found I'm not alone, judging from my searches of this forum for a solution. :)

Anyway, I started using Armory recently and ditched all those SPV / dumbed-down web wallets, and so far I am loving Armory's advanced features! The only thing I hate, though it seems fixed in 0.94, is that it eats double the blockchain's space on the HDD. :) So let me wish all the Armory devs good luck with making Armory the best Bitcoin wallet.
fallinglantern (Sr. Member)
July 13, 2015, 11:34:03 PM  #164

The current revision of ffreeze (hash b627160) won't build on Debian 7.8 32-bit:

Quote
g++  -Icryptopp -Imdb -DUSE_CRYPTOPP -D__STDC_LIMIT_MACROS -I/usr/include/python2.7 -I/usr/include/python2.7 -std=c++11 -O2 -pipe -fPIC -c ScrAddrObj.cpp
ScrAddrObj.cpp: In member function 'void ScrAddrObj::purgeZC(const std::set<BinaryData>&)':
ScrAddrObj.cpp:160:45: error: invalid initialization of non-const reference of type 'TxRef&' from an rvalue of type 'TxRef'
ScrAddrObj.cpp:175:46: error: invalid initialization of non-const reference of type 'TxRef&' from an rvalue of type 'TxRef'
make[1]: *** [ScrAddrObj.o] Error 1

I've reverted to 90586da for the time being, which does still compile.
doug_armory (Sr. Member)
Senior Developer - Armory
July 14, 2015, 04:25:06 AM  #165

The current revision of ffreeze (hash b627160) won't build on Debian 7.8 32-bit:

Quote
g++  -Icryptopp -Imdb -DUSE_CRYPTOPP -D__STDC_LIMIT_MACROS -I/usr/include/python2.7 -I/usr/include/python2.7 -std=c++11 -O2 -pipe -fPIC -c ScrAddrObj.cpp
ScrAddrObj.cpp: In member function 'void ScrAddrObj::purgeZC(const std::set<BinaryData>&)':
ScrAddrObj.cpp:160:45: error: invalid initialization of non-const reference of type 'TxRef&' from an rvalue of type 'TxRef'
ScrAddrObj.cpp:175:46: error: invalid initialization of non-const reference of type 'TxRef&' from an rvalue of type 'TxRef'
make[1]: *** [ScrAddrObj.o] Error 1

I've reverted to 90586da for the time being, which does still compile.

Confirmed. I have a fix but will check with goatpig before it gets checked in.
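
For anyone hitting the same error: GCC is complaining about binding a temporary (an rvalue) to a non-const lvalue reference, which the language forbids. Below is a minimal, hypothetical reproduction of the pattern and the two usual ways out; the names are made up and this is not the actual ScrAddrObj code or the checked-in fix, just the general shape of the problem.

Code:
// Hypothetical reproduction of the class of error reported above; the real
// ScrAddrObj.cpp code differs, this only illustrates the pattern.
#include <iostream>

struct TxRef { int id_; };

// Returns a temporary (an rvalue), e.g. a getter that builds a TxRef on the fly.
TxRef getTxRef() { return TxRef{42}; }

int main()
{
   // TxRef& bad = getTxRef();        // error: invalid initialization of
                                      // non-const reference from an rvalue
   const TxRef& ok1 = getTxRef();     // fine: const ref extends the temporary's lifetime
   TxRef ok2 = getTxRef();            // fine: copy/move the temporary into a local
   std::cout << ok1.id_ << " " << ok2.id_ << std::endl;
   return 0;
}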

Senior Developer -  Armory Technologies, Inc.
doug_armory (Sr. Member)
Senior Developer - Armory
July 14, 2015, 04:32:20 PM  #166

The current revision of ffreeze (hash b627160) won't build on Debian 7.8 32-bit:

Quote
g++  -Icryptopp -Imdb -DUSE_CRYPTOPP -D__STDC_LIMIT_MACROS -I/usr/include/python2.7 -I/usr/include/python2.7 -std=c++11 -O2 -pipe -fPIC -c ScrAddrObj.cpp
ScrAddrObj.cpp: In member function 'void ScrAddrObj::purgeZC(const std::set<BinaryData>&)':
ScrAddrObj.cpp:160:45: error: invalid initialization of non-const reference of type 'TxRef&' from an rvalue of type 'TxRef'
ScrAddrObj.cpp:175:46: error: invalid initialization of non-const reference of type 'TxRef&' from an rvalue of type 'TxRef'
make[1]: *** [ScrAddrObj.o] Error 1

I've reverted to 90586da for the time being, which does still compile.

Confirmed. I have a fix but will check with goatpig before it gets checked in.

Fix has been checked in. Give it a go.

Senior Developer -  Armory Technologies, Inc.
fallinglantern (Sr. Member)
July 18, 2015, 05:21:37 PM  #167

Fix has been checked in. Give it a go.

Works like a champ. Thank you very much.
Searinox (Full Member)
Do you like fire? I'm full of it.
July 19, 2015, 08:15:20 PM  #168

Much good news around here. When do we start preliminary cross-platform testing of the builds? I would very much love to dive right in with a 0.94 Win x64 build, despite whatever uncertainties remain, and I doubt I'd be the only one. :)
Carlton Banks (Legendary)
July 19, 2015, 08:29:33 PM  #169

https://bitcointalk.org/index.php?topic=1112974.msg11911942#msg11911942

The 0.94 release makes use of multi-threading to improve syncing performance, but multi-threading is a notoriously difficult discipline in which to iron out the kinks. Your enthusiasm (as well as that displayed by others in the thread) will be very useful when ATI do put testing builds out; getting this sort of code as solid as it can be is important for it to qualify as release quality.

Vires in numeris
goatpig (Moderator, Legendary)
Armory Developer
August 03, 2015, 06:21:00 PM  #170

New commit, calling out the testers =P

Most of the changes are stability and supernode improvements.

Carlton Banks (Legendary)
August 03, 2015, 08:43:05 PM  #171

In standard (headless?) mode, I'm getting block DB rebuilds on every quit/restart. No apparent errors in the logs, either during the build or the restarts. DB rebuilt/rescanned with the latest commit. Using 0.11 for Core.

Vires in numeris
goatpig (Moderator, Legendary)
Armory Developer
August 03, 2015, 09:54:39 PM  #172

In standard (headless?) mode, I'm getting block DB rebuilds on every quit/restart

Will look into it.

goatpig (Moderator, Legendary)
Armory Developer
August 04, 2015, 02:43:33 PM  #173

In standard (headless?) mode, I'm getting block DB rebuilds on every quit/restart. No apparent errors in the logs, either during the build or the restarts. DB rebuilt/rescanned with the latest commit. Using 0.11 for Core.

Fixed.

Carlton Banks (Legendary)
August 04, 2015, 04:39:30 PM  #174

In standard (headless?) mode, I'm getting block DB rebuilds on every quit/restart. No apparent errors in the logs, either during the build or the restarts. DB rebuilt/rescanned with the latest commit. Using 0.11 for Core.

Fixed.

Will try it out shortly.

Here's something from a crash I got in supernode mode:

Code:
-WARN  - 1438705827: (BlockWriteBatcher.cpp:505) Finished applying blocks up to 40000
-WARN  - 1438705827: (BlockWriteBatcher.cpp:505) Finished applying blocks up to 42500
-WARN  - 1438705827: (BlockWriteBatcher.cpp:505) Finished applying blocks up to 47500
-WARN  - 1438705828: (BlockWriteBatcher.cpp:505) Finished applying blocks up to 52500
-WARN  - 1438705828: (BlockWriteBatcher.cpp:2621) Readjusting thread count:
-WARN  - 1438705828: (BlockWriteBatcher.cpp:2622) 0 readers
-WARN  - 1438705828: (BlockWriteBatcher.cpp:2623) 4 workers
-WARN  - 1438705828: (BlockWriteBatcher.cpp:2624) 4 writers
-WARN  - 1438705828: (BlockWriteBatcher.cpp:2625) 1 old reader count
-WARN  - 1438705828: (BlockWriteBatcher.cpp:2626) 4 old worker count
-WARN  - 1438705828: (BlockWriteBatcher.cpp:2627) 4 old writer count
-WARN  - 1438705828: (BlockWriteBatcher.cpp:505) Finished applying blocks up to 55000
Floating point exception

I could try it again under gdb if that helps.
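
For what it's worth, a "Floating point exception" right after the counts readjust to 0 readers smells like an integer division or modulo by zero somewhere in the rebalancing math: on Linux, integer division by zero raises SIGFPE, which gets reported with exactly that message. This is only a guess at the mechanism; the snippet below is a purely hypothetical illustration, not the actual BlockWriteBatcher code.

Code:
// Hypothetical illustration: how thread-rebalancing arithmetic can die with
// "Floating point exception" once a computed thread count reaches zero.
#include <algorithm>
#include <cstdint>
#include <iostream>

int main()
{
   uint32_t readerCount    = 0;      // e.g. the "0 readers" in the log above
   uint32_t blocksPerBatch = 2500;

   // A guard like this would prevent the crash:
   // readerCount = std::max<uint32_t>(readerCount, 1);

   // With readerCount == 0, this integer division raises SIGFPE on Linux,
   // which the shell reports as "Floating point exception":
   uint32_t blocksPerReader = blocksPerBatch / readerCount;

   std::cout << blocksPerReader << std::endl;
   return 0;
}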

Vires in numeris
goatpig (Moderator, Legendary)
Armory Developer
August 05, 2015, 12:54:04 PM  #175

Fixed. Just a dumb omission on my part.

Carlton Banks (Legendary)
August 05, 2015, 02:47:49 PM  #176

In standard (headless?) mode, I'm getting block DB rebuilds on every quit/restart. No apparent errors in the logs, either during the build or the restarts. DB rebuilt/rescanned with the latest commit. Using 0.11 for Core.

Fixed.

Confirmed.

With supernode, I'm now getting no thread toggling (1.5 days to scan history). Logging for the thread toggling got removed from headless too, and yet that mode clearly multithreads the scanning workload. I like the "resume initialising from blockfile xxx" behaviour; it's a serious productivity boost when testing supernode.

Vires in numeris
goatpig (Moderator, Legendary)
Armory Developer
August 05, 2015, 04:20:29 PM (last edit: August 05, 2015, 04:36:05 PM by goatpig)  #177

With supernode, I'm now getting no thread toggling (1.5 days to scan history). Logging for the thread toggling got removed from headless too, and yet that mode clearly multithreads the scanning workload.

That was way too verbose anyway. After a bit of profiling, it seems thread toggling is pointless; you're better off setting all processes to the max thread count (as returned by std::thread::hardware_concurrency()). There is already a RAM ceiling coded in, so the different parts of the scan (reading data, parsing, serializing, writing) cannot get ahead of one another. In that case it's simpler to max out the thread count for each and every one of them and let the OS sort things out.

Each part waits on the next one through mutexes and condition variables, so all these threads sleep until they're allowed to work again. No harm done, and it squeezes out as much CPU time as possible. On the other hand, toggling is a pain to tune properly: with the current toggler, a mainnet fullnode scan takes me ~8m30s; with all thread counts maxed out it takes just short of 5m.
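
Roughly, the pattern looks like the sketch below: one bounded queue per hand-off acts as the RAM ceiling, every stage gets as many threads as hardware_concurrency() reports, and idle threads park on a condition variable until work (or space) shows up. This is a minimal, self-contained illustration with made-up names, not Armory's actual scan classes.

Code:
// Minimal sketch of a bounded producer/consumer hand-off between scan stages.
// Not Armory's code; names and structure are illustrative only.
#include <condition_variable>
#include <cstddef>
#include <deque>
#include <iostream>
#include <mutex>
#include <thread>
#include <vector>

template <typename T>
class BoundedQueue
{
   std::deque<T> q_;
   std::mutex mu_;
   std::condition_variable notFull_, notEmpty_;
   std::size_t cap_;
   bool done_ = false;

public:
   explicit BoundedQueue(std::size_t cap) : cap_(cap) {}

   void push(T item)
   {
      std::unique_lock<std::mutex> lk(mu_);
      notFull_.wait(lk, [&] { return q_.size() < cap_; }); // sleep if the ceiling is hit
      q_.push_back(std::move(item));
      notEmpty_.notify_one();
   }

   bool pop(T& out)
   {
      std::unique_lock<std::mutex> lk(mu_);
      notEmpty_.wait(lk, [&] { return !q_.empty() || done_; }); // sleep until there is work
      if (q_.empty())
         return false;                                          // producer finished, queue drained
      out = std::move(q_.front());
      q_.pop_front();
      notFull_.notify_one();
      return true;
   }

   void finish()
   {
      std::lock_guard<std::mutex> lk(mu_);
      done_ = true;
      notEmpty_.notify_all();
   }
};

int main()
{
   unsigned nThreads = std::thread::hardware_concurrency();
   if (nThreads == 0)
      nThreads = 4;

   BoundedQueue<int> batches(8);   // the "RAM ceiling": at most 8 batches in flight

   // One "read blocks" stage feeding nThreads "parse/serialize" workers.
   std::thread reader([&] {
      for (int batch = 0; batch < 100; ++batch)
         batches.push(batch);
      batches.finish();
   });

   std::vector<std::thread> workers;
   for (unsigned i = 0; i < nThreads; ++i)
      workers.emplace_back([&] {
         int batch;
         while (batches.pop(batch)) { /* process the batch here */ }
      });

   reader.join();
   for (auto& w : workers)
      w.join();
   std::cout << "pipeline drained with " << nThreads << " worker threads\n";
   return 0;
}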

Quote
I like the "resume initialising from blockfile xxx" behaviour; it's a serious productivity boost when testing supernode.

A lot changed there. The DB is now write-ahead only. The previous version would modify earlier entries to mark spent TxOuts; now it always writes ahead and keeps spentness in a dedicated DB. That speeds up the process a lot (far fewer rewrites) and guarantees the DB can recover by overwriting data from the top of the last properly committed batch, with no risk of corrupting the dataset.
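
To make the write-ahead idea concrete, here is a toy model (a std::map standing in for LMDB, with made-up names and layout, not Armory's actual schema): batches only add new entries, spentness lives in its own keyspace instead of patching earlier history entries, and a commit marker written last tells a restarted scan where to resume, so a half-written batch is simply overwritten.

Code:
// Toy model of write-ahead batches with a resume marker. Illustrative only.
#include <cstdint>
#include <map>
#include <string>
#include <utility>
#include <vector>

struct ScanDB
{
   std::map<std::string, std::string> history;    // append-only history entries
   std::map<std::string, std::string> spentness;  // spent TxOuts, kept separately
   uint32_t lastCommittedBatch = 0;                // durable marker, updated last
};

// Apply one batch. Nothing written before this batch is modified, so a crash
// mid-batch leaves at worst some entries the next run will overwrite.
void applyBatch(ScanDB& db, uint32_t batchId,
                const std::vector<std::pair<std::string, std::string>>& newHistory,
                const std::vector<std::pair<std::string, std::string>>& newSpentness)
{
   for (const auto& kv : newHistory)
      db.history[kv.first] = kv.second;      // write ahead: new keys only
   for (const auto& kv : newSpentness)
      db.spentness[kv.first] = kv.second;    // spentness in its own keyspace
   db.lastCommittedBatch = batchId;          // commit marker goes in last
}

// On restart, resume from the top of the last properly committed batch.
uint32_t resumeFrom(const ScanDB& db) { return db.lastCommittedBatch + 1; }

int main()
{
   ScanDB db;
   applyBatch(db, 1, {{"addrA|height1", "txio data"}}, {{"1|0|0", "spent"}});
   return resumeFrom(db) == 2 ? 0 : 1;
}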

The one thing that did get corrupted a lot was the balance and transaction count for each address. There's a whole new section of code to handle that now, independently of scanning history. You need context to compute a balance, since you are tallying the effect of every TxOut and TxIn for each address. The previous version of supernode tallied balances while scanning history, so if the DB failed to resume in the exact same state it was in before a crash, there was a decent chance at least one balance got corrupted, and that meant rescanning from scratch.

This version separates the two processes entirely: it first scans history, then computes balances. This simplifies and speeds up a lot of code. First of all, keeping track of balances at all times creates a lot of rewrites: every time an address appears in a batch, you need to pull the existing balance from the DB, update it and write it back. Before splitting the two processes, 0.94 scanned supernode in 4h30. After splitting them, I scanned the history in 1h30 and built the balances in 5 min.
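
A simplified sketch of the difference (hypothetical code, not Armory's): the single-pass approach does a read-modify-write against the balance store for every address in every batch, while the two-pass approach only appends history during the scan and sums the balances once at the end.

Code:
// Single-pass vs. two-pass balance computation, in miniature. Illustrative only.
#include <cstdint>
#include <iostream>
#include <map>
#include <string>
#include <vector>

struct LedgerEntry { std::string address; int64_t delta; };  // +TxOut, -TxIn

// Single pass: balances updated batch by batch. Against an on-disk DB this
// means a get + put per address per batch, i.e. lots of rewrites.
void applyBatchToBalances(std::map<std::string, int64_t>& balanceDB,
                          const std::vector<LedgerEntry>& batch)
{
   for (const auto& e : batch)
      balanceDB[e.address] += e.delta;
}

// Two passes: the scan only writes history; balances are tallied once afterwards
// and written out in a single go.
std::map<std::string, int64_t> tallyBalances(const std::vector<LedgerEntry>& history)
{
   std::map<std::string, int64_t> balances;
   for (const auto& e : history)
      balances[e.address] += e.delta;
   return balances;
}

int main()
{
   std::vector<LedgerEntry> history = {
      {"addrA", 5000}, {"addrB", 2500}, {"addrA", -1000}};

   std::map<std::string, int64_t> liveBalances;
   applyBatchToBalances(liveBalances, history);     // "during the scan" style

   auto finalBalances = tallyBalances(history);     // "after the scan" style
   std::cout << "addrA: " << finalBalances["addrA"] << "\n";   // 4000
   return 0;
}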

The good part, besides the speed boost, is robustness. Since the two are now separate, I added an option in supernode to run only the balance-tallying part, to quickly fix a damaged DB. It's called "Rescan SSH" and should fix the DB in 5~20 min depending on the machine.

PS: There is still room for some very significant optimizations, but I've concluded they are out of scope for this release.

Carlton Banks (Legendary)
August 05, 2015, 05:38:03 PM  #178

After splitting them, I scanned the history in 1h30 and built the balances in 5 min.

The good part, besides the speed boost, is robustness. Since the two are now separate, I added an option in supernode to run only the balance-tallying part, to quickly fix a damaged DB. It's called "Rescan SSH" and should fix the DB in 5~20 min depending on the machine.

PS: There is still room for some very significant optimizations, but I've concluded they are out of scope for this release.

What could account for the low CPU usage (20%) and a 2-day estimated scan time? The number of addresses Armory scans? One thing I perhaps should have mentioned is that I did not delete the database folder from my previous experiments with supernode (which ended midway through a troubled tx scan once you hinted at supernode changes a few weeks back).

Vires in numeris
goatpig (Moderator, Legendary)
Armory Developer
August 05, 2015, 06:25:17 PM  #179

Forgot to mention: the DB format has changed quite a lot, so you are better off getting rid of that older DB and starting fresh.

What could account for the low CPU usage (20%) and a 2-day estimated scan time?

Eventually it all comes down to your drive's bandwidth. One deep optimization (that I skipped this time) would be to modify the DB engine to lay out all newly written data sequentially. Another big optimization I skipped would be to fragment the spentness DB. The history DB is easy to fragment into smaller subsets, so you can throw a thread at each of them while keeping the subset per thread fairly small; that speeds up searches and reduces the effort to realign data within each subset (as opposed to one massive DB).

The spentness DB, however, is one single block and is thus written to by a single thread. LMDB only allows a single writer per DB and a single transaction per thread (which is a common-sense approach to transactional DB design: one writer, unlimited readers). It takes quite an effort to split that data up so that multiple threads can write several subsets concurrently. History is keyed by address, so it's pretty simple to break it down into groups of addresses. Spentness is keyed by block height, transaction index and txout index, so breaking down that subset is more complicated: you need to fragment it in a way that still lets you resolve arbitrary searches with no more context than a transaction hash and txout id.
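
A hypothetical sketch of that keying problem (not Armory's actual schema): spentness entries sit under a (height, txIndex, txOutIndex) key, so answering a query that starts from only a transaction hash and a txout id implies a secondary hash-to-position index, and any per-thread sharding of the spentness data has to keep both lookups resolvable.

Code:
// Illustrative spentness key layout and lookup path. Not Armory's schema.
#include <array>
#include <cstdint>
#include <map>
#include <tuple>
#include <utility>

using TxHash = std::array<uint8_t, 32>;
using SpentnessKey = std::tuple<uint32_t, uint16_t, uint16_t>; // height, txIdx, txOutIdx

struct SpentnessDB
{
   std::map<SpentnessKey, TxHash> spentBy;                      // txout -> spending tx
   std::map<TxHash, std::pair<uint32_t, uint16_t>> txPosition;  // hash -> (height, txIdx)

   // Arbitrary lookup from (txHash, txOutIdx): resolve the position first, then
   // hit the spentness keyspace. A sharding scheme has to keep both steps cheap.
   const TxHash* isSpent(const TxHash& hash, uint16_t txOutIdx) const
   {
      auto pos = txPosition.find(hash);
      if (pos == txPosition.end())
         return nullptr;
      auto it = spentBy.find(
         std::make_tuple(pos->second.first, pos->second.second, txOutIdx));
      return it == spentBy.end() ? nullptr : &it->second;
   }
};

int main()
{
   SpentnessDB db;
   TxHash h{};                       // all-zero hash, just to exercise the lookup
   return db.isSpent(h, 0) ? 1 : 0;  // returns 0: nothing recorded as spent
}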

I have a good idea of how to implement that, and the change (which I would apply to the history DB as well) would allow for crazy-fast scanning on SSDs and moderately fast scanning on HDDs. However, it is massive, requires some dynamic parameters that would add a lot of interlocking in certain corner cases, and I had to wrap this version up at some point.

Keep in mind that the target use case for supernode is still a medium-to-large server meant to run as the backend to a web service like bc.info, or as a bootstrap server for litenodes. We have neither the time nor the resources to get supernode working on HDDs, and I'm not sure that will ever be target hardware for supernode either, although I may wander down that path for pure hobbyist satisfaction.

jje (Newbie)
August 05, 2015, 06:35:06 PM  #180

Got 0.94 working on Fedora 22. Very enthused about the smaller database. Hats off to the devs!