Just saw the HDD Optimization branch, goatpig seems to be working hard!
If I'm lucky, it should be done tonight. Otherwise, I'd say sometime tomorrow.
|
|
|
snip
Fullnode will be slow on HDD until I'm done with the current round of optimizations. Ignore supernode on HDDs. There will be a testing release when the changes are solid. Don't toy with the current version if you don't have a SSD.
I'm using an SSHD (hybrid drive) I doubt that will help. These usually top at 32GB of SSD, and most of that is occupied by your OS files. You'll have to wait for HDD opts. If all goes well I'll have something ready for early next week.
|
|
|
Especially important for our more-hardcore users, we now have a "supernode" mode, that doubles Armory's DB size, but indexes all scripts on the blockchain.
My supernode database file is 85.2 GB. This is 2.6x the size of the old Armory DB. Is this size expected? Can someone report the size of the new Armory's fullnode DB? (I'd imagine it doesn't vary significantly from machine to machine.)

85.2 GB - new Armory supernode
32.9 GB - old Armory fullnode
31.5 GB - Bitcoin Core
26.6 GB - raw blockchain

Yeah, supernode will be very large; it averages 90GB on my machine. That's partly due to LMDB. The Fullnode DB itself sits at around 50GB. The optimized version should approach a 1:1 ratio with Core's blocks folder.
|
|
|
Is the initial DB build/scan going to be this slow for a while?
I invest in a lot of cloud mining and I need to be able to monitor payments, invest more, etc. with a lot of addresses without having to check each one on blockchain.
Fullnode will be slow on HDD until I'm done with the current round of optimizations. Ignore supernode on HDDs. There will be a testing release when the changes are solid. Don't toy with the current version if you don't have a SSD.
|
|
|
(using 0.92.99.2 now) Adding a wallet on my supernode instantly shows the balance in the Available Wallets pane and the transactions, as expected (woot woot! The wait almost feels like it's worth it to see that). However, the balances in the lower right are not immediately updated. I can force them to update by modifying the Filter selected on the lower left. I can email screenshots documenting this if needed to clarify (for privacy reasons, not posting it publicly).

Is the wallet filter set to All Wallets?

No, it was set to My Wallets. I have a correction: there's still something wrong, but it's not what I reported at first. Immediately after changing a wallet from Watching-Only to Offline (i.e. specifying that I own it) while My Wallets is selected, the balance of that wallet does not show up in the Maximum Funds, etc. balances in the lower right. I can force them to update by modifying the Filter selected on the lower left.

Oh yeah, I didn't put in a mechanism to update wallet filters from individual wallet status changes. Good catch.
|
|
|
cool, i'll uninstall and delete everything Armory-related again, and try once more; let's see how long the build takes from here
bitcoin was kept as is, so it loaded up and synced the few blocks quickly. now it's quickly scanning through block headers, 50% done in a few minutes
There are no DB changes in this testing release, there won't be any difference.
|
|
|
Also, an update on the transaction scanning: I moved it from my HDD to SSD and now the estimate is 8 hours, and disk IO is about 30-80 MB/s. Maybe there's a lot of random access going on, not just linear access? That would explain why it's running so much faster, even though my HDD's linear access is much higher than 1-3 MB/s.
LMDB doesn't keep much of its data sequential, as opposed to LevelDB, and that's demolishing HDD speeds. I'm working on some optimizations for HDDs.

I for one am greatly awaiting this optimization! I was at 66% on "Building Databases" and it said 9 hours left. I'm at 71% now and it says 3 days. I'm not complaining. I know it's beta code. I'm willing to wait because Armory is a great product and I believe in it. You know the old saying... to paraphrase: "Good things come to those who wait."

Obviously we're not gonna release a DB in a poor state.
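The point about non-sequential layout can be demonstrated directly: reading the same number of blocks in scattered order forces a spinning disk to seek for every request, while sequential reads stream at full platter speed. Below is a minimal, self-contained sketch (not Armory code; file name and sizes are arbitrary) that times the two patterns. Note that on an SSD, or when the file fits in the OS page cache, the gap largely disappears — which is exactly why the scan runs so much faster on an SSD.

```python
import os
import random
import time

PATH = "io_pattern_test.bin"   # hypothetical scratch file
BLOCK = 4096                   # bytes read per request
COUNT = 2048                   # reads per access pattern

# Make the scratch file a few times larger than the read set so the
# random pattern actually has to jump around.
TOTAL_BLOCKS = COUNT * 4
with open(PATH, "wb") as f:
    f.write(os.urandom(BLOCK * TOTAL_BLOCKS))

def read_blocks(offsets):
    """Time reading one BLOCK at each offset, in the given order."""
    start = time.perf_counter()
    with open(PATH, "rb") as f:
        for off in offsets:
            f.seek(off)
            f.read(BLOCK)
    return time.perf_counter() - start

sequential = [i * BLOCK for i in range(COUNT)]
scattered = [i * BLOCK for i in random.sample(range(TOTAL_BLOCKS), COUNT)]

t_seq = read_blocks(sequential)
t_rnd = read_blocks(scattered)
print(f"sequential: {t_seq:.4f}s  random: {t_rnd:.4f}s")

os.remove(PATH)
```

On an HDD the random pass is typically an order of magnitude slower; LMDB's B-tree pages end up scattered like the second pattern, while LevelDB's sorted, append-heavy layout looks more like the first.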
|
|
|
85%, reporting 4 hours now, although the number randomly just jumps around.
maybe incorporate some sort of mini-game for the wait, or a scanned/total transactions counter ticking up (or down) until it's up to date? The time estimate doesn't make sense, so are there other options on the dashboard screen to show something else?
Nothing more on the dashboard. If you were running this on Linux, you'd see a report in the terminal for each 2500 block milestone. We'll adjust the progress bar granularity eventually.
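The milestone reporting described above is simple to picture — a line every 2500 blocks rather than a continuously interpolated ETA. A tiny illustrative sketch (the function name and block count are made up, not Armory's actual logging code):

```python
def milestone_heights(top_block, step=2500):
    """Block heights at which a progress line would be printed."""
    return list(range(step, top_block + 1, step))

# e.g. scanning up to a hypothetical block 10000 reports four times
for height in milestone_heights(10000):
    print(f"scanned up to block {height}")
```

Coarse milestones like this are honest about progress (blocks actually processed) where a time estimate has to extrapolate from wildly varying per-block costs, which is why the ETA jumps around.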
|
|
|
HDD is getting murdered I see. I'll test building on a HDD and optimize fullnode for that. Supernode is hopeless on a HDD to begin with.
|
|
|
I completely understand. And btw, I'm just running normal Armory, not supernode. Still scanning transactions, up to 46% now and reporting 11 hours to go.

What's your RAM and CPU usage at again?
|
|
|
This takes 1/2 an hour to fix; if you need a walkthrough, I'm happy to help. You just need to get the links that are broken, create pages, configure them to be the URLs that are now broken, and then use page links to send them to the correct (working) page. This would fix broken links across old builds, as well as this new build.

There is a wealth of solutions and yours certainly is an acceptable one. Thank you for your concern. However, the issue is about segregation of duties, not implementation. This simply is not a part of the codebase I deal with right now, and every time I ran across this bug in the past few months, I failed to pass it on to the right person. CircusPeanut is keeping track of everything being reported in this thread and assigning tasks to the proper personnel.

This may sound selfish, but I deal with the DB issues, not the rest. Maybe 90% of the DB code was changed this time around and I am solely focusing on getting that part solid. Also, I don't usually answer unless the issue is dire or I already have at least some sort of a fix worked in. However, I do read all the posts in this thread.

I'm already working on a couple of bugs I found on my own a few days ago, although I don't expect any of you will run into those. I'm also paying attention to DB building speed on the different machines you guys run. From that I will decide if there is a need to make anything faster. Keep in mind that I expect most private users will be running fullnode, not supernode, so give that DB type some love too =)
|
|
|
Links have been down for a while now. We're aware of the snafu but have been lazy about it I guess =(
|
|
|
16 GB RAM here and a quad-core i7, but no SSD
now up to 34% into scanning transaction history. my CPU usage is low, but disk and RAM usage are nearly 100%
I have a large swap file though, so I wonder if that's what's causing the delay in my case?
It's not helping that the OS is constantly moving stuff in and out of swap; it eats disk I/O for no reason. You can try without it. You can halt the scan process at any time, even by killing the process; the DB is resilient to segfaults.
|
|
|
I have 16 GB of RAM as well, and my swap file is on my SSD. However, I have another task using just under 9GB. Armory uses about 900MB in the scanning stage. Is Armory limiting itself because it sees that there isn't a lot of RAM to spare, or is something else the cause of this 900MB vs +2GB thing?
As I said, LMDB is a mapped DB. File mapping is entirely controlled by the OS. Depending on how much resource your other active processes require, the OS will throttle down the amount of physical RAM it makes available to hold the mapped file. This in turn will crush the write speed (which needs to resolve the page on which the key is or will be before it can write the data). My scans usually take an extra 30-40% longer if I play old Steam games in the meantime.

To give you an idea, I use a laptop with an i7 Haswell, 32GB DDR3 and 3 micro SSDs in RAID0, with no swap file. It takes me about 1h30 to copy the blocks over and 3h30 to scan supernode (last time I did it was around block 335k). About 45 minutes to scan fullnode, but that's because the processing is performed in a single thread. Fullnode scans could be lightning fast if I multithreaded that part, as they require very little writing past the block copying. However, that again was not a priority.

I can't wait for DDR4 in laptops so that I can move to 128GB RAM
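The "mapped DB" behavior described above — the process just touches memory and the OS decides which pages live in RAM and when dirty pages hit disk — can be sketched with Python's stdlib `mmap`, which is the same mechanism LMDB builds on. This is an illustration of memory mapping in general, not LMDB's actual code; the file name and sizes are arbitrary, and `mm.flush()` plays the role LMDB's sync call plays.

```python
import mmap
import os

PATH = "mapped_demo.db"
SIZE = 1 << 20  # reserve 1 MiB up front, analogous to LMDB's map size

# Pre-size the file, then map it. Reads and writes go through the OS
# page cache, so the OS alone decides how much physical RAM the
# mapping occupies at any moment -- exactly the throttling described.
with open(PATH, "wb") as f:
    f.truncate(SIZE)

with open(PATH, "r+b") as f:
    mm = mmap.mmap(f.fileno(), SIZE)
    mm[0:5] = b"hello"   # a plain memory write, no explicit I/O call
    mm.flush()           # force dirty pages out to disk
    mm.close()

with open(PATH, "rb") as f:
    first = f.read(5)
print(first)  # b'hello'

os.remove(PATH)
```

When other processes compete for RAM, the kernel evicts mapped pages; every write that lands on an evicted page then stalls on a disk read to fault it back in, which is why a busy machine (or a swap file on an HDD) crushes scan speed.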
|
|
|
You are still building the DB (copying block data over), not scanning (actually parsing the transaction data). When scanning supernode, you will be using over 2GB RAM at least.
But why does it use so few resources? That's what I don't understand. About 40% of RAM is free on my system at any given time; why doesn't Armory take it? Same with IO.

The DB creation works in 2 phases:
1) Building, which is simply copying the blockchain as a whole over to the DB. This is the phase you are in, and it takes very little resources. This code is single-threaded and was barely modified in this release (only to support headers-first Core). It could be made faster but this isn't a priority in any way.
2) Scanning the transaction history: this part requires heavy IO and processing, and has been largely modified. It will eat all the RAM on your system and the I/O capacity of your drives, take several hours/days, and run on 2 to 4 threads (possibly more to come).
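The two-phase split above can be pictured as a toy pipeline — this is a deliberately simplified model, not Armory's implementation; the names (`raw_blocks`, `parse_block`) and the use of SHA-256 as a stand-in for transaction parsing are invented for illustration:

```python
import hashlib
from concurrent.futures import ThreadPoolExecutor

# Stand-in for the raw block files on disk.
raw_blocks = [f"block-{i}".encode() for i in range(1000)]

# Phase 1: "building" -- a single-threaded, straight copy of the
# block data into the DB. Cheap on CPU and RAM, hence the low
# resource usage observed during this phase.
db_blocks = list(raw_blocks)

# Phase 2: "scanning" -- parsing every copied block for transaction
# history. CPU/IO heavy, so it is spread across a small thread pool.
def parse_block(raw: bytes) -> str:
    return hashlib.sha256(raw).hexdigest()

with ThreadPoolExecutor(max_workers=4) as pool:
    history = list(pool.map(parse_block, db_blocks))

print(len(history))  # 1000
```

The asymmetry is the point: phase 1 is bound by a single sequential copy loop, so idle RAM and cores can't help it, while phase 2 fans the work out and will saturate whatever the machine has.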
|
|
|
snip
The obvious bottleneck is I/O. Moreover, LMDB maps its whole dataset in RAM, so a machine with little RAM and a swapping file on a HDD will get crushed.
I have 16 GB of RAM and a 1 GB swap file on my SSHD (85 MB/s read/write); it still goes extremely slowly, and I see Armory using ≤41 MB of the swap file and ≤200 MB of RAM. Armory only uses about 1 MB/s of IO right now.

You are still building the DB (copying block data over), not scanning (actually parsing the transaction data). When scanning supernode, you will be using over 2GB of RAM at least.
|
|
|
Hi,
I'm responsible for the DB code. For those who care, it takes me 3h30 to build supernode, which approximates 90GB with the current blockchain size.
If you have less than 16GB of RAM and a HDD, do not expect supernode to build in under a week. Your machine can't handle it. This DB type tracks ALL transactions in the blockchain; it's meant for heavy-duty servers. You should at least try and see how long it takes your system to build fullnode before thinking of going with supernode.
The obvious bottleneck is I/O. Moreover, LMDB maps its whole dataset in RAM, so a machine with little RAM and a swapping file on a HDD will get crushed.
|
|
|
Is deterministic signing fully implemented? Is the deterministic signing code coming from libsecp256k1?
Yeah, it's fully implemented and set by default. We still use cryptopp only.

1) Can't export the log file. On a new Linux install, with Core 0.10.0 and in Supernode, go to "File" > "Export Log File" > "OK".
Translation tags (tr()) have introduced some bugs, we'll get on top of it
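For those curious what "deterministic signing" means in practice: the standard scheme for deterministic ECDSA nonces is RFC 6979, which derives the nonce from the private key and message via HMAC so that signing never depends on a random number generator. The sketch below shows the core HMAC-SHA256 derivation. It is a simplified illustration — whether Armory's Crypto++-based code follows RFC 6979 byte-for-byte is an assumption, and the retry loop that rejects out-of-range candidates is omitted.

```python
import hashlib
import hmac

def rfc6979_nonce(priv32: bytes, msghash32: bytes) -> bytes:
    """First nonce candidate per RFC 6979 with HMAC-SHA256.
    Simplified: assumes 32-byte inputs and skips the retry loop
    that rejects candidates outside [1, n-1]."""
    V = b"\x01" * 32
    K = b"\x00" * 32
    K = hmac.new(K, V + b"\x00" + priv32 + msghash32, hashlib.sha256).digest()
    V = hmac.new(K, V, hashlib.sha256).digest()
    K = hmac.new(K, V + b"\x01" + priv32 + msghash32, hashlib.sha256).digest()
    V = hmac.new(K, V, hashlib.sha256).digest()
    return hmac.new(K, V, hashlib.sha256).digest()

key = b"\x01" * 32  # toy private key, for illustration only
h = hashlib.sha256(b"some tx digest").digest()

# Same key and message always produce the same nonce...
assert rfc6979_nonce(key, h) == rfc6979_nonce(key, h)
# ...while a different message produces a different one.
assert rfc6979_nonce(key, hashlib.sha256(b"other").digest()) != rfc6979_nonce(key, h)
```

The determinism is what matters for wallets: a repeated or biased nonce leaks the private key in ECDSA, so deriving it from the key and message removes that entire failure class.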
|
|
|