Bitcoin Forum
Author Topic: Effects of DBcache Size on Bitcoin Node Sync Speed  (Read 131 times)
SilverCryptoBullet (OP)
Member
Offline

Activity: 216
Merit: 90
June 28, 2024, 02:56:18 AM
Merited by pooya87 (4), vapourminer (2), ABCbits (2), odolvlobo (1)
 #1

https://blog.lopp.net/effects-dbcache-size-bitcoin-node-sync-speed/

Jameson Lopp ran another benchmark; this time he varied the dbcache parameter to find the optimal setting for the fastest Bitcoin node sync.

Related discussions and resources, including the raw data sheet:
https://github.com/jlopp/bitcoin-core-config-generator/issues/69?ref=blog.lopp.net
https://docs.google.com/spreadsheets/d/15ZxywThjwJdRMKrj3wF1pkyyD5ZY9_ia4vfyEIp30ao/edit?ref=blog.lopp.net&gid=0#gid=0
https://github.com/bitcoin/bitcoin/blob/aa2ce2d64696c030fb39f4e63b12271bb700eb28/src/kernel/chainparams.cpp?ref=blog.lopp.net#L108

Quote
An investigation into tradeoffs between different dbcache sizes when performing a full bitcoin node sync.

I recently had someone point out on my Bitcoin Core Config Generator project that there are tradeoffs with high dbcache settings and initial block download performance on low powered devices.

What is dbcache?
It's not exactly a cache in the traditional sense - it's mostly a write buffer and it prevents you from needing to regularly write the current state of the UTXO set to disk. This can be a performance improvement when syncing many blocks because you're avoiding having to make a ton of disk operations that are relatively slow.
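For reference, the write buffer is sized by Bitcoin Core's `dbcache` option (value in MiB; the default is 450). A minimal sketch of raising it, assuming the standard configuration locations:

```shell
# In bitcoin.conf (inside the data directory):
# give the UTXO write buffer 4 GiB instead of the 450 MiB default
dbcache=4096

# Or equivalently as a one-off command-line flag:
# bitcoind -dbcache=4096
```

The value is a soft target: Bitcoin Core flushes the chainstate to disk once the buffer fills.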

What's the problem?
In short, if your node crashes before the initial full sync is completed but you have a high enough cache setting that you never completely filled it, the node never flushes the UTXO set to disk. This means if you restart an interrupted sync it requires an incredibly resource intensive process of reindexing the blockchain in order to rebuild the UTXO set that you failed to persist to disk.

The discussion around these tradeoffs led to an interesting claim:

Quote
With a modern SSD there is very little reason to change the default especially because OS will use free RAM to opportunistically cache the filesystem in the free RAM anyway so a machine with higher RAM will always get an implicit speedup.
This claim made sense to me at a high level but I wasn't completely sure if it would hold true.

Testing Time
Naturally, I set forth to determine whether or not the theory could be proven with real world data. So I ran several node sync tests on my benchmark machine I've been using for 6 years. The raw results can be found in this spreadsheet.

[Chart: block height reached over time for the default, 4 GB, and 28 GB dbcache syncs]
If you're wondering why it takes the node ~5 minutes to start syncing, that's because it's doing the synchronization of the block headers first before starting to download any blocks.

We can see here that the huge dbcache sync was 24% faster than the default cache sync: 452 minutes versus 597 minutes with default cache size. Whereas with a moderate 4 GB dbcache it's only 10% faster than the default, taking 536 minutes.
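The speedup figures follow directly from the sync times in the spreadsheet; a quick check:

```python
# Sync times in minutes, taken from the benchmark results above
default_min = 597   # 450 MB default dbcache
moderate_min = 536  # 4 GB dbcache
huge_min = 452      # 28 GB dbcache

def speedup(baseline, faster):
    """Fraction of sync time saved relative to the baseline."""
    return (baseline - faster) / baseline

print(f"28 GB vs default: {speedup(default_min, huge_min):.0%} faster")     # 24%
print(f"4 GB vs default:  {speedup(default_min, moderate_min):.0%} faster")  # 10%
```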

If you look closely at the chart you might notice that the slope / rate of syncing slows down a bit around block 820,000. As we can see from the code here, the "assumed valid block" for the Bitcoin Core v27.1 release was at height 824,000. So at that point the node starts having to perform more CPU intensive operations by verifying all of the signatures on transaction inputs. However, disk I/O still remains a larger factor (bottleneck) when it comes to sync performance.

Let's visualize the sync times slightly differently so that we can more easily compare the performance gap. This chart shows the delta between how many minutes it took my benchmark machine to reach a given block height with a 28 GB dbcache size versus with the default dbcache size of 450 MB.

[Chart: minutes-to-height delta, 28 GB dbcache versus the 450 MB default]
We can see they're pretty much neck and neck until block ~485,000 which takes my machine 100 minutes to reach. After that point, the large dbcache performance breaks away and never looks back. If I were to speculate as to why, my bet is that the default 450MB dbcache doesn't fill up until you hit that part of the blockchain, so after that point the default sync will start flushing the chainstate to disk regularly, thus slowing down the sync.

Conclusion
The theory doesn't appear to hold true, and I think the reason is that dbcache is not primarily used as a (read) cache. As such, node sync performance cannot benefit from opportunistic filesystem caching at the operating system level.

However, the problem with interrupted initial node syncs is quite real. If you're performing a sync on low end hardware like a Raspberry Pi, it's probably worth the slightly slower sync time in order to protect against having to reindex the whole blockchain if the sync gets interrupted.

My takeaway: to run and fully sync a Bitcoin Core node, we want reasonably high-end hardware (an SSD and plenty of RAM), and we should pay attention to the dbcache parameter to avoid the node-crash-and-reindex issue.

Lopp also published a 2023 Bitcoin node performance test article.
ABCbits
Legendary
Offline

Activity: 3024
Merit: 7941
June 28, 2024, 08:52:33 AM
 #2

His finding is interesting. Setting aside the stated theory, I was expecting a bigger difference between the default cache and the 28 GB cache. I wonder what the result would be if he used a cheap SATA SSD or an HDD instead.

nc50lc
Legendary
Offline

Activity: 2562
Merit: 6240

Self-proclaimed Genius
June 28, 2024, 09:08:53 AM
Last edit: June 28, 2024, 09:35:15 AM by nc50lc
Merited by pooya87 (4), ABCbits (2), SilverCryptoBullet (1)
 #3

Quote
-snip- and pay attention to the dbcache parameter to avoid the node-crash-and-reindex issue.
A "node crash" isn't caused by a higher database cache size; the point of lowering dbcache is to minimize the chance of corrupting the UTXO set if the user expects frequent forced shutdowns.
In the general use case that isn't even a concern, and most users would prefer the faster sync speed.

Plus, the article says "probably", so the conclusion that a higher dbcache could cause a higher chance of corruption on abrupt shutdowns may be debatable.
A low dbcache setting that causes frequent flushing of the chainstate to disk may also be a factor to consider, especially on slow drives.

Knight Hider
Member
Offline

Activity: 346
Merit: 88

a young loner on a crusade
June 28, 2024, 07:20:21 PM
 #4

Quote
However, the problem with interrupted initial node syncs is quite real. If you're performing a sync on low end hardware like a Raspberry Pi, it's probably worth the slightly slower sync time in order to protect against having to reindex the whole blockchain if the sync gets interrupted.
A Raspberry Pi doesn't have 28 GB of RAM; low-end hardware forces you to use a smaller dbcache anyway.
Quote
I was expecting a bigger difference between the default cache and the 28 GB cache.
That's because the system still has 32 GB of memory. He should perform the same test with more aggressive file system caching.

achow101
Moderator
Legendary
Offline

Activity: 3514
Merit: 6863

Just writing some code
June 29, 2024, 12:24:34 AM
Merited by ABCbits (4), vapourminer (1), SilverCryptoBullet (1)
 #5

Quote
That's because the system still has 32 GB of memory. He should perform the same test with more aggressive file system caching.
I highly doubt that more aggressive filesystem caching, or even having the chainstate be on ramdisk, would result in a meaningful time difference.

When reading and writing data to disk, Bitcoin Core has to do many more things than when a large dbcache lets it skip the disk entirely. These include leveldb figuring out which file to read, leveldb's own decompression of the data in that file, Bitcoin Core decompressing its own compact representation of the UTXO, allocating memory for the UTXO, etc. These all happen regardless of whether the file is read from fast or slow storage, and they are all skipped when there's a large dbcache, which saves CPU time.
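As a rough illustration of the point (a toy sketch only; all names here are illustrative, and Bitcoin Core's real CCoinsViewCache in C++ differs substantially), a write-back layer like dbcache serves repeat UTXO lookups from memory and only pays the lookup/decode cost on a miss:

```python
# Toy write-back cache over a slow backing store. A plain dict stands in
# for leveldb; the "expensive path" comment marks where file lookup,
# decompression, and allocation would happen in the real node.

class UTXOCache:
    def __init__(self, backing_store):
        self.backing = backing_store  # stand-in for the on-disk chainstate
        self.cache = {}               # in-memory entries, flushed in bulk
        self.misses = 0

    def get(self, outpoint):
        if outpoint in self.cache:
            return self.cache[outpoint]     # cheap: no decode, no disk I/O
        self.misses += 1
        coin = self.backing.get(outpoint)   # expensive path on a miss
        self.cache[outpoint] = coin
        return coin

    def add(self, outpoint, coin):
        self.cache[outpoint] = coin         # new coins stay in memory until flush

    def flush(self):
        self.backing.update(self.cache)     # one bulk write, not many small ones
        self.cache.clear()

store = {"txid:0": "coin-a"}
view = UTXOCache(store)
view.add("txid:1", "coin-b")
assert view.get("txid:1") == "coin-b" and view.misses == 0  # served from memory
assert view.get("txid:0") == "coin-a" and view.misses == 1  # one slow read
```

The crash tradeoff from the article also falls out of this shape: everything in `cache` is lost if the process dies before `flush()` runs.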

NotATether
Legendary
Offline

Activity: 1750
Merit: 7326

In memory of o_e_l_e_o
June 29, 2024, 09:07:34 AM
 #6

Quote
I was expecting a bigger difference between the default cache and the 28 GB cache.

Not me. Performance here follows the law of diminishing returns: once you have allocated enough memory to the dbcache, even the larger blocks fit inside it along with all their transactions and the parent TXOs they spend, so the extra cache doesn't get fully utilized. That's why the performance improvement seems to level off between the orange and green lines.
