Author Topic: Pruned mode syncing doesn't cache file writes?  (Read 1398 times)
jnano (OP)
Member
Activity: 301
Merit: 74
October 19, 2017, 11:14:31 PM
Merited by ABCbits (1)
 #1

I have Bitcoin Core in pruned mode.
I launched it after one day offline, and syncing really grinds the HDD while barely progressing. It's doing something like 1 block per minute.

Only 250MB of files in the chainstate directory were touched, so the writes should be easily cacheable.
Is there a way to get write caching?

achow101
Moderator
Legendary
Activity: 3374
Merit: 6535

Just writing some code
October 20, 2017, 12:33:53 AM
Merited by Jet Cash (2)
 #2

Increase the size of the dbcache by starting Bitcoin Core with the -dbcache=<n> option or adding dbcache=<n> to your bitcoin.conf file, where <n> is the amount of RAM in MB that you want to dedicate to the database cache. Increasing this will reduce the amount of disk IO, as the database will be flushed less frequently. A dbcache of ~6000 should allow the entire database cache to be held in memory so only one flush is required.
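For reference, a minimal sketch of both ways to set it (bitcoind is shown, but bitcoin-qt accepts the same option; 6000 is just an example value):
Code:
# one-off, on the command line (value in MB):
bitcoind -dbcache=6000
# or persistently, by adding this line to bitcoin.conf:
dbcache=6000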

jnano (OP)
Member
Activity: 301
Merit: 74
October 20, 2017, 01:44:26 AM
 #3

dbcache was set to 3000. I don't think I saw the process use anywhere close to that much memory, but I'll look more closely next time.

Is dbcache only for the UTXO set? Is it for reads or writes?




achow101
Moderator
Legendary
Activity: 3374
Merit: 6535

Just writing some code
October 20, 2017, 03:33:44 AM
 #4

Quote from: jnano
dbcache was set to 3000. I don't think I saw the process use anywhere close to that much memory, but I'll look more closely next time.

Is dbcache only for the UTXO set? Is it for reads or writes?
The dbcache is used for holding the UTXO set in memory. Once the dbcache is full, the UTXO set is flushed to disk. A higher dbcache means it is flushed less often, which reduces disk IO on the UTXO-set side of things. However, during syncing, blocks are also being written to and deleted from disk, and that is a major source of disk IO and slowdown.

LoyceV
Legendary
Activity: 3290
Merit: 16550

Thick-Skinned Gang Leader and Golden Feather 2021
October 20, 2017, 11:23:29 AM
Last edit: October 20, 2017, 11:48:20 AM by LoyceV
Merited by bob123 (2)
 #5

Quote from: jnano
Is there a way to get write caching?
In general, your Operating System should take care of this on a disk level.
I have good experiences running a pruned Bitcoin Core from a RAM drive. If you have enough RAM, you can do this, and copy it to HDD once it's done syncing.

On Linux, this works for me:
Code:
mkdir /dev/shm/prunedBitcoin                            # create new directory in /dev/shm, which by default uses up to 50% of available RAM
chmod 700 /dev/shm/prunedBitcoin                        # basic security on a multi-user-system
bitcoin -datadir=/dev/shm/prunedBitcoin -prune=550      # now we wait
mv /dev/shm/prunedBitcoin ~                             # move to your home-directory after you close Bitcoin Core

It's now downloading at the maximum speed my Wifi allows; estimated time left: 9 hours (although I know from experience it will take a bit longer, as it's limited by my CPU later on).
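If you want to check beforehand whether the pruned data dir will fit, a quick sanity check (assuming the default tmpfs mount):
Code:
df -h /dev/shm      # available tmpfs space, by default up to 50% of RAM
free -h             # overall memory, to be sure the pruned datadir plus dbcache will fit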


jnano (OP)
Member
Activity: 301
Merit: 74
October 20, 2017, 03:27:41 PM
Merited by vapourminer (1)
 #6

Another check, this time with dbcache=6000. The cache still isn't working. I'm using 0.15.0.1 x64 on Windows 8.

After the first 15 minutes, the sync rate is less than 1 block/minute. Memory usage peaked at about 300 MB.
I/O is more reads than writes, and there's a constant opening and closing of chainstate files.

Watching file accesses for about a minute, there were a few hundred I/Os to block data (one file and its rev file),
and more than 100,000 I/Os to chainstate files.

Startup: (screenshot)

After a few more minutes: (screenshot)
LoyceV
Legendary
Activity: 3290
Merit: 16550

Thick-Skinned Gang Leader and Golden Feather 2021
October 20, 2017, 06:15:14 PM
 #7

Quote from: jnano
Another check, this time with dbcache=6000. The cache isn't working. I'm using 15.0.1 x64 on Windows 8.

After 15 initial minutes, sync rate is less than 1 block/minute. Memory usage peaked at about 300 MB.
What hardware specs do you have? The forum seemed to be under DDoS earlier; by the time I could edit my post, Bitcoin Core had already downloaded 15%. And that's just an old i3, but with 12 GB RAM.

How far behind are you? 1 block/minute is only 10 times real time, which means it could take months to catch up.

Quote
After a few more minutes:

I see 159,271 Page Faults in just 3 minutes. That's 843 per second. I'm no expert on Windows, and I'm not sure what this means exactly, but this seems high to me. From Wiki:
Quote
Major page faults on conventional computers (which use hard disk drives for storage) can have a significant impact on performance. An average hard disk drive has an average rotational latency of 3 ms, a seek time of 5 ms, and a transfer time of 0.05 ms/page. Therefore, the total time for paging is near 8 ms (= 8,000 μs). If the memory access time is 0.2 μs, then the page fault would make the operation about 40,000 times slower.
Your sync speed looks a lot like my old Atom netbook with 1 GB ram (and SSD). Are you limited on RAM?

jnano (OP)
Member
Activity: 301
Merit: 74
October 20, 2017, 08:32:31 PM
 #8

That particular sync was to catch up after 12 hours offline.

The only bottleneck here is the HDD, combined with the lack of caching, which leads to a lot of random small reads and writes.
RAM is not a problem. Bitcoin Core never used much more than the 300MB you see in the screenshot anyway.

The page faults are normal; I assume they simply reflect the disk accesses the software makes. Also, the 3 minutes shown are CPU time, not total runtime.
A random reference point: launching Notepad entails about 2000 page faults in 50ms.

LoyceV
Legendary
Activity: 3290
Merit: 16550

Thick-Skinned Gang Leader and Golden Feather 2021
October 20, 2017, 09:03:25 PM
 #9

Quote from: jnano
The only bottleneck here is the HDD, combined with the lack of caching, which leads to a lot of random small reads and writes.
I have the same setup on HDD (although I didn't prune it), and I barely ever turn it off. I've turned it off now; tomorrow morning I'll turn it on again and tell you how long it takes to sync.

In my RAM drive experiment, it's still doing several blocks per second. It's now less than 2 years behind, at a point where blocks were not yet full.

Quote
RAM is not a problem. Bitcoin Core anyway never used much more than the 300MB you see in the screenshot.
Your OS uses RAM too, and the more you have, the more file cache it can use. Hence my question: how much RAM do you have? I noticed a huge overall improvement when I went from 4 to 12 GB on Linux.

Quote
Page faults are normal. I guess it's the disk accesses due to what the software does. The 3 minutes are CPU time, not total runtime.
A random reference point: launching Notepad entails about 2000 page faults in 50ms.
I see. It was worth a try :)

jnano (OP)
Member
Activity: 301
Merit: 74
October 20, 2017, 09:14:43 PM
 #10

RAM is 8GB. Bitcoin Core isn't restricting itself to 300MB out of politeness; most memory is free.
I don't think it limits the cache size dynamically, does it? The explicit dbcache setting suggests it just uses as much as it wants, up to that limit.

I think a RAM drive will indeed solve it, but shouldn't the software just know how to use RAM directly? :)





LoyceV
Legendary
Activity: 3290
Merit: 16550

Thick-Skinned Gang Leader and Golden Feather 2021
October 21, 2017, 05:42:04 AM
 #11

Quote from: jnano
RAM's 8GB. BitcoinCore isn't restricting itself to 300MB because it tries to be polite, most memory is free.
The OS uses the free memory for file caching.

Quote
I don't think it limits the cache size dynamically, does it? The explicit dbcache setting suggests it just uses as much as it wants, up to that limit.
I'm not sure how it works internally. Click "Help > Debug window" to see its memory usage for the memory pool; I always assumed that's the amount of cache it uses (but I'm not entirely sure), and the memory pool doesn't hold that many unconfirmed transactions.

Quote
I think a RAM drive will indeed solve it, but shouldn't the software just know how to use RAM directly? Smiley
One would think so! I don't know why it's so slow for you; my test went fine:

Quote from: LoyceV
I have the same setup on HDD (although I didn't prune it), and barely turn it off. I've turned it off now, tomorrow morning I'll turn it on again and tell you how long it takes to sync.
It took 70 seconds to sync 11 hours of blocks. That's about 1 block per second.

jnano (OP)
Member
Activity: 301
Merit: 74
October 21, 2017, 01:06:02 PM
 #12

Thanks for checking. 70 seconds is much more reasonable (though it could still be quicker). Were similar syncs quicker on your SSD install?

The memory pool holds unconfirmed transactions that aren't yet in blocks; it's not the total memory usage of the client.

Windows might use free memory for its own caching, but it's relinquished if software needs it.



achow101
Moderator
Legendary
Activity: 3374
Merit: 6535

Just writing some code
October 21, 2017, 04:58:18 PM
 #13

Quote from: jnano
RAM is not a problem. Bitcoin Core anyway never used much more than the 300MB you see in the screenshot.
Then you have probably set the parameter incorrectly. How did you set dbcache? Can you post the contents of your bitcoin.conf file? Can you post the contents of your debug.log file?

Quote from: LoyceV
I'm not sure how it works internally, click "Help > Debug window" to see it's Memory usage for the Memory Pool, I always assumed that's the amount of cache it uses (but I'm not entirely sure), and memory pool doesn't have so many unconfirmed transactions.
The mempool and the dbcache are unrelated to each other. They have their own memory allocations and are configured with different options.


OP, what kind of HDD do you have? What is your CPU? CPU can also be a major bottleneck given the amount of computation that needs to be done to validate blocks.

LoyceV
Legendary
Activity: 3290
Merit: 16550

Thick-Skinned Gang Leader and Golden Feather 2021
October 21, 2017, 04:58:22 PM
 #14

Quote from: jnano
Thanks for checking. 70 seconds is much more reasonable (though still could be quicker). Were similar syncs quicker on your SSD install?
I've never installed a full Bitcoin Core on my SSD (it's not big enough). Syncing also uses a lot of CPU, so I'm very happy with 1 block (1 MByte) per second.

I had turned off my pruned test on the RAM drive; I'm testing it again now (blocks from April 10, 2017). My 200-block test took just over 5 minutes, which is even slower than my HDD test this morning.

It's an interesting pattern though: sometimes it stops using much CPU; I think it then downloads many blocks ahead at 5 MByte/s, and afterwards continues processing them. CPU load goes from 20 to 350% (of 4 virtual cores). Then, after a while, it stops downloading while still processing blocks. It seems inefficient not to download and sync continuously at the same time. I'm still using Bitcoin Core v0.14.2.0, so this might be faster in the latest version.

I can't tell you the cause of your slow sync; it's a huge speed difference compared with my test. If you ever figure it out, please post your results here.

achow101
Moderator
Legendary
Activity: 3374
Merit: 6535

Just writing some code
October 21, 2017, 05:46:00 PM
 #15

Quote from: LoyceV
I've never installed a full Bitcoin Core on my SSD (it's not big enough). Syncing also uses a lot of CPU, so I'm very happy with 1 block (1 MByte) per second.

I had turned off my pruned test on the RAM drive, testing it now (blocks from April 10, 2017): My 200 block test took just over 5 minutes, that's even slower than my HDD-test this morning.

It's an interesting pattern though: sometimes it stops using much CPU, I think it then downloads many blocks ahead at 5 Mbyte/s, and after this continues processing them. CPU load goes from 20 to 350 % (of virtual 4 cores). Then, after a while, it stops downloading, while still processing blocks. It seems inefficient not downloading and syncing continuously at the same time. I'm still using Bitcoin Core version v0.14.2.0, so this might be faster in the latest version.

I can't tell you what's the cause of your slow sync, it's a huge speed difference with my test. If you ever figure it out, please post your results here.
When you have pruning enabled with the default settings (prune=550), Bitcoin Core will try to keep at most 550 MB of block and undo data on disk. This means that ~225 MB of blocks are actually stored. After that, it begins deleting blocks starting with the oldest.

When you are syncing, Bitcoin Core downloads blocks as fast as possible, does a basic check on them, and then writes them to disk. But validating and connecting the blocks takes longer than the download and quick check, so the ~225 MB allocated for blocks fills up quickly. Those blocks can't be deleted yet because validation has not completed on the older blocks, so the download pauses until validation catches up, then resumes, and the cycle repeats. Because of this, syncing a pruned node may take longer.

When you don't have pruning enabled, Bitcoin Core can just keep downloading and checking blocks as fast as possible without any consideration as to where the validation is at.
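For anyone tweaking this, roughly how the prune option behaves (the value is a target in MiB of block plus undo data, and 550 is the minimum Core accepts for automatic pruning; the values below are only examples):
Code:
# in bitcoin.conf:
prune=0      # default: keep all blocks, no pruning
prune=1      # allow manual pruning via the pruneblockchain RPC
prune=550    # automatic pruning with the minimum ~550 MiB target
prune=2048   # same, but with a ~2 GiB target, so more blocks stay on disk between deletions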

fabioganga
Full Member
Activity: 478
Merit: 113
October 21, 2017, 09:34:24 PM
 #16


Quote from: achow101
When you have pruning enabled with the default settings (prune=550), Bitcoin Core will try to keep at most 550 MB of block and undo data on disk. This means that ~225 MB of blocks are actually stored. After that, it begins deleting blocks starting with the oldest.

When you are syncing, Bitcoin Core is downloading blocks as fast as possible, doing a basic check on them, and then writing them to disk. But validating the blocks and connecting them takes longer than the download and quick check, so the 225 MB of blocks allocated fills up quickly, but blocks can't then be deleted because the validation has not yet completed on the older blocks, so the download pauses so that the validation can catch up. Then it resumes and repeats this process. Because of this, syncing a pruned node may take longer.

When you don't have pruning enabled, Bitcoin Core can just keep downloading and checking blocks as fast as possible without any consideration as to where the validation is at.

Thanks for the clear explanation. I have raised my prune value to prune=2048. Would this speed things up a bit or not?
jnano (OP)
Member
Activity: 301
Merit: 74
October 21, 2017, 10:06:29 PM
Last edit: October 21, 2017, 10:26:19 PM by jnano
 #17

Quote from: LoyceV
I had turned off my pruned test on the RAM drive, testing it now (blocks from April 10, 2017): My 200 block test took just over 5 minutes, that's even slower than my HDD-test this morning.
So, pruned on RAM drive is slower than the HDD test? Is the HDD test pruned as well?

But there are two differences between us: you're on v0.14.2.0 and Linux, I'm on 0.15.0.1 (x64) and Windows.


Quote from: achow101
Then you have probably set the parameter incorrectly. How did you set dbcache? Can you post the contents of your bitcoin.conf file?
The dbcache setting is acknowledged in the settings window: (screenshot)

This is bitcoin.conf with comment lines removed:
Code:
prune=2000
minimizetotray=1
dbcache=6000

Quote
Can you post the contents of your debug.log file?
Lots of lines. Anything specific? The cache parts read:
Code:
2017-10-20 14:21:35 * Using 2.0MiB for block index database
2017-10-20 14:21:35 * Using 8.0MiB for chain state database
2017-10-20 14:21:35 * Using 5990.0MiB for in-memory UTXO set (plus up to 286.1MiB of unused mempool space)

Do the small figures refer to something other than caching? I thought the UTXO set is the same as chainstate db.
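One thing I can try is watching how big the in-memory cache actually gets between flushes; if I'm reading the log format right, the UpdateTip lines in debug.log carry a cache= field (the grep pattern below is just my guess at that format):
Code:
grep -o "cache=[0-9.]*MiB([0-9]*txo)" debug.log | tail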

Quote
What is your CPU? CPU can also be a major bottleneck given the amount of computation that needs to be done to validate blocks.
CPU usage is single-digit on average, with peaks crossing 10%. The screenshot in the earlier post shows a typical pattern; here's another startup: (screenshot)

Quote
what kind of HDD do you have?
Internal 2.5" 5400rpm. Not quick, but all things considered, not much different from other HDDs when it comes to seeking. :)

The I/O patterns and lack of caching are the root cause. It's constantly reading at a few MB/s, mixed with writes at a few hundred KB/s and an occasional >1MB write.

The following ticket mentions changes in caching in v15. Maybe the issue is related?
https://github.com/bitcoin/bitcoin/issues/10647

Another ticket says prune mode flushes UTXO cache constantly in a v15 beta:
https://github.com/bitcoin/bitcoin/issues/11315
jnano (OP)
Member
Activity: 301
Merit: 74
October 24, 2017, 02:37:45 AM
 #18

Temporary solution:

I tried what LoyceV suggested.

I copied the whole "chainstate" directory to a RAM drive, then replaced the "chainstate" directory under Core's data dir with a symlink pointing at the RAM drive copy.
That made syncing an order of magnitude faster. When syncing was done, I copied the RAM drive contents back into the physical "chainstate" directory and restarted Core.
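For anyone who wants to replicate this on Linux, roughly the equivalent steps (paths are examples, and on Windows a directory link created with mklink serves the same purpose):
Code:
# with Core stopped, and ~/.bitcoin as the data dir:
cp -a ~/.bitcoin/chainstate /dev/shm/chainstate      # copy the current chainstate into tmpfs
mv ~/.bitcoin/chainstate ~/.bitcoin/chainstate.hdd   # keep the on-disk copy aside
ln -s /dev/shm/chainstate ~/.bitcoin/chainstate      # point Core at the RAM copy
bitcoind                                             # sync, then shut down cleanly
rm ~/.bitcoin/chainstate                             # remove the symlink
cp -a /dev/shm/chainstate ~/.bitcoin/chainstate      # copy the synced chainstate back to disk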


jnano (OP)
Member
Activity: 301
Merit: 74
November 22, 2018, 02:22:18 PM
 #19

I had some hope that v0.17's PR #11658 would help, but there's no appreciable difference.

Without a RAM drive, and unless a proper caching system is implemented for the chainstate, it seems that decent performance requires an SSD. Maybe an HDD is sufficient for a non-pruned node, but I haven't tried that.

jnano (OP)
Member
Activity: 301
Merit: 74
June 18, 2019, 06:43:55 PM
Last edit: June 19, 2019, 11:03:27 AM by jnano
 #20

Quote
In my experience, HDD alone is sufficient for IBD as long as the HDD is used exclusively for IBD.
Perhaps you're not running pruned?

Quote
In your case, IBD most likely is very slow because it's also used for Windows OS
It's purely due to Core's caching behavior; there's no other appreciable disk activity.

Quote
Stop using HDDs, problem solved and no dev-time gets wasted.
Yeah, people should adapt to software rather than the other way around. Great way to encourage full nodes, too.


Anyway, it's mainly a question of whether there's developer interest, and luckily that seems to exist.

In particular, a while back Sjors spent time benchmarking and trying things out. There was also a PR from esotericnonsense. What got merged into 0.17 came from luke-jr. MarcoFalke or jamesob wanted to regularly benchmark pruned behavior, and they seem to care about HDD performance, as evident on BitcoinPerf, which graphs IBD time on HDD versus SSD, though currently only non-pruned.

There are two current open tickets:
#11315 Prune undermines the dbcache
#12058 Slow catchup for recent blocks on non-SSD drive
Lauda
Legendary
Activity: 2674
Merit: 2965

Terminated.
June 19, 2019, 01:52:32 PM
 #21

Quote
Stop using HDDs, problem solved and no dev-time gets wasted.
Quote from: jnano
Yeah, people should adapt to software rather than the other way around. Great way to encourage full nodes, too.
Yes, that's exactly the best way. You think you are entitled to demand that Core implement something for you; well, you are not. There is more important work that needs doing. What makes matters worse here is that you complain about the suggestion yet you run in pruned mode. You need maybe $10 worth of SSD to run it in pruned mode with your settings.
jnano (OP)
Member
Activity: 301
Merit: 74
June 19, 2019, 03:37:35 PM
 #22

Quote from: Lauda
You think you are entitled to demand

I'm not demanding anything. I'm describing a problem that I (and others) encounter, the same as any other report from the userbase. What's done with that is up to whoever is capable and willing.

Even if your suggestion works for you, it doesn't work for everyone. Besides my own case, it's also an issue when I suggest to people, especially laymen, how they should use Bitcoin. I do mention Core for some of its benefits, but on the other hand it's a burden to normal users: 200GB+ of storage, or a pruned mode that brings your computer down if you're on an HDD.


Quote
Currently not, but in the past I used pruned mode and didn't experience such a problem. Take note I use a 3.5" 7200 RPM HDD on Linux, while you're using a 2.5" 5400 RPM drive, which is slower than mine.

Perhaps something changed in how pruned mode or caching is handled since v0.15? Or maybe it's some difference in settings (I don't remember the details, but I think Sjors saw cases where a larger dbcache degrades performance). Anyway, the specific HDD might make a small difference, but it doesn't change the big picture. The problem is documented well enough and was also encountered by Core devs.


Quote from: Lauda
You need maybe $10-worth-of-SSD
Regardless of this issue, if you could point me in the right direction I'd love to get a reliable 1TB SSD for $10 to take the place of the 1TB 2.5" HDD.

Lauda
Legendary
Activity: 2674
Merit: 2965

Terminated.
June 19, 2019, 03:41:52 PM
Merited by bones261 (2)
 #23

Quote
You think you are entitled to demand
I'm not demanding anything. I'm describing a problem that I (and others) encounter, the same as any other report from the userbase. What's done with that is up to whoever is capable and willing.

Even if your suggested solution works for you, it doesn't work for everyone. Besides myself, I also find it an issue when I suggest to people, especially layman, how they should use Bitcoin. I do mention Core for some of its benefits, but on the other hand it's a burden for normal users. 200GB+ of storage, or pruned mode that brings your computer down if you're on HDD.
Alright, no demanding. You need to keep in mind that pruning is not a priority like full nodes are.

Quote
You need maybe $10-worth-of-SSD
Regardless of this issue, if you could point me in the right direction I'd love to get a reliable 1TB SSD for $10 to take the place of the 1TB 2.5" HDD.
You are using pruned mode, you said so yourself. Therefore you don't need 1 TB, you need 10-20 GB at most. FYI, you can get a 1 TB SSD for ~$130, and I'm talking about the most reliable SSDs (e.g. Samsung EVO). It's actually a great time to pick one up!
jnano (OP)
Member
Activity: 301
Merit: 74
June 19, 2019, 03:51:17 PM
 #24

Reading some discussions on GitHub, it seems the importance of pruned mode (and of other space or time optimizations like assumeutxo) increases as the size of the blockchain grows. That is, assuming the goal is not to drive more users to SPV or to centralized bitcoin web wallets. Bitcoin Core is really not user-friendly when you add it all up, which is a shame.

About that SSD, I'd need the capacity to at least match that of the HDD it would replace in the laptop.


Lauda
Legendary
Activity: 2674
Merit: 2965

Terminated.
June 19, 2019, 04:06:54 PM
 #25

Quote from: jnano
Reading some discussions on Github, it seems the importance of pruned mode (and other space or time optimization methods like assumeutxo) increases as the size of the blockchain increases. That, assuming there's the goal of not driving more users to SPV or centralized bitcoin web wallets. Bitcoin Core is really not user-friendly when you add it all up, which is a shame.
Nah, pruning is a done deal. The other things that are being worked on are not related to improving pruning (but they might, down the road).

Quote from: jnano
About that SSD, I'd need the capacity to at least match that of the HDD it would replace in the laptop.
The problem is your laptop HDD, i.e. the 5400 RPM that ETFbitcoin mentioned.
jnano (OP)
Member
Activity: 301
Merit: 74
June 19, 2019, 07:02:45 PM
 #26

It could be noticeable, but it would be far smaller than the 10 times difference you get with proper caching (simulated using a RAM drive).

I haven't done real research, but my brief search suggests that 2.5" 5400 RPM drives might actually be pretty similar to 3.5" 7200 RPM drives as far as random 4K I/O is concerned, depending on the specific models compared. See the following on StorageReview (a rather reputable storage-centric site): 2.5" HDD review and 3.5" HDD review.

Never mind the specific models; the reviews also include comparative data for other drives. The IOMeter 4K random I/O tables show 2.5" drives, probably 5400 RPM, getting 50-160 IOPS. The 3.5" 7200 RPM drives get 50-180 IOPS. The top 7200 RPM drives (WD RE4 and Caviar Black) are at 80-180, so faster, but only by about 15%. The only drives showing a real difference are the 10K RPM VelociRaptors, at 140-370 IOPS, but those drives are far from typical.

It could be that 2.5" drives gain some advantage over 3.5" drives due to the shorter distance the heads need to travel (equivalent to short-stroking), then lose some due to mobile-use optimizations. Also the RPM difference could contribute.

SSDs are of course much faster (though even there proper caching may have some benefits).
jnano (OP)
Member
Activity: 301
Merit: 74
June 20, 2019, 10:42:51 PM
 #27

I bet some tweaks could improve HDD performance, but I doubt any would be enough to turn it into a non-issue.

Still hoping for a caching solution one day. In the meantime, I think the best solution is for me to script the RAM drive procedure.
LoyceV
Legendary
Activity: 3290
Merit: 16550

Thick-Skinned Gang Leader and Golden Feather 2021
June 21, 2019, 08:36:07 AM
Last edit: June 21, 2019, 05:57:03 PM by LoyceV
Merited by dbshck (4), ABCbits (1)
 #28

Quote from: jnano
~, but on the other hand it's a burden to normal users. 200GB+ of storage, or pruned mode that brings your computer down if you're on an HDD.
I can't believe this is still an issue after almost 2 years!

Quote from: jnano
It could be noticeable, but it would be far smaller than the 10 times difference you get with proper caching (simulated using a RAM drive).
I still think this might be an Operating System problem. Are you still only using Windows for this?

Quote from: jnano
Reading some discussions on Github, it seems the importance of pruned mode (and other space or time optimization methods like assumeutxo) increases as the size of the blockchain increases.
I can confirm this for my case: I pruned my blockchain to 99 GB (the maximum amount allowed) because I was running low on disk space. I don't want the hassle of getting a bigger disk until I replace my laptop entirely.



I'm going to run some tests myself :P It will be a very rough estimate, as my PC is currently processing Merit data and will be processing Trust data tomorrow.

This is what I'm going to do:
1. Test syncing a pruned blockchain on HDD (starting now).
2. Test on RAM drive (starting when the previous is done):
Quote from: LoyceV
On Linux, this works for me:
Code:
mkdir /dev/shm/prunedBitcoin                            # create new directory in /dev/shm, which by default uses up to 50% of available RAM
chmod 700 /dev/shm/prunedBitcoin                        # basic security on a multi-user-system
bitcoin -datadir=/dev/shm/prunedBitcoin -prune=550      # now we wait
mv /dev/shm/prunedBitcoin ~                             # move to your home-directory after you close Bitcoin Core
I'm using database cache 300 MB (my usual).



Update: The pruned download to HDD is now 48% done. Most of the day it downloaded several MB per second, with around 6% progress increase per hour. Then I started a torrent download (Knoppix DVD) on the same HDD, which took all my bandwidth and dropped the progress increase to 0.06% per hour.
When the torrent was done, Bitcoin Core didn't speed up. I restarted it, and for a while it was still barely using any CPU power, but eventually the download speed went up, then CPU usage went up, and it's now processing several blocks per second again (last block time: Feb 2017).
I'm not using the latest version of Bitcoin Core yet, but this seems to be faster than I remember from when I tested the same on a RAM drive. I think my Wifi got faster.

If it keeps going like this, the pruned download to HDD will be finished tomorrow morning. In that case I won't bother comparing it to doing the same on a RAM drive.
So far I can't reproduce OP's problem of very slow syncing of a pruned blockchain on HDD.

LoyceV
Legendary
Activity: 3290
Merit: 16550

Thick-Skinned Gang Leader and Golden Feather 2021
June 22, 2019, 05:41:24 PM
Merited by jnano (1)
 #29

Bump with an update: I've turned off Bitcoin Core for many hours today (to speed up processing Trust data), then turned it on again. It has 7500 blocks and 2 hours to go, progressing just under 1% per hour.
Could it be SegWit blocks take longer to check?

If I didn't touch my PC, I think it would complete a pruned download to HDD in approximately 24 hours. At well over 200 GB, I'm not disappointed!
I tested this via Wifi, on a system that's over 4 years old and was nowhere near state of the art when I bought it.

To conclude:
Quote from: jnano
I've launched it after one offline day, and syncing really grinds the HDD and barely progresses. It's doing something like 1 block per minute.
The bottleneck is probably somewhere else in your system, and not with Bitcoin Core.

Lauda
Legendary
Activity: 2674
Merit: 2965

Terminated.
June 23, 2019, 06:17:30 PM
Last edit: June 24, 2019, 08:43:08 AM by Lauda
 #30

Quote from: LoyceV
Could it be SegWit blocks take longer to check?
Could it be that bigger blocks take longer to check?[1] I don't know. Use your brain? Luke would be very angry with you.

[1] FTFY.
jnano (OP)
Member
Activity: 301
Merit: 74
June 23, 2019, 06:31:22 PM
Last edit: June 23, 2019, 09:13:43 PM by jnano
 #31

Thanks for the testing, LoyceV. When it's all done, it'd be interesting to know how long it all took.

This got me curious, so I'm going to compare a fresh IBD to a RAM drive versus HDD. It's going to take a while, as I won't be able to dedicate a computer 24/7. Just for good measure, for the HDD tests I'll defrag before each separate launch of Core.

But again, regardless of how much you can improve it on HDD, there is an inherent inefficiency in how the caching works.
I'm not sure if anyone checked the GitHub issue I linked earlier :), so I'll quote Sjors directly, who seems like a pretty good authority on Bitcoin Core behavior.

Quote from: Sjors
It often takes more than half an hour to catch up on less than a day of blocks.
LoyceV
Legendary
Activity: 3290
Merit: 16550

Thick-Skinned Gang Leader and Golden Feather 2021
June 23, 2019, 07:17:57 PM
 #32

Quote
LoyceV, for reference, could you tell us about these things when you sync :
1. CPU usage? Was it near 100%?
2. Was Bitcoin Core use most of your internet bandwidth?
1. Most of the time it took all available CPU power.
2. On and off: it started at 3-5 MB/s continuously, but close to the end it took 8 MB/s for a couple of seconds, then stopped downloading to process the data.

jnano (OP)
Member
Activity: 301
Merit: 74
August 08, 2019, 05:49:19 PM
Merited by LoyceV (10), ABCbits (10), suchmoon (4), vapourminer (3), o_e_l_e_o (1)
 #33

I decided to benchmark it more carefully, on a 2.5" 5400 rpm laptop HDD, on Windows.

I did two full IBDs, one with the chainstate in a RAM drive, the other only on HDD. Block data was on the HDD in both cases. Rather than a straight run, it was a few hours at a time, then quitting Core. For the HDD run I defragged before every session. For the RAM run I may have as well, but I'm not sure it was every time. So as far as fragmentation goes, I think the conditions were better than typical real-life scenarios.

Occasionally I used the computer for other things at the same time, but only light usage.

I partially watched CPU usage. It wasn't generally a bottleneck. For the most part it looked like Core limits itself to about 50% CPU, and only on more recent blocks did it reach 80%+ and then maybe hit some CPU limits. Network speed was not a bottleneck.

On pure HDD, besides the slower IBD times, syncing also seriously harms the usability of the computer. In the RAM drive case it's rarely if ever a major issue.

The most interesting result is how the total RAM and HDD runtimes diverge: (graph)

It seems the HDD run is greatly affected by maximum block size, or perhaps SegWit I/O patterns.

Earlier blocks were mostly indifferent to HDD. It was even slightly faster part of the time. My guess is fragmentation differences (I may have been less careful in the RAM run). Either way, it's insignificant.


A graph of how much slower the HDD run was: (graph)

Average time/block: (graph)

Lauda
Legendary
Activity: 2674
Merit: 2965

Terminated.
August 09, 2019, 05:07:15 PM
 #34

Quote from: jnano
or perhaps SegWit I/O patterns.
This is not the case.
jnano (OP)
Member
Activity: 301
Merit: 74
July 04, 2020, 04:11:55 PM
 #35

Might help eventually:
https://github.com/bitcoin/bitcoin/pull/17487
jnano (OP)
Member
Activity: 301
Merit: 74
December 15, 2020, 12:46:27 PM
 #36

By the way, another problem with these uncached writes is that they waste SSD write cycles.

I'm seeing about 2.4 GB of chainstate data written per day of blockchain (blocks from recent months). That's chainstate data only, not blocks.

Can anyone confirm?
NotATether
Legendary
Activity: 1582
Merit: 6680

bitcoincleanup.com / bitmixlist.org
December 16, 2020, 02:00:41 AM
 #37

Quote from: jnano
By the way, another problem with these uncached writes is that it wastes SSD write cycles.

I'm seeing about 2.4 GB chainstate data written per each 1 day of blockchain (blocks from recent months). That's chainstate data only, not blocks.

Can anyone confirm?


Interesting. My chainstate folder is capped at 4 gigabytes, but my node has been running for a couple days now. I don't have a way to monitor write amounts from only the chainstate folder, but I made a script that monitors write amounts from the whole process if you're interested in it. How did you capture the amount of chainstate data written?

jnano (OP)
Member
Activity: 301
Merit: 74
December 16, 2020, 11:56:46 AM
Last edit: December 16, 2020, 12:21:57 PM by jnano
Merited by vapourminer (2), ABCbits (1)
 #38

I symlinked only the chainstate directory to the SSD, then watched the drive's SMART stats.
(Maybe there's also a configuration switch to specify the chainstate directory separately from the whole data dir?)

That 2.4GB/day I'm seeing is "client writes" from the PC; I haven't checked SSD write amplification yet. It would also be interesting to compare with a non-pruned node.
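For anyone who wants to reproduce the measurement, one way to read the relevant SMART counter (assuming smartmontools and a SATA drive that exposes attribute 241, Total_LBAs_Written; the device name is just an example):
Code:
# take a reading before and after a session; host bytes written = LBAs * 512
sudo smartctl -A /dev/sda | grep -i total_lbas_written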

Assuming 2.4GB/day, that's approaching 1TB/year without block and index data, and ignoring write amplification. If you're not running multiple full IBDs for testing, maybe it's acceptable, but it's not great, especially if you use the SSD for more than just Bitcoin Core. And it isn't inherently needed; it's just a side effect of the write caching problem.

Quote
most SSD have hundred TB write endurance
Some 120GB SSDs, like the Crucial BX500 or Kingston A400, have only 40 TBW. Twice that much in the 240GB models. A nice mid-ranger like the MX500 500GB has 150 TBW.

Quote from: NotATether
My chainstate folder is capped at 4 gigabytes
4GB is more or less its current storage size, but Core does a whole lot of small fragmented writes there while running. At least in pruned mode.
ABCbits
Legendary
Activity: 2856
Merit: 7406

Crypto Swap Exchange
December 16, 2020, 12:19:14 PM
Merited by vapourminer (1), jnano (1)
 #39

Here's the result of the command sudo iotop -a, with txindex enabled, after about 80 minutes.

Code:
    TID  PRIO  USER     DISK READ DISK WRITE   SWAPIN      IO    COMMAND                 
  14736 be/4 user      0.00 B     20.50 M  0.00 %  0.00 % bitcoin-qt [b-scheduler]
  15497 be/4 user   1904.77 M      8.77 M  0.00 %  4.67 % bitcoin-qt [b-msghand]                                      

Unfortunately iotop separates the totals by PID and drops them once the PID exits, so I doubt my result is useful at all.
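If it helps, a rough way around that limitation is to poll /proc/<pid>/io yourself and keep the log, so the last sample survives the process exiting (the 60-second interval is arbitrary):
Code:
# append cumulative I/O counters of a running bitcoin-qt to a log once a minute
while pid=$(pidof bitcoin-qt); do
    { date +%s; cat /proc/$pid/io; } >> bitcoin_io.log
    sleep 60
done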

Quote
SSD have hundred TB write endurance
Quote from: jnano
Some 120GB SSDs, like the Crucial BX500 or Kingston A400, have only 40 TBW. Twice that much in the 240GB models. A nice mid-ranger like the MX500 500GB has 150 TBW.

I forgot that entry-level and small-capacity SSDs have lower write endurance.

jnano (OP)
Member
Activity: 301
Merit: 74
December 16, 2020, 12:27:15 PM
 #40

Quote from: ABCbits
I forget entry-level and small capacity SSD have lower write endurance.
Some QLC drives are even worse: the Corsair MP400 has a 200 TBW rating for the 1TB model. I'm not aware of small QLC drives at the moment, but that might change.
