This post is the result of some trial & error. I also noticed blockdata.loyce.club/ gets terribly slow once in a while, which made RamNode useless for this data.
Really curious how that test works out. I do hope it does a little bit more than just merge the files without sorting them.
It merges all lines from both sorted files in sorted order. After several tests (on my old desktop with HDD), these are the relevant results:
Old process:
time cat <(gunzip -c addresses_sorted.txt.gz) daily_updates/*.txt | sort -uS80% | gzip > test1.txt.gz
real 90m2.883s
Faster new process:
time sort -mu <(gunzip -c addresses_sorted.txt.gz) <(sort -u daily_updates/*.txt) | gzip > test2.txt.gz
real 51m26.730s
The output is the same.
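For anyone unfamiliar with sort -m: it assumes its inputs are already sorted and only merges them (with -u dropping duplicates) instead of re-sorting everything, which is where the time savings come from. A toy illustration with made-up data:
printf '1AAA\n1CCC\n' > old_sorted.txt     # stands in for the big sorted list
printf '1BBB\n1CCC\n' > new_sorted.txt     # stands in for the new daily addresses
sort -mu old_sorted.txt new_sorted.txt     # prints 1AAA, 1BBB, 1CCC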
Interestingly, when I tell sort -m to use up to 40% of my RAM, it actually uses that (even though it doesn't need it), which slows it down by 7 minutes.
Most CPU time is spent compressing the new gzip file.
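Since gzip only uses one core, a possible tweak (if pigz happens to be installed; I haven't benchmarked it here) would be to swap it in as a drop-in multi-threaded replacement:
time sort -mu <(gunzip -c addresses_sorted.txt.gz) <(sort -u daily_updates/*.txt) | pigz > test2.txt.gz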
I think you wrote that you'd need about 256GB of RAM for that operation, right? Sorry... can't help you out there. However, a Bloom filter might be nice to implement if you have a 'bit' of RAM (a lot less than 256GB).
That's going over my head, and probably far too complicated for something this simple.
I can think of another option that might work: if I use the sorted list to get the new addresses, I can get those out of the daily update while keeping the chronological order. This way I only have to deal with two 20 MB files which is easy. After this, all I have to do is add them to the total file.
This would definitely work and was the solution I originally proposed:
The 'All addresses ever used, without duplicates, in order of first appearance' list could be created in pretty much the same way.
This would be faster than the Bloom filter if there's more than 1 new address that's already in the list.
I'll try:
Old code:
time cat <(gunzip -c addresses_in_order_of_first_appearance.txt.gz) daily_updates/*.txt | nl | sort -uk2 -S40% | sort -nk1 -S40% | cut -f2 | gzip > newchronological.txt.gz
real 194m24.456s
New:
time comm -13 <(gunzip -c addresses_sorted.txt.gz) <(sort -u daily_updates/*.txt) > newaddresses.txt
real 8m4.045s
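For reference: comm -13 suppresses the lines that only appear in the first file (-1) and the lines that appear in both (-3), so only addresses from the daily updates that aren't in the big sorted list yet remain. A toy run with made-up data:
comm -13 <(printf '1AAA\n1BBB\n') <(printf '1BBB\n1CCC\n')     # prints only 1CCC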
time cat daily_updates/*.txt | nl | sort -uk2 -S40% | sort -nk1 -S40% | cut -f2 > all_daily_addresses_chronological_order.txt
real 1m14.593s
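The idea behind this pipeline: nl tags every line with its line number, sort -uk2 deduplicates on the address while keeping the first copy it saw, sort -nk1 restores the original order of the survivors, and cut -f2 strips the numbers again. A toy run with made-up addresses:
printf '1CCC\n1AAA\n1CCC\n1BBB\n' | nl | sort -uk2 | sort -nk1 | cut -f2
# prints 1CCC, 1AAA, 1BBB: each address once, in order of first appearance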
time cat all_daily_addresses_chronological_order.txt newaddresses.txt | nl -nln | sort -k2 -S80% > test.txt
real 0m36.948s
I discovered uniq -f1 on Stack Exchange:
cat test.txt | uniq -df1 | sort -nk1 -S80% | cut -f2 > test2.txt
real 0m7.721s
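To show these last two steps on toy data (made-up addresses; the file names are just stand-ins): an address that occurs in both inputs ends up duplicated after the cat, uniq -df1 keeps one copy of each duplicated address while ignoring the line number, and the final sort -nk1 restores the chronological order from the daily file.
printf '1CCC\n1AAA\n1BBB\n' > daily_chrono.txt   # toy daily file in chronological order
printf '1AAA\n1CCC\n' > new_only.txt             # toy output of the comm step (sorted)
cat daily_chrono.txt new_only.txt | nl -nln | sort -k2 | uniq -df1 | sort -nk1 | cut -f2
# prints 1CCC, 1AAA: only the genuinely new addresses, in the order the daily file saw them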
Combined:
time cat <(cat <(cat ../daily_updates/*.txt | nl | sort -uk2 -S40% | sort -nk1 -S40% | cut -f2) <(comm -13 <(gunzip -c ../addresses_sorted.txt.gz) <(sort -u ../daily_updates/*.txt)) | nl -nln | sort -k2 -S80%) | uniq -df1 | sort -nk1 -S80% | cut -f2 > newaddresses_chronological.txt
real 9m45.163s
Even more combined:
time cat <(gunzip -c addresses_in_order_of_first_appearance.txt.gz) <(cat <(cat <(cat daily_updates/*.txt | nl | sort -uk2 -S40% | sort -nk1 -S40% | cut -f2) <(comm -13 <(gunzip -c addresses_sorted.txt.gz) <(sort -u daily_updates/*.txt)) | nl -nln | sort -k2 -S80%) | uniq -df1 | sort -nk1 -S80% | cut -f2) > new.alladdresses_chronological.txt
real 19m34.926s
This can significantly improve performance, especially if I keep uncompressed files for faster access.
But something's wrong: the 3 different methods give me 3 different output files.
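To track down which method diverges, the outputs can be compared directly; for instance the two full chronological lists from above (quick checksums first, then a diff to see where they differ):
gunzip -c newchronological.txt.gz | md5sum
md5sum new.alladdresses_chronological.txt
diff <(gunzip -c newchronological.txt.gz) new.alladdresses_chronological.txt | head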
By the way, I just checked out (but didn't download) the daily file on Blockchair. It's close to 1GB (compressed), but you mentioned 20MB for new addresses on numerous occasions. I guess there's a lot of cleaning to do there. Could I maybe get one of your (old) daily files?
I use Blockchair's daily outputs to update this, not the daily list of addresses. See http://blockdata.loyce.club/alladdresses/daily_updates/ for old daily files.
split -l 50000000 sorted (it will split into files starting with name xaa, then xab, ...)
I don't see any benefit in splitting files for processing.
I have a server on RAID0 with 882 MB/s read and 191 MB/s write, so copying this stuff to a different place on the same disk will take about 40 seconds or so for a 30 GB dataset.
Dedicated?
That's the dream
But even then, sorting data means reading and writing the same data several times.
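Most of that disk traffic is GNU sort spilling to temporary files once the data no longer fits in its memory buffer. As a sketch (the temp path and output name are just placeholders), the spill location can be pointed at the fastest disk and the temp files can even be compressed:
sort -u -S40% -T /mnt/fast_tmp --compress-program=gzip daily_updates/*.txt > daily_sorted.txt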
AWS VPSes run on shared hardware, so that's probably why you're getting throttled. There are dedicated servers on AWS where you get total control over the hardware and don't get throttled.
AWS dedicated is totally out of my price range (for this side project that already got out of hand). I wasn't planning on spending a lot of money on this long-term, but if I can find a very affordable solution I can just keep adding servers to my collection.
So far the speed improvements are looking good; getting rid of the sort command that runs on disk helps a lot.
You are on AWS, right? Why not have your sponsor upgrade your instance to a higher class for a few hours? That's the beauty of on-demand processing.
I don't want to be demanding, and AWS charges (my sponsor) $0.09 per GB. That's okay for HTML, but not for large files. My List of all Bitcoin addresses with a balance alone transferred 450 GB since the start of this year. That would be $1000 per year on AWS, while it costs only a fraction elsewhere. I love how reliable AWS is; it just always works, but that's not necessary for my blockdata.