An update: I've acquired a copy of all posts up to January 2025 and am now in the process of deleting the removed posts.
Any reason why you're scraping the forum with headless browsers (zendriver, selenium, etc.) instead of using pure HTTP requests? The forum doesn't have any anti-bot measures or heavy JavaScript, so that's not a problem. Requests >>> browser drivers: much less memory usage, faster, and it can scale up almost indefinitely if you use proxies (theymos wouldn't like that, though).
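For reference, a plain-requests fetch is only a few lines. This is just a sketch: the topic ID and User-Agent below are placeholders, and the `topic=<id>.<offset>` pagination scheme is the usual SMF convention, so double-check it against the live forum:

```python
import requests

def topic_url(topic_id: int, offset: int = 0) -> str:
    # SMF-style pagination: topic=<id>.<message offset> (0, 20, 40, ...)
    return f"https://bitcointalk.org/index.php?topic={topic_id}.{offset}"

def fetch_topic_page(session: requests.Session, topic_id: int, offset: int = 0) -> str:
    resp = session.get(topic_url(topic_id, offset), timeout=30)
    resp.raise_for_status()  # fail loudly on 4xx/5xx instead of parsing an error page
    return resp.text

if __name__ == "__main__":
    # A Session reuses the TCP/TLS connection across requests, which is a big
    # part of why this beats spinning up a browser per page.
    with requests.Session() as s:
        s.headers["User-Agent"] = "post-archiver-example/0.1"  # placeholder UA
        html = fetch_topic_page(s, 1)  # topic 1 is just a placeholder
```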
Off-topic: which terminal is that?
That's Konsole in dark mode, running the 'tmux' multiplexer.
At first, I was using the JavaScript DOM built into Chrome to remove the quote and code tags from all the posts, but eventually I moved away from that because 1) the DOM API proved very unreliable and caused frequent crashes, and 2) this data could end up being useful for another use case and can simply be filtered out later.
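Filtering the quotes out later doesn't need a browser DOM at all; the standard-library HTML parser can do it. A minimal sketch, assuming the posts are stored as SMF-style HTML where quotes and code blocks are `<div class="quote">` / `<div class="quoteheader">` / `<div class="code">` / `<div class="codeheader">` (verify the class names against the archived markup):

```python
from html.parser import HTMLParser

STRIP_CLASSES = {"quote", "quoteheader", "code", "codeheader"}

class QuoteCodeStripper(HTMLParser):
    """Collects post text while skipping quote/code divs (nested divs included)."""
    def __init__(self):
        super().__init__()
        self.skip_depth = 0   # > 0 while inside a stripped div
        self.out = []

    def handle_starttag(self, tag, attrs):
        classes = dict(attrs).get("class", "").split()
        if self.skip_depth:
            if tag == "div":          # track nesting so we know when the block ends
                self.skip_depth += 1
        elif tag == "div" and STRIP_CLASSES.intersection(classes):
            self.skip_depth = 1

    def handle_endtag(self, tag):
        if self.skip_depth and tag == "div":
            self.skip_depth -= 1

    def handle_data(self, data):
        if not self.skip_depth:
            self.out.append(data)

def strip_quotes_and_code(html: str) -> str:
    p = QuoteCodeStripper()
    p.feed(html)
    return "".join(p.out)
```

This extracts plain text only; if you need to keep the rest of the markup intact, an HTML tree library is the better tool.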
I must admit, though, that I resorted to getting a list of "alive" topics from LoyceV's website, then ran the topics through your API to get the posts as JSON. That let me get all of the posts really fast without constantly hitting the forum's rate limiter (which turned out to be a really big problem when scraping directly). For updates, though, it will be no issue querying the forum itself, even from one IP address.
@TryNinja - sorry for forgetting your name in my "post archiver" comment, it wasn't intentional.

OP - what parameters are you collecting in each of your posts?
- Original thread title
- thread author (do you store the username or the user ID number?)
- post title
- post author
- contains quotes
- UTC time/date
- category
- subcategories
- partial url of post
Also, what database engine are you using? I want to make sure I can use your data when it's complete.

Actually, I have all of these parameters. It's just a copy of TryNinja's data, and I plan to modify my scrapers to use the same field names.
It's just JSON data, so it isn't easy to load into a regular database; instead I plan to put it into Elasticsearch, which ingests JSON natively. I will start the cluster now, and once the empty topics are weeded out I can begin the load procedure.
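Loading JSON into Elasticsearch amounts to converting it to the `_bulk` API's NDJSON format (alternating action and document lines), which the standard library handles fine; the empty-topic filtering can happen in the same pass. A sketch, where the `id` and `content` field names are placeholders for whatever the actual schema uses:

```python
import json

def to_bulk_lines(posts, index="posts"):
    """Yield alternating action/document NDJSON lines for Elasticsearch's _bulk API.

    `id` and `content` are hypothetical field names, not the real schema.
    """
    for post in posts:
        if not post.get("content"):  # weed out empty posts before they reach the index
            continue
        yield json.dumps({"index": {"_index": index, "_id": post["id"]}})
        yield json.dumps(post)
```

Join the yielded lines with `"\n"` (plus a trailing newline) and POST the result to `/_bulk` with `Content-Type: application/x-ndjson`, in batches of a few thousand documents rather than all at once.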