This is interesting, and further evidence that the problem lies with the servers and not with the software. I think 8 GB of RAM is sufficient, so could you try a few of the available servers at random? Three to five servers, to see whether switching servers solves the problem or whether the user has to run a full node.
As you requested: I had some spare time for testing, so I used my watch-only wallet with its 21,954 distinct addresses as the test subject.
| Electrum server | Sync time until responsive | Notes |
|---|---|---|
| fortress.qtornado.com:443 | 1m52s | |
| 2ex.digitaleveryware.com:50002 | 3m34s | |
| bitcoin.lu.ke | 1h8m12s | ...arrgh, that was painful! |
| electrum.jochen-hoenicke.de:50006 | 1m28s | |
| fulcrum.grey.pw:51002 | 1m25s | |
| fulcrum.theuplink.net:50002 | 1m32s | |
| fulcrum.tinsb.xyz:50002 | 1m27s | |
No particular issues were observed. The 2nd server had a few "Server busy" sync restarts that I didn't see on any of the other servers; even the snail-slow bitcoin.lu.ke sifted through the addresses at a slow but steady pace. Still, that server won't see my wallet ever again.
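If anyone wants to repeat a rough version of this comparison without loading a full wallet, here is a minimal sketch that times a single round trip to an Electrum server over TLS. This only measures connection plus one `server.version` reply, not a full wallet sync, so it's a much weaker signal than the table above; the host names are just the ones from my test, and the client name string is arbitrary.

```python
import json
import socket
import ssl
import time

def jsonrpc_line(method, params, req_id=0):
    """Encode one Electrum-protocol JSON-RPC request (newline-terminated)."""
    return (json.dumps({"id": req_id, "method": method, "params": params}) + "\n").encode()

def time_first_response(host, port, timeout=10):
    """Connect over TLS and measure the round trip of a server.version call."""
    ctx = ssl.create_default_context()
    ctx.check_hostname = False          # many Electrum servers use self-signed certs
    ctx.verify_mode = ssl.CERT_NONE
    start = time.monotonic()
    with socket.create_connection((host, port), timeout=timeout) as raw:
        with ctx.wrap_socket(raw, server_hostname=host) as tls:
            tls.sendall(jsonrpc_line("server.version", ["benchmark-sketch", "1.4"]))
            buf = b""
            while not buf.endswith(b"\n"):
                chunk = tls.recv(4096)
                if not chunk:
                    break
                buf += chunk
    return time.monotonic() - start, json.loads(buf or b"{}")

# Example (network access required):
# rtt, reply = time_first_response("fortress.qtornado.com", 443)
# print(f"answered in {rtt:.2f}s: {reply.get('result')}")
```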
I know this from past experience with other problematic "forensic" wallets, and with an electrs Electrum server in particular. If you have addresses with very large histories, say 5- or 6-figure transaction counts, you run into trouble: electrs can't send big enough data packets, or at least I never figured out how to configure it so that it doesn't choke on such large histories. electrs can be a pain when you do blockchain forensics and stumble over such "problematic" addresses.
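To illustrate why large histories hurt: in the Electrum protocol a client asks for an address's history via `blockchain.scripthash.get_history`, and the server returns the *entire* history as one JSON reply, so a 6-figure history means one enormous response assembled in a single shot. Below is a small sketch of how the scripthash and the request are built; the zeroed hash160 is a placeholder, not a real address.

```python
import hashlib
import json

def electrum_scripthash(script_pubkey_hex):
    """Electrum protocol scripthash: SHA-256 of the output script, byte-reversed, hex-encoded."""
    digest = hashlib.sha256(bytes.fromhex(script_pubkey_hex)).digest()
    return digest[::-1].hex()

def p2pkh_script(hash160_hex):
    """P2PKH scriptPubKey: OP_DUP OP_HASH160 <20-byte hash> OP_EQUALVERIFY OP_CHECKSIG."""
    return "76a914" + hash160_hex + "88ac"

def history_request(scripthash, req_id=1):
    """One request -> the whole address history in one reply; there is no paging
    in protocol 1.4, which is why huge histories strain servers like electrs."""
    return json.dumps({"id": req_id,
                       "method": "blockchain.scripthash.get_history",
                       "params": [scripthash]})

# Placeholder example (all-zero hash160, for illustration only):
sh = electrum_scripthash(p2pkh_script("00" * 20))
req = history_request(sh)
```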