[I'm ignoring the wallet sidebar, because it's off-topic for the question AFAIK, and the answer there depends a LOT more on what you're doing.]
I wonder if this happens in other fields.
Are there forums of people who don't have much direct experience with practical plumbing, plus people who have only done plumbing as part of nuclear reactors, theorizing about whether solid gold fittings are superior to hand-carved wooden pipe fittings?
From the perspective of a Bitcoin node's usage of the UTXO set:
The UTXO set is logically a set. There is no important natural order to its elements. For a Bitcoin node the only operations are: insert an element, with replacement of duplicates (create an output); look up an item by key (validate a spend); and delete an item by key (spend an output). For that query load, the logical data structure is some kind of hash table, since those can give asymptotically O(1) performance. For Bitcoin's use this data structure needs to be persisted on disk (both because it may be bigger than RAM and because people don't want to resync on restart) and be crash recoverable.
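To make that concrete, here is a minimal sketch of that entire logical interface. All the types and names here are my own toy illustrations, not Bitcoin Core's actual classes:

```cpp
// Toy sketch of the complete logical interface a node needs from the
// UTXO set. Types and names are illustrative, not Bitcoin Core's.
#include <array>
#include <cstdint>
#include <cstring>
#include <unordered_map>
#include <vector>

struct Outpoint {                          // key: (txid, output index)
    std::array<uint8_t, 32> txid;
    uint32_t n;
    bool operator==(const Outpoint& o) const {
        return n == o.n && txid == o.txid;
    }
};

struct OutpointHash {
    size_t operator()(const Outpoint& o) const {
        size_t h;                          // txids are already uniform; reuse their bytes
        std::memcpy(&h, o.txid.data(), sizeof(h));
        return h ^ o.n;
    }
};

struct Coin {                              // value: amount + scriptPubKey, simplified
    int64_t amount;
    std::vector<uint8_t> script;
};

class UtxoSet {
    std::unordered_map<Outpoint, Coin, OutpointHash> map_;
public:
    void Insert(const Outpoint& k, Coin v) { map_[k] = std::move(v); }  // create an output
    const Coin* Lookup(const Outpoint& k) const {                       // validate a spend
        auto it = map_.find(k);
        return it == map_.end() ? nullptr : &it->second;
    }
    void Erase(const Outpoint& k) { map_.erase(k); }                    // spend an output
};
```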
Previously-- until 2017-- (and still in some less sophisticated software) it also needed to support transactional updates, but we eliminated that requirement in Bitcoin Core by using the blockchain itself as a write-ahead log: Bitcoind keeps a record of the height at which all UTXO changes were last completely flushed to the database, and at startup, if that block isn't the node's most recent block, it just goes and reapplies all the changes since that block. This works because insert/delete is idempotent: there is no harm in deleting an entry that is already deleted, or inserting an entry that is already there.
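A hedged sketch of that recovery path, reusing the toy types above (the Block/Tx shapes and the ReadBlockFromDisk helper are assumptions for illustration, not Bitcoin Core's API):

```cpp
// The chain itself is the write-ahead log: on startup, replay every block
// past the last fully-flushed height. Idempotence makes the replay safe.
struct Tx {
    std::array<uint8_t, 32> id;
    std::vector<Outpoint> inputs;          // outputs being spent
    std::vector<Coin> outputs;             // outputs being created
};
struct Block { std::vector<Tx> txs; };

Block ReadBlockFromDisk(int height);       // assumed helper

void RecoverUtxoSet(UtxoSet& utxos, int flushed_height, int tip_height) {
    for (int h = flushed_height + 1; h <= tip_height; ++h) {
        for (const Tx& tx : ReadBlockFromDisk(h).txs) {
            for (const Outpoint& spent : tx.inputs)
                utxos.Erase(spent);        // no harm if already deleted
            for (uint32_t i = 0; i < tx.outputs.size(); ++i)
                utxos.Insert({tx.id, i}, tx.outputs[i]);  // no harm if already there
        }
    }
}
```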
Now, because all a node needs is a persistent hash table, in some sense you could use practically any kind of database to support it, or even just a bare file system. But validating the chain requires approximately three billion operations (currently) against a database of sixty-eight million (tiny) entries. As a result, most things people discuss aren't within two orders of magnitude of acceptable performance. For example, if the database communicates over a network socket with a round trip (and thus a context switch) per operation, that alone probably blows your entire performance budget. LevelDB is among the fastest of the very simple in-process key-value stores; many things which claim to be faster are actually not-- at least not against the Bitcoin-like workload. (There are many LevelDB clones which meet this description, but the only way to know for sure is to try something out.)
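To put rough numbers on that budget (my arithmetic, not a measurement): three billion operations completed over a one-day sync works out to about 3,000,000,000 / 86,400 ≈ 35,000 operations per second, sustained. A single local-network round trip plus context switch can easily cost 50 microseconds, which caps you at roughly 20,000 operations per second before the database has done any actual work-- already below target, and that's the *optimistic* case for a socket-based store.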
Because the usage is so simple, the particular choice is not very important so long as it is extremely fast. Among all extremely fast choices the performance doesn't tend to differ much, because (1) their performance is driven by things like memory access speed, which is the same for all of them, and (2) the majority of the load never reaches the database in any case: there is an in-memory buffer (called the dbcache) that keeps most UTXO entries from ever touching LevelDB. It turns out that most UTXOs created are spent quickly, so when they are, the create and the spend can be settled in a simple non-persistent hash table without ever being written to the database.
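The cancellation effect is easy to see in a sketch. This is a toy write-back cache in the spirit of the dbcache, again with my own names (Bitcoin Core's real implementation is considerably more involved):

```cpp
#include <optional>

// Toy write-back cache over the UtxoSet sketch above. A coin that is both
// created and spent between flushes is settled in memory and never touches
// the backing database at all.
class CachedUtxoView {
    struct Entry {
        std::optional<Coin> coin;          // nullopt = "delete from DB at flush"
        bool fresh;                        // created since the last flush?
    };
    std::unordered_map<Outpoint, Entry, OutpointHash> cache_;
    UtxoSet& db_;                          // stand-in for LevelDB
public:
    explicit CachedUtxoView(UtxoSet& db) : db_(db) {}

    void Add(const Outpoint& k, Coin v) {
        cache_[k] = Entry{std::move(v), /*fresh=*/true};
    }
    void Spend(const Outpoint& k) {
        auto it = cache_.find(k);
        if (it != cache_.end() && it->second.fresh)
            cache_.erase(it);              // created and spent in memory: zero DB work
        else
            cache_[k] = Entry{std::nullopt, /*fresh=*/false};  // tombstone for flush
    }
    void Flush() {                         // push surviving changes down to the DB
        for (auto& [k, e] : cache_) {
            if (e.coin) db_.Insert(k, *e.coin);
            else        db_.Erase(k);
        }
        cache_.clear();
    }
};
```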
There are plenty of ways that what Bitcoind does could be improved-- for example, changing the dbcache to an open-addressed hash table to avoid memory allocations and improve locality would probably be a big win, as would an incremental flushing facility so that flushing runs constantly in the background.
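For flavor, the open-addressing point looks roughly like this: entries live in one contiguous array and collisions probe forward, so lookups touch adjacent memory and inserts allocate nothing, unlike a node-based table that does a heap allocation and a pointer chase per entry. Sketch only-- no resizing or deletion, and it assumes the table never fills:

```cpp
#include <cstddef>
#include <utility>
#include <vector>

// Minimal open-addressed (flat) hash table with linear probing.
template <typename K, typename V, typename Hash>
class FlatMap {
    struct Slot { K key; V val; bool used = false; };
    std::vector<Slot> slots_;              // one flat allocation for everything
public:
    explicit FlatMap(size_t capacity) : slots_(capacity) {}

    V* Find(const K& k) {
        for (size_t i = Hash{}(k) % slots_.size();; i = (i + 1) % slots_.size()) {
            if (!slots_[i].used) return nullptr;           // empty slot: key absent
            if (slots_[i].key == k) return &slots_[i].val; // probe forward on collision
        }
    }
    void Insert(const K& k, V v) {
        for (size_t i = Hash{}(k) % slots_.size();; i = (i + 1) % slots_.size()) {
            if (!slots_[i].used) { slots_[i] = Slot{k, std::move(v), true}; return; }
            if (slots_[i].key == k) { slots_[i].val = std::move(v); return; }
        }
    }
};
```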
But no amount of "just swap out LevelDB for FooDB" is likely to make an improvement. For most values of Foo I've seen suggested, it's so much slower that it kills performance. You're welcome to try-- the software abstracts the database so that it's pretty easy to drop something else in. People have in the past, and they gave up after their SQLite or whatever hadn't synced the first couple years of blocks in a *week* of runtime. For any highly tuned system, speculating about performance without getting in and understanding what's already there is unlikely to yield big improvements.
Personally, I think modular buzzword thinking is a cancer on the IT industry: "My car won't start, any idea what's wrong?" "Have you tried MongoDB? I heard it has WebScale!"
Now if you want to talk about block-explorer or chain-analysis style usage, then the needs are entirely different. But the way to go about answering questions there isn't to pick some DBMS off a shelf; it's to first think about what data the system will need to handle and what queries will be performed against it, with what kind of performance... and then that will inform the kinds of systems which could be considered.