Some user here even recently argued against preventing the creation of new P2PK outputs, an opposition as stupid and ridiculous as it gets.
You probably underestimate how hard it would be to actually abandon secp256k1. I think many people would agree to drop P2PK support entirely, if it could be done easily. However, it has consequences. For example: there could exist some pre-signed, timelocked transactions which use it. And if you block P2PK at the consensus level, then those transactions would turn from valid into invalid, and could never be included later. Even for things like P2SH, old outputs were not blocked just like that: the old way of moving coins was only made non-standard, not invalid.
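The problem with pre-signed, timelocked transactions can be shown in a minimal sketch. Everything here is illustrative (the types, the ban flag, the rule), not real consensus code; it just demonstrates why a transaction signed years ago can become permanently unbroadcastable if a consensus-level ban lands before its locktime expires:

```python
# Illustrative model: a pre-signed transaction waiting for its locktime.
# All names and the rule below are hypothetical, for demonstration only.
from dataclasses import dataclass

@dataclass
class PreSignedTx:
    spends_p2pk: bool  # does it spend an old P2PK output?
    locktime: int      # earliest block height at which it can be broadcast

def consensus_accepts(tx: PreSignedTx, height: int, p2pk_banned: bool) -> bool:
    """Hypothetical consensus check after a P2PK ban soft-fork."""
    if height < tx.locktime:
        return False  # not yet broadcastable
    if p2pk_banned and tx.spends_p2pk:
        return False  # signed long ago, now permanently invalid
    return True

tx = PreSignedTx(spends_p2pk=True, locktime=1_000_000)
print(consensus_accepts(tx, height=1_000_001, p2pk_banned=False))  # True
print(consensus_accepts(tx, height=1_000_001, p2pk_banned=True))   # False
```

The owner of such a transaction cannot re-sign it if the private key was deliberately destroyed, which is exactly why only making things non-standard (as with old P2SH spends) is the safer path.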
Also, as mentioned previously by Saint Wenhao, we have an example of a cryptographic primitive where people thought it would just be "replaced", but reality proved otherwise: SHA-1. When will Git migrate from SHA-1 to SHA-256, or anything else? Never? Because so far, Git, like many other projects, migrated only to "hardened SHA-1". Before 2017, people thought that if some hash function were broken, it would simply be replaced. But in the case of SHA-1, it didn't happen: old systems just received some "patches", and now we know that if something is heavily used in many places, it will be endlessly "hardened" instead of being "replaced", because this is just how backward compatibility works in our world.
I also prefer a block size increase over increasing the witness/signature discount factor, unless a hard fork would be rejected by many
If increasing the block size from 1 MB to 4 MB was difficult in 2017, then guess how much harder it would be now, when people are pushing JPEGs into the blockchain. Of course, if anything is increased, it will be done the same way as when the Segwit discount was invented: old nodes will still see 4 MB of witness data, and new nodes could see 4 GB of "commitments", or whatever the name of the "space for quantum things" will be, which wouldn't be processed by any old nodes (because if it were, it would obviously lower TPS, as you can easily notice from many tables).
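The Segwit mechanism mentioned above works through the weight formula from BIP 141, which is why "1 MB" and "4 MB" can both be true at once. A minimal sketch of that accounting (the constant and formula are from BIP 141; the example block sizes are made up):

```python
# BIP 141 weight: non-witness bytes count 4x, witness bytes count 1x,
# under a single consensus limit of 4,000,000 weight units.
MAX_BLOCK_WEIGHT = 4_000_000

def block_weight(base_size: int, witness_size: int) -> int:
    """Weight of a block with the given base and witness byte counts."""
    return base_size * 4 + witness_size

# A block with no witness data is capped near the old 1 MB:
assert block_weight(1_000_000, 0) == MAX_BLOCK_WEIGHT

# A witness-heavy block fits ~3.7 MB total under the same limit
# (hypothetical sizes, just to show the trade-off):
base, witness = 100_000, 3_600_000
w = block_weight(base, witness)
print(w, w <= MAX_BLOCK_WEIGHT)  # 4000000 True
```

Any future "quantum space" discount would presumably extend this same trick: data old nodes never download gets a cheaper (or zero) weight multiplier.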
But I think it is much more likely that if the 4 MB limit is kept as it is, then people will do everything they can to pick a signature scheme which takes the least amount of space. Because that is the easiest thing to deploy in existing testnets, and because all old nodes could simply treat it as valid through OP_SUCCESS (so the whole cost would be paid only by new nodes, and everyone else would continue using secp256k1 for as long as they can).
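The OP_SUCCESS mechanism (from BIP 342, Tapscript) is what makes this "cost paid only by new nodes" split possible. A sketch of the idea, where the opcode set is a real subset of BIP 342's reserved values but the verifier functions and the "OP_CHECKSIG_PQ" redefinition are purely hypothetical:

```python
# A few of BIP 342's reserved OP_SUCCESSx opcode values (the full set is larger).
OP_SUCCESS_OPCODES = {80, 98, 126, 127, 128, 129}

def old_node_accepts(script: bytes) -> bool:
    """Non-upgraded Tapscript node: any OP_SUCCESS byte in the script
    makes the whole script unconditionally valid."""
    return any(b in OP_SUCCESS_OPCODES for b in script)

def new_node_accepts(script: bytes, pq_sig_valid: bool) -> bool:
    """Hypothetical upgraded node: the same opcode is redefined
    (say, as OP_CHECKSIG_PQ), so the new signature must actually verify."""
    if any(b in OP_SUCCESS_OPCODES for b in script):
        return pq_sig_valid
    return True

pq_script = bytes([80, 1, 2, 3])  # placeholder script using opcode 80
print(old_node_accepts(pq_script))                       # True
print(new_node_accepts(pq_script, pq_sig_valid=False))   # False
```

Old nodes accept the spend without doing any post-quantum verification at all, which is exactly how the verification cost lands only on upgraded nodes.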
the first issue is which signatures
Of course all of them, if not more. Even if you don't know all the details yet, mockups can still be prepared. It is possible to assume that "foobar signatures" are deployed and handled properly by some unknown, upgraded nodes. Then, things can be simulated: what is the maximum acceptable verification time, signature size, or any other metric you want to measure.
In this case, things can be deployed into existing testnets to see how a particular algorithm works in practice. And if you want to compare different algorithms within the same coin, it is obvious that they should somehow be attached to the same chain, to make things comparable.
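Such a mockup can be tiny. The sketch below stands in a dummy scheme (just hashing, with padding up to a target signature size) behind the same sign/verify interface a real candidate would have, and collects the two metrics from the paragraph above. The scheme, the 666-byte size, and all the names are placeholders, not a real algorithm:

```python
# Mock "foobar signature" scheme: NOT a real signature algorithm,
# only an interface stand-in for measuring size and verification time.
import hashlib
import os
import time

def mock_sign(msg: bytes, key: bytes, sig_size: int) -> bytes:
    """Stand-in signer: pads a hash up to the candidate's signature size."""
    digest = hashlib.sha256(key + msg).digest()
    return (digest * (sig_size // 32 + 1))[:sig_size]

def mock_verify(msg: bytes, key: bytes, sig: bytes) -> bool:
    """Stand-in verifier with the same interface a real scheme would have."""
    return sig[:32] == hashlib.sha256(key + msg).digest()

key, msg = os.urandom(32), b"simulated transaction"
sig = mock_sign(msg, key, sig_size=666)  # placeholder size

start = time.perf_counter()
ok = mock_verify(msg, key, sig)
elapsed = time.perf_counter() - start
print(len(sig), ok)  # 666 True
```

Swapping `mock_sign`/`mock_verify` for a real candidate (Lamport, XMSS, FALCON, ...) keeps the measurement harness unchanged, which is the point of mocking the interface first.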
So, how to fairly compare verification time? By allowing the old nodes to skip it entirely: just like pre-Segwit nodes know nothing about Segwit, and pre-Taproot nodes never verify P2TR addresses. And how to fairly compare signature sizes? By allowing whatever discount a particular algorithm would need, which means having a sigops limit instead of a size limit.
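The difference between a sigops limit and a size limit can be made concrete. In the sketch below, each signature consumes a verification-cost budget rather than bytes, so a large-but-cheap scheme like Lamport is not automatically punished for its size. The per-scheme costs and the budget are illustrative placeholders, not measured values:

```python
# Hypothetical per-block budget measured in verification-cost units,
# not bytes. All numbers below are placeholders for illustration.
BLOCK_SIGOP_BUDGET = 80_000

SCHEMES = {
    # name: (signature_size_bytes, relative_verify_cost)
    "secp256k1-schnorr": (64, 1),
    "FALCON-512":        (666, 2),
    "XMSS":              (2500, 4),
    "Lamport":           (8192, 1),
}

def max_signatures_per_block(scheme: str) -> int:
    """Under a sigops limit, each signature spends its verify cost,
    so byte size alone does not decide how many fit in a block."""
    _, cost = SCHEMES[scheme]
    return BLOCK_SIGOP_BUDGET // cost

for name in SCHEMES:
    print(name, max_signatures_per_block(name))
```

Under a pure size limit, Lamport would fit 128 times fewer signatures than Schnorr; under this cost-based budget, both fit the same number, which is what makes cross-scheme comparison fair.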
And then, you can have some existing testnet where a whole zoo of future algorithms could be tested, and the winner could emerge from all of that. One client could use Lamport, another could use XMSS, and someone else could use FALCON-512, while still using the same blockchain, moving the same coins to different subnetworks, and so on, and so forth.
Which means that the answer to the question "which signatures" is simple: whatever can be deployed faster than its competitors. If you want to join that race, just pick anything you like, and push things forward. Because in the open-source world, things are not picked because they are better: for many things we use, there are cheaper, faster, and better alternatives. Bitcoin Core is not written in C++ because it is the best language: it is written that way simply because Satoshi decided so, and deployed the first working client faster than the other mailing list readers who also read the whitepaper. Which also means that we won't necessarily get "the best possible thing in existence". Instead, we will get "the earliest deployed thing", and we will be stuck with it for years or decades.
Also, something outside the list entirely can have even higher chances of being deployed in practice. For example: if someone finds a flaw in secp256k1, and someone makes a "hardened secp256k1" which fixes only that bug without breaking anything else, then we will be in the same situation as with SHA-1: a patch will be applied, and nothing will change for a long time. Because patching wins over replacement: that's why UTF-8 won over UTF-16, UTF-32, and other encodings: because of being ASCII-compatible.
By the way: do I like that our world is constantly patched? Of course I don't. And many other people would happily replace old systems with new inventions, if it were simple. But this is not how the world works, and there are many examples where things are not replaced, unless you find a very critical vulnerability where everything fully collapses instantly, like in the Value Overflow Incident. Only then can you hard-reject old things: because the old system is no longer usable.