That's your opinion, and you talk like you know better. You don't. You're merely a pleb in a signature campaign like the rest of us.
There are fewer than 10 active forum members who know better than me, and I am not biased on topics like this. You are a borderline shitposter at best.
Justin Drake of the Ethereum Foundation has started to work with leading researchers in quantum computing. The Monero Research Lab has also started its own initiative to make Monero quantum resistant. The Core developers WILL NOT merely wait and do nothing.
Wrong --
most developers never touch cryptography research in their lives. Most Core developers will absolutely not do a single thing with regard to quantum research. Stop spreading misinformation.
Robustness against cryptosystem breaks -- quantum or otherwise -- is a prudent and reasonable concern, and it's good to let people decide how to secure their own coins even if you don't share the same security concerns as them. Keeping someone who wants their coins secured by something other than just ECC from having an option would be incompatible with Bitcoin's ethos, exactly like the knotzis trying to kneecap multisig and descriptor wallets. It's just a question of constructing a scheme that is efficient enough in the right ways that it won't have a big adverse impact on those who don't care about it, and I think progress in that direction looks pretty good.
Here is an authority who was more active in the past, basically confirming what I have said numerous times -- but you for sure know better than us, so continue running around making half-panicky posts.
It is not a reversible computation.
Yes it is, but it is considered infeasible.
Elliptic curve point addition is claimed (ZK-proved) to be REVERSIBLE in a CLASSICAL COMPUTING way. Is no one bothered at all?
Of course it is reversible, because there is a 1:1 mapping between private and public keys. If you take some weaker elliptic curve and then start using bigger and bigger numbers, you will see that each and every valid public key has exactly one matching private key.
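To see that 1:1 mapping concretely, here is a toy sketch (assuming nothing about secp256k1 itself) using the small textbook curve y^2 = x^3 + 2x + 2 over F_17, whose group generated by G = (5, 1) has prime order 19: every non-identity point is k*G for exactly one k, and on a curve this small the "private key" can simply be brute-forced.

```python
# Toy curve y^2 = x^3 + 2x + 2 over F_17; the group generated by
# G = (5, 1) has prime order 19, so k -> k*G is a bijection from
# {1, ..., 18} onto the non-identity points: every "public key"
# has exactly one matching "private key".
P_MOD, A = 17, 2
G = (5, 1)

def inv(x):
    # Modular inverse via Fermat's little theorem (P_MOD is prime).
    return pow(x, P_MOD - 2, P_MOD)

def add(P, Q):
    # Elliptic curve point addition; None is the point at infinity.
    if P is None: return Q
    if Q is None: return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % P_MOD == 0:
        return None  # P + (-P) = infinity
    if P == Q:
        lam = (3 * x1 * x1 + A) * inv(2 * y1) % P_MOD
    else:
        lam = (y2 - y1) * inv(x2 - x1) % P_MOD
    x3 = (lam * lam - x1 - x2) % P_MOD
    return (x3, (lam * (x1 - x3) - y1) % P_MOD)

def mul(k, P):
    # Double-and-add scalar multiplication.
    R = None
    while k:
        if k & 1:
            R = add(R, P)
        P = add(P, P)
        k >>= 1
    return R

points = {k: mul(k, G) for k in range(1, 19)}
assert len(set(points.values())) == 18  # bijection: no two keys collide

pub = mul(7, G)  # "public key" for private key k = 7
recovered = [k for k, P in points.items() if P == pub]
assert recovered == [7]  # exactly one private key maps to it
print("unique private key for", pub, "is", recovered[0])
```

On secp256k1 the same bijection holds, but the brute-force loop would run over roughly 2^256 scalars, which is exactly why "reversible in principle" does not mean "feasible".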
If this is what you were trying to talk about, my bad -- though it's nothing groundbreaking. We knew this was coming, so I still don't get what the so-called big deal is supposed to be according to you, kTimesG.
@d5000 you too, which one would you choose if you had to?
Ugh. I'm not an expert on this at all, but since I'm not that bad at googling, here's my (only slightly "informed") opinion:
As you can see from some members here and in many other threads, people want random "community members" to give their "expert" opinion on this -- so why the hell not, we might as well join their game.

- For now, hash-based schemes like SPHINCS+ seem to be the safest option. They're based on well-known mathematical properties, and hashes are also what's "holding the blockchain together". Jonas Nick has proposed a variant called SHRINCS with signature sizes of 272 bytes, which is even better than FALCON. FALCON and other lattice-based systems seem more experimental and complex.
- For the long term, SQIsign looks nice, but it seems to be the most experimental and untested of all these variants. I think the main problem, the cost of creating a signature, is not as much of a bottleneck as block size. The verification cost could, however, increase the cost of running a full node. If my googling results are correct, if the current Bitcoin blockchain were based on SQIsign, the initial blockchain download would take about 6 months on consumer hardware. The bottleneck seems to be mainly the CPU.
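To illustrate why hash-based signatures are considered the conservative choice (and why their raw sizes are large), here is a minimal Lamport one-time signature sketch. SPHINCS+ itself uses WOTS+ chains and a hypertree rather than this exact construction, but the security of both rests on nothing more than the hash function:

```python
# Minimal Lamport one-time signature over SHA-256: security reduces
# entirely to hash preimage resistance, the same property hash-based
# schemes like SPHINCS+ rely on. This is a teaching sketch, not the
# actual SPHINCS+ construction (which uses WOTS+ chains and a hypertree).
import hashlib
import secrets

def H(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

def keygen():
    # Two random 32-byte secrets per message bit; public key is their hashes.
    sk = [[secrets.token_bytes(32), secrets.token_bytes(32)] for _ in range(256)]
    pk = [[H(pair[0]), H(pair[1])] for pair in sk]
    return sk, pk

def msg_bits(msg: bytes):
    d = H(msg)
    return [(d[i // 8] >> (i % 8)) & 1 for i in range(256)]

def sign(msg: bytes, sk):
    # Reveal one secret per bit of the message digest.
    return [sk[i][b] for i, b in enumerate(msg_bits(msg))]

def verify(msg: bytes, sig, pk) -> bool:
    # Each revealed secret must hash to the matching public-key entry.
    return all(H(sig[i]) == pk[i][b] for i, b in enumerate(msg_bits(msg)))

sk, pk = keygen()
sig = sign(b"hello", sk)
assert verify(b"hello", sig, pk)
assert not verify(b"tampered", sig, pk)
print("signature size:", sum(len(s) for s in sig), "bytes")  # 8192 bytes
```

Note the raw signature is 256 * 32 = 8 KiB for a single use -- this is the size problem that WOTS+/FORS compression and proposals like the 272-byte SHRINCS variant are attacking.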
Thanks for the link, that variant solves the issue of signature sizes for SPHINCS. What about the verification and signing cost? We need someone to keep a table of these signature proposals updated -- perhaps you could use another hobby project, you definitely do not have many threads open.

Signing and verification time:
Stateful signing time: 3742.92 ms
Stateful verification time (local): 0.015506 ms
Stateless signing time: 17974.8 ms
Stateless verification time (local): 0.073762 ms
Machine: Intel Core i5, 16 GB RAM (one thread, w/o parallelization)
Someone posted this in there. While it does not directly translate into the overview post that we have, if we extrapolate from it, it seems to constitute a massive improvement over the issues that SPHINCS has in our context.
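For anyone who wants to sanity-check numbers like those on their own machine, a minimal single-threaded timing harness might look like this. The hash call is only a stand-in workload; a real measurement would call the scheme's actual sign()/verify() from its reference implementation:

```python
# Minimal single-threaded benchmark harness, matching the "one thread,
# w/o parallelization" setup quoted above. The SHA-256 call is a
# placeholder workload, NOT a real signature scheme.
import hashlib
import time

def bench_ms(fn, reps: int = 100) -> float:
    # Wall-clock average in milliseconds per call.
    t0 = time.perf_counter()
    for _ in range(reps):
        fn()
    return (time.perf_counter() - t0) * 1000 / reps

msg = b"x" * 1024
sign_ms = bench_ms(lambda: hashlib.sha256(msg).digest())
print(f"stand-in signing time: {sign_ms:.6f} ms")
```

Swapping the lambda for the candidate scheme's signing and verification calls would reproduce the table format above (stateful/stateless signing and verification times).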
Focusing solely on the information from that chart, if you told me to pick one even if it is terrible, TBH I wouldn't be able to choose right away, because the tradeoffs are extreme in one way or another. What about you ABCbits? @d5000 you too, which one would you choose if you had to?
Actually, we've discussed this in the past:
https://bitcointalk.org/index.php?topic=5550298.msg65630757#msg65630757. I still think Falcon-512 is the least-worst option, regardless of whether signature aggregation can happen (without new security issues or much higher computation) or not.
Good find, I honestly do not remember that anymore -- the forum is terrible at moderating and allows any number of duplicate topics as long as there is a public person/entity that said anything "new", so I have quantum fatigue.

Check out the modified SPHINCS proposal that d5000 linked to. Unless there is some major issue with it that I am not seeing right now, or that has not been found yet, it seems to me that it would be the best choice from what we have at this moment (at least among the ones presented here).
Some user here even recently argued against preventing the creation of new P2PK outputs, which is about as stupid and ridiculous an objection as it gets.
You probably underestimate how hard it would be to actually abandon secp256k1. I think many people would agree to drop P2PK support entirely if it could be done easily. However, it has some consequences, for example: there could exist some pre-signed, timelocked transactions which use it. If you then block it at the consensus level, these transactions would be turned from valid into invalid, and they could no longer be included later. Even for things like P2SH, old outputs were not blocked just like that: the old way of moving coins was only made non-standard, not invalid.
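A toy model of that concern (the field names here are made up for illustration, not real consensus code): a transaction pre-signed long ago with a future locktime becomes permanently unconfirmable if spends of its input type are banned at the consensus level before the lock expires.

```python
# Toy model, not real consensus code: a pre-signed transaction with a
# future locktime is fine under old rules, but a later consensus-level
# ban on its input type (e.g. P2PK) would make it permanently invalid.
def can_confirm(tx: dict, height: int, p2pk_banned: bool) -> bool:
    locktime_ok = height >= tx["locktime"]          # lock has expired
    rules_ok = not (p2pk_banned and tx["spends_p2pk"])  # input type still allowed
    return locktime_ok and rules_ok

# Pre-signed years ago, spendable only from block 1,000,000 onward.
tx = {"locktime": 1_000_000, "spends_p2pk": True}

assert can_confirm(tx, 1_000_000, p2pk_banned=False)     # valid under old rules
assert not can_confirm(tx, 1_000_000, p2pk_banned=True)  # ban turns it invalid
print("a consensus-level ban retroactively invalidates the pre-signed spend")
```

The P2SH comparison in the post above is the softer alternative: making such spends non-standard discourages them without ever flipping already-signed transactions from valid to invalid.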
I definitely am underestimating it, but the reasoning provided was not technical difficulty but some normie bullshit about never removing anything that was introduced into Bitcoin -- which is not a good approach long term. If something bad happens down the road because of technical debt, they will blame the developers, reviewers, or whoever else except themselves and those who favored such views.
Also, as mentioned previously by Saint Wenhao, we have an example of a cryptographic primitive where people thought it would just be "replaced", but reality proved otherwise: SHA-1. When will Git migrate from SHA-1 to SHA-256, or anything else? Never? Because for now, they migrated only to "hardened SHA-1", as did many other entities. Before 2017, people thought that if some hash function were broken, it would simply be replaced. But in the case of SHA-1, it didn't happen: old systems just received some "patches", and now we know that if something is heavily used in many places, it will be endlessly "hardened" instead of being "replaced", because that is just how backward compatibility works in our world.
I am not a big fan of endless backward compatibility. Fairly long compatibility like Bitcoin has is great; extremely long, or "until something breaks", is not, in my view.
But I think it is much more likely that if the 4 MB limit is kept as it is, people will do everything they can to pick a signature that takes the least amount of space. Because that is the thing that is easiest to deploy on existing testnets, and because all old nodes could simply treat it as valid through OP_SUCCESS (so the whole cost would be paid only by new nodes, and everyone else would continue using secp256k1 for as long as they can).
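A sketch of that OP_SUCCESS deployment pattern (the opcode value below is a hypothetical pick from the reserved OP_SUCCESSx range, and real tapscript semantics are more involved): old nodes treat any script containing an OP_SUCCESSx opcode as automatically valid, so upgraded nodes can tighten it into a real post-quantum signature check without splitting from old nodes.

```python
# Hypothetical sketch of the OP_SUCCESS upgrade path: a redefined opcode
# that old nodes accept unconditionally, while upgraded nodes enforce a
# real post-quantum signature check. Opcode value is illustrative only.
OP_PQ_CHECKSIG = 0xBB  # made-up pick from the reserved OP_SUCCESSx range

def old_node_accepts(script: bytes) -> bool:
    # Pre-upgrade rule: any OP_SUCCESSx in the script makes the spend valid,
    # so old nodes accept every spend of the new output type.
    return OP_PQ_CHECKSIG in script

def new_node_accepts(script: bytes, pq_sig_ok: bool) -> bool:
    # Post-upgrade rule (a soft fork): the same opcode now requires the
    # post-quantum signature to verify; pq_sig_ok stands in for that check.
    if OP_PQ_CHECKSIG in script:
        return pq_sig_ok
    return True  # scripts without the opcode are unaffected

pq_script = bytes([OP_PQ_CHECKSIG])
assert old_node_accepts(pq_script)                       # old nodes: always valid
assert new_node_accepts(pq_script, pq_sig_ok=True)       # new nodes: good sig passes
assert not new_node_accepts(pq_script, pq_sig_ok=False)  # new nodes: bad sig rejected
print("new rules accept a strict subset of what old rules accept: a soft fork")
```

This is exactly why the whole cost lands on new nodes: old software never has to verify the new signatures at all.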
Why not both, though? We should pick the signature that comes with the best balance of cost across all three main metrics (size, signing, and verification) and couple it with a small discount.
Which means that the answer to the question "which signatures" is simple: whatever can be deployed faster than its competitors. If you want to join that race, then just pick anything you like and push things forward. Because in the open-source world, things are not picked because they are better: for many things we use, there are cheaper, faster, and better alternatives. Bitcoin Core is not written in C++ because it is the best language: it is written that way just because Satoshi decided so, and deployed the first working client faster than other mailing-list readers who also read the whitepaper. Which also means that we won't necessarily have "the best possible thing in existence". Instead, we will have "the earliest deployed thing", and we will be stuck with it for years or decades.
Well, the question is always whether there is a race yet. If we create false urgency at a time when it is not required, we run the risk of deploying something terrible that is going to bite us in the long run. That is why panic and urgency are always wrong when it comes to these things. Any amount of extra time that can assuredly be used for research, and during which there is no risk, is extremely beneficial. If we had decided on a candidate and deployed it some years ago, we would have picked something much worse than what is available now. A balanced approach is what we need, not anxiety over media PR.
By the way: do I like that our world is constantly patched? Of course I don't. And many other people would happily replace old systems with new inventions if it were simple. But this is not how the world works, and there are many examples where things are not replaced unless you find a very critical vulnerability where everything fully collapses instantly, like in the Value Overflow Incident. Only then can you hard-reject old things: because the old system is no longer usable.
The reason we end up with very critical vulnerabilities that make things completely collapse anyway is primarily that we don't replace systems with much better ones beforehand. Excellent post, by the way.