There is ample evidence all over the internet of how bad AI is at detecting AI; the false-positive rates are enormous.
On the other hand, AI is becoming increasingly capable, so it will increasingly produce text that is harder to recognize as machine-generated.
Therefore, it's pointless to have a battalion of bots or a team of special moderators chasing after every post created by AI. That only consumes time and resources. The best way to combat this is through interaction with the user who may be the offender. Any member who finds something strange can do this: interact with the suspected poster and then report them to a moderator.
Why is interaction at least minimally effective? Because AI can be very intelligent, but it follows patterns and, to a certain extent, lacks initiative. A human is different. A human expresses thoughts, feelings, and ideas in a logical and coherent way. Besides, and no offense intended, humans can sometimes be a little unintelligent...

So, if you want to fight AI, interact with whoever you think is making posts that are 100% AI-generated.
I agree with you!
I've been testing some random posts that, in my opinion, were not generated by AI, and the false positives are sometimes impressive, just as other posts with a "strange pattern" slip past the AI detectors entirely.
I've been following the complaint threads in the reputation tab, and really, nothing replaces human analysis and interpretation. It is very easy to compare messages from before and after someone starts using AI and see how their writing style suddenly changes.
Or sometimes there's a really "crazy" paragraph in the middle of a post that makes me read it again and again, several times, to try to understand it... those are glaring errors.
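To give an idea of how visible that before/after style shift can be, here is a minimal sketch in Python. The two sample messages are made up for illustration, and character-trigram frequencies with cosine similarity are just one crude stylometric heuristic, not any forum's actual detection method:

```python
from collections import Counter
import math

def ngram_profile(text: str, n: int = 3) -> Counter:
    """Frequency profile of character n-grams, a crude stylometric fingerprint."""
    text = " ".join(text.lower().split())  # normalize case and whitespace
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))

def cosine_similarity(a: Counter, b: Counter) -> float:
    """Cosine similarity between two profiles (1.0 = identical style)."""
    dot = sum(a[g] * b[g] for g in set(a) & set(b))
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# Hypothetical example: a member's older post vs. a suspicious new one.
before = "tbh i dunno, tried reinstalling the driver twice and it still crashes lol"
after = ("Certainly! To resolve this issue, first ensure that the driver is "
         "fully updated, then verify the system configuration accordingly.")

score = cosine_similarity(ngram_profile(before), ngram_profile(after))
print(f"style similarity: {score:.2f}")  # a low score hints at a sudden style change
```

Of course, a low score only flags something for a human to look at; it proves nothing on its own, which is exactly the point.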
In short, I think AI is a good ally for finding information and helping draft a response, but the final writing should always be done by a human, and if it isn't... someday it will get caught. We don't need a bot for that.