AI tools have made it easier for scammers to compose believable phishing emails, but these can still be recognized because most AI tools follow a similar writing pattern: usually a neutral introduction, followed by a list of possibilities, and at the end the obligatory conclusion.
AI tools themselves can also be used to help recognize AI-generated emails.
Victims are too careless and naive, believing "too good to be true" offers sent by email, messengers, and any other platform that puts scammers in mass contact with potential victims. If they were careful and not so naive, they would not be scammed.
Because they are so trusting and vulnerable, they fall not only for AI-related scam methods and traps. They can be scammed by manual methods as well; scammers don't need sophisticated techniques to deceive them.
Recognizing AI-generated emails is only the very first step. The bottom line is that avoiding "too good to be true" offers will protect people from scammers most of the time.
When receiving any offer, especially one regarding crypto or other investments, one must be on high alert.
Even experienced users can have a bad day, or be in a hurry, skip a detailed check of the email address or similar details, and fall victim to scammers. That's why scammers apply time pressure in their emails, so the victim acts in haste and skips security checks.
The point about AI-generated scams is not that they are more convincing or higher quality than manual ones, but that they make scammers more productive. Composing a convincing phishing email probably took a few hours without the help of AI; now it takes a few seconds.