Transferred 25 million dollars over a video call? For an amount that big there has to be a formal approval process with all the paperwork and documentation, so it shouldn't be that easy, unless someone careless was handed that much money to manage.
Yeah, it sounds insane. This was a multinational engineering firm, and they had all the processes you're talking about. The employee who got scammed knew the protocols. And they still approved fifteen separate transfers to five different bank accounts, because they SAW their colleagues on video and believed it was legitimate.
Critically, the scam exploited exactly those legal processes. The deepfakes were good enough that the employee thought they were following proper authorization procedures. They believed they were being careful by demanding the video call.
But for smaller amounts it's definitely possible. I've heard a few stories where someone on a phone call sounded exactly like a father or husband, claimed there was an emergency, and asked for money to be sent to a bank account, and only later did the person realize they had been scammed.
So keep the numbers saved in your contacts and don't answer unless they call from that number.
Caller ID can be faked. Video can be faked. Even if the call is coming from the right number and you see the right face, you still don't know whether it's real.
You're not wrong, smaller amounts are more common. And as the technology becomes cheaper, what's preventing these syndicates from running hundreds of smaller scams instead of one big one? The barrier to entry is coming down quickly.
This is indeed becoming very serious. With artificial intelligence becoming ubiquitous and potentially misused, there will be more problems like this in the future.
I remember reading a news article about two years ago about Turkish security forces arresting someone who used AI to impersonate President Recep Tayyip Erdoğan's voice, attempting to deceive select companies and high-ranking government officials.
https://www.trtworld.com/article/14515799
That was two years ago. Now the situation is much more complex and dangerous, because it's now possible to create realistic videos of people, not just their voices.
The Turkey case is interesting because it shows how fast this is moving, right? Anyway, they caught the person. Security forces arrested them. There was accountability. But I wonder how many attempts there were that we never heard about. How many worked. How many government officials or companies got fooled but never said anything because it's embarrassing.
Voice to video is a huge leap. With voice you can at least stay suspicious. But video? That's where our brains just... accept it. We tend to believe what we see rather than what we hear.
Do you think institutional targets (governments, high-ranking officials) are more or less vulnerable than regular people? Because on the one hand, they have security teams and protocols. On the other hand, they're high-value targets worth the effort.
Like, the individual who tried to impersonate Erdogan probably spent a lot of time and money to do so because the payoff would be massive. But now that the technology is getting cheaper, are those economics changing in terms of who gets targeted?
I find it extremely hard to believe.
I see a lot of AI-generated videos on social media, and I can detect them within a few seconds. We're talking about a video conference here, where the AI would have to match the voice, the face, everything, which is nearly impossible. Even if AI can do this, it shouldn't be that hard to notice the difference.
The social media deepfakes are pretty obvious, sure. But the deepfakes you see on social media are created for fun or for low-effort scams. They're not targeting you specifically. They don't have to be perfect because they're casting a wide net.
This was a targeted attack on a specific company, with preparation time. The scammers knew who the CFO was, what he looked like, how he talked. They probably scraped hours of video from company presentations, earnings calls, public appearances. Then they created the deepfake specifically for that employee. And they impersonated several executives during a real-time video conference.
If this is real news, I would expect a reference link so I can have a read.
If a third party can get the data, that means you didn't care about your privacy and you are liable for that.
The privacy angle is interesting though. You're right that if a third party has your data, something went wrong. But in this case, the data they needed was in the public domain. Company website. LinkedIn. Earnings calls. Press conferences. All public information. They didn't have to hack anything; they just had to scrape what was already out there. And moreover, he's the CFO.
The employee in this case did everything right by traditional security standards. They were suspicious of the email. They demanded video verification. They still got destroyed. So what's the standard now? Never trust video? Never trust voice? Only perform financial transactions in person?
Anti-AI software is going to become big, I think.
I am quite tired of the AI slop and advanced scamming.
We need a way to filter these things.
AI has its limitations and those limitations should be poked and prodded at until something gives way.
Who pays for it? Because if the solution is "buy software to detect fakes", then we're just creating another problem out of our own advancement. People who have money get protection. Everyone else stays vulnerable.
The economics of this arms race do not favor detection. Generating deepfakes is becoming less and less expensive by the month. Detection requires constant updating, constant investment. The attacker does not have to win all the time. The defender must win each and every time.
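To put rough numbers on that asymmetry, here's a quick back-of-the-envelope sketch in Python. The 99% detection rate and the attempt counts are made up, purely to illustrate why "the defender must win every time" matters once attempts become cheap:

def attacker_success_probability(detection_rate: float, attempts: int) -> float:
    # Probability that at least one attempt slips past detection,
    # assuming each attempt is independently caught at `detection_rate`.
    return 1 - detection_rate ** attempts

for attempts in (1, 50, 200, 500):
    p = attacker_success_probability(0.99, attempts)  # hypothetical 99% detection
    print(f"{attempts:>3} attempts -> {p:.1%} chance at least one gets through")

With those made-up numbers, even at 99% per-attempt detection, an attacker who can afford 500 cheap attempts gets through at least once roughly 99% of the time. That's the asymmetry in practice.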
I've been thinking about this a lot since I wrote the post. The limitation of AI that you're talking about, yeah, it exists now. But it's shrinking. Every time detection gets better, generation gets better too. And generation's always gonna be cheaper because you're creating something, you're not analyzing something.
So even if anti-AI software becomes big, I don't know if it solves the underlying issue. Which is: trust used to be free. Now it costs money. And that cost is going to continue to increase.