The FBI has issued a serious alert about a recently observed trend in which criminals use the latest artificial intelligence technology to create deepfake videos that are then used to blackmail victims, a scheme known as "sextortion."
The year 2023 has been marked by the takeoff of platforms such as ChatGPT and Midjourney, which produce increasingly impressive results in a fraction of the time, generating images, videos, portraits, audio, and even music that can pass for real even to the most trained eye.
In May 2022 we even witnessed the start of a striking and, in the long run, troubling trend, when TikTok began to be flooded with deepfake videos starring supposed celebrities. That is how Tom Cruise and other stars started appearing in clips set in the random situations typical of that platform.
Unfortunately, that was just the beginning. What started as a curiosity has now escalated into a security problem, to the point that even the FBI has reacted, warning of a new class of crime in which anyone who has uploaded a photo of themselves to the internet can become the victim of a new type of extortion.
FBI warns of rise in sextortion cases with deepfake videos that use Artificial Intelligence
In a statement on its official website, the Federal Bureau of Investigation (FBI) warned the public about a substantial increase in sextortion scams, in which criminals use fake videos depicting compromising situations to extort victims, threatening to spread the material unless they pay money, hand over gift cards, or provide other forms of payment.
Deepfakes are videos or photos generated by artificial intelligence (AI) that replace one person's face or body with another's, showing the victim in scenes that would generally be considered explicit and compromising if they were real. The big problem is that they are quite convincing.
According to the Bureau, criminals produce these fake sexual images by obtaining public, harmless photos that their victims previously shared on their own social networks or other sources.
From there, the attackers use AI techniques to create explicit videos or photos and then demand money, even though the material is not real, knowing that most people will assume it is.
Another sore point is that scammers may also demand that victims send them real explicit content to prevent the deepfake material from spreading, which carries even greater risks.
Since April 2023, reports and complaints of this kind have skyrocketed, and many victims are unaware that their images were copied, manipulated, and circulated until someone tells them or they find the material on the internet.