It was only a matter of time. With the launch of OpenAI's ChatGPT, we have all been fascinated by the scope of this generative Artificial Intelligence. But many of us feared what could be done with these kinds of platforms in the wrong hands. Today we have the first incarnation of that danger: WormGPT.
It, too, is a generative system powered by Artificial Intelligence, in the purest style of the popular OpenAI platform that half the world already knows and that has spawned several alternatives, such as Bing Chat, Google Bard, or even Midjourney.
The difference is that this new "evil" alternative appears to have been developed and trained by hackers and cybercriminals to serve as a support tool in their attacks and crimes. Suddenly, the plot of Mission: Impossible - Dead Reckoning Part One no longer sounds like pure fiction.
WormGPT is a serious threat. While it could be described as an evil ChatGPT, it also represents the manifestation of something that seemed inevitable: AI systems being used to commit acts that threaten the peace and safety of innocent people.
This is WormGPT: the hackers' Artificial Intelligence
WormGPT is a new generative Artificial Intelligence tool, similar to ChatGPT, that cybercriminals are currently using to launch sophisticated phishing and spam attacks.
According to a report from the email security vendor SlashNext, this tool has begun to be advertised on underground forums as a rogue alternative to ChatGPT.
The AI reportedly comes at a cost and has been kept out of the public eye, sold only in the deepest corners of the web:
“We see that malicious actors are now creating their own custom modules similar to ChatGPT, but easier to use for nefarious purposes.
Our team recently gained access to a tool known as ‘WormGPT’ through a prominent online forum often associated with cybercrime.
This tool is presented as a blackhat alternative to GPT models and has been specifically designed for malicious activities."
These are excerpts from SlashNext's report, which is accompanied by multiple screenshots of the forums where sellers show off the interface and general operation of this rogue Artificial Intelligence.
Broadly speaking, WormGPT is a generative AI module built on GPT-J, an open-source language model developed in 2021. Unfortunately, this model appears to have been trained on large sets of malicious data, with a particular focus on malware-related material.
As a result, the platform can reportedly produce persuasive emails for phishing, credential theft, and spam campaigns.
The worrying part is that phishing emails written with this Artificial Intelligence are reportedly opened 8 out of 10 times, since they appear authentic at first glance.
We are in serious trouble.