The development of generative AI has been called as revolutionary as the microprocessor, the graphical user interface, the Internet and, more recently, the cloud.
In business specifically, AI adoption is growing rapidly. According to McKinsey, AI could generate up to $13 trillion in global economic value by 2030. AI-enabled process automation has allowed companies to reduce operating costs, improve efficiency, and streamline workflows, which in turn translates into greater competitiveness. Examples include the manufacturing industry, where AI is used to optimize supply chains and predict machine failures, and the financial sector, where it is used to detect fraud and improve risk management.
Because of the large volumes of data that must be analyzed and processed, the use of Machine Learning (ML) as a cybersecurity mechanism began to take shape at the start of this century, alongside the rise of Big Data. Today, for example, ML allows AI systems to detect unusual activity in user accounts or unusual charges on credit cards.
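As a minimal sketch of the idea behind such detection, the example below flags credit-card charges whose amounts deviate sharply from a user's typical spending using a simple z-score. Production fraud-detection systems rely on far richer features and trained models; this illustrative snippet (the function name and threshold are our own) only shows the underlying statistical intuition.

```python
# Illustrative anomaly detection: flag charges that deviate sharply
# from a user's typical spending, using a simple z-score test.
# Real fraud-detection systems use trained models and many more features;
# this sketch only demonstrates the basic statistical idea.
from statistics import mean, stdev

def flag_unusual_charges(history, new_charges, threshold=3.0):
    """Return charges whose z-score against the spending history exceeds threshold."""
    mu = mean(history)
    sigma = stdev(history)
    return [c for c in new_charges if abs(c - mu) / sigma > threshold]

# Typical charges cluster around $25-$60; a $900 charge stands out.
history = [25.0, 40.0, 32.5, 55.0, 28.0, 47.5, 38.0, 60.0]
print(flag_unusual_charges(history, [45.0, 900.0, 30.0]))  # the $900 charge is flagged
```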
AI has enabled IT areas to identify threats faster and more effectively, and is highly effective in detecting security attacks such as ransomware; however, it is undeniable that this technology also has the potential to be used by cybercriminals.
According to a 2023 BlackBerry survey of IT specialists in North America, Australia and the United Kingdom, more than half of respondents estimated that within a year there would be cases of cybercriminals using ChatGPT to generate code that exploits a vulnerability. On top of this, with digitization and the rise of the IoT – the premise that any item can be a computer capable of generating, processing and reporting information – the burden of securing each device has grown considerably.
In an interview with ABC News, Sam Altman, CEO of OpenAI, said that one of his main concerns was the use of models of this type for mass disinformation and cyber attacks. For their part, various industry specialists have noted that the privacy of user data should be another point of interest and caution. This is a telling concern if we recall that, in March 2023, OpenAI suspended ChatGPT operations due to a bug that exposed the activity history of some users.
At the end of the operational chain, on both the cybersecurity and cybercrime sides, are people. Guaranteeing the ethical and transparent use of AI tools is essential, and this will only be possible through consensus among expert voices that gives rise to an informed regulatory framework.