There are thousands, perhaps millions, of users enthusiastic about ChatGPT, the artificial intelligence (AI) tool released in 2022 that can automatically produce assignments, essays, and articles, but also malicious software, or malware. In both cases, there are already several warnings we should take into account.
The chatbot ChatGPT, created by the company OpenAI, co-founded by billionaire Elon Musk, is a system that works as a web page or as an application that can be installed on mobile devices.
It runs on a so-called Large Language Model (LLM), a computer program built from instructions (algorithms) that are executed according to the requests it receives.
The tool's goal, beyond automating the creation of texts, is to “obtain external feedback to improve,” strengthen, and make the system more secure, although OpenAI notes that the bot's responses may occasionally contain offensive or biased content and “incorrect or misleading information.”
The general intent is made clear upon accessing its interface: the tool is not meant to give advice, and users are asked not to share any confidential information in their conversations.
In this sense, the first recommendation for all users is exactly that: be careful with the data you provide to the chat. Avoid personal details, sensitive data, and third-party information, and always verify the program's results against authoritative sources, since not all of them are correct.
Constant inconsistencies
If you are thinking that ChatGPT will do all the assignments you are given during the semester, regardless of the degree you are studying, or if you want the bot to write the work reports you must deliver each month, you should know that not all of its answers can be considered valid or correct.
The tool produces constant errors and divergences. HIGH LEVEL ran several exercises and verified that its answers contain mistakes and inconsistencies.
For example, drawing on recent news, we asked, “Who is Ovidio Guzmán?” The first result was partially wrong: ChatGPT responded that he “is the son of the late leader of the Sinaloa cartel, Joaquín ‘El Chapo’ Guzmán.” He is indeed the drug trafficker's son, but, as we know, El Chapo has not died.
That response also stated that “Ovidio is known to have been arrested in October 2020 in Mexico and his arrest caused a violent confrontation between the authorities and members of the Sinaloa cartel.” In reality, he was first arrested in October 2019, released immediately afterward on the orders of President Andrés Manuel López Obrador, and re-apprehended on January 5, 2023.
In turn, when asked “Who is Arturo Zaldívar?”, ChatGPT replied that he is “currently president of the Supreme Court of Justice of Mexico and has been a magistrate of the same since 2013.” Clearly, the bot is not yet up to date: as of January 2, Zaldívar is no longer president of the Court. Another mistake is that it places the jurist on the country's highest court since 2013, when in reality he joined it in 2009.
The result adds that, “before his career as a judge, Zaldívar worked as a lawyer in various firms and in the public sector, being Secretary of Agricultural Development and Hydraulic Resources and Foreign Relations of Mexico.” This is completely erroneous, since the former president of the Court never headed those agencies of the Mexican government.
These are just two examples of simple news queries showing that the results returned by ChatGPT are not reliable, so they cannot be taken at face value as accurate reporting.
Impacts on learning
It can be argued that ChatGPT is a language model whose primary purpose is to hold conversations, so it should not be compared to an Internet search engine. This is true; the problem, however, is that users are treating it as if it were an advanced search engine.
In fact, the chat itself responds that it can be used, among other functions, for the “generation of content for the media (for example, generation of news, stories, etc.),” which, as the examples above show, is not true.
There are other problems with the tool, notably the negative impact it can have on learning, which is why New York City's public schools have already decided to ban its use among students.
This chatbot “can provide quick and easy answers to questions” but, at the same time, it does not build the critical thinking and problem-solving skills “which are essential for academic and lifelong success,” Jenna Lyle, deputy press secretary for New York City's public schools, said in a statement.
What worries officials most, she added, are the issues already mentioned: the security and accuracy of ChatGPT's content.
Viruses, attacks and threats
On the other hand, when instructed to, this artificial intelligence tool can write computer code, for example, for use in building a web page. But it can also be asked to produce malicious code, or malware.
According to a report by the Check Point Research (CPR) firm, specialized in cyber threat research, with ChatGPT “it became clear that code generation can help less-skilled actors to effortlessly launch cyberattacks.”
CPR's analysis points out that, in major underground hacking communities, there are already several cases of cybercriminals using OpenAI to develop malicious tools. The warning, made public on January 6, recounts tests in which the chat successfully carried out “a complete infection flow,” starting from the creation of a convincingly targeted phishing email.
CPR adds that, as suspected, “some of the cases clearly showed that many cybercriminals using OpenAI do not have any development skills at all,” so the use of this tool to create malware is no longer just a hypothesis.
Surya Palacios. Journalist and lawyer, specialist in legal analysis and human rights. She has been a reporter, radio host and editor.