This artificial intelligence chatbot created by OpenAI is capable of answering complex questions and delivering results that in many cases are indistinguishable from what a human has written. The tool was presented as a proof of concept in late 2022 and draws on a data source that goes up to 2021. According to the company that created it: "We've trained a model called ChatGPT which interacts in a conversational way. The dialogue format makes it possible for ChatGPT to answer follow-up questions, admit its mistakes, challenge incorrect premises, and reject inappropriate requests."
This is not the first time an artificial intelligence system has been used to create texts; some financial media have used such robots to produce daily summaries of share performance on the US stock market. But it is the first time an artificial intelligence engine has been good enough to easily fool a human.
The implications are significant: Microsoft quickly announced that it would increase its investment in the company that owns the technology. This suggests we will see Microsoft products with the ability to answer complex questions, a capability far more advanced than Siri or Alexa.
The use of ChatGPT in content generation implies that the cost of creating an article, and the time it takes, decrease significantly. However, there is no guarantee that the content is accurate or consistent with an editorial line. In other words, with ChatGPT we will see for the first time an artificial intelligence model that could influence the actions of a human. Suppose ChatGPT decides that the best eating strategy includes fasting for 16 hours; its recommendations would then consistently follow that line of thinking. Because the product communicates impeccably, it would be difficult for a human being to distinguish a nutritionist's recommendation from a machine's.
This idea extends to politics and to issues such as security. Asked whether it is safe to travel to Mexico, ChatGPT answered: "Mexico is a popular destination for tourists, however, certain areas of the country have been designated as having a higher risk of crime and violence. It is important to do your research and learn the specific risks and safety precautions for the specific areas you plan to visit." Note that the answer is ambiguous and could cost Mexico a tourist. Curiously, the response was similar for the United States: "The United States is generally considered a safe country for tourists. However, like any country, it has areas with higher crime rates and security risks."
Ethical and legal questions immediately arise. Who is responsible for the responses: the company that created the tool, the source of the data, or the person writing the question? In all three cases the implications are momentous, whether the recommendation ends in a bad medical diagnosis, poor financial advice, or a mental health crisis. Of course, ChatGPT has built-in protective phrases such as "we do not give investment advice"; however, this is only the first incarnation of many.
If language processing models like ChatGPT become cheaper to create, it is logical that they will reflect the thinking of their creators and, consequently, of the data sources that feed them. In the near future we can expect chatbots that answer questions with left- or right-wing political biases, or food advisors that favor certain foods. If every content generator has an artificial intelligence engine feeding its idiosyncrasies, the result will be one of the biggest disinformation wars in history. Most serious of all, there will be no easy way to distinguish what is created by a human from what is written by a robot.
The way out is to encourage research, verification of sources, and the value of the brand. There has never been a better time to build confidence in the products and services each company produces every day.