Arjun Narayan has given an interview to Gizmodo in which he lays out the serious dangers journalism and society currently face, all stemming from the AI boom and the generation of fake news.
ChatGPT and Bard have been shown to be inaccurate in a variety of tests. They are useful as complementary tools, but the data they can draw on is still very limited. Ask Bard about “recent Elon Musk news”, for example, and as far as Bard is concerned, the mogul hasn’t bought Twitter yet.
Narayan, a former trust and safety lead at Google, elaborated on the subject in a recent interview with Gizmodo. The conversation covered the complexity of the current moment, how news organizations need to approach AI content in a way that builds and nurtures reader trust, and what to expect in the near future.
“It is important to be transparent with the user that this content was generated by AI”
“One of the big challenges is making sure that AI systems are trained correctly and trained with the correct ground truth. It is extremely important to carefully calibrate and curate any data points you feed in to train the AI system,” he explains.
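To make that concrete, here is a minimal sketch of the kind of curation step Narayan describes: filtering raw data points down to a vetted set before a model is ever trained on them. The record fields, source names, and confidence threshold below are hypothetical, chosen purely for illustration.

```python
# Hypothetical allowlist of sources whose labels we trust.
TRUSTED_SOURCES = {"verified_newsroom", "human_fact_checker"}

def curate(records, min_confidence=0.9):
    """Keep only data points from trusted sources with high-confidence labels."""
    return [
        r for r in records
        if r["source"] in TRUSTED_SOURCES
        and r["label_confidence"] >= min_confidence
    ]

raw = [
    {"text": "Musk completed his Twitter purchase in 2022.",
     "source": "human_fact_checker", "label_confidence": 0.98},
    {"text": "Musk has not bought Twitter.",
     "source": "anonymous_forum", "label_confidence": 0.40},
]

# Only the fact-checked record survives the filter.
print(curate(raw))
```

The point is not the specific cutoff but the discipline: anything that reaches the training set has been vetted against some ground truth first.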
In a nutshell, ChatGPT is based on a model that responds according to probabilities. For example, if you ask what day of the week it is, an answer naming a day of the week is more likely than one naming a day of the month, but that does not mean the model will name the correct day.
With this in mind, when you ask the chatbot to explain a concept, it builds its answer word by word from those probabilities, and the result can therefore be misleading. If that output is used to write news, there is a big problem.
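A minimal sketch of that word-by-word process, using a toy two-word context; the table of candidate words and their weights is invented for illustration, where a real model learns such probabilities from training data:

```python
import random

# Toy next-word probabilities: given the last two words, each candidate
# continuation gets a weight. Weekday names dominate after "today is".
NEXT_WORD_PROBS = {
    ("today", "is"): {
        "Monday": 0.18, "Tuesday": 0.17, "Wednesday": 0.17,
        "Thursday": 0.16, "Friday": 0.16, "the": 0.10, "a": 0.06,
    },
}

def sample_next_word(context):
    """Sample the next word from the model's probability table."""
    candidates = NEXT_WORD_PROBS[tuple(context[-2:])]
    return random.choices(list(candidates), weights=list(candidates.values()))[0]

sentence = ["today", "is"]
sentence.append(sample_next_word(sentence))

# Prints e.g. "today is Wednesday": a plausible continuation, not a checked fact.
print(" ".join(sentence))
```

Nothing in this loop consults a calendar: a weekday is simply the likeliest kind of continuation, which is exactly why a fluent answer can still be wrong.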
“It’s important to understand that it can be difficult to spot which stories are written entirely by AI and which are not, and that distinction is fading,” explains Narayan.
“Personally, I think there’s nothing wrong with AI generating an article, but it’s important to be transparent with the user that this content was generated by AI. It is important for us to indicate, either in a byline or in a disclosure, that the content was generated in whole or in part by AI. As long as it meets your quality standard or editorial standard, why not?” he adds.
“We need more people thinking about these steps and giving it the dedicated head space to mitigate the risks. Otherwise, society as we know it, the free world as we know it, will be at considerable risk. I honestly think there needs to be more investment in trust and safety,” he concludes.