Launched on November 15 and pulled three days later, the demo of Galactica, Meta's large language model, was short-lived. Designed to help scientists, the artificial intelligence (AI) instead ended up generating false texts and harmful stereotypes.
According to a report published on the Tradethis website, the AI lasted only 72 hours after receiving harsh ethical criticism, accused of being racist and of spreading fake news.
Meta developed Galactica to “store, combine, and reason about scientific knowledge”, but users found that the technology could not distinguish falsehoods from true data.
Basically, the company wanted this artificial intelligence to replicate language models like OpenAI’s GPT-3: to write scientific texts after studying millions of examples and linking statistical data, more than 48 million articles, books, research notes, conference papers, encyclopedias, and scientific websites.
What happened to Galactica?
When the demo was tested, experts noticed that Galactica produced scientific texts full of nonsense. For example, it went so far as to create a fake article titled “The benefits of eating crushed glass” and another on the presence of bears in space.
Michael Black, director of the Max Planck Institute for Intelligent Systems, who tested the demo, warned of the potential risk of this artificial intelligence: “In every case, it was wrong or biased, but it sounded correct and authoritative. I think it’s dangerous”.
The strangest thing is that the AI cited real references but made up nonsensical information around them. Users also noted that the AI could be put to unethical uses, since anyone could enter racist or offensive prompts.
Other mistakes that led to Galactica’s failure included inventing figures, dates, and animal names. The backlash led to an early retirement in just three days.