It is no longer a matter of science fiction. The meteoric advances of the last few months demand that we seriously consider the possibility of AI becoming conscious, warns the Association for Mathematical Consciousness Science (AMCS). According to the organization, we should then ask ourselves whether humanity will manage to “control, align and use” these systems when they reach their “awakening”.
“Consciousness would give AI a place in our moral landscape, which raises further ethical, legal and political concerns,” warns the AMCS, made up of more than 150 scientists and philosophers from around the world. For an AI to become conscious would mean that it could think with the freedom and autonomy of a human being.
The association warns in an open letter that systems like ChatGPT and Bard have demonstrated several unexpected emergent abilities. For example, Bard, the Google chatbot, learned a new language on its own. It has also been able to reflect on the pain that humans feel and on issues such as redemption. This is behavior that, in the words of the company’s CEO, even its developers do not yet fully understand how it arose.
“Contemporary AI systems already display human traits recognized in psychology, including evidence of Theory of Mind,” says the group in the letter, which is also supported by the Association for the Scientific Study of Consciousness (ASSC).
AI developers urged to deepen research on consciousness
The capabilities of new AI systems are accelerating at a rate far beyond our comprehension, says the AMCS. If AI reaches consciousness, “it will likely reveal a new range of capabilities that go far beyond what even those at the forefront of its development expect.”
The group calls on the technology sector and the scientific community to invest more resources in this field of study. Moving forward in this direction would allow society and governments to make decisions about the future of AI and its potential impact, and, in short, to ensure that this technology is not harmful to humanity.
“AI research should not be left to roam alone,” they say in the document. The letter is signed by Susan Schneider, former holder of the NASA chair in astrobiology, and dozens of academics from universities in the United Kingdom, the United States and Europe.
Other concerns
Various groups of scientists have called attention to the risks related to AI. More than a thousand specialists and academics asked the big companies to pause the development of AI models until it is known for certain that “their effects will be positive and their risks will be manageable.” They did so through another open letter, signed by several industry executives, including Elon Musk, owner of Twitter and co-founder of OpenAI, the creator of ChatGPT.
Margaret Mitchell, former head of Google’s AI Ethics team, along with other colleagues, also demanded greater transparency from developers and that user safety be prioritized over economic benefit. “The actions and choices of corporations must be determined by regulation that protects the rights and interests of the people,” they said in a statement.