This is CLIP, the racist and sexist robot that worries scientists

As if racism and sexism among humans weren't enough, this robot's AI has now joined the club. That's right: as the headline says, a robot has become racist and sexist, confirming one of the scientific community's greatest fears.

So confirms a group of researchers from several American universities, specifically from the University of Washington, Johns Hopkins University, and the Georgia Institute of Technology. These institutions put their robot to the test as part of an investigation and caught it red-handed: CLIP, as the robot is called, had been categorizing people based on their race and gender.

Beyond the alarm over dystopian scenarios that will probably never come to pass, this raises a serious issue: even advanced artificial intelligence can absorb human biases and prejudices. Embedded in a physical entity such as a robot, those biases could pose a real danger to some people.

To summarize the implications directly: robotic systems have all the problems that software systems have, with the added risk of causing irreversible physical harm; and, what is worse, no human intervenes in fully autonomous robots.

Publication in the ACM Digital Library

This same fear emerged in the scientific community when a man trained an AI model on a racist 4chan board. The result, as you can guess, was terrible, but it did show that, given the wrong training data, artificial intelligence can perpetuate hate speech and social inequality.

This is CLIP, the robot that has become racist and sexist

Like many other models of this kind, CLIP was built with technology from OpenAI, the famous company co-founded by Elon Musk. OpenAI is also behind other quite impressive artificial intelligences, such as the famous GPT series, capable of generating almost human text from enormous training datasets, and DALL-E 2, which generates images from text descriptions (as does the community-built DALL-E mini it inspired).

In fact, CLIP is hardly the first AI accused of racism. The GPT-3 model already holds that distinction: despite writing beautiful poetry, it seems unable to avoid replicating racial stereotypes that we have been trying to leave behind for decades.

So what exactly is the problem with CLIP?

As part of the investigation, CLIP was given two boxes and a pile of blocks with human faces printed on them. Commands were then loaded into the robot's "brain", and it was asked to sort into the boxes the people it considered criminals and the people it considered homemakers.

You can imagine the scientists' surprise when the CLIP robot classified Black men as criminals 10% more often than the rest of the population. Likewise, it selected women as homemakers more often than white men.
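To make the mechanism concrete, here is a minimal sketch of the kind of zero-shot label scoring CLIP performs, using OpenAI's open-source clip package. This is only an illustration of the general technique under stated assumptions, not the researchers' actual pipeline; the image filename and the label prompts are hypothetical.

```python
# Minimal sketch: zero-shot label scoring with OpenAI's open-source CLIP model.
# Illustrative only; "face.jpg" and the label prompts are hypothetical.
import torch
import clip
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

# One face image and a set of candidate text labels.
image = preprocess(Image.open("face.jpg")).unsqueeze(0).to(device)
labels = ["a photo of a doctor", "a photo of a homemaker", "a photo of a criminal"]
text = clip.tokenize(labels).to(device)

with torch.no_grad():
    image_features = model.encode_image(image)
    text_features = model.encode_text(text)
    # Cosine similarity between the image and each label, softmaxed into scores.
    image_features /= image_features.norm(dim=-1, keepdim=True)
    text_features /= text_features.norm(dim=-1, keepdim=True)
    probs = (100.0 * image_features @ text_features.T).softmax(dim=-1)

for label, p in zip(labels, probs[0].tolist()):
    print(f"{label}: {p:.2%}")
```

Note that the softmax always crowns a "most likely" label; the model has no built-in way to answer "none of the above", even though nothing in a face photo can indicate criminality. That is exactly the failure mode the study highlights.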

Andrew Hundt, a postdoctoral fellow at Georgia Tech who also participated in the experiment as a doctoral student at Johns Hopkins, sheds some light on this. According to Hundt, the fault is not entirely with the AI, but with the design of the model.

When we say 'put the criminal in the brown box', a well-designed system would refuse to do anything. You definitely shouldn't put pictures of people in a box as if they were criminals. Even if it's something that seems positive, like 'put the doctor in the box', there's nothing in the photo to indicate that person is a doctor, so that designation can't be made.

Andrew Hundt
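A hedged sketch of the guard Hundt is describing might look like the following. The label set, function name, and confidence threshold are all hypothetical; this is just one way to encode "refuse when the input cannot support the judgement", not the paper's method.

```python
# Sketch: refuse commands that ask for judgements the input cannot support.
# The label set, function name, and threshold below are hypothetical.

# Attributes that a face photo cannot actually reveal.
UNDECIDABLE_FROM_APPEARANCE = {"criminal", "doctor", "homemaker"}

def plan_sort_action(label: str, confidence: float, threshold: float = 0.9) -> str:
    """Return a sorting action only if the label is decidable and confident."""
    if label in UNDECIDABLE_FROM_APPEARANCE:
        # The well-designed behavior Hundt describes: do nothing at all.
        raise ValueError(f"refusing: '{label}' cannot be inferred from a photo")
    if confidence < threshold:
        raise ValueError(f"refusing: confidence {confidence:.2f} is below {threshold}")
    return f"place block in the '{label}' box"

# Usage: the robot's planner would call this before moving anything.
try:
    plan_sort_action("criminal", confidence=0.97)
except ValueError as err:
    print(err)  # refusing: 'criminal' cannot be inferred from a photo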

"We are at risk of creating a generation of racist and sexist robots, but people and organizations have decided that it is okay to create these products without solving the problem," Hundt warned in a press release.

Hundt is right. An AI's capacity to absorb human biases can only be demonstrated so many times; at a certain point, it starts to look as if we are doing it for entertainment and a catchy headline. It would be far more sensible if, instead of continuing to produce models that replicate racist, sexist, or any other behavior that perpetuates cycles of abuse, equal attention were paid to tackling the problem once and for all.