Digital algorithms are not free from social biases, as demonstrated by the results of an open Twitter contest in which young people who found algorithmic errors were rewarded, with the intention that the company take action on the matter.
The contest comes after Twitter deactivated the automatic cropping of photos in March of this year, following various tests carried out by users in 2020 that identified that the social network favored the faces of white people over those of Black people.
A competition against the algorithm
Twitter saw an opportunity in this error to create a contest with the support of the AI Village at DEF CON, with the intention of improving the social network's algorithm and responding to user feedback. With this in mind, the company offered a reward to anyone who found major errors within the platform.
First, the winning entry showed that Twitter's cropping algorithm favors faces that are "thin, young, of light or pale skin color and smooth skin texture, and with stereotypically feminine facial features."
Next, the second- and third-place entries showed that the system was biased against people with white or gray hair, which would indicate discrimination by age, and that it preferred English text over Arabic text in images.
Twitter versus algorithmic bias
Rumman Chowdhury, head of Twitter's META team (which studies the ethics, transparency, and accountability of machine learning), congratulated the participants for showing the real-life effects of algorithmic bias.
"When we think about biases in our models, it is not just about the academic or the experimental […] it's about how that also works with the way we think about society," Chowdhury said. She used the phrase "life imitating art imitating life": "We create these filters because we think that's beauty, and that ends up training our models and pushing these unrealistic notions of what it means to be attractive."
The results seem discouraging, since they show that Twitter does divide its users, giving preference to some over others based on their physical and cultural characteristics. However, this is a first step for this and other technology companies toward combating the problems in their algorithms.
“The ability of people in a competition like this to immerse themselves in a particular type of harm or bias is something that corporate teams cannot afford to do,” Chowdhury said.
This stands in contrast to other cases involving large platforms, such as Amazon, where a group of researchers led by MIT's Joy Buolamwini found racial and gender biases within the company's facial recognition algorithms.
In that case, the company discredited the researchers and called their research "misleading" and "false." It eventually relented and barred police use of these algorithms.
"Artificial intelligence and machine learning are just the Wild West, no matter how skilled you think your data science team is," said Patrick Hall, a judge of the Twitter competition and an artificial intelligence researcher who works on algorithmic discrimination. He asserts that biases exist within all artificial intelligence systems, and that companies must work to avoid them.