By Dr. Vladimiro González Zelaya and Dr. Antonia Terán Bustamante*
The use of artificial intelligence (AI) tools in decisions that impact our lives has become ubiquitous. These tools are used for all kinds of tasks, from the most trivial, such as recommending a movie or series that suits our tastes, to high-impact decisions, such as approving a line of credit, admitting a student to a school, or even reducing a prison sentence. Naturally, automated decision-making is also present in companies' hiring processes.
Mechanisms such as those mentioned rely on algorithms from a branch of AI known as statistical learning or machine learning (ML). These methods "fit" a model with positive and negative examples of a particular task, say hiring, in such a way that it identifies patterns that allow it to determine whether a candidate is viable for hire. ML algorithms can have very good predictive performance, but this depends on the data: bad data will produce bad results, a principle known in computing as garbage in, garbage out, or GIGO. Thus, biased data will produce a biased algorithm: if the algorithm is trained with few examples of women who were hired, it will probably detect this bias and associate the attribute "woman" with the decision not to hire.
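As a rough illustration of this GIGO effect, the following sketch trains a simple classifier (using the scikit-learn library) on a hypothetical, deliberately biased hiring sample; the data, feature names, and hiring rule are invented for the example, not taken from any real system.

```python
# Minimal sketch: a model trained on biased hiring data absorbs that bias.
# All data and the hiring rule below are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
is_woman = rng.integers(0, 2, n)       # protected attribute: 1 = woman
experience = rng.normal(5, 2, n)       # years of experience

# Biased historical decisions: equally experienced women are hired
# far less often than men.
hired = ((experience > 4) & ((is_woman == 0) | (rng.random(n) < 0.2))).astype(int)

X = np.column_stack([is_woman, experience])
model = LogisticRegression().fit(X, hired)

# The learned weight for "is_woman" comes out strongly negative:
# the model has picked up the bias present in its training data.
print("coefficient for 'is_woman':", model.coef_[0][0])
```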
In addition to gender, other personal attributes are also susceptible to discrimination, such as a person's age, nationality, race, or income level; in legal terms, these variables are known as protected attributes (PAs). Algorithmic fairness is a field of AI research with three goals: defining what it means for an algorithm to be "fair", detecting unfair decisions, and preventing these behaviors through various corrective techniques.
A well-known case of such discriminatory algorithms came to light in 2018 at Amazon [1], whose researchers had developed, in 2014, a candidate-selection mechanism based on automated analysis of the résumés (CVs) received. An analysis of the algorithm carried out by Amazon itself showed that the presence of the word "women" in a CV, for example in "president of the association of university women", caused a notable decrease in the score that CV obtained in the selection. Amazon clarified that, although it developed the algorithm, it never used it to filter candidates.
However, were the engineers deliberately building a misogynistic system, or what caused the discriminatory behavior of the algorithm? The answer to this enigma, as almost always in the world of ML, lies in the data. The algorithm developed by Amazon was trained on a set of CVs in which the vast majority of successful applications belonged to male candidates. That is, the algorithms do not discriminate per se; rather, they model a world in which such a bias is already present.
One of the ways in which discrimination can be corrected is by "resampling" the data from which the algorithm learns: if we have a sample in which 90% of the men and only 20% of the women were hired, this will generate a decision rule in which women are at a disadvantage. If instead we increase the proportion of women hired in the sample and reduce the proportion of men hired, the result will be a fairer algorithm, in the sense that it will dissociate the sex attribute from the final decision; a minimal sketch of this idea is shown below.
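The sketch below illustrates one possible resampling strategy, assuming a hypothetical table of past decisions with "sex" and "hired" columns: it oversamples hired members of the disadvantaged group until every group shows roughly the same hiring rate. It is one simple approach among several, not a prescription.

```python
# Hypothetical sketch: oversample hired members of the disadvantaged
# group so that all groups end up with (roughly) the same hiring rate.
import pandas as pd

def rebalance(df, group_col="sex", label_col="hired"):
    rates = df.groupby(group_col)[label_col].mean()
    target = rates.max()                    # equalise toward the highest rate
    if target >= 1.0:
        return df
    parts = [df]
    for group, rate in rates.items():
        if rate < target:
            pos = df[(df[group_col] == group) & (df[label_col] == 1)]
            n_group = (df[group_col] == group).sum()
            # extra positives k so that (len(pos) + k) / (n_group + k) == target
            k = int(round(n_group * (target - rate) / (1 - target)))
            if k > 0 and len(pos) > 0:
                parts.append(pos.sample(k, replace=True, random_state=0))
    return pd.concat(parts, ignore_index=True)

# Usage: balanced = rebalance(historical_decisions_dataframe)
```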
However, this solution leads us to an essential dilemma in algorithmic fairness: what exactly does "fairness" mean? There are two predominant positions [2]: equality of opportunity and equality of outcomes. The first seeks to prevent a deserving candidate from being rejected, that is, to prevent two similar profiles that differ only in their PA from receiving different treatment. The second seeks equity in the proportion of candidates hired according to the selected PA: for example, if 50% of the male candidates are hired, 50% of the female candidates should be hired as well.
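These two notions can be checked directly on a model's decisions. The sketch below, with made-up arrays, contrasts them: equality of outcomes compares hiring rates between groups, while equality of opportunity compares hiring rates only among candidates who are in fact qualified.

```python
# Hypothetical sketch of the two fairness criteria discussed above.
import numpy as np

def outcome_gap(decision, group):
    """Difference in hiring rates between groups (equality of outcomes)."""
    return abs(decision[group == 0].mean() - decision[group == 1].mean())

def opportunity_gap(decision, group, qualified):
    """Difference in hiring rates among qualified candidates only (equality of opportunity)."""
    d0 = decision[(group == 0) & (qualified == 1)].mean()
    d1 = decision[(group == 1) & (qualified == 1)].mean()
    return abs(d0 - d1)

# Made-up decisions where group 1 is hired less often overall,
# even among equally qualified candidates.
decision  = np.array([1, 1, 1, 0, 1, 0, 0, 0])
group     = np.array([0, 0, 0, 0, 1, 1, 1, 1])
qualified = np.array([1, 1, 0, 0, 1, 1, 0, 0])
print(outcome_gap(decision, group))                 # 0.5
print(opportunity_gap(decision, group, qualified))  # 0.5
```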
In conclusion, we have the tools to avoid automated discrimination in hiring processes. It is important to be aware of this possibility and to use tools that detect and prevent such biases. However, a reflection is also needed as a society, in order to reach a consensus on what justice and equality mean.
*The authors are academics at the Business School of the Universidad Panamericana.
Editor’s Note: This text belongs to our Opinion section and reflects only the authors’ views, not necessarily the point of view of High Level.
References
[1] Jeffrey Dastin. Amazon scraps secret AI recruiting tool that showed bias against women (2018). Retrieved March 24, 2023. https://www.reuters.com/article/us-amazon-com-jobs-automation-insight-idUSKCN1MK08G.
[2] Anne Phillips. Defending Equality of Outcome. Journal of Political Philosophy 12.1 (2004), pp. 1-19.