There is one person drowning and 10 people in a burning building. Who would you save? The philosopher William MacAskill, professor at Oxford and leader of Effective Altruism, has no doubt: “I think you have a moral obligation to save the 10.” He himself raised the problem in an interview last year to exemplify the rationality and practicality that inspire his movement, and to explain why, for its adherents, fighting against the potential emergence of an evil artificial intelligence is more important today than addressing, for example, extreme poverty.
They call it “longtermism.” They believe it is better to prioritize the risk of future catastrophes than to attend to some of humanity’s current emergencies. It all comes down to efficiency: “How can we do the greatest good? How can we use our time and resources in the most effective way possible?” MacAskill once said. He is only 38 years old and serves as a guru among a group of notable billionaires in the technology industry.
Three key names account for its level of influence: Dustin Moskovitz, co-founder of Facebook; Sam Bankman-Fried, protagonist of the FTX scandal; and Elon Musk, leader of Twitter and Tesla. All three are sympathetic to the organization, in which more than 70% of members are white men. Its followers move between the University of Oxford and Silicon Valley. It has plenty of resources, and also plenty of scandals: it has faced complaints of sexual harassment and fraud, and some describe it as an elite cult.
Effective Altruism is a powerful current behind the scenes of the present AI boom. Other leaders in the sector accuse it of driving a dangerous race in the development of these technologies behind the facade of a movement that calls for prudence.
Effective altruism and the advancement of AI
Effective Altruism believes that the emergence of Artificial General Intelligence —a superintelligence that would give machines the ability to think like a human— is a real possibility. Its objective, therefore, is to find a way for its development to guarantee that it becomes a beneficial tool for humanity.
Timnit Gebru, a former member of Google’s AI ethics team, maintains that, under the excuse of seeking a “good” AI and a “technological utopia,” enormous amounts of money are moved that are the engine of the dizzying AI race we are living through. Contradictorily, she says, it is not the benefit of people that is being prioritized, but economic profit. “It is another example of how our technological future is not a linear march towards progress, but is determined by those who have the money and the influence to control it,” she wrote in an article in WIRED.
Gebru mentions OpenAI as an example of the paradox she denounces. The company was founded as a nonprofit organization in 2015 by Silicon Valley elites, including PayPal co-founder Peter Thiel and Elon Musk. Both are linked to Effective Altruism, so much so that they were speakers at the movement’s conferences in 2013 and 2015, respectively. Musk has also made public his sympathy for the organization, and his link with MacAskill was exposed in text messages published during a lawsuit over the purchase of Twitter.
Musk, when he was still part of OpenAI, was far from the AI critic he appears to be today. In fact, at the beginning of 2018, he was very concerned that the company was far behind Google in the development of this technology. It was then that he proposed a solution: take control of the company and run the project himself, as Semafor revealed last week. The rest of the OpenAI founders objected, and Musk left the company. The rest is history.
A multi-million dollar move full of scandals
OpenAI went from being a non-profit organization to a multi-billion dollar company. Musk himself complained: “I’m still confused about how a non-profit organization to which I donated $100 million became a $30 billion for-profit company,” he vented on Twitter last March. Without him, OpenAI went on to lead AI development with the release of ChatGPT and is now a Microsoft partner.
Musk, already out of the company, published a letter last week signed by more than a thousand specialists. It asked that the development of AI be paused until there are guarantees that it is safe. He disseminated the document through the Future of Life Institute, of which he is an adviser and which is a partner of Open Philanthropy, another foundation that promotes effective altruism.
The contradictions of the movement were also exposed by the scandal of the FTX cryptocurrency platform. “I don’t know which emotion is stronger: my absolute anger towards Sam (Bankman-Fried) for causing so much harm to so many people, or my sadness and self-hatred for falling for this hoax,” MacAskill, leader of Effective Altruism, tweeted. However, even if he wanted to pretend he didn’t understand, Time revealed that his organization was aware of the fraudulent strategy for which Bankman-Fried could now face a sentence of more than 100 years in prison.
FTX Future Fund, Bankman-Fried’s foundation, donated more than $160 million to causes that promote effective altruism, including more than $33 million to organizations directly connected to MacAskill. FTX also sponsored, for example, activities at NeurIPS, one of the most influential machine learning conferences in the world.
Complaints of harassment and misogyny
A group of women have denounced widespread sexual misconduct within the Effective Altruism movement in Silicon Valley. A Bloomberg report reviewed the cases of at least eight women, at different levels of the organization, who were victims of abuse and harassment. Those who made complaints, whether to the police or to community mediators, say they were labeled as problematic and dismissed. The accused men always received the backing of the leaders, the article explains.
It is difficult to trace the level of influence of this community, but 80,000 Hours —another partner organization founded by MacAskill— estimates that $46 billion was committed to Effective Altruism causes between 2015 and 2021. It reports growth in donations of around 21% each year.
“Research priorities follow funding,” insists Timnit Gebru, the former member of Google’s AI ethics team. She says it’s not surprising that, if some of the funding comes from groups like Effective Altruism, harmful products proliferate.
Gebru was fired from Google along with Margaret Mitchell, head of its responsible AI team until 2021. Both publicly denounced racism and sexism in the company. They also warned about various risks in the development of these systems.
The two, along with other peers, published another letter in response to the one publicized by Elon Musk. In it, they call for regulations that force companies to guarantee transparency and accountability, and urge that we not be distracted by “imaginary apocalypses”: “The responsibility does not fall on the machines, but on their builders.”