A lobby group backed by Elon Musk and linked to a controversial ideology popular among tech billionaires, fighting to stop killer robots from ending humanity, has set its sights on Europe’s artificial intelligence law.
The Future of Life Institute (FLI) has over the past year become an influential force on some of the most contentious elements of the AI law. Despite the group’s links to Silicon Valley, tech giants like Google and Microsoft have found themselves on the losing side of FLI’s arguments.
In the EU bubble, the arrival of a group that colors its advocacy with fear of AI-triggered catastrophe – rather than ordinary consumer-protection concerns – has been received like a spaceship landing in the Schuman roundabout. Some worry that the institute embodies a tech-world preoccupation with low-probability threats that could distract from more pressing problems. But most would agree that FLI has been effective in its time in Brussels.
“They’re fairly hands-on and have legal and technical expertise,” said Kai Zenner, digital policy adviser to centre-right MEP Axel Voss, who works on AI law. “They are sometimes very concerned about technology, but they make a lot of good points.”
Launched in 2014 by Max Tegmark, an academic at MIT, and backed by tech luminaries including Musk, Skype co-founder Jaan Tallinn, and Vitalik Buterin, FLI is a nonprofit dedicated to tackling “existential threats” – events with the potential to wipe out or permanently doom humanity. It counts other hotshots, such as actors Morgan Freeman and Alan Alda and famed scientists Martin (Lord) Rees and Nick Bostrom, among its external advisers.
Chief among these threats – and a priority for FLI – is rogue AI.
“We’ve seen a plane crash because an autopilot couldn’t be overridden. We’ve seen the storming of the U.S. Capitol because an algorithm was trained to maximize engagement. These are AI safety failures today – and as these systems grow more powerful, the harms could grow worse,” said Mark Brakel, FLI’s director of European policy.
But the lobby faces two public-relations problems. First, Musk, its best-known backer, has been at the center of a storm ever since he began mass firings at Twitter as its new owner, drawing the attention of regulators as well. The controversies swirling around Musk may make lawmakers wary of talking to FLI. Second, the group’s association with a set of beliefs known as effective altruism raises eyebrows: the ideology is facing a reckoning, having recently been blamed as a driving force behind the scandal surrounding the cryptocurrency exchange FTX, whose collapse unleashed financial carnage.
How FLI penetrated the bubble
The arrival of a lobby fighting against extinction, misaligned artificial intelligence, and killer robots was bound to cause a stir among policymakers in Brussels.
FLI’s Brussels office opened in mid-2021, just as discussions began on the European Commission’s AI Act proposal.
“We would prefer to see AI developed in Europe, where there will be regulation in place. We hope that people will be inspired by the European Union,” Brakel said.
Brakel, a Dutch-born former diplomat, joined the institute in May 2021. He chose to work on AI policy as an area that was both impactful and underserved. Political scientist Risto Uuk joined him two months later. A skilled digital operator – he publishes analyses and newsletters from the domain artificialintelligenceact.eu – Uuk had previously done AI research for the Commission and the World Economic Forum. He joined FLI out of philosophical affinity: like Tegmark, Uuk endorses the principles of effective altruism, a value system that prescribes using strong evidence to decide how to benefit the greatest number of people.
From the outset, the institute’s three-person Brussels team (with the assistance of Tegmark and others, including the law firm Dentons) has deftly led lobbying efforts on little-known AI issues.
Exhibit A: general-purpose AI – software such as speech-recognition or image-generation tools that is used in a wide range of contexts and is sometimes affected by serious biases and inaccuracies (for example, in medical settings). General-purpose AI was not mentioned in the Commission’s proposal, but it made its way into the final text of the EU Council and is guaranteed to feature in Parliament’s position.
“We went out and said: there’s this new class of AI – general-purpose AI systems – and the AI Act doesn’t consider it at all. You should be worried about this,” Brakel said. The issue wasn’t on anyone’s radar; now it is.
The group has also played on European fears of technological dominance by the United States and China. “General-purpose AI systems are primarily built in the U.S. and China, and that could hurt innovation in Europe if you don’t ensure they adhere to certain requirements,” Brakel said, adding that this argument has resonated with center-right lawmakers he has recently met.
Another of FLI’s hobbyhorses is banning AI capable of manipulating people’s behavior. The Commission’s original proposal does ban manipulative AI, but only when it uses “subliminal” techniques – a limitation Brakel believes will create loopholes.
But the AI Act’s co-rapporteur, Romanian MEP Dragoş Tudorache, is now pushing to make the ban more comprehensive. “If that amendment goes through, we’ll be much happier than we are with the current text,” Brakel said.
Smart enough to crash crypto
While the group’s input on key provisions of the AI bill has been welcomed, many in the Brussels establishment view its worldview with suspicion.
Tegmark and other FLI proponents adhere to what is known as effective altruism (or EA), a strand of utilitarianism typified by philosopher William MacAskill – whom Musk has called “a close match for my philosophy.” EA dictates that one should improve the lives of as many people as possible, using a rational, evidence-based approach. At its most basic, this means donating a large portion of one’s income to suitable charities. A more radical, long-termist strain of effective altruism holds that one should seek to reduce risks capable of killing many people – especially future people, who will vastly outnumber the living. That means preventing the potential rise of an artificial intelligence whose values conflict with humanity’s well-being should sit at the top of one’s list of concerns.
The criticism leveled at FLI is that it advances this long-termist interpretation of the effective altruism agenda – one supposedly unconcerned with current ills such as racism, sexism, and hunger, and focused instead on science-fiction threats to people not yet born. Timnit Gebru, the AI researcher whose abrupt exit from Google made headlines in 2020, has slammed FLI on Twitter, expressing “grave concerns” about it.
“They’re backed by billionaires, including Elon Musk – and that really makes people suspicious,” Gebru said in an interview. “The entire field of AI safety is made up of many billionaire-backed ‘institutes’ and corporations pouring money into it. But their concept of AI safety has nothing to do with current harms toward marginalized groups – they want to redirect the whole conversation to preventing an AI apocalypse.”
Effective altruism’s reputation has taken a hit in recent weeks with the fall of FTX, the bankrupt exchange that lost at least $1 billion of clients’ crypto assets. CEO Sam Bankman-Fried had been a darling of EA, speaking in interviews about his plan to make money and give it to charity. With FTX’s collapse, commentators argued that the ideology of effective altruism had led Bankman-Fried to cut corners and rationalize his recklessness.
Both MacAskill and FLI donor Buterin have defended EA on Twitter, saying Bankman-Fried’s actions went against the philosophy’s principles. “Automatically downgrading everything SBF believed is a mistake,” wrote Buterin, who invented the Ethereum blockchain and funds an FLI grant program for research on AI existential risk.
Brakel said that FLI and EA are two distinct things, and that FLI’s advocacy focuses on current problems, from biased software to autonomous weapons – for example at the U.N. level. “Do we spend a lot of time thinking about what the world will look like in 400 years? No,” he said. (Neither Brakel nor Claudia Prettner, FLI’s EU representative, describes themselves as an effective altruist.)
Another criticism is that FLI’s efforts to fend off malevolent AI conceal a techno-utopian drive toward developing human-level AI. At a 2017 conference, FLI advisers – including Musk, Tegmark, and Skype’s Tallinn – discussed the likelihood and desirability of smarter-than-human intelligence. Most panelists thought “superintelligence” would occur; half thought it desirable. The conference’s outcome was a series of (fairly moderate) guidelines for the development of beneficial artificial intelligence, which Brakel cites as one of FLI’s founding documents.
That techno-optimism led Émile P. Torres, a Ph.D. candidate in philosophy who used to collaborate with FLI, to eventually turn against the organization. “None of them seem to entertain the idea that maybe we should explore some kind of moratorium,” Torres said. Raising such points with an FLI employee, Torres said, resulted in a kind of excommunication. (Torres’ articles have been removed from FLI’s website.)
Within Brussels, the worry is that FLI could yet pivot from its current real-world focus and steer the AI debate toward far-fetched scenarios. “In discussing AI at the EU level, we wanted to draw a clear distinction between boring, tangible AI systems and science-fiction questions,” said Daniel Leufer, a lobbyist for the digital rights group Access Now. “In previous EU discussions about regulating AI, there were no organizations in Brussels focused on topics like superintelligence – it’s good that the debate isn’t going in that direction.”
Those who see FLI as an offshoot of Californian futurism point to its board and funders. Besides Musk, Tallinn, and Tegmark, its donors and advisers include researchers from Google, OpenAI, and Meta; Open Philanthropy, the foundation of Facebook co-founder Dustin Moskovitz; the Berkeley Existential Risk Initiative (which has in turn received funding from FTX); and actor Morgan Freeman.
In 2020, most of FLI’s funding ($276,000 of a global $482,479) came from the Silicon Valley Community Foundation, a philanthropic foundation favored by tech bigwigs such as Mark Zuckerberg; its 2021 accounts have not yet been released.
Brakel denied that FLI is cozy with Silicon Valley, saying the organization’s work on general-purpose AI has made life harder for tech companies. Brakel said he has never spoken to Musk, though Tegmark is in regular contact with the members of the scientific advisory board, which includes Musk.
In Brakel’s view, what FLI is doing resembles climate activism in its early days. “We’re currently seeing the warmest October on record. We’re worried about that today, but we’re also worried about the impact in 80 years’ time,” he said last month. “[There] are AI safety failures today – and as these systems become more powerful, the harms may become worse.”