Power-hungry robots, space colonization, and cyborgs: Inside the strange world of “The Long Run”

Most of us don’t believe that energy-hungry killer robots pose an imminent threat to humanity, especially when poverty and the climate crisis are already ravaging the Earth.

This was not the case for Sam Bankman-Fried and his followers, powerful actors who embraced a school of thought within the effective altruism movement called longtermism.

This past February, the Future Fund, a charitable organization endowed by the cryptocurrency entrepreneur, announced it would spend more than $100 million — possibly as much as $1 billion — this year on projects to “improve humanity’s long-term prospects.”

The slightly vague phrasing may be confusing to those who think of philanthropy as funding homelessness charities and medical NGOs in the developing world. Indeed, the Future Fund’s areas of special interest include artificial intelligence, biological weapons, and “space governance,” a vague term referring to the settlement of humans in space as a possible “watershed moment in human history.”

Out-of-control AI was another area of concern for Bankman-Fried, so much so that in September the Future Fund announced prizes of up to $1.5 million to anyone who could make a convincing estimate of the threat that unaligned AI might pose to humanity.

SpaceX’s Elon Musk gives an update on the company’s Mars plans. Musk is a proponent of longtermism. Photograph: Callahan O’Hare/Reuters

Artificial intelligence, the Future Fund said, is “the development that is likely to dramatically change the course of humanity in this century.” “With the help of advanced artificial intelligence, we can make tremendous progress toward ending global poverty, animal suffering, premature death, and debilitating diseases.” But AI could also “acquire undesirable objectives and pursue power in unintended ways, causing humans to lose all or most of their influence over the future.”

Less than two months after the contest was announced, Bankman-Fried’s cryptocurrency empire, once valued at $32 billion, collapsed; many of the Future Fund’s top leaders resigned, and its AI prizes may never be awarded.

Most of the millions of dollars Bankman-Fried had pledged were destined for a constellation of charities and think tanks associated with effective altruism, a once-obscure ethical movement that has become influential in Silicon Valley and at the highest levels of business and international politics.


Longtermists argue that the well-being of future humans is as morally important as, or more important than, the lives of current humans, and that philanthropic resources should be dedicated to predicting and defending against extinction-level threats.

But rather than funding anti-malaria bed nets or drilling wells, longtermists prefer to allocate money to researching existential risks, or “x-risks.”

In his latest book, What We Owe the Future, William MacAskill — the 35-year-old Oxford moral philosopher who has become the public intellectual face of effective altruism — makes the argument for longtermism with a thought experiment about a hiker who accidentally smashes a glass bottle on a trail. He believes that any conscientious person would clean up the glass immediately to avoid injuring the next hiker, whether that person comes along in a week or in a century.

Similarly, MacAskill argues that the number of possible future humans, across the many generations the species may endure, far exceeds the number of humans currently alive. If we truly believe that all human beings are equal, then protecting future human lives matters more than protecting human life today.

Some longtermist funding interests, such as nuclear nonproliferation and vaccine development, are more or less uncontroversial. Others are more exotic: investing in space colonization, preventing the rise of power-seeking artificial intelligence, and cheating death through “life extension” technology. A cluster of ideas known as “transhumanism” seeks to elevate humanity through the creation of digital versions of humans, the “bio-engineering” of human-machine hybrids, and the like.

People like the futurist Ray Kurzweil and his followers believe that biotechnology will soon enable “a union between humans, intelligent computers, and artificial intelligence systems,” Robin McKie explained in the Guardian in 2018. “The resulting human-machine mind will become free to roam a universe of its own creation, uploading itself at will onto a ‘suitably powerful computational substrate,’ thus creating a kind of immortality.”


Luke Kemp, a research associate at the Centre for the Study of Existential Risk at the University of Cambridge, describes himself as an “EA-adjacent” critic of effective altruism. What gets left off the table, he says, are the critical and credible threats happening right now, such as the climate crisis, natural pandemics, and economic inequality.

“The things they push tend to be things Silicon Valley likes,” Kemp said. They are the kinds of speculative and futuristic ideas that tech billionaires find intellectually exciting. “And they almost always focus on technological solutions to ‘human problems’ rather than political or social problems.”

There are other objections. Experimental bioengineering is expensive, Kemp said, and would be available, especially at first, to “only a small fraction of humanity.” It could lead to a future class system in which inequality is not only economic but biological.

This kind of thinking, he argued, is seriously undemocratic. “These big decisions about the future of humanity should be decided by humanity, not by a couple of white philosophers at Oxford funded by billionaires. It is literally the most powerful and least representative class in society imposing a particular vision of the future that suits them.”

Some longtermists are interested in “transhumanism,” the idea that technology can extend our lives. Composite: Linsey Irvin/Getty

“I don’t think EAs, or at least EA’s leadership, care too much about democracy,” Kemp added. In its most dogmatic forms, he said, longtermism is preoccupied with “rationalism, militant utilitarianism, a pathological obsession with quantification, and neoliberal economics.”

Organizations like 80,000 Hours, a program for early-career professionals, aim to steer potential high-impact recruits toward four main areas, Kemp said: AI research, research on preparing for man-made pandemics, EA community building, and “global priorities research,” meaning the question of how funding should be allocated.

The first two areas, Kemp said, while worth studying, are “very speculative,” and the third is “self-serving,” because it channels money and energy back into the movement.

This year, the Future Fund reported having recommended grants for noteworthy projects such as research on the “feasibility of inactivating viruses by electromagnetic radiation” ($140,000); a project connecting children in India with online STEM education ($200,000); a search for “therapeutic disease-neutralizing antibodies” ($1.55 million); and research on childhood lead exposure ($400,000).

But much of the Future Fund’s largesse appears to have been plowed back into longtermism itself. It recommended $1.2 million to the Global Priorities Institute; $3.9 million to the Long-Term Future Fund; $2.9 million to set up a “longtermist co-working office in London”; $3.9 million to create a “longtermist co-working space in Berkeley”; $700,000 for the Legal Priorities Project, a “longtermist legal research and field-building organization”; $13.9 million to the Centre for Effective Altruism; and $15 million to Longview Philanthropy to carry out “independent grantmaking on global priorities research, nuclear weapons policy, and other longtermist issues.”

Effective altruism and longtermism, Kemp argued, often seem to be working toward a kind of institutional capture. “The longtermist strategy,” he said, “is to get EA-aligned experts into places like the Pentagon, the White House, the British government, and the UN” to influence public policy.

Sam Bankman-Fried at a Senate Agriculture, Nutrition and Forestry Committee hearing in Washington, DC.
Sam Bankman-Fried at a Senate Agriculture, Nutrition and Forestry Committee hearing in Washington, DC. Photo: Bloomberg/Getty Images

There may be an upside to the timing of Bankman-Fried’s fall. “In a way, it’s good that it happened now rather than later,” said Kemp. “He was planning to spend huge amounts of money on elections. At one point, he said he was planning to spend up to $1 billion, which would have made him the largest donor in US political history. Can you imagine if that amount of money had contributed to a Democratic victory, and then turned out to be based on fraud? In an already fragile and polarized society like the United States? That would have been horrific.”


“The main tension of the movement, as I see it, is one that many movements deal with,” said Benjamin Soskis, a historian of philanthropy and a senior research fellow at the Urban Institute. A movement fueled by regular people, with their own passions, interests, and varied resources, has attracted a number of very wealthy funders, he said, and has become driven by “funding decisions, sometimes just public identities, from people like SBF, Elon Musk, and a few others.” (Soskis noted that he has received funding from Open Philanthropy, an EA foundation.)

Effective altruism put Bankman-Fried, who lived in a luxury compound in the Bahamas, “on a pedestal, as this monk who drives a Corolla, sleeps on a beanbag, and earns to give, which is clearly not true,” Kemp said.

Soskis believes that effective altruism has a natural appeal to people in tech and finance, who tend to take an analytical and quantitative approach to problems, and that EA, like all movements, spreads through social and professional networks.

Effective altruism also attracts the rich, Soskis believes, because it provides “a way to understand the marginal value of extra dollars,” especially when talking about “vast sums that can defy comprehension.” The movement’s focus on numbers (“shut up and multiply”) helps the wealthy understand what $500 million can do philanthropically versus, say, $500,000 or $50,000.

One positive outcome, he believes, is that EA-influenced donors openly discuss their charitable commitments and encourage others to make them. Historically, Americans have tended to view philanthropy as a private matter.

But there is one thing “I don’t think you can escape,” Soskis said. Effective altruism “is not based on a strong critique of the way the money was made. Elements of it have been interpreted as endorsing capitalism in general as a positive force, through a kind of consequentialist calculus. To some extent, it is a safer landing place for people who want to insulate their philanthropic decisions from a broader political debate about the legitimacy of certain industries or ways of making money.”

Kemp said it is rare to hear effective altruists, especially longtermists, discuss issues like democracy and inequality. “Honestly, I think this is something donors don’t want us to talk about.” Cracking down on tax evasion, for example, would mean a loss of both power and wealth for donors.

The collapse of Bankman-Fried’s crypto empire, which has imperiled the Future Fund and countless other longtermist organizations, may be revealing. Longtermists believe that humanity’s future existential risks can be precisely calculated; yet, as the economist Tyler Cowen recently pointed out, they could not even foresee the existential threat to their own flagship philanthropic organization.

Soskis said there will need to be some “soul-searching.” “Longtermism has a stain on it, and I’m not sure when, or if, it will be removed completely.”

“A billionaire is a billionaire,” the journalist Anand Giridharadas wrote recently on Twitter. His 2018 book Winners Take All was highly critical of the idea that private philanthropy will solve human problems. “Stop believing in good billionaires. Start organizing towards a good society.”

