Why Longtermism?

Mark Borthwick
6 min read · Dec 14, 2021
If humanity survives as long as the average mammalian species, 99.8% of all people who will ever live are yet to be born. What are our moral obligations to the unborn? [Photo: Ryoji Iwata]

Longtermism, or the “long-term value thesis”, is the proposition that because the effects of altruism persist far into the future, the main beneficiaries of philanthropy are future generations. This means that some of the best philanthropic opportunities are either nascent or only now emerging, and their potential is overlooked. Longtermism posits that present actions have such high potential to improve civilization in both the near and distant future that we are morally obliged, on behalf of future generations, to pursue them now. Per pound spent, long-term interventions have the potential to be the highest-impact by far.

Why Do Future Generations Matter?

It’s easy to understand why future peoples are overlooked: they are not here yet, they cannot presently suffer, and chances are we will never know them personally. However, as they are our descendants and heirs, we have a huge amount of control over the world they will inhabit.

There is a massive potential difference between the best and worst outcomes for future generations, even in the short term. The world in the near future could be a utopia, where all living beings lead exclusively positive lives; a dystopia, where all living beings lead inescapably negative lives; or extinct, where nothing lives at all. Climate change within our lifetime has the potential to ‘lock in’ extreme weather patterns for hundreds of years, which no later intervention, however drastic, could undo. Technologies are currently emerging which have the potential to be as transformative as the internal combustion engine or the nuclear bomb, and we get to determine the role these technologies play in society in a way that is unique to our generation. We are in what Aristotle called a ‘kairos’ moment: a uniquely opportune moment with morally relevant consequences that will define the lives of our descendants.

The Future Is Big

The future of humanity will almost certainly dwarf its past. 15% of all people who have ever lived are alive today. By 2100, this will be 50%. If humanity survives as long as the average mammalian species, 99.8% of all people who will ever live are yet to be born. If we avoid extinction and survive on Earth for as long as it remains habitable, a billion years of civilization remain. If we successfully colonize other planets in that time, it could be a trillion years. Even if the predictability, tractability, or feasibility of improving the lives of future people turns out to be low, reducing the risk of human extinction by one in a million has an expected value of 1,000–10,000 additional years of civilization, doubling the current human lived experience.
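The lower bound of that expected-value claim can be checked with a single multiplication. This is a minimal sketch using only the article's own figures (a one-in-a-million risk reduction and roughly a billion habitable years); the variable names are illustrative, not from any source:

```python
# Back-of-the-envelope expected value of extinction-risk reduction,
# using the article's lower-bound inputs.

risk_reduction = 1e-6      # extinction risk reduced by one in a million
years_remaining = 1e9      # ~1 billion habitable years left on Earth

# Expected additional years of civilization bought by the reduction:
# a one-in-a-million chance of gaining a billion years.
expected_years = risk_reduction * years_remaining
print(expected_years)      # ≈ 1,000 years
```

Substituting the trillion-year space-colonization horizon for `years_remaining` multiplies the result by a thousand, which is why the expected value is so sensitive to how long civilization is assumed to last.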

If charities are obliged to support mothers in accessing vaccinations for their unborn babies, are we also obliged to ensure access to vaccines for other unborn future generations? [Photo: CDC]

The Philosophical Approach

It seems that neither geographic nor temporal distance is morally relevant when thinking about our obligations to other people. It is commonly accepted that if we would feel morally required to spend £5 to save the life of a starving child next door, we are equally required to give that money to a child we have never met. The fact that the second child is far away, or that the assistance is provided by a charity on our behalf, is not relevant to the moral obligation upon us to prevent their suffering. This obligation persists even if the child is yet to be born, as seen in charities that support mothers in accessing vaccinations for their unborn babies.

The sheer number of beneficiaries living in the future means that longtermist interventions are likely to be orders of magnitude greater in impact than solving any current problem. Preventing malaria is considered one of the most cost-effective ways to save a life in the world, with the Against Malaria Foundation (AMF) requiring $3,641 per life saved. However, a longtermist cause like preventing extinction due to climate change will affect all future peoples. One estimate suggests that climate change could be halted entirely by an investment of $2.67 trillion. Some quick back-of-the-napkin math tells us that this money could save 733 million lives at the AMF’s current rate. However, if ending climate change averts a 3.5% chance of human extinction, the expected value over the subsequent 21 million future generations is 7.1 quintillion lives, meaning it saves nearly ten billion times more lives per dollar than even the most effective present-day intervention.
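The back-of-the-napkin comparison above can be reproduced in a few lines. This sketch takes every figure from the article itself, including the 7.1 quintillion expected-lives estimate, and verifies none of them independently:

```python
# Reproducing the article's cost-effectiveness comparison.
# All inputs are the article's own figures, not independent estimates.

amf_cost_per_life = 3641.0    # USD per life saved (AMF estimate)
climate_budget = 2.67e12      # USD to halt climate change (one estimate)

# Spending the climate budget on malaria prevention instead:
direct_lives = climate_budget / amf_cost_per_life
print(f"{direct_lives:.3g}")  # ≈ 733 million lives

# The article's expected-lives estimate for averting a 3.5% extinction
# risk over 21 million future generations:
expected_future_lives = 7.1e18

# Same budget in both cases, so the lives-per-dollar ratio is just
# the ratio of lives saved:
ratio = expected_future_lives / direct_lives
print(f"{ratio:.3g}")         # ≈ 9.7 billion
```

On these figures the multiplier works out to roughly ten billion, which is the point of the comparison: even large uncertainties in the inputs leave an orders-of-magnitude gap.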

We have the chance to define the human experience of emergent technologies with high predicted influence and staying power. [Photo: Andrew Ridley]

Why the Time Is Right for Longtermism Now

A number of convergent factors mean that we are uniquely positioned in time to produce positive outcomes for future generations:

● There is a political bias towards short-termism, with long-term initiatives struggling to survive across political terms. For example, the 2017–2021 US administration dissolved both the National Security Council’s Global Health Security and Biodefense directorate and the ‘PREDICT’ pandemic-detection programme, despite both having existed for less than ten years, and despite pandemics consistently being ranked as a top global existential risk during that time.

● As we become more interconnected, the risk of global catastrophe increases. For example, global aviation greatly contributed to the spread of the COVID-19 pandemic. We can reasonably expect this interconnectivity to increase over time.

● Rapid technological expansion means that we have the chance to define the human experience of emergent technologies with high predicted influence and staying power. Artificial Intelligence is predicted to disrupt every industry within 40 years, and the manner of its emergence will determine whether artificial general intelligence benefits most people or is monopolized by the interests of a very few.

● Technological development is path-dependent, and missed opportunities can have a lasting impact on the future. The world’s first cars were electric but failed to achieve price parity with petrol cars; a huge opportunity to invest in that technology and mitigate climate risk was missed. We are surrounded by similar opportunities to dramatically alter the path of breakthrough technologies (e.g., the factory farming of fish and insects).

● Human society tends to be static and hard to change. If we colonize other planets with our current attitudes, they could prove very hard to transform. It is not inconceivable that factory farming, dependency on non-renewables, and societal injustice could become entrenched features of a new Martian society.

Preventing the development of new weapons is consistently listed as one of the best interventions for the future of humanity. [Photo: Maximalfocus]

Intervention Areas with High Impact Potential

Work in the following cause areas has a high chance of defining the future of humanity:

Climate change

The risks of climate change are well documented and range from widespread disruption of human activity to mass extinction. We believe that independently assessing the viability of carbon innovations, building policy coalitions, and investing in political structures with constitutionally longtermist values are all high-value interventions for preventing climate change.

Biosecurity

COVID-19 has killed 0.1% of the world’s population so far, making it the ninth biggest pandemic in history. Some estimates put mortality from the Spanish Flu at 5%, meaning there is precedent to suggest the next viral pandemic could be much more deadly than this one. Investments in pandemic preparedness, advances in vaccine technology, and the prevention of known zoonotic risk factors all have high long-term potential.

Transformative Artificial Intelligence

Transformative AI has the potential to be extremely disruptive if weaponized or otherwise misused. Delaying the development of advanced AI, improving its comprehension of and alignment with human value systems, and implementing governance structures have all been identified as high-value interventions in AI development.

Building the Longtermist Community

Overcoming the human bias towards short-termism has the potential to dramatically alter our structures and decision-making practices at every level. Directing resources towards longtermist projects, movement building, outreach to policymakers and academics, and establishing priority rankings for both research and policy all have high estimated value for longtermist outcomes.

For more information on Longtermism, I recommend engaging with the Forethought Foundation and Longview Philanthropy.


Mark Borthwick

Traditional storyteller, animal ethicist, and effective altruist based in the Lake District, England. @MDBorthwick