On The Fundamental Incoherence of Longtermism
Some very basic math
I haven’t written much on ethics on this substack, and there’s a reason for that: my academic work, both in seminary and in graduate school, focused mostly on systematic theology and then philosophy, especially on fundamental questions of epistemology and philosophy of mind. I just didn’t study ethics much in detail, so my insight in ethics is, well, a lot more limited.
That said, I certainly do think about ethics quite a bit, and of course ethics (like everything else) is something we can (and should!) submit to careful philosophical reflection—especially since, in many ways, ethics is basically philosophy as applied to the actual business of living. And so, I have a number of ethics-related articles in the pipeline, and today’s is the first. I am going to start with a narrowly-focused critique of one aspect of one particular ethical system, before hopefully addressing ethical questions more broadly in future pieces.
That said, my comment above is important context for what follows—I am certainly no expert on these matters, and am certainly happy for folks to offer correction or counter-arguments if they see fit (of course, I am always happy to hear such rejoinders, even on topics where I have more expertise, but that applies even more here, where I am wandering a bit further afield).
A Brief Introduction to Consequentialist Ethics
(If you are already familiar with consequentialism, utilitarianism, and their specific application in longtermism, you may want to skip down to the next section where I engage my principal critique of longtermism directly: “The Problem with the Long View”.) Longtermism is a new ethical perspective, an outgrowth of effective altruism, which itself is really just a very market-oriented outgrowth of utilitarianism. Utilitarianism is the primary mode of so-called consequentialist ethics, which argues that the moral rightness of any given action is to be calculated solely on the basis of what the consequences of that action will be. So, for example, if we ask whether it is acceptable to lie to someone, we must ask: what will the consequences of lying be in this particular circumstance? So, if we are considering lying to someone who has a peanut allergy about whether there are any peanuts in our peanut butter cheesecake, consequentialism would argue that such a lie would be wrong specifically because it would cause harm to that person. Meanwhile, if we are considering whether to lie to a bunch of Nazis who show up at our door about whether our Jewish neighbors are hiding in the basement, the consequentialist would argue that in this case it would be wrong not to lie, since if we tell the truth, our neighbors will be killed.
So far, so good. Of course, there are other approaches to ethics (most notably virtue and deontological) that would approach these questions differently. Even so, I think most people intuit that, at least in these particular cases, a consequentialist ethics seems to provide a solid guide to behavior.
Utilitarianism—and therefore effective altruism—and therefore longtermism—attempts to apply this consequentialist ethics universally as a complete ethical and moral system. And here, we start to run into some problems. I don’t want to try and discuss all of these problems here (but I hope to address many of them in future posts), but the thoughtful reader (and, of course, all of my readers are the very paragons of thoughtfulness!) can easily surmise some problems with trying to systematize consequentialism: what do we do if we can’t easily determine the consequences of a decision? What if a decision will have both negative and positive consequences? How do we measure and compare different kinds of outcomes? What if different people disagree on whether a given outcome should be considered positive or negative?
One way that consequentialist/utilitarian ethicists try to address or outmaneuver these questions is to argue that what is considered “good” or “bad” must be reduced to joy or suffering, respectively. That is: something is a good consequence to the extent that it creates more joy—more qualitative experiences of pleasure, basically—and something is bad to the extent that it causes pain or suffering. The way this is systematized is relatively simple, conceptually: when considering a given decision, just add up all the joy/pleasure experienced by every conscious being that will result, and then subtract all the pain/suffering experienced by every conscious being that will result.
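To make the bookkeeping concrete, here is a minimal sketch in Python of what this hedonic calculus amounts to, using the Nazis-at-the-door example from above. The numbers are pure inventions for illustration; nothing in utilitarian theory dictates these particular values. The point is only the shape of the procedure: enumerate affected beings, tally pleasure and suffering, and rank actions by the net total.

```python
# A toy hedonic calculus: illustrative only, with invented numbers.
# Each outcome lists (pleasure, suffering) for one affected conscious being.

def net_utility(outcomes):
    """Sum all pleasure and subtract all suffering across affected beings."""
    return sum(pleasure - suffering for pleasure, suffering in outcomes)

# Hypothetical scenario: telling the truth to the Nazis vs. lying to them.
tell_truth = [(0, 100), (0, 100), (5, 0)]   # two neighbors killed; truth-teller keeps a "clear conscience"
lie        = [(50, 0), (50, 0), (0, 5)]     # neighbors live; the liar bears a small cost

print(net_utility(tell_truth))  # -195
print(net_utility(lie))         #   95  -> the consequentialist says: lie
```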
Of course, what is simple in theory is basically impossible in practice: when it comes to really big decisions made on the scale of a whole society, discovering and adding up all the impacts of a given decision—e.g. should we use nuclear power or not?—is an insurmountable challenge. Not only are there a vast array of possible impacts, both positive (nuclear power is basically carbon-neutral and provides a huge amount of electricity very cheaply once the capital is in place) and negative (nuclear power generates dangerous waste and there is always the possibility of an accidental release of radioactivity), but those impacts must be calculated over tens of thousands of years, because the half-life of some of the byproducts of nuclear power is so long (hence the policy of burying nuclear waste in super deep, hopefully impermeable caverns).
There is certainly an apples-&-oranges problem here: how do we weigh the benefit of cheap, carbon-neutral power against the potential negative impact of radioactivity harming someone in the distant future? Is there any kind of even semi-mathematical, neutral, objective way of doing that?
Now, as I said above, despite the fact that I just introduced all of these thorny questions here, I will not attempt to address all of these questions here(!). Again, I do want to engage them in the future, though, so if your curiosity (and/or animosity) is piqued, well, put a pin in that and check back in a few weeks/months. I do want to point out here, though, that obviously serious utilitarian ethicists are aware of these problems and have worked to address them. Whether their proposed solutions are effective or not is a discussion for another day, but I don’t mean to imply that the speculative questions I have raised above somehow by themselves invalidate utilitarianism—they certainly do not.1 (This is not to say that I don’t have criticisms of consequentialism and utilitarianism! I certainly do. But they deserve a fuller and fair-minded treatment than I can provide here, as I focus on one major flaw in longtermism in particular.)
The Problem with the Long View
Today, I want to consider and critique just one offshoot of the consequentialist/utilitarian ethical family tree: the previously-mentioned longtermism.
Longtermism, as its awkwardly non-hyphenated name suggests, is an ethical position that is focused on the long term; to wit: longtermists argue that our ethical decisions must be tailored to best meet the needs of future generations of humans/post-human sentient, sapient, conscious beings.2 The logic of this position is relatively straightforward (which is not to say that it’s correct; more about that below!).
Longtermists assume that the human species has a long future ahead of it, that our descendants will continue to exist and even thrive for many thousands, even millions of generations into the future. And if it is the case that there will be, say, one million generations of post/humans who come after us, then, the longtermists reason, we must essentially weigh the impacts of our actions on those future generations as a million times more important than the impacts of our actions on ourselves. This flows relatively cleanly from their consequentialist ethics: since the rightness or wrongness of any action depends entirely on how it effects either pleasure or pain in other conscious beings, if we trust that there are so many more conscious beings who will exist in the future than do exist now, then the impact on that much larger population of people/beings-that-evolved-from-people who will live in the future is of much greater import than what we experience today, simply because there are (potentially!) so many more of them than there are of us.
Although the specific ethical program will vary from longtermist to longtermist, the basic pattern is that they tend to deemphasize many of the current problems that we face today (and which many people think are dire, existential concerns)—specifically climate change, food security, and the solving of common medical maladies—and believe that we should spend resources instead on solving other issues—most often their concern is with achieving so-called Artificial General Intelligence (AGI), effective means of interplanetary and even interstellar human travel, and a pro-natalist position (although they are generally only concerned with increasing the birth rate among the right sort of people—well-to-do, scientifically-gifted westerners, for the most part). The working assumption is that these achievements, though they may or may not really help those of us living in the present, will be crucial to our success and fulfilment many centuries and millennia in the future.3
Now, as before, there are myriad arguments against and questions about this position, even if we accept a consequentialist ethics for the sake of argument. For one, we really don’t know how long humanity and our future descendants will survive. Maybe we have millions of years of star-faring civilization ahead of us, maybe we won’t exist at all in a century. Any predictions on this score are, well, highly speculative. We also have a hard time really calculating the impact of current decisions on future people. Can we say with confidence how a change in the capital gains tax today will affect people living on one of the planets in the Alpha Centauri system a million years from now?4
However (and stop me if you’ve heard me say this earlier in this very piece), I’m not going to address these and other concerns in this particular piece. I want to focus on one specific issue with the longtermist argument that I think is decisive, no matter where we land on all the other issues, problems, and questions surrounding it.
For the longtermist, as we saw above, future outcomes are inordinately more important than current outcomes, and this means that when we make decisions today—especially large-scale economic and governing decisions—we should be asking ourselves how those decisions will impact future generations and considering those outcomes much more than how those policies actually affect us, here and now. Now, the insights and intuition at the heart of this position aren’t wholly counterintuitive or even necessarily controversial. Most of us in fact do think we should care about the future and act accordingly. After all, both conservatives worried about federal debt and liberals worried about climate change often invoke future generations to make their cases—we don’t want our great-grandchildren to have to pay off our massive debts or suffer under climate chaos, respectively, and therefore we must act today.
The difference with longtermism is both in how far forward they are looking and in how much they are willing to discount current outcomes. Debt activists and climate activists may invoke our great-grandchildren, but longtermists are invoking our great-great-great-great-great-great…great-grandchildren (where the ellipsis represents many hundreds or even thousands of additional “great”s). It ain’t called longtermism for nothing.
Likewise, because longtermists are considering such a huge number of generations, and they expect that each of those generations will contain many hundreds of billions, even trillions, of individuals, their math indicates that the pleasure and pain of our current generation essentially doesn’t matter at all. We should be willing, so say the longtermists, to accept nearly any degree of suffering now so that future generations can experience greater pleasure, since there are so many future generations to come, and each of those generations will have so many more individual members than our current one does.
Now, again, there are plenty of questions to ask and criticisms to offer here (not the least of which is why it seems that longtermists think that poor people today should suffer for the future while they themselves seem to assume they should still live in general ease and luxury) but, again, I want to focus my criticisms here on one particular problem in the logic of longtermism. Again, I hope to address those other issues at a future point (the list of things I am going to bring up here and then not address is getting a bit ridiculous, I must admit!).
Here’s the thing: even if we grant everything that longtermists think, assume, and hope—even if we accept a thorough-going consequentialist ethics, even if we accept that there are millions of generations of human/oid life to come, even if we accept that our current decisions can meaningfully impact them—even if all of this is true, I think longtermism is still incoherent, precisely on its own terms. Let me (finally!) get to the point and explain why.
So let’s assume we have a million generations to come, and that, on average, each future generation will have a trillion people—just to keep the numbers relatively simple (the logic that will unfold below will not change even if we change these numbers substantially, so long as they both remain relatively large—which longtermists definitely assume they will be). That would mean that, compared to our current 8 billion souls, the future has at least 1 quintillion (1,000,000,000,000,000,000) people to come: 125,000,000 times as many people. For the consequentialist longtermist, that group of people (so long as we consider them as one group) obviously is of greater significance than us.
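Just to make the arithmetic explicit (these are the stipulated numbers from the thought experiment above, not forecasts), a quick back-of-the-envelope check in Python:

```python
# The stipulated numbers from the thought experiment (assumptions, not forecasts).
generations_to_come = 1_000_000
people_per_generation = 10**12        # one trillion per generation, on average
current_population = 8 * 10**9        # roughly eight billion alive today

future_people = generations_to_come * people_per_generation
print(f"{future_people:,}")                          # 1,000,000,000,000,000,000 (one quintillion)
print(f"{future_people // current_population:,}")    # 125,000,000 times the present population
```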
So, OK, the longtermist will demand that we make whatever sacrifices necessary to help improve the future lives of these people. And, remember, for the sake of argument, we will buy into their program. So that means we will all choose to consume as little as possible, eating just potatoes (and B12 pills), drinking only water, living in tiny windowless rooms, and working feverishly to build AGI and rocketships so that we can become an interplanetary species governed by only the smartest and most honest computers. In other words, living the dream.
The question is: at what point do we, as a species (or, as the species we evolve into) get to actually start enjoying the fruits of our (and our miserable rocket-scientist-cum-programmer ancestors’) labor? Let’s fast-forward one hundred thousand generations. That’s pretty far into the future (if we assume 20-year generations, that’d be 2 million years from now). But even at this far-flung future date, according to our original (and possibly quite conservative) assumption, that would mean we still have 900,000 generations to come. And, again, note that, with the longtermists’ method, we are always just comparing the current generation against all future ones. Past generations are, well, past, and we can’t do anything for those poor potato-eating souls now. So we compare the (let’s just guess) 1 trillion people living in 2,000,000 AD with the 900 quadrillion people (900,000,000,000,000,000) living in what is left of the future, and, then, as now, the generation living at 2,000,000 AD is as nothing compared to their future successors. So, it would seem, that even in 2,000,000 AD, we must continue the potato-eating asceticism so that we can continue to invest in the future.
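Running the same comparison from the vantage point of generation 100,000 (again using the stipulated numbers, purely as an illustration):

```python
# Same stipulated numbers, now viewed from generation 100,000 (roughly 2,000,000 AD).
people_per_generation = 10**12
generations_total = 1_000_000
generations_elapsed = 100_000

present_population = people_per_generation                      # ~1 trillion alive then
future_population = (generations_total - generations_elapsed) * people_per_generation

print(f"{future_population:,}")                          # 900,000,000,000,000,000 (900 quadrillion)
print(f"{future_population // present_population:,}")    # 900,000 -> the future still dwarfs the present
```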
Hopefully, you can see the problem here. Even as we advance further and further into the future, the logic really won’t change. Even once we have lived through 90% of our supposed future, the number of future people still completely dwarfs the number of people who would be living at that particular point in time. This is obviously true if we expect the number of people to remain more-or-less constant, but of course, if anything, we would expect the population to keep growing, and this only makes future generations ever more valuable than the present one.
Let’s say we get near to the close of history. Whatever horror is going to end us 20 million years from now, it’s only a few generations off: we are talking about generation 999,990—19,999,800 years from now. Can this generation, living in roughly the final 0.001% of humanity’s existence, finally start eating something other than potatoes? Well, no—of course not! After all, with ten more generations to go, there are still about 10 times as many people in humanity’s future as in its (then) present. So, potatoes all around!
It certainly seems as if longtermism wouldn’t allow humans to actually worry about their own enjoyment until the penultimate generation—the point at which there is only one future generation, and so the needs of those alive perfectly balance against the needs of the future. Then, in the final 0.0002% of humanity’s existence, we can finally bake a freakin’ cake.
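A quick sketch of why the ratio never relents until the very end: with equal-sized generations (the stipulated assumption), the ratio of future people to presently living people is simply the number of generations still to come, so it only reaches parity at the penultimate generation.

```python
# With equal-sized generations, the future/present ratio is just the number of
# generations still to come. (Stipulated numbers; illustration only.)
generations_total = 1_000_000

for g in (1, 100_000, 500_000, 999_990, 999_999, 1_000_000):
    remaining = generations_total - g   # generations that come after generation g
    print(f"generation {g:>9,}: future/present ratio = {remaining:,}")

# The ratio only falls to 1 at generation 999,999 (the penultimate one),
# and to 0 at the very last generation -- the first moment the calculus permits cake.
```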
Now, obviously, plenty of people will reject this outcome as obviously awful, but for the moment we must make sure we are not bringing in our non-consequentialist ethics or non-longtermist assumptions, since I want to grant the longtermist position all of its own premises for the time being. But I do think that even granting this “steelman” version of their argument, the longtermist position is rendered incoherent on its own terms.
Notice how the rigid application of longtermist ethics resulted in a situation in which every human/humanesque generation suffered horribly except the final two generations. This outcome seems absolutely necessary under longtermist assumptions, but it also means that longtermism fails on purely consequentialist grounds, since, looking backward from the end of humanity, we see that longtermism resulted in a vast amount of suffering in almost every generation of sentient beings. In other words, longtermist ethics actually causes the very catastrophe it is meant to avert, promising a future of near universal suffering and privation all to create a future that only arrives at the very end of (human) time, for a tiny fraction of humanity. By constantly obsessing about the future, longtermism actually creates a horrible future for nearly everyone.
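One way to see the purely consequentialist failure is to do the retrospective accounting directly. The welfare values below are invented placeholders (a negative number for a life of enforced austerity, a positive one for a decently enjoyable life); the only thing that matters is the comparison between the two regimes.

```python
# Retrospective accounting over all 1,000,000 generations.
# Welfare values are invented placeholders; only the comparison matters.
generations_total = 1_000_000
people_per_generation = 10**12

austerity_welfare = -10     # assumed welfare of a life of potato-eating sacrifice
flourishing_welfare = 10    # assumed welfare of a decently enjoyable life

# Longtermist regime: every generation sacrifices except the final two.
longtermist_total = ((generations_total - 2) * people_per_generation * austerity_welfare
                     + 2 * people_per_generation * flourishing_welfare)

# Alternative regime: every generation simply lives decently.
decent_total = generations_total * people_per_generation * flourishing_welfare

print(f"{longtermist_total:.3e}")         # about -1.0e+19: a history of net suffering
print(f"{decent_total:.3e}")              # about +1.0e+19: the regime longtermism forbids
print(longtermist_total < decent_total)   # True: the longtermist regime scores worse
```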
Now, I am certain longtermists themselves would argue that such an outcome wouldn’t actually occur under their leadership, once we take into account that only some decisions will have a decisive impact on our collective future; that is: a thoughtful longtermist might argue that we do indeed need to sacrifice now, in 2025 CE, so that we can build AGI and spaceships, but that once we have done that and become truly interplanetary, we will be able to rest on our laurels and enjoy the wonders we have wrought (within the moral boundaries set by the space pope, of course).
But, of course, we don’t know what kinds of technological opportunities might be available in that future that would still require the potato-eating grind-hustle. And, in any event, it would seem that longtermism would always require us to focus constantly on how to create more humans/post-humans and to colonize more planets/asteroids/Dyson spheres since, again, the whole logic of this ethics is that the outcomes of the hordes of the future outweigh our interests today. This would seem to lead inexorably to the conclusion that at any point in time (that is, whatever year the current moment is in), longtermism must demand sacrifice of the present to the future, right up until there basically is no future, resulting, as already argued above, in a future history of almost unmitigated suffering, all so that two, or at best three or four, generations at the end of history can party like it’s 1999 (again).
I can’t see how anyone would actually want such a future to become our past, even on purely consequentialist grounds. And therefore, on its own premises and even assuming the best-case scenario, longtermism is, I think, very clearly incoherent, and therefore not an ethical system we ought to employ—and again, I think this conclusion is necessary especially if you agree with its goals!5
Now, none of what I’ve argued above, on its own, proves that consequentialism or utilitarianism are incoherent, and it doesn’t even mean that effective altruism is incoherent (though that latter specific mode of ethics does seem closely tied to longtermism). And I take it that plenty of principled utilitarians reject longtermism. As I said (way) above, I hope to address some of the problems for utilitarianism more broadly in future posts. For now, though, I hope I have shown that the specific mode of consequentialism known as longtermism is a dead-end we should not drive into.
And, as I hope to discuss in the future, though I certainly don’t subscribe to utilitarianism, I think consequentialist ethics makes some good arguments and I think those arguments ought to be integrated into a broader ethical program. But, again… more on this in a future post!
2. It’s important to note how longtermism dovetails with so-called transhumanist and post-humanist futurist bio-politics. I will not actually attempt to address that here, though, since, as already stated, I am trying to focus on just one particular critique of longtermism. That said, gentle reader, you will notice that I invoke human as well as post-human beings above, since longtermists tend to assume that at some point, many of our descendants will not be homo sapiens per se.
3. There are, of course, very good reasons to doubt that either AGI or space travel will prove to be especially beneficial to humanity—indeed, there are reasons to doubt whether either of them will even prove truly feasible (especially in the case of, for example, colonizing Mars). But such issues are outside the scope of this piece.
4. Most of the arguments around this kind of question tend to be not-so-subtly guided by political views and their current impacts, e.g. the idea that we should tax rich people less now so they can build better spaceships tomorrow. It’s not hard to imagine that this may be a somewhat cynical and dishonest argument from the get-go, especially considering how much of that un-taxed money seems to go towards yachts and mansions and other various and sundry items that would seem to provide little benefit to future generations living amongst the stars.




I agree: Longtermism fails on its own terms. If you take the present as instrumental to the future, then the same logic applies next week, next year, next century. It sounds like quite a poor way to go through life.
For me, the even bigger theoretical failure is that Longtermism shows an excessive amount of confidence in the power of planning. Even greater than that exhibited by the Communists of the 20th century, who tried to plan their way to utopia (and failed). At least they were only thinking in decades!
The world is an exceedingly complex interdependent system, and such systems are notoriously hard to plan or predict. Culture, the economy, and most everything else that matters go in fits and starts, backwards and forwards. Try to push things in one direction, and it's almost 50/50 whether you actually move them in that direction or the opposite, or, most likely, sideways.
Also, trying to plan for aims in the far future shows a complete disregard for the developing preferences of future generations.
Trying to do good requires subtlety and a measure of humility, and I really don't see those in the Longtermist movement. I'd rather take my wisdom from Hippocrates: if you can't help, at least don't harm. Be mild and prosper!
I think the ultimate problem with any consequentialist view of ethics has partly to do with (a) how bad we are at determining the difference between pleasure and pain and (b) our confusion over what happiness is. In my recent article I try to deal with the problem of framing ethics as a problem of choice in the first place—as it is usually construed: https://open.substack.com/pub/nasmith/p/the-bondage-of-choice-e36?utm_source=app-post-stats-page&r=32csd0&utm_medium=ios