Resurrected in the Machine?
A Brief Comment on so-called "Digital Immortality"
The more things change, the more they stay the same.
It is something of a truism among scholars of religion that the last two centuries have seen a curious phenomenon. On the one hand, modern societies have undeniably become more secular: attendance at religious services is lower—in some countries, much lower—than it was in the mid-1800s, more and more people explicitly identify as having no religious belief whatsoever, and religious views have less and less influence over public life (though of course this varies substantially from region to region).
Even so, we have not seen a real decline in religiosity per se. The old beliefs may be waning, but new ones are being taken up with just as much fervor.1 Nationalism is perhaps the best example: the notion that a person should identify themselves primarily as a member of a given national group is actually very new, not gaining real traction until the late 19th century. And most nationalists were actually rather hostile to Christianity, both because Christianity (especially in the West) was an explicitly international movement and because it was seen as a competitor for people’s loyalty: a committed Christian, after all, might choose loyalty to Christ and all of his teachings about peace and love over the good of the nation (which might instead demand the spilling of (others’) blood).
Many scholars of religion, then, insisted that nationalism should be seen as a new mode of religion—different, of course, in many ways from more traditional religions, and lacking many of the features we tend to expect of religions (belief in deities or some kind of spiritual personalities beyond the visible, prayer and other spiritual practices, ritual worship, etc.). The argument was that whatever “religion” is, it is not the union of those features specifically, but rather something more basic, and more universal to human life:2 a basic human need to organize the personality around some shared sense of meaning. While this might involve God, metaphysics, an afterlife, and so on, it doesn’t necessarily need to. On this view, the modern world wasn’t getting less religious, exactly; it was just getting differently religious.
Nationalism is hardly the only example of religion taking a new, modern form. The state cult that developed in the Soviet Union is another obvious example: Lenin, in particular, was held up as a near-messianic figure, complete with his own iconography. And the struggle to achieve full communism through the work of socialism had undeniably eschatological overtones.
And many commentators have, of course, noted the way in which celebrity culture has taken on deeply religious overtones in the last few decades. Taylor Swift’s fans, in particular, have shown a degree of commitment, attachment, and loyalty that is often hard to understand for those not as enamored with her music or personality. The extreme reaction—including death threats—to music critics who dare to publicly say they don’t particularly like her music shows that many people certainly do organize their sense of personality around the shared meaning of Taylor Swift fandom.
But of course, in this digital age, we would expect digital religion. And indeed, there is plenty to go around. We could talk about the way in which certain games (most notably MMOs such as World of Warcraft) can become all-consuming activities for people, or the way in which social media activity itself has become a central pillar of many people’s identities (please don’t ask me how often I check my subscriber count here on Substack!). But today I want to focus on a truly, undeniably religious movement of the digital era: the belief in “Digital Immortality”.
Digital Immortality is a belief system bound up with two other overlapping beliefs: transhumanism and the Singularity. Transhumanism is the idea that we humans can and should transform ourselves through both digital and biological technologies, giving ourselves new and improved abilities that may in time cause us to become a wholly new species altogether. The “Singularity” refers to the idea that as “artificial intelligence” advances, there will be a threshold at which the technology not only becomes smarter than humans but also develops the ability to improve itself so quickly that its capacities begin to accelerate exponentially. Singularitarians believe that once this threshold is reached, some kind of AI will essentially take over not only the earth but reality itself, reshaping existence through its extraordinary intelligence.
Now, both of these ideas are, or at least can be seen as, religious views—anchored more in narrative than in obvious fact. However, at least one version or interpretation of transhumanism seems at least trivially true: that we humans can modify our bodies (and, to some degree, our minds) through technology is undeniable. I was just speaking yesterday with a neighbor who has an electronic device implanted in his back that sends current into his spine to reduce the pain from an old back injury. This is obviously a life-changing technological intervention into his body. The question, of course, is the extent to which we can effectively modify ourselves—are we talking about relatively basic changes that essentially just make life a bit easier, or about transformations so radical that we essentially become a new species? It’s hard to say what will eventually prove possible, of course, but transhumanists are committed to the maximal prediction with something that can only be called faith: not only do they believe that we will be utterly transformed into genetically engineered cyborgs, they can’t wait to get their upgrades.
The centrality of faith for singularitarians is even more obvious; few serious computer scientists or engineers will publicly argue that computers will definitely achieve anything like the superhuman intelligence that singularitarians predict. Indeed, the ways in which LLMs today, in 2025, struggle to handle even very basic logic problems suggest that a superhuman AI is very far from being achieved, and it may turn out that such a computer program only ever ends up being slightly smarter than us (or, indeed, just differently smart than us, as the pocket calculator already is). But I can remember talking with some folks fifteen years ago who were convinced that this all-consuming AI was just a few years down the road, even though they couldn’t begin to explain the technology that would be required to achieve it.
Both transhumanists and singularitarians, I think, confuse the ideas of science and technology, on the one hand, and magic, on the other. The assumption seems to be that if we dump enough R&D money into solving a problem, we can simply solve it, no matter what. The laws of physics, of course, tend to be rather less flexible than this perspective suggests.
In any event, let’s get back to the matter at hand: when one combines transhumanism and belief in the singularity, one gets the belief in digital immortality. It goes like this: if it’s the case that human life can be utterly transformed by technology, and if there will (soon?) be a computer with essentially unlimited processing power, then perhaps this computer will create digital copies of currently-existing humans, which can then “live” in a simulated reality within the computer program itself. On this view, once the singularity is achieved, we can all essentially “go to heaven” inside a server stack, living as digital copies of our current selves in a perfect environment either forever, or at least for a very long time.
Now, of course, there are even more ifs required by this belief than either transhumanism or the singularity alone. There is much to be said about the belief in digital immortality, but I subtitled this post “a brief comment on digital immortality” so I’m going to limit myself to one particularly contentious conditional.
The belief in digital immortality requires a very specific understanding of identity. Notice that digital immortality involves neither the continuation of embodied life nor the belief that some particular aspect of ourselves (such as a soul) continues after death. Instead, the claim is that a specific kind of copy of us will go on living after we die. I think this is a big problem. Even if we were to accept all the other beliefs required by digital immortality (and I don’t!), this one misunderstanding is, I think, lethal to the whole idea.
Believers in digital immortality claim that if you have a perfect copy of something, then that copy is ontologically identical to the thing of which it is a copy. On this view, the identity of something is simply a question of what features or properties it has. If two things have exactly the same properties, down to the atom, then they are actually the same thing.
Now, this is a view of identity that few philosophers would endorse, as it seems to ignore what analytic philosophers call “numerical identity”: the identity something has not only as the specific kind of thing it is (its essence or quiddity), but as being that very thing at that given place and that given time. (Continental philosophers would simply refer to this as “existence” as such, which really does cut to the chase.) That is to say: even if you have two copies of the same thing, each copy is obviously a separate thing. In other words, identity involves not only what something is but also that it is.
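For readers who think in code, this distinction is already built into ordinary programming languages: Python, for instance, separates equality of properties (==) from identity of the object itself (is). Here is a minimal illustrative sketch of my own (the Person class and its values are toys, not anything the immortalists actually propose):

```python
import copy

class Person:
    """A toy stand-in for a person's complete physical state."""
    def __init__(self, name, memories):
        self.name = name
        self.memories = list(memories)

    def __eq__(self, other):
        # Qualitative identity: exactly the same properties, "down to the atom".
        return self.name == other.name and self.memories == other.memories

original = Person("Sam", ["first day of school", "a trip to the beach"])
replica = copy.deepcopy(original)  # the "perfect copy"

print(original == replica)  # True:  qualitatively identical
print(original is replica)  # False: still two numerically distinct objects
```

A perfect copy passes every test of what it is and still fails the test of being that very thing.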
Now it seems to me that this is pretty obvious, and is a feature of reality that most of us just take for granted, but believers in digital immortality must deny it in order for their belief in the digital afterlife to make any sense at all. To see why, let’s run a very simple thought experiment. Let’s say that we can make an atom-for-atom simulation of your body inside a computer. Now, according to believers in digital immortality, when you die, this copy of you will “live” on, and you will experience a wonderful afterlife in this digital realm.3
But let’s assume that we copy you and begin to run the programmed copy of you in the computer, and that you don’t die. Instead, you go on living your embodied life for many years, while the digital copy of you lives its own (paradisal) life in the machine.
Does anyone think that, while they are alive, they will somehow automatically experience whatever their digital copy is experiencing? If I get myself copied into the computer, but then I go back to my daily life of digging coal 16 hours a day, while my digital clone has simulated margaritas on a simulated beach, will I taste those margaritas in the coal mine? Will I feel the warm sand on my skin down there?
Of course not. Whatever events are happening in the computer program will simply be complex patterns of electric current running through the CPUs of the server stack. It’s important to note that the problem here has nothing to do with the thornier question of whether a computer, or a computer program, or a specific entity within a computer program, can have, feel, experience, or be conscious (though that is an interesting and important topic!). The issue here is much more fundamental: whatever events happen in the computer happen in the computer; meanwhile, whatever happens in my brain happens in my brain. The fact that the computer is hosting a digital copy of me does not magically mean that I have some kind of phenomenal access to those events. To believe that I will experience what my digital copy experiences is to believe in an extremely souped-up version of telepathy or extrasensory perception—rather unscientific, non-materialist ideas which the belief in digital immortality was supposed to make superfluous!
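If it helps, the coal-mine-and-margarita scenario can be written out in a few lines of Python, granting purely for the sake of argument the wildly generous assumption that a person’s state could be captured in a data structure at all:

```python
import copy

# A toy version of the thought experiment: "me" stands in for my actual state,
# and "digital_me" is the supposedly perfect copy running in the machine.
me = {"location": "coal mine", "experiences": []}
digital_me = copy.deepcopy(me)

# The copy enjoys its simulated paradise...
digital_me["location"] = "simulated beach"
digital_me["experiences"].append("simulated margarita")

# ...and none of it shows up in the original.
print(me["location"])     # 'coal mine'
print(me["experiences"])  # []
print(me == digital_me)   # False: the two states have simply diverged
```

No amount of extra fidelity in the copying step changes the fact that there are two states here, each going its own way.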
Now, if no one expects that I will experience the joys of digital heaven that my digital copy experiences while my body is still alive, why would anyone expect that I could experience them after I die? Surely even then telepathy and E.S.P. would be of no help to me. This question is particularly pressing for believers in digital immortality since, as far as I can tell, they tend to be rather hardline materialists: generally speaking, they seem to think consciousness is something generated by certain arrangements of matter. Regardless of what we think of this view (and I think it’s untenable), it surely only makes digital immortalism even less coherent than it would otherwise be. If my consciousness is nothing more than the non-material, epiphenomenal properties generated by my brain’s activities, then even if a computer can generate a new state of consciousness associated with my digital copy, that new conscious experience won’t be mine at all: when I die, the copy will go on “living”—but I won’t. Even if the copy has an identical “whatness” to mine, it, by definition, does not have my “thatness” (or, from my perspective, thisness).
Again, this brings us back to the fundamental question of identity. Digital immortalism rests on the idea that two things can be said to be identical so long as they have exactly the same qualities and properties. Even if we give believers the benefit of the doubt that a simulated quality is the same as its non-simulated equivalent (that is, even if we allow that a simulated oxygen atom is identical to an actual oxygen atom—a position that strikes me as already untenable, for the record), this view of identity still fails to account for numerical identity, as the little thought experiment above shows.
What’s interesting and odd about digital immortalism is just how basic this problem is—it seems to me that even a moment’s thought would reveal the incoherence, and the thought experiment here is extremely simple and obvious. Yet the belief persists among a certain demographic. And here we return to the pugnacious reality of human religiosity: whether spiritual or not, we humans really can’t help but organize our personalities around shared narratives of meaning. Even as the world gets less traditionally religious, all that means is that it is getting more untraditionally religious. And I am not so sure that this has been an improvement.
It is worth noting that scholars of religion have notoriously been unable to even define the word “religion”, so an expansive view of this term is definitely justified.
This whole discussion is considerably hobbled by the simple fact that no one can agree on what the word “religion” definitely means. Right now the field of religious studies essentially takes the “you know it when you see it” approach, which, helpfully for the scholars themselves, greatly increases the number of articles that can be published on the topic.
As I said above, I am not going to trace every ontological or metaphysical problem with digital immortality in this piece, but the thoughtful reader will no doubt have noticed that there is another huge problem in the ideology of digital immortality here: is a simulation of an atom ontologically identical to, or interchangeable with, an actual atom?




This is exactly the thought experiment that broke materialism for me. Imagine I'm walking down the street, I get zapped by a measuring ray, and either a simulation or a physical clone of my body gets built somewhere. It seems obvious that there is a fact of the matter as to what my experience continues to be at that point, and whether it involves the new being or not. I agree with your assessment that the new being would just have its own separate thread of consciousness.
So I can understand that if the 'singularitarians' are materialists all the way down, they may bite the bullet on the above, and decide that there is no fact of the matter as to who you continue to be.
That does bring back memories... "consciousness doesn't necessarily transfer over to a simulation" has been a sci-fi trope for at least 30 years. Greg Egan explored all sorts of paradoxes in his Axiomatic, back in 1995.
My first feature film, fresh out of college, was a little sci-fi story about a guy developing this "digital brain" technology, with the goal being the ability to upload a sort of "save state" for a healthy brain. I really wish I'd possessed the serious interest in philosophy and theology back then that I have now... In writing that movie I didn't even bother to question the materialist presuppositions about whether or not a brain was a computer, and wrote the whole film under the assumption you're describing in this article: that if a 1:1 technological replication could exist, the uploaded mind and the original mind would be the "same thing."
That being said, I feel like I accidentally made at least one statement about the matter that I still like, namely that the digital character who develops throughout the film uses a real person's "uploaded" emotions to develop her own, and as such, has a sort of identity crisis where she can't figure out where she ends and her human template begins, eventually resulting in her talking said human into switching places with her, and it is only once she is embodied that she can properly form her own identity. If I made the movie today, I think it would focus almost entirely on those questions of identity and consciousness, because that question, as you've excellently laid out here, is so much more interesting than the materialist assumptions so many of us have (especially the transhumanists you mentioned).
I think it's also (sorry just thinking out loud in your comments section now) telling that our current understanding of "AI" is almost entirely focused on LLMs, which as I know you've detailed elsewhere, don't actually "think" but certainly appear to. My suspicion is that any successful attempt to copy or upload one's consciousness would result in a ChatGPT version of you, not an actual, intentionally thinking copy, but a sort of algorithmic illusion of your identity, perhaps advanced enough to be convincing, but ultimately just another stochastic parrot. The irony, of course, is that in reducing our worldview to the materialist presupposition that mind and matter are interchangeable, we would be left with exactly the deficient materialist imitation of consciousness that materialism assumes is the whole picture. And I fear that is a very real possibility for our future: In seeking to make ourselves immortal, we will render ourselves ghosts.