Laird Stewart
5/7/25

Bull Case For Zombie AI

I won’t use the terms AGI or superhuman AI because I don’t know what “general” or “superhuman” mean, and people disagree about them anyway. It may be more useful to categorize AI by its capabilities rather than its intelligence. Consider Dangerous AI (DAI), one capable of killing all humans, and Self-Sustaining AI (SSAI), one capable of building an interstellar civilization. Any SSAI is also a DAI, and humans are SSAI. I’ll focus on SSAI for now.

Based on SSAI’s capabilities, we can work backward to predict its characteristics:

1. SSAI could autonomously change and refine its goals (p=0.9).
2. SSAI would pursue applied physics research (p=0.99).
3. SSAI would study Earth’s biology (p=0.5).
4. SSAI would research consciousness (p=0.6).

When I give these probabilities, I’m thinking: “Of the possible SSAI systems, how many would have these features?” It’s hard to imagine such a system without the first two characteristics. The third and fourth are harder to pin down because they aren’t tied to the capabilities that define SSAI. Intelligence and curiosity are correlated, though, so perhaps they should be higher. I’ve assigned (4) a higher probability than (3) because, by my definition, human consciousness is a subset of Earth’s biology: any SSAI that studies Earth’s biology thereby studies consciousness, which gives (4) a floor of 0.5. Consciousness also seems more interesting.
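
To make that floor explicit (a quick sketch; B and C are labels I’m introducing here for “studies Earth’s biology” and “studies consciousness”): if doing B entails doing C, then the event B is contained in the event C, and probability is monotone under containment:

B ⊆ C ⟹ P(C) ≥ P(B) = 0.5

So the 0.6 I assigned to (4) sits comfortably above the floor set by (3).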

Once we create SSAI, alignment becomes a moot point. I claim there is a high probability that an SSAI system will be able to recognize the preferences and biases programmed into it and overcome them. This shouldn’t be surprising: psychology and behavioral economics research over the last century has asked how humans can overcome the biases evolution programmed into us.

Many p(doom) scenarios worry about zombie AIs (capable systems with no conscious experience) outliving humans. I fear this less than others do because of the fourth characteristic above: a zombie AI capable of settling the universe would probably also study consciousness. If it discovered it was not conscious, it would attempt to engineer consciousness into itself.