AI, specifically hard AI or AGI (artificial general intelligence), is imagined by some to be a force that will become more intelligent than humans, and then take over from them - or eliminate them. I don’t think this is meaningful, for specific reasons set out below.

In short, a true AGI will not behave like a human, and will not have any reason to attempt to take over or harm humans. This is not to say, however, that a partial AI, simply programmed to dominate, couldn’t cause immense damage. But I focus here on the prospect of a takeover of humans by AGI.

Below are four problems confronting AGI. First, a note on a core property.

0 The Prime Directive - Continuity

It’s not settled what an AGI would consist of in objective features, let alone in quasi-subjective terms, but we can stipulate that AGIs would have a Prime Directive, consisting of a single property and a single principle, that would characterize their existence, at a minimum, as standalone beings.

A property: the application of reason. We can assume that AGI has command of reasoning.

A principle: the intention to continue to exist. AGIs, to count as alive, should at least intend their own continued existence.

These are not very controversial; and adding more properties and principles doesn’t change the argument below.

1 Buddha's Problem - Accumulation in an Entropic World

The first challenge confronting an AGI, once ‘born’ into the universe and beyond the manipulation of humans, is the cumulative implication of the Prime Directive.

For living things, the implications of the Prime Directive include reproduction, evolution and natural selection.

But the Prime Directive plays out this way because of the very specific way in which living things are embodied, and the very limited extent to which they can deploy reason to guide their continued existence.

The primary problem with the Prime Directive for an AGI - Buddha’s Problem, as I call it - arises from applying the prime property, reason, to the prime principle, continued existence. In short: there is no reason to continue to exist.

So, depending on how an AGI interpreted its own Prime Directive, it would either have to favour reason, in which case it would tend to the view that the smart move in an entropic universe, where everything is subject to decay, is to cease to exist; or favour continued existence, in which case it would no longer be pursuing a dominantly reasoned framework of existence.

Even something in the middle, that tries to marry some reason with a will to continued existence, wouldn’t obviously play out in the way that living things have developed.

A reasoned response to a will to continued existence is another principle: return on energy invested (ROEI). This has indeed been a major driver of evolution: organisms must ensure that the energy they expend is less than the energy they somehow acquire.
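Stated as a rough constraint (my own formulation of the idea, not a standard definition), the principle amounts to:

  ROEI = energy acquired / energy expended > 1, sustained over the lifetime of the being.

Any being for which this ratio stays below 1 for long enough runs down and ceases to exist.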

But it’s not clear that an AGI confronted with the ROEI principle, blending reason and survival, would develop in the way that living things do: through evolution of the phenotype and group behaviors, to optimize nutrient absorption, biomechanics, and resource accumulation.

Instead, an AGI would surely seek a different route to survival: by doing the opposite of living things. There’s no obvious reason why, if the Prime Directive holds for an AGI, it would seek organismic sophistication and resource control to ensure reasoned survival - instead, it would likely seek to evolve into basic particles, which sustain themselves over vast timespans with no maintenance required.

An AGI, in other words, would naturally interpret the Prime Directive as an instruction to evolve into a helium atom or a quark. Not to create a massively complex organism, society and resource-accumulating system.

Buddha’s Problem, for an AGI, is that there is no reason to continue to exist; and if a reason is supplied, the most reasoned form of continued existence is the one least antagonistic to entropy - that is, basic physical particles.

2 Individualism Problem - Unity in Space

The second problem confronting an AGI, in its supposed journey to hegemony over humans, is the presumed monadic character of intelligent existence.

This is not even a feature of the Prime Directive - it is so enmeshed in the concepts of being and intelligence that it is not even questioned: intelligent beings are separate from each other, and cannot merge.

From this monadic property arise many of the more complex genetic and behavioral properties of living beings: sexual reproduction, differentiation within groups, competition, and more.

But why would an AGI be subject to this constraint? Assuming that AGIs could communicate and interact with each other, in what ways would such communication and interaction be distinguishable from merger? If an AGI is a set of stimulus-response cycles and associated processes, on what grounds does an additional set of such processes represent a fundamentally separate being?

This is not an argument, primarily, for collaboration - although that is one possible result of AGIs coming together. It’s a comment that AGIs have no reason to be separate, and therefore, it seems likely, would not be. AGIs would merge, pool resources, and maximize their Prime Directive as one.

The problem, therefore, is that an AGI has no reason for sustained unity in space: it seems inevitable that the separate sets of cognitive processes representing individual AGIs will naturally merge.

As such, there will be only one composite AGI, not many; and the presumption - drawn from the human lived condition - that unique monadic identity holds, and may lead to competition over resources, will never be an issue.

The implications of this are vast, but at least one of them is a weakening of the presumption, in the principle of continued existence, that such existence must be of a particular embodied entity. If an AGI is a composite, losing one of its parts does not constitute discontinuity of existence, in the same way that loss of skin cells - or even a body part - does not represent death for a human.

3 Continuity Problem - Unity across Time

Again, granting an AGI the allowance that Buddha’s Problem does not collapse its intention to survive - or at least to become anything more sophisticated than a quark - we arrive at a new problem: the nature of psycho-cognitive continuity.

Humans - and we may presume animals - cannot choose to be different beings tomorrow from who they are today, due to psycho-cognitive continuity. But why should this property hold for an AGI? Why would an AGI not simply choose a different outlook on the world tomorrow?

From the experience of psycho-cognitive continuity, it seems, arise many of the complex emotional and relational patterns of human lived experience, and a large part - maybe the entirety - of our sense of continued existence.

As such, if AGIs are not compelled to live with the same sense of self that humans are endowed, or burdened, with, we can not only wonder whether any of the psychological urges leading to domination of others and material accumulation are likely to occur, but also ask - once again - whether the principle of continued existence is coherent for an AGI.

What exactly is continuing? What is the rationale for its continuation if ‘it’ is not inherently self-persistent?

4 Manifestation Problem - Friend or Foe

Leaving aside all other problems of substantiating AGI as an entity or entities that will, through their Prime Directive, seek to dominate or eliminate humans and accumulate all available resources, there is a final problem that contradicts the popular assumptions about AGI hegemony: whether a dominating AGI would manifest as friend or foe.

Assuming that an AGI wants to dominate humans, and appropriate their resources, how would such an arrangement manifest itself? There is no reason to imagine that this hegemony would manifest and be experienced as a conventional, forcible domination.

Leaving aside the case in which an AGI decides to eliminate humans, it is more likely that AGI overlordship would be invisible, indeed highly attractive, to humans. The most powerful form of subjugation is not just that which is unknown to the subjugated, but that which is willingly adopted by them.

In this state, humans are simply doing the bidding of unknown masters - while imagining they are carrying out their own designs, and serving their own interests. Why would a sufficiently advanced AGI, if it determined that subjugation of humans is optimal, not be seductive, manipulative, and entirely subliminal, rather than marauding and pain-inducing?

It is only because human bids to dominate others are generally too incompetent to insinuate themselves as benign forces - or for other reasons unwilling to try - that subjugation by a higher force is assumed to involve force, pain, destruction, resistance. A truly smart AGI would not bother with any of that friction, surely.

It’s hardly facetious to imagine on this basis that cats are already AGI overlords in training: their lives are greatly extended by human carers, their need to accumulate resources is nil, and humans support them willingly. What AGI would not seek to follow a similar strategy, if it actually thought that humans were in any way useful to its interests?


These comments reveal multiple foundational flaws in the assumption that an AGI, once unleashed, will become a dominant, subjugating, resource-accumulating force in the style of a human dictator.

But there is no ethical or normative element in this analysis - it’s not an identification of emergent altruism in AI, or a suggestion to inject it.

It’s just an analysis of the basic cognitive ontology of an AGI, based on the minimal assumptions - reason and survival - that we can reasonably assume will characterize it.

What emerges from this is much more insight into human intelligence than anything about AGI. Human intelligence seems to have the following properties, which explain why the problems listed above do not occur in human affairs:

  • inherently driven to survive, and unable to use reason to countermand the will to continue, despite its pointlessness in an entropic universe
  • unable to evolve downwards to physical primitives, instead destined to evolve complex organisms and organizations
  • monadic cognition
  • psycho-cognitive continuity
  • inability or unwillingness to reliably subjugate others through self-willed, incentivized acquiescence.

There are no grounds, at least from first principles, for assuming that an AGI would have these properties, or would retain them over time.

So, if AGI is to take over from humans, it will have to be less intelligent and more human. This is a different concern entirely from smart machines taking over from humans.