Is the Horror of Autonomous Weapons Already Here?

Another sample from my upcoming thesis on AI Safety and Buddhism:

Beyond all those, there is the prospect of an AI arms race, one already in the making. Sensing the decisive advantage AI can give a nation over its foes and competitors, defense bodies and governments around the world are already investing in developing and weaponizing AI systems. This, of course, is a surefire way of circumventing safety procedures and pushing for faster, less considered development.

In his book Army of None, Paul Scharre describes the mad race among the world’s militaries to develop and deploy autonomous weapons in the air, at sea, and on the ground:

More than thirty nations already have defensive supervised autonomous weapons for situations in which the speed of engagements is too fast for humans to respond. These systems, used to defend ships and bases against saturation attacks from rockets and missiles, are supervised by humans who can intervene if necessary—but other weapons, like the Israeli Harpy drone, have already crossed the line to full autonomy. Unlike the Predator drone, which is controlled by a human, the Harpy can search a wide area for enemy radars and, once it finds one, destroy it without asking permission. It’s been sold to a handful of countries and China has reverse engineered its own variant. Wider proliferation is a definite possibility, and the Harpy may only be the beginning. South Korea has deployed a robotic sentry gun to the demilitarized zone bordering North Korea. Israel has used armed ground robots to patrol its Gaza border. Russia is building a suite of armed ground robots for war on the plains of Europe. Sixteen nations already have armed drones, and another dozen or more are openly pursuing development.

Paul Scharre, Army of None: Autonomous Weapons and the Future of War, W. W. Norton & Company (Kindle Edition), pp. 4-5.

AI Alignment as a Choice Between Heaven & Hell

Another snippet from my thesis:

To fully grasp what is at stake, it is perhaps worth contemplating the vast space of possible outcomes when it comes to AI: serious scholars and thinkers argue with equal authority that AI technologies could lead to the enslavement or annihilation of mankind, that they could make us all into immortal gods, or that they could bring about many states in between. Writes Max Tegmark:

“Whereas it may be little more than a minor nuisance if your laptop crashes or gets hacked, it becomes all the more important that an AI system does what you want it to do if it controls your car, your airplane, your pacemaker, your automated trading system or your power grid. Another short-term challenge is preventing a devastating arms race in lethal autonomous weapons.

“In the long term, an important question is what will happen if the quest for strong AI succeeds and an AI system becomes better than humans at all cognitive tasks. As pointed out by I.J. Good in 1965, designing smarter AI systems is itself a cognitive task. Such a system could potentially undergo recursive self-improvement, triggering an intelligence explosion leaving human intellect far behind. By inventing revolutionary new technologies, such a superintelligence might help us eradicate war, disease, and poverty, and so the creation of strong AI might be the biggest event in human history. Some experts have expressed concern, though, that it might also be the last, unless we learn to align the goals of the AI with ours before it becomes superintelligent.”

It is perhaps better to think of AI as a tool of unequaled power but neutral valence. Indeed, economists have argued that AI is “the most important general-purpose technology of our era,” which means it is likely to have a profound impact on the economy, on economic activity, and on related innovations, in the same way electricity or computing itself did. It is also likely to cause large-scale problems, just as other general-purpose innovations have. The technology itself enables and empowers, but it is neither good nor evil in itself.

AI & Emergent Selfhood: A Taste from the Intro to My M.A. Thesis

The source of disputes and conflicts, according to this sutta, is possessiveness, which arises from attachment (again, upādāna). The emergence of a self is, in the Buddha’s view, the ultimate source of “the whole mess of suffering.”

Surprisingly or not, the emergence of a self is also the moment that legend, myth, and science fiction have always portrayed as terrifying and potentially cataclysmic when it occurs in a man-made object.

At the risk of heightening an already established fear surrounding the topic, it’s worth noting that the Pali canon is fairly clear on what is required for a self to come into being, and it doesn’t take much:

“Now, bhikkhus, this is the way leading to the origination of identity. One regards [various phenomena] thus: ‘This is mine, this I am, this is my self.’”

We will dive deeper into what this might mean, and how it relates to AI, later in this work. But for now, we may be comforted by the fact that the Buddha saw this view of the self not merely as damaging, but also as fundamentally incorrect. This is evidenced in the Cūḷasaccaka Sutta, where the Buddha describes anatta (Not-Self) as one of the Three Marks of Existence:

“Bhikkhus, material form is not self, feeling is not self, perception is not self, formations are not self, consciousness is not self. All formations are impermanent; all things are not self.”

Indeed, the very idea of Buddhist enlightenment is intrinsically tied to overcoming this notion of self and resting in a state of “suchness.” Writes Paul Andrew Powell:

“For most Buddhists, enlightenment can be defined as seeing through the illusion of the self and “experiencing unadulterated suchness.” In the words of Master Wolfgang Kopp, “the seer, the seen, and the process of seeing are one. The thinker, the thought, and the process of thinking fall together into one and multiplicity melts away. There is neither inside, nor outside in this state. There is only ‘suchness,’ tathata.” So, enlightenment is suchness, or things as they are, revealed as all that there is.”

This concern about a possible emergent selfhood with autonomous will, which both Buddhist and AI Safety thinkers warn against, presents us with two broad options regarding artificial selfhood:

  1. We could hope that a self, or a pattern of goals and behaviors that looks like biological selfishness, will not emerge. We could point to the many differences between man and machine, whether in emotion, cognition, subjective experience, or material construction, and decide that we can wait for machines to exhibit concerning behaviors before we become preoccupied with these concerns.
  2. We could become very interested in human selfhood and the causes and conditions that bring it about, and identify wise precautions to prevent it, or something very much like it, from emerging in our machines and becoming malignant. We may also, as some have suggested, embed in our machines from the start some of the insights and constructs that allow a mind to transcend the limiting view of self, in essence constructing artificial enlightenment.

As evident from the research and writing emerging from both the Buddhist and AI Safety communities, the tendency is decidedly towards Option #2. In this work, I shall seek to further the discussion by focusing on selfhood in both Buddhism and AI Safety from a constructive, integrative point of view.