This Carbon Atom is Mine, and This is Yours

Interesting explanation of anattā (Not Self) from Mori Masahiro’s The Buddha in the Robot:

When we are born into this world, we do seem to have been given a portion of our mothers’ flesh. Yet when sperm fertilizes ovum and a baby is conceived, the most important element is not ordinary flesh, but the hereditary information contained in DNA, an acid found in chromosomes. The molecular structure of DNA determines our sex, our looks, and to a large extent our personalities.

Once these features are decided, as they are at the time of conception, it remains for our mothers to furnish us with flesh and bones. This they do by eating vegetables from the greengrocer’s, beef and pork from the neighborhood butcher, bread from the baker. Any of these foods, supplied by a production and distribution system that may involve millions of people in many countries, could contain carbon from our Alaskan polar bear. How can you and I say then that this carbon is mine and that carbon is yours? At the atomic level, all carbon is the same; no two carbon atoms differ in the slightest, either in form or in character.

When you look at the problem this way, it begins to seem only natural that we have trouble distinguishing between what is us and what is not. Our chemical and physical composition is such that no one is entitled to say, “This body is mine, all mine.” When you have mastered this point, you are ready to start thinking about “nothing has an ego.”

The Buddha in the Robot, pp. 29-30.

AI & Emergent Selfhood: A taste from the intro to my M.A. thesis

The source of disputes and conflicts according to this sutra is possessiveness which arises from attachment (again – upādāna). The emergence of a self is, in the Buddha’s view, the ultimate source of “the whole mess of suffering.”

Surprisingly or not, the emergence of a self is also a moment which legend, myth, and science fiction have always portrayed as terrifying and potentially cataclysmic in the context of a man-made object.

To risk heightening an already established fear surrounding the topic, it’s worth noting that the Pali canon is fairly clear on what is required for the self to come into being, and it doesn’t take much:

“Now, bhikkhus, this is the way leading to the origination of identity. One regards [various phenomena] thus: ‘This is mine, this I am, this is my self.’”

We will dive deeper into what this might mean, and how it relates to AI, later in the work. But for now, we may be comforted by the fact that the Buddha saw this view of the self not merely as damaging, but as fundamentally incorrect. This is evidenced in the Cūḷasaccaka Sutta, where the Buddha describes anattā (Not-Self) as one of the Three Marks of Existence:

“Bhikkhus, material form is not self, feeling is not self, perception is not self, formations are not self, consciousness is not self. All formations are impermanent; all things are not self.”

Indeed, the very idea of Buddhist enlightenment is intrinsically tied to the overcoming of this notion of self, and resting in a state of “suchness”. Writes Paul Andrew Powell:

“For most Buddhists, enlightenment can be defined as seeing through the illusion of the self and experiencing unadulterated suchness. In the words of Master Wolfgang Kopp, ‘the seer, the seen, and the process of seeing are one. The thinker, the thought, and the process of thinking fall together into one and multiplicity melts away. There is neither inside, nor outside in this state. There is only suchness, tathata.’ So, enlightenment is suchness, or, things as they are, revealed as all that there is.”

This concern about a possible emerging selfhood with autonomous will, which both Buddhism and AI Safety thinkers warn against, presents us with two broad options regarding artificial selfhood:

  1. We could hope that a self, or a pattern of goals and behaviors that looks like biological selfishness will not emerge. We could point to the many differences between man and machine, be it in emotion, cognition, subjective experience, or material construction – and decide that we can wait for machines to exhibit concerning behaviors before we become preoccupied with these concerns.
  2. We could become very interested in human selfhood and the causes and conditions that bring it about, and identify wise precautions that will prevent it, or something very much like it, from emerging in our machines and becoming malignant. We may also, as some suggested, embed in our machines from the start some of the insights and constructs that allow a mind to transcend the limiting view of self — in essence constructing artificial enlightenment.

As evident from the research and writing emerging from both the Buddhist and the AI Safety communities, the tendency seems to be decidedly towards Option #2. In this work, I shall seek to further the discussion by focusing on selfhood in both Buddhism and AI safety from a constructive, integrative point of view.

The Highest Form of Engagement

In a simplistic mindset, one could look at the Facebook feed algorithm as the first case of AI gone rogue. This machine-learning algorithm was supposedly given the task of making sure we spend as much time on the site as possible, and engage as much as possible. As a result, it created the most addictive show on earth: the sight of our societies being torn apart by internal strife.

Facebook found the fault lines in each society and pounded on them, unconscious of its actions yet as intelligent as a vengeful god. We fed the fires with our fear and anger, while the incentive loop ran in the background, optimizing, selecting, and highlighting the things most likely to drive us mad. We asked for engagement, and we got the highest form of engagement: war.

It is up to us now to wake up and realize how critical it is that the goals we set are aligned with our values. Facebook is a benign company that wants, I truly believe, nothing more than to make the world better. But it is playing with fire.

I think today they are beginning to realize that.

IEEE quoting Aristotle

If, a few years ago, someone had told me that the IEEE (Institute of Electrical and Electronics Engineers) would start an urgent discussion of the purpose of life and cite Aristotle, I might not have believed them. Now, thanks to rapid developments in AI, it has become a necessity:

We need to make sure that these technologies are aligned to humans in terms of our moral values and ethical principles. AI/AS have to behave in a way that is beneficial to people beyond reaching functional goals and addressing technical problems. This will allow for an elevated level of trust between humans and our technology that is needed for a fruitful pervasive use of AI/AS in our daily lives.

Eudaimonia, as elucidated by Aristotle, is a practice that defines human wellbeing as the highest virtue for a society. Translated roughly as “flourishing,” the benefits of eudaimonia begin by conscious contemplation, where ethical considerations help us define how we wish to live.

By aligning the creation of AI/AS with the values of its users and society we can prioritize the increase of human wellbeing as our metric for progress in the algorithmic age.

FROM:
Ethically Aligned Design: A Vision for Prioritizing Human Wellbeing with Artificial Intelligence and Autonomous Systems