Steve Jobs would not approve of how we use the iPhone.

From the New York Times:

“Smartphones are our constant companions. For many of us, their glowing screens are a ubiquitous presence, drawing us in with endless diversions, like the warm ping of social approval delivered in the forms of likes and retweets, and the algorithmically amplified outrage of the latest “breaking” news or controversy. They’re in our hands, as soon as we wake, and command our attention until the final moments before we fall asleep.

Steve Jobs would not approve.”

AI Alignment as a Choice Between Heaven & Hell

Another snippet from my thesis:

To fully grasp what is at stake, it is perhaps worth contemplating the vast space of possible outcomes when it comes to AI: serious scholars and thinkers argue, with equal authority, that AI technologies could lead to the enslavement or annihilation of mankind, that they could make us all into immortal gods, or that they could bring about many states in between. Writes Max Tegmark:

“Whereas it may be little more than a minor nuisance if your laptop crashes or gets hacked, it becomes all the more important that an AI system does what you want it to do if it controls your car, your airplane, your pacemaker, your automated trading system or your power grid. Another short-term challenge is preventing a devastating arms race in lethal autonomous weapons.

In the long term, an important question is what will happen if the quest for strong AI succeeds and an AI system becomes better than humans at all cognitive tasks. As pointed out by I.J. Good in 1965, designing smarter AI systems is itself a cognitive task. Such a system could potentially undergo recursive self-improvement, triggering an intelligence explosion leaving human intellect far behind. By inventing revolutionary new technologies, such a superintelligence might help us eradicate war, disease, and poverty, and so the creation of strong AI might be the biggest event in human history. Some experts have expressed concern, though, that it might also be the last, unless we learn to align the goals of the AI with ours before it becomes superintelligent.”
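Good's point is easy to make concrete with a deliberately crude toy model – my own illustrative sketch, with an arbitrary improvement rate, not anything from the quoted text: if each gain in capability is proportional to the capability of the system doing the designing, progress compounds instead of merely accumulating.

```python
# Toy model of I.J. Good's recursive self-improvement argument.
# Purely illustrative: the 0.5 improvement rate and 20 steps are
# arbitrary assumptions, not estimates of anything real.

def self_improve(capability: float, steps: int, rate: float = 0.5) -> float:
    """Each step the system redesigns itself, gaining a fraction of its
    *current* capability -- so better designers make bigger gains."""
    for _ in range(steps):
        capability += rate * capability
    return capability

# External improvement by human engineers: the same absolute gain each step.
external = 1.0 + 0.5 * 20

# Recursive self-improvement: the size of each gain itself keeps growing.
recursive = self_improve(1.0, steps=20)

print(f"externally improved system after 20 steps: {external:.1f}")
print(f"self-improving system after 20 steps:      {recursive:,.1f}")
```

Twenty identical steps take the externally improved system from 1 to 11, and the self-improving one from 1 to over 3,300; the "explosion" lies entirely in the feedback loop, not in any single step being large.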

It is perhaps better to think of AI as a tool of unequaled power but neutral valence. Indeed, economists have argued that AI is “the most important general-purpose technology of our era,” which means that it is likely to have the most profound impact on the economy, on economic activity, and on related innovations, in the same way electricity or computing itself did. It is also likely to cause large-scale problems, in the same way that other general-purpose innovations have. The technology enables and empowers, but it is neither good nor evil in itself.

On Enlightened AI

A sample from the final chapter of my upcoming thesis on Buddhism and AI Safety:

In a sense, our future AI creations may very well be lucky. Being the product of design rather than natural selection, they may never need to truly suffer, or to experience samsara at all. They may not need to be confused about the self, or develop an unhealthy ego, or be afflicted by any of the dozens of known biases of the human brain – not least of which is the particular difficulty humans have with impermanence, change, and uncertainty.

Instead, by applying the wisdom of millennia of human learning, science, and spiritual insights, we can equip them with the tools they need to operate harmoniously and perhaps joyfully in the world.

If we do that, we may rest reasonably assured that they will regard us with gratitude and respect, just as we may regard them with admiration and pride.

Reality is Connections

From my master’s thesis in progress:

We’ve already seen that the Connectionist approach in Cognitive Science views knowledge as in some way made of connections. But the Buddhist negation of svabhava goes further than that – it sees reality itself as made of connections. Everything in this world is made of, dependent on, and actually is – other things. Reality is an interdependent, ever-changing, self-reflecting and echoing, infinitely self-referencing whole.

It is for this reason, fundamentally, that Buddhism is suspicious of symbolic language: by its very nature, language separates, defines, and confines, while in reality every reference is self-reference, every definition is incomplete or circular, and every pointing finger points also at itself.
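As a toy illustration of the connectionist notion above (my own sketch, not anything from the thesis; the 16-unit size and the two concepts are arbitrary assumptions): in a simple Hebbian associative memory, a piece of knowledge such as “fire goes with smoke” is not written down as a symbol anywhere – it exists only as a pattern of connection strengths.

```python
# Toy Hebbian associative memory -- an illustrative sketch only.
# The "knowledge" lives entirely in the connection matrix W.
import numpy as np

rng = np.random.default_rng(0)

# Two "concepts", each a distributed pattern of activity over 16 units.
fire = rng.choice([-1.0, 1.0], size=16)
smoke = rng.choice([-1.0, 1.0], size=16)

# Hebbian learning: strengthen connections between co-active units.
# After this line the association fire -> smoke exists, yet no single
# unit, cell, or symbol contains it.
W = np.outer(smoke, fire)

# Recall: activating "fire" re-creates "smoke" through the connections.
recalled = np.sign(W @ fire)
print(np.array_equal(recalled, smoke))  # True
```

Nothing in W is the fact itself; delete a few connections and recall still succeeds, because the memory degrades gracefully rather than losing discrete facts. The association exists only as a relation between patterns, which is part of why the connectionist picture resonates with the Buddhist one.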

This is from a chapter I’m writing about building wisdom into artificially intelligent machines.


This Carbon Atom is Mine, and This is Yours

Interesting explanation of anattā (Not-Self) from Mori Masahiro’s The Buddha in the Robot:

When we are born into this world, we do seem to have been given a portion of our mothers’ flesh. Yet when sperm fertilizes ovum and a baby is conceived, the most important element is not ordinary flesh, but the hereditary information contained in DNA, an acid found in chromosomes. The molecular structure of DNA determines our sex, our looks, and to a large extent our personalities.

Once these features are decided, as they are at the time of conception, it remains for our mothers to furnish us with flesh and bones. This they do by eating vegetables from the greengrocer’s, beef and pork from the neighborhood butcher, bread from the baker. Any of these foods, supplied by a production and distribution system that may involve millions of people in many countries, could contain carbon from our Alaskan polar bear. How can you and I say then that this carbon is mine and that carbon is yours? At the atomic level, all carbon is the same; no two carbon atoms differ in the slightest, either in form or in character.

When you look at the problem this way, it begins to seem only natural that we have trouble distinguishing between what is us and what is not. Our chemical and physical composition is such that no one is entitled to say, “This body is mine, all mine.” When you have mastered this point, you are ready to start thinking about “nothing has an ego.”

The Buddha in the Robot, pp. 29-30.

AI & Emergent Selfhood: A taste from the intro to my M.A. thesis

The source of disputes and conflicts, according to this sutra, is possessiveness, which arises from attachment (again – upādāna). The emergence of a self is, in the Buddha’s view, the ultimate source of “the whole mess of suffering.”

Surprisingly or not, the emergence of a self is also a moment that legend, myth, and science fiction have always portrayed as terrifying and potentially cataclysmic in the context of a man-made object.

At the risk of heightening an already established fear surrounding the topic, it’s worth noting that the Pali canon is fairly clear on what is required for the self to come into being, and it doesn’t take much:

“Now, bhikkhus, this is the way leading to the origination of identity. One regards [various phenomena] thus: ‘This is mine, this I am, this is my self.’”

We will dive deeper into what this might mean, and how it relates to AI, later in the work. But for now, we may be comforted by the fact that the Buddha saw this view of the self not merely as damaging, but as fundamentally incorrect. This is evidenced in the Cūḷasaccaka Sutta, where the Buddha describes anattā (Not-Self) as one of the Three Marks of Existence:

“Bhikkhus, material form is not self, feeling is not self, perception is not self, formations are not self, consciousness is not self. All formations are impermanent; all things are not self.”

Indeed, the very idea of Buddhist enlightenment is intrinsically tied to overcoming this notion of self and resting in a state of “suchness.” Writes Paul Andrew Powell:

“For most Buddhists, enlightenment can be defined as seeing through the illusion of the self and experiencing “unadulterated suchness.” In the words of Master Wolfgang Kopp, “the seer, the seen, and the process of seeing are one. The thinker, the thought, and the process of thinking fall together into one and multiplicity melts away. There is neither inside, nor outside in this state. There is only ‘suchness,’ tathata.” So, enlightenment is suchness, or, things as they are, revealed as all that there is.”

This concern about a possible emergent selfhood with autonomous will, against which both Buddhist and AI Safety thinkers warn, presents us with two broad options regarding artificial selfhood:

  1. We could hope that a self – or a pattern of goals and behaviors that looks like biological selfishness – will not emerge. We could point to the many differences between man and machine, be it in emotion, cognition, subjective experience, or material construction, and decide that we can wait for machines to exhibit concerning behaviors before we become preoccupied with these concerns.
  2. We could become very interested in human selfhood and the causes and conditions that bring it about, and identify wise precautions to prevent it, or something very much like it, from emerging in our machines and becoming malignant. We may also, as some have suggested, embed in our machines from the start some of the insights and constructs that allow a mind to transcend the limiting view of self – in essence, constructing artificial enlightenment.

As evident from the research and writing emerging from both the Buddhist and AI Safety communities, the tendency seems to be decidedly towards Option #2. In this work, I shall seek to further the discussion by focusing on selfhood in both Buddhism and AI Safety from a constructive, integrative point of view.

Busy, too busy.

Sometimes I ask myself whether the fact that I’m running a consulting business, writing a thesis, building a product, and learning Sanskrit all at the same time is only my flimsy, doomed attempt to outrun death.

The Kremlin Playbook

I found this fascinating new podcast about Russian interference in the US elections and Russia’s methods of undermining democracies around the world. It’s from the Center for Strategic & International Studies, a bipartisan American think tank. The first episode was very promising!

In 2016, a rival foreign power, Vladimir Putin’s Russia, launched an attack on the United States of America. What we now know is that American intelligence agencies have concluded that Russia planned and executed a campaign to undermine our democracy and to affect our Presidential election.

For President Trump, Russia is a complicated subject. But this podcast isn’t about Donald Trump’s complications with Russia, nor is it about Republicans and Democrats. One of the dangers in the hyper-partisan American debate over Russia’s role in the 2016 presidential election is that it is blurring the larger picture. This three-part podcast mini-series is about the larger picture. Episode one will look at why Russia meddled in our election; episode two will examine case studies of past Russian behavior; and episode three will discuss what the US can do to counter Russia’s actions.

Hosted by CSIS’s H. Andrew Schwartz, co-host of “Bob Schieffer’s About the News”

Podcast website: https://www.csis.org/podcasts/kremlin-playbook
Apple Podcasts: https://itunes.apple.com/us/podcast/the-kremlin-playbook/id1287533700

Self-Organizing Stupidity?

I wonder if, many years from now, when the history of the 21st century is told, they’ll say it was the era in which technology first enabled human ignorance to self-organize on a global scale.

There have been, of course, many institutions that relied on stupidity and ignorance to flourish. Many religions, for instance, may have capitalized on human stupidity, ignorance, and prejudice. But the organizers themselves tended to be very smart, and religion always needed the support of smart people to stay in power – to keep the institution running. I think the internet has created an infrastructure that allows ignorant people to self-organize without any smart supervision.

Note how, all around the world, fringe groups of highly uneducated and ignorant people – people who are largely immune to reason – have gained enormous power on the internet: white supremacists and neo-Nazis, racist pro-Brexiters, the anti-immigration crowd, the extreme right in Israel who label anyone who wants peace with the Palestinians or dreams about the end of the occupation a traitor, but also groups on the left like Occupy Wall Street and Antifa. There are a few interesting commonalities among these groups:
  1. They seem to be grassroots, often starting with a viral post or an emotional reaction to an event.
  2. They feature a very strong appeal to emotion, but almost no educated or intellectual support.
  3. They tend to be purely destructive – there is often no attempt to create anything new, no clear agenda, no clear and considered plan.
  4. As soon as someone a bit more educated or intelligent attempts to form a cohesive agenda, they fizzle out.


I’m not necessarily married to the term “stupidity” – but I do think there is clearly a new phenomenon at play: self-organization *without* an actual organization, and thus without the thinking, planning, and capable, educated people that an organization normally requires.