From the New York Times:
“Smartphones are our constant companions. For many of us, their glowing screens are a ubiquitous presence, drawing us in with endless diversions, like the warm ping of social approval delivered in the forms of likes and retweets, and the algorithmically amplified outrage of the latest “breaking” news or controversy. They’re in our hands, as soon as we wake, and command our attention until the final moments before we fall asleep.
Steve Jobs would not approve.”
Another snippet from my thesis:
To fully grasp what is at stake, it is perhaps worth contemplating the vast space of possible outcomes when it comes to AI: serious scholars and thinkers argue, with equal authority, that AI technologies could lead to the enslavement or annihilation of mankind, that they could make us all into immortal gods, or that they could bring about many states in between. Writes Max Tegmark:
“Whereas it may be little more than a minor nuisance if your laptop crashes or gets hacked, it becomes all the more important that an AI system does what you want it to do if it controls your car, your airplane, your pacemaker, your automated trading system or your power grid. Another short-term challenge is preventing a devastating arms race in lethal autonomous weapons.
In the long term, an important question is what will happen if the quest for strong AI succeeds and an AI system becomes better than humans at all cognitive tasks. As pointed out by I.J. Good in 1965, designing smarter AI systems is itself a cognitive task. Such a system could potentially undergo recursive self-improvement, triggering an intelligence explosion leaving human intellect far behind. By inventing revolutionary new technologies, such a superintelligence might help us eradicate war, disease, and poverty, and so the creation of strong AI might be the biggest event in human history. Some experts have expressed concern, though, that it might also be the last, unless we learn to align the goals of the AI with ours before it becomes superintelligent.”
It is perhaps better to think of AI as a tool of unequal power but neutral valence. Indeed, economists have argued that AI is “the most important general-purpose technology of our era,” meaning it is likely to have the most profound impact on the economy, on economic activity, and on related innovations, in the same way electricity or computing itself did. It is also likely to cause large-scale problems, just as other general-purpose innovations have. The technology itself enables and empowers, but it is neither good nor evil in itself.
A sample from the final chapter of my upcoming thesis on Buddhism and AI Safety:
In a sense, our future AI creations may very well be lucky. Being the product of design, rather than natural selection, they may not need to ever truly suffer, or experience samsara at all. They may not need to be confused about the self, or develop an unhealthy ego, or be afflicted by any of the dozens of known biases of the human brain — not the least of which is the particular difficulty humans have with impermanence, change, and uncertainty.
Instead, by applying the wisdom of millennia of human learning, science, and spiritual insight, we can equip them with the tools they need to operate harmoniously, and perhaps joyfully, in the world.
If we do that, we may rest reasonably assured that they will regard us with gratitude and respect, just as we may regard them with admiration and pride.
From my master’s thesis in progress:
We’ve already seen that the Connectionist approach in Cognitive Science views knowledge as in some way made of connections. But the Buddhist negation of svabhava goes further than that – it sees reality itself as made of connections. Everything in this world is made of, dependent on, and actually is – other things. Reality is an interdependent, ever-changing, self-reflecting and echoing, infinitely self-referencing whole.
It is for this reason, fundamentally, that Buddhism is suspicious of symbolic language: by its very nature, language separates, defines, and confines. But in reality every reference is self-reference, every definition is incomplete or circular, and every pointing finger points also at itself.
This is from a chapter I’m writing about building wisdom into artificially intelligent machines.
Interesting explanation of anattā (Not Self) from Mori Masahiro’s The Buddha in the Robot:
When we are born into this world, we do seem to have been given a portion of our mothers’ flesh. Yet when sperm fertilizes ovum and a baby is conceived, the most important element is not ordinary flesh, but the hereditary information contained in DNA, an acid found in chromosomes. The molecular structure of DNA determines our sex, our looks, and to a large extent our personalities.
Once these features are decided, as they are at the time of conception, it remains for our mothers to furnish us with flesh and bones. This they do by eating vegetables from the greengrocer’s, beef and pork from the neighborhood butcher, bread from the baker. Any of these foods, supplied by a production and distribution system that may involve millions of people in many countries, could contain carbon from our Alaskan polar bear. How can you and I say then that this carbon is mine and that carbon is yours? At the atomic level, all carbon is the same; no two carbon atoms differ in the slightest, either in form or in character.
When you look at the problem this way, it begins to seem only natural that we have trouble distinguishing between what is us and what is not. Our chemical and physical composition is such that no one is entitled to say, “This body is mine, all mine.” When you have mastered this point, you are ready to start thinking about “nothing has an ego.”
The Buddha in the Robot, pp. 29-30.
Even the fiercest storm is nought but chaos,
Slowly accumulated, quickly released,
Enabling the quiet.
Written on Friday, October 13th, 2017, in Herzliya
I found this fascinating new podcast about Russian interference in the US elections and Russia’s methods of undermining democracies around the world. It’s from the Center for Strategic & International Studies, a bipartisan American think tank. The first episode was very promising!
In 2016, a rival foreign power, Vladimir Putin’s Russia, launched an attack on the United States of America. What we now know is that American intelligence agencies have concluded that Russia planned and executed a campaign to undermine our democracy and to affect our Presidential election.
For President Trump, Russia is a complicated subject. But this podcast isn’t about Donald Trump’s complications with Russia, nor is it about Republicans and Democrats. One of the dangers in the hyper-partisan American debate over Russia’s role in the 2016 presidential election is that it is blurring the larger picture. This three-part podcast mini-series is about the larger picture. Episode one will look at why Russia meddled in our election; episode two will examine case studies of past Russian behavior; and episode three will discuss what the US can do to counter Russia’s actions.
Hosted by CSIS’s H. Andrew Schwartz, co-host of “Bob Schieffer’s About the News”