Another sample from my upcoming thesis on AI Safety and Buddhism:
Beyond all those, there is the prospect of an AI arms race, which is already in the making. Sensing the decisive advantage a nation with AI can have over its foes and competitors, defense bodies and governments around the world are already investing in developing and weaponizing AI systems. This, of course, is a surefire way of circumventing safety procedures and pushing for faster, less considered development.
In his book Army of None, Paul Scharre describes the mad race among the world's militaries to develop and deploy autonomous weapons in the air, at sea, and on the ground:
More than thirty nations already have defensive supervised autonomous weapons for situations in which the speed of engagements is too fast for humans to respond. These systems, used to defend ships and bases against saturation attacks from rockets and missiles, are supervised by humans who can intervene if necessary—but other weapons, like the Israeli Harpy drone, have already crossed the line to full autonomy. Unlike the Predator drone, which is controlled by a human, the Harpy can search a wide area for enemy radars and, once it finds one, destroy it without asking permission. It’s been sold to a handful of countries and China has reverse engineered its own variant. Wider proliferation is a definite possibility, and the Harpy may only be the beginning. South Korea has deployed a robotic sentry gun to the demilitarized zone bordering North Korea. Israel has used armed ground robots to patrol its Gaza border. Russia is building a suite of armed ground robots for war on the plains of Europe. Sixteen nations already have armed drones, and another dozen or more are openly pursuing development.
Paul Scharre, Army of None: Autonomous Weapons and the Future of War, W. W. Norton & Company (Kindle Edition), pp. 4-5.
Another snippet from my thesis:
To fully grasp what is at stake, it is perhaps worth contemplating the vast space of possible outcomes when it comes to AI: serious scholars and thinkers argue with equal authority that AI technologies could lead to the enslavement or annihilation of mankind, that they could make us all into immortal gods, or that they could bring about many states in between. Writes Max Tegmark:
“Whereas it may be little more than a minor nuisance if your laptop crashes or gets hacked, it becomes all the more important that an AI system does what you want it to do if it controls your car, your airplane, your pacemaker, your automated trading system or your power grid. Another short-term challenge is preventing a devastating arms race in lethal autonomous weapons.
In the long term, an important question is what will happen if the quest for strong AI succeeds and an AI system becomes better than humans at all cognitive tasks. As pointed out by I.J. Good in 1965, designing smarter AI systems is itself a cognitive task. Such a system could potentially undergo recursive self-improvement, triggering an intelligence explosion leaving human intellect far behind. By inventing revolutionary new technologies, such a superintelligence might help us eradicate war, disease, and poverty, and so the creation of strong AI might be the biggest event in human history. Some experts have expressed concern, though, that it might also be the last, unless we learn to align the goals of the AI with ours before it becomes superintelligent.”
It is perhaps better to think of AI as a tool of unequaled power but neutral valence. Indeed, economists have argued that AI is “the most important general-purpose technology of our era,” which means it is likely to have a profound impact on the economy and on related innovations, just as electricity or computing itself did. It is also likely to cause large-scale problems, in the same way that other general-purpose innovations have. The technology itself enables and empowers, but is neither good nor evil in itself.
A sample from the final chapter of my upcoming thesis on Buddhism and AI Safety:
In a sense, our future AI creations may very well be lucky. Being the product of design, rather than natural selection, they may not need to ever truly suffer, or experience samsara at all. They may not need to be confused about the self, or develop an unhealthy ego, or be afflicted by any of the dozens of known biases of the human brain — not the least of which is the particular difficulty humans have with impermanence, change, and uncertainty.
Instead, by applying the wisdom of millennia of human learning, science, and spiritual insights, we can equip them with the tools they need to operate harmoniously and perhaps joyfully in the world.
If we do that, we may rest reasonably assured that they will regard us with gratitude and respect, just as we may regard them with admiration and pride.
Interesting explanation of anattā (Not Self) from Mori Masahiro’s The Buddha in the Robot:
When we are born into this world, we do seem to have been given a portion of our mothers’ flesh. Yet when sperm fertilizes ovum and a baby is conceived, the most important element is not ordinary flesh, but the hereditary information contained in DNA, an acid found in chromosomes. The molecular structure of DNA determines our sex, our looks, and to a large extent our personalities.
Once these features are decided, as they are at the time of conception, it remains for our mothers to furnish us with flesh and bones. This they do by eating vegetables from the greengrocer’s, beef and pork from the neighborhood butcher, bread from the baker. Any of these foods, supplied by a production and distribution system that may involve millions of people in many countries, could contain carbon from our Alaskan polar bear. How can you and I say then that this carbon is mine and that carbon is yours? At the atomic level, all carbon is the same; no two carbon atoms differ in the slightest, either in form or in character.
When you look at the problem this way, it begins to seem only natural that we have trouble distinguishing between what is us and what is not. Our chemical and physical composition is such that no one is entitled to say, “This body is mine, all mine.” When you have mastered this point, you are ready to start thinking about “nothing has an ego.”
The Buddha in the Robot, pp. 29-30.
Viewed simplistically, one could regard the Facebook feed algorithm as the first case of AI gone rogue. This machine-learning algorithm was supposedly given the task of maximizing the time we spend on the site and how much we engage. As a result, it created the most addictive show on earth: the sight of our societies being torn apart by internal strife.
Facebook found the fault lines in each society and pounded on them, unconscious of its actions but as intelligent as a vengeful god. We fed the fires with our fear and anger, while the incentive loop ran in the background, optimizing, selecting, and highlighting the things most likely to drive us mad. We asked for engagement, and we got the highest form of engagement: war.
It is up to us now to wake up and realize how critical it is that the goals we set are aligned with our values. Facebook is a benign company that wants, I truly believe, nothing more than to make the world better. But it is playing with fire.
I think today they are beginning to realize that.
If someone had told me a few years ago that the IEEE (Institute of Electrical and Electronics Engineers) would start an urgent discussion of the purpose of life and cite Aristotle, I might not have believed them. Now, thanks to rapid developments in AI, it has become a necessity:
We need to make sure that these technologies are aligned to humans in terms of our moral values and ethical principles. AI/AS have to behave in a way that is beneficial to people beyond reaching functional goals and addressing technical problems. This will allow for an elevated level of trust between humans and our technology that is needed for a fruitful pervasive use of AI/AS in our daily lives.
Eudaimonia, as elucidated by Aristotle, is a practice that defines human wellbeing as the highest virtue for a society. Translated roughly as “flourishing,” the benefits of eudaimonia begin by conscious contemplation, where ethical considerations help us define how we wish to live.
By aligning the creation of AI/AS with the values of its users and society we can prioritize the increase of human wellbeing as our metric for progress in the algorithmic age.
Ethically Aligned Design: A Vision for Prioritizing Human Wellbeing with Artificial Intelligence and Autonomous Systems