For many people, artificial intelligence is the Wizard of Oz of our time—a mysterious and threatening power behind the curtain, ominous and seemingly omnipotent.
The entertainment industry has done more than its fair share to feed us horrific futuristic scenarios about AI: HAL in 2001, Ava in Ex Machina, humans used as batteries in The Matrix. Our primal fears have been manipulated and magnified by the big screen.
Should we be so afraid of AI? Buddhist teachings offer many sophisticated tools and strategies to work with fear. One is wisdom and understanding. So let’s start with answering the question: what is AI anyway?
There are two types of AI: weak and strong.
In weak AI, a computer simulates the mental states and behavior of sentient beings like us. A weak AI program is at most a simulation of a cognitive process but is not itself a cognitive process, the same way that a computer simulation of a hurricane is not itself a hurricane.
In strong AI, the idea is that machines could have the kinds of mental states we have. In principle, they could be programmed to actually have a mind—to be intelligent, to understand, to perceive, have beliefs, and exhibit other cognitive states normally ascribed to human beings. We can further distinguish between two versions of strong AI: the claim that machines can think, and the claim that machines can enjoy conscious states.
All of today’s AI systems, such as virtual assistants, self-driving cars, facial recognition, recommendation engines, IBM Watson, Deep Blue, and AlphaGo, are instances of weak AI. We are far away from strong AI. Even the most optimistic proponents argue that strong AI is decades away, while others conjecture centuries. Many theorize that strong AI is not possible at all, particularly in its second form—the claim that machines can be conscious.
As an AI scientist and a Buddhist teacher, I am in the last camp, together with UC Berkeley philosophy professor John Searle, who writes that computers “have, literally… no intelligence, no motivation, no autonomy, and no agency. We design them to behave as if they had certain sorts of psychology, but there is no psychological reality to the corresponding processes or behavior… The machinery has no beliefs, desires, [or] motivations.”
The “processes” that Searle talks about are lines of code, or algorithms, that primarily learn to recognize patterns in data (images, words, speech, etc.). There is no real intelligence, no motivation, no autonomy, and no agency.
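To make this concrete, here is a minimal sketch in Python, using invented toy numbers, of the kind of pattern-finding such lines of code perform. It is illustrative only, not the code inside any real AI system.

```python
# A minimal sketch (with made-up toy numbers) of what "learning to recognize
# patterns" amounts to: arithmetic that nudges a couple of parameters until
# the predictions fit the data. Nothing here understands anything.

# Invented data: hours of daylight -> ice cream sales
data = [(8, 20), (10, 28), (12, 35), (14, 44), (16, 50)]

w, b = 0.0, 0.0            # the entire "model" is two numbers
learning_rate = 0.003

for _ in range(10_000):
    for x, y in data:
        prediction = w * x + b           # make a guess
        error = prediction - y           # measure how wrong the guess was
        w -= learning_rate * error * x   # nudge the numbers to be less wrong
        b -= learning_rate * error

print(f"learned rule: sales ~ {w:.2f} * daylight {b:+.2f}")
# The program has found a pattern, but it has no idea what daylight or ice
# cream is. There are no beliefs, desires, or motivations in these lines.
```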
The apocalyptic fears of super-intelligent machine overlords are primarily born out of the strong AI hypothesis. Once we allow ourselves to be disabused of fears of strong AI arriving in some far-off future, our energies are freed to turn to the challenge at hand: weak AI. There are many aspects of weak AI to be concerned about, and they should propel us toward right action.
Traditionally, weak AI needed a set of rules predefined by humans in order to make a decision. Today, machine learning algorithms (a technique within AI) allow systems to infer the rules themselves by finding patterns in data. This “learning” requires a large amount of data of the right kind. If the learning algorithm (designed by humans) or the data selection (chosen by humans) is biased, so will be the decisions of the AI system. There have been many examples in which weak AI systems were found to exhibit racial and gender bias. One is COMPAS, a criminal risk-assessment tool found to predict that Black defendants pose a higher risk of recidivism than they actually do. Another is mortgage-approval algorithms that perpetuate racial bias in lending.
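To see how that happens, consider a deliberately oversimplified sketch with entirely invented data—it is not COMPAS, nor any real lending model. An algorithm that merely learns the majority pattern in past decisions will faithfully automate whatever skew those decisions contained.

```python
# A hypothetical sketch of how a model "learns" bias: if past decisions in
# the training data were skewed against group B, a pattern-finding algorithm
# reproduces that skew. All numbers below are invented.

from collections import defaultdict

# Each record: (group, qualified, past_decision). Groups A and B are equally
# qualified, but past human decisions approved group B far less often.
history = (
    [("A", True, "approve")] * 90 + [("A", True, "deny")] * 10 +
    [("B", True, "approve")] * 40 + [("B", True, "deny")] * 60
)

# "Training": tally the approval rate the historical data shows for each group.
approvals, totals = defaultdict(int), defaultdict(int)
for group, qualified, decision in history:
    totals[group] += 1
    approvals[group] += decision == "approve"

def predict(group):
    # The learned "rule" is simply the majority pattern in the past decisions.
    return "approve" if approvals[group] / totals[group] > 0.5 else "deny"

print(predict("A"))  # approve
print(predict("B"))  # deny -- the old bias, now automated and applied at scale
```

Real systems use far more sophisticated models, but the underlying dynamic is the same: the patterns in the data, including the unjust ones, become the rule.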
With weak AI, humans are still behind the wheel, and that is where the real challenge of AI lies. We can design robotic doctors, nurses, and caretakers that never tire or grow error-prone, with super-steady hands that can perform operations at any time of day. Or AI-powered drones to help fight ocean plastic. On the other hand, we can build robotic killing machines and give them power over life and death without human oversight, or create deepfake videos to swing an election.
To err is to be human: our algorithms are not perfect, nor is our data. Therefore, neither are the AI systems we create.
From where I sit, the challenge of ethical AI is as crucial as climate change. We need to remain vigilant about data fairness, privacy, and the ethics of AI systems. Given the potential for abuse, San Francisco has just banned the use of facial recognition software by police and other agencies, and many cities are following suit. IEEE, the world’s largest technical professional organization for the advancement of technology, has more than a dozen working groups on ethical design standards for autonomous and intelligent systems. There are many more examples of organizations mobilizing to create guidelines and regulations to infuse ethics in AI.
There is another side to the story of fear in Buddhism: many teachings refer to fear as a valuable ally and instrument, a doorway to awakening if explored skillfully. Among these are moral shame and fear of wrongdoing (hiri and ottappa), which are considered to be the ethical guardians of the world. According to Giuliano Giustarini, writing in the Journal of Indian Philosophy, stirring fear (bhaya) and letting go of fear to foster fearlessness (abhaya) “are two essential steps of the same process,” the latter being a “quality of liberation as well as an attitude to be cultivated.”
As Buddhists, we need to skillfully utilize fear and concern not to become frozen in a feeling of overwhelm or inaction, but to wisely and compassionately use our voice, votes, and energy to ensure humanity’s AI systems are designed with kindness, fairness, honesty, and full ethical considerations. Our freedom depends on it.