Our efforts at playing God in the realm of artificial intelligence have taught us something important about what it is to be human.
Artificial intelligence, or AI, has become a part of our daily lives. It powers our apps, processes our credit applications, recognizes faces to unlock our smartphones, and decides what advertisements we will see on the Internet. It is practically omnipresent, to the point that we scarcely notice it anymore. Snapchat users make videos for their friends using funny filters that make them look like animals or cartoon characters, all unaware of the AI advances that made their giggles possible.
And our headlines continue to report astonishing new achievements in AI. Some are exciting, such as the success of DeepMind’s AlphaGo program at beating the world’s best Go players, while some are unnerving, such as United Nations reports that AI drone systems may have been used in recent conflicts to attack human combatants, making the “kill” decision with no human controller involved.
Yet, for all the ways in which AI is transforming our lives and our world, it still falls short of what most think of when they hear the words “artificial intelligence.” That’s because, as amazing as they are, most AI applications are—compared to humans—remarkably stupid.
Your smartphone’s map app can find an efficient route back to your hotel, but it can’t chat with you on the way about the restaurant you just left. AlphaZero (a “descendant” of AlphaGo) will beat you at chess, but it won’t muse on the benefits of occasionally putting away the chessboard and spending time outside. And the AI drones that may have been released to attack soldiers in the Libyan civil war have never paused to weigh the moral implications of their actions.
As Dave Gershgorn wrote for Quartz magazine,
Our best AI today can do very specific tasks. AI can identify what’s in an image with astounding accuracy and speed. AI can transcribe our speech into words, or translate snippets of text from one language to another. It can analyze stock performance and try to predict outcomes. But these are all separate algorithms, each specifically configured by humans to excel at their single task. A speech transcription algorithm can’t define the words it’s turning from speech to text, and neither can a translation algorithm. There’s no understanding; it’s just matched patterns (“Inside the mechanical brain of the world’s first robot citizen,” November 12, 2017).
“There’s no understanding.” That’s the rub.
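To see in miniature what “matched patterns” means, consider the following toy sketch in Python. It is not how any real translation system is built (modern systems learn millions of statistical associations from data rather than relying on a hand-written table), but it illustrates the principle Gershgorn describes: the program maps input patterns to output patterns without knowing what either one means. The phrase table and function name here are invented purely for illustration.

```python
# A deliberately tiny, hypothetical phrase table. A real system would learn
# millions of statistical associations from data; the principle is the same.
PHRASE_TABLE = {
    "good morning": "buenos días",
    "thank you": "gracias",
    "where is the hotel": "dónde está el hotel",
}

def translate(phrase: str) -> str:
    """Match the input against known patterns; no comprehension, only retrieval."""
    return PHRASE_TABLE.get(phrase.lower().strip(), "[no matching pattern]")

print(translate("Thank you"))         # -> gracias
print(translate("Why are we here?"))  # -> [no matching pattern]
```

However large the table grows, the lookup never becomes understanding; it only becomes a better imitation of it.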
In the 1950s, flush with success in the relatively new field of computing—and motivated by a belief that man was, in essence, little more than a very complicated machine—many researchers assumed that within a few decades computers could be created with a capacity to learn and understand equivalent to that of the human mind. Using wires, circuit boards, and our own ingenuity, we would one day be able to reproduce ourselves with technology, creating machines that truly think as we do.
Time has not been kind to their ambitions, and the task has proven far harder than early researchers imagined.
But the dream lives on, and computer scientists across the world are still trying to create a machine in our own image—a truly thinking machine. To distinguish between the narrow “problem solving” approach that dominates much of the AI work we see today and the larger goal of creating programs or systems that can actually think and understand in the same manner as the human mind, many researchers call the latter “artificial general intelligence,” or AGI.
As with AI, the quest for AGI has produced remarkable, attention-grabbing efforts. But how close are we to creating machines that can truly think and understand as we do?
A robot named Sophia is one of the most famous public faces of AGI. Built by Hong Kong-based Hanson Robotics—and given much media coverage—Sophia was made to look like the upper torso of a woman, with a mask-like human “face” and visible wires and circuitry displayed at the back of its “head.” This machine was programmed to track human faces, display human facial expressions, and respond to human conversation.
Sophia made a splash in the news with the October 2017 announcement that Saudi Arabia had declared “her” its first non-human citizen. A month later, the United Nations bestowed on “her” the title “Innovation Champion.” Giving speeches in front of large audiences and serving as a guest on late-night talk shows, Sophia has been touted by “her” press as a dramatic step forward in robotics and human-like thinking and interaction.
The reality is less impressive.
Though there is real AI technology at work in Sophia—and an opportunity for valuable research—the animatronic robot is also in part an ongoing publicity stunt. “Her” speeches are often pre-written, and the robot’s creators acknowledge that Sophia serves as a sort of “human-crafted science fiction character depicting the future of AI and robotics,” and not as an actual, functioning example of that future. As James Vincent wrote in November 2017 for The Verge, Hanson Robotics’ former chief scientist Ben Goertzel sees Sophia as an artistic means of inspiring others to believe in the potential for AGI, not an actual achievement of that potential.
Yet, while Sophia inspires debate—and, in some quarters, derision—real AI research is making headway, reaching new heights of technological achievement while starkly illustrating how far we have yet to climb.
In the summer of 2020, San Francisco company OpenAI revealed its artificially intelligent system GPT-3. The New York Times reported in November of that year that GPT-3 was trained through machine learning to “understand” human language by feeding it almost one trillion words from the Internet—a process that took months, cost tens of millions of dollars, and required a specialized supercomputer. The results are impressive. When the AI system was asked to answer the question “How do we become more creative?” in the style of pop psychologist Scott Barry Kaufman, the Times reported that GPT-3 responded with the following paragraph:
I think creative expression is a natural byproduct of growing up in a diverse world. The more diverse the world is, the more you get exposed to different people, to different opportunities, to different places and to different challenges. And the more diverse that is, the more likely you’ll be to be able to put the dots together to form something new. And in many ways, I think if you want to be creative, you have to go for it. If you want to be a writer, you have to write, if you want to be a musician, you have to create music, if you want to be a comedian, you have to create comedy, and the more you create, the more likely it is that you’ll stumble onto some interesting stuff ("Meet GPT-3. It Has Learned to Code (and Blog and Argue)").
Even Kaufman admitted that the paragraph was eerily like something he would say, though it was completely original to the AI. And GPT-3’s abilities do not end at imitating writers. It has demonstrated the ability to generate poetry, stories, and music, to play games it was not programmed to play—such as chess and Go—and even to create original, functional computer code. Some of these outcomes surprised even the system’s creators.
David Chalmers, professor of philosophy and neural science at New York University, called GPT-3 “one of the most interesting and important AI systems ever produced,” and said its ability to pick up brand new tasks after being exposed to a few examples shows “hints of general intelligence” ("Philosophers On GPT-3," DailyNous.com). But is GPT-3 thinking? Is it a system truly on the verge of artificial general intelligence?
No, it isn’t. A New York Times article noted that “if you ask for 10 paragraphs in the style of Scott Barry Kaufman, it might give you five that are convincing—and five others that are not.” As computer scientist Mark Riedl explained, “It is very articulate. It is very good at producing reasonable-sounding text. What it does not do, however, is think in advance. It does not plan out what it is going to say. It does not really have a goal.”
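Riedl’s observation that the system “does not plan out what it is going to say” reflects how such language models are publicly documented to work: they produce text one word (or word-fragment) at a time, each choice drawn from statistical associations learned during training. The sketch below is emphatically not GPT-3; its tiny word table is invented for illustration and stands in for billions of learned parameters. But it shows the mode of operation: each word is chosen only from what came immediately before, with no goal for where the sentence is going.

```python
import random

# An invented, toy word-association table standing in for the billions of
# statistical associations a large language model learns from its training text.
NEXT_WORDS = {
    "<start>": ["the", "creativity"],
    "the": ["more", "world"],
    "more": ["diverse", "you"],
    "diverse": ["the"],
    "world": ["is"],
    "you": ["create"],
    "is": ["diverse"],
    "creativity": ["is"],
    "create": ["more"],
}

def generate(length: int = 8) -> str:
    """Emit one word at a time, each chosen only from the word that preceded it."""
    word, output = "<start>", []
    for _ in range(length):
        word = random.choice(NEXT_WORDS[word])
        output.append(word)
    return " ".join(output)

print(generate())  # reasonable-sounding word salad, produced with no plan or goal
```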
The creation of GPT-3 was quite an achievement for its human creators. But what has GPT-3 itself achieved?
Writing at MindMatters.ai, neurosurgeon Michael Egnor often highlights the fundamental difference between computer processing and truly intelligent thinking. Quoting the observations of philosopher Ed Feser, he reminds us:
[C]onsidered by themselves and apart from the intentions of the designers, the electrical currents in an electronic computer are just as devoid of intelligence or meaning as the current flowing through the wires of your toaster or hair dryer. There is no intelligence there at all. The intelligence is all in the designers and users of the computer ("Computers Are No Smarter Than Tinkertoys," April 30, 2019).
We will surely continue to develop increasingly advanced machines, able to imitate human output in dramatic and convincing ways. Given the impressive feats of GPT-3, who knows what to expect from a future GPT-4 or GPT-5? But, as Egnor emphasizes, “There’s not a shred of intelligence in a computer. Human beings are intelligent and we use computers to represent and leverage our human intelligence. All of the logic ‘in’ a computer is really human logic, represented in a computer.”
The expert hands of a team of digital artists can bring to life a virtual version of the Grand Canyon, imitating—with enough processing power—details as fine as the texture of the rock and the visual effects of dust particles lingering in the air and scattering the sunlight. But the result remains only an imitation. The Grand Canyon has been represented, but not truly reproduced.
So, too, do programmers of remarkable skill work together to produce systems such as GPT-3, yet the results of their work remain just that: imitations. The creators of the first digital calculator did not create a machine that understands arithmetic. They simply designed a system capable of imitating their own work at arithmetic. And, so far, attempts to produce artificial general intelligence show no sign of producing machines that truly comprehend what they are doing—regardless of the grand scale of our newest imitations.
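As a small, hedged illustration of the point about the calculator, consider the Python sketch below. The function is a standard textbook exercise in bitwise addition, not any particular calculator’s actual design; but it shows that correct sums can be produced by a machine mechanically repeating rules a human wrote down in advance. The understanding of what addition is remains entirely with the person who wrote the rules.

```python
def add(x: int, y: int) -> int:
    """Add two non-negative integers using only bit operations.

    The machine "adds" by repeating two human-devised rules until no carry
    remains: XOR combines each pair of bits while ignoring carries, and AND
    shifted left one place produces the carries to fold back in.
    """
    while y:
        x, y = x ^ y, (x & y) << 1
    return x

print(add(19, 23))  # -> 42, produced by rule-following, not comprehension
```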
So, is there a fundamental problem in the search for AGI? Is there some assumption being made that dooms researchers’ efforts to reproduce the human mind through microchips and networks—some missing element that keeps true AGI forever out of reach?
Attempts to create genuinely human-like intelligence and its associated characteristics—consciousness, free will, and abstract thinking—have been rooted in fundamentally materialist assumptions from the very beginning. Researchers take for granted that the material world—the physical matter and energy of the universe and the physical forces that act upon them—is all there is to reality. Their work relies on the idea that matter is essentially all there is to mind.
And, truly, matter does have its effect on the human mind. Humans are physico-chemical beings. Like the rest of creation around us, we are made of particular arrangements of atoms and molecules. The human brain—which plays an essential role in the human mind—is clearly a physical object, made up of a vast network of neurons engaging in intricate chemical and electrical transmissions. Neuroscientists have many tools that allow them to measure how different activities and feelings excite different regions of the brain. Studies show that damage to the brain can dramatically alter one’s personality, and researchers have found that applying a magnetic field to a portion of the brain can even affect the moral choices a person makes. Unquestionably, our physical brains play a vital role in making us who we are.
But there is abundant evidence that our minds are also more than our brains—that something outside the physical, chemical confines of the human brain also contributes powerfully to the human mind.
In this regard, neurosurgeon Egnor frequently points to the work of two famed individuals in his field of brain science: Roger Sperry and Wilder Penfield.
Roger Sperry won the Nobel Prize in Physiology or Medicine for his famous split-brain research with human subjects whose corpus callosum had been severed—the connection between the right and left halves of their brains had been completely cut. The subjects seemed to function just fine, even with one half of the brain unable to communicate directly with the other. But Sperry’s tests teased out intriguing details—such as instances when subjects could not name an object presented to only one visual field, because the brain’s language center was located in the hemisphere that lacked access to that image.
Yet he also found that his subjects’ normal powers of reasoning and abstract thought were intact. Surgery had limited the information to which each hemisphere had access—the right visual field, for instance, could pass information only to the left hemisphere—but the ability to reason, make conjectures, and think conceptually was not diminished in the slightest, even though their brains had been severed into two distinct parts, each unable to communicate with the other.
To fully account for his research results, Sperry concluded, one must treat ideas and consciousness not simply as byproducts of the brain’s chemicals and molecules, but rather as vital elements that act on those chemicals and molecules. “Mental forces in this particular scheme are put in the driver’s seat, as it were,” Sperry wrote. “They give the orders and they push and haul around the physiology and physicochemical processes as much as or more than the latter control them. This is a scheme that puts mind back in its old post, over matter, in a sense—not under, outside, or beside it” ("Mind, Brain, and Humanist Values," Bulletin of the Atomic Scientists, September 1966).
Simply reducing the human mind to the physical components of the brain, in Sperry’s formulation, fails to account for the complex activity of human thought, consciousness, and will. While not denying that humans possess a physical brain like that of the animals, Sperry concluded that his view of the mind “does deny, however, that the higher human properties in the mind and nature of man are the same as, or are reducible to, the components from which they are fashioned.”
Another suggestion that “mind” transcends the physical brain is found in neurosurgeon Wilder Penfield’s advances in the field of brain surgery to control epilepsy. Penfield found that, when performing brain surgery on conscious patients, he could prompt them to experience particular phenomena by stimulating different parts of the brain. Patients would experience sights, odors, and other physical sensations—even emotions—prompted by nothing more than his electrical stimulation of parts of the brain.
Yet he noticed that one outcome never resulted from his work: No poking or prodding of the brain ever produced an abstract thought in a patient. It never stirred the patient’s intellect or conceptual thinking. While physical sensations could be teased out of his patients’ brains by physical means, abstract thoughts and concepts could not. In fact, because his patients could communicate with him and reason about the illusory sensations he was prompting, he understood that their intellect, reasoning, and will stood in some way apart from the work he was doing on the physical brain.
Although Penfield began his career as a materialist—believing that there was no more to the human mind than the collection of material that makes up the brain—his 30-year career in neurosurgery forced him to reconsider that position and conclude the opposite: that something exists outside the brain, completing the human mind and contributing to its higher faculties.
The idea that Sperry and Penfield developed through their research—that the human mind is not completely reducible to the physical components of the brain and possesses some additional element—is actually reflected in the inspired pages of your Bible. There, we are told that “there is a spirit in man, and the breath of the Almighty gives him understanding” (Job 32:8). Indeed, it is the spirit in man that gives him knowledge and comprehension: “For what man knows the things of a man except the spirit of the man which is in him?” (1 Corinthians 2:11). This is not an immortal soul; it is an essential God-given element of the mortal human mind, making men and women who and what they are.
The spirit in man and the remarkable human brain combine to produce the phenomenon we know as the human mind. Just as a player sitting at a piano produces music, the human spirit and human brain work together to make thoughts, plans, and consciousness possible. Separate the player and piano, and the music stops. Similarly, separate the human spirit from the human brain, and thoughts cease. Indeed, the book of Psalms describes the human condition at death: “His spirit departs, and he returns to the earth. In that very day, his thoughts perish” (Psalm 146:4, World English Bible).
Consider the difference between a “player piano”—a piano with a rotating drum that can reproduce preprogrammed instructions to play its keys—and a seasoned musician sitting down at a finely tuned instrument to perform a masterpiece. There may be a superficial similarity at first, but when the preprogrammed notes run out, the similarity ends. So, too, have all human attempts at creating AGI fallen short of the wondrous reality of the human mind.
We may—in fact, we almost certainly will—come closer and closer to creating convincing imitations of the real thing. But the dream of truly and fully reproducing the wonder of the human mind through the realm of silicon and copper wire will likely remain just that: a dream.
The human brain will continue to create fascinating approximations of itself, as scientists develop ever-more-complex software that can interact with flesh-and-blood human beings. Researchers will surely build broader and broader implementations of what today are narrow, problem-solving artificial intelligences.
But the more we advance in imitating ourselves through AGI, the more we will discover about the complex nuances of our own human image. Each attempt to “reproduce” our humanity teaches us more about ourselves—including the amazing discovery that we are not merely an assembly of chemicals produced by repeatable physical processes. We are in fact much more than the sum of our physical parts. We are truly something astonishing.
Indeed, how apt are these words attributed to Wilder Penfield: “How little we know of the nature and spirit of man and God. We stand now before this inner frontier of ignorance. If we could pass it, we might well discover the meaning of life and understand man’s destiny.” And God reveals what mankind is unable to discover—not only that there is a spirit in man, but also that mankind has a wonderful destiny ordained by its Creator.
That destiny will not ultimately be found in man’s struggle to create technology in his own image, but in his rediscovering that he himself is made in the image of Another. With one foot planted in the physical realm and one in the spiritual, we bear the fingerprints of our Divine Designer in a way that should fill us with wonder and cause us to reflect: For what purpose has He made us so? And how have we aligned our lives in harmony with that purpose?
Our efforts to create something in our own image should impress upon us the humbling significance of the fact that we are made in His.