Answer
An “artificial intelligence” is a program or computer system that mimics aspects of human intelligence, such as communication, reasoning, or learning. Artificial intelligence is usually abbreviated as AI. The ultimate expression of this is labeled “artificial general intelligence,” or AGI. A synthetic system would be considered AGI if it thought, reasoned, and learned in the same way as a human being. AGI has long been a theme of science fiction. For instance, the droids of the Star Wars films are sentient, emoting machines. Fictional AGI often takes on a villain’s role, such as HAL 9000 from 2001: A Space Odyssey, the machines of The Matrix, or the character Ultron from Marvel Comics.
Today, the term AI is most often associated with language-based systems such as ChatGPT, Gemini, Watson, Copilot, and Grok. It is also connected to the idea of a technological singularity: the point at which artificial reasoning, problem-solving, and self-development overtake those of humans. This second concept, more associated with AGI, generates both hopes and fears. Yet there is no reason to think that “true” artificial intelligence—a self-aware, sentient, “living” program—is possible, let alone actual.
Tools
God allows us to use our minds to create useful tools (1 Timothy 4:4). The general concept of AI is not morally different from using a kitchen knife or driving a car. AI can be used appropriately or inappropriately. As with any other technology, AI has upsides and downsides that should be understood.
Recent generations of AI are incredibly complex, analyzing tremendous amounts of data. New AI models are much better than former iterations at correctly interpreting human language. They are narrowly tuned toward creating new content based on user instructions and a large database of background data.
These developments have led to an explosion of AI use for all sorts of tasks. Some uses of AI are wonderful, such as sorting and summarizing huge databases. Other AI impacts are concerning. Academic cheating is one. Creating even greater dependence on machines for basic knowledge and skills is another. The generation of false yet convincing pictures and voices raises alarm. A growing number of people are becoming emotionally addicted to custom-tuned AI models and struggling to relate to actual humans.
There has long been speculation that computer intelligence could eventually eclipse human ability. Computers store, recall, and manipulate large volumes of data far more efficiently than a person can. Computers have beaten human opponents in contests such as chess and the TV game show Jeopardy. The possibilities excite some people, but others are unsettled by the idea of machines that think as well as or better than the average person.
Efficiency vs. Intelligence
For all their potential, every “artificial intelligence” is still a machine limited by its own creators. AI serves the same basic function as all machines: to make a task easier and faster. Industrial robots are stronger and work longer than people. Computers sort data more quickly and accurately. But extending these ideas to claim that AI can become equal or superior to humans falls short. Computers occasionally appear intelligent, but their actual mechanism is performing extremely low-level operations extremely quickly in extremely long chains. They aren’t truly “smart”; they simply complete certain tasks in less time than people. Some things they cannot do at all. If a person defines intelligence in a way that excludes concepts such as morality, emotion, empathy, humor, and relationship, then the term artificial intelligence loses much of its meaning.
When someone says, “Machines and AI (or AGI) will be better or smarter than human beings,” it’s like saying, “Animals are better than humans. Cheetahs are faster. Elephants are bigger. Birds are more agile.” Of course, those are all separate animals, and they are only “better” than humans in separate categories. A single AI program might be “better” at chess or cooking or even making music. But for AI to be legitimately as smart as or smarter than people, a single program would need to excel in all those things at once.
Modern AI systems are adept at some of these multi-disciplinary tasks. Yet none of them are “thinking” the way a person does. Large language models (LLMs) are used in AI to imitate human language patterns. But they do so via algorithms that match inputs to mathematically preferred outputs. This is why modern AI often fumbles answers to language-based questions. AI is also known to “hallucinate”: to combine information incorrectly and present false statements as truth.
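The idea of matching inputs to mathematically “preferred” outputs can be loosely illustrated in code. The sketch below is only an analogy, not how real LLMs work internally (they use large neural networks, not word counts), and all names in it (`corpus`, `follows`, `predict`) are invented for the example. It shows how a program can produce plausible-looking text purely from statistics, with no understanding at all:

```python
from collections import Counter, defaultdict

# Toy "bigram" text predictor: for each word, count which words
# tend to follow it, then always emit the most frequent follower.
# The program has no grasp of meaning; it only tallies numbers.

corpus = (
    "the cat sat on the mat . "
    "the cat sat on the rug . "
    "the cat chased the dog ."
).split()

# Count how often each word follows each other word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict(word):
    """Return the word most frequently observed after `word`."""
    return follows[word].most_common(1)[0][0]

print(predict("the"))  # "cat" — simply the most common continuation
print(predict("sat"))  # "on"
```

Because the output is chosen by frequency rather than comprehension, a system like this can easily chain together statements that sound fluent yet are false, which is the same basic failure mode behind AI “hallucination.”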
This is key to understanding AI: even the most advanced computer is still a product of human intelligence. Thus, it is limited by human intelligence. A computer playing chess or Jeopardy is not smarter than the people it beats. ChatGPT is not better educated on literature or philosophy than actual experts. These systems do not “understand” their interactions any more than a gasoline engine, a mousetrap, or a digital bathroom scale does. They are simply machines tuned to give automated results according to a complex set of manmade rules.
The Singularity
The phrase technological singularity specifically refers to a theoretical moment when artificial intelligence reaches a tipping point. Once it moves from “artificial intelligence” to “artificial general intelligence,” the system then self-improves without human input and beyond human ability. Some people anticipate great benefits from discoveries made by a vastly superior intellect. In most cases, however, the singularity is feared as the downfall of humanity. A staple of science fiction is an AGI computer system that outstrips the human mind and eventually dominates the world. The resulting dystopia is depicted in movie franchises such as The Terminator and The Matrix.
The concept of technological singularity also assumes that processing power will advance infinitely. This is contrary to what we know about the natural laws of the universe. Advancement in computing technology eventually runs into the limits of physics. Scientists and computer experts agree there is a “hard limit” to how fast certain technologies can operate. Since the complexity required to simulate a human mind is so far beyond even theoretical designs, there is no objective reason to say that sentient artificial intelligence can exist, let alone that it will exist. Even in its current form, modern AI requires mind-boggling levels of electrical and computing power.
The Creations and the Creators
On an abstract level, math and logic suggest that AI can never exceed the human mind. Gödel’s incompleteness theorems are often cited to argue that a system can never become more complex or more capable than its originator. To make an AI better than a human brain, we’d need to fully understand and then surpass ourselves, which is logically contradictory.
Spiritually, we understand our own limits because, being creations of God (Genesis 1:27), we can’t outdo God’s creative power (Isaiah 55:8–9). Also, God’s depiction of the future does not seem to include any kind of technological singularity (see the book of Revelation). It’s possible, however, that the false prophet’s “living” image and the mark of the beast may be enabled by some form of artificial intelligence.
In the meantime, as researchers continue to develop AI systems, humanity continues to react in bizarre and unfortunate ways. Scripture describes idolatry as man worshiping his own creation (Isaiah 2:8; Habakkuk 2:18–19). This may sound absurd to the modern reader. Yet it’s already being embraced by some. A new religion, called Way of the Future, was started by a former Google engineer as a means to worship AI as mankind’s caretaker and guide. Such strangeness is nothing new; humanity has often been guilty of worshiping the work of its own hands. Way of the Future is just a modern version of carving an idol.
The Choice
In short, AI might be able to perform certain tasks faster than a human being. Yet there is no logical, philosophical, or biblical reason to think it can be “better” in a meaningful sense. AI might emulate human thought patterns, but it can never replace the prowess, dexterity, and creativity of the human mind. Overdependence on AI legitimately threatens to leave those traits weak and undeveloped. If AI is “good enough” for complex tasks, fewer people may choose to undertake hard projects, and human competence may regress. God called us to steward the created world (Genesis 1:28). This can include the use of AI, but it excludes the misuse of artificial intelligence in the same way it excludes the abuse of any other tool.
Despite fears and speculations, the possibility of fully sentient AGI or a technological singularity is refuted by science, observation, and Scripture. The concept of self-aware, sentient, superhuman AI makes for entertaining fiction but not much else.
