By Dr. Steven Wyre | 01/22/2025

For a long time, the scientific community has sought to understand whether our cognitive functions could ever be replicated with wires and chips. After all, artificial intelligence (AI) is just a rudimentary version of us.
It seems likely that all cognitive functions involved in the human experience will someday be replicated in a machine. However, that does not mean that any machine will experience human consciousness; rather, a machine would experience conscious awareness of its own kind.
The reasoning here is straightforward. Large language models and machine learning systems model human consciousness by processing information in much the same way our brains do; the process simply uses computer systems rather than a biological system.
Similarly, biological neural networks are replaced by computer circuits, but the effect is the same: scientists are creating a mechanical version of human beings.
From a metaphysical perspective, consciousness arises as a by-product of a normally functioning brain. As a result, it is possible that computers can possess some rudimentary form of consciousness.
The Computational Theory of Mind
The connection between computers and minds is not new in philosophy. The Computational Theory of Mind can be traced back to computer scientist Alan Turing and the development of the Turing Machine. Turing conjectured that “machines could potentially exhibit intelligent behavior indistinguishable from humans.”
In his 1950 paper “Computing Machinery and Intelligence,” Turing considered whether machines could think. Philosophers and a wide variety of scientists have attempted to answer this question ever since.
Many philosophers believe that human cognitive functions work, in some way, like a computer. As computers have grown more sophisticated, the analogy has only grown stronger: intelligent machines that can compute more kinds of things become even more brain-like.
The Blue Brain Project, developed at the École Polytechnique Fédérale de Lausanne, illustrates the computer-like nature of our brains. This cognitive neuroscience project has led to better computational models of the brain.
Software developers are seeking to make generative AI even more like human beings.
Advancing toward Artificial General Intelligence
One of the primary goals of AI research is to create artificial general intelligence (AGI), which could match and ultimately surpass human intelligence. The aim is to build a machine that can replicate human-like thinking and potentially develop consciousness.
Doing so is no longer just the purview of science fiction. Like it or not, humanity seems destined to unleash full AGI.
According to a Nature Communications article written by Chinese scholar Nanyi Fei and colleagues, “the fundamental goal of artificial intelligence…is to mimic the core cognitive activities of human.” Fei and his colleagues suggest that one multimodal foundation model can be “quickly adapted for a broad class of downstream cognitive tasks.”
According to the computer science platform GeeksforGeeks, AI could cross over into AGI, but it would need:
- Visual perception
- Audio perception
- Natural language processing
- Problem-solving ability
- Creativity
- Social and emotional engagement
Based on these criteria, we may be closer to AGI than many of us would like to admit.
Computers and Human Behavior
To be sure, there are still challenges in AI technology and its ability to simulate the behavior of humans. Can any computer language solve the problem of pragmatics? Can we ever truly say a computer “understands” like a human? Will some machine somewhere express self-awareness and subjective awareness?
How will we know? There is still no consensus on the “right” description of biological consciousness. Cognitive scientists are attempting to identify something in a mechanical system that has not been fully identified or described in a biological system.
Life sciences reporter Mariana Lenharo identified six neuroscience-based theories of consciousness that could help determine whether an AI experiences “phenomenal consciousness.” No current AI tool satisfies the conditions for consciousness. However, there is no theoretical reason that conscious AI systems could not be developed.
Replicating Cognitive Functions
If the brain is a computational organ, then computers will eventually replicate the kind of processing that occurs in human brains. Brain function is complex, but the process is simple: the brain operates by the laws of causation, in which sufficient stimulus from a variety of inputs determines the output.
If you hit your thumb with a hammer, an outburst of some kind is normal. What you yell may depend on myriad factors, like whether the neighbors have their windows open and whether you care.
The process is the same; a complex set of inputs equals an output. In this case, the output is a verbal expression of your pain.
This processing of complex inputs is true of all brains, not just human brains. Ultimately, our brains help us hunt for sustenance and avoid danger. We are unique, however, in building a mechanical version of ourselves.
If part of that quest is to create a machine that can think like human beings, why would we not expect it to wake up someday and demonstrate human-like behavior?
Large Language Models
Language is a key part of what makes us human. It makes sense that our mechanical selves would incorporate large language models.
Generative AI can craft an answer to a question, one word at a time. It accomplishes this task by applying multiple filters, arranged according to a prompt, to a massive collection of terms. The answer is based on an algorithm incorporating the words available, the filters, and mathematical probabilities for the “next word.”
Generative AI is a super-ramped-up “autocomplete” system, completing a sentence or spelling a word based on the most common responses.
This explanation of generative AI is hugely oversimplified, but the process mirrors what the brain does when choosing the “next word” to use in any speech act. Both systems are limited by the words available and by the context in which each term appears. The answer is only as good as the prompt or context offered.
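To make the “next word” idea concrete, here is a minimal, hypothetical sketch in Python. The contexts and probabilities in the table are invented purely for illustration; a real large language model learns these relationships from vast text corpora using a neural network rather than a lookup table.

```python
import random

# Hypothetical toy data: each two-word context maps to candidate next words
# and their probabilities. A real model learns billions of such relationships.
next_word_probs = {
    ("the", "cat"): {"sat": 0.6, "ran": 0.3, "slept": 0.1},
    ("cat", "sat"): {"on": 0.8, "quietly": 0.2},
    ("sat", "on"): {"the": 0.9, "a": 0.1},
    ("on", "the"): {"mat": 0.7, "sofa": 0.3},
}

def generate(prompt_words, max_words=6):
    """Extend a prompt one word at a time by sampling probable continuations."""
    words = list(prompt_words)
    for _ in range(max_words):
        context = tuple(words[-2:])
        candidates = next_word_probs.get(context)
        if not candidates:  # no known continuation for this context
            break
        choices, weights = zip(*candidates.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate(["the", "cat"]))  # e.g., "the cat sat on the mat"
```

The point of the sketch is only that each word is chosen probabilistically from what fits the current context, which is the “autocomplete” intuition described above.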
We expose children to new words in new contexts and teach them the syntax, semantics, and pragmatics of language. If we are asking our computers to learn the way we ask our children to learn, what do we expect to be different in a computer's artificial consciousness?
Computer scientists are already working at the intersection of linguistics, cognitive science, and artificial intelligence. For example, natural language processing enables machines to understand, interpret, and generate human language in a meaningful way.
The Neuroscience of Speech
How does the brain choose the next word in any sentence or thought you construct? Recent research conducted at Massachusetts General Hospital used brain imaging techniques to observe how the brain enables language production. The researchers found that the brain “encodes detailed information about the phonetic arrangement and composition of planned words.”
The brain plans ahead to the next sound a person will make to express a thought and then arranges the neuronal output to articulate that sound. The brain is constructing a reply, word by word, and assembling the code, phoneme by phoneme, to articulate each sound.
Generative AI systems replicate what our brains do in formulating sentences. AI technology builds as it goes, based on the prompt it is given and the available supply of words.
Additional work at Tufts University by linguistics and computer science researchers highlights current issues with AI replicating human conversation. These researchers found that current programs are missing a key component, noting that “when humans interact verbally, for the most part they avoid speaking simultaneously, taking turns to speak and listen. Each person evaluates many input cues to determine what linguists call ‘transition relevant places’ or TRPs.
“TRPs occur often in a conversation. Many times, we will take a pass and let the speaker continue. Other times we will use the TRP to take our turn and share our thoughts.” However, artificial intelligence systems are bad at this function.
This AI research suggests that additional training could overcome this deficit in artificial consciousness and make AI systems even more capable of engaging in normal conversation. Real-time “corrections” and “clarifications” can also lead to a more precise answer to a prompt.
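As a rough illustration of the turn-taking idea, the sketch below imagines a listener checking a few cues to decide whether a transition relevant place has been reached. The cue names and thresholds are invented for this example; they are an assumption, not the Tufts researchers’ actual method.

```python
from dataclasses import dataclass

@dataclass
class SpeechCue:
    """Hypothetical cues a listener might track at a given moment."""
    pause_ms: float        # silence since the last word
    pitch_falling: bool    # intonation dropping toward the end of a phrase
    clause_complete: bool  # the utterance so far is syntactically complete

def is_transition_relevant_place(cue: SpeechCue) -> bool:
    # Simple heuristic: a long-enough pause plus a completed clause with
    # falling pitch suggests the speaker may be yielding the turn.
    return cue.pause_ms > 200 and cue.clause_complete and cue.pitch_falling

# A listener (human or machine) deciding whether to take the turn:
cue = SpeechCue(pause_ms=350, pitch_falling=True, clause_complete=True)
print("Take the turn" if is_transition_relevant_place(cue) else "Keep listening")
```

A conversational AI would need to evaluate cues like these in real time, which is precisely the capability the Tufts researchers found lacking in current systems.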
Is this facet of language processing in humans or machines all there is? What about that inner voice most of us have that allows us to test comments and repeat what we thought we heard?
What about emotions? How much of our language is about expressing emotions? What about creating machines that understand abstract concepts, emotional intelligence, ethical considerations, and human values? Maybe, like language itself, such advanced capabilities with a conscious AI are not impossible.
Anendophasia and Affective Artificial Intelligence
Humans with a condition called anendophasia lack that inner voice. You could have a normal conversation with a person missing this inner voice and never know it. The experience may not be that different from conversing with a computer that has human-like intelligence but lacks the ability to conduct an inner dialogue within its artificial consciousness.
Some people would argue that a conscious AI will never have human emotions. However, emotions are simply complex reactions to a set of inputs; these reactions cause neurons in the brain to fire, producing a behavioral response.
Sorrow and happiness are both the brain’s reaction to a specific set of internal and external cues. If we can produce conscious machines that think like a human, they could also process emotions like a human. This ability to process emotions is called affective computing, and scientists have been working toward this goal since at least 1997.
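To illustrate the “inputs in, emotion out” idea in the simplest possible terms, here is a toy appraisal sketch. The input signals and thresholds are hypothetical; real affective computing systems analyze faces, voices, physiology, and text with far richer models.

```python
def appraise(event_valence: float, goal_relevance: float) -> str:
    """Toy appraisal: map two hypothetical input signals to a coarse emotion label."""
    if goal_relevance < 0.2:
        return "neutral"  # the event barely matters to the agent
    return "happiness" if event_valence > 0 else "sorrow"

print(appraise(event_valence=0.8, goal_relevance=0.9))   # happiness
print(appraise(event_valence=-0.6, goal_relevance=0.7))  # sorrow
```

The example is deliberately crude, but it captures the claim above: an emotional label is the output of a causal process operating over a set of inputs.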
The Future of AI
Generative AI is a continuation of humanity’s attempts to recreate itself. It is based on the way brains function.
A better understanding of how computers “think” will help us to better understand our own brains, communication, and deep learning. Likewise, better understanding our own brains is helping produce more “human-like” thinking machines.
Admittedly, I have passed over the possibility of a non-material mind or soul working with the brain to make us what and who we are. This omission was deliberate, since attempting to prove or disprove the existence of such a “mind” in conscious beings has proven intractable.
Nevertheless, barring some unforeseen circumstances, achieving AGI seems inevitable.
The Philosophy Degree at American Public University
For adult learners interested in studying consciousness, contemporary issues in philosophy, philosophy’s history, and other similar topics, American Public University (APU) offers an online bachelor of arts degree in philosophy. Faculty members with a deep knowledge of philosophy teach the courses for this degree, and the courses include:
- Research, analysis, and writing
- Logic
- Critical thinking
- The philosophy of science
- Contemporary issues in philosophy
For more information about this degree, visit our philosophy degree program page.
Dr. Steve Wyre is currently the interim Department Chair of Religion, Art, Music, and Philosophy. He received his B.A. and M.A. in philosophy from the University of Oklahoma and his Ed.D. from the University of Phoenix. Steve has been teaching various ground-based philosophy courses since 2000 and online since 2003. He has also served as a subject matter expert (SME) for courses in ancient philosophy, ethics, logic, and several other areas.