The question of whether intelligence can be reproduced by computers or software has gained renewed relevance in recent years due to the rapid development of large language models (LLMs) such as GPT. While LLMs demonstrate impressive capabilities in processing and generating natural language, the question remains whether this constitutes true intelligence or merely (linguistic) competence. So, intelligence or competence?
In psychology, intelligence is often defined as the ability to solve problems, learn from experience, cope with new situations, and think abstractly. Intelligence tests, such as the IQ test, typically measure cognitive abilities like logic, memory, and verbal comprehension.
In 1983, Howard Gardner introduced the theory of multiple intelligences, which aims to provide a more differentiated view. Originally comprising seven intelligences and later expanded to nine, the theory distinguishes different types of human abilities – from linguistic talent to self-reflection.
Linguistic intelligence refers to the ability to use language consciously and creatively - in writing, reading, arguing, or storytelling. LLMs like GPT are specifically trained for this and show high performance in this domain. They can compose complex texts, distinguish stylistic nuances, and generate coherent arguments. This form of intelligence is the one that LLMs simulate most convincingly.
Logical-mathematical intelligence involves analytical thinking, pattern recognition, abstraction, and mathematical understanding. LLMs can appear to solve logical problems, but their performance is inconsistent, especially with complex reasoning or formal proofs. They do not possess a true understanding of logic or mathematical structures, but operate purely on statistical patterns.
Spatial-visual intelligence relates to the ability to think in images, plan spatially, and develop visual concepts - crucial in fields like architecture, art, or navigation. LLMs lack a visual system and any spatial awareness, so they cannot exercise this form of intelligence on their own.
Musical intelligence involves recognizing and creating rhythms, melodies, and harmonies. While LLMs can describe music or generate song lyrics, they do not experience music and have no auditory awareness. Dedicated music-generation systems can now compose, but they lack the emotional and cultural understanding behind musical expression.
Bodily-kinesthetic intelligence is the ability to purposefully and precisely use one’s body - as seen in sports, dance, or craftsmanship. Since LLMs have no physical embodiment, this form of intelligence is fundamentally inaccessible to them. Even in robotics applications, control typically lies outside the model itself.
Interpersonal intelligence refers to the ability to understand other people, communicate with them, and intuitively grasp social dynamics. LLMs can generate texts that appear empathetic, simulate social roles through language, and mimic conversational behavior. However, genuine empathy, intentionality, or social motivation are beyond their capabilities.
Intrapersonal intelligence is the ability for self-awareness, introspection, and understanding one’s own emotions and motives. It requires a conscious self - something LLMs fundamentally lack. They have no subjectivity or internal states, even though they can talk about such concepts linguistically.
Naturalistic intelligence involves recognizing and categorizing natural phenomena: plants, animals, ecosystems. LLMs can describe facts about nature or accurately reproduce biological classifications, but they have no experience of the environment. Their “knowledge” is entirely text-based and lacks any sensory grounding.
Existential intelligence refers to the ability to reflect on meaning, death, faith, and one’s own existence. LLMs can reconstruct philosophical discourse and summarize religious ideas. Yet these outputs remain simulations, as LLMs have no real existential awareness or search for meaning.
Even though LLMs demonstrate capabilities in specific areas that may resemble human forms of intelligence, they lack what many consider essential to true intelligence: consciousness, deep understanding, and intentions of their own.
AI is not intelligent. There is no “I” in AI. Large language models like GPT can simulate certain aspects of intelligence - especially on the linguistic level. But human intelligence goes far beyond that: it is embodied, emotional, social, and reflective. In Gardner’s sense, many real forms of intelligence cannot be captured algorithmically; they are deeply embedded in our human existence. That’s why LLMs can only be seen as demonstrating language competence or functional simulations of intelligence - but not intelligence in the human sense.
The fact that AI is not truly intelligent - that it lacks consciousness, deep understanding, or independent intentions - can actually be a significant advantage. AI systems do not act out of their own drive, whims, or greed; they follow rules and training data. As a result, they are predictable and easier to regulate. Without self-interest or a survival instinct, there is no risk of “rebellion” as imagined in science fiction. Developers and operators retain control and responsibility, which serves their interests as well as the public’s.
It is also advantageous that AI has no real thoughts or feelings. This means we do not need to weigh its moral rights as we do with animals or humans. AI can be used for tasks where human suffering would raise ethical concerns - for example, in hazardous environments or emotionally taxing conversations. There is no risk of unintentionally creating a sentient being capable of suffering.
Lacking its own motivation, AI is less prone to unpredictable, self-directed behavior. Precisely because AI is not truly intelligent, it remains an efficient and ethically manageable tool - as long as it is used responsibly by humans. When people clearly understand that AI is not intelligent, it remains a tool to support human capabilities. People generally want to retain decision-making authority over a tool - unless, of course, managers decide otherwise in the name of cost reduction.