The Unspoken Knowledge: Polanyi's Paradox and the Quest for True AI
- Aki Kakko
From composing music to diagnosing diseases and driving cars, AI systems are performing tasks once considered uniquely human. Yet beneath this veneer of rapid progress lies a fundamental challenge, one articulated back in the 1960s, long before the deep learning era: Polanyi's Paradox. This paradox, summarized as "we can know more than we can tell," continues to shape, constrain, and inspire the field of AI.

What is Polanyi's Paradox?
Named for the Hungarian-British polymath Michael Polanyi, who made the observation in his 1966 book "The Tacit Dimension," the paradox holds that much of human knowledge is tacit rather than explicit.
Explicit Knowledge: This is knowledge that can be easily articulated, written down, codified, and transferred from one person to another. Think of mathematical formulas, grammatical rules, historical dates, or the steps in a recipe.
Tacit Knowledge: This is the knowledge we possess that is difficult, if not impossible, to fully articulate or transfer through explicit instruction. It's often learned through experience, intuition, and practice. It's the "know-how" rather than the "know-that."
Classic Examples of Polanyi's Paradox (outside AI):
Riding a Bicycle: You can read a physics textbook on balance and motion, but that won't teach you how to ride a bike. The subtle adjustments of weight, the feel for balance, the coordination – these are learned tacitly through practice. Try explaining exactly how you stay upright to someone who has never ridden.
Recognizing a Face: We can instantly recognize thousands of faces, even in different lighting, angles, or with changed hairstyles. Yet, describing precisely what features or combination of features allows us to do this for a specific face is incredibly challenging.
A Master Craftsman's Skill: A seasoned carpenter, musician, or chef possesses a wealth of tacit knowledge. They "feel" the wood, "hear" the right note, or "know" when the sauce is perfect, often without being able to fully verbalize the intricate sensory cues and subtle judgments involved.
Understanding Humor or Sarcasm: The subtle cues, context, and cultural understanding required to get a joke or detect sarcasm are deeply ingrained and hard to codify into explicit rules.
Polanyi's Paradox in the History of AI: The GOFAI Era
Early AI, often referred to as "Good Old-Fashioned AI" (GOFAI) or symbolic AI, primarily focused on explicit knowledge. The assumption was that intelligence could be replicated by:
Representing knowledge: Using formal languages like logic or semantic networks.
Manipulating symbols: Applying rules and algorithms to these representations to deduce new information or make decisions.
Expert systems were a prime example. They attempted to capture the explicit knowledge of human experts (e.g., doctors, geologists) in a set of "if-then" rules. While they achieved some success in narrow domains, they hit a wall when faced with tasks requiring common sense or skills laden with tacit knowledge.
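To make the brittleness concrete, here is a minimal sketch of an expert system in the GOFAI style. The rules and symptoms are entirely made up for illustration, not drawn from any real medical system; the point is only the mechanism: explicit if-then rules fire on covered cases and fall silent on everything the rule author never anticipated.

```python
# A toy GOFAI-style expert system: hypothetical if-then rules over a
# dictionary of observed facts. (Rules invented for illustration.)

RULES = [
    # (condition, conclusion) pairs
    (lambda f: f.get("fever") and f.get("cough"), "possible flu"),
    (lambda f: f.get("fever") and f.get("rash"), "possible measles"),
]

def diagnose(facts):
    """Fire the first rule whose condition matches the observed facts."""
    for condition, conclusion in RULES:
        if condition(facts):
            return conclusion
    # The brittle edge: any case outside the codified rules draws a blank,
    # where a human expert's tacit knowledge would still produce a judgment.
    return "no rule matches"

print(diagnose({"fever": True, "cough": True}))  # a covered case
print(diagnose({"fatigue": True}))               # an unanticipated case
```

The second call is the paradox in miniature: the system has no notion of "close enough" or clinical intuition, only the rules someone managed to write down.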
Examples of GOFAI struggling with Polanyi's Paradox:
Natural Language Understanding: Early translation systems and chatbots struggled immensely because human language is rich with ambiguity, context-dependency, and unspoken assumptions – all forms of tacit knowledge. Simply knowing dictionary definitions and grammar rules wasn't enough.
Computer Vision: Programming a computer to identify a "chair" by explicitly defining all possible shapes, materials, and configurations of a chair proved to be an intractable problem. The tacit understanding of what constitutes a "chair" is far more flexible.
Robotics and Motor Control: Trying to explicitly program every single muscle movement and sensory feedback loop for a robot to walk or grasp an object was incredibly complex and brittle.
The limitations encountered by GOFAI, largely due to its inability to handle tacit knowledge, contributed significantly to the "AI Winters" – periods of reduced funding and interest in AI research.
How Modern AI (Deep Learning) Addresses (or Sidesteps) Polanyi's Paradox
The resurgence of AI, particularly through machine learning and deep learning, represents a significant shift in approach. Instead of trying to explicitly codify knowledge, these systems learn it from vast amounts of data. Deep neural networks, inspired by the structure of the human brain, can be trained on millions of examples (e.g., images, text, sounds). Through this training process, they automatically discover the intricate patterns and features relevant to a task, even if those patterns are too complex for humans to identify and articulate.
Examples of Deep Learning leveraging tacit knowledge:
Image Recognition: Convolutional Neural Networks (CNNs) trained on ImageNet (a large database of labeled images) can identify objects with superhuman accuracy. They learn hierarchical features – from simple edges and textures in early layers to complex object parts in deeper layers – without human programmers explicitly defining what a "cat's ear" or "car's wheel" looks like. The network knows what a cat looks like, but it can't "tell" us in explicit, human-understandable rules how it does so.
Natural Language Processing (NLP): Large Language Models (LLMs) like GPT-4 are trained on massive text corpora. They learn grammar, context, sentiment, and even some degree of common-sense reasoning implicitly. They can generate coherent text, translate languages, and answer questions by capturing the statistical relationships and tacit patterns within the language data. We didn't explicitly teach GPT-4 the rules of iambic pentameter, but it can learn to generate poetry in that style.
Game Playing: AlphaGo famously defeated world champion Go players not by being programmed with every possible Go strategy (an impossible task) but by learning optimal strategies through self-play and reinforcement learning, discovering moves human masters hadn't considered.
Robotics: Reinforcement learning allows robots to learn complex motor skills like grasping diverse objects or walking on uneven terrain through trial and error, essentially developing their own tacit understanding of physics and motor control.
In a sense, deep learning doesn't solve Polanyi's Paradox by making tacit knowledge explicit. Instead, it creates systems that can acquire and utilize tacit knowledge directly from data, much like humans do through experience.
Contemporary Challenges: The Paradox Persists
While deep learning has made incredible progress, Polanyi's Paradox still casts a long shadow:
Explainability and Interpretability (XAI): A major challenge with deep learning models is their "black box" nature. While a model might make an accurate prediction (e.g., identify a tumor in a medical scan), understanding why it made that specific decision is often difficult. The knowledge it has learned remains tacit to us, even if the machine is effectively using it. This lack of interpretability is a significant hurdle for critical applications like medicine, finance, and autonomous systems.
Common Sense Reasoning: Despite advances, AI still struggles with robust common-sense reasoning – the vast web of unspoken assumptions and background knowledge humans use to navigate the world. For example, knowing that "water is wet" or "you can't fit an elephant in a teacup" is tacit. LLMs can sometimes parrot these facts but may lack a deeper, grounded understanding.
Robustness and Generalization: AI systems can be brittle. They perform well on data similar to their training set but can fail unexpectedly when faced with novel situations or adversarial attacks (subtly manipulated inputs designed to fool the model). This suggests their tacit understanding might be shallower or different from human tacit knowledge.
Data Dependency and Bias: Since modern AI learns tacit knowledge from data, it also implicitly learns any biases present in that data. If training data for facial recognition predominantly features one demographic, the system's tacit "understanding" of faces will be skewed, leading to poorer performance and unfair outcomes for other demographics.
Human-AI Collaboration: Effectively combining human tacit knowledge with AI's learned tacit knowledge is a complex challenge. How do we ensure that AI systems augment, rather than override, valuable human intuition and expertise?
The Future: Living With or Transcending the Paradox?
Polanyi's Paradox is unlikely to be "solved" in the sense of making all human knowledge explicitly codifiable. Instead, the future of AI will likely involve:
Better Tacit Learning: Developing more sophisticated AI architectures and training methods that can acquire deeper, more robust, and more generalizable tacit knowledge.
Improved XAI: Creating techniques to peer into the "black box," allowing us to better understand the tacit representations and decision-making processes of AI models, even if we can't fully articulate them.
Neuro-Symbolic AI: Combining the strengths of deep learning (tacit pattern recognition) with symbolic AI (explicit reasoning and knowledge representation) to create more versatile and trustworthy systems.
Embodied AI: Systems that interact with the physical world (like robots) may develop richer forms of tacit knowledge, similar to how humans learn through sensory-motor experience.
Polanyi's Paradox remains a profound and enduring concept in the philosophy of knowledge, with deep implications for artificial intelligence. It reminds us that human intelligence is far more than just explicit rule-following. Early AI stumbled against this barrier, while modern AI has found ingenious ways to build systems that learn and operate using forms of tacit knowledge. However, the paradox continues to highlight critical challenges in explainability, common sense, and robustness. Understanding and navigating the "unspoken knowledge" is not just an academic curiosity; it is central to building AI systems that are truly intelligent, trustworthy, and beneficial to humanity. The quest for true AI is, in many ways, a quest to understand, replicate, and collaborate with the vast, powerful, and often ineffable realm of tacit understanding.