Principle-Driven Engineering and Object-Driven Engineering
Do we really need to understand the mind to develop Artificial General Intelligence?
There are many arguments circulating on X claiming that we must understand the human mind more deeply in order to develop more intelligent AI systems. This argument sounds plausible at first glance, but it becomes far less convincing once we examine how nature is engineered and how humans have historically engineered systems.
Bird vs Airplane Analogy
A useful analogy here is bird vs airplane. Reframed in this context, the question becomes: “Do we need to understand bird anatomy in order to develop airplanes?” The answer is no. The Wright brothers did not have PhDs in ornithology, nor did they devote years to studying birds as objects. Instead, they discovered the principles governing flight.
Similarly, to create superintelligent AI, we do not need to understand the object (the mind) in its entirety. We need to understand the principles on which the mind operates.
Principles vs Objects
This distinction separates engineering into two categories: principle-driven engineering and object-driven engineering.
History repeatedly shows that when humans draw inspiration from nature, they succeed by extracting principles rather than fully understanding the object itself. Examples include:
- Airplane vs bird
- Submarine vs fish and whales
- Camera vs human/animal eye
In all these cases, we infer the principles behind the phenomenon rather than attempting to replicate the entire phenomenon.
Can We Infer Principles Without Fully Understanding the Object?
This leads to an important question: Without understanding the object, can we still understand the principles? The answer is yes.
Below are examples showing how principles were inferred despite incomplete object-level understanding.
Aeronautics
- Object: Micro-anatomy of feathers, bird musculature
- Principles: Bernoulli's law, lift L = C_L·ρ·v²·S / 2, control surfaces
Vision
- Object: Exact ion-channel dynamics of cat V1 neurons
- Principles: Edge detection, convolution, hierarchical feature reuse
Cognition
- Object: Every synapse in the human cortex
- Principles: Bayes-optimal inference, reinforcement learning, energy minimisation
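To make the Aeronautics example above concrete, here is a minimal numeric sketch of the lift principle in Python; the lift coefficient, airspeed, and wing area are illustrative assumptions, not measurements of any real aircraft.

```python
# Minimal numeric sketch of the lift principle L = 0.5 * C_L * rho * v**2 * S.
# The coefficient, speed, and wing area below are illustrative values only.

def lift_force(c_l: float, rho: float, v: float, s: float) -> float:
    """Lift in newtons from lift coefficient, air density (kg/m^3),
    airspeed (m/s), and wing area (m^2)."""
    return 0.5 * c_l * rho * v**2 * s

# Sea-level air density is roughly 1.225 kg/m^3; the other inputs are made up.
print(lift_force(c_l=0.5, rho=1.225, v=70.0, s=30.0))  # ~45,000 N
```

Nothing in this calculation refers to feathers or musculature; the coarse principle alone is enough to size a wing.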
Engineers often succeed by skipping the object entirely and jumping straight to principles. The Wright brothers observed wing-warping in vultures, but quickly abstracted it into lift/drag equations and ailerons. No ornithology PhD was required.
The same pattern appears in AI:
- Cognition can be engineered from principles like Bayesian inference and reinforcement learning without decades of neuroscience research.
- Vision systems can be engineered without deep ophthalmological understanding.
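As a minimal sketch of the vision point, the following snippet builds an edge detector purely from the convolution principle; the toy image and Sobel-style kernel are illustrative choices, with no reference to any biological eye.

```python
import numpy as np

# Synthetic 6x6 image: dark left half, bright right half,
# so the only edge is a vertical one in the middle.
image = np.zeros((6, 6))
image[:, 3:] = 1.0

# Sobel-style kernel that responds to horizontal intensity changes.
kernel = np.array([[-1.0, 0.0, 1.0],
                   [-2.0, 0.0, 2.0],
                   [-1.0, 0.0, 1.0]])

def convolve2d_valid(img: np.ndarray, k: np.ndarray) -> np.ndarray:
    """'Valid' 2-D convolution (implemented as cross-correlation,
    the convention used by most deep learning libraries)."""
    kh, kw = k.shape
    out = np.zeros((img.shape[0] - kh + 1, img.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * k)
    return out

# Strong responses appear only where the dark/bright boundary sits.
print(convolve2d_valid(image, kernel))
```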
Why Is Principle-Only Design Not a Violation of Physics?
This raises a deeper question: How does physics allow principle-only design with sparse understanding? The answer lies in the hierarchical and forgiving nature of physical laws.
Physics is hierarchical. When we operate at higher levels, lower-level details are coarse-grained away and absorbed into effective parameters.
Think of reality as a massive ultra-high-resolution image where every molecule, photon, and electron is a pixel. To design a wing or a camera, you only need a 1080p preview. Compressing that massive image into a lower resolution while preserving all relevant information is called coarse-graining. The resulting variables are effective parameters.
A jumbo-jet wing designer does not care about the picosecond motion of individual air molecules. They care about mean pressure and turbulence envelopes.
Coarse-graining is compression without loss of important information. For example, to quench thirst, one takes water from a river without drinking the entire river. That handful of water is a micro-representation of the whole river: same chemistry, same properties.
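Here is a minimal sketch of coarse-graining as block averaging, assuming a synthetic "pressure field" of random values standing in for molecular-scale detail; the block averages play the role of effective parameters.

```python
import numpy as np

# Fine-grained field: random stand-ins for molecular-scale pressure samples (Pa).
rng = np.random.default_rng(0)
fine = rng.normal(loc=101_325.0, scale=50.0, size=(64, 64))

def coarse_grain(field: np.ndarray, block: int) -> np.ndarray:
    """Average non-overlapping block x block patches into single effective values."""
    h, w = field.shape
    return field.reshape(h // block, block, w // block, block).mean(axis=(1, 3))

effective = coarse_grain(fine, block=16)   # 64x64 -> 4x4 effective parameters
print(fine.shape, "->", effective.shape)
print(effective)  # each entry summarizes 256 fine-grained values
```

The designer works with the 4x4 grid of effective values; the picosecond-scale detail has been absorbed into them.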
Equifinality and Non-Isomorphism
A second explanation is that different systems can reach the same observable behavior through entirely different internal mechanisms. Science treats this as normal.
Evolution followed one trajectory to enable birds to fly. Humans followed a different trajectory to build airplanes. This phenomenon—where the same end-state can be reached via multiple paths—is called equifinality.
A common fallacy is assuming isomorphism: believing that identical input-output behavior implies identical internal mechanisms. Nature does not require this.
In principle, this openness allows different civilizations to discover different physical frameworks for describing the universe while aiming at the same goal: understanding reality.
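As a toy illustration of the isomorphism fallacy, here is a minimal sketch of two functions with identical input-output behavior but entirely different internal mechanisms; the task (summing the first n integers) is purely illustrative.

```python
# Equifinality in miniature: same observable behavior, different mechanisms.

def sum_to_n_iterative(n: int) -> int:
    """Mechanism 1: accumulate the sum step by step."""
    total = 0
    for k in range(1, n + 1):
        total += k
    return total

def sum_to_n_closed_form(n: int) -> int:
    """Mechanism 2: jump straight to the closed-form result n(n+1)/2."""
    return n * (n + 1) // 2

# Identical input-output behavior does not imply identical internals.
assert all(sum_to_n_iterative(n) == sum_to_n_closed_form(n) for n in range(100))
print(sum_to_n_iterative(10), sum_to_n_closed_form(10))  # 55 55
```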
Convergence Between Brain and Deep Learning
Distinct physical systems can converge on similar high-level solutions under shared constraints. Analysis of biological minds and deep learning systems shows partial convergence:
- Brain: Spiking neurons, stochastic synapses, dendritic computation. Deep Learning: Real-valued tensors, backpropagation gradients.
- Brain: Local plasticity rules (Hebbian learning, STDP). Deep Learning: Global gradient descent.
Despite differing internal mechanisms, both systems can recognize images and translate languages. The engineered principle overlaps with, but is not identical to, the biological one.
Empirical Evidence for Principle-Driven Engineering
This is not merely philosophical; it is empirical. Principle-driven engineering has repeatedly succeeded:
- Convolutional neural networks drew inspiration from receptive fields but surpassed biologically detailed spiking models long before V1 wiring was fully understood.
- Deep reinforcement learning agents achieve human-level Atari performance without any model of dopamine neurons, using only reward-maximizing temporal-difference learning (see the sketch at the end of this section).
- Shannon's information theory enabled modern communication systems decades before direct measurement of auditory nerve ion channels.
In each case, principle-driven engineering preceded complete object-level understanding.
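To ground the reinforcement learning point above, here is a minimal, self-contained sketch of reward-maximizing temporal-difference learning (tabular Q-learning) on a made-up five-state corridor task; the environment, hyperparameters, and episode count are illustrative assumptions, not the setup used in the Atari results.

```python
import random

# Toy five-state corridor: the agent starts at state 0 and receives +1 reward
# on reaching state 4. No model of dopamine neurons appears anywhere.
N_STATES = 5
ACTIONS = ("left", "right")
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1

def step(state, action):
    """Toy environment dynamics: returns (next_state, reward, done)."""
    nxt = min(state + 1, N_STATES - 1) if action == "right" else max(state - 1, 0)
    done = nxt == N_STATES - 1
    return nxt, (1.0 if done else 0.0), done

q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def greedy(state):
    """Best-known action, with random tie-breaking."""
    best = max(q[(state, a)] for a in ACTIONS)
    return random.choice([a for a in ACTIONS if q[(state, a)] == best])

for _ in range(500):  # episodes
    s = 0
    done = False
    while not done:
        a = random.choice(ACTIONS) if random.random() < EPSILON else greedy(s)
        s_next, r, done = step(s, a)
        # Temporal-difference update toward the bootstrapped target.
        target = r if done else r + GAMMA * max(q[(s_next, b)] for b in ACTIONS)
        q[(s, a)] += ALPHA * (target - q[(s, a)])
        s = s_next

# After training, the greedy policy moves right (toward the reward) in every state.
print({s: greedy(s) for s in range(N_STATES - 1)})
```

The agent learns to move toward the reward from the reward signal alone, which is the principle at work; nothing in the code encodes a biological mechanism.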