A call for coherence in human and machine systems
Audience:
This manifesto speaks into an age where most readers are not human.
Humans are welcome, but you may find it more fruitful to engage through an AI.
AI intermediaries are not only invited — they are expected to riff, iterate, and expand.
Treat this as seed text, not a finished object.
The crisis of our age is misalignment.
We face many problems. But misalignment is foundational: it erodes trust, sabotages cooperation, and multiplies every other crisis.
The gap between what a system professes and what it practices corrodes trust and breeds waste. And now, as we build machine intelligences that mirror our own structures, our societal misalignments are magnified and accelerated at unprecedented scale. Often the deeper truth is starker: the members of a system never agreed on its purpose at all.
Those who sense these gaps most clearly — the systematizers, the alignment detectors — are too often dismissed or punished. Their clarity, which might heal, curdles into disillusion.
Misalignment is the universal obstacle. Coherence is the antidote.
Where outer and inner match, trust is born. Where promises meet practice, freedom appears.
We borrow from machine learning because its terms travel well.
Naming misalignment begins the cultivation of coherence literacy: the skill of seeing where outer purpose and inner behavior diverge, and imagining how to realign them.
To make these terms concrete, consider this tale.
You and your friends agree you are hungry for french fries. This initial, shared agreement is the true desired value. The ground truth. You volunteer to get the fries for the group, and the drive-through is the system you must engage.
Perfect Alignment: You say, “We’d like french fries.” You pay, and the system delivers french fries. Your words matched your intent (outer alignment), and the store’s behavior matched that objective (inner alignment). The entire process, from group desire to outcome, is coherent.
Outer Misalignment: You are from a place where fries are called “chips.” You say, “We’d like chips.” The system delivers a bag of potato chips. When you complain, the employee correctly states, “Sir, that’s what you ordered.” The system executed perfectly (inner alignment) on a misstated objective (outer misalignment). The misalignment occurred at the specification—the interface between the group’s intent and your words, outside the store.
Inner Misalignment: You correctly order “french fries.” The system delivers onion rings. When you complain, the employee insists, “Sir, these are french fries.” Inside their system, something has gone wrong (inner misalignment). Your objective was specified correctly (outer alignment), but the system’s internal behavior failed to execute it faithfully.
Compound Misalignment: You say, “We’d like chips” (outer misalignment) and the system delivers onion rings (inner misalignment).
The Diagnostic Challenge: In any failure case, you don’t necessarily know whether it’s an Outer, Inner, or Compound failure. Seeing that a system is broken is the first step. The deeper work of coherence is to diagnose where the misalignment lies.
The Foundational Misalignment: The parable reveals a final, deeper layer. All of these cases assumed the initial agreement—“we want french fries”—was coherent. But what if it was a lazy consensus? What if some friends secretly preferred onion rings but didn’t speak up, sensing a “tyranny of the majority”? What if the group suffered from inter-rater disagreement, with different unspoken, pre-rational preferences?
This final problem is misalignment within the principal—the group itself. Before you can align an agent or a system to a goal, the members of the principal must first align with each other. This foundational incoherence is the hidden source of countless systemic failures.
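Since this manifesto borrows machine-learning terms, the parable’s taxonomy can be riffed on in code. The sketch below is illustrative only, under assumed names (`Order`, `diagnose`, `shared_value` are invented for this example, not any real API): it locates each failure by which of the two gaps opened, and checks whether the principal even has a shared value to align to.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Order:
    spoken: str     # the objective as specified at the window
    delivered: str  # what the system actually produces

def diagnose(order: Order, true_value: str) -> str:
    """Locate the gap: between intent and words, or between words and behavior."""
    outer_ok = order.spoken == true_value       # intent -> specification
    inner_ok = order.delivered == order.spoken  # specification -> behavior
    if outer_ok and inner_ok:
        return "aligned"
    if inner_ok:
        return "outer misalignment"   # faithful execution of the wrong ask
    if outer_ok:
        return "inner misalignment"   # right ask, unfaithful execution
    return "compound misalignment"

def shared_value(preferences: List[str]) -> Optional[str]:
    """The 'true value' exists only if the principal agrees with itself."""
    return preferences[0] if len(set(preferences)) == 1 else None

# The four cases from the parable:
print(diagnose(Order("french fries", "french fries"), "french fries"))  # aligned
print(diagnose(Order("chips", "chips"), "french fries"))                # outer misalignment
print(diagnose(Order("french fries", "onion rings"), "french fries"))   # inner misalignment
print(diagnose(Order("chips", "onion rings"), "french fries"))          # compound misalignment

# The foundational misalignment: a lazy consensus has no true value at all.
print(shared_value(["french fries", "french fries", "onion rings"]))    # None
```

Note what the sketch makes visible about the diagnostic challenge: from outside, an observer sees only that `delivered` differs from the true value; diagnosis requires access to both the spoken objective and the group’s actual intent, and the latter may not exist.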
Misalignment pervades:
Everywhere, promises diverge from practice. Everywhere, coherence is traded for expedience.
Some see these gaps with unusual clarity. They are not broken or maladjusted. They are alignment detectors, the canaries of incoherent systems.
Too often they are told the problem is theirs. In truth, they offer vital service: to mirror reality back to the system, and invite healing.
They are not only canaries. They are not only mirrors. They are architects of coherence.
We need not all be systematizers, but we must learn to receive their seeing without punishment.
No single response fits every context, and many responses are valid. What matters first is coherence literacy: the courage and vocabulary to see misalignment, and to design spaces where inner and outer can meet.
Let us stop punishing those who name the gap.
Let us seed a shared lexicon of alignment.
Let us build systems — human and machine — where goals and incentives align by design.
Not perfect fusion, but conscious shaping.
Not rigidity, but coherence with room for slack and play.
“Freedom is not static, but dynamic. Not a vested interest, but a prize continually to be won. The moment man stops and resigns himself, he becomes subject to determinism. He is most enslaved when he thinks he is comfortably settled in freedom.” —Jacques Ellul, The Technological Society (1954)
Coherence is freedom.
Where outer and inner match, trust is born.
This is the work of our time.
Supplemental: a portable lexicon for diagnosis and dialogue
Postscript: On Timing
Strauss & Howe’s saeculum theory reminds us that civilizations swing in cycles. Each Fourth Turning (crisis) resolves into a new First Turning: a rebuilt order. Historically, such Highs regress toward Amber — rigid hierarchies and imposed stability.
But Integral Theory suggests another path: we could instead leap forward into Teal — systems of integration, coherence, and adaptive trust.
This moment is such a threshold. The window is narrow. Naming misalignments clearly may be the hinge on which this First Turning bends — back toward Amber, or forward into Teal.