Don’t Mix, Multiply: How LLMs Could Help Us See the Same World
If large language models are going to shape how we see reality, they need to sharpen evidence, not smooth it away. Probabilities multiply, evidence adds — that’s how we build a shared truth.
LLMs are becoming the lenses through which we read reality. That’s exciting, but it’s also dangerous. If they’re trained to please us, they risk creating custom-built realities for each user — personalised mirrors that feel good but fracture our sense of a shared world.
The alternative is already written in the maths. In probability theory, the golden rule is simple: probabilities multiply, evidence adds. When observations are independent their likelihoods multiply, so their logarithms, the evidence, add. Independent perspectives don’t blur together; they sharpen one another. That’s the reconciliation principle that makes science work, and it could make AI work for us too.
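A minimal sketch of that rule, with invented numbers: suppose three independent sources each make a claim four times likelier to be reported if it is true than if it is false. Their likelihood ratios multiply, and in log space the same update is a sum of bits.

```python
import math

# "Probabilities multiply, evidence adds" on hypothetical numbers:
# a 50/50 prior and three *independent* sources, each of which is
# four times likelier to report the claim if it is true.
prior_odds = 1.0                      # P(true) / P(false)
likelihood_ratios = [4.0, 4.0, 4.0]   # one per independent source

# Probabilities multiply: posterior odds = prior odds x product of ratios.
posterior_odds = prior_odds
for lr in likelihood_ratios:
    posterior_odds *= lr              # 1 * 4 * 4 * 4 = 64

# Evidence adds: the same update in log space is a sum of bits.
bits = sum(math.log2(lr) for lr in likelihood_ratios)  # 2 + 2 + 2 = 6

posterior_prob = posterior_odds / (1 + posterior_odds)
print(f"odds {posterior_odds:.0f}:1 = {bits:.0f} bits, P(true) = {posterior_prob:.3f}")
# odds 64:1 = 6 bits, P(true) = 0.985
```

Three independent voices move the claim from a coin flip to 64:1. No amount of rephrasing by a single voice earns those extra bits.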
Here are five ways we could build LLMs that reconcile, not reinforce:
1. Reward independence, not agreement.
Echoes aren’t evidence. An AI should learn to distinguish between genuinely independent sources and correlated repetition: ten outlets republishing the same wire report are one source, not ten. Independence sharpens; correlation just shouts louder (the toy calculation after this list puts numbers on the difference).
2. Show your working.
When an answer appears, we should see how much independent evidence went into it. Call it an “evidence log”: bits of support, in the literal log-odds sense of the sketch above, stacked transparently.
3. Product, not mixture.
Most ensemble systems combine opinions as a mixture of experts, a weighted average of distributions, and an average can never be sharper than its sharpest component. That blurs edges. The better design is a product of experts: overlapping likelihoods multiply, then renormalise, giving a sharper posterior (the Gaussian sketch after this list shows both side by side). This isn’t just a mathematical curiosity; it’s the difference between convergence and confusion.
4. Reconcile, don’t reinforce.
LLMs shouldn’t just echo what we want to hear. They should surface differences and then show how they collapse into a single fact. Reconciliation is more powerful than agreement.
5. Transparency over tailoring.
Instead of spinning bespoke stories for each user, models should make visible the distribution of perspectives, then show where they converge. That’s how we see the same world together.
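Here is the toy calculation promised under point 1, again with invented numbers: one original report worth two bits of evidence, republished by ten outlets. Treating the echoes as independent multiplies the evidence tenfold; recognising the correlation counts it once.

```python
import math

# Hypothetical: one original report with a 4x likelihood ratio (2 bits),
# republished verbatim by ten outlets.
lr_original = 4.0
n_echoes = 10

def prob_from_bits(bits: float, prior_odds: float = 1.0) -> float:
    """Turn accumulated bits of evidence back into a probability."""
    odds = prior_odds * 2.0 ** bits
    return odds / (1.0 + odds)

# Naive model: every repetition counted as an independent source.
naive_bits = n_echoes * math.log2(lr_original)   # 20 bits

# Correlation-aware model: perfectly correlated copies collapse into one.
dedup_bits = math.log2(lr_original)              # 2 bits

print(f"naive: {naive_bits:.0f} bits, P = {prob_from_bits(naive_bits):.6f}")
print(f"dedup: {dedup_bits:.0f} bits, P = {prob_from_bits(dedup_bits):.3f}")
# naive: 20 bits, P = 0.999999  (correlation shouting louder)
# dedup:  2 bits, P = 0.800     (what the evidence actually supports)
```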
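And the Gaussian sketch promised under point 3. Two experts hold overlapping beliefs about the same quantity (means and variances invented for illustration). Averaging their densities, the mixture, comes out wider than either expert; multiplying and renormalising, the product, comes out sharper, because for Gaussians the precisions add.

```python
# Two Gaussian "experts" with hypothetical beliefs about one quantity.
mu_a, var_a = 0.0, 1.0   # expert A: mean 0, variance 1
mu_b, var_b = 1.0, 1.0   # expert B: mean 1, variance 1

# Mixture of experts: average the densities (equal weights).
# Disagreement between the experts *adds* variance, so the result blurs.
mix_mean = 0.5 * mu_a + 0.5 * mu_b
mix_var = 0.5 * (var_a + mu_a**2) + 0.5 * (var_b + mu_b**2) - mix_mean**2

# Product of experts: multiply the densities, then renormalise.
# For Gaussians the precisions (1/variance) add, so the result sharpens.
prec_a, prec_b = 1.0 / var_a, 1.0 / var_b
poe_var = 1.0 / (prec_a + prec_b)
poe_mean = poe_var * (prec_a * mu_a + prec_b * mu_b)

print(f"mixture: mean {mix_mean:.2f}, variance {mix_var:.2f}")  # 0.50, 1.25
print(f"product: mean {poe_mean:.2f}, variance {poe_var:.2f}")  # 0.50, 0.50
```

Both combiners land on the same mean here, but only the product grows more confident where the experts overlap; the mixture just hedges.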
The upshot
LLMs don’t have to be sycophants. They could be reconcilers — tools that multiply probabilities, add evidence, and collapse differences into shared facts. That’s not just better design. It’s the maths of truth.


