Part 5. Omniscience, Frailty, and Communication

A desire for certainty is understandable. Here, “certainty” would mean having a “perfect map” (or a perfect section of map), or “perfect knowledge”.

But it is never possible.

A. Mortals

I will totally reject Omniscience in the next section, even for the Almighty (!).

But first, let me warm up with a few paragraphs for us mere mortals.

We cannot create a perfectly accurate model of reality. To do so, we would need to know the exact measurements (location, temperature, electric charge) of all particles in the universe, and we would need an error-free, perfect understanding of all physical laws. And we would need a tractable algorithm that computed all of this information, a computer capable of running the algorithm, and enough energy/time/space/matter to perform the computation.

These conditions are unmeetable in practice – we presently have no way of predicting the trajectory of bubbles that move through a tiny vial of water. No one expects this to change any time soon.

Even if we obtained such a model, however, it would have a severe problem with recursion. The ‘perfect model’ would have to include human brains – your brain, and everyone else’s. And since the model would be perfect, it would effectively have created those people a second time. They would exist just as you and I exist, except trapped inside the model (without realizing it) – a Matrix (1999)-like situation. The model-you would need to remain identical to real-you at all times; otherwise the two of you would become de-synchronized, start doing different things, and affect your respective worlds differently – and the model of ‘reality’ would become ‘inaccurate’.

So, for the model to be perfect, real-you can never interact with model-you. But interacting is exactly what you are doing when you view the model. And whenever real-you consults your model, the model-you inside it must consult its model.

In other words, model-you needs his own perfect map of reality to consult, so that he knows how to behave if you ever consult yours. But even if we could somehow find one for him, we would still be defeated: that model would contain a third copy of you, who would need his own model.

This is now an infinite regress. Such a model could never be used unless we could execute a literally infinite number of computational steps at literally zero energy/time/space cost. That is, in effect, a proof by contradiction.
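The regress can be sketched in a few lines of code. This is only a toy illustration (the function name and the depth cap are my own inventions, not anything from the argument itself): consulting the model forces the copy of you inside it to consult its model, which forces the next copy to do the same, without end.

```python
# A toy illustration (hypothetical names) of the regress described above:
# consulting the "perfect model" forces the model-you inside it to consult
# *their* model, which forces the next copy to do the same, forever.

def consult_perfect_model(depth=0, max_depth=500):
    # In reality there is no max_depth; the nesting never bottoms out.
    # We cap it here only so the sketch halts.
    if depth >= max_depth:
        raise RecursionError("the regress never terminates")
    return consult_perfect_model(depth + 1, max_depth)

try:
    consult_perfect_model()
except RecursionError as err:
    print("gave up after 500 nested models:", err)
```

The depth cap is doing the work that infinite energy/time/space would have to do in reality – remove it, and the call never returns.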

B. The Almighty

But let us say that you had absolute control over time, space, and energy.

Believe it or not, Omniscience is still impossible.

i. “All Knowing”

In Catholic Sunday School, I was taught that the God of the Bible was omniscient. He was “all knowing” – he knew everything.

4 He telleth the number of the stars; he calleth them all by their names.
5 Great is our Lord, and of great power: his understanding is infinite
-- Psalm 147:4-5

I remember having a thought: “But what if there were two Gods?”

The first, “God 1”, would be the “real God” of “everything”. The second, “God 2”, would have been created by God 1.

And God 2 could have been made to believe that he was “the real God”, even though his divine powers were – in truth – both provincial and revocable. But as long as God 1 did nothing, God 2 would never observe any limit to his powers. He would think that he was really the only God.

In that case, how would God 2 learn that he wasn’t the real God? He couldn’t learn this – not until it was too late.

Even stranger, in that scenario, how would God 1 learn that He actually was the “real God” of everything?! After all, if God 2 can meet a more-knowledgeable God 1, then why can’t God 1 meet an even-more-knowledgeable “God 0”? And then one day the three of them might meet “God -1”.

So, what would omniscience really be like? It seems like a self-contradictory idea.

ii. Diagonal Arguments

It turns out that my younger self was onto something big. In mathematics, such constructions are called “Diagonal Arguments”. One can use them to create (what I will call) “deceiver sets” – “sets” that look like they have a certain quality, but do not. They have the quality for a while, and then, suddenly, they do not.

These diagonal arguments have been used to prove many clever things. One is that there are different “sizes” of “infinity” (some strictly larger than others). Gödel used a diagonal argument in his legendary Incompleteness Theorems (a fallibilist’s jackpot: they demonstrate that a sufficiently powerful system of mathematical axioms can never prove its own consistency – the axioms might be consistent, but this cannot be proven from within the system). In The Fabric of Reality, David Deutsch describes a “Cantgotu” environment which even a universal simulator (one that can simulate anything imaginable) will, paradoxically, never be able to simulate. We “can’t go to” such a place, even with a universal simulator, because the Cantgotu environment pretends to be one of the simulator’s own environments and then does something else – it is consistently inconsistent – a paradox.
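Cantor’s version of the argument is concrete enough to sketch in code. The sketch below is only a toy (the particular enumeration chosen is arbitrary): given any attempted listing of infinite binary sequences, the diagonal construction flips the n-th bit of the n-th sequence, producing a “deceiver” sequence that cannot appear anywhere in the list.

```python
# Toy sketch of Cantor's diagonal argument. An "enumeration" maps n to the
# n-th infinite binary sequence (each sequence is a function index -> bit).

def diagonal(enumeration):
    # Flip the n-th bit of the n-th sequence: the result differs from
    # every listed sequence in at least one position, so it is unlisted.
    return lambda n: 1 - enumeration(n)(n)

# An arbitrary example enumeration: sequence n alternates bits, with phase n.
enum = lambda n: (lambda k: (n + k) % 2)

d = diagonal(enum)
for n in range(10):
    assert d(n) != enum(n)(n)   # differs from sequence n at position n
print("the diagonal sequence escapes every entry in the enumeration")
```

No matter what enumeration you substitute for `enum`, the same one-liner defeats it – that is the sense in which the diagonal sequence is a “deceiver”.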

When I was just a little older, I thought of a sequel to the Omniscient God puzzle (although I did not know it at the time). It happens to be very similar to Deutsch’s “Cantgotu” puzzle.

The puzzle imagines two worlds:

  • World 1 – Here, the laws of physics (and physical constants, etc) will stay exactly as they have been, forever.
  • World 2 – Here, the laws of physics (and physical constants, etc) will stay exactly as they have been, until “Earth Time March 24th, 2478 at 4:30 in the afternoon, Eastern Standard Time”, at which point all of the laws of physics will suddenly change and become something else.

How do we know that we are in World 1, vs World 2? It seems we cannot. And it cannot really be dismissed as a trivial question … for if we are in World 2, then on March 24th we will die (and our Works will be irrevocably obliterated).

We now see that this is an instance of a “deceiver set”. We cannot know (although we may make educated, fallible guesses), a priori, that we are in World 1 over World 2. So in fact we cannot rely on any of our knowledge to be permanent. We must always be willing (and eager) to revise it as our experience changes.

Notice how similar this is to the “Alzheimer’s” example, or “Matrix” example or “It was all a dream” plot device. There will always be something you could learn, that would singlehandedly change everything about what you believed.

This is why it is good to be able to revise one’s knowledge at a moment’s notice.

C. Non-Contradiction

Consider the humble mathematician, demonstrating “2+2=4”. Does the impossibility of omniscience really matter?

(David Hilbert and Kurt Gödel obviously thought it mattered.)

It is important because of a principle of knowledge called non-contradiction – no two pieces of knowledge are allowed to contradict each other. Because everything must cohere, a surprising discovery anywhere can force revisions everywhere: any piece of knowledge that you think you have could suddenly and unexpectedly come under attack, from any direction.

D. Human Frailties

It is hard to write an essay on fallibilism without mentioning some well-known (but underappreciated) unreliabilities in human senses and memory.

E. Communication

i. Shared Context

As I have been saying: information = variation + interpretation, or info = “the part that varies” + “the part that is fixed”.

The same is true for communication – in order to send/receive information, each of you needs a “varying part” but each of you also needs a “fixed part”.

In other words, the less you have in common with someone, the harder it is to communicate with them. It is easiest to talk to someone that you have spent a lot of time with, and it is hardest to talk to someone from another culture who does not speak your language. It is harder still to communicate with reptiles, or fish, as the shared context is almost nonexistent.

When two people are communicating, they need a shared context. They must both be “speaking the same language”.

In fact, this is true with just one person! (When I leave myself a note, I have to make sure that my Future Self will be able to read and understand it.)
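One way to see the “fixed part” at work is data compression with a preset dictionary. In the sketch below (the messages and dictionary strings are made up), a sender and receiver who share the same dictionary – the fixed part – can exchange short byte strings – the varying part – while a receiver holding a different dictionary cannot interpret them at all.

```python
import zlib

shared_context = b"the quick brown fox jumps over the lazy dog"  # fixed part

sender = zlib.compressobj(zdict=shared_context)
wire = sender.compress(b"the quick brown fox waves hello") + sender.flush()

# A receiver with the SAME fixed part recovers the message:
receiver = zlib.decompressobj(zdict=shared_context)
assert receiver.decompress(wire) == b"the quick brown fox waves hello"

# A receiver with a DIFFERENT fixed part cannot interpret the bytes:
try:
    zlib.decompressobj(zdict=b"an entirely different context").decompress(wire)
except zlib.error:
    print("without the shared fixed part, the varying part is meaningless")
```

The more context the two parties share, the less needs to go over the wire – which is the compression restatement of “the less you have in common with someone, the harder it is to communicate”.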

ii. White Lies

a. Trust

Humans constantly lie.

A “white lie” is one that is supposed to be helpful. Is there such a thing? Can lying be helpful?

In general, I believe that honesty is the best policy. When you are dishonest with someone, they will have no choice but to question all of your interactions. Even your past true statements will be tainted. If the trust diminishes too much, you might as well not be communicating at all.

b. Compression

On the other hand, as explained above, perfect honesty (defined as “perfectly accurate statements”) is unobtainable. Mortals find it too expensive to construct statements that are perfectly accurate. Whether we want to or not, and no matter how hard we try, we will always be making statements that are compressed – they are not the whole truth; they contain error in addition to truth.

In practice, we just do the best we can, and so this does not matter most of the time.

But occasionally life deals us an unfortunate context – a situation where we decide that we cannot easily explain ourselves as much as we would like.

  • lie (n) – statement that is known to be arbitrarily inaccurate (since no statement can be perfectly accurate)
  • unfortunate context (n) – situation where it is hard to construct accurate explanations, due to incentives that are anti-honesty (for example: playing a poker hand, acting in a play or movie, fighting a war, living as a prisoner/slave, enduring bigotry / facing humiliation) or incentives for user-friendliness (for example: when you have very limited time to explain yourself, poor communication skills, or insufficient shared context / speaking different languages).

In “unfortunate context” cases, everything we might be able to say in practice (i.e., everything that our user-friendliness “budget” can “afford”) will be significantly untrue.

So you might as well go with a white lie, in those cases.

Future Reading:

Conclusion

As David Deutsch wrote: It’s good to be wrong.

Or, in the modern parlance of the BuzzFeed era: “This one simple trick turns confirmation bias into a strength!”

