There are two camps of science. The first, the provisional-verification camp, arose in a formal sense when Francis Bacon formulated “the scientific method.” The second arose when philosophers such as Hume and Kant recognized a problem with induction: why should repeated observations, consistent with a given explanation, provide any justification for the theory? This concern led Karl Popper to propose a new view of science, as a system of falsification. And that’s where we’ll start this discussion.

# Logical Consistency

Consistency is at the heart of science. If one theory says that another is false, then at most one of the two can be true. So if a well-justified theory tells us that a second theory, one we know little about, is false, we’re fairly confident in rejecting that second theory, at least for the time being. And whether we’re discussing the Baconian camp or the Popperian camp, logical consistency is still king.

# The Issue

But what happens when two theories both seem true? That’s the case with general relativity and quantum mechanics. Both of these theories are very useful to us. Both have undergone numerous tests. And both of them have survived all of those tests. But there’s a catch. While these two theories generally mind their own business, there’s an area of physics where both theories make predictions.

You see, general relativity mostly involves big things, like planets and other objects moving through space. General relativity is what replaced Newtonian physics, and involves treating the universe as a smooth space-time manifold that can be warped by mass and energy.

Quantum mechanics usually involves really tiny things, like atoms and the stuff that makes them up. It views the universe as being coarse rather than smooth. But in general, quantum mechanics does not make predictions about the kinds of things on which general relativity informs us. And in that case, there’s no issue.

But in really strange regimes, like near black holes, general relativity and quantum mechanics don’t play nice. Both make predictions, and neither is reconcilable with the other. So, as scientists see it, we need a new theory that can properly work in both domains: a so-called “grand unification.”

String theory, superstring theory, and many other theories have been proposed in order to extend quantum mechanics into the domain of large-scale predictions (*The Final Contradiction*). But for now, these theories do not make predictions that can be tested, and so they’re not scientific in nature. They are simply mathematical extensions of quantum mechanics (*Not Even Wrong*). So right now we’re stuck. That being said, scientists are confident that they will make progress.

# But What If…

Maybe the problem isn’t with general relativity and quantum mechanics, but with our fundamental understanding of reality. Maybe general relativity and quantum mechanics are both correct. As I said at the beginning of this article, a fundamental assumption that science makes is that reality is consistent. That is, a statement cannot be both true and false at the same time. But there’s a whole field of mathematics dedicated to logical frameworks in which it’s sometimes possible for a statement to be true and yet also false.

It’s called paraconsistent mathematics. And no, it’s not an area of mathematics where anything goes. In most cases, the logical framework works just the same as our normal mathematics; it’s just a bit looser. We allow, in certain instances, for contradiction without “explosion”: normally, if a statement is both true and false, we can use it to prove anything we wish, but not in paraconsistent systems.
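To make “contradiction without explosion” concrete, here’s a small sketch in Python of one well-known paraconsistent system, Priest’s Logic of Paradox (LP). The three-valued encoding and helper names here are my own illustration, not a reference implementation:

```python
# Priest's Logic of Paradox (LP): a simple paraconsistent logic.
# Truth values: 1.0 = true, 0.5 = both true and false, 0.0 = false.
# A value is "designated" (counts as holding) if it is at least 0.5.
from itertools import product

def designated(v):
    return v >= 0.5

def neg(a):
    return 1.0 - a  # negation of "both" is still "both"

def lp_valid(premises, conclusion, num_vars):
    """LP-valid: every assignment that designates all premises
    also designates the conclusion."""
    for vals in product([0.0, 0.5, 1.0], repeat=num_vars):
        if all(designated(p(*vals)) for p in premises) and \
           not designated(conclusion(*vals)):
            return False
    return True

def classically_valid(premises, conclusion, num_vars):
    """Same check restricted to the classical values {0.0, 1.0}."""
    for vals in product([0.0, 1.0], repeat=num_vars):
        if all(designated(p(*vals)) for p in premises) and \
           not designated(conclusion(*vals)):
            return False
    return True

# Explosion: from A and not-A, infer an arbitrary B.
premises = [lambda a, b: a, lambda a, b: neg(a)]
conclusion = lambda a, b: b

print(lp_valid(premises, conclusion, 2))          # False: A = 0.5, B = 0.0
print(classically_valid(premises, conclusion, 2))  # True: explosion holds classically
```

The counterexample is exactly the “looser” case: when A takes the value “both true and false,” A and not-A are each designated, yet an unrelated B can still be plainly false, so the contradiction doesn’t prove everything.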

Paraconsistent logic is one of my areas of interest, and I’m hoping that it can help solve another problem: the brittleness of Bayesian inference. It’s important to solve this brittleness, because it would allow us to actually turn science into a system which increases our confidence, even if still just provisionally, in the truth of a model of reality.

Right now, science is limited to falsification, and therefore we only know when we’re likely to be wrong. There’s no justification for calling a theory true, or even likely to be true, simply because it has succeeded in its tests. That’s known as “the problem of induction.” Bayesian inference seems to offer a way around this, but in many cases we can end up with a situation where the results are actually a product of our initial guess rather than of the chain of evidence.
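To illustrate that brittleness, here’s a toy Bayesian update; the numbers are my own hypothetical example, not drawn from any real study. With a single, weakly informative piece of evidence, the posterior mostly echoes whatever prior we started with:

```python
# Toy example: posterior sensitivity to the prior under sparse evidence.
def posterior(prior_h, p_e_given_h, p_e_given_not_h):
    """Bayes' theorem: P(H|E) = P(E|H)P(H) / [P(E|H)P(H) + P(E|~H)P(~H)]."""
    num = p_e_given_h * prior_h
    den = num + p_e_given_not_h * (1.0 - prior_h)
    return num / den

# One mildly supportive observation (likelihood ratio 2:1)
# evaluated under three very different initial guesses:
for prior in (0.01, 0.5, 0.99):
    print(prior, round(posterior(prior, 0.6, 0.3), 3))
# -> 0.01 0.02
# -> 0.5 0.667
# -> 0.99 0.995
```

The same evidence yields posteriors of roughly 0.02, 0.67, and 0.995: the “conclusion” is largely an echo of the prior, which is the dependence on the initial guess described above.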

But let’s say my work on paraconsistent logic yields results, and I can show that if we reformulate our theories in this paraconsistent framework, we can finally solve the problem of induction. That’s great! Now we can stop thinking of science purely as a way to know when we’re wrong, and we can start actually feeling confident that our well-tested theories are true. It sounds like there are no downsides. Except that we could end up with cases where two contradictory theories *could really both be correct*. And that’s weird. Luckily, as far as I know, the apparent conflict between general relativity and quantum mechanics is about the only reason we have to think that reality cannot be modeled in a consistent framework.