Theories and hypotheses allow you to move forward; they’re pragmatic problem-solving devices. They make sense of facts, or the lack thereof. They help you make decisions. **But is there a way to make *better* decisions? Could you more accurately *predict* future outcomes?**

## A theory of theories

Suppose you have a grand theory. A theory of theories. But it’s a work in progress and resides at the back of your mind steering your curiosity. It’s also nearly impossible to articulate because of its latency and oscillating magnitude. Lost yet? I was.

One day in December of 2013 I clicked arbitrarily on a video for lunchtime viewing pleasures and everything changed.

**Dr. Feynman** postured in front of his chalkboard, one hand in the other, listening. You can see the gears turning as he formulates an analogy to explain **Perturbation Theory** in simple terms.

What happened in the next seven minutes was the key to pushing that grand theory forward into the limelight.

I wasn’t familiar with this “Perturbation Theory,” but I made a connection to his explanation. I subbed my own application into the formula he wrote out, and it lined up precisely with what I couldn’t spit out on my own. Without a physics or math background, I didn’t quite know how to express it. And here it was: the beginning of my grand theory.

**Keep this in mind: problems, in general, tend to be oversimplified. And situational outcomes are the result of many factors.**

## Approximations are more accurate!

In short, **perturbation theory** provides methods for finding an “approximate,” and thus *more correct*, solution to an imprecise problem, rather than seeking a precise solution that turns out to be less accurate.

Here’s an example:

Think of a whole number versus a decimal. Rounding a decimal to zero places leaves you with a whole number, and it’s less accurate.

The real answer = 5.28604837…

A bad answer = 5

A better answer = 5.2860

An even better answer = 5.286048
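The rounding example above can be sketched in a few lines of Python. The value 5.28604837 comes from the example; Python’s built-in `round` is used, which rounds to the nearest value rather than truncating, so the four-place result differs in the last digit from simply chopping off digits:

```python
# Each added decimal place shrinks the gap between the
# approximation and the "real" value from the example.
true_value = 5.28604837

bad = round(true_value, 0)          # 5.0
better = round(true_value, 4)       # 5.286 (trailing zero not shown)
even_better = round(true_value, 6)  # 5.286048

for approx in (bad, better, even_better):
    print(approx, "error:", abs(true_value - approx))
```

Running it shows the error dropping by orders of magnitude at each step, which is the whole point: more retained detail, better approximation.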

My previously inexpressible grand theory (aka The Mild Goosechase, aka the interconnectedness of everything) began to express itself when **two things happened**:

1. I abstracted the equation that Dr. Feynman put on the chalkboard and created a derivative of it.

2. I recognized a problem.

To set this up, here’s the equation/analogy that Feynman used:

(Side A) 1 / (1 − .01) = (Side B) 1 + .01 + (.01)^2 + (.01)^3 + …

= 1 + .01 + .0001 + .000001 + …

Here’s what the equation says: Side A equals Side B when Side B is built up as a rough answer plus detail + more detail (+ even more detail, etc.).

The more detail added on Side B, the more accurate it becomes and the closer its equivalence to Side A. Remember, the answer is the best approximation.
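“Side B approaching Side A” can be checked numerically. A minimal Python sketch, using the x = 0.01 geometric series from Feynman’s example: each added term moves the partial sum closer to the exact value 1 / (1 − x).

```python
# Side A is the exact value 1 / (1 - x).
# Side B, truncated after n terms, is 1 + x + x**2 + ... + x**(n-1).
x = 0.01
side_a = 1 / (1 - x)  # exact: 1.0101010101...

for n_terms in (1, 2, 3, 4):
    side_b = sum(x**k for k in range(n_terms))
    print(n_terms, "terms:", side_b, "error:", abs(side_a - side_b))
```

Each extra term cuts the error by a factor of about 100 (that is, by a factor of x), which is why “adding detail” converges so quickly here.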

My two observations:

1. Abstracted Derivative: There’s usually more to a problem than meets the eye. The information typically omitted is complex and more difficult to use, so it’s thrown out. The trouble with the omission is that the result can be dramatically less accurate (particularly across system iterations). Which leads to the problem.

2. The Problem: This equation could potentially go on forever, becoming incredibly complicated and difficult to calculate. Now imagine a different circumstance, one in which there are **more than** just a few variables. It becomes **complex** AND complicated! It’s really a question of optimization, of finding an optimal state.

How can this be addressed?

## Meet the new Mild Goosechase

Relationships between people, objects and environments can be highly complex. They’re not only interconnected but **interdependent**. Solutions to challenges or problems can also be complex.

Perturbation theory gave me a mathematical handle on “complicated” systems: adding more variables to the equation implies complexity.

With that, how can complexity be represented mathematically?

The months following this mind-bending and otherwise typical lunch have opened an entirely new world to me. A world I’ve been exploring ever since.

**TLDR: A random click will change your life.**