The Hidden Complexity of Thought


Thinking is harder than people think.


If you've read Thinking, Fast and Slow, you'll be familiar with the concepts of a "system 1" that does fast, unconscious processing, and a "system 2" that does slow, methodical processing. The important insight here is that these systems don't sit side by side. System 2 is built on top of system 1. Conscious thought is an emergent property of all the low-level unconscious thinking that's going on.

(Every part of your conscious thinking process originally comes from your unconscious mind. Think about how you explicitly work through a problem step-by-step. How do you determine what step comes next? How do you perform one of those steps? The details are all performed unconsciously, and only a "summary" is brought to conscious awareness.)

If you had to use your system 2 to catch a ball, you couldn't do it. (And even thinking about it might make you worse.) The trajectory, and the muscle movements needed to position your hand in that location, would take you hours, if not years, to work out explicitly. The reason we can do it with our system 1 is that system 1 is a vastly more powerful system, finely tuned through evolution to be good at problems like that.

(Just watch people try to program a robot to catch a ball or assemble a puzzle.)
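For a sense of what the explicit version involves, here's a minimal sketch of just the trajectory part, under deliberately idealized assumptions (no air resistance, the ball's position and velocity known exactly, made-up numbers): solve for when the ball comes down to hand height and where it will be at that moment. Your system 1 handles a vastly harder version of this, plus vision and motor control, in a fraction of a second.

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def landing_point(x0, y0, vx, vy, hand_height=1.5):
    """Where (and when) does the ball come down to hand height?
    Solves y0 + vy*t - 0.5*G*t^2 = hand_height for t, then returns
    the horizontal position x0 + vx*t at that moment."""
    a, b, c = -0.5 * G, vy, y0 - hand_height
    discriminant = b * b - 4 * a * c
    if discriminant < 0:
        raise ValueError("ball never reaches hand height")
    t = (-b - math.sqrt(discriminant)) / (2 * a)  # later root: the descending branch
    return x0 + vx * t, t

# Made-up throw: released 2 m up, moving 12 m/s forward and 6 m/s upward.
x, t = landing_point(x0=0.0, y0=2.0, vx=12.0, vy=6.0)
print(f"Put your hand at x = {x:.1f} m; the ball arrives in {t:.2f} s")
```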

System 1 evolved first, and it's found in the minds of all animals. System 2 comes into existence once system 1 gets complicated enough to support a second layer of processing on top of it. (Think about someone building a computer in Minecraft. Their physical computer is simulating a universe, and then a computer is implemented inside the physics of that universe. The computer inside Minecraft is vastly slower and more limited than the actual computer it's built on top of.)


There's a conception of fields of thought as being "hard" or "soft", such as in the hard sciences/soft sciences and hard skills/soft skills dichotomies. And the hard skills/sciences are generally thought of as being more difficult. This is generally true, for humans. Soft skills are the sorts of things that we evolved to be good at, so they feel natural and effortless. Hard skills are those that didn't matter all that much in our ancestral environment, so we have no natural affinity for them.

But in a fundamental sense, hard skills are vastly simpler and easier than soft skills. Hard skills are those that can be formalized. Performing long division is challenging for humans, but it's trivial to program into a computer. Knowing how to hold a polite conversation with a coworker? Trivial for most humans, but almost impossible for an algorithmically-programmed computer.

(The technical term for this is Kolmogorov complexity: the length of the shortest computer program that can do what you want. The shortest program that can perform long division is much shorter than the shortest program that can competently navigate human social interaction.)
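To make the contrast concrete, here is the entire schoolbook long-division procedure as a few lines of Python (a minimal sketch for non-negative integers; the built-in // and % operators do the same job anyway). Nobody knows how to write a comparably short program for holding a polite conversation.

```python
def long_division(dividend: int, divisor: int) -> tuple[int, int]:
    """Schoolbook long division, digit by digit. Returns (quotient, remainder).
    A minimal sketch for non-negative dividend and positive divisor."""
    quotient, remainder = 0, 0
    for digit in str(dividend):                # "bring down" the next digit
        remainder = remainder * 10 + int(digit)
        quotient = quotient * 10 + remainder // divisor
        remainder = remainder % divisor
    return quotient, remainder

print(long_division(987654, 321))  # (3076, 258), i.e. 987654 = 321*3076 + 258
```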

This is why experts in "hard skills" tend to be good at explaining them to others (with the exception that they'll often use technical terms that laypeople don't understand; once they remember to define those terms as well, the problem tends to go away), while experts in "soft skills" tend to be bad at explaining their craft. Hard-skills experts come to their expertise via conscious reasoning; they understand the subject matter on a step-by-step level, and can break it down for others.

Soft skills experts, on the other hand, tend to function through intuition and tacit knowledge. When someone is asked to explain why they're so charismatic, they'll often stumble and say things that boil down to "just say nice things instead of rude things". They don't actually understand why they behave the way they do; they just behave in the way that feels right to them, and it turns out that their unconscious mind is good at what it does.


This is the explanation behind Moravec's paradox: the observation that computers tend to be good at the sorts of things humans are bad at, and vice versa.

Computers formally implement an algorithm for a task. This makes them only capable of performing tasks that are simple enough for humans to design an algorithm for.

Life evolved to do things that were necessary for survival. These things require a massive amount of low-level processing, which can be optimized specifically to do those things and nothing else. You are in some sense doing "calculus" any time you catch a ball mid-flight, but the mental processes doing that calculus have been optimized specifically for catching thrown objects, and cannot be retasked to do other types of calculus. (Except maybe in some people? There are people who seem to be able to do mental arithmetic by intuition rather than explicit reasoning, and I'm not sure how that ability fits into this model.)

The end result is that computers are good at things with low Kolmogorov complexity, while humans are good at things that are useful for survival on the surface of a planet, which often have high Kolmogorov complexity. There's no particular reason to expect these two things to be the same.


[XKCD 1425, "Tasks": https://xkcd.com/1425/]


Neural networks are the computer scientists' attempt to break this trend. Rather than being explicitly programmed with an algorithm that a human designed to return the desired result, they learn rough heuristics from large quantities of data. This is very similar to how humans learn, though humans also come with a bunch of "pre-programmed" instincts, while neural networks have to start from scratch.
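Here's a bare-bones sketch of that difference, with made-up data and a two-parameter linear model standing in for a real neural network: instead of being handed the rule F = 1.8C + 32, the program below only sees examples and nudges its parameters until its predictions stop being wrong. What it ends up with is a fitted heuristic rather than a rule anyone wrote down.

```python
import random

# "Learn" the Celsius-to-Fahrenheit rule purely from examples, without ever
# being told that F = 1.8*C + 32.
random.seed(0)
examples = [(c, 1.8 * c + 32) for c in range(-20, 41)]

w, b = 0.0, 0.0        # the model: predicted_f = w*c + b
lr = 1e-4              # learning rate
for _ in range(200_000):
    c, f = random.choice(examples)
    error = (w * c + b) - f
    w -= lr * error * c        # gradient of squared error with respect to w
    b -= lr * error            # gradient of squared error with respect to b

print(f"learned: F ≈ {w:.3f}*C + {b:.3f}")  # converges toward 1.800 and 32.000
```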

As a result, GPT-4 can carry on a conversation about as well as a human can, but ask it to multiply together two large numbers and it will give you an answer that "looks right", in that it has about the right number of digits and maybe starts with the correct few digits, but is actually wrong.
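One way to see why that failure mode is natural: a cheap heuristic based on logarithms gets the digit count and the first few digits of a product right while knowing nothing about the trailing ones, which is roughly the shape of error described above. (This is just an analogy with made-up numbers, not a claim about how GPT-4 actually does arithmetic.)

```python
import math

def sloppy_multiply(a: int, b: int) -> int:
    """Estimate a*b from logarithms: the digit count and first few digits
    come out right, the trailing digits are simply zeroed. An analogy for
    answers that merely 'look right'."""
    log_product = math.log10(a) + math.log10(b)
    digit_count = int(log_product) + 1
    leading = round(10 ** (log_product % 1) * 100)   # roughly 3 leading digits
    return leading * 10 ** (digit_count - 3)

a, b = 739_184, 256_301
print(f"exact : {a * b}")                  # 189453598384
print(f"sloppy: {sloppy_multiply(a, b)}")  # 189000000000 (right length, right start)
```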

Modern scaling approaches to LLMs are trying to make them complex enough to recreate the second level of explicit reasoning on top of these heuristics, just as happened to humans once our brains became complicated enough.

(An interesting observation is that neural networks are rapidly getting better at social skills (e.g. holding a conversation) and intellectual skills (e.g. programming, answering test questions), but have made little progress on physically-embodied tasks like controlling a robot or driving a car. I think the resolution to this seeming contradiction is that they're actually about equally good at all those tasks, but when an AI goes off the rails in a conversation or test answer it's no big deal, whereas when it goes off the rails while controlling something in the real world it matters quite a lot. Rare-but-severe mistakes are much more impactful when controlling a physical object than when performing some content-generation task online, so we notice and care a lot more about the former.)


When it comes to human decision-making, understanding this distinction can tell us when we should listen to our intuition and when we should discard it. Consider Case 4 from the paper "Rapid Decision Making on the Fire Ground", popularized by Thinking, Fast and Slow:

In Case 4, a firefighter led his men into a burning house, round back to the apparent seat of the fire in the rear of the house, and directed a stream of water on it. The water did not have the expected effect, so he backed off and then hit it again. At the same time, he began to notice that it was getting intensely hot and very quiet. He stated that he had no idea what was going on, but he suddenly ordered his crew to evacuate the house. Within a minute after they evacuated, the floor collapsed. It turned out that the fire had been in the basement. He had never expected this. This was why his stream of water was ineffective, and it was why the house could become hot and quiet at the same time. He attributed his decision to a “sixth sense.” We would be less poetic and infer that the mismatch was the cue. The pattern of cues deviated from the prototypical patterns in which heat, sound, and water are correlated.

That firefighter had no idea why he had this intuition, but it turned out to be accurate. There was in fact a very good reason behind it, but all the reasoning was done unconsciously. He had, through the course of experience, come to have an intuitive feel for the sensory experience of a normal fire. This one felt subtly different, and while the difference wasn't large enough to be brought up to conscious awareness, his unconscious recognized it and gave him a general sense of unease.

This is the distinction to look for: if the situation is one where your unconscious has a large amount of training data, either from your life experience or from genetic programming shaped by the ancestral environment (instinct), your intuition and "gut feelings" should be taken seriously. But if it's a context that has only come to exist recently in the course of human evolution, and not one that you're deeply familiar with, your intuition is likely to misfire: an out-of-distribution error of your system 1. In these cases, it's safer to rely on explicit reasoning.
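The same logic can be put into code as a toy novelty detector: a system that has only ever seen "normal" examples can register that something is off without being able to say what, which is roughly the job the firefighter's unease was doing. (A loose analogy with made-up features and numbers, not a model of the brain.)

```python
# A toy "sense of unease": flag an observation that sits far from everything
# in past experience, without knowing *why* it's wrong. Features and numbers
# are made up: (perceived_heat, noise_level).
normal_fires = [(310, 80), (295, 75), (320, 85), (305, 78), (300, 82)]

def unease(observation, history):
    """Largest per-feature z-score: how many standard deviations the
    observation sits from the mean of everything seen before."""
    worst = 0.0
    for i, value in enumerate(observation):
        past = [h[i] for h in history]
        mean = sum(past) / len(past)
        std = (sum((p - mean) ** 2 for p in past) / len(past)) ** 0.5
        worst = max(worst, abs(value - mean) / std)
    return worst

print(unease((308, 81), normal_fires))  # ordinary fire: well under 1
print(unease((360, 20), normal_fires))  # hot *and* quiet: very large
```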