Nonlinear computation

If natural tasks could be solved by linear computation, we wouldn’t need a brain: we could wire our sensors directly to our muscles and accomplish the same goal, because a sequence of linear processing steps is equivalent to a single linear step. Real problems require the brain to expand its representation nonlinearly, so that simple operations can solve them. Complex tasks require substantial untangling to isolate their task-relevant properties. Machine learning has developed methods to train artificial neural networks on practical tasks, and the resulting networks show striking similarities to biological neural networks.
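The claim that stacked linear steps add nothing can be checked directly: composing two linear maps is itself a linear map. A minimal NumPy sketch (the matrix sizes here are arbitrary, for illustration only):

```python
import numpy as np

rng = np.random.default_rng(0)

# Two successive "linear processing steps": input -> intermediate -> output.
W1 = rng.standard_normal((5, 4))   # first linear step
W2 = rng.standard_normal((3, 5))   # second linear step
x = rng.standard_normal(4)         # sensory input

# Applying the two steps in sequence...
two_steps = W2 @ (W1 @ x)

# ...is identical to a single linear step with the combined matrix W2 @ W1.
one_step = (W2 @ W1) @ x

print(np.allclose(two_steps, one_step))  # True
```

However many linear layers are stacked, the same collapse applies, which is why a nonlinearity between steps is what gives depth its power.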


Probabilistic inference

Sensory evidence is ambiguous. To interpret this evidence, the brain must draw on its prior knowledge and weight each source of information by its reliability. Ample behavioral evidence shows that animals do just this. For example, a ventriloquist tricks our brains into localizing the sound at the puppet’s mouth: we overweight our usually reliable visual signals, which here indicate the wrong source, and underweight our imprecise but accurate auditory signals. How does the brain do this? The rules of probabilistic inference, or principled approximations to them, offer hypotheses about how neural circuits could draw inferences in an uncertain world.
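Reliability-weighted combination of two Gaussian cues has a simple closed form: each cue is weighted by its precision (inverse variance). A sketch of the ventriloquist scenario, with made-up numbers standing in for the visual and auditory cues:

```python
def combine_cues(mu_v, var_v, mu_a, var_a):
    """Maximum-likelihood fusion of two independent Gaussian cues.

    Each cue is weighted by its precision (inverse variance), so the
    more reliable cue dominates the combined estimate.
    """
    w_v = 1.0 / var_v
    w_a = 1.0 / var_a
    mu = (w_v * mu_v + w_a * mu_a) / (w_v + w_a)
    var = 1.0 / (w_v + w_a)
    return mu, var

# Hypothetical numbers: vision points at the puppet (0 deg) with low variance;
# audition points at the hidden speaker (10 deg) with high variance.
mu, var = combine_cues(mu_v=0.0, var_v=1.0, mu_a=10.0, var_a=9.0)
print(mu)   # 1.0 -> the estimate is "captured" near the puppet
print(var)  # 0.9 -> the fused estimate is more reliable than either cue alone
```

The fused variance is always smaller than either input variance, which matches the behavioral finding that combined-cue judgments are more precise than single-cue ones.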

Distributed processing

Information is represented redundantly across many neurons in the brain. How can we understand the collective computation performed by such populations? We study population codes, and how recordings from only a small subset of a much larger population can nonetheless illuminate neural computation and animal behavior.
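One way to make this concrete: simulate a population of tuned, noisy neurons encoding a stimulus, then decode the stimulus from a small recorded subset. Everything here (cosine tuning, noise level, population and sample sizes) is an illustrative assumption, not a claim about any particular circuit:

```python
import numpy as np

rng = np.random.default_rng(1)

n_neurons = 200                       # full (mostly unobserved) population
theta = np.pi / 3                     # true stimulus direction
prefs = np.linspace(0.0, 2.0 * np.pi, n_neurons, endpoint=False)  # preferred directions

# Cosine tuning plus Gaussian noise: a redundant, distributed code.
rates = np.cos(prefs - theta) + 0.5 * rng.standard_normal(n_neurons)

# We only get to record a small subset of the population.
recorded = rng.choice(n_neurons, size=20, replace=False)

# Population-vector decoder applied to the recorded subset alone.
est = np.arctan2(
    np.sum(rates[recorded] * np.sin(prefs[recorded])),
    np.sum(rates[recorded] * np.cos(prefs[recorded])),
)

print(theta, est)  # the estimate lands close to the true direction
```

Because the code is redundant, even 10% of the population supports a usable estimate; the estimate sharpens as more neurons are recorded.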


Efficient coding

Despite its impressive powers, the brain has limited computational resources. It directs them toward the most useful computations by looking for change: change over time, when something new comes into view, or change over space, when something differs from its surroundings. Theories of efficient coding and predictive coding quantify how the brain could best highlight this changing information.
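A toy version of predictive coding over time: instead of transmitting a signal sample by sample, transmit only the deviation from a simple prediction (here, the previous sample). For a slowly varying input, the residuals are tiny, and a sudden change stands out sharply. The signal and the predictor are illustrative assumptions:

```python
import numpy as np

# A slowly varying signal with one abrupt change at t = 50.
t = np.arange(100)
signal = np.where(t < 50, 1.0, 3.0) + 0.01 * np.sin(0.1 * t)

# Predictive code: predict each sample from the previous one and
# transmit only the prediction error (the "change" signal).
prediction = np.concatenate(([signal[0]], signal[:-1]))
residual = signal - prediction

# Almost all of the coding budget is spent at the moment of change.
print(np.abs(residual).max())        # large spike at the step
print(np.median(np.abs(residual)))   # near zero everywhere else
```

The same logic applies over space: center-surround receptive fields transmit the difference between a point and its surroundings, highlighting spatial change the way this code highlights temporal change.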