Pedagogy

Neural Computation

Carnegie Mellon University
Instructor: Xaq Pitkow
Spring, TTh 2:00–3:20, Mellon 115. 12 credits. [Syllabus]

How does the brain work? Understanding the brain requires sophisticated theories to make sense of the collective actions of billions of neurons and trillions of synapses. Verbal theories are not enough; we need mathematical theories. The goal of this course is to provide an introduction to the mathematical theories of learning and computation by neural systems. These theories use concepts from dynamical systems (attractors, chaos) and from statistics (information, uncertainty, inference) to relate the dynamics and function of neural networks. We will apply these theories to sensory computation, learning and memory, and motor control. Our learning objectives are for you to formalize and mathematically answer questions about neural computation, including “what does a network compute?”, “how does it compute?”, and “why does it compute that way?”

Prerequisites: linear algebra, probability and statistics, calculus

Principles of NeuroAI

Carnegie Mellon University
Instructor: Xaq Pitkow
Fall, TTh 11:00–12:20, Baker Hall 340A. 12 credits.

Can we find fundamental principles of intelligence for both brains and machines? This course focuses on properties of computational systems that lead to good generalization in the natural environment. We’ll answer questions including: How can we measure intelligence? How can we compare computations in brains and machines? How does hardware affect which computations are easy or hard? How do small-scale architectural features like attention affect generalization? How does large-scale architecture affect generalization? How do local learning rules and global objectives affect what is learned? What can we learn from brains to make smarter machines? And what might we learn from machines to make even smarter brains?

Prerequisites: linear algebra, probability and statistics, calculus, some knowledge of neuroscience and deep learning