Projects


Reverse-engineering neocortical intelligence

We are using detailed measurements of the function and structure of mouse visual cortex to reverse engineer the inference and learning algorithms of the brain. We have recorded optically from 10⁵ neurons (thousands at a time) in a behaving mouse, across all layers of cortex, using 2- and 3-photon microscopy. We then use electron microscopy to reconstruct the nanoscale wiring of this circuit, and synthesize these diverse measurements in the context of probabilistic inference to relate distributed computations to algorithms. Finally, we will apply these new algorithms to real-world computer vision problems. See ninai.org for more details.
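
As a toy illustration of this probabilistic-inference framing (a minimal sketch only, not the project's actual models), the example below infers a latent stimulus orientation from Poisson spike counts of hypothetically tuned neurons by evaluating the posterior on a grid; the tuning curves and all parameter values are invented for demonstration.

```python
# Minimal sketch: Bayesian decoding of a stimulus from Poisson spike counts.
# Everything here (tuning curves, rates, grid) is a made-up illustration.
import numpy as np

rng = np.random.default_rng(0)
n_neurons = 30
preferred = np.linspace(0, np.pi, n_neurons, endpoint=False)  # hypothetical preferred orientations

def tuning(theta):
    """Mean firing rates of the population for stimulus orientation theta."""
    return 5.0 * np.exp(2.0 * (np.cos(2.0 * (theta - preferred)) - 1.0)) + 0.5

true_theta = 1.0
counts = rng.poisson(tuning(true_theta))  # one trial of observed spike counts

# Posterior over a grid of candidate orientations, assuming a flat prior and
# independent Poisson spiking (Bayes' rule up to a normalizing constant).
grid = np.linspace(0, np.pi, 360, endpoint=False)
log_like = np.array([np.sum(counts * np.log(tuning(t)) - tuning(t)) for t in grid])
posterior = np.exp(log_like - log_like.max())
posterior /= posterior.sum()
print("true orientation:", true_theta, "MAP estimate:", grid[np.argmax(posterior)])
```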

Collaborators: Andreas Tolias (PI), Xaq Pitkow (co-PI). Key personnel: Ankit Patel, Chris Xu, Raquel Urtasun, Rich Zemel, Matthias Bethge, Liam Paninski, Clay Reid, Sebastian Seung. Funding: IARPA (MICrONS project). Lab members: Rajkumar Raju, KiJung Yoon, Emin Orhan.


Inferring interactions between neurons, stimuli, and behavior

We are developing statistical tools to infer how large populations of neurons interact with each other and with the external world, using the large-scale data enabled by emerging neuroscience technologies. This is a large collaboration between statisticians, mathematicians, computational neuroscientists, machine learning practitioners, and experimental neuroscientists.
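
As a rough sketch of what "inferring interactions" can mean in practice (illustrative only, not the consortium's actual methods), the example below estimates a sparse functional-coupling graph from simulated population activity using the graphical lasso; the coupling pattern, sample size, and regularization strength are all invented.

```python
# Minimal sketch: recover a sparse neuron-neuron interaction graph from
# simulated activity with an L1-regularized Gaussian graphical model.
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(0)
n_neurons, n_samples = 20, 2000

# Build a ground-truth precision matrix with a few hypothetical couplings.
true_precision = np.eye(n_neurons)
for i, j in [(0, 1), (2, 5), (7, 8), (10, 15)]:
    true_precision[i, j] = true_precision[j, i] = 0.4
cov = np.linalg.inv(true_precision)
activity = rng.multivariate_normal(np.zeros(n_neurons), cov, size=n_samples)

# Fit the graphical lasso; nonzero off-diagonal entries of the estimated
# precision matrix are the inferred interactions.
model = GraphicalLasso(alpha=0.05).fit(activity)
interactions = np.abs(model.precision_) > 1e-3
np.fill_diagonal(interactions, False)
print("inferred interaction pairs:\n", np.argwhere(np.triu(interactions)))
```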

PIs: Krešimir Josić (contact), Genevera Allen, Ankit Patel, Xaq Pitkow, Robert Rosenbaum, Andreas Tolias. Funding: NSF NeuroNex. Lab members: Aram Giahi, Yicheng Fei.


Distributed computations for complex tasks

Neuroscience has traditionally used simple, largely static tasks to peer inside the brain. Here, with our collaborators, we will ask animals to perform dynamic, complex, naturalistic tasks, in both virtual reality and enriched physical spaces. By including multiple nuisance variables, we will challenge the brain to untangle the representations of task-relevant variables in ways that simpler, highly controlled tasks do not. We will use this richness to drive and then model population responses, to identify the distributed nonlinear processing that dynamically couples neural activity patterns between brain areas and thereby generates behavior.

Collaborators: Dora Angelaki, Valentin Dragoi, Paul Schrater. Funding: Simons Foundation, BRAIN Initiative (NSF and NIH). Lab members: Zhengwei Wu, Saurabh Daptardar, Kaushik Lakshminarasimhan, Baptiste Caziot.


Dynamic network structure in human language

Our collaborators record from inside the skulls of human epilepsy patients using electrocorticography (ECoG). While the patients are monitored to localize the source of their abnormal electrical activity, they are asked to name objects from pictures. We use these data to understand the process of word selection and production.

Collaborators: Nitin Tandon, Greg Hickok, Bob Knight. Funding: NIH. Lab members: Aram Giahi.


Nonlinear population codes

Many task-irrelevant variables affect how neurons are tuned to task-relevant stimuli. To extract the task-relevant variables, the brain must transform its responses nonlinearly. Does it do this well? We have a new test for optimal nonlinear decoding and a method to extract the effective population decoding strategy.
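
The sketch below is a toy demonstration of this point (not the lab's actual test): a hidden sign-flipping nuisance variable erases all linear information about the stimulus, so a linear readout fails, while a readout built on quadratic features (pairwise products of responses) recovers it. The simulated tuning and noise levels are arbitrary.

```python
# Minimal sketch: linear vs. quadratic decoding under a sign-flip nuisance.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.metrics import r2_score

rng = np.random.default_rng(1)
n_trials, n_neurons = 6000, 8
s = rng.uniform(-1, 1, n_trials)            # task-relevant stimulus
nuis = rng.choice([-1.0, 1.0], n_trials)    # task-irrelevant sign-flip nuisance
w = rng.normal(size=n_neurons)              # stimulus tuning weights
v = rng.normal(size=n_neurons)              # nuisance-only tuning component

# The nuisance flips the sign of each trial's mean response, so averaging over
# it leaves no linear information about s in the raw responses.
responses = nuis[:, None] * (np.outer(s, w) + v) \
            + 0.3 * rng.normal(size=(n_trials, n_neurons))
train, test = slice(0, 4000), slice(4000, None)

# Linear readout: near-chance performance.
lin = Ridge(alpha=1.0).fit(responses[train], s[train])
print("linear    R^2:", r2_score(s[test], lin.predict(responses[test])))

# Quadratic readout: pairwise products cancel the sign flip and recover s.
def quad_features(r):
    i, j = np.triu_indices(r.shape[1])
    return r[:, i] * r[:, j]

quad = Ridge(alpha=1.0).fit(quad_features(responses[train]), s[train])
print("quadratic R^2:", r2_score(s[test], quad.predict(quad_features(responses[test]))))
```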

Lab members: Qianli Yang. Funding: McNair Foundation, NSF.


Interactive apps for scientific visualization

Many scientific concepts can be conveyed effectively by good interactive graphics. We are looking for a programmer-artist to develop interactive games, both for teaching computational neuroscience and for teaching concepts in color theory grounded in visual perception.

Collaborators: Luanne Stovall. Funding: National Science Foundation. Lab members: Elizabeth Borneman.