Talk abstracts





Matt Reynolds
Title
Communication and power for high data rate implanted systems
Abstract
Significant challenges remain in powering and communicating with implanted recording and stimulation systems, given the demand for ever-increasing channel count and recording/stimulation fidelity. In this talk I will present several technologies currently under development in our lab, including quadrature amplitude modulated (QAM) backscatter communication at data rates of up to 96 megabits per second, and multiplexing of power and data on implanted single-wire transmission lines. These technologies suggest scalable approaches to the communication and power needs of future implanted systems with hundreds of electrodes or more.


Philip Holmes
Title
A multi-area stochastic accumulator model for a visual search and decision task
Abstract
Using behavioral and electrophysiological data from two monkeys performing a covert visual search and decision task, we develop a leaky accumulator model for the dynamics of neural populations. The model represents inferior temporal cortex (ITC), anterior intraparietal area (AIP), motor cortex and six receptive fields in the lateral intraparietal area (LIP). Parameter values fitted to the data allow us to propose mechanisms that account for differences between the animals in terms of connection strengths among ITC, AIP and LIP, suggesting that they use different strategies to accomplish the task. More generally, our approach may be of interest in modeling cognitive tasks that involve multiple brain areas.
If time permits, I will describe how leaky accumulators can be derived from biophysically based spiking neuron models.
The talk will draw on joint work with Sam Feng, Mike Schwemmer and Jonathan Cohen, using data kindly provided by Jackie Gottlieb (Columbia University).
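As a toy illustration of the leaky-accumulator idea (not the fitted multi-area model described above), two noisy accumulators racing to a decision threshold can be simulated with an Euler-Maruyama scheme; all parameter values here are invented for illustration:

```python
import numpy as np

def race(drift_a=1.2, drift_b=1.0, leak=0.5, sigma=0.3,
         threshold=1.0, dt=1e-3, t_max=5.0, rng=None):
    """Simulate two leaky, noisy accumulators racing to a threshold.

    Each accumulator follows dx = (-leak*x + drift) dt + sigma dW,
    integrated by Euler-Maruyama. Returns (choice, reaction_time),
    where choice is the index (0 or 1) of the first unit to cross.
    """
    rng = rng or np.random.default_rng(0)
    x = np.zeros(2)
    drifts = np.array([drift_a, drift_b])
    n_steps = int(t_max / dt)
    for i in range(n_steps):
        x += (-leak * x + drifts) * dt + sigma * np.sqrt(dt) * rng.standard_normal(2)
        if (x >= threshold).any():
            return int(np.argmax(x)), (i + 1) * dt
    return None, t_max  # no decision within t_max

choice, rt = race()
```

With the drift exceeding the leak-limited asymptote needed to reach threshold, a decision is essentially always reached; the unit with the larger drift usually (but, because of noise, not always) wins.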


Katherine Steele
Title
Synergies and Simulation: Exploring Neuromuscular Control in Individuals with Neurological Disorders
Abstract
The human neuromuscular and musculoskeletal systems are complex, with many more actuators than degrees of freedom. This complexity gives us the ability to perform the many tasks of daily living, but also makes treatment after injury, such as in stroke or cerebral palsy, incredibly challenging. In this seminar, I will explore how new tools in musculoskeletal simulation and synergy analysis can be used to probe both unimpaired control and pathologic movement after brain injury.


Mark Frye
Title
Olfactory neuromodulation of motion vision behavior and circuitry in Drosophila
Abstract
It is well established that perception is largely multisensory, often served by modalities such as touch, vision, and hearing that map multimodal stimuli emanating from a common point in space, processed by brain tissue maps in spatial co-register. However, the neural interactions among modalities that share no spatial or temporal domain, yet are essential for robust perception within noisy environments, remain uncharacterized. These cross-modal sensory interactions are further modulated by factors reflecting the animal’s behavioral state. Drosophila makes its living navigating food odor plumes, yet in free flight requires strong visual feedback to localize an odor source. Odor increases the strength of gaze-stabilizing optomotor reflexes to keep the animal aligned within an invisible plume. We have recently used calcium imaging to characterize a motion-selective interneuron of the third optic ganglion in Drosophila that shows cross-modal enhancement of visual responses by paired odor. Presynaptic inputs to this neuron are required for behavioral odor tracking, but are not themselves the target of odor modulation. A class of widely innervating neurons releases the biogenic amine octopamine; these neurons show odor-evoked calcium activity and make synaptic contact with motion-detecting neurons. Genetically restoring synaptic function within the octopaminergic neurons of animals carrying a null mutation for all aminergic signaling is sufficient to restore odor tracking behavior. These results demonstrate the behavioral algorithms and mechanistic functions for rapid and specific neuromodulatory visual-olfactory integration in fruit flies, which may be representative of adaptive cross-modal interactions across taxa.


Hamish Meffin
Title
Retinal Implants: Patient Results and Future Directions
Abstract
Dr. Meffin will provide an overview of the retinal implant research program of Bionic Vision Australia, a national consortium of researchers working together to develop a bionic eye. Retinal implants aim to restore some level of functional vision to patients with degenerative retinal diseases, such as retinitis pigmentosa, through electrical stimulation of surviving neurons. This talk will be divided into three parts, and will include a summary of results from three patients implanted with a 24-channel prototype implant and preliminary results on the measurement and modeling of spatial patterns of activation in visual cortex in response to retinal stimulation in an animal model. The talk will also include a description of the new technologies underlying a next generation high-acuity device containing 256 channels. These technologies include the electrode array, which is made from synthetic diamond, and the stimulator microchip, with the flexibility to apply near-arbitrary patterns of stimulation.


Tom Richner
Title
Optogenetic & multiple unit investigation of micro-electrocorticography

Abstract
Recording potentials from the surface of the brain, electrocorticography (ECoG), is an important clinical approach and a developing method for implementing brain-computer interfaces (BCIs). Microfabricated ECoG (micro-ECoG) arrays are smaller and more flexible, making them well suited for long-term applications. We leveraged multi-unit recordings and optogenetics to gain insight into the spatial, temporal, and spectral properties of micro-ECoG recordings. In addition to investigating the micro-ECoG signal itself, these integrated approaches are potentially useful for developing brain-computer interfaces and researching epilepsy.



Jason Ko
Title
Targeted Muscle Reinnervation (TMR) As a Treatment for Neuromas: From Bedside to Bench and Back to Bedside

Abstract
Targeted Muscle Reinnervation (TMR) is a revolutionary strategy whereby amputated nerve endings are transferred to otherwise functionless target muscles to create new "myoneurosomes" that allow an amputee to control a bionic prosthesis in an intuitive fashion. Clinical and animal evidence has demonstrated that TMR is effective in preventing painful neuromas in patients who have undergone the procedure, which is the premise behind an upcoming Department of Defense (DoD)-funded multicenter prospective clinical trial that Dr. Ko will help lead at the University of Washington. In addition, there are exciting areas of future research focused on the sensory recovery that occurs after TMR. Dr. Ko is collaborating with members of the CSNE to perform sensory mapping studies in post-TMR amputees, in an effort to develop a novel bioprosthetic device that can provide real-time sensory feedback to amputees, which many consider the "holy grail" of prostheses.




Bill Bialek
Title
Physics problems in early embryonic development

Abstract
The living world presents many striking and beautiful phenomena. As physicists, we would like to understand these phenomena in the same way that we understand the inanimate world, but biological systems are terrifyingly complex. How do we cut through the complexity? Are the myriad phenomena that attract our interest examples of some more general ideas, or are we stuck with what Rutherford would have called stamp collecting? Can we imagine theories that have the power and generality that we have come to expect in theoretical physics, yet still engage with the details of experiments on particular biological systems?

A little over ten years ago, several colleagues and I started exploring early events in the development of the fruit fly embryo. We have discovered that phenomena in these first few hours of a fly's life are vastly more precise than previously imagined, perhaps so precise that they approach the limits of what physics allows. This approach to optimal performance then provides a path to developing theories for the "design" of the underlying mechanisms, and these theoretical principles have the chance of being more generally applicable. I'll give a review of this work, starting with basic facts and hopefully getting to the edge of our current understanding. Since this is an informal talk for students, I'll also try to provide some more general perspective on the intellectual landscape near the borders of physics and biology.


Bing Brunton
Title
Exploiting sparsity for discovery driven by large scale, dynamic neuronal networks

Abstract
Brains are remarkably complex networks of neurons, and many functions and dysfunctions cannot be localized to any particular part of a brain. Further, network activity changes in time at every spatial scale, from the molecular dynamics of single synapses, to coordinated oscillations across brain areas, to circadian rhythms. Advances in technology and infrastructure are delivering the capacity to record signals from brain cells in much greater numbers and at even faster speeds. Distilling spatiotemporally coherent patterns from large-scale, noisy measurements is vital to understanding how networks of neurons give rise to behavior, from merely 302 neurons in the nematode to the billions of neurons in human brains. Fortunately, regardless of the size of the system or dataset, relatively low-dimensional patterns often emerge in complex systems, and it is possible to find sparse representations of important behaviors.

My talk focuses on two approaches to harness this inherent sparsity. First, I describe a reverse engineering perspective to distill principles of how networks of neurons gather sensory inputs to measure coherent features in the physical world. Second, I describe an equation-free framework to model dynamics of large-scale recordings. This framework derives simple models built directly on the observable data, which complements biophysically motivated models built on governing equations. I believe such data-driven, equation-free approaches are essential to characterize network neural dynamics across multiple spatial and temporal scales.
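One concrete example of a data-driven, equation-free method in this spirit is dynamic mode decomposition (DMD), which fits a best-fit linear operator between successive snapshots of a recording and reads dynamics off its eigenvalues. A minimal sketch on synthetic two-frequency "multichannel" data (a generic illustration, not the talk's analysis):

```python
import numpy as np

# Synthetic multichannel "recording": 3 Hz and 7 Hz rhythms mixed across 10 channels
rng = np.random.default_rng(1)
dt = 0.01
t = np.arange(200) * dt
modes = np.vstack([np.cos(2 * np.pi * 3 * t), np.sin(2 * np.pi * 3 * t),
                   np.cos(2 * np.pi * 7 * t), np.sin(2 * np.pi * 7 * t)])
mixing = rng.standard_normal((10, 4))
data = mixing @ modes                        # channels x time snapshots

# Exact DMD: best-fit linear map X2 ~ A X1 between successive snapshots
X1, X2 = data[:, :-1], data[:, 1:]
U, s, Vh = np.linalg.svd(X1, full_matrices=False)
r = 4                                        # truncation rank: two complex pairs
U, s, Vh = U[:, :r], s[:r], Vh[:r]
A_tilde = U.conj().T @ X2 @ Vh.conj().T @ np.diag(1.0 / s)
eigvals = np.linalg.eigvals(A_tilde)

# Oscillation frequencies (Hz) recovered from the DMD eigenvalue angles
freqs = np.sort(np.abs(np.log(eigvals).imag) / (2 * np.pi * dt))
```

Because each cosine/sine pair evolves by an exact rotation between snapshots, the rank-4 fit recovers the two underlying frequencies (each as a complex-conjugate pair) despite the random channel mixing.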


Adrian KC Lee
Title
Towards incorporating user's intent in a next-generation hearing aid design

Abstract
Current hearing aid users find minimal benefit in their devices when conversing in a crowded environment, because all sounds are amplified irrespective of the user's focus of attention. We are currently working toward the creation of a next-generation hearing aid that will selectively amplify a signal of interest based on the user's intent. This requires a fundamental paradigm shift in instrumentation design, moving away from feed-forward amplification to systems that incorporate brain signals as feedback mechanisms. To accomplish this, we need to first understand the cortical network recruited for auditory attention. From a device perspective, our challenges lie in interpreting brain wave patterns in real-time to dynamically follow the user's attentional focus. In this talk, I will describe the neuroscience and the engineering effort that we are pursuing to make this next-generation hearing aid vision a reality.


Sanjeev Arora
Title
Overcoming intractability in unsupervised learning

Abstract
Unsupervised learning---learning with unlabeled data---is increasingly important given today’s data deluge. Most natural problems in this domain are NP-hard, e.g. learning mixture models, HMMs, graphical models, topic models, sparse coding/dictionary learning, and deep networks. Therefore researchers in practice use either heuristics or convex relaxations with no concrete approximation bounds. Several non-convex heuristics work well in practice, but why they do remains a mystery.

The talk will describe a sequence of recent results showing that rigorous approaches with polynomial running time are possible for several problems in unsupervised learning. The proofs of polynomial running time usually rely upon non-degeneracy assumptions on the data and the model parameters, and often also on stochastic properties of the data (average-case analysis). Some of these new algorithms are very efficient and practical, e.g. for topic modeling.




Yann LeCun
Title
Computer perception with deep learning

Abstract
The combined emergence of very large datasets, powerful parallel computers, and new machine learning methods has enabled the deployment of highly accurate computer perception systems, and is opening the door to a wide deployment of AI systems.
A key component in systems that can understand natural data is a module that turns the raw data into a suitable internal representation. But designing and building such a module, often called a feature extractor, requires a considerable amount of engineering effort and domain expertise.
The main objective of 'Deep Learning' is to come up with learning methods that can automatically produce good representations of data from labeled or unlabeled samples. Deep learning allows us to construct systems that are trained end to end, from raw inputs to ultimate output. In deep architectures, data is represented hierarchically: the representations in successive stages are increasingly global, abstract, and invariant to irrelevant transformations of the input.
The convolutional network model (ConvNet) is a particular type of deep architecture that is somewhat inspired by biology, and consists of multiple stages of filter banks interspersed with non-linear operations and spatial pooling. ConvNets have become the record holders for a wide variety of benchmarks, including object detection, localization, and recognition in images, semantic segmentation and labeling, acoustic modeling for speech recognition, drug design, handwriting recognition, biological image segmentation, etc.
The most recent speech recognition and image understanding systems deployed by Facebook, Google, IBM, Microsoft, Baidu, NEC and others use deep learning, and many use convolutional networks. Such systems use very large and very deep ConvNets with billions of connections, trained in supervised mode. But many new applications require the use of unsupervised feature learning. A number of such methods based on sparse auto-encoders will be presented.
Several applications will be shown through videos and live demos, including a category-level object recognition system that can be trained on the fly, a system that can label every pixel in an image with the category of the object it belongs to (scene parsing), a pedestrian detector, and object localization and detection systems that rank first on the ImageNet Large Scale Visual Recognition Challenge data. Specialized hardware architectures that run these systems in real time will also be described.
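A single ConvNet stage of the kind described above (filter bank, nonlinearity, spatial pooling) can be sketched in a few lines of NumPy; this is a toy illustration of the architecture, not any deployed system:

```python
import numpy as np

def conv2d_valid(image, kernel):
    """'Valid' 2-D correlation of a single-channel image with one kernel."""
    kh, kw = kernel.shape
    H, W = image.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def convnet_stage(image, filters, pool=2):
    """One stage: filter bank -> ReLU -> non-overlapping max pooling."""
    maps = []
    for k in filters:
        fmap = np.maximum(conv2d_valid(image, k), 0.0)   # ReLU nonlinearity
        H, W = fmap.shape
        H, W = H - H % pool, W - W % pool                # crop to pooling multiple
        fmap = fmap[:H, :W].reshape(H // pool, pool, W // pool, pool).max(axis=(1, 3))
        maps.append(fmap)
    return np.stack(maps)

rng = np.random.default_rng(0)
image = rng.standard_normal((8, 8))
filters = rng.standard_normal((4, 3, 3))   # a bank of four 3x3 filters
out = convnet_stage(image, filters)        # shape: (4 feature maps, 3, 3)
```

Stacking several such stages, with each stage's feature maps feeding the next filter bank, yields the hierarchical, increasingly abstract representations the abstract describes.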


Aaron Seitz
Title
Applying perceptual learning to produce broad-based benefits to vision

Abstract
Perception is the window through which we understand all information about our environment, and therefore deficits in perception due to disease, injury, stroke or aging can have significant negative impacts on individuals’ lives. Research in the field of perceptual learning has demonstrated that vision can be improved in both normally seeing and visually impaired individuals; however, a limitation of most perceptual learning approaches is their emphasis on simplicity. In the present research, we adopted an integrative approach whose goal is not to achieve highly specific learning but instead to achieve general improvements to vision. We combined multiple perceptual learning approaches that have individually contributed to increasing the speed, magnitude and generality of learning into a perceptual-learning-based video game. Our results demonstrate broad-based benefits to vision in healthy adult and visually impaired populations. We find improvements in near and far central vision, peripheral acuity, and contrast sensitivity, as well as real-world on-field benefits in baseball players. This custom video-game framework, built up from psychophysical approaches, takes advantage of the benefits of video-game training while maintaining a tight link to psychophysical designs that enable the study of perceptual learning mechanisms; it has great potential both as a scientific tool and as a therapy to improve vision.


William Shain
Title
What is needed to enable chronic, high-performance recordings with implantable neuroprosthetics?

Abstract

A long-term goal of neural engineering has been the development of implantable devices with long-lasting, high-fidelity recording performance. Devices typically have been designed to comply with the limitations of fabrication processes. The first, and still effective, devices were made with wires; beginning 40 years ago, fabrication technologies developed by the electronics industry were employed to make new generations of devices. Our research has focused on understanding the biological events associated with implanted devices. Thus our laboratory has attacked the device-tissue interaction both by developing new tools to measure changes in the biology and by using this information to develop new design criteria for devices that will enable long-term, high-fidelity electrode recording performance. I will discuss critical issues concerning cell-resolution imaging of brain samples in which devices are left in place, critical features of device design, and electrode performance evaluation of devices fabricated to test a wide design space.



Christian Pozzorini
Title
Adaptive coding in single neurons

Abstract
How do cortical neurons encode information into spike trains? To study this question, we developed a new spiking model using a novel fitting procedure that enables reliable nonparametric feature extraction from intracellular recordings. We applied this method to understand two specific aspects of single-neuron computation: i) spike-frequency adaptation on multiple timescales (Lundstrom et al., Nature Neuroscience 2008); ii) enhanced sensitivity to input fluctuations (Arsiero et al., Journal of Neuroscience 2007).

Our results indicate that: i) Adaptation on multiple timescales is implemented both by a spike-triggered adaptation current and by a spike-triggered movement of the firing threshold. These processes last for more than 20 seconds and decay according to a power law. We also found that power-law adaptation is optimally tuned to efficiently encode the "natural inputs" received by single neurons in behaving mice. ii) Enhanced sensitivity to input fluctuations results from a nonlinear coupling between firing threshold and membrane potential. Our model successfully predicts that, depending on the input statistics, pyramidal neurons behave as leaky integrators or as coincidence detectors (differentiators), as found in the data.
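The spike-triggered adaptation-current mechanism can be sketched with a leaky integrate-and-fire neuron in which every spike increments a decaying current that subtracts from the drive; the parameters below are invented for illustration (a single exponential timescale, not the fitted power-law kernels):

```python
import numpy as np

def lif_with_adaptation(I=2.0, dt=1e-4, t_max=2.0,
                        tau_m=0.02, tau_a=0.5, jump=0.3,
                        v_th=1.0, v_reset=0.0):
    """Leaky integrate-and-fire neuron with a spike-triggered adaptation
    current `a`: each spike adds `jump` to `a`, which decays with time
    constant tau_a and subtracts from the input drive I."""
    n = int(t_max / dt)
    v, a, spikes = 0.0, 0.0, []
    for i in range(n):
        v += dt / tau_m * (-v + I - a)   # membrane integration
        a += dt / tau_a * (-a)           # adaptation current decays
        if v >= v_th:
            v = v_reset
            a += jump                    # spike-triggered increment
            spikes.append(i * dt)
    return np.array(spikes)

spikes = lif_with_adaptation()
isis = np.diff(spikes)   # interspike intervals lengthen as `a` builds up
```

Under a constant input the model fires rapidly at onset and then slows as the adaptation current accumulates, the signature of spike-frequency adaptation.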

Russ Angold
Title
Exoskeletons: Turning Science Fiction into Reality

Abstract
Since 2005, Ekso Bionics has been pioneering the field of robotic exoskeletons to augment human strength, endurance and mobility. Russ Angold, Co-Founder and Chief Technology Officer, will highlight the technical, business, and clinical challenges in taking DARPA-funded, high-risk technology and transforming it into a product that has made an impact on more than 3000 users around the world. The talk will be followed by a live demonstration of the latest Ekso Bionics robotic exoskeleton, whose variable-assist features allow subjects to be active participants in their gait training and rehabilitation programs.


Christof Koch

Title
The Biology of Consciousness

Abstract
The interactions of myriad neuronal and sub-neuronal processes give rise not only to behavior but also to conscious experience. I will discuss the progress that has been achieved over the past several decades in characterizing the behavioral and the neuronal correlates of consciousness in human and non-human primates. I shall also discuss a theory of consciousness that explains in a principled manner which physical systems are capable of conscious, subjective experience. Tononi's Integrated Information Theory is the best candidate for such a theory. It assumes that any physical system that is informationally integrated (defined appropriately) will have conscious experiences whose content depends on the exact nature of the causal interactions of the underlying components (e.g. neurons). The theory explains many empirical facts about consciousness and its pathologies in humans. It can also be extrapolated to more difficult cases, such as fetuses, mice, or bees. The theory predicts that many, seemingly complex, systems are not conscious, in particular digital computers running software, even if these were to faithfully simulate the neuronal networks making up the human brain.


David Schoppik

Title
The Neural Basis of a Conserved Oculomotor Asymmetry

Abstract
While the composition of a neural circuit is thought to constrain the behaviors it can generate, the complexity of most vertebrate circuits makes it difficult to understand how form defines function. Here we study how a previously unknown anatomical asymmetry in the vestibular system of the larval zebrafish might explain conserved anisotropies in oculomotor behavior. We find that larval zebrafish stabilize their gaze better following nose-up (surfacing) pitch tilts than following nose-down (diving) tilts. We identify a set of ~100 vestibular neurons projecting preferentially to motoneurons that stabilize gaze during upward swims. Targeted lesions demonstrate the necessity of these neurons for gaze stabilization, swim bladder inflation and viability. Optical activation of this asymmetric population rotates the eye as if in response to an upward swim. These results indicate that the asymmetric projection of vestibular neurons to motoneurons constitutes the anatomical basis for the superior gaze stabilization following upward swims, suggest a neural basis for a behavioral asymmetry conserved in primates, and establish a new system to explore the relationship between circuit form and function.



Kathy Nagel
Title
Synaptic and circuit mechanisms promote broadband coding at depressing synapses
Abstract
Natural odors can have a broad range of temporal waveforms, from rapidly fluctuating plumes to slowly diffusing clouds. Second-order olfactory projection neurons in Drosophila can effectively encode a wide range of temporal dynamics in odor stimuli. However, the synapse between first and second order neurons exhibits strong synaptic depression, which should tend to filter out slow and prolonged stimulus modulations. Here we identify two mechanisms that allow this circuit to overcome the coding limits imposed by synaptic depression. First, the synapse exhibits two pharmacologically separable post-synaptic response components. While the fast component (previously characterized) depresses rapidly, a slow component depresses more slowly, allowing the synapse to transmit responses to slow and prolonged stimuli. Second, presynaptic inhibition sculpts the dynamical properties of the synapse. Inhibitory interneurons are tonically active and respond transiently to odor stimuli. Synapses from interneurons onto projection neurons are slow and facilitating, creating delayed feedback. These properties allow inhibition to truncate responses to brief stimuli, and to stabilize responses to prolonged stimuli. Using a simple computational model of synaptic properties and inhibitory dynamics, we show that these two mechanisms increase the range of frequencies that the circuit can encode. Related mechanisms may be at work in many sensory systems.
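The contrast between a strongly and a weakly depressing response component can be illustrated with a Tsodyks-Markram-style resource-depletion model of a synapse; the parameters below are invented for illustration, not fitted to the fly circuit:

```python
import numpy as np

def depressing_response(spike_times, tau_rec, use):
    """Resource-depletion synapse: each presynaptic spike releases a
    fraction `use` of the available resource x (giving the response
    amplitude), and x recovers toward 1 with time constant tau_rec.
    Returns the per-spike response amplitudes."""
    amplitudes = []
    x, last_t = 1.0, 0.0
    for t in spike_times:
        x = 1.0 - (1.0 - x) * np.exp(-(t - last_t) / tau_rec)  # recovery
        amplitudes.append(use * x)
        x -= use * x                                           # depletion
        last_t = t
    return np.array(amplitudes)

spikes = np.arange(0, 1.0, 0.05)  # a 20 Hz presynaptic train
strongly_depressing = depressing_response(spikes, tau_rec=2.0, use=0.7)
weakly_depressing = depressing_response(spikes, tau_rec=0.2, use=0.2)
```

The high-release, slowly recovering component collapses within a few spikes, while the low-release, quickly recovering component retains much of its amplitude throughout the train, so it can keep signaling a prolonged stimulus.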

Bard Ermentrout
Title
Heterogeneity & synchronization in the olfactory bulb

Abstract
Synchronous neural oscillations are found throughout the brain and are believed to contribute to information processing and coding. One mechanism of synchrony is through the driving of intrinsic neural oscillators with correlated noise. Here we use some recently developed theory in conjunction with experimental recordings in the mouse olfactory bulb to study the effects of heterogeneity on the ability of neurons to synchronize. We find, somewhat surprisingly, that in some circumstances heterogeneity will improve synchronization. We also find that differences in frequency between oscillators lead to a resonance with respect to the correlation time of the noise.
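The basic mechanism of noise-induced synchrony can be sketched with two uncoupled phase oscillators that share a common noisy drive; in this crude sketch, frequency heterogeneity simply degrades locking, whereas the talk's point is that under some circumstances it can instead help. All parameters here are invented for illustration:

```python
import numpy as np

def phase_locking(omega1, omega2, eps=1.0, dt=1e-3, t_max=50.0, seed=0):
    """Two *uncoupled* phase oscillators receiving the same white-noise
    drive through a phase-dependent sensitivity Z(theta) = sin(theta).
    Returns the phase-locking value |<exp(i(th1 - th2))>| over the
    second half of the run (first half discarded as transient)."""
    rng = np.random.default_rng(seed)
    n = int(t_max / dt)
    th = np.array([0.0, 2.0])              # start well apart
    omegas = np.array([omega1, omega2])
    diff = np.empty(n, dtype=complex)
    for i in range(n):
        xi = rng.standard_normal() * np.sqrt(dt)   # shared noise increment
        th = th + omegas * dt + eps * np.sin(th) * xi
        diff[i] = np.exp(1j * (th[0] - th[1]))
    return np.abs(diff[n // 2:].mean())

identical = phase_locking(1.0, 1.0)   # same frequency: common noise synchronizes
detuned = phase_locking(1.0, 2.0)     # heterogeneous frequencies: locking degrades
```

With identical frequencies, the common multiplicative noise contracts the phase difference (a negative Lyapunov exponent), so the locking value approaches 1; with strong detuning the phase difference drifts and the locking value is much smaller.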


Bard Ermentrout
Title
All the way with Gaston Floquet: A theory for flicker hallucinations

Abstract
When the human visual system is subjected to diffuse flickering light in the range of 5-25 Hz, many subjects report beautiful swirling, colorful geometric patterns. In the years since Jan Purkinje first described them, there have been many qualitative and quantitative analyses of the conditions in which they occur.
Here, we use a simple excitatory-inhibitory neural network to explain the dynamics of these fascinating patterns. We employ a combination of computational and mathematical methods to show why these patterns arise. We demonstrate that the geometric forms of the patterns are intimately tied to the frequency of the flickering stimulus. We combine a Turing-type stability analysis with Floquet stability theory to find parameter regimes where there are flicker-induced hallucinations. We close with some general comments on what symmetric bifurcation theory says about the patterns.
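Floquet stability of a periodically forced linear system is read off from its monodromy matrix. As a minimal sketch (using the classic Mathieu equation x'' + (a + b cos t) x = 0 as a stand-in, not the talk's neural-field model):

```python
import numpy as np

def monodromy(a, b, n_steps=4000):
    """Integrate x'' + (a + b*cos(t)) x = 0 over one forcing period
    T = 2*pi with RK4, for two independent initial conditions; the
    resulting fundamental matrix is the monodromy matrix M."""
    T = 2 * np.pi
    dt = T / n_steps
    def f(t, y):                      # state y = (x, x')
        return np.array([y[1], -(a + b * np.cos(t)) * y[0]])
    M = np.zeros((2, 2))
    for col, y0 in enumerate([np.array([1.0, 0.0]), np.array([0.0, 1.0])]):
        y, t = y0, 0.0
        for _ in range(n_steps):      # classical RK4 step
            k1 = f(t, y)
            k2 = f(t + dt / 2, y + dt / 2 * k1)
            k3 = f(t + dt / 2, y + dt / 2 * k2)
            k4 = f(t + dt, y + dt * k3)
            y = y + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
            t += dt
        M[:, col] = y
    return M

# Floquet multipliers are the eigenvalues of M; |mu| > 1 signals instability.
stable_mults = np.linalg.eigvals(monodromy(a=1.0, b=0.0))      # plain oscillator
resonant_mults = np.linalg.eigvals(monodromy(a=0.25, b=0.2))   # 2:1 parametric resonance
```

With no forcing (b = 0) the multipliers sit on the unit circle; at the 2:1 parametric resonance (natural frequency half the forcing frequency) a multiplier leaves the unit circle, the flicker-driven instability analogous to the pattern-forming regimes found in the talk.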


Stephen Boyd
Title
The Science of Better: Embedded Optimization in Smart Systems

Abstract
Many current products and systems employ sophisticated mathematical algorithms to automatically make complex decisions, or take action, in real-time. Examples include recommendation engines, search engines, spam filters, on-line advertising systems, fraud detection systems, automated trading engines, revenue management systems, supply chain systems, electricity generator scheduling, flight management systems, and advanced engine controls. I'll cover the basic ideas behind these and other applications, emphasizing the central role of mathematical optimization and the associated areas of machine learning and automatic control. The talk will focus on understanding the central issues that come up across many applications, such as the development or learning of mathematical models, the role of uncertainty, the idea of feedback or recourse, and computational complexity.


Stephen Boyd
Title
Convex Optimization: From Embedded Real-Time to Large-Scale Distributed

Abstract
Convex optimization has emerged as a useful tool for applications that include data analysis and model fitting, resource allocation, engineering design, network design and optimization, finance, and control and signal processing. After an overview, the talk will focus on two extremes: real-time embedded convex optimization, and distributed convex optimization. Code generation can be used to generate extremely efficient and reliable solvers for small problems that can execute in milliseconds or microseconds, and are ideal for embedding in real-time systems. At the other extreme, we describe methods for large-scale distributed optimization, which coordinate many solvers to solve enormous problems.
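As a toy instance of the kind of small convex problem an embedded solver might handle, here is nonnegative least squares solved by projected gradient descent; this is a generic first-order method for illustration, not the code-generation approach the talk describes:

```python
import numpy as np

def nnls_pg(A, b, iters=1000):
    """minimize ||Ax - b||^2 subject to x >= 0, by projected gradient.
    Step size 1/L uses the Lipschitz constant of the (half-)gradient."""
    L = np.linalg.norm(A, 2) ** 2          # squared spectral norm of A
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - b)           # (half of the) gradient
        x = np.maximum(x - grad / L, 0.0)  # gradient step, then project onto x >= 0
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
x_true = np.array([1.0, 0.0, 2.0, 0.0, 0.5])
b = A @ x_true                             # consistent data with nonnegative solution
x_hat = nnls_pg(A, b)
```

Each iteration is a fixed, branch-light sequence of matrix-vector products and a projection, which is exactly the kind of arithmetic that can be unrolled into tiny, reliable embedded code.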


Chris Dyer
Title
Translation into Morphologically Rich Languages with a Hierarchical Model

Abstract
Morphologically rich languages challenge the assumptions made in statistical models of translation. On one hand, the independence assumptions don't go far enough: opportunities to share statistical strength across related lexical items are missed (as a result, excessive amounts of data are required to reliably estimate model parameters). On the other hand, the independence assumptions go too far: decomposing translations into independent events fails to capture large-scale grammatical structure. While previous attempts to remedy this situation have been numerous, they tend to be highly language-dependent or fail from a modeling perspective: they improve performance on morphologically regular long-tail types at the expense of frequent --- but often idiosyncratic --- word types.

We present a solution to these problems that conceives of translation in terms of a hierarchical model. First, a sentence-specific translation model (i.e., a set of stochastic translation rules) is generated by predicting inflection paradigms of target words, given their source context. Second, this translation model is applied to translate the source sentence. Our approach can be understood as smoothing a surface-form translation model by backing off to a model of the translation process that applies at the morpheme level. We report significant improvements in translation quality when translating from English into Russian, Hebrew and Swahili. Finally, our approach relies on morphological analysis of the target language, but we show that an unsupervised Bayesian model can also be used in place of a standard supervised analyzer.


Brian Ziebart
Title
Beyond conditionals: structured prediction for interacting processes

Abstract
The principle of maximum entropy provides a powerful framework for estimating joint, conditional, and marginal probability distributions. Markov random fields and conditional random fields can be viewed as the maximum entropy approach in action. However, beyond joint and conditional distributions, there are many other important distributions with elements of interaction and feedback where its applicability has not been established. In this talk, I will present the principle of maximum causal entropy—an approach based on directed information theory for estimating an unknown process based on its interactions with a known process. I will discuss applications of this approach to assistive technologies and human-robot interaction.
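The classic (non-causal) maximum entropy construction that this generalizes can be shown in a few lines: among all distributions over the faces of a die with a fixed mean, the maxent one is exponential-family, found here by a simple bisection on the natural parameter. A toy sketch:

```python
import numpy as np

def maxent_die(target_mean, lam_lo=-10.0, lam_hi=10.0, iters=100):
    """Maximum-entropy distribution p(k) proportional to exp(lam * k)
    on faces k = 1..6, matching E[k] = target_mean, found by bisection
    on the natural parameter lam (the mean is increasing in lam)."""
    k = np.arange(1, 7)
    def mean(lam):
        w = np.exp(lam * k)
        p = w / w.sum()
        return p @ k
    for _ in range(iters):
        lam = 0.5 * (lam_lo + lam_hi)
        if mean(lam) < target_mean:
            lam_lo = lam
        else:
            lam_hi = lam
    w = np.exp(lam * k)
    return w / w.sum()

p = maxent_die(4.5)   # mean constraint 4.5 tilts mass toward the high faces
```

The resulting distribution matches the moment constraint while committing to nothing else; the principle of maximum *causal* entropy discussed in the talk extends this idea to processes that must condition only on causally available information.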


Yanping Huang
Title
General Examination: Learning Efficient Representations for Reinforcement Learning

Abstract
Markov decision processes (MDPs) are a well-studied framework for solving sequential decision-making problems under uncertainty. Exact methods for solving MDPs based on dynamic programming, such as policy iteration and value iteration, are effective on small problems. In problems with a large discrete state space or with continuous state spaces, a compact representation is essential for providing efficient approximate solutions to MDPs. Commonly used approximation algorithms involve constructing basis functions for projecting the value function onto a low-dimensional subspace, and building a factored or hierarchical graphical model to decompose the transition and reward functions. However, hand-coding a good compact representation for a given reinforcement learning (RL) task can be quite difficult and time consuming. Recent approaches have attempted to automatically discover efficient representations for RL.

In this thesis proposal, we discuss the problem of automatically constructing structured kernels for kernel-based RL, a popular approach to learning non-parametric approximations of the value function. We explore a space of kernel structures built compositionally from base kernels using a context-free grammar, and we examine a greedy algorithm for searching over this structure space. To demonstrate how the learned structure can represent and approximate the original RL problem in terms of compactness and efficiency, we plan to evaluate our method on a synthetic problem and compare it to other RL baselines.
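For reference, exact value iteration (one of the dynamic-programming methods mentioned above) on a toy two-state, two-action MDP; this is the textbook algorithm, illustrative only:

```python
import numpy as np

def value_iteration(P, R, gamma=0.9, tol=1e-8):
    """P[a][s, s'] are transition probabilities, R[a][s] expected rewards.
    Iterates the Bellman optimality backup until the value function
    converges; returns (optimal values, greedy policy)."""
    n_states = P[0].shape[0]
    V = np.zeros(n_states)
    while True:
        Q = np.array([R[a] + gamma * P[a] @ V for a in range(len(P))])
        V_new = Q.max(axis=0)
        if np.max(np.abs(V_new - V)) < tol:
            return V_new, Q.argmax(axis=0)
        V = V_new

# Two states, two actions: action 1 always moves to state 1 and pays 1
P = [np.array([[1.0, 0.0], [1.0, 0.0]]),   # action 0: go to state 0
     np.array([[0.0, 1.0], [0.0, 1.0]])]   # action 1: go to state 1
R = [np.array([0.0, 0.0]),                  # action 0 pays nothing
     np.array([1.0, 1.0])]                  # action 1 pays 1 from either state
V, policy = value_iteration(P, R)           # optimal: always take action 1
```

The tabular backup above is exactly what becomes infeasible in large or continuous state spaces, which is what motivates the compact kernel-based representations the proposal studies.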


Michael Fee
Title
Neural clocks and noisemakers: Mechanisms underlying the timing of complex learned behaviors

Abstract
How do brain circuits control the temporal structure of behavior? Songbirds provide a marvelous animal model to address this question. By recording from neural circuits, and manipulating them with temperature to observe the effect on song, we have been able to localize cortical premotor circuits that control song timing. One such circuit generates ‘random’ patterns of activity that drive the ‘babbling’ vocalizations of young birds. Another circuit generates a highly stereotyped sequence of bursts that controls the precisely-timed vocal gestures of adult song. Intracellular neuronal recordings during singing support the hypothesis that this ‘clock’ sequence results from a wave of activity propagating through a synaptically-connected chain of neurons. Recently, we have found that the extended neural sequence underlying adult song emerges from the successive differentiation of a simple rhythmic juvenile motor program. Altogether, we find that cortical circuits appear to generate a diversity of dynamics capable of supporting the temporal structure of a wide range of behaviors.
http://web.mit.edu/feelab/


Floris van Breugel
Title
How a Fly Finds Food: Complex behavior and perception in Drosophila emerges from iterative feedback-regulated reflexes

Abstract
With a brain of only 100,000 neurons, a fruit fly must rely on algorithms that simplify its behavioral control and sensory perception in order to solve complex tasks. In this talk, I will present results from hundreds of hours of flight data to examine how flies use olfactory and visual information to track odor plumes to their source, and ultimately land on it. The results indicate that the complex foraging behaviors we observe emerge from the iteration of a handful of distinct sensory-motor reflexes. Many of these reflexes require sensory information that is not directly available to the fly. Flies solve this problem by using feedback regulation of their motion to indirectly measure these unobservable quantities. My results provide the groundwork necessary to begin using genetic tools to dissect how the brain controls complex behaviors, as well as novel ideas for the engineering design of computationally limited robotic systems.


Eric Rombokas
Title
Multi-modal Human-Computer Interaction

Abstract
New sensory devices are enabling a fundamental shift in how we interact with technology, bringing interaction out of the screen and into the world through movements of the hands, face, and body. Cameras have long been explored as a means for this, and the recent revolution in depth-sensing cameras is making camera-based interaction better than ever. I will argue that we can go further still by exploiting other sensors simultaneously. We humans are naturally multi-modal, seamlessly blending our senses as we move, so it is natural to sense body movement not just visually but also through the action of our muscles. I am working on sensing muscle activity through electromyography (EMG) to complement and extend gesture computing. This information provides an ideal counterpoint to the limitations of imaging, offering a richer physical interaction and potentially new insights into human movement and sensation.

Bingni Brunton
Title
Sparse Decision Making: How to Classify Using Very Few Sensors

Abstract
I will speak about our recent work developing an algorithm that harnesses enhanced sparsity: the orders-of-magnitude reduction in the number of measurements required for signal classification relative to reconstruction. This sparse-sensors algorithm provides one answer to the question: given a fixed budget of sensors, where should they be placed to optimally inform decision making? I will argue that this perspective has applications to the development of sensory-motor neural technology, as well as to our understanding of how biological organisms process information about a complex environment.
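To illustrate the flavor of the sensor-budget question, here is a deliberately simplified sketch. The greedy mean-difference ranking below is a stand-in, not the talk's actual sparse-sensors algorithm, and all data and variable names are hypothetical:

```python
import numpy as np

def pick_sensors(X0, X1, n_sensors=3):
    """Rank measurement locations by how well they separate two classes.

    X0, X1: arrays of shape (n_samples, n_features), one row per signal.
    A normalized mean-difference score is a crude proxy for the
    optimization-based sensor placement described in the talk.
    """
    mu0, mu1 = X0.mean(axis=0), X1.mean(axis=0)
    spread = X0.std(axis=0) + X1.std(axis=0) + 1e-12
    w = np.abs(mu1 - mu0) / spread          # per-sensor discriminability
    return np.argsort(w)[::-1][:n_sensors]  # indices of the best sensors

def classify(x, sensors, mu0, mu1):
    """Nearest-class-mean decision using only the chosen sensors."""
    d0 = np.sum((x[sensors] - mu0[sensors]) ** 2)
    d1 = np.sum((x[sensors] - mu1[sensors]) ** 2)
    return int(d1 < d0)

rng = np.random.default_rng(1)
n, p = 200, 50
X0 = rng.normal(0, 1, (n, p))
X1 = rng.normal(0, 1, (n, p))
X1[:, [5, 17, 33]] += 2.0                  # only 3 of 50 sensors carry signal
sensors = pick_sensors(X0, X1)
print(sorted(sensors.tolist()))
```

The point of the sketch: classification only needs measurements that separate the classes, so far fewer sensors suffice than would be needed to reconstruct the full signal.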


Noah Simon
Title
On Estimating Many Effect-sizes, Selection Bias, and Local Adaptive Shrinkage

Abstract
There is growing scientific interest in simultaneously estimating the means (or effect-sizes) of many different features. Classical (but very unintuitive) results in Stein estimation show that even if these features are independent, cleverly coupling their mean estimates can greatly improve estimation accuracy. These ideas have been extended to build locally adaptive estimators based on perspectives from non-parametric empirical Bayes estimation (and compound decision theory). Unfortunately, these estimators are not simple, and are really only tractable in highly idealized scenarios.
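The classical result referred to here can be demonstrated in a few lines. This sketch uses the standard positive-part James-Stein estimator on simulated effect-sizes; it illustrates only the textbook phenomenon, not the speakers' new framework:

```python
import numpy as np

def james_stein(x, sigma=1.0):
    """Positive-part James-Stein estimator: shrink p independent observed
    means x_i ~ N(theta_i, sigma^2) toward zero. For p >= 3 this dominates
    the raw estimates in total squared error."""
    p = len(x)
    shrink = max(0.0, 1.0 - (p - 2) * sigma**2 / np.sum(x**2))
    return shrink * x

rng = np.random.default_rng(2)
theta = rng.normal(0, 0.5, 100)            # 100 true effect-sizes
x = theta + rng.normal(0, 1.0, 100)        # one noisy observation each
mse_raw = np.mean((x - theta) ** 2)
mse_js = np.mean((james_stein(x) - theta) ** 2)
print(mse_js < mse_raw)  # coupled shrinkage beats per-feature estimates
```

Even though the features are independent, the shared shrinkage factor, learned from all the data at once, reduces total error, which is exactly the unintuitive coupling the paragraph above describes.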

In recent work, we introduce a different framework for this estimation problem. Our framework is intuitive and shows how estimates from "independent features" become coupled through selection bias. We also give a simple general estimator based on resampling which is applicable and performs well in a wide variety of scenarios.

In this talk I will discuss the intuition behind "coupling estimates from independent features." I will review empirical Bayes/Stein estimation, introduce our framework, explain the connections between these estimators, and show how they compare in practice.

This presentation is intended for a general machine learning audience.

This is joint work with Richard Simon (NIH) and recently Kean Ming Tan and Daniela Witten (UW) with occasional skepticism from Brad Efron (Stanford).


Liang Huang
Title
Linear-time Algorithms in Natural Language Understanding and Learning

Abstract
Why are computers so bad at understanding natural language, and why are human beings so much better at it? Can we build a model that simulates human language processing, so that computers can process human language the same way we humans do, i.e., fast, incrementally (left-to-right), and accurately?
In this talk I'll present a linear-time dynamic programming model for incremental parsing inspired by human sentence processing (from psycholinguistics) as well as compiler theory (LR parsing). This model, being linear-time, is much faster than, but also as accurate as, the dominant cubic-time algorithms. It overcomes the ambiguity explosion problem by approximate dynamic programming, which corresponds to local ambiguity packing in psycholinguistics.
But how do we efficiently learn such a parsing model with approximate inference from huge amounts of data? We propose a general structured machine learning framework, based on the structured perceptron, that is guaranteed to succeed with inexact search and works well in practice. Our new learning algorithm can train a large-scale, state-of-the-art parsing model with dramatically reduced training time, and thus has the potential to scale up to the whole Web. More importantly, our learning algorithms are widely applicable to other structured domains such as bioinformatics.
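For readers unfamiliar with the structured perceptron, here is a minimal sketch of the vanilla algorithm with greedy (beam-width-1) decoding, i.e., learning with inexact search. The tiny tagging task and feature set are illustrative assumptions, not the talk's actual parsing model:

```python
from collections import defaultdict

def decode(words, weights, tags):
    """Greedy left-to-right decoding (beam size 1): the kind of inexact
    search that the learning guarantees must tolerate."""
    out, prev = [], "<s>"
    for w in words:
        best = max(tags, key=lambda t: weights[(w, t)] + weights[(prev, t)])
        out.append(best)
        prev = best
    return out

def train(data, tags, epochs=5):
    """Structured perceptron: when the decoded structure differs from the
    gold one, update toward gold features and away from predicted ones."""
    weights = defaultdict(float)
    for _ in range(epochs):
        for words, gold in data:
            pred = decode(words, weights, tags)
            if pred != gold:
                prev_g = prev_p = "<s>"
                for w, g, p in zip(words, gold, pred):
                    weights[(w, g)] += 1
                    weights[(prev_g, g)] += 1
                    weights[(w, p)] -= 1
                    weights[(prev_p, p)] -= 1
                    prev_g, prev_p = g, p
    return weights

# Hypothetical toy tagging data: determiner/noun/verb sequences.
data = [("the dog barks".split(), ["DT", "NN", "VB"]),
        ("a cat sleeps".split(), ["DT", "NN", "VB"]),
        ("the cat barks".split(), ["DT", "NN", "VB"])]
w = train(data, ["DT", "NN", "VB"])
print(decode("a dog sleeps".split(), w, ["DT", "NN", "VB"]))  # → ['DT', 'NN', 'VB']
```

The subtlety the talk addresses is that with inexact search, naive full-sequence updates like this one can lose the perceptron's convergence guarantee; violation-sensitive update strategies restore it.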

Jason Kerr
Title
Rats! What are they looking at? Imaging activity in the freely moving animal

Emo Todorov
Title
Synthesis of complex motor behaviors with optimal control

Abstract
Designing control systems for complex motor tasks remains challenging even after decades of research. Behavioral evidence suggests that the brain solves such problems in a way that resembles optimal control. While the theory of optimal control is well developed, we lack the algorithms to translate it into practice and achieve brain-like functionality in synthetic systems. Here I will describe recent progress that brings us closer to this goal. Instead of trying to design a control system that knows what to do in each situation, we rely on numerical optimization to invent complex movements in real-time. These movements always start at the present state regardless of how we got to that state; thus the algorithm makes no distinction between normal and perturbed movement. Furthermore the task objectives can be modified at any time, and the resulting changes are seamlessly integrated in the ongoing behavior. This approach has many potential applications including interactive games, robotic and prosthetic control, as well as new models of the neural control of movement.
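The re-planning idea described above can be sketched with a toy receding-horizon loop: optimize a control sequence from the current state, apply only the first control, and repeat, so a perturbation is absorbed by the next plan exactly like normal movement. The 1-D point mass, quadratic cost, and finite-difference optimizer below are all illustrative assumptions, far simpler than the real-time solvers used in the actual work:

```python
import numpy as np

DT, H = 0.1, 20  # time step and planning horizon

def rollout(x0, u):
    """Simulate a 1-D point mass (position, velocity) under controls u."""
    pos, vel = x0
    for uk in u:
        vel += DT * uk
        pos += DT * vel
    return pos, vel

def cost(x0, u, target):
    pos, vel = rollout(x0, u)
    return 0.01 * np.sum(u ** 2) + 10.0 * (pos - target) ** 2 + vel ** 2

def plan(x0, target, iters=200, lr=0.1, eps=1e-4):
    """Trajectory optimization by finite-difference gradient descent."""
    u = np.zeros(H)
    for _ in range(iters):
        base = cost(x0, u, target)
        grad = np.array([(cost(x0, u + eps * np.eye(H)[k], target) - base) / eps
                         for k in range(H)])
        u -= lr * grad
    return u

# Receding horizon: re-plan from the *current* state at every step.
x, target = (0.0, 0.0), 1.0
for t in range(5):
    u = plan(x, target)
    x = rollout(x, u[:1])       # apply only the first planned control
    if t == 2:
        x = (x[0] - 0.2, x[1])  # external perturbation; the next plan absorbs it
pos, vel = rollout(x, plan(x, target))
print(round(pos, 2))            # ends near the target despite the perturbation
```

Because every plan starts from the measured state, the loop never needs a separate error-correction mechanism, which mirrors the "no distinction between normal and perturbed movement" point above.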


Joel Zylberberg
Title

Computational Neuroscience: from the top-down, the bottom-up, and everything in between

Abstract

A deep understanding of how brains work will have myriad implications for the community: we will be able to build smart machines that match or exceed our impressive cognitive abilities, we will be better able to diagnose and treat malfunctions of the brain, and, in the ultimate coup for introspection, we will understand the biological, chemical, and physical processes that govern our thoughts. At the same time, human brains are enormously complicated, with on the order of 10-100 billion neurons, and even larger numbers of non-neuronal cells (like glia), interacting to form the systems and sub-systems responsible for their function. By virtue of the strong inter-connectedness of their components -- neurons receive, on average, on the order of 1,000-10,000 synaptic inputs from other neurons -- and their heterogeneity, brains pose a challenge to our standard reductionist approaches. So how do we make progress on this important, yet mind-bogglingly hard, problem?


In my talk, I will outline three complementary ways that computational methods are helping us make progress. First, there are top-down approaches, where we guess at the computations being performed by particular neural systems, and then search for experimental evidence to support or refute our guesses. In the realm of sensory neuroscience, many of these hypotheses take the form of unsupervised learning algorithms, making an interesting connection with the engineering and computer science literature. Second, there are bottom-up approaches, where we hunt for structure in experimental data in the hope that the governing principles will reveal themselves. Finally, there are intermediate approaches that take experimentally developed mechanistic models and extrapolate to understand their implications for larger systems.


In my talk, I'll illustrate each of these three approaches with examples from my own work, and that of my colleagues, on the visual cortex, hippocampus, and retina. Importantly, I will not assume any special background knowledge on the part of the listener.