Hacking neural learning with deep learning: real-time perturbation of high-dimensional song features in the zebra finch

May 22, 2024
Elizabeth (Liz) A. O'Gorman, Ph.D.
Abstract
Male zebra finches learn to produce a single, highly stereotyped song and maintain it over the course of their adult lives using auditory feedback. This continuous production and evaluation of song is commonly conceptualized as a variant of actor-critic reinforcement learning, and it requires coordinating dozens of muscles with millisecond precision. Much of what we know of the underlying algorithms comes from studies of adult male zebra finches adapting to white-noise feedback triggered by high- or low-pitch variants of harmonic stack syllables, i.e., a single perturbation of a single syllable of a crystallized song. However, zebra finch song is spectrally and temporally rich, with numerous degrees of freedom, and the learning process must tackle this complexity. Thus, to probe the full range of this complexity, new methods are needed for adaptively intervening in the learning process. To this end, we developed a pipeline to quantify and selectively manipulate dimensions of variance within the high-dimensional song space in real time. Using the improv analysis platform, we acquired raw audio from adult males as they practiced in sound-isolated boxes, computed spectrograms from fixed-width segments, and encoded them with a pretrained variational autoencoder (VAE), an unsupervised representation learning method; the resulting latent representations were used to trigger delivery of stimuli to the bird. With asynchronous and parallelized processing, analysis runs in 7.396 ± 0.917 ms per 120 ms of song, allowing us to update embeddings as often as every 10 ms. Even accounting for network latencies, the lag between data acquisition and feedback to the bird is 15.164 ± 2.409 ms, well within behaviorally relevant timing.
As a result, this pipeline can be used to study adaptations of song in response to algorithmically guided perturbations, allowing us to test fundamental reinforcement learning hypotheses in a tractable high-dimensional system.
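The embedding-and-trigger loop the abstract describes can be sketched as follows. This is a minimal illustration, not the actual improv implementation: the sample rate, spectrogram parameters, latent dimensionality, trigger rule, and the stand-in random linear "encoder" (used here in place of the pretrained VAE) are all assumptions for demonstration only.

```python
import numpy as np

SR = 32000    # assumed audio sample rate (Hz); not specified in the abstract
SEG_MS = 120  # fixed-width analysis segment, as in the abstract
HOP_MS = 10   # embedding update interval, as in the abstract

def log_spectrogram(audio, n_fft=256, hop=64):
    """Magnitude log-spectrogram of a 1-D audio segment (hand-rolled STFT)."""
    window = np.hanning(n_fft)
    n_frames = 1 + (len(audio) - n_fft) // hop
    frames = np.stack([audio[i * hop:i * hop + n_fft] * window
                       for i in range(n_frames)])
    return np.log1p(np.abs(np.fft.rfft(frames, axis=1)))

class StubEncoder:
    """Stand-in for the pretrained VAE encoder: a fixed random linear map
    into a low-dimensional latent space (the real encoder is a trained
    neural network)."""
    def __init__(self, in_dim, latent_dim=32, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.standard_normal((in_dim, latent_dim)) / np.sqrt(in_dim)

    def encode(self, spec):
        return spec.ravel() @ self.W

def should_trigger(latent, target, radius):
    """Hypothetical trigger rule: deliver feedback when the segment's latent
    embedding falls within `radius` of a target region of latent space."""
    return np.linalg.norm(latent - target) < radius

# Simulated use: slide a 120 ms window over audio in 10 ms hops.
seg_len = SR * SEG_MS // 1000
hop_len = SR * HOP_MS // 1000
audio = np.random.default_rng(1).standard_normal(SR)  # 1 s of noise as stand-in song

first_spec = log_spectrogram(audio[:seg_len])
enc = StubEncoder(first_spec.size)
target = enc.encode(first_spec)  # hypothetical target latent

triggers = []
for start in range(0, len(audio) - seg_len + 1, hop_len):
    z = enc.encode(log_spectrogram(audio[start:start + seg_len]))
    triggers.append(should_trigger(z, target, radius=5.0))
```

In the real system each step runs asynchronously in parallel processes under improv, which is what keeps per-segment analysis well under the 10 ms update interval.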
Date
May 22, 2024 8:10 PM — 8:20 PM
Event
Location

Newry, ME


Authors
Elizabeth (Liz) A. O'Gorman, Ph.D.
Computational Neurobiologist
Elizabeth (Liz) A. O’Gorman is a computational neurobiologist with a Ph.D. in Neurobiology and a concurrent M.S. in Electrical and Computer Engineering, concentrating in Data Analytics and Machine Learning, from Duke University. Her research interests include developing and using state-of-the-art computational methods to discover fundamental algorithms and computations used by the brain to produce adaptive individual and social behaviors. Methods confine and complement scientific discoveries; to that end, her objective is to develop and use novel methods to advance science. Most recently, as a computationalist, she has developed methods for adaptive neuroscience experimentation, and as an experimentalist, she has used these methods to reinforce adaptive behaviors. Ultimately, she is interested in how environment and experience shape adaptive behaviors throughout the lifespan, in health and in disease or disorder, using computational and experimental approaches.