Feature Story | Mathematics and Computer Science

Modeling a Neural Network

Usually, Anne Warlaumont deals only with the rough computer counterpart to a brain — building “neural” networks that classify sounds from babies and emulate the way they learn to speak. But Warlaumont, a Department of Energy Computational Science Graduate Fellowship recipient, saw the real thing during her summer 2009 practicum at Argonne National Laboratory (ANL) near Chicago.

Warlaumont shadowed two students in University of Chicago (UC) researcher Wim van Drongelen’s lab and witnessed electrophysiology experiments on mouse and human brain tissue. As her practicum ended, she also watched brain surgeons operate on a girl who had suffered epileptic seizures since infancy. “I definitely never thought I would see something like that. It was a special bonus,” says Warlaumont, a doctoral student in the School of Audiology and Speech-Language Pathology at the University of Memphis (UM).

Warlaumont’s peek inside the skull was only appropriate. In her practicum, she designed a program that compared detailed and abstract computational models of an epileptic brain. Working with Mark Hereld, an experimental systems engineer in ANL’s Mathematics and Computer Science Division; Rick Stevens, ANL associate laboratory director for computing, environment and life sciences; and van Drongelen, Hyong Lee and Marc Benayoun at UC, she also compared the models with activity recorded from slices of mouse brain tissue.

Warlaumont says the project gave her a different perspective. “In my main research I build neural network models. Those models are even more abstract than the ones we worked with (at ANL). I was happy about the opportunity to work with a detailed model and see what I was missing.”

The models Warlaumont develops in her UM research “learn” to identify infant utterances — the vowels, squeals, growls, grunts, babbling, crying and laughing babies do as they test their speech abilities and learn to talk.

“There are theories of how infants develop the ability to vocalize and theories of how infants learn sound and why they produce some sounds before others,” she adds. “There is a small group of us interested in translating some of the theories into a way to more rigorously test those computationally.” The end results could include better speech analysis tools or even a model that perceives sounds — both from others and from itself — and learns to “speak” much as infants do.

“I have several different research threads, all related to infant vocalization research and understanding the computational or technological components involved,” Warlaumont says. “I see these as part of a very long-term research program.”

Warlaumont and Hereld hope her practicum project will help us better understand brain activity by improving neural models, which range from highly complex and computationally demanding to abstract and easy to run.

“Our project was comparing temporal signatures of neural network data produced by a couple of very different types of computational models. We wanted to compare them with each other and with a real system of brain cells,” Warlaumont says.

For example, the researchers wanted to know whether the advantages of a highly realistic model outweigh the demand it places on computer resources. “This project is helping us to understand how low can we go — how simple and therefore computationally fast we can make a model that will still deliver appropriate results,” Hereld says.

Warlaumont adds, “Another factor is your ability to understand what’s going on with a model. The more detailed a model is, the more like a real system it is, but it may be so complex it’s hard to understand.” On the other hand, simpler models aren’t as readily tweaked to match reality.

The researchers had their work cut out for them. “The problem is pretty difficult because it’s terra incognita,” Hereld says. Scientists typically have some intuition about the relevant processes behind a phenomenon, but epilepsy’s complex, variable nature resists prediction.

The researchers ran two models, one detailed and one abstract, then compared average cell membrane potential — a measure of the voltage difference between the interior and exterior of a cell. Neurons use electrical membrane potentials to transmit signals between different parts of the cell and to initiate communication across cells.

Van Drongelen designed the detailed model, which simulated 656 neurons of six different kinds. Each cell is modeled as a set of compartments corresponding to its parts and includes chemical channels that regulate spontaneous firing and transmission of nerve impulses between cells.

The researchers ran two versions: One with persistent sodium ion channels and one without. Persistent channels could be important to understanding network behavior, Hereld says.

The more abstract model, developed in 2003 by Eugene Izhikevich, then of the Neurosciences Institute in La Jolla, Calif., treats each neuron as a single compartment and randomly varies parameters to model different types. Neurons are networked more randomly and a simpler mathematical method models ion channels.
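
For readers curious what that simpler mathematical method looks like, the sketch below implements the published Izhikevich (2003) update equations for a small random network in Python. The equations and excitatory/inhibitory parameter ranges follow Izhikevich’s paper; the network size, connection weights, input noise and simulation length are illustrative assumptions, not the configuration used in the practicum study.

```python
import numpy as np

# Sketch of the Izhikevich (2003) single-compartment neuron model.
# Each neuron follows dv/dt = 0.04*v^2 + 5*v + 140 - u + I and
# du/dt = a*(b*v - u), with a reset to (c, u + d) when v reaches 30 mV.

N_exc, N_inh = 80, 20                      # assumed small network for illustration
N = N_exc + N_inh
rng = np.random.default_rng(0)

r = rng.random(N)                          # random parameter variation per neuron
a = np.r_[0.02 * np.ones(N_exc), 0.02 + 0.08 * r[N_exc:]]
b = np.r_[0.20 * np.ones(N_exc), 0.25 - 0.05 * r[N_exc:]]
c = np.r_[-65 + 15 * r[:N_exc] ** 2, -65.0 * np.ones(N_inh)]
d = np.r_[8 - 6 * r[:N_exc] ** 2, 2.0 * np.ones(N_inh)]

# Random all-to-all coupling: excitatory weights positive, inhibitory negative.
S = np.hstack([0.5 * rng.random((N, N_exc)), -rng.random((N, N_inh))])

v = -65.0 * np.ones(N)                     # membrane potential (mV)
u = b * v                                  # recovery variable
mean_potential = []                        # network-average trace, one value per ms

for t in range(1000):                      # simulate 1000 ms in 1 ms steps
    I = np.r_[5 * rng.standard_normal(N_exc), 2 * rng.standard_normal(N_inh)]
    fired = v >= 30.0                      # neurons that spiked this step
    v[fired] = c[fired]
    u[fired] += d[fired]
    I += S[:, fired].sum(axis=1)           # synaptic input from spiking neighbors
    for _ in range(2):                     # two 0.5 ms half-steps, as in the paper
        v += 0.5 * (0.04 * v ** 2 + 5 * v + 140 - u + I)
    u += a * (b * v - u)
    mean_potential.append(v.mean())        # averaged signal of the kind compared across models
```

The network-average trace collected at the end of each step is the kind of signal the researchers averaged and compared across models and against the mouse recordings.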

The researchers also ran two versions of this simpler model: one with instantaneous transmissions between neurons and one with a six-millisecond delay. They compared simulation results with data recorded from slices of mouse frontal lobe tissue that was excited to produce normal and seizure behavior.

The detailed simulations ran on Jazz, ANL’s recently replaced 350-node computing cluster. Each persistent sodium version took about 200 seconds to run. In contrast, each run of the abstract model’s instantaneous transmission version took about 10 seconds on a standard laptop.

For each model and for the mouse data, the researchers averaged neuron activity across all the cells. That was tricky: Both models generated data in physical units — microvolt waveforms — but they had different temporal resolutions and some unimportant differences. “Those disparities are one of the reasons we had to struggle to try and eliminate artifacts and find real differences,” Warlaumont says.
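
As a rough illustration of one such step, the snippet below resamples a network-average trace onto a common time base before comparison. The sampling rates and the use of simple linear interpolation are assumptions; the article does not describe the group’s exact alignment procedure.

```python
import numpy as np

def to_common_timebase(trace, fs_in, fs_out=1000.0):
    """Resample an average-potential trace to fs_out samples per second.

    Linear interpolation and a 1 kHz target rate are illustrative choices,
    not the study's actual settings.
    """
    t_in = np.arange(len(trace)) / fs_in             # original sample times (s)
    t_out = np.arange(0.0, t_in[-1], 1.0 / fs_out)   # common time grid
    return np.interp(t_out, t_in, trace)
```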

The researchers filtered simulation results to make them comparable to mouse brain traces, then extracted eight primary metrics from each time series. “We looked at things like, within the network, are all the neurons spiking? If there is a lot of heavy synchrony in the firing of neurons, it would end up leaving traces of big peaks and valleys in the network signal,” a sign of a possible seizure, Warlaumont says. “We also looked at the spectral character of those network voltage signals, the amount of power in different frequency bands.”
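
The article names synchrony-related peaks and band-limited spectral power among the metrics but does not list all eight, so the feature extractor below is a hedged sketch: the fourth-order low-pass filter, Welch spectral estimate, frequency bands and specific statistics are assumptions chosen to illustrate the idea.

```python
import numpy as np
from scipy import signal

def band_power(freqs, psd, lo, hi):
    """Integrate power spectral density between lo and hi Hz."""
    mask = (freqs >= lo) & (freqs < hi)
    return np.trapz(psd[mask], freqs[mask])

def extract_features(trace, fs=1000.0):
    """Illustrative feature set for a network-average voltage trace."""
    # Low-pass filter so model output resembles a recorded field potential
    # (assumed cutoff; the study's filter settings are not given).
    b, a = signal.butter(4, 100.0, btype="low", fs=fs)
    filtered = signal.filtfilt(b, a, trace)

    freqs, psd = signal.welch(filtered, fs=fs, nperseg=1024)
    return {
        "variance": filtered.var(),            # big peaks and valleys suggest synchrony
        "peak_amplitude": np.abs(filtered).max(),
        "delta_power": band_power(freqs, psd, 1, 4),
        "theta_power": band_power(freqs, psd, 4, 8),
        "beta_power": band_power(freqs, psd, 13, 30),
        "gamma_power": band_power(freqs, psd, 30, 80),
    }
```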

They ran a principal components analysis to reduce the dimensionality of the behavioral feature space. That let the researchers compare the range of behaviors a given model produced.
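
A minimal version of that step, assuming the metrics for each run are stacked into a matrix, might look like the following; the array shapes and the use of scikit-learn are assumptions for illustration.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Hypothetical feature matrix: one row per simulation run or mouse recording,
# one column per extracted metric (eight in the study).
rng = np.random.default_rng(1)
features = rng.random((40, 8))

# Standardize the metrics, then project onto the first two principal components.
scores = PCA(n_components=2).fit_transform(StandardScaler().fit_transform(features))

# Each row of `scores` places one run in a low-dimensional behavior space;
# overlapping clouds of model points and mouse points suggest similar ranges of behavior.
```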

Ultimately, the abstract models seemed to produce a range of behaviors about as broad, and nearly as similar to the mouse data, as the detailed ones. “You are not necessarily disadvantaged, from that perspective, if you use the less detailed version,” Warlaumont says. “Yet I wouldn’t want to make it sound like simple models are always better, because there are advantages to detailed models.”

As to what this work means for understanding and treating epilepsy, Warlaumont says computational models of brain disorders are imperfect, but still informative and can help guide research.

“Assuming you accept that computational modeling is valuable for those purposes, it’s a logical next step to ask how we are going to objectively compare these models,” she adds. “It’s not a big problem now because there are only a few, but if you see a future for this there will be more models. I think figuring out how to compare and evaluate those is important.”

The details of speech recognition and learning Warlaumont hopes to emulate in her UM dissertation research are too complex to capture with the models and computing technologies available today. Still, she hopes her work will help researchers better understand speech development.

One of Warlaumont’s models, built in collaboration with UM colleagues Eugene Buder, Robert Kozma and Rick Dale, is a neural network that recognizes protophones — early categories of infant vocalizations — with potential ramifications for speech analysis.

Currently infant speech research depends on assistants who spend days listening to recordings and manually coding each sound for its type and other properties. It’s time-consuming, expensive work, and the amount of data to be sifted is growing. Coders also must make subjective judgments about how to classify a protophone, leading to inevitable inconsistencies.

The neural networks Warlaumont and colleagues are developing and testing automatically classify infant vocalizations, helping researchers better understand how humans perceive them and creating data that’s more standardized.

Their model, reported in an April 2010 paper in the Journal of the Acoustical Society of America, first converts utterances into spectrograms — frequency, duration and intensity represented as 225 shaded pixels on a square. Those are sent to a type of neural network, called a self-organizing map (SOM), of 16 nodes mathematically arranged in a four-by-four grid with randomly weighted connections between each.
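
Since 225 pixels on a square implies a 15-by-15 grid, a hedged sketch of that preprocessing step might look like the code below; the spectrogram settings, decibel scaling and block-averaging are assumptions, as the paper’s exact procedure is not described here.

```python
import numpy as np
from scipy import signal

def utterance_to_grid(waveform, fs, size=15):
    """Convert an utterance waveform to a coarse spectrogram on a size x size grid.

    Only the 225-pixel (15 x 15) output shape comes from the article; the
    windowing, dB scaling and normalization below are illustrative choices.
    """
    freqs, times, spec = signal.spectrogram(waveform, fs=fs)
    spec = 10 * np.log10(spec + 1e-12)              # intensity in decibels

    # Average blocks of the full spectrogram down to a 15 x 15 image.
    f_idx = np.array_split(np.arange(len(freqs)), size)
    t_idx = np.array_split(np.arange(len(times)), size)
    grid = np.array([[spec[np.ix_(fi, ti)].mean() for ti in t_idx] for fi in f_idx])

    # Scale the shades to [0, 1] so every utterance is comparable.
    grid -= grid.min()
    return grid / (grid.max() + 1e-12)
```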

The SOM is “trained” by matching each randomly selected spectrogram with the node whose weights are most similar to it, then updating that node’s weights and its neighbors’ weights. It’s similar to how real brain cells develop connections based on the animal’s particular previous experiences, Warlaumont says.
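
In code, that training loop might look roughly like this. The 4-by-4 grid of 16 nodes and the 225-element inputs come from the article; the learning rate, Gaussian neighborhood function and decay schedule are assumptions.

```python
import numpy as np

# Minimal self-organizing-map training sketch: a 4 x 4 grid of nodes, each
# holding a 225-element weight vector (one weight per spectrogram pixel).

rng = np.random.default_rng(0)
grid = np.array([(i, j) for i in range(4) for j in range(4)])  # node coordinates
weights = rng.random((16, 225))                                # random initial weights

def train_som(spectrograms, epochs=20, lr0=0.5, radius0=2.0):
    for epoch in range(epochs):
        lr = lr0 * (1 - epoch / epochs)                # decaying learning rate
        radius = radius0 * (1 - epoch / epochs) + 0.5  # shrinking neighborhood
        for x in rng.permutation(spectrograms):        # present inputs in random order
            x = x.ravel()
            # Best-matching unit: the node whose weights are most similar to the input.
            bmu = np.argmin(np.linalg.norm(weights - x, axis=1))
            # Update the winner and its grid neighbors, nearer nodes more strongly.
            dist = np.linalg.norm(grid - grid[bmu], axis=1)
            influence = np.exp(-dist ** 2 / (2 * radius ** 2))
            weights += lr * influence[:, None] * (x - weights)
    return weights
```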

Each vocalization produces a pattern of SOM node activations, which is sent to a second layer, a neural network called a perceptron. The perceptron measures the relevance of the learned SOM features to various categories, classifies the protophone and determines which SOM nodes best distinguish one utterance from another.
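
A hedged sketch of that second layer appears below: SOM activations here are taken as negative distances between the input and each node’s weights, and the classifier is a plain delta-rule perceptron with sigmoid outputs. The published model’s activation function and training procedure may differ.

```python
import numpy as np

def som_activations(weights, spectrogram):
    """Activation of each of the 16 SOM nodes for one input (higher = closer match)."""
    return -np.linalg.norm(weights - spectrogram.ravel(), axis=1)

def train_perceptron(activations, labels, n_classes, epochs=100, lr=0.05):
    """Train a one-layer perceptron mapping SOM activations to protophone classes."""
    n_samples, n_nodes = activations.shape
    X = np.hstack([activations, np.ones((n_samples, 1))])   # append a bias input
    W = np.zeros((n_classes, n_nodes + 1))
    for _ in range(epochs):
        for x, y in zip(X, labels):
            out = 1.0 / (1.0 + np.exp(-W @ x))               # sigmoid output per class
            target = np.eye(n_classes)[y]
            W += lr * np.outer(target - out, x)              # delta-rule weight update
    return W

def classify(W, activation_vector):
    """Pick the class whose output unit responds most strongly."""
    x = np.append(activation_vector, 1.0)
    return int(np.argmax(W @ x))
```

After training, the magnitude of the weights linking each SOM node to each output class offers one way to see which nodes best distinguish utterance categories.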

After training, the perceptron can classify each protophone input by type and infant age and identity, based on SOM layer node activations. In tests, the model performed significantly better than chance. It guessed the correct protophone more than half the time, the infant’s age 42.8 percent of the time, and its identity 32.4 percent of the time. “It is understandably very hard to classify age and identity on the basis of a single second of vocalization. We might be able to average many vocalizations in a recording and get better performance,” Warlaumont says.

The model is a step forward, says D. Kimbrough Oller, Warlaumont’s doctoral advisor. “We’re testing its basic capabilities and developing the scripts and tools we need to go on to much more exact things.”

Warlaumont will have lots to contribute to the effort, Oller says. “She is going to have a very significant academic career in helping to establish not only new foundations in the theory of vocal development, but the application of the tools that she’s developing.”

ARTICLE SOURCE
https://www.krellinst.org