Archive for the ‘Neural network models’ Category

Combinatorial Structures in Language and Visual Cognition

Wednesday, March 22nd, 2006

What gives humans the unique ability to construct novel sentences from the building blocks of language? A recent article in Behavioral and Brain Sciences proposes that a “neural blackboard architecture” is capable of just this.

From the article (doi: 10.1017/S0140525X06009022):

“This paper aims to show that neural “blackboard” architectures can provide an adequate theoretical basis for a neural instantiation of combinatorial cognitive structures. [...] We also discuss the similarities between the neural blackboard architecture of sentence structure and neural blackboard architectures of combinatorial structures in visual cognition and visual working memory [...]”

As with all main articles in Behavioral and Brain Sciences, this one is followed by extensive comment and criticism from colleagues, and finally a reply by the authors. This provides a very deep look at the article and the issues surrounding it.

An older, freely available version of the article can be found here.

Oscillation mini-reviews in JNeurosci

Friday, February 17th, 2006

A recent issue of J Neurosci has a series of mini-reviews on how oscillations play a role in network computations. Two of the reviews are by Sejnowski and collaborators. I haven’t read them yet but I thought I’d post a link here.

Blue Brain Project News

Tuesday, January 31st, 2006

Henry Markram, the director of the IBM-sponsored Blue Brain Project, has written an article in the latest issue of Nature Reviews Neuroscience that provides the most technical detail about the project to date.

From the article:

“The three-dimensional neurons are then imported into BlueBuilder, a circuit builder that loads neurons into their layers according to a ‘recipe’ of neuron numbers and proportions. A collision detection algorithm is run to determine the structural positioning of all axo-dendritic touches, and neurons are jittered and spun until the structural touches match experimentally derived statistics. [...] Probabilities of connectivity between different types of neuron are used to determine which neurons are connected, and all axo-dendritic touches are converted into synaptic connections. The manner in which the axons map onto the dendrites between specific anatomical classes and the distribution of synapses received by a class of neurons are used to verify and fine-tune the biological accuracy of the synaptic mapping between neurons. It is therefore possible to place 10–50 million synapses in accurate three-dimensional space, distributed on the detailed three-dimensional morphology of each neuron.”
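The quoted pipeline is easier to grasp in miniature. Here is a toy, runnable Python sketch of the same sequence of steps (place cells, detect axo-dendritic “touches” by collision detection, convert touches to synapses probabilistically). The point clouds, touch radius, and single connection probability are all invented stand-ins, not BlueBuilder’s actual code or API:

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy stand-in for the pipeline described above -- NOT BlueBuilder.
    # Each "neuron" is a random 3D point cloud standing in for its axonal
    # and dendritic arbors; all numbers below are invented for illustration.
    n_cells, pts_per_cell = 20, 50
    axons = rng.uniform(0, 100, size=(n_cells, pts_per_cell, 3))
    dendrites = rng.uniform(0, 100, size=(n_cells, pts_per_cell, 3))

    touch_radius = 5.0   # "collision": axon point within 5 um of a dendrite point
    p_connect = 0.3      # stand-in for a type-dependent connection probability

    synapses = []
    for i in range(n_cells):            # presynaptic cell
        for j in range(n_cells):        # postsynaptic cell
            if i == j:
                continue
            # collision detection: all pairwise axon-dendrite distances
            d = np.linalg.norm(axons[i][:, None, :] - dendrites[j][None, :, :],
                               axis=-1)
            touches = np.argwhere(d < touch_radius)
            # accept or reject the connection probabilistically, then
            # convert its structural touches into synapse locations
            if len(touches) > 0 and rng.random() < p_connect:
                synapses.extend((i, j, a, b) for a, b in touches)
    print(len(synapses), "synapses placed")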

–Stephen

Polychronization: Computation With Spikes

Thursday, January 5th, 2006

News from the future:

Eugene M. Izhikevich. Polychronization: Computation with Spikes. Neural Computation, Vol. 18, No. 2. (February 2006), pp. 245-282.

This readable, highly recommended paper is about a concept that Izhikevich calls “polychronization”. When a bunch of neurons tend to fire at the same time, you say that group is “synchronized”. But what do you call it when a group of neurons tends to participate in a consistent spatiotemporal firing pattern? Izhikevich would say that such a group of neurons is “polychronized”. Note that synchronization is just a special case of polychronization.

For example, maybe you notice that you often observe the sequence {neuron A fires; then 10 ms later, neuron B fires; then 14 ms later, neuron C fires} in a network. These neurons are not synchronized (they are firing at different times), and the start of the pattern may or may not be time-locked to anything else (stimulus onset, gamma rhythm, etc.), but what is important is that they fire with a consistent pattern of timing relative to one another.

(One complication is that the pattern may not repeat exactly; in the above example, suppose neuron B fails to fire 20% of the time. Just as we say that 3 neurons are mostly synchronized even if one of them occasionally fails to fire with the others, we will say that a group is polychronized even if the pattern is inexact [1].)
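(To make the concept concrete, here is one way you might represent a polychronous pattern in code, as (neuron, lag) pairs relative to the first spike, with synchrony as the all-zero-lag special case. This notation is mine, not Izhikevich’s:)

    # A polychronous pattern as (neuron, lag-in-ms) pairs relative to the
    # first spike; synchrony is the special case where every lag is zero.
    pattern = [('A', 0), ('B', 10), ('C', 24)]    # the example from the text
    synchrony = [('A', 0), ('B', 0), ('C', 0)]

    def occurs_at(pattern, spikes, t0, jitter=1.0):
        """True if the pattern occurs starting at time t0 (ms) in a list of
        (neuron, time) spikes, allowing +/- jitter ms per spike."""
        return all(any(cell == n and abs(t - (t0 + lag)) <= jitter
                       for cell, t in spikes)
                   for n, lag in pattern)

    spikes = [('A', 100.0), ('B', 110.3), ('C', 123.8)]
    print(occurs_at(pattern, spikes, 100.0))   # True: an inexact match counts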

The paper proposes a mechanism that could cause these patterns, and also a mechanism by which other neurons can detect them. Both mechanisms are based on axonal conduction delay. As a warm-up, consider how we could get synchronization. Suppose that neurons A, B, and C all synapse onto both neuron D and neuron E, and that the axonal conduction delay is constant (say, 5 ms). Now suppose that neurons A, B, and C all fire at the same time. If the synaptic weights are sufficiently strong, this will cause D and E to fire at the same time. So we have synchronization between D and E.

Next, suppose that the delays between A, B, C and neuron E are longer than those between A, B, C and neuron D: it still takes 5 ms for a spike to get from A, B, or C to D, but now it takes 10 ms for a spike to get from A, B, or C to E. Now if A, B, and C fire at the same time, it causes a pattern: first D fires, then E fires 5 ms later. This is how axonal conduction delays can cause these patterns.

Now, detection. In the previous two examples, D and E acted as synchrony detectors; they “detected” when A, B, and C fired together. Now what if we add 3 ms to the time it takes for a spike to get from neuron C to either D or E? Here are all our axonal conduction delays now (in ms):

A->D: 5
A->E: 10
B->D: 5
B->E: 10
C->D: 8
C->E: 13

Now how should A, B, and C fire in order to excite D and E? Well, C needs to fire 3 ms before A and B if we want D and E each to receive three coincident spikes. So now D and E are detecting not synchrony but polychrony: they are detecting the pattern {C fires; then 3 ms later, A and B fire}.

Furthermore, D and E’s response to their pattern is not a synchronized burst but a different firing pattern: {D fires; then 5 ms later, E fires}.
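(A quick sanity check of the arithmetic in this example, computing when spikes arrive at D and E given the delay table above and the proposed firing times:)

    # Sanity check of the worked example: spike arrival times at D and E.
    delays = {('A', 'D'): 5, ('A', 'E'): 10,    # all in ms
              ('B', 'D'): 5, ('B', 'E'): 10,
              ('C', 'D'): 8, ('C', 'E'): 13}

    fire_times = {'C': 0, 'A': 3, 'B': 3}       # C leads A and B by 3 ms

    for target in ('D', 'E'):
        arrivals = sorted(fire_times[src] + delays[(src, target)]
                          for src in fire_times)
        print(target, arrivals)
    # D receives [8, 8, 8] and E receives [13, 13, 13]: each gets three
    # coincident spikes, so D fires at ~8 ms and E ~5 ms later.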

The first contribution of this paper is in providing us with a language to describe this sort of computational system. In the above example, we might notice the following pattern recurring in network activity: {C fires; then 3 ms later, A and B fire; then 5 ms later, D fires; then 5 ms later, E fires}. The neurons involved in this pattern are termed a polychronous group [1]. Note that a single neuron might participate in multiple polychronous groups; for instance, if the above example is part of a larger network, perhaps there is another firing pattern involving neurons F, G, and C.

The second contribution of this paper is a simple 1000-neuron model that exhibits this sort of behavior [2]. The model itself is one page of Matlab code, and is based on the spiking neuron model in (Izhikevich, 2003). The model uses STDP to update synaptic weights as it runs, and STDP is key to the formation of the polychronous groups [3].
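The underlying single-neuron model is simple enough to fit in a few lines. Here is a minimal Python port of the Izhikevich (2003) neuron with the standard “regular spiking” parameters; this is just the isolated neuron, without the synapses, conduction delays, and STDP of the full network model:

    # Izhikevich (2003) simple spiking neuron, "regular spiking" parameters.
    a, b, c, d = 0.02, 0.2, -65.0, 8.0
    dt = 0.5                         # ms, Euler time step

    v, u = c, b * c                  # membrane potential (mV), recovery variable
    spike_times = []
    for step in range(int(1000 / dt)):              # simulate 1000 ms
        I = 10.0                                    # constant injected current
        v += dt * (0.04 * v ** 2 + 5 * v + 140 - u + I)
        u += dt * a * (b * v - u)
        if v >= 30:                                 # spike cutoff reached
            spike_times.append(step * dt)
            v, u = c, u + d                         # reset after the spike
    print(spike_times[:5])           # regular, repetitive firing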

The third contribution is the analysis of this model. The model displays different network states including slow and fast (“delta” and “gamma”) rhythms. The emergence of polychronous groups is robust to changes in some of the model parameters. Polychronous groups appear and disappear and change over time. Application of an external stimulus can cause a “response” consisting of the probabilistic activation of a certain subset of polychronous groups.

The most important result of the paper, in the author’s view, is that the number of polychronous groups exceeds the number of neurons in the network (remember, each neuron may be part of multiple groups). This is important because if the real “logic elements” in the computation are the polychronous groups rather than the individual neurons, then the memory or computing capacity of the network may be larger than otherwise expected.

The fourth contribution is an interesting discussion of various aspects of how the brain might work if its computation is indeed based on polychronous groups.

FOOTNOTES

[1] Actually, technically, Izhikevich defines a polychronous group based on whether the group of neurons has the POTENTIAL to fire in such a pattern, given its anatomical connectivity. So, technically, “polychronous group” is an anatomical property, not a functional one.

[2] Although I guess Izhikevich has published similar models before so you might say that “contribution” belongs to his previous papers. Whatever.

[3] Which is analyzed further in (Izhikevich, 2004). Although I don’t think Izhikevich has conclusively shown that STDP is the ONLY kind of plasticity that could cause this.

Amplification using recurrent connectivity

Thursday, September 8th, 2005

This post has much the same content as this NeuroWiki page; you may wish to read and comment on it there.

I’ve only skimmed this interesting article, so beware that I may not correctly understand it.

This article proposes that recurrent excitation in cortex leads to amplification, and analyzes this using the mathematics of basic amplifiers taught in introductory electrical engineering courses, i.e., open-loop gain and closed-loop gain.
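If I understand the framework correctly, the key arithmetic is the usual positive-feedback gain formula: a population receiving feedforward input I with net recurrent weight w < 1 settles at r = I/(1 - w), i.e., a closed-loop gain of 1/(1 - w). A minimal sketch (my toy, not the paper’s simulation):

    # Positive-feedback amplification: steady state of r = I + w*r.
    I, w = 1.0, 0.8          # feedforward input, recurrent weight (w < 1)
    r = 0.0
    for _ in range(100):     # iterate the recurrence to steady state
        r = I + w * r
    print(r, I / (1 - w))    # both ~5.0: closed-loop gain of 1/(1-w) = 5x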

They construct a simulation based on this principle that agrees with some electrophysiological and pharmacological results from neurobiology experiments in layer IV of V1.

At the end, they conjecture that a sensory network could use this principle for noise reduction and possibly pattern recognition.

Douglas, Rodney J.; Koch, Christof; Mahowald, Misha; Martin, Kevan A. C.; Suarez, Humbert H. (1995). Recurrent Excitation in Neocortical Circuits. Science 269(5226): 981–985.

Machine learning theory blog

Tuesday, August 30th, 2005

For those with theoretical interests in machine-learning-flavored AI, the ML Theory blog run by John Langford is highly recommended. Though the blog was only recently started, Langford and others have so far done an excellent job of commenting on both the science and culture of theoretical learning research.

Neuroimaging with Rescorla-Wagner model

Sunday, August 28th, 2005

Neuroimaging data from different brain areas, fit to a Rescorla-Wagner model, show that different regions integrate stimulus changes over different time intervals. The result itself probably isn’t that shocking, but I liked the nice combination of theory and experiment.

From the July 21 Neuron:

Formal Learning Theory Dissociates Brain Regions with Different Temporal Integration

Jan Gläscher and Christian Büchel

Learning can be characterized as the extraction of reliable predictions about stimulus occurrences from past experience. In two experiments, we investigated the interval of temporal integration of previous learning trials in different brain regions using implicit and explicit Pavlovian fear conditioning with a dynamically changing reinforcement regime in an experimental setting. With formal learning theory (the Rescorla-Wagner model), temporal integration is characterized by the learning rate. Using fMRI and this theoretical framework, we are able to distinguish between learning-related brain regions that show long temporal integration (e.g., amygdala) and higher perceptual regions that integrate only over a short period of time (e.g., fusiform face area, parahippocampal place area). This approach allows for the investigation of learning-related changes in brain activation, as it can dissociate brain areas that differ with respect to their integration of past learning experiences by either computing long-term outcome predictions or instantaneous reinforcement expectancies.

How does this relate to Hawkins’s idea that all cortex implements the same underlying “algorithm”? Is the integration time constant (or, in RW terms, the learning rate) tuned differently by different inputs?
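For reference, the Rescorla-Wagner update is just an error-driven running average, V <- V + alpha*(lambda - V), so the learning rate alpha directly sets the window of temporal integration. A toy illustration (mine, not the paper’s fitting procedure):

    # Rescorla-Wagner prediction V updated by prediction error on each trial;
    # lambda is the outcome (1 = reinforced, 0 = not), alpha the learning rate.
    def rw_prediction(outcomes, alpha):
        V = 0.0
        for lam in outcomes:
            V += alpha * (lam - V)      # error-driven update
        return V

    trials = [1] * 20 + [0] * 5         # reinforcement regime that changes
    print(rw_prediction(trials, 0.05))  # ~0.50: long integration, slow to update
    print(rw_prediction(trials, 0.5))   # ~0.03: short integration, tracks recent trials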

Differential equations for neuroscientists

Saturday, August 27th, 2005

I wrote up a little primer on differential equations for neuroscientists. It can be found here: http://science.ethomson.net/Diff_Eq.pdf. Any comments or suggestions appreciated, especially at this early stage!

Here is the first paragraph:

Ordinary first-order differential equations come up frequently in neuroscience. They are used to model many fundamental processes such as passive membrane dynamics and gating kinetics in individual ion channels. When the equations come up, most electrophysiology texts provide the solution, but do not provide any explanation. This manuscript tries to fill the gap, providing an introduction to many of the mathematical facets of the first-order differential equation. Section One provides a brief statement of the problem and its solution. Section Two works through the solution for a special case that often comes up in practice. I also work through a concrete example chosen for its near-ubiquity in neuroscience, the equivalent circuit model of a patch of neuronal membrane. Section Three contains a simple derivation of the general solution given in Section One. The manuscript presupposes a little knowledge of first-year calculus, much of which is reviewed when needed.
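As a taste of the territory the primer covers, here is a quick numerical check (not from the manuscript itself) that Euler integration of the passive-membrane equation tau*dV/dt = -(V - E) matches the closed-form solution V(t) = E + (V0 - E)*exp(-t/tau):

    import math

    tau, E, V0 = 10.0, -70.0, -55.0     # ms and mV; illustrative values
    dt, T = 0.01, 30.0                  # time step and total duration (ms)

    V = V0
    for _ in range(int(T / dt)):
        V += dt * (-(V - E) / tau)      # Euler step on tau*dV/dt = -(V - E)

    analytic = E + (V0 - E) * math.exp(-T / tau)
    print(V, analytic)                  # both ~ -69.25 mV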

Best,
Eric (Thomson)

McCulloch-Pitts-Wiener neurons?

Monday, March 7th, 2005

Interesting article in the NYT about a new biography of Norbert Wiener, the father of the field of cybernetics. The surprising revelation is that Wiener (who received a PhD from Harvard in mathematical logic at the age of 18) was “tricked” by his wife into abruptly ending his collaboration with Warren McCulloch and Walter Pitts, the pair behind the famous McCulloch-Pitts model of the neuron.

The relevant portion is quoted in the full post.

NEOXI.com – Neural Network Resources

Wednesday, January 5th, 2005

Neural Network Resources: www.neoxi.com

* Content: An extensive, professionally curated collection of neural network resources.

* Audience: People in commerce, industry, academia, and engineering practice, and anyone else interested in neural networks, machine learning, data mining, artificial intelligence, soft computing, and the many other fields that directly or indirectly use neural network technology.