One of the data sets we worked with in this case study covers two hundred seconds of activity of a simulated neural network undergoing memory recall. This simulation was of particular interest to us because it revealed an inherent behaviour: the spontaneous self-activation of stored memory patterns, i.e. activation without external stimuli.

Seeking ways to grasp this behaviour, we focused on the mutual relationships, or interdependence, that nodes have with each other across the network by computing the correlation of their activities. These values, calculated for each pair of the 81 neurons in the network, construct a multidimensional structure which evolves, folding and unfolding in time. This abstract structure is placed in a space whose dimensions express the relationships between all possible node pairs (81 × 80 / 2 = 3240 pairs).
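As a rough illustration of this step, the pairwise correlations can be computed as follows; the array names and the random stand-in data are our assumptions, not the actual simulation code:

```python
import numpy as np

rng = np.random.default_rng(0)
activity = rng.standard_normal((81, 2000))  # stand-in: 81 neurons, 2000 time samples

# Correlation coefficient for every neuron pair: a symmetric 81 x 81
# matrix with ones on the diagonal.
corr = np.corrcoef(activity)

# Only the pairs above the diagonal are independent dimensions:
n = corr.shape[0]
n_pairs = n * (n - 1) // 2   # 81 * 80 / 2 = 3240
```

As the activity evolves, recomputing this matrix over a sliding window yields the time-varying structure described above.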

In order to visualise this structure we searched for an operation that would transform a high-dimensional object into a two-dimensional figure. To this end, we devised another dynamical system, which accomplishes this task in an iterative process. This system is formed by 81 mutually interacting masses placed on a plane, one for each neuron. The magnitude of the force that each mass pair is subject to reflects the correlation value of that neuron pair: a higher correlation means greater attraction and therefore a smaller distance. A set of correlation values of the neural network activity simultaneously causes all of the masses to move and seek positions whose relative distances to all other masses correspond to that node's relationship to all other nodes. Similarity and interdependence are transposed into geometrical distance relationships. Eventually the dynamical system settles into an arrangement of the masses which reflects the best possible two-dimensional approximation of the multi-dimensional structure, constructing a figure that folds and unfolds in time.
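A minimal sketch of such a mass system, written as spring-like relaxation (equivalent to gradient descent on the distance error); all names, constants, and the toy correlation matrix are our assumptions:

```python
import numpy as np

def layout(corr, steps=400, dt=0.05):
    """Iteratively place one mass per neuron on the plane so that
    pairwise distances approach 1 - correlation."""
    n = corr.shape[0]
    rng = np.random.default_rng(1)
    pos = rng.standard_normal((n, 2))      # random initial positions
    target = 1.0 - corr                    # higher correlation -> smaller rest distance
    for _ in range(steps):
        diff = pos[:, None, :] - pos[None, :, :]     # pairwise displacement vectors
        dist = np.linalg.norm(diff, axis=-1) + 1e-9  # pairwise distances
        # spring-like force driving each distance toward its target
        force = ((target - dist) / dist)[:, :, None] * diff
        pos += dt * force.sum(axis=1) / n
    return pos

# Toy example, three nodes: 0 and 1 strongly correlated, 2 uncorrelated.
corr = np.array([[1.0, 0.9, 0.0],
                 [0.9, 1.0, 0.0],
                 [0.0, 0.0, 1.0]])
pos = layout(corr)
```

Feeding each new correlation matrix into the running system, rather than restarting it, is what makes the figure fold and unfold continuously.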

Mathematically, solutions of this operation, if any exist, are mostly non-unique: the task the dynamical system is set to take on is a hard problem. When the system is pushed to the limits of its capability to interpolate between the two spaces, these difficulties become evident and the non-neutrality of the operation we are performing becomes clear. The dynamical system suddenly becomes material. It ceases to be a mere problem-solving, dimension-reducing or simplifying operator and reveals itself as a form-generating agent; it pushes back. Its own distinct behaviour becomes apparent.

In the end we find ourselves dealing with a more complex situation: two interacting and inextricably interwoven dynamical systems whose respective contributions to the result cannot be exactly separated. In effect, we turned a transposition into a complexification, whose principal value lies less in the reduction function, or in the calculation of an output, than in bringing to light qualities of such systems which are inherently incalculable.

We shed light on this situation by looking at it from different perspectives: we find multiple parametrisations for the forces and the figure's visual rendering. The result is a field of figures, artefacts whose mutual relationships construct a network through which incomputable qualities of the involved elements shimmer.

Transpositions [TP]: Case Study 1


Computational Neuroscience and Neurocomputing research group at the KTH Royal Institute of Technology, Stockholm.

Explorative Listening

Explorative listening to neuronal data opens up ways to auditorily engage with the particularities of a simulation. There exists a large repertoire of strategies to turn data into sound, known as different forms of auralisation and sonification, each of which allows for a particular sonic perspective on the data.

As opposed to exploring data visually, listening to it highlights the details of its temporal structure. The dynamics of the data can be felt directly, resonating with embodied experiences. Whereas scientific sonification is expected to be pleasing to the ear and to efficiently expose the type of information sought, explorative artistic sonification may approach the limits of auditory perception. Repetitive listening over extended periods of time usually exposes non-evident structures, when auditory fatigue gives way to hearing what lies behind the obvious. Time is suspended through repetition, also allowing us to listen to the way we are listening.

In the following audification one can detect several features of which the scientists who created the simulation were not aware and which they could not explain. They consider them insignificant with respect to the phenomena studied in the simulation. The spiking activities of a network exhibiting spontaneous self-activation of memory patterns, composed of eight groups of 270 neurons, have been simulated for three minutes and 20 seconds. The simulation has been compressed in time to 4.5 seconds, which are repeated 22 times. To better distinguish the 8 groups of neurons, they have been projected over 8 loudspeakers positioned around the listener in a virtual concert hall. Use headphones for a better sonic immersion.
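The time compression described here can be sketched as follows, with each spike rendered as a single-sample click; apart from the two durations given above, all names and values are our assumptions:

```python
import numpy as np

SR = 44100        # audio sample rate (assumed)
SIM_DUR = 200.0   # simulated duration: three minutes and 20 seconds
AUDIO_DUR = 4.5   # compressed playback duration from the text

def audify(spike_times, sim_dur=SIM_DUR, audio_dur=AUDIO_DUR, sr=SR):
    """Map spike times (in simulated seconds) onto a mono click track."""
    out = np.zeros(int(audio_dur * sr))
    idx = (np.asarray(spike_times) / sim_dur * audio_dur * sr).astype(int)
    out[idx[idx < len(out)]] = 1.0   # one single-sample click per spike
    return out

# Three hypothetical spikes: at the start, middle, and end of the run.
signal = audify([0.0, 100.0, 199.9])
```

In the actual piece, one such track per neuron group would be routed to its own loudspeaker.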

Data from neuronal simulations comes in two forms: either as digital signals of the cell membranes’ electrical potentials, or as sequences of discrete time points at which a spike in the potential occurs, i.e. when the neurons discharge. Whereas the second form is typically more compact by orders of magnitude, the first form is more detailed but may also be more redundant, especially when comparing neighbouring neurons sharing the same electrochemical milieu.


Whereas the audification above is based on spike data, the next one uses the membrane potential signals of a selection of 6 neurons from a network simulation involving 156 neurons in total. The simulation lasted 7.4 seconds and the audification is sped up by a factor of 5. The six potential signals are played through 6 loudspeakers positioned around the listener in a virtual concert hall.
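The factor-5 speed-up amounts to resampling the potential signal; a minimal sketch that simply keeps every fifth sample (a real implementation would low-pass filter first to avoid aliasing), with all names assumed:

```python
import numpy as np

SPEEDUP = 5  # playback speed factor given in the text

def speed_up(potential, factor=SPEEDUP):
    """Crude resampling: keep every `factor`-th sample of the signal."""
    return np.asarray(potential)[::factor]

# Stand-in for one membrane potential trace (the real one lasted 7.4 s).
v = np.sin(np.linspace(0.0, 2.0 * np.pi, 1000))
fast = speed_up(v)
```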

The following sonification tries to tease out the rhythmical fine structure of the spikes by assigning a different pitch to each neuron and playing them through the same loudspeaker arrangement as the audifications above.
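One way such a pitch mapping could be sketched is with every spike triggering a short sine grain at the neuron's pitch; the scale, grain length, and all names are our assumptions rather than the original patch:

```python
import numpy as np

SR = 44100  # audio sample rate (assumed)

def sonify(spikes_per_neuron, base_freq=220.0, dur=2.0, grain=0.03):
    """spikes_per_neuron: one list of spike times (seconds) per neuron."""
    out = np.zeros(int(dur * SR))
    t = np.arange(int(grain * SR)) / SR
    for i, times in enumerate(spikes_per_neuron):
        freq = base_freq * 2 ** (i / 12)                  # one semitone per neuron
        tone = np.sin(2 * np.pi * freq * t) * np.hanning(len(t))
        for s in times:
            start = int(s * SR)
            if start + len(tone) <= len(out):
                out[start:start + len(tone)] += tone      # mix the grain in
    return out

# Two hypothetical neurons: the first spikes twice, the second once.
mix = sonify([[0.0, 0.5], [0.25]])
```

Because each grain is windowed, overlapping spikes mix without clicks, preserving the rhythmical fine structure.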

Computational neuroscience is the study of the peripheral and central nervous system by simulation and emulation of neural tissue. Neurons are cells specialized in chemical and electrical communication. They form large networks in the central nervous system – the brain. It is assumed that the brain does not work in terms of algorithms but instead operates as a complex dynamic network of interconnected nodes. About two thirds of the human brain consist of the cortex, which is responsible for functions such as flexible thinking, impulse inhibition, and several forms of memory. The data used in this case study was generated in simulations aimed at discovering principles of memory storage that are compatible with known cortical anatomy and dynamics. More particularly this concerns the dynamics of mechanisms enabling the storage and recall of memories. The work at the Lansner lab focuses on attractor networks, which have interesting computational capabilities and are based on a learning principle that is compatible with biology.

The focus of this case study was a data set tracing the activities of a trained network of 2430 neurons exhibiting spontaneous self-activation of stored memory patterns. The network is organised in nine populations of neurons, each representing a pattern. Usually memory recall is triggered by external stimuli, but this network remembered (activated) the stored patterns spontaneously. This attracted our curiosity, especially concerning the sequence of activation, which privileged certain patterns. What kinds of dynamics may be at play that make the network string its associations together in this sequence? We approached the question by observing the correlation variations of a selection of 9 representative neurons per pattern (81 neurons in total), resulting in a high-dimensional space of pairwise correlation coefficients. We then developed a dynamical system to reduce this space to two dimensions, which formed the basis for the dynamic visualisation. The synchronous sonification is based directly on the neuronal activities, creating a complementary perspective on the behaviour of the network.

Based on this design, it is expected that the dynamics of cells in close proximity are highly correlated. However, there are also correlations between cells grouped in different hypercolumns. These are expressed as blue lines connecting those cells across hypercolumns. This virtual structure is emergent, an expression of learned patterns and spontaneous activity rather than of design.

Understanding this, we were subsequently interested in the dynamics of the correlations. How may the network oscillate between local correlation and correlations across hypercolumns? In the above animation, the triangles shaded in grey indicate correlations within hypercolumns, and the non-shaded triangles, those across hypercolumns.


Diagram by Pavel Herman from the Lansner Lab explaining the arrangement of cells in the network. They are clustered in minicolumns and hypercolumns. In the diagram, this is expressed as blue dots (minicolumns) grouped together by red outlines (hypercolumns).