2014 11 21, Gerhard


Network Simulations


I just had a meeting with Pawel Herman, who explained to me what they do with their neural network simulations. The main process they are interested in is what could be called an associative memory task. The network is trained with a number of 4x4 pixel images. Once trained, it is presented with a blurred version of one of the images and it should recognize it and produce the unblurred original. So the processes that are simulated are on a relatively low level, assuming visual information as it may come from the retina (very roughly speaking). See more information about the Lansner Lab.


We discussed how to further explore that data auditorily. As the models are essentially 2-dimensional (mini column vectors being the third dimension), we will start with a 2D interface as a basis for the auditory navigation through the hyper and mini columns. There are some interesting details in the way they analyse the data (Fourier and constant-Q transforms), where an audification could be much more interesting. This concerns cases where the data doesn't need to be downsampled in order to be analysed but its full complexity could be made audible. This is the case in the highly correlated activity of neurons sharing a mini column. So listening to this will be one of the next steps.


Biological Data


Pawel also told me that he can get MEG data from Karolinska, recorded from human brains presented with visual stimuli. I asked him to get such data for us (150 positions in the brain, sampled at 250 Hz), ideally in its raw form, so not only the period when the stimulus is presented but also the background activity before and afterwards. It will be interesting to compare this data to the kind of data we got from the EEG.

2014 12 01, Gerhard

Biologically inspired networks


I had a meeting with Pawel and Anders, where they clarified that the 4x4 pixel "image" the networks operate on is actually "just a pattern" of 16 bits. There is no 2-dimensional aspect represented in the network. The network has 16 inputs, where a voltage is injected (on or off), and then the reaction of the network is observed. The network is a biologically inspired network and tries to model what happens at high levels of the cortex. So how the "image" arrives there is left out of the model. The network is trained using another network, which is a normal neural network (i.e. not a biologically inspired one), and the cell connections resulting from this training are then applied to the biologically inspired network. I asked if it made any sense to think of the 16 bit pattern as a 2D visual image and the answer was no – it is all meant much more abstractly. It could also be an auditory pattern that is "remembered" or "associated" by the network.

Dataset synchAlpha_100s

We also clarified technical details of the "synchAlpha_100s" data set I have been working on so far, which has two files, one with voltages sampled at 1000 Hz and one with discrete spikes, which are extracted from the voltage data. As the output data from the supercomputer simulation is massive, the voltage data has been down-sampled in space and time. It contains 16 hypercolumns with 1 minicolumn of 5 cells each (16 * 5 = 80 cells). In the simulation each minicolumn has 30 cells, so 25 have been thrown away. This makes sense as the voltages are highly correlated between those cells arranged in proximity. I asked for a data set with all cells in the minicolumns, because it would be interesting to auditorily explore the degree of correlation.

In the data set we have, there are actually 2 networks, where the second one is connected in a simple forward connection to the first (one-way communication). They both have the same structure and we have the same 80 cells in the voltage data of both of them, i.e. a total of 160 cells. Pawel gave me the indices of the cells of both networks, so that we now also know the hyper- and minicolumn structure. It will be interesting to compare the two networks.

A consequence of the temporal downsampling is that the voltage data doesn't contain all the spikes, because the downsampling is done in a crude way, by simply skipping samples without filtering. The original sample rate is 10 kHz. This is why we will have to integrate the missing spikes (which are in the spikes file) back into the voltage data before we can actually listen to it. What we have listened to so far did not contain all the spikes.
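As a first sketch of that reintegration step, the missing spikes could be stamped back into the 1 kHz voltage trace. Note that the array layout, the function name and the 30 mV peak value are my assumptions for illustration, not figures from the simulation:

```python
import numpy as np

def reinsert_spikes(volts, spike_times, fs=1000.0, spike_peak=30.0):
    """Stamp missing spike peaks back into a downsampled voltage trace.

    volts       : 1-D array, one cell's membrane potential at fs Hz (mV)
    spike_times : spike times for that cell, in seconds (from the spikes file)
    spike_peak  : assumed peak value to stamp (mV) -- a placeholder,
                  the actual spike amplitude in the simulation may differ
    """
    out = volts.copy()
    idx = np.round(np.asarray(spike_times) * fs).astype(int)
    idx = idx[(idx >= 0) & (idx < len(out))]
    # keep any spike samples that survived the decimation
    out[idx] = np.maximum(out[idx], spike_peak)
    return out
```

A single-sample stamp is the simplest option; for listening purposes one might later replace it with a short spike-shaped template so the clicks sound less harsh.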

The data set contains what is called a ground state, where the brain does some housekeeping tasks, and several activations, which are reactions to patterns presented to the network. We don't know where these activations happen and it will be interesting to see if we can hear them.

Browser for listening

I suggest that we build a 2D visual browser that allows listening to individual cells and groups of cells (by simply adding their signals up, maybe spatialising them over more than one speaker), using a probe with a varying spatial extent that includes more or fewer cells in a region. The goal here is to get a first idea of what can actually be heard in the data. I assume that this is only possible with an interactive approach, which allows for listening deeply and flexibly into the data and for immediately testing ad-hoc hypotheses, while leaving a kind of navigation trace in the data, which could also be recorded, analysed and played back. But these are just some initial ideas and more will appear once we start working on building such a tool.
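The core of such a probe is simple to state; a minimal numpy sketch could look like the following. The 2D cell coordinates are a hypothetical layout of my own (the data itself carries no 2D positions, see above), and `probe_mix` is an invented name:

```python
import numpy as np

def probe_mix(positions, signals, centre, radius):
    """Sum the signals of all cells whose 2D position falls inside the probe.

    positions : (n_cells, 2) array of assumed cell coordinates on the 2D map
    signals   : (n_cells, n_samples) array of voltage traces
    centre    : (x, y) probe position
    radius    : probe extent -- a larger radius mixes in more cells
    """
    dist = np.linalg.norm(positions - np.asarray(centre, dtype=float), axis=1)
    inside = dist <= radius
    if not inside.any():
        return np.zeros(signals.shape[1])
    return signals[inside].sum(axis=0)
```

Spatialisation over several speakers could then be a matter of weighting each cell's signal per output channel by its position relative to the probe, but that is a later refinement.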

Ultimately such a browser could be implemented in JavaScript using the Web Audio API and run in any web browser. In an ideal world this would happen in the RC, but this is less likely. I just had my first experience with this option when building an interactive version of my Zeitraum Diagram. You can find the relatively compact code for it here.

We also briefly talked about possibilities of visualisation. An idea came up related to the fact that the network "sees" only these 16 bits, which are internally expanded into a much more complex structure (neural activity over time and space). It could be interesting to visualise this structure, which is assumed to show how "the brain thinks about this image", which in the scenarios they simulate would only mean how associative memory may work – one of their central research questions.


I forwarded them the article Michael found in spring ("Neural portraits of perception: Reconstructing face images from evoked brain activity") and they promised they will look at it. They both had skimmed the abstract, seemingly knew what this was all about and sounded interested. Let's see if they get the time to actually read it in depth. I am also not sure at this point how relevant this work is for us at the moment, but when I remembered the paper, I thought it would be good to share it with them.

Since my last meeting with Pawel, he has asked his colleagues at Karolinska and University of Stockholm for MRI and EEG data and hopes to get some soon.

2014 12 20, Gerhard


Following our Skype meeting on Dec 18 I asked Pawel the following questions. His answers are given further below, under "Email reply by Pawel".


(1) I have been looking at the spikes data and found spikes for a total of 30959 cells. Are they all part of the same simulation and the 2x80 cells (networks c1 and c2) are just a very small subset? What are the other cells doing? How are they organised (hyper- and mini-columns)? The last 900 or so cells of each set of cells (there is a gap in indices separating them) seem to be special, as they exhibit a much more regular pattern. Are these the “inputs”? 


(2) What is the relationship between the two networks c1 and c2 in the data set? You said they are connected with a forward connection. What does this mean and what is the reason for such a structure? What do you simulate with this setup? Is c2 connected to c1 (c1 -> c2) or the other way around? From the data I could also imagine the reverse (c2->c1), as c1 seems more blurred. But maybe you are simulating a kind of sharpening function?


(3) Could you please tell me again how fast a neuronal signal propagates in the network? I forgot the number.


(4) Would it be possible to get a complete data set (volts not decimated and for all cells) of a possibly smaller and shorter simulation? It would be interesting for me to see how correlated the data is. What is your rationale for downsampling and "sparsifying" the cell data?


(5) Could you please repeat what was the goal of this simulation? I remember you were interested in the relationship between the spikes and the phase of the alpha waves. 


On another note, I just wanted to ask if you have received any biological data from your colleagues.

The fact that the dimensionality of the data does not interest them, does not really come as a surprise since on an information level a string contains all possible images. Every Icon, for example, is a piece of code that will - in time - show all possible images in a 32x32 matrix. However, the code is a counter that counts left to right and top to bottom. The question really is, are 'real' images that fragmented? Does (visual) 'information' always come in the shape of a dot?
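For our 4x4 case the Every Icon idea shrinks to a 16-bit counter. A small enumerator in that spirit (my own illustration, not part of the simulation code) makes the point concrete:

```python
def every_icon_16(start=0, stop=None):
    """Enumerate all 4x4 binary patterns the way Every Icon does for 32x32:
    a plain counter whose bits are read left to right, top to bottom."""
    stop = 2 ** 16 if stop is None else stop
    for n in range(start, stop):
        bits = [(n >> (15 - i)) & 1 for i in range(16)]
        yield [bits[r * 4:(r + 1) * 4] for r in range(4)]
```

All 2^16 = 65536 patterns come out in counting order, which is exactly the "dot-by-dot" fragmentation questioned above: the counter treats the image as a number, not as a spatial structure.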


While those questions are interesting, given that the project ignores them, we should not really be concerned with the inner structure of the source images.

msc 2014-12-02

Email reply by Pawel


1) As for spike data there are indeed far more cells. As far as I remember cells starting from 2561 until 17920 are our excitatory cells (that we record membrane potentials from) grouped in 16 hypercolumns each containing 32 minicolumns and every such minicolumn consists of 30 cells. So the first 30*32 cells belong to the first hypercolumn, the next 30*32 cells to the second one etc. The pattern should consist of the corresponding minicolumn from each hypercolumn (e.g. all the first minicolumns from all of 16 hypercolumns constitute pattern no.1, all the second minicolumns constitute pattern no.2 etc.). The same calculations start for the second network c2 from index 21506. You are right about the cells beyond the given boundaries - these are inputs and inhibitory cells that belong in small exclusive groups to different hypercolumns.
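Pawel's indexing scheme can be turned into a small lookup helper. Whether the first excitatory cell sits exactly at the offset itself (i.e. whether counting starts at 2561 rather than 2562) is my assumption and should be checked against the actual files; the function names are mine:

```python
CELLS_PER_MC = 30   # cells per minicolumn
MC_PER_HC = 32      # minicolumns per hypercolumn
N_HC = 16           # hypercolumns per network
OFFSETS = {"c1": 2561, "c2": 21506}  # first excitatory cell per network

def locate(idx):
    """Map a global cell index to (network, hypercolumn, minicolumn, cell),
    or None for input/inhibitory cells outside the excitatory blocks."""
    for net, off in OFFSETS.items():
        local = idx - off
        if 0 <= local < N_HC * MC_PER_HC * CELLS_PER_MC:
            hc, rest = divmod(local, MC_PER_HC * CELLS_PER_MC)
            mc, cell = divmod(rest, CELLS_PER_MC)
            return net, hc, mc, cell
    return None

def pattern_cells(net, k):
    """All cell indices of pattern k: minicolumn k in every hypercolumn."""
    off = OFFSETS[net]
    return [off + hc * MC_PER_HC * CELLS_PER_MC + k * CELLS_PER_MC + c
            for hc in range(N_HC) for c in range(CELLS_PER_MC)]
```

The arithmetic is at least self-consistent with Pawel's numbers: 16 * 32 * 30 = 15360 cells, which matches the inclusive range 2561 to 17920, and each pattern then covers 16 * 30 = 480 cells.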


2) The connection is from c1 to c2 in a feedforward manner, i.e. minicolumns from the lower network c1 project to the corresponding ones from top network c2 (strength of these connections is not particularly high). I cannot remember the connectivity probability but it was not too high. The reason was to simulate a simple hierarchical network setup and the sharpening of selectivity in the top network is one of the emerging dynamical features. We had in the past also experimented with weak diffuse (less defined/structured) connectivity from the top network to the bottom, c2->c1.


3) The velocity is 0.5 m/s.


4) It would be possible but not very straightforward because we would need to go to old simulations and re-run them. There was a PhD student who ran these particular experiments and unless the code is available on the server it can take some time to tune the network configuration. To be honest, I am sorry to say that I do not currently have sufficient time for this. Having said that, I must say, I do not really see a strong need to have all voltage data since they are highly correlated within each minicolumn and relatively strongly correlated within each hypercolumn (interesting phenomena in voltage signals occur across hypercolumns). As for the downsampling, we never record at a higher frequency than 1 kHz since the frequencies we are interested in lie well below 100 Hz. The disk space and simulation time are radically reduced without losing any relevant information whatsoever (the sub-millisecond resolution is not critical for the network's functionality and does not reflect any important dynamical phenomena that we would be investigating in depth).
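This also explains why the crude skipping mentioned earlier loses spikes. A numpy-only sketch of the difference between plain sample-skipping and filter-then-skip decimation (a boxcar filter here, chosen only for illustration – a proper FIR decimator such as scipy.signal.decimate would do better):

```python
import numpy as np

def downsample_volts(volts_10khz, factor=10):
    """Crude anti-aliased decimation: boxcar low-pass, then keep every
    factor-th sample. Unlike plain skipping (volts_10khz[::factor]),
    filtering first means a sub-millisecond spike is smeared into the
    kept samples instead of vanishing entirely."""
    kernel = np.ones(factor) / factor
    smoothed = np.convolve(volts_10khz, kernel, mode="same")
    return smoothed[::factor]
```

Either way a spike's shape is lost at 1 kHz, which is why merging the separately stored spike events back in remains necessary for listening.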


5) We were interested in the effect of prestimulus oscillatory dynamics in the network on the activation of patterns. When the network is ready to receive a stimulus it exhibits alpha oscillations that fluctuate in power. We were investigating the modulatory role of these fluctuations among others on the effectiveness of the stimulus (whether it led to the pattern activation and how long it took for this process). You can gather more insights from the following publication: http://www.jneurosci.org/content/33/29/11817.short.
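If we want to relate spikes to the alpha phase ourselves, something like the following could serve as a starting point. It is a numpy-only, Morlet-style phase estimate of my own; the 10 Hz centre frequency and the wavelet length are my choices, not parameters from their analysis:

```python
import numpy as np

def alpha_phase(signal, fs=1000.0, f0=10.0, cycles=5):
    """Instantaneous phase at f0 via convolution with a complex,
    Gaussian-windowed oscillation (a stand-in for a Hilbert-transform
    based analytic signal)."""
    n = int(cycles * fs / f0)
    t = (np.arange(n) - n / 2) / fs
    win = np.exp(-0.5 * (t / (t[-1] / 2)) ** 2)
    wavelet = win * np.exp(2j * np.pi * f0 * t)
    analytic = np.convolve(signal, wavelet, mode="same")
    return np.angle(analytic)

def spike_phases(spike_times, phase, fs=1000.0):
    """Alpha phase (radians) at each spike time (seconds)."""
    idx = np.round(np.asarray(spike_times) * fs).astype(int)
    idx = idx[(idx >= 0) & (idx < len(phase))]
    return phase[idx]
```

A histogram of `spike_phases` over many cells would then show whether spikes cluster at particular phases of the ongoing alpha oscillation, which is the kind of relationship the paper linked above investigates.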


I have a meeting scheduled in early January at KI to pick some real EEG data of potential interest. They have lately been particularly busy but sounded positive.

Assuming one could trace 'how the brain thinks about a given 16 bit structure' and assuming that this takes place in a much larger network, I'd be very keen to 'split' this into 16 patterns, one for each bit. One would imagine that each of those patterns will show traces of the other 15 (as a kind of hologram?), which I would find fascinating.

Thus, a good starting point for me would be to look for ways in which to plot activation situations and then to split them into their 16 constituents. This could be a good basis for a piece, me thinks.

msc 2014-12-02

I find the difference between the non-biological and the biological network interesting, although I don't quite understand why and how they do it. My sense is, though, that there may be a surplus in the biological network, i.e. something only it can do that may come out by comparison – some form of diff?

msc 2014-12-02