2014 12 01, Gerhard
Biologically inspired networks
I had a meeting with Pawel and Anders, where they clarified that the 4x4 pixel "image" the networks operate on is actually "just a pattern" of 16 bits. There is no 2-dimensional aspect represented in the network. The network has 16 inputs, where a voltage is injected (on or off), and the reaction of the network is then observed. The network is biologically inspired and tries to model what happens at high levels of the cortex, so how the "image" arrives there is left out of the model. The network is trained using another network, a normal neural network (i.e. not a biologically inspired one), and the cell connections resulting from this training are then applied to the biologically inspired network. I asked whether it made any sense to think of the 16-bit pattern as a 2D visual image and the answer was no – it is all meant much more abstractly. It could also be an auditory pattern that is "remembered" or "associated" by the network.
We also clarified technical details of the "synchAlpha_100s" data set I have been working on so far. It has two files, one with voltages sampled at 1000 Hz and one with discrete spikes, which are extracted from the voltage data. As the output data from the supercomputer simulation is massive, the voltage data has been down-sampled in space and time. It contains 16 hypercolumns with 1 minicolumn of 5 cells each (16 * 5 = 80 cells). In the simulation each minicolumn has 30 cells, so 25 have been thrown away. This makes sense, as the voltages are highly correlated between cells arranged in close proximity. I asked for a data set with all cells in the minicolumns, because it would be interesting to auditorily explore the degree of correlation.
In the data set we have, there are actually 2 networks, where the second one is connected to the first by a simple forward connection (one-way communication). They both have the same structure, and we have the same 80 cells in the voltage data of both of them, i.e. a total of 160 cells. Pawel gave me the indices of the cells of both networks, so that we now also know the hypercolumn and minicolumn structure. It will be interesting to compare the two networks.
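A minimal sketch (Python) of how I imagine indexing into the 160 cells; the concrete index order and the file name are my assumptions – the real mapping is the one Pawel gave us:

    import numpy as np

    N_HYPER = 16        # hypercolumns per network
    CELLS_PER_MINI = 5  # cells kept per minicolumn in the downsampled data
    N_CELLS = N_HYPER * CELLS_PER_MINI  # 80 cells per network

    def cell_info(flat_index):
        # Map a flat cell index (0..159) to (network, hypercolumn, cell_in_minicolumn).
        # Assumes network 0 occupies indices 0..79 and network 1 indices 80..159,
        # ordered hypercolumn by hypercolumn.
        network, rest = divmod(flat_index, N_CELLS)
        hypercolumn, cell = divmod(rest, CELLS_PER_MINI)
        return network, hypercolumn, cell

    # e.g. pick all voltage columns belonging to hypercolumn 3 of the second network:
    # voltages = np.load("synchAlpha_100s_voltages.npy")   # placeholder file name
    # cols = [i for i in range(2 * N_CELLS) if cell_info(i)[:2] == (1, 3)]
    # v_sub = voltages[:, cols]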
A consequence of the temporal downsampling is that the voltage data doesn't contain all the spikes, because the downsampling is done in a crude way, by simply skipping samples without filtering. The original sample rate is 10 kHz. This is why we will have to integrate the missing spikes (which are in the spikes file) back into the voltage data before we can actually listen to it. What we have listened to so far did not contain all the spikes.
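A rough sketch of what this re-integration could look like for a single cell, assuming the spikes file gives spike times in seconds and we simply stamp a placeholder peak value into the 1000 Hz trace (the true 10 kHz spike waveform is lost, so the peak value is an arbitrary choice for listening purposes):

    import numpy as np

    FS = 1000.0  # Hz, sample rate of the downsampled voltage data

    def reinsert_spikes(voltage, spike_times, peak_mv=30.0):
        # voltage     : 1-D array, one cell's membrane voltage sampled at FS
        # spike_times : this cell's spike times in seconds (from the spikes file)
        # peak_mv     : assumed peak value to stamp in
        v = voltage.copy()
        idx = np.round(np.asarray(spike_times) * FS).astype(int)
        idx = idx[(idx >= 0) & (idx < len(v))]
        v[idx] = np.maximum(v[idx], peak_mv)  # only raise samples that missed the spike
        return v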
The data set contains what is called a ground state, where the brain does some housekeeping tasks, and several activations, which are reactions to patterns presented to the network. We don't know where these activations happen, and it will be interesting to see if we can hear them.
Browser for listening
I suggest that we build a 2D visual browser that allows listening to individual cells and groups of cells (by simply adding their signals up, maybe spatialising them over more than one speaker), using a probe with a varying spatial extent that includes more or fewer cells in a region. The goal here is to get a first idea of what can actually be heard in the data. I assume that this is only possible with an interactive approach, which allows for listening deeply and flexibly into the data and for immediately testing ad-hoc hypotheses, while leaving a kind of navigation trace in the data, which could also be recorded, analysed and played back. But these are just some initial ideas, and more will appear once we start building such a tool.
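Just to make the probe idea concrete, a sketch of a simple mono mix over all cells inside a circular probe region, assuming we assign each cell a 2D layout position ourselves (the layout is our choice for the browser, not something given by the data):

    import numpy as np

    def probe_mix(voltages, positions, center, radius):
        # voltages  : array (n_samples, n_cells) of voltage traces
        # positions : array (n_cells, 2) of layout coordinates we assign to the cells
        # center    : (x, y) position of the probe
        # radius    : spatial extent of the probe; a larger radius includes more cells
        d = np.linalg.norm(positions - np.asarray(center), axis=1)
        selected = np.where(d <= radius)[0]
        if len(selected) == 0:
            return np.zeros(voltages.shape[0]), selected
        mix = voltages[:, selected].mean(axis=1)  # simple average of the selected cells
        return mix, selected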
We also briefly talked about possibilities of visualisation. An idea came up related to the fact that the network "sees" only these 16 bits, which are internally expanded into a much more complex structure (neural activity over time and space). It could be interesting to visualise this structure, which is assumed to show how "the brain thinks about this image", which in the scenarios they simulate would only mean how associative memory may work – one of their central research questions.
I forwarded them the article Michael found in spring ("Neural portraits of perception: Reconstructing face images from evoked brain activity") and they promised they will look at it. They both had skimmed the abstract, seemingly knew what this was all about and sounded interested. Let's see if they get the time to actually read it in depth. I am also not sure at this point how relevant this work is for us at the moment, but when I remembered the paper, I thought it would be good to share it with them.
Since my last meeting with Pawel, he has asked his colleagues at Karolinska and University of Stockholm for MRI and EEG data and hopes to get some soon.