During this meeting the topic of dimensionality reduction came up, and a few methods for reducing the dimensions of a dataset were discussed. Jonathan then brought up the autoencoder, a type of artificial neural network used to learn efficient data codings in an unsupervised manner. Typically, an autoencoder has an input layer, an output layer and one or more hidden layers connecting them, with the output layer having the same number of nodes as the input layer; its purpose is to reconstruct its own inputs. The autoencoder therefore consists of two parts, the encoder and the decoder. The intermediate, compressed representation is referred to as the code or latent representation (Wikipedia).
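To make the structure concrete, here is a minimal sketch in NumPy (an assumption on my part, not the model discussed in the meeting): an encoder and a decoder of one linear layer each, with a hidden code smaller than the input, and the output the same size as the input.

```python
import numpy as np

rng = np.random.default_rng(0)

# Layer sizes: input and output match; the hidden "code" is smaller.
n_in, n_code = 8, 3

# Encoder and decoder weights (one layer each; tanh on the code for
# illustration -- real autoencoders are usually deeper).
W_enc = rng.normal(scale=0.1, size=(n_in, n_code))
W_dec = rng.normal(scale=0.1, size=(n_code, n_in))

def encode(x):
    """Compress an input to its latent representation (the 'code')."""
    return np.tanh(x @ W_enc)

def decode(z):
    """Reconstruct an input-sized vector from a code."""
    return z @ W_dec

def autoencoder(x):
    """Full network: tries to reconstruct its own input."""
    return decode(encode(x))

x = rng.normal(size=(5, n_in))   # a batch of 5 example inputs
z = encode(x)                    # compressed: shape (5, 3)
x_hat = autoencoder(x)           # reconstruction: shape (5, 8)
print(z.shape, x_hat.shape)
```

The weights here are untrained, so the reconstruction is meaningless; the point is only the shape of the network, with the code as the bottleneck between the two halves.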
Cutting the structure
Drawing an analogy with anatomical practice, Jonathan expressed his interest in exploring the structure of this kind of model by cutting, or dissecting, the network into parts. Once the network is trained, chopping off its left side (the encoder) leaves a synthesizer; cutting off its right side (the decoder) leaves a compressor, i.e. a dimensionality reducer.
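The dissection can be sketched as follows. This is a hypothetical toy example, not the network discussed: a linear autoencoder trained by plain gradient descent on synthetic data, after which each half is used on its own.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: 200 points lying near a 2-D plane embedded in 6 dimensions.
latent = rng.normal(size=(200, 2))
basis = rng.normal(size=(2, 6))
X = latent @ basis + 0.01 * rng.normal(size=(200, 6))

n_in, n_code = 6, 2
W_enc = rng.normal(scale=0.1, size=(n_in, n_code))
W_dec = rng.normal(scale=0.1, size=(n_code, n_in))

# Train the full network to reconstruct its own input
# (squared-error loss, gradient descent).
lr = 0.01
for _ in range(500):
    Z = X @ W_enc
    err = Z @ W_dec - X
    grad_dec = Z.T @ err / len(X)
    grad_enc = X.T @ (err @ W_dec.T) / len(X)
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc

# Chop off the left side: the decoder alone is a synthesizer --
# feed it an arbitrary code and it produces a data-like 6-D vector.
new_code = rng.normal(size=(1, n_code))
synthesized = new_code @ W_dec          # shape (1, 6)

# Chop off the right side: the encoder alone is a compressor,
# mapping 6-D inputs down to 2-D codes.
compressed = X @ W_enc                  # shape (200, 2)

print(synthesized.shape, compressed.shape)
```

Either half only behaves sensibly after the full network has been trained end to end; the cut happens afterwards, which is the point of the dissection analogy.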