Orthogonal

I sometimes feel lonely in the sense that I am unsure whether I am doing something that truly connects to the aesthetic perception of anyone but myself. All the more I am happy when I find a work embodying an aesthetic idea that resonates with me. I recently encountered one such work at the xCoAx exhibition in Bergamo: a video installation by Hector Rodriguez in which Godard's Alphaville is decomposed using a specific mathematical projection technique. A number of distinct frames had been selected from the classic black-and-white movie, a film I love and whose iconic computer narrator voice ("Il arrive que la réalité soit trop complexe pour la transmission orale": it happens that reality is too complex for oral transmission) I have used on several occasions. All frames of the movie are then projected onto the vector space spanned by these selected frames (each taken as a one-dimensional brightness vector). As a result we see a "shadow" version of the film, the selected frames becoming apparent as the film approaches their respective time locations. It struck me that this work is indeed a clear example of the concepts behind imperfect reconstruction.

"We wish to reconstruct the input vector v as a weighted sum of images in A."
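The quoted idea can be sketched as a least-squares projection: flatten each selected frame to a brightness vector, collect them as columns of a matrix, and find the weights that best reconstruct an input frame from them. This is my own illustrative sketch in NumPy, not Rodriguez's implementation; all names and dimensions are made up.

```python
import numpy as np

# Hypothetical dimensions: each frame is flattened to a brightness
# vector of length n; k frames have been selected from the film.
rng = np.random.default_rng(0)
n, k = 64, 4

A = rng.random((n, k))            # columns: the k selected frames
v = rng.random(n)                 # an arbitrary input frame

# Find weights w minimising ||A @ w - v||, i.e. project v onto span(A).
w, *_ = np.linalg.lstsq(A, v, rcond=None)
v_hat = A @ w                     # the "shadow" reconstruction of v

# The residual v - v_hat is orthogonal to every selected frame.
print(np.allclose(A.T @ (v - v_hat), 0.0))
```

When the input frame happens to be one of the selected frames, the projection reproduces it exactly, which is why those frames "become apparent" at their time locations in the shadow film.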

Hector Rodriguez took the trouble to describe the algorithm on his website, which I very much appreciated, as it allowed me to take his work as a starting point from which to make a "second order imperfect reconstruction", based on what I saw and how it was described.

However, what I am mostly interested in is the activity of algorithmic experimentation, that is, giving space and time to all the intermediary representations and trajectories suggested in the process of implementing specific aspects of an algorithm. Following the idea of the vector space, I found out that the work had a predecessor in which an orthonormalisation of the selected frames was used, something dismissed in the newer version, apparently because the orthonormalisation introduced bipolar vectors that no longer translated naturally to a brightness between zero and one.
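The bipolarity is easy to see in a small sketch: even when the input vectors are non-negative brightness values, Gram-Schmidt orthonormalisation necessarily produces vectors with negative components as soon as the inputs are not already orthogonal. The function below is a generic textbook Gram-Schmidt, not the code of either version of the installation.

```python
import numpy as np

def gram_schmidt(vs):
    """Gram-Schmidt: return an orthonormal set spanning the input vectors."""
    basis = []
    for v in vs:
        w = v.astype(float).copy()
        for b in basis:                 # remove components along earlier vectors
            w -= np.dot(w, b) * b
        norm = np.linalg.norm(w)
        if norm > 1e-12:                # skip (near-)dependent vectors
            basis.append(w / norm)
    return np.array(basis)

# Two non-negative "brightness" vectors with values in [0, 1] ...
frames = np.array([[1.0, 1.0, 0.0],
                   [1.0, 0.0, 1.0]])
q = gram_schmidt(frames)

# ... yet the second orthonormal vector already has a negative component,
# so it no longer reads directly as a brightness image.
print(q[1])
```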

In the next step, the algorithm is implemented as a UGen for the upcoming FScape version two, which now supports image and image-sequence I/O naturally besides audio file I/O. Observing a histogram with logarithmic decay to both sides of zero, generally yielding very faint tones, a pre-amplified tangent function application produces an adjustable contrast. RGB channels are processed separately, yielding particular noise fields with predominant colours in moving bands for flat input data and monochromatic contours for changes in the input data. Since rows are processed from top to bottom, the vectors fade more and more towards the bottom after orthogonalisation, an effect that is particularly interesting with an image sequence that has passed through motion compensation and thus has a dynamically growing and shrinking black margin. The texture is very special in its chromaticity, articulating very well the materiality of the TFT display, almost as if two layers were superimposed: a statically appearing front texture of TFT elements, almost as if a liquid film had been applied to the screen, and a moving texture in the back, again separated into two aspects, the coloured noise of sky elements moving "in place" and the contours of the landscape moving linearly.
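The contrast step can be sketched roughly as follows. I am assuming the hyperbolic tangent as the shaping function here (its saturation is what makes the contrast adjustable via pre-amplification); the gain value and the rescaling to [0, 1] for display are my own illustrative choices, not the UGen's actual parameters.

```python
import numpy as np

def contrast(x, gain=8.0):
    """Pre-amplify bipolar data, then squash with tanh into [-1, 1].
    Higher gain gives harder contrast; faint values near zero are
    boosted while extremes saturate."""
    return np.tanh(gain * x)

# Very faint bipolar values become clearly visible after shaping;
# rescale from [-1, 1] to [0, 1] to display the result as brightness.
x = np.array([-0.2, -0.01, 0.0, 0.01, 0.2])
y = (contrast(x) + 1.0) / 2.0
print(y.round(3))
```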

Nevertheless, I wanted to explore the possibilities of orthogonalisation, the procedure by which a set of vectors, each of size N, is transformed into another set of vectors that are mutually orthogonal in N-dimensional space. First attempts with the Gram-Schmidt algorithm, taking a single image as a set of row vectors, produced somewhat interesting results. Contours are preserved and marked by a bright/dark shadow.
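Treating an image's rows as the vector set can be sketched like this. The code is a generic illustration under my own assumptions, using a synthetic gradient image rather than a movie frame; it also shows why the output fades towards the bottom: each row loses everything already "explained" by the rows above it.

```python
import numpy as np

def orthogonalise_rows(img):
    """Gram-Schmidt over the rows of a grayscale image, top to bottom.
    Each output row is the input row minus its projection onto all
    earlier rows; similar rows therefore cancel towards the bottom."""
    out = np.zeros(img.shape, dtype=float)
    basis = []
    for i, row in enumerate(img.astype(float)):
        w = row.copy()
        for b in basis:
            w -= np.dot(w, b) * b
        out[i] = w
        n = np.linalg.norm(w)
        if n > 1e-9:
            basis.append(w / n)
    return out

# A synthetic gradient image: every row is a scaled copy of the first,
# so everything after the first row is projected away entirely.
img = np.outer(np.arange(1, 5, dtype=float), np.linspace(0.0, 1.0, 8))
res = orthogonalise_rows(img)
print(np.allclose(res[1:], 0.0))
```

In a real photograph the rows are only partially correlated, so instead of vanishing they are reduced to what differs from the rows above, which is consistent with contours surviving as bright/dark shadows.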

[hh 15/08/16]