10 Synset_Gloss

The hybrid images in Synset_Gloss, created through subjective selection and an automated computational process, embody both intentional and chance elements. This synthesis actively induces defamiliarisation, compelling viewers towards associative thinking and non-normative modes of meaning-making.


In the neural network, the neurological exercise involved in natural seeing and interpreting is broken down into discrete, jittery steps. There are two crucial acts: seeing and naming, taking place in order to give rise to an interpretation.

(Nora Khan 2018)


Synset_Gloss generated text from human-action analysis using a ResNet-based human action recognition (HAR) model within a structured pipeline (fig. 13). First, the HAR model analysed the human actions depicted in British public information films and produced descriptive text labels (see 04 Summary of Project Method for a more detailed description). A Python script then processed these labels through WordNet, “amplifying” the original labels into a small text corpus. A Markov chain generator, built with the Python Markovify library, used this corpus to generate short, nonsensical sentences, which were then curated. A language model trained on the works of Michel Foucault subsequently autocompleted the selected Markovify output, producing relatively coherent text sequences that, after further curation, served as the video’s narration. In addition, video clips were sourced from the internet using the ResNet classification labels as search tags. After a final selection, these clips were integrated into the timeline, adding a further layer of visual complexity to the narrated video track.
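
The first stage of the pipeline is conventional action classification over video frames. The sketch below illustrates this kind of labelling with a 3D ResNet pretrained on the Kinetics-400 action dataset, as shipped with torchvision; the project’s actual HAR model, training data, and label set are not specified here, so the checkpoint, input dimensions, and categories are assumptions standing in for them.

```python
import torch
from torchvision.models.video import r3d_18, R3D_18_Weights

# A 3D ResNet pretrained on Kinetics-400, used here as a stand-in
# for the project's HAR model.
weights = R3D_18_Weights.KINETICS400_V1
model = r3d_18(weights=weights).eval()
categories = weights.meta["categories"]

# Dummy clip: batch of 1, 3 channels, 16 frames, 112x112 pixels.
# In practice the frames would be sampled from a public information
# film and normalised with weights.transforms().
clip = torch.randn(1, 3, 16, 112, 112)

with torch.no_grad():
    logits = model(clip)

# The top-scoring category becomes the descriptive text label.
label = categories[logits.argmax(dim=1).item()]
print(label)
```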
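
The “amplification” step maps each label to its WordNet synsets and pools their glosses, example sentences, and lemma names into a small corpus, over which a Markov chain is then trained. Below is a minimal sketch of that step, assuming NLTK’s WordNet interface and the Markovify defaults; the example labels are placeholders, not the project’s actual classifications.

```python
import markovify
from nltk.corpus import wordnet as wn  # requires nltk.download("wordnet")

# Placeholder labels standing in for the ResNet HAR output.
labels = ["marching", "cycling", "swimming"]

# Amplify each label into a small corpus of WordNet material:
# synset definitions (glosses), example sentences, and lemma names.
lines = []
for label in labels:
    for synset in wn.synsets(label):
        lines.append(synset.definition())
        lines.extend(synset.examples())
        lines.append(" ".join(l.replace("_", " ") for l in synset.lemma_names()))

corpus = ". ".join(lines) + "."

# Train a Markov chain on the amplified corpus and sample short,
# frequently nonsensical sentences for later curation.
model = markovify.Text(corpus, state_size=1)
for _ in range(10):
    sentence = model.make_short_sentence(90)
    if sentence:
        print(sentence)
```

A state size of 1 keeps the chain productive on such a tiny corpus, at the cost of coherence, which suits the deliberately nonsensical output the project describes.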
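
In the autocompletion stage, a curated Markovify sentence becomes the prompt for a language model. The project’s model was trained on Foucault’s writings, but its architecture is not specified, so this sketch substitutes a generic GPT-2 checkpoint via the Hugging Face transformers pipeline; both the checkpoint and the prompt text are illustrative assumptions.

```python
from transformers import pipeline, set_seed

# Stand-in checkpoint: the project used a language model trained on
# the works of Michel Foucault; "gpt2" is substituted here.
generator = pipeline("text-generation", model="gpt2")
set_seed(42)  # make the sketch reproducible

# A curated Markovify sentence serves as the prompt (illustrative text).
prompt = "Seeing is a discrete exercise of the body"

result = generator(prompt, max_new_tokens=40, num_return_sequences=1)
print(result[0]["generated_text"])
```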

Video 2. Synset_Gloss, 2020, 07:48. Video and animation with stereo sound.

Figure 13. Schematic showing Synset_Gloss workflow, 2021.