The installation in the entry room had a front-end and a back-end.

Central to the installation was GPT-2 (an autoregressive language model that uses deep learning to produce human-like text), which we had fine-tuned on texts relevant to Neuromatic Game Art.

Through a rule-based conversation between two philosophers, a GPT-2 and a "human server", we investigated how we could engage with "philosophy through technology" rather than give a traditional philosophy reading. Mark Coeckelbergh had the task of insisting on questioning the content; Anna Dobrosovestnova took the role of a philosophy bot, reading, repeating, and playing with the material that was fed to her on her laptop. See the video on the right for how Georg Luif, the "human server", worked at the back-end.

Example of GPT-2 generated text
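As a minimal sketch, text like this can be sampled from a fine-tuned GPT-2 checkpoint with the Hugging Face transformers library. The checkpoint directory, prompt, and sampling parameters below are illustrative assumptions, not the exact setup used in the installation.

```python
# Sketch: sampling text from a GPT-2 checkpoint with Hugging Face transformers.
# "gpt2-neuromatic" is a hypothetical directory holding a fine-tuned model;
# the prompt and sampling parameters are illustrative.
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

# Fine-tuning leaves the tokenizer unchanged, so the stock GPT-2 tokenizer works.
tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2-neuromatic")

prompt = "What does it mean to play with a brain machine?"
inputs = tokenizer(prompt, return_tensors="pt")

# Autoregressive generation: the model predicts one token at a time,
# conditioned on the prompt plus everything generated so far.
output_ids = model.generate(
    **inputs,
    max_length=120,
    do_sample=True,
    top_k=50,
    top_p=0.95,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

Sampling with top-k/top-p rather than greedy decoding keeps the output varied between runs, which suits a performative setting where the same prompt may be reused.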


Technical Information:


For the Brain Machine Dérive installation we used the "Generative Pre-trained Transformer 2" language model, "GPT-2", released in 2019. Its architecture is similar to that of earlier language models, but GPT-2 was trained on the very large "WebText" dataset: 40 GB of pure text. Our own fine-tuning of the GPT-2 model, on a collection of texts from the Neuromatic Game Art research group as well as techno-philosophical texts, took several hours of pure computation time.
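As a rough sketch of what such fine-tuning involves, the following uses the Hugging Face transformers library; the corpus file name, hyperparameters, and output directory are assumptions for illustration, since the source does not specify the tooling used.

```python
# Sketch: fine-tuning GPT-2 on a plain-text corpus with Hugging Face transformers.
# "neuromatic_corpus.txt" is a hypothetical file containing the project's texts
# concatenated together; epochs, batch size, and block size are illustrative.
from transformers import (
    GPT2LMHeadModel,
    GPT2TokenizerFast,
    TextDataset,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

# Chop the corpus into fixed-length token blocks for causal language modeling.
train_dataset = TextDataset(
    tokenizer=tokenizer,
    file_path="neuromatic_corpus.txt",
    block_size=128,
)

# mlm=False selects causal (next-token) language modeling, as used by GPT-2.
data_collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)

training_args = TrainingArguments(
    output_dir="gpt2-neuromatic",
    num_train_epochs=3,
    per_device_train_batch_size=4,
)

trainer = Trainer(
    model=model,
    args=training_args,
    data_collator=data_collator,
    train_dataset=train_dataset,
)

trainer.train()
trainer.save_model("gpt2-neuromatic")
```

Even at this small scale, a few epochs over a book-sized corpus can take hours without a GPU, consistent with the computation time mentioned above.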


To read more about the conceptualisation and about AI in relation to artistic writing, see "Brain Machine Dérive", published in the Carpa 7 proceedings.