1. Introduction

Recent developments in machine learning and artificial intelligence (AI) have given rise to a new category of media: synthetic media. Using various generative AI models, it is possible to synthesise novel visuals (or text or sound) from large amounts of data, to direct this process towards specific outcomes, and to manipulate already-existing media. Synthetic media found its first applications in deepfakes (replacing a person in an existing image or video with someone else’s likeness) and in generative AI art, which has since become even more popular in the NFT (non-fungible token) marketplace. Soon thereafter, it became monetised, with the arrival of video synthesis services for personalised marketing, celebrity avatars or fake humans, and all sorts of computational photography filters and video enhancements. A small part of this field consists of creative applications in which generative AI tools are designed to execute repetitive tasks in media production and to assist creative workers in achieving a higher degree of realism in synthetic footage, synthetic image completion or 3D estimation. In art, experiments with AI tools create opportunities for new aesthetics as well as for a critical discourse that questions the ethical consequences of using these tools, including re-evaluating the concept of authorship and the assumption that creativity is a solely human trait. A special fringe area is research that acknowledges AI as an autonomous agent and speculates on future creative relationships between humans and non-humans.

Generative neural networks bring exciting opportunities to creative practice, while at the same time generating insecurity, devaluing human-captured media, and raising questions about the nature of knowledge production and exchange, and about what is real and what is not. Can generative neural networks hold a mirror up to society and point out the areas that require our critical attention?

Many expectations are placed on AI. The myth of AI as an alien mind, superior to human intelligence and able to solve all of humanity’s future problems, not only scares most people but is also open to misinterpretation and can convey false promises. To debunk these misconceptions, Vladan Joler and Matteo Pasquinelli wrote a ‘manifesto of AI dissidents’ (2020: 1), The Nooscope Manifested (2020), in which they decompose generative neural networks into their components in order to analyse and visualise the workings of deep learning. They arrive at the conclusion that AI (as we know it today) is nothing more than a pattern-recognition algorithm, functioning as an instrument of knowledge magnification that supplements the limits of the human brain. It is a sober, matter-of-fact explanation that draws attention to the need for a theory of knowing, one that helps us understand what is happening inside the black box of deep learning. This processing of artificially enhanced vision and information in connection with visual media synthesis has greatly inspired this artistic research. Can generative neural networks be used as tools for knowledge production? How far can we stretch the pattern recognition of large (visual) data without attributing superficial qualities to the AI algorithm?

This artistic research engages with generative neural networks and AI-driven visual synthesis with the goal of challenging the limits of such research and questioning the value of the generated outcomes. To undertake this work, we had to reject the rational preparation process involved in training a customised generative AI model (StyleGAN, in this case) and invert the objective of such training, knowingly anticipating a failed outcome. The aim was not to achieve a functioning model that would generate aesthetically pleasing, photorealistic visuals, nor to discover how to improve the StyleGAN network for creative visual applications. On the contrary, the motivation to engage with the training process from the outset lay in a speculation on the information value and affective quality of the generated images, the manipulative power of the training dataset, and the insufficient validation of such a cost-effective process in the artistic context.

It is thus an experiment with a customised StyleGAN model trained on a heterogeneous dataset, involving the deliberate exposure of the generative neural network to failure while focusing on the unexpected moments of surprise that arise from such a process. To test these qualities, we selected an ambiguous topic for the dataset. Instead of sneakers or human faces, we chose to collect images representing the vague concept of ‘troubling times’, challenging StyleGAN’s ability to recognise patterns within such a diversified dataset. Despite knowing from the beginning that the results would be unsatisfactory, we embraced the anticipated failure as an opportunity to gain insight into a non-human (algorithmic) kind of sense-making, observing the process and considering what kind of new knowledge could be created without following the rules.
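For concreteness, the sketch below illustrates what such a training run can look like in practice. It is a minimal sketch, not the exact procedure followed in this research: it assumes NVIDIA’s publicly available stylegan2-ada-pytorch codebase (the exposition does not specify which StyleGAN implementation was used), and all folder names, file names and the 512×512 resolution are hypothetical placeholders.

```python
# Minimal sketch of training a custom StyleGAN model on a heterogeneous
# image collection, assuming NVIDIA's stylegan2-ada-pytorch repository.
# All paths and the chosen resolution are hypothetical placeholders.
import subprocess

# 1. Pack the collected 'troubling times' images into the dataset format
#    expected by the repository's training script.
subprocess.run([
    "python", "dataset_tool.py",
    "--source=./troubling_times_images",
    "--dest=./datasets/troubling_times.zip",
    "--width=512", "--height=512",
], check=True)

# 2. Launch training on a single GPU; the snapshots written to
#    ./training-runs can be inspected to watch the model fail
#    (or surprise) over time.
subprocess.run([
    "python", "train.py",
    "--outdir=./training-runs",
    "--data=./datasets/troubling_times.zip",
    "--gpus=1",
], check=True)
```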

This exposition documents the process of artistic experimentation and analyses the outcomes to answer the research question: can generative neural networks produce unexpected knowledge and change our perspective on the studied object (the ambiguous abstract concept of ‘troubling times’)?


The documentation includes a technical description of the setting, the training dataset, the training process and the final product: a StyleGAN model called TroublingGAN. The model itself is made accessible alongside this documentation for further experimentation.
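As a hedged illustration of what such further experimentation can involve, the following sketch samples images from a trained generator. It assumes the model is distributed as a stylegan2-ada-pytorch network pickle and that the repository’s dnnlib and torch_utils modules are importable; the filename troublinggan.pkl is a hypothetical placeholder.

```python
# Minimal sketch of sampling from a trained StyleGAN pickle, following
# the usage documented for stylegan2-ada-pytorch. Requires the
# repository's dnnlib and torch_utils modules on the Python path.
import pickle
import torch

with open('troublinggan.pkl', 'rb') as f:
    G = pickle.load(f)['G_ema'].cuda()  # moving-average generator weights

z = torch.randn([1, G.z_dim]).cuda()    # a random point in the latent space
c = None                                # no class labels: unconditional model
img = G(z, c)                           # NCHW float32 tensor in [-1, +1]

# Convert to an 8-bit image for viewing or saving.
img = (img.permute(0, 2, 3, 1) * 127.5 + 128).clamp(0, 255).to(torch.uint8)
```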

Although the visually ambiguous images generated from the vast latent space of our TroublingGAN model hardly contain any specific objects or symbols, they strangely resemble photography, and, however abstract they may be, the human mind wants to impose meaning on them. Rather than asking what they represent, our analysis focused on the affective quality of these synthetic visuals and speculated on their potential use as a substitute for photojournalism. Images documenting tragedies and catastrophes are so overused in contemporary culture that visually overloaded viewers become indifferent to their content. We suggest replacing such photojournalism with visually ambivalent synthetic footage to achieve greater involvement with the accompanying text.

Figure 1. Generated outcomes from the TroublingGAN model.

Lastly, the article examines the areas of friction where the logic behind the new technological tools clashes with the artistic ability to combine the incompatible. We find these moments the most precious. Especially in the current ‘troubling times’ of diverse yet interconnected global issues and crises, seeking different perspectives and injecting one’s thinking with a bit of nonsense logic is a crucial, although sensitive, undertaking.