Post-colonialism


This work is an effort to visualize biases in visual artificial intelligence tools. It is presented as a triptych, with each of the three pieces measuring 100 x 100 cm. The central piece shows a grid of 10,000 fake human faces generated by the StyleGAN2 algorithm.
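
As a rough illustration of how such a grid can be assembled (not the actual production code), the Python sketch below tiles 10,000 face images into a single canvas with the Pillow library. The file naming scheme and the tile size are assumptions.

```python
from PIL import Image

TILE = 64            # assumed pixel size of each face tile
COLS = ROWS = 100    # 100 x 100 = 10,000 faces

# Assumed naming scheme: face_00000.png ... face_09999.png
canvas = Image.new("RGB", (COLS * TILE, ROWS * TILE))
for i in range(COLS * ROWS):
    face = Image.open(f"face_{i:05d}.png").resize((TILE, TILE))
    # Fill the grid row by row, left to right.
    canvas.paste(face, ((i % COLS) * TILE, (i // COLS) * TILE))
canvas.save("grid.png")
```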


This algorithm is of special interest to me not only because it is probably the most popular platform for AI-generated imagery, but also because it is a descendant of the original GAN, the generative adversarial network invented in 2014 by researcher Ian Goodfellow. This invention was a breakthrough in the field: for the first time, a computer program could be trained on almost any visual style and reproduce it, creating new works that look similar to the training images but are never identical to them. GANs are the basis of a few landmark works, such as the piece created by the French collective Obvious, which sold for 432,500 dollars at Christie's auction house in New York City. They are a tool used by many artists, including Helena Sarin, Anna Ridler and Gene Kogan.
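
The adversarial idea can be illustrated with a minimal sketch: a generator network learns to map random noise to images, while a discriminator network learns to tell its output apart from real images, and the two are trained against each other. The toy networks and the random stand-in data below are illustrative assumptions, not StyleGAN2 itself.

```python
import torch
import torch.nn as nn

# Toy generator (noise -> image) and discriminator (image -> real/fake logit).
z_dim, img_dim = 64, 28 * 28
G = nn.Sequential(nn.Linear(z_dim, 256), nn.ReLU(), nn.Linear(256, img_dim), nn.Tanh())
D = nn.Sequential(nn.Linear(img_dim, 256), nn.ReLU(), nn.Linear(256, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
loss = nn.BCEWithLogitsLoss()

for step in range(1000):
    real = torch.rand(32, img_dim) * 2 - 1        # stand-in for real training images
    fake = G(torch.randn(32, z_dim))              # generated images

    # Discriminator step: label real images 1, generated images 0.
    d_loss = loss(D(real), torch.ones(32, 1)) + \
             loss(D(fake.detach()), torch.zeros(32, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator step: try to make the discriminator output 1 on fakes.
    g_loss = loss(D(fake), torch.ones(32, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```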


In order to generate new images, this platform requires a great number of images to be trained on, on the order of thousands (although artistic projects might sacrifice quality for creativity and train on just a few hundred images). Training is a computationally expensive process, usually carried out in research centers. NVIDIA, the graphics card company where the current version of StyleGAN2 is developed, offers a few pre-trained models for experimentation. Probably the most popular of them is based on the Flickr-Faces-HQ dataset.1 This collection of face images was created by scraping the Flickr website. Images with permissive licences were downloaded and cleaned up to arrive at a total of 70,000 pictures of faces at 1024 x 1024 pixels. Even though the creators of the dataset claim that it “contains considerable variation in terms of (...) ethnicity”, its distribution is clearly biased towards Caucasian people.
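
For reference, sampling faces from one of these pre-trained models looks roughly like this with NVIDIA's stylegan2-ada-pytorch code. The pickle file name is an assumption, the repository itself must be on the Python path (the pickle references its custom classes), and a CUDA-capable GPU is assumed.

```python
import pickle
import torch

# Assumes NVlabs/stylegan2-ada-pytorch is cloned and importable, and that a
# pre-trained FFHQ pickle (here called ffhq.pkl) has been downloaded from
# NVIDIA's model catalogue.
with open("ffhq.pkl", "rb") as f:
    G = pickle.load(f)["G_ema"].cuda()    # the trained generator network

z = torch.randn([1, G.z_dim]).cuda()      # a random latent vector
img = G(z, None)                          # NCHW image, values roughly in [-1, 1]
```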


In order to make this bias visible, I ran another artificial intelligence algorithm, perhaps just as problematic as the dataset itself, which purports to classify human faces according to ethnicity. Using it as a base, I wrote a program that erases all the Caucasian faces from the grid created before, leaving empty spaces and therefore only non-white people. This is the piece on the left side of the triptych. As one might expect, the result of this erasure is also highly questionable. The program does a terrible job, leaving faces that are clearly white in place while erasing those of non-Western people. It should be said, in any case, that the software used is neither professional nor peer-reviewed. In fact, most of the companies that offered such services have stopped doing so because of the obvious controversy around the possible uses of such a tool.
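
The text does not name the classifier that was used. Purely as an illustration of what such a filtering pass can look like, the sketch below uses the open-source DeepFace library, whose coarse "race" labels are themselves contested; the file naming scheme is an assumption carried over from the grid sketch above.

```python
from deepface import DeepFace

# Illustrative stand-in only: the artwork's actual classifier is not named.
kept = []
for i in range(10000):
    path = f"face_{i:05d}.png"            # assumed naming scheme
    result = DeepFace.analyze(img_path=path, actions=["race"],
                              enforce_detection=False)
    # Recent DeepFace versions return a list of results; older ones a dict.
    res = result[0] if isinstance(result, list) else result
    if res["dominant_race"] != "white":
        kept.append(path)                 # faces to keep; the rest are erased
```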


Faced with these concerns, I decided to add the third element of the piece. For this last one, I did the selection of non-white faces manually. This allowed me to reflect on the subjectivity and the criteria that such a task entails. Ethnicity is not just a visual feature: my decisions were swayed by factors I had not anticipated, such as what I perceived to be the wealth of the subject. Even though I may have corrected what I considered to be errors of the ethnicity algorithm, I certainly brought my own flaws into play. It also allowed me to compare manual labor against machine labor. The task took me about 16 hours to complete, since I had to go through 10,000 faces (roughly six seconds per face on average), and I spent a long time on a few that were more ambiguous to my own eye. The automated process, in comparison, took just a couple of minutes.


No conclusion is sought, nor indeed possible, from this process. Its results are the reflections that will be incorporated into the thesis documentation and the overall artistic research process.


1 https://github.com/NVlabs/ffhq-dataset


Bruno Caldas Vianna, Singular, Hietsun Pavilion, Helsinki, 2021