References

Akten, Memo. 2017. ‘Learning to See’, memo.tv <https://www.memo.tv/works/learning-to-see/> [accessed 23 May 2023].

——. 2021. ‘Deep Visual Instruments: Realtime Continuous, Meaningful Human Control over Deep Neural Networks for Creative Expression’ (unpublished doctoral thesis, Goldsmiths, University of London) <https://research.gold.ac.uk/id/eprint/30191/> [accessed 17 November 2022].

Becker, Howard S. 1995. ‘Visual Sociology, Documentary Photography, and Photojournalism: It’s (Almost) All a Matter of Context’, Visual Sociology, 10.1–2: 5–14 <https://doi.org/10.1080/14725869508583745>

Berardi, Franco ‘Bifo’. 2017. Futurability: The Age of Impotence and the Horizon of Possibility (London and Brooklyn: Verso).

Broad, Terence, Frederic Fol Leymarie, and Mick Grierson. 2021. ‘Network Bending: Manipulating the Inner Representations of Deep Generative Models’ <https://doi.org/10.48550/arXiv.2005.12420>

Brock, Andrew, Jeff Donahue, and Karen Simonyan. 2019. ‘Large Scale GAN Training for High Fidelity Natural Image Synthesis’, arXiv:1809.11096 [cs, stat] <http://arxiv.org/abs/1809.11096> [accessed 18 October 2021].

Brown, Tom, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, and others. 2020. ‘Language Models Are Few-Shot Learners’ <https://arxiv.org/pdf/2005.14165.pdf> [accessed 1 January 2020].

Dolejšová, Markéta, and Lenka Hámošová. 2021. ‘Designing in Troubling Times: Experimental Engagements with Socio-Ecological Challenges at the Uroboros Festival’, in NERD - New Experimental Research in Design 2, ed. by Michelle Christensen, Ralf Michel, and Wolfgang Jonas (Berlin: De Gruyter), pp. 36–52 <https://doi.org/10.1515/9783035623666-004>

Eaton, Scott. 2019. ‘AI Artist Scott Eaton’, NVIDIA <https://www.nvidia.com/en-us/research/ai-art-gallery/artists/scott-eaton/> [accessed 23 May 2023].

Entangled Others. 2020. Beneath the Neural Waves <https://beneaththeneuralwaves.com/> [accessed 25 March 2022].

Gonsalves, Robert A. 2021. ‘Creating Abstract Art with StyleGAN2 ADA’, Towards Data Science <https://towardsdatascience.com/creating-abstract-art-with-stylegan2-ada-ea3676396ffb> [accessed 11 January 2022].

Goodfellow, Ian, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, and others. 2014. ‘Generative Adversarial Nets’, Advances in Neural Information Processing Systems 27 (NIPS 2014) <https://papers.nips.cc/paper/5423-generative-adversarial-nets.pdf>

Haraway, Donna J. 2016. Staying with the Trouble: Making Kin in the Chthulucene, Illustrated edition (Durham, NC: Duke University Press).

Hertzmann, Aaron. 2020. ‘Visual Indeterminacy in GAN Art’, Leonardo, 53.4: 424–28 <https://doi.org/10.1162/leon_a_01930>

Heusel, Martin, Hubert Ramsauer, Thomas Unterthiner, Bernhard Nessler, and Sepp Hochreiter. 2018. ‘GANs Trained by a Two Time-Scale Update Rule Converge to a Local Nash Equilibrium’, arXiv:1706.08500 [cs, stat] <http://arxiv.org/abs/1706.08500> [accessed 9 January 2022].

Karras, Tero, Miika Aittala, Janne Hellsten, Samuli Laine, Jaakko Lehtinen, and others. 2020b. ‘Training Generative Adversarial Networks with Limited Data’, 34th Conference on Neural Information Processing Systems (NeurIPS 2020) <http://arxiv.org/abs/2006.06676> [accessed 5 January 2022].

Karras, Tero, Miika Aittala, Samuli Laine, Erik Härkönen, Janne Hellsten, and others. 2021. ‘Alias-Free Generative Adversarial Networks (StyleGAN3)’, 35th Conference on Neural Information Processing Systems (NeurIPS 2021) <https://arxiv.org/pdf/2106.12423.pdf> [accessed 18 October 2021].

Karras, Tero, Samuli Laine, and Timo Aila. 2018. ‘A Style-Based Generator Architecture for Generative Adversarial Networks’ <https://arxiv.org/pdf/1812.04948v1.pdf> [accessed 1 January 2019].

Karras, Tero, Samuli Laine, Miika Aittala, Janne Hellsten, Jaakko Lehtinen, and others. 2020a. ‘Analyzing and Improving the Image Quality of StyleGAN’ <https://arxiv.org/pdf/1912.04958.pdf> [accessed 18 October 2021].

Mirzoeff, Nicholas. 2016. How to See the World: An Introduction to Images, from Self-Portraits to Selfies, Maps to Movies, and More, Illustrated edition (New York: Basic Books).

Mitchell, W. J. T. 2006. What Do Pictures Want?: The Lives and Loves of Images, New edition (Chicago: University of Chicago Press).

Pasquinelli, Matteo, and Vladan Joler. 2020. ‘The Nooscope Manifested: AI as Instrument of Knowledge Extractivism’, AI & Society <https://doi.org/10.1007/s00146-020-01097-6>

Ridler, Anna. 2018. Myriad (Tulips) <http://annaridler.com/myriad-tulips> [accessed 25 March 2022].

Steyerl, Hito. 2009. ‘In Defense of the Poor Image’, e-flux journal <https://www.e-flux.com/journal/10/61362/in-defense-of-the-poor-image/> [accessed 21 May 2023].

Taylor, Luke. 2023. ‘Amnesty International Criticised for Using AI-Generated Images’, The Guardian <https://amp.theguardian.com/world/2023/may/02/amnesty-international-ai-generated-images-criticism> [accessed 21 May 2023].

Uroboros Festival. 2021. UROBOROS 2021 | Mary Ponomareva, Chris Kore, Entangled Others: Zero Emissions by 2099, online video recording, YouTube <https://www.youtube.com/watch?v=XXx4Q02tlIg> [accessed 5 March 2022].

Xu, Tao, Pengchuan Zhang, Qiuyuan Huang, Han Zhang, Zhe Gan, and others. 2017. ‘AttnGAN: Fine-Grained Text to Image Generation with Attentional Generative Adversarial Networks’, arXiv:1711.10485 [cs] <http://arxiv.org/abs/1711.10485> [accessed 18 October 2021].

Zylinska, Joanna. 2020. AI Art: Machine Visions and Warped Dreams (Open Humanities Press) <http://www.openhumanitiespress.org/books/titles/ai-art/> [accessed 12 March 2021].

Acknowledgement

This research was conducted at the Academy of Performing Arts in Prague as part of the project ‘Extending the Creative Tools of Machine Learning and Artificial Intelligence — Experimental Tools in Artistic Practice’, supported by the Ministry of Education, Youth and Sports under its funding for specific university research at the Academy of Performing Arts in Prague in 2021.