Photogrammetry struggles to capture certain objects and environments because of their optical nature: transparent, repetitive, patterned, indistinct, plain, reflective, and ephemeral objects all cause problems. As briefly explained above, photogrammetry works by using quantifiable data – such as shapes, textures, or colours – to plot key points across multiple images (forming a ‘point cloud’), reconstructing how the source objects were situated three-dimensionally and building a 3D model from them. Since these qualities make it harder for the technology to identify and match such points, it has to make things up; it gets things wrong when it estimates the forms of objects from incomplete information. This leads to glitches and errors: stretched images and warped textures, as well as phantom forms, holes, and spikes in the 3D mesh.
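Why plain or transparent surfaces defeat this process can be illustrated with a minimal sketch. The code below is not the pipeline of any actual photogrammetry package; it uses local intensity gradients as a crude stand-in for the distinctive features (key points) that the software must detect and match across photographs. The function name and threshold are hypothetical, chosen only for illustration. A richly textured surface yields thousands of candidate points to triangulate; a uniform surface yields none, leaving the software nothing to work with and forcing it to interpolate.

```python
import numpy as np

rng = np.random.default_rng(0)

def keypoint_candidates(img, thresh=0.05):
    """Count pixels with strong local intensity variation --
    a rough proxy for the distinctive features that
    photogrammetry software matches between photographs."""
    gy, gx = np.gradient(img.astype(float))
    energy = np.sqrt(gx**2 + gy**2)   # local gradient magnitude
    return int((energy > thresh).sum())

textured = rng.random((64, 64))    # richly varied surface: many matchable points
plain = np.full((64, 64), 0.5)     # featureless surface: nothing to match

print(keypoint_candidates(textured))
print(keypoint_candidates(plain))   # 0 -- no data to anchor a 3D position
```

With no detectable points on the plain surface, any geometry the software produces there is an estimate rather than a measurement, which is exactly where the stretching, holes, and phantom forms described above originate.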
I have developed a series of artworks using these processes to explore mimetic visuality in 3D capturing technologies, and to unpick their representational claims as heavily mediated. This research is therefore an assessment of the use of photogrammetry, and it proposes a methodology for exposing the mediation of these technologies – a mediation by algorithms that make decisions about 3D space on the basis of 2D images. Through the artworks I detail practical techniques for exploring the errors generated by the technology’s perception of space.

The artworks become a critical investigation of visual capturing technologies by forcing them to visualise the invisible, the temporal, and the ephemeral. This is not meant in a metaphorical or figurative sense: the technology is literally used to visualise objects that are transparent or that change during the process of recording them. Deliberately negating the technical instructions forces the technology to speculate, and these speculations offer a glimpse of the decisions it makes and how it constructs its images. They reveal that 3D media imagery is mediated and constructed through automation – something that often remains disguised or unclear until the point of error. Errors thus become useful for understanding workings of the technology that are otherwise inaccessible. I present a methodology that encourages a détournement of automation, exploring the agency of technology through errors in models of transparent, plain, reflective, and ephemeral objects and environments. The works probe the limitations of algorithmic understanding and force the technology to visualise uncertainty. The research promotes self-reflexivity and critical reflection, inviting users to deliberately side-step, avoid, or actively ignore the prescribed workflows of digital tools through a dynamic methodology.