Contextual review of approaches and aesthetics for glitching spatial images




Fractional Space: Between 2D and 3D

 

The Ephemer(e)ality Capture research project was, in part, a response to Hito Steyerl's investigation into 3D scanning, undertaken as part of her EIPCP research project 'Transformation as Translation', which consists of an essay and documentary imagery of the 'testing process'. The project "argues that the translation from 2 to 3D is a transformation, [it] brings the limits of a specific representational paradigm into focus." (Steyerl, 2012a) Steyerl begins her essay 'Ripping Reality: Blind spots and wrecked data in 3D' by expanding on 3D media's physicalisation of images. She does so through a set of speculative questions, such as:


What if they transformed into the objects they claim to represent? What if the flat plane of representation acquired an extension and even a body? (2012a)


These questions respond to recent developments in 3D scanning and printing technologies. For Steyerl, images have the potential to become objects through processes of replication: they are scanned and then printed. However, transforming scanned images into 3D form throws up a set of problems. Steyerl analyses the terminology of the manufacturers who produce these technologies, likening their rhetoric to that of earlier documentary media and its claims to truth, objectivity, and transparent mediation.


The new technology promises all the things that documentary representation promised ever since: objectivity, full and truthful representation of events, only this time augmented by an additional dimension. A 3D point cloud is no longer a flattened image, missing depth and extension. It is a copy with a volume, dutifully replicating the shape of the initial object. (2012b)

 

Steyerl goes on to test these technical limitations by capturing a kiss. As she describes, a kiss is a time-based action, something the technology struggles to capture because of its reliance on a linear, time-based scanning of the environment.

 

Lets think of kisses. Kisses are travelling events. We can imagine them being passed on like messages or even viruses. […] But a kiss – seen from the point of view of scanning technology – also merges various actors, usually two into one surface. Surfaces connect bodies and make them indistinguishable. They connect bodies to grounds and other objects they happen to be in touch with. Surfaces capture bodies as a waveform, entangled with their material environment. (2012b)

 

What Steyerl hits upon here (aside from the inability of scanning technology to distinguish between unconnected objects) is that by capturing moving, time-based entities, she is creating error through optical phenomena. This creates an issue for the technology's assertions of truth and objectivity. The technique reveals its technological mediation, its propensity to speculate or extrapolate from the contradictory information it has been given in creating a 3D image, something Steyerl deems problematic if the result is considered a 'truthful' representation. The error thus reveals more about the construction of the image: the way the technology compiles a 3D model from flat imagery, and the problems inherent in this form of construction. Steyerl writes,

 

What emerges is not the image of the body, but the body of the image on which the information itself is but a thin surface or differentiation, shaped by different natural, technological or political forces, or in this case folding around a kiss. (2012b)

 

Steyerl terms this form of imagery 'fractional space'. A 3D model is not necessarily one entity but a series of fractured surfaces. Models are peppered with holes and warped forms; spikes and anomalous shapes jut out from the objects. Holes and floating forms display dislocated objects or areas that are disconnected from the rest of the image. The 'fractional space' represents what the technology doesn't know or understand as well as what it thinks it does: its misinterpretations and mistakes. Steyerl goes on to state that these types of images do not constitute full three-dimensionality.

 

This space is a fractional space, […] a space that hovers between 2 and 3D. It is for example a space in 2,3 or 2,4 D. To create a full 3D rendition one would need to scan or capture every point of a surface from every side. One has to basically use at least 3 scanners and then superimpose their results in virtual space. But if you have only one point of view, what you get is at best 2,5 D., a space between a surface and a volume […] 2,5D is created by 3D technology, yet it is imperfect 3D. (2012b)

 

Steyerl's concerns move from a consideration of the image's construction to issues of technical proficiency. Although she questions these technical issues and points to ways in which they could be overcome, the focus here isn't to overcome the technical aspects but to understand the nature of the image. Steyerl posits that these images occupy a bizarre limbo state between 2D and 3D media because of their imperfect or time-based construction. Indeed, the images have no volume (unlike an MRI scan, for instance); they are all surface, and these surfaces appear strangely flat at times. The textures on the surfaces of the models are composed of the 2D images used to create the model. These 2D photos are merged and spliced together, yet are often poorly aligned, stretched, or warped. All these aspects degrade the model's '3D-ness' and leave it hovering between 2D and 3D. However, Steyerl does not extensively explore the 3D model's construction and composition. While she goes on to describe the conditions for the images' 'constructed' nature and the 'fractional spaces' produced, she does not elaborate on methods for, or a discussion of, how the images in question are composed by the technology. There are gaps in which the composite, constructed nature of the image, as well as the vital role of algorithmic imaging and the agency of the machine, go unquestioned. Steyerl hints at the issues of algorithmic construction but makes no moves toward exploring it further through devised methods. Here, on the process of creating a 3D print of the scan, Steyerl considers the logistics of 'stitching', the patching up of holes and making models watertight, as a form of fictionalisation, and therefore a move away from objectivity.

 

In fact depending on data, a substantial amount of interpretation goes into the creation of such objects. In the case of this sample it is more than fair to speak of a deliberate objectifiction rather than an objectification or objective rendering of data, since about half of the surfaces are pure estimations, deliberate abstractions and aesthetic interpretations of data. (2012b)

 

Steyerl suggests that less information leads to an increased influence of technological agency, but she doesn't hint at what further approaches could be taken to elucidate this agency. What conditions result in the creation of 'fractional spaces'? These questions have prompted my research. In what ways can the construction of a 3D image be exposed? Perhaps the answer has been alluded to above: in short, through an exploration of the image's time-based nature and an examination of the conditions that cause the technology to err. For answers, we look to glitch practices. To ground what follows, a sketch of the reconstruction process that produces these fractional spaces is given below.
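Although Steyerl does not detail the reconstruction pipeline, its general outline is well documented: photogrammetry software detects features across overlapping photographs, matches them, estimates camera poses, and triangulates the matched points into space. The following is a minimal two-view sketch of that process, offered as an illustration rather than as Steyerl's method or any vendor's implementation; the file names and camera intrinsics are placeholders, and real pipelines use many views plus bundle adjustment.

```python
# Minimal two-view reconstruction sketch: the kind of pipeline
# (feature matching, pose estimation, triangulation) that runs
# behind a photogrammetry interface. File names and intrinsics
# are placeholders.
import cv2
import numpy as np

img1 = cv2.imread("view_a.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("view_b.jpg", cv2.IMREAD_GRAYSCALE)

# 1. Detect and describe local features in each photograph.
sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# 2. Match features between views. Lowe's ratio test discards
#    ambiguous matches -- the point at which moving subjects,
#    like a kiss, begin to confuse the reconstruction.
matcher = cv2.BFMatcher()
good = [m for m, n in matcher.knnMatch(des1, des2, k=2)
        if m.distance < 0.75 * n.distance]
pts1 = np.float32([kp1[m.queryIdx].pt for m in good])
pts2 = np.float32([kp2[m.trainIdx].pt for m in good])

# 3. Estimate the relative camera pose (placeholder intrinsics K).
K = np.array([[1000.0, 0, 640], [0, 1000.0, 360], [0, 0, 1]])
E, _ = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC)
_, R, t, _ = cv2.recoverPose(E, pts1, pts2, K)

# 4. Triangulate matches into a sparse point cloud. Anything the
#    two views do not both see remains a hole: the partial, 2.5D
#    'fractional space' Steyerl describes.
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([R, t])
pts4d = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)
cloud = (pts4d[:3] / pts4d[3]).T  # homogeneous -> Euclidean
print(cloud.shape[0], "points recovered")
```

The sketch makes the source of the glitch aesthetic legible: the cloud contains only what was matched from the available viewpoints, so every occlusion, reflection, or movement between exposures surfaces downstream as a hole, spike, or warp.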


 

Forensic Architecture

 

The research collective Forensic Architecture (hereafter FA) uses photogrammetry for investigative purposes. FA often uses it to visualise and extrapolate data garnered from other image sources (video or photography) in order to discover new information, as a form of investigative journalism. For FA, digital media act as a witness. For an audience, photogrammetry visualises spatial details and factual events that are otherwise difficult to grasp. Their practice leans towards photogrammetry because of its kinship with accurate spatial representation: an objective image that is formed using, and supported by, witness testimony. Although these practices examine our relationship with representations through the use of photogrammetry, it is not FA's purpose to examine the medium and image of photogrammetry itself. My research works inversely: using objects and subjects that are difficult to index as a way of examining the medium itself. This research is not about acquiring truthful representations of ephemera, but about using ephemera to discover the mediation present within photogrammetry, as well as its limitations.

 

On the limitations of remote imaging and the materiality of technology, FA's founder Eyal Weizman states, "the forensic-architectural problem that arises forces us to examine the material limit of images." (FA, 2015) In light of this, Weizman recognises the need for a return to witness testimony: "Facing the limitations of remote witnessing, one might turn to the testimony of survivors." (FA, 2015) The investigations rely not on the objectivity of technology but on the concurrence of witness testimonies of events and acts. FA employs a technique of 'Ground Truthing' (FA, 2018b), a process of sourcing public and local testimony and imagery concerning a disputed, recently disturbed, or destroyed area. This act of 'ground truthing' is a form of activism, standing in opposition to an invasive or authoritarian force. Interestingly, FA researcher and artist Ariel Caine has developed works with multiple authors, employing what he terms 'civic-led counter practices' (Caine, 2019) in his photogrammetric projects.

 

Caine's work moves towards a form of participatory spatial imaging, one that utilises DIY techniques such as kite photography or the sourcing of multiple residents' images (see: Ariel Caine, Ground Truth, right). As Caine argues, these practices work towards a de-authoring of the single-perspective photographic image, since the 'spatial photograph', as he terms it, is:

 

An emergent form of three-dimensional photograph processes and assemblages that constitutes not an image but a navigable, architectural environment. (2019)

 

Not only does this acknowledge the shift in the image's construction, through his concept of the 'constellation', but Caine also notes a paradigm shift in its viewing: the image becomes 'navigable'. In doing so, it changes the ontology of the photographic image.

 

It transpires that, in the process of photography's transition from the granular to the holographic, the singular body's viewpoint vanishes. Photographic space detaches itself from the single perspective and erases the looking body. In its place, the looking eyes and the camera become free-roaming, perspectival subjects within a multi-point constellation that forms the three-dimensional space. (2019)

 

Caine argues that the detachment of a single perspective from the photograph is advantageous for his practice, as it provides an opportunity to create imagery from multiple authors, approaching a community-based form of imaging. My research argues that the author's perspective cannot be completely removed from many of these images; its affordances for vision reappear in a strange and political fashion, especially through glitch. Directional glitches in textures and meshes emerge from a central viewpoint, alluding to the affordances and limitations of the author. For Caine, however, the image remains closely tied to its use as a representational tool, albeit a form of representation that allows for multiple authorship. His acknowledgement of photogrammetry's constructed and navigational qualities provides important benchmarks for an assessment of its aesthetics, yet the mediating agent of the image (the algorithm that constructs it) isn't Caine's focus. This is where my research diverges from Caine's: glitch can reveal the powers involved in the construction of the image.

Glitch as paralogy

 

In The Postmodern Condition (2005), Jean-François Lyotard posited that culture and research would be increasingly imposed upon by economic, political, and bureaucratic systems, legitimated by the production of power rather than autonomously, a condition he termed 'performativity'. His response to performativity was an approach of 'paralogy'. Paralogical practices include methods of research and culture that highlight, critique, or destabilise the systems of power which underpin performativity. For Lyotard, in research, this meant the production of ideas sought through non-normative means, or ideas that went against established norms. Lyotard problematises the Habermasian notion of 'consensus community' with regard to legitimacy. For Lyotard, consensus opposes the heterogeneity and diversity necessary within research and culture, since for Habermas "legitimacy [was] to be found in consensus obtained through discussion" (Lyotard, 2005, p. xxv). Consensus favours a homogenisation of approaches, whereas paralogy seeks dynamism and difference. Andrew Prior suggests that "Glitch-art practices constitute a vibrant 'paralogical' response to a performativity within arts and research" (Prior, 2013, p. 2). Prior goes on to analyse the importance of cybernetics and systems in Lyotard's conception of performativity: "One of Lyotard's key arguments was that the cybernetic characteristics of contemporary culture legitimate knowledge not for its sake, but for its performance." (Prior, 2013, p. 2) He points to how digital media often subject contemporary culture to the performativity of economic systems. Artists involved in glitch practices are often interested in the limitations of systems, adopting methods that test them to the point of error. "Therefore, glitch art might constitute a paralogous approach in drawing our attention to the materiality of its media, the conditions of technology and the constructed character of aesthetics" (Prior, 2013, p. 4). Perhaps, then, methods that promote a reflexive and evaluative questioning of photogrammetry's mediation offer a paralogical practice, providing an approach even when the materiality of the technology is inaccessible. Such methods could disrupt the delimiting of subjects for expression, deliberately negating the exclusion of subjects on the basis of their technical accuracy. Lastly, such methods could probe the problems of a technology through practices that go against prescriptions of 'best practice'.

 

 

Détournement, spectacle and its technical equivalent

 

Alongside paralogous practices, it is worth looking at other historical methods of cultural disruption. Guy Debord's Society of the Spectacle (1967) defined détournement as a process of resistance to what he termed 'the spectacle'. For Debord, the spectacle encapsulated capitalism's political, societal, and cultural powers, which transformed citizens into passive observers and consumers through seductive forms of visual culture. All that mattered for capitalist power was that citizens consumed and became politically disengaged and stupefied, which could be achieved through an array of pacifying, spectatorship-inducing media. Debord, through his activity as part of the Situationist International, employed acts of détournement to disrupt, critique, or challenge the socially controlling forces of the spectacle. Détournement was often employed when agency was limited, as a form of critique against a power or establishment which one had no agency to change. These disruptive acts of purposeful resistance to social norms influenced later understandings of how artistic practice could be socially and politically engaged. Although focused on a pre-digital world, they rally against many of the issues faced by users of commercial digital media: over-commercialisation, a saturation of imagery and media, and subjugation to manufacturers. It could be argued that, with the development of digital media through the late twentieth and early twenty-first centuries, the spectacle's potency has increased alongside the multiplication of image-producing media. Many of the issues related to power and the production of imagery that concerned Debord are more prevalent than ever. Within the emerging world of photogrammetry, we see a shift of agency over to commercial entities and automation. It is also interesting to note that the spectacle, in terms of imagery, had previously been concerned with two-dimensional representation. The imagery of emergent 3D/CGI media, and its promise of greater realism, threatens to bring a new dimensionality to the spectacle. The rhetoric and methods for producing images should thus be tested and pushed in order to understand their limitations. The techniques in this research could be considered a détournement of digital media practices. Specifically, this research investigates ways to disrupt and challenge the prescriptive outputs of commercial 3D media. The spectacle, in this case, is represented by commercial technology manufacturers who exert control over a user's output and cultivate a dependence upon digital media. The recent cloud-based boom has led to a dramatic shift towards manufacturer control. Users submit input for rendering or processing, with very few controllable parameters; often this is because algorithms decide on the parameters of filters, extrapolation, and rendering based on user input and adjust them accordingly. The agency of the user is limited to the input they provide, which is often constrained by the parameters of 'best practice' or 'how-to' technical guides. However, refusing notions of 'good' and 'bad' technique in favour of agency allows a qualitative investigation of the limitations of technology. This offers an important tool for many artists: a form of questioning that aims to understand more about the algorithms and cloud-based tools that are shaping artistic practice. We must ask: how is it possible to use détournement to challenge the restrictions of commercial digital tools?

Diagram of noise source in information theory according to Shannon (1948, p. 381).

Rosa Menkman, Xilitla, 2014, GIF of glitched VR environment.

Benjamin Mako Hill, Windows load screen on New York subway sign, 2007.

Glitch Landscape 

 

Many theories of glitch practice are found in communications and media studies, and have largely dealt with issues of noise and error. Theorists and practitioners have focused on the problem of mediation and transparency in media, notably Claude Shannon's (1948) acknowledgement of external influences on signal transfer as a significant issue for all communications media. Rosa Menkman has likewise written about the 'transparency' of a medium in The Glitch Moment(um) (2011), in which she discusses the issues of noise and glitch. For Menkman, all communication technologies are affected by noise, and a perfect, unmediated transmission is an impossibility.


While the ideal is always unreachable, innovation is nevertheless still assumed to lie in finding an interface that is as non-interfering as possible, enabling the audience to forget about the presence of the medium and believe in the presence and directness of immediate transmission. (2011, p. 14)
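Shannon's formulation makes the unreachability of this ideal quantifiable. For a channel of bandwidth $W$, carrying a signal of average power $P$ perturbed by white noise of power $N$, the maximum rate of reliable transmission is bounded:

\[
C = W \log_2 \frac{P + N}{N}
\]

(Shannon, 1948, Theorem 17). Because the noise power $N$ is never zero in any physical channel, the capacity $C$ is always finite: mediation is not a contingent failure of engineering but a mathematical property of transmission itself.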

 

These comments come in response to Shannon's writing on communications engineering and the acceptance that entropy and noise are inevitable consequences of transmission. If noise and mediation are inevitable, then non-mediation and presence can only ever be ideals, yet they remain paramount ones. As Jay David Bolter and Richard Grusin elaborate, there is a commercial and cultural impetus for this, as "our culture wants both to multiply its media and to erase all traces of mediation: ideally, it wants to erase its media in the very act of multiplying them" (Bolter & Grusin, 1999, p. 5). Menkman discusses the cultural desire to develop an optimally transparent channel, one in which the user is unaware of the mediation due to the directness and transparency of the transmission. She discusses the example of the Graphical User Interface, which


...was developed to let users interact with multiple electronic devices using graphics rather than complicated text commands. This development made these technologies more accessible and widespread, yet more obfuscated in their functionalities. (2011, p. 14)

 

Menkman observes that the technology need be neither transparent nor direct, so long as the user's experience of it is deemed so. The user interface's accessibility allows a non-technical user to perform complex computations through an easy-to-use operating system or controls. This is certainly true of cloud-based photogrammetry (CBP). CBP obfuscates its algorithmic mediation behind a clear 'input' and 'output' interface. The construction of 3D models from 2D images is obscured from the user's view, yet easy to perform. Most users would be unsure of the exact manner in which their models are constructed. The technology displays the process with simple progress bars and buttons, with phrases such as 'In Progress' or 'Uploading' to signify its activity. This obfuscation of the image-creation process also obscures the algorithmic decision-making about spatiality. Glitches, errors, and failures in the models reveal how the technology struggles to calculate the spatiality of the subject, offering glimpses of its true mediation and functionality. Their fragmented nature shows which images have been used and relied upon for the 3D model and, conversely, which areas of the model lack sufficient information owing to image discrepancies. All this provides insight into the automated decisions and the way in which the 3D model is constructed.
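To make this opacity concrete, the sketch below imagines the entire user-facing surface of a CBP service as code. It is a hypothetical illustration, not any real provider's API: the endpoint URL, JSON fields, and status strings are invented, and only the generic HTTP calls (via Python's requests library) are real.

```python
# Hypothetical sketch of a cloud-based photogrammetry (CBP) client.
# Endpoint, fields, and status strings are invented for illustration;
# no real provider's API is documented here.
import time
import requests

API = "https://example-cbp-service.test/v1"  # placeholder URL

def reconstruct(image_paths):
    # Upload the photographs: the only input the user controls.
    files = [("images", open(path, "rb")) for path in image_paths]
    job = requests.post(f"{API}/jobs", files=files).json()

    # Poll an opaque status field. The interface reports activity
    # ('Uploading', 'In Progress') but reveals nothing of the feature
    # matching, depth estimation, or hole-filling happening remotely.
    while True:
        status = requests.get(f"{API}/jobs/{job['id']}").json()
        print(status["state"])
        if status["state"] == "Complete":
            break
        time.sleep(10)

    # Download the finished model. The algorithmic decisions that
    # shaped it remain legible only through its glitches.
    return requests.get(f"{API}/jobs/{job['id']}/model.obj").content
```

The point is structural rather than technical: the user's agency begins and ends at the two requests, and every spatial judgement is made behind the progress bar.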

 

Technological mediation can be unpicked through errors, which show its materiality and functionality. Benjamin Mako Hill's essay 'Revealing Errors', in Error: Glitch, Noise and Jam in New Media Cultures (2011), guides the reader through ways in which technologies deliberately obscure their own mediation, only for the workings of this mediation to be revealed by errors in the system. Mako Hill gives examples of (in)famous system errors that have revealed the politics of informational manipulation. He argues that these intermediary codes, algorithms, or technologies are hidden from the user until the point of error, which peels back the layer of the user interface, exposing not only the workings of the machine but also the politics of the developers.

 

User interfaces, Mako Hill explains, are described by programmers as 'abstractions'. They allow the user to easily navigate the system (for instance, the operating systems of laptops and phones) without the need to understand or manipulate code. Mako Hill lists several examples of the pervasive way in which technology mediates not only the information we receive but also the way we behave. It is insights such as these that have led artists to work with systems that deliberately control the direction of output. In working with error, artists become activists, as Mako Hill describes, since "[e]rrors can expose the particularities of a technology and, in doing so, provide an opportunity for users to connect with scholars speaking to the power of technology and to activists arguing for increased user control" (Mako Hill, 2011, p. 32). For Mako Hill, artists uncover errors that reveal technology's manipulation of their movement or engagement, the control of their data or input, and alterations of their outputs and work.

 

My research aims to put into practice Mako Hill's assertion that errors uncover the mediations of technology, by exploring the limitations of photogrammetric processing, with errors and glitches appearing as a visual exposé of algorithmic confusion. Far from being simply a technical critique of the technology's inability to capture, the research emphasises an appreciation of errors as more than mere mistakes. They provide a window into the workings of the machine and offer a reflexive, dynamic methodology, one which encourages users to work around the often-narrow prescriptions of technical devices. Mako Hill, too, notes the importance of noticing errors, not only because they provide information about media ecologies and structures of power, but also because they open up a plurality of approaches to digital culture.

 

These approaches can be short-lived, susceptible to change, and superficially reproduced through pastiche. But such works often provide a snapshot of media in the present, showing processes of mediation at that moment in time. "The glitch's inherent moment(um), the power it needs or has to pass through an existing membrane or semblance of understanding, helps the utterance to become an unstable articulation of counter-aesthetics, a destructive generativity." (Menkman, 2011, p. 44) The temporality of glitch practices is therefore worth addressing, as these works and practices face disappearance. Notable glitch artists such as Jeff Donaldson, Paul B. Davis, NO CARRIER (aka Don Miller), and JODI demonstrate glitch practices in which the media is 'local' to the artist. In this sense, the materiality of the 'hacked' hardware and/or software was accessible to those artists: either the artist changed the code of the software to make it perform differently, or altered the circuitry of the hardware in order to achieve alternative outputs. These works represent what could be termed 'local glitch practices' and reflect the local nature of technology at the time the artists worked. Such practices are potentially under threat due to the increasing dominance of the cloud. In all aspects of digital culture, from gaming to graphic design, media are moving from a system of distributed media commodities (in which the individual user has access to the technology's materiality) to a centralised, remote, or cloud-based model. This gives the commercial developer more agency in the distribution of the technology (allowing for greater control against piracy, for example) and greater control of security (since it is easier to prevent the hacking of a centralised system than of thousands or millions of individual units). However, practices of glitch, 'thinkering' (Huhtamo, 2010), or 'zombie media' (Hertz and Parikka, 2012) could disappear because of the difficulty of accessing the materiality of cloud-based media technology. Against these powers, methods of détournement must be employed. Are there ways of practising media archaeology with cloud-based technologies?

Conclusion

 

This research unpicks the complex chain of mediation present within photogrammetric images. The resulting image relies upon digital translations of image data and on hidden, automatic assessments of spatiality made from those data sets. The glitches that occur reveal the unseen algorithmic interferences that give the appearance of transparency, interferences that aim to create a more sophisticated representation of the depth of optical reality. Hito Steyerl's research initiates a method and a language for understanding these issues of mediation and digital translation. However, if it is to be useful in bringing "the limits of a specific representational paradigm into focus" (Steyerl, 2012a), Steyerl's project alone provides insufficient information on the true nature of the image's construction. While she establishes that "What emerges is not the image of the body, but the body of the image" (Steyerl, 2012b), this reflection on the inadequacies of the image-making process is not thoroughly developed and problematised. The research practice presented here further develops an account of the fractional spaces created by photogrammetry. It initiates a toolkit and conceptual framework for addressing issues of layering and the automated rendering of spatial imagery in future research. This leaves scope for further exploration by researchers interested in the ways that 3D technologies construct images. From the works produced, the investigation focuses not on whether we get any closer to 'reality' in an optical, representational sense, but on whether we get any closer to the reality of understanding how technology constructs and mediates images.

Ariel Caine's project with Forensic Architecture. View of point-cloud terrain of al­Araqib looking at the Bedouin stonehouse of Al Malahi Salman Abu Zayd.

Hito Steyerl, 3D-Body of the image, 2012, PNG of photogrammetric scan.


Hito Steyerl, Tent / Texture II, Kharkiv, 2015. Photogrammetric texture JPEG from 3D scans of Ukrainian battle sites.

The scene of the chemical attack in Khan Sheikhoun. The crater at the centre of the image was suspected to be that of a chemical bomb.

Close-up of the photogrammetric model of the crater from the chemical attack in Khan Sheikhoun.

Ariel Caine, Ground Truth: Testimonies of Destruction and Return in The Naqab, 2016.

Image of a 'circuit-bent' toy car from Garnet Hertz's workshop 'Toy Hacking': an example of 'zombie media', in which obsolete technology is rewired and repurposed. Photo: Peter Huynh/ICS Communications.

Matthew Plummer-Fernandez and JODI, MATERIAL WANT, 2016. The project is a hybridisation of algorithms, errors, Internet-found objects, and digital fabrication. The objects are designed to glitch 3D printing technologies.

JODI, Untitled Game, 1999. Eleven Quake modifications for PC/Mac. In Untitled Game, JODI create glitches through the exploitation of errors in the game's source code, leading to unpredictable steering and shooting and a notable destabilisation of the physics engine.