Sight and vision – The making of Nether

Report from the creation of Nether, part of Alwynne Pritchard's Dog/God project.


Nether is a musical, theatrical piece for vocalist/physical performer, staged and commissioned by Alwynne Pritchard as part of her performance DOG/GOD II in Bergen during Oktoberdans on 27.10.2020.
I elected to work with the combination of an animated figure and a physical performer since that is in line with my current research project Emotional machines – composing for unstable media. Nether features an animated figure, either #11 or #12, tentatively named Dog (later renamed Bones), with Alwynne Pritchard as physical performer and vocalist. It is part of a series of commissions from multiple composers for short pieces that make up her Dog/God project.

Documentation video. Nether 27.10.2020.

The figures

This composition was developed over several months and started with the creation of two animated figures, #11 and #12 in the series produced for the research project Emotional machines – composing for unstable media.

Most of the figure's body is 3D printed in an off-white filament. This gives the texture of the material a bone-like quality. The figure consists of three main parts, with two almost identical structures easily interpreted as limbs attached to a central body that appears somewhat similar to a human pelvis. At the end of each "leg," a small wheel is attached. The wheels utilize one-way needle pin bearings, so they are only able to rotate in one direction.

I readily associate this sculpture with the lower part of a primate body, but one oddly lacking flesh and having a very unusual bone structure. This figure follows on from #9 and #10 in that it attempts to create mobility through the use of "legs." The limbs are actuated using two large NEMA 17 stepper motors turning small pulleys driving a GT2 timing belt wrapped around a large pulley rigidly mounted to the figure's trunk at the "pelvis" end. This causes the legs to bend at the base.
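To make the belt transmission concrete, the sketch below shows how a desired bend angle for a limb could be translated into stepper pulses. The motor resolution, microstepping factor, and pulley tooth counts are assumed values for illustration rather than the figure's actual configuration, and the pulse function is a placeholder for whatever driver hardware is used.

```python
# Minimal sketch: converting a desired limb angle into stepper pulses.
# All numeric values are assumptions for illustration, not measurements
# of the actual figure.
import time

MOTOR_STEPS_PER_REV = 200   # typical NEMA 17 full-step resolution (assumed)
MICROSTEPS = 16             # driver microstepping factor (assumed)
SMALL_PULLEY_TEETH = 20     # pulley on the motor shaft (assumed)
LARGE_PULLEY_TEETH = 80     # pulley fixed to the trunk at the "pelvis" (assumed)

def steps_for_angle(degrees: float) -> int:
    """Number of microsteps needed to bend the limb by `degrees`."""
    reduction = LARGE_PULLEY_TEETH / SMALL_PULLEY_TEETH           # belt gear ratio
    steps_per_output_rev = MOTOR_STEPS_PER_REV * MICROSTEPS * reduction
    return round(steps_per_output_rev * degrees / 360.0)

def pulse_step_pin() -> None:
    """Placeholder for the real driver interface (GPIO, serial, etc.)."""
    pass

def bend_limb(degrees: float, step_delay: float = 0.002) -> None:
    """Send one pulse per microstep; the delay gives crude speed control."""
    for _ in range(steps_for_angle(degrees)):
        pulse_step_pin()
        time.sleep(step_delay)

if __name__ == "__main__":
    # With these assumed values, a 30-degree bend needs 1067 microsteps.
    print(steps_for_angle(30))
```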

Generative design

Figures #11 and #12 were developed with the aid of generative design to "grow" the shapes. This process involves defining load points, meaning specific sections where physical forces act on the model, such as joints or axle mounts. This was done by drawing up small boxes intended to join together, some of which have circular holes designed to hold axles. These load points are simple rectangular boxes with straight angles and symmetrical shapes that can quickly be joined using standard fasteners such as screws and nuts.

The generative process creates simulations of material joining the load points, using genetic design algorithms with cloud-based rendering. The algorithms deliver many different solutions that fulfil the load and structural requirements and leave it to the instigator of the design process, in this case me, to choose from the selection of solutions. Typically, this type of generative design results in shapes that are perceived as organic-looking.
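The generative design itself ran in commercial software whose internals I cannot inspect, but a toy genetic algorithm conveys the principle: candidate structures are scored against load and mass requirements, and the better candidates are recombined and mutated over many generations. Everything in the sketch below, the thickness-profile encoding, the cubic stiffness proxy, and the parameter values, is an illustrative assumption rather than the algorithm the software actually uses.

```python
# Toy genetic algorithm illustrating the principle behind generative design:
# evolve a material distribution that meets a strength requirement with
# minimal mass. The encoding and fitness function are illustrative only.
import random

SEGMENTS = 12            # a beam divided into 12 segments, each with a thickness
MIN_STIFFNESS = 40.0     # required stiffness proxy (arbitrary units, assumed)
POP_SIZE, GENERATIONS = 60, 200

def fitness(profile):
    """Lower is better: total mass, heavily penalized if too weak.
    Bending stiffness scales roughly with thickness cubed, so the sum of
    cubed thicknesses serves as a crude stiffness proxy."""
    mass = sum(profile)
    stiffness = sum(t ** 3 for t in profile)
    penalty = max(0.0, MIN_STIFFNESS - stiffness) * 100.0
    return mass + penalty

def mutate(profile, rate=0.2):
    return [max(0.1, t + random.uniform(-0.2, 0.2)) if random.random() < rate else t
            for t in profile]

def crossover(a, b):
    cut = random.randrange(1, SEGMENTS)
    return a[:cut] + b[cut:]

population = [[random.uniform(0.5, 2.0) for _ in range(SEGMENTS)]
              for _ in range(POP_SIZE)]

for _ in range(GENERATIONS):
    population.sort(key=fitness)                   # best candidates first
    parents = population[:POP_SIZE // 3]           # keep the fittest third
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP_SIZE - len(parents))]
    population = parents + children

best = min(population, key=fitness)
print([round(t, 2) for t in best], round(fitness(best), 2))
```

Even in this toy version, the instigator's role is reduced to setting requirements and picking among results; how the material is distributed emerges from the evolutionary loop.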

The organic shapes make figures #11 and #12 a departure from the other figures in the collection in that there are no straight sections and few straight lines. Forms like these are unintuitive and very difficult to create using standard computer-aided design (CAD) methods, where the use of consecutive lines and symmetrical angles can be hard to circumvent.

In addition to CAD, the shapes generated for these figures depend on CAM (computer-aided manufacture), in the form of 3D printing, to be realized as physical objects. The 3D printer I have available can only print up to a certain size: large compared to most 3D printers, small compared to most sculptures. Since I wanted Dog to be larger than the printer's maximum build size, roughly the size of a mid-sized child's legs and hips, I needed to break each leg into three parts that, when assembled, could be attached to the central hip-like body.
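The arithmetic behind splitting the legs is simple: compare the part's bounding box with the printer's build volume and divide along any axis that exceeds it. The build volume and leg dimensions below are rough assumptions for illustration, not the actual measurements.

```python
# Minimal sketch: how many segments a part must be split into to fit the
# printer's build volume. Dimensions in mm are rough assumptions.
import math

BUILD_VOLUME = (300.0, 300.0, 300.0)      # printer X/Y/Z capacity (assumed)
LEG_BOUNDING_BOX = (120.0, 150.0, 820.0)  # one leg, roughly child-leg sized (assumed)

def segments_needed(part, volume):
    """Split only along the axes that exceed the build volume; the total
    number of pieces is the product of the per-axis segment counts."""
    per_axis = [math.ceil(p / v) for p, v in zip(part, volume)]
    return per_axis, math.prod(per_axis)

per_axis, total = segments_needed(LEG_BOUNDING_BOX, BUILD_VOLUME)
print(per_axis, total)   # -> [1, 1, 3] 3: three sections along the leg's length
```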

The figure named Dog ended up having a very organic-looking form with a couple of peculiarities. Firstly, each of the two legs is identical to the other, which seems at odds with their organic shapes. Secondly, the squares used to attach the sections to each other appear as out-of-place protrusions from the flowing form of the limbs. The overall impression is of an organic, bone-like structure whose odd symmetrical squares and peculiar symmetry sit in dissonance with the shape's organic quality.

Computer-assisted design

When creating the figures in this research project, many factors guide or influence the design process: the possibilities afforded by CAD software and the features that software makes more readily available than others, as well as my own manufacturing skills, the available materials, and cost. The creation of Dog differed because the shapes created by generative processes are quite remote from those that can be intuitively created using a CAD system targeted at traditional parts design.

An interesting side effect of using generative design is that I found it changed my experience of agency in the process of making. When the forms are algorithmically developed, I feel my perception of myself as the creator being challenged. Usually I consider the tools and technologies I use as subservient (at least conceptually). Having used generative design to "grow" the shapes instils a feeling that most of the design is not created by me. In fact, the question as to who (or what) made the shapes for figures #11 and #12 appears fuzzy. Does the credit belong to the software engineers and mathematicians who created the algorithms that facilitated the generative design rather than me? Is it the generative software itself, since the shapes it generates are not predictable by the authors of the algorithms used to create them?

Tools and their agency

When designing shapes or making sound using software, many decisions are strongly influenced by that software's layout. Artistic decisions taken by a creator using such tools are never uninfluenced by the means themselves. When employing generative design to create forms, this fact becomes explicit.

Suppose making is understood as transforming matter from its current state to another. In that case, the process of making is the interaction between the materials used, the tools used to shape them, and the maker. In the exchanges between tools, maker, and material, what Tim Ingold describes as a "dance of agency,"1 they are partners taking turns to lead the dance of making. As a maker of animated figures, I experience this exchange daily. I accept that my creative decisions are determined by what I know of the tools I employ and the material I apply them to.

In any computer-aided process of making, for example using CAD software or modern audio production tools, the interactions between myself and the software are complex exchanges of adaptation on both parts. In my personal narrative I still consider myself the maker. However, when employing a generative mechanism for creating a design, the question seems more ambiguous. When creating shapes in CAD software by drawing up an outline and extruding it into three-dimensional computer-simulated space, I still experience myself as the primary actant determining the form, albeit as an actant aware of the influence and limitations of the tools used to do so. In generative design, the computer-simulated equivalent of thin air is populated by shapes drawn by an algorithm working in ways of which I have no understanding. I don't know why or how it makes the shapes it does, and I feel that much of my agency in creating those shapes is removed. I instigated the process by determining some parameters, such as the extent of the form and how strong it should be given the chosen material, but how those requirements are fulfilled is outside my control. Conversely, when using CAD software, I am still able to hold on to the idea that there is a conscious decision guiding every aspect of forming the shape towards its final form (although investigating the process can cast doubt on this). When using generative design, my status as creator seems to enter a grey area between creator and curator.

Some of the generative outcomes.

Selected design of front part of the leg, after some modeling, shown as mesh.

Who made it?

When I think about who created the material segments that make up figures #11 and #12, I am unsure if I can claim to be the creator. In designing the other sculptures, my process was to draw up the shapes in a CAD program and then have a 3D printer realize those shapes into what I consider actual reality by melting plastic into a shape corresponding to the form drawn up in virtual reality. In that case, I would still have no problem saying: "I made that." However, in this case, I am no longer so confident. My conception of the tool seems to have crossed a threshold, in some ways moving it from tool to entity: an entity that must have agency allowing it to create the forms that make up figures #11 and #12.

My perception seems to be influenced by the level of abstraction. When working in CAD, I have no direct physical sensation of the shape I draw in virtual space, nor of how it could translate to my physical reality, yet I interpret it as such. The software simulates something I can understand: three-dimensional space. And even if I have no connection to how this is achieved in the software, its familiarity makes me feel empowered. It operates in line with my self-image as something occupying space in the world by creating simulations of objects seemingly doing the same. In the generative algorithm's hidden workings, this understanding is no longer accessible; it does what it does without direct manipulation from the user. It is unlikely that any spatial considerations are part of the algorithms it develops to create the shapes. That doesn't occur until the result is on a two-dimensional computer screen and perceived by a human sensorium that can interpret it as such.

The question arises: What is the difference between the representation of forms generated in this way and those generated by the simpler algorithms, but algorithms nonetheless, used to draw any shape in simulated three-dimensional space?

The dance of agencies

When designing in CAD software, I create objects in simulated 3D space. These are then visualized for me on a two-dimensional screen, and although I have no tactile impressions of the objects, I can, through the mediation of a mouse and keyboard, "handle" the object. The software I use is the mechanism allowing me to draw up the item in a simulated space.

In doing so, my agency is just one of many, even if counting only the agency of people. The virtual object I am manipulating is created by knowledge and effort resting on the shoulders of many: the silicon chips the software runs on, and the coding languages created by some developers that allow others to write code that can be compiled for those chips. The mathematicians, engineers, designers, and architects who developed the conventions the software adheres to also play their part.2 These systems have their own established conventions for how CAD software should be laid out, in turn based on long traditions of pre-computerized design. My interaction with the software also hinges on my human perceptual ability to understand the concept of simulation. The software is also developed to create objects that correspond to fabrication methods and other infrastructure for making, existing in the world outside of the three-dimensional simulation.

Generative tools appear to me to inhabit a different ontological category than traditional CAD. I believe this is caused by my conception of conventional tools as being created by people in their entirety. Generative design, by its name and method, seems more "alive," and because its working method is hidden from me (since it develops by itself), I feel it to be different from the other technology I use to realize the figures.

When designing in CAD, one typically specifies the start and end points of lines creating a two-dimensional geometrical shape. These are then extruded into three dimensions. The space between the points is filled in by the software using various algorithms depending on the line's characteristics, such as whether it is curved. The extrusion of a simulated two-dimensional shape into three-dimensional virtual space is generated by specifying distances that the software then seemingly fills. Of course, there is no material from the CAD software's point of view, not even simulated. Any three-dimensional object consists only of minimal mathematical descriptions of geometrical shapes and their extents. There is no material between those extents for the CAD software because there is nothing between those extents, neither material nor non-material. The distance between what the virtual object is to the software and what it is to the human user seeing it on a screen is vast. It only becomes a shape in the encounter with the human sensorium and a perception able to categorize it as such.
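To illustrate how little a "solid" is to the software, the sketch below extrudes a two-dimensional outline into a mesh: the result is nothing but a list of vertex coordinates and a list of face indices. The outline is an arbitrary convex polygon chosen for the example, and the cap triangulation assumes convexity; real CAD kernels use far richer representations.

```python
# Minimal sketch: extruding a 2D outline into a 3D mesh. The resulting
# "solid" is only two lists: vertex coordinates and triangle indices.
# Assumes a convex outline so the end caps can be fan-triangulated.

def extrude(outline, height):
    n = len(outline)
    # bottom ring (z = 0) followed by top ring (z = height)
    vertices = [(x, y, 0.0) for x, y in outline] + [(x, y, height) for x, y in outline]
    faces = []
    # side walls: each edge of the outline becomes two triangles
    for i in range(n):
        j = (i + 1) % n
        faces.append((i, j, n + j))
        faces.append((i, n + j, n + i))
    # end caps: fan triangulation from the first vertex (valid for convex outlines)
    for i in range(1, n - 1):
        faces.append((0, i + 1, i))              # bottom cap
        faces.append((n, n + i, n + i + 1))      # top cap
    return vertices, faces

# An arbitrary convex quadrilateral extruded 10 units "upwards".
verts, tris = extrude([(0, 0), (40, 0), (50, 30), (10, 35)], 10.0)
print(len(verts), "vertices,", len(tris), "triangles")   # 8 vertices, 12 triangles
```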

The shape I perceive on the screen as a two-dimensional representation of an object minimally simulated in 3D space is generated by computer algorithms, just as the designs appearing after running a generative algorithm are. The software fulfils a minimal set of requirements for human conception to interpret it as a two-dimensional representation, a simulation, of a solid object. Most industrial design is created using technology like CAD. That means that the virtual space and the laws it adheres to are instrumental in shaping the physical objects we surround ourselves with and interact intimately with daily. This familiarity in turn informs the choices and preferences of form I have when interacting with CAD software.

In its conventional use, the CAD software draws up shapes in virtual space using algorithmic processes adhering to boundaries I define. The generative algorithms also draw up shapes in virtual space using algorithmic functions adhering to boundaries I define. Described in that way, there seems to be no significant difference between the two methods for creating shapes. But for me, as the "creator," they are experienced as different. One makes me question my agency as a maker, and the other does not. Why?

What is

One reason the use of generative design causes me to question my agency as a maker seems to be an encounter with the limits of description. In the previous section, using language, I gave an overview of the two working principles of computer-aided design and stated that, on the surface, they seem the same. Yet I experience them very differently. This is because the description encapsulates only a very general overview of the mechanisms' functioning principles and, from that, claims that they are similar. The similarity is relative and depends on scope or vantage point. The comparison would be very different if I focused on their differences rather than their commonalities, and different again if I were to attempt to describe their characteristics. My explanation is coherent in itself only because it cannot encapsulate (nor can my conception) more than a minuscule morsel of the ever-evolving interactions that make up the "dance of agency".

A small "reality" has been created by my description, containing a very general account of the two design methods and leaving out the many other possible narratives. My description and its content are defined by what it lacks more than anything else. Suppose I am a curator more than a creator when using generative design to create virtual forms. In that case, we could turn the same lens on the description that leads to that conclusion. I also curate which aspects of the dance of agency I describe and which I leave out, and this determines the content of the description. Ultimately, I experience this personally: my continuously variable curation of the connections I perceive shifts the ontological borders of my conception of agency as a maker.

Why it is

CAD software targeted at parts design generally uses the paradigm of design history. This allows any dimension to be adjusted at any point in the design process, after which all steps made later in the development, the "history", update to accommodate that change. This is computationally demanding and, in combination with a focus on easily accessible precise measurements, creates a bias towards straight lines and simple geometry. Generative design is very different. It relates to spatial extents, strength requirements (material dependent), and load points. Simple geometry is not one of the requirements. The results are therefore not dominated by simple geometry but rather by complex, flowing, organic forms. These are often far superior, when it comes to optimizing strength (if that is the design goal set for the generative algorithm), to anything a person can design using traditional CAD.

At first sight, it does not seem apparent that these shapes will be superior to the forms we are familiar with as being optimized for strength. They seem unlikely. This results from our shared familiarity with engineering conventions created by methods based on mathematics and traditional means. The shapes we think of when we imagine a structurally strong object result from the methods used to generate the strong objects we know, and this in turn guides what "strong" looks like to us. We are empowered by this heuristic dependability in that it allows us to predict, with some accuracy, the structural strength of an object. But we are also blinded by it. Generative algorithms are somewhat free from the restrictions imposed by such conventions, which results in the organic, flowing shapes associated with that type of design.
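The design-history paradigm described above can be caricatured in a few lines of code: the geometry is a chain of recorded steps driven by named dimensions, and changing one dimension replays every later step. This is a conceptual sketch of the paradigm, not how any particular CAD package is implemented; the part and its dimensions are invented for the example.

```python
# Conceptual sketch of the design-history paradigm: features are recorded
# steps driven by named parameters; changing a parameter replays the history.
import math

params = {"width": 40.0, "depth": 25.0, "height": 12.0, "hole_d": 6.0}

def sketch_rectangle(state, p):
    state["profile"] = (p["width"], p["depth"])
    return state

def extrude_profile(state, p):
    w, d = state["profile"]
    state["volume"] = w * d * p["height"]
    return state

def drill_hole(state, p):
    # subtract a cylindrical hole running through the full height
    r = p["hole_d"] / 2.0
    state["volume"] -= math.pi * r * r * p["height"]
    return state

HISTORY = [sketch_rectangle, extrude_profile, drill_hole]   # the ordered design steps

def rebuild(p):
    state = {}
    for step in HISTORY:          # replay every recorded step in order
        state = step(state, p)
    return state

print(round(rebuild(params)["volume"], 1))   # the original part
params["width"] = 55.0                       # edit an early dimension...
print(round(rebuild(params)["volume"], 1))   # ...and every later step updates
```

Generative design, by contrast, is not a replayable chain of my own operations; it takes the extents, loads, and material as inputs and returns finished candidates.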

In conclusion, when looking at the actions I, as an agent of making, undertake to design a shape, it seems on the face of it that the two methods are more or less the same. In both cases, the forms are generated by filling in virtual material between extents defined by the designer. However, for me as a maker, they feel different. With traditional CAD, I have no doubt of my status as the creator of the design; I feel firmly planted in the driver's seat. With generative design, I feel more like an instigator, with the structure generated by forces outside of my control. I am a harvester of the fruits of the labour of the generative algorithms. Both computerized design methods rely mostly on factors outside of my control and can, as I have argued here, be said to be similar. My sense of my agency is a matter of my perception.

If I shift focus from the tool itself to the motivation for using the tool, the picture of agency changes. After all, I opted to use generative design because I wanted to achieve a shape I was not able to design myself. Realizing that, and opting to enlist the aid of a generative algorithm, gives me agentic currency.

What it says

Although it is a relatively young technology, the bodies typically created by generative design already represent a design language. Some will recognize the style of the shapes generated and, like me, will most likely interpret the form as having been created "by machines," which carries its own associations. One is a widely shared narrative that causes us to feel uncanniness at the thought of the creative machine.

When this happens, it seems to be the borders of habitus that are encountered. We interpret the output of the generative algorithm as a creative act because this is how our habitus informs us such shapes should be categorized. There seems, then, to be a contradiction between the notion that creativity is something humans have and the idea of a creative machine (although comparing the result of a generative algorithm to human creativity would require a definition of creativity as it occurs in humans).

When composing sound, my experience has always been that a large part of the creative act is collection and curation. The sound sources I use, whether field recordings or artificially generated/synthesized audio, always depend on outside sources: the real-life sound source, or the decisions of the programmers, interface designers, and the musical tradition that inform the software I use for creating sound and music.

 

Sound and movement

Nether consists of three main components: an animated figure, a human performer, and an audio element played back over a PA. The figure's construction is discussed above, so here I will focus on the work with Alwynne and the production of the audio material.

The piece is intended to be mobile in the sense that Alwynne Pritchard should be able to bring everything necessary to perform the work when travelling, and my presence should not be required to realize a performance technically. The audio component is therefore delivered as a single stereo sound file, which a technician starts at a movement cue from the figure. To keep technical complexity down, synchronization between the figure's movement and audio events is loose; manually triggering the sound file provides sufficient synchronization. Ideally, a few points would have strict synchronization between the audio and activity in the figure, but I do not consider this critical to the artistic result. There are two versions of the audio component, and the performer may choose either for each performance.

The audio element is an attempted amalgamation of something I would describe as organic-sounding with something I would associate with the mechanistic. The desired effect is a feeling of something somewhat uncanny. This uncanniness is reflected and further underlined in the motion material developed in cooperation with Alwynne Pritchard, which is inspired by Mannerism as seen in art from the late Renaissance. The time-based disposition of the motion has three main sections. In the first and last, Alwynne assumes different "Mannerist"-style poses synchronized to the figure's movements. A description of the performer's actions follows:

00:00 – 01:11: Performer sits on knees on the ground, striving for a doll-like quality in all movements.

Movements are synced to the movements of the sculpture. Slow and deliberate. When blinking the eyes, do this slowly.

  1. Open mouth wide slowly in sync with sculpture raising left limb.

  2. Close mouth in sync with sculpture lowering left limb.

  3. Open mouth wide slowly in sync with sculpture lifting right side of the body.

  4. Turn head right in sync with sculpture lifting left limb.

  5. As the figure raises its back end, adopt a contorted pose: head twisted back, left hand palm up, left shoulder raised.

  6. Slowly release into: right hand raised, left hand relaxed, head slightly turned right.

01:11: Performer imagines her arms are now paralyzed; upper body convulsed and bent over, trying to find a way to move forward without the use of the arms.

01:59: Gradually abandon the convulsive movements, giving up on moving forward; remain bent forward. Relax the body.

02:25: Gradually raise the torso to a position seated on the knees. Listening and vocalizing – relaxed – eyes closed. Vocalizing is improvised but should strive to harmonize with sounds in the soundtrack. Perform text material3:

curious machine

bolts of bones

part by part (by part)

I am nether

dungeon

enslaved

fettered stands

blinded with an eye

03:25: Listening and vocalizing – slowly adopt doll-like contorted poses, eyes wide open.

04:20: Gradually raise the body to stand upright on the knees, facing the audience. Only move when the figure is moving. Adopt contorted poses.

04:48 – 05:40: Turn to the left, facing the upstage right corner. Continue small vocalizations.