Sound Design for Media: Introducing Students to Sound

Karen Collins and Bill Kapralos

Introduction

Sound plays a vital role in the communication of information in products and entertainment, such as application software, vehicle interfaces, and video games. In interactive applications, such as virtual environments and simulations, auditory cues can help a user to orient themselves, increase a sense of presence, compensate for poor visual cues (graphics), increase task performance, and add enjoyment and immersion (Shilling and Shinn-Cunningham 2002; Zhou et al. 2007). Done well, sound can help to communicate important information to an audience, serve as a symbol or leitmotif, situate the audience in a specific time or place, create or strengthen a brand’s attributes, and create a sense of realism. (Done poorly, sound can have the opposite effect.) In particular, sound helps to immerse the audience in media, stimulate emotional investment, anthropomorphize objects, and create attachments.


Given the importance of sound, the study of sound should be a part of any multimedia-based curriculum, whether that curriculum is for fine arts, game development, web design, advertising, product design, film, or any other contemporary media. A basic competency in sound design is increasingly important, not only for formal design practices but also for more abstract creative art practices, as sound art grows in importance in the art world.(1) As Yantaç and Özcan describe, it is unfortunate that today, “in interactive media design education most students have difficulty in designing creative and functional interactive ideas which also include sound” (2006: 91). Although some inroads have been made in training design students in parts of Europe, in much of the world—particularly North America—sound remains considerably neglected.


While there are often dedicated programs at the trade college level for sound engineers (who record and mix sound) or sound designers (who select and create sound effects for various media), courses on sound geared towards interdisciplinary or creativity-based majors at the university undergraduate level are rare. Although music has a considerable base of instructional literature available (indeed, there are many journals devoted to music education), the other auditory practice, sound (effects) design, is surprisingly underserved. Even within music or film composition courses, it is rare to have any instruction on non-musical sound. 


One of the potential reasons for this lack of curricular consideration of sound is that there is little didactic material available in terms of instruction on sound design beyond trade books for practitioners or textbooks aimed at a very specific practice without any theoretical base. Part of the difficulty with finding teaching material is the disciplinary division between areas of sound study: Music therapists have searched for ways to alter mood using sound and music (Peretti and Swenson 1974), psychologists have studied the roles that sound plays in influencing perception and cognition (Baumgartner et al. 2006), and psychoacousticians have sought to understand the physical aspects of sound perception (Arias and Ramos 1997). Marketing experts and interaction designers have studied the role that sound plays in altering perception of products (Yorkston and Menon 2004; Hug 2010), and semioticians have sought to understand sound as a symbolic language (Pirhonen 2007). From a more technical standpoint, acousticians and audio engineers have studied the physics of sound as well as its perception and reproduction (Nyberg and Berg 2008), while interface and product-related sounds have been studied from the area of human-computer interaction and sonic interaction design (Korhonen et al. 2007; Franinović and Serafin 2013). Meanwhile, the study of sound in media is generally segregated into areas of film studies or game studies (Chion 1994; Collins 2008), but the newly emerging area of “sound studies” suggests an attempt to amalgamate some of this disparate research. Indeed, the sheer number of disciplines involved in sound indicates the importance that sound should play in many of these areas of study.

 

The lack of instruction available on sound design may also be related to the misperception that sound design is merely the programming of realistic sound effects to match and accompany a visual image, a misperception that stems perhaps from film, where the maxim “see a dog, hear a dog” (Kenny 1993: v) is often quoted to describe the practice of sound design. But sound design is, for the most part, a creative rather than a strictly technical practice. Consider, for instance, the decisions that must be made in terms of how to represent fictional spaces, psychological states, or brands, or how to use sound as a metaphor, counterpoint, or symbol. In fact, sound provides many important functions in a wide range of media (Collins 2007; Cohen 1998). Most importantly, sound plays a critical role in driving the emotional involvement with these media forms. George Lucas has even suggested that “Sound is 50 percent of the moviegoing experience, and I’ve always believed audiences are moved and excited by what they hear in my movies at least as much as by what they see” (Mellor 2011). A very clear way of demonstrating the role that sound plays in evoking emotion is to take a film clip and replace the existing sounds with different sounds. In our course, for instance, we use a clip of Jaws (depicting a scary and threatening scene) in which the music and sound effects are replaced with comical ones. The threatening scene is turned into slapstick humour through the use of sound alone.

As with visual art practice, sound design takes many years to master, but a basic understanding of how to use and implement sound effectively within multimedia applications can be achieved in a single course. In this paper, we present an overview of some exercises, along with their theoretical underpinning, undertaken in interdisciplinary sound design courses taught to undergraduate digital designers, artists, and game developers with the aim of illustrating a scaffolding approach to teaching about sound and its importance within multimedia. Scaffolding provides “scaffolds” or supports that allow students to build on prior knowledge with each exercise, while enabling them to internalize new information with each stage or level of learning (Vygotsky in Raymond 2000: 176). Scaffolds include, for instance, examples or models of prior work, prompting or hinting, and partial solutions (Hartman 2002). The scaffolds are gradually removed as the student becomes more knowledgeable and more independent. In this way, scaffolding provides a clear pathway that students should follow in order to achieve a desired outcome, whereby the student builds on prior knowledge and exercises and is enabled to form new knowledge independently. In other words, each of the exercises described below builds upon the skills learned in the previous exercise. Students engage in periods of exploration during their own project building and during group critique sessions where they learn to discuss and evaluate sound design.

 

We present an approach based on the experiences of two professors involved in teaching sound to students who had no prior training in sound. Our exercises have developed over approximately ten years of experience in teaching sound design, and our evaluation of their success involved conversation and interactions with students, course evaluation questionnaires, and student work. One author was extensively involved in a curriculum development plan specifically designed to teach sound design to both sound majors and non-sound majors (IAsig 2011). As such, discussions with other educators and video game industry professionals about the best ways to teach sound took place over the course of several years, and the publication of the curriculum guideline has itself received considerable feedback from students, educators and professionals (Collins, Önen and Stevens 2011). In other words, the exercises have been explored in practice in the undergraduate classroom and have, to varying degrees, been evaluated by both other educators and industry practitioners. 

Overview of the Courses

These exercises have been used in the context of two different courses. The first was an undergraduate 20-student Arts and Business class at a large comprehensive Canadian university (University of Waterloo). This course (DAC 301: Sound for Digital Media) is a third-year elective Digital Arts Communication credit and attracts students from a variety of majors interested in various aspects of design, including fine arts, film, games, communication media, and product design. The second course (INFR 2370: Sound and Audio for Games) was a second-year course within the Game Development and Entrepreneurship program offered within the Faculty of Business and Information Technology at the University of Ontario Institute of Technology (UOIT) in Oshawa, Canada. The majority of these latter students were studying game development, while several students each year came from other disciplines (e.g., Computer Science, Engineering) and took the course as an elective. In each course, the desired learning outcomes were to introduce the basic uses of sound, sound editing and recording, and sound design, with a particular focus on the creative use of sound in a variety of media contexts.

(1) It was not until 2010 that a sound artist, Susan Philipsz, won the Turner prize and raised awareness of sound art.

Jaws with alternate sound

Scaffolding the Skills

The scaffolding approach in the courses involves several stages:

 

  1. Introducing one or more new terms and skills in class, building on the skills and terminology learned in the previous exercise.
  2. Having students work independently or in groups on an exercise (see Exercises below) and undertake readings outside of the classroom.
  3. Discussions and critiques of the exercises (see Assessment below) and readings in the following class, reinforcing the terminology and skills learned.

 

Class time was used for more traditional lectures, theory and reading discussions, hands-on activities, group work, and critiques. Students were first introduced to the week’s terminology and taught one or more basic skills in a class; they were expected to supplement their learning with readings, online software tutorials, and the exercises. Those readings and the exercises were discussed in the following class, which helped reinforce those skills in time for new material. During in-class activities and group work, we made an effort to pair advanced learners with those who were struggling. Examples are given in the discussion of individual exercises below. 


Exercise 1. The Sound Walk: Listening to sound


Aims and Skills:

  • Have students listen to and think about sounds in their environment.
  • Introduce students to the terminology required to talk about and discuss sound.
  • Introduce students to basic acoustic theory.
  • Introduce students to the notions and importance of self-produced sound.

 

On the first day of class, students undertake a sound walk (Westerkamp 1974), whereby they “collect” and classify sound. They are instructed to leave the classroom for fifteen minutes and select a location, inside or outdoors, where they must write down any sounds that they hear. Upon returning to the classroom, students list some of the sounds they heard on a whiteboard, and a discussion follows in which students are asked to describe those sounds. What becomes clear, as the sounds are listed, is that the language for describing sound is much less concrete than for describing images. Most students describe sounds in terms of causality (e.g. “dog barking”), but for sounds where causality was obscured, they may use perceptual descriptors (e.g. “loud”, “rough”, or onomatopoetic descriptions such as “beep”), while several students (with a musical background) may use musical terms (“a staccato beeping C major”). Although rare, students may have some acoustic knowledge and use this prior knowledge to describe sounds (e.g. “quick attack, rapid fade at 400 Hz”). This imprecise categorization of sounds leads to a discussion of the ways that we can listen to - and attend to - sound. 


Listening is not merely the hearing of sound, but also consciously attending to that sound. Here, film sound theorist Michel Chion’s categorization of three basic listening modes is useful. Chion (1994: 28) describes first the most common mode of listening, what he terms causal listening. Causal listening is when we focus on, or recognize, the cause or source of the sound. We gather information based on the sound: where a sound is located, what type of object caused the sound, and so on. Studies have shown that we identify sound by creating a mental picture of the cause, sometimes in the form of a mental stereotype, or through words that allow us to describe the sound (Ballas 1997). In comparison, semantic listening refers to the ways in which we listen to, and interpret, a message that is bound by semantics, such as spoken words. Causal and semantic listening are not mutually exclusive—we can listen to both the words someone says as well as how someone says them (and the fact that, for example, a voice is the source of the sound). Finally, reduced listening denotes listening that focuses on the traits of the sound, independent of cause and meaning. Reduced listening focuses, for instance, on the quality or timbre of a sound. For example, in Fallout 3 (Bethesda Game Studios 2008), a broadcast tower is sending out beeping signals: if we were listening causally, we would likely assume that some form of electronic equipment is making the sound. If we were listening semantically, we may listen to the message in Morse code, and if we were listening in a reduced fashion, we may describe the sound as a sine wave at about 3000 Hz in short bursts of approximately one half-second each.
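
A reduced-listening description such as this is precise enough to reconstruct the sound from its parameters alone, which can make for a useful in-class demonstration. A minimal sketch, assuming Python with numpy (the burst spacing, fade lengths, and output file name are our own illustrative choices, not taken from the game):

```python
import wave

import numpy as np

SR = 44100  # sample rate in Hz

def sine_bursts(freq=3000.0, burst_s=0.5, gap_s=0.25, repeats=3):
    """Synthesize the beep as described: a ~3000 Hz sine in ~0.5 s bursts."""
    t = np.arange(int(SR * burst_s)) / SR
    burst = 0.5 * np.sin(2 * np.pi * freq * t)
    ramp = np.linspace(0.0, 1.0, int(SR * 0.005))  # 5 ms fades avoid clicks
    burst[:ramp.size] *= ramp
    burst[-ramp.size:] *= ramp[::-1]
    gap = np.zeros(int(SR * gap_s))
    return np.concatenate([np.concatenate([burst, gap]) for _ in range(repeats)])

signal = sine_bursts()
with wave.open("beacon.wav", "wb") as f:  # hypothetical output file
    f.setnchannels(1)                     # mono
    f.setsampwidth(2)                     # 16-bit samples
    f.setframerate(SR)
    f.writeframes((signal * 32767).astype(np.int16).tobytes())
```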


What is remarkable about this exercise is that most students will comment that they heard many more sounds than they usually hear, and many will remark that they cannot recall ever stopping to listen to sound before this exercise. Most students omit their own sounds in the environment—the pen scratching on paper, or their breathing, for instance—and it becomes a useful teaching moment to talk about the role of self-produced sound in defining our own identity and personal boundaries (Rochat 1995). At this stage, the notion of acoustic ecology, the idea that the sonic environment is a soundscape whose composition we are responsible for (Schafer 1977), is also introduced.


Students are then provided with an introduction to some basic acoustic terminology to give them a scientific language for discussing sound beyond causality. They are finally tasked with returning to their original location outside of the classroom, choosing a single sound, and describing that sound without referring to its cause. Returning to the class, they must describe the sound to other students, who try to guess what sound is being described. A dog barking, for example, is now changed to “short, staccato, quick attack and decay, low to mid-frequency range of about 500 Hz, rough, angry sound, possibly a warning sound.” Although students are rarely able to guess what sound is being described, the exercise is very useful in helping students to think about sounds beyond causality. This thinking is important because sounds can be, and often are, used in media not for their causality, but rather for their semiotic connotations, affect, timbre, colour, etc. In other words, providing students with another way to listen to and describe sound expands their thinking about sound.


Homework for the week involves the students repeating the exercise every day and recording their comments in a sound journal after undertaking online skill modules in basic acoustics that reinforce the terminology discussed in class. Teams of students each pick one sound from their journal and describe it in non-causal terms to the class, while the other student teams are required to determine (guess) the sound, in a simple game-like manner that uses competition to encourage correct guesses. The journals are not assessed beyond checking that students undertook the work. Students often report that they “never really thought about sound” before the journal exercise, but that they had “opened their ears” to sound following this conscious focus. Moreover, the exercise is particularly effective in getting students to learn and feel comfortable with using basic acoustic terminology.

 

Exercise 2. Sonic Mood Boards: Rethinking traditional design approaches


Aims and Skills:

  • To expand basic knowledge of acoustic theory with the introduction of digital effects processing.
  • To expand basic knowledge of acoustic theory through the introduction of some basic psychoacoustic theory.
  • To introduce how sound drives emotion. 

 

The second exercise introduces digital effects processing (reverberation, delay, overdrive, amongst others) and focuses on the emotional role that sound can play in media. In-class work involves discussing and working through examples of sonic mood using media clips both brought in by students and created by the professor. Students freely associate with sound and music clips that use different digital signal processing (DSP) effects (such as phasing, echo, reverb, and so on), discussing the associations created by those effects. For example, phasing is often associated with drug use, illness, hallucination, and psychosis, and overdrive is often heard as angry, aggressive, and male.
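
At the signal level, some of these effects are remarkably simple, which can help demystify them for students. A minimal sketch of two of the effects named above, assuming a mono signal stored as a numpy array of samples in the range -1 to 1:

```python
import numpy as np

def echo(signal, sr=44100, delay_s=0.3, feedback=0.4):
    """Feedback delay: y[n] = x[n] + feedback * y[n - d], heard as repeats."""
    d = int(sr * delay_s)
    x = np.concatenate([signal.astype(float), np.zeros(d * 5)])  # room for tail
    y = np.zeros_like(x)
    for n in range(x.size):
        y[n] = x[n] + (feedback * y[n - d] if n >= d else 0.0)
    return y

def overdrive(signal, gain=8.0):
    """Soft clipping: tanh flattens peaks, adding harmonics heard as 'aggressive'."""
    return np.tanh(gain * signal.astype(float))
```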


Students are then tasked with using DSP clips and sound effects to establish a mood. The exercise draws on the concept of mood boards, “assemblages of images and, less frequently, objects, which are used to assist analysis, creativity and idea development in design activity” (Garner and McDonagh-Philp 2001: 57). In visual media, a mood board “potentially stimulates the perception and interpretation of more ephemeral phenomena such as colour, texture, form, image and status. They are… partly responses to an inner dialogue and partly provocation to become engaged in such a dialogue” (Garner and McDonagh-Philp 2001: 57). Each mood board is designed to elicit a particular mood or feeling associated with a product or design, often presented to clients in design briefs. In practice, we have only come across visual mood boards - collages of visual images often cut from magazines or websites. Students in the class are often familiar with visual mood boards through other courses in the program or through prior high school programs.


This sonic mood board is defined broadly in terms of its implementation, which is to say, students are provided with an overview of visual mood boards and are then tasked with inventing their own way to create and present a sonic mood board. Students select sound effects from various freely available sound libraries (e.g. www.freesound.org) and alter these sounds using digital effects processing in order to establish or increase the sense of mood. We draw on famed sound designer Walter Murch’s (2005) analogies between sound timbre or frequency range and colour, allowing the visually trained students the opportunity to apply theories from visual design to auditory design. Commonly, students take one of several approaches:

 

  1. A PowerPoint slide with sound samples whereby the student activates one sound after the other in succession.
  2. An ambient “sound bed”, where sounds are layered on top of one another as might be found in the soundtrack of a film. 

     (AudioObject1: “Anxiety” by Rebecca Lee, DAC301, 2012)

  3. A slideshow where sounds are introduced along with music. 

It is notable that unless students are explicitly instructed not to use visuals or music, many tend to rely on them: at this stage, many students still lack confidence in their own abilities, or in the ability of sound effects to convey emotional meaning, and resort to familiar media experiences where music accompanies a visual image to create emotion. We discourage this reliance, since the purpose of the exercise is to explore the emotional impact of sound effects and sound processing. 


Assessment is accomplished through peer critique in the following class. Students must attempt to guess what mood the student designer is attempting to convey with his/her mood board, and the designer must present and discuss why he/she chose the sound effects and how he/she used the DSP effects to create that mood. Being able to explain their choices reinforces the terminology learned in the previous lesson and helps the students to feel comfortable using the newly introduced terminology relating to DSP and psychoacoustics.

 

Exercise 3. The Sonic Storyboard: Telling a Story Through Sound 


Aims and Skills: 

  • To advance the understanding of the emotional role of sound by introducing the narrative and informative aspects of sound in media.
  • To advance the understanding of DSP and sounds working together by introducing mixing.
  • To advance the understanding of psychoacoustics by exploring sound perception in narrative media.

Once students have explored how emotion can be conveyed or evoked through sound effects, they are ready to explore sound in a narrative. Students bring in and discuss examples of songs that they feel tell a story. Individual sounds that contribute to that story are compared in class discussion, relying on the terminology learned in the previous two exercises. Special attention is paid to how sounds are placed in the mix of the song - what is foregrounded and what provides background atmosphere. A discussion of dynamic range and exercises around gestalt theory help students to understand how sound objects can work together as components of a whole. Visual examples of gestalt principles are introduced (Akerman 2011), and students explore corollaries in the sonic realm.


The third exercise then introduces the practical aspects of sound mixing to the student (mixing is the combination of sounds in a sequence through adjusting the intensity and frequency range of each sound).
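
The parenthetical definition can be made concrete in a few lines of code. A minimal sketch, assuming each sound is a mono numpy array at a shared sample rate; the one-pole filter is a crude stand-in for the frequency shaping a real mixer provides:

```python
import numpy as np

def db_to_gain(db):
    """Convert a fader level in decibels to a linear gain factor."""
    return 10 ** (db / 20)

def lowpass(signal, sr=44100, cutoff=2000.0):
    """One-pole low-pass: dulling a sound pushes it to the back of the mix."""
    alpha = (1 / sr) / (1 / (2 * np.pi * cutoff) + 1 / sr)
    out = np.zeros(signal.size)
    for n in range(1, signal.size):
        out[n] = out[n - 1] + alpha * (signal[n] - out[n - 1])
    return out

def mix(tracks):
    """Sum gain-adjusted tracks; tracks is a list of (signal, gain_db) pairs."""
    bus = np.zeros(max(s.size for s, _ in tracks))
    for signal, gain_db in tracks:
        bus[:signal.size] += db_to_gain(gain_db) * signal.astype(float)
    return np.clip(bus, -1.0, 1.0)  # guard against digital clipping
```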


Students must tell a simple thirty-second story using only sound effects. They are given several story scenarios from which to choose. Each story represents a series of actions that can be conveyed, but also has an emotional basis:

 

  1. You arrive home and scare your cat or dog, who knocks over your favourite vase. (AudioObject2: anonymous, Digital Arts Communication 301, 2012)
  2. It’s the bottom of the ninth inning, and you’re up to bat.
  3. Alone in a parking garage, you fear someone is following you.
  4. You are lost alone in the woods and spend the night in an abandoned barn.

Students must find sounds from sound libraries and mix the sounds to create the appropriate sense of space and place to convey the story to the listener. One purpose of this exercise is to provide the student with the opportunity to think about using loudness to create a sense of depth and to create dynamic range over time, whereby the most important sounds are the loudest in the mix, regardless of distance, leading the listener up to a peak crisis point. Analogies can be drawn between the classic story plot structure usually taught in high school and the need to maintain a similar emotional series of peaks and troughs, or dynamic range, in narrative sound design. Students are encouraged to visually map out a general “emotion map” of the story and tie the dynamic range of the sound to the map (Sonnenschein 2001).
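
The tie between the emotion map and the dynamic range can be made literal by converting the map into a gain envelope. A minimal sketch, assuming the map is expressed as hand-picked (time in seconds, intensity 0-1) points; the example values are hypothetical:

```python
import numpy as np

def emotion_envelope(points, duration_s, sr=44100):
    """Linearly interpolate (time, intensity) points into a per-sample gain curve."""
    times, levels = zip(*points)
    t = np.arange(int(sr * duration_s)) / sr
    return np.interp(t, times, levels)

# Hypothetical map for the parking-garage story: unease, rising tension, crisis, release.
envelope = emotion_envelope([(0, 0.2), (18, 0.5), (25, 1.0), (30, 0.4)], duration_s=30)
# story = envelope * story  # apply to a mono mix of the same length
```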


Again, in-class peer critiques are used to assess the student work, enabling students to learn from each other’s work and further instilling the use of correct terminology to discuss the projects. Students write down any imagery that comes to mind when listening to the storyboard and share these with the class. This free association exercise is particularly valuable, and students are often surprised at the commonalities that they share with classmates in describing the connotations of fairly abstract sounds. Basic semiotic theory is introduced at this stage to provide students with a language with which they can discuss sound as symbol and discuss the universality (or lack thereof) of particular types of sound (some sounds are universal in the sense that we have a basic bioacoustic understanding of them (Collins and Tagg 2001)).

 

Exercise 4. Narrative Scripts: A modern radioplay


Aims and Skills: 

  • To build upon the understanding of mixing and sound in narrative by listening to and creating radio plays.
  • To introduce microphones and recording techniques.

In the fourth exercise, we introduce the recording of sound effects and voice, providing students with the opportunity to learn about microphones and recording techniques. The microphone skills include, for instance, placement and distance of the microphone from the mouth (or other sound-emitting objects). Here we draw on Hall’s (1963) conception of proxemics and personal space to create recordings designed to elicit an intimate feeling versus more distanced recordings. Songs are used to compare microphone techniques on voice and instruments such as guitar across different genres, followed by a discussion of the emotional effect of each technique (for example, crooners create a sense of intimacy through close miking). Students then listen to some short radio plays in class and undertake an analysis of the use of sound in the recordings (Huwiler 2005; Verma 2010). In particular, attention is paid to the digital effects processing and dynamic range principles learned in previous exercises, with the addition of the concept of proxemics for emotional impact.

 

Students must then find a script from a radio play (many are available freely online) and record four to five minutes of the script, including sound effects, music, and voice. At this stage, students may use pre-recorded sounds from available sound libraries, such as FreeSound.org, rather than record their own. However, after becoming comfortable with the recording process, many prefer to create their own sounds and only rely on sound libraries for those sounds that they cannot record themselves (e.g., a lion roaring). The purpose of this exercise is to have students attempt to create as much of a mental image of space, place, and character for listeners as possible. Indeed, many people still listen to radio drama because, just as in reading a book, the imagery for the story unfolds in the mind (Cazeaux 2005: 157). Sounds are selected and created based on their ability to evoke a specific mental image and mood for the listener, and an iterative feedback process is undertaken whereby other students in the class freely associate the imagery that appeared in their mind when listening to particular sounds.

Students again present their work to the class for peer critique. These radio dramas are critiqued using all of the technical and theoretical skills learned up to this point: acoustic terminology; mixing technique; microphone technique; DSP; the use of sound for emotional, narrative, and informational purposes; and concepts arising from semiotic theory. Students are now assessed not only on their work, but also on their ability to present well thought-out critiques of their peers, incorporating the skills and terminology that they have learned.

 

Exercise 5. Creating a Collision: Thinking through a sound event


Aims and Skills:

  • To build on recording technique to incorporate the field recording of sound effects. 
  • To build on an understanding of mixing and proxemics in introducing spatial sound.
  • To build on an understanding of the manipulation and post-processing of sound using DSPs.
  • To introduce notions of interactivity and examine the impact that interactivity has on the listener.
  • To explore the FMOD game audio middleware engine.

 

In Exercise Five, students are first introduced to issues of interactivity: specifically, sound in games and the need to account for repetition and variability. For example, a simple shotgun sound may be made up of several smaller elements, including a basic rack, a trigger sound, a shell sound, and so on, which are mixed together with each sound element randomly selected. Students bring in examples from games that incorporate variable sound effects, and we analyze these in class.
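
The shotgun example reduces to data plus a little randomness. A minimal sketch of the idea; the file names and variation ranges here are hypothetical, and the actual playback is left to the audio engine:

```python
import random

# Hypothetical pools of recorded variants for each element of the shotgun sound.
ELEMENTS = {
    "rack":    ["rack_01.wav", "rack_02.wav", "rack_03.wav"],
    "trigger": ["trigger_01.wav", "trigger_02.wav"],
    "shell":   ["shell_01.wav", "shell_02.wav", "shell_03.wav"],
}

def build_shot():
    """Assemble one firing: a random variant of each element, slightly varied."""
    layers = []
    for element, variants in ELEMENTS.items():
        layers.append({
            "element": element,
            "file": random.choice(variants),
            "pitch": random.uniform(0.95, 1.05),   # small detune per layer
            "gain_db": random.uniform(-2.0, 0.0),  # small level variation
        })
    return layers  # handed to the engine to play simultaneously
```

Because each firing draws a fresh combination of variants, pitches, and levels, no two shots sound identical, which is precisely the repetition problem the exercise asks students to solve.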


Students then form groups of two or three and are tasked with simulating (in the audio domain only) the collision of a car into a brick wall. Although they must adhere to the following requirements, they are encouraged to be creative and use their imagination with respect to sound recording. More specifically, they do not have to explicitly record the required sound (e.g., glass breaking) but can record something similar (e.g., “squishing” a plastic bottle) and then modify it to achieve the required result.

 

  • The car and the wall are initially separated by at least 300m.
  • The car is initially “parked”. It will then start, idle for a brief period (several seconds), and then accelerate towards the wall. Prior to hitting the wall, the car will decelerate for a brief period of time.
  • All of the sounds must be recorded (post-processing is allowed), and the following sounds must be included (any other sounds can also be included):
    • Glass breaking, shattering, etc.
    • One person in the car screaming
    • Car horn (prior to hitting the wall)
    • Car brakes (skidding tires, etc.)

Each group is also required to submit a brief report outlining the process they took to record each of their sounds and providing details of any post-processing effects they incorporated. There have been many creative solutions to the problem, including one group that created the entire simulation using only sounds produced with their own mouths and voices.


The major new skill introduced in this exercise is interactivity: the car must be able to be “driven” by a “player” in the FMOD game audio engine, middleware that integrates audio into a video game. Students must think through and account for variability in the crashes (e.g. altering the mix in real time) and for parameter-driven sound events (such as the speed of the car).
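
In FMOD, students wire this logic up graphically rather than in code, but the underlying mapping from a game parameter to audio behaviour can be sketched as follows (illustrative logic only, not FMOD’s actual API):

```python
def engine_update(speed_kmh, max_speed=120.0):
    """Map a gameplay parameter (car speed) onto audio parameters each frame.

    Mirrors the kind of 'Speed' parameter students set up in FMOD to drive
    pitch and a crossfade between idle and revving engine loops.
    """
    s = min(max(speed_kmh / max_speed, 0.0), 1.0)  # normalize to 0..1
    return {
        "engine_pitch": 0.8 + 0.7 * s,  # loop pitch rises with speed
        "idle_gain":    1.0 - s,        # crossfade the idle loop out...
        "rev_gain":     s,              # ...and the revving loop in
    }
```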


Student groups once again present their work to the class for peer critique and grading. Students grade their peers on the realism of the simulation and the “creative effort” to simulate the sounds, given that they could not actually crash a car into a wall and record the resulting sound. The professor grades each group according to the same criteria, along with the written report each group provides. Peer grades form part of each group’s final grade.

 

Exercise 6. The Spotting Session: Film/Game Sound


Aims and Skills:

  • To build on an understanding of the functions of sound in media.
  • To introduce students to Foley sound recording and vocal sketching.
  • To make creative decisions regarding where and when sound should be used.

 

While students continually consider the many roles that sound plays in multimedia throughout the course, an in-class spotting session with a short film or video game highlights these functions directly and allows students the opportunity to think about where and how sound can best be used. Students watch a collection of short film clips and identify the multiple functions that sound plays in each clip, including for instance:

 

  • Communication of emotional meaning, helping to immerse the player in media and so invest the player emotionally in a narrative.
  • Anticipating action, particularly prevalent in games, where the player is given cues to take a particular action.
  • Drawing attention, focusing the audience’s attention on particular objects or characters. As Cohen (2001: 258) describes, sound focuses our attention, as when a “soundtrack featuring a lullaby might direct attention to a cradle rather than to a fishbowl when both objects are simultaneously depicted in a scene.”
  • Structural functions where sound can provide links or bridges between scenes or indicate openings and endings.
  • Spatial and environmental functions through creating a sense of space or place to suspend disbelief and add realism.

 

In groups of three to four, students are assigned a one-minute clip from a movie or video game from which the sound has been removed, and they discuss the role that sound should play in their clip (they “spot” the clip for sound).


Students create a cue sheet, where timings, functions, and sonic events are described in a chart that separates ambience, dialogue, music, sound effects, and Foley (a sketch of such a sheet in code form follows below). Students then create a live Foley session. Foley is the term for recording sound effects in the studio using a variety of props; the term usually refers to everyday natural sounds rather than special effects. Groups must create Foley sounds for the clip in real-time in front of the class (Hug 2010; Franinović, Hug and Visell 2007). Using “found sound”, the students explore ways that they can create sound from objects around them in the classroom (they are also permitted to leave the classroom to find other suitable objects). They are given approximately one hour to prepare and practice sound-making, and then perform it live. The lack of time to prepare the sounds encourages onomatopoeic and vocal expression and the rapid exploration of sound-generation from everyday objects: students will pick up a garbage can, for example, and explore squeezing, banging, rubbing, and other interactions with objects and bodies, which gets them to think about everyday objects in new ways.(2) When students cannot find objects to create sounds, they are free to use vocal sketching (Ekman and Rinott 2010).
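
A cue sheet translates naturally into a small data structure, which can help students who are more comfortable with code than with charts. A minimal sketch with hypothetical entries:

```python
from dataclasses import dataclass

@dataclass
class Cue:
    start: float   # timecode in seconds
    end: float
    track: str     # "ambience" | "dialogue" | "music" | "sfx" | "foley"
    event: str     # what happens
    function: str  # why the sound is there

cue_sheet = [
    Cue(0.0,  60.0, "ambience", "night wind, distant traffic", "establish place"),
    Cue(4.5,   5.0, "foley",    "footsteps on gravel",         "track character"),
    Cue(12.0, 14.0, "sfx",      "door creak",                  "anticipate action"),
]
```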


This exercise is typically the students’ favourite, and they are remarkably adept at creating and timing the sound events after such a short preparation period. Students often remark that when they go home they have discovered a new habit, namely that of picking up objects and trying to elicit sounds from them (an excellent skill for a sound designer). Students are again peer assessed, with the emphasis on creativity rather than technique in this particular exercise. Discussion takes place regarding their approach, particularly where other students would have made different choices, e.g. in leaving out, adding, or altering sounds. Again, students must explain their choices in terms of the theory and terminology that they have learned to date.

 

Exercise 7. That’s not the right sound! Synchresis Session


Aims and Skills:

  • To consolidate all skills learned to date.
  • To emphasize creativity and exploration and “out of the box” thinking about sound.

 

What is remarkable about digital media technologies today is the ease with which we can separate a sound from its source and re-associate a sound with a new visual source to create new meanings, what film sound theorist Michel Chion (1994: 63) refers to as synchresis, “the spontaneous and irresistible mental fusion, completely free of any logic, that happens between a sound and a visual when these occur at exactly the same time.” In this way, the fusion of sound and image leads to new meanings that may alter, or add to, the original meanings of the sound and the image. This separation and subsequent integration of sound and its causal agent are central to most sound design in today’s audiovisual media, whether live-action film, animation or video games. For instance, we hear a recording of a stalk of celery being snapped, but through its association with a visual image of a bone breaking, we hear that sound as a bone break. This idea of a fused audiovisual emergent meaning has been popular in film sound. Sound designer Walter Murch remarks, for instance, “Despite all appearances, we do not see and hear a film, we hear/see it” (quoted in Chion 1994: xxi). Murch describes a phenomenon he calls conceptual resonance between image and sound, in which the sound makes us see the image differently, and then this new image makes us hear the sound differently, which in turn makes us see something else in the image and so on. In other words, there is a new meaning generated from the ways in which sound and image work together.

 

Students are introduced to the films of Jacques Tati, who often used highly exaggerated or “wrong” sounds for comical effect. In the kitchen scene of Mon Oncle (1958), for instance, Monsieur Hulot touches a hot radiator with his hands. The radiator buzzes like a door entry alarm at the touch. We might expect Hulot to articulate some sound at the burning of his hands, but the buzzer fills in both for his exclamation and for the radiator, as if to say “don’t touch” and “ouch” at the same time. The sound on its own cannot be associated with either the man or the radiator—it simply does not have any causal connection to either—but through its contextualization, an emergent meaning is formed. Indeed, Tati relies on such sound gags for much of his aesthetic: it is through such contextualization that we re-experience otherwise rather mundane scenes.

 

In the final exercise, students take a pre-existing clip of film or animation, remove the original sound, spot the clip for sound, and then use “wrong” but still believable sounds in place of the original sound. In other words, students must select sounds that are not causally related to the sound-emitter on-screen for their selected clip. For example, dogs may not bark as dogs, and a recorded sound of a door closing cannot be used to indicate a door closing. This exercise is tricky, as the result must still “sound believable”, relying on context for the emergent meaning. Nevertheless, the exercise encourages originality and creativity in sound design, and often elicits some surprising results as students discover how far they can push the bounds of believability. Again, feedback sessions in class with early attempts are a useful way to discuss what works and what does not in context.

(2) Another option is to allow students out-of-classroom time to find and prepare sounds. Although this most likely does not encourage “un-obvious” sound exploration (e.g., if I have to make a gunshot sound, given time, it is easy to access a cap gun and fire it, but if I have only a rubber band, a garbage can, and a pencil, I need to use my imagination much more), we might use it to promote and develop an appreciation of field recording.

AudioObject3: Duffy’s Tavern excerpt, by Daniel Pearson Hirdes, DAC301, 2012


A comparison of approaches to one Grand Theft Auto clip


Mon Oncle by Jacques Tati

Evaluation of Teaching Methods

In short, three aspects stand out with regard to the teaching methods used:


  • Being able to give and accept criticism is a highly useful skill for students, particularly those who may pursue work in the design field, where they may be regularly subjected to criticism. Students may also be more open to receiving critique when they see a general consensus amongst the class, particularly when some decisions and responses to creative work can seem arbitrary and subject to personal opinion and bias. Peer critiques are also very useful to those conducting them: by learning to point out potential issues in other people’s work, students more easily see (and hear) those issues in their own.
  • Peer critique in the courses suffers from one particular issue, however, which is that the critiques (with one exception) take place after the exercise has been completed. We are currently working on incorporating peer critique for larger projects (notably, Exercises 4, 5 and 7) at a mid-point in the exercises to enable students to process and respond to that critique before handing in their final projects.
  • One of the particular strengths of the exercises, according to student evaluations, is that students are given the opportunity to put theory into practice. By incorporating readings and theory directly into what they are developing in class and in homework exercises, and then using that theory to explain and explore their own work in peer critique sessions, students come to understand and appreciate the theory much better.

Conclusions

Sound design is a creative and increasingly important skill for design and art students to develop. We have presented a series of exercises by which students can learn the key fundamental aspects of sound design, using a scaffolding approach that gradually brings students to an understanding of the various processes and choices involved in integrating sound and image. While it is important for students to develop the skill of analysing sound in isolation, these students will most likely be using sound in conjunction with other modalities, and one teaching technique we use is to complement students’ existing visual design skills, building on prior knowledge.


Following the exercises described above, students can pursue more advanced independent or group work. Opportunities include, for example, creating an audio-only interactive game, with few or no visuals, in which the player explores the game through sound. A sound installation project could be included in fine arts or product design programs, and interactive public art using sensors with interfaces like Arduino and software like Max/MSP or Pure Data could be integrated into more advanced years. In particular, if students can work on a team with students in other courses producing, for instance, a short film or game, such an opportunity allows for more significant and advanced work. Such an approach has recently been initiated within the Game Development and Entrepreneurship program at the University of Ontario Institute of Technology with the introduction of the Game Development Workshop, whereby the courses in each year are linked to a final, year-long project that involves the development of a game by a team of students (a detailed description is provided by Hogue et al. 2011). Within the Game Development Workshop, students not only learn about sound-related topics within the sound and audio course, but also have the opportunity to apply what they have learned to a “real-world” problem (e.g., a game that must be “shipped”). In this way, the sound skills are brought together with other skills that students learn in their program.


Generally speaking, students have rarely been taught how the correct use of sound can greatly improve their multimedia projects. Indeed, as discussed, many students tend to fall back on image or music to evoke the desired emotions, relying on what they have previously been taught. Sound has been, and to some extent continues to be, largely ignored by most design and art programs, even though high-quality equipment is now quite affordable at the consumer level. It is no longer the case that schools need a large recording studio or piles of specialized and expensive equipment to add sound to their curricula. In fact, as demonstrated in the exercises outlined above, sophisticated and expensive equipment is not necessarily needed to teach sound design. Moreover, such equipment has tended to be intimidating for students and faculty, difficult to maintain, and expensive: a clear deterrent to adding sound to curricula. Small pocket recorders like the Zoom H2 provide high-fidelity recording, are very easy to use, and cost less than $200. Free cross-platform software, such as Audacity, is available for audio editing, mixing, and recording.

 

Many faculty members who teach art, games, film, or design have not been trained in sound themselves, and there are limited resources for many faculty to feel comfortable in instructing students on sound. Further collection of resources and documentation of explorations by faculty members and students, accompanied by sharing this information online and in journals, can alleviate the current deficit of information and help to aid in the future development of creative sound designers.

References

Arias, Claudia and Oscar A. Ramos (1997). “Psychoacoustic Tests for the Study of Human Echolocation Ability.” Applied Acoustics 51/4: 399-419.

 

Ballas, James (2007). “Self-produced Sound: Tightly Binding Haptics and Audio.” In I. Oakley and S. Brewster (eds.), Haptics and Audio Interaction Design 2007 (pp. 1-8). Berlin: Springer-Verlag.

 

Baumgartner, Thomas, Michaela Esslen, and Lutz Jäncke (2006). “From emotion perception to emotion experience: Emotions evoked by pictures and classical music.” International Journal of Psychophysiology 60: 34-43.

 

Cazeaux, Clive (2005). “Phenomenology and Radio Drama.” British Journal of Aesthetics 45/2: 157-174.

 

Chion, Michel (1994). Audio-Vision: Sound on Screen. New York: Columbia University Press.

 

Cohen, Annabel J. (2001). “Music as a Source of Emotion in Film.” In Patrick N. Juslin and J. A. Sloboda (eds.), Music and Emotion: Theory and Research (pp. 249-279). Oxford: Oxford University Press.

 

Cohen, Annabel J. (1998). “The Functions of Music in Multimedia: A Cognitive Approach.” In Yi, S.W. (ed.), Music, Mind and Science (pp. 40-68). Seoul: Seoul University Press.

 

Collins, Karen (2007). “An Introduction to the Participatory and Non-Linear Aspects of Video Games Audio.” In Stan Hawkins and John Richardson (eds.), Essays on Sound and Vision (pp. 263-298). Helsinki: Helsinki University Press.

 

Collins, Karen (2008). Game Sound: An Introduction to the History, Theory and Practice of Video Game Music and Sound Design. Cambridge, MA: The MIT Press.

 

Collins, Karen, Ufuk Önen and Richard Stevens (2011). “Designing an International Curriculum Guideline: Problems and Solutions.” Journal of Game Design and Development Education 1/1.

 

Collins, Karen and Philip Tagg (2001). “The Sonic Aesthetics of the Industrial: Re-Constructing Yesterday’s Soundscape for Today’s Alienation and Tomorrow’s Dystopia.” Sound Practice UK: UK/Ireland Soundscape Community, 101–108.

 

Ekman, Inger and Michal Rinott (2010). “Using Vocal Sketching for Designing Sonic Interactions.” DIS2010, Proceedings of the 8th ACM Conference on Designing Interactive Systems, Aarhus (Denmark), 123-131. 

 

Franinović, Karmen and Stefania Serafin (eds.) (2013). Sonic Interaction Design. Cambridge, MA: MIT Press.

 

Franinović, Karmen, Daniel Hug and Yon Visell (2007). “Sound Embodied: Explorations of Sonic Interaction Design for Everyday Objects in a Workshop Setting.” In Proceedings of the 13th International Conference on Auditory Display, Montréal (Canada), 333-341. 

 

Garner, Steve and Deana McDonagh-Philp (2001). “Problem Interpretation and Resolution via Visual Stimuli: The use of ‘Mood boards’ in design education.” Journal of Art and Design Education 20/1: 57-64.

 

Hall, Edward T. (1963). “A System for the Notation of Proxemic Behaviour.” American Anthropologist 65/5: 1003-1026.

 

Hartman, Hope J. (2002). “Scaffolding and Cooperative Learning.” In H. Hartman (ed.) Human Learning and Instruction (pp. 23-69). New York: City University of New York.

 

Hogue, Andrew, Bill Kapralos, and Francois Desjardins (2011). “The role of project-based learning in IT: A case study in a game development and entrepreneurship program.” Interactive Technology and Smart Education 8/2: 120-134.

 

Hug, Daniel (2010). “Performativity in Design and Evaluation of Sounding Interactive Commodities.” Audiomostly 2010, Sept 14-17, Piteå, Sweden.

 

Huwiler, Elke (2005). “Storytelling by sound: a theoretical frame for radio drama analysis.” The Radio Journal – International Studies in Broadcast and Audio Media 3/1: 45-49.

 

IAsig (2011). “Game Audio Curriculum Guideline.”

 

Kenny, T. (1993). Sound For Picture: An Inside Look at Audio Production for Film and Television. Emeryville, CA: Hal Leonard Publishing.

 

Korhonen, Hannu, Jukka Holm, and Mikko Heikkinen (2007). “Utilizing Sound Effects in Mobile User Interface Design.” In C. Baranauskas, et al. (eds.), INTERACT 2007 (pp. 283–296). LNCS 4662. Berlin: Springer.

 

Mellor, David (2011). “Sound and Sync: An Introduction to Location Sound.” Sound on Sound April 2011. Retrieved from http://www.soundonsound.com/sos/apr11/articles/simple-sound-sync.htm

 

Murch, Walter (2005). “Dense Clarity Clear Density.” Transom Review 5/1.

 

Nyberg, Dan and Jan Berg (2008). “Listener Envelopment - What has been done and what future research is needed?” Audio Engineering Society Convention Paper, May 17–20, Amsterdam (The Netherlands).

 

Peretti, Peter O. and Kathy Swenson (1974). “Effects of Music on Anxiety as Determined by Physiological Skin Responses.” Journal of Research in Music Education 22/4: 278-283.

 

Pirhonen, Antti (2007). “Semantics of Sounds and Images: Can they be paralleled?” In Proceedings of the 13th International Conference on Auditory Display, Montréal (Canada), 319-325.

 

Raymond, Eileen B. (2000) “Cognitive Characteristics.” In Eileen B. Raymond (ed.), Learners with Mild Disabilities (pp. 169-201). Needham Heights, MA: Allyn and Bacon.

 

Reilly, Jill (2013). “It’s supposed to be constructive criticism: Art student destroys her own painting in front of class after bad review.” Daily Mail Online.

 

Rochat, Pierre (1995). “Early Development of the Ecological Self.” In P. Rochat (ed.), The Self in Infancy (pp. 53–71). Amsterdam: Elsevier.

 

Schafer, R. Murray (1977). The Tuning of the World. New York: Knopf.

 

Shilling, Richard D. and Brian Shinn-Cunningham (2002). “Virtual auditory displays.” In K. Stanney (ed.), Handbook of virtual environment technology (pp. 65–92). Mahwah, NJ: Lawrence Erlbaum.

 

Sonnenschein, David (2001). Sound Design – The Expressive Power of Music, Voice and Sound Effects for Cinema. Studio City, CA: Michael Wiese Productions.

 

Verma, Neil (2010). “Honeymoon Shocker: Lucille Fletcher’s ‘Psychological’ Sound Effects and Wartime Radio Drama.” Journal of American Studies 44/1: 137-153.

 

Walter, B. and J. Hendler (1996). “The Studio.” In Leo M. Lambert, Stacey Lane Tice, and Patricia H. Featherstone (eds.), University Teaching: A Guide for Graduate Students (pp. 37-43). Syracuse, NY: Syracuse University Press.

 

Westerkamp, Hildegard (1974). “Soundwalking.” Sound Heritage, 3/4: 25. 

 

Yantaç, Asim E. and Oguzhan Özcan (2006). “The effects of the sound-image relationship within sound education for interactive media design.” Digital Creativity 17/2: 91-99.

 

Yorkston, Eric and Geeta Menon (2004). “A Sound Idea: Phonetic Effects of Brand Names on Consumer Judgments.” Journal of Consumer Research 31: 43-51.

 

Zhou, Z. Y., A.D. Cheok, Y. Qiu, and X. Yang (2007). “The role of 3-D sound in human reaction and performance in augmented reality environments.” IEEE Transactions on Systems, Man, and Cybernetics – Part A: Systems and Humans 37/2: 262-272.