Frozen Moments in Motion

An Artistic Research on Digital Comics by Fredrik Rysjedal

Chapter 2: Sound of the Aurora

The Faculty of Fine Art, Music and Design (KMD), The University of Bergen

In this chapter I record my process and development of the performance comic Sound of the Aurora (2014). The research subject of this chapter is spatial motion, which includes both mobile framing and motion graphics. I will also address the performance comic format used for Sound of the Aurora, and aspects related to live editing.


Before discussing spatial motion, I present a framework for Sound of the Aurora. This includes an account of the project’s origin, my discovery of the lost laterna magica (magic lantern) tradition and a description of the performance comic as an art form, thereby placing it in a historical perspective.


Making Sound of the Aurora has involved collaboration with other artists. I engaged the improvisation trio called ‘1982’, consisting of Nils Økland, Sigbjørn Apeland and Øyvind Skarbø, to play live during my performances. I have also collaborated with Aslak Helgesen and Thomas Tussøy from the Bergen-based game developer Rain Games. They assisted me in my experiment with a three-dimensional (3D) comic. Dylan Stone at the London Film School pointed me in the direction of magic lanterns, and Preus Museum in Horten let me study its collection of magic lanterns and slides. I invited Mervyn Heard, a magic lantern expert from Bath, to give a lecture and hold a real magic lantern show at a conference on the theme of visibility at Bergen Academy of the Arts.


This chapter also introduces readers to some technical aspects. I use Modul8, a type of video-jockey (VJ) software, in the performance and live editing of Sound of the Aurora. I also use the virtual reality technology Oculus Rift and the programming software Unity in my experiments with spatial motion in a 3D comic.


Sound of the Aurora premiered in June 2014 and was performed twice before I gave the first official performance of it as a 3D digital comic with live mobile framing in March 2016. It has been performed 15 more times in the period 2017–2018.



Sound of the Aurora was originally intended to be a small sidestep experiment in my artistic research, a method to facilitate possible new discoveries and opportunities. It was a daring stunt, with the premiere date announced the very day I started on the experiment. I gave myself one and a half months to create a performance comic. It premiered on 5 June 2014, and the experience surprised me in many ways. I enjoyed performing the comic and the positive response from the audience. No one in the room had ever experienced a performance comic before. The form appealed to me because it was a less common digital format than webcomics and apps. I also saw the potential for experimenting with live editing and live motion. Therefore, from being a sidestep, Sound of the Aurora morphed into one of the final works in my artistic research project. It also affected my final comic, Close, Closer, Closest (2016), which I made with performance in mind.


Story development

I brought with me a new autobiographical and biographical comic project when I started this artistic research. I interviewed relatives on the subject of my grandfather, who died 15 years before I was born (Picture 5). Five versions of a manuscript were written during this process. The first draft, titled I Don’t Know Grandpa, was the subject of my first experiments, but it was not realized. Sound of the Aurora is one of two manuscripts/ideas that were realized.


The story I tell in Sound of the Aurora is from my aunt Astrid. It takes us to a Saturday afternoon in her family’s living room in 1951, to an incident she still remembers today at the age of 77. Her father, Andreas, has settled down to listen to his favourite radio show featuring classical music. However, Gerda, Astrid’s mother, turns the radio off. She is provoked by the German theme of the programme. She cannot stand listening to German, having heard enough of it, and she cannot stand Germans because they occupied Norway during World War II. Astrid’s father addresses her anger, saying “Well, if this is how it will be, there will never be peace in this world”. He shows forgiveness for the sake of a bigger purpose, even though Astrid knows the war treated him terribly. The story traces unspoken stories from the war, and it paints a post-war scenario that I believe many Norwegian families still relate to.

The magic lantern: a lost tradition

The impulses that pointed me in the direction of performance comics came quite early in the artistic research project. In November 2012 I attended a storyboard course for directors at the London Film School. There the instructor Dylan Stone talked about the magic lantern tradition, which can be seen as the forerunner of film and TV. He introduced me to an old painting of a magic lantern session and explained how the lantern worked. It was an epiphany for me, a turning point when I realized that the concept of comics on screen had existed long before computers and film, beginning roughly 300 years ago. This gave the screen more existential weight than the computer itself, and the concept of screen-based comics gained new meaning for me. (Read more about the concept of screen-based comics in chapter 4, the section ‘Fundamental Parameters of Digital Comics’.)

Mobile framing
Motion graphics and mobile frames can easily be confused, largely because motion graphics can give the impression of moving frames. An example is a horizontal panoramic view (‘a pan’), as in the 2D pan of the living room in Sound of the Aurora (Video 27). This is a digital version of a panoramic magic lantern slide, and it is also a traditional animation technique. By moving a rectangular image horizontally (or vertically), traversing a landscape for example, the motion creates an imitation of a mobile frame (Wright 1891: 141). I therefore only call a frame ‘mobile’ if it actually moves.


The technique of mobile framing is used in traditional comics all the time. An example of a comic where the frame changes position from panel to panel is Martin Vaughn-James’s The Cage (1975) (Video 28). Mobile framing in full motion, however, is a new feature that digital comics and screen-based comics have introduced. Both of the screen levels in a digital comic can become mobile. This means that the screen that frames the negative space, and the panels that frame the fictional space, can move.


The motions of the mobile frames in Sound of the Aurora are bound to the same motions as one finds in motion graphics. They are either directional movements or rotating movements. Directional movements move the frame horizontally, vertically and in depth. If a lens creates the movement in depth, it is called zooming, but then it is not an actual movement. Rotation can be a circular rotation around an object (Video 30), a rotation around the frame’s own axis, as in pan and tilt motions (Video 31), or it can be a rolling rotation (Video 32). In Sound of the Aurora I only use mobile frames in fictional space. Before I address mobile framing in fictional space, I want to comment on mobile framing in negative space, based on my observations.

Picture 15: There is no documentation of Gro Sørdal's project apart from my observation and these screenshots. Sørdal created a world where her narrative took place. The story was based on a story by the Norwegian writer Alf Prøysen: Geitekillingen som kunne telle til ti [‘The Billy Goat Who Could Count to Ten’].
Photo by Gro Sørdal.

I studied the magic lantern tradition by travelling to Preus Museum in Horten to look at real lanterns and slides. I invited a British expert on the topic, Mervyn Heard from Bath in England, to present a lecture at a conference on the theme of visibility at Bergen Academy of Art and Design in 2014. He performed a magic lantern show at the culture venue Bergen Kjøtt, which was the first time I saw a live performance of this type.


My encounter with the magic lantern tradition inspired me to make a digital comic in the form of a performance. I noticed that magic lantern slides used spatial motion and motion graphic techniques in their presentation, and I decided to use these techniques in my own production. The creation process necessitated that I reflect on spatial motion versus image-stream motion. To this end, I devoted a lot of thought to Scott McCloud’s (2000) theories and adjusted some of his concepts to make them relevant for my own motion theory (see chapter 4, the section ‘The Screen’).


The performance comic

The performance comic is a rare art form in Norway. I have mostly experienced it as readings from web- and printed comics in lectures and artist talks. It is related to reading aloud, which is a well-established practice amongst authors and picture book creators. I remember such readings at my primary school, when the teacher read from a small booklet and showed an illustrated narrative in a slideshow. This was a modern version of a magic lantern show. The first original comic performance I experienced was watching the artist Kim Holm sing his comics at pubs in Bergen during 2008–2010. Accompanied by his guitar, he projected his comic pages in the background.


I made my first performance comic in March 2013. It was not part of this artistic research, but a collaborative project made with the comic artist Eirik Andreas Vik. We performed a reading of our latest fanzine When We “Met” Lucy Knicley (2012). The printed edition was adapted to screen by presenting panels in Apple’s Keynote software. Most often one panel was presented at a time, and we also introduced music and sound effects. Since this comic was originally designed for print, it did not have any sequences that took advantage of the extra possibilities a screen enables. Had it been made with screen reading in mind, the comic would have looked very different. It was a fun thing to do and the audience had a good time; we received feedback from several people who thought it was a great experience to see a comic read aloud. I had no idea then that a year later I would do a performance of a new and original work.


There is no mention of performance comics in any theory or history of comics that I have found. To begin with, I used terms such as ‘reading’ and ‘live comic’ before I was introduced to the term ‘comic performance’, which is used by other performers such as the Scottish artist and researcher Damon Herd and the American comic artist Robert Sikoryak. A similar and relevant live practice is mentioned in comic history. The American Winsor McCay is one of the most important pioneers in Western comic history. He made the classic comic strip Little Nemo in Slumberland, and he also loved to perform. He practised a Victorian tradition called chalk talks, which involved drawing on a chalkboard in sync with his talk (Lente 2012: 13). McCay is also one of the most important pioneers of animation and film, making history with the animation-film performance Gertie the Dinosaur (1914). In this work he communicated with the dinosaur, threw an apple to it, and as a finale, ‘broke the fourth wall’ (cf. chapter 4, ‘The Screen’) by transporting himself from the physical room into the pictorial room and the fictional reality. This marked the beginning of multimedia theatre, where film was used in theatrical performances (Dixon 2007: 73). McCay was not the first animation performer, however, since the film pioneer J. Stuart Blackton made an animation performance with his show The Enchanted Drawing as early as 1900 (Gardner 2012: 6).


In Norway, the opera Blob from 1997 could perhaps be called the first original performance comic. Written and drawn by the renowned comic artist Steffen Kverneland and with music by the composer Ole-Henrik Moe Jr., this was a comic opera, first performed in January at the Angoulême International Comics Festival, at that point without drawings. Illustrations were later added to the performance at Astrup Fearnley Museum in Oslo, at the opening of its comic exhibition in May 1997. Shown as a slideshow, the illustrations functioned as backgrounds for the singers Tage Talle and Hege Høisæter. The music was played by the composer himself, with Rolf Lennart Stensø on percussion. Judging from the available written documentation, it seems the performance was not perceived as a comic presentation, but as an opera performance using the language and semiotics of comics in the singing and the music (Skjærvøy 1997).


In 2013 Damon Herd organized DeeCAP (Dundee Comics Art Performance) at the University of Dundee in Scotland. In his report on the event (Herd 2013), he mentions Robert Sikoryak’s Carousel, which presents comic readings and visual performances from comic artists (cartoonists) and theatre artists. Since 1997 over 180 artists have performed at Carousel (Sikoryak 2011). In an email correspondence, Sikoryak wrote to me that Carousel is low key and mostly features adaptations from print and sometimes hybrid formats. Unfortunately, I found out about Carousel at the end of my artistic research, so I have been unable to do any research on the works it has presented or to experience the events myself.


As stated, 5 June 2014 marked the premiere of Sound of the Aurora, my first original performance comic. From then on, I became more aware of the art form and started searching for other performance comics as they were announced. In April 2015, for instance, the famous French comic artist Tardi performed at the Fumetto International Comic Festival, and later that year in June, an excerpt of Geir Moen’s and David Mairowitz’s comic adaptation of Peer Gynt was performed at Oslo Comix Expo. The performance comic is an art form that I interpret as a modern version or a continuation of the magic lantern tradition. Sound of the Aurora and Blob are probably the first original performance comics in Norway, and they both contribute to establishing performance comics as an art form in the country. Nevertheless, how widespread the phenomenon is internationally remains undocumented.

Investigations into Spatial Motion

One of my key aims for Sound of the Aurora was to translate the motion of magic lantern presentation slides into comic form. I reasoned that spatial motion is the natural motion of physical magic lantern slides. Based on my practice and observations, I divide spatial motion into two types. The first is ‘motion graphics’, which includes flying panels in the negative space and moving objects in the fictional space (Figure 2). The second is ‘mobile framing’, which concerns both the frame of the negative space and the frame of the fictional space. I will also, in this investigation into spatial motion, address filters and lenses, for I have used them to create various effects in Sound of the Aurora, not least to modify the images and the reader’s or viewer’s focus. The investigative section ends by addressing the aspect of automated motion.

I travelled to the Preus Museum in Horten to study the real magic lanterns and lantern slides in the museum’s collection. I investigated several types of lanterns and sets of slides, both regular and mechanical. The mechanical slides had levers that could make static objects in the illustrations move, or they could make body parts move, much like cut-out animation or dolls in a shadow theatre. There were also panorama slides that could create a mobile frame as well as filters for adding the effect of falling snow. Browsing in Preus Museum’s library, I found an old instructional book by Lewis Wright, Treatise on the Use of the Lantern in Exhibitions and Scientific Demonstrations (1891), which gives a detailed overview of techniques and equipment. Wright describes the various types of mechanical slides and the effects they can add to a presentation. The following are the slide types that use motion graphics:


a. Uncovering slides. With moving layers of glass, one can uncover hidden graphics and change an image (Wright 1891: 141).

b. Lever slides. Moving parts of illustrations, such as flickering eyes or an arm wielding a hammer, are achieved with a lever enabling one to control a mobile layer of glass (Ibid., p. 142). These motions can be repeated, as in a loop, or executed only once.

c. Rackwork slides. More technical in their mechanics, rackwork slides can make circular glass rotate in a loop.

d. Experimental slides. Wright also describes experimental slides, which show optical graphics such as a kaleidoscope rather than scenic subjects (Ibid., p. 144).

e. Roller slides. Acting as filters, roller slides can be rolled onto other slides to add the impression of snow or other atmospheric effects (Ibid., p. 143).

The motion graphic techniques outlined here result in moving objects and figures. Rotating motion graphics can create kaleidoscopic abstractions as well as masked and unmasked images. The filters are an aspect I will return to later in this chapter. The most common motion graphics in Sound of the Aurora are the objects that move from one point to another. Examples are the ship that moves, the ocean passing by, the ship sinking (Video 16), and the lifeboat that crosses the screen. In my second digital comic, Close, Closer, Closest, I use motion graphics in two scenes (Video 15 and 16). Since my focus when making this comic was to explore the image stream, I noticed a technical aspect of motion graphics: I found that moving graphics are more file-size-friendly than image streams. Instead of loading an image-sequence animation, the computer loads the single graphics only once, and a programmed algorithm produces the animation. The motion graphics in Sound of the Aurora are made with film cuts. The animated sequences are not pure motion graphics, but montages of image streams and motion graphics. An example of this is the sequence with the sinking ship, where the ship is a moving graphic whilst the background is a photographic film of ink being poured (Video 16).
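The file-size point can be illustrated with a minimal sketch (hypothetical names and values, not the actual Modul8 or Unity setup): an image stream must store one bitmap per frame, whereas a motion graphic stores a single sprite once and computes its position for every frame.

```python
# Minimal sketch of a motion graphic (hypothetical example).
# An image stream stores one bitmap per frame; a motion graphic stores the
# sprite once and derives each frame's position from an algorithm.

def sprite_position(start, end, t):
    """Linear interpolation: where the single sprite sits at time t in [0, 1]."""
    return (start[0] + (end[0] - start[0]) * t,
            start[1] + (end[1] - start[1]) * t)

# A ship crossing a 1920-pixel-wide screen in 100 frames needs only one
# image plus these computed positions:
frames = [sprite_position((0, 300), (1920, 300), i / 99) for i in range(100)]
```

The 100 bitmaps of an equivalent image stream are replaced by one bitmap and 100 coordinate pairs, which is why the motion graphic is lighter on storage.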


The French comic theorist Thierry Groensteen (2007: 71) claims that motion does not fuse perfectly with the texts and images in comics. The reader may recall that I did some reflection on this in chapter 1. I think one of the main differences between comics and film – that is, sequential images versus full motion, with closure taking place on different levels – exemplifies Groensteen’s point. However, I also think it is too simplistic to claim, as Groensteen does, that motion does not work well. In a multi-modal composition, the motion can vary from being a subtle detail to being a major part of the presentation. The disharmony, or even harmony, will vary depending on which elements are combined. So how does the expression of motion graphics compare with that of classic animation and traditional comic sequences?


Comparing motion graphics and classic animation, I find the expression of classic animation more dynamic in an organic way. This is because classic animation can imitate realistic movement, while motion graphics have a more static expression that becomes a stylized representation of realistic movement. This static expression of motion graphics lies closer to the comic sequence’s static expression than to that of classic animation. Does this mean that motion graphics harmonize better with static comic sequences than with classic animation?


Taking a closer look at motion graphics, I find a difference within the motion graphic expression itself. It is between static objects/figures and dynamic cut-out figures. In cut-out animation and shadow play, there is a tradition of creating and using multiple moving limbs to animate, for example, a walking person. This is what I mean by a dynamic cut-out figure (Video 19). A static object, in this context, is an object that is static in its original form even when it moves, like a ship for example (Video 14). I claim that a static object makes a more realistic motion than a dynamic cut-out figure in motion graphics, because the static object moves according to our expectations. A figure that in reality has dynamic and organic movements will never achieve a realistic representation through a dynamic cut-out figure. It will remain a stylized representation of reality.


I do not use dynamic cut-out figures in Sound of the Aurora. I hide the organic and dynamic motion in-between the images, as in limited animation, or I show the movement through comic sequences. In the limited animation sequences, I focus on moments where the static presence of objects and characters seems natural. I do this to preserve the comic language of sequential imagery, since a fully realistic motion is the opposite of that. Despite this, I do also use static objects in full motion (Video 18). Somehow, I do not find the motion of a static object to challenge the comic sequence as would a dynamic cut-out figure, for instance of a person running. Instead, the static object in full motion harmonizes with the frozen moments of the comic. This may seem like an illogical conclusion, since both the static object and the dynamic object represent full motion. I think the representation of reality is the aspect that comes into play. Static comic sequences create an illusion of realistic motion through the reader’s closure, and static moving graphics show realistic motion. A dynamic cut-out figure would, I think, have disturbed this level of reality. Classic animation is capable of maintaining the level of reality, which is why I think I have intuitively mixed moving static objects and classic animation together with comic sequences in Sound of the Aurora. These three expressions correspond with each other in their portrayal of reality. This is also why I do not use dynamic cut-out figures in Sound of the Aurora. My conclusion, then, is that a comic sequence is more similar to a classic animation sequence than it is to motion graphics. This is due to the stylized representation of real movement which the motion graphic enables, and it holds despite the fact that the motion graphic and the traditional comic sequence (e.g., in a comic book) both share a static expression.
The exception to this conclusion – that is, the situation where a motion graphic is more similar to a traditional comic sequence than to classic animation – is when the motion graphic presents static objects that move, for instance cars and ships, because their static quality still represents a realistic movement.


Even though I have avoided dynamic cut-out figures in Sound of the Aurora, this does not mean I think it is impossible to create serious content using this visual expression. Nevertheless, it is undeniable that cut-out animation has great potential for humour. This is exemplified in the American animation series South Park. But unintended humour can also arise. To give an example: in Watchmen, the Motion Comic (2008), the creators made some animation choices that break with the otherwise grave and serious mood. This comic film, rich in movement, is well animated, but some of the figures walk in such a weird way that they can trigger unintended laughter. This is one reason why I think it is important to be aware of the potentially humorous aspect of stylized representation in combination with comic sequences.


The motion graphics that I use in Sound of the Aurora are automated, but motion graphics can also be interactive and present movement in real time. Such real-time graphics are well-known from shadow theatre, but they are also found in some computer games. At Preus Museum I observed that magic lantern slides could create real-time animation with their mechanical apparatuses. Such interaction is also possible in digital comics.

I did not use interactive motion graphics in the first editions of Sound of the Aurora, mostly because I was unaware of the concept at the time. In retrospect, it seems like a lost opportunity, since the live performance format facilitates collaborative interaction between me and the musicians. Synergy between the musicians and the direct control of the movement on screen would have been an exciting addition. The Modul8 software that I now use while performing the work lets me access all layers of the presentation (Video 21). I have therefore, in my latest edition of Sound of the Aurora, made single figures for the final fata morgana scene, so I can move the objects and figures directly (Video 21). This gives me much more freedom than with prerendered video. There is also a type of interactive motion in the work, but it relates to a filter, and I will describe it in the section called ‘Filters’ later in the chapter. Other than this, I did not use interactive motion graphics in Sound of the Aurora. I have, however, used it in the opening menu of Close, Closer, Closest. If the reader tilts the tablet, the characters will slide towards each other but still keep a bit of distance (Video 17).
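The tilt behaviour described for the opening menu of Close, Closer, Closest can be sketched as a simple mapping (the numbers and function name here are illustrative, not the app’s actual code): the tablet’s tilt pulls the two characters toward each other, but a minimum gap is always preserved.

```python
# Sketch of tilt-driven interactive motion graphics (hypothetical values,
# not the actual code of Close, Closer, Closest). Tilting the tablet slides
# two characters toward each other while a minimum gap is preserved.

def character_positions(tilt, centre=0.0, spread=100.0, min_gap=20.0):
    """tilt in [-1, 1]; returns (left_x, right_x) with a guaranteed gap."""
    # Full tilt (either direction) moves each character toward the centre,
    # but never closer than half the minimum gap.
    travel = spread - min_gap / 2
    left = centre - spread + abs(tilt) * travel
    right = centre + spread - abs(tilt) * travel
    return left, right

left, right = character_positions(tilt=1.0)
assert right - left >= 20.0  # they slide close, but keep their distance
```

In the real app the tilt value would come from the tablet’s accelerometer and be fed into a mapping of this kind on every frame.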

Flying panel delivery

Sound of the Aurora does not juxtapose panels in its presentation, so it does not have a negative space where panels are presented. It has the basic structure of an image stream based on cinematic panels (this concept is explored in chapter 3, in the section ‘Cinematic Panels’). The motion graphics in Sound of the Aurora are in fictional space. I have therefore not had any experience with motion graphics in negative space, most particularly, with ‘flying panels’, which are a form of panel delivery made with motion graphics.


I explore panel delivery through image streams in chapter 3 (in the section ‘Panel Delivery’), and many aspects I address there are also relevant for understanding panel delivery made through motion graphics, which I choose to call ‘flying panel delivery’. Since I have not explored flying panel delivery myself, I refer to other artists who use it. Brendan Cahill, in his digital comic Outside the Box (2002) (Video 22), uses flying panels to make panel delivery. Another example is the Norwegian Jenny Jordal, who switches around fictional space and negative space in her scroll-activated comic Hvorfor ananas heter ananas (2014, [‘Why Pineapple Is Called Ananas in Norwegian’]) (Video 23). This is a digital comic with a basic spatial structure. Pictorial elements fly in and out of the screen like birds. The curious and interesting aspect is that Jordal treats fictional space as if it were negative space. The figures are presented as if they were flying panels, using the fictional space as the backdrop. It is represented through pure colour backgrounds and shapes that change throughout the story.


When I use motion graphics in the fictional space in Sound of the Aurora, I establish a realistic fictional world. Jordal’s fictional world, by contrast, is a coloured canvas, an abstract representation of reality. This gives her the opportunity to treat her characters in the same way as Cahill treats his panels in Outside the Box. Hvorfor ananas heter ananas represents in one sense the opposite approach to Close, Closer, Closest, which has a basic image-stream structure. Sound of the Aurora is more an even mix.


With her flying figures and abstract backdrop, Jordal creates sequences within a continuous fictional space. This could be compared to a film made in one take. I also do this in Sound of the Aurora, but not throughout the whole comic as Jordal does; I only do it in a cut. We both rely on a mobile frame to create the sequences, so our works are not pure motion graphics. Outside the Box has a fixed frame. Cahill uses a panning image of a map in chapter 4 of his work, which is one of two places where he imitates a mobile frame. He also moves in depth by scaling the images in his chapter 6. Most of the presentation in Outside the Box consists of combinations of image-stream panel delivery, flying panel delivery and cinematic panels, with flying panel delivery as the dominant strategy.


Outside the Box and Hvorfor ananas heter ananas are similar in that graphics enter and disappear. Paper theatre is probably comparable to this spatial presentation, since all the objects and figures on screen are continuous. Entering and exiting the screen is time consuming. I find this aspect to be one of the challenges of using motion graphics in comic sequences. Since the eye perceives information rapidly, the entrance and exit of a figure, object or panel slows the pace, and readers might experience this as delay. The image stream, by contrast, appears as a much more time-effective presentation form.


I came to this conclusion through doing an experiment on mobile framing. The test was a full-screen strip that was scrolled horizontally (Video 24). The outcome was that I decided not to use this format because readers would spend more time in-between the images than actually perceiving the images. This issue of entering and exiting the screen is therefore a challenge for both mobile framing and motion graphics within a continuous frame.


I find three ways to handle a mobile frame: through a fixed track, a dynamic track and free mobility. Dynamic tracks are relatively rare in digital comics. Nevertheless, around the year 2000, Marvel developed a concept that uses a dynamic track in its presentation form called Guided View. This is basically a mobile frame mode for a digital comic book. Instead of only reading page by page, Guided View gives the reader the opportunity to move the frame closer to the panels to focus on one or two panels at a time. As the reader navigates, the frame jumps from panel to panel (Video 33). It zooms out if there are bigger panels or splash pages. A dynamic track can move the frame in all possible directions, like a rollercoaster ride, but it is still a programmed track. An example of a comic that uses a dynamic track that is not Guided View is Meanwhile (2010) by Jason Shiga (Video 34). An alternative approach is the game comic Icarus Needs (2013) by Daniel Merlin Goodbrey, which has a dynamic track that is tied to the panels. The reader controls the main character, who can move freely inside the panels. When the character moves, the mobile frame follows. The panels define the track of the mobile frame (Video 35).
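The zooming behaviour of such a dynamic track can be sketched as a function that, for each panel on the track, computes how far the mobile frame must zoom out so the whole panel fits the screen. This is a schematic reconstruction of the principle, not Guided View’s actual implementation:

```python
# Schematic sketch of a Guided-View-style dynamic track (illustrative,
# not the real software). For each panel the mobile frame centres on the
# panel and zooms so the whole panel fits the viewport.

def frame_for_panel(panel, viewport):
    """panel and viewport are (width, height) pairs. Returns the zoom
    factor that fits the panel in the viewport without cropping it."""
    pw, ph = panel
    vw, vh = viewport
    # The limiting dimension decides the zoom; a bigger panel or a
    # splash page yields a smaller factor, i.e. the frame zooms out.
    return min(vw / pw, vh / ph)

# The track is the sequence of zoom levels the frame jumps through:
track = [frame_for_panel(p, (800, 600)) for p in [(400, 300), (800, 1200)]]
```

The small panel is magnified while the tall splash-like panel forces the frame to zoom out, which matches the panel-to-panel jumping described above.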




Fictional space is traditionally 2D in comics due to the 2D illustrations. Because the digital comic represents a new medium (discussed in chapter 4), it is not bound to 2D as is a traditional comic on paper. 2D elements can be arranged in layers, even in a 3D perspective like that found in paper theatre. This way of handling 2D is also called 2,5D. Ultimately, the content can be sculpted into 3D objects and figures, full 3D.


I have experimented with all three approaches: 2D, 2,5D and 3D. In 2,5D and 3D, the mobile frame can move in depth into the environments, but only the 3D space lets the reader explore it from all angles without breaking the illusion of a realistic environment. Turning around in a 2,5D environment would reveal that the elements are just flat, like cardboard figures. In scenes such as my 360-degree pan in Sound of the Aurora, I position the 2D graphics so that they always face the point from which they are viewed. I do this to retain the impression of the realistic world I established in the 2D drawings (Video 39).
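This positioning trick – flat graphics rotated so they always face the viewpoint – is what game engines call billboarding. A minimal sketch of the underlying geometry, reduced to rotation in the horizontal plane (illustrative, not the code of the Unity project):

```python
# Sketch of billboarding: rotating a flat 2D graphic so it always faces
# the camera, as in the 360-degree pan (illustrative example).
import math

def billboard_yaw(sprite_pos, camera_pos):
    """Yaw (radians) that turns the sprite's face toward the camera.
    Positions are (x, z) pairs in the horizontal plane."""
    dx = camera_pos[0] - sprite_pos[0]
    dz = camera_pos[1] - sprite_pos[1]
    return math.atan2(dx, dz)

# A camera straight ahead of the sprite (along +z) needs no rotation:
assert billboard_yaw((0.0, 0.0), (0.0, 5.0)) == 0.0
```

Recomputing this yaw every frame keeps each drawing facing the viewer, so the flatness of the 2D graphics is never revealed as the frame moves.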


Late in 2015 I made a 3D version of the story frame of Sound of the Aurora, which takes place in a living room. My intention was to have a free mobile frame within this space. With help from the 3D modeller Aslak Helgesen, I made a 3D space with a virtual camera. The presentation was programmed in Unity by Thomas Tussøy. I used a highly sensitive sensor controller – an Oculus Rift headset – to handle the virtual camera and move around inside the virtual tableau.


Moving the frame around in a 3D static scenario is a bit like watching an installation of sculptures, or it evokes an association with the tableau vivant, a theatrical expression that reached its height of popularity in the 17th century. Actors dressed up, posed and imitated known statues or famous paintings; such tableaus could also involve staged environments (Scavenius 2007: 864). This way of presenting a narrative is also used in film, one example being Risttuules (In the Crosswind, 2014) by the Estonian Martti Helde. Its story is told through sequences of tableau vivant with an active mobile frame. The way I present the 3D sequences in Sound of the Aurora is much the same as Helde does, except that he stays closer to the tableau vivant tradition since he uses live actors. But there is still more to say about my experience with 2,5D and 3D.


Just as the Tintin creator Hergé created a sequence within a single panel (Picture 14), it is possible to present a sequence with a mobile frame in the fictional space. Jordal’s Hvorfor ananas heter ananas is an example of a comic that uses this in a stylized way (Video 23), with a combination of moving graphics and a mobile frame. A more realistic approach is found in a student work by Gro Sørdal, who attended a visual storytelling course I co-taught at Bergen National Academy of the Arts.4 Sørdal made an open world with visualization software from the oil industry and placed characters in the environment. She used keyboard controls to move around in it. An open space with a freely movable mobile frame poses a challenge: how can the chronology of the narrative be preserved? This would perhaps not be a problem in a non-linear story, but with a linear story there is always a chance that the reader will approach the content in an unintended way. I think the problem can be solved with a strict composition that indicates a clear reading direction, just as comic artists arrange panels and subject matter on a traditional comic page in a way that helps readers follow the intended chronological path. The same approach would be necessary in the 3D space. Another solution would be to move away from a free mobile frame and use a fixed or dynamic track, controlling the motion through the 3D environments. This is what I do in the 2.5D evacuation clip in Sound of the Aurora (Video 41). Sørdal solved the challenge by performing the story, so she remained in control of the navigation.


Sørdal’s story was set in a big forest. The 3D scene in Sound of the Aurora is set in a small living room, which does not offer enough room to arrange a sequence with the same number of characters as I have in the evacuation clip. I solved this by making a 3D image stream – that is, 3D cinematic panels (Video 42). I had never seen cinematic panels in 3D tableaus before, and with them aspects such as broken motions occur, just as with instant transitions in 2D cinematic panels (see chapter 3, ‘Instant Transitions’).

The Performance Comic Format

Live editing

In filmmaking, editing means coordinating cuts and the relationships between images and shots. When a comic artist arranges the scenes and panels of a comic, this is also an editing process. A similarity between physical comics and film is that the editing is part of the production process: when the film or comic is published, the editing has been completed. The new medium of digital comics differs from physical comics and film because it is never fixed (Manovich 2001: 36), so makers can re-edit and change publications they have already sold.


The editing processes of comic artists are diverse and vary from artist to artist, and speaking for myself, they change from project to project. I will therefore not attempt to define comic editing or compare it with film editing, as this topic is too large to be included in this project.


I will, however, mention one production aspect that I believe marks a difference between the two. In filmmaking, more material is usually produced than is needed for the final result. This material has to be adjusted and cut down, and much of the refined rhythm and pacing is created in the editing process. In the traditional comic-making process, there is no need to create an excess of material for editing, at least not to the same degree. And if there is an excess, it might be edited out during the ideation and sketching phases.


To make Sound of the Aurora, I therefore had two options: I could use the comic approach and just make the material needed for the performance, or I could use the cinematic approach and make more than necessary. I chose the latter. I created the images, linear and looped animations, single shots and pre-edited montages, and in this way made a catalogue from which to edit. The live editing process has shown me that if I want the freedom to improvise and make variations, I need more material than strictly necessary. Loops and stills that can be exposed on screen for shorter or longer periods are good tools for achieving greater freedom in editing. Portrayed environments and objects help create moods that do not affect the story progression, and they are also a type of additional content that can create flexibility. The software Modul8 lets me control the fade-in between every image and clip, and I have used this live control tool to dissolve images to vary the rhythm of the presentation.


Alternative storylines can take the form of alternative paths with different beginnings and endings and are typical of hypercomics. They are also possible with live editing. Nevertheless, a nonlinear approach has not been relevant for the story of Sound of the Aurora, because I want the biography to be true to the source material. If I had addressed a theme such as ‘perceptions of reality’, it would have been reasonable to include alternative storylines in the performance.


To make an excess of material requires additional effort, and here the new-media principle of variability comes into play (Manovich 2001: 36). I mentioned earlier that a digital comic is an unfixed medium. This is also the case for a performed comic, but the performance differs from a regular publication in that it exists only when it is performed. This aspect gave me the possibility to make new material after the premiere and to keep adding material between performances. As long as I perform, my catalogue can grow and develop.

Performing Sound of the Aurora

The premiere of Sound of the Aurora was on 5 June 2014, in the drawing hall at Bergen Academy of Art and Design. The impro-musicians in 1982, Nils Økland, Øyvind Skarbø and Sigbjørn Apeland, made the live soundtrack for the performance.


As already stated, the performance was initially intended only as an auxiliary experiment, but the experience of it surprised me and I decided to turn it into one of the two final works in my artistic research. I held a new performance in September 2015 at the cultural venue Bergen Kjøtt, updating the presentation to be controlled with the video jockey (VJ) software Modul8. This software let me edit images and film live.


VJing – a phenomenon that evolved from video art, liquid shows projecting dynamic and abstract water motifs, colour organs and even the magic lantern – is traditionally about creating visual backdrops at concerts and music clubs (Spinrad 2005: 17). A related phenomenon is live cinema, which emerged at the same time in the video art scene of the 1970s, with artists like David Rokeby, Myron Krueger and Erkki Kurenniemi among the pioneers (Willis 2009: 14). The form re-emerged and enjoyed even greater popularity during the first decade of the 2000s (Ibid., p. 11), probably due to improved computer technology and easier access to it. Live cinema differs from traditional cinema in emphasizing the VJ performance rather than storytelling (Ibid., p. 13).


A performance comic does not need to emphasize a story, but Sound of the Aurora is a linear story. Live editing, however, gives me the freedom to vary the pace and chronology of certain aspects and allows for more fluid cooperation with the musicians and potential interaction with the audience. In live editing, I become the reader together with the machine, since I control the editing and some camera controls that traditionally are automated features.


By performing my comic, I reach out to audiences who might otherwise never experience comics in anything but the most traditional formats. They seem to appreciate it. I have also found that performing a comic lowers the ‘reception threshold’ for audiences, since it does not require that anyone own an iPad, as is the case if they want to experience Close, Closer, Closest. This means that in future, I will probably continue to perform my comics before I launch them on a digital platform.


Motion graphics

The term ‘motion graphics’ has a broad scope encompassing visual music, abstract animation, broadcast design, kinetic typography, title design and more (Betancourt 2013: 10). When I refer to motion graphics in this research, I refer to the movement of graphic images or parts of images, whether abstract or figurative. Whilst the origin of the phenomenon of motion graphics is connected to the emergence of abstract film (Ibid., p. 40), I think my use of the term can be compared to the idea of shadow-play figures, the static parts of which are combined and physically manipulated to create motion. I also include cut-out animation as a form of motion graphics, even if in film it is captured through an image stream.

Free mobility is the only model that requires interactivity and full reader control, letting the reader roam freely around a defined or infinite space. This type of mobile framing is rarer than dynamic tracking. The only example I can give of free mobility is comic number 130 of Cayetano Garza’s webcomics (example: Magic Inkwell). It involves free roaming within a very small, defined space, and I long to see a good example on a larger canvas. These three ways of handling a mobile frame are universal and apply to mobile framing in negative space as well as in fictional space.

The intention of a mobile frame in negative space is to expose off-screen panels. Horizontal and vertical movements are the traditional mobile framing of webcomics, also called scroll comics (Video 36). An example of a digital comic where the frame moves in the depth of the panel space is Daniel M. Goodbrey’s hypercomic PoCom-UK-001 (2003) (Video 37). This is a type of movement I have no experience with, since I do not operate with negative space in Sound of the Aurora. Instead, I approach the mobile frame in fictional space.


Whereas the mobile frame’s function in negative space is to expose panels, the role of the mobile frame in fictional space is to expose more of the fictional world within the same frame. This is a major difference: in negative space, the reader browses sequences of images that create a series of events, while in fictional space the reader browses an environment and its contents. Having said this, it is possible to compose the off-screen content so that it creates a sequence as the mobile window passes by.


Directional movement on top of a 2D image is an established presentation form. It is a common effect in slideshow software such as Keynote and PowerPoint, and it is often used in digital comics such as motion comics. In documentary filmmaking, this effect is called the ‘Ken Burns effect’, named after the filmmaker known for using pans and zooms on photographs in his documentary films (Mattise 2006). Directional motion creates a pace: slow movement creates calmness and flow, while rapid movement can be hectic and energetic. In Sound of the Aurora I use the y-axis for introductions. When we first meet Astrid, the frame approaches her (Video 38). Another example is the scene of the sailors who are left behind in the Atlantic: the frame moves away from them to reveal the survivors one by one, but also to introduce the vast ocean (Video 26). A directional movement can also bring us from location to location (Video 41).

While working on Sound of the Aurora, I intuitively used two types of effects that arose from the mobile frame: hand-held camera effects and parallax effects. The ‘hand-held camera’ is a film term, the opposite of the ‘steady camera’. It is, however, unnatural to use the term ‘camera’ in the context of making comics; ‘point of view’ and ‘frame’ are my own preferred terms. Nevertheless, in digital comics and screen-based comics in general, a camera or a virtual camera can be used to capture the visuals. In Sound of the Aurora, all the mobile framing is made with a virtual camera, either in Unity or Adobe After Effects. In the sketches for I Don’t Know Grandpa, I experimented with hand-held camera effects. I filmed illustrations with a camera, adding shakes and movement while holding it in my hands (Video 43). These movements were intended to represent the dramatic environments affecting the camera, to make the reader/viewer feel closer to the event, as if the camera were present in the situation.


In Sound of the Aurora I used a hand-held camera for three reasons. The first was to create the illusion of being close to the event. I used it for close-ups of the character called Andreas, the intention being to have the shakiness help convey a sense of vulnerability (Video 44). The second reason was to imitate the surroundings by recreating the motion of a small boat in rough waves. I recorded real camera movement and adapted it to Adobe After Effects’ virtual camera, which again captured the illustration (Video 45). The third reason is that it gave me the possibility to use an Oculus Rift headset in the same way as one would use a hand-held camera, in real time during the performance. This gave me the freedom to choose the excerpts and the movement in the pictorial frame during the performance.
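My own shake was recorded from a real camera and transferred to the virtual one, but a purely procedural variant is also common in virtual-camera work. As a hypothetical sketch (not the method used in the production), layered sine waves at unrelated frequencies can approximate the slow drift of a camera held in the hands:

```python
import math

def handheld_jitter(t, amplitude=0.05):
    """Smooth pseudo-random offset for a virtual camera at time t (seconds).
    Summing sine waves at incommensurate frequencies gives an organic,
    never-quite-repeating wobble; amplitude scales the shake."""
    x = amplitude * (math.sin(2.3 * t) + 0.5 * math.sin(5.9 * t + 1.3))
    y = amplitude * (math.sin(1.7 * t + 0.8) + 0.5 * math.sin(4.1 * t + 2.6))
    return x, y
```

Adding such an offset to the frame position each tick yields a ‘hand-held’ feel whose intensity can be tuned live, much as a recorded movement can be scaled in After Effects.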


Parallax effects also spring from the mobile frame. These visual effects occur when 2D or 3D graphics are organized in 3D space and observed through a mobile frame. When the frame moves, the spatial relationships between the objects change and create motion in the image. Parallax motion communicates space and spatial relationships without challenging the sequentiality of the comic, because it creates motion in the image without the objects actually moving. The illusion of parallaxing can also be manipulated through motion graphics and classic animation, but the easiest way to achieve it is by using a virtual camera.
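The geometry behind this can be sketched in a few lines. Assuming a simple pinhole-camera model (the depth values and focal length here are illustrative, not taken from the production), the apparent shift of a static layer is inversely proportional to its depth, which is why near layers glide quickly past the frame while distant ones barely move:

```python
def parallax_shift(camera_move, layer_depth, focal_length=1.0):
    """Apparent on-screen shift of a static layer when the camera moves
    sideways by camera_move. Deeper layers shift less, so the relative
    motion between layers conveys space without any object moving."""
    return focal_length * camera_move / layer_depth

# A foreground layer at depth 2 shifts five times as much as a
# background layer at depth 10 for the same camera movement.
print(parallax_shift(1.0, 2.0))   # → 0.5
print(parallax_shift(1.0, 10.0))  # → 0.1
```

This is also why the effect comes ‘for free’ with a virtual camera: the renderer applies this depth division to every layer automatically.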


While physical masking was used in magic lantern slides to hide objects, in digital comics parallaxing can be used to hide or mask information and objects and then reveal them again by moving the camera (Video 46). This is an effect I used in the 2.5D evacuation clip: I moved the camera so that objects and environments would mask figures, then used other environments to increase the pace in transitioning from one moment to another (Video 41).

Since I do not focus on juxtaposed panels in my own productions, I want to mention a mock-up made by the Swiss animation student Melanie Wigger, whose work I saw at Erik Loyer’s motion comic workshop at the Fumetto Festival in 2015 (Video 47). She used material from a game she was constructing in Unity. She and Loyer made a page-based presentation with multiple panels, but the environments inside the panels were 3D. The reader could move the cursor on top of the panels in order to tweak the perspective and look around. The negative space functioned as a mask, and the contrast between the flat negative surface and the 3D fictional space strengthened the presence of the mask, so the fictional world became a space that really seemed to exist.


The opposite of automated motion is reader-controlled motion, also called interactive motion. I did early experiments with 360-degree pans with a mobile frame, but it was the Oculus Rift VR technology that enabled me to use a free mobile frame in Sound of the Aurora. As I mentioned earlier, the story frame – the scene in the living room – was the subject of the mobile framing.


The mobile frame also became a tool for collaborating with my musicians in the trio 1982: I could ‘float’ on their music and they could react to my motions. For every performance, I see new possibilities for framing the scene differently. A mobile frame also allows me to act in relation to the context and build on the drama in the sequences, for example with a dramatic move to a close-up or a shaking camera effect (Video 48).

My experience with free mobile frames is similar to my experience with motion graphics. Each motion consumes time, and the movements I make as a performer affect the rhythm. If the time the panel represents is long, I can move more freely. In the opening panel where the family is seen in the living room, there is enough time for me as performer to wander around in the panel and observe, just as the 2D edition spent time slowly panning the living room (Video 27). In my first official performance of the 3D edition of Sound of the Aurora in 2016, I used the free mobile frame and moved it constantly. The problem was that the scene I had created did not fit the concept of a continuous long take. The constant motion came into conflict with faster-paced action-to-action sequences, where every panel represented a small amount of time, and it created a lag in the presentation (Video 48). A solution is to create cuts and fixed frames in the sequences that are supposed to have short and swift screen time.


The whole point of performing the work is to achieve close and direct communication between the performers and the audience. To create this connection, I think it is imperative for the audience to realize how close they are to the interaction in the performance. The audience have no control in the acquisition of Sound of the Aurora, and I, the performer, have only partial control. This is because I present material that is both automated and interactive.


My audiences are used to watching films, and when watching a film, the technique used to make it is not present in the acquisition. I have observed that unless I inform audiences that they will be experiencing an interactive performance, it can be difficult for them to realize it. Audience response is sometimes more positive when the interactive aspect is understood, because viewers otherwise presuppose that they are watching a traditional film. With this in mind, I see the importance of either introducing the concept before the performance starts, or at least making sure that the performance does not hide its means and conditions from viewers. This means the audience can see me working with my editing equipment, and they can see the sensor which controls the interactive mobile frame. It is similar to watching a musician play an instrument.


There are historical references to magic lantern performers scaring audiences by projecting images of phantoms onto smoke (Heard 2006: 51). Smoke creates a dynamic screen surface, and the idea of it inspired me when I was making Sound of the Aurora. I interpreted the ocean waves, the Atlantic wind and the sail of the lifeboat as interesting aspects that could be communicated through a flexible and dynamic background. I came up with the idea of a thin textile that could react to wind.


For the performance, the textile is hung as a sail and projected upon. When the story reaches the point where the ship Berganger starts sinking, I summon waves to the comic canvas by using a wind machine or just by pulling the canvas controls. I have not animated the ocean in the images of Sound of the Aurora, so this physical manipulation provides the motion at the same time as creating an association to the ocean’s movement.


My first canvas was made of industrial plastic. When projected on from the front, it created moving, reflective highlights that could be associated with water. My second canvas was a textile that gave an association to a sail. This let me create a softer expression, as I projected images onto it from behind. I also no longer had to worry about the screen leaking light: because the screen was moving, light could pass through it and hit the background, but with a back-lit solution this leakage hits areas where the audience normally would not look.


I have also performed the work without the dynamic canvas, but found that it lost a distinct expression and depth. The animated parts of the work are subtle because they are designed to be juxtaposed with the screen interaction. An important value of the performance is that I can produce interactivity by adding elements such as physical, real-time spatial motions. Looking at a screen that forms waves on its surface is a simple but expressive effect, and an element of surprise that is popular with audiences because they have never seen it before. After every performance, the feedback from viewers has focused on the experience of this effect and how unusual it is. According to my findings, it is not just the waves in the canvas that fascinate people, but also the merging of animated motion in fictional space with real motion in our real space.


The motion of the textile functions as a filter, which makes the filter a concept for motion and visual presentation in digital comics. This is why I have added filters to my sub-screen map, a figure that I define in chapter 4 (see the section ‘Screen Levels’) (Figure 3). Filters can be used at all levels, on the panel screens or on the main screen. This means that the ‘window’ we look through or at, regardless of whether it is a computer screen or a surface on which an image is projected, can affect the imagery. In filmmaking, there are a few basic camera filter types: diffusion, exposure, focus, colour balance, colour alteration and special effects filters (Brown 2012: 256).


My filter adds physical motion and functions as a ‘real-time screen’, one of the four types of screens that Lev Manovich describes in The Language of New Media (2001). The screen and the wind create a visual image that can only be seen then and there. It is a simple, analogue real-time screen that does not require technical devices such as a webcam, sensors or sonar to create the live image. The dynamic canvas adds unique motion to the presentation of Sound of the Aurora, and it will differ from performance to performance.


The lens

A camera lens is one of the prime tools in filmmaking. It controls the viewer’s focus by blurring out all areas except the field that is in focus. Focus manipulation is not a technique I have used in my digital comics, and it does not directly relate to motion. Nevertheless, it does transform an image in a way that creates graphical motion and change, and it can be used to create sequences within one and the same image. An example of this is in Marvel’s Wolverine: Japan’s Most Wanted Infinite Comic – Issue #1 (2013). This is a modern digital comic that uses focal manipulation in its panel delivery, causing the focus to shift in the sequence (Panel delivery 1).


Focus manipulation is a technique that has not traditionally been used in comics. In my research, I have found that solid backgrounds and stylized images in comics have involved focus – that is, sharp imagery. However, after photo-manipulation software such as Adobe Photoshop became standard in the making of comics, deliberately blurred images began to appear in printed comics. Selective focus does to an illustration just what it does in film: it takes some control from the viewer/reader and dictates what they should focus on. In a sharp image, the reader can freely look around within the frame and decide what to focus on.


Focusing can be used to hide or reveal information, and like a semi-transparent mask, it can also create visual metaphors, for instance of drug-induced states or madness (Brown 2012: 61). Soft lenses or filters can create associations to the early years of cinematography, to beauty, romance or dream-like situations.


Automated motion

Automated motion is the opposite of interactive motion. Traditional films and animated films are automated motion pictures: software or a machine runs the film or animation at a given frame rate. Both the image stream and the spatial motion can be automated, and they can be executed as a linear or a looped animation/film.


A linear animation is a motion picture with a beginning and an end. In film, this unit can be called a cut. In a digital comic, as in games or even a PowerPoint, this linear animation can also be an intro or an outro animation. It can be a full-screen animation, or a module (a part) of the composition.


The premiere of Sound of the Aurora was a fully automated film. I had intended to control the presentation, but due to technical problems I executed my plan B, which was to play it as a pure motion comic/film. The later editions I have edited live, drawing on a catalogue of stills, cuts and loops that I play as I want. Sometimes the cuts can be small actions such as an exploding boat or a sinking ship. And if a sequence becomes too hectic to edit live because of its rapid pace, I can introduce small automated edits that make the performance easier, at least with the Modul8 live software.


A repetition can be a repeated linear clip or a looped clip. I use repetition in a sketched scene with the cannon firing, and I repeat the same clip to convey that it fired seven times (Video 51).


A loop, in this context, is an animation/film that is automated to run in a circle. I find the loop to be a type of ‘passive’ animation in digital comics because it does not create actions that push the story forward. On the contrary, the loop is potentially an eternal moment. It can describe a long journey or a moment when time stands still, and readers themselves can choose how long to dwell within this moment. This passive aspect makes the loop a type of animation or film that blends into traditional static comic panel sequences, because it does not create progressive sequences such as those seen in linear animations/films (Video 52).
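As a minimal illustration of what ‘running in a circle’ means technically, looped playback simply wraps the frame index, so the clip can stay on screen for as long as the reader chooses to dwell (the frame rate here is an arbitrary example, not a production value):

```python
def loop_frame(elapsed_seconds, frame_count, fps=12):
    """Which frame of a looped clip to show at a given elapsed time.
    The modulo wraps playback around, so the moment never ends on its
    own; it simply repeats until the reader or performer moves on."""
    return int(elapsed_seconds * fps) % frame_count
```

A linear clip, by contrast, would clamp the index at the final frame and thereby close the moment.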


Loop animations can be ambient backdrops, subtle gestures or mechanical motion. They create rhythm and increase the impression of speed and intensity, or do the opposite by creating a sense of calm and harmony. In Sound of the Aurora I also use the loop in sound symbols (Video 53). These image-stream animations that symbolize radio noise and music indicate that there is diegetic music even if the musicians decide to play non-diegetic music. I also use looped image-stream animation in the sailors’ hair when they abandon the ship: in the lifeboats they are more exposed to water and wind, and I wanted to strengthen the impression of the weather conditions. I also use a technique I have seen in anime, where the eyes shiver, to show a state of fragility when Andreas almost breaks into tears (Video 54).


Loops can also be used to compress information in a panel sequence. A looped cinematic panel can show the same content as two panels would have communicated. One example of this is in one of my auxiliary projects, a short scroll comic called Ovis Ariesaurus Rex (2015) (Video 52). All the loops in Sound of the Aurora are relatively short. In retrospect, I admit that I have been too focused on achieving seamless mixes between comic sequences and motion, perhaps as a result of trying to disprove Groensteen’s aforementioned claim that motion does not fuse perfectly with texts and images in comics. This approach has hindered me from exploring the opposite direction, which involves montages with contrast and disharmony. I should have explored full-motion animation more, both linear and looped, to see how it could work with the existing content. However, a performance comic is never fixed, so it is possible to explore this by adding new content to future performances.

Personal Reflections

I had three important turning points during the process of making Sound of the Aurora. The first was epiphanic: I gained insight into the magic lantern and its (in my opinion) close relationship to modern digital comics. Its screen format and implementation of motion inspired me to investigate the live format and discover the screen-based comic form called performance comics. I chose to follow this direction because it represented an approach that I think most people would not expect from a digital comic, and I found the live format conducive to working with motion in comics. Performance comics caught me by surprise: I had never heard of the form before, and it appealed to me more than I could have imagined. My professional aim is to develop and communicate comic stories that have an emotional and social appeal. I therefore find the performing format appealing; its personal setting encourages and supports my personal narrative style. Performing will probably be a part of my projects from now on.


When making Sound of the Aurora, my focus was not on spatial motion at all. My biggest concern at the time was deciding whether to continue making comics with a full-screen panel or to use a panel layout. My second turning point came the day after my first performance, when I realized that a single-image presentation form is suitable for communicating with crowds. The overlapping images that I later called ‘cinematic panels’ keep the audience’s focus within the same frame. This full-screen format therefore had relevance, and the fact that the relevance related to performance made it a constructive finding. I therefore continued working with a full-screen panel, and with performance in mind, when creating Close, Closer, Closest. Although I had this experience while making Sound of the Aurora, I have chosen to write about it in chapter 3, where I address cinematic panels.


The third turning point came when I was reflecting on spatial motion and the mobile frame and concluded that it made good sense to use 3D effects. Although this was late in my artistic research period, I decided to spend my last months testing it out. I could not explore it in depth, but I am glad I dared to explore it at all, as it resulted in my addressing three possible ways to work with mobile framing: in 2D, 2.5D and 3D. My use of virtual reality is also conditioned by the technological development of creative tools during the time of my research. The focus on virtual reality escalated from 2012 onwards, when the pioneering firm Oculus heavily promoted its Rift developer kits around the world. This marked the beginning of a new era in digital entertainment and digital comics, and I am glad my project could reflect part of this development. Development moves fast, however. Google’s Tilt Brush, from 2016, makes it possible to draw in 3D in a virtual space, and since its launch I have observed that it has led to many new comic experiments around the world. As this artistic research concludes, a new 3D chapter in digital visual art will have begun.


I had to decide whether to focus on full-screen panels or panel layouts in this artistic research, and I chose full-screen panels. This was a good choice because the format differs from the traditional comic formula of juxtaposed panels. The result is reflected in chapter 3 and manifested in Close, Closer, Closest, but also in the definition of the concept of cinematic panels. A weakness, however, is that I only discovered the structural differences between spatial and image-stream presentations during the process of making Sound of the Aurora. In hindsight, I believe this research would have been stronger if Sound of the Aurora had had a basic spatial structure. Since it was made during a period of searching for an optimal format, it is closer to an equal mix of image stream and spatial structure, though its basic structure is an image stream. With a basic spatial structure, it would have differed more from Close, Closer, Closest in form, and the contrast could have facilitated other reflections and discoveries; I believe it would have strengthened my experience of and reflection on motion graphics and the mobile frame.


Something little explored in this artistic research is interactive motion graphics that can be controlled live. I have addressed the phenomenon but have not reflected on it to any great extent. Nevertheless, my awareness of interactive motion graphics has at least resulted in some newly added material in Sound of the Aurora’s image bank, which allows me to use such graphics. Studying interactive motion graphics could have been a project in itself, but this can be said of many topics I address in this research. I see my artistic research more as an orientation, a giant first step into a new discipline, and future researchers can have the privilege of narrowing their scope to a specific subject. Something else that is little explored is panel delivery through motion graphics, or ‘flying panel delivery’ as I have named it. This is because I do not use panel layouts in Sound of the Aurora. Despite this, I have contributed thoughts on the time consumption of flying panels and graphics, and elucidated the phenomenon of responsive panels.


Mobile framing has received more attention than motion graphics in this project. Mobile framing is interesting because it adds motion to a presentation without involving motion in the visual subject matter in fictional space. Since the mobile frame is related to observation and exposition, the artwork keeps the static form and presence of a traditional comic. Of the three types of mobile frame I have mapped in this chapter – fixed track, dynamic track and free mobility – the dynamic track is the one I have least experience with, and the one I find rarer than the other two in existing digital comics. The most common mobile frame in digital comics is the fixed track, as in a webcomic scroll, which most often has a fixed vertical scroll. The dynamic track, however, is a concept I would like to see more of in webcomics and scroll comics in general. The motion of the mobile frame is still attached to a track, but the track can move in all possible directions. I only use it in one clip in Sound of the Aurora, but the dynamic track – especially together with panel layouts – is a technique I know I will address in future digital comic work.
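The distinction between the fixed and the dynamic track can be made concrete in code. The following TypeScript sketch is my own illustration, with hypothetical names, and is not taken from any of the works discussed: the frame is still bound to a track, so a single scroll value fully determines its position, but the track itself is a polyline that can turn in any direction.

```typescript
type Point = { x: number; y: number };

// A dynamic track: the frame follows a polyline, so one scroll value
// (progress, 0..1) fully determines the frame's position, but the
// track itself may bend in any direction.
function frameOnTrack(track: Point[], progress: number): Point {
  const segLen = (a: Point, b: Point) => Math.hypot(b.x - a.x, b.y - a.y);
  const lengths = track.slice(1).map((p, i) => segLen(track[i], p));
  const total = lengths.reduce((sum, l) => sum + l, 0);
  let remaining = Math.min(Math.max(progress, 0), 1) * total;
  for (let i = 0; i < lengths.length; i++) {
    if (remaining <= lengths[i]) {
      const t = lengths[i] === 0 ? 0 : remaining / lengths[i];
      const a = track[i];
      const b = track[i + 1];
      // Linear interpolation along the current segment.
      return { x: a.x + (b.x - a.x) * t, y: a.y + (b.y - a.y) * t };
    }
    remaining -= lengths[i];
  }
  return track[track.length - 1];
}
```

A fixed vertical scroll is then simply the special case where the track is a straight vertical line; the dynamic track generalizes it without giving the reader free mobility.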


Sound of the Aurora is produced in black and white in order to allude to an era before colour photography and film. The figures are stylized so as to signal distance from the material. Since it is impossible to create a realistic representation of what really happened 76 years ago, I chose a stylized expression. This is also why Close, Closer, Closest has a more realistic expression; it is a personal story with me as the original storyteller. It therefore made good sense to draw it with a personal line and to use a more realistic approach.


Sound of the Aurora has a rough look aesthetically. It was initially intended to be a stunt production, and so was made quickly. The 2D edition had a production period of only one month. I value intuitive processes and have a lot of experience with them from making comic fanzines. The illustrations are rough, but this roughness sometimes results in a certain energy that cannot be achieved with a more refined look. Nevertheless, while I am satisfied with some of the rough drawings in Sound of the Aurora, there are others I would like to replace. Luckily for me, the digital comic is editable, so there are always possibilities to make changes as long as I perform it.


The techniques used to make the comic suit motion graphics well. The visual roughness and pencil colouring, for example, would look very different if the work were made with classic animation. The pencil lines would then create a more vibrant expression, because they would vary from frame to frame, whereas with motion graphics they remain static. I admit that adding full-motion classic animation in this technique would have created a contrast I could have used to mark out a special event. It could be effective precisely because the integration between motion and static sequences in this production has intuitively been made very seamless.


It was challenging to get the 3D edition to match the 2D graphics, and I concluded that they had to look different, especially in the colouring. Personally, I like the 2D edition best from an aesthetic point of view. However, creating a comic sequence in 3D is one of the most exciting things I have experimented with, and even though I lacked time and funding to refine the visual standard, it is still an experience I am happy to have had. I want to work with it more extensively in the future. The fact that virtual reality and augmented reality are becoming more common in our daily lives makes this approach worth keeping up with.


Sound of the Aurora is about the stories that the sailors who experienced World War II did not tell their families after they returned home. Their silence is something which the children of the sailors commonly experienced. These children are the parents of my generation. When I perform this piece, I often get feedback from the children and grandchildren of war sailors, who tell me they recognize themselves in the situation I depict. They also comment on the format as a visual form of presentation they have never experienced before, but which they highly appreciate.


An example that demonstrates swift-paced flying panels is Upgrade Soul (2013) by Ezra Claytan Daniels. Its presentation form, programmed by Erik Loyer, makes it look like a vertical scroll comic, because the feed of panels enters at the bottom and exits at the top. But it is not a scroll, since every panel enters and exits the screen (Video 25). Loyer’s programming allows the panels to enter swiftly, and the entrances and exits take no more time than it takes to swipe your thumb. The swift pace of Loyer’s panels emphasizes an aspect I have experienced as crucial, namely reading pace. Automation turns the reader into a spectator, which is why I wanted to develop a scheme of reader control in digital comics (see chapter 4, ‘Reader Control’).



Another aspect of motion graphics and comic panels that I have not had experience of in my own work is ‘responsive panel layouts’. In October 2015, I presented my research at the Comics Electric Symposium at the University of Hertfordshire. At this symposium, web designer Pablo Defendini showed his experiments with responsive comic-panel layouts, where panels shrink, expand and rearrange themselves when the screen size changes. The moving panel frames demand dynamic illustrations with independent components arranged in layers, where live text and resolution-independent graphics make the content more adaptable. In these adaptable layouts, the graphics move only when they need to adapt to something, for example changing screen sizes, as when viewing a comic on the screen of a smartphone or reading tablet, first holding the device vertically and then switching to a horizontal position (Defendini 2015).
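The principle can be sketched as follows. This is an illustrative TypeScript sketch with hypothetical breakpoints and names, not Defendini’s actual code: the layout derives the number of panel columns from the viewport, so that rotating a device rearranges the panels instead of scaling a fixed page.

```typescript
interface Viewport { width: number; height: number }

// Hypothetical breakpoints: one column on narrow screens; on wider
// screens the column count depends on orientation.
function panelColumns(v: Viewport): number {
  if (v.width < 600) return 1;
  return v.width > v.height ? 3 : 2;
}

// Assign panels, in reading order, to row/column slots for the
// current viewport; re-running this on resize rearranges the layout.
function layoutPanels(count: number, v: Viewport): { row: number; col: number }[] {
  const cols = panelColumns(v);
  return Array.from({ length: count }, (_, i) => ({
    row: Math.floor(i / cols),
    col: i % cols,
  }));
}
```

Because only the slot assignment changes, the panel artwork itself can stay untouched; this is why Defendini’s approach depends on layered, resolution-independent illustrations rather than flat page images.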


The afore-mentioned Upgrade Soul also uses a responsive panel layout. It is not responsive in the sense of adapting to different screen sizes, but in the sense that the panels adapt to a continuously changing panel layout. From my perspective, I categorize it as panel delivery with motion graphics, which includes flying and adapting panels. Erik Loyer, the programmer of the work, told me at a motion comics workshop at the Fumetto International Comic Festival in Lucerne that he based the flying panels on a ‘sum zero’ concept. This means that the volume of any subsequent frame equals the exact volume of the foregoing frame. All the panels are dynamic, so they can sometimes rearrange and reframe the motif. Even the graphics inside the fictional space adjust. These changes are not recorded; they are generated in real time by programming.
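As I understand the ‘sum zero’ concept, it can be sketched as follows. This is a hypothetical illustration in my own naming, not code from Upgrade Soul: a reframed panel keeps exactly the area of the previous frame while its aspect ratio changes.

```typescript
interface Frame { width: number; height: number }

// 'Sum zero' reframing as described to me: the new frame's area
// (standing in for 'volume') equals the previous frame's exactly;
// only the aspect ratio (width / height) changes.
function reframeSumZero(prev: Frame, newAspect: number): Frame {
  const area = prev.width * prev.height;      // the quantity to preserve
  const width = Math.sqrt(area * newAspect);  // from w * h = area and w / h = newAspect
  return { width, height: width / newAspect };
}
```

Keeping the area constant means no reframe ever gains or loses screen real estate, which is one way such a layout can stay balanced while every panel remains dynamic.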

Picture 16: During the performance, Gro controlled the camera with keyboard controls from her computer. She could walk freely around in the environment, as if it were a first-person computer game. The frame moved down from the mountain and into the forest. Characters were placed at different locations in the environment, and she sought them out during the performance.
Photo by Gro Sørdal.

Figure 3: A figure that shows where filters can be used within the levels of the digital comic.
Figure by Fredrik Rysjedal.

Panel delivery 1: Marvel's Infinite Comic, Wolverine: Japan's Most Wanted – Issue #1 is a comic that uses panel delivery and cinematic panels. It also uses focus effects in the images, separating foregrounds and backgrounds. Click on the arrows to see two frames of this panel delivery. The frames are from pages 12–13.
Illustrated by Yves Bigerel and Paco Diaz.

Video 57: A short documentary in Norwegian that I made shortly after my first performance of Sound of the Aurora to record my thoughts on the experience. Video by Fredrik Rysjedal and Øystein Grutle Hara.

Video 20: A cut from Sound of the Aurora with a lifeboat sailing on the horizon. This is a visual reference to the many laterna magica boat slides I saw at Preus Museum.
Video by Fredrik Rysjedal.

Video 44: This is an edit of three cuts. In the second cut I use motion tracking in Adobe After Effects from a handheld camera to create an emotional moment.
Video by Fredrik Rysjedal.

Video 46: This cut from Sound of the Aurora is in 2.5D. The 2D illustrations are arranged in a 3D space, and when the frame moves, the illustrations create a parallax effect.
Video by Fredrik Rysjedal.

Video 47: A user test of Melanie Wigger's experiment at the Motion Comic Workshop at Fumetto.
Video by Fredrik Rysjedal.

Video 50: A clip from the premiere of Sound of the Aurora. Here you can see the reflections from the plastic canvas as it reacts to air from two fans.
Video by Øystein Grutle Hara.

Video 51: Loop as repetition.
Video by Fredrik Rysjedal.

Video 52: In a scroll comic called Ovis Ariesaurus Rex (2015), I show how loop animations work well in a panel layout.
Video by Fredrik Rysjedal.

Video 55: In Modul8, I organise my comic panels and film cuts in groups, where they can be arranged in layers. All my content is collected in the Media Set, the window at the bottom right.
Video by Fredrik Rysjedal.

Video 19: Watchmen: The Motion Comic (2008) from DC and Warner Bros uses subtle, dynamic cut-out figures in its animations, and moves static objects.
Video by Warner Bros.

Video 43: In this video you see material from the unrealized project I Don't Know Grandpa. These paintings were made for a mobile frame.
Video by Fredrik Rysjedal.

Video 53: Loop animation of radio noise from the tuner. Also an example of the short loops I often use in Sound of the Aurora.
Video by Fredrik Rysjedal.

Figure 2: Visualisation of the screen levels of a digital comic, by Fredrik Rysjedal. Read more about screen levels in Chapter 4.
Figure by Fredrik Rysjedal.

Picture 4: Poster for my comic performance at Bergen Kjøtt, at the 1982 Festival. 
Poster by Fredrik Rysjedal.

Video 13: The famous French comic artist Tardi performs his comic at Fumetto 2015, accompanied by song and music.
Video by Lukas Künzli.

Audio 1: An interview with Erik Loyer at the Fumetto International Comic Festival 2015. Published on the Norwegian comic podcast Teikneseriehovudstaden.
Recorded by Fredrik Rysjedal.

4. I co-taught the visual narrative course at Bergen National Academy of the Arts in 2013 and 2014, together with Liv Andrea Mosdøl.

Video 22: Outside the Box by Brendan Cahill makes use of panel delivery through both motion graphics and image-stream sequences.
Video by Fredrik Rysjedal.

Video 27: A cut presenting the living room from Sound of the Aurora. This is an illustration that slides horizontally to imitate a mobile frame.
Video by Fredrik Rysjedal.

Video 33: Comixology's Guided View on Bullwhip (2017) by Josh Bayer, Benjamin Marra, Al Milgrom, Rick Parker and Matt Rota, published by All Time Comics.
Video by Fredrik Rysjedal.

Picture 12: Handling the Oculus Rift VR goggles as sensors to move the mobile frame in the 3D edition of Sound of the Aurora. Here from my performance in Mariakirken (St. Mary's Church) in Bergen in 2016. From left: Stephan Meidell, Nils Økland, Sigbjørn Apeland, Fredrik Rysjedal and Aslak Helgesen.
Photo by Ben Speck.

Picture 14: This is a panel from The Adventures of Tintin – The Crab with the Golden Claws (1943) by Hergé. The panel shows seven soldiers, but they are composed to be read as a sequence showing the motion of getting up and starting to run.
Illustration by Hergé.

Picture 5: The living room from Sound of the Aurora.
Illustration by Fredrik Rysjedal.

Video 11: An extract from Mervyn Heard's laterna magica show at the Visibility Conference. In this video he tells a longer story. Video by Fredrik Rysjedal.

Video 12: Eirik Andreas Vik's and my first performance comic, where we adapted a fanzine for a screen presentation.
Video by Fredrik Rysjedal.

Video 15: A lever slide from the collection of Preus Museum.
Video by Fredrik Rysjedal.

Video 16: This cut from Sound of the Aurora is a combination of motion graphics and photographic film.
Video by Fredrik Rysjedal.

Video 17: Interactive motion graphics in the menu of Close, Closer, Closest.
Video by Fredrik Rysjedal.

Video 25: Ezra Claytan Daniels and Erik Loyer's Upgrade Soul has a unique combination of spatial motion and image stream. When a new panel appears it fades in, and then it moves to give room for the next. The illustrations are also in 2.5D, and with a tiny mobility of the frame, the reader's gestures create parallax effects in the image. I will return to this effect in the section about the mobile frame.
Video by Fredrik Rysjedal.

Video 28: A preview of The Cage by Martin Vaughn-James. The mobile frame is an important visual presentation form in this comic book, where the panel sequences move, observing, through different spaces.
Video by Fredrik Rysjedal.

Video 36: This is a sketch I made during the summer of 2014. It is made in a scroll format.
Video by Fredrik Rysjedal.

Video 37: PoCom-UK-001, a hypercomic by Daniel M. Goodbrey.
Video by Fredrik Rysjedal.

Video 23: Hvorfor ananas heter ananas by Jenny Jordal.
Video by Fredrik Rysjedal.

Video 10: A study trip to Preus Museum in 2014, where I got to study the magic lantern for the first time. 
Video by Fredrik Rysjedal.

Video 31: An experiment from August 2014, working on an interactive 360-degree comic experience with virtual reality in mind. This mobile frame rotates around its own axis. Video by Fredrik Rysjedal.

Video 24: Video of a side-scrolling comic with full-screen panels. It is based on the sketch called I Don't Know Grandpa from 2012, which was the first script I experimented with in this artistic research. It was not realized in the end. Text and illustrations by Fredrik Rysjedal.

Video 35: Icarus Needs, a game comic by Daniel M. Goodbrey.
Video by Fredrik Rysjedal.

Video 39: An example of 2D graphics in 3D environments (2.5D). The sailors are arranged so the camera can rotate 360 degrees around its own axis.
Video by Fredrik Rysjedal.

Picture 7, 8, 9 and 10: The course Storyboarding for Directors at London Film School with instructor Dylan Stone. Stone introduced me to the magic lantern tradition.
Photo by Fredrik Rysjedal.

Video 21: When I perform, I work in the Modul8 software, where I can activate and deactivate footage and films. Here I show how a layer with an alpha channel can be controlled over the other images. This opens up live motion graphics in the fictional space. 1982's 03:22 is playing in the background.
Video by Fredrik Rysjedal.

Video 30: A sketch made after the premiere of Sound of the Aurora to test spatial motion. In this cut, the camera orbits a lifeboat with a fata morgana surrounding it. When a camera observes objects from all angles, I think the objects either need to be 3D models, or the visual premise has to be to achieve a paper-theatre look. The latter is not the case here, so I moved on without this cut. However, in retrospect I see a possibility of using 90 degrees of this 360-degree rotation, facing only the front of the graphics – a possibility I might return to in future editions of this comic performance.
Video by Fredrik Rysjedal.

Video 38: A cut from Sound of the Aurora, introducing Astrid by approaching her with a mobile frame.
Video by Fredrik Rysjedal.

Video 48: Documentation of the first official performance of Sound of the Aurora with the 3D scene, at Mariakirken (St. Mary's Church) in Bergen in 2016. Music by 1982, and Lillian Samdal reads. Aslak Helgesen assisted me with the Oculus Rift.
Video by Fredrik Salhus.

Video 49: A performance at the 1982 Festival in 2015. Here we see Øyvind Skarbø and Nils Økland from the trio 1982, under the dynamic screen, which functions as a filter that adds waves to the images.
Video by Ann-Kristin Stølan.

Picture 6: This is Andreas Strand, my grandfather and the subject of my comics. He was a Norwegian merchant sailor, including during World War II. He died in 1965, 25 years before I was born, which is one of my motivations for getting to know more about him. Photographer unknown.

Video 26: A cut from Sound of the Aurora with a mobile frame, moving away from the sailors to reveal the ocean.
Video by Fredrik Rysjedal.

Video 32: A cut from Sound of the Aurora, showing a rolling rotation.
Video by Fredrik Rysjedal.

Video 42: An image stream in 3D, with the possibility of a free mobile frame within each panel.
Video by Aslak Helgesen.

Video 56: Documentation of the premiere of Sound of the Aurora, June 5th 2014. I am reading aloud, and 1982 (Nils Økland, Sigbjørn Apeland and Øyvind Skarbø) are improvising the music. Video by Øystein Grutle Hara.

Picture 19: I believe performance comics can reach an alternative audience compared to, for example, a digital comic for the tablet. That is why my other digital comic, Close, Closer, Closest, which was published on the iPad, was also made with performance in mind.
Photo by Ben Speck.

Picture 11: My studio at Strømgaten 1 in Bergen, here working on Sound of the Aurora. Photo by Fredrik Rysjedal.

Video 14: A lantern slide from Preus Museum. It is constructed like the uncovering slides with a mobile layer on top of a fixed background. These motions are spatial and I define them as motion graphics in this context.
Video by Fredrik Rysjedal.

Video 34: Jason Shiga's digital comic Meanwhile operates with a mobile frame in the negative space. This mobile frame has a dynamic track.
Video by Fredrik Rysjedal.

Picture 17: The interactive mobile frame lets the reader frame the story live, and choose how to navigate in the environment.
Photo by Gro Sørdal.

Video 18: Automated motion graphics in the end scene of Close, Closer, Closest.
Video by Fredrik Rysjedal.

Video 54: Shivering eyes.
Video by Fredrik Rysjedal.

Video 45: Using motion tracking in Adobe After Effects from a handheld camera to create the movement of the boat in the waves.
Video by Fredrik Rysjedal.

Picture 13: Aslak Helgesen constructed the comic sequence from the living room in 3D.
Photo by Fredrik Rysjedal.

Video 41: The evacuation. A test of mobile framing, where the camera moves around on the ship and its movement captures the comic sequences in the illustrations.
Video by Fredrik Rysjedal.

Video 29: A Soul Reaper is a comic demo from late 2011 that demonstrates the web-design technique of scroll activation. The mobile frame of the negative space can move vertically, and as the reader scrolls, motions and mobile framing in the fictional space are set off. Soul Reaper is made by Saizen Media and is published online:

Picture 18: The 3-dimensional digital environment lets the reader enjoy the world of the narrative in 360 degrees.
Photo by Gro Sørdal.

Video 40: Astrid in 3D.
Video by Fredrik Rysjedal.