4. Results and Discussion

4.1. Comparison of sonifications for three cell types

A collection of sonifications was constructed from the analysis of proteomic datasets associated with three different cell types. To recap, the motivation for this study was to determine whether distinct sonifications could reliably indicate differences due to the neuropathology associated with ALS in a study of the differentiation of stem-cell-derived neurons (as in Martens, Poronik and Saunders 2016). Although the details of that study are beyond the scope of this paper, for reference, the three cell types identified in this previous research are Fibroblast, Control, and ALS, this last type derived from cases of Amyotrophic Lateral Sclerosis. The following audio demonstrations present three examples of sonifications for each of the three cell types. In these demonstrations, the differences between sonifications generated for cells of a given type (i.e., within-type comparisons) can be examined separately from the differences between sonifications generated for cells of different types (i.e., between-type comparisons).

AudioObject 4: Examples of sonifications for three Fibroblast cells presented in succession. Each example is 2 seconds in duration, with 0.5 seconds of silence between the three examples.


The sounds presented in Audio Object 4 demonstrate three similar data sonifications of a multivariate proteomic dataset, in which one variable (the molecular weight of the proteins found in each of the three Fibroblast cells) was mapped to grain frequency, and a derived multivariate component was mapped to grain spectrum. When heard over headphones, some grains sound as if they arrive from the right and some from the left, in a manner that highlights the temporal patterns created from the data.
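To make the mapping concrete, the following sketch illustrates this kind of parameter mapping in Python. It is not the authors' actual synthesis code: the function names, the frequency range, and the "brightness" stand-in for grain spectrum are all assumptions made for demonstration.

```python
# Illustrative sketch of a granular-synthesis parameter mapping
# (assumed names and ranges; not the authors' actual pipeline).

def normalize(values):
    """Scale a sequence of numbers linearly to the 0..1 range."""
    lo, hi = min(values), max(values)
    span = hi - lo or 1.0  # avoid division by zero for constant data
    return [(v - lo) / span for v in values]

def map_to_grains(mol_weights, components,
                  freq_range=(200.0, 2000.0),
                  brightness_range=(0.0, 1.0)):
    """Return one (frequency_hz, brightness) pair per protein.

    mol_weights  -> grain frequency (the pitch of each grain)
    components   -> a brightness-like proxy for grain spectrum
    """
    f_lo, f_hi = freq_range
    b_lo, b_hi = brightness_range
    grains = []
    for w, c in zip(normalize(mol_weights), normalize(components)):
        freq = f_lo + w * (f_hi - f_lo)    # molecular weight -> grain frequency
        bright = b_lo + c * (b_hi - b_lo)  # derived component -> grain spectrum
        grains.append((freq, bright))
    return grains

# Example with three hypothetical proteins (invented values):
grains = map_to_grains([12000, 50000, 150000], [0.1, 0.8, 0.4])
```

A real system would hand these per-grain parameters to a granular synthesizer; the sketch only shows how a data variable and a derived component could each drive one auditory attribute independently.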

AudioObject 5: Examples of sonifications for three cells sampled from the Control group, again presented in succession as in Audio Object 4. As before, the three examples are separated in time by 0.5 seconds of silence.

 

The sounds presented in Audio Object 5 demonstrate data sonifications generated for three cells sampled from the Control group (i.e., samples of cells from cases that do not exhibit symptoms of ALS).

 

AudioObject 6: Examples of sonifications for three cells sampled from the ALS group, again presented in succession as in the preceding Audio Objects 4 and 5.

The sounds presented in Audio Object 6 demonstrate data sonifications generated for three cells sampled from the ALS group (i.e., samples of cells from cases exhibiting the neuropathology associated with ALS).

4.2. Performance of the sonifications and their practical use

The proteomic data sonifications demonstrated in Audio Objects 4, 5, and 6 afforded clear distinctions between the three cell types compared. The within-type comparisons showed enough similarity for listeners to form a conceptual prototype of each cell type. Indeed, after a group of naïve listeners examined these sets of sonifications, they were able to describe the differences well enough to classify new sonifications into one of the three types. Of course, the ability of naïve listeners to distinguish between classes of sonifications after some training does not imply that the sonifications will necessarily prove useful to practitioners. Barrass and Kramer (1999) provide a comprehensive survey of approaches to designing sonifications that emphasizes this as an ongoing concern within existing sonification practice. Of particular relevance is their discussion of how knowledge derived from the perceptual evaluation of sonification can allow designers to predict how listeners will perceive (if not understand and interpret) variations in novel sonifications. Barrass and Kramer make the point well that the comprehensive evaluation of newly designed sonifications requires more than perceptual studies, since such studies typically do not address the issues of representation that are central to sonification. Users of sonification systems need to hear the underlying data relation(s) conveyed by the sounds, rather than just the changes in the auditory attributes that are modulated by them.

 

So, beyond the elementary perceptual evaluations already undertaken, it is worth asking what other evaluation methods should be used in the further design and development of sonifications such as those presented in this paper. Further tests addressing broader issues of sonification system usability must be undertaken. Ultimately, the completed sonification system must meet explicit acceptance criteria before its success can be demonstrated. As outlined by Shneiderman and Plaisant (2010), these criteria for evaluating system performance might include the following:

 

·      Time for users to learn specific functions

·      Speed of task performance

·      Rate of errors by users

·      User retention of commands over time

·      Subjective user satisfaction

 

In addition to the overall satisfaction with a displayed sonification that may be expressed by system users with domain knowledge (satisfaction that may diminish with time), a more objective evaluation is recommended. It is not enough that users believe they can use a system effectively; rather, it is important to determine whether users can reliably make accurate judgments about the information being displayed, as part of a typical use-case analysis. Otherwise satisfying sonifications, such as those that are aesthetically pleasing, will eventually be rejected if they obtain no support or validation from user testing with respect to diagnostic accuracy. Ultimately, it is hoped that such an approach will contribute to the formulation of a more generally acceptable theory of sonification. Incorporating the results of empirical tests might allow sonification theory to evolve through a somewhat natural “winnowing out” of unsuccessful approaches, supporting a general approach to sonification that can potentially provide truly winning applications that fill gaps between existing information displays.