A Proposed Embodied Mapping Strategy for IoT Network Monitoring
To demonstrate how some of the parameters and approaches to data-driven music discussed thus far might be applied in a sonic information design context, we now consider a recently devised mapping strategy for the live monitoring of traffic activity in a large-scale Internet of Things (IoT) network.
Whilst Worrall (2015) has demonstrated how sonification can be successfully deployed to represent metadata in an organization’s own internal network, a different set of factors is at play in the sonification of IoT data. An IoT network comprises physical objects, machines and devices that have been enabled for Internet connectivity. The mapping strategy presented here is intended for use with the Pervasive Nation, Ireland’s national-scale IoT test-bed. The network consists of a diverse set of devices spread across the country, monitoring everything from water levels for flood detection to agricultural applications. These devices relay data through a system of gateways (or base stations) distributed around the country, and a log of all messages shared across the network is maintained by the network server. Pervasive Nation is a Low-Power Wide-Area Network (LPWAN), meaning that it operates at low throughput, processing very few data packets compared with a modern cellular network. This is more than enough to support messages from IoT devices, whose transfer speeds usually fall below 27 kb/s (Adelantado, Vilajosana, Tuset-Peiro, Martinez, Melia-Segui and Watteyne 2017). As a result, IoT network data of this nature contains no continuous variables. However, given the number of devices online, the data can still become quite dense and complex.
IoT networks are generally concerned with machine-to-machine (M2M) communication and, as such, device payloads (sensor measurements) are encrypted and inaccessible. Network monitoring practices therefore tend to focus on maintaining the overall “health” and integrity of the network. To this end, a number of behaviors need to be detected: devices that continually fail to connect to the network server, devices that exhibit irregular behavior (e.g. erratic switching across frequency positions, or constantly reconnecting to the network server), and devices with low signal strength or a poor signal-to-noise ratio. Monitoring for these anomalies generally consists of visually scanning large tables that describe the activity of each node over some predefined time period. If a problem is identified, a visual representation of the data from individual devices can be accessed. Given the large amount of data involved, this process can be slow and inefficient. Furthermore, it is all carried out after the fact, with the result that problems in the network can continue undetected for some time. These issues could be addressed by designing an auditory display that represents the data in sound in real time.
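The log-scanning task described above can be sketched programmatically. The fragment below is a minimal illustration only: the field names (dev_id, type, joined, freq, rssi, snr) and the thresholds are hypothetical assumptions for the sake of the example, not Pervasive Nation’s actual schema or operating parameters.

```python
# Hypothetical sketch: scanning a network-server message log for the three
# anomaly classes described in the text. Field names and thresholds are
# illustrative assumptions, not the real Pervasive Nation schema.
from collections import defaultdict

def scan_log(messages, join_fail_limit=5, freq_switch_limit=10,
             rssi_floor=-120, snr_floor=-20):
    """Return per-device anomaly flags from a list of message dicts."""
    stats = defaultdict(lambda: {"join_fails": 0, "freqs": set(),
                                 "weak_signal": False})
    for msg in messages:
        s = stats[msg["dev_id"]]
        # Devices that continually fail to connect to the network server.
        if msg.get("type") == "join_request" and not msg.get("joined"):
            s["join_fails"] += 1
        # Erratic switching across frequency positions.
        s["freqs"].add(msg.get("freq"))
        # Low signal strength or poor signal-to-noise ratio.
        if msg.get("rssi", 0) < rssi_floor or msg.get("snr", 0) < snr_floor:
            s["weak_signal"] = True

    anomalies = {}
    for dev, s in stats.items():
        flags = []
        if s["join_fails"] >= join_fail_limit:
            flags.append("repeated join failures")
        if len(s["freqs"]) >= freq_switch_limit:
            flags.append("erratic frequency switching")
        if s["weak_signal"]:
            flags.append("weak signal / poor SNR")
        if flags:
            anomalies[dev] = flags
    return anomalies
```

A real-time auditory display would replace this batch scan with the same checks applied per message as it arrives, which is precisely what makes the after-the-fact table-scanning workflow a candidate for sonification.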
The future is not going to be people talking to people; it's not going to be people accessing information. It's going to be about using machines to talk to other machines on behalf of people. (Tan and Wang 2010)
A recurrent metaphor employed across the IoT literature to describe M2M communication, reflected in the above quote from Tan and Wang, is that of “machines talking to each other.” Drawing from Imaz and Benyon’s (2007) recommendations to structure HCI design on the basis of conceptual metaphors and blends, this metaphor can be adopted as a frame of reference for our auditory display design. The auditory display can be conceptualized as a blend between the data and sound, framed in terms of a conversation between machines (see figure 2). Designing this interpretation of M2M communication into our auditory display might help to make it more intelligible to the listener and support them in understanding and reasoning about the data. Other relevant work in fields related to embodied cognition can be called upon throughout the design process to further inform and refine design choices.
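To make the metaphor concrete, the “conversation between machines” framing might be sketched as a mapping in which each logged message becomes a short utterance from a stable per-device “voice.” The field names, parameter ranges and synthesis parameters below are illustrative assumptions, not the actual values of the mapping strategy.

```python
# Hypothetical sketch of the "machines talking" metaphor as a mapping:
# each network message is rendered as a short vocal-like utterance, so the
# display reads as a conversation. All values here are illustrative.
def message_to_utterance(msg):
    """Map one log message to synthesis parameters for a short utterance."""
    # Pitch: a stable per-device "voice" so recurring devices are
    # recognizable across the conversation (deterministic hash of the id).
    base_pitch = 100 + (sum(ord(c) for c in msg["dev_id"]) % 24) * 25

    # Loudness: stronger signal -> a more present voice, assuming RSSI in
    # roughly the -120..-60 dBm range.
    rssi = msg.get("rssi", -90)
    amplitude = min(1.0, max(0.0, (rssi + 120) / 60))

    # Timbre: a failed join sounds "strained" (brighter), so problem
    # devices stand out within the conversation.
    strained = msg.get("type") == "join_request" and not msg.get("joined")
    return {"pitch_hz": base_pitch,
            "amp": amplitude,
            "brightness": 0.9 if strained else 0.3}
```

Framing the parameters this way keeps the blend intact: the listener is not decoding abstract data streams but overhearing distinguishable voices, some of which sound strained or faint when a device is misbehaving.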