The Lab in the Home


 

It is July 2020, and I am joining the lab’s new postdoc, who specializes in bioinformatics, for an online training session to get some advice on how to use Python, a popular programming language for data science. The other postdoc in the meeting had been learning the command line (the computer’s text-based interface) from online courses but had run into difficulties while working independently from home. (It was during this meeting that I first noticed that the distant crows crying repeatedly in the background belonged to the acoustic environment of the lab itself, where the bioinformatician was located that day.) As most wet lab experiments were stopped during the seven-week emergency shutdown, finding efficient and useful ways for lab members to keep working, especially those who typically spent most of their time at the bench, was a key challenge for the PI. In addition to having them map out experiments while at home to prepare for their eventual return to the lab and “to reflect on longer term planning,” as he said in one meeting, the PI also shifted those who were capable, even lab technicians, toward working more with computational methods.

 

As we get into the lesson, the bioinformatician shares a view of his desktop so we can watch the steps he takes to complete the programming tasks at hand. This involves him moving between different programs and windows, from the computer’s terminal to a text editor to a web browser. He even sends relevant links to us in the video chat text box. Every time he hits the enter key on his computer – for example, when he wants to run the code he has set up in the command line, or even when sorting through windows or running into trouble with a script and needing to alter it and try again – he repeats yoissho quietly into his microphone. Although yoissho can be translated a bit like a grunt, a noise made with physical exertion (like when going up stairs or picking up something heavy), here, he told me later, it reflects a kind of empathy with, or anthropomorphization of, the computer as it struggles to load large documents or open “heavy” files.[4] This is a simple moment that, for me, along with the crows, indicated a key difference in the acoustic presence afforded by these particular online forms of exchange: a word repeated, breathed unconsciously, suggested a relative intimacy that hadn’t been part of face-to-face exchanges in the lab. While such close-up sounds might be modulated as ambience in physical presence, in a monophonic, technically structured space where everything is defined as signal or obstruction of signal, the postdoc’s whisper becomes elevated in significance.

 

Of course, the absence, or in this case magnification, of sound is now a mundane technological phenomenon. During the months of the shutdown and afterwards, as the lab maintained even its primary meetings in electronic formats and forms of telework continued at the urging of the institute, breakdowns in sound were accepted as a natural part of connecting in this way; distortion, signal loss, and feedback were common, and strategies were developed to take these moments in stride and quickly overcome them, such as ending a meeting and restarting it or deciding to finish the meeting later in person. Kikoemasen (I can’t hear you) became a frequent phrase, one that always involved a stuttering of confusion while the speaker attempted to determine whether the problem was a complete technical dropout or a microphone that was not working well enough. In a more recent meeting, when those attending from the lab were spread out among its various rooms and those at home were logged in from living rooms, one postdoc’s audio began to echo. His solution was to immediately mute his computer and join another postdoc sitting at the opposite bench in the same lab; in his haste, he left his computer logged in, relaying a silent image of the lab’s ceiling.

 

During another online lab meeting, we got started with one postdoc absent. Most of the lab members logging on were physically present in the lab, again spread out between the office space and the benches. The PI mentioned sending the missing postdoc a message to remind him about the meeting and then asked the others if anyone had seen him that morning. At the same moment, a lab technician swiveled in her chair while getting settled at her desk and, realizing it had made a loud squeak, turned to another postdoc one desk removed from her to ask, a bit embarrassed, “Did you hear that?” His confused response, “I haven’t seen him,” referred instead to the missing postdoc who hadn’t joined the meeting, his attention having been directed to the meeting’s audio track and the PI’s question. As she began to laugh over the confusion, the PI, in his glass office behind them, inquired teasingly: “What are you laughing about? It’s distracting.”

 

These brief examples begin to show the way individual soundscapes, like the cries of crows, are subtly yet constantly positioned – by both users and the technological affordances of software, computers, and microphones – as barriers to clarity and communication. The audio prioritized in these telepresent formats is up close and spoken directly into the microphone, generating an intimate audio signal and positioning environments, ambience, and soundscapes as noise or interference to be parsed and filtered out. These formats also emphasize directed, purposeful speech and forms of on-and-off turn-taking reminiscent of half-duplex devices, like walkie-talkies. More often than not in these examples, telepresence is prioritized over physically located presence, which is relegated to the background.

 

As Paul Virilio warns, “The question of telepresence delocalizes the position or orientation of the body. The whole problem of virtual reality is that it essentially denies [...] the ‘here’ in favor of the ‘now’” (Virilio 1999b: 44). Of course, individuals always negotiate sonic presence and speech through the materials and technologies of their physical environments, for instance by modifying or constructing physical spaces to create real and symbolic acoustic barriers: perhaps to literally wall off sound in the engineering of privacy, or to demarcate certain interactions as belonging to specifically marked spaces, just as the scientists strive to maintain silence in the shared office. In fact, one of the first things the PI did when he acquired the open space that was to become his lab was to build glass walls to create an interior office for himself. He explained to me that he was having trouble writing in the open – typically Japanese – space he had inherited and that it was difficult to concentrate. He was also worried, he explained, about how he could give honest feedback to those working in the lab if everyone else could overhear: “What if you want to have a private conversation?” he wondered.