“All stable processes we shall predict. All unstable processes we shall control.” — John von Neumann
Quantify everything! That is the rule of thumb of computation. As an elaboration on Pierre-Simon Laplace’s fantasy of “an intelligence sufficiently vast to submit (the totality of) data to analysis,” computation emerges from the idea that the past might hold the key to the future. The translation of the complexity of nature into data permits calculation, and thereby prediction. We only need a calculator powerful enough to make said calculation. It was mathematician George Boole who in the mid-nineteenth century laid the groundwork for this abstraction by reducing the logic of computation to the dichotomy of “true” or “false,” or 1 or 0. A thing either exists or it does not.
In Boolean algebra, the world of numbers—and along with it, the world as a whole—breaks down into binary code. Numbers as we know them are forms in which this coding manifests itself. In other words, they do not represent anything. The Boolean digits 1 and 0 do not designate a quantity; instead, they mark presence and absence. Thus, 1 stands for the universe, and 0 stands for nothingness. Nevertheless, the terms are not governed by a mutually exclusive relationship. Their relationship is complementary: they follow the same logic. Just as 1 times 1 always yields 1, and 0 times 0 always produces 0, x times x always equals x in the Boolean world. For the same reason, All and Nothing meet up in the formula x = xⁿ. Because x can stand for anything and everything—indeed, for the universe in its entirety—it is no exaggeration to speak of a digital world formula in this context.
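The idempotent law at the heart of this passage can be checked mechanically. A minimal sketch in Python, offered purely as an illustration of Boole’s x · x = x and x = xⁿ:

```python
# In Boolean logic a digit marks presence (1) or absence (0),
# so multiplying a value by itself changes nothing: x * x == x.
for x in (0, 1):
    assert x * x == x        # Boole's idempotent law, x² = x
    assert x ** 7 == x       # hence x = xⁿ for any n >= 1

print("idempotence holds for 0 and 1")
```

For any other number—2, say—x times x diverges from x, which is exactly why the Boolean world admits only these two digits.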
Cultural theorists Martin Burckhardt and Dirk Höfer see the Boolean formula as the foundation upon which the digital philosophy of computation is built: either everything is quantifiable, or nothing is. If one thing can be quantified, so can everything, proving the existence of a totality. Almost spiritual in tone, the authors proclaim: “Through the Zero and the One all things were made; without them nothing was made that has been made.”
It is the search for the meaning of totality, and subsequently the quantification of totality, that led mathematicians, statisticians, and cryptographers to expand their visions of computation as a complexity-unravelling and randomness-anticipating strategy. Henri Poincaré states that “if we knew exactly the (totality) of the universe at the initial moment, we could predict exactly the situation of that same universe at a succeeding moment.” But directly afterwards, he counters that “it may happen that small differences in the initial conditions produce very great ones in the final phenomena.” The only problem is that the totality of the universe is not exactly known, and therefore not a totality at all. To correct this problem, we would need more data; indeed, all data would need to be included. It was this thirst for data and data-processing that drove mathematician and electrical engineer Claude Shannon to carry Boole’s concepts of a binary world into the design of electrical switching circuits—which can be positioned on or off—and, in doing so, create a device capable of computing anything computable. As Burckhardt and Höfer write:
Boole envisioned a kind of mathematics that would allow for calculations with apples and oranges—that is, allow for one to jump from one number system to another. Such a leap was made possible when he took the 0 and the 1 out of any order of denotative logic— indeed, out of mathematics itself. In doing so, he made the 1 no longer stand for a quantity, but a presence, and the 0 stand for an absence (of something—whatever it may be). The real revolution lies here: Boole’s binary logic detaches itself from mathematics; it abandons the material quality of what it describes. Because it oscillates between the universe and the void, it can describe anything. Viewed in this light, Boolean thinking is not algebra so much as a general theory of signs. […] In fact, this same abstractness is what fascinated readers of Shannon’s Mathematical Theory of Communication a good hundred years later, and today is what allows us to digitize not only texts, images and sounds, but also earthquakes, brain activity and extraterrestrial phenomena.
As the father of information theory (or so he is called), Shannon imagined the first computer able to abstract the totality of information. This led Edward Lorenz to formulate his theories on deterministic chaos and complexity, only to discover that the data his computer was able to produce and process was insufficient; complexity reached so far into the decimals, if not infinitely far, that predictions of future events could not hold beyond a few days. Meanwhile, mathematician and philosopher Norbert Wiener, the father of a new science called “cybernetics,” which he framed as “the scientific study of control and communication in the animal and the machine,” integrated computation into every possible field of research, vigorously attempting to streamline the totality of the universe along Boolean logic.
Still, after over 200 years of proving and disproving the achievability of predicting the future, of attempting to concoct devices and abstractions of comprehension, of rendering reality into data, we see today that it is not enough. Following Boolean logic, we know it will never be enough, that the totality of information, of “the laws of nature and the situation of the universe,” will never be grasped, as it is a matter of all or nothing.
In our endless search for strategies to deal with the future, we have wound down a path of nearly manic fascination with stability. We began with normalisation, a strategy of exclusion of possible futures, then imagined automation, a strategy for upholding normalisations into infinity, and now live in a world of computation, a strategy for shaping the future. As automation is the extension of normalisation—protocolled normalisation projected into infinity—computation is the extension of automation. It is the assimilation of the Boolean formula into the scripts of automation. Where automation, in its most essential form, can only repeat past experiences indefinitely, merely assuming that the future will be similar to the past, computation actively models a future, updates itself, and hosts the expectation of its self-preservation.
Just as global telecommunications have collapsed time and space, computation conflates past and future. That which is gathered as data is modelled as the way things are, and then projected forward — with the implicit assumption that things will not radically change or diverge from previous experiences. In this way, computation does not merely govern our actions in the present, but constructs a future that best fits its parameters. That which is possible becomes that which is computable. That which is hard to quantify and difficult to model, that which has not been seen before or which does not map onto established patterns, that which is uncertain or ambiguous, is excluded from the field of possible futures. Computation projects a future that is like the past—which makes it, in turn, incapable of dealing with the reality of the present, which is never stable.
Computation is a paradoxical undertaking, according to artist and writer James Bridle, since its goal in adapting to the future is, in fact, stability. Computation presents us with a future that supports our assumptions, since computation, and the technology that enables it, offer us the comforting pillow of neutrality: machines will make better choices than humans, because they are not tainted, spoiled, or tangled up in our limited capacity to comprehend the random events of the future. They do not need to exclude possible futures to make decisions; they do not need to fragment time into intervals; they are not overwhelmed by arbitrariness. Computational machines only follow the sterile logic of computation. They have no culture, no history, no backstory—only pure, unbiased, algorithmic reasoning. Machines will not only make better choices than humans; they will make the best choices.
This is, of course, the promise of computation. Through computation, our assumption—a fiction, remember—that the future is like the past is continually confirmed, quietly replaced by the fiction of objectivity. By pushing both decision-making processes and the responsibilities for those decisions onto the machine, those aspects of human life which have traditionally been open to interpretation are now quarantined from our contamination. Bridle continues:
This conditioning occurs for two reasons: because the combination of opacity and complexity renders much of the computational process illegible; and because computation itself is perceived to be politically and emotionally neutral. Computation is opaque: it takes place inside the machine, behind the screen, in remote buildings—within, as it were, a cloud. Even when this opacity is penetrated, by direct apprehension of code and data, it remains beyond the comprehension of most. The aggregation of complex systems in contemporary networked applications means that no single person ever sees the whole picture. Faith in the machine is a prerequisite for its employment, and this backs up other cognitive biases that see automated responses as inherently more trustworthy than non-automated ones.[…] Technology’s increasing inability to predict the future—whether that’s the fluctuating markets of digital stock exchanges, the outcomes and applications of scientific research, or the accelerating instability of the global climate — stems directly from these misapprehensions about the neutrality and comprehensibility of computation.
Like all normalisations, which transition from a narrative elucidating the unknown future into a certainty of unknown origin, computation has transitioned from an auspicious aspiration into a solidified given. And not without reason. The quantified world of computation, built on statistical analyses of centuries of data, is right too often for us to ignore. When predicting the weather patterns of tomorrow, the old, superstitious weather lore—“when March blows its horn, your barn will be filled with hay and corn,” for instance—simply falls short of contemporary satellite imagery, atmospheric models, and fluid simulations. The guarantees of computation offer a firmer stability and certainty than other types of normalisation do. But the grandeur of computation—quantify everything, not just something—is expansive, requiring more and more input to reach the totality of information and tighten the net of its predictive accuracy. This grandeur overwhelms us into compliance: the dataset is good, but it can be better. Yet outsourcing agency to automation obfuscates the internal complexity of its processes. More simply,
Computational thinking has triumphed because it has first seduced us with its power, then befuddled us with its complexity, and finally settled into our cortexes as self-evident.
The self-evidence of computation is a reminder of the normality of normalisation, how it dims the logic of any alternative. But the ambitions of computation, Bridle suggests, make its predictions of the future less and less accurate, which in turn makes its self-evidence not only problematic, but self-replicating. Computation needs to lay promise upon promise to mask the original but faulty assumption at its core—that the future is nothing more than the effect of past causes, which can be abstracted into data—and in doing so flatten out its raison d’être altogether. Computation’s promises of neutrality, comprehensibility, self-sustainability and, most importantly, stability have resulted in an intricate self-affirming web.
As we know, our dream of stability, of an unalterable future, is not at all new. Nor are the dynamics of self-affirmation that computation camouflages. What is profoundly different now is computation’s reliance on the past, and its ability to replicate it. Data—which is necessarily gathered in the past and never in the future—is used to design models of the future. Computation therefore assumes that the future will be like the past. Unlike humans, who also live by this assumption but rely on faulty memory, computational machines are capable of maintaining this data, storing it and recollecting it, indefinitely. Where the past was open to interpretation prior to computational thinking, it is now precisely registered and ostensibly objectively archived, ready to be photocopied into the future. The past can be used as a template for the future, yet only the parts which have been transformed into data and which have been logged into the vaults of computation. By actively creating a future which is like the past—through steering our outsourced decision-making and responsibility—computation has given us the ability to repeat, recreate, reiterate and reenact our past with a precision previously unknown to us. And in this sense, the promise of stability—stability through repetition, through management, through continuation, through simulation, through the complete annulation of interpretation and variability—has become reality.
To be is to have been, and to project our messy, malleable past into our unknown future. — David Lowenthal
Computation’s re-creation of the past in the future relies on the rationalisation of memory, which determines our understanding of what the totality of information means exactly. Memory in the automated world has a different meaning than human memory, yet it is the conflation of these two meanings that generates the replicating capabilities of computation. For a clear example of the way memory is dealt with in greedy archives of computation, we can look to Borges’s 1942 story “Funes, the Memorious.” In the story, the narrator recalls an encounter with a mysterious character, Ireneo Funes, who was able to remember everything he had ever seen or experienced.
With one quick look, you and I perceive three wine glasses on a table; Funes perceived every grape that had been pressed into the wine and all the stalks and tendrils of its vineyard. He knew the forms of the clouds in the southern sky on the morning of April 30, 1882, and he could compare them in his memory with the veins in the marbled binding of a book he had seen only once, or with the feathers of spray lifted by an oar on the Río Negro on the eve of the battle of Quebracho. Nor were those memories simple—every visual image was linked to muscular sensations, thermal sensations, and so on. He was able to reconstruct every dream, every daydream he had ever had. Two or three times he had reconstructed an entire day; he had never once erred or faltered, but each reconstruction had itself taken an entire day. “I, myself, alone, have more memories than all mankind since the world began,” he said to me. And also: “My dreams are like other people’s waking hours.” And again, toward dawn: “My memory, sir, is like a garbage heap.” A circle drawn on a blackboard, a right triangle, a rhombus—these are forms we can fully intuit; Ireneo could do the same with the stormy mane of a young colt, a small herd of cattle on a mountainside, a flickering fire and its uncountable ashes, and the many faces of a dead man at a wake. I have no idea how many stars he saw in the sky.
Funes’s ability to remember—or, more accurately, his inability to forget—allows him to accumulate and recollect strings of experiences so fine-grained that almost no cognitive gaps need to be filled in through inductive means. Like Bergson’s definition of change as a continuous and dynamic whole, Funes’s memory is non-intermittent and indivisible, resulting in an overwhelming resolution which permits the comparison of hypothetical apples to oranges. It becomes clear that, to harvest the totality of information, the storage of this information needs to come first, and it needs to be executed in infinite detail. Funes registers every sensation, every movement, and every vision into the logbooks of his brain; they are immediately available to him at every instant. His body is a mobile recording device, needing only a glance at an object to have it captured, stored, and carried within forever.
The baseline of computation is a capacity for memory like this. If a machine forgets nothing, then its scan of the world and the situation of the universe would result in a comprehension of totality. All angles would be seen, all sounds heard, all movements registered. Even if it takes some time before everything is logged, the unforgetting memory would collect and safeguard all data until the totality is achieved, slowly filling in a seemingly endless blank. The retrieval of this data from the archives and its subsequent comparison with other data then makes it possible to find overlap and similarity, or to produce an impression of an even denser resolution, layer upon layer, map upon map. Just as Funes can compare the southern clouds to a splash of water, placing them virtually next to each other, so can computation place sound next to image, and meta-data next to movement.
However, absolute memory warrants the comparison between incomparable differences, but not between similarities, even likenesses. Funes, confronted with the generalising logic of the average mind, is baffled by the banality of perception, by the simplifying and compartmentalising processes people use to grasp the complex. He is, in fact, unable to combine two or more experiences as being alike.
Funes, we must not forget, was virtually incapable of general, platonic ideas. Not only was it difficult for him to see that the generic symbol “dog” took in all the dissimilar individuals of all shapes and sizes, it irritated him that the “dog” of three-fourteen in the afternoon, seen in profile, should be indicated by the same noun as the dog of three-fifteen, seen frontally.
What does the comparison between incomparable differences achieve if it is unable to group and read abstract similarities? In computational thinking, similarities are referred to as “patterns”, which is a way to describe recurring sequences or strings of data from different sources. To find a pattern is to lay the sequences on top of each other and note both the similarities and the differences—which pixel or group of pixels accords with another, which wavelength synchronises with another, which quantified representation of reality matches with another. This understanding of similarity is entirely different from what Borges defines as similarity. Indeed, a dog seen from the side on a sunny day and then seen frontally on a cloudy day is visually not the same dog. Yet in human perception, leaving out the differences entirely and only focussing on the similarities, the dog is considered one and the same, regardless of the time passing or the weather changing or the angle of view shifting. It is in this sense that the perception of computation, based on a flawless and complete memory, differs from the faulty and messy mind of mankind. To put it in Borges’s words, “To think is to ignore (or forget) differences, to generalise, to abstract.” And as computers are incapable of forgetting or ignoring, their vision of the world remains particular, built up from endless and immediate particulars.
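The overlay operation described above—laying two quantified sequences on top of each other and noting where they accord—can be sketched in a few lines. The two “views” and their pixel values below are invented for illustration; nothing here comes from an actual vision system:

```python
# Two quantified views of the same dog: rows of pixel intensities.
# The names and the values are hypothetical, chosen only to illustrate.
profile_view = [12, 40, 40, 97, 3, 55, 21, 8]
frontal_view = [12, 38, 40, 97, 5, 55, 20, 8]

# A "pattern" in the computational sense: the positions at which
# the two sequences accord, pixel by pixel.
matches = [i for i, (a, b) in enumerate(zip(profile_view, frontal_view))
           if a == b]
similarity = len(matches) / len(profile_view)

print(matches)     # positions of agreement: [0, 2, 3, 5, 7]
print(similarity)  # fraction of accord: 0.625
```

The machine can only count accord position by position; deciding that 0.625 means “the same dog”—the generalising, forgetting leap Borges describes—remains a threshold a human must choose.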
Bergson describes the same process of abstraction in a different way. He assumes that our mind is, indeed, capable of perceiving particulars, but that the breaking up of continuity into particulars, into intervals, allows the mind to bring certain events to the front and send others to the back.
When the two changes, that of the object and that of the subject, take place under particular conditions, they produce the particular appearance that we call a “state.” And once in possession of “states,” our mind recomposes change with them. […] There is nothing more natural: the breaking up of change into states enables us to act upon things, and it is useful in a practical sense to be interested in the states rather than in the change itself.
Like computation, which breaks down change into particulars, into states, or better, into fixed numbers, human perception cuts the continuous flow of information into befores and afters. Computation, however, is unable to select, to choose between these states. By adding more and more states to the memory without deleting what lies between them, its perception reaches a point at which it reconstructs the graduality of change again, like a film shot at 24 frames per second, the images placed so tightly together that the differences between them blur into one continuous movement. The human mind selects, first by breaking up the flow of change into states, and then by isolating specific states and whiting out others.
The facts […] show us, in normal psychological life, a constant effort of the mind to limit its horizon, to turn away from what it has a material interest in not seeing. Before philosophizing one must live; and life demands that we put on blinders, that we look neither to the right, nor to the left nor behind us, but straight ahead in the direction we have to go. Our knowledge, far from being made up of a gradual association of simple elements, is the effect of a sudden dissociation: from the immensely vast field of our virtual knowledge, we have selected, in order to make it into actual knowledge, everything which concerns our action upon things; we have neglected the rest. The brain seems to have been constructed with a view to this work of selection. That could easily be shown by the way in which the memory works. Our past […] is necessarily automatically preserved. It survives complete. But our practical interest is to thrust it aside, or at least to accept of it only what can more or less usefully illuminate and complete the situation in the present. The brain serves to bring about this choice: it actualizes the useful memories, it keeps in the lower strata of the consciousness those which are of no use. One could say as much for perception.
Forgetting permits agency and knowledge. It thereby keeps us sane, as the totality of information would bewilder us, as it does Funes:
Funes could continually perceive the quiet advances of corruption, of tooth decay, of weariness. He saw—he noticed—the progress of death, of humidity. He was the solitary, lucid spectator of a multiform, momentaneous, and almost unbearably precise world. Babylon, London, and New York dazzle mankind’s imagination with their fierce splendor; no one in the populous towers or urgent avenues of those cities has ever felt the heat and pressure of a reality as inexhaustible as that which battered Ireneo, day and night, in his poor South American hinterland.
The totality of information does not, however, bewilder computational devices. Bewilderment is a human trait, inextricably connected to the limitations of the human mind. Yet it is these same limitations which allow me to compare Borges’s fictions to the philosophy of Bergson, written 30 years apart and numerous decades before my birth. My ability to forget permits me to see similarity in written texts of the same language, by selecting that which is relevant and leaving out that which is not. It permits me to block out the contexts in which both texts were written, and the backgrounds both writers had. It supports my mission to create a new insight or a new interpretation. What it does not permit is for me to compare these texts to the structural integrity of a skyscraper or to the average yearly flow of the Nile basin. It does not allow me to capture the total number of characters used in both texts and compare them to the history of every text ever written—which is, indeed, a possibility for computation.
It is the dynamic of forgetting, reinterpreting, and re-embodying that creates not only individual memory, but history in general—that is, the collection of past experiences and the selection of the memories captioning those experiences. The limitation active in the human mind is mirrored by the succession of generations in a community. People do not just forget; they also die, perhaps the most extreme form of forgetting. Their experiences are lost, or, when written down or otherwise registered, left unprotected by their originator, and thus again open to interpretation. The past inescapably recedes from the present, taking with it the ideas and memories of those who have passed away. All that is left are the traces they leave behind, either in direct transmission to another, or in another physical form.
Normalisation is intrinsically a fictionalisation of the past projected into the future, a narrative built on the ruins of bygone days, solidified in the collective memory as a given. Its origin is forgotten, irretrievably lost in the inevitable progression of time, which forces memories to fade and generations to pass. The process that forms normalisations is therefore based on the same selective procedure performed by the brain: to make sense of the residue of the past, we include some traces and sources and leave out others, creating a consistent whole with a logical, reasonable narrative. David Lowenthal describes this relationship to the past as a visit to a foreign country, where similarities and differences are extracted selectively from experience and subsequently internalised as normal to straighten out this unknown territory.
The past itself is gone—all that survives are its material residues and the accounts of those who experienced it. No such evidence can tell us about the past with absolute certainty, for its survivals on the ground, in books, and in our heads are selectively preserved from the start and further altered by the passage of time. These remnants conform too well with one another and with knowledge of the present to be denied all validity, yet residual doubts about the past’s reality help to account for our eagerness to accept what may be dubious about it. There can be no certainty that the past ever existed, let alone in the form we now conceive it, but sanity and security require us to believe that it did.
Interpreting the past as a way to avoid its complexity or escape its “inexhaustible heat and pressure,” by cognitively knitting together a selection of the remnants is not only what keeps us sane, but also what allows us to take a glimpse, through the experience of hindsight, at what may be to come. It provides a sense of security, of certainty. By infusing the narrative past with present conformities, we highlight similarities. When what happens now has happened in the past, it reassures us of the outcome. Lowenthal continues:
As modes of access to the past, memory, history, and relics exhibit important resemblances and differences. By its nature personal and hence largely unverifiable, memory extends back only to childhood, though we do accrete to our own recollections those told us by forebears. By contrast, history, whose shared data and conclusions must be open to public scrutiny, extends back to or beyond the earliest records of civilization. The death of each individual totally extinguishes countless memories, whereas history (at least in print) is potentially immortal. Yet all history depends on memory, and many recollections incorporate history. And they are alike distorted by selective perception, intervening circumstance, and hindsight.
History, here, as immortal as it may seem, grows distorted and fluid as time passes. The documentation of past events, however prone to entropy, is re-interpreted at every possible instance through the selective and seemingly random process of perception. As the documentation is never complete and total, and as that which is documented is dispersed again by future generations, the past becomes malleable.
Computation’s approach to memory, however, and therefore history, goes beyond interpretation. Computation annuls distortion by selective perception or hindsight, putting in place a neutrality based in the comprehension of this totality of the past—the potential immortality of history made manifest. Forgetting is completely ignored by computation. Philosopher Byung-Chul Han marks the difference:
Human memory is a narrative, an account; forgetting forms a necessary component. In contrast, digital memory is a matter of seamless addition and accumulation. Stored data admit counting, but they cannot be recounted. Storage and retrieval are fundamentally different from remembering, which is a narrative process. Likewise, autobiography constitutes a narrative: it is memorial writing. A timeline, on the other hand, recounts nothing. It simply enumerates and adds up events or information.
The dream of computation begins to unravel once we frame it as the fictionalisation that it is, as is the case with any automation and the promises that come with it. The idea that the scale of the totality of information, if ever it could be ascertained, would be of any use at all to us, is formally untrue. In the Borges story, Funes “had reconstructed an entire day” two or three times; “he had never once erred or faltered, but each reconstruction had itself taken an entire day.” Totality, likewise, takes totality to operate. If no cropping of information is made, then no conclusion can be drawn, and the simulation which we create is no different from reality itself. This reminds us of another short story by Borges, only the length of a paragraph:
...In that Empire, the Art of Cartography attained such Perfection that the map of a single Province occupied the entirety of a City, and the map of the Empire, the entirety of a Province. In time, those Unconscionable Maps no longer satisfied, and the Cartographers Guilds struck a Map of the Empire whose size was that of the Empire, and which coincided point for point with it. The following Generations, who were not so fond of the Study of Cartography as their Forebears had been, saw that that vast Map was Useless, and not without some Pitilessness was it, that they delivered it up to the Inclemencies of Sun and Winters. In the Deserts of the West, still today, there are Tattered Ruins of that Map, inhabited by Animals and Beggars; in all the Land there is no other Relic of the Disciplines of Geography.
1.3.4 Averages and Choices
Sometimes the same is different, but mostly it’s the same. — Queens of the Stone Age
Our desire for stability and our quest for the totality of information have led us down a path towards the construction of a replica of reality, a 1-to-1 simulation of the past and present to provide us with answers to the questions of the unknown future. Yet the scale of everything is of no use, as it denies us any comprehensible narrative and confronts us with a copy as complex as the original.
As computation’s undiscriminating view of the world has become so fine-grained that it has no focus, the question at the heart of computation today is one of selection, of deciding what’s relevant. The goal of computation is to attain a neutral and unbiased representation of the world as it currently exists, but the limitations of human mental capacity require selection and categorisation to make any sense of its infinite reach. How do we maintain these two contradicting facets in computational automation? How do you make a computer make an objective choice about what information is acceptable to exclude?
One means is to align information to its average, compacting large amounts of data into a common denominator. This statistical feat enables a significant simplification of the totality of information, permitting some measure of human interpretation. Yet it can only do so by cutting away. Typically represented by the bell-shaped Gaussian distribution, sometimes called the curve of normal distribution, averages exclude the outer, lower, and less common ranges of the bell curve, while the central, higher parts hold the most relevance and are therefore considered more valuable. We use averages like these all the time. In the mass production of doors and door frames, for instance, builders design for a range of average human body heights, forcing unusually tall people to duck under standard door frames. Since these people are fewer in number, they are, according to the curve, cut out of the averages used during the design process.
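The cut the bell curve makes can be shown concretely. In the sketch below—the heights are invented values, not survey data—a designer working “to the curve” keeps only what falls within two standard deviations of the mean; everyone beyond that range is cut out of the average:

```python
import statistics

# Illustrative body heights in cm (hypothetical values, not a real dataset).
heights = [158, 162, 165, 168, 170, 172, 174, 176, 180, 201]

mean = statistics.mean(heights)   # centre of the curve
sd = statistics.stdev(heights)    # spread around the centre

# Keep only the central range of the distribution: within two
# standard deviations of the mean. The outlier (the 201 cm person)
# falls outside it and vanishes from the design process.
central = [h for h in heights if abs(h - mean) <= 2 * sd]

print(len(heights), "people measured,", len(central), "kept for the design")
```

The two-standard-deviation cutoff is a convention, not a law: widen it and fewer people duck; narrow it and the doors fit fewer still. The choice of where to cut is exactly the selection the essay describes.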
To recognise a pattern is to make a choice. By reducing the complexity of differences, we bring similarities to the surface. But in doing so, we lose much of the totality of the original, chopping away at differences in order to build patterns. Averages round infinitesimals either up or down, according to convention. They compress files into workable sizes and scale maps to fit on a table or wall. Averages are what turn Funes’s dog at three-fourteen in the afternoon, seen in profile, and the dog at three-fifteen, seen frontally, into one and the same dog.
The stability generated by the average excludes that which has not been seen before, that which does not align with existing models. By forcing the totality of information into averages, we make selections based on normalisations of the past. The bell curve and all its derivatives are mathematical models of probability and statistics that induce correlation to enable the retrieval of similarities in an utterly vast and complex web of differences.
Because of our inevitable need for selection, the process of developing a neutral representation of reality—one that would store each and every memory or data point possible—ultimately leads us to a profound fictionalisation of the future: that the way things are now represents the most neutral, the most efficient, and the most universal model that can be objectively attained. As computation stores and classifies all the data it collects, creating a timeline of its own evolution, it can replicate its history, return to a previous moment, and compare its current state to a past state. It can repeat a past situation with an exactitude that was unimaginable only a century ago, and it develops this quality progressively, becoming more and more precise as it brings in larger amounts of data. It becomes nearly impossible to contradict the outcome of computational models. Every proposition that does not lie within the model’s reach can be cross-referenced with prior examples, and is subsequently averaged out to fit within.
Norbert Wiener, one of the great advocates of the computational quest for totality, also addressed this problem of decision-making and selection in computational automation. He recognised that computers, loaded with hard-coded statistical biases, would eventually carry on copying faulty versions of themselves, and thus repeat and reiterate the faulty choices they embody.
Any machine constructed for the purpose of making decisions, if it does not possess the power of learning, will be completely literal-minded. Woe to us if we let it decide our conduct, unless we have previously examined the laws of its action, and know fully that its conduct will be carried out on principles acceptable to us!
Wiener instead proposes a machine capable of learning, a kind of contradiction in terms. As we have seen, the construction of knowledge is a feature computation does not control. For it to do so, it would have to make its selections of totality completely on its own, without human interference. In order for a machine to make decisions, it would first have to know how to make decisions. Wiener qualifies his statement thus:
On the other hand, the machine like the djinnee, which can learn and can make decisions on the basis of its learning, will in no way be obliged to make such decisions as we should have made, or will be acceptable to us. For the man who is not aware of this, to throw the problem of his responsibility on the machine, whether it can learn or not, is to cast his responsibility to the winds, and to find it coming back seated on the whirlwind.
Even if the machine were able to make said decisions, it would still be necessary to continually recalibrate it, to remind it of what we expect from it. The paradox is that the references we have of what we want are exactly that which we have outsourced to the machine. The confidence we have put in the decision-making capabilities of computation, which we expect to bring stability to the chaotic approach of the future, instead results in the reiteration of the familiar and stable past. This familiarity with the past—our reference point for what is and can be known—is what computation mass-reproduces. Wiener’s whirlwind, in that sense, always returns. It brings us exactly what we originally gave it.
1_3_5_Power and Transposition
In the equation of computation, who are we, exactly? When I use the word we, I do so to suggest that the fictionalisation of the unknown future is an activity common to all of mankind, regardless of its heterogeneous extent or variegated impact, and regardless of the advantages it provides to those who impose their fictionalisations upon others. Indeed, despite its internal variation, the acceptance of this fiction is, in my view, inevitable for rich and poor alike. It is for this reason that I want to discredit the hypothesis that every act of fictionalisation is orchestrated for the express purpose of willingly subjecting another. Fictions are inescapable for everyone: for a person to be completely devoid of fiction—to have a clear, unobstructed view of each and every fact—is beyond our human capacity. That is why we have built computers.
However, the idea that the effects of fictionalisation occur equally and indiscriminately is clearly false. There are people who benefit from the perpetuation of certain fictions, and there are people who do not. Strategies used to copy the past into the future—fictionalisation, automation, computation—intrinsically honour those who held power in the past, and those who intend to maintain it in the future. By presenting the present as the most neutral and most objective way of living, computation further normalises existing hierarchies, and shields them from critique. The sanctimonious neutrality of computation waltzes over all other forms of normalisation and solidifies itself as being universal and timeless. James Bridle writes:
We have been conditioned to believe that computers render the world clearer and more efficient, that they reduce complexity and facilitate better solutions to the problems that beset us, and that they expand our agency to address an ever-widening domain of experience. But what if this is not true at all? A close reading of computer history reveals an ever-increasing opacity allied to a concentration of power, and the retreat of that power into ever more narrow domains of experience. By reifying the concerns of the present in unquestionable architectures, computation freezes the problems of the immediate moment into abstract, intractable dilemmas; obsessing over the inherent limitations of a small class of mathematical and material conundrums rather than the broader questions of a truly democratic and egalitarian society.
In Bridle’s use here, we denotes a group of people not in control of computational thinking’s development. Computation suspends any grievances this we has about the problems the group faces and interprets them only as issues that have already been solved, employing the repetition of computational strategies in instances where they do not necessarily apply.
In the entanglement of fiction with reality, of approximation with simulation, of average with totality, of subjectivity with neutrality, repetition becomes a mode of life. If everything can be compared, and if similarities are everywhere—if our faith in computational thinking has us, unlike Funes, forgetting differences between things—then everything is, for us, connected through some form of Boolean logic. This holistic approach to computation allows us to copy and paste the past into the future, but it also enables the transposition of any one thing into anything else through averages and rounding errors. It permits translation between vocabularies and systems, and it allows the designers of computational systems to shape both problems and solutions according only to their understanding of patterns. But, as James Bridle recounts, patterns can be discovered and recognised everywhere, even between geological events and crime:
The Great Nōbi Earthquake, which was estimated at 8.0 on the Richter scale, occurred in what is now Aichi Prefecture in 1891. A fault line fifty miles long fell eight metres, collapsing thousands of buildings in multiple cities and killing more than 7,000 people. It is still the largest known earthquake on the Japanese archipelago. In its aftermath, the pioneering seismologist Fusakichi Omori described the pattern of aftershocks: a rate of decay that became known as Omori’s law. It is worth noting at this point that Omori’s law and all that derived from it are empirical laws: that is, they fit to existing data after the event, which differ in every case. They are aftershocks – the rumbling echo of something that already occurred. Despite decades of effort by seismologists and statisticians, no similar calculus has been developed for predicting earthquakes from corresponding foreshocks. Omori’s law provides the basis for one contemporary implementation of this calculus, called the epidemic type aftershock sequence (ETAS) model, used today by seismologists to study the cascade of seismic activity following a major earthquake. In 2009, mathematicians at University of California, Los Angeles, reported that patterns of crime across a city followed the same model: the result, they wrote, of the ‘local, contagious spread of crime [that] leads to the formation of crime clusters in space and time … For example, burglars will repeatedly attack clusters of nearby targets because local vulnerabilities are well known to the offenders. A gang shooting may incite waves of retaliatory violence in the local set space (territory) of the rival gang.’ To describe these patterns, they used the geophysical term ‘self-excitation’, the process by which events are triggered and amplified by nearby stresses. 
The mathematicians even noted the way in which the urban landscape mirrored the layered topology of the earth’s crust, with the risk of crime travelling laterally along a city’s streets. It is ETAS that forms the basis of today’s predictive policing programmes […].
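Omori’s law, as the passage above notes, is an empirical law: its parameters are fitted to aftershock data after the event. The decay it describes is compact enough to sketch. In the following minimal Python illustration, the rate takes the form n(t) = K / (c + t)^p, and the values of K, c and p are placeholders for illustration, not figures fitted to any real earthquake catalogue:

```python
# Omori's law: the rate of aftershocks t days after a mainshock
# decays as n(t) = K / (c + t)**p.
# K, c and p are illustrative placeholders here; in seismology they
# are fitted to a catalogue of aftershocks *after* the mainshock --
# which is why the law describes echoes rather than predicting shocks.
def omori_rate(t_days, K=100.0, c=0.1, p=1.1):
    return K / (c + t_days) ** p

for day in (1, 10, 100):
    print(f"day {day:>3}: ~{omori_rate(day):.1f} aftershocks/day")
```

The same family of decaying, self-exciting rates underlies the ETAS model, which is what made the formal leap from seismic cascades to crime clusters possible: the equation carries over even when the phenomenon does not.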
Computation, here, is what warrants the transposition of earthquakes in Japan onto crime rates in Los Angeles. The abstraction of reality into the simulation strips data of its context. Not only does history repeat itself; it jumps timelines, landing in different fields and narrative lines. Examples of this transposition are legion, ranging from personalised “if you like..., then you might like...” algorithms to the use of models of fluid dynamics to simulate the flow of crowds at large events. Once these theories become natural laws, if only in practice, they are dispersed, copied, and transposed without limitations, making their origins irrelevant.
Computation’s transpositional inclination renders it seemingly universal: it makes no distinction between facts and fictions, or between apples and oranges. There are only similarities, regardless of their origins, to be compared. Our futile quest for the totality of information, once intended as a means to complicate universality and help us navigate whatever true universal laws remain, has fed back into confirmation biases, and reinforced the indiscriminate interchangeability of ideas, myths, and fictions.
Every normalisation, in the end, can be automated, every automation can then be computed, and then all can be undone. Through this process, it becomes extremely difficult to assess whether we, as those who undergo computation, are influencing the world of computational thinking or are merely being influenced by it. Are the normalisations that spring from computation our doing, or are we the products of the facsimiles brought about by the normalisations of computation?