1_A SHORT HISTORY OF NORMALISATION OR: WHY WE NEED TIME


1_1_THE FUTURES


1_1_1_The Inescapable Future

 

The great thing about time is that it goes on.
— Sir Arthur Eddington

 

When Julius Caesar led his armies across the Rubicon in 49 B.C.E., declaring war on the Roman Senate, he is said to have uttered his now-famous words alea iacta est, “the die is cast.” He knew the repercussions of his actions and, no matter his plan for dealing with those repercussions, knew his decisions to be irreversible. There would be no way back.

The future is a condition of time. The laws of physics dictate that the progression of time is inevitable. For us, as for Caesar, there is no way back. Everything taking place on the timeline beyond the present lies beyond our shared Rubicon, the event-horizon we understand as the future. Our vision cannot reach it; we can only wait until the future transforms into the present. Imagine being bound to a chair with your head strapped to the headrest, facing the past with your back towards the future. You can look slightly sideways at the present, and forwards towards the past, but never at that which is to come. Epistemologist C.D. Broad draws this contrived spatial analogy in his 1922 contribution on time and the future in the Encyclopaedia of Religion and Ethics:

 

To begin with, the distinction between present and not present at any rate may be usefully compared with that between here and elsewhere in space. Here means near my body; elsewhere means distant from my body. If we want an analogy to the distinction between past and future, we can find one in the distinction between things before and things behind our body. It is true, however, that this analogy is incomplete, and that for an important reason, though one extraneous to the nature of time. The reason is that our practical and cognitive relations towards the future are different from those towards the past. We know a part of the past at any rate directly by memory, but we know the future only indirectly by probable inference. There is no analogy to this in space; our knowledge of what is behind our body is of the same kind and of the same degree of certainty as our knowledge of what is in front of it. But we may imagine that a distinction like that between past and future would have arisen for space also, if we had been able to see straight in front of us but had never been able to turn our heads or our bodies round.

 

Dealing with the future is, most of the time, a process of undergoing it. The inevitability of the progress of time pushes us towards the unknown. This surrender burdens us with a feeling of impotence and cripples any dreams of certain agency. We are forced to accept without understanding. Broad illustrates the compulsory premise of the future by comparing it to the presumed fixed gaze of our heads and bodies. Placing the progression of time in a spatial context exposes the limitation of our corporeal capacities and the ensuing enclosure of our mental awareness within the boundaries of past and present: time is a straitjacket of our spirit.

 

[…] we can no more slip back to the past than leap forward to the future. Save in imaginative reconstructions, yesterday is forever barred to us; we have only attenuated memories and fragmentary chronicles of prior experience and can only dream of escaping the confines of the present. 

 

The sole fixed point in our future is the inevitable demise of our body and mind, and even that certainty can only be extrapolated from the deaths we witness in our surroundings and register in our consciousness. Our view into the past teaches us that everyone dies, as a fact of life, but it does not indisputably include our own death. You, for instance, might be the exception who lives forever. Because you have never experienced the fading of your life’s energy into nothingness, you rely on the recurring pattern that you have witnessed and that you remember through your interaction with society. If you lived in total isolation, the inevitability of death would never be clear. However, even in a setting devoid of other humans, the confrontation with growth, decay and entropy in the natural world would teach you that time advances towards degeneration, even of your body.

The progression of time as a natural, irreversible, and consequential process is best described by entropy. The second law of thermodynamics ties entropy to an asymmetry in the progression of time: the transformation of an isolated system from a state of organisation to a state of disorganisation increases with time and is not reversible. A log of wood in a bonfire burns to ashes, releasing great amounts of energy in the form of heat and light. The highly organised molecular structure of the wood is reduced to cinders and soot, dissipated into its immediate surroundings as warmth, and carried away by the wind as smoke and vapour. It is impossible to return the log to its low-entropy state by reassembling its dispersed elements, which now exist in a state of high entropy. The process outlines a direction, one that is described by the astronomer and mathematician Arthur Eddington as the arrow of time:

 

Let us draw an arrow arbitrarily. If as we follow the arrow we find more and more of the random element in the state of the world, then the arrow is pointed towards the future; if the random element decreases, the arrow points towards the past. That is the only distinction known to physics. This follows at once if our fundamental contention is admitted that the introduction of randomness is the only thing which cannot be undone. 

 

In his 1928 The Nature of the Physical World, Eddington applies the physical principles of thermodynamics, which concern the transformation of systems on a microscopic level, to the macroscopic world. While the empirical proof for the irreversibility of entropy may be overwhelming for molecular processes, such as the dispersion of gases in a volume or the transformation of energy in mechanical movements, it remains difficult to imagine the same principle applied in fields of phenomenal perception. That is, when applied to human life on earth, randomness does not always clearly increase as we move into the future.
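
Eddington’s “random element” has a compact statistical expression—a standard textbook statement of the second law, added here for reference rather than drawn from his text:

S = k_B \ln W, \qquad \Delta S \geq 0 \quad \text{(isolated system)}

where W counts the microscopic configurations compatible with a system’s macroscopic state: as the random element is introduced, W grows, S grows, and the arrow points towards the future.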

Building a brick wall—piling the bricks on top of each other in an organised fashion and binding them together with cement—would, for instance, contribute to the overall order of things, and thus reduce entropy in the future. However, if we take the complete process of wall-building into account, the true course of entropy becomes apparent. If we combine the sum total of human interference in the geological layers of clay and chalk needed to produce bricks and cement, the successive purification and dehydration of raw materials, the burning of combustibles (as well as their extraction and refining) to fuel the kilns, the levelling of the landscape to make way for the wall, and so on, we see an overall increase in entropy, not a decrease. The process of building a wall is irreversible, which, in turn, makes its impact irreversible. We are aware of this unidirectionality, what Eddington calls “time’s arrow,” whether or not we pay it any attention.

Our conscious perception of time’s arrow originates from the irreversible succession of causes and their successive effects, such as the ones described by the laws of entropy. Dropping an egg on the floor results in a mash of yolk, white and shell. The cause is the egg falling; the effect is a mess on the floor. The process is irrevocable, and the direction of time presents itself in what Hume, in his Treatise of Human Nature, calls the “phenomenal succession of time.” That is, we know we cannot put the egg back together again by lifting it from the floor, because we feel time pass irreversibly.

The concatenation of cause and effect allows the mind to build up inductive reasoning. In fact, as Hume observes, “all our reasonings in the conduct of life” and “all our belief in history” as well as “all philosophy” are based upon “the inference from cause to effect.” If we witness multiple instances of eggs falling from our hands, and see each of them shatter, we can, through the conjunction of cause and effect, conceptualise any egg’s demise long before it touches the ground. With inductive reasoning, based on habit and memory alone, we anticipate the future. It is this anticipation of the future, drawn only from recorded data, that I refer to as normalisation.

 

 

1_1_2_The Random Future

 

The generation of random numbers is too important to be left to chance.
— R.R. Coveyou

 

The progression of time would not be much of a bother if the future unfolded for us as we anticipate. We are instead constantly confronted with randomness, disrupting our forecast for the future. As Eddington writes, the direction of time and its irreversibility are inevitably altered by the introduction of any random, unforeseen element. Thus we are not only forced to undergo the future; we are also, despite our mastery of normalisation, confronted with the uncertainty of what will come to pass. Plainly, sometimes our expectations will come true based on inference and past experience, and sometimes they will not. Things occur outside our field of expectations. As the adage goes, shit happens.

Randomness, as we understand it, is a cognitive creation, a concept we apply in the present to classify events without phenomenal precedent or preordained certainty. There are events during which actions lead to entirely predictable outcomes, and there are events during which actions might lead to any of various, less certain outcomes. When a certain outcome is expected but does not materialise, we commonly associate the deviation with randomness; it is a random “turn of events,” we say.

Declaring his die cast, in English translation, Caesar knew that he had to confront the unforeseeable future that followed. But the etymology of the Latin alea iacta est is notable. The word alea names the die and, by extension, chance, stakes, or risk. The French aléatoire, derived from alea, can be similarly defined as “random,” or subject to coincidence with uncertain results. But this etymological history holds a telling dissonance in definition: a die-roll is open only to restricted randomness, to the numbers one through six. Rolls of seven and up are discarded as possibilities. Dice give us a sense of security: we may be hit with a four or a five, but never a nine. However, if, by some inexplicable occurrence of unrestricted randomness, we found a 2,649 or the letter f on a six-sided die, we would be flabbergasted by what we once thought a sheer impossibility.

Aléatoire, therefore, can be regarded as the preconditioned, collectively promised limitation of possible futures. It is a game of chance and not coincidence, randomness refined into numbers, into odds and wagers, infinite possibilities into likely probabilities. As with any game, its rules are defined at the start. Stepping outside of these rules invalidates the game. The game—and thus, our understanding of how the future could play out—becomes senseless without them.

Think about the egg falling, its inevitable demise, and the specificity of any individual splat on the floor. The egg falls at a certain angle, giving it spin as it accelerates with the pull of gravity. It experiences drag due both to its irregular shape and to the friction of the shell against air particles on its way down, which turn it as it falls. As the egg nears the floor, the inertia of its liquid interior corrects itself according to the acceleration of the whole, giving the whole object an extra boost or turn. Then, everything comes crashing down: the macro-structure of the egg implodes under the enormous pressure of inertia, ripping the membrane that surrounds the content. The binding agent of the white and yellow ovum completely loses all cohesion as the force of gravity flattens the round egg, compressing it and releasing pressure sideways. Fluid then escapes through the cracks onto the pavement, dispersing itself. Pieces of shell fly around centrifugally, away from the site of impact, some of them slowed by the stickiness of the mixture, others launched further by a lack of surface tension and the nominal elasticity of the shell’s chalky lattice. The result is a seemingly arbitrary pattern of viscous sludge mixed with eggshell, a semi-transparent yellow-orange stain garnishing the ground.

It would be ridiculous to imagine how to predetermine this splat. But we could, if we were so inclined, untangle the cascading causal steps from start to end, chronologically listing each and every aspect of the process of cause leading to effect in order to predict the exact outcome. This would be akin to reducing all chance to a mere six sides of a die, simplifying what the future holds by rationalising it. In the case of the falling egg, the exact knowledge of all the many mechanisms affecting its fall could, in theory, determine the exact shape of the splat on the floor. We ought then, by mathematician Pierre-Simon Laplace’s reasoning, to regard the present state of the universe as the effect of its anterior state, and therefore as the cause of the one that follows.

 

Given for one instant an intelligence which could comprehend all the forces by which nature is animated and the respective situation of the beings who compose it—an intelligence sufficiently vast to submit these data to analysis—it would embrace in the same formula the movements of the greatest bodies of the universe and those of the lightest atom; for it, nothing would be uncertain and the future, as the past, would be present to its eyes.

 

Theoretically, per Laplace, an exact knowledge of the totality of past and present would enable us to predict the future: if we knew every current in every layer of air; every position of every molecule at any given time; every movement of every object; every action of every entity living or having lived; every force interfering with any body of any size, at every scale, simultaneously, we would know what the future holds. Seemingly infinite and incomprehensible amounts of observed data, duly processed, could transform randomness into a structured flowchart. Could we extract this data from the totality of reality, through constant observation and all-encompassing measurement, we would need to do it in such a way as to make no rounding corrections, errors, or guesses. As mathematician Henri Poincaré wrote in 1908,

 

If we knew exactly the laws of nature and the situation of the universe at the initial moment, we could predict exactly the situation of that same universe at a succeeding moment. But even if it were the case that the natural laws had no longer any secret for us, we could still only know the initial situation approximately. If that enabled us to predict the succeeding situation with the same approximation, that is all we require, we should say that the phenomenon had been predicted, that it is governed by laws. But it is not always so; it may happen that small differences in the initial conditions produce very great ones in the final phenomena. A small error in the former will produce an enormous error in the latter. Prediction becomes impossible, and we have the fortuitous phenomenon.

 

A little more than a half-century later, mathematician and meteorologist Edward Lorenz refined Poincaré’s theories and applied them in service of weather forecasting. In 1960, he attempted to simulate weather patterns on a large scale using some of the first digital computers. Running large datasets through his setup, he was surprised to see that minimal variations in his initial input led to dramatically different outcomes when the computer made predictions in the long term. At some point in the simulation, Lorenz realised, the computer worked with six-digit precision (as in .506127) when processing the data on air pressure, currents, rainfall, cloud formation, etc. But, to limit paper use, the printer output the numbers using only three digits (.506). When a run was restarted from these truncated printouts, the first few hours or days of simulation were still quite accurate, but, extrapolating weeks and months into the future, the predictions started to wildly diverge. This slight variation led to an extraordinarily different prediction of the future. In the process, Lorenz had inadvertently recreated and simulated a complex system, later referring to this complexity as “deterministic chaos,” or simply, “chaos.”
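
Lorenz’s accident is easy to re-enact. The sketch below is my illustration, not his original twelve-variable model: it integrates the familiar three-variable Lorenz system twice, once from the six-digit value .506127 and once from its three-digit truncation, and prints how far apart the two runs drift.

# Sensitive dependence on initial conditions, after Lorenz.
# A minimal sketch: forward-Euler integration of the Lorenz system.

def lorenz_step(x, y, z, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    # One Euler step of: dx/dt = sigma(y - x), dy/dt = x(rho - z) - y, dz/dt = xy - beta*z
    return (x + sigma * (y - x) * dt,
            y + (x * (rho - z) - y) * dt,
            z + (x * y - beta * z) * dt)

a = (0.506127, 1.0, 1.0)  # the six-digit initial condition
b = (0.506, 1.0, 1.0)     # the same value as the printer rounded it

for step in range(1, 5001):
    a = lorenz_step(*a)
    b = lorenz_step(*b)
    if step % 1000 == 0:
        separation = sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5
        print(f"t = {step * 0.01:4.0f}   separation = {separation:.6f}")

# The separation starts around one ten-thousandth and grows until the two
# "forecasts" are no more alike than two randomly chosen states of the system.

A difference in the fourth decimal place eventually swamps the signal entirely: Poincaré’s fortuitous phenomenon, in a dozen lines.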

When inductive reasoning requires more information than can be extracted from the past and present, it no longer yields a viable prediction of the future. Complexity begins where causality breaks down. Inference from cause to effect might be practical on a small scale of human action and interaction, but it has no real application to, for instance, global systems, or to any other incomprehensibly large or complicated calculation of the future we may be faced with.

Complexity, in this way, outstrips our capacities to think and control. As Poincaré posits, we will never be able to decipher the vast totality of information held by past and present. This leaves us two options: one is to reduce complexity to a human scale—say, to think of the future as limited to chance, to the outcomes circumscribed by a six-sided die, rather than true randomness; the other is to expand our capacities. 

We will come to expansion later; I want us to focus on reduction for now. My definition of normalisation rests on our tendency towards reduction as a way to structure chaos and subsequently control it. Simplifying the nonlinear flow of complexity into a linear, causal narrative allows us to construct sense and reason, which in turn offers us forms of short-term agency against chaos. By breaking down the whole and studying a selection of its parts, we can make some form of causal prediction. But, as we have learned, as a result of chaos, the emergent whole is more than just the sum of its parts. The predictions we build our society on are therefore short-lived, limited in time and space. They are right too often for us to ignore and wrong too often for us to rely on them. Nevertheless, they determine how we live.

A random event may lead us to fictionalise a pattern, to give it a sensible cause-and-effect narrative, which in due time determines our way of dealing with, or normalising, the future. By focussing on the recurring events, on phenomenal repetition, grouping variations or anomalies under one and the same denominator, or even blindly transposing reasonable causal logic from one field of experience to another, we try to make sense of what happens to us. Psychologist and mathematician Maya Bar-Hillel explains:

 

Where people see patterns, they seek, and often see, meaning. Regarding something as random is attributing it to (mere, or blind) chance. Perceiving events as random or non-random has significance for the conduct of human affairs, since matters of consequence may depend on it. The market price of stocks is essentially a random walk, but people see trends in it, with the help of which they attempt to predict future prices. People may be promoted (or demoted) for strings of job-related successes (or failures) that are in effect no more than chance results. Coincidences are given significant, and often even mystical, interpretations, because their occurrence seems to transcend statistical explanation.

 

It is of no real importance from where randomness emerges or, spiritually speaking, why what happens happens, though everything from religious texts to philosophical tracts to myriad forms of fictionalisation has been used to answer the question. The creation of a narrative, drawn from the pattern of randomness’s repetition, is in every instance an act of normalisation—a human process of defining and delineating the random outcomes of the future into understandable categories. What is important here is what the act of normalisation brings, the possibilities and agencies it offers.

Normalisation, for our purposes, can be understood as a trick for grasping what lies beyond the event-horizon we know as the future. Instead of predicting the entirety of the future, normalisation eliminates possibilities, rendering our game of chance as a roll with a result somewhere between one and six. Through trial and error, we negotiate contingencies by sorting them, or normalising them, into manageable patterns. Actions that yield favourable results become normalised, while negative outcomes are, when possible, cut from the game of chance. Normalisations become standards, and, over generations, standards become habits, the means by which we anticipate the future. As time passes, and context disappears, habits become culture, a canon of past trial and error. Almost every aspect of human life is built up from standards, habits, rituals: sequences of performing an action. Our collective visions of the future are nothing more than a series of normalisations laid down by previous generations.

One such trick, or fictionalisation of patterns, is the discipline of statistics. A study of numbers that transforms the natural world into calculable data, statistics emerged as a means to truly and transparently understand how the world was put together, and therefore what its future looks like. For Byung-Chul Han, the development of statistics came to fruition at the dawn of the first Enlightenment and set in motion an unprecedented obsession with numerical normalisation: 

 

During the first Enlightenment, statistics was thought to possess the capacity to liberate human knowledge from the clutches of mythology. Accordingly, euphoric celebration occurred. In light of such developments, Voltaire even voiced the wish for a new historiography, freed from past superstition. Statistics, as he put it, offers ‘an object of curiosity for anyone who would like to read history as a citizen and as a philosopher’. Revised by statistics, history would become truly philosophical. As Rüdiger Campe writes, ‘The numbers of statistics provide the basis from which [Voltaire] can articulate his methodological mistrust of all histories that exist only as narratives. The stories of ancient history accordingly offer an example that borders on mythology for [him].’ Statistics and Enlightenment are one and the same for Voltaire. Statistics means setting objective knowledge founded on, and driven by, numbers in opposition to mythological narration.

 

Today, this understanding of numbers-based knowledge of the past—and thus, prediction of the future—is commonplace. Every random event is parameterised into measurable data. Yet, while our age of statistics has yielded many of the greatest scientific breakthroughs humanity has ever seen, allowing us insight into an unfathomably complex system of cause and effect, it has not banished randomness altogether. On the contrary, our faith in the universal power of statistical analyses has left us ever more vulnerable to events of true randomness—that is, random outcomes beyond the six sides of the die. When every probability can be expressed in numerical odds, we are quick to accept that the world follows the same rules on which those odds were based. 

Whether we deal with randomness by creating myths or stories or by simplifying it into numbers and statistics, we rationalise random events and why they occur to us through the extrapolation of past onto the future. Both mythological narrative and objective quantification serve to normalise the effects of a random, complex future. 

 

 

1_1_3_The Infinite Future

 

Meten is weten, gissen is missen. 

(To measure is to know; to guess is to miss.) — Flemish proverb

 

Infinity also outstrips our abilities to think and control. Unable to perceive or grasp infinity, we force limitations onto the future and reduce its complexity not only to conjure order from chaos but to put tangible bounds on our calculations, rationalisations, and fictions. To deal with the future, we must be able to measure and, ultimately, to calibrate those measurements with one another for them to have any meaning or use.

The earth revolves around the sun in one year. Average life expectancy in Belgium at this moment is around eighty years. People born now are therefore expected to witness eighty revolutions around the sun. Statistically speaking, they will die in the year 2100, which is the moment when the earth will have made 2100 revolutions around the sun since Jesus Christ was born. We know this because we have, through the years, measured it. We have delineated how long one year is and when we should begin counting the next. This linear finitude, with a clear start and end, gives us the ability to make assumptions about the possibilities of the near future—indeed, the ability to understand when the future is near and when far. 

Were the future fully undetermined, everything that could happen would happen somewhere between near and far. Given a limitless expanse of time, for instance, a monkey punching random keys on a typewriter would eventually write out the entire works of William Shakespeare. Similarly, an infinite number of monkeys going wild on an infinite number of typewriters would produce the same work on the first try. The probability of any monkey doing so is statistically insignificant in human time, but it is not zero. Given enough time, typing monkeys would be able to produce all the writing ever produced by humankind. Even if we take just half, or one twentieth or one millionth of the troop of apes to write out our texts, it would take just the same amount of time to do it. A percentage of infinity that is not zero is still infinity; when dealing with a borderless concept of time and space, only everything or nothing is possible. Brett Watson explains: 

 

For the sake of demonstration, let us assume that our target document is the phrase “WATSON, COME QUICKLY!”—a mere 21 keystrokes—and that a trial only takes one second. Using our new-found knowledge, how many monkeys will we need to have a 50% chance of getting this within twenty-four hours? Well, there are 40,564,819,207,303,340,847,894,502,572,032 (4e31) possible documents that our monkeys could type of length 21 (computed by 32^21), which we multiply by 0.69 to compute the number of trials we need for a 50% chance of success, giving us a mere 28,117,390,063,466,213,804,330,352,586,381 (3e31) trials. At one trial per second, one monkey can perform 86,400 (9e4) trials in a day, so we will need 325,432,755,364,192,289,401,971,673 (3e26) monkeys. Hiring this many monkeys will pose logistical problems, as the surface area of the earth is only about 510,000,000,000,000 (5e14) square metres, meaning we will have to fit approximately 638,103,441,890 (6e11) monkeys per square metre (never mind the fact that most of the Earth’s surface is ocean). And it doesn’t really matter how much you skimp on the banana budget: the total cost is sure to make the US national debt ($6e12 at time of writing) look like peanuts. One gram of banana mash per monkey would translate to a bowl of banana mash several times bigger than the moon.

 

This intentionally absurd calculation clearly exposes the limits of our understanding. When we do not define parameters within infinity, the concept remains a hypothetical brainchild, and when we do, we compare the size of a bowl of banana mash with the moon. We can conceptualise 325,432,755,364,192,289,401,971,673 monkeys mathematically; we can calculate with that number and we can fantasise about how silly a third of an octillion chimpanzees on typewriters would look. We can compare it to the number of football fields we would need to give them the space to write Shakespearean tragedies. We can even bring in the moon to visualise the vastness of it all. But do we really fathom how big the moon is? Can we even fully perceive the size of just one football field, let alone thousands or millions? In the spheres of the phenomenal, infinity becomes something more ambiguous. Comparing one scale to another can help us master this sense of infinity, but such comparisons never truly amount to conscious visualisation.
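
Watson’s arithmetic, for what it is worth, checks out; a quick sketch under his own assumptions (a 32-key typewriter, one trial per second, his 0.69 standing in for the natural logarithm of two):

import math

keys, length = 32, 21                  # a 32-key typewriter, 21 keystrokes
documents = keys ** length             # ~4e31 possible 21-keystroke documents
trials = documents * math.log(2)       # trials needed for a 50% chance of success
per_monkey_per_day = 24 * 60 * 60      # one trial per second: 86,400 a day
monkeys = trials / per_monkey_per_day  # monkeys needed to manage it in a day

earth_surface_m2 = 5.1e14              # Earth's surface area, in square metres
print(f"{documents:.3e} documents, {trials:.3e} trials")
print(f"{monkeys:.3e} monkeys, {monkeys / earth_surface_m2:.3e} per square metre")

Three multiplications reproduce the 3e26 monkeys and the 6e11 monkeys per square metre; the numbers compute effortlessly even as they escape all visualisation.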

A similar scalability problem is evident in the projection of the history of the planet onto a twenty-four-hour clock. The first four hours of the planet are a bombardment of meteorites, cooling down and settling. At 4am, the first life comes into existence. It takes another ten hours for the first single-celled algae to develop. At half past 8pm, animals and plants start to develop. Humanity only arrives in the last two minutes, and our contemporary society would only get about two or three seconds on the clock. 
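
The projection itself is a single proportion; a sketch, assuming an age of the Earth of roughly 4.5 billion years:

AGE_OF_EARTH = 4.5e9                   # assumed age of the Earth, in years
DAY = 24 * 60 * 60                     # the 24-hour clock, in seconds

def on_the_clock(years_ago):
    # Map a moment in the past onto the clock (0 s = midnight, the planet's formation).
    return DAY * (1 - years_ago / AGE_OF_EARTH)

print(f"one clock-second = {AGE_OF_EARTH / DAY:,.0f} years")           # ~52,000 years
print(f"first life (~3.8 billion years ago): hour {on_the_clock(3.8e9) / 3600:.1f}")
print(f"the last two minutes span {120 * AGE_OF_EARTH / DAY / 1e6:.1f} million years")

At this scale, one second of the clock swallows fifty millennia; the whole of written history fits into the final tenth of a second.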

When we measure time, we are constantly comparing it to the scale of experienced time, that of the personally lived lifespan. What feels like infinity to us might be just a speck on the timeline of the universe, and what happens in just a day is a lifetime for a mayfly. This observational relativity needs markers to define at what scale time passes. Without the markers, time, in our experience, would either cease to exist or become endless. Futurologist Alvin Toffler connects this passage of time to change, the element that allows us to set the parameters of time: 

 

There is […] no absolute way to measure change. In the awesome complexity of the universe, even within any given society, a virtually infinite number of streams of change occur simultaneously. […] Without time, change has no meaning. And without change, time would stop. Time can be conceived as the intervals during which events occur.

 

Without these intervals, we would have no understanding of the concept of time, and all time would appear as abstract as infinity. But philosopher Henri Bergson argues that the duration of true change cannot be infinitely divided into static snapshots, or in other words, into intervals during which events occur. Instead, he believes we separate time into events in an attempt to enable agency. In acting upon the world, we mince the flow of change into an endless concatenation of instants where we sensorially confirm the state of things. Subsequently, we can measure, emotionally or rationally, the difference between our present and prior states. This allows us to perceive and react to change. For Bergson, true change is indivisible; any separation of the whole of continuity is a construct we build to comprehend and represent our experience of time. In his lectures on the perception of change, he claims:

 

Our consciousness tells us that when we speak of our present we are thinking of a certain interval of duration. What duration? It is impossible to fix it exactly, as it is something rather elusive. My present at this moment, is the sentence I am pronouncing. But it is so because I want to limit the field of my attention to my sentence. This attention is something that can be made longer or shorter, like the interval between the two points of a compass. For the moment, the points are just far enough apart to reach from the beginning to the end of my sentence; but if the fancy took me to spread them further my present would embrace, in addition to my last sentence, the one that preceded it: all I should have had to do is to adopt another punctuation. Let us go further: an attention which could be extended indefinitely would embrace, along with the preceding sentence, all the anterior phrases of the lecture and the events which preceded the lecture, and as large a portion of what we call our past as desired. The distinction we make between our present and past is therefore, if not arbitrary, at least relative to the extent of the field which our attention to life can embrace. 

 

Duration, for Bergson, is defined by consciousness, but this definition is, if not arbitrary, at least relative. We limit the extent of change to a chain of immobile states, like stills from a film. We cannot, however, recreate the movie from these snapshots. The blanks between them are only filled by the mind, by our imagination. The fact that we do divide time and change makes perceiving the future problematic and instils in us a fascination with the preservation of the past.

 

If we shut our eyes to the indivisibility of change, to the fact that our most distant past adheres to our present and constitutes with it a single and identical uninterrupted change, it seems that the past is normally what is abolished and that there is something extraordinary about the preservation of the past: we think ourselves obliged to conjure up an apparatus whose function would be to record the parts of the past capable of reappearing in our consciousness.

 

We invoke some memories—past experiences—and exclude others as a tool to canalise our attention towards the future. However, our relation to the past will always remain artificial, as our memories are both simplified and selective. We cannot recompose time and change indivisibly, only discontinuously, in fragments. Essence, meaning, and substance are therefore pushed to the background.

 

All the […] difficulties which caused substance to recede little by little to the regions of the unknowable […] came from the fact that we shut our eyes to the indivisibility of change. If change, which is evidently constitutive of all our experience, is the fleeting thing most philosophers have spoken of, if we see in it only a multiplicity of states replacing other states, we are obliged to re-establish the continuity between these states by an artificial bond; but this immobile substratum of mobility, being incapable of possessing any of the attributes we know—since all are changes—recedes as we try to approach it: it is as elusive as the phantom of change it was called upon to fix.

 

Plainly, we compartmentalise time to be able to define it, but through definition lose our ability to recreate it. We trade perception for agency, a bargain at the heart of normalisation processes. But there are limitations to this strategy, as the coming chapters will show. When we normalise the future, we also trade it for the past, however incompletely we perceived it the first time.

1_2_ON AUTOMATION


1_2_1_Tools, Engines, and Devices

 

In normalising the future with the codified data of the known past, we begin to establish patterns, building systems predicated upon our repetitions. This abstract process is the starting point in the development of technical automation. Arguably, however, the most basic level of automation predates any conscious use of the past to rearrange the future.

Lecturing at Princeton University in 1966, the architect Buckminster Fuller provoked his audience with a question: “Do you know what you are doing with the supper you ate last night?” The human body, Fuller claimed, is 99% automated, as we are not cognitively aware of most processes that take place inside our corporeal systems. To actively manage our bodily functions while also engaging in activities outside of the body would be impossible. Imagine making your bed, but then also having to devote the same effort to breathing simultaneously. Automation, Fuller concluded, “is not really something new. It is a new description of a very old process.”

We should, however, make a distinction between the passive processes of life—biological, chemical, and physical—which quietly and unconsciously shaped our species, and the processes of technical automation. Without this delineation, we could easily argue that any process that does not require human agency constitutes automation. What would be the difference between the vascular system of the human body and the xylem and phloem systems of a plant when comparing just the amount of active power needed to keep them running? Even non-biological processes, such as the motion of celestial bodies or the hydrological cycle of a river, could then be regarded as automated. Not all self-sustaining processes are necessarily automated. On the other hand, for a process to be considered automated, it does need to be self-sustaining, operating outside of active human consideration. 

Nowadays, the word “automation” is closely associated with technology. Every automated process is assumed to be of a technical nature. However, automation should not be confused with, for instance, the use of a tool. The wheel, the candle, the hammer and the furnace are all hallmarks of technical advancement, but all still require active human intervention to operate. The word “operate” is key here: tools are inert when not in use. Importantly, the human user is the engine of the tool. The properties or dispositions of a hammer, for instance, are diverse and open-ended. The operator could use it as intended—to drive nails into a wooden beam—but could also use it to break said beam into pieces. She could throw it into a field or use it to bash someone’s head in. Normalised only to a certain extent, an inert hammer does not exclude other applications.

The separation between tool and automated process grows blurrier with the windmill or printing press. Wind, not manpower, turns the windmill, grinding grain into flour. The miller may know exactly which lever to pull to lower the gears of the blades into the gears of the stones, and which lever to pull to unlock the blades and free them to the power of wind, but in operating it, he merely guides the wind’s power. The operator here becomes the manager, executing the tasks required to maintain the process without driving that process.

The printing press is a more hybrid form of tool-automation correspondence. Johannes Gutenberg normalised the letterset, which made it possible to turn a highly manual, cognitively demanding endeavour into a relatively repetitive task. His contribution to automation was not the invention of the press (which we now see as a device or machine but which is, in fact, a tool) but the standardisation of font, the movable type and print matrix that marked the progression of automated printing technology, allowing the labour of copying text to be reassigned elsewhere.

 

Since the Industrial Revolution, technology and automation have grown increasingly intertwined. The weaving machines of the early nineteenth century combined elements of simple tools, but they also reallocated some active human operation to the device itself. They became, to a certain extent, self-sufficient, requiring human intervention only to set other pieces in motion. 

The assembly line later streamlined this concept, transforming humans into fleshy robots, and demanding of them only repetition. Mediation by humans was continually broken down into chains of acts, in which one simple, repetitive action led to another, disintegrating complicated operations into normalised, linear tasks. Eventually, it became possible to exclude humans altogether, or at least to reassign them to a task of management or maintenance, and to have devices replace them. Machines, designed to serve only one purpose, placed in a self-sufficient production line and executing their tasks in a controlled and repetitive manner, are now emblematic of automation, overshadowing any other application of the concept. 

But was the production process not already automated, even when it did not involve machines or mechanical technology? The goal of automation is to establish a closed, self-sufficient system, requiring little to no human agency, by simplifying a complex process into a chain of repetitive or regular actions by means of normalisation. 

Automation does not therefore demand technology, at least as we typically understand the concept. Neither does every technological contraption contribute to the establishment of automated processes. Automation simply describes a strictly normalised process: an engine controlling the mechanics of the device, which redistributes human agency to the device’s development, rather than the execution of its process. For weaving machines, the industrialised hierarchy of workers is as much the driver of automation as the steam engine. Automation can be driven, in this way, by a social contract or a political protocol. It could be driven by the rules of a game, the conventions for using a public space or the casting of a crew of sailors. It could require a constitution, a script, or an agreement. Automation is the integration of normalisations into a system, a mechanism, a practice. This process of integration can be considered a process of abstraction. The philosopher Franco “Bifo” Berardi writes:

 

During the last century, abstraction has been the main tendency of the general history of the world in the field of art, language and economics. Abstraction can be defined as the mental extraction of a concept from a series of real experiences, but it can be also defined as the separation of conceptual dynamics from bodily processes. Since the time Marx spoke of “abstract labour” to refer to the working activity as separate from the useful production of concrete things, we know that abstraction is a powerful engine. Thanks to abstraction, capitalism has detached the process of valorisation from the material process of production. As productive labour turns into a process of info-production, abstraction becomes the main source of accumulation, and the condition of automation. Automation is the insertion of abstraction into the machinery of social life, and consequently it is the replacement of an action (physical and cognitive) with a technical engine.

 

I would take it one step further and propose a broader interpretation: abstraction—or what I call normalisation—is the mental extraction of a concept from a series of real experiences. This, as we have noted, is not something new. Neither is the process of inserting abstraction into the machinery of social life new. It is embedded in an old process, as Fuller claims, automating the liaison between people and their tools, between people and devices, between people and people. Automation is only a matter of situating the engine and maintaining the process.

Nevertheless, the notion of abstraction as the primary movement in contemporary histories of art, language, and economics is the one that opens up a pathway towards further elaboration on a possible future. If abstraction, or normalisation, is the keystone of automation, then it is possible to imagine automation in all abstracted fields of thought and creation, of agency and operation. 

 

1_2_2_On the Folly of Self-Organisation

 

In Fuller’s example, our body keeps most corporeal processes out of our consciousness in order to free our attention for more acute, differentiated situations. While I follow his logic of automation as a means to free attention, I disagree with his idea that automation is a natural development. As I have pointed out before, the natural systems that govern the universe occur whether or not we exist. It is very difficult, if not impossible, to imagine that the human body developed in any other manner than through evolution, which would make its presumed automatic capabilities a spontaneous advancement rather than a rational or an intellectual one. Thus, for our purposes here, I want to define true automation as exactly that: a rational, intellectual, cognitive and, most importantly, human development. 

Over the course of millennia, the myth of a system that can sustain itself without intervention, in service of human survival or advantage, by liberating us from the overwhelming task of dealing with the future, has become a utopian goal for civilisation. Like the alchemist turning lead into gold, the true self-sustaining system would transform any intricate web of causes and effects, of uncertainties and irregularities, and of countless feedback loops into a simple and manageable structure, which could operate with only a fraction of our cerebral power or attention. 

When Benedict of Nursia founded the monastery of Monte Cassino in the year 529, he imposed a strict code of conduct that would govern every aspect of monastic life within the abbey walls. Composed of seventy-three chapters, the Rule of Saint Benedict precisely defined the parameters of social interaction, labour times, meals, and prayers in an attempt to establish lasting order and autonomy in the tightly bound abbey community. For 1,500 years, the Rule has been applied, generation after generation, with hardly any variation, throughout the Order of Saint Benedict. 

Saint Benedict prefaces his rule: “I now address a word of exhortation to thee, whosoever thou art, who, renouncing thine own will and taking up the bright and all-conquering weapons of obedience, dost enter upon the service of thy true king, Christ the Lord.” The renouncement of agency plays the central part in almost all religious activity, but Benedict formalised this idea as a way to organise a community. He justifies the existence of the rule in service of Christ, as if the stable continuation of the community would be beyond human control, safeguarded and conserved by the divine, for as long as its members show zealous obedience. He then continues by orchestrating, in great detail, the ins and outs of monastic comportment: the hours and content of prayer, the vows of silence, the composition of the meals, the attitude towards strangers, the financial regime, and so on. He even elaborates on the existing Ten Commandments with guidance, for example, “not to be given too much wine.” In his proto-constitution, Benedict set up the rules of the game on how to live together, with the promise that, if followed precisely and obediently, the community would sustain itself.

Indeed, Benedictine communities still exist today, their code of conduct still intact, their buildings still upright. The original abbey was sacked on numerous occasions and restored on many others; troops garrisoned there; bishops and popes were drawn from its ranks; earthquakes razed the complex; and its power rose and faded over the centuries. During the Second World War, Allied forces levelled the monastery, resulting in yet another restoration. And yet, to this day, the Order of Saint Benedict is still housed there, still following the same rule.

The monks’ unity under divine providence masks the fact that their unity was artificially constructed and is automated by the conduct dictated by the rule. The rule built the community; the community then built divine worship and the real estate to accommodate it. While the contents of the rule are fixed, the people who follow it have continually changed, bringing new interpretations of the rule into the monasteries and abbeys with each new generation. This refresh of individuals is what made the order versatile and resilient against the unknown. The rule only brought the individuals together. The self-sustainable or self-governing capacity attributed to the seventy-three chapters sustains nothing without these individuals.

I remember visiting the abbey of Our Lady of Nazareth in Brecht, a strictly observant Trappistine order, which follows the Rule of Saint Benedict extremely closely. After a long discussion, I was able to enter the inner parts of the building complex, which are normally only accessible to initiated nuns. Under the supervision of the abbess, I made my way to the attic, where the nuns store their worldly belongings. Among these many items were two rotating hatches, once installed in the wall between the worldly, publicly accessible side of the convent and the sanctuary, visually sealing each side off from the other while allowing objects through. These hatches were, in earlier days, the only connection the nuns had to the outside world, used to pass food and letters from family or friends in and out of the monastery. That the nuns had decided to remove them proved the rule was not as immutable as it would seem, or at least that it remains open to interpretation. The idea that the institution of an irrefutable script would manage the community through brainless adoption alone, incorporating the rule as if an extension of passive bodily function, like drones in a beehive, is an illusion.

The Benedictine fixation on self-regulation embodies the dream of automation without maintenance, and the assumption that automation will hold relevance throughout eternity. Reassigning agency to the script or the machine allows us to redirect our attention elsewhere, indefinitely ignoring automated processes. Automation is therefore based on the normalised assumption that the future will not diverge from the past, as it can only repeat the actions it was conceived to regulate.

Still, if the circumstances for which automation is valid cease to apply, human intervention remains necessary. Take, for instance, the self-regulated automation of a bus queue. The rule, we know, is that we stand behind the person ahead of us, allowing the first to enter first, the second to enter second, and so on. The order in which these actions occur is organised by a social script, embodied by every participant in the queue, and allows us to direct our attention elsewhere. However, this script is only valid so long as all the automated processes that precede or follow it function as intended—the bus arrives, has enough room for everyone in line, and so on. When these processes do not work as intended, the automated succession of events breaks down. Riders skip the queue or leave. People are suddenly thrust into action. What rectifies the situation is the insertion of a corrective, something to maintain the original state of automation—say, a steward arrives to shepherd the queue along and keep everyone in line.

The idea that all automations are inherently self-sustaining holds only within the limitations of time, and only if the automation does not mutate by way of randomness. Systems that seem self-sustaining or auto-regulatory, like the Rule of Saint Benedict, are in fact continually transformed by human agency to maintain their relevance and functionality.

 

 

1_2_3_Maintenance, Management

 

As I have made clear, the replacement of human agency by automation is far from assured or even logical. Much as mechanical automation almost always needs coal, petroleum, or electricity to fuel its engines, so too does social automation need constant input to keep it from coming to a halt. Social automation, like technical automation, needs to be lubricated, cleaned, its obstructions cleared. It needs maintenance to prevent corrosion and erosion. Parts need to be replaced. As time inevitably progresses and entropy runs its course, even automated engines wear down progressively, prompting cognitive processes of repair or reorganisation. This is what we could call maintenance. It could be argued that maintenance is yet another form of automation, since repair procedures are usually organised into playbooks, contingency plans, schedules, or follow-up protocols, all accounting for the likelihood of decay and how to counter it. It is therefore necessary to make a distinction between maintenance as a means to continue, and maintenance that updates, refreshes or adjusts to novel circumstances.

Maintenance as perpetuation or preservation is indeed part of automated processes. A detailed script steers the maintainer to pre-empt and subsequently mend possible failures of a device, or to restore the device to its original state after an unforeseen event. The technicians and plumbers, garbagemen, firefighters, and nurses of this world struggle every day to maintain the structural stability of the automations which support our society, allowing the rest of the population to spend their cognitive attention elsewhere. (The question remains which tasks, performed by humans, can be considered “the rest,” when following this strict definition of automation and maintenance. But more on that later.) However, we cannot assume that these maintainers perform their maintenance tasks only mindlessly. Randomness ensures that the tasks of maintenance will eventually diverge from any pre-emptive scripts and playbooks, demanding more elaborate interventions—problem solving, we could call it. Though often in service of continuation or restoration, problem solving demands ingenuity (often called creativity, a word we will return to in a later chapter). Ingenuity seems to negate automation and its intended cognitive discharge. During moments of crisis, when the rules of the script no longer apply, the shortcomings of automation are exposed. These shortcomings are then noted, solved or circumvented, and then integrated into future protocols for the device’s operation.

While moments of true problem solving are rare compared to the number of maintenance actions successfully performed according to protocol, it is crucial to note that it is in these instances that the notion of maintenance as a process of updating emerges. In the words of anthropologist Tim Ingold:

 

Design, it seems, must fail if every generation is to look forward to a future that it can call its own: that is, for every generation to begin afresh, to be a new generation. To adapt a maxim from the environmental pundit Stewart Brand: all designs are predictions; all predictions are wrong. This hardly sounds like a formula for sustainable living. The sustainability of everything, I have argued, is about keeping life going. Yet design based on the science of sustainability seems intent on bringing life to a stop, by specifying moments of completion when things fall into line with prior projections. If design brings predictability and foreclosure to a life-process that is inherently open-ended, then is it not the very antithesis of life? […] How, then, can we think of design as part of a process of life whose outstanding characteristic is not that it tends to a limit but that it carries on? 

Design, or protocol, is prone to rust. The entropy of design, as dictated by time’s arrow, is inevitable. The way entropy is dealt with is what sets continuation and update apart. 

Look at the way airplanes are maintained. As a relatively new technology, manned flight has been through much trial and error over the past century. Every setback, every problem, requires problem solving. A faulty design would be improved or removed, plans drawn and then maybe scrapped. I think it is safe to say that nearly 99% of all designs for the creation of a flying device either never left the drawing board, were cancelled during tests, were never used when testing was completed or were altered over continuous and numerous flight-hours. Because of the extreme expectations of safety placed on aeronautical technology, air travel is now considered one of the most reliable means of transportation. Yet, with this assurance of security and predictability comes a stagnation in development. Since the 1970s, there have been no increases in the speed at which human travel is possible, though our speeds had been exponentially rising from the introduction of the steam engine, the internal combustion engine, and the jet engine, taking humans far beyond the barrier of the speed of sound. Moreover, since the retirement of the supersonic Concorde in 2003, the curve of travel speeds has pointed down for the first time in history. The overall structural design of commercial jets has not changed since then; innovation has been reduced to minor improvements in passenger comfort and fuel economy. 

Maintenance, it seems, relies on the standard set by the initial design. Every update to this design opens the door to a potentially worse end result. The risk of confronting unknowns is often more overwhelming than that of mitigating well-understood problems. Yet, for the tiny portion of events that still cannot be classified in binders of maintenance or ledgers of management, maintenance affords a shot at transcending the known. These events are moments of real possibility, when the puzzle of human thought is put together, combining all skills and willpower to overcome something which has not been overcome before.

In this sense, management can be considered synonymous with maintenance. A manager, like a maintenance worker, is a guardian of the equilibrium usually sustained by automation. Yet the two lie on opposite ends of the chain of command: the manager is on top, giving orders, the maintainer at the bottom, taking them. The manager is tasked with minimising liabilities and maximising efficiency, preferably by not changing anything at all. Like the maintainer, the manager executes the trusted protocols drafted to mitigate these invariable mechanics. So why are managers so often assumed to be more in control of an automation, more able to steer or alter its course, than the maintainer? When balancing the financial, social, or technological accounts of automation, the manager ultimately does nothing more than follow the guidelines provided by normalisation, as does the maintainer. They are both expected to use their specific know-how to overcome problems in the rare cases when they do occur.

Managers are not operators, in that they do not personally control their inert tools. They do not initiate processes. They are executors, managing that which has already been set in motion. It would be interesting to know how many CEOs around the world are managers in this sense of the word, performing assignments given by no one, but required to keep things moving—maintaining automation, rather than leading a system to new frontiers. 

Nevertheless, it is fairly obvious that not every CEO is a mere maintainer, powerless before their own enterprise. To conceptually level them with their lower-on-the-ladder counterparts, saying that they do, in essence, the same work, would imply that they hold the same amount of power. This is clearly not the case. While the CEO and the mechanic are equally led by protocol, it is the mechanic’s job to mitigate problems and problem-solve at the front end of the protocol, and the CEO’s job to transform these solved problems into protocols of administration.

The roles of the maintainer and the manager are both strictly controlled by normalisation, integrated in systems of automation. In a ship’s crew, both captain and sailor execute the necessary protocols to move the ship forward. The captain knows that the protocol allows for some degree of flexibility to counter unforeseen scenarios between points A and B. She can also alter the itinerary mid-voyage, but risks mutiny if a consensus on the new direction cannot be reached. She could decide to move full-sail into a storm, or to approach a rocky coast without precautions, but would risk the loss of the ship and of the lives onboard. In this situation, the balance between manager and maintainer is fragile, and any alteration could lead to liability. It is therefore best for the protocols to be enforced and to remain the same, at all times, for both captain and crew. Both ends of the power spectrum are equally restricted by the limitations of the automation. 

Yet, this metaphor assumes the manager and the maintainer to be, quite literally, in the same boat. Both positions carry the same amount of risk: they are “in this together.” This is, of course, rarely the case. Inequality of outcome between the tasks of the manager and the maintainer is common when the two are not as close to each other as the captain and her crew. The positions are separated by node upon node of automation, by complexity and vastness. Automation creates a distance between those who maintain it and those who manage it, and detaches both from the original normalisation on which the automation is built. It is what separates the managers of airline companies from the designers of airplanes, leading to a stagnation of development and change. They are not in this together: the former want to reduce liabilities and assure safety, maintaining what works for now, while the latter are eager to design entirely new vehicles, up to date with current demands and future necessities. Why are airplanes still so polluting, and the companies who operate them so unable to address their share in filling the atmosphere with carbon dioxide? The perpetuation of what has so far proven to work is in the best interest of the managers who supply air travel to commerce. Changing these parameters drastically, and tailoring them to contemporary needs, might affect security and reliability. The maintainers are therefore coerced into sticking with the known, as they do not want to be held accountable for making changes to a working design that might introduce a whole new set of variables. 

 

In the rationale of sustainable development, the world is understood not as a plenum to be inhabited but as a totality to be managed, much as a company manages its portfolio, by balancing the books. At the point of balance, the supply of renewables precisely matches consumer demand. Now in theory, if the world and everything in it could be poised on this point, then it could be kept forever in a state of dynamic equilibrium. Sustainability, however, would then have been bought at the expense of putting life and history permanently on hold. The future could be no more than a protraction of the present.

 

It is in this void, in the dissonance between maintenance and management, that maintenance as update, as renewal and change, finds its origin. Maintenance as an updating process stems from a realisation of the unsustainability of automation’s status quo in a realm of infinite possibility. To find the source of this realisation, we need to look closer at what I called the promise of automation, and to take into account the attractiveness and ugliness of this promise. 

 

 

1_2_4_Care and Quality

 

The promise of automation is a self-sustainable, unbiased, and objective replication of the past in the future, void of morality and intervention. This concept is appealing for its exclusion of the randomness and complexity inherent to human thought, human creation, and human conduct, and to nature in general. Automation, in its platonic ideal, subtracts moral or ideological ambiguity from the processes of organisation. For automation, there is no good or bad, there is no judgement or exoneration. It puts together what is meant to be put together, without questions, without interpretation, and without variation. Automation offers us peace of mind in the knowledge that what is automated is fixed in time and space and will not change.

Again, this promise is negated by decay and subsequent maintenance: any automation loses relevance over time. But it also neglects our more general search for quality. There is a certain care involved in maintenance, a human care for quality. This is care that goes beyond the joy of a problem solved, which can indeed be quite pleasant. To want for quality is to dream of repair, to long for a world in which a problem does not occur anymore. It is a yearning for a situation which is better than before—an improvement, an upgrade, or an update. And it is exactly this feeling of quality which is both the thesis and the antithesis of automation. In Zen and the Art of Motorcycle Maintenance, Robert Pirsig questions this metaphysics of quality:

 

Quality—you know what it is, yet you don’t know what it is. But that’s self-contradictory. But some things are better than others, that is, they have more quality. But when you try to say what the quality is, apart from the things that have it, it all goes poof! There’s nothing to talk about. But if you can’t say what Quality is, how do you know what it is, or how do you know that it even exists? If no one knows what it is, then for all practical purposes it doesn’t exist at all. But for all practical purposes it really does exist. What else are the grades based on? Why else would people pay fortunes for some things and throw others in the trash pile? Obviously some things are better than others—but what’s the “betterness”?—So round and round you go, spinning mental wheels and nowhere finding anyplace to get traction. What the hell is Quality? What is it?

 

Pirsig’s confusion as to how to precisely define quality lies in the dissonance between renewal and continuation, between maintenance and management. Quality is a word used in a variety of contexts: from the condition of a material or a standardised process to the deliberate time spent with a loved one. It can describe both a result and a procedure. Quality can define a character or a trait, but also the reliability of a service. Whatever the denomination, quality can always be divided into pairs of opposing principles—subjectivity and objectivity, spontaneity and predictability, personal and public values. According to Pirsig, this duality makes for a continuous dialogue inherent to the word’s use: weighing a beautifully handcrafted ceramic plate against the awesome perfection of an industrially produced bowl, we are equally impressed, but with different understandings of quality in mind. 

Quality can be both a moral principle and an objective observation, yet in each use it is never fully detached from the other. Automation, then, sprouts from our desire to objectify, to disconnect a process from the narratives of morality, and to cultivate predictability, isolated in time and space; in short, automation derives from our want of a better world—it derives from our search for quality. And so, when automation puts forward a pledge of detachment from morality, it does so inherently loaded with moral judgement. From the search for quality—something better, more just, and less biased—emerges a system which seems to ignore all aspects of quality. Automation excludes human influence altogether, and remains quarantined from any real intervention, as that would taint the objectivity of the automation. It reduces the human agent to a spectator. 

Pirsig uses the example of motorcycle maintenance to differentiate between the expectations of automation and its reality. Reading the manuals provided by the motorcycle manufacturer, he notices that even the protocols provided to maintain the device are drafted in such a way that they expect the same objectivity and sterility from the maintainer as they would from the device itself. 

 

These were spectator manuals. It was built into the format of them. Implicit in every line is the idea that “Here is the machine, isolated in time and in space from everything else in the universe. It has no relationship to you, you have no relationship to it, other than to turn certain switches, maintain voltage levels, check for error conditions,” and so on. That’s it. The mechanics in their attitude toward the machine were really taking no different attitude from the manual’s toward the machine. […] We were all spectators. And it occurred to me there is no manual that deals with the real business of motorcycle maintenance, the most important aspect of all. Caring about what you are doing is considered either unimportant or taken for granted. 

 

Pirsig makes the important remark that maintenance can, in fact, also mean care. To mend the sterility of automation with the careful human touch is itself a form of quality. Quality—and therefore automation—with the duality it embodies, neutrality and care, is a product of normalisation, the fictionalisation of the unknown future. It is built on a narrative that dictates both the morally (internally) good and the neutrally (externally) good. Both morality and neutrality are, then, concepts of the mind. Without the mind, neither care, quality, nor the promise of automation would exist: motorcycles would be just steel objects. They would, in fact, just be inert materials and ores in the geological layers of the earth. Pirsig continues: 

 

That’s all the motorcycle is, a system of concepts worked out in steel. There’s no part in it, no shape in it, that is not out of someone’s mind. […] I’ve noticed that people who have never worked with steel have trouble seeing this...that the motorcycle is primarily a mental phenomenon. They associate metal with given shapes...pipes, rods, girders, tools, parts...all of them fixed and inviolable, and think of it as primarily physical. But a person who does machining or foundry work or forge work or welding sees “steel” as having no shape at all. Steel can be any shape you want if you are skilled enough, and any shape but the one you want if you are not. […] These shapes are all out of someone’s mind. That’s important to see. The steel? Hell, even the steel is out of someone’s mind. There’s no steel in nature. Anyone from the Bronze Age could have told you that. All nature has is a potential for steel. There’s nothing else there. But what’s “potential”? That’s also in someone’s mind!

1_3_ON COMPUTATION


1_3_1_Totality

 

“All stable processes we shall predict. All unstable processes we shall control.” — John von Neumann

 

Quantify everything! That is the rule of thumb of computation. As an elaboration on Pierre-Simon Laplace’s fantasy of “an intelligence sufficiently vast to submit (the totality of) data to analysis,” computation emerges from the idea that the past might hold the key to the future. The translation of the complexity of nature into data permits calculation, and thereby prediction. We only need a calculator powerful enough to make said calculation. It was the mathematician George Boole who, in the mid-nineteenth century, laid the groundwork for this abstraction by reducing the logic of computation to the dichotomy of “true” or “false,” 1 or 0. A thing either exists or it does not. 

 

In Boolean algebra, the world of numbers—and along with it, the world as a whole—breaks down into binary code. Numbers as we know them are forms in which this coding manifests itself. In other words, they do not represent anything. The Boolean digits 1 and 0 do not designate a quantity; instead, they mark presence and absence. Thus, 1 stands for the universe, and 0 stands for nothingness. Nevertheless, the terms are not governed by a mutually exclusive relationship. Their relationship is complementary: they follow the same logic. Just as 1 times 1 always yields 1, and 0 times 0 always produces 0, x times x always equals x in the Boolean world. For the same reason, All and Nothing meet up in the formula x = xⁿ. Because x can stand for anything and everything—indeed, for the universe in its entirety—it is no exaggeration to speak of a digital world formula in this context. 

 

Cultural theorists Martin Burckhardt and Dirk Höfer see the Boolean formula as the foundation upon which the digital philosophy of computation is built: either everything is quantifiable, or nothing is. If one thing can be quantified, so can everything, proving the existence of a totality. Almost spiritual in tone, the authors proclaim: “Through the Zero and the One all things were made; without them nothing was made that has been made.”

It is the search for the meaning of totality, and subsequently the quantification of totality, that led mathematicians, statisticians, and cryptographers to expand their visions of computation as a complexity-unravelling and randomness-anticipating strategy. Henri Poincaré states that “if we knew exactly the (totality) of the universe at the initial moment, we could predict exactly the situation of that same universe at a succeeding moment.” But directly afterwards, he counters that “it may happen that small differences in the initial conditions produce very great ones in the final phenomena.” The only problem is that the totality of the universe is not exactly known, and therefore not a totality at all. To correct this problem, we would need more data; indeed, all data would need to be included. It was this thirst for data and data-processing that drove mathematician and electrical engineer Claude Shannon to carry Boole’s concepts of a binary world into the design of electrical switching circuits—which can be positioned on or off—and, in doing so, create a device capable of computing anything computable. As Burckhardt and Höfer write:

 

Boole envisioned a kind of mathematics that would allow for calculations with apples and oranges—that is, allow for one to jump from one number system to another. Such a leap was made possible when he took the 0 and the 1 out of any order of denotative logic— indeed, out of mathematics itself. In doing so, he made the 1 no longer stand for a quantity, but a presence, and the 0 stand for an absence (of something—whatever it may be). The real revolution lies here: Boole’s binary logic detaches itself from mathematics; it abandons the material quality of what it describes. Because it oscillates between the universe and the void, it can describe anything. Viewed in this light, Boolean thinking is not algebra so much as a general theory of signs. […] In fact, this same abstractness is what fascinated readers of Shannon’s Mathematical Theory of Communication a good hundred years later, and today is what allows us to digitize not only texts, images and sounds, but also earthquakes, brain activity and extraterrestrial phenomena.
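Shannon’s leap can be sketched in a few lines of present-day code. Switches wired in series conduct only when both are closed, behaving as Boolean AND; switches wired in parallel conduct when either is closed, behaving as OR. The Python toy below illustrates that correspondence; it is not Shannon’s own notation, and the majority circuit is my example:

# A switch is a boolean: True for closed (conducting), False for open.

def series(a: bool, b: bool) -> bool:
    # Current passes a series connection only if both switches are closed.
    return a and b

def parallel(a: bool, b: bool) -> bool:
    # Current passes a parallel connection if either switch is closed.
    return a or b

# Composing the two wirings yields arbitrary switching functions,
# for instance a majority vote over three switches:
def majority(a: bool, b: bool, c: bool) -> bool:
    return parallel(parallel(series(a, b), series(b, c)), series(a, c))

assert majority(True, True, False)
assert not majority(True, False, False)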

 

As the father of information theory (or so he is called), Shannon imagined the first computer able to abstract the totality of information. This led Edward Lorenz to formulate his theories on deterministic chaos and complexity, only to discover that the data his computer was able to produce and process was insufficient; complexity reached so far into the decimals, if not infinitely far, that predictions of future events held for no more than a few days. Meanwhile, mathematician and philosopher Norbert Wiener, the father of a new science called “cybernetics,” which he framed as “the scientific study of control and communication in the animal and the machine,” integrated computation into every possible field of research, vigorously attempting to streamline the totality of the universe along Boolean logic. 

Still, after more than 200 years of proving and disproving the achievability of predicting the future, of attempting to concoct devices and abstractions of comprehension, of rendering reality into data, we see today that it is not enough. Following Boolean logic, we know it will never be enough, that the totality of information, of “the laws of nature and the situation of the universe,” will never be grasped, as it is a matter of all or nothing.



1_3_2_Stability

 

In our endless search for strategies to deal with the future, we have wound down a path of nearly manic fascination with stability. We began with normalisation, a strategy of exclusion of possible futures; then imagined automation, a strategy for upholding normalisations into infinity; and now live in a world of computation, a strategy for shaping the future. As automation is the extension of normalisation—protocolled normalisation projected into infinity—computation is the extension of automation. It is the assimilation of the Boolean formula into the scripts of automation. Where automation, in its most essential form, can only repeat past experiences indefinitely, merely assuming that the future will be similar to the past, computation models a future actively, updates itself, and harbours the expectation of its own self-preservation. 

 

Just as global telecommunications have collapsed time and space, computation conflates past and future. That which is gathered as data is modelled as the way things are, and then projected forward — with the implicit assumption that things will not radically change or diverge from previous experiences. In this way, computation does not merely govern our actions in the present, but constructs a future that best fits its parameters. That which is possible becomes that which is computable. That which is hard to quantify and difficult to model, that which has not been seen before or which does not map onto established patterns, that which is uncertain or ambiguous, is excluded from the field of possible futures. Computation projects a future that is like the past—which makes it, in turn, incapable of dealing with the reality of the present, which is never stable.

 

Computation is a paradoxical undertaking, according to artist and writer James Bridle, since its goal in adapting to the future is, in fact, stability. Computation presents us with a future that supports our assumptions, since computation, and the technology that enables it, offers us the comforting pillow of neutrality: machines will make better choices than humans, because they are not tainted, spoiled, or tangled up in our limited capacity to comprehend the random events of the future. They do not need to exclude possible futures to make decisions; they do not need to fragment time into intervals; they are not overwhelmed by arbitrariness. Computational machines only follow the sterile logic of computation. They have no culture, no history, no backstory—only pure, unbiased, algorithmic reasoning. Machines will not only make better choices than humans; they will make the best choices. 

This is, of course, the promise of computation. Through computation, our assumption—a fiction, remember—that the future is like the past is continually confirmed, and quietly replaced by the fiction of objectivity. By pushing both decision-making processes and the responsibility for those decisions onto the machine, those aspects of human life which have traditionally been open to interpretation are now quarantined from our contamination. Bridle continues:

 

This conditioning occurs for two reasons: because the combination of opacity and complexity renders much of the computational process illegible; and because computation itself is perceived to be politically and emotionally neutral. Computation is opaque: it takes place inside the machine, behind the screen, in remote buildings—within, as it were, a cloud. Even when this opacity is penetrated, by direct apprehension of code and data, it remains beyond the comprehension of most. The aggregation of complex systems in contemporary networked applications means that no single person ever sees the whole picture. Faith in the machine is a prerequisite for its employment, and this backs up other cognitive biases that see automated responses as inherently more trustworthy than non-automated ones.[…] Technology’s increasing inability to predict the future—whether that’s the fluctuating markets of digital stock exchanges, the outcomes and applications of scientific research, or the accelerating instability of the global climate — stems directly from these misapprehensions about the neutrality and comprehensibility of computation. 

 

Like all normalisations, which transition from a narrative elucidating the unknown future into a certainty of unknown origin, computation has transitioned from an auspicious aspiration into a solidified given. And not without reason. The quantified world of computation, built on statistical analyses of centuries of data, is right too often for us to ignore. When predicting the weather patterns of tomorrow, the old, superstitious weather lore—“when March blows its horn, your barn will be filled with hay and corn,” for instance—simply falls short compared to contemporary satellite imagery, atmospheric models, and fluid simulations. The guarantees of computation offer more stability and certainty than other types of normalisation do. But the grandeur of computation—quantify everything, not just something—is expansive, requiring more and more input to reach the totality of information and tighten the net of its predictive accuracy. This grandeur overwhelms us into compliance: the dataset is good, but it can be better. Yet outsourcing agency to the automation obfuscates the internal complexity of its processes. More simply,

 

Computational thinking has triumphed because it has first seduced us with its power, then befuddled us with its complexity, and finally settled into our cortexes as self-evident.

 

The self-evidence of computation is a reminder of the normality of normalisation, how it dims the logic of any alternative. But the ambitions of computation, Bridle suggests, make its predictions of the future less and less accurate, which in turn makes its self-evidence not only problematic, but self-replicating. Computation needs to lay promise upon promise to mask the original but faulty assumption at its core—that the future is nothing more than the effect of past causes, which can be abstracted into data—and in doing so flatten out its raison d’être altogether. Computation’s promises of neutrality, comprehensibility, self-sustainability and, most importantly, stability have resulted in an intricate self-affirming web.

As we know, our dream of stability, of an unalterable future, is not at all new. Nor are the dynamics of self-affirmation that computation camouflages. What is profoundly different now is computation’s reliance on the past, and its ability to replicate it. Data—which is necessarily gathered in the past and never in the future—is used to design models of the future. Computation therefore assumes that the future will be like the past. Unlike humans, who also live by this assumption but rely on faulty memory, computational machines are capable of maintaining this data, storing it and recollecting it, indefinitely. Where the past was open to interpretation prior to computational thinking, it is now precisely registered and ostensibly objectively archived, ready to be photocopied into the future. The past can be used as a template for the future, yet only the parts which have been transformed into data and which have been logged into the vaults of computation. By actively creating a future which is like the past—through steering our outsourced decision-making and responsibility—computation has given us the ability to repeat, recreate, reiterate and reenact our past with a precision previously unknown to us. And in this sense, the promise of stability—stability through repetition, through management, through continuation, through simulation, through the complete annulment of interpretation and variability—has become reality. 

 

 

1_3_3_Remembering

 

To be is to have been, and to project our messy, malleable past into our unknown future. — David Lowenthal

 

Computation’s re-creation of the past in the future relies on the rationalisation of memory, which determines our understanding of what the totality of information means exactly. Memory in the automated world has a different meaning than human memory, yet it is the conflation of these two meanings that generates the replicating capabilities of computation. For a clear example of the way memory is dealt with in greedy archives of computation, we can look to Borges’s 1942 story “Funes, the Memorious.” In the story, the narrator recalls an encounter with a mysterious character, Ireneo Funes, who was able to remember everything he had ever seen or experienced.

 

With one quick look, you and I perceive three wine glasses on a table; Funes perceived every grape that had been pressed into the wine and all the stalks and tendrils of its vineyard. He knew the forms of the clouds in the southern sky on the morning of April 30, 1882, and he could compare them in his memory with the veins in the marbled binding of a book he had seen only once, or with the feathers of spray lifted by an oar on the Río Negro on the eve of the battle of Quebracho. Nor were those memories simple—every visual image was linked to muscular sensations, thermal sensations, and so on. He was able to reconstruct every dream, every daydream he had ever had. Two or three times he had reconstructed an entire day; he had never once erred or faltered, but each reconstruction had itself taken an entire day. “I, myself, alone, have more memories than all mankind since the world began,” he said to me. And also: “My dreams are like other people’s waking hours.” And again, toward dawn: “My memory, sir, is like a garbage heap.” A circle drawn on a blackboard, a right triangle, a rhombus—these are forms we can fully intuit; Ireneo could do the same with the stormy mane of a young colt, a small herd of cattle on a mountainside, a flickering fire and its uncountable ashes, and the many faces of a dead man at a wake. I have no idea how many stars he saw in the sky. 

 

Funes’s ability to remember—or, more accurately, his inability to forget—allows him to accumulate and recollect strings of experiences so fine-grained that next to no cognitive gaps need to be filled in through inductive means. Like Bergson’s definition of change as a continuous and dynamic whole, Funes’s memory is non-intermittent and indivisible, resulting in an overwhelming resolution which permits the comparison of hypothetical apples to oranges. It becomes clear that, to harvest the totality of information, the storage of this information needs to come first, and it needs to be executed in infinite detail. Funes registers every sensation, every movement, and every vision into the logbooks of his brain; they are immediately available to him at any instant. His body is a mobile recording device, needing only a glance at an object to have it captured, stored, and carried within forever. 

The baseline of computation is a capacity for memory like this. If a machine forgets nothing, then its scan of the world and the situation of the universe would result in a comprehension of totality. All angles would be seen, all sounds heard, all movements registered. Even if it takes some time before everything is logged, the unforgetting memory would collect and safeguard all data until totality is achieved, slowly filling in a seemingly endless blank. The retrieval of this data from the archives and its subsequent comparison with other data then allows it to find overlap and similarity, or to produce an impression of an even denser resolution, layer upon layer, map upon map. Just as Funes can compare the southern clouds to a splash of water, placing them virtually next to each other, so can computation place sound next to image, and metadata next to movement. 

However, absolute memory warrants not only the comparison of incomparable differences, but also of similarities, even likenesses. Funes, confronted with the generalising logic of the average mind, is baffled by the banality of perception, by the simplifying and compartmentalising processes people use to grasp the complex. He is, in fact, unable to recognise two or more experiences as being alike.

 

Funes, we must not forget, was virtually incapable of general, platonic ideas. Not only was it difficult for him to see that the generic symbol “dog” took in all the dissimilar individuals of all shapes and sizes, it irritated him that the “dog” of three-fourteen in the afternoon, seen in profile, should be indicated by the same noun as the dog of three-fifteen, seen frontally.

 

What does the comparison between incomparable differences achieve if it is unable to group and read abstract similarities? In computational thinking, similarities are referred to as “patterns”, which is a way to describe recurring sequences or strings of data from different sources. To find a pattern is to lay the sequences on top of each other and note both the similarities and the differences—which pixel or group of pixels accords with another, which wavelength synchronises with another, which quantified representation of reality matches with another. This understanding of similarity is entirely different from what Borges defines as similarity. Indeed, a dog seen from the side on a sunny day and then seen frontally on a cloudy day is visually not the same dog. Yet in human perception, leaving out the differences entirely and only focussing on the similarities, the dog is considered one and the same, regardless of the time passing or the weather changing or the angle of view shifting. It is in this sense that the perception of computation, based on a flawless and complete memory, differs from the faulty and messy mind of mankind. To put it in Borges’s words, “To think is to ignore (or forget) differences, to generalise, to abstract.” And as computers are incapable of forgetting or ignoring, their vision of the world remains particular, built up from endless and immediate particulars. 
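What “laying the sequences on top of each other” amounts to can be made concrete in a short sketch. The pixel rows below are invented for illustration; the point is that positionwise agreement between two quantified signals is all the machine has, with no enduring “dog” behind the numbers:

def overlap_score(a: list, b: list) -> float:
    # The fraction of positions at which two equal-length sequences agree.
    assert len(a) == len(b)
    return sum(x == y for x, y in zip(a, b)) / len(a)

dog_at_314_profile = [12, 15, 15, 80, 82, 81, 40]  # hypothetical pixel rows
dog_at_315_frontal = [12, 14, 16, 79, 90, 30, 41]  # same dog, new angle

print(overlap_score(dog_at_314_profile, dog_at_315_frontal))  # ~0.14
# To a perfect memory these are two particulars, not one and the same dog.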

Bergson describes the same process of abstraction in a different way. He assumes that our mind is, indeed, capable of perceiving particulars, but that the breaking up of continuity into particulars, into intervals, allows the mind to bring certain events to the front and send others to the back. 

 

When the two changes, that of the object and that of the subject, take place under particular conditions, they produce the particular appearance that we call a “state.” And once in possession of “states,” our mind recomposes change with them. […] There is nothing more natural: the breaking up of change into states enables us to act upon things, and it is useful in a practical sense to be interested in the states rather than in the change itself. 

 

Like computation, which breaks down change into particulars, into states, or better, into fixed numbers, human perception cuts the continuous flow of information into befores and afters. Computation, however, is unable to select, to choose between these states. By adding more and more states to memory without deleting what lies between, computation reaches a point at which it reconstructs the graduality of change again, like a camera producing 24 frames per second and placing them so tightly together that the difference between the images blurs into one continuous movement. The human mind selects: first by breaking up the flow of change into states, and then by isolating specific states and whiting out others.

 

The facts […] show us, in normal psychological life, a constant effort of the mind to limit its horizon, to turn away from what it has a material interest in not seeing. Before philosophizing one must live; and life demands that we put on blinders, that we look neither to the right, nor to the left nor behind us, but straight ahead in the direction we have to go. Our knowledge, far from being made up of a gradual association of simple elements, is the effect of a sudden dissociation: from the immensely vast field of our virtual knowledge, we have selected, in order to make it into actual knowledge, everything which concerns our action upon things; we have neglected the rest. The brain seems to have been constructed with a view to this work of selection. That could easily be shown by the way in which the memory works. Our past […] is necessarily automatically preserved. It survives complete. But our practical interest is to thrust it aside, or at least to accept of it only what can more or less usefully illuminate and complete the situation in the present. The brain serves to bring about this choice: it actualizes the useful memories, it keeps in the lower strata of the consciousness those which are of no use. One could say as much for perception. 

 

Forgetting permits agency and knowledge. It thereby keeps us sane, as the totality of information would bewilder us, as it does Funes:

 

Funes could continually perceive the quiet advances of corruption, of tooth decay, of weariness. He saw—he noticed—the progress of death, of humidity. He was the solitary, lucid spectator of a multiform, momentaneous, and almost unbearably precise world. Babylon, London, and New York dazzle mankind’s imagination with their fierce splendor; no one in the populous towers or urgent avenues of those cities has ever felt the heat and pressure of a reality as inexhaustible as that which battered Ireneo, day and night, in his poor South American hinterland. 

 

The totality of information does not, however, bewilder computational devices. Bewilderment is a human trait, inextricably connected to the limitations of the human mind. Yet it is these same limitations which allow me to compare Borges’s fictions to the philosophy of Bergson, written thirty years apart and numerous decades before my birth. My ability to forget permits me to see similarity in written texts of the same language, by selecting that which is relevant and leaving out that which is not. It permits me to block out the contexts in which both texts were written, and the backgrounds both writers had. It supports my mission to create a new insight or a new interpretation. What it does not permit is for me to compare these texts to the structural integrity of a skyscraper or to the average yearly flow of the Nile basin. It does not allow me to capture the total number of characters used in both texts and compare it to the history of every text ever written—which is, indeed, a possibility for computation.

It is the dynamic of forgetting, reinterpreting, and re-embodying that creates not only individual memory, but history in general: the collection of past experiences and the selection of the memories captioning those experiences. The limitation active in the human mind is mirrored by the succession of generations in a community. People do not just forget; they also die, perhaps the most extreme form of forgetting. Their experiences are lost or, when written down or otherwise registered, left unprotected by their originator, and thus again open to interpretation. The past inescapably recedes from the present, taking with it the ideas and memories of those who have passed away. All that is left are the traces they leave behind, either in direct transmission to another, or in another physical form. 

Normalisation is intrinsically a fictionalisation from the past projected into the future, a narrative built on the ruins of bygone days, solidified in the collective memory as a given. Its origin is forgotten, irretrievably lost in the inevitable progression of time, which forces memories to fade and generations to pass. The process that forms normalisations is therefore based on the same selective procedure performed by the brain: to make sense of the residue of the past, we include some traces and sources and leave out others, creating a consistent whole with a logical, reasonable narrative. David Lowenthal describes this relationship to the past as a visit to a foreign country, where the similarities and differences are extracted selectively from experience and subsequently internalised as normal to straighten out this unknown territory. 

 

The past itself is gone—all that survives are its material residues and the accounts of those who experienced it. No such evidence can tell us about the past with absolute certainty, for its survivals on the ground, in books, and in our heads are selectively preserved from the start and further altered by the passage of time. These remnants conform too well with one another and with knowledge of the present to be denied all validity, yet residual doubts about the past’s reality help to account for our eagerness to accept what may be dubious about it. There can be no certainty that the past ever existed, let alone in the form we now conceive it, but sanity and security require us to believe that it did.

 

Interpreting the past as a way to avoid its complexity or escape its “inexhaustible heat and pressure,” by cognitively knitting together a selection of the remnants is not only what keeps us sane, but also what allows us to take a glimpse, through the experience of hindsight, at what may be to come. It provides a sense of security, of certainty. By infusing the narrative past with present conformities, we highlight similarities. When what happens now has happened in the past, it reassures us of the outcome. Lowenthal continues:

 

As modes of access to the past, memory, history, and relics exhibit important resemblances and differences. By its nature personal and hence largely unverifiable, memory extends back only to childhood, though we do accrete to our own recollections those told us by forebears. By contrast, history, whose shared data and conclusions must be open to public scrutiny, extends back to or beyond the earliest records of civilization. The death of each individual totally extinguishes countless memories, whereas history (at least in print) is potentially immortal. Yet all history depends on memory, and many recollections incorporate history. And they are alike distorted by selective perception, intervening circumstance, and hindsight. 

 

History, here, as immortal as it may seem, grows distorted and fluid as time passes. The documentation of past events, however prone to entropy, is re-interpreted at every possible instance through the selective and seemingly random process of perception. As the documentation is never complete and total, and as that which is documented is dispersed again by future generations, the past becomes malleable. 

Computation’s approach to memory, however, and therefore to history, goes beyond interpretation. Computation annuls distortion by selective perception or hindsight, putting in place a neutrality based on the comprehension of this totality of the past—the potential immortality of history made manifest. Forgetting is completely ignored by computation. Philosopher Byung-Chul Han marks the difference:

 

Human memory is a narrative, an account; forgetting forms a necessary component. In contrast, digital memory is a matter of seamless addition and accumulation. Stored data admit counting, but they cannot be recounted. Storage and retrieval are fundamentally different from remembering, which is a narrative process. Likewise, autobiography constitutes a narrative: it is memorial writing. A timeline, on the other hand, recounts nothing. It simply enumerates and adds up events or information. 

 

The dream of computation begins to unravel once we frame it as the fictionalisation that it is, as is the case with any automation and the promises that come with it. The idea that the scale of the totality of information, if ever it could be ascertained, would be of any use at all to us, is formally untrue. In the Borges story, Funes “had reconstructed an entire day” two or three times; “he had never once erred or faltered, but each reconstruction had itself taken an entire day.” Totality, likewise, takes totality to operate. If no cropping of information is made, then no conclusion can be drawn, and the simulation which we create is no different from reality itself. This reminds us of another short story by Borges, only the length of a paragraph: 

 

...In that Empire, the Art of Cartography attained such Perfection that the map of a single Province occupied the entirety of a City, and the map of the Empire, the entirety of a Province. In time, those Unconscionable Maps no longer satisfied, and the Cartographers Guilds struck a Map of the Empire whose size was that of the Empire, and which coincided point for point with it. The following Generations, who were not so fond of the Study of Cartography as their Forebears had been, saw that that vast Map was Useless, and not without some Pitilessness was it, that they delivered it up to the Inclemencies of Sun and Winters. In the Deserts of the West, still today, there are Tattered Ruins of that Map, inhabited by Animals and Beggars; in all the Land there is no other Relic of the Disciplines of Geography.

 

 

1_3_4_Averages and Choices

 

Sometime the same is different, but mostly it’s the same — Queens of the Stone Age

 

Our desire for stability and our quest for the totality of information have led us down a path towards the construction of a replica of reality, a 1-to-1 simulation of past and present to provide us with answers to the questions of the unknown future. Yet the scale of everything is of no use, as it denies us any comprehensible narrative and confronts us with a copy as complex as the original. 

As computation’s undiscriminating view of the world has become so fine-grained that it has no focus, the question at the heart of computation today is a matter of selection, of deciding what is relevant. The goal of computation is to attain a neutral and unbiased representation of the world as it currently exists, but the limitations of human mental capacity require selection and categorisation to make any sense of its infinite reach. How do we maintain these two contradictory facets in computational automation? How do you make a computer make an objective choice about which information is acceptable to exclude?

One means is to align information to its average, compacting large amounts of data into a common denominator. This statistical feat enables a significant simplification of the totality of information, permitting some version of possible human interpretation. Yet it can only do so by cutting away. Typically pictured as the bell-shaped Gaussian distribution, sometimes called the curve of normal distribution, the average excludes the outer, lower, less common ranges of the bell curve, while the central, higher parts hold the most relevance and are therefore considered more valuable. We use averages like these all the time. In the mass production of doors and door frames, for instance, builders design for a range of average human body heights, forcing unusually tall people to duck under standard door frames. Since these people are fewer in number, they are, according to the curve, cut out of the averages used during the design process. 
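The arithmetic of the door frame can be sketched directly. The figures below are illustrative assumptions rather than survey data: suppose adult height is normally distributed around 175 cm with a standard deviation of 7.5 cm, and the builder designs for the mean plus two standard deviations:

from statistics import NormalDist

height = NormalDist(mu=175, sigma=7.5)        # assumed population, in cm
clearance = height.mean + 2 * height.stdev    # design cutoff: 190 cm

# Everyone beyond the cutoff is averaged out of the design:
excluded = 1 - height.cdf(clearance)
print(f"{excluded:.1%} of people must duck")  # about 2.3%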

To recognise a pattern is to make a choice. By reducing the complexity of differences, we bring similarities to the surface. But in doing so, we lose much of the totality of the original, chopping away at differences in order to build patterns. Averages round infinitesimals up or down according to convention. They compress files into workable sizes and scale maps to fit on a table or wall. Averages are what turn Funes’s dog at three-fourteen in the afternoon, seen in profile, and the dog at three-fifteen, seen frontally, into one and the same dog. 

The stability generated by the average excludes that which has not been seen before, that which does not align with existing models. By forcing the totality of information into averages, we make selections based on normalisations of the past. The bell curve and all its derivatives are mathematical models of probability and statistics that induce correlation to enable the retrieval of similarities in an utterly vast and complex web of differences. 

Because of our inevitable need for selection, the process of developing a neutral representation of reality—one that would store each and every memory or data point possible—ultimately leads us to a profound fictionalisation of the future: that the way things are now represents the most neutral, the most efficient, and the most universal model that can be objectively attained. As computation stores and classifies all the data it collects, creating a timeline of its own evolution, it can replicate its history, return to a previous moment, and compare its current state to a past state. It can repeat a past situation with an exactitude which was unimaginable only a century ago, and it develops this quality progressively, becoming more and more precise as it takes in larger amounts of data. It becomes nearly impossible to contradict the outcome of computational models. Every proposition that does not lie within the model’s reach can be cross-referenced with prior examples, and is subsequently averaged out to fit within.
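The mechanism behind this exactitude can be pictured as an append-only log: nothing is overwritten, so any earlier state can be reconstructed by replaying the record up to that moment. A minimal sketch, with invented sensor readings:

log = []  # append-only memory: record everything, overwrite nothing

def record(event):
    log.append(event)

def state_at(moment: int) -> list:
    # Replay the log up to `moment` to reconstruct that exact past state.
    return list(log[:moment])

for reading in ("sensor: 12.1", "sensor: 12.4", "sensor: 11.9"):
    record(reading)

print(state_at(2))  # the past, photocopied: ['sensor: 12.1', 'sensor: 12.4']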

Norbert Wiener, one of the great advocates of the computational quest for totality, also addressed this problem of decision-making and selection in computational automation. He recognised that computers, with statistical biases hard-coded in, would eventually carry on copying faulty versions of themselves, and thus repeat and reiterate the faulty choices they embody. 

 

Any machine constructed for the purpose of making decisions, if it does not possess the power of learning, will be completely literal-minded. Woe to us if we let it decide our conduct, unless we have previously examined the laws of its action, and know fully that its conduct will be carried out on principles acceptable to us! 

 

Wiener instead proposes a machine capable of learning, a kind of contradiction in terms. As we have seen, the construction of knowledge is a feature computation does not control. For it to do so, it would have to make its selections of totality completely on its own, without human interference. In order for a machine to make decisions, it would first have to know how to make decisions. Wiener qualifies his statement thus:

 

On the other hand, the machine like the djinnee, which can learn and can make decisions on the basis of its learning, will in no way be obliged to make such decisions as we should have made, or will be acceptable to us. For the man who is not aware of this, to throw the problem of his responsibility on the machine, whether it can learn or not, is to cast his responsibility to the winds, and to find it coming back seated on the whirlwind. 

 

Even if the machine were able to make said decisions, it would be just as necessary to continually recalibrate it, to remind it of what we expect from it. The paradox is that the references we have for what we want are exactly what we have outsourced to the machine. The confidence we have placed in the decision-making capabilities of computation, which we expect to bring stability to the chaotic approach of the future, instead results in the reiteration of the familiar and stable past. This familiarity with the past—our reference point for what is and can be known—is what computation mass-reproduces. Wiener’s whirlwind, in that sense, always returns. It brings us exactly what we originally gave it. 

 

 

1_3_5_Power and Transposition

 

In the equation of computation, who are we, exactly? When I use the word we, I do so to suggest that the fictionalisation of the unknown future is an activity common to all of mankind, regardless of its heterogeneous extent or variegated impact, and regardless of the advantages it provides to those who impose their fictionalisations upon others. Indeed, despite its internal variation, the acceptance of this fiction is, in my view, inevitable for rich and poor alike. It is for this reason that I want to discredit the hypothesis that every act of fictionalisation is orchestrated for the express purpose of willingly subjecting another. Fictions are inescapable for everyone: for a person to be completely void of fiction—to have a clear, unobstructed view of each and every fact—is beyond our human capacity. That is why we have built computers. 

However, the idea that the effects of fictionalisation occur equally and indiscriminately is clearly false. There are people who benefit from the perpetuation of certain fictions, and there are people who do not. Strategies used to copy the past into the future—fictionalisation, automation, computation—intrinsically honour those who held power in the past, and those who intend to maintain it in the future. By presenting the present as the most neutral and most objective way of living, computation, for one, further normalises existing hierarchies and shields them from critique. The sanctimonious neutrality of computation waltzes over all other forms of normalisation and solidifies itself as universal and timeless. James Bridle writes:

 

We have been conditioned to believe that computers render the world clearer and more efficient, that they reduce complexity and facilitate better solutions to the problems that beset us, and that they expand our agency to address an ever-widening domain of experience. But what if this is not true at all? A close reading of computer history reveals an ever-increasing opacity allied to a concentration of power, and the retreat of that power into ever more narrow domains of experience. By reifying the concerns of the present in unquestionable architectures, computation freezes the problems of the immediate moment into abstract, intractable dilemmas; obsessing over the inherent limitations of a small class of mathematical and material conundrums rather than the broader questions of a truly democratic and egalitarian society.

 

In Bridle’s use here, we denotes a group of people not in control of computational thinking’s development. Computation suspends any grievances this we has about the problems the group faces and interprets them only as issues that have already been solved, employing the repetition of computational strategies in instances where they do not necessarily apply.

In the entanglement of fiction with reality, of approximation with simulation, of average with totality, of subjectivity with neutrality, repetition becomes a mode of life. If everything can be compared, and if similarities are everywhere—if our faith in computational thinking has us, unlike Funes, forgetting differences between things—then everything is, for us, connected through some form of Boolean logic. This holistic approach to computation allows us to copy and paste the past into the future, but it also enables the transposition of any one thing into anything else through averages and rounding errors. It permits translation between vocabularies and systems, and it allows the designers of computational systems to shape both problems and solutions according only to their understanding of patterns. But, as James Bridle recounts, patterns can be discovered and recognised everywhere, even between geological events and crime:

 

The Great Nōbi Earthquake, which was estimated at 8.0 on the Richter scale, occurred in what is now Aichi Prefecture in 1891. A fault line fifty miles long fell eight metres, collapsing thousands of buildings in multiple cities and killing more than 7,000 people. It is still the largest known earthquake on the Japanese archipelago. In its aftermath, the pioneering seismologist Fusakichi Omori described the pattern of aftershocks: a rate of decay that became known as Omori’s law. It is worth noting at this point that Omori’s law and all that derived from it are empirical laws: that is, they fit to existing data after the event, which differ in every case. They are aftershocks – the rumbling echo of something that already occurred. Despite decades of effort by seismologists and statisticians, no similar calculus has been developed for predicting earthquakes from corresponding foreshocks. Omori’s law provides the basis for one contemporary implementation of this calculus, called the epidemic type aftershock sequence (ETAS) model, used today by seismologists to study the cascade of seismic activity following a major earthquake. In 2009, mathematicians at University of California, Los Angeles, reported that patterns of crime across a city followed the same model: the result, they wrote, of the ‘local, contagious spread of crime [that] leads to the formation of crime clusters in space and time … For example, burglars will repeatedly attack clusters of nearby targets because local vulnerabilities are well known to the offenders. A gang shooting may incite waves of retaliatory violence in the local set space (territory) of the rival gang.’ To describe these patterns, they used the geophysical term ‘self-excitation’, the process by which events are triggered and amplified by nearby stresses. The mathematicians even noted the way in which the urban landscape mirrored the layered topology of the earth’s crust, with the risk of crime travelling laterally along a city’s streets. It is ETAS that forms the basis of today’s predictive policing programmes […].
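For reference, the decay Bridle describes is conventionally written, in its modified (Utsu) form, as a power law:

n(t) = K / (c + t)^p

where n(t) is the rate of aftershocks at time t after the mainshock, and K, c, and p are empirical constants, fitted anew after every event, which is precisely why the law echoes earthquakes rather than predicts them.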

 

Computation, here, is what warrants the transposition of earthquakes in Japan onto crime rates in Los Angeles. The abstraction of reality into the simulation strips data of its context. History not only repeats itself, but jumps timelines, landing in different fields and narrative lines. Examples of this transposition are legion, ranging from personal “if you like... then you might like...” algorithms to the use of models of fluid dynamics to simulate the flow of crowds at large events. Once these theories become natural laws, if only in practice, they are dispersed, copied, and transposed without limitation, making their origins irrelevant. 

Computation’s transpositional inclination renders it seemingly universal: it makes no distinction between facts and fictions, or between apples and oranges. There are only similarities, regardless of their origins, to be compared. Our futile quest for the totality of information, once intended as a means to complicate universality and help us navigate whatever true universal laws remain, has fed back into confirmation biases, and reinforced the indiscriminate interchangeability of ideas, myths, and fictions. 

Every normalisation, in the end, can be automated, every automation can then be computed, and then all can be undone. Through this process, it becomes extremely difficult to assess whether we, as those who undergo computation, are therefore influencing the world of computational thinking or are just being influenced by it. Are the normalisations that spring from computation our doing, or are we the products of facsimile brought about by the normalisations of computation?