Yuk Hui

Recursivity is not mere mechanical repetition; it is characterized by the looping movement of returning to itself in order to determine itself, while every movement is open to contingency, which in turn determines its singularity. We can imagine a spiral form, in its every circular movement, which determines its becoming partially from the past circular movements, which still extend their effects as ideas and impressions. This image corresponds to the soul. What is called the soul is the capacity of coming back to itself in order to know itself and determine itself. Every time it departs from itself, it actualizes its own reflection in traces, which we call memory. It is this extra in the form of difference that witnesses the movement of time, while at the same time modifying the being that is itself time, so that it consequently constitutes the dynamic of the whole. Every difference is a differing, deferring in time and a being differed in space, a new creation. Every reflective movement leaves a trace like a road mark; every trace presents a questioning, to which the answer can be addressed only by the movement in its totality. This questioning is a test, in the sense that it may fall or it may continue with intensification, like the movement of a curve. What determines the falling or intensifying is the contingent encounter between internal and external ends.

Iteration and Recursivity

Experimental Studio

25 04 2018

-------------------------

Luc Döbereiner, David Pirrò, Hanns Holger Rutz, Daniele Pozzi



One of Almat's formats consists in selecting a text that tackles a topic of common interest and discussing it together with the artists in residence. To continue the conversation on some themes which emerged from a previous reading session, Luc suggested discussing the first chapter of "Recursivity and Contingency" by Yuk Hui (Media Philosophy - Jan 2019)

HHR

I thought this first formulation of WRECK is so brilliant in grasping the problem of recursion. I mean, this thing with the comma, it immediately explodes this problem of thinking about recursion. This infinite mirroring is formulated very concisely there, so to speak.  

Talking about how concepts may become "tangible" or "objectified". 


---------------------------------


HHR

When I think about it, this is always what happens, in a way. Not a reification, but a kind of "becoming thing" or "objectifying", like recursion becomes a thing and, you know, we have this object that we talk about. We take it maybe from our reading, or maybe from the way we implemented it, and maybe it comes back, it pings back to us. And in that way it becomes actual as the principle itself, and that I find kind of interesting. When we speak about recursivity, that -ity says something: there is an object, or the idea of a concept, basically. So, we say this is recursion. We define it and so on, and by that it influences the way, when we go back and work with the software, we make pieces and so on, this thing becomes actual somehow. Because it keeps coming back as something that exists. I thought that's a little bit interesting also in terms of these accidental properties. Somehow these are all accidental, because our imagination of what recursivity is is always underdefined of course, but still for us it is very real. 

 

 

In ALMAT, what interests me is how that changes what we are doing. We all have some conceptions and we try them out, we experiment, but the most difficult thing is to understand how this movement happens over time, because we are in time. So we can try to look back at our own sketches and notations from the previous iterations, and try to reimagine how we were there. But it's very difficult somehow. I was thinking that, for example, one month ago we weren't talking about "recursivity". We didn't use the term "recursivity" but maybe only recursion. And so something enters, like recursivity, and then it's a little bit like a magnet, to which you attach the different things you associate with it. You pick up some stuff, let's say this text, but it could be any text. He of course makes a lot of connections and they all come back. He's talking about the cyberneticists and Wiener and Spencer Brown and you say "Ah interesting, that's true. This has to do with this". And that's in a way what he calls intensification. The idea that something, in a way, is a container, but it gets saturated. And we carry it around, somehow. And then when we go back it's there. The difficulty is in qualifying how that influences what we are doing in the moment. But we know that it's influencing what we are doing. So if I were to go back and do a piece that has everything to do with recursion, this stuff would be there, somehow. So it's impossible to take it out of the equation. And for me that's the interesting point about what agency maybe is, or about how agency is constructed. Our discussion is condensed and goes in there. This is one of the interesting aspects of the whole project, in a way. That's maybe a possibility to discuss how these concepts become things that are not just ideational, but somehow become very concrete in the way we work. 
That's one possibility of this mattering, when you take Barad's idea of mattering: to matter in English means "to make sense, to mean something". So it becomes meaningful, somehow. But it is difficult to observe the process of becoming meaningful.

 DP

Recursivity needs a stopping condition, so there is already a halting point in it, while iterativity does not really have one. But of course a recursive algorithm can be understood as iterative. The two sets intersect somehow, they are overlapping. 

ITERATION

  • reducible to a sequence of steps (or states)
  • does not need a halting condition

RECURSION

  • not reducible to a sequence of steps
  • needs a halting condition
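The distinction in the two lists above can be sketched in code. This is a hypothetical toy example (factorial, not taken from the discussion): the iterative form is a plain sequence of state-updating steps, while the recursive form carries its halting condition inside itself as a base case.

```python
def factorial_iterative(n):
    # Reducible to a sequence of steps: the state (acc) is updated step by step.
    # Nothing in the loop form itself demands that it ever halt.
    acc = 1
    for i in range(2, n + 1):
        acc *= i
    return acc

def factorial_recursive(n):
    # The base case n <= 1 is the halting condition: without it the
    # self-reference would never bottom out.
    if n <= 1:
        return 1
    return n * factorial_recursive(n - 1)

print(factorial_iterative(5))  # 120
print(factorial_recursive(5))  # 120
```

That the two forms compute the same result is one way to read "a recursive algorithm can be understood as iterative": the sets overlap, but the halting condition is structural only in the recursive form.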

Recursivity and time


---------------------------------


HHR

I thought what is interesting about his definition (Hui - "Recursivity and Contingency") is when he says that "sublation of being and becoming" is the core of recursivity. I thought that was kind of an interesting idea to understand it. Because it has the time in itself. So recursion, in the sense of coming back to itself, he defines it at some point as memory. So there's "the extra in the form of difference that witnesses the movement in time, at the same time modifying the being that is itself time. So that it consequently constitutes the dynamics of the whole". So I thought this kind of sentence was very beautifully trying to describe how the identity function is related to the memory of the thing somehow. But without the time it wouldn't exist. So, I would say that recursivity, at least from this kind of reading, it requires this becoming, so it requires a process of some form. If you call it time, or whatever you call it, some form of unwinding. He says that "the infinite inscribes itself into the finite". Of course you can write it as a finite form, but you can only experience it in its writing process, which is in this idea of inscribing. And I wouldn't agree that you have to stop at some point, I don't think that's a necessary thing. I think a necessary thing is that it has a mechanism of decision making. 

 

56m40

LD

But finality is not only that something ends, it's a telos. It can also be going towards something that it would never reach, but it's still a telos. Finality is a directedness, towards an aim.

 

HHR

Yeah, but in the aim is inscribed the idea of completing something. 

 

LD

Yeah, ideally yes, that's why it's finality. But I don't think it has to complete, to stop. I mean, for something to have finality doesn't mean that it's terminating, or however you call it.

 

DP

But there's another thing, correct me if I'm wrong. He says in the text that recursion needs the stopping condition basically, because it needs to be finite. 

 

LD

Yeah, he oscillates there, this is definitely one aspect. But when he speaks about what Hanns Holger just spoke about, which I think resonates much more with this Schelling-Fichte-Hegel thing, I think there is also a difference in the time. Because it relates more to a kind of phenomenologically experienced time, right? When you come back to yourself. And I think that, when you do something, somehow you don't really experience time, because you experience the thing you do. So, the moment in which you kind of come back to yourself, in this German-idealistic sense, you kind of experience the time that elapsed. Because you can reflect on what you did. In a way, you only experience time once you go back to something. Do you agree?

 

DP

I think, as you hinted at, there are different kinds of recursivity that he's talking about. One is the one in the analog world, so to say, one is in the digital, and I think they are very different. 

 

LD

Yeah, and the one in thought or thinking.

 

DP

Yes. They share some commonalities but they are different in some way.

LD

If we go back to this temporal theme, I think they are different concepts of recursion. There can be these totally timeless recursions, like in functional programming languages; there it's always timeless. 

 

DP

Exactly, that's a very classical example of recursion. There's no time in there. 

 

HHR

Because it's terminating basically.

 

LD

Because it's terminating and also because there's no state. So it's just structure somehow.

 

HHR

The time is in the return structure. You build up, you process a list and you return a new list. The processing is embedded in there, in that you get a new list. It's not that you get nothing, you get another list. The time is invested in the sorting afterwards. So the time is not annihilated, it's in the form; it's just not visible in the form.
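A minimal sketch of this point (my own hypothetical example, in Python rather than a purely functional language): the recursion below mentions no time and no mutable state, yet the "steps" of the computation are invested in the new list that comes back.

```python
def doubled(xs):
    # No mutable state, no time variable: the function only relates
    # an input list to an output list.
    if not xs:                        # halting condition: the empty list
        return []
    head, *tail = xs
    # The processing is not visible as a sequence of steps; it is
    # folded into the structure of the returned list.
    return [head * 2] + doubled(tail)

print(doubled([1, 2, 3]))  # [2, 4, 6]
```

The form is timeless, but you still get another list, not nothing: the time is "in the form", only not visible in it.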

 

DP

Yeah, but that's what the form is telling. I mean, the form is how you think the things.

 

LD

Yes, how you think the things. You don't have to think about time when you write in Haskell. Unless you do input output, and then you get the laziness. 

 

HHR

Yes. But any recursive function needs a decision property, in a functional programming language. And somehow when you write it, when you conceptualise it, you are performing the action, I think. At least, if I have to formulate some bubble sort or whatever, let's say Fibonacci, the typical example, in a functional way you think: "We take this and we do this. And then we repeat". Somehow I think the time is there. Or space, I think it's easier for us to talk about space. If we think spatially, we have our set. We think of a set, or a list, we have it there. They are the elements, and afterwards they are sorted. And we read from left to right, or whatever you want. And then the operation of sorting it is like we would do it on a table, almost embodied. Like split the colours, and take them apart, and then you do this. And then you write it in the two cases somehow. So, even if there is no time variable, or space variable, I think it's somehow in the form that you make this kind of branching and decisions in the recursive formula. I don't think it's kind of gone. 
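The "split the colours and take them apart" image can be sketched as a partition-based recursive sort. This is a hypothetical illustration, not code from the project: each call performs exactly the branching decision described above, and the two cases are written out explicitly.

```python
def sort(xs):
    # The decision property: either we are done (base case), or we
    # split the elements around a pivot, like sorting piles on a table,
    # and repeat on each pile.
    if len(xs) <= 1:
        return xs
    pivot, *rest = xs
    smaller = [x for x in rest if x < pivot]   # one pile
    larger  = [x for x in rest if x >= pivot]  # the other pile
    return sort(smaller) + [pivot] + sort(larger)

print(sort([3, 1, 4, 1, 5]))  # [1, 1, 3, 4, 5]
```

There is no time or space variable anywhere, yet the branching and the "afterwards they are sorted" are both present in the form of the formula.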

 

POZ

Yeah, you can imagine all these steps in between, right?

 

HHR

I think that, if you read a computer science paper, that's how inductive proofs work. You apply the sequentiality somehow, at some point. Whereas logic programming is different. In the recursive formula this annihilation of space and time is not true, I think. Whereas, when you just state the relations in a logical system, and then the solver does the thing, it's much more eliminated, in a way. Because there's no sequence, no clear sequence. 

 

1h08m

LD

But if you write a little bit larger systems, I mean, there it makes a big difference whether it's purely functional or whether it's a more procedural thinking. Or also object-oriented thinking. In my experience, if it's the imperative way of writing, I have to think about it as some kind of world that you turn on, and then stuff happens, etc. And with functional, even with large systems, I never think of it like this. Because you just plug it together and run it.

 

HHR

This turn towards the thought of larger systems is also very important and interesting, because we can always think about these small examples, and they are very easy to discuss. But the larger systems confront us with the limits of how we can conceptualise the whole thing. This building of large systems, which we all do somehow, or we take code from previous projects and rebuild it for another purpose. The system is always more complex than just something of which we can say "this is the concept of the program". Even if we want to describe it as the concept of the program, it always escapes that, because it has all the history attached to it. It has all the energies put into it from various other things. Like a sponge, taking up all this contingency. It is embedded in the program, somehow, the accidentals.