If we go back to this temporal theme, I think there are different concepts of recursion. There can be these totally timeless recursions; in functional programming languages, for example, it's always timeless.
Exactly, that's a very classical example of recursion. There's no time in there.
Because it's terminating, basically.
Because it's terminating and also because there's no state. So it's just a structure somehow.
The time is in the return structure. You build up: you process a list and you return a new list. The processing is embedded in the fact that you get a new list; it's not that you get nothing, you get another list. The time is invested in the sorted result. So the time is not annihilated, it's in the form; it's just not visible as time.
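As a minimal sketch of the kind of thing being described, in Haskell (insertion sort is assumed here as the concrete example; the conversation doesn't name one): there is no time variable and no state, and the work of sorting shows up only in the shape of the list that comes back.

```haskell
-- Insertion sort: no mutable state, no clock. The effort of sorting
-- is "invested" in the returned list; you put in a list, you get
-- another list, and the processing is embedded in that new structure.
insert :: Ord a => a -> [a] -> [a]
insert x [] = [x]
insert x (y:ys)
  | x <= y    = x : y : ys
  | otherwise = y : insert x ys

isort :: Ord a => [a] -> [a]
isort = foldr insert []
```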
Yeah, but that's what the form is telling you. I mean, the form is how you think the thing.
Yes, how you think things through. You don't have to think about time when you write in Haskell. Unless you do input and output; then the laziness comes into play.
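A small sketch of the contrast, assuming plain Haskell with GHC: the pure part carries no evaluation order you have to think about, even for an infinite structure, while the IO part reintroduces an explicit before-and-after.

```haskell
-- Pure part: laziness means even an infinite list has no "when";
-- only the demanded prefix is ever computed.
naturals :: [Integer]
naturals = [0 ..]

-- IO part: here a sequence reappears; "first" really comes before "second".
main :: IO ()
main = do
  putStrLn "first"
  print (take 5 naturals)  -- forces just five elements of the infinite list
  putStrLn "second"
```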
Yes. But any recursive function needs a decision point, in a functional programming language. And somehow when you write it, when you conceptualize it, you are performing the action, I think. At least, if I have to formulate some bubble sort or whatever, let's say Fibonacci, the typical example, in a functional way you think: "We take this and we do this. And then we repeat." Somehow I think the time is there. Or space; I think it's easier for us to talk about space. If we think spatially, we have our set. We think of a set, or a list, and we have it there. There are the elements, and afterwards they are sorted. And we read from left to right, or however you want. And then the operation of sorting it is like we would do it on a table, almost embodied: split the colors, take them apart, and then you do this. And then you write it as the two cases somehow. So, even if there is no time variable, or space variable, I think it's somehow in the form that you make this kind of branching and these decisions in the recursive formula. I don't think it's gone.
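To put the typical example on the page, a sketch in Haskell: Fibonacci written as its cases, and a sort that makes the "split the colors and take them apart" gesture literal (a quicksort-style partition is assumed here; the speaker mentions bubble sort only in passing).

```haskell
-- Fibonacci, the typical example: the decision between the base
-- cases and the recursive case is right there in the form.
fib :: Integer -> Integer
fib 0 = 0
fib 1 = 1
fib n = fib (n - 1) + fib (n - 2)

-- A sort written as "split and take apart": partition around a pivot,
-- handle the two parts, put them back together.
qsort :: Ord a => [a] -> [a]
qsort []     = []
qsort (p:xs) = qsort smaller ++ [p] ++ qsort larger
  where
    smaller = [x | x <- xs, x <= p]
    larger  = [x | x <- xs, x >  p]
```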
Yeah, you can imagine all these steps in between, right?
I think that, if you read a computer science paper, that's how inductive proofs work: you apply the sequentiality somehow, at some point. Whereas logic is different. In the recursive formula this annihilation of space and time doesn't hold, I think. Whereas when you just state the relations in a logical system, and then the solver does the work, it's much more eliminated, in a way, because there's no sequence, no clear sequence.
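A hedged sketch of the contrast, staying in Haskell as elsewhere in the conversation (the relational reading is an assumption about what is meant, and the names below are illustrative): the recursive formula is a recipe with an implicit sequence of unfoldings, whereas a purely relational statement only says what the answer must satisfy and leaves the search to the solver.

```haskell
import Data.List (permutations)

-- Relational specification: *what* a sorted result is, with no recipe.
-- A solver-style system would be handed only this relation and would
-- do the searching itself; there is no sequence of steps in it.
isSorted :: Ord a => [a] -> Bool
isSorted xs = and (zipWith (<=) xs (drop 1 xs))

sortsTo :: Ord a => [a] -> [a] -> Bool
sortsTo xs ys = isSorted ys && ys `elem` permutations xs
```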
But if you write somewhat larger systems, I mean, there it makes a big difference whether it's purely functional or a more procedural way of thinking. Or object-oriented thinking. In my experience, with the imperative way of writing, I have to think about it as some kind of world that you turn on, and then stuff happens, and so on. And with functional programming, even with large systems, I never think of it like this, because you just plug it together and run it.
This turn towards larger systems is also very important and interesting, because we can always think about these small examples, and they are very easy to discuss. But the larger systems confront us with the limits of how we can conceptualize the whole thing. This building of large systems is something we all do somehow, or we take code from previous projects and rebuild it for another purpose. The system is always more complex than something we can simply call "the concept of the program". Even if we want to describe it as the concept of the program, it always escapes that, because it has all the history attached to it. It has all the energies put into it from various other things. Like a sponge, soaking up all this contingency. The accidentals are embedded in the program, somehow.