I had the pleasure of meeting Ollie Bown and Alice Eldridge yesterday. They are working on a project on biologically inspired interactive music systems, associated with the Live Algorithms for Music group at Goldsmiths. They are doing a bunch of interesting stuff, but here is what I liked most: they have some cool software built on Max/MSP (which you can ask them about), and they use CTRNNs (among other things) to make music. Ok, not just to make music, but as a system that interacts with musicians.

You may be asking: so what is the difficulty in this? Getting a dynamical system to be ‘interesting’, that’s the hard part. Not too boring, but not too chaotic either. A lot of people have talked about this in various ways, and discussing those ways properly would probably be useful. Off the top of my head: Stuart Kauffman with his work on complexity and order, Takashi Ikegami with his search for a balance between homeostasis and chaos, and a number of people at Sussex (Ezequiel in particular) in the search for ‘autonomous’ agents: systems that can perform a task but be ‘idiosyncratic’ in the way they go about it. I would dare to say that this is perhaps the most interesting problem in artificial life / adaptive behaviour, because it relates to synthesising systems that can at least appear to have ‘a mind of their own’ and therefore appear to be ‘creative’ in some way or another.

This is obviously a really big topic, and it has become particularly relevant lately in our discussions in the life and mind reading group and in the CCNR in general. This post is about something that I think has the potential to help in that search. Before I continue, however, I must be careful to acknowledge that most of the ideas here come from what Ollie showed me yesterday. I will just be exploring a few aspects of them in slightly more depth, in the hope of getting some art/science collaboration going.

As you know, CTRNNs are the coolest dynamical systems out there 🙂 so it makes sense to start there. More seriously, there are a bunch of reasons to use CTRNNs, but I’m not going to go into them here. If you are interested in nonlinear continuous-time dynamical systems and are looking for an interesting model, then go to Randall Beer’s site and have a look at any of his papers.

Assuming you know about CTRNNs, you know about their activation/transfer functions. This is usually a sigmoidy-looking one. In his recent paper ‘CTRNN Parameter Space’, Randy talks about a more general class of activation functions, parameterized by (alpha, beta, mu) as f(x) = alpha * sigma(mu * x) + beta, where sigma is the standard logistic function.

The class contains several activation functions, including the one commonly used in the context of CTRNNs, (alpha, beta, mu) = (1, 0, 1), and the hyperbolic tangent, (2, -1, 2).
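To make the parameterization concrete, here is a minimal sketch (this is my reading of the family, reconstructed from the two instances above; check the paper itself for the exact form):

```python
import math

def logistic(x):
    """Standard logistic sigmoid."""
    return 1.0 / (1.0 + math.exp(-x))

def f(x, alpha, beta, mu):
    """Parameterized family of activation functions:
    f(x) = alpha * logistic(mu * x) + beta."""
    return alpha * logistic(mu * x) + beta

# (alpha, beta, mu) = (1, 0, 1) recovers the standard logistic,
# and (2, -1, 2) recovers tanh, since tanh(x) = 2*logistic(2*x) - 1.
```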

Also in that paper he talks about *Rnm* regions of parameter space. These regions correspond to sets of parameters that, when instantiated into the system, generate dynamics in which *n* out of the *m* neurons are active (non-saturated). This allows a very important calculation: if you know the range of values from which you are picking (say at random) the parameters for your new dynamical system, then you can calculate the proportion of the volume of that space corresponding to the case where, for example, all of the nodes are active. Calculating this is not easy, but in that paper he gives us some insightful estimates. For what I want to say here, all that matters is that the more ‘interesting’ (i.e. *Rnn*) regions of dynamics are rather small (between 0.01% and 3%), and they get smaller as the parameter range increases or as the number of nodes increases.
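Beer gets at these proportions with careful estimation; just to give a feel for what the calculation is about, here is a crude Monte Carlo probe. The saturation threshold (0.05), the integration length, and the single initial condition are all my own assumptions, so don’t read the number it prints as an estimate of his figures:

```python
import numpy as np

rng = np.random.default_rng(0)

def fraction_all_active(N=3, trials=200, eps=0.05, T=100.0, dt=0.1):
    """Fraction of randomly parameterized N-node CTRNNs whose neurons
    all end up non-saturated (output inside [eps, 1-eps]) after settling."""
    steps = int(T / dt)
    hits = 0
    for _ in range(trials):
        W = rng.uniform(-10, 10, (N, N))       # weights
        theta = rng.uniform(-10, 10, N)        # biases
        tau = np.exp(rng.uniform(0, 5, N))     # time constants in [e^0, e^5]
        y = np.zeros(N)
        for _ in range(steps):                 # forward Euler integration
            out = 1.0 / (1.0 + np.exp(-(y + theta)))
            y += dt * (-y + W @ out) / tau
        out = 1.0 / (1.0 + np.exp(-(y + theta)))
        if np.all((out > eps) & (out < 1.0 - eps)):
            hits += 1
    return hits / trials

print(fraction_all_active())
```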

Knowing this is, I think, crucial if you are interested in evolving CTRNNs. One could (and certainly should) perform a number of experiments and tests to see how to make the parameter ranges and other settings appropriate for interesting behaviours to arise with ease. I think there is a whole PhD’s worth of work (or even a couple) there. But that is not what I will talk about at this point.

Back to the story (it should become clearer later why what I just mentioned is important). As soon as we met, they started showing me their dynamical systems in action. Some artificially evolved (only slightly), others purely random. I was surprised. The behavior was very interesting, both with external perturbations (e.g. beats, tunes) and without. After seeing three or four of them I asked to see the code. I was puzzled. It turns out they are using a number of variations on the transfer function, among other things, and that is what we mostly focused on.

All of you who have picked CTRNN parameters at random know that it is rather common for them to reach equilibrium not long (i.e. more or less as long as the largest time-constant) after the rather trivial transients have passed (you will see what I mean by trivial transients below). What follows are examples of the activations of 10-node CTRNNs integrated for 100 units of time, with parameters chosen from: weights [-10,10], biases [-10,10], time-constants [e^0, e^5], using different transfer functions.
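For reference, here is the kind of setup I am describing, sketched in Python/NumPy. The forward Euler step size and the choice to record the neuron outputs (rather than the raw states) are my own; this is a sketch of the standard CTRNN equations, not their Max/MSP code:

```python
import numpy as np

rng = np.random.default_rng(42)

N, T, dt = 10, 100.0, 0.01
W = rng.uniform(-10, 10, (N, N))        # weights in [-10, 10]
theta = rng.uniform(-10, 10, N)         # biases in [-10, 10]
tau = np.exp(rng.uniform(0, 5, N))      # time constants in [e^0, e^5]

def logistic(x):
    return 1.0 / (1.0 + np.exp(-x))

def run(transfer):
    """Integrate tau_i * dy_i/dt = -y_i + sum_j W_ij * transfer(y_j + theta_j)
    with forward Euler, returning the trace of neuron outputs over time."""
    steps = int(T / dt)
    y = np.zeros(N)
    trace = np.empty((steps, N))
    for t in range(steps):
        out = transfer(y + theta)
        y += dt * (-y + W @ out) / tau
        trace[t] = out
    return trace

trace = run(logistic)    # swap in np.tanh for the hyperbolic tangent variant
```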

First, using the common logistic function:

This is the type of behavior that is common to see (I’ll show the first 4 random ones that I obtained):

The system reaches its rather simple equilibrium point not long after initialisation. The transients are short and uncomplicated. We could visualise this as a ball rolling down into a valley.

Here is the type of behavior you get if you change from the logistic to the hyperbolic tangent activation function (which is almost the same as the logistic but goes from -1 to 1 instead of from 0 to 1):

Perhaps a little more interesting? Or perhaps just a tiny bit luckier? In any case, it is not a big difference, particularly with the absurdly small sample that I am showing here. But I reckon it is worth looking further into the parameter space of CTRNNs with hyperbolic tangents, particularly in relation to the already studied parameter space of CTRNNs with the logistic function.

That’s not all however. They went on to show me their sine + tanh transfer function.

I’m not going to go into full detail about this, but there are two flavours to keep in mind. In one, every node in the system has the same transfer function; in the other, each node has its own transfer function, morphing between the sinusoid and the hyperbolic tangent.

So what types of behavior do you get from 10-node CTRNNs with all nodes using the middle-grey transfer function, halfway between sin and tanh? Keep in mind that, except for the transfer function, all of the parameters are chosen from exactly the same ranges as before.

What is this? I’m blown away by it (and if you have played with CTRNNs then you should be too). Is this some really complicated transient towards equilibrium? Is it a chaotic attractor?

Again a crazy transient, but this time it does reach a very traditional-looking attractor.

I’m not kidding! These are the first 4 that came out at random. You can now begin to guess why I became suspicious about the CTRNN dynamics I was hearing when they played their CTRNN music.

Now, the interesting bit is that I think this changes (quite dramatically) with the mix of sin and tanh. Let’s call the mix the A parameter, where F = A*tanh + (1-A)*sin; thus A=1 is the plain tanh, A=0 is the sine, and A=0.5 is what we just saw. As soon as we get closer to the tanh (A=0.75), the system begins to look more ‘classical’.
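In code the mix is just a convex combination, applied elementwise (this is my reading of what they showed me):

```python
import numpy as np

def mixed(x, A):
    """Transfer function morphing from sine (A=0) to tanh (A=1)."""
    return A * np.tanh(x) + (1.0 - A) * np.sin(x)

# At A=0.5 (the 'middle grey' function), note that unlike tanh the mix is
# non-monotonic and keeps oscillating for large |x|: once tanh saturates,
# the residual sine term still wiggles, so neurons never truly saturate.
```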


This is the type of stuff that really excites me: some really chaotic behavior which, after a while, ended up switching into a completely different pattern. Really exciting stuff! And keep in mind that this is merely random.

In any case, as I hinted before, yes, they had one more card hidden: each node in the system can choose its own A, that is, its own mix of transfer functions. (In fact, this is simply the way they had it; I generated the in-between step, in which all of the nodes have the same, but mixed, activation function, to ease the transition.) Here I’ll show you four behaviors where the A for each node is also chosen at random from [0,1].
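With NumPy broadcasting, giving each node its own mix is a one-line change (a sketch, assuming the same convex combination as before):

```python
import numpy as np

rng = np.random.default_rng(7)
N = 10
A = rng.uniform(0.0, 1.0, N)     # one mixing parameter per node, in [0, 1]

def per_node_mixed(x):
    """Elementwise mix: node i uses A[i]*tanh + (1 - A[i])*sin."""
    return A * np.tanh(x) + (1.0 - A) * np.sin(x)
```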

You get the rough idea. To put it in terms of the *Rnn* regions of parameter space that I mentioned at the start, and the likelihood (or unlikelihood) of stumbling by chance on some ‘interesting’ dynamics: it seems that these other activation functions leave the space pregnant with ‘interestingness’.

Now, I am very much aware that this is the furthest thing from a serious parameter space study of the dynamics. In other words: caution to the CTRNN evolvers out there, this is only very, very preliminary stuff. Do not attempt this at home unless you are willing to study the parameter space in more detail. Those of you who know me will know that I am not a fan of adding bits and pieces to CTRNNs (e.g. multiplicative weights, Hebbian learning, homeostatic adaptation mechanisms), particularly because people then end up claiming that it is because of those extra bits that their system evolved, or even worse, that the extra bits are ‘causing’ some functionality at the behavioural level. This is nothing like that. From the examples that I have seen, there is a pressing need to study this further. That shall follow. Using some of these activation functions to evolve common minimally cognitive behaviours would also be appropriate/interesting.

Footnote: Although I refer to ‘they’, Ollie is the main person behind what I have been talking about. He wanted me to make clear that he has not been doing any dedicated research into CTRNNs and that he is really not at all familiar with the literature. His PhD research is focused on something different, and you can read more about it on his website. He said to me, “*I have just been exploring these aspects for my own purposes in the musical domain*”. Classic artist-informs-scientist cliché.