I’m writing this on the train, taking ‘advantage’ of the fact that I took the wrong one today – this one is taking me to Eastbourne. Luckily I’ve got my mp3 player filled to the brim with great tunes and the notebook to write this down on.
First. Regarding my previous post and the set of experiments I have been thinking about: it is not purely non-environmentally determined behaviour that concerns me in these experiments. Well, it is, but I can be more specific than that:
At one extreme one has reactive systems. A good example of a system at this end of the spectrum is a leaf floating downstream in a river. If we choose the boundaries of our system of interest to be ‘the leaf’, then we can say that its future behaviour is purely determined by external variables, its environment (in this case the water in the river). OK, so not purely, the leaf does have its own small mass and thus inertia, but you get the idea.
At the other end of the spectrum we have purely internally determined systems. Their environment does not determine the future behaviour of these systems; in fact they are not even influenced by it. These are solely self-governing systems. Although autonomous in the traditional use of the term (cf. Maggie Boden), these systems are not of interest because they cannot readily adapt to their changing environments. A good example of such a system is the classical humanoid robot, ASIMO, each step of which is determined by a large amount of internal calculation based on the positions of its limbs and the distribution of its body weight as internally modelled. This, of course, is just as undesirable as our previous leaf system, and for not too different a reason: lack of adaptive power.
We are interested in neither end of the spectrum. In fact, the interesting systems will not even be those whose ‘reactivism/internalism’ stays constant over time, even if it sits at some in-between mix along the spectrum just described. Our hypothesis is that the cognitively rich systems will be those whose level of involvement with the environment (in the case of reactive systems) and/or de-coupling from the environment (in the case of internally-driven systems) itself changes over time to adapt to the changing demands of their survival. A good example of such a system is a group of humans in a raft navigating white waters (btw it’s a lot of fun). Our system of interest here is the humans+raft+paddles. At some points this system is fighting the current, attempting to move left, right or even to stop going forward. At other points the system is letting itself go, letting the current take it where it wants. It is this interaction, letting go and taking back, that is of interest.
And to summarise it all, two figures taken from some slides I made for a Life and Mind seminar a while back:
To come back: all of this can be applied to the example experiment described in the previous post. The environment sometimes drives the system: this shows in that it sometimes orients towards the regions with hotter temperatures. But the system is also internally driven: this shows in that it can change direction even on nearly identical runs – with no environmental difference to drive it.
An interesting result would be to (a) calculate a measure of ‘reactivity’ (i.e. how environmentally driven or internally driven a situated system is) and (b) plot it over time for one of the evolved agents. The hypothesis would be that the line would go up and down over time as the system follows the environment towards the hot regions but sometimes abandons the signal altogether to follow its internal drive to do something different every time. In other words, sometimes internally driven, sometimes environmentally driven, most of the time some combination of both, but most importantly actively changing how much so over time.
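A crude sketch of how such a reactivity measure might be computed (the sliding-window squared correlation, the window length and the variable names are all my own assumptions here, not an established definition):

```python
import numpy as np

def reactivity(env_signal, behaviour, window=50):
    """Sliding-window squared correlation between an environmental
    variable (e.g. local temperature gradient) and a behavioural
    variable (e.g. turning rate). Values near 1 suggest environmentally
    driven behaviour; values near 0 suggest internally driven behaviour."""
    n = len(env_signal)
    r = np.full(n, np.nan)  # undefined before the first full window
    for t in range(window, n):
        e = np.asarray(env_signal[t - window:t])
        b = np.asarray(behaviour[t - window:t])
        if e.std() > 0 and b.std() > 0:
            r[t] = np.corrcoef(e, b)[0, 1] ** 2
    return r
```

Plotting this over an agent’s lifetime would then directly test the hypothesis that the line goes up and down rather than staying flat.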
The last time I mentioned this to Inman he pointed me to the work of Andy Wuensche using cellular automata. I looked at it at some point but didn’t quite get it – I think it may have to do with me not liking CAs that much; their discreteness makes me flinch sometimes.
Second. There are a number of methods to measure how chaotic a system is from a single recording of the system’s behaviour (e.g. time-delay embedding). People studying complex systems have had to develop these for several reasons (e.g. limited data, recordings from the brain, etc.). One question I have is: does it make sense to study my evolved agents (which are simple 3D dynamical systems whose full behaviour I can record) with such tools, even though I could easily do something different and perhaps more direct? The issue that worries me is that they are non-autonomous dynamical systems: embodied, situated and so on. Suggestions welcome.
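For reference, the basic time-delay embedding step looks something like this (a minimal sketch; in practice the dimension and lag would need to be chosen properly, e.g. via false nearest neighbours and the autocorrelation time):

```python
import numpy as np

def delay_embed(x, dim=3, tau=1):
    """Takens-style delay embedding of a scalar time series:
    each row is the vector (x[t], x[t+tau], ..., x[t+(dim-1)*tau]),
    reconstructing a dim-dimensional state from one observable."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])
```

Since I can record the full state of the evolved agents anyway, the open question is whether anything is gained by pretending I only have one observable.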
Third. Ideally I would like to use a CTRNN for this task and not have to introduce alpha-CTRNNs. In the unfortunate case of not being able to obtain small (i.e. ideally 3-, but maybe 4-node) circuits for the task, I will (of course) need to include as fair a comparison as possible between the evolvability of CTRNNs and alpha-CTRNNs. But otherwise there is no reason to introduce alpha-CTRNNs to the world.
This brings up an old worry of mine: am I following in the footsteps of those who start adding bells and whistles to CTRNNs, adding ‘extra’ functionalities to a system that never lacked them? I’d like to think that this is not the case. Although I would rather not introduce yet another CTRNN variation, I feel the motivation for the alpha version has more to do with concentrating the search on regions of ‘rich’ dynamics than with adding any particular behavioural functionality to the system. Despite losing biological fidelity to real neurons’ properties, as Randy pointed out to me, nonmonotonic activation functions may expand the regions of interesting dynamics. I feel this is related to, for example, Randy’s motivation when looking at centre-crossing CTRNNs and, more recently, at RNN regions of parameter space. At least it is closer to that than, say, adding weight-changing rules in the hope of obtaining learning behaviour, or adding connection-growing algorithms in the hope of obtaining developmental agents.
Update: got back to the lab, so last night’s experiments using CTRNNs are in. Although this is all very preliminary, here is a comparison of monotonic [i.e. f = tanh(x)] versus nonmonotonic [i.e. f = alpha*tanh(x) + (1-alpha)*sin(x)] activation functions in CTRNNs for non-environmentally determined behaviours using 3-node circuits:
The solid lines depict the average of the best fitness over 20 evolutionary runs of 500 generations for monotonic (red) and nonmonotonic (blue) activation functions. The triangles at the end show the standard deviation at the end of the evolutionary runs for each experiment, coloured respectively. It is too early to say anything concrete; I will leave a proper discussion for later. I am also running the exact same experiments using the more studied and traditionally used logistic sigmoid.
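For the record, the two activation conditions being compared, and a single Euler step of the CTRNN dynamics, can be sketched roughly like this (parameter names, shapes and the step size are my own conventions here, not a fixed implementation):

```python
import numpy as np

def make_activation(alpha=1.0):
    """alpha=1 recovers the monotonic tanh; alpha<1 blends in sin,
    making the activation nonmonotonic (the 'alpha' idea above)."""
    return lambda x: alpha * np.tanh(x) + (1 - alpha) * np.sin(x)

def ctrnn_step(y, W, tau, theta, I, f, dt=0.01):
    """One Euler step of the standard CTRNN equation
    tau_i dy_i/dt = -y_i + sum_j W_ij f(y_j + theta_j) + I_i."""
    dydt = (-y + W @ f(y + theta) + I) / tau
    return y + dt * dydt
```

The evolutionary search then only has to decide on W, tau, theta and (in the alpha condition) the blend parameter.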
The plan now is to re-do these experiments using the logistic sigmoid, the tanh and the tanh+sin functions for comparison, using 2-, 3- and 4-node CTRNNs, for a little longer (1000 generations) and with the same number of repetitions (20 evolutionary runs each) to start with (I may do 20 more of each if needed), for a total of 180 evolutionary runs. It takes between 100 and 150 hours to run 9 evolutionary runs on one computer, so if I use 20 computers it should be done in about a week. I’ll start them up tomorrow.
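The back-of-the-envelope timing works out as follows (just a sanity check of the numbers above):

```python
# 3 activation functions x 3 circuit sizes x 20 repetitions
total_runs = 3 * 3 * 20                    # 180 evolutionary runs
machines = 20
runs_per_machine = total_runs / machines   # 9 runs per computer
# 9 runs take roughly 100-150 hours on one computer
days_low, days_high = 100 / 24, 150 / 24   # roughly 4 to 6 days
```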
By the way, very nice landscapes along the way: smooth hills lightly covered with snow – beautiful. Will tell Lilia that we should travel this way soon. Also, it is sort of nice that I got to ride here today. Lilia and I are thinking of moving to London soon (towards the summer). I will have to do the commute, for a change. And I’m actually looking forward to the commuting, as I see it as an opportunity to get a lot of work done without as many of the usual distractions (i.e. the internet).
ps. you know you have been in Britain for too long when you are calling security about an abandoned backpack in one of the carriages, only to have to wait for the guy to return from the men’s room a couple of minutes later.