Catching up

Getting around to writing this post has taken even longer than usual. I will attempt a brief catch-up: what have I done so far this year? In terms of writing, first.


First, I obtained a job as a Research Fellow at the University of Birmingham at the very start of the year. Chrisantha suggested the position. We are part of a larger project on “Evolving Cell Signaling Networks in Silico”. At Birmingham, I interact mostly with Jon Rowe and Dov Stekel.

Out of that job came a collaboration with Chrisantha on “The evolution of evolvability in gene transcription networks”, accepted as a full paper at Alife XI.

GRN Evolvability

We present a case of a genotype-phenotype map that, when evolved in variable environments, optimizes its genetic representation to structure phenotypic variability properties, allowing rapid adaptation to novel environments. How genetic representations evolve is a relatively neglected topic in evolutionary theory. Furthermore, the “black art” of genetic algorithms depends on the practitioner choosing a representation that captures problem structure. Nature has achieved remarkably efficient heuristic search mechanisms without top-down design. We propose that an important example of this, ubiquitous in biology, is the structuring of the phenotypic variability properties of gene networks. By studying a simple model of gene networks in which topology is a function of interactions between transcription factor proteins and transcription factor binding sites (TFBS), we show that transcription factor binding matrices (TFBM) evolve to positively constrain phenotypic variability in response to transcription factor binding sequence mutations.
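The central mechanism here — network topology emerging from interactions between TF sequences and binding sites, scored by a binding matrix — can be sketched in a few lines. This is an illustrative toy, not the paper's actual model: the alphabet size, sequence length, scoring rule, and threshold below are all invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

ALPHABET = 4   # nucleotide-like symbols
SEQ_LEN = 8    # length of binding sequences
N_GENES = 5

# Each gene carries a transcription factor (TF) sequence and a binding
# site (TFBS) sequence, both strings over the symbol alphabet.
tf_seqs = rng.integers(0, ALPHABET, size=(N_GENES, SEQ_LEN))
tfbs_seqs = rng.integers(0, ALPHABET, size=(N_GENES, SEQ_LEN))

# The binding matrix scores each (TF symbol, TFBS symbol) pair; evolving
# this matrix reshapes how sequence mutations translate into topology
# changes -- i.e., it structures phenotypic variability.
binding_matrix = rng.normal(size=(ALPHABET, ALPHABET))

def interaction_strength(tf, tfbs):
    """Total binding score of a TF sequence against a binding site."""
    return sum(binding_matrix[a, b] for a, b in zip(tf, tfbs))

# Network topology: gene j regulates gene i when binding exceeds a threshold.
THRESHOLD = 2.0
adjacency = np.array([[interaction_strength(tf_seqs[j], tfbs_seqs[i]) > THRESHOLD
                       for j in range(N_GENES)] for i in range(N_GENES)])
```

A point mutation to a TFBS sequence changes a row of `adjacency` only through the binding matrix, which is why evolving the matrix can bias which topologies are reachable by mutation.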

Second, I started meeting with Thomas (an ex-CCNR member, now working for Natural Motion in Oxford), mostly on Saturdays in London, to discuss our ideas on situated, embodied, and dynamical agents research. Out of these meetings came a set of experiments on the “Analysis of a dynamical recurrent neural network evolved for two qualitatively different tasks: Walking and Chemotaxis”, which was also accepted as a full paper at Alife XI.

Living organisms perform a broad range of different behaviours during their lifetime.

Neural architecture and body morphologies

It is important that these be coordinated so as to perform the appropriate one at the right time. This paper extends previous work on evolving dynamical recurrent neural networks by synthesizing a single circuit that performs two qualitatively different behaviours: orientation to sensory stimuli and legged locomotion. We demonstrate that small, fully interconnected networks can solve these two tasks without being provided a priori structural modules, explicit neural learning mechanisms, or an external signal for when to switch between them. Dynamical systems analysis of the best-adapted circuit explains the agent’s ability to switch between the two behaviours in terms of the interactions between the circuit’s neural dynamics, its body, and its environment.

Third, after several rounds of acceptance with revisions, the work I carried out during my research visit to Randy’s lab was finally fully accepted (just recently). I’m, of course, delighted to see this come through.

This paper extends previous work on evolving learning without synaptic plasticity from discrete tasks to continuous tasks.

Transitions between each of the states superimposed over the strobed states

Continuous-time recurrent neural networks without synaptic plasticity are artificially evolved on an associative learning task. The task consists of associating paired stimuli: temperature and food. The temperature to be associated can be either drawn from a discrete set or range over a continuum of values. We address two questions: can the learning-without-synaptic-plasticity approach be extended to continuous tasks? And if so, how does learning without synaptic plasticity work in the evolved circuits? Analysis of the circuits most successful at learning discrete stimuli reveals finite state machine (FSM)-like internal dynamics. However, when the task is modified to require learning stimuli over the full continuous range, it is not possible to extract an FSM from the internal dynamics. In this case, a continuous state machine is extracted instead.
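The idea of extracting an FSM from a continuous trajectory can be sketched roughly as follows: strobe the state at fixed times, cluster the strobed states into discrete labels, and record which label follows which under each input. The strobing, clustering rule, and toy trajectory below are all inventions for illustration, not the analysis procedure used in the paper.

```python
import numpy as np

def strobe(trajectory, period):
    """Sample the continuous state trajectory once per stimulus period."""
    return trajectory[::period]

def cluster_states(states, radius=0.5):
    """Greedy clustering: a strobed state joins the first cluster center
    within `radius`, otherwise it founds a new cluster. Cluster labels
    play the role of discrete FSM states."""
    centers, labels = [], []
    for s in states:
        for k, c in enumerate(centers):
            if np.linalg.norm(s - c) < radius:
                labels.append(k)
                break
        else:
            centers.append(s)
            labels.append(len(centers) - 1)
    return labels, centers

def extract_transitions(labels, inputs):
    """Record (state, input) -> next-state transitions between strobe times."""
    table = {}
    for t in range(len(labels) - 1):
        table[(labels[t], inputs[t])] = labels[t + 1]
    return table

# Toy trajectory: a 2D state that flips between two regions with the input.
traj = np.array([[0.0, 0.0], [1.0, 1.0]] * 5)
labels, centers = cluster_states(strobe(traj, 1))
fsm = extract_transitions(labels, inputs=[0, 1] * 5)
```

When the stimuli range over a continuum, the strobed states no longer fall into a finite set of clusters, which is what forces the move from an FSM to a continuous state machine description.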

Last, and certainly not least: my doctoral thesis. I just recently finished it. I took my time with it, for sure. But I’m happy with the result.

The aim of this thesis is to better understand how learning behaviour can be produced from a situated, embodied, and dynamical agent. To this end, we employ evolutionary techniques to synthesize dynamical system neural controllers on tasks that require learning behaviour. We vary the experimental conditions on several dimensions. First, the stimulus to be remembered is in some tasks discrete and in other tasks continuous. Second, the level of embodiment and situatedness of the model agent varies from none, to minimal, to fully embedded. The scope of the tasks is also varied. We study Hebbian learning, associative learning, object discrimination, coping with visual inversion, imprinting, and coping with changes to body morphology.

Perspectives on learning

No learning algorithm is provided to the internal dynamics of the agent. Evolution has to ‘come up’ with the mechanisms that produce the learning behaviour on its own, starting from continuous-time recurrent neural-like components as its building blocks. We succeed in artificially evolving networks without synaptic plasticity on all of the tasks we set out to study. For each task, we go into some depth trying to understand, using dynamical systems theory, how the learning behaviour is produced by the most successful networks.

The work in this thesis demonstrates the ability of small continuous-time recurrent neural networks to perform learning behaviour under a series of different conditions. All previous work on evolving agents that learn without synaptic plasticity has focused on tasks where the agent is required to act differently in a discrete number of distinct environments, in practice two. The result was agents that swapped between two modes of interaction. We extend the approach to acting differently across a continuum of distinct environments. Previous work has also focused exclusively on the role of the agent’s internal dynamics in learning behaviour. By analysing networks evolved in abstract tasks as well as in more ecological versions of the same tasks, we show how plasticity can switch from being generated purely by the internal dynamics to arising from the full brain-body-environment interaction.

Now I’m catching up with a lot of other things that I have had to postpone because of the desire to finish the thesis. I hope to write about these things in the coming weeks.
