This is a bit of a long post (it took me the better part of today). It serves as my initial train of thought on something that I would like to refine, extend, and eventually publish more seriously. So if you find the post interesting, or think you can help me shape it into that, don’t hesitate to contact me.
Getting around to writing this post has taken even longer than usual. I will attempt to catch up briefly. What have I done so far this year? Writing first.
For the last couple of months I have been spending most of my time working on my thesis. It has actually been very enjoyable, for the most part, although some days have been somewhat painful. The quick update is that it is going well and that I still hope to submit very soon. As expected, there are many experiments, tasks, and analyses that I would like to expand on, but I’m going to have to leave them for later. As Inman keeps reminding me, there is no such thing as a finished thesis…
It is so sad that I have been too busy working to update my blog. Sad because keeping the notebook updated helps me stay focused on the bigger picture. Here’s a brief update on what I have been up to.
I have been mostly working on what I think will be the 5th and last experimental chapter of my thesis, a subset of which I am planning to submit to the upcoming ECAL conference, whose deadline is a couple of weeks away (9th of April). The work is on the evolution, and more importantly for me the analysis, of an agent that can learn to associate food with temperature in a 2D environment during its lifetime, while overcoming the difficulties encountered by previous work (Yamauchi, Beer, Tuci, Blynel, Floreano, Phattanarasri, and Chrisantha). I’m particularly interested in attempting to understand the agent’s learning behavior in the language of animal learning theory (is the agent doing classical or instrumental conditioning?), as well as the ‘memory trace’ in the internal dynamics.
Fortunately, I have managed to successfully evolve small circuits for the task (much smaller than all previous attempts in the literature), even though many aspects of the task are more complex: the agents have to be able to remember across several re-tests and to re-learn different environments during their lifetime. The smallness of the circuit (4 nodes) has encouraged an in-depth dynamical systems analysis (as opposed to the typical statistical measures, correlation, and ablation studies).
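For readers unfamiliar with the model class, a minimal sketch of a 4-node continuous-time recurrent neural network (CTRNN) of the standard Beer-style form is below. The parameter values here are illustrative placeholders, not the evolved circuit from the thesis.

```python
import numpy as np

# Standard CTRNN node equation (Beer-style):
#   tau_i * dy_i/dt = -y_i + sum_j w_ji * sigma(y_j + theta_j) + I_i
# where sigma is the logistic function, tau_i a per-node time constant,
# theta_j a bias, and I_i external (e.g. sensory) input.

def sigma(x):
    """Logistic activation."""
    return 1.0 / (1.0 + np.exp(-x))

def step_ctrnn(y, w, theta, tau, I, dt=0.01):
    """Advance node states y by one Euler step of size dt."""
    dydt = (-y + w.T @ sigma(y + theta) + I) / tau
    return y + dt * dydt

rng = np.random.default_rng(0)
n = 4                               # a 4-node circuit, as in the post
y = np.zeros(n)                     # initial node states
w = rng.uniform(-5, 5, (n, n))      # weights (would be evolved)
theta = rng.uniform(-3, 3, n)       # biases (would be evolved)
tau = rng.uniform(0.5, 5.0, n)      # time constants set the time-scales
I = np.zeros(n)                     # external input, zero here

for _ in range(1000):               # integrate 10 simulated seconds
    y = step_ctrnn(y, w, theta, tau, I)
```

In an evolutionary setup, `w`, `theta`, and `tau` would be the genotype optimised against the food/temperature association task; here they are just random draws.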
I’m still hard at work analysing the agent and writing it up, but I will have a draft to share with whoever wants to read it fairly soon.
I have also been collaborating with Nathaniel and Tom (colleagues from the CCNR) on a review paper on the notion of autonomy in the cognitive sciences for the same conference. There is a lot of material there already; it’s mostly a matter of narrowing it down and making it as concise as possible. We have been meeting on and off for the last 3 months, which has been really useful and interesting. I would definitely like to engage in more collaborations of that sort.
I’m also getting my presentation ready for the IEEE conference in Hawaii, and that’s even closer! I leave on the 30th of this month. Still need to work on it a lot more.
Next week we will be sending out the first call for papers for the workshop that Eldan and I are organising in Portugal. It’s very exciting. I think there is a lot of space for really interesting discussions, new directions, and collaborations to come out of that workshop.
There are a couple of other things, also important and interesting, which I won’t go into now. Laters.
This term has been unusually busy in terms of teaching and marking for me: 92 hours which ended up being more like 180. I just finished marking around 100 assignments on logic and reasoning today. All that activity has made it hard to focus on my research. The experience, however, has been great. Teaching first-year students can be very rewarding indeed. A good proportion of them are very alert and highly motivated. Even though it was my first time teaching that course, some of the students gave me positive feedback. Furthermore, even though I am not a fan of the topic by any stretch of the imagination, I actually feel encouraged to teach it again next year. I know I could do much better. In any case, the good news is that it is almost over now.
With the IEEE Artificial Life symposium deadline approaching, I think there is some chance of working on a small project that I have been thinking about for a while. It is based on ideas from Inman – basically he gave an initial attempt at it many years ago, got partial results, but did not proceed much further. It is the simplest form of learning we can think of, and it deals directly with how ‘hardwired’ small circuits (with nevertheless a continuum of possible different time-scales) can ‘implement’ ‘weight-like changing’ mechanisms. The idea is to evolve a CTRNN network to produce Hebbian learning behavior: “Nodes which tend to be either both positive or both negative at the same time will have strong positive weights while those which tend to be opposite will have strong negative weights. It is sometimes stated more simply as ‘neurons that fire together, wire together.'” (definition loosely taken from Wikipedia). The beauty of this work is that it would allow for a very thorough dynamical analysis of how the ‘Hebbian-like-learning’ is implemented in a small circuit, which is not always possible, for one of two reasons: (a) the task has other complications, or (b) the circuit is not small enough. For all of these reasons, this research should provide the underlying foundations for the work on evolving dynamical systems for learning behavior in general (particularly the work I have been doing until now)… The catch is that, for this reason, I have partially postponed the writing of the journal paper on the results from my summer research visit with Randy until after the deadline.
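To make the quoted rule concrete, here is a toy illustration of the Hebbian behavior the evolved circuit would need to reproduce: a weight grows when two nodes’ activities are correlated and shrinks when they are anti-correlated. The learning rate and activity values are illustrative choices of mine, not part of the proposed project.

```python
import numpy as np

def hebbian_update(w, pre, post, eta=0.1):
    """Classic Hebbian rule: change w in proportion to the product
    of pre- and post-synaptic activity."""
    return w + eta * pre * post

rng = np.random.default_rng(1)
w_corr = 0.0   # weight between a correlated pair of nodes
w_anti = 0.0   # weight between an anti-correlated pair

for _ in range(200):
    a = rng.choice([-1.0, 1.0])              # activity of node A
    w_corr = hebbian_update(w_corr, a, a)    # node B mirrors A
    w_anti = hebbian_update(w_anti, a, -a)   # node B opposes A

# the correlated pair's weight ends strongly positive,
# the anti-correlated pair's strongly negative
```

The interesting question in the project is not this update rule itself, but whether a fixed-weight CTRNN, through its continuous internal dynamics alone, can behave *as if* it were applying such a rule.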