With the IEEE Artificial Life symposium deadline approaching, I think there is some chance of working on a small project that I have been thinking about for a while. It is based on ideas from Inman: he made an initial attempt at it many years ago, got partial results, but did not take it much further. It is the simplest form of learning we can think of, and it deals directly with how 'hardwired' small circuits (with, nevertheless, a continuum of possible time-scales) can 'implement' weight-like changing mechanisms. The idea is to evolve a CTRNN to produce Hebbian learning behavior: "Nodes which tend to be either both positive or both negative at the same time will have strong positive weights, while those which tend to be opposite will have strong negative weights. It is sometimes stated more simply as 'neurons that fire together, wire together.'" (definition loosely taken from Wikipedia). The beauty of this work is that it would allow for a very thorough dynamical analysis of how the Hebbian-like learning is implemented in a small circuit. Such an analysis is not always possible, for two different reasons: (a) the task has other complications, or (b) the circuit is not small enough. For all of these reasons, this research should provide the underlying foundations for the work on evolving dynamical systems for learning behavior in general (particularly the work I have been doing until now)… The catch is that, for this reason, I have partially postponed the writing of the journal paper on the results from my summer research visit with Randy until after the deadline.
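To make the setup concrete, here is a minimal sketch of the standard CTRNN dynamics that such an evolved circuit would run on. The equations are the usual ones (τᵢ ẏᵢ = −yᵢ + Σⱼ wᵢⱼ σ(yⱼ + θⱼ) + Iᵢ, integrated with Euler steps); all the specific parameter values below are hypothetical placeholders, since in the actual project they would be set by the evolutionary search rather than by hand.

```python
import numpy as np

def sigmoid(x):
    """Standard logistic activation used in CTRNNs."""
    return 1.0 / (1.0 + np.exp(-x))

def ctrnn_step(y, W, tau, theta, I, dt=0.01):
    """One Euler step of tau_i * dy_i/dt = -y_i + sum_j W_ij * sigma(y_j + theta_j) + I_i."""
    dydt = (-y + W @ sigmoid(y + theta) + I) / tau
    return y + dt * dydt

# Hypothetical 2-node circuit; in the project these parameters
# would be found by evolution, not chosen by hand.
rng = np.random.default_rng(0)
y = np.zeros(2)                    # neuron states
W = rng.uniform(-5.0, 5.0, (2, 2)) # fixed ('hardwired') connection weights
tau = np.array([1.0, 0.5])         # a continuum of possible time-scales
theta = rng.uniform(-2.0, 2.0, 2)  # biases
I = np.array([0.5, 0.0])           # external inputs

for _ in range(1000):
    y = ctrnn_step(y, W, tau, theta, I)
```

Note that the weights W stay fixed throughout: the whole point of the project is that any 'weight-like changing' would have to emerge from the circuit's internal dynamics (e.g. slow nodes tracking the correlation of fast ones), not from an explicit plasticity rule.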

# Hebbian learning CTRNNs
