Abbott, L.F. (2008) Theoretical Neuroscience Rising. Neuron 60:489-495.
Key passages from my favorite parts of the paper:
“Neuroscience has always had models, but prior to the invasion of the theorists, these were often word models. Equations force a model to be precise, complete, and self-consistent, and they allow its full implications to be worked out.”
“A skillful theoretician can formulate, explore, and often reject models at a pace that no experimental program can match. This is a major role of theory – to generate and vet ideas prior to full experimental testing. […] It is the theorist’s job to develop, test, frequently reject, and sometimes promote new ideas. […] (to) provide valuable new ways of thinking.”
“Identifying the minimum set of features needed to account for a particular phenomenon and describing these accurately enough to do the job is a key component of model building. […] The truly realistic model is as impossible and useless a concept as Borges’ ‘map of the empire that was of the same scale as the empire and that coincided with it point for point’.”
And curiously, with regard to the ‘future’: “Learning is widely considered a job for the synapse.” and “We (…) commonly tend to think of synapses as the focus of learning and memory, and neurons as the workhorses of dynamic computation. This may be radically wrong.”
Hmm. Is he honestly not aware of the works below, which provide proofs of principle of just how radically wrong it could be? Or is he choosing to ignore them? If so, it would be interesting to know exactly why.
– Yamauchi, B. and Beer, R.D. (1994). Sequential behavior and learning in evolved dynamical neural networks.
– Tuci, E., Quinn, M. and Harvey, I. (2003). An evolutionary ecological approach to the study of learning behaviour using a robot based model.
– Izquierdo, E. and Harvey, I. (2007). Hebbian learning using fixed weight evolved dynamical ‘neural’ networks.
– Phattanasri, P., Chiel, H.J. and Beer, R.D. (2007). The dynamics of associative learning in evolved model circuits.
– Izquierdo, E., Harvey, I. and Beer, R.D. (2008). Associative learning on a continuum in evolved dynamical neural networks.
But perhaps more importantly, these suggest that the work on synaptic plasticity and learning behavior has to reach a high-impact journal where it can be read more widely, and take a shape that biologists won’t so easily continue to ignore – this is what I am working on.
The other point that completely escapes this theoretical neuroscience review is the role that environmental feedback and the biomechanics of the body play in behavior. Not even a brief mention, not even for the future! I guess this is exciting.
In order to better understand the motion and interactions of the planets (and their satellites) of our solar system, Galileo built mechanical devices to instantiate idealized models of the solar system such as the one in the figure:
Learned about this from Edward Tufte’s “Envisioning Information” and thought it would be worth sharing. This photo is from William Pearson, in Abraham Rees ed., The Cyclopaedia; or, Universal Dictionary of Arts, Sciences, and Literature, Plates, Vol. iv (London 1820).
I think this is a good example of how Galileo placed an emphasis on idealization and on understanding by building in science – not to mention an early precursor to the use of computer simulations in science. Interestingly, Tufte warns that these machines may be “directing attention more toward miraculous contraptionary display than to planetary motion.” An important warning for scientists using computer simulations for modeling in general today.
Just a quick pointer to the new volume of the Adaptive Behavior journal, still warm off the presses. Needless to say, this is compulsory reading for anybody passing by this blog.
The target article by Barbara Webb criticizes models that do not directly target specific organisms or do not match empirical data sets. Without a doubt, the overall response from the adaptive behavior community is that the criticisms are unfair, overly restrictive, and mostly display a data-driven shortsightedness with regard to the wide range of uses of models and their relevance to biology.
There are a number of very interesting and different defenses – all of them worth reading carefully. Each of the commentaries, in fact, deserves a dedicated discussion of its own, but I won’t have time for that. Suffice it to say (at least for now) that I particularly recommend the articles by Randall Beer and Paul Williams, Inman Harvey, Jason Noble and Manuel de Pinedo, and Seth Bullock. I also really enjoyed the article by Xabier Barandiaran and Anthony Chemero – in particular, the reference to metaphorical forests.
The only articles I can’t fully recommend are those by William Bechtel and by Volker Grimm and Steven F. Railsback – the message of their commentaries is unclear to me, which suggests they could possibly be a bit confused about it themselves (but I’ll re-read them a few more times just in case).
Modeler myopia is a refractive defect of the “mind’s eye” of the modeler, in which biological relevance is in focus only when proximally related to empirical data; all other relevance blurs out of focus.
I was excited to check out Dario Floreano’s latest book, “Bio-Inspired Artificial Intelligence”, but after only a few hours I was pretty much ready to put it down.
I started by checking the area that I’m most familiar with: neural networks that can produce learning behavior. Immediately I see that the authors neglected to cite original work – citing, in one case, their own version of the work (produced a decade later), and, worse yet, in a second example, adopting some ideas as their own without any reference whatsoever.
The two examples are from page 264, in a closing remarks section on neural networks.
They start off by neglecting the original work:
“.. it has been shown that .. a network with dynamic neurons and without synaptic plasticity is still capable of displaying learning-like behaviors (reference to Floreano and student paper).”
The original work is, of course, by Yamauchi, B. and Beer, R.D. in their two 1994 papers titled: Sequential behavior and learning in evolved dynamical neural networks (In Adaptive Behavior Journal) and Integrating reactive, sequential and learning behavior using dynamical neural networks (In the Proceedings of the Third International Conference on Simulation of Adaptive Behavior).
The authors follow this by describing a perspective that has been put forward by Randall Beer many times and in many different contexts, in both written form and talks – and also by Inman Harvey, though I think slightly later:
“These .. examples challenge the more or less implicit assumption that neural activations are responsible for behavior and synaptic change is responsible for learning. An alternative perspective is to consider the brain as a dynamical system characterized by several time constants associated to various processes… Such a dynamical system perspective does not require us to make a mechanistic distinction between behavior and learning of the network.”
Interestingly, this is – in my view – the most sensible stance, though still far from the conventionally accepted one. So, although it is great to see that he considers the position important (even if treated only in passing in the closing remarks), it is not so great that he has failed to mention the people who either performed the original experiments or developed the ideas further.
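For concreteness, the fixed-weight dynamical models in question are typically continuous-time recurrent neural networks (CTRNNs), the model class used in the papers listed above. Here is a minimal sketch of the standard CTRNN equations with made-up, hypothetical parameters – not any specific published circuit:

```python
import math

def sigmoid(x):
    """Standard logistic activation used in CTRNNs."""
    return 1.0 / (1.0 + math.exp(-x))

def step_ctrnn(y, tau, w, theta, inputs, dt=0.01):
    """One Euler step of  tau_i * dy_i/dt = -y_i + sum_j w[j][i]*sigmoid(y_j + theta_j) + I_i."""
    out = [sigmoid(yj + tj) for yj, tj in zip(y, theta)]
    return [yi + dt * (-yi + sum(w[j][i] * out[j] for j in range(len(y))) + Ii) / ti
            for i, (yi, ti, Ii) in enumerate(zip(y, tau, inputs))]

# Hypothetical 2-neuron circuit: the weights are fixed, but one fast and one
# slow time constant let the same network carry both rapid "behavioral"
# dynamics and slow "memory-like" internal state.
y = [0.0, 0.0]                  # neuron states
tau = [0.1, 5.0]                # fast and slow time constants
w = [[4.0, -2.0], [2.0, 3.0]]   # w[j][i]: weight from neuron j to neuron i
theta = [-2.0, -1.5]            # biases
for _ in range(1000):           # simulate 10 time units
    y = step_ctrnn(y, tau, w, theta, inputs=[0.5, 0.0])
```

The point made in the papers above is that evolving nothing but such fixed parameters can still yield networks whose behavior changes with experience – learning-like behavior without any synaptic plasticity.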
At any rate, this doesn’t matter much for this particular area, because I’m quite familiar with it and can trace back the original ideas without much effort. The lamentable thing is that I’m not sure how I feel about reading about the areas I’m less familiar with, for fear that much the same will be happening throughout the book.
It’s a trip back to the library for me.
“The first step is to look around at the rich fabric of the phenomena around us. Next, we selectively ignore nearly everything about these phenomena, snipping the fabric down to just a few threads. This process involves (a) selecting a simplified but real model system for detailed study and (b) representing the simple system by an equally simple mathematical model, with as few independent constructs and relations as possible. […] The last step is to (c) deduce from the mathematical model some nonobvious quantitative, and experimentally testable predictions. If a model makes many such successful predictions, we gain conviction that we have found the few key ingredients in our simplifying steps (a) and (b).” Philip Nelson, p. 16-17 (2004).