{699} revision 2 modified: 12-07-2011 02:34 gmt

PMID-18255165[0] Stability of the fittest: organizing learning through retroaxonal signals

  • the central hypothesis: strengthening of a neuron's output synapses stabilizes recent changes in the same neuron's inputs.
    • this yields representations tuned to task features, like those arrived at with backprop.
  • Retroaxonal signaling in the brain is too slow to implement an instructive, backprop-style algorithm (one that conveys at least the sign of the error with respect to a given neuron's output).
  • hence, retroaxonal signals are not instructive but selective.
  • At SFN Harris was looking for people to test this in a model; since it is as yet unmodeled and untested, I'm suspicious of it.
  • Seems plausible, yet it also just seems to move responsibility for the learning computation onto the postsynaptic neuron (whose result is then propagated back to the present neuron). The theory does not immediately suggest what neurons are computing in order to learn; rather, only how those changes may be stabilized.
    • If this stabilization is based on some sort of feedback (attention? reward?) that guides learning, then I'd be more willing to accept it; though reward is a problem for the cortex, which does not have many (any?) DA receptors...
    • It seems likely that the cortex is doing a lot of unsupervised learning: predicting what sensory info will come next based on present sensory info (ICA, PCA).
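
Since the hypothesis is untested in a model, here is a minimal sketch of what such a model might look like (my own construction, not from the paper): hidden units propose random changes to their input weights, and a change is stabilized only if the downstream readout, refit afterward, strengthens that unit's output synapse. The retroaxonal signal is purely selective (keep/discard); no error gradient flows back. The network shape, least-squares readout, and acceptance rule are all assumptions for illustration.

```python
import numpy as np

def retroaxonal_select(X, y, n_hid=3, steps=500, sigma=0.05, seed=0):
    """Toy 'selective' retroaxonal rule (assumed model, not Harris's):
    each step, one hidden unit proposes a random perturbation of its
    input weights; the perturbation is kept only if the readout then
    assigns that unit a stronger output synapse."""
    rng = np.random.default_rng(seed)
    n_in = X.shape[1]
    W = rng.normal(0.0, 0.1, (n_hid, n_in))      # input synapses

    def readout(Wc):
        # Refit the output synapses by least squares; the fitted weight
        # magnitude stands in for "how much the readout uses this unit".
        H = np.tanh(X @ Wc.T)                    # hidden activity
        w, *_ = np.linalg.lstsq(H, y, rcond=None)
        mse = ((H @ w - y) ** 2).mean()
        return w, mse

    w_out, _ = readout(W)
    for _ in range(steps):
        j = rng.integers(n_hid)                  # perturb one unit
        W_trial = W.copy()
        W_trial[j] += rng.normal(0.0, sigma, n_in)
        w_trial, _ = readout(W_trial)
        # selective retroaxonal signal: stabilize the input-weight change
        # iff this unit's output synapse got stronger under the new fit
        if abs(w_trial[j]) > abs(w_out[j]):
            W, w_out = W_trial, w_trial
    return W, w_out, readout(W)[1]

# usage sketch: readout should learn the mean of the first four inputs
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 8))
y = X[:, :4].mean(axis=1)
W, w_out, mse = retroaxonal_select(X, y)
```

Note the design choice: the postsynaptic side does all the credit assignment (here, a full least-squares refit), which is exactly the objection above; the presynaptic neuron only hill-climbs on whether its output synapse survived.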


[0] Harris KD. Stability of the fittest: organizing learning through retroaxonal signals. Trends Neurosci 31(3):130-6 (2008 Mar)