ref: -0 tags: neuronal assemblies maass hebbian plasticity simulation austria fMRI date: 02-23-2021 18:49 gmt revision:1 [0] [head]

PMID-32381648 A model for structured information representation in neural networks in the brain

  • Using randomly connected E/I networks, suggests that information can be "bound" together using fast Hebbian STDP.
  • That is, 'assemblies' in higher-level areas reference sensory information through patterns of bidirectional connectivity.
  • These patterns emerge spontaneously following disinhibition of the higher-level areas.
  • I find the results underwhelming, but the discussion is more interesting.
    • E.g. there has been a lot of theoretical and computational-experimental work on how concepts are bound together into symbols or grammars.
    • The referenced fMRI studies are interesting, too: they imply that you can observe the results of structural binding in activity of the superior temporal gyrus.
  • I'm more in favor of dendritic potentials or neuronal up/down states as a fast and flexible way of maintaining 'symbol membership' --
    • But it's not as flexible as synaptic plasticity, which, obviously, populates the outer product between 'region a' and 'region b' with a memory substrate, thereby spanning the range of plausible symbol-bindings.
    • Inhibitory interneurons can then gate the bindings, per morphological evidence.
    • But then, I don't think anyone has shown that you need protein synthesis for perception, as you do for LTP (modulo AMPAR cycling).
      • Hence I'd argue that localized dendritic potentials can serve as the flexible outer-product 'memory tag' for presence in an assembly.
        • Or maybe they are used primarily for learning, who knows!
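The outer-product binding idea above can be sketched in a few lines. This is a toy construction of mine, not the paper's network: the region sizes, sparseness, and the one-shot Hebbian update are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sketch (my construction, not the paper's model): 'bind' a sensory
# pattern to a higher-area assembly by populating the outer product between
# the two regions with a one-shot fast Hebbian weight change, then retrieve
# the assembly from the sensory pattern alone.
n_sensory, n_assembly = 64, 32
eta = 1.0  # fast Hebbian learning rate

# Sparse binary activity patterns in each region (assumed statistics)
sensory = (rng.random(n_sensory) < 0.2).astype(float)
assembly = (rng.random(n_assembly) < 0.2).astype(float)

# Binding: W spans the outer product between 'region a' and 'region b'
W = eta * np.outer(assembly, sensory)

# Retrieval: presenting the sensory pattern reactivates the bound assembly
recalled = (W @ sensory > 0).astype(float)
print(np.array_equal(recalled, assembly))
```

An inhibitory gate on W (zeroing rows of the assembly region) would implement the interneuron gating of bindings mentioned above.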

ref: -0 tags: nonlinear hebbian synaptic learning rules projection pursuit date: 12-12-2019 00:21 gmt revision:4 [3] [2] [1] [0] [head]

PMID-27690349 Nonlinear Hebbian Learning as a Unifying Principle in Receptive Field Formation

  • Here we show that the principle of nonlinear Hebbian learning is sufficient for receptive field development under rather general conditions.
  • The nonlinearity is defined by the neuron’s f-I curve combined with the nonlinearity of the plasticity function. The outcome of such nonlinear learning is equivalent to projection pursuit [18, 19, 20], which focuses on features with non-trivial statistical structure, and therefore links receptive field development to optimality principles.
  • Δw ∝ x·h(g(wᵀx)), where h is the Hebbian plasticity term, g is the neuron's f-I curve (input-output relation), and x is the (sensory) input.
  • The relevant property of natural image statistics is that the distribution of features derived from typical localized oriented patterns has high kurtosis [5,6, 39]
  • Model is a generalized leaky integrate-and-fire neuron with triplet STDP.

ref: -0 tags: NMDA spike hebbian learning states pyramidal cell dendrites date: 10-03-2018 01:15 gmt revision:0 [head]

PMID-20544831 The decade of the dendritic NMDA spike.

  • NMDA spikes occur in the finer basal, oblique, and tuft dendrites.
  • Typically 40-50 mV, up to 100's of ms in duration.
  • Look similar to cortical up-down states.
  • They provide a substrate for spatially and temporally local computation in the dendrites, which can enhance the representational or computational repertoire of individual neurons.
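A minimal sketch of that last point, using the common two-layer abstraction of sigmoidal dendritic subunits summed at the soma. The parameterization here is my own illustration, not from the review.

```python
import numpy as np

# Minimal two-layer abstraction of dendritic computation (sigmoidal branch
# subunits summed at the soma; parameters are assumptions, not taken from
# the review). Each branch applies a threshold nonlinearity mimicking the
# regenerative, all-or-nothing character of an NMDA spike.
def nmda_subunit(drive, threshold=1.0, gain=8.0, plateau=1.0):
    """Sigmoidal branch nonlinearity: weak input sums ~linearly,
    suprathreshold clustered input triggers a plateau-like response."""
    return plateau / (1.0 + np.exp(-gain * (drive - threshold)))

def neuron_output(branch_inputs, w_branch):
    """Sum NMDA-spike-like branch outputs at the soma."""
    return sum(w * nmda_subunit(np.sum(b))
               for b, w in zip(branch_inputs, w_branch))

# Clustered input on one branch beats the same total input scattered
# across branches -- a spatially local computation single units can do.
clustered = [np.array([0.4, 0.4, 0.4]), np.zeros(3)]
scattered = [np.array([0.6, 0.0, 0.0]), np.array([0.6, 0.0, 0.0])]
w = [1.0, 1.0]
print(neuron_output(clustered, w) > neuron_output(scattered, w))  # True
```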

ref: Vasilaki-2009.02 tags: associative learning prefrontal cortex model hebbian date: 02-17-2009 03:37 gmt revision:2 [1] [0] [head]

PMID-19153762 Learning flexible sensori-motor mappings in a complex network.

  • They look at a task, presented to monkeys over 10 years ago, in which two images were shown and the monkeys had to associate a leftward or rightward saccade with each image.
  • The association between saccade direction and image was periodically reversed. Unlike humans, who could probably change the association very quickly, the monkeys required on the order of 30 trials to learn each new association.
  • Interestingly, whenever the monkeys made a mistake, they effectively forgot previous pairings. That is, after an error, the monkeys were as likely to make another error as they were to choose correctly, independent of the number of correct trials preceding the error. Strange!
  • They implement and test reward-modulated hebbian learning (RAH), where:
    • The synaptic weights are changed based on the presynaptic activity times the postsynaptic activity, minus the probability of coincident presynaptic and postsynaptic activity. This subtractive term seems similar to the prediction error in TD learning?
    • The synaptic weights are soft-bounded,
    • There is a stop-learning criterion: the weights are not positively updated if the total neuron activity is strongly positive or strongly negative. This allows the network to ultimately reach perfect performance (at some point the weights are no longer changed upon reward), and explains some of the asymmetry between reward and punishment.
  • Their model perhaps does not scale well to large / very complicated tasks, given the presence of only a single reward signal. And the lack of attention / recall? Still, it fits the experimental data quite well.
  • They also note that for all the problems they study, adding more layers to the network does not significantly affect learning - neither the rate nor the eventual performance.
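A sketch of the three ingredients of the rule as summarized above. The exact update equations, parameters, and thresholds here are my assumptions for illustration, not the paper's.

```python
import numpy as np

# Sketch of a reward-modulated Hebbian update with the three ingredients
# in the notes: a pre*(post - expected) term gated by reward, soft weight
# bounds, and a stop-learning condition. All parameter choices and the
# exact functional form are assumptions, not the paper's equations.
def rah_update(w, pre, post, post_mean, reward, eta=0.05,
               w_max=1.0, stop_threshold=2.0):
    total_drive = np.dot(w, pre)
    # Stop-learning: skip positive updates when the neuron's total drive
    # is already strongly positive or negative (decision is 'confident').
    if reward > 0 and abs(total_drive) > stop_threshold:
        return w
    # Covariance-style Hebbian term, gated by the signed reward signal
    dw = eta * reward * pre * (post - post_mean)
    # Soft bounds: updates shrink as weights approach +/- w_max
    dw = np.where(dw > 0, dw * (w_max - w), dw * (w + w_max))
    return w + dw

w = np.zeros(4)
pre = np.array([1.0, 0.0, 1.0, 0.0])
for _ in range(50):
    w = rah_update(w, pre, post=1.0, post_mean=0.5, reward=+1.0)
print(w)  # weights on active inputs grow toward w_max but never exceed it
```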

ref: bookmark-0 tags: STDP hebbian learning dopamine reward robot model ISO date: 0-0-2007 0:0 revision:0 [head]


  • idea: have a gating signal for the hebbian learning.
    • pure hebbian learning is unstable; it will lead to endless amplification.
  • method: use a bunch of resonators tuned near sub-critical damping.
  • application: a simple 2-d robot that learns to seek food. not super interesting, but still good.
  • Uses ISO learning - Isotropic sequence order learning.
  • somewhat related: runbot!
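A rough sketch of the ISO rule: inputs pass through damped resonator filters and the weight on the predictive input changes via a differential Hebbian update, dw ∝ u(t)·dv/dt. The filter constants, timings, and learning rate below are my guesses, not values from the paper.

```python
import numpy as np

# Sketch of ISO (isotropic sequence order) learning: inputs are filtered
# by sub-critically damped resonators, and the predictive weight changes
# via a differential Hebbian rule dw/dt ∝ u(t) * dv/dt. All constants
# here are illustrative assumptions.
def resonator(x, f=0.01, q=0.6):
    """Convolve with the impulse response of a sub-critically damped
    second-order (resonator) filter."""
    t = np.arange(len(x))
    h = np.exp(-np.pi * f / q * t) * np.sin(2 * np.pi * f * t)
    return np.convolve(x, h)[: len(x)]

T = 1000
x_pred = np.zeros(T); x_pred[100] = 1.0     # predictive (earlier) input
x_reflex = np.zeros(T); x_reflex[140] = 1.0  # reflex (later) input
u0, u1 = resonator(x_pred), resonator(x_reflex)

w0, w1, mu = 0.0, 1.0, 0.1  # reflex weight fixed; predictive weight learns
for t in range(1, T):
    v_prev = w0 * u0[t - 1] + w1 * u1[t - 1]
    v = w0 * u0[t] + w1 * u1[t]
    w0 += mu * u0[t] * (v - v_prev)  # differential Hebbian update

print(w0)  # grows positive: the earlier input comes to predict the reflex
```

Because the predictive input leads the reflex, the correlation between u0 and the derivative of the output is net positive, so w0 grows; reversing the timing would shrink it.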