m8ta

{1531}
ref: -2013 tags: synaptic learning rules calcium harris stdp date: 02-18-2021 19:48 gmt

PMID-24204224 The Convallis rule for unsupervised learning in cortical networks 2013 - Pierre Yger  1 , Kenneth D Harris

This paper aims to unify and reconcile experimental evidence of in-vivo learning rules with established STDP rules.  In particular, the STDP rule fails to accurately predict changes in synaptic strength in response to spike triplets, e.g. pre-post-pre or post-pre-post.  Their model instead involves competition between two threshold circuits / coincidence detectors with different time constants, one controlling LTD and the other LTP, and is thus an extension of the classical BCM rule.  (BCM: inputs below a threshold weaken a synapse; those above it strengthen it.)

They derive the model from an optimization criterion: neurons should maximize the skewness of the distribution of their membrane potential, i.e. much time spent either firing spikes or strongly inhibited.  This maps to an objective function F that looks like a valley - hence the 'Convallis' in the name (Latin for valley); the objective is differentiated to yield a weighting function for weight changes.  They also add a shrinkage function (a line plus a Heaviside step) to gate weight changes 'off' at resting membrane potential.

A network of spiking neurons successfully groups correlated rate-encoded inputs, better than the STDP rule.  It can also cluster auditory inputs of spoken digits converted into cochleograms.  But this all seems relatively toy-like: of course algorithms can associate inputs that co-occur.  The same result was found for a recurrent balanced E-I network with the same cochleograms, and Convallis performed better than STDP.  Meh.

Perhaps the biggest thing I got from the paper was how poorly STDP fares with spike triplets:

Pre following post does not 'necessarily' cause LTD; it's more complicated than that, and more consistent with the two coincidence detectors with different time constants.  This is satisfying, as it allows apical dendritic depolarization to serve as a contextual binding signal without negatively impacting the associated synaptic weights.
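The BCM intuition mentioned above can be sketched as a rate-based toy. This is a minimal sketch of the classic sliding-threshold rule only, not the paper's full Convallis model; all constants and the linear-neuron setup are invented for illustration:

```python
import numpy as np

# Minimal rate-based BCM sketch: postsynaptic activity below the
# threshold theta weakens the synapse, above it strengthens.
# All values here are illustrative, not from the paper.
rng = np.random.default_rng(0)
w = 0.5            # synaptic weight
theta = 0.4        # plasticity threshold (slides with average activity)
eta, tau = 0.01, 100.0

for t in range(1000):
    x = rng.uniform(0, 1)              # presynaptic rate
    y = w * x                          # postsynaptic rate (linear neuron)
    w += eta * x * y * (y - theta)     # BCM weight update
    theta += (y**2 - theta) / tau      # sliding threshold stabilizes learning
    w = np.clip(w, 0.0, 2.0)

print(round(w, 3), round(theta, 3))
```

The sliding threshold is what keeps the rule from running away: as the postsynaptic rate grows, theta rises and potentiation self-limits.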

{1417}
ref: -0 tags: synaptic plasticity 2-photon imaging inhibition excitation spines dendrites synapses 2p date: 08-14-2020 01:35 gmt

PMID-22542188 Clustered dynamics of inhibitory synapses and dendritic spines in the adult neocortex.

  • Cre-recombinase-dependent labeling of postsynaptic scaffolding via a Gephyrin-Teal fluorophore fusion.
  • Also added Cre-eYFP to label the neurons.
  • Electroporated mice in utero at E16.
    • Low concentration of Cre, high concentrations of the Gephyrin-Teal and Cre-eYFP constructs, to attain sparse labeling.
  • Located the same dendrite imaged in-vivo in fixed tissue - !! - using serial-section electron microscopy.
  • 2230 dendritic spines and 1211 inhibitory synapses from 83 dendritic segments in 14 cells of 6 animals.
  • Some spines had inhibitory synapses on them -- 0.7 / 10um, vs 4.4 / 10um dendrite for excitatory spines. ~ 1.7 inhibitory
  • Suggest that the data support the idea that inhibitory inputs may be gating excitation.
  • Furthermore, co-innervated spines are stable, both during normal experience and during monocular deprivation.
  • Monocular deprivation induces a pronounced loss of inhibitory synapses in binocular cortex.

{1478}
ref: -2013 tags: 2p two photon STED super resolution microscope synapse synaptic plasticity date: 08-14-2020 01:34 gmt

PMID-23442956 Two-Photon Excitation STED Microscopy in Two Colors in Acute Brain Slices

  • Plenty of details on how they set up the microscope.
  • Mice: Thy1-eYFP (some excitatory cells in the hippocampus and cortex) and CX3CR1-eGFP (GFP in microglia). Crossbred the two strains for two-color imaging.
  • Animals were 21-40 days old at slicing.

PMID-29932052 Chronic 2P-STED imaging reveals high turnover of spines in the hippocampus in vivo

  • As above, Thy1-GFP / Thy1-YFP labeling; hence this was a structural study (for which the high resolution of STED was necessary).
  • Might just as well have gone with synaptic labels, e.g. tdTomato-Synapsin.

{1518}
ref: -0 tags: synaptic plasticity LTP LTD synapses NMDA glutamate uncaging date: 08-11-2020 22:40 gmt

PMID-31780899 Single Synapse LTP: A matter of context?

  • Not a great name for a thorough and reasonably well-written review of glutamate uncaging studies as related to LTP (and to a lesser extent LTD).
  • Lots of references from many familiar names. Nice to have them all in one place!
  • I'm left wondering, between CaMKII, PKA, PKC, Ras, and other GTP-dependent molecules -- how much of the regulatory network in the synapse is known? E.g. if you pull down all proteins in the synaptosome & their interacting partners, how many are unknown, or have an unknown function? I know something like this has been done for flies, but in mammals - ?

{1495}
ref: -0 tags: multifactor synaptic learning rules date: 01-22-2020 01:45 gmt

Why multifactor?

  • Take a simple MLP. Let x be the layer activation. X^0 is the input, X^1 is the second layer (first hidden layer). These are vectors, indexed like x^a_i.
  • Then X^1 = φ(W X^0), or x^1_j = φ(Σ_{i=1}^N w_{ij} x^0_i). φ is the nonlinear activation function (ReLU, sigmoid, etc.)
  • In standard STDP the learning rule follows Δw ∝ f(x_pre(t), x_post(t)), or, if the layer number is a, Δw^{a+1} ∝ f(x^a(t), x^{a+1}(t)).
    • (but of course nobody thinks there are 'numbers' on the 'layers' of the brain -- this is just referring to pre- and post-synaptic).
  • In an artificial neural network, Δw^a ∝ -∂E/∂w^a_{ij} ∝ -δ^a_j x_i (intuitively: the weight change is proportional to the error propagated from higher layers times the input activity), where δ^a_j = (Σ_{k=1}^N w_{jk} δ^{a+1}_k) φ', and φ' is the derivative of the nonlinear activation function, evaluated at the given activation.
  • f(i, j) → [x, y, θ, φ]
  • k = 13.165
  • x = round(i / k)
  • y = round(j / k)
  • θ = a(i/k - x) + b(i/k - x)^2
  • φ = a(j/k - y) + b(j/k - y)^2
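The backprop delta recursion in the bullets above can be written out concretely. A minimal numpy sketch for a two-layer MLP; the sigmoid activation, sizes, and target are illustrative assumptions:

```python
import numpy as np

# Sketch of the backprop delta recursion for a 2-layer MLP with
# sigmoid units. Shapes and values are illustrative only.
rng = np.random.default_rng(1)
phi = lambda v: 1.0 / (1.0 + np.exp(-v))   # activation function
dphi = lambda y: y * (1.0 - y)             # its derivative, given the output

x0 = rng.uniform(size=4)                   # input X^0
W1 = rng.normal(scale=0.5, size=(4, 3))    # forward weights w_ij, layer 1
W2 = rng.normal(scale=0.5, size=(3, 2))    # forward weights, layer 2
target = np.array([0.0, 1.0])

x1 = phi(x0 @ W1)                          # X^1 = phi(W X^0)
x2 = phi(x1 @ W2)                          # output layer
delta2 = (x2 - target) * dphi(x2)          # output-layer delta
delta1 = (W2 @ delta2) * dphi(x1)          # delta^a = (sum_k w_jk delta^{a+1}_k) phi'
dW1 = -np.outer(x0, delta1)                # Delta w^a proportional to -delta_j x_i
print(dW1.shape)  # (4, 3)
```

The point of the contrast in the note: `delta1` needs the downstream weights `W2`, i.e. information a biological synapse does not locally have, whereas the STDP form uses only pre- and post-synaptic activity.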

{1493}
ref: -0 tags: nonlinear hebbian synaptic learning rules projection pursuit date: 12-12-2019 00:21 gmt

PMID-27690349 Nonlinear Hebbian Learning as a Unifying Principle in Receptive Field Formation

  • Here we show that the principle of nonlinear Hebbian learning is sufficient for receptive field development under rather general conditions.
  • The nonlinearity is defined by the neuron’s f-I curve combined with the nonlinearity of the plasticity function. The outcome of such nonlinear learning is equivalent to projection pursuit [18, 19, 20], which focuses on features with non-trivial statistical structure, and therefore links receptive field development to optimality principles.
  • Δw ∝ x h(g(w^T x)), where h is the Hebbian plasticity term, g is the neuron's f-I curve (input-output relation), and x is the (sensory) input.
  • The relevant property of natural image statistics is that the distribution of features derived from typical localized oriented patterns has high kurtosis [5,6, 39]
  • Model is a generalized leaky integrate and fire neuron, with triplet STDP
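The rule Δw ∝ x h(g(wᵀx)) can be sketched on synthetic input with one heavy-tailed (high-kurtosis) direction. The choices of g (a rectified-square f-I curve) and h (identity), and the Oja-style normalization, are assumptions for illustration, not the paper's fitted forms:

```python
import numpy as np

# Toy nonlinear Hebbian rule on 2-D input: one heavy-tailed source
# (high kurtosis, like oriented-patch features) plus a Gaussian
# distractor. g and h are assumed forms, not from the paper.
rng = np.random.default_rng(2)
n = 20000
s = rng.standard_t(df=3, size=n)        # heavy-tailed source
noise = rng.normal(size=n)              # Gaussian distractor
X = np.stack([s, noise], axis=1)
X /= X.std(axis=0)                      # equalize variance of both directions

g = lambda u: np.maximum(u, 0.0) ** 2   # rectified-square f-I curve (assumed)
h = lambda y: y                          # identity Hebbian term (assumed)

w = rng.normal(size=2)
eta = 1e-3
for x in X:
    y = g(w @ x)
    w += eta * x * h(y)                  # Delta w ~ x h(g(w^T x))
    w /= np.linalg.norm(w)               # keep |w| = 1 (Oja-style normalization)

print(np.abs(w))
```

Since both directions have unit variance, only the higher-order (kurtotic) structure can distinguish them -- this is the projection-pursuit link the paper makes.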

{1485}
ref: -2015 tags: PaRAC1 photoactivatable Rac1 synapse memory optogenetics 2p imaging mouse motor skill learning date: 10-30-2019 20:35 gmt

PMID-26352471 Labelling and optical erasure of synaptic memory traces in the motor cortex

  • Idea: use Rac1, which has been shown to induce spine shrinkage, coupled to a light-activated domain to allow for optogenetic manipulation of active synapses.
  • PaRac1 was coupled to a deletion mutant of PSD95, PSD delta 1.2, which concentrates at the postsynaptic site, but cannot bind to postsynaptic proteins, thus minimizing the undesirable effects of PSD-95 overexpression.
    • PSD-95 is rapidly degraded by proteasomes
    • This gives spatial selectivity.
  • They then exploited the dendritic targeting element (DTE) of Arc mRNA, which is selectively targeted and translated in activated dendritic segments in response to synaptic activation, in an NMDA receptor dependent manner.
    • Thereby giving temporal selectivity.
  • Construct is then PSD-PaRac1-DTE; this was tested on hippocampal slice cultures.
  • Improved sparsity and labelling further by driving it with the Arc promoter.
  • Motor learning is impaired in Arc KO mice; hence inferred that the induction of AS-PaRac1 by the Arc promoter would enhance labeling during learning-induced potentiation.
  • Delivered construct via in-utero electroporation.
  • Observed rotarod-induced learning; the PaRac signal decayed after two days, but the spine volume persisted in spines that showed Arc / DTE hence PA labeled activity.
  • Now, since they had a good label, performed rotarod training followed by (at variable delay) light pulses to activate Rac, thereby suppressing recently-active synapses.
    • Observed a depression of behavioral performance.
    • Controlled with a second task; could selectively impair performance on one of the tasks based on ordering/timing of light activation.
  • The localized probe also allowed them to image the synapse populations active for each task, which were largely non-overlapping.

{1464}
ref: -2012 tags: phase change materials neuromorphic computing synapses STDP date: 06-13-2019 21:19 gmt

Nanoelectronic Programmable Synapses Based on Phase Change Materials for Brain-Inspired Computing

  • Here, we report a new nanoscale electronic synapse based on technologically mature phase change materials employed in optical data storage and nonvolatile memory applications.
  • We utilize continuous resistance transitions in phase change materials to mimic the analog nature of biological synapses, enabling the implementation of a synaptic learning rule.
  • We demonstrate different forms of spike-timing-dependent plasticity using the same nanoscale synapse with picojoule level energy consumption.
  • Again uses GST, a germanium-antimony-tellurium alloy.
  • 50 pJ to reset (depress) the synapse, 0.675 pJ to potentiate.
    • Reducing the device size will linearly decrease this current.
  • Synapse resistance changes from roughly 200 kΩ to 2 MΩ.

See also: Experimental Demonstration and Tolerancing of a Large-Scale Neural Network (165 000 Synapses) Using Phase-Change Memory as the Synaptic Weight Element

{1423}
ref: -2014 tags: Lillicrap Random feedback alignment weights synaptic learning backprop MNIST date: 02-14-2019 01:02 gmt

PMID-27824044 Random synaptic feedback weights support error backpropagation for deep learning.

  • "Here we present a surprisingly simple algorithm for deep learning, which assigns blame by multiplying error signals by random synaptic weights."
  • Backprop multiplies error signals e by the weight matrix W^T, the transpose of the forward synaptic weights.
  • But the feedback weights do not need to be exactly W^T; any matrix B will suffice, so long as on average:
  • e^T W B e > 0
    • Meaning that the teaching signal B e lies within 90 deg of the signal used by backprop, W^T e.
  • Feedback alignment actually seems to work better than backprop in some cases. This relies on starting the weights very small (they can't be zero -- no output).

Quote: "Our proof says that weights W0 and W evolve to equilibrium manifolds, but simulations (Fig. 4) and analytic results (Supplementary Proof 2) hint at something more specific: that when the weights begin near 0, feedback alignment encourages W to act like a local pseudoinverse of B around the error manifold. This fact is important because if B were exactly W+ (the Moore-Penrose pseudoinverse of W), then the network would be performing Gauss-Newton optimization (Supplementary Proof 3). We call this update rule for the hidden units pseudobackprop and denote it by Δh_PBP = W+ e. Experiments with the linear network show that the angle Δh_FA ∠ Δh_PBP quickly becomes smaller than Δh_FA ∠ Δh_BP (Fig. 4b, c; see Methods). In other words feedback alignment, despite its simplicity, displays elements of second-order learning."
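The feedback-alignment idea is easy to check on a tiny linear network trained toward a random linear target, with errors routed back through a fixed random B instead of W2^T. All sizes and the learning rate here are made-up values for this sketch:

```python
import numpy as np

# Toy feedback-alignment check: two-layer linear network, errors fed
# back through a FIXED random matrix B rather than W2^T.
rng = np.random.default_rng(3)
n_in, n_hid, n_out = 10, 8, 5
T = rng.normal(size=(n_in, n_out))                 # target linear map
W1 = rng.normal(scale=0.01, size=(n_in, n_hid))    # start weights near 0
W2 = rng.normal(scale=0.01, size=(n_hid, n_out))
B = rng.normal(size=(n_out, n_hid))                # fixed random feedback weights

def loss():
    X = rng.normal(size=(256, n_in))
    E = X @ W1 @ W2 - X @ T
    return float(np.mean(E ** 2))

loss0 = loss()
eta = 0.005
for _ in range(4000):
    x = rng.normal(size=n_in)
    h = x @ W1
    e = h @ W2 - x @ T                  # output error
    W2 -= eta * np.outer(h, e)          # standard delta rule at the top layer
    W1 -= eta * np.outer(x, e @ B)      # error routed via B, not via W2^T
loss1 = loss()
print(loss0, loss1)
```

Note there is no step that teaches B anything; the forward weights come to agree with the fixed feedback path, which is the "alignment".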

{1136}
ref: -0 tags: DBS dopamine synaptic plasticity striatum date: 02-27-2012 21:57 gmt

PMID-11285003 Dopaminergic control of synaptic plasticity in the dorsal striatum.

  • Repetitive stimulation of corticostriatal fibers causes a massive release of glutamate and DA in the striatum, and depending on the glutamate receptor subtype preferentially activated, produces either long-term depression (LTD) or long-term potentiation (LTP) of excitatory synaptic transmission.
  • D1 and D2 (like) receptors interact synergistically to allow LTD formation, and in opposition while inducing LTP.
  • Stimulation of DA receptors has been shown to modulate voltage-dependent conductances in striatal spiny neurons, but it does not cause depolarization or hyperpolarization (Calabresi et al 2000a PMID-11052221; Nicola et al 2000)
  • Striatal spiny neurons present a high degree of colocalization of subtypes of DA and glutamate receptors. PMID-9215599
  • Striatal cells have up and down states. Wilson and Kawaguchi 1996 PMID-8601819
  • Both LTD and LTP are induced in the striatum by the repetitive stimulation of corticostriatal fibers.
    • Repetition is associated with the dramatic increase of both glutamate and DA in the striatum. (presynaptic?)
  • LTP is enhanced by blocking or removing D2 receptors.
  • More complexity here - in terms of receptors and blocking. (sure magnesium blocks NMDA receptors, but there are many other drugs used...)

{970}
ref: Prescott-2009.02 tags: PD levodopa synaptic plasticity SNr STN DBS date: 02-22-2012 18:28 gmt

PMID-19050033[0] Levodopa enhances synaptic plasticity in the substantia nigra pars reticulata of Parkinson's disease patients

  • In the SNpc -> SNr.
  • High frequency stimulation (HFS--four trains of 2 s at 100 Hz) in the SNr failed to induce a lasting change in test fEPs (1 Hz) amplitudes in patients OFF medication (decayed to baseline by 160 s). Following oral L-dopa administration, HFS induced a potentiation of the fEP amplitudes (+29.3% of baseline at 160 s following a plateau).
  • Aberrant synaptic plasticity may play a role in the pathophysiology of Parkinson's disease.

____References____

[0] Prescott IA, Dostrovsky JO, Moro E, Hodaie M, Lozano AM, Hutchison WD, Levodopa enhances synaptic plasticity in the substantia nigra pars reticulata of Parkinson's disease patients. Brain 132:Pt 2, 309-18 (2009 Feb)

{699}
ref: Harris-2008.03 tags: retroaxonal retrosynaptic Harris learning cortex backprop date: 12-07-2011 02:34 gmt

PMID-18255165[0] Stability of the fittest: organizing learning through retroaxonal signals

  • the central hypothesis: strengthening of a neuron's output synapses stabilizes recent changes in the same neuron's inputs.
    • this causes representations (as are arrived at with backprop) that are tuned to task features.
  • Retroaxonal signaling in the brain is too slow for an instructive backprop algorithm (one that conveys at least the sign of the error w.r.t. a given neuron's output);
  • hence, retroaxonal signals are not instructive but selective.
  • At SFN Harris was looking for people to test this in a model; as it is (yet) unmodeled and untested, I'm suspicious of it.
  • Seems plausible, yet it also just seems to be a way of moving the responsibility for learning computation to the postsynaptic neuron (which is then propagated back to the present neuron). The theory does not immediately suggest what neurons are doing to learn their stuff; rather how they may be learning.
    • If this stabilization is based on some sort of feedback (attention? reward?), which may guide learning (except for the cortex, which does not have many (any?) DA receptors...), then I may be more willing to accept it.
    • It seems likely that the cortex is doing a lot of unsupervised learning: predicting what sensory info will come next based on present sensory info (ICA, PCA).

____References____

[0] Harris KD, Stability of the fittest: organizing learning through retroaxonal signals. Trends Neurosci 31:3, 130-6 (2008 Mar)

{715}
ref: Legenstein-2008.1 tags: Maass STDP reinforcement learning biofeedback Fetz synapse date: 04-09-2009 17:13 gmt

PMID-18846203[0] A Learning Theory for Reward-Modulated Spike-Timing-Dependent Plasticity with Application to Biofeedback

  • (from abstract) The resulting learning theory predicts that even difficult credit-assignment problems, where it is very hard to tell which synaptic weights should be modified in order to increase the global reward for the system, can be solved in a self-organizing manner through reward-modulated STDP.
    • This yields an explanation for a fundamental experimental result on biofeedback in monkeys by Fetz and Baker.
  • STDP is prevalent in the cortex; however, it requires a second signal:
    • Dopamine seems to gate STDP in corticostriatal synapses
    • ACh does the same or similar in the cortex. -- see references 8-12
  • The simple learning rule they use: d/dt W_ij(t) = C_ij(t) D(t)
  • Their notes on the Fetz/Baker experiments: "Adjacent neurons tended to change their firing rate in the same direction, but also differential changes of directions of firing rates of pairs of neurons are reported in [17] (when these differential changes were rewarded). For example, it was shown in Figure 9 of [17] (see also Figure 1 in [19]) that pairs of neurons that were separated by no more than a few hundred microns could be independently trained to increase or decrease their firing rates."
  • Their result is actually really simple - there is no 'control' or biofeedback - there is no visual or sensory input, no real computation by the network (at least for this simulation). One neuron is simply reinforced; hence its firing rate increases.
    • Fetz & later Schmidt's work involved feedback and precise control of firing rate; this does not.
    • This also does not address the problem that their rule may allow other synapses to forget during reinforcement.
  • They do show that exact spike times can be rewarded, which is kinda interesting ... kinda.
  • Tried a pattern classification task where all of the information was in the relative spike timings.
    • Had to run the pattern through the network 1000 times. That's a bit unrealistic (?).
      • The problem with all these algorithms is that they require so many presentations for gradient descent (or similar) to work, whereas biological systems can and do learn after one or a few presentations.
  • Next tried to train neurons to classify spoken input
    • Audio stimuli were processed through a cochlear model.
    • Maass previously has been able to train a network to perform speaker-independent classification.
    • Neuron model does, roughly, seem to discriminate between "one" and "two"... after 2000 trials (each with a presentation of 10 of the same digit utterance). I'm still not all that impressed. Feels like gradient descent / linear regression as per the original LSM.
  • A great many derivations in the Methods section... too much to follow.
  • Should read refs:
    • PMID-16907616[1] Gradient learning in spiking neural networks by dynamic perturbation of conductances.
    • PMID-17220510[2] Solving the distal reward problem through linkage of STDP and dopamine signaling.
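The rule d/dt W_ij = C_ij(t) D(t) above can be sketched with a per-synapse eligibility trace and a global reward signal. The Poisson rates, trace time constant, and the "one rewarded input" setup here are invented for illustration, and are far simpler than the paper's model:

```python
import numpy as np

# Sketch of dW_ij/dt = C_ij(t) D(t): a per-synapse eligibility trace C
# driven by pre/post coincidence, multiplied by a global reward D.
# All rates and constants are made up for this toy.
rng = np.random.default_rng(4)
n_pre = 10
w = np.full(n_pre, 0.5)                  # synaptic weights
c = np.zeros(n_pre)                      # eligibility traces C_ij
dt, tau_c, eta = 1.0, 50.0, 0.01

rewarded = 3                             # index of the "useful" input (assumed)
for t in range(5000):
    pre = rng.random(n_pre) < 0.05       # presynaptic spikes (Bernoulli/Poisson-like)
    post = rng.random() < 0.05           # postsynaptic spike (toy: independent)
    c += -c * dt / tau_c                 # traces decay
    c += np.where(pre & post, 1.0, 0.0)  # coincidence bumps the trace
    D = 1.0 if (pre[rewarded] and post) else 0.0   # reward the chosen coincidence
    w += eta * c * D                     # dW = C * D
    w = np.clip(w, 0.0, 1.0)

print(w)
```

Only the reward-correlated synapse reliably accumulates weight; the others get small, reward-gated drift from their stale traces -- which illustrates the forgetting concern raised above.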

____References____

[0] Legenstein R, Pecevski D, Maass W, A learning theory for reward-modulated spike-timing-dependent plasticity with application to biofeedback. PLoS Comput Biol 4:10, e1000180 (2008 Oct)
[1] Fiete IR, Seung HS, Gradient learning in spiking neural networks by dynamic perturbation of conductances. Phys Rev Lett 97:4, 048104 (2006 Jul 28)
[2] Izhikevich EM, Solving the distal reward problem through linkage of STDP and dopamine signaling. Cereb Cortex 17:10, 2443-52 (2007 Oct)

{720}
ref: Huber-2004.07 tags: sleep REM SWS wilson synaptic strength date: 04-01-2009 17:50 gmt

http://www.the-scientist.com/2009/04/1/34/1/ -- good layperson-level review of present research on sleep. Includes interviews with Stickgold and other prominent researchers. References:

http://www.the-scientist.com/2009/04/1/15/1/ -- points out that Western sleep style is a relative outlier compared to sleeping in other cultures. More 'primitive' cultures have polyphasic sleep, with different stages of alertness, dozing, napping, disengaged, vigilance, etc.

  • Quote: Other cultures tend towards "multiple and multiage sleeping partners; frequent proximity of animals; embeddedness of sleep in ongoing social interaction; fluid bedtimes and wake times; use of nighttime for ritual, sociality, and information exchange; and relatively exposed sleeping locations that require fire maintenance and sustained vigilance."

____References____

[0] Huber R, Ghilardi MF, Massimini M, Tononi G, Local sleep and learning. Nature 430:6995, 78-81 (2004 Jul 1)
[1] Klintsova AY, Greenough WT, Synaptic plasticity in cortical systems. Curr Opin Neurobiol 9:2, 203-8 (1999 Apr)
[2] Vyazovskiy VV, Cirelli C, Pfister-Genskow M, Faraguna U, Tononi G, Molecular and electrophysiological evidence for net synaptic potentiation in wake and depression in sleep. Nat Neurosci 11:2, 200-8 (2008 Feb)
[3] Pavlides C, Winson J, Influences of hippocampal place cell firing in the awake state on the activity of these cells during subsequent sleep episodes. J Neurosci 9:8, 2907-18 (1989 Aug)
[4] Pompeiano M, Cirelli C, Arrighi P, Tononi G, c-Fos expression during wakefulness and sleep. Neurophysiol Clin 25:6, 329-41 (1995)
[5] Hill S, Tononi G, Modeling sleep and wakefulness in the thalamocortical system. J Neurophysiol 93:3, 1671-98 (2005 Mar)
[6] Aton SJ, Seibt J, Dumoulin M, Jha SK, Steinmetz N, Coleman T, Naidoo N, Frank MG, Mechanisms of sleep-dependent consolidation of cortical plasticity. Neuron 61:3, 454-66 (2009 Feb 12)

{705}
ref: Tononi-2006.02 tags: sleep synaptic homeostasis plasticity date: 03-20-2009 15:45 gmt

PMID-16376591[0] Sleep function and synaptic homeostasis.

  • Sleep keeps the neural network stable & the synaptic weights in check.
    • if you don't sleep do you get epilepsy?? don't have access to the article, would have to read it.

____References____

[0] Tononi G, Cirelli C, Sleep function and synaptic homeostasis. Sleep Med Rev 10:1, 49-62 (2006 Feb)

{680}
ref: Nishida-2007.04 tags: sleep spindle learning nap NREM date: 03-06-2009 17:56 gmt

PMID-17406665[0] Daytime naps, motor memory consolidation and regionally specific sleep spindles.

  • Asked subjects to learn a motor task with their non-dominant hand, then tested them 8 hours later.
  • Subjects allowed a 60-90 minute siesta improved their performance significantly relative to controls and to their previous performance.
  • When EEG activity of the non-learning hemisphere was subtracted from that of the learning hemisphere, spindle activity was strongly correlated with offline memory improvement.

____References____

[0] Nishida M, Walker MP, Daytime naps, motor memory consolidation and regionally specific sleep spindles. PLoS ONE 2:4, e341 (2007 Apr 4)