m8ta
{1560}
ref: -2021 tags: synaptic imaging weights 2p oregon markov date: 12-29-2021 23:30 gmt revision:2 [1] [0] [head]

Distinct in vivo dynamics of excitatory synapses onto cortical pyramidal neurons and parvalbumin-positive interneurons

  • Joshua B. Melander, Aran Nayebi, Bart C. Jongbloets, Dale A. Fortin, Maozhen Qin, Surya Ganguli, Tianyi Mao, Haining Zhong
  • Cre-dependent mVenus-labeled PSD-95, in both excitatory pyramidal neurons & inhibitory PV interneurons.
  • Morphology labeled with tdTomato.
  • Longitudinal imaging of individual excitatory post-synaptic densities; estimated weight from fluorescence; examined spine appearance and disappearance.
  • PV synapses were more stable over the 24-day period than synapses on pyramidal neurons.
  • Likewise, large synapses were more likely to remain over the imaging period.
  • Both followed log-normal distributions in 'strengths'
  • Changes were well modeled by a Markov process, which puts high probability on small changes.
  • But these changes are multiplicative (+ an additive component in PV cells); a toy simulation of such a process is sketched below.
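
A minimal sketch of the kind of multiplicative Markov process described above (not the paper's fitted model): each weight is updated by a small multiplicative factor, with an optional additive term for the PV-like case. All parameter values here are illustrative assumptions.

```python
# Multiplicative (Kesten-like) weight updates: small changes are most probable,
# and the resulting distribution of strengths is approximately log-normal.
# Parameters are illustrative assumptions, not values fitted by Melander et al.
import numpy as np

rng = np.random.default_rng(0)

def simulate_weights(n_syn=5000, n_steps=24, sigma_mult=0.1, additive=0.0):
    w = rng.lognormal(mean=0.0, sigma=0.5, size=n_syn)              # initial strengths
    for _ in range(n_steps):
        mult = np.exp(rng.normal(0.0, sigma_mult, size=n_syn))      # multiplicative step
        add = additive * np.abs(rng.normal(0.0, 1.0, size=n_syn))   # additive step (PV-like)
        w = np.clip(w * mult + add, 1e-6, None)                     # weights stay positive
    return w

w_pyr = simulate_weights(additive=0.0)   # purely multiplicative ("pyramidal-like")
w_pv  = simulate_weights(additive=0.02)  # + additive component ("PV-like")

# log-transformed strengths should look roughly Gaussian, i.e. log-normal weights
print(np.mean(np.log(w_pyr)), np.std(np.log(w_pyr)))
print(np.mean(np.log(w_pv)),  np.std(np.log(w_pv)))
```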

{1423}
ref: -2014 tags: Lillicrap Random feedback alignment weights synaptic learning backprop MNIST date: 02-14-2019 01:02 gmt revision:5 [4] [3] [2] [1] [0] [head]

PMID-27824044 Random synaptic feedback weights support error backpropagation for deep learning.

  • "Here we present a surprisingly simple algorithm for deep learning, which assigns blame by multiplying error signals by a random synaptic weights.
  • Backprop multiplies error signals e by the weight matrix W^T, the transpose of the forward synaptic weights.
  • But the feedback weights do not need to be exactly W^T; any matrix B will suffice, so long as on average:
  • e^T W B e > 0
    • Meaning that the teaching signal Be lies within 90° of the signal used by backprop, W^T e.
  • Feedback alignment actually seems to work better than backprop in some cases. This relies on starting the weights very small (they can't be zero -- otherwise there is no output). A toy sketch of the rule follows this list.
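
A minimal numpy sketch of the feedback-alignment rule on a toy linear regression task (not the paper's MNIST setup): the error is projected back through a fixed random matrix B rather than through the transpose of the forward weights, and the forward weights start small. Layer sizes, weight scales, and the learning rate are illustrative assumptions; the printed angle shows the forward weights aligning with B over training.

```python
# Feedback alignment on a toy two-layer linear network (illustrative sketch).
# Backprop would project the error with W^T; FA uses a fixed random B instead.
import numpy as np

rng = np.random.default_rng(1)
n_in, n_hid, n_out, n_samples = 30, 20, 10, 1000

X = rng.normal(size=(n_samples, n_in))            # inputs
T = X @ rng.normal(size=(n_in, n_out))            # targets from a random linear teacher

W0 = rng.normal(scale=0.01, size=(n_in, n_hid))   # input -> hidden, small init
W  = rng.normal(scale=0.01, size=(n_hid, n_out))  # hidden -> output, small init
B  = rng.normal(scale=0.5,  size=(n_hid, n_out))  # fixed random feedback weights

lr = 0.01
for step in range(5001):
    h = X @ W0                        # hidden activity (linear, as in the paper's analysis)
    e = T - h @ W                     # error signal
    dh_fa = e @ B.T                   # feedback alignment: random B in place of W^T
    dh_bp = e @ W.T                   # what backprop would have used
    W  += lr * h.T @ e / n_samples
    W0 += lr * X.T @ dh_fa / n_samples
    if step % 1000 == 0:
        cos = np.sum(dh_fa * dh_bp) / (np.linalg.norm(dh_fa) * np.linalg.norm(dh_bp))
        angle = np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))
        print(f"step {step}: mse {np.mean(e**2):.3f}, FA-vs-BP angle {angle:.1f} deg")
```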

From the paper: "Our proof says that weights W_0 and W evolve to equilibrium manifolds, but simulations (Fig. 4) and analytic results (Supplementary Proof 2) hint at something more specific: that when the weights begin near 0, feedback alignment encourages W to act like a local pseudoinverse of B around the error manifold. This fact is important because if B were exactly W^+ (the Moore-Penrose pseudoinverse of W), then the network would be performing Gauss-Newton optimization (Supplementary Proof 3). We call this update rule for the hidden units pseudobackprop and denote it by ∆h_PBP = W^+ e. Experiments with the linear network show that the angle ∆h_FA ∠ ∆h_PBP quickly becomes smaller than ∆h_FA ∠ ∆h_BP (Fig. 4b, c; see Methods). In other words, feedback alignment, despite its simplicity, displays elements of second-order learning."
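
To make the quoted angle comparison concrete, here is a small helper (names are mine, not the paper's code) computing the angles between the three hidden-unit updates ∆h_BP = W^T e, ∆h_FA = B e, and ∆h_PBP = W^+ e. With fresh random W and B both angles sit near 90°; the paper's claim concerns how they separate once W has evolved under feedback-alignment training.

```python
# Angles between the three hidden-unit update directions discussed above.
# Conventions follow the quote: y = W h, so W maps hidden -> output.
import numpy as np

def angle_deg(u, v):
    """Angle in degrees between two vectors."""
    c = u @ v / (np.linalg.norm(u) * np.linalg.norm(v))
    return float(np.degrees(np.arccos(np.clip(c, -1.0, 1.0))))

rng = np.random.default_rng(2)
n_hid, n_out = 20, 10
W = rng.normal(size=(n_out, n_hid))     # forward weights, hidden -> output
B = rng.normal(size=(n_hid, n_out))     # fixed random feedback weights
e = rng.normal(size=n_out)              # an error signal

dh_BP  = W.T @ e                        # backprop
dh_FA  = B @ e                          # feedback alignment
dh_PBP = np.linalg.pinv(W) @ e          # pseudobackprop (Moore-Penrose pseudoinverse)

print("FA vs BP :", angle_deg(dh_FA, dh_BP))
print("FA vs PBP:", angle_deg(dh_FA, dh_PBP))
# With untrained random W and B both angles are close to 90 deg; the claim is
# that during FA training W drifts so the FA-vs-PBP angle becomes the smaller one.
```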