Distinct in vivo dynamics of excitatory synapses onto cortical pyramidal neurons and parvalbumin-positive interneurons
 Joshua B. Melander, Aran Nayebi, Bart C. Jongbloets, Dale A. Fortin, Maozhen Qin, Surya Ganguli, Tianyi Mao, Haining Zhong
 Cre-dependent mVenus-labeled PSD-95, in both excitatory pyramidal neurons & inhibitory PV interneurons.
 morphology labeled with tdTomato
 Longitudinal imaging of individual excitatory postsynaptic densities; estimated synaptic weight from fluorescence intensity; examined spine appearance and disappearance.
 PV synapses were more stable over the 24-day period than synapses onto pyramidal neurons.
 Likewise, larger synapses were more likely to persist over the imaging period.
 Both populations followed log-normal distributions of synaptic strengths.
 Changes were well modeled by a Markov process that puts high probability on small changes.
 But these changes are multiplicative (plus an additive component in PV cells).
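A minimal sketch of this idea (my own toy parameters, not the paper's fitted model): a Kesten-like random walk in which each weight is multiplied by a random factor close to 1 at every step, so small changes dominate and the stationary shape is log-normal.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_weights(n_synapses=10_000, n_steps=200, sigma=0.05, additive=0.0):
    """Multiplicative Markov dynamics: each step scales every weight by a
    random factor near 1, so small changes are most probable; `additive`
    mixes in the extra additive component suggested for PV cells."""
    w = np.ones(n_synapses)
    for _ in range(n_steps):
        w *= np.exp(rng.normal(0.0, sigma, n_synapses))  # multiplicative kick
        w = np.maximum(w + additive * rng.normal(size=n_synapses), 1e-9)
    return w

# Purely multiplicative dynamics make the log-weights a Gaussian random walk,
# so the weight distribution itself is log-normal:
logw = np.log(simulate_weights())
print(round(float(logw.mean()), 2), round(float(logw.std()), 2))
```

Setting `additive > 0` breaks the exact log-normality slightly, which is one way to read the additive component reported for PV cells.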


PMID 27824044: Random synaptic feedback weights support error backpropagation for deep learning.
 "Here we present a surprisingly simple algorithm for deep learning, which assigns blame by multiplying error signals by random synaptic weights."
 Backprop multiplies error signals $e$ by the weight matrix $W^T$, the transpose of the forward synaptic weights.
 But the feedback weights do not need to be exactly $W^T$; any matrix $B$ will suffice, so long as on average:
 $e^T W B e > 0$
 meaning that the teaching signal $Be$ lies within 90° of the signal used by backprop, $W^T e$.
 Feedback alignment actually seems to work better than backprop in some cases. This relies on initializing the weights very small (though not exactly zero, since zero weights produce no output to learn from).
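These dynamics can be sketched in a few lines. Everything below is an illustrative assumption rather than the paper's setup: a two-layer linear network trained on a random linear target, with the hidden layer taught through a fixed random matrix `B` instead of `W.T`.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid, n_out = 20, 15, 10

T = rng.normal(size=(n_out, n_in))           # random linear target map
W0 = rng.normal(size=(n_hid, n_in)) * 0.01   # hidden weights, start near zero
W = rng.normal(size=(n_out, n_hid)) * 0.01   # output weights, start near zero
B = rng.normal(size=(n_hid, n_out))          # fixed random feedback weights

lr = 0.005
for _ in range(2000):
    E = T - W @ W0        # expected error map for unit-variance inputs
    W += lr * E @ W0.T    # output layer: ordinary delta rule
    W0 += lr * B @ E      # hidden layer: random feedback B instead of W.T

# Alignment check: e^T W B e > 0 means B @ e lies within 90 deg of W.T @ e.
e = rng.normal(size=n_out)                   # a random error probe
print(e @ W @ B @ e > 0)
```

The key line is the `W0` update: the hidden layer never sees `W.T`, yet training drives `W` into alignment with `B.T`, which is exactly what makes the random teaching signal useful.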
Our proof says that weights W0 and W evolve to equilibrium manifolds, but simulations (Fig. 4) and analytic results (Supplementary Proof 2) hint at something more specific: that when the weights begin near 0, feedback alignment encourages W to act like a local pseudoinverse of B around the error manifold. This fact is important because if B were exactly $W^+$ (the Moore-Penrose pseudoinverse of W), then the network would be performing Gauss-Newton optimization (Supplementary Proof 3). We call this update rule for the hidden units pseudo-backprop and denote it by $\Delta h_{PBP} = W^+ e$. Experiments with the linear network show that the angle $\Delta h_{FA} \angle \Delta h_{PBP}$ quickly becomes smaller than $\Delta h_{FA} \angle \Delta h_{BP}$ (Fig. 4b, c; see Methods). In other words, feedback alignment, despite its simplicity, displays elements of second-order learning.
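The Gauss-Newton connection can be seen directly: $W^+ e$ is the minimum-norm least-squares solution of $\min_{\Delta h} \|W \Delta h - e\|$, so in the linear case the pseudo-backprop update cancels the output error in a single step, which plain backprop's $W^T e$ does not. A small illustration (my own toy dimensions):

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(10, 20))    # forward weights: 20 hidden -> 10 output units
e = rng.normal(size=10)          # an error signal at the output

dh_pbp = np.linalg.pinv(W) @ e   # pseudo-backprop: Moore-Penrose pseudoinverse
dh_bp = W.T @ e                  # ordinary backprop direction, for comparison

# W has full row rank (almost surely for Gaussian entries), so pinv(W) is a
# right inverse and the pseudo-backprop step removes the output error exactly:
print(np.allclose(W @ dh_pbp, e))  # True
```

This one-step error cancellation on the linearized problem is the hallmark of Gauss-Newton behavior; feedback alignment only approximates it locally, which is the sense in which it shows second-order character.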
