m8ta

{1472} 
ref: 0
tags: computational neuroscience opinion tony zador konrad kording lillicrap
date: 07302019 21:04 gmt
revision:0
[head]


Three papers out recently on arXiv and bioRxiv:
 
{1453}  
PMID22325196 Backpropagation through time and the brain
 
{1423}  
PMID27824044 Random synaptic feedback weights support error backpropagation for deep learning.
Our proof says that weights W0 and W evolve to equilibrium manifolds, but simulations (Fig. 4) and analytic results (Supplementary Proof 2) hint at something more specific: that when the weights begin near 0, feedback alignment encourages W to act like a local pseudoinverse of B around the error manifold. This fact is important because if B were exactly W+ (the Moore-Penrose pseudoinverse of W), then the network would be performing Gauss-Newton optimization (Supplementary Proof 3). We call this update rule for the hidden units 'pseudobackprop' and denote it by ∆h_PBP = W+ e. Experiments with the linear network show that the angle ∆h_FA ∠ ∆h_PBP quickly becomes smaller than ∆h_FA ∠ ∆h_BP (Fig. 4b, c; see Methods). In other words feedback alignment, despite its simplicity, displays elements of second-order learning.
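The three hidden-unit updates compared in that quote are easy to write down for a one-layer linear net. Below is a minimal NumPy sketch; the dimensions, the random W and B, and the error vector e are all made up for illustration (in the paper W is trained and B is fixed, so the alignment between the updates emerges over learning, which a single random draw won't show).

```python
import numpy as np

rng = np.random.default_rng(0)
n_hidden, n_out = 20, 10

# Hypothetical weights: W maps hidden -> output units,
# B is the fixed random feedback matrix of feedback alignment.
W = rng.normal(size=(n_out, n_hidden))
B = rng.normal(size=(n_hidden, n_out))
e = rng.normal(size=n_out)           # output-layer error vector

# The three candidate hidden-unit updates from the quote:
dh_BP  = W.T @ e                     # exact backprop
dh_FA  = B @ e                       # feedback alignment
dh_PBP = np.linalg.pinv(W) @ e       # 'pseudobackprop', W+ e

def angle_deg(u, v):
    """Angle between two update directions, in degrees."""
    c = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(np.clip(c, -1.0, 1.0)))

print(angle_deg(dh_FA, dh_PBP))      # ∆h_FA ∠ ∆h_PBP
print(angle_deg(dh_FA, dh_BP))       # ∆h_FA ∠ ∆h_BP
```

The paper's Fig. 4b,c tracks these two angles during training and finds the first shrinks below the second, i.e. the feedback-alignment update comes to resemble the Gauss-Newton (pseudoinverse) direction more than the gradient direction.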
{1422}  
PMID29205151 Towards deep learning with segregated dendrites https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5716677/
