{1423}
PMID-27824044 Random synaptic feedback weights support error backpropagation for deep learning.
Our proof says that weights W0 and W evolve to equilibrium manifolds, but simulations (Fig. 4) and analytic results (Supplementary Proof 2) hint at something more specific: that when the weights begin near 0, feedback alignment encourages W to act like a local pseudoinverse of B around the error manifold. This fact is important because if B were exactly W⁺ (the Moore-Penrose pseudoinverse of W), then the network would be performing Gauss-Newton optimization (Supplementary Proof 3). We call this update rule for the hidden units "pseudobackprop" and denote it by ∆h_PBP = W⁺e. Experiments with the linear network show that the angle ∆h_FA ∡ ∆h_PBP quickly becomes smaller than ∆h_FA ∡ ∆h_BP (Fig. 4b, c; see Methods). In other words, feedback alignment, despite its simplicity, displays elements of second-order learning.
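To make the comparison concrete, here is a minimal numpy sketch (mine, not from the paper) that measures the angles between the three hidden-unit updates ∆h_BP = Wᵀe, ∆h_FA = Be, and ∆h_PBP = W⁺e for a single random linear layer; the dimensions and variable names are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)
n_hidden, n_out = 20, 10

W = 0.1 * rng.standard_normal((n_out, n_hidden))  # forward weights, hidden -> output
B = rng.standard_normal((n_hidden, n_out))        # fixed random feedback weights
e = rng.standard_normal(n_out)                    # output error vector

def angle_deg(u, v):
    # angle between two vectors, in degrees
    c = u @ v / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(np.clip(c, -1.0, 1.0)))

dh_bp  = W.T @ e                # backprop update, W^T e
dh_fa  = B @ e                  # feedback-alignment update, B e
dh_pbp = np.linalg.pinv(W) @ e  # pseudobackprop update, W^+ e

print('FA vs BP:  %.1f deg' % angle_deg(dh_fa, dh_bp))
print('FA vs PBP: %.1f deg' % angle_deg(dh_fa, dh_pbp))

With untrained random W and B, both angles sit near 90°; the paper's claim is about what happens during learning, namely that training under feedback alignment drives ∆h_FA into alignment with ∆h_PBP faster than with ∆h_BP, which is what tracking these two angles over the course of training (Fig. 4b, c) shows.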