[0] Mehta MR, Cortico-hippocampal interaction during up-down states and memory consolidation. Nat Neurosci 10:1, 13-5 (2007 Jan)
[1] Ji D, Wilson MA, Coordinated memory replay in the visual cortex and hippocampus during sleep. Nat Neurosci 10:1, 100-7 (2007 Jan)

[0] Dzirasa K, Ribeiro S, Costa R, Santos LM, Lin SC, Grosmark A, Sotnikova TD, Gainetdinov RR, Caron MG, Nicolelis MA, Dopaminergic control of sleep-wake states. J Neurosci 26:41, 10577-89 (2006 Oct 11)

ref: -0 tags: credit assignment distributed feedback alignment penn state MNIST fashion backprop date: 03-16-2019 02:21 gmt revision:1 [0] [head]

Conducting credit assignment by aligning local distributed representations

  • Alexander G. Ororbia, Ankur Mali, Daniel Kifer, C. Lee Giles
  • Propose two related algorithms: Local Representation Alignment (LRA)-diff and LRA-fdbk.
    • LRA-diff is basically a modified form of backprop.
    • LRA-fdbk is a modified version of feedback alignment. {1432} {1423}
  • Test on MNIST (easy -- many digits can be discriminated with one pixel!) and fashion-MNIST (harder -- humans only get about 85% right!)
  • Use a Cauchy or log-penalty loss at each layer, which is somewhat unique and interesting: L(z,y) = \sum_{i=1}^n \log(1 + (y_i - z_i)^2) .
    • This is hence a saturating loss.
  1. Normal multi-layer-perceptron feedforward network. Pre-activation h^\ell and post-activation z^\ell are stored.
  2. Update the weights to minimize loss. This gradient calculation is identical to backprop, only they constrain the update to have a norm no bigger than c_1 . Z and Y are the actual and desired outputs of the layer, as commented. The gradient includes the derivative of the nonlinear activation function.
  3. Generate an update for the pre-nonlinearity h^{\ell-1} to minimize the loss in the layer above. This again is very similar to backprop; it's the chain rule -- but the derivatives are vectors, of course, so those should be element-wise multiplications, not outer products (I think).
    1. Note h is updated -- derivatives of two nonlinearities.
  4. Feedback-alignment version, with random matrix E_\ell (elements drawn from a Gaussian distribution, \sigma = 1 ish).
    1. Only one nonlinearity derivative here -- bug?
  5. Move the rep and post activations in the specified gradient direction.
    1. Those \bar{h}^{\ell-1} variables are temporary holding -- but note that both lower and higher layers are updated.
  6. Do this K times, K = 1 to 50.
  • In practice K=1, with the LRA-fdbk algorithm, for the majority of the paper -- it works much better than LRA-diff (interesting .. bug?). Hence, this basically reduces to feedback alignment.
  • Demonstrate that LRA works much better with small initial weights, but basically because they tweak the algorithm to do this.
    • Need to see a positive control for this to be conclusive.
    • Again, why is FA so different from LRA-fdbk? Suspicious. Positive controls.
  • Attempted a network with Local Winner Take All (LWTA), which is a hard nonlinearity that LFA was able to account for & train through.
  • Also used Bernoulli neurons, and were able to successfully train. Unlike drop-out, these were stochastic at test time, and things still worked OK.
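The per-layer Cauchy loss and the K = 1 LRA-fdbk update sketched above can be written out in a few lines of numpy. This is a toy sketch, not the paper's actual setup: layer sizes, learning rate, the norm cap c_1, and the single fixed training example are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def cauchy_loss(z, y):
    """Per-layer saturating log-penalty loss: sum_i log(1 + (y_i - z_i)^2)."""
    return np.sum(np.log(1.0 + (y - z) ** 2))

def dcauchy(z, y):
    """Derivative of the Cauchy loss with respect to z."""
    return -2.0 * (y - z) / (1.0 + (y - z) ** 2)

# Tiny two-layer net: x -> (W1) -> h1 -> tanh -> z1 -> (W2) -> h2 -> tanh -> z2
n_in, n_hid, n_out = 8, 16, 4
W1 = rng.normal(0.0, 0.1, (n_hid, n_in))
W2 = rng.normal(0.0, 0.1, (n_out, n_hid))
E2 = rng.normal(0.0, 1.0, (n_hid, n_out))  # fixed random feedback matrix

x = rng.normal(size=n_in)                  # one toy input
y = rng.uniform(-0.9, 0.9, size=n_out)     # one toy target, inside tanh range

lr, c1 = 0.05, 1.0                         # learning rate, update-norm cap
losses = []
for step in range(200):
    # forward pass, storing pre- (h) and post- (z) activations
    h1 = W1 @ x; z1 = np.tanh(h1)
    h2 = W2 @ z1; z2 = np.tanh(h2)
    losses.append(cauchy_loss(z2, y))
    # top-layer error, including the tanh derivative (1 - z^2)
    e2 = dcauchy(z2, y) * (1.0 - z2 ** 2)
    # LRA-fdbk, K = 1: form a hidden-layer *target* through the random
    # matrix, then measure the local loss against that target
    y1 = z1 - E2 @ e2
    e1 = dcauchy(z1, y1) * (1.0 - z1 ** 2)
    # weight updates, each clipped to norm c1
    for W, e, inp in ((W2, e2, z1), (W1, e1, x)):
        g = np.outer(e, inp)
        n = np.linalg.norm(g)
        if n > c1:
            g *= c1 / n
        W -= lr * g

print(losses[0], losses[-1])  # loss should drop over training
```

With K = 1 there is no inner loop over target refinements, which is why this reduces to feedback alignment with a local saturating loss bolted on.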

Lit review.
  • Logistic sigmoid can slow down learning, due to its non-zero mean (Glorot & Bengio 2010).
  • Recirculation algorithm (or generalized recirculation) is a precursor for target propagation.
  • Target propagation is all about the inverse of the forward propagation: if we had access to the inverse of the network of forward propagations, we could compute which input values at the lower levels of the network would result in better values at the top that would please the global cost.
    • This is a very different way of looking at it -- almost backwards!
    • And indeed, it's not really all that different from contrastive divergence. (even though CD doesn't work well with non-Bernoulli units)
  • Contrastive Hebbian learning also has two phases, one to fantasize, and one to try to make the fantasies look more like the input data.
  • Decoupled neural interfaces (Jaderberg et al 2016): learn a predictive model of error gradients (and inputs) instead of trying to use local information to estimate updated weights.

  • Yeah, call me a critic, but I'm not clear on the contribution of this paper; it smells precocious and over-sold.
    • Even the title. I was hoping for something more 'local' than per-layer computation. BP does that already!
  • They primarily report supportive tests, not discriminative or stressing tests; how does the algorithm fail?
    • Certainly a lot of work went into it..
  • I still don't see how the computation of a target through a random matrix, then using the delta/loss/error between that target and the feedforward activation to update weights, is much different from propagating the errors directly through a random feedback matrix. E.g. subtract then multiply, or multiply then subtract?
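That last question can be checked in a few lines of numpy: with a plain squared-error local loss, forming a target through the random matrix and then subtracting it off gives exactly the same delta as multiplying the error through the matrix directly (the Cauchy loss would only rescale it element-wise). All names and sizes here are mine:

```python
import numpy as np

rng = np.random.default_rng(1)
z1 = rng.normal(size=16)        # feedforward hidden activation
e2 = rng.normal(size=4)         # top-layer error signal
E = rng.normal(size=(16, 4))    # fixed random feedback matrix

# "subtract then multiply": form a target through E, then take the delta
target = z1 - E @ e2
delta_via_target = z1 - target

# "multiply then subtract": propagate the error directly through E
delta_direct = E @ e2

print(np.allclose(delta_via_target, delta_direct))  # -> True
```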

ref: -0 tags: NMDA spike hebbian learning states pyramidal cell dendrites date: 10-03-2018 01:15 gmt revision:0 [head]

PMID-20544831 The decade of the dendritic NMDA spike.

  • NMDA spikes occur in the finer basal, oblique, and tuft dendrites.
  • Typically 40-50 mV, up to 100's of ms in duration.
  • Look similar to cortical up-down states.
  • Permit / form the substrate for spatially and temporally local computation on the dendrites that can enhance the representational or computational repertoire of individual neurons.

ref: -0 tags: review neural recording penn state extensive biopolymers date: 02-06-2017 23:09 gmt revision:0 [head]

PMID-24677434 A Review of Organic and Inorganic Biomaterials for Neural Interfaces

  • Not necessarily insightful, but certainly exhaustive review of all the various problems and strategies for neural interfacing.
  • Some emphasis on graphene, conductive polymers, and biological surface treatments for reducing FBR.
  • Cites 467 articles!

ref: -0 tags: NC state tap drill chart date: 08-02-2016 18:38 gmt revision:0 [head]


by way of: https://m.reddit.com/r/engineering/comments/4ry07t/does_anyone_have_a_stored_copy_of_this_tap_and/

ref: Chhatbar-2010.05 tags: Lee von Kraus Francis SUNY downstate electrode floating headpost date: 01-28-2013 01:06 gmt revision:1 [0] [head]

PMID-20153370[0] A bio-friendly and economical technique for chronic implantation of multiple microelectrode arrays

  • Nesting design -- the headpost is the only transcutaneous object.


[0] Chhatbar PY, von Kraus LM, Semework M, Francis JT, A bio-friendly and economical technique for chronic implantation of multiple microelectrode arrays. J Neurosci Methods 188:2, 187-94 (2010 May 15)

ref: Maass-2002.11 tags: Maass liquid state machine expansion LSM Markram computation cognition date: 12-06-2011 07:17 gmt revision:2 [1] [0] [head]

PMID-12433288[0] Real-time computing without stable states: a new framework for neural computation based on perturbations.

  • It is shown that the inherent transient dynamics of the high-dimensional dynamical system formed by a sufficiently large and heterogeneous neural circuit may serve as universal analog fading memory. Readout neurons can learn to extract in real time from the current state of such recurrent neural circuit information about current and past inputs that may be needed for diverse tasks.
    • Stable states, e.g. Turing machines and attractor-based networks, are not required!
    • How does this compare to Shenoy's result that neuronal dynamics converge to a 'stable' point just before movement?
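The fading-memory idea is easy to demo in the echo-state-network style (a close rate-unit cousin of the LSM): a fixed random recurrent 'liquid' plus a trained linear readout can recover past inputs from the current transient state alone, with no stable states anywhere. A minimal numpy sketch; reservoir size, spectral radius, delay, and ridge penalty are all illustrative choices of mine:

```python
import numpy as np

rng = np.random.default_rng(2)
N, T, delay = 100, 500, 3

# Fixed random recurrent 'liquid', scaled to spectral radius 0.9 so the
# dynamics have fading memory; W itself is never trained
W = rng.normal(0.0, 1.0, (N, N))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))
Win = rng.normal(0.0, 1.0, N)

u = rng.uniform(-1.0, 1.0, T)   # random input stream
x = np.zeros(N)
X = np.zeros((T, N))
for t in range(T):              # run the transient dynamics
    x = np.tanh(W @ x + Win * u[t])
    X[t] = x

# Readout task: recover the input from `delay` steps ago -- possible only
# if the current state carries a memory trace of past inputs
Xd, y = X[delay:], u[:-delay]
# ridge-regression readout; this linear map is the only trained component
w = np.linalg.solve(Xd.T @ Xd + 1e-3 * np.eye(N), Xd.T @ y)
mse = np.mean((Xd @ w - y) ** 2)
print(mse)  # small relative to var(u) = 1/3
```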


[0] Maass W, Natschläger T, Markram H, Real-time computing without stable states: a new framework for neural computation based on perturbations. Neural Comput 14:11, 2531-60 (2002 Nov)

ref: Hoffman-2007.1 tags: up down states neocortex SWS date: 03-20-2009 01:27 gmt revision:1 [0] [head]

PMID-17978020[0] The Upshot of Up States in the Neocortex: From Slow Oscillations to Memory Formation

  • slow waves are caused by spreading synchronous up/down depolarizations in the neocortex during SWS
    • the slow waves are thought to be generated intrinsically (?)
  • cortex is insensitive in up states, but highly sensitive to thalamic stimulation in down states? humm, need to see the data for that - from slices.
  • quote: "According to some theories of memory consolidation (Marr, 1971; Buzsáki, 1989; Squire, 1992; McClelland et al., 1995), memories are thought to be minted rapidly in the hippocampus during behavior and transferred to the neocortex during slow-wave sleep for long-term storage."
  • there is other stuff about 50-150 Hz activation in the hippocampus leading to neocortical activation, and that this is associated with transfer from labile hippocampus to long-term neocortex.
  • the review gives an impression of not being as concrete as, say, Buzsaki.


[0] Hoffman KL, Battaglia FP, Harris K, MacLean JN, Marshall L, Mehta MR, The upshot of up states in the neocortex: from slow oscillations to memory formation. J Neurosci 27:44, 11838-41 (2007 Oct 31)

ref: Mehta-2007.01 tags: hippocampus visual cortex wilson replay sleep learning states date: 03-09-2009 18:53 gmt revision:1 [0] [head]

PMID-17189946[0] Cortico-hippocampal interaction during up-down states and memory consolidation.

  • (from the associated review) Good pictorial description of how the hippocampus may impinge order upon the cortex:
    • During sleep the cortex is spontaneously and randomly active. Hippocampal activity is similarly disorganized.
    • During waking, the mouse/rat moves about in the environment, activating a sequence of place cells. The weights of the associated place cells are modified to reflect this sequence.
    • When the rat falls back to sleep, the hippocampus is still not random, and replays a compressed copy of the day's events to the cortex, which can then (and with other help, eg. ACh), learn/consolidate it.
  • see [1].


ref: Dzirasa-2006.1 tags: Kafui dopamine sleep REM state-diagram SCLin date: 10-05-2008 17:37 gmt revision:2 [1] [0] [head]

PMID-17035544[0] Dopaminergic control of sleep-wake states


ref: notes-0 tags: nordic nrf24L01 state diagram flowchart SPI blackfin date: 06-25-2008 02:44 gmt revision:7 [6] [5] [4] [3] [2] [1] [head]


The goal is to use an nRF24L01 to make an asymmetrical, bidirectional link. The outgoing bandwidth should be maximized, ~1.5mbps, and the incoming bandwidth can be much smaller, ~17kbps, though on both channels we want guaranteed latency, < 4ms for the outgoing data, and < 10ms for the incoming data. Furthermore, the processor that is being used to run this, a blackfin BF532, does not seem to play well when both SPI DMA is enabled and most CPU time is being spent in the SPORT ISR reading samples & processing them. Fortunately, the SPI port and SPORT can be run synchronously (provided the SPI port is clocked fast enough), allowing the processor to run one 'thread', e.g. no interrupts. It seems that with high-priority interrupts, the DMA engine is not able to service the SPI perfectly, and without DMA, data comes out of the SPI in drips and drabs, and cannot keep the radio's fifo full. Hence, one must program a synchronous radio controller, where states are stored in variables and not in the program counter (PC register, saved upon interrupt, etc).

As in other postings on the nRF24L01, the plan is to keep the transmit fifo full for most of the 4ms allowed by the free-running pll, then transition back into either standby-I mode, or send a status packet. The status packet is always acknowledged by the primary receiver with a command packet, and this allows both synchronization and incoming bandwidth. Therefore, there are 4 classes of transfers:

  1. just a status packet. After uploading, wait for TX_DS IRQ, transition to RX mode, wait for RX_DR irq, clear ce, read in the packet, and set back to TX mode.
  2. one data packet + status packet. There are timeouts on both the transmission of data packets and status packets; in this case, both have been exceeded. Here TX data state is entered, the packet is uploaded, CE is asserted, send the status packet, wait for IRQ from both packets. This requires a transition from tx data CE high state to tx status CSN low state.
  3. many data packets and one status packet. This is the same as above, only the data transmission was triggered by a full threshold in the outgoing packet queue (in processor ram). In this case, two packets are uploaded to the radio before waiting for a TX_DS IRQ, and, at the end of the process, we have to wait for two TX_DS IRQs after uploading the data packet.
  4. many data packets. This is straightforward - upload 2 packets, wait for 1 TX_DS IRQ, {upload another, wait for IRQ}(until packets are gone), wait for final IRQ, set CE low.
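The dispatch among the four transfer classes can be sketched as a pure decision function. The real controller is Blackfin C with the states held in variables as described above; this Python schematic only illustrates the class-selection logic, and the function name, argument names, and fifo threshold are my own inventions:

```python
# Schematic sketch (not the original Blackfin C) of picking which of the
# four transfer classes to run on a given cycle.  Thresholds are illustrative.
def choose_transfer(n_queued, data_timeout, status_timeout, fifo_threshold=8):
    """Return the transfer class for this cycle of the synchronous controller."""
    if n_queued == 0 and status_timeout:
        return "status_only"               # class 1: status, then RX for command
    if n_queued >= fifo_threshold:
        if status_timeout:
            return "many_data_plus_status" # class 3: queue full + status due
        return "many_data"                 # class 4: keep the TX fifo full
    if n_queued > 0 and data_timeout and status_timeout:
        return "one_data_plus_status"      # class 2: both timeouts exceeded
    return "idle"                          # stay in standby-I

print(choose_transfer(0, False, True))  # prints status_only
```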

screenshot of the derived code working (yea, my USB logic analyzer only runs on windows..yeck):

old versions:

ref: picture-0 tags: nordic state control diagram radio date: 10-22-2007 18:58 gmt revision:2 [1] [0] [head]

ref: Kerr-2004.01 tags: UP_DOWN states striatum cortex spike timing date: 0-0-2007 0:0 revision:0 [head]

PMID-14749432 Action Potential Timing Determines Dendritic Calcium during Striatal Up-States

  • striatum has up/down states too!
  • only read the abstract.