[0] Mehta MR, Cortico-hippocampal interaction during up-down states and memory consolidation. Nat Neurosci 10:1, 13-5 (2007 Jan)
[1] Ji D, Wilson MA, Coordinated memory replay in the visual cortex and hippocampus during sleep. Nat Neurosci 10:1, 100-7 (2007 Jan)

[0] Ji D, Wilson MA, Coordinated memory replay in the visual cortex and hippocampus during sleep. Nat Neurosci 10:1, 100-7 (2007 Jan)

[0] Káli S, Dayan P, Off-line replay maintains declarative memories in a model of hippocampal-neocortical interactions. Nat Neurosci 7:3, 286-94 (2004 Mar)

[0] Foster DJ, Wilson MA, Reverse replay of behavioural sequences in hippocampal place cells during the awake state. Nature 440:7084, 680-3 (2006 Mar 30)

ref: -2020 tags: replay hippocampus variational autoencoder date: 10-11-2020 04:09 gmt revision:1 [0] [head]

Brain-inspired replay for continual learning with artificial neural networks

  • Gido M. van de Ven, Hava Siegelmann, Andreas Tolias
  • In the real world, samples are not replayed in shuffled order -- they occur in a sequence, typically only a few times. Hence, for training an ANN (or NN?), you need to 'replay' samples.
    • Perhaps, to get at hidden structure not obvious on first pass through the sequence.
    • In the brain, reactivation / replay likely serves to stabilize memories.
      • Strong evidence that this occurs through sharp-wave ripples (or the underlying activity associated with this).
  • Replay is also used to combat a common problem in training ANNs - catastrophic forgetting.
    • Generally you just re-sample from your database (easy), though in real-time applications, this is not possible.
      • It might also take a lot of memory (though that is cheap these days) or violate privacy (though again, who cares about that).

  • They study two different classification problems:
    • Task incremental learning (Task-IL)
      • Agent has to serially learn distinct tasks
      • OK for Atari, doesn't make sense for classification
    • Class incremental learning (Class-IL)
      • Agent has to learn one task incrementally, one/few classes at a time.
      • Like learning 2 digits at a time in MNIST
        • But is tested on all digits shown so far.
  • Solved via Generative Replay (GR, ~2017)
  • Use a recursive formulation: the 'old' generative model is used to generate samples, which are then labeled by the old classifier and fed, interleaved with the new samples, to the new network being trained.
    • 'Old' samples can be infrequent -- it's easier to reinforce an existing memory than to create a new one.
    • Generative model is a VAE.
  • Compared with some existing solutions to catastrophic forgetting:
    • Methods to protect parameters in the network important for previous tasks
      • Elastic weight consolidation (EWC)
      • Synaptic intelligence (SI)
        • Both methods maintain estimates of how influential parameters were for previous tasks, and penalize changes accordingly.
        • "metaplasticity"
        • Synaptic intelligence: measure the loss change relative to the individual weights.
        • \delta L = \int \frac{\delta L}{\delta \theta} \frac{\delta \theta}{\delta t} \delta t ; converted into discrete time / SGD: \delta L = \sum_k \omega_k , where \omega_k = \int \frac{\delta L}{\delta \theta_k} \frac{\delta \theta_k}{\delta t} \delta t
        • ω k\omega_k are then the weightings for how much parameter change contributed to the training improvement.
        • Use this as a per-parameter regularization strength, scaled by one over the square of 'how far it moved'.
        • This is added to the loss, so that the network is penalized for moving important weights.
    • Context-dependent gating (XdG)
      • To reduce interference between tasks, a random subset of neurons is gated off (inhibition), depending on the task.
    • Learning without forgetting (LwF)
      • Method replays current-task inputs after labeling them (possibly incorrectly) using the model trained on the previous tasks.
  • Generative replay works on Class-IL!
  • And is robust -- not too many samples or hidden units needed (for MNIST)
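
The recursive generative-replay scheme described above can be sketched as a toy loop: the 'old' model both generates pseudo-samples and labels them, and the new model trains on a mix of new data and these replayed pairs. `ToyModel` (a nearest-class-mean classifier whose "generator" just replays stored class means) is an illustrative stand-in for the paper's VAE + classifier, not the actual architecture:

```python
# Toy sketch of recursive generative replay: the 'old' model generates
# pseudo-inputs and labels them; the new model trains on new data
# interleaved with these replayed pairs. ToyModel is an illustrative
# stand-in for the paper's VAE/classifier.

class ToyModel:
    def __init__(self):
        self.means = {}  # class label -> (running mean of inputs, count)

    def train(self, samples):  # samples: list of (x, y) with scalar x
        for x, y in samples:
            m, n = self.means.get(y, (0.0, 0))
            self.means[y] = ((m * n + x) / (n + 1), n + 1)

    def classify(self, x):
        return min(self.means, key=lambda y: abs(self.means[y][0] - x))

    def generate(self, n):
        # deterministic round-robin over stored means; a real VAE would sample
        labels = sorted(self.means)
        return [self.means[labels[i % len(labels)]][0] for i in range(n)]

def learn_with_replay(old_model, new_samples, n_replay):
    """Train a fresh model on new data interleaved with replayed 'old' data."""
    replayed = []
    if old_model is not None:
        # old generator produces inputs; old classifier labels them
        replayed = [(x, old_model.classify(x))
                    for x in old_model.generate(n_replay)]
    new_model = ToyModel()
    new_model.train(new_samples + replayed)
    return new_model
```

Without the replayed pairs, the new model would only ever see the latest classes -- exactly the catastrophic-forgetting failure mode in Class-IL.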

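The synaptic-intelligence importance measure described above can be sketched for plain SGD on a toy problem as follows; the damping constant `xi` and all names here are illustrative assumptions, following the general recipe (accumulate each parameter's contribution to the loss decrease, normalize by squared distance moved, penalize later changes):

```python
# Toy sketch of synaptic intelligence (SI) under plain SGD: accumulate
# omega_k, each parameter's running contribution to the loss decrease,
# then convert to an importance used as a quadratic penalty strength
# on the next task. xi (damping) and names are illustrative assumptions.

def train_with_si(theta, grad_fn, lr=0.1, steps=50, xi=1e-3):
    """SGD that also returns per-parameter importance weights."""
    omega = [0.0] * len(theta)
    start = list(theta)
    for _ in range(steps):
        g = grad_fn(theta)
        for k in range(len(theta)):
            step = -lr * g[k]           # SGD update of parameter k
            omega[k] += -g[k] * step    # contribution to the loss *decrease*
            theta[k] += step
    # scale by one over the squared distance each parameter moved (+ damping)
    importance = [w / ((theta[k] - start[k]) ** 2 + xi)
                  for k, w in enumerate(omega)]
    return theta, importance

def si_penalty(theta, theta_old, importance, c=1.0):
    """Quadratic penalty added to the next task's loss for moving
    parameters that were important for the previous task."""
    return c * sum(imp * (t - t0) ** 2
                   for imp, t, t0 in zip(importance, theta, theta_old))
```
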
  • Yet the generative replay system does not scale to CIFAR or permuted MNIST.
  • E.g. if you take the MNIST pixels, permute them based on a 'task', and ask a network to still learn the character identities, it can't do it ... though synaptic intelligence can.
  • Their solution is to make 'brain-inspired' modifications to the network:
    • RtF, Replay-through-feedback: the classifier and generator network are fused. Latent vector is the hippocampus. Cortex is the VAE / classifier.
    • Con, Conditional replay: normal prior for the VAE is replaced with multivariate class-conditional Gaussian.
      • Not sure how they sample from this, check the methods.
    • Gat, Gating based on internal context.
      • Gating is only applied to the feedback layers, since for classification ... you don't a priori know the class!
    • Int, Internal replay. This is maybe the most interesting: rather than generating pixels, feedback generates hidden layer activations.
      • First layer of a network is convolutional, dependent on visual feature statistics, and should not change much.
        • Indeed, for CIFAR, they use pre-trained layers.
      • Internal replay proved to be very important!
    • Dist, Soft target labeling of the generated targets; cross-entropy loss when training the classifier on generated samples. Aka distillation.
  • Results suggest that regularization / metaplasticity (keeping memories in parameter space) and replay (keeping memories in function space) are complementary strategies,
    • And that the brain uses both to create and protect memories.
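
One plausible way to implement the class-conditional prior sampling mentioned above (the notes flag that the paper's exact scheme needs checking in the methods, so this is an assumption, with illustrative names and a diagonal-covariance simplification):

```python
# Hypothetical sampler for a class-conditional Gaussian latent prior:
# z ~ N(mu_c, diag(sigma_c^2)) for a chosen (or randomly drawn) class c.
# mus/sigmas would be fitted per class during training; here they are
# illustrative placeholders, not the paper's parameterization.
import random

def sample_conditional_latent(mus, sigmas, class_label=None):
    if class_label is None:
        class_label = random.choice(list(mus))  # uniform over known classes
    mu, sigma = mus[class_label], sigmas[class_label]
    z = [random.gauss(m, s) for m, s in zip(mu, sigma)]
    return class_label, z
```

The sampled (label, z) pair then drives the generator's feedback pass, giving replayed samples that already come with a class identity.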

  • When I first read this paper, it came across as a great story -- well thought out, well explained, a good level of detail, and sufficiently supported by data / lesioning experiments.
  • However, looking at the first author's publication record, it seems that he's been at this for more than 2-3 years ... things take time to do & publish.
  • Folding in of the VAE is satisfying -- taking one function approximator and using it to provide memory for another function approximator.
  • Also satisfying are the neurological inspirations -- and that full feedback to the pixel level was not required!
    • Maybe the hippocampus does work like this, providing high-level feature vectors to the cortex.
    • And it's likely that the cortex has some features of a VAE, e.g. able to perceive and imagine through the same nodes, just run in different directions.
      • The fact that both concepts led to an engineering solution is icing on the cake!

ref: Ribeiro-2004.12 tags: Sidarta Ribeiro reverberation sleep consolidation integration replay REM SWS date: 03-26-2009 03:19 gmt revision:2 [1] [0] [head]

PMID-15576886[0] Reverberation, storage, and postsynaptic propagation of memories during sleep

  • Many references in the first paragraph! They should switch to the [n] notation; the names are disruptive.
  • Show reverberation (is this measured in a scale-invariant way?) increases after novel object is placed in cage. Recorded from a single rat for up to 96 hours.
  • also looked at Zif-268 activation in the cortex (autoradiogram);
    • Previous results showed that Zif-268 levels are up-regulated in REM but not SWS in the hippocampus and cerebral cortex of exposed animals. (Ribeiro 1999)
    • hippocampal inactivation during REM sleep blocked zif-268 upregulation.
    • quote: "Increased activity is necessary but not sufficient to induce zif-268 expression, which also requires calcium inflow via NMDA channels and phosphorylation of the cAMP response element-binding protein (CREB)"
  • Sleep deprivation is much more detrimental to implicit than to explicit memory consolidation (Fowler et al. 1973; Karni et al. 1994; Smith 1995, 2001; Stickgold et al. 2000a; Laureys et al. 2002; Walker et al. 2002; Maquet et al. 2003; Mednick et al. 2003)


[0] Ribeiro S, Nicolelis MA, Reverberation, storage, and postsynaptic propagation of memories during sleep.Learn Mem 11:6, 686-96 (2004 Nov-Dec)

ref: Stickgold-2001.11 tags: review dream sleep REM NREM SWS learning memory replay date: 03-19-2009 17:09 gmt revision:7 [6] [5] [4] [3] [2] [1] [head]

PMID-11691983[0] Sleep, Learning, and Dreams: Off-line Memory Reprocessing

  • sleep can be broadly divided into REM (rapid eye movement) and NREM (non-rapid eye movement) sleep, with the REM-NREM cycle lasting 90 minutes in humans.
  • REM seems involved in proper binocular wiring in the visual cortex, development of problem solving skills, and discrimination tasks.
    • REM sleep seems as important as visual experience for wiring binocular vision.
  • REM seems critical for learning procedural memories, but not declarative (though the authors claim that the tasks used in declarative tests are too simple).
    • Depriving rats of REM sleep can impair procedural learning at test points up to a week later.
    • SWS may be better for consolidation of declarative memory.
  • Strongest evidence comes from a visual texture discrimination task, where improvements are only seen after REM sleep.
    • REM has also been shown to have an effect in learning of complex logic games, foreign language acquisition, and after intensive studying.
    • Anagram solving is stronger after being woken up from REM sleep. (!)
  • REM (hypothetically) involves NC -> hippocampus; SWS involves hippocampus -> NC (hence declarative memory). (Buzsáki 1996).
    • This may use theta waves, which enhance LTP in the hippocampus; the slow large depolarizations in SWS may facilitate LTP in the cortex.
  • Replay in the rat hippocampus:
    • replay occurs within layer CA1 during SWS for a half hour or so after learning, and in REM after 24 hours.
    • replay shifts from being in-phase with the theta wave activity (e.g. helping LTP) to being out of phase (coincident with troughs, possibly used to 'erase' memories from the hippocampus?); this is in accord with memories becoming hippocampally independent.
  • ACh levels are at waking levels or higher, and levels of NE (norepinephrine) & 5-HT go to near zero.
  • DLPFC (dorsolateral prefrontal cortex) is inhibited during REM sleep - presumably, this results in an inability to allocate attentional resources.
  • ACC (anterior cingulate cortex), MFC (medial frontal cortex), and the amygdala are highly active in REM sleep.
  • if you block correlates of learning (the PKA pathway, zif-268 genes) during REM, learning is impaired.
  • In the context of a multilevel system of sleep-dependent memory reprocessing, dreams represent the conscious awareness of complex brain systems involved in the reprocessing of emotions and memories during sleep.
    • the whole section on dreaming is really interesting!


[0] Stickgold R, Hobson JA, Fosse R, Fosse M, Sleep, learning, and dreams: off-line memory reprocessing.Science 294:5544, 1052-7 (2001 Nov 2)

ref: Mehta-2007.01 tags: hippocampus visual cortex wilson replay sleep learning states date: 03-09-2009 18:53 gmt revision:1 [0] [head]

PMID-17189946[0] Cortico-hippocampal interaction during up-down states and memory consolidation.

  • (from the associated review) Good pictorial description of how the hippocampus may impinge order upon the cortex:
    • During sleep the cortex is spontaneously and randomly active. Hippocampal activity is similarly disorganized.
    • During waking, the mouse/rat moves about in the environment, activating a sequence of place cells. The weights of the associated place cells are modified to reflect this sequence.
    • When the rat falls back to sleep, the hippocampus is still not random, and replays a compressed copy of the day's events to the cortex, which can then (with other help, e.g. ACh) learn/consolidate it.
  • see [1].


ref: Ji-2007.01 tags: hippocampus visual cortex wilson replay sleep date: 03-09-2009 18:48 gmt revision:3 [2] [1] [0] [head]

PMID-17173043[0] Coordinated memory replay in the visual cortex and hippocampus during sleep.

  • EEG from Layer 5 of the visual cortex.
  • used tetrodes.
  • rats were trained to alternate loops in a figure-8 maze to get at food.
  • the walls of the maze were lined with high-contrast cues.
  • data for correlated activity between ctx and hippocampus are weak - they just show that frame ('up' period in cellular activity) start & end times are correlated between the two regions. No surprise - they are in the same brain after all!
  • Found that cells in the deep visual cortex (V1 & V2) had localized firing fields. Rat vision is geared for navigation? (mostly?)
  • From this, they could show offline replay of the same sequence; these offline sequences were compressed by a factor of about 5-10.
    • shuffle tests on the replayed frames look pretty good - respectable degree of significance here.
    • Aside: possibly some of the noise of the recordings is reflective not of the noise of the system, but the noise / high dimensionality of the sensory input driving the visual ctx.
  • Also found some visual and some hippocampal cells that replayed sequences simultaneously; shuffle test here looks ok too.
  • picture from associated review, {692}


ref: Káli-2004.03 tags: hippocampus memory model Dayan replay learning memory date: 03-06-2009 17:53 gmt revision:1 [0] [head]

PMID-14983183[0] Off-line replay maintains declarative memories in a model of hippocampal-neocortical interactions

  • (i'm skimming the article)
  • The neocortex acts as a probabilistic generative model. unsupervised learning extracts categories, tendencies and correlations from the statistics of the inputs into the [synaptic weights].
  • Their hypothesis is that hippocampal replay is required for maintenance of episodic memories; their model and simulations support this.
  • quote: "However, the computational goal of episodic learning is storing individual events rather than discovering statistical structure, seemingly rendering consolidation inappropriate. If initial hippocampal storage of the episode already ensures that it can later be recalled episodically, then, barring practical advantages such as storage capacity (or perhaps efficiency), there seems little point in duplicating this capacity in neocortex." makes sense!


ref: Foster-2006.03 tags: hippocampus memory place cells reverse replay Wilson date: 03-06-2009 17:53 gmt revision:1 [0] [head]

PMID-16474382[0] Reverse replay of behavioral sequences in hippocampal place cells during the awake state.

  • wow: they show compressed reverse replay of firing sequences of hippocampal place cells during movement. While the rat is awake, too!
  • recorded up to 128 cells from the rat hippocampus; 4 animals.
  • the replay occurred while the rat was stopped, and lasted a few hundred milliseconds (~300).
  • phenomena appears to be very common, at least for the rats on the novel tracks.
  • replay events were coincident with ripples in the hippocampal EEG, which also occurs during sleep.
    • however, during slow-wave sleep, the replay was forward.
  • they offer a reasonable hypothesis for the reverse replay's function: it is used to propagate value information from the rewarded location backwards along incoming (behavioral) trajectories.
    • quote "awake replay represents efficient use of hard-won experience."