m8ta
{1447}
ref: -2006 tags: Mark Bear reward visual cortex cholinergic date: 03-06-2019 04:54 gmt revision:1 [0] [head]

PMID-16543459 Reward timing in the primary visual cortex

  • Used 192-IgG-saporin (a saporin immunotoxin) to selectively lesion cholinergic fibers locally in V1, in a visual stimulus -> delayed licking reward behavior.
  • Visual stimulus is full-field light, delivered to either the left or right eye.
    • This is scarcely a challenging task; perhaps they or others have followed up?
  • These examples illustrate that both cue 1-dominant and cue 2-dominant neurons recorded from intact animals express NRTs that appropriately reflect the new policy. Conversely, although cue 1- and cue 2-dominant neurons recorded from 192-IgG-saporin-infused animals are capable of displaying all forms of reward timing activity, '''they do not update their NRTs but rather persist in reporting the now outdated policy.'''
    • NRT = neural reaction time.
  • This needs to be controlled with recordings from other cortical areas.
  • Acquisition of the reward-based response is simultaneously interesting and boring -- what about the normal discriminative and perceptual function of the cortex?
  • See also follow-up work PMID-23439124 A cholinergic mechanism for reward timing within primary visual cortex.

{1412}
ref: -0 tags: deeplabcut markerless tracking DCN transfer learning date: 10-03-2018 23:56 gmt revision:0 [head]

Markerless tracking of user-defined features with deep learning

  • Human-level tracking with as few as 200 labeled frames.
  • No dynamics model -- tracking could be even better with a Kalman filter.
  • Uses a Google-trained DCN, 50 or 101 layers deep.
    • Network has a distinct read-out layer per feature to localize the probability of a body part to a pixel location.
  • Uses the DeeperCut network architecture / algorithm for pose estimation.
  • These deep features were trained on ImageNet.
  • Trained both with only the readout layers (the ResNet weights held fixed) and end-to-end; the latter performs better, unsurprisingly.
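The Kalman filter suggested above is straightforward to bolt on after the fact. A minimal sketch of smoothing one 2-D keypoint track with a constant-velocity motion model (numpy only; the `q`/`r` noise scales are illustrative, not tuned):

```python
import numpy as np

def kalman_smooth_track(xy, dt=1.0, q=1e-2, r=1.0):
    """Forward Kalman filter over a 2-D keypoint track under a
    constant-velocity model. xy: (T, 2) array of raw detections."""
    # State: [x, y, vx, vy]; observation: [x, y].
    F = np.eye(4)
    F[0, 2] = F[1, 3] = dt                      # position += velocity * dt
    H = np.zeros((2, 4)); H[0, 0] = H[1, 1] = 1.0
    Q = q * np.eye(4)                           # process noise
    R = r * np.eye(2)                           # measurement noise
    x = np.array([xy[0, 0], xy[0, 1], 0.0, 0.0])
    P = np.eye(4)
    out = np.empty_like(xy, dtype=float)
    out[0] = xy[0]
    for t in range(1, len(xy)):
        # Predict forward one frame.
        x = F @ x
        P = F @ P @ F.T + Q
        # Update with the raw per-frame detection.
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (xy[t] - H @ x)
        P = (np.eye(4) - K @ H) @ P
        out[t] = x[:2]
    return out
```

Raising `r` relative to `q` trusts the motion model more and smooths harder; a full Rauch-Tung-Striebel backward pass would smooth further still.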

{998}
ref: -0 tags: bookmark Cory Doctorow EFF SOPA internet freedom date: 01-01-2012 21:51 gmt revision:0 [head]

The Coming War on General Computation "M.P.s and Congressmen and so on are elected to represent districts and people, not disciplines and issues. We don't have a Member of Parliament for biochemistry, and we don't have a Senator from the great state of urban planning, and we don't have an M.E.P. from child welfare. "

{714}
ref: Maass-2002.11 tags: Maass liquid state machine expansion LSM Markram computation cognition date: 12-06-2011 07:17 gmt revision:2 [1] [0] [head]

PMID-12433288[0] Real-time computing without stable states: a new framework for neural computation based on perturbations.

  • It is shown that the inherent transient dynamics of the high-dimensional dynamical system formed by a sufficiently large and heterogeneous neural circuit may serve as universal analog fading memory. Readout neurons can learn to extract in real time from the current state of such recurrent neural circuit information about current and past inputs that may be needed for diverse tasks.
    • Stable states, e.g. as in Turing machines and attractor-based networks, are not required!
    • How does this compare to Shenoy's result that neuronal dynamics converge to a 'stable' point just before movement?
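The core claim -- that a linear readout can recover past inputs in real time from the transient state of a random recurrent network -- is easy to demonstrate with a rate-based echo state network, a close relative of the LSM. A sketch, not the authors' spiking model; all sizes and scalings here are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)
N, T, delay = 200, 2000, 5            # reservoir size, timesteps, memory lag

# Random recurrent weights, scaled to spectral radius 0.9 (< 1 gives
# fading memory: perturbations decay but linger in the transient state).
W = rng.normal(0.0, 1.0, (N, N))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))
W_in = rng.normal(0.0, 0.5, N)

u = rng.uniform(-1, 1, T)             # random input stream
x = np.zeros(N)
states = np.empty((T, N))
for t in range(T):
    x = np.tanh(W @ x + W_in * u[t])  # transient, history-dependent state
    states[t] = x

# Linear readout trained (ridge regression) to report the input
# `delay` steps in the past, from the *current* state only.
X, y = states[delay:], u[:-delay]
w = np.linalg.solve(X.T @ X + 1e-6 * np.eye(N), X.T @ y)
r = np.corrcoef(X @ w, y)[0, 1]       # correlation: readout vs. delayed input
print(r)
```

No attractor ever forms; the past input is recoverable only because the high-dimensional transient has not yet forgotten it.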

____References____

[0] Maass W, Natschläger T, Markram H, Real-time computing without stable states: a new framework for neural computation based on perturbations. Neural Comput 14:11, 2531-60 (2002 Nov)

{93}
ref: notes-0 tags: MCMC Monte carlo markov chain date: 0-0-2006 0:0 revision:0 [head]

In an MCMC, the invariant distribution is an eigenvector of the state-transition matrix with eigenvalue 1!

page 372 of http://www.inference.phy.cam.ac.uk/itprnn/book.pdf
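This is easy to verify numerically: for a row-stochastic transition matrix P, the invariant distribution is a left eigenvector of P (equivalently, an eigenvector of P-transpose) with eigenvalue 1, and any starting distribution converges to it. A toy 3-state chain (numpy; the matrix entries are arbitrary):

```python
import numpy as np

# A small Markov chain: rows of P are transition probabilities (row-stochastic).
P = np.array([[0.5, 0.4, 0.1],
              [0.2, 0.6, 0.2],
              [0.1, 0.3, 0.6]])

# pi satisfies pi P = pi, i.e. pi is an eigenvector of P.T with eigenvalue 1.
vals, vecs = np.linalg.eig(P.T)
pi = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))])
pi /= pi.sum()                        # normalize to a probability distribution

# Cross-check: iterate the chain from an arbitrary start; it converges to pi.
d = np.array([1.0, 0.0, 0.0])
for _ in range(200):
    d = d @ P
print(pi, d)                          # these agree
```

The Perron-Frobenius theorem guarantees that for an irreducible aperiodic chain this eigenvalue-1 eigenvector is unique and can be normalized to be nonnegative.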