m8ta
{1523}
ref: -0 tags: tennenbaum compositional learning character recognition one-shot learning date: 09-29-2020 03:44 gmt revision:1 [0] [head]

One-shot learning by inverting a compositional causal process

  • Brenden Lake, Ruslan Salakhutdinov, Josh Tenenbaum
  • This is the paper that preceded the 2015 Science publication "Human-level concept learning through probabilistic program induction".
  • Because it's a NIPS paper, and not a Science paper, this one is a bit more accessible: the logic behind the details and developments is more apparent.
  • General idea: build up a fully probabilistic model of multi-language (Omniglot corpus) characters / tokens. This model includes things like character type / alphabet, number of strokes, curvature of strokes (parameterized via Bezier splines), where strokes attach to others (spatial relations), stroke scale, and character scale. The model (won't repeat the formal definition here) is factorized to be both compositional and causal, though most details of the conditional probabilities are left to the supplemental material.
  • They fit the complete model to the Omniglot data using gradient descent + image-space noising, i.e. they tweak the free parameters of the model to generate images that look like the human-drawn characters. (This too is in the supplement.)
  • Because the model is high-dimensional and hard to invert, they generate a perceptual model by winnowing down the image into a skeleton, then breaking this into a variable number of strokes.
    • The probabilistic model then assigns a log-likelihood to each of the parses.
    • They then use the model with Metropolis-Hastings MCMC to sample a region in parameter space around each parse -- but they also sample ψ (the character type) to get a greater weighted diversity of types.
      • Surprisingly, they don't estimate the image likelihood -- which is expensive -- here they just re-do the parsing based on aggregate info embedded in the statistical model. Clever.
  • ψ is the character type (a, b, c ...); ψ = {κ, S, R}, where κ is the number of strokes, S is a set of parameterized strokes, and R are the relations between strokes.
  • θ are the per-token stroke parameters.
  • I is the image, obvi.
  • Classification task: one image of a new character (c) vs. 20 new characters from the same alphabet (test, (t)). Among the 20 there is one character of the same type -- the task is to find it.
  • With 'hierarchical Bayesian program learning', they not only anneal the type to the parameters (with MCMC, above) for the test image, but also fit the parameters to the image using gradient descent.
    • The test-image parse is subsequently refit onto the class image (c).
      • Hence the best classification is the one where both are in the best agreement: $\underset{c}{argmax} \frac{P(c|t)}{P(c)} P(t|c)$, where P(c) is approximated from the parse weights.
      • Again, this is clever as it allows significant information leakage between (c) and (t) ...
        • The other models (Affine, Deep Boltzmann Machines, Hierarchical Deep Model) have nothing like this -- they are feed-forward.
  • No wonder HBPL performs better: it's a better model of the data, one with a bidirectional fitting routine. (A toy sketch of the factorization and the two-way classification score follows below.)
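
A minimal toy sketch (Python / numpy) of this structure -- invented names and placeholder densities, emphatically not the authors' code. It shows the causal factorization P(ψ) P(θ|ψ) P(I|θ) and the two-way classification score in the log domain:

    import numpy as np

    def log_p_type(psi):
        # log P(psi): toy prior over character types, favoring fewer strokes.
        kappa, strokes = psi   # psi = (kappa, strokes-array), stand-in for {kappa, S, R}
        return -float(kappa)

    def log_p_token(theta, psi):
        # log P(theta | psi): token-level motor noise around the type's strokes.
        _, strokes = psi
        d = theta - strokes.ravel()
        return -0.5 * float(d @ d)

    def log_p_image(image, theta):
        # log P(I | theta): toy pixel likelihood of a "rendered" token.
        rendered = np.zeros(image.size)
        n = min(theta.size, image.size)
        rendered[:n] = np.tanh(theta[:n])
        return -0.5 * float(np.sum((image.ravel() - rendered) ** 2))

    def log_joint(psi, theta, image):
        # Causal factorization: P(psi, theta, I) = P(psi) P(theta|psi) P(I|theta).
        return log_p_type(psi) + log_p_token(theta, psi) + log_p_image(image, theta)

    def classify(test_image, class_images, log_refit, log_parse_weight):
        # Two-way score from the note, in the log domain:
        #   argmax_c  log P(c|t) - log P(c) + log P(t|c),
        # where log_refit(a, b) stands in for refitting a's parses onto image b,
        # and log_parse_weight(i) approximates log P(c) from the parse weights.
        scores = [log_refit(c, test_image) - log_parse_weight(i)
                  + log_refit(test_image, c)
                  for i, c in enumerate(class_images)]
        return int(np.argmax(scores))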

  • As I read the paper, I had a few vague 'hedons':
    • Model building is essential. But unidirectional models are insufficient; if the models include the mechanism for their own inversion, many fitting and inference problems are solved. (Such is my intuition.)
      • As a corollary of this, having both forward and backward tags (links) can be used to neatly solve the binding problem. This should be easy in a computer w/ pointers, though in the brain I'm not sure how it might work (?!) without some sort of combinatorial explosion?
    • The fitting process has to be multi-pass or at least re-entrant. Both this paper and the Vicarious CAPTCHA paper feature statistical message passing to infer or estimate hidden explanatory variables (a toy example of such message passing follows this list). Seems correct.
    • The model here includes relations that are conditional on stroke parameters that occurred / were parsed beforehand; this is very appealing in that the model/generator/AI needs to be flexibly re-entrant to support hierarchical planning ...
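
Re: message passing, a toy sum-product example on a three-node chain (illustrative only; the graphical models in these papers are far richer):

    import numpy as np

    # Toy sum-product message passing on a chain x1 - x2 - x3 of binary
    # variables, with unary evidence and a shared pairwise smoothness.
    unary = np.array([[0.9, 0.1],    # evidence at x1
                      [0.5, 0.5],    # no evidence at x2
                      [0.2, 0.8]])   # evidence at x3
    pair = np.array([[0.8, 0.2],
                     [0.2, 0.8]])    # smoothness between neighbors

    # Forward messages m_{i-1 -> i} and backward messages m_{i+1 -> i}:
    fwd = [np.ones(2), None, None]
    for i in range(1, 3):
        fwd[i] = pair.T @ (unary[i - 1] * fwd[i - 1])
    bwd = [None, None, np.ones(2)]
    for i in range(1, -1, -1):
        bwd[i] = pair @ (unary[i + 1] * bwd[i + 1])

    # Marginal belief at each node = evidence * incoming messages, normalized.
    for i in range(3):
        b = unary[i] * fwd[i] * bwd[i]
        print(i, b / b.sum())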

{1462}
ref: -0 tags: 3D SHOT Alan Hillel Waller 2p photon holography date: 05-31-2019 22:19 gmt revision:4 [3] [2] [1] [0] [head]

PMID-29089483 Three-dimensional scanless holographic optogenetics with temporal focusing (3D-SHOT).

  • Pégard NC, Mardinly AR, Oldenburg IA, Sridharan S, Waller L, Adesnik H
  • Combines computer-generated holography and temporal focusing for single-shot (no scanning) two-photon photo-activation of opsins.
  • The beam intensity profile determines the dimensions of the custom temporal focusing pattern (CTFP), while phase, a previously unused degree of freedom, is engineered to make 3D holography and temporal focusing compatible.
  • "To ensure good diffraction efficiency of all spectral components by the SLM, we used a lens Lc to apply a small spherical phase pattern. The focal length was adjusted so that each spectral component of the pulse spans across the short axis of the SLM in the Fourier domain".
    • That is, they spatially and temporally defocus the pulse to better fill the SLM. The short axis of the SLM in this case is Y, per supplementary figure 2.
  • The image of the diffraction grating determines the plane of temporal focusing (with lenses L1 and L2); there is a secondary geometric focus due to Lc behind the temporal plane, which acts as an aberration.
  • The diffraction grating causes the temporal pattern to scan to produce a semi-spherical stimulated area ('disc').
  • Rather than creating a custom 3D holographic shape for each neuron, the SLM is placed after the diffraction grating -- it imposes phase and spatial modulation on the CTFP, effectively convolving it with a hologram of a cloud of points & hence replicating the CTFP at each point.
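
A hedged sketch of the point-cloud hologram idea: superposing prism (lateral) and Fresnel-lens (axial) phases per target point, a common CGH baseline -- an assumption for illustration, not necessarily the exact algorithm in the paper, and all parameter values here are made up:

    import numpy as np

    def point_cloud_phase(targets_um, n=512, pitch_um=9.2,
                          wavelength_um=1.04, focal_um=200e3):
        # targets_um: list of (x, y, z) displacements from the nominal focus.
        k = 2.0 * np.pi / wavelength_um
        xs = (np.arange(n) - n / 2.0) * pitch_um
        X, Y = np.meshgrid(xs, xs)
        field = np.zeros((n, n), dtype=complex)
        for (x, y, z) in targets_um:
            # Linear ("prism") phase steers the spot laterally by (x, y);
            # quadratic ("lens") phase shifts the focus axially by ~z.
            phase = (k * (x * X + y * Y) / focal_um
                     + k * z * (X ** 2 + Y ** 2) / (2.0 * focal_um ** 2))
            field += np.exp(1j * phase)
        # Phase-only SLM: keep only the argument of the superposed field.
        return np.angle(field)

    # e.g. replicate the CTFP at three spots (displacements in microns):
    slm_phase = point_cloud_phase([(0, 0, 0), (50, -30, 0), (0, 40, 20)])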

{1181}
ref: -0 tags: neural imaging recording shot noise redshirt date: 01-02-2013 02:20 gmt revision:0 [head]

http://www.redshirtimaging.com/redshirt_neuro/neuro_lib_2.htm

  • Shot Noise: The limit of accuracy with which light can be measured is set by the shot noise arising from the statistical nature of photon emission and detection.
    • If an ideal light source emits an average of N photons/ms, the RMS deviation in the number emitted is √N.
    • At high intensities the ratio N/√N = √N is large, and thus small changes in intensity can be detected. For example, at 10^10 photons/ms a fractional intensity change of 0.1% can be measured with a signal-to-noise ratio of 100.
    • On the other hand, at low intensities this ratio is small and only large signals can be detected. For example, at 10^4 photons/ms the same fractional change of 0.1% can be measured with a signal-to-noise ratio of 1 only after averaging 100 trials.
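
A quick arithmetic check of the numbers above: the shot-noise-limited SNR of a fractional change ΔF at N photons is ΔF · N/√N = ΔF · √N, with a √trials gain from averaging.

    import math

    def snr(photons_per_ms, fractional_change, trials=1):
        # SNR = dF * sqrt(N), improved by sqrt(trials) when averaging.
        return fractional_change * math.sqrt(photons_per_ms * trials)

    print(snr(1e10, 0.001))             # 100.0 -- high intensity, one trial
    print(snr(1e4, 0.001))              # 0.1   -- low intensity, one trial
    print(snr(1e4, 0.001, trials=100))  # 1.0   -- after averaging 100 trials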

{813}
ref: work-0 tags: kicadocaml zbuffer comparison picture screenshot date: 03-03-2010 16:38 gmt revision:4 [3] [2] [1] [0] [head]

Simple illustration of Kicadocaml with Z buffering enabled:

and disabled:

I normally use it with Z buffering enabled, but turn it off if, say, I want to clearly see all the track intersections, especially co-linear tracks or zero-length tracks. (Probably I should write something to merge and remove these automatically.) Note that in either case, tracks and modules are rendered back-to-front, which effects a Z-sorting of sorts; it is the GPU's Z buffer that is enabled/disabled here.
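
A hedged sketch of the toggle in question (PyOpenGL pseudocode, not kicadocaml's actual OCaml): geometry is drawn back-to-front regardless, and only the GPU depth test is switched.

    from OpenGL.GL import (GL_COLOR_BUFFER_BIT, GL_DEPTH_BUFFER_BIT,
                           GL_DEPTH_TEST, glClear, glDisable, glEnable)

    def draw_board(items, use_zbuffer):
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT)
        if use_zbuffer:
            glEnable(GL_DEPTH_TEST)   # GPU resolves occlusion per pixel
        else:
            glDisable(GL_DEPTH_TEST)  # overlapping / co-linear tracks stay visible
        # Painter's order either way: farthest items first.
        for item in sorted(items, key=lambda it: it.z, reverse=True):
            item.render()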

{812}
ref: -0 tags: kicadocaml screenshot picture date: 03-03-2010 05:53 gmt revision:2 [1] [0] [head]

Ain't she pretty?

More shots of the completed board (click for full resolution image):

  • whole thing, all layers.

  • Just the headstage, top and inner layer 2 only

  • Just the headstage, bottom and inner layers 2, 3 and 4.