m8ta
{1497}
ref: -2017 tags: human level concept learning through probabilistic program induction date: 01-20-2020 15:45 gmt revision:0

PMID-26659050 Human-level concept learning through probabilistic program induction

  • Preface:
    • How do people learn new concepts from just one or a few examples?
    • And how do people learn such abstract, rich, and flexible representations?
    • How can learning from such a sparse dataset succeed, and also produce such rich representations?
    • For any theory of learning, fitting a more complicated model requires more data, not less, to achieve good generalization, usually measured as the gap in performance between old and new examples.
  • Learning proceeds by constructing programs that best explain the observations under a Bayesian criterion, and the model 'learns to learn' by developing hierarchical priors that allow previous experience with related concepts to ease learning of new concepts.
  • These priors represent a learned inductive bias that abstracts the key regularities and dimensions of variation holding across both types of concepts and across instances.
  • BPL can construct new programs by reusing pieces of existing ones, capturing the causal and compositional properties of real-world generative processes operating on multiple scales.
  • Posterior inference requires searching the large combinatorial space of programs that could have generated a raw image.
    • Our strategy uses fast bottom-up methods (31) to propose a range of candidate parses.
    • That is, they reduce the character to a set of lines (series of line segments), then simplify the intersections of those lines, and run a series of parses to estimate the generation of those lines, with heuristic criteria to encourage continuity (e.g. no sharp angles, a penalty for abruptly changing direction, etc.).
    • The most promising candidates are refined by using continuous optimization and local search, forming a discrete approximation to the posterior distribution P(program, parameters | image); see the sketch below.
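  • A minimal sketch of that last step, in OCaml (the language used elsewhere in these notes). This is not the paper's code: the parse type and the log_prior / log_likelihood functions are hypothetical stand-ins for the BPL generative model; the point is only how a handful of scored candidates forms a discrete posterior approximation.

(* Weight each candidate parse by exp(log prior + log likelihood), normalized:
   a discrete approximation to P(program, parameters | image). *)
let approximate_posterior ~log_prior ~log_likelihood image parses =
  let scores = List.map (fun p -> log_prior p +. log_likelihood image p) parses in
  let m = List.fold_left max neg_infinity scores in  (* subtract max for numerical stability *)
  let ws = List.map (fun s -> exp (s -. m)) scores in
  let z = List.fold_left (+.) 0.0 ws in
  List.map2 (fun p w -> (p, w /. z)) parses ws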

{1454}
ref: -2011 tags: Andrew Ng high level unsupervised autoencoders date: 03-15-2019 06:09 gmt revision:7

Building High-level Features Using Large Scale Unsupervised Learning

  • Quoc V. Le, Marc'Aurelio Ranzato, Rajat Monga, Matthieu Devin, Kai Chen, Greg S. Corrado, Jeff Dean, Andrew Y. Ng
  • Input data: 10M random 200x200 frames from YouTube. Each video contributes only one frame.
  • Used local receptive fields to reduce communication requirements. 1000 computers, 16 cores each, 3 days.
  • "Strongly influenced by" Olshausen & Field {1448} -- but this is limited to a shallow architecture.
  • Lee et al 2008 show that stacked RBMs can model simple functions of the cortex.
  • Lee et al 2009 show that a convolutional DBN trained on faces can learn a face detector.
  • Their architecture: sparse deep autoencoder with
    • Local receptive fields: each feature of the autoencoder can connect to only a small region of the lower layer (i.e. non-convolutional)
      • Purely linear layer.
      • More biologically plausible & allows learning invariances beyond translational invariance (Le et al 2010).
      • No weight sharing means the network is extra large == 1 billion weights.
        • Still, the human visual cortex is about a million times larger in neurons and synapses.
    • L2 pooling (Hyvarinen et al 2009) which allows the learning of invariant features.
      • I.e. each pooling unit computes the square root of the sum of the squares of its inputs: a square-root nonlinearity.
    • Local contrast normalization -- subtractive and divisive (Jarrett et al 2009)
  • Encoding weights W_1 and decoding weights W_2 are adjusted to minimize the reconstruction error, penalized by 0.1 times the sparse pooling-layer activation; the latter term encourages the network to find invariances.
  • \min_{W_1, W_2} \sum_{i=1}^m \left( \|W_2 W_1^T x^{(i)} - x^{(i)}\|_2^2 + \lambda \sum_{j=1}^k \sqrt{\epsilon + H_j (W_1^T x^{(i)})^2} \right)
    • H_j are the weights to the j-th pooling element; \lambda = 0.1; m examples; k pooling units. (A numerical sketch of this objective follows at the end of this list.)
    • This is also known as reconstruction Topographic Independent Component Analysis.
    • Weights are updated through asynchronous SGD.
    • Minibatch size 100.
    • Note deeper autoencoders don't fare consistently better.
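  • A minimal numerical sketch of the objective above, in OCaml, for a single example with toy dimensions (assumes a recent compiler for Array.map2). The random weights and pooling matrix are hypothetical stand-ins, only to make the reconstruction and sparsity terms concrete; w1 below holds W_1^T directly (h x n).

let () = Random.self_init ()
let n, h, k = 8, 6, 3                     (* toy sizes: inputs, features, pooling units *)
let lambda = 0.1 and eps = 1e-8           (* lambda and epsilon from the objective *)

let rand_mat rows cols s =
  Array.init rows (fun _ -> Array.init cols (fun _ -> s *. (Random.float 2.0 -. 1.0)))
let w1 = rand_mat h n 0.1                 (* encoding weights, i.e. W_1^T *)
let w2 = rand_mat n h 0.1                 (* decoding weights W_2 *)
let hj = Array.map (Array.map abs_float) (rand_mat k h 1.0)  (* pooling weights H_j *)
let x = Array.init n (fun _ -> Random.float 2.0 -. 1.0)      (* one input x^(i) *)

let dot a b = Array.fold_left (+.) 0.0 (Array.map2 ( *. ) a b)
let matvec m v = Array.map (fun row -> dot row v) m

let z = matvec w1 x                       (* encode: W_1^T x *)
let diff = Array.map2 (-.) (matvec w2 z) x
let recon = dot diff diff                 (* ||W_2 W_1^T x - x||_2^2 *)
let z2 = Array.map (fun v -> v *. v) z
let pool = Array.map (fun row -> sqrt (eps +. dot row z2)) hj  (* sqrt(eps + H_j (W_1^T x)^2) *)
let loss = recon +. lambda *. Array.fold_left (+.) 0.0 pool
let () = Printf.printf "loss on this example: %f\n" loss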

{758}
ref: work-0 tags: ocaml toplevel ocamlfind date: 06-24-2009 14:52 gmt revision:1

OCaml has an interactive toplevel, but to make it useful (e.g. for inspecting the types of variables, or trying out code before compiling it), you need to import libraries and modules. If you have ocamlfind on your system (I think this is the only requirement..), do this with #use "topfind";; at the ocaml prompt, then #require "package1,package2,...";;. E.g.:

tlh24@chimera:~/svn/m8ta/yushin$ ledit | ocaml
        Objective Caml version 3.10.2

# #use "topfind";;
- : unit = ()
Findlib has been successfully loaded. Additional directives:
  #require "package";;      to load a package
  #list;;                   to list the available packages
  #camlp4o;;                to load camlp4 (standard syntax)
  #camlp4r;;                to load camlp4 (revised syntax)
  #predicates "p,q,...";;   to set these predicates
  Topfind.reset();;         to force that packages will be reloaded
  #thread;;                 to enable threads

- : unit = ()
# #require "bigarray,gsl";;
/usr/lib/ocaml/3.10.2/bigarray.cma: loaded
/usr/lib/ocaml/3.10.2/gsl: added to search path
/usr/lib/ocaml/3.10.2/gsl/gsl.cma: loaded
# #require "pcre,unix,str";;
/usr/lib/ocaml/3.10.2/pcre: added to search path
/usr/lib/ocaml/3.10.2/pcre/pcre.cma: loaded
/usr/lib/ocaml/3.10.2/unix.cma: loaded
/usr/lib/ocaml/3.10.2/str.cma: loaded
# Pcre.pmatch
  ;;
- : ?iflags:Pcre.irflag ->
    ?flags:Pcre.rflag list ->
    ?rex:Pcre.regexp ->
    ?pat:string -> ?pos:int -> ?callout:Pcre.callout -> string -> bool
= <fun>
# let m = Gsl_matrix.create 3 3;;
val m : Gsl_matrix.matrix = <abstr>
# m;;
- : Gsl_matrix.matrix = <abstr>
# m.{1,1};;
- : float = 6.94305623882282e-310
# m.{0,0};;
- : float = 6.94305568087725e-310
# m.{1,1} <- 1.0 ;;
- : unit = ()
# m.{2,2} <- 2.0 ;;
- : unit = ()
# let mstr = Marshal.to_string m [] ;;

Nice!
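
To avoid retyping these directives in every session, they can go in ~/.ocamlinit, which the toplevel reads automatically at startup (the package list here just mirrors the session above):

(* ~/.ocamlinit -- run by the ocaml toplevel at startup *)
#use "topfind";;
#require "bigarray,gsl,pcre,unix,str";;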