PMID-26659050 Human-level concept learning through probabilistic program induction
- Preface:
- How do people learn new concepts from just one or a few examples?
- And how do people learn such abstract, rich, and flexible representations?
- How can learning that succeeds from such a sparse dataset also produce such rich representations?
- For any theory of learning, fitting a more complicated model requires more data, not less, to achieve some measure of good generalization, usually the difference in performance between new and old examples.
- Learning proceeds by constructing programs that best explain the observations under a Bayesian criterion, and the model 'learns to learn' by developing hierarchical priors that allow previous experience with related concepts to ease learning of new concepts (a minimal sketch of this scoring idea follows after this list).
- These priors represent a learned inductive bias that abstracts the key regularities and dimensions of variation holding across both types of concepts and across instances.
- BPL can construct new programs by reusing pieces of existing ones, capturing the causal and compositional properties of real-world generative processes operating on multiple scales.
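- A minimal Python sketch of that Bayesian criterion, assuming binary numpy-array images and a stand-in stroke-based program representation; all names here (`Program`, `log_prior`, `log_likelihood`, `render`, `stroke_type_probs`) are hypothetical illustrations, not the paper's actual implementation:

```python
import math
from dataclasses import dataclass, field

@dataclass
class Program:
    """A candidate generative program: an ordered list of strokes,
    each stroke a list of (x, y) control points (hypothetical)."""
    strokes: list = field(default_factory=list)

def log_prior(program, stroke_type_probs):
    """Hierarchical prior: stroke types seen often across related
    concepts ('learning to learn') make a new program more probable."""
    score = 0.0
    for stroke in program.strokes:
        key = len(stroke)  # crude stroke 'type': number of control points
        score += math.log(stroke_type_probs.get(key, 1e-6))
    return score

def log_likelihood(image, program, render, noise=0.1):
    """Pixel-wise Bernoulli log-likelihood of a binary image under a
    (hypothetical) stochastic renderer that returns ink probabilities."""
    rendered = render(program)  # same shape as image, values in [0, 1]
    ll = 0.0
    for obs, ink in zip(image.ravel(), rendered.ravel()):
        p = (1.0 - noise) * ink + noise * 0.5  # mix in pixel noise
        ll += math.log(p) if obs else math.log(1.0 - p)
    return ll

def posterior_score(image, program, stroke_type_probs, render):
    """Unnormalized log posterior, the Bayesian criterion above:
    log P(program) + log P(image | program, parameters)."""
    return (log_prior(program, stroke_type_probs)
            + log_likelihood(image, program, render))
```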
-
- Posterior inference requires searching the large combinatorial space of programs that could have generated a raw image.
- Our strategy uses fast bottom-up methods (31) to propose a range of candidate parses.
- That is, they reduce the character to a set of lines (series of line segments), simplify the intersections of those lines, and run a series of parses to estimate how those lines were generated, with heuristic criteria to encourage continuity (e.g., no sharp angles, a penalty for abruptly changing direction).
- The most promising candidates are refined using continuous optimization and local search, forming a discrete approximation to the posterior distribution P(program, parameters | image).
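- A minimal sketch of this propose-then-refine strategy, yielding a small weighted set of parses as the discrete posterior approximation; `propose_parses`, `refine`, and `score` are hypothetical stand-ins for the bottom-up proposer, the continuous optimizer, and the log posterior score above:

```python
import math

def approximate_posterior(image, propose_parses, refine, score, k=5):
    """Return the top-k refined parses with normalized posterior weights."""
    # 1. Fast bottom-up proposals (cheap, possibly inaccurate).
    candidates = propose_parses(image)

    # 2. Keep the most promising candidates by unnormalized log score.
    candidates.sort(key=lambda parse: score(image, parse), reverse=True)
    best = candidates[:k]

    # 3. Refine each candidate with continuous optimization / local search.
    refined = [refine(image, parse) for parse in best]

    # 4. Turn log scores into normalized weights (softmax, stabilized
    #    by subtracting the max log score before exponentiating).
    log_scores = [score(image, parse) for parse in refined]
    m = max(log_scores)
    unnorm = [math.exp(s - m) for s in log_scores]
    total = sum(unnorm)
    return [(parse, w / total) for parse, w in zip(refined, unnorm)]
```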