m8ta
{5}
ref: bookmark-0 tags: machine_learning research_blog parallel_computing bayes active_learning information_theory reinforcement_learning date: 12-31-2011 19:30 gmt revision:3

hunch.net interesting posts:

  • debugging your brain - how to discover what you don't understand. a very intelligent viewpoint, worth rereading along with the comments. look at the data, stupid
    • quote: how to represent the problem is perhaps even more important in research since human brains are not as adept as computers at shifting and using representations. Significant initial thought on how to represent a research problem is helpful. And when it’s not going well, changing representations can make a problem radically simpler.
  • automated labeling - a great way to use a human 'oracle' to bootstrap to good performance, especially if the predictor can output a certainty value and hence ask the oracle all the 'tricky questions' (see the sketch after this list).
  • The design of an optimal research environment
    • Quote: Machine learning is a victim of its common success. It’s hard to develop a learning algorithm which is substantially better than others. This means that anyone wanting to implement spam filtering can do so. Patents are useless here—you can’t patent an entire field (and even if you could it wouldn’t work).
  • More recently: http://hunch.net/?p=2016
    • Problem is that online courses only imperfectly emulate the social environment of a college, which IMHO is useful for cultivating diligence.
  • The unrealized potential of the research lab. Quote: Muthu Muthukrishnan says “it’s the incentives”. In particular, people who invent something within a research lab have little personal incentive in seeing its potential realized, so they fail to pursue it as vigorously as they might in a startup setting.
    • The motivation (money!) is just not there.
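
The 'automated labeling' bullet above is essentially active learning by uncertainty sampling; here is a minimal Python sketch. The synthetic data, the LogisticRegression stand-in for the predictor, and the 20-query budget are all illustrative assumptions, not from the hunch.net post:

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 2))
    oracle_y = (X[:, 0] + X[:, 1] > 0).astype(int)  # hidden labels standing in for the human oracle

    # seed with a few labels from each class; the rest go in the unlabeled pool
    labeled = list(np.where(oracle_y == 0)[0][:5]) + list(np.where(oracle_y == 1)[0][:5])
    pool = [i for i in range(len(X)) if i not in labeled]

    clf = LogisticRegression()
    for _ in range(20):                              # 20 oracle queries
        clf.fit(X[labeled], oracle_y[labeled])
        p = clf.predict_proba(X[pool])[:, 1]
        # the 'tricky questions': the point the model is least certain about
        i = pool[int(np.argmin(np.abs(p - 0.5)))]
        labeled.append(i)                            # the oracle supplies this label
        pool.remove(i)

    print("accuracy on the full set:", clf.score(X, oracle_y))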

{7}
ref: bookmark-0 tags: book information_theory machine_learning bayes probability neural_networks mackay date: 0-0-2007 0:0 revision:0

http://www.inference.phy.cam.ac.uk/mackay/itila/book.html -- free! (but I liked the book, so I bought it :)

{29}
ref: bookmark-0 tags: machine_learning todorov motor_control date: 0-0-2007 0:0 revision:0

Iterative Linear Quadratic Regulator Design for Nonlinear Biological Movement Systems

  • paper for the International Conference on Informatics in Control, Automation and Robotics (ICINCO)

{37}
ref: bookmark-0 tags: Unscented sigma_point kalman_filter speech_processing machine_learning SDRE control UKF date: 0-0-2007 0:0 revision:0

{8}
ref: bookmark-0 tags: machine_learning algorithm meta_algorithm date: 0-0-2006 0:0 revision:0

Boosting, or AdaBoost - the idea is to update the discrete distribution used in training any algorithm so as to emphasize the points misclassified by the previous fit of the classifier. Sensitive to outliers, but relatively resistant to overfitting (a minimal sketch follows).
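
A minimal sketch of that reweighting loop in Python, with decision stumps as the weak learner; the stump learner, round count, and the y ∈ {-1, +1} label convention are illustrative choices, not from a particular reference:

    import numpy as np

    def stump_fit(X, y, w):
        """Weak learner: decision stump minimizing the w-weighted error."""
        best = (np.inf, 0, 0.0, 1)                   # (error, feature, threshold, polarity)
        for j in range(X.shape[1]):
            for t in np.unique(X[:, j]):
                for pol in (1, -1):
                    pred = np.where(pol * (X[:, j] - t) >= 0, 1, -1)
                    err = w[pred != y].sum()
                    if err < best[0]:
                        best = (err, j, t, pol)
        return best

    def adaboost(X, y, rounds=20):
        """X: (n, d) floats; y: labels in {-1, +1}."""
        w = np.full(len(y), 1.0 / len(y))            # the discrete distribution over points
        ensemble = []
        for _ in range(rounds):
            err, j, t, pol = stump_fit(X, y, w)
            err = max(err, 1e-12)
            alpha = 0.5 * np.log((1.0 - err) / err)  # vote weight of this round's stump
            pred = np.where(pol * (X[:, j] - t) >= 0, 1, -1)
            w *= np.exp(-alpha * y * pred)           # misclassified points gain weight...
            w /= w.sum()                             # ...so the next stump concentrates on them
            ensemble.append((alpha, j, t, pol))
        return ensemble

    def predict(ensemble, X):
        score = sum(a * np.where(p * (X[:, j] - t) >= 0, 1, -1)
                    for a, j, t, p in ensemble)
        return np.sign(score)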

{20}
ref: bookmark-0 tags: neural_networks machine_learning matlab toolbox supervised_learning PCA perceptron SOM EM date: 0-0-2006 0:0 revision:0

http://www.ncrg.aston.ac.uk/netlab/index.php n.b. kinda old. (or does that just mean well established?)

{43}
ref: bookmark-0 tags: machine_learning date: 0-0-2006 0:0 revision:0

http://www.iovs.org/cgi/reprint/46/4/1322.pdf

A related machine learning classifier, the relevance vector machine (RVM), has recently been introduced, which, unlike SVM, incorporates probabilistic output (probability of membership) through Bayesian inference. Its decision function depends on fewer input variables than SVM, possibly allowing better classification for small data sets with high dimensionality.

  • input data here is a number of glaucoma-correlated parameters.
  • " SVM is a machine classification method that directly minimizes the classification error without requiring a statistical data model. SVM uses a kernel function to find a hyperplane that maximizes the distance (margin) between two classes (or more?). The resultant model is spares, depending only on a few training samples (support vectors).
  • The RVM has the same functional form as the SVM within a Bayesian framework. This classifier is a sparse Bayesian model that provides probabalistic predictions (e.g. probability of glaucoma based on the training samples) through bayesian inference.
    • RVM outputs probabilities of membership rather than point estimates like SVM
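
To make the point-estimate vs. probability distinction concrete, a small Python sketch with scikit-learn's SVC; the RVM isn't in scikit-learn, so probability=True (Platt scaling of the SVM margin) merely stands in for the RVM's native Bayesian posterior. The data and parameters are made up for illustration:

    import numpy as np
    from sklearn.svm import SVC

    rng = np.random.default_rng(1)
    X = rng.normal(size=(200, 4))                 # e.g. four glaucoma-correlated parameters
    y = (X[:, 0] - X[:, 2] > 0).astype(int)

    svm = SVC(kernel="rbf", probability=True).fit(X, y)
    x_new = X[:3]
    print("signed margin (point estimate):", svm.decision_function(x_new))
    print("P(class = 1):                  ", svm.predict_proba(x_new)[:, 1])
    print("support vectors kept:", len(svm.support_))   # the sparsity mentioned above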

{61}
ref: bookmark-0 tags: smith_predictor motor_control wolpert cerebellum machine_learning prediction date: 0-0-2006 0:0 revision:0

http://prism.bham.ac.uk/pdf_files/SmithPred_93.PDF

  • quote in reference to models in which the cerebellum works as a smith predictor, e.g. feedforward prediction of the behavior of the limbs, eyes, trunk: Motor performance based on the use of such internal models would be degraded if the model was unavailable or inaccurate. These theories could therefore account for dysmetria, tremor, and dyssynergia, and perhaps also for increased reaction times.
  • note the difference between an inverse model (transforms an end target into a motor plan) and a forward model (used on-line in a tight feedback loop to predict the sensory consequences of the movement).
  • The difficulty becomes one of detecting mismatches between a rapid prediction of the outcome of a movement and the real feedback that arrives later in time (duh! :)
  • good set of notes on simple simulated smith predictor performance (compare the minimal sketch below).
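
In that spirit, a minimal discrete-time Smith predictor simulation in Python; the plant, delay, and gain are made-up parameters, not the paper's. Control acts on the undelayed internal model, corrected by the mismatch between delayed reality and the delayed prediction, which is exactly the mismatch-detection problem noted above:

    # plant: x[k+1] = a*x[k] + b*u[k], with sensory feedback delayed by d samples
    a, b = 0.9, 0.1
    d = 10                        # feedback delay, in samples
    kp = 2.0                      # proportional gain
    target = 1.0

    x = 0.0                       # true plant state
    xm = 0.0                      # internal forward-model state (no delay)
    buf = [0.0] * d               # delay line for real feedback
    mbuf = [0.0] * d              # delay line for the model's prediction of that feedback

    for k in range(200):
        delayed_y = buf.pop(0)            # feedback that arrives late
        delayed_ym = mbuf.pop(0)          # what the model predicted it would be
        # Smith predictor: act on the undelayed model output, corrected by the
        # mismatch between delayed reality and the delayed prediction
        y_hat = xm + (delayed_y - delayed_ym)
        u = kp * (target - y_hat)
        x = a * x + b * u                 # real plant
        xm = a * xm + b * u               # internal model (here assumed accurate)
        buf.append(x)
        mbuf.append(xm)

    # pure proportional control leaves a steady-state offset; the point is that
    # the loop stays stable despite the 10-sample feedback delay
    print(f"final output {x:.3f} vs target {target}")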

{66}
ref: bookmark-0 tags: machine_learning classification entropy information date: 0-0-2006 0:0 revision:0

http://iridia.ulb.ac.be/~lazy/ -- Lazy Learning.