[0] Ferrari PF, Rozzi S, Fogassi L. Mirror neurons responding to observation of actions made with tools in monkey ventral premotor cortex. J Cogn Neurosci 17(2):212-26 (2005 Feb)

ref: -0 tags: tungsten electropolishing hydroxide cleaning bath tartrate date: 03-28-2017 16:34 gmt revision:0 [head]

Method of electropolishing tungsten wire US 3287238 A

  • The bath is formed of 15% by weight sodium hydroxide, 30% by weight sodium potassium tartrate, and 55% by weight distilled water, with the bath temperature being between 70 and 100 °F.
    • If the concentration of either the hydroxide or the tartrate is below the indicated minimum, the wire is electrocleaned rather than electropolished, and a matte finish is obtained rather than a specular surface.
    • If the concentration of either the hydroxide or the tartrate is greater than the indicated maximum, the electropolishing process is quite slow.
  • The voltage which is applied between the two electrodes 18 and 20 is from 16 to 18.5 volts, the current through the bath is 20 to 24 amperes, and the current density is 3,000 to 4,000 amperes per square foot of surface of wire in the bath.
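The patent's figures can be sanity-checked with a little arithmetic: the component masses follow directly from the weight percentages, and the stated current and current-density ranges imply how much wire surface area can be in the bath (area = I / J). A minimal sketch, assuming the patent's numbers; the function names and the 10 kg batch size are illustrative, not from the patent:

```python
def bath_masses(total_kg):
    """Component masses for the 15% NaOH / 30% sodium potassium
    tartrate / 55% distilled water bath (percentages by weight)."""
    return {
        "sodium hydroxide": 0.15 * total_kg,
        "sodium potassium tartrate": 0.30 * total_kg,
        "distilled water": 0.55 * total_kg,
    }

def implied_wire_area_ft2(current_A, density_A_per_ft2):
    """Wire surface area in the bath implied by a total current
    and a target current density: area = I / J."""
    return current_A / density_A_per_ft2

masses = bath_masses(10.0)                   # e.g. a 10 kg batch
area_lo = implied_wire_area_ft2(20, 4000)    # smallest consistent area
area_hi = implied_wire_area_ft2(24, 3000)    # largest consistent area
```

So the patent's current and density ranges only make sense for a few thousandths of a square foot of wire surface in the bath at once, i.e. a short length of fine wire.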

ref: work-0 tags: Ng computational learning theory machine date: 10-25-2009 19:14 gmt revision:0 [head]

Andrew Ng's notes on learning theory

  • goes over the bias / variance tradeoff.
    • variance = the component of generalization error that comes from fitting the idiosyncrasies of a particular training set; a high-variance model can have low training error but large testing / generalization error (overfitting).
    • bias = the expected generalization error that remains even when the model is fit to a very large training set (underfitting).
  • proves that, with a sufficiently large training set, the training error will be close to the generalization error, uniformly over all hypotheses.
    • also gives an upper bound on the generalization error in terms of the training error and the number of hypotheses k available (for a finite hypothesis class)
    • this bound is only logarithmic in k, the number of hypotheses.
  • the training size m that a certain method or algorithm requires in order to achieve a certain level of performance is the algorithm's sample complexity.
  • shows that with an infinite hypothesis space, the number of training examples needed is at most linear in the VC dimension of the hypothesis class, which for many models is roughly linear in the number of parameters.
  • goes over the Vapnik-Chervonenkis dimension VC(H) = the size of the largest set that can be shattered by the hypothesis space H.
    • A hypothesis space shatters a set S if it can realize any labeling (binary, I think) of the points in S; see his diagram.
    • In order to prove that VC(H) is at least d, one only needs to exhibit a single set of size d that H can shatter.
  • There are more notes in the containing directory - http://www.stanford.edu/class/cs229/notes/
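The logarithmic dependence on k above is easy to see numerically. A minimal sketch, assuming the standard finite-hypothesis-class bound from those notes (Hoeffding plus a union bound): to guarantee |training error − generalization error| ≤ γ for all k hypotheses simultaneously, with probability ≥ 1 − δ, it suffices that m ≥ (1/(2γ²)) log(2k/δ). The function name here is illustrative:

```python
import math

def sample_complexity(k, gamma, delta):
    """Training-set size m sufficient to guarantee
    |train err - gen err| <= gamma for all k hypotheses at once,
    with probability >= 1 - delta (Hoeffding + union bound)."""
    return math.ceil((1.0 / (2.0 * gamma**2)) * math.log(2.0 * k / delta))

# multiplying k by a factor of 1000 barely moves m --
# the dependence on the number of hypotheses is logarithmic:
for k in (10, 10_000, 10_000_000):
    print(k, sample_complexity(k, gamma=0.05, delta=0.01))
```

Growing the hypothesis class by six orders of magnitude only roughly triples the required training-set size, which is the point of the "only logarithmic in k" remark.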

ref: Ferrari-2005.02 tags: tool use monkey neural response learning mirror neurons F5 date: 04-03-2007 22:44 gmt revision:1 [0] [head]

PMID-15811234 Mirror Neurons Responding to Observation of Actions Made with Tools in Monkey Ventral Premotor Cortex

  • respond when the monkey sees a human using a tool!