[0] Jackson A, Mavoori J, Fetz EE. Correlations between the same motor cortex cells and arm muscles during a trained task, free behavior, and natural sleep in the macaque monkey. J Neurophysiol 97(1):360-74 (2007 Jan)

ref: work-0 tags: distilling free-form natural laws from experimental data Schmidt Cornell automatic programming genetic algorithms date: 12-30-2021 05:11 gmt revision:7 [6] [5] [4] [3] [2] [1] [head]

Distilling free-form natural laws from experimental data

  • The critical step was to use the full set of all pairs of partial derivatives ( δx/δy ) to evaluate the search for invariants.
  • The selection of which partial derivatives are held to be independent / which variables are dependent is a bit of a trick too -- see the supplemental information.
    • Even so, with a 4D data set the search for natural laws took ~30 hours.
  • This was via a genetic algorithm, distributed among 'islands' on different CPUs, with mutation and single-point crossover.
  • Not sure what the IL is, but it appears to be floating-point assembly.
  • Timeseries data is smoothed with Loess smoothing, which fits local polynomials to the data, and hence allows for smoother / more analytic derivative calculation.
    • Then again, how long did it take humans to figure out these invariants? (Went about it in a decidedly different way..)
    • Further, how long did it take for biology to discover similar 'design equations'?
      • The same algorithm has been applied to biological data - a metabolic pathway - with some success (published 2011).
      • Of course evolution had to explore a much larger space - proteins and regulatory pathways, not simpler mathematical expressions / linkages.
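The pairwise partial-derivative criterion can be sketched numerically: a candidate invariant f(x, y) is scored by how well the ratio of its symbolic partials predicts the Δx/Δy measured from the data. A minimal toy sketch (the conserved quantity, trajectory, and scoring function here are illustrative, not the paper's actual implementation):

```python
import math

# Trajectory that conserves f(x, y) = x^2 + y^2 (a unit circle),
# sampled away from x = 0 and y = 0 so all the ratios stay finite.
ts = [0.2 + 0.01 * i for i in range(111)]
xs = [math.cos(t) for t in ts]
ys = [math.sin(t) for t in ts]

def score(dfdx, dfdy, xs, ys):
    """Mean |Δx/Δy - predicted dx/dy| over the trajectory.

    Along a level set of f, implicit differentiation gives
    dx/dy = -(∂f/∂y) / (∂f/∂x); good invariants make this error small.
    """
    errs = []
    for i in range(len(xs) - 1):
        dx, dy = xs[i + 1] - xs[i], ys[i + 1] - ys[i]
        pred = -dfdy(xs[i], ys[i]) / dfdx(xs[i], ys[i])
        errs.append(abs(dx / dy - pred))
    return sum(errs) / len(errs)

# True invariant f = x^2 + y^2 vs. a wrong candidate f = x + y
good = score(lambda x, y: 2 * x, lambda x, y: 2 * y, xs, ys)
bad = score(lambda x, y: 1.0, lambda x, y: 1.0, xs, ys)
assert good < bad  # a GA fitness based on this would prefer x^2 + y^2
```

In the paper this score drives the genetic algorithm's selection; here the two candidates are just evaluated directly.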

Since his PhD, Michael Schmidt has gone on to found Nutonian, which produced the Eureqa software, apparently without dramatic new features other than being able to use the cloud for equation search. (Probably he improved many other detailed facets of the software..) Nutonian received $4M in seed funding, according to Crunchbase.

In 2017, Nutonian was acquired by DataRobot (for an undisclosed amount), where Michael has worked since, rising to the title of CTO.

Always interesting to follow up on the authors of these classic papers!

ref: Jackson-2007.01 tags: Fetz neurochip sleep motor control BMI free behavior EMG date: 09-13-2019 02:21 gmt revision:4 [3] [2] [1] [0] [head]

PMID-17021028[0] Correlations Between the Same Motor Cortex Cells and Arm Muscles During a Trained Task, Free Behavior, and Natural Sleep in the Macaque Monkey

  • used their implanted "neurochip" recorder that recorded both EMG and neural activity. The neurochip buffers data and transmits via IR offline. It doesn't have all that much flash onboard - 16Mb.
    • used teflon-insulated 50um tungsten wires.
  • confirmed that there is a strong causal relationship, constant over the course of weeks, between motor cortex units and EMG activity.
    • some causal relationships between neural firing and EMG varied depending on the task. Additive / multiplicative encoding?
  • this relationship was different at night, during REM sleep, though (?)
  • point out, as Todorov did, that stereotyped motion imposes correlations between movement parameters, which could lead to spurious relationships being mistaken for neural coding.
    • Experiments with naturalistic movement are essential for understanding innate, untrained neural control.
  • references {597} Suner et al 2005 as a previous study of long term cortical recordings. (utah probe)
  • during sleep, M1 cells exhibited a cyclical pattern of quiescence followed by periods of elevated activity;
    • the cycle lasted 40-60 minutes;
    • EMG activity was seen at entrance and exit to the elevated activity period.
    • during periods of highest cortical activity, muscle activity was completely suppressed.
    • peak firing rates were above 100 Hz! (mean: 12-16 Hz).


ref: -0 tags: variational free energy inference learning bayes curiosity insight Karl Friston date: 02-15-2019 02:09 gmt revision:1 [0] [head]

PMID-28777724 Active inference, curiosity and insight. Karl J. Friston, Marco Lin, Christopher D. Frith, Giovanni Pezzulo.

  • This has been my intuition for a while; you can learn abstract rules via active probing of the environment. This paper supports such intuitions with extensive scholarship.
  • “The basic theme of this article is that one can cast learning, inference, and decision making as processes that resolve uncertainty about the world.”
    • References Schmidhuber 1991
  • “A learner should choose a policy that also maximizes the learner’s predictive power. This makes the world both interesting and exploitable.” (Still and Precup 2012)
  • “Our approach rests on the free energy principle, which asserts that any sentient creature must minimize the entropy of its sensory exchanges with the world.” Ok, that might be generalizing things too far..
  • Levels of uncertainty:
    • Perceptual inference: uncertainty about the causes of sensory outcomes under a particular policy
    • Uncertainty about policies or about future states of the world, outcomes, and the probabilistic contingencies that bind them.
  • For the last element (probabilistic contingencies between the world and outcomes), they employ Bayesian model selection / Bayesian model reduction
    • This can operate not only on the data, but also on the initial model alone.
    • “We use simulations of abstract rule learning to show that context-sensitive contingencies, which are manifest in a high-dimensional space of latent or hidden states, can be learned with straightforward variational principles (i.e. minimization of free energy).”
  • Assume that initial states and state transitions are known.
  • Perception or inference about hidden states (i.e. state estimation) corresponds to inverting a generative model given a sequence of outcomes, while learning involves updating the parameters of the model.
  • The actual task is quite simple: central fixation leads to a color cue. The cue + peripheral color determines which way to saccade.
  • Gestalt: Good intuitions, but I’m left with the impression that the authors overexplain and / or make the description more complicated than it need be.
    • The actual number of parameters to be inferred is rather small -- 3 states in 4 (?) dimensions, and these parameters are not hard to learn by minimizing the variational free energy:
    • F = D[Q(x) || P(x)] - E_Q[ln P(o_t | x)], where D is the Kullback-Leibler divergence.
      • Mean field approximation: Q(x) is fully factored (not here). many more notes
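The free-energy formula is easy to check numerically for a two-state model: F equals the surprise -ln P(o) exactly when Q is the true posterior, and upper-bounds it for any other Q. A small sketch (the prior and likelihood numbers are arbitrary, chosen only for illustration):

```python
import math

def free_energy(q, prior, lik):
    """F = D[Q(x)||P(x)] - E_Q[ln P(o|x)] for discrete distributions."""
    kl = sum(qi * math.log(qi / pi) for qi, pi in zip(q, prior))
    e_loglik = sum(qi * math.log(li) for qi, li in zip(q, lik))
    return kl - e_loglik

prior = [0.7, 0.3]   # P(x) over two hidden states
lik = [0.2, 0.9]     # P(o|x) for the observed outcome o
evidence = sum(p * l for p, l in zip(prior, lik))
surprise = -math.log(evidence)                        # -ln P(o)
posterior = [p * l / evidence for p, l in zip(prior, lik)]

f_flat = free_energy([0.5, 0.5], prior, lik)  # arbitrary Q: F > surprise
f_post = free_energy(posterior, prior, lik)   # optimal Q: F == surprise
```

Minimizing F over Q therefore performs exact inference in this tiny case; the paper's models just do the same thing in a much larger state space, with a factored (mean field) Q.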

ref: -0 tags: molecule mean free path vacuum date: 05-01-2016 03:16 gmt revision:0 [head]

Useful numbers for estimating molecular mean-free-path in vacuum systems:


Pressure (Pa)   Torr          Mean free path
0.01 Pa         7.5e-5 torr   4.8 m
10 Pa           75 mTorr      4.8 mm
30 Pa           225 mTorr     1.6 mm
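The rows scale as λ ∝ 1/P, consistent with the kinetic-theory formula λ = kT / (√2 π d² P). A sketch that reproduces the table (note: the effective diameter below is back-solved from the table's own λ·P ≈ 0.048 Pa·m, not a standard tabulated value for a particular gas):

```python
import math

K_B = 1.380649e-23    # Boltzmann constant, J/K
PA_PER_TORR = 133.322

def mean_free_path(p_pa, d_m, t_k=293.0):
    """Kinetic-theory mean free path: lambda = kT / (sqrt(2) * pi * d^2 * P)."""
    return K_B * t_k / (math.sqrt(2) * math.pi * d_m**2 * p_pa)

def pa_to_torr(p_pa):
    return p_pa / PA_PER_TORR

# Effective molecular diameter back-solved so the formula matches the
# table above (lambda * P ~ 0.048 Pa m); an assumption, not a handbook value.
D_EFF = math.sqrt(K_B * 293.0 / (math.sqrt(2) * math.pi * 0.048))

for p in (0.01, 10.0, 30.0):
    print(f"{p} Pa = {pa_to_torr(p):.2e} torr, mfp = {mean_free_path(p, D_EFF):.1e} m")
```

Handy for interpolating to pressures between the tabulated rows.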

ref: -0 tags: bookmark Cory Doctorow EFF SOPA internet freedom date: 01-01-2012 21:51 gmt revision:0 [head]

The Coming War on General Computation "M.P.s and Congressmen and so on are elected to represent districts and people, not disciplines and issues. We don't have a Member of Parliament for biochemistry, and we don't have a Senator from the great state of urban planning, and we don't have an M.E.P. from child welfare. "

ref: work-0 tags: no free lunch wolpert coevolution date: 07-19-2010 12:54 gmt revision:2 [1] [0] [head]


  • Just discovered this. It makes perfect sense - bias-free learning is 'futile'. Learning must be characterized by its biases, which enable faster or better results in particular problem domains.
  • Equivalently: any two algorithms are equivalent when their performance is averaged across all possible problems. (This is not as strong as it sounds, as most problems will never be encountered).
  • Wolpert 1996 provides an excellent geometric interpretation of this: the quality of the search/optimization algorithm within a particular domain is proportional to the inner product of its expected search stream with the actual (expected?) probability distribution of the data.
  • However! with coevolutionary algorithms, there can be a free lunch - "in coevolution some algorithms have better performance than other algorithms, averaged across all possible problems." Wolpert 2005
    • claims that this does not (??) hold in biological evolution, where there is no champion. Yet biology seems all about co-evolution.
    • coevolution of a backgammon player details how it may be coevolution + the structure of the backgammon game, not reinforcement learning, which led Tesauro to his championship-level player. Specifically, coevolutionary algorithms tend to get stuck in local minima - where both contestants play mediocre games and draw - but this is not possible in backgammon: there is only one winner, and the games must terminate eventually.
      • These authors introduce a very interesting twist to improve coevolutionary bootstrapping: Firstly, the games are played in pairs, with the order of play reversed and the same random seed used to generate the dice rolls for both games. This washes out some of the unfairness due to the dice rolls when the two networks are very close - in particular, if they were identical, the result would always be one win each.
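The "averaged across all possible problems" claim can be verified exhaustively on a toy domain: any two non-repeating search orders need the same average number of evaluations once you enumerate every possible objective function. A minimal sketch (the domain, cost measure, and search orders are illustrative):

```python
from itertools import product

# Enumerate every boolean objective over a tiny 3-point domain
domain = [0, 1, 2]
functions = list(product([0, 1], repeat=len(domain)))  # all 8 f: {0,1,2} -> {0,1}

def cost(order, f):
    """Evaluations a non-repeating search needs to find a point with f(x) == 1;
    if no such point exists, it pays for the full sweep."""
    for n, x in enumerate(order, start=1):
        if f[x] == 1:
            return n
    return len(order)

avg = lambda order: sum(cost(order, f) for f in functions) / len(functions)
a = avg([0, 1, 2])  # left-to-right search
b = avg([2, 0, 1])  # a different fixed order -- same average over all problems
```

Any bias toward one order only pays off on a non-uniform distribution of problems, which is exactly Wolpert's inner-product picture.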

ref: Friston-2010.02 tags: free energy minimization life learning large theories date: 06-08-2010 13:59 gmt revision:2 [1] [0] [head]

My letter to a friend regarding images/817_1.pdf The free-energy principle: a unified brain theory? PMID-20068583 -- like all critics, I feel the world will benefit from my criticism ;-)

Hey , I did read that paper on the plane, and wrote down some comments, but haven't had a chance to actually send them until now. err.. anyway.. might as well send them since I did bother writing stuff down:

I thought the paper was interesting, but rather specious, especially the way the author makes 'surprise' something to be minimized. This is blatantly false! Humans and other mammals (at least) like being surprised (in the normal meaning of the word). He says things like: "This is where free energy comes in: free energy is an upper bound on surprise, which means that if agents minimize free energy, they implicitly minimize surprise" -- a huge logical jump, and not one that I'm willing to accept. I feel like this author is trying to capitalize on some recent developments, like variational Bayes and ensemble learning, without fully understanding them or having the mathematical chops (like Hayen) to flesh it out. So far as I understand, large theories (as this proposes to be) are useful in that they permit derivation of particular update equations; variational Bayes, for example, takes the Kullback-Leibler divergence & a factorization of the posterior to create EM update equations. So, even if the free-energy idea is valid, the author uses it at such a level as to make no useful, mathy predictions.

One area where I agree with him is that the nervous system creates a model of the internal world, for the purpose of prediction. Yes, maybe this allows 'surprise' to be minimized. But animals minimize surprise not because of free energy, but rather for the much more quotidian reason that surprise can be dangerous.

Finally, I wholly reject the idea that value and surprise can be equated or even similar. They seem orthogonal to me! Value is assigned to things that help an animal survive and multiply; surprise is things its nervous system does not expect. All these things make sense when cast against the theories of evolution and selection. Perhaps, perhaps selection is a consequence of decreasing free energy -- this intuitively and somewhat amorphously/mystically makes sense (the aggregate consequence of life on earth is somehow order, harmony, and other 'good stuff' (but this is an anthropocentric view)) -- but if so the author should be able to make more coherent / mathematical predictions of observed phenomena, e.g. why animals locally violate the second law of thermodynamics.

Despite my critique, thanks for sending the article, made me think. Maybe you don't want to read it now and I saved you some time ;-)

ref: notes-0 tags: skate sideskate freeline date: 12-19-2007 04:50 gmt revision:1 [0] [head]

Tim's list of skate-like devices, sorted by flatland speed, descending order:

  1. rollerblades / in-line skates. clap skates and xcountry training skates are up here too.
  2. skateboard -- skateboarders in central park can do the whole loop (~7 miles?) in about ~20 minutes = 21mph average. You can get some very fast wheels, bearings, and boards.
  3. streetboards / snakeboards -- great acceleration. Unlike sideskates, freelines, and Xliders, you do not have to reserve / use muscle capacity to keep from doing a split; all can be put into whipping the board up to speed.
  4. Onshoreboards -- Don't have one, but it looks like there's a randal in back. These things are kinda heavy - 13 lbs for the largest - but should be pumpable to high speed? Compared to the flowlabs, all axles (when going straight) are perpendicular to the direction of motion, so there should be little more than the rolling resistance of the 8 wheels. Note the dual skate wheels on the back - I presume this was to cut costs, as good inline skate wheels are much cheaper than good skateboard wheels.
  5. sideskates -- these generally have higher top-end speed compared to freelines, but worse acceleration. rolling resistance is comparable to a skateboard; they have large patches of urethane in contact with the ground, with no rotational shear from an axle at an angle to the road.
  6. freeline -- these are far more stable at speed than sideskates. However, the contact patch with the ground undergoes rotational shear, which, in addition to the softer urethane and higher loading, makes for more friction than sideskates.
  7. Hammerhead -- faster than below because it has one standard skate truck. Have not tested it.
  8. Flowlab -- the wheels are not co-axial, so there will always be more rolling resistance than a skateboard. Urethane and bearing quality is low on these boards (e.g. 608zz electric motor bearings), simply because they need so many of both and must cut costs to compete with skateboards!
  9. The Wave -- seriously, slow. downhill speed is ok, no speed wobbles - but no powerslides either.
  10. Xliders -- The videos make it look rather slow. But, it also looks very choreographic / dance-like.
  11. Tierney Rides -- hard to pump, but not impossible. Dumb because it is easy to tilt the deck a bit too much, hit the edge, and slide out (the coefficient of friction of hard maple << urethane wheel). Tried to learn it for a while, but the over-tilt / deck slide bruised my ankles too many times. This makes it bad for both downhill and flatland. On the plus side, these are very well made boards - buy one & put some randals on it :)

ref: bookmark-0 tags: blackfin ELF freestanding applications boot date: 08-01-2007 14:40 gmt revision:0 [head]


very good, very instructive.