Distilling free-form natural laws from experimental data
- The critical step was to use the full set of all pairs of partial derivatives ( $\partial x / \partial y$ ) to evaluate candidate invariants: an expression is scored by how well the derivative ratios it implies match those estimated numerically from the data.
- The selection of which partial derivatives are held to be independent / which variables are dependent is itself a bit of a trick -- see the supplemental information.
- Even so, with a 4D data set the search for natural laws took ~30 hours.
- This was via a genetic algorithm, distributed among 'islands' on different CPUs, with mutation and single-point crossover.
- Not sure what the IL (intermediate language?) is, but it appears to be floating-point assembly.
- Timeseries data is smoothed with LOESS, which fits local low-order polynomials to the data and hence allows for smoother / more analytic derivative calculation.
- Then again, how long did it take humans to figure out these invariants? (They went about it in a decidedly different way..)
- Further, how long did it take for biology to discover similar 'design equations'?
- The same algorithm has been applied to biological data -- a metabolic pathway -- with some success (published 2011).
- Of course evolution had to explore a much larger space - proteins and regulatory pathways, not simpler mathematical expressions / linkages.
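The pairwise-derivative scoring described above can be sketched in a toy setting. This is my own minimal example, assuming a unit harmonic oscillator and a simple log-error measure of my choosing (the paper's exact scoring function differs): a candidate invariant $f(x, v)$ implies $dx/dv = -(\partial f/\partial v)/(\partial f/\partial x)$, and we check that ratio against the numerically estimated one.

```python
import numpy as np

# Toy trajectory of a unit harmonic oscillator: x(t) = cos(t), v(t) = -sin(t).
# (t range chosen to keep x away from zero, where the derivative ratio blows up.)
t = np.linspace(0.2, 1.2, 200)
x = np.cos(t)
v = -np.sin(t)

# Numerical estimate of the pairwise derivative dx/dv along the trajectory.
dxdv_num = np.gradient(x, t) / np.gradient(v, t)

# Candidate invariant f(x, v) = x^2 + v^2 (the conserved energy, up to scale).
# An invariant f implies dx/dv = -(df/dv)/(df/dx); here that is -v/x.
dxdv_good = -v / x

# A wrong candidate, f(x, v) = x + v, implies dx/dv = -1 everywhere.
dxdv_bad = np.full_like(t, -1.0)

def score(pred):
    # Hypothetical error measure: mean log-distance between predicted and
    # observed derivative ratios (lower is better).
    return np.mean(np.log(1.0 + np.abs(pred - dxdv_num)))

err_good, err_bad = score(dxdv_good), score(dxdv_bad)
```

The true invariant scores near zero while the wrong candidate does not, which is what lets the search discriminate between expressions without knowing the law in advance.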
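The island-model genetic algorithm mentioned above can also be sketched. This is a generic toy version on bitstrings (maximizing the number of 1-bits as a stand-in fitness), not the paper's expression-tree representation; the island structure, mutation, single-point crossover, and periodic migration are the parts it illustrates:

```python
import random

random.seed(0)

GENOME_LEN = 32
POP, ISLANDS, GENS, MIGRATE_EVERY = 30, 4, 60, 10

def fitness(g):
    # Toy objective: count of 1-bits (stand-in for expression fit to data).
    return sum(g)

def mutate(g, rate=0.02):
    # Flip each bit independently with small probability.
    return [b ^ (random.random() < rate) for b in g]

def crossover(a, b):
    # Single-point crossover, as in the paper's GA.
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def step(pop):
    # Keep the fitter half, refill with mutated offspring of that elite.
    pop = sorted(pop, key=fitness, reverse=True)
    elite = pop[: len(pop) // 2]
    children = [mutate(crossover(random.choice(elite), random.choice(elite)))
                for _ in range(len(pop) - len(elite))]
    return elite + children

islands = [[[random.randint(0, 1) for _ in range(GENOME_LEN)] for _ in range(POP)]
           for _ in range(ISLANDS)]

for gen in range(GENS):
    islands = [step(pop) for pop in islands]   # in the paper, one island per CPU
    if gen % MIGRATE_EVERY == 0:               # occasionally share best individuals
        for i in range(ISLANDS):
            islands[i].append(max(islands[(i + 1) % ISLANDS], key=fitness))

best = max((ind for pop in islands for ind in pop), key=fitness)
```

Migration between islands keeps the subpopulations from converging on the same local optimum, which is the usual motivation for distributing a GA this way.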
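The smoothing-for-derivatives idea can be sketched too. This is a simplified, unweighted local-polynomial fit of my own (real LOESS adds tri-cube distance weights): fit a low-order polynomial in a sliding window, then differentiate the fitted polynomial analytically instead of finite-differencing noisy samples.

```python
import numpy as np

def local_poly_derivative(t, y, half_window=10, degree=2):
    """Estimate dy/dt by fitting a low-order polynomial in a sliding window
    and differentiating the fit analytically (a simplified, unweighted
    stand-in for LOESS)."""
    dy = np.empty_like(y)
    for i in range(len(t)):
        lo, hi = max(0, i - half_window), min(len(t), i + half_window + 1)
        coeffs = np.polyfit(t[lo:hi], y[lo:hi], degree)
        dy[i] = np.polyval(np.polyder(coeffs), t[i])
    return dy

# Noisy sine: the fitted-polynomial derivative should track cos(t).
rng = np.random.default_rng(0)
t = np.linspace(0, 2 * np.pi, 400)
y = np.sin(t) + rng.normal(0, 0.02, t.size)
dy = local_poly_derivative(t, y)
```

Because the derivative comes from the fitted polynomial rather than from raw point-to-point differences, the noise is averaged over the whole window instead of being amplified by the division by a small time step.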
Since his PhD, Michael Schmidt has gone on to found Nutonian, which commercialized the Eureqa software, apparently without dramatic new features other than being able to use the cloud for equation search. (Probably he improved many other detailed facets of the software..) Nutonian received $4M in seed funding, according to Crunchbase.
In 2017, Nutonian was acquired by DataRobot (for an undisclosed amount), where Michael has worked since, rising to the title of CTO.
Always interesting to follow up on the authors of these classic papers!