m8ta

{842}  
Distilling free-form natural laws from experimental data
Since his PhD, Michael Schmidt has gone on to found Nutonian, which produced the Eureqa software, apparently without dramatic new features other than being able to use the cloud for equation search. (Probably he improved many other detailed facets of the software.) Nutonian received $4M in seed funding, according to Crunchbase. In 2017, Nutonian was acquired by DataRobot (for an undisclosed amount), where Michael has worked since, rising to the title of CTO. Always interesting to follow up on the authors of these classic papers!  
{763}  
I recently wrote a matlab script to measure & plot the autocorrelation of a spike train; to test it, I generated a series of timestamps from a homogeneous Poisson process:

function [x, isi] = homopoisson(length, rate)
% function [x, isi] = homopoisson(length, rate)
% generate an instance of a poisson point process, unbinned.
% length in seconds, rate in spikes/sec.
% x is the timestamps, isi is the intervals between them.
num = length * rate * 3;
isi = -(1/rate).*log(1 - rand(num, 1));
x = cumsum(isi);
%% find the x that is greater than length.
index = find(x > length);
x = x(1:index(1,1)-1, 1);
isi = isi(1:index(1,1)-1, 1);

The autocorrelation of a Poisson process is, as it should be, flat (figure above).
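For cross-checking, here is the same generator sketched in Python (my translation of the MATLAB above using numpy, not the original script):

```python
import numpy as np

def homopoisson(duration, rate, seed=0):
    """Instance of a homogeneous Poisson point process, unbinned.
    duration in seconds, rate in spikes/sec; returns (timestamps, isis)."""
    rng = np.random.default_rng(seed)
    num = int(duration * rate * 3)          # draw ~3x more ISIs than needed
    # inverse-CDF sampling: exponential ISIs with mean 1/rate
    isi = -(1.0 / rate) * np.log(1.0 - rng.random(num))
    x = np.cumsum(isi)
    keep = x <= duration                    # discard spikes past the end
    return x[keep], isi[keep]

x, isi = homopoisson(100.0, 50.0)
print(len(x), isi.mean())                   # mean ISI should be ~1/rate = 0.02 s
```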
The problem with my recordings is that there is generally high long-range correlation, correlation which is destroyed by shuffling. Above is a plot of 1/isi for a noise channel with a very high mean 'firing rate' (> 100Hz) in blue. Behind it, in red, is 1/shuffled isi. Noise and changes in the experimental setup (bad!) make the channel very nonstationary. Above is the autocorrelation plotted in the same way as figure 1. Normally, the firing rate is binned at 100Hz and highpass filtered at 0.005Hz so that long-range correlation is removed, but I turned this off for the plot. Note that the shuffled data has a number of different offsets, primarily due to differing long-range correlations / nonstationarities. Same plot as figure 3, with highpass filtering turned on. Shuffled data still has far more local correlation - why? The answer seems to be in the relation between individual ISIs. Shuffling ISI order obviously does not destroy the distribution of ISIs, but it does destroy the ordering, or pairwise correlation, between isi(n) and isi(n+1). To check this, I plotted these two distributions: original log(isi(n)) vs. log(isi(n+1)); shuffled log(isi_shuf(n)) vs. log(isi_shuf(n+1)); and a closeup of log(isi(n)) vs. log(isi(n+1)) using alpha blending, for a channel that seems heavily corrupted with electrocauterizer noise.  
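The effect of shuffling on serial ISI correlation can be demonstrated with synthetic data (a sketch; the AR(1) log-ISI model is just a stand-in for a correlated channel, not my actual recordings):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20000
# synthetic ISI train with serial correlation: AR(1) in log-ISI space
z = np.zeros(n)
for i in range(1, n):
    z[i] = 0.8 * z[i - 1] + rng.normal()
isi = np.exp(z - 4.0)                    # log-normal, serially correlated ISIs

shuf = rng.permutation(isi)              # shuffling keeps the ISI distribution
corr = np.corrcoef(np.log(isi[:-1]), np.log(isi[1:]))[0, 1]
corr_shuf = np.corrcoef(np.log(shuf[:-1]), np.log(shuf[1:]))[0, 1]
print(corr, corr_shuf)                   # high vs ~zero serial correlation
```

The sorted ISIs of the two trains are identical; only the isi(n) vs. isi(n+1) structure differs.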
{806}  
I've recently tried to determine the bitrate conveyed by one gaussian random process about another, in terms of the signal-to-noise ratio between the two. Assume $x$ is the known signal to be predicted, and $y$ is the prediction. Let's define $SNR(y) = \frac{Var(x)}{Var(err)}$ where $err = x - y$. Note this is a ratio of powers; the conventional SNR is $SNR_{dB} = 10 log_{10} \frac{Var(x)}{Var(err)}$. $Var(err)$ is also known as the mean-squared error (MSE). Now, $Var(err) = E[(x - y - \bar{err})^2] = Var(x) + Var(y) - 2 Cov(x,y)$; assume x and y have unit variance (or scale them so that they do); then $\frac{2 - SNR(y)^{-1}}{2} = Cov(x,y)$. We need the covariance because the mutual information between two jointly Gaussian zero-mean variables can be defined in terms of their covariance matrix (see http://www.springerlink.com/content/v026617150753x6q/ ). Here Q is the covariance matrix, $Q = \left[ \array{Var(x) & Cov(x,y) \\ Cov(x,y) & Var(y)} \right]$ and $MI = \frac{1}{2} log_2 \frac{Var(x) Var(y)}{det(Q)}$. With unit variances, $det(Q) = 1 - Cov(x,y)^2$. Then $MI = -\frac{1}{2} log_2 \left[ 1 - Cov(x,y)^2 \right]$ or, in terms of the SNR, $MI = -\frac{1}{2} log_2 \left[ SNR(y)^{-1} - \frac{1}{4} SNR(y)^{-2} \right]$. This agrees with intuition. If we have a SNR of 10 dB, or 10 as a power ratio, then we would expect to be able to break a random variable into about 10 different categories or bins (recall stdev is the sqrt of the variance), with the probability of the variable being in the estimated bin being 1/2. (This, at least in my mind, is where the 1/2 constant comes from: if there is gaussian noise, you won't be able to determine exactly which bin the random variable is in, hence log_2 is an overestimator.) Here is a table with the respective values, including the amplitude (not power) ratio representations of SNR.
Now, to get the bitrate, you take the SNR, calculate the mutual information, and multiply it by the bandwidth (not the sampling rate in a discrete time system) of the signals. In our particular application, I think the bandwidth is between 1 and 2 Hz, hence we're getting 1.6-3.2 bits/second/axis, hence 3.2-6.4 bits/second for our normal 2D tasks. If you read this blog regularly, you'll notice that others have achieved 4 bits/sec with one neuron and 6.5 bits/sec with dozens {271}.  
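In code, the SNR-to-bits pipeline above looks like this (a sketch of the formulas, nothing more):

```python
import numpy as np

def mi_bits(snr):
    """Mutual information (bits/sample) between two unit-variance, jointly
    Gaussian signals, where SNR = Var(x)/Var(x - y) as a power ratio."""
    cov = 1.0 - 1.0 / (2.0 * snr)        # Cov(x,y) = (2 - SNR^-1)/2
    return -0.5 * np.log2(1.0 - cov ** 2)

snr = 10.0                               # 10 dB as a power ratio
mi = mi_bits(snr)
bandwidth = 2.0                          # Hz, upper end of the estimate above
print(mi, mi * bandwidth)                # ~1.68 bits/sample, ~3.4 bits/sec
```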
{773}  
Recently I've been working on a current-controlled microstimulator for the lab, and have not been at all satisfied with its performance; hence, I decided to redesign it. Since it is a digitally current-controlled stimulator, with the current set by a DAC (MCP4822), we need a voltage-controlled current source. Here is one design:
What I really need is a high-side regulated current source; after some fiddling, here is what I came up with:
 
{850}  
Historical notes from using the Kinarm... this only seems to render properly in firefox / mozilla. To apply cartesian force fields to the arm, the original kinarm PLCC (whatever that stands for) converted joint velocities to cartesian velocities using the jacobian matrix. All well and good. The equation for endpoint location of the kinarm is: $\hat{x} = { \left[ \array{ l_1 sin(\theta_{sho}) + l_2 sin(\theta_{sho} + \theta_{elb} ) \\ l_1 cos(\theta_{sho}) + l_2 cos(\theta_{sho} + \theta_{elb} ) } \right] }$ with $l_1 = 0.115$ meters, $l_2 = 0.195$ meters in our case. The jacobian of this function is: $J = { \left[ \array{ -l_1 sin(\theta_{sho}) - l_2 sin(\theta_{sho} + \theta_{elb} ) && -l_2 sin(\theta_{elb}) \\ l_1 cos(\theta_{sho}) + l_2 cos(\theta_{sho} + \theta_{elb} ) && l_2 cos(\theta_{elb}) } \right] }$ so $\hat{v} = J \cdot \hat{\theta}$ etc., and (I think!) $\hat{F} = J \cdot \hat{\tau}$ where tau is the shoulder and elbow torques and F is the cartesian force. The flow of the PLCC is then:
$J = { \left[ \array{ a & b \\ c & d } \right] }$ $J^{-1} = \frac{1}{a d - b c} { \left[ \array{ d & -b \\ -c & a } \right] } \ne { \left[ \array{ a & c \\ b & d } \right] } = J^{T}$ Substitute to see if the matrices look similar: $\frac{1}{\vert J \vert} { \left[ \array{ l_2 cos(\theta_{elb}) && l_2 sin(\theta_{elb}) \\ -l_1 cos(\theta_{sho}) - l_2 cos(\theta_{sho} + \theta_{elb} ) && -l_1 sin(\theta_{sho}) - l_2 sin(\theta_{sho} + \theta_{elb} ) } \right] } \ne { \left[ \array{ -l_1 sin(\theta_{sho}) - l_2 sin(\theta_{sho} + \theta_{elb} ) && l_1 cos(\theta_{sho}) + l_2 cos(\theta_{sho} + \theta_{elb} ) \\ -l_2 sin(\theta_{elb}) && l_2 cos(\theta_{elb}) } \right] }$ where ${\vert J \vert} = { -l_1 l_2 sin(\theta_{sho}) cos(\theta_{elb}) - l_2^2 sin(\theta_{sho} + \theta_{elb} ) cos(\theta_{elb}) + l_1 l_2 cos(\theta_{sho}) sin(\theta_{elb}) + l_2^2 cos(\theta_{sho} + \theta_{elb} ) sin(\theta_{elb}) }$ I'm surprised that we got something even like curl and viscous forces - the matrices are not similar. This explains why the forces seemed odd and poorly scaled, and why the constants for the viscous and curl fields were so small (the units should have been N/(cm/s); 1 newton is a reasonable force, and the monkey moves at around 10 cm/sec, so the constant should have been 1/10 or so. Instead, we usually put in a value of 0.0005!) For typical values of the shoulder and elbow angles, the determinant of the matrix is about 200 in magnitude (the kinarm PLCC works in centimeters, not meters), so the transpose has entries ~200x too big. Foolishly, we compensated by making the constant (or entries in A) 200 times too small, i.e. 1/10 * 1/200 = 0.0005 :( The end result is that a density plot of the space spanned by the cartesian force and velocity is not very clean, as you can see in the picture below. The horizontal line is, of course, when the forces were turned off. A linear relationship between force and velocity should be manifested by a line in these plots; however, there are only suggestions of lines. 
The null field should have a negative-slope line in the upper left and lower right; the curl field should have a positive-sloped line in the upper right and a negative one in the lower left (or vice versa).  
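The mismatch between inverse and transpose is easy to verify numerically; a quick sketch (sample angles are arbitrary; lengths in cm, since the PLCC works in centimeters):

```python
import numpy as np

l1, l2 = 11.5, 19.5                          # link lengths in cm
ts, te = np.deg2rad(45.0), np.deg2rad(60.0)  # arbitrary shoulder/elbow angles

J = np.array([[-l1*np.sin(ts) - l2*np.sin(ts + te), -l2*np.sin(te)],
              [ l1*np.cos(ts) + l2*np.cos(ts + te),  l2*np.cos(te)]])

det = np.linalg.det(J)
print(det)                                   # magnitude ~200: the scale error
print(np.linalg.inv(J))                      # correct inverse map
print(J.T)                                   # what the PLCC actually used
```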
{848}  
http://www.xbdev.net/directx3dx/specialX/Fur/index.php  for future reference. Simple algorithm that seems to work quite well. Can be done almost entirely in vertex shader...  
{844}  
"Stage 6" part selection:
 
{839}  
(I'm posting this here as it's easier than putting an image & text in subversion.) I'm building a wireless headstage for neural recording. Hence, it has sensitive, high-gain amplifiers (RHA2116) pretty close to a wireless transmitter + serial lines. The transmitter operates intermittently to save power, only sending samples from one continuous channel + threshold crossings for all the other channels. 27 byte-wide samples + a channel identifier + 4 bytes of threshold crossings are sent in one radio packet; as the radio takes some 130us to start up the PLL, 8 of these packets are chunked together into one frame; one frame is transmitted at 144 Hz (actually 1e6/(32*27*8) Hz). At the conclusion of each frame, the continuous channel to be transmitted is incremented. It seems that radio transmission is interfering with the input amplifiers, as the beginning samples of a frame are corrupted; this is when the previous frame is going out over the air. It could also be noise from the SPI lines, which run under and close to the amplifiers. This may also not be a problem in vivo; it could only be an issue when the inputs to the amplifiers are floating. Above, a plot of the raw data coming off the headstage radio. The red trace indicates the channel currently being transmitted; blue are the samples. Note that some channels do not have the artifact; I presume this is because their input is grounded. This will be very tricky to debug, as if we turn off the radio, we'll get no data. Checking if it is a SPI problem is possible by writing the bus at a specified time. Tested with the radio PA disabled: it is definitely the SPI bus, a routing problem! Stupid.  
{815}  
Jacques Pitrat seems to have many of the same ideas that I've had (only better, and he's implemented them!): A Step toward an Artificial Scientist
Artificial Beings - his book.  
{826}  
Studies in astronomical time series analysis. II  Statistical aspects of spectral analysis of unevenly spaced data Scargle, J. D.
 
{825}  
 
{824}  
images/824_1.pdf  Eurisko by DB Lenat, the program that made the fleet which won the 1981 and 1982 Traveller's challenge, as I discovered in this New Yorker article by Malcolm Gladwell.
 
{821} 
ref: work0
tags: differential evolution function optimization
date: 07-09-2010 14:46 gmt
revision:3
[2] [1] [0] [head]


Differential evolution (DE) is an optimization method, somewhat like Nelder-Mead or simulated annealing (SA). Much like genetic algorithms, it utilizes a population of solutions and selection to explore and optimize the objective function. However, instead of perturbing vectors randomly or greedily descending the objective function gradient, it uses the difference between individual population vectors to update hypothetical solutions. See below for an illustration. At my rather cursory reading, this serves to adapt the distribution of hypothetical solutions (or population of solutions, to use the evolutionary term) to the structure of the underlying function to be optimized. Judging from images/821_1.pdf by Price and Storn (the inventors), DE works in situations where simulated annealing (which I am using presently, in the robot vision system) fails, and is applicable to higher-dimensional problems than simplex methods or SA. The paper tests DE on 100-dimensional problems, and it is able to solve these with on the order of 50k function evaluations. Furthermore, they show that it finds function extrema quicker than stochastic differential equations (SDE, alas from '85), which use the gradient of the function to be optimized. I'm surprised that this method slipped under my radar for so long; why hasn't anyone mentioned this? Is it because it has no proofs of convergence? Has it more recently been superseded? (The paper is from 1997.) Yet, I'm pleased, because it means that there are also many other algorithms equally clever and novel (and simple?) out there in the literature, or waiting to be discovered.  
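A minimal DE/rand/1/bin sketch (my reading of the scheme, not Price and Storn's reference code; tested here on the sphere function):

```python
import numpy as np

def differential_evolution(f, dim, pop=40, F=0.8, CR=0.9, gens=600, seed=0):
    """DE/rand/1/bin: difference-vector mutation, binomial crossover,
    greedy one-to-one selection."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(-5.0, 5.0, (pop, dim))
    fit = np.array([f(x) for x in X])
    for _ in range(gens):
        for i in range(pop):
            a, b, c = X[rng.choice(pop, 3, replace=False)]
            mutant = a + F * (b - c)            # scaled population difference
            mask = rng.random(dim) < CR         # binomial crossover
            mask[rng.integers(dim)] = True      # guarantee >= 1 mutant gene
            trial = np.where(mask, mutant, X[i])
            ft = f(trial)
            if ft <= fit[i]:                    # greedy selection
                X[i], fit[i] = trial, ft
    return X[fit.argmin()], float(fit.min())

sphere = lambda x: float(np.sum(x * x))
best, fbest = differential_evolution(sphere, dim=10)
print(fbest)                                    # close to zero
```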
{818}  
Say you have a program, called from a perl script, that may run for a long time. How to get at the program's output as it appears? Simple - open a pipe to the program's STDOUT. See http://docstore.mik.ua/orelly/perl/prog3/ch16_03.htm Below is an example; I wanted to see the output of programs run, for convenience, from a perl script (didn't want to have to remember, or get wrong, all the command line arguments for each).

#!/usr/bin/perl
$numArgs = $#ARGV + 1;
if($numArgs == 1){
	if($ARGV[0] eq "table"){
		open STATUS, "sudo ./video 0xc1e9 15 4600 4601 0 |";
		while(<STATUS>){ print; }
		close STATUS;
	}elsif($ARGV[0] eq "arm"){
		open STATUS, "sudo ./video 0x1ff6 60 4597 4594 4592 |";
		while(<STATUS>){ print; }
		close STATUS;
	}else{
		print "$ARGV[0] not understood - say arm or table!\n";
	}
}
{813} 
ref: work0
tags: kicadocaml zbuffer comparison picture screenshot
date: 03-03-2010 16:38 gmt
revision:4
[3] [2] [1] [0] [head]


Simple illustration of Kicadocaml with Z buffering enabled: and disabled: I normally use it with Z buffering enabled, but turn it off if, say, I want to clearly see all the track intersections, especially collinear tracks or zero-length tracks. (Probably I should write something to merge and remove these automatically.) Note that in either case, tracks and modules are rendered back-to-front, which effects a Z-sorting of sorts; it is the GPU's Z buffer that is enabled/disabled here.  
{809}  
I learned this in college, but have forgotten all the details: Microcontroller provides an alternative to DDS. $freq = \frac{\sqrt{F}}{2 \pi \tau}$ where $\tau$ is the sampling period. F ranges from 0 to 0.2.  
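If I recall correctly, the trick is a one-multiply digital resonator; here is my reconstruction (an assumption - I don't have the original article's code), which reproduces the sqrt(F) frequency formula:

```python
import numpy as np

# One-multiply resonator: x += y; y -= F*x.  The update matrix
# [[1, 1], [-F, 1-F]] has determinant 1 and eigenvalue angle
# w = arccos(1 - F/2) ~= sqrt(F) rad/sample, so freq ~= sqrt(F)/(2*pi*tau).
F = 0.01
x, y = 1.0, 0.0
n = 100000
xs = np.empty(n)
for i in range(n):
    x += y
    y -= F * x
    xs[i] = x

crossings = int(np.sum(np.diff(np.sign(xs)) != 0))
w = np.arccos(1.0 - F / 2.0)             # exact angular step per sample
print(crossings, n * w / np.pi)          # measured vs predicted zero crossings
```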
{796}  
An interesting field in ML is nonlinear dimensionality reduction: data may appear to be in a high-dimensional space, but mostly lies along a nonlinear lower-dimensional subspace or manifold. (Linear subspaces are easily discovered with PCA or SVD(*).) Dimensionality reduction projects high-dimensional data into a low-dimensional space with minimum information loss, hence maximal reconstruction accuracy; nonlinear dimensionality reduction does this (surprise!) using nonlinear mappings. These techniques set out to find the manifold(s):
(*) SVD maps into 'concept space', an interesting interpretation, as per Leskovec's lecture presentation.  
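For the linear case, a quick SVD sketch (toy data of my own, nothing from the lecture itself): 3-D points near a 2-D plane, with the rows of Vt acting as the 'concept' directions.

```python
import numpy as np

rng = np.random.default_rng(0)
basis = rng.normal(size=(2, 3))                  # a random 2-D plane in 3-D
data = rng.normal(size=(500, 2)) @ basis + 0.01 * rng.normal(size=(500, 3))

mu = data.mean(axis=0)
U, s, Vt = np.linalg.svd(data - mu, full_matrices=False)
coords = (data - mu) @ Vt[:2].T                  # 2-D 'concept' coordinates
recon = coords @ Vt[:2] + mu                     # back-projection to 3-D
err = float(np.sqrt(np.mean((data - recon) ** 2)))
print(s, err)                                    # s[2] and err are noise-sized
```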
{795} 
ref: work0
tags: machine learning reinforcement genetic algorithms
date: 10-26-2009 04:49 gmt
revision:1
[0] [head]


I just had dinner with Jesse, and we had a good/productive discussion/brainstorm about algorithms, learning, and neurobio. Two things worth repeating, one simpler than the other: 1. Gradient descent / Newton-Raphson like techniques should be tried with genetic algorithms. As of my current understanding, genetic algorithms perform a semi-directed search, randomly exploring the space of solutions with natural selection exerting a pressure to improve. What if you took the partial derivative of each of the organism's genes, and used that to direct mutation, rather than random selection of the mutated element? What if you looked before mating and crossover? Seems like this would speed up the algorithm greatly (though it might get it stuck in local minima, too). Not sure if this has been done before - if it has, edit this to indicate where! 2. Most supervised machine learning algorithms seem to rely on one single, externally applied objective function which they then attempt to optimize. (Rather, this is what convex programming is. Unsupervised learning of course exists, like PCA, ICA, and other means of learning correlative structure.) There are a great many ways to do optimization, but all are exactly that - optimization, search through a space for some set of weights / set of rules / decision tree that maximizes or minimizes an objective function. What Jesse and I have arrived at is that there is no real utility function in the world (Corollary #1: life is not an optimization problem (**)) - we generate these utility functions, just as we generate our own behavior. What would happen if an algorithm iteratively estimated, checked, and cross-validated its utility function based on the small rewards actually found in the world / its synthetic environment? Would we get generative behavior greater than the complexity of the inputs? (Jesse and I also had an in-depth talk about information generation / destruction in nonlinear systems.) 
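Idea 1 can be sketched on a toy convex problem (everything here is hypothetical: the finite-difference 'partial derivative of a gene' and all the names are mine, not an established algorithm):

```python
import numpy as np

def grad_directed_ga(f, dim=8, pop=30, gens=400, step=0.05, seed=0):
    """Toy GA where mutation follows a finite-difference estimate of the
    partial derivative of one randomly chosen gene, not a random kick."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(-3.0, 3.0, (pop, dim))
    for _ in range(gens):
        fit = np.array([f(x) for x in X])
        parents = X[fit.argsort()[: pop // 2]]       # keep the better half
        kids = parents[rng.integers(len(parents), size=pop - len(parents))].copy()
        for k in kids:
            g = rng.integers(dim)                    # gene chosen for mutation
            e = np.zeros(dim)
            e[g] = 1e-4
            deriv = (f(k + e) - f(k - e)) / 2e-4     # partial derivative estimate
            k[g] -= step * np.sign(deriv)            # mutate downhill
        X = np.vstack([parents, kids])
    return float(min(f(x) for x in X))

best = grad_directed_ga(lambda x: float(np.sum(x * x)))
print(best)
```

On this convex toy it converges toward zero; whether it helps or hurts on multimodal landscapes is exactly the open question.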
Put another way, perhaps part of learning is to structure internal valuation / utility functions to set up reinforcement learning problems, where the reinforcement signal comes according to satisfaction of subgoals (= local utility functions). Or, the gradient signal comes by evaluating partial derivatives of actions wrt those local utility functions. Creating these goals is natural but not always easy, which is one reason (of very many!) sports are so great - the utility function is clean, external, and immutable. The recursive, introspective creation of valuation / utility functions is what drives a lot of my internal monologues, mixed with a hefty dose of taking partial derivatives (see {780}) based on models of the world. (Stated this way, they seem so similar that perhaps they are the same thing?) To my limited knowledge, there has been some recent work on the creation of subgoals in reinforcement learning. One paper I read used a system to look for states that had a high ratio of ultimately rewarded paths to unrewarded paths, and selected these as subgoals (e.g. rewarded the agent when this state was reached). I'm not talking about these sorts of subgoals. In these systems, there is an ultimate goal that the researcher wants the agent to achieve, and it is the algorithm's task to make a policy for generating/selecting behavior. Rather, I'm interested in even more unstructured tasks: make a utility function, and a behavioral policy, based on small, continuous (possibly irrelevant?) rewards in the environment. Why would I want to do this? The pet project I have in mind is a 'cognitive' PCB part placement / layout / routing algorithm to add to my pet project, kicadocaml, to finally get some people to use it (the attention economy :) In the course of thinking about how to do this, I've realized that a substantial problem is simply determining what board layouts are good, and what are not. 
I have a rough aesthetic idea + some heuristics that I learned from my dad + some heuristics I've learned through practice of what is good layout and what is not - but how to code these up? And what if these aren't the best rules, anyway? If I just code up the rules I've internalized as utility functions, then the board layout will be pretty much as I do it - boring! Well, I've stated my subgoal in the form of a problem statement and some criteria to meet. Now, to go and search for a decent solution to it. (Have to keep this blog m8ta!) (Or, realistically, to go back and see if the problem statement is sensible.) (**) Corollary #2: There is no god. nod, Dawkins.  
{794}  
http://weblog.raganwald.com/2007/06/whichtheoryfirstevidence.html
also from that site  http://weblog.raganwald.com/2007/05/notsobigsoftwareapplication.html
 
{793}  
Andrew Ng's notes on learning theory
 
{792}  
http://www.cs.cmu.edu/~wcohen/slipper/
 
{789}  
I've been reading Computational Explorations in Cognitive Neuroscience, and decided to try the code that comes with / is associated with the book. This used to be called "PDP++", but was rewritten, and is now called Emergent. It's a rather large program: it links to Qt, GSL, Coin3D, Quarter, the Open Dynamics Library, and others. The GUI itself seems obtuse and too heavy; it's not clear why they need to make this so customized / paneled / tabbed. Also, it depends on relatively recent versions of each of these libraries, which made the install on my Debian Lenny system a bit of a chore (kinda like windows). A really strange thing is that programs are stored in tree lists - woah, a natural folding editor built in! I've never seen a programming language that doesn't rely on simple text files. Not a bad idea, but still foreign to me. (But I guess programs are inherently hierarchical anyway.) Below, a screenshot of the whole program - note they use a Coin3D window to graph things / interact with the model. The colored boxes in each network layer indicate local activations, and they update as the network is trained. I don't mind this interface, but again it seems a bit too 'heavy' for things that are inherently 2D (like 2D network activations and the output plot). It's good for seeing hierarchies, though, like the network model. All in all, it looks like something that could be more easily accomplished with some python (or ocaml), where the language itself is used for customization, and not a GUI. With this approach, you spend more time learning about how networks work, and less time programming GUIs. On the other hand, if you use this program for teaching, the GUI is essential for debugging your neural networks; or if other people use it a lot, maybe then it is worth it... In any case, the book is very good. 
I've learned about GeneRec, which uses different activation phases to compute local errors for the purposes of error minimization, as well as the virtues of using both Hebbian and error-based learning (like GeneRec). Specifically, the authors show that error-based learning can be rather 'lazy', purely moving down the error gradient, whereas Hebbian learning can internalize some of the correlational structure of the input space. You can look at this internalization as a 'weight constraint' which limits the space that error-based learning has to search. Cool idea! Inhibition is also a constraint, one which constrains the network to be sparse. To use his/their own words: "... given the explanation above about the network's poor generalization, it should be clear why both Hebbian learning and kWTA (k winner take all) inhibitory competition can improve generalization performance. At the most general level, they constitute additional biases that place important constraints on the learning and the development of representations. More specifically, Hebbian learning constrains the weights to represent the correlational structure of the inputs to a given unit, producing systematic weight patterns (e.g. cleanly separated clusters of strong correlations). Inhibitory competition helps in two ways. First, it encourages individual units to specialize in representing a subset of items, thus parcelling up the task in a much cleaner and more systematic way than would occur in an otherwise unconstrained network. Second, inhibition greatly restricts the settling dynamics of the network, greatly constraining the number of states the network can settle into, and thus eliminating a large proportion of the attractors that can hijack generalization."  
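The Hebbian-plus-error-driven combination can be caricatured with a single linear unit (a toy mixed update using Oja's rule for the Hebbian term; this is my sketch, not the book's GeneRec/kWTA machinery):

```python
import numpy as np

rng = np.random.default_rng(0)
w_true = np.array([1.0, -0.5, 0.25])     # target direction for a linear unit
X = rng.normal(size=(200, 3))
T = X @ w_true

w = np.zeros(3)
lr, khebb = 0.05, 0.1                    # learning rate; Hebbian fraction
for epoch in range(50):
    for x, t in zip(X, T):
        y = w @ x
        dw_err = (t - y) * x             # error-driven (delta rule) term
        dw_hebb = y * x - (y * y) * w    # Hebbian term, Oja-normalized
        w += lr * (dw_err + khebb * dw_hebb)

err = float(np.mean((X @ w - T) ** 2))
print(w, err)                            # w near w_true; small residual error
```

The Hebbian term slightly biases the solution toward the input correlational structure, which is the 'weight constraint' idea in miniature.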
{776}  
http://www.willamette.edu/~gorr/classes/cs449/intro.html - decent resource, good explanation of the equations associated with artificial neural networks.  
{774} 
ref: work0
tags: functional programming compilation ocaml
date: 08-24-2009 14:33 gmt
revision:0
[head]


The Implementation of Functional Programming Languages - book!  
{764} 
ref: work0
tags: ocaml mysql programming functional
date: 07-03-2009 19:16 gmt
revision:2
[1] [0] [head]


For my work I store a lot of analyzed data in SQL databases. In one of these, I have stored the anatomical target that the data was recorded from, namely STN or VIM thalamus. After updating the analysis programs, I needed to copy the anatomical target data over to the new SQL tables. Where perl may have been my previous go-to language for this task, I've had enough of its strange quirks, hence decided to try it in Ruby (worked, but was not so elegant, as I don't actually know Ruby!) and then Ocaml.

#use "topfind"
#require "mysql"

(* this function takes a query and a function that converts entries
   in a row to Ocaml tuples *)
let read_table db query rowfunc =
	let r = Mysql.exec db query in
	let col = Mysql.column r in
	let rec loop = function
		| None -> []
		| Some x -> rowfunc col x :: loop (Mysql.fetch r)
	in
	loop (Mysql.fetch r)
	;;

let _ =
	let db = Mysql.quick_connect ~host:"crispy" ~database:"turner" ~password:"" ~user:"" () in
	let nn = Mysql.not_null in
	(* this function builds a table of files (recording sessions) from a given target,
	   then uses the mysql UPDATE command to propagate to the new SQL database. *)
	let propagate targ =
		let t = read_table db
			("SELECT file, COUNT(file) FROM `xcor2` WHERE target='"^targ^"' GROUP BY file")
			(fun col row -> (
				nn Mysql.str2ml (col ~key:"file" ~row),
				nn Mysql.int2ml (col ~key:"COUNT(file)" ~row) ) )
		in
		List.iter (fun (fname,_) ->
			let query = "UPDATE `xcor3` SET `target`='"^targ^
				"' WHERE STRCMP(`file`,'"^fname^"')=0" in
			print_endline query ;
			ignore( Mysql.exec db query ) ) t ;
	in
	propagate "STN" ;
	propagate "VIM" ;
	propagate "CTX" ;
	Mysql.disconnect db ;;

Interacting with MySQL is quite easy with Ocaml - though the type system adds a certain overhead, it's not too bad.  
{762} 
ref: work0
tags: covariance matrix adaptation learning evolution continuous function normal gaussian statistics
date: 06-30-2009 15:07 gmt
revision:0
[head]


http://www.lri.fr/~hansen/cmatutorial.pdf
 
{759}  
U141 LMV1032 microSMD4 2.23315 0.03575 180. 9394. 27366. 1675. L7 INDUCTOR 0603 1.7784 0.7561 0. 13171. 34955. 1727. C86 0.1uf 0402 1.0946 0.0347 360. 37107. 27524. 1710. TP8 TP TP 0.222 1.0285 0. 29815. 37809. 1767. TP9 TP TP 0.7021 1.2484 0. 33805. 40090. 1787. C67 1uf 0603 0.8146 0.7047 270. 34758. 34540. 1752. C68 1uf 0603 1.1946 0.7247 270. 37920. 34730. 1758. C69 1uf 0603 1.2747 0.7247 90. 38576. 34742. 1759. R4 33 0402 1.6937 0.1982 180. 42071. 29215. 1728. R17 10k 0402 1.685 0.6615 270. 13941. 33981. 1723. U92 LMV1032 microSMD4 2.53285 0.03585 180. 6912. 27381. 1671. U96 LMV1032 microSMD4 2.23315 0.89075 180. 9364. 36340. 1732. TP10 TP TP 0.222 1.1685 0. 29811. 39233. 1776. TP11 TP TP 0.222 1.3084 0. 29807. 40698. 1786. R23 33 0402 0.2834 0.6142 180. 30371. 20682. 1659. U105 LMV1032 microSMD4 2.23315 0.71965 180. 9368. 34557. 1720. U117 LMV1032 microSMD4 2.23315 0.49165 180. 9366. 32055. 1705. U124 LMV1032 microSMD4 2.18025 0.37765 180. 9820. 30853. 1698. U127 LMV1032 microSMD4 2.18025 0.32065 180. 9826. 30273. 1695. U128 LMV1032 microSMD4 2.28685 0.26365 180. 8940. 29697. 1690. R10 50k 0402 0.9607 0.3308 180. 19983. 30430. 1709. more data! U136 LMV1032 microSMD4 2.18025 0.14965 180. 9860. 28534. 1682. R47 20k 0402 1.1822 1.3883 90. 37828. 41612. 1797. R48 20k 0402 0.942 1.0284 270. 35838. 37757. 1771. U139 LMV1032 microSMD4 2.18025 0.09265 180. 9863. 27964. 1678. C72 10nf 0603 1.3546 0.6248 270. 39284. 33694. 1750. R45 12.5k 0402 1.1021 1.3883 90. 37161. 41608. 1796. C37 33nF 0402 1.0956 0.7067 360. 18894. 34462. 1730. R46 12.5k 0402 1.0221 1.0284 270. 36505. 37759. 1772. L7 INDUCTOR 0603 1.7784 0.7561 0. 13210. 34933. 1725. U142 LMV1032 microSMD4 2.18025 0.03575 180. 9865. 27310. 1674. L8 INDUCTOR 0603 0.1745 0.6447 270. 29446. 33849. 1738. C87 0.047uf 0402 2.3611 0.8811 360. 8363. 36186. 1729. R53 9.2k 0402 1.062 1.3883 90. 36817. 41587. 1796. R36 3.3k 0402 1.9546 0.8747 270. 44273. 36230. 1772. C88 0.047uf 0402 2.361 0.8241 360. 8356. 35593. 1725. 
R54 9.2k 0402 1.062 1.0284 270. 36838. 37762. 1772. R38 3.3k 0603 0.8646 0.8147 360. 35200. 35636. 1757. R37 3.3k 0402 1.9546 1.1347 270. 44266. 38878. 1788. TP1 TP TP 1.302 1.3882 0. 38828. 41596. 1797. C89 0.047uf 0402 2.361 0.7671 360. 8358. 35023. 1721. C83 0.1uf 0402 1.2246 0.5147 0. 38206. 32492. 1741. C12 1uf 0402 0.8182 0.1876 270. 34842. 25228. 1692. R39 3.3k 0402 1.5146 0.8747 90. 40609. 36213. 1767. TP3 TP TP 1.302 1.2484 0. 38835. 40039. 1788. C85 0.1uf 0402 0.2946 0.0348 180. 30497. 27541. 1701. C29 0.01uf 0402 1.5749 0.1575 270. 14907. 28634. 1690. TP4 TP TP 0.8219 1.1684 0. 34852. 39172. 1778. C15 1uf 0402 1.6037 0.0518 270. 41377. 26681. 1709. TP5 TP TP 0.8219 1.3084 0. 34835. 40731. 1787. C86 0.1uf 0402 1.0946 0.0347 360. 37136. 27478. 1709. TP6 TP TP 1.3021 1.1085 0. 38832. 38563. 1779. TP7 TP TP 0.7021 1.3883 0. 33824. 41561. 1791. TP8 TP TP 0.222 1.0285 0. 29855. 37751. 1763. C19 1uf 0402 0.6901 0.0599 90. 22286. 27662. 1693. TP9 TP TP 0.7021 1.2484 0. 33830. 40042. 1782. C90 0.047uf 0402 2.361 0.7101 360. 8360. 34449. 1718. R40 3.3k 0402 1.5146 1.1347 90. 40602. 38842. 1784. C28 7pf 0402 1.0306 0.562 270. 19447. 32944. 1722. C36 0.01uf 0402 1.1968 0.0315 0. 18064. 26795. 1682. C67 1uf 0603 0.8146 0.7047 270. 34787. 34503. 1750. R13 25 0402 1.57 0.34 0. 14940. 30478. 1701. C68 1uf 0603 1.1946 0.7247 270. 37950. 34725. 1755. C38 0.01uf 0402 0.9763 0.1733 270. 19894. 28829. 1697. R14 25 0402 1.5749 0.4094 270. 14897. 31177. 1705. C69 1uf 0603 1.2747 0.7247 90. 38616. 34707. 1755. R16 25 0402 1.1956 0.8867 180. 18053. 36282. 1739. R1 33 0402 1.4961 0.0314 90. 40482. 26822. 1709. R5 220k 0402 0.5628 0.1852 90. 23338. 28986. 1701. R3 33 0402 1.6937 0.1282 180. 42120. 28451. 1721. R4 33 0402 1.6937 0.1982 180. 42116. 29193. 1725. R28 2.2k 0402 1.9346 1.4048 90. 44069. 41754. 1804. R29 2.2k 0402 1.8346 1.4047 90. 43249. 41818. 1804. C70 1uf 0603 1.2747 0.6246 270. 38619. 33709. 1749. R2 100k 0402 1.4173 0.0315 90. 39826. 26815. 1708. 
C42 0.01uf 0402 1.1955 0.7166 180. 18052. 34552. 1730. R43 3k 0402 1.242 1.3085 270. 38319. 40701. 1792. C73 1uf 0603 1.8646 0.7147 0. 43527. 34646. 1761. R44 3k 0402 0.882 1.1085 90. 35337. 38556. 1776. R49 33k 0402 1.202 1.2285 270. 37988. 39816. 1787. C77 1uf 0603 0.7446 0.9347 0. 34197. 36870. 1764. C32 1uf 0402 0.8976 0.6615 180. 20551. 34005. 1729. C79 1uf 0603 0.8646 0.8747 180. 35198. 36251. 1761. R30 2.2k 0402 1.7347 1.4047 90. 42427. 41804. 1803. C35 1uf 0402 1.2913 0.0315 180. 17298. 26781. 1681. R31 2.2k 0402 1.6346 1.4047 90. 41584. 41800. 1802. R50 33k 0402 0.9345 1.1548 90. 35772. 39028. 1779. R11 10k 0402 0.0001 0.126 90. 28025. 25843. 1690. C46 1uf 0402 1.1955 0.6766 180. 18053. 34138. 1727. R12 10k 0402 0.0001 0.5196 90. 28038. 21612. 1662. R9 10k 0402 0.0001 0.2835 270. 28031. 24093. 1677. R17 10k 0402 1.685 0.6615 270. 13974. 33945. 1741. R18 10k 0402 1.5998 0.4875 90. 14688. 32018. 1710. C14 0.001uf 0402 0.96 0.26 0. 20044. 29712. 1703. U92 LMV1032 microSMD4 2.53285 0.03585 180. 6926. 27289. 1670. R55 6.5k 0402 0.9821 1.3883 90. 36150. 41583. 1795. R56 6.5k 0402 1.142 1.0284 270. 37502. 37773. 1774. R19 22K 0402 0.9958 0.6867 90. 19712. 34257. 1729. C2 0.1uf 0402 1.6237 0.2581 270. 41530. 29787. 1728. C30 5pf 0402 1.1907 0.562 90. 18114. 32929. 1720. C25 0.001uf 0402 0.2835 0.0787 180. 30398. 26352. 1694. C20 33pf 0402 0.5628 0.3352 90. 23328. 30458. 1712. C13 8pf 0402 1.6877 0.4299 270. 42062. 31517. 1741. C27 0.001uf 0402 0.9763 0.5039 90. 19900. 32258. 1718. C17 8pf 0402 1.4476 0.4299 90. 40063. 31519. 1738. C71 0.1uf 0603 1.3545 0.7247 90. 39280. 34701. 1756. C49 2.2uf 0402 2.2324 0.9436 0. 9413. 36840. 1734. C50 2.2uf 0402 2.4802 0.9455 0. 7350. 36852. 1732. C51 2.2uf 0402 2.4779 0.0152 0. 7399. 26905. 1670. C52 2.2uf 0402 2.2347 0.0184 0. 9423. 26881. 1672. C40 0.001uf 0402 1.1956 0.7568 180. 18050. 34938. 1732. C53 2.2uf 0402 1.9398 0.7554 0. 11855. 34916. 1725. C54 2.2uf 0402 1.6317 0.315 270. 14433. 30225. 1700. 
C48 2.2nF 0402 0.9154 0.9464 90. 20377. 36919. 1747.
C55 2.2uf 0402 1.8616 0.7549 180. 12506. 34903. 1726.
C56 0.012uf 0402 1.7107 0.7353 270. 13762. 34716. 1726.
C57 0.012uf 0402 1.6956 0.8478 90. 13875. 35886. 1733.
R7 90k 0402 0.8225 0.266 90. 21176. 29782. 1704.
C58 0.012uf 0402 1.8891 0.8466 90. 12274. 35834. 1731.
R57 22k 0402 0.942 1.3883 90. 35826. 41602. 1795.
TP10 TP TP 0.222 1.1685 0. 29847. 39154. 1772.
C22 10uf 0603 0.6428 0.1653 360. 22687. 28750. 1700.
TP11 TP TP 0.222 1.3084 0. 29820. 40682. 1781.
C23 10uf 0603 0.7429 0.1652 180. 21854. 28745. 1699.
TP12 TP TP 0.7022 1.1085 0. 33848. 38556. 1773.
C61 2.2uf 0402 1.8422 0.8468 90. 12664. 35859. 1732.
C62 2.2uf 0402 2.0357 0.8464 90. 11053. 35837. 1730.
C63 2.2uf 0402 2.0001 0.0836 270. 11363. 27899. 1681.
C64 2.2uf 0402 2.0025 0.1862 90. 11350. 28924. 1688.
C44 1.5pF 0402 0.8357 0.8065 180. 21045. 35478. 1739.
C65 0.012uf 0402 1.8247 0.9119 0. 12808. 36505. 1736.
C66 0.012uf 0402 2.0181 0.913 0. 11198. 36540. 1734.
C39 0.1uf 0402 1.3229 0.6772 180. 16993. 34136. 1726.
R6 825k 0402 0.5628 0.2651 90. 23329. 29784. 1706.
C41 0.1uf 0402 1.1023 0.0314 180. 18851. 26789. 1683.
C45 0.1uf 0402 0.9763 0.0787 90. 19897. 27845. 1691.
R34 327k 0402 1.9046 0.8747 270. 43856. 36228. 1771.
R35 327k 0402 1.9046 1.1347 90. 43849. 38858. 1788.
R51 47k 0402 1.202 1.3085 270. 37985. 40700. 1792.
R52 47k 0402 0.9221 1.1083 90. 35661. 38566. 1776.
C74 4.7uf 0603 1.9346 1.0047 360. 44101. 37581. 1780.
C75 4.7uf 0603 1.9346 0.9447 360. 44103. 36957. 1776.
C76 4.7uf 0603 1.9346 1.0648 180. 44099. 38174. 1784.
R41 327k 0402 1.5646 0.8747 90. 41026. 36215. 1768.
C78 4.7uf 0603 1.7346 0.7947 0. 42442. 35463. 1765.
R42 327k 0402 1.5645 1.1347 270. 41018. 38856. 1784.
C59 0.1uf 0402 1.8046 0.2246 270. 12986. 29320. 1692.
U124 LMV1032 microSMD4 2.18025 0.37765 180. 9843. 30773. 1696.
U127 LMV1032 microSMD4 2.18025 0.32065 180. 9845. 30296. 1692.
C80 4.7uf 0603 1.5346 0.9447 360. 40773. 36984. 1772.
C81 4.7uf 0603 1.5346 1.0648 180. 40769. 38149. 1780.
R10 50k 0402 0.9607 0.3308 180. 20034. 30408. 1706.
C82 4.7uf 0603 1.5346 1.0047 360. 40771. 37546. 1776.
C84 4.7uf 0603 0.1746 0.5347 270. 29464. 32629. 1732.
C60 0.1uf 0402 1.8032 0.0862 270. 13012. 27892. 1683.
R15 50k 0402 1.1956 0.7967 180. 18055. 35380. 1734.
U130 LMV1032 microSMD4 2.18025 0.26365 180. 9857. 29685. 1689.
2.9mm_hole VAL** 2.9mm_hole 2.325 0.2 0. 8698. 24995. 1658.
U133 LMV1032 microSMD4 2.18025 0.20665 180. 9849. 29114. 1685.
C47 1pF 0402 0.8158 0.7565 90. 21212. 34950. 1736.

counts spaced at exactly 1mm (position in mm, count; the count decreases by roughly 410-440 per mm and crosses zero near 32mm):
 0  13206.000000
 1  12795.000000
 2  12349.000000
 3  11983.000000
 4  11545.000000
 5  11117.000000
 6  10710.000000
 7  10262.000000
 8   9813.000000
 9   9395.000000
10   8957.000000
11   8561.000000
12   8154.000000
13   7726.000000
14   7298.000000
15   6897.000000
16   6477.000000
17   6093.000000
18   5700.000000
19   5309.000000
20   4871.000000
21   4453.000000
22   4046.000000
23   3639.000000
24   3232.000000
25   2836.000000
26   2429.000000
27   2011.000000
28   1594.000000
29   1187.000000
30    780.000000
31    352.000000
32    -65.000000
33   -472.000000
34   -900.000000
35  -1318.000000
36  -1708.000000
37  -2104.000000
38  -2490.000000
39  -2908.000000
40  -3325.000000
{758}  
Ocaml has an interactive top level, but in order to make it useful (e.g. for inspecting the types of variables, or trying out code before compiling it), you need to load libraries and modules into it. If you have ocamlfind on your system (I think this is the requirement..), do this with #use "topfind";; at the ocaml prompt, then #require "package names";;. e.g:

tlh24@chimera:~/svn/m8ta/yushin$ ledit ocaml
        Objective Caml version 3.10.2

# #use "topfind";;
- : unit = ()
Findlib has been successfully loaded. Additional directives:
  #require "package";;      to load a package
  #list;;                   to list the available packages
  #camlp4o;;                to load camlp4 (standard syntax)
  #camlp4r;;                to load camlp4 (revised syntax)
  #predicates "p,q,...";;   to set these predicates
  Topfind.reset();;         to force that packages will be reloaded
  #thread;;                 to enable threads

- : unit = ()
# #require "bigarray,gsl";;
/usr/lib/ocaml/3.10.2/bigarray.cma: loaded
/usr/lib/ocaml/3.10.2/gsl: added to search path
/usr/lib/ocaml/3.10.2/gsl/gsl.cma: loaded
# #require "pcre,unix,str";;
/usr/lib/ocaml/3.10.2/pcre: added to search path
/usr/lib/ocaml/3.10.2/pcre/pcre.cma: loaded
/usr/lib/ocaml/3.10.2/unix.cma: loaded
/usr/lib/ocaml/3.10.2/str.cma: loaded
# Pcre.pmatch ;;
- : ?iflags:Pcre.irflag -> ?flags:Pcre.rflag list -> ?rex:Pcre.regexp ->
    ?pat:string -> ?pos:int -> ?callout:Pcre.callout -> string -> bool = <fun>
# let m = Gsl_matrix.create 3 3;;
val m : Gsl_matrix.matrix = <abstr>
# m;;
- : Gsl_matrix.matrix = <abstr>
# m.{1,1};;
- : float = 6.94305623882282e-310
# m.{0,0};;
- : float = 6.94305568087725e-310
# m.{1,1} <- 1.0 ;;
- : unit = ()
# m.{2,2} <- 2.0 ;;
- : unit = ()
# let mstr = Marshal.to_string m [] ;;

Nice!  
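To avoid retyping these directives in every session, the toplevel also reads ~/.ocamlinit at startup, so they can be made automatic. A minimal sketch, assuming findlib is installed and the gsl and pcre packages are present on your system:

```ocaml
(* ~/.ocamlinit -- executed automatically when the ocaml toplevel starts *)
#use "topfind";;            (* load findlib's toplevel directives *)
#require "bigarray,gsl";;   (* assumes the ocamlgsl package is installed *)
#require "pcre,unix,str";;  (* assumes the pcre package is installed *)
```

With this in place, a bare `ocaml` (or `ledit ocaml`) drops you straight into a prompt with all of the above modules loaded.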
{751}  
From the Lenthor Engineering Design guide. Wow, they are indeed everywhere!  
{226}  