m8ta
{842}
ref: work-0 tags: distilling free-form natural laws from experimental data Schmidt Cornell automatic programming genetic algorithms date: 09-14-2018 01:34 gmt

Distilling free-form natural laws from experimental data

  • The critical step was to use partial derivatives to evaluate the search for invariants (see the sketch after this list). Even so, with a 4D data set the search for natural laws took ~30 hours.
    • Then again, how long did it take humans to figure out these invariants? (Went about it in a decidedly different way..)
    • Further, how long did it take for biology to discover similar invariants?
      • They claim elsewhere that the same algorithm has been applied to biological data - a metabolic pathway - with some success.
      • Of course evolution had to explore a much larger space - proteins and regulatory pathways, not simpler mathematical expressions / linkages.
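As I understand the paper, the partial-derivative trick is to score a candidate conservation law f(x, y) by how well the derivative ratio it implies matches the ratio estimated numerically from the data. Below is a minimal Python sketch of that idea; the pendulum example and the exact scoring function are my own stand-ins, not the paper's.

import numpy as np

def score_invariant(f_x, f_y, x, y, t):
    # Score a candidate invariant f via its partial derivatives f_x, f_y (callables).
    # A conserved f implies dx/dy = -f_y/f_x along the measured trajectory.
    dxdt = np.gradient(x, t)
    dydt = np.gradient(y, t)
    mask = np.abs(dydt) > 1e-3 * np.max(np.abs(dydt))   # avoid degenerate points
    measured = dxdt[mask] / dydt[mask]
    predicted = -f_y(x[mask], y[mask]) / f_x(x[mask], y[mask])
    return np.mean(np.log(1.0 + np.abs(measured - predicted)))  # smaller = better

# toy check: pendulum energy f = 0.5*v^2 + g*(1 - cos(th)) on a small-angle trajectory
g = 9.8
t = np.linspace(0, 10, 2000)
th = 0.3 * np.cos(np.sqrt(g) * t)
v = np.gradient(th, t)
print(score_invariant(lambda th, v: g * np.sin(th),   # df/d(theta)
                      lambda th, v: v,                # df/dv
                      th, v, t))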

{763}
ref: work-2999 tags: autocorrelation poisson process test neural data ISI synchrony DBS date: 02-16-2012 17:53 gmt

I recently wrote a matlab script to measure & plot the autocorrelation of a spike train; to test it, I generated a series of timestamps from a homogeneous Poisson process:

function [x, isi] = homopoisson(dur, rate)
% function [x, isi] = homopoisson(dur, rate)
% Generate an instance of a homogeneous Poisson point process, unbinned.
% dur is the duration in seconds, rate in spikes/sec.
% x is the vector of timestamps; isi is the intervals between them.
% (The argument is called 'dur' so as not to shadow Matlab's built-in length().)

num = ceil(dur * rate * 3);                % draw ~3x more events than expected
isi = -(1/rate) .* log(1 - rand(num, 1));  % exponential ISIs via the inverse CDF
x = cumsum(isi);
% keep only the timestamps that fall within the requested duration.
index = find(x > dur);
x = x(1:index(1,1)-1, 1);
isi = isi(1:index(1,1)-1, 1);

The autocorrelation of a Poisson process is, as it should be, flat:

Above:

  • Red lines are the autocorrelations estimated from shuffled timestamps (e.g. measure the ISIs - interspike intervals - shuffle these, and take the cumsum to generate a new series of timestamps; see the sketch below). Hence, red lines are a type of control.
  • Blue lines are the autocorrelations estimated from segments of the full timestamp series. They are used to see how stable the autocorrelation is over the recording.
  • Black line is the actual autocorrelation estimated from the full timestamp series.
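A minimal sketch (in Python / numpy rather than Matlab) of the shuffle control described above. The bin width and maximum lag are arbitrary choices, not the values used for the figures.

import numpy as np

def shuffle_control(timestamps, rng=np.random.default_rng(0)):
    # shuffle the ISIs and rebuild timestamps - the 'red line' control above
    isi = np.diff(np.concatenate(([0.0], timestamps)))
    return np.cumsum(rng.permutation(isi))

def autocorr(timestamps, dur, binwidth=0.01, maxlag=0.5):
    # bin the spike train and estimate its normalized autocorrelation out to maxlag
    counts, _ = np.histogram(timestamps, bins=np.arange(0, dur + binwidth, binwidth))
    c = counts - counts.mean()
    full = np.correlate(c, c, mode='full') / (c.var() * len(c))
    mid = len(full) // 2
    return full[mid:mid + int(maxlag / binwidth) + 1]

# usage: compare a homogeneous Poisson train against its shuffled control
dur, rate = 100.0, 20.0
isi = -np.log(1 - np.random.rand(int(dur * rate * 3))) / rate
x = np.cumsum(isi); x = x[x < dur]
ac_real = autocorr(x, dur)
ac_shuf = autocorr(shuffle_control(x), dur)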

The problem with my recordings is that there is generally high long-range correlation, correlation which is destroyed by shuffling.

Above is a plot of 1/isi for a noise channel with very high mean 'firing rate' (> 100Hz) in blue. Behind it, in red, is 1/shuffled isi. Noise and changes in the experimental setup (bad!) make the channel very non-stationary.

Above is the autocorrelation plotted in the same way as figure 1. Normally, the firing rate is binned at 100 Hz and high-pass filtered at 0.005 Hz so that long-range correlation is removed, but I turned this off for the plot. Note that the shuffled data has a number of different offsets, primarily due to differing long-range correlations / nonstationarities.

Same plot as figure 3, with highpass filtering turned on. Shuffled data still has far more local correlation - why?

The answer seems to be in the relation between individual ISIs. Shuffling the ISI order obviously does not destroy the distribution of ISIs, but it does destroy the ordering, or pair-wise correlation, between isi(n) and isi(n+1). To check this, I plotted these two distributions:

-- Original log(isi(n)) vs. log(isi(n+1))

-- Shuffled log(isi_shuf(n)) vs. log(isi_shuf(n+1))

-- Close-up of log(isi(n)) vs. log(isi(n+1)) using alpha-blending for a channel that seems heavily corrupted with electro-cauterizer noise.
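For completeness, a self-contained sketch of how such a log-ISI return map can be plotted. The ISIs here are a Poisson stand-in (so both panels will look the same); with real recorded ISIs in their place, the original and shuffled panels would differ as in the figures above. The alpha value is arbitrary.

import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
isi_orig = rng.exponential(1/20.0, size=2000)   # stand-in ISIs (20 Hz Poisson)
isi_shuf = rng.permutation(isi_orig)            # shuffled ISIs
fig, axes = plt.subplots(1, 2, sharex=True, sharey=True)
axes[0].scatter(np.log(isi_orig[:-1]), np.log(isi_orig[1:]), s=2, alpha=0.2)
axes[0].set_title('original')
axes[1].scatter(np.log(isi_shuf[:-1]), np.log(isi_shuf[1:]), s=2, alpha=0.2)
axes[1].set_title('shuffled')
plt.show()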

{806}
ref: work-0 tags: gaussian random variables mutual information SNR date: 01-16-2012 03:54 gmt

I've recently tried to determine the bit-rate conveyed by one Gaussian random process about another, in terms of the signal-to-noise ratio between the two. Assume x is the known signal to be predicted, and y is the prediction.

Let's define $SNR(y) = \frac{Var(x)}{Var(err)}$ where $err = x - y$. Note this is a ratio of powers; for the conventional SNR, $SNR_{dB} = 10 \log_{10} \frac{Var(x)}{Var(err)}$. $Var(err)$ is also known as the mean-squared error (MSE).

Now, $Var(err) = E[(x - y - \overline{err})^2] = Var(x) + Var(y) - 2 Cov(x,y)$; assume x and y have unit variance (or scale them so that they do), then

$$\frac{2 - SNR(y)^{-1}}{2} = Cov(x,y)$$

We need the covariance because the mutual information between two jointly Gaussian zero-mean variables can be defined in terms of their covariance matrix (see http://www.springerlink.com/content/v026617150753x6q/ ). Here Q is the covariance matrix,

$$Q = \begin{bmatrix} Var(x) & Cov(x,y) \\ Cov(x,y) & Var(y) \end{bmatrix}$$

$$MI = \frac{1}{2} \log_2 \frac{Var(x)\,Var(y)}{\det(Q)}$$

$$\det(Q) = 1 - Cov(x,y)^2$$

Then $MI = -\frac{1}{2} \log_2 \left[ 1 - Cov(x,y)^2 \right]$

or $MI = -\frac{1}{2} \log_2 \left[ SNR(y)^{-1} - \frac{1}{4} SNR(y)^{-2} \right]$

This agrees with intuition. If we have an SNR of 10 dB, or 10 (power ratio), then we would expect to be able to break a random variable into about 10 different categories or bins (recall stdev is the sqrt of the variance), with the probability of the variable being in the estimated bin to be 1/2. (This, at least in my mind, is where the 1/2 constant comes from - if there is gaussian noise, you won't be able to determine exactly which bin the random variable is in, hence log_2 is an overestimator.)

Here is a table with the respective values, including the amplitude (not power) ratio representations of SNR.

SNR (dB)   Amp. ratio   MI (bits)
10         3.1          1.6
20         10           3.3
30         31           5.0
40         100          6.6
90         31e3         15
Note that at 90dB, you get about 15 bits of resolution. This makes sense, as 16-bit DACs and ADCs have (typically) 96dB SNR. good.

Now, to get the bitrate, you take the SNR, calculate the mutual information, and multiply it by the bandwidth (not the sampling rate in a discrete time system) of the signals. In our particular application, I think the bandwidth is between 1 and 2 Hz, hence we're getting 1.6-3.2 bits/second/axis, hence 3.2-6.4 bits/second for our normal 2D tasks. If you read this blog regularly, you'll notice that others have achieved 4bits/sec with one neuron and 6.5 bits/sec with dozens {271}.
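A quick numerical check of the table and the bitrate estimate, using the last MI formula above (it reproduces the table to within ~0.1 bit of rounding). The 1-2 Hz bandwidth is the figure assumed in the paragraph above.

import numpy as np

def mi_bits(snr_db):
    # MI between two unit-variance jointly Gaussian signals, per the formula above
    snr = 10 ** (snr_db / 10.0)                  # dB -> power ratio
    return -0.5 * np.log2(snr ** -1 - 0.25 * snr ** -2)

for db in (10, 20, 30, 40, 90):
    print(db, 10 ** (db / 20.0), mi_bits(db))    # dB, amplitude ratio, MI in bits

bandwidth = 2.0                                  # Hz, per axis
print('bits/sec/axis ~', bandwidth * mi_bits(10))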

{773}
ref: work-2009 tags: bipolar opamp design current control microstimulation date: 01-06-2012 20:13 gmt

Recently I've been working on a current-controlled microstimulator for the lab, and have not been at all satisfied with the performance - hence, I decided to redesign it.

Since it is a digitally current-controlled stimulator, and the current is set with a DAC (MCP4822), we need a voltage controlled current source. Here is one design:

  • Because the output of the DAC is ground-referenced, and there is no negative supply in the design, the input buffers must be PNP transistors. These level-shift the input (0-2V, corresponding to 0-400uA; see the sketch after this list) up by 0.65V (V_be), and increase the current. Both are biased with 1uA here, though 10uA would also work (lazily, through 1M resistors - I've checked that these work well too). This sets the base current at about 10nA for Q2 and Q1.
  • Q3 and Q4 are a current-mirror pair. If Q1 Vb increases, Ie for Q3 will decrease, increasing Ib for Q4 and hence its Ic. This will decrease the base current in Q6 and Q5, as desired. On the other hand, increasing Q2 Vb will decrease Q4 Ic, increasing Ib in Q6 and Q5. The current mirror effects the needed negative feedback in the circuit. This mirror could also be implemented with PNP transistors, but it doesn't work as well as then the collector (which has voltage gain) is tied to the emitter of the input PNP transistors. Voltage gain is needed to drive Q5 / Q6.
  • Q5 & Q6 are Darlington cascaded NPN transistors for current gain. If Q6 is omitted, Ib in Q5 increases -> Ib in Q1 decreases -> Ic in Q3 decreases -> Ib in Q4 increases. This results in a set-point of Ib = 100nA in Q5 -> Ic ~10uA. (unacceptable for our task).
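For reference, a small helper for the DAC-to-current scaling implied above (0-2V commanding 0-400uA, i.e. 5 mV per uA). The MCP4822 specifics (12-bit, 2.048V internal reference, selectable 1x/2x gain) are from its datasheet as I remember it - worth double-checking.

def dac_code_for_current(i_ua, gain=1):
    # 12-bit MCP4822 code that commands i_ua microamps, assuming 0-2V -> 0-400uA
    v = i_ua * (2.0 / 400.0)            # volts needed at the DAC output
    full_scale = 2.048 * gain           # 2.048V internal reference, 1x or 2x gain
    code = round(v / full_scale * 4096)
    return max(0, min(4095, code))

print(dac_code_for_current(100))        # 100 uA -> 0.5 V -> code 1000 (gain=1)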

What I really need is a high-side regulated current source; after some fiddling, here is what I came up with:

  • V2 is from the DAC; for the testing, I just simulate it with a voltage ramp. This circuit, due to the 5V biasing (I have 5V available for the DAC, hence might as well use it) works well up to about 4V input voltage - exactly what the DAC can produce.
  • Q1 and Q2 are biased through 1M resistors R6 and R8; their emitters are coupled to a common-emitter amplifier Q3 and Q4.
  • As the voltage across R1 increases, Ib in Q1 decreases. This puts more current through the base of Q4, increasing the emitter voltage on both Q3 and Q4. This reduces the current in Q3, hence reducing the current in Q5 -> the voltage across R1. feedback ;-)
  • I tried using a current mirror on the high-side, but according to spice, this actually works *worse*. Q5 & Q3 / Q4 have more than enough gain as it stands.
  • Yes, that's 100V - the electrodes we use have high impedance, so need a good bit of voltage to get the desired current.
  • Now, will need to build this circuit to verify that it actually works.

  • (click for the full image)
  • This simulates OK, but shows some bad transients related to switching - I'll have to inspect this more closely, and possibly tune the differential stage (e.g. remove the fast transient response - Q6 and Q12 seem to turn off before Q5 and Q11 do, which pulls the output to +50v briefly)

  • This is the biphasic, bipolar stimulator's response to a rising ramp command voltage, as measured by the current through R17. Note how clean the signal is :-) But, I'm sure that it won't look quite this nice in real life! Will try one half out on a breadboard to see how it looks.
  • Note I switched from NMOS switching transistors to NPN - Q15 and Q16 shunt the bias current from Q3/Q2 and Q8/Q9, keeping the output PNPs (Q5 and Q11) off. These transistors are in saturation, so they take 100-200ns to turn off, which should be fine for this application, where the pulse width is typically 100us.
  • I've fed the pull-down NPN base current from the positive supply here, so that as long as Q5 and Q11 are on, Q6 and Q12 are also on. The storage time here (not that it is much, the transistors are kept out of saturation via D1-4) helps to keep the mean difference in voltage between animal or stimulee's ground and isolated stimulator ground low. In previous stimulators the high-side was a near-saturation PNP, which pulled the voltage all the way to the positive supply when stimulation started. This meant that any stray capacitance had to be charged through the brain - bad!
    • Note this means that the emitter current through Q6 and Q12 is more than the current through R17 by that passed through Q14 and Q13. By design, this is 1/50th that through Q5 and Q11. This means that the actual stimulated current will be 95% of the commanded current, something which is easily corrected in software.

  • Larger view of the schematic. Still worried about stability - perhaps will need to add something to limit slew rate.
  • V2 on the right is the command voltage from the DAC.

  • The amplifier in figure 5 suffered from low bandwidth, primarily because the large resistors effected slow time constants, and because there was no short path to +50V from the high-side PNP transistors. This led to very slow turn-off times. To remedy this:
    • Bias current to Q3 & Q4 was increased (R6 & R8 decreased) -> more current to charge / discharge capacitance.
    • Common emitter resistor concomitantly decreased to 22k. This increases the collector current.
    • Pull-up resistors changed to a current mirror. This allows the current through Q4 to pull up the bases of Q5 and Q6, letting them turn off more quickly. If Q1 is off (e.g. voltage across R1 is high), Q4 will be on, and Q6 will source this current. etc.
  • With this done, I tested it on the breadboard & it oscillated. bad! Hence, I put a 1nf (10nf in the schematic) capacitor from the collector of Q3 to ground - hence limiting the slew rate. This abolished oscillations and led to a very pretty linear turn-on waveform.
  • However, the turn-off waveform was an ugly exponential. Why? With Q2 or Q10 fully on, Q3 will be off. Q4 will effectively recharge C1 through R7. As the voltage across R7 goes to zero, so does the charging current. Since I don't want to add in a negative supply, I simply shifted the base voltage of Q3 and Q4 using a diode, about as simple as you can get!
  • Eventually, I replaced R7 with a current source ... but this did not change the fall waveform that much; it is still (partially) exponential. Possibly this is from the emitter resistors on the high-side.

  • As of now, the final version - tested using surface mount devices; seems to work ok!
  • Note the added transistor Q11 - this discharges / removes minority carriers from the base of Q8. Even though D1 and D2 guarantee a current-starved Q8 in previous designs, they leave no path to ground from the base, so this transistor was taking forever to turn off. This was especially the case when switching (recall this is one half of an H-bridge, and Q9 would actually be on the other side of the H-bridge), since the other side's Q9 would push current, while Q8 would continue to conduct & sink current. This current through R1 would increase Q8's emitter voltage, reverse-biasing its base-emitter junction, making the transistor take 100s of microseconds to turn off. Bad, since the amplifier is intended to replicate 100us pulses! Anyway, Q11 neatly solves the problem (albeit with 100ns or so of saturated-switching storage time - something that Q10 has anyway).
  • D1 and D2 are no longer really necessary, but I've left them in this diagram for illustrative purposes. (and they improve storage time a bit).

  • Update as the result of testing. Changes:
    • Added emitter resistors on the two current mirrors (Q6, Q7; Q12, Q13). This eliminated stability problems.
    • Changed the anti-saturation diodes to a resistor. This is needed as it takes some time for Q9 to turn off, and to avoid unbalanced currents through the electrode pairs, this charge should be pulled to ground through Q8. In the actual circuit, Q11 is driven with a 4-8us delayed version of the control signal V4 so that Q8 remains on longer than current source Q9.
    • Decreased C1 to 100pf; because the amplifier is more stable now, the slew rate can be increased.

{850}
ref: work-0 tags: kinarm problem mathML date: 11-03-2010 16:05 gmt

Historical notes from using the Kinarm... this only seems to render properly in firefox / mozilla.


To apply cartesian force fields to the arm, the original kinarm PLCC (whatever that stands for) converted joint velocities to cartesian velocities using the jacobian matrix. All well and good. The equation for endpoint location of the kinarm is:

$$\hat{x} = \begin{bmatrix} l_1 \sin(\theta_{sho}) + l_2 \sin(\theta_{sho}+\theta_{elb}) \\ l_1 \cos(\theta_{sho}) + l_2 \cos(\theta_{sho}+\theta_{elb}) \end{bmatrix}$$

l_1 = 0.115 meters, l_2 = 0.195 meters in our case. The jacobian of this function is:

$$J = \begin{bmatrix} -l_1 \sin(\theta_{sho}) - l_2 \sin(\theta_{sho}+\theta_{elb}) & -l_2 \sin(\theta_{elb}) \\ l_1 \cos(\theta_{sho}) + l_2 \cos(\theta_{sho}+\theta_{elb}) & l_2 \cos(\theta_{elb}) \end{bmatrix}$$

so $\hat{v} = J \dot{\theta}$ etc., and (I think!) $\hat{F} = J \hat{\tau}$, where τ is the vector of shoulder and elbow torques and F is the cartesian force. The flow of the PLCC is then:

  1. convert joint angular velocities to cartesian velocities
  2. cartesian velocities to cartesian forces by a symmetric matrix A, which effects simple viscous and curl fields: $\hat{F} = A \hat{v}$
  3. cartesian forces to joint torques via the inverse of the jacobian.
But, and I may be wrong here, rather than inverting the jacobian, the PLCC simply takes the transpose. The inverse of the jacobian and the transpose are not even close to equal, viz (from mathworld):

$$J = \begin{bmatrix} a & b \\ c & d \end{bmatrix}$$

$$J^{-1} = \frac{1}{ad - bc} \begin{bmatrix} d & -b \\ -c & a \end{bmatrix} \neq \begin{bmatrix} a & c \\ b & d \end{bmatrix} = J^T$$

substitute to see if the matrices look similar ...

$$\|J\|\, J^{-1} = \begin{bmatrix} l_2 \cos(\theta_{elb}) & l_2 \sin(\theta_{elb}) \\ -l_1 \cos(\theta_{sho}) - l_2 \cos(\theta_{sho}+\theta_{elb}) & -l_1 \sin(\theta_{sho}) - l_2 \sin(\theta_{sho}+\theta_{elb}) \end{bmatrix} \neq \begin{bmatrix} -l_1 \sin(\theta_{sho}) - l_2 \sin(\theta_{sho}+\theta_{elb}) & l_1 \cos(\theta_{sho}) + l_2 \cos(\theta_{sho}+\theta_{elb}) \\ -l_2 \sin(\theta_{elb}) & l_2 \cos(\theta_{elb}) \end{bmatrix} = J^T$$

where

$$\|J\| = -l_1 l_2 \sin(\theta_{sho})\cos(\theta_{elb}) - l_2^2 \sin(\theta_{sho}+\theta_{elb})\cos(\theta_{elb}) + l_1 l_2 \cos(\theta_{sho})\sin(\theta_{elb}) + l_2^2 \cos(\theta_{sho}+\theta_{elb})\sin(\theta_{elb})$$

I'm surprised that we got something even like curl and viscous forces - the matrices are not similar. This explains why the forces seemed odd and poorly scaled, and why the constants for the viscous and curl fields were so small (the units should have been N/(cm/s); 1 newton is a reasonable force, and the monkey moves at around 10 cm/sec, so the constant should have been 1/10 or so - instead, we usually put in a value of 0.0005!). For typical values of the shoulder and elbow angles, the determinant of the matrix is ~200 (the kinarm PLCC works in centimeters, not meters), so the transpose has entries ~200x too big. Foolishly, we compensated by making the constant (or entries in A) 200 times too small, i.e. 1/10 * 1/200 = 0.0005 :(
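A quick numerical check of the transpose-vs-inverse claim, computing the Jacobian by finite differences of the endpoint equation above (so any algebra slips in the matrices above don't matter). Segment lengths are in centimeters, as the PLCC uses; the test posture is arbitrary.

import numpy as np

l1, l2 = 11.5, 19.5    # segment lengths in cm

def endpoint(th):
    # forward kinematics from above: th = [theta_sho, theta_elb]
    s, e = th
    return np.array([l1 * np.sin(s) + l2 * np.sin(s + e),
                     l1 * np.cos(s) + l2 * np.cos(s + e)])

def jacobian(th, h=1e-6):
    J = np.zeros((2, 2))
    for i in range(2):
        d = np.zeros(2); d[i] = h
        J[:, i] = (endpoint(th + d) - endpoint(th - d)) / (2 * h)
    return J

th = np.array([np.deg2rad(45), np.deg2rad(90)])
J = jacobian(th)
print(np.linalg.inv(J))     # what should have been used
print(J.T)                  # what the PLCC used - not even close
print(np.linalg.det(J))     # magnitude on the order of the ~200 quoted above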

The end result is that a density-plot of the space spanned by the cartesian force and velocity is not very clean, as you can see in the picture below. The horizontal line is, of course, when the forces were turned off. A linear relationship between force and velocity should be manifested by a line in these plots - however, there are only suggestions of lines. The null field should have a negative-slope line in the upper left and lower right; the curl field should have a positive-slope line in the upper right and negative in the lower left (or vice-versa).

http://hardcarve.com/wikipic/kinarm_fkup.jpg

{848}
ref: work-0 tags: fur openGL directX shell hull algorithm date: 11-03-2010 15:47 gmt

http://www.xbdev.net/directx3dx/specialX/Fur/index.php -- for future reference. Simple algorithm that seems to work quite well. Can be done almost entirely in vertex shader...

{844}
ref: work-0 tags: emg_dsp design part selection stage6 date: 09-22-2010 20:09 gmt

"Stage 6" part selection:

  • BF527 to replace the BF537 -- big differences are more pins + a USB OTG high-speed port. The previous design used Maxim's MAX3421E, which seems to drop packets / have limited bandwidth (or perhaps my USB profile is incorrect?)
    • available in both 0.8mm and 0.5mm BGA. which? both are available from Digi-key. Coarser one is fine, will be easier to route.
    • Does not support mobile SDRAM nor DDR SDRAM; just the vanilla variety.
  • Continue to use the BF532 on the wireless devices (emg, neuro)
  • LAN8710 to replace the LAN83C185. Both can use the MII interface; the LAN83 is not recommended for new designs, though it is in the easier-to-debug TQFP package. Blackfin EZ-KIT for BF527 uses the LAN8710.
    • comes in 0.5mm pitch QFN-32 package.
    • 3.3V and 1.2V supply - can supply 1.2V externally.
  • SDRAM: MT48LC16M16A2BG-7E:D, digikey 557-1220-1-ND 16M x16, or 4M x 16 bit X 4 banks.
    • VFBGA-54 package.
    • 3.3v supply.
  • converter: AD7689 8 channel, 16-bit SAR ADC. has a built-in sequencer, which is sweet. (as well as a temperature sensor??!)
    • Package: 20LFCSP.
    • Seems we can run it at 4.0V, as in stage4.
  • Inst amp: MAX4208, available in MSOP-8 (they call it 8-µMAX). Can use the same circuitry as in stage2 - just check the bandwidth; want 2 kHz maybe?
  • M25P16 flash, same as on the dev board.
    • Digikey M25P16-VMN6P-ND : 150mil width SOIC-8
  • USB: use the on-board high-speed controller. No need for OTG functionality; FCI USB connector is fine. Digikey 609-1039-ND.

{839}
ref: work-0 tags: headstage recording wireless interference stage5 intan date: 08-13-2010 01:16 gmt

(I'm posting this here as it's easier than putting a image & text in subversion)

I'm building a wireless headstage for neural recording. Hence, it has sensitive, high-gain amplifiers (RHA2116) pretty close to a wireless transmitter + serial lines. The transmitter operates intermittently to save power, only sending samples from one continuous channel + threshold crossings for all the other channels. 27 byte-wide samples + a channel identifier + 4 bytes of threshold crossings are sent in one radio packet; as the radio takes some 130us to start up its PLL, 8 of these packets are chunked together into one frame; one frame is transmitted at ~144 Hz (actually, 1e6/(32*27*8) Hz). At the conclusion of each frame, the continuous channel to be transmitted is incremented.

It seems that radio transmission is interfering with the input amplifiers, as the beginning samples of a frame are corrupted - this is when the previous frame is going out over the air. It could also be noise from the SPI lines, which run under and close to the amplifiers. This may also not be a problem in vivo - it may only be an issue when the inputs to the amplifiers are floating.

Above, a plot of the raw data coming off the headstage radio. The red trace indicates the channel currently being transmitted; blue are the samples. Note that some channels do not have the artifact - I presume this is because their input is grounded.

This will be very tricky to debug, as if we turn off the radio, we'll get no data. Checking if it is a SPI problem is possible by writing the bus at a specified time.


Tested with radio PA disabled, it is definitely the SPI bus - routing problem! Stupid.

{815}
ref: work-0 tags: metacognition AI bootstrap machine learning Pitrat self-debugging date: 08-07-2010 04:36 gmt

Jacques Pitrat seems to have many of the same ideas that I've had (only better, and he's implemented them!)--

A Step toward an Artificial Scientist

  • The overall structure seems good - difficult problems are attacked by 4 different levels. The first level tries to solve the problem semi-directly, by writing a program to solve combinatorial problems (all problems here are constraint based; constraints are used to pare the tree of possible solutions; these trees are tested combinatorially); the second level monitors lower-level performance and decides which hypotheses to test (which branch to pursue on the tree) and/or which rules to apply to the tree; the third level directs the second level and restarts the whole process if a snag or inconsistency is found; the fourth level gauges the interest of a given problem and looks for new problems to solve within a family, so as to improve the skill of the 3 lower levels.
    • This makes sense, but why 4? Seems like in humans we only need 2 - the actor and the critic, bootstrapping forever.
    • Also includes a "Zeus" module that periodically checks for infinite loops of the other programs, and recompiles with trace instructions if an infinite loop is found within a subroutine.
  • Author claims that the system is highly efficient - it codes constraints and expert knowledge using a higher level language/syntax that is then converted to hundreds of thousands of lines of C code. The active search program runs runtime-generated C programs to evaluate and find solutions, wow!
  • This must have taken a decade or more to create! Very impressive. (seems it took 2 decades, at least according to http://tunes.org/wiki/jacques_20pitrat.html)
    • Despite all this work, he is not nearly done - it has no 'learning' module.
    • Quote: In this paper, I do not describe some parts of the system which still need to be developed. For instance, the system performs experiments, analyzes them and finds surprising results; from these results, it is possible to learn some improvements, but the learning module, which would be able to find them, is not yet written. In that case, only a part of the system has been implemented: on how to find interesting data, but still not on how to use them.
  • Only seems to deal with symbolic problems - e.g. magic squares, magic cubes, self-referential integer series. Alas, no statistical problems.
  • The whole CAIA system can effectively be used as a tool for finding problems of arbitrary difficulty with arbitrary number of solutions from a set of problem families or meta-families.
  • Has hypothesis based testing and backtracking; does not have problem reformulation or re-projection.
  • There is mention of ALICE, but not the chatbot A.L.I.C.E - some constraint-satisfaction AI program from the 70's.
  • Has a C source version of MALICE (his version of ALICE) available on the website. Amazingly, there is no Makefile - just gcc *.c -rdynamic -ldl -o malice.
  • See also his 1995 Paper: AI Systems Are Dumb Because AI Researchers Are Too Clever images/815_1.pdf

Artificial beings - his book.

{826}
ref: work-0 tags: PSD FFT periodogram autocorrelation time series analysis date: 07-19-2010 18:45 gmt

Studies in astronomical time series analysis. II - Statistical aspects of spectral analysis of unevenly spaced data Scargle, J. D.

  • The power at a given frequency as computed by a periodogram (the FFT is a specific case of the periodogram) of a gaussian white noise source with uniform variance is exponentially distributed: $P_Z(z)\,dz = P(z < Z < z+dz) = e^{-z}\,dz$ (see the sketch after this list).
    • The corresponding CDF is $1 - e^{-z}$, or $P(Z>z) = e^{-z}$, which gives the probability of a large observed power at a given freq.
    • If you inspect N frequencies, then $P(Z>z) = 1 - (1 - e^{-z})^N$ where $Z = \max_n Pow(\omega_n)$
  • Means of improving detection using a periodogram:
    • Average in time - this means that N above will be smaller, hence a spectral peak becomes more significant.
      • Cannot average too much - at some point, averaging will start to attenuate the signal!
    • Decrease the number of frequencies inspected.
  • Deals a good bit with non-periodic sampling, which I guess is more common in astronomical data (the experimenter may not take a photo every day, or at the same time every day (clouds!)).
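A quick numerical check of the two claims above - exponentially distributed periodogram powers, and the $1-(1-e^{-z})^N$ false-alarm probability for the largest of N of them. The trial count and threshold z are arbitrary.

import numpy as np

rng = np.random.default_rng(0)
n, trials, z = 1024, 2000, 6.0
maxima = np.empty(trials)
for i in range(trials):
    x = rng.standard_normal(n)
    spec = np.abs(np.fft.rfft(x))**2 / n     # periodogram, normalized so mean power ~= 1
    maxima[i] = spec[1:-1].max()             # drop the DC and Nyquist bins
N = n // 2 - 1
print((maxima > z).mean())                   # empirical P(Z > z)
print(1 - (1 - np.exp(-z))**N)               # analytic false-alarm probability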

{825}
ref: work-0 tags: no free lunch wolpert coevolution date: 07-19-2010 12:54 gmt

http://www.no-free-lunch.org/

  • Just discovered this. It makes perfect sense - bias free learning is 'futile'. Learning need be characterized by its biases, which enable faster or better results in particular problem domains.
  • Equivalently: any two algorithms are equivalent when their performance is averaged across all possible problems. (This is not as strong as it sounds, as most problems will never be encountered).
  • Wolpert 1996 provides an excellent geometric interpretation of this: the quality of the search/optimization algorithm within a particular domain is proportional to the inner product of its expected search stream with the actual (expected?) probability distribution of the data.
  • However! with coevolutionary algorithms, there can be a free lunch - "in coevolution some algorithms have better performance than other algorithms, averaged across all possible problems." Wolpert 2005
    • claims that this does not (??) hold in biological evolution, where there is no champion. Yet biology seems all about co-evolution.
      • coevolution of a backgammon player details how it may be coevolution plus the structure of the backgammon game, rather than reinforcement learning per se, that led Tesauro to his championship-level player. Specifically, coevolutionary algorithms tend to get stuck in local minima - where both contestants play mediocre games and draw - but this is not possible in backgammon; there is only one winner, and the games must terminate eventually.
      • These authors introduce a very interesting twist to improve coevolutionary bootstrapping: Firstly, the games are played in pairs, with the order of play reversed and the same random seed used to generate the dice rolls for both games. This washes out some of the unfairness due to the dice rolls when the two networks are very close - in particular, if they were identical, the result would always be one win each.

{824}
ref: work-0 tags: DB Lenat Eurisko date: 07-19-2010 04:37 gmt

images/824_1.pdf -- Eurisko by DB Lenat, the program that made the fleet which won the 1981 and 1982 Traveller's challenge, as I discovered in this New Yorker article by Malcolm Gladwell.

  • Notable observations of the author: EURISKO'S progress in this domain was entertaining, and a fundamental feature of this domain became clear: large programs are carefully engineered artifacts, complex constructs with thousands of pieces in a kind of unstable equilibrium. Any sort of random perturbation is likely to produce an error rather than a novel mutant. The analogy to biological evolution is strong.
    • EURISKO had successes in automatic programming only when it modified functions which had been coded as units. Why was this?
  • He also tried simulating biological evolution, and found that progress was slow when the mutations were random, but were quite rapid when they were organized by a set of heuristics. Heuristics, in this case, refer to rules like 'fewer defenses require faster legs and better ears and noses' - which can be generalized by simple observations of nature - or 'large cranium requires large female cervical opening' which is a heuristic that has only weakly been encoded in our DNA in the form of mate preference (maybe).
    • Quote: The net effect of having these heuristics for guiding plausible mutations was that, in a single generation, an offspring would emerge with a whole constellation of related mutations that worked together. For example, one had thicker fur, a thicker fat layer, whiter fur, smaller ears, etc. It is not known whether there is any biological validity to this radical hypothesis, but there is no doubt that the simulated evolution progressed almost not at all when mutation was random, and quite rapidly when mutation was under control of a body of heuristic rules. See [10].
    • This is consistent with homeobox genes, which were discovered in 1983. ref
  • The introductory and explicit description of the program was more difficult to parse than the later examples illustrating exactly what Eurisko did/does; the introduction of weakly-weighted contrapositive heuristics to a defeated heuristic on page 90 (page 30 in the pdf), for example, is revealing.
  • Eurisko has a sensible trigger for invoking heuristic / generalizing rules: when a particular node has too many entries (slots, in his terminology), a set of heuristics are called in to segregate the set of entries by common features, e.g. clustering.
  • The conclusion - a list of ideas (heuristics!) regarding the development of his program - is well articulated and useful, even 29 years later! In particular: "In other words, even though the discovery of new heuristics is important, the presence (and maintenance) of an appropriate representation for knowledge is even more necessary." (my emphasis)
  • Again: "Brevity is a key attribute in any kind of asemantic exploration. If useful concepts are short expressions in your language, then you have some chance of coming across them often, even if you don't know much about the terrain."

{821}
ref: work-0 tags: differential evolution function optimization date: 07-09-2010 14:46 gmt

Differential evolution (DE) is an optimization method, somewhat like Nelder-Mead or simulated annealing (SA). Much like genetic algorithms, it utilizes a population of solutions and selection to explore and optimize the objective function. However, instead of perturbing vectors randomly or greedily descending the objective function gradient, it uses the difference between individual population vectors to update hypothetical solutions. See below for an illustration.
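A minimal sketch of the classic DE/rand/1/bin update (scaled difference of two population members added to a third, binomial crossover, greedy selection). The constants F, CR and the population size are the usual textbook defaults, not values from Price and Storn's benchmarks.

import numpy as np

def differential_evolution(f, bounds, pop_size=40, F=0.8, CR=0.9, iters=500, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, dtype=float).T
    dim = len(lo)
    pop = lo + rng.random((pop_size, dim)) * (hi - lo)
    cost = np.array([f(p) for p in pop])
    for _ in range(iters):
        for i in range(pop_size):
            others = [j for j in range(pop_size) if j != i]
            a, b, c = pop[rng.choice(others, 3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)       # difference-vector mutation
            cross = rng.random(dim) < CR
            cross[rng.integers(dim)] = True                 # at least one mutant gene
            trial = np.where(cross, mutant, pop[i])
            tc = f(trial)
            if tc <= cost[i]:                               # greedy selection
                pop[i], cost[i] = trial, tc
    return pop[cost.argmin()], cost.min()

# usage: 10-dimensional sphere function
best, val = differential_evolution(lambda x: np.sum(x**2), [(-5, 5)] * 10)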

At my rather cursory reading, this serves to adapt the distribution of hypothetical solutions (or population of solutions, to use the evolutionary term) to the structure of the underlying function to be optimized. Judging from images/821_1.pdf Price and Storn (the inventors), DE works in situations where simulated annealing (which I am using presently, in the robot vision system) fails, and is applicable to higher-dimensional problems than simplex methods or SA. The paper tests DE on 100 dimensional problems, and it is able to solve these with on the order of 50k function evaluations. Furthermore, they show that it finds function extrema quicker than stochastic differential equations (SDE, alas from 85) which uses the gradient of the function to be optimized.

I'm surprised that this method slipped under my radar for so long - why hasn't anyone mentioned this? Is it because it has no proofs of convergence? Has it more recently been superseded? (the paper is from 1997). Yet, I'm pleased, because it means that there are also many other algorithms, equally clever and novel (and simple?), out there in the literature or waiting to be discovered.

{818}
ref: work-0 tags: perl fork read lines external program date: 06-15-2010 18:08 gmt

Say you have a program, called from a perl script, that may run for a long time. How do you get at the program's output as it appears?

Simple - open a pipe to the program's STDOUT. See http://docstore.mik.ua/orelly/perl/prog3/ch16_03.htm Below is an example - I wanted to see the output of programs run, for convenience, from a perl script (I didn't want to have to remember - or get wrong - all the command line arguments for each).

#!/usr/bin/perl

$numArgs = $#ARGV + 1;
if($numArgs == 1){
	if($ARGV[0] eq "table"){
		open STATUS, "sudo ./video 0xc1e9 15 4600 4601 0 |";
		while(<STATUS>){
			print ; 
		}
		close STATUS ; 
	}elsif($ARGV[0] eq "arm"){
		open STATUS, "sudo ./video 0x1ff6 60 4597 4594 4592 |";
		while(<STATUS>){
			print ; 
		}
		close STATUS ; 
	}else{ print "$ARGV[0] not understood - say arm or table!\n"; 
	}
}

{813}
ref: work-0 tags: kicadocaml zbuffer comparison picture screenshot date: 03-03-2010 16:38 gmt

Simple illustration of Kicadocaml with Z buffering enabled:

and disabled:

I normally use it with Z buffering enabled, but turn it off if, say, I want to clearly see all the track intersections, especially co-linear tracks or zero-length tracks. (Probably I should write something to merge and remove these automatically.) Note that in either case, tracks and modules are rendered back-to-front, which effects a Z-sorting of sorts; it is the GPU's Z buffer that is enabled/disabled here.

{809}
ref: work-0 tags: sine wave synthesis integrator date: 02-03-2010 05:52 gmt

I learned this in college, but have forgotten all the details - Microcontroller provides an alternative to DDS

$freq = \frac{F}{2\pi}\,\tau$ where τ is the sampling frequency. F ranges from -0.2 to 0.

{796}
ref: work-0 tags: machine learning manifold detection subspace segregation linearization spectral clustering date: 10-29-2009 05:16 gmt

An interesting field in ML is nonlinear dimensionality reduction - data may appear to be in a high-dimensional space, but mostly lies along a nonlinear lower-dimensional subspace or manifold. (Linear subspaces are easily discovered with PCA or SVD(*)). Dimensionality reduction projects high-dimensional data into a low-dimensional space with minimum information loss -> maximal reconstruction accuracy; nonlinear dim reduction does this (surprise!) using nonlinear mappings. These techniques set out to find the manifold(s):

  • Spectral Clustering
  • Locally Linear Embedding
    • related: The manifold ways of perception
      • Would be interesting to run nonlinear dimensionality reduction algorithms on our data! What sort of space does the motor system inhabit? Would it help with prediction? Am quite sure people have looked at Kohonen maps for this purpose.
    • Random irrelevant thought: I haven't been watching TV lately, but when I do, I find it difficult to recognize otherwise recognizable actors. In real life, I find no difficulty recognizing people, even some whom I don't know personally - is this a data thing (little training data), or mapping thing (not enough time training my TV-not-eyes facial recognition).
  • A Global Geometric Framework for Nonlinear Dimensionality Reduction (Isomap); the method (see the sketch after this list):
    • map the points into a graph by connecting each point with a certain number of its neighbors or all neighbors within a certain radius.
    • estimate geodesic distances between all points in the graph by finding the shortest graph connection distance
    • use MDS (multidimensional scaling) to embed the original data into a smaller-dimensional euclidean space while preserving as much of the original geometry.
      • Doesn't look like a terribly fast algorithm!
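A minimal sketch of the above recipe using scikit-learn's Isomap (assuming sklearn is acceptable here); the swiss-roll data and the neighbor count are arbitrary stand-ins for real neural data.

from sklearn.datasets import make_swiss_roll
from sklearn.manifold import Isomap

# 3-D points lying on a 2-D nonlinear manifold
X, color = make_swiss_roll(n_samples=2000, random_state=0)

# neighbor graph -> geodesic (shortest-path) distances -> MDS embedding,
# i.e. the three steps listed above, wrapped in one estimator
embedding = Isomap(n_neighbors=10, n_components=2).fit_transform(X)
print(embedding.shape)   # (2000, 2)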

(*) SVD maps into 'concept space', an interesting interpretation as per Leskovec's lecture presentation.

{795}
ref: work-0 tags: machine learning reinforcement genetic algorithms date: 10-26-2009 04:49 gmt

I just had dinner with Jesse, and we had a good/productive discussion/brainstorm about algorithms, learning, and neurobio. Two things worth repeating, one simpler than the other:

1. Gradient descent / Newton-Raphson like techniques should be tried with genetic algorithms. As of my current understanding, genetic algorithms perform a semi-directed search, randomly exploring the space of solutions with natural selection exerting a pressure to improve. What if you took the partial derivative of each of the organism's genes, and used that to direct mutation, rather than random selection of the mutated element? What if you looked before mating and crossover? Seems like this would speed up the algorithm greatly (though it might get it stuck in local minima, too). Not sure if this has been done before - if it has, edit this to indicate where!

2. Most supervised machine learning algorithms seem to rely on one single, externally applied objective function which they then attempt to optimize. (Rather this is what convex programming is. Unsupervised learning of course exists, like PCA, ICA, and other means of learning correlative structure) There are a great many ways to do optimization, but all are exactly that - optimization, search through a space for some set of weights / set of rules / decision tree that maximizes or minimizes an objective function. What Jesse and I have arrived at is that there is no real utility function in the world, (Corollary #1: life is not an optimization problem (**)) -- we generate these utility functions, just as we generate our own behavior. What would happen if an algorithm iteratively estimated, checked, cross-validated its utility function based on the small rewards actually found in the world / its synthetic environment? Would we get generative behavior greater than the complexity of the inputs? (Jesse and I also had an in-depth talk about information generation / destruction in non-linear systems.)

Put another way, perhaps part of learning is to structure internal valuation / utility functions to set up reinforcement learning problems where the reinforcement signal comes according to satisfaction of sub-goals (= local utility functions). Or, the gradient signal comes by evaluating partial derivatives of actions wrt Creating these goals is natural but not always easy, which is why one reason (of very many!) sports are so great - the utility function is clean, external, and immutable. The recursive, introspective creation of valuation / utility functions is what drives a lot of my internal monologues, mixed with a hefty dose of taking partial derivatives (see {780}) based on models of the world. (Stated this way, they seem so similar that perhaps they are the same thing?)

To my limited knowledge, there has been some work as of recent in the creation of sub-goals in reinforcement learning. One paper I read used a system to look for states that had a high ratio of ultimately rewarded paths to unrewarded paths, and selected these as subgoals (e.g. rewarded the agent when this state was reached.) I'm not talking about these sorts of sub-goals. In these systems, there is an ultimate goal that the researcher wants the agent to achieve, and it is the algorithm's (or s') task to make a policy for generating/selecting behavior. Rather, I'm interested in even more unstructured tasks - make a utility function, and a behavioral policy, based on small continuous (possibly irrelevant?) rewards in the environment.

Why would I want to do this? The pet project I have in mind is a 'cognitive' PCB part placement / layout / routing algorithm to add to kicadocaml, to finally get some people to use it (the attention economy :-) In the course of thinking about how to do this, I've realized that a substantial problem is simply determining what board layouts are good, and what are not. I have a rough aesthetic idea + some heuristics that I learned from my dad + some heuristics I've learned through practice of what is good layout and what is not - but how to code these up? And what if these aren't the best rules, anyway? If I just code up the rules I've internalized as utility functions, then the board layout will be pretty much as I do it - boring!

Well, I've stated my sub-goal in the form of a problem statement and some criteria to meet. Now, to go and search for a decent solution to it. (Have to keep this blog m8ta!) (Or, realistically, to go back and see if the problem statement is sensible).

(**) Corollary #2 - There is no god. nod, Dawkins.

{794}
ref: work-0 tags: software development theory date: 10-26-2009 04:29 gmt

http://weblog.raganwald.com/2007/06/which-theory-first-evidence.html

  • Very good article, clearly the author has hard-earned experience..
    • I appreciate his (journalistic, correctful, maybe overbearing) tone, but personally think it much better to be a bit playful with the silly arbitrariness, imperfect-but-honestly-attempted decisions, that humans are.
  • One thing that I particularly liked was the idea of 'learning area' - the more competent people that you have working on a project and learning along the way, the more area is exposed to learning, facilitating progress. Compare to the top-down approach, which allocates a few very good people at the beginning of a project to plan it out, but then does not allow the implementors to modify the plan, and furthermore suggests mediocre implementors will do - all which minimizes the 'learning area'.

also from that site - http://weblog.raganwald.com/2007/05/not-so-big-software-application.html

  • The market for lemons, or "the bad driving out the good" - linked in the blog - brilliant!
  • Quote: "Adding detail makes a design more specific, but it only makes it specific for a client if the choices expressed address the most important needs of the client."

{793}
ref: work-0 tags: Ng computational learning theory machine date: 10-25-2009 19:14 gmt

Andrew Ng's notes on learning theory

  • goes over the bias / variance tradeoff.
    • variance = when the model overfits: it has a large testing / generalization error despite fitting the training set well.
    • bias = the expected generalization error even if the model is fit to a very large training set.
  • proves that, with a sufficiently large training set, the generalization error will be close to the training (fitting) error.
    • also gives an upper bound on the generalization error in terms of the training error and the number of models available (a finite, discrete number); see the bound written out after this list.
    • this bound is only logarithmic in k, the number of hypotheses.
  • the training size m that a certain method or algorithm requires in order to achieve a certain level of performance is the algorithm's sample complexity.
  • shows that with infinite hypothesis space, the number of training examples needed is at most linear in the parameters of the model.
  • goes over the Vapnik-Chervonenkis dimension = the size of the largest set that is shattered by a hypothesis space. = VC(H)
    • A hypothesis space can shatter a set if it can realize any labeling (binary, I think) on the set of points in S. see his diagram.
    • In order to prove that VC(H) is at least d, one only needs to show that there's at least one set of size d that H can shatter.
  • There are more notes in the containing directory - http://www.stanford.edu/class/cs229/notes/
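For reference, the finite-hypothesis-class bound being summarized above, as I remember it from the CS229 notes (worth checking against the source): with probability at least $1-\delta$, for a hypothesis class H with $|H| = k$ and m training examples, the hypothesis $\hat{h}$ that minimizes training error satisfies

$$\varepsilon(\hat{h}) \;\le\; \left( \min_{h \in H} \varepsilon(h) \right) + 2\sqrt{\frac{1}{2m}\log\frac{2k}{\delta}}$$

where ε is the generalization error - so the m needed to reach a given gap grows only logarithmically in k.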

{792}
ref: work-0 tags: Cohen Singer SLIPPER machine learning hypothesis generation date: 10-25-2009 18:42 gmt

http://www.cs.cmu.edu/~wcohen/slipper/

  • "One disadvantage of boosting is that improvements in accuracy are often obtained at the expense of comprehensibility.
  • SLIPPER = simple learner with iterative pruning to produce error reduction.
  • Inner loop: the weak lerner splits the training data, grows a single rule using one subset of the data, and then prunes the rule using the other subset.
  • They use a confidence-rated prediction based boosting algorithm, which allows the algorithm to abstain from examples not covered by the rule.
    • the sign of h(x) - the weak learner's hyposthesis - is interpreted as the predited label and the magnitude |h(x)| is the confidence in the prediction.
  • SLIPPER only handles two-class problems now, but can be extended..
  • Is better than, though not dramatically so, than c5rules (a commercial version of Quinlan's decision tree algorithms).
  • see also the excellent overview at http://www.cs.princeton.edu/~schapire/uncompress-papers.cgi/msri.ps

{789}
ref: work-0 tags: emergent leabra QT neural networks GUI interface date: 10-21-2009 19:02 gmt

I've been reading Computational Explorations in Cognitive Neuroscience, and decided to try the code that comes with / is associated with the book. This used to be called "PDP+", but was re-written, and is now called Emergent. It's a rather large program - links to Qt, GSL, Coin3D, Quarter, Open Dynamics Library, and others. The GUI itself seems obtuse and too heavy; it's not clear why they need to make this so customized / panneled / tabbed. Also, it depends on relatively recent versions of each of these libraries - which made the install on my Debian Lenny system a bit of a chore (kinda like windows).

A really strange thing is that programs are stored in tree lists - woah - a natural folding editor built in! I've never seen a programming language that doesn't rely on simple text files. Not a bad idea, but still foreign to me. (But I guess programs are inherently hierarchical anyway.)

Below, a screenshot of the whole program - note they use a Coin3D window to graph things / interact with the model. The colored boxes in each network layer indicate local activations, and they update as the network is trained. I don't mind this interface, but again it seems a bit too 'heavy' for things that are inherently 2D (like 2D network activations and the output plot). It's good for seeing hierarchies, though, like the network model.

All in all it looks like something that could be more easily accomplished with some python (or ocaml), where the language itself is used for customization, and not a GUI. With this approach, you spend more time learning about how networks work, and less time programming GUIs. On the other hand, if you use this program for teaching, the gui is essential for debugging your neural networks; or if other people use it a lot, maybe then it is worth it ...

In any case, the book is very good. I've learned about GeneRec, which uses different activation phases to compute local errors for the purposes of error-minimization, as well as the virtues of using both Hebbian and error-based learning (like GeneRec). Specifically, the authors show that error-based learning can be rather 'lazy', purely moving down the error gradient, whereas Hebbian learning can internalize some of the correlational structure of the input space. You can look at this internalization as a 'weight constraint' which limits the space that error-based learning has to search. Cool idea! Inhibition is also a constraint - one which constrains the network to be sparse.

To use his/their own words:

... given the explanation above about the network's poor generalization, it should be clear why both Hebbian learning and kWTA (k winner take all) inhibitory competition can improve generalization performance. At the most general level, they constitute additional biases that place important constraints on the learning and the development of representations. More specifically, Hebbian learning constrains the weights to represent the correlational structure of the inputs to a given unit, producing systematic weight patterns (e.g. cleanly separated clusters of strong correlations).

Inhibitory competition helps in two ways. First, it encourages individual units to specialize in representing a subset of items, thus parcelling up the task in a much cleaner and more systematic way than would occur in an otherwise unconstrained network. Second, inhibition greatly restricts the settling dynamics of the network, greatly constraining the number of states the network can settle into, and thus eliminating a large proportion of the attractors that can hijack generalization."
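A crude sketch of the kWTA idea mentioned in the quote - keep the k most strongly driven units, silence the rest. (Leabra's actual kWTA computes a shared inhibition level placed between the k-th and (k+1)-th most excited units, if I remember the book right, but the net effect - enforced sparseness - is the same.)

import numpy as np

def kwta(net_input, k):
    act = np.maximum(net_input, 0.0)           # rectified activations
    if k < len(act):
        thresh = np.partition(act, -k)[-k]     # k-th largest activation
        act = np.where(act >= thresh, act, 0.0)
    return act

print(kwta(np.array([0.2, -0.1, 0.9, 0.4, 0.05]), k=2))   # only the two strongest stay on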

{776}
ref: work-0 tags: neural networks course date: 09-01-2009 04:24 gmt

http://www.willamette.edu/~gorr/classes/cs449/intro.html -- decent resource, good explanation of the equations associated with artificial neural networks.

{774}
ref: work-0 tags: functional programming compilation ocaml date: 08-24-2009 14:33 gmt

The implementation of functional programming languages - book!

{764}
ref: work-0 tags: ocaml mysql programming functional date: 07-03-2009 19:16 gmt

For my work I store a lot of analyzed data in SQL databases. In one of these, I have stored the anatomical target that the data was recorded from - namely, STN or VIM thalamus. After updating the analysis programs, I needed to copy the anatomical target data over to the new SQL tables. Where perl may have been my previous go-to language for this task, I've had enough of its strange quirks, hence decided to try it in Ruby (worked, but was not so elegant, as I don't actually know Ruby!) and then Ocaml.

ocaml
#use "topfind"
#require "mysql"

(* this function takes a query and a function that converts entries 
in a row to Ocaml tuples *)
let read_table db query rowfunc =
	let r = Mysql.exec db query in
	let col = Mysql.column r in
	let rec loop = function
		| None      -> []
		| Some x    -> rowfunc col x :: loop (Mysql.fetch r)
	in
	loop (Mysql.fetch r)
	;;
	

let _ = 
	let db = Mysql.quick_connect ~host:"crispy" ~database:"turner" ~password:"" ~user:"" () in
	let nn = Mysql.not_null in
	(* this function builds a table of files (recording sessions) from a given target, then 
	uses the mysql UPDATE command to propagate to the new SQL database. *)
	let propagate targ = 
		let t = read_table db 
			("SELECT file, COUNT(file) FROM `xcor2` WHERE target='"^targ^"' GROUP BY file")
			(fun col row -> (
				nn Mysql.str2ml (col ~key:"file" ~row), 
				nn Mysql.int2ml (col ~key:"COUNT(file)" ~row) )
			)
		in
		List.iter (fun (fname,_) -> 
			let query = "UPDATE `xcor3` SET `target`='"^targ^
				"' WHERE STRCMP(`file`,'"^fname^"')=0" in
			print_endline query ;
			ignore( Mysql.exec db query )
		) t ;
	in
	propagate "STN" ; 
	propagate "VIM" ; 
	propagate "CTX" ; 
	Mysql.disconnect db ;;

Interacting with MySQL is quite easy with Ocaml - though the type system adds a certain overhead, it's not too bad.

{762}
ref: work-0 tags: covariance matrix adaptation learning evolution continuous function normal gaussian statistics date: 06-30-2009 15:07 gmt

http://www.lri.fr/~hansen/cmatutorial.pdf

  • Details a method of sampling + covariance matrix approximation to find the extrema of a continuous (but intractable) fitness function
  • Has flavors of RLS / Kalman filtering. Indeed, I think that Kalman filtering may be a more principled method for optimization?
  • Can be used in high-dimensional optimization problems like finding optimal weights for a neural network.
  • Optimum-seeking is provided by weighting the stochastic samples (generated a la a particle filter or unscented Kalman filter) by their fitness (see the sketch after this list).
  • Introductory material is quite good, actually...
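A bare-bones sketch of the sample-and-reweight idea described above: draw a population from a Gaussian around the current mean, rank by fitness, and pull the mean (and here, crudely, a diagonal covariance) toward the best samples. The real CMA-ES update (rank-mu / rank-one covariance updates, step-size control) is considerably more involved - see the tutorial linked above.

import numpy as np

def weighted_sampling_search(f, x0, sigma=0.5, pop=32, elite=8, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    mean = np.asarray(x0, dtype=float)
    var = np.full(mean.shape, sigma ** 2)            # diagonal covariance only
    for _ in range(iters):
        samples = mean + rng.standard_normal((pop, len(mean))) * np.sqrt(var)
        order = np.argsort([f(s) for s in samples])  # minimize f
        best = samples[order[:elite]]
        mean = best.mean(axis=0)                     # truncation-style recombination
        var = best.var(axis=0) + 1e-12               # shrink toward the good region
    return mean

# usage: 20-dimensional quadratic bowl
xmin = weighted_sampling_search(lambda x: np.sum(x ** 2), np.ones(20) * 3.0)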

{759}
ref: work-0 tags: yushin robot data date: 06-25-2009 18:35 gmt

U141 LMV1032 microSMD-4 -2.23315 -0.03575 180. 9394. 27366. 1675. 
L7 INDUCTOR 0603 -1.7784 -0.7561 0. 13171. 34955. 1727. 
C86 0.1uf 0402 1.0946 -0.0347 360. 37107. 27524. 1710. 
TP8 TP TP 0.222 -1.0285 0. 29815. 37809. 1767. 
TP9 TP TP 0.7021 -1.2484 0. 33805. 40090. 1787. 
C67 1uf 0603 0.8146 -0.7047 270. 34758. 34540. 1752. 
C68 1uf 0603 1.1946 -0.7247 270. 37920. 34730. 1758. 
C69 1uf 0603 1.2747 -0.7247 90. 38576. 34742. 1759. 
R4 33 0402 1.6937 -0.1982 180. 42071. 29215. 1728. 
R17 10k 0402 -1.685 -0.6615 270. 13941. 33981. 1723. 
U92 LMV1032 microSMD-4 -2.53285 -0.03585 180. 6912. 27381. 1671. 
U96 LMV1032 microSMD-4 -2.23315 -0.89075 180. 9364. 36340. 1732. 
TP10 TP TP 0.222 -1.1685 0. 29811. 39233. 1776. 
TP11 TP TP 0.222 -1.3084 0. 29807. 40698. 1786. 
R23 33 0402 0.2834 0.6142 180. 30371. 20682. 1659. 
U105 LMV1032 microSMD-4 -2.23315 -0.71965 180. 9368. 34557. 1720. 
U117 LMV1032 microSMD-4 -2.23315 -0.49165 180. 9366. 32055. 1705. 
U124 LMV1032 microSMD-4 -2.18025 -0.37765 180. 9820. 30853. 1698. 
U127 LMV1032 microSMD-4 -2.18025 -0.32065 180. 9826. 30273. 1695. 
U128 LMV1032 microSMD-4 -2.28685 -0.26365 180. 8940. 29697. 1690. 
R10 50k 0402 -0.9607 -0.3308 180. 19983. 30430. 1709. 

more data!

U136 LMV1032 microSMD-4 -2.18025 -0.14965 180. 9860. 28534. 1682. 
R47 20k 0402 1.1822 -1.3883 90. 37828. 41612. 1797. 
R48 20k 0402 0.942 -1.0284 270. 35838. 37757. 1771. 
U139 LMV1032 microSMD-4 -2.18025 -0.09265 180. 9863. 27964. 1678. 
C72 10nf 0603 1.3546 -0.6248 270. 39284. 33694. 1750. 
R45 12.5k 0402 1.1021 -1.3883 90. 37161. 41608. 1796. 
C37 33nF 0402 -1.0956 -0.7067 360. 18894. 34462. 1730. 
R46 12.5k 0402 1.0221 -1.0284 270. 36505. 37759. 1772. 
L7 INDUCTOR 0603 -1.7784 -0.7561 0. 13210. 34933. 1725. 
U142 LMV1032 microSMD-4 -2.18025 -0.03575 180. 9865. 27310. 1674. 
L8 INDUCTOR 0603 0.1745 -0.6447 270. 29446. 33849. 1738. 
C87 0.047uf 0402 -2.3611 -0.8811 360. 8363. 36186. 1729. 
R53 9.2k 0402 1.062 -1.3883 90. 36817. 41587. 1796. 
R36 3.3k 0402 1.9546 -0.8747 270. 44273. 36230. 1772. 
C88 0.047uf 0402 -2.361 -0.8241 360. 8356. 35593. 1725. 
R54 9.2k 0402 1.062 -1.0284 270. 36838. 37762. 1772. 
R38 3.3k 0603 0.8646 -0.8147 360. 35200. 35636. 1757. 
R37 3.3k 0402 1.9546 -1.1347 270. 44266. 38878. 1788. 
TP1 TP TP 1.302 -1.3882 0. 38828. 41596. 1797. 
C89 0.047uf 0402 -2.361 -0.7671 360. 8358. 35023. 1721. 
C83 0.1uf 0402 1.2246 -0.5147 0. 38206. 32492. 1741. 
C12 1uf 0402 0.8182 0.1876 270. 34842. 25228. 1692. 
R39 3.3k 0402 1.5146 -0.8747 90. 40609. 36213. 1767. 
TP3 TP TP 1.302 -1.2484 0. 38835. 40039. 1788. 
C85 0.1uf 0402 0.2946 -0.0348 180. 30497. 27541. 1701. 
C29 0.01uf 0402 -1.5749 -0.1575 270. 14907. 28634. 1690. 
TP4 TP TP 0.8219 -1.1684 0. 34852. 39172. 1778. 
C15 1uf 0402 1.6037 0.0518 270. 41377. 26681. 1709. 
TP5 TP TP 0.8219 -1.3084 0. 34835. 40731. 1787. 
C86 0.1uf 0402 1.0946 -0.0347 360. 37136. 27478. 1709. 
TP6 TP TP 1.3021 -1.1085 0. 38832. 38563. 1779. 
TP7 TP TP 0.7021 -1.3883 0. 33824. 41561. 1791. 
TP8 TP TP 0.222 -1.0285 0. 29855. 37751. 1763. 
C19 1uf 0402 -0.6901 -0.0599 90. 22286. 27662. 1693. 
TP9 TP TP 0.7021 -1.2484 0. 33830. 40042. 1782. 
C90 0.047uf 0402 -2.361 -0.7101 360. 8360. 34449. 1718. 
R40 3.3k 0402 1.5146 -1.1347 90. 40602. 38842. 1784. 
C28 7pf 0402 -1.0306 -0.562 270. 19447. 32944. 1722. 
C36 0.01uf 0402 -1.1968 0.0315 0. 18064. 26795. 1682. 
C67 1uf 0603 0.8146 -0.7047 270. 34787. 34503. 1750. 
R13 25 0402 -1.57 -0.34 0. 14940. 30478. 1701. 
C68 1uf 0603 1.1946 -0.7247 270. 37950. 34725. 1755. 
C38 0.01uf 0402 -0.9763 -0.1733 270. 19894. 28829. 1697. 
R14 25 0402 -1.5749 -0.4094 270. 14897. 31177. 1705. 
C69 1uf 0603 1.2747 -0.7247 90. 38616. 34707. 1755. 
R16 25 0402 -1.1956 -0.8867 180. 18053. 36282. 1739. 
R1 33 0402 1.4961 0.0314 90. 40482. 26822. 1709. 
R5 220k 0402 -0.5628 -0.1852 90. 23338. 28986. 1701. 
R3 33 0402 1.6937 -0.1282 180. 42120. 28451. 1721. 
R4 33 0402 1.6937 -0.1982 180. 42116. 29193. 1725. 
R28 2.2k 0402 1.9346 -1.4048 90. 44069. 41754. 1804. 
R29 2.2k 0402 1.8346 -1.4047 90. 43249. 41818. 1804. 
C70 1uf 0603 1.2747 -0.6246 270. 38619. 33709. 1749. 
R2 100k 0402 1.4173 0.0315 90. 39826. 26815. 1708. 
C42 0.01uf 0402 -1.1955 -0.7166 180. 18052. 34552. 1730. 
R43 3k 0402 1.242 -1.3085 270. 38319. 40701. 1792. 
C73 1uf 0603 1.8646 -0.7147 0. 43527. 34646. 1761. 
R44 3k 0402 0.882 -1.1085 90. 35337. 38556. 1776. 
R49 33k 0402 1.202 -1.2285 270. 37988. 39816. 1787. 
C77 1uf 0603 0.7446 -0.9347 0. 34197. 36870. 1764. 
C32 1uf 0402 -0.8976 -0.6615 180. 20551. 34005. 1729. 
C79 1uf 0603 0.8646 -0.8747 180. 35198. 36251. 1761. 
R30 2.2k 0402 1.7347 -1.4047 90. 42427. 41804. 1803. 
C35 1uf 0402 -1.2913 0.0315 180. 17298. 26781. 1681. 
R31 2.2k 0402 1.6346 -1.4047 90. 41584. 41800. 1802. 
R50 33k 0402 0.9345 -1.1548 90. 35772. 39028. 1779. 
R11 10k 0402 -0.0001 0.126 90. 28025. 25843. 1690. 
C46 1uf 0402 -1.1955 -0.6766 180. 18053. 34138. 1727. 
R12 10k 0402 0.0001 0.5196 90. 28038. 21612. 1662. 
R9 10k 0402 0.0001 0.2835 270. 28031. 24093. 1677. 
R17 10k 0402 -1.685 -0.6615 270. 13974. 33945. 1741. 
R18 10k 0402 -1.5998 -0.4875 90. 14688. 32018. 1710. 
C14 0.001uf 0402 -0.96 -0.26 0. 20044. 29712. 1703. 
U92 LMV1032 microSMD-4 -2.53285 -0.03585 180. 6926. 27289. 1670. 
R55 6.5k 0402 0.9821 -1.3883 90. 36150. 41583. 1795. 
R56 6.5k 0402 1.142 -1.0284 270. 37502. 37773. 1774. 
R19 22K 0402 -0.9958 -0.6867 90. 19712. 34257. 1729. 
C2 0.1uf 0402 1.6237 -0.2581 270. 41530. 29787. 1728. 
C30 5pf 0402 -1.1907 -0.562 90. 18114. 32929. 1720. 
C25 0.001uf 0402 0.2835 0.0787 180. 30398. 26352. 1694. 
C20 33pf 0402 -0.5628 -0.3352 90. 23328. 30458. 1712. 
C13 8pf 0402 1.6877 -0.4299 270. 42062. 31517. 1741. 
C27 0.001uf 0402 -0.9763 -0.5039 90. 19900. 32258. 1718. 
C17 8pf 0402 1.4476 -0.4299 90. 40063. 31519. 1738. 
C71 0.1uf 0603 1.3545 -0.7247 90. 39280. 34701. 1756. 
C49 2.2uf 0402 -2.2324 -0.9436 0. 9413. 36840. 1734. 
C50 2.2uf 0402 -2.4802 -0.9455 0. 7350. 36852. 1732. 
C51 2.2uf 0402 -2.4779 0.0152 0. 7399. 26905. 1670. 
C52 2.2uf 0402 -2.2347 0.0184 0. 9423. 26881. 1672. 
C40 0.001uf 0402 -1.1956 -0.7568 180. 18050. 34938. 1732. 
C53 2.2uf 0402 -1.9398 -0.7554 0. 11855. 34916. 1725. 
C54 2.2uf 0402 -1.6317 -0.315 270. 14433. 30225. 1700. 
C48 2.2nF 0402 -0.9154 -0.9464 90. 20377. 36919. 1747. 
C55 2.2uf 0402 -1.8616 -0.7549 180. 12506. 34903. 1726. 
C56 0.012uf 0402 -1.7107 -0.7353 270. 13762. 34716. 1726. 
C57 0.012uf 0402 -1.6956 -0.8478 90. 13875. 35886. 1733. 
R7 90k 0402 -0.8225 -0.266 90. 21176. 29782. 1704. 
C58 0.012uf 0402 -1.8891 -0.8466 90. 12274. 35834. 1731. 
R57 22k 0402 0.942 -1.3883 90. 35826. 41602. 1795. 
TP10 TP TP 0.222 -1.1685 0. 29847. 39154. 1772. 
C22 10uf 0603 -0.6428 -0.1653 360. 22687. 28750. 1700. 
TP11 TP TP 0.222 -1.3084 0. 29820. 40682. 1781. 
C23 10uf 0603 -0.7429 -0.1652 180. 21854. 28745. 1699. 
TP12 TP TP 0.7022 -1.1085 0. 33848. 38556. 1773. 
C61 2.2uf 0402 -1.8422 -0.8468 90. 12664. 35859. 1732. 
C62 2.2uf 0402 -2.0357 -0.8464 90. 11053. 35837. 1730. 
C63 2.2uf 0402 -2.0001 -0.0836 270. 11363. 27899. 1681. 
C64 2.2uf 0402 -2.0025 -0.1862 90. 11350. 28924. 1688. 
C44 1.5pF 0402 -0.8357 -0.8065 180. 21045. 35478. 1739. 
C65 0.012uf 0402 -1.8247 -0.9119 0. 12808. 36505. 1736. 
C66 0.012uf 0402 -2.0181 -0.913 0. 11198. 36540. 1734. 
C39 0.1uf 0402 -1.3229 -0.6772 180. 16993. 34136. 1726. 
R6 825k 0402 -0.5628 -0.2651 90. 23329. 29784. 1706. 
C41 0.1uf 0402 -1.1023 0.0314 180. 18851. 26789. 1683. 
C45 0.1uf 0402 -0.9763 -0.0787 90. 19897. 27845. 1691. 
R34 327k 0402 1.9046 -0.8747 270. 43856. 36228. 1771. 
R35 327k 0402 1.9046 -1.1347 90. 43849. 38858. 1788. 
R51 47k 0402 1.202 -1.3085 270. 37985. 40700. 1792. 
R52 47k 0402 0.9221 -1.1083 90. 35661. 38566. 1776. 
C74 4.7uf 0603 1.9346 -1.0047 360. 44101. 37581. 1780. 
C75 4.7uf 0603 1.9346 -0.9447 360. 44103. 36957. 1776. 
C76 4.7uf 0603 1.9346 -1.0648 180. 44099. 38174. 1784. 
R41 327k 0402 1.5646 -0.8747 90. 41026. 36215. 1768. 
C78 4.7uf 0603 1.7346 -0.7947 0. 42442. 35463. 1765. 
R42 327k 0402 1.5645 -1.1347 270. 41018. 38856. 1784. 
C59 0.1uf 0402 -1.8046 -0.2246 270. 12986. 29320. 1692. 
U124 LMV1032 microSMD-4 -2.18025 -0.37765 180. 9843. 30773. 1696. 
U127 LMV1032 microSMD-4 -2.18025 -0.32065 180. 9845. 30296. 1692. 
C80 4.7uf 0603 1.5346 -0.9447 360. 40773. 36984. 1772. 
C81 4.7uf 0603 1.5346 -1.0648 180. 40769. 38149. 1780. 
R10 50k 0402 -0.9607 -0.3308 180. 20034. 30408. 1706. 
C82 4.7uf 0603 1.5346 -1.0047 360. 40771. 37546. 1776. 
C84 4.7uf 0603 0.1746 -0.5347 270. 29464. 32629. 1732. 
C60 0.1uf 0402 -1.8032 -0.0862 270. 13012. 27892. 1683. 
R15 50k 0402 -1.1956 -0.7967 180. 18055. 35380. 1734. 
U130 LMV1032 microSMD-4 -2.18025 -0.26365 180. 9857. 29685. 1689. 
2.9mm_hole VAL** 2.9mm_hole -2.325 0.2 0. 8698. 24995. 1658. 
U133 LMV1032 microSMD-4 -2.18025 -0.20665 180. 9849. 29114. 1685. 
C47 1pF 0402 -0.8158 -0.7565 90. 21212. 34950. 1736. 

counts spaced at exactly 1mm (a quick least-squares fit is sketched after the table):
0 -13206.000000
1 -12795.000000
2 -12349.000000
3 -11983.000000
4 -11545.000000
5 -11117.000000
6 -10710.000000
7 -10262.000000
8 -9813.000000
9 -9395.000000
10 -8957.000000
11 -8561.000000
12 -8154.000000
13 -7726.000000
14 -7298.000000
15 -6897.000000
16 -6477.000000
17 -6093.000000
18 -5700.000000
19 -5309.000000
20 -4871.000000
21 -4453.000000
22 -4046.000000
23 -3639.000000
24 -3232.000000
25 -2836.000000
26 -2429.000000
27 -2011.000000
28 -1594.000000
29 -1187.000000
30 -780.000000
31 -352.000000
32 65.000000
33 472.000000
34 900.000000
35 1318.000000
36 1708.000000
37 2104.000000
38 2490.000000
39 2908.000000
40 3325.000000
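
From the first and last rows the scale comes out to roughly (3325 - (-13206)) / 40 ~ 413 counts per mm. Below is a minimal least-squares sketch to check that over the whole run; it assumes the two columns above are saved as a whitespace-separated file. The file name counts.txt (and the mm-index-then-counts column order) is my assumption, not part of the log above.

(* fit_counts.ml -- least-squares slope of counts vs. position.
   Assumes counts.txt holds two columns: position in mm, counts. *)
let () =
  let xs = ref [] and ys = ref [] in
  let ic = open_in "counts.txt" in
  (try
     while true do
       Scanf.sscanf (input_line ic) " %f %f"
         (fun x y -> xs := x :: !xs; ys := y :: !ys)
     done
   with End_of_file -> close_in ic);
  let x = Array.of_list !xs and y = Array.of_list !ys in
  let n = float_of_int (Array.length x) in
  let sum a = Array.fold_left (+.) 0.0 a in
  let sx = sum x and sy = sum y in
  let sxx = sum (Array.map (fun v -> v *. v) x) in
  let sxy = sum (Array.mapi (fun i v -> v *. y.(i)) x) in
  (* standard closed-form simple linear regression *)
  let slope = (n *. sxy -. sx *. sy) /. (n *. sxx -. sx *. sx) in
  let intercept = (sy -. slope *. sx) /. n in
  Printf.printf "slope = %f counts/mm, intercept = %f counts\n" slope intercept

Run it with ocaml fit_counts.ml; it only needs the standard library.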

{758}
ref: work-0 tags: ocaml toplevel ocamlfind date: 06-24-2009 14:52 gmt revision:1 [0] [head]

OCaml has an interactive toplevel, but to make it useful (e.g. for inspecting the types of variables, or trying out code before compiling it) you need to load libraries and modules. If you have ocamlfind on your system (I think this is the only requirement..), do this with #use "topfind";; at the OCaml prompt, then #require "package names";;. For example:

tlh24@chimera:~/svn/m8ta/yushin$ ledit | ocaml
        Objective Caml version 3.10.2

# #use "topfind";;
- : unit = ()
Findlib has been successfully loaded. Additional directives:
  #require "package";;      to load a package
  #list;;                   to list the available packages
  #camlp4o;;                to load camlp4 (standard syntax)
  #camlp4r;;                to load camlp4 (revised syntax)
  #predicates "p,q,...";;   to set these predicates
  Topfind.reset();;         to force that packages will be reloaded
  #thread;;                 to enable threads

- : unit = ()
# #require "bigarray,gsl";;
/usr/lib/ocaml/3.10.2/bigarray.cma: loaded
/usr/lib/ocaml/3.10.2/gsl: added to search path
/usr/lib/ocaml/3.10.2/gsl/gsl.cma: loaded
# #require "pcre,unix,str";;
/usr/lib/ocaml/3.10.2/pcre: added to search path
/usr/lib/ocaml/3.10.2/pcre/pcre.cma: loaded
/usr/lib/ocaml/3.10.2/unix.cma: loaded
/usr/lib/ocaml/3.10.2/str.cma: loaded
# Pcre.pmatch
  ;;
- : ?iflags:Pcre.irflag ->
    ?flags:Pcre.rflag list ->
    ?rex:Pcre.regexp ->
    ?pat:string -> ?pos:int -> ?callout:Pcre.callout -> string -> bool
= <fun>
# let m = Gsl_matrix.create 3 3;;
val m : Gsl_matrix.matrix = <abstr>
# m;;
- : Gsl_matrix.matrix = <abstr>
# m.{1,1};;
- : float = 6.94305623882282e-310
# m.{0,0};;
- : float = 6.94305568087725e-310
# m.{1,1} <- 1.0 ;;
- : unit = ()
# m.{2,2} <- 2.0 ;;
- : unit = ()
# let mstr = Marshal.to_string m [] ;;

Nice!
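
A possible follow-up, worth double-checking against your OCaml version: the toplevel should read an .ocamlinit file (from the current directory, or failing that from your home directory) at startup, so the findlib directives can live there and the packages load automatically in every session. A minimal ~/.ocamlinit along those lines, assuming the same packages as above:

(* ~/.ocamlinit -- read by the ocaml toplevel at startup.
   Load findlib, then the packages used in the session above. *)
#use "topfind";;
#require "bigarray,gsl";;
#require "pcre,unix,str";;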

{751}
ref: work-0 tags: flexible kapton funny date: 05-12-2009 21:57 gmt revision:3 [2] [1] [0] [head]

-- from the Lenthor Engineering Design guide. Wow they are indeed everywhere!

{226}
ref: work notes-0 tags: web stimulator SUNY ICMS python webinterface project date: 03-26-2007 04:26 gmt revision:1 [0] [head]

we are proud of this :)