m8ta
{1410}
ref: -0 tags: kernel regression structure discovery fitting gaussian process date: 09-24-2018 22:09 gmt

Structure discovery in Nonparametric Regression through Compositional Kernel Search

  • Use Gaussian process kernels (squared exponential, periodic, linear, and rational quadratic) to model a kernel function k(x,x′), which specifies how similar or correlated the outputs y and y′ are expected to be at two input points x and x′.
    • By defining the measure of similarity between inputs, the kernel determines the pattern of inductive generalization.
    • This is different from modeling the mapping y=f(x) directly.
    • It's something more like y ∼ N(m(x), k(x,x′)) -- check the appendix.
    • See also: http://rsta.royalsocietypublishing.org/content/371/1984/20110550
  • Gaussian process models use a kernel to define the covariance between any two function values: Cov(y,y′) = k(x,x′).
  • This kernel family is closed under addition and multiplication, and provides an interpretable structure.
  • Search for kernel structure greedily & compositionally,
    • then optimize parameters with conjugate gradients with restarts.
    • This seems straightforwardly intuitive...
  • Kernels are scored with the BIC (a toy scoring sketch follows this list).
  • C.f. {842} -- "Because we learn expressions describing the covariance structure rather than the functions themselves, we are able to capture structure which does not have a simple parametric form."
  • All their figure examples are 1-D time-series, which is kinda boring, but makes sense for creating figures.
    • Tested on multidimensional (d=4) synthetic data too.
    • Not sure how they turn the modeled covariance back into actual predictions -- just draw from (integrate over) the distribution? See the sketch below.
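
A toy sketch of the idea (my own illustration, not the authors' code): compose base kernels, score the composite with the BIC, and get predictions from the standard GP posterior -- which is presumably how the covariance model is turned back into predictions. Hyperparameters are fixed here rather than optimized with conjugate gradients as in the paper; requires MATLAB R2016b+ or Octave for implicit expansion.

se  = @(a, b, ell) exp(-(a - b').^2 ./ (2*ell^2));   % squared exponential
per = @(a, b, p)   exp(-2*sin(pi*abs(a - b')/p).^2); % periodic
lin = @(a, b, c)   (a - c) * (b - c)';               % linear

% the family is closed under + and *, e.g. (SE x Per) + Lin:
k = @(a, b) se(a, b, 1.0) .* per(a, b, 4.0) + 0.1*lin(a, b, 0);

x = linspace(0, 10, 60)';                       % training inputs
y = sin(2*pi*x/4) + 0.05*x + 0.1*randn(60, 1);  % toy observations
sn2 = 0.01;                                     % observation noise variance

K = k(x, x) + sn2*eye(numel(x));
L = chol(K, 'lower');
alpha = L' \ (L \ y);
% GP log marginal likelihood (Rasmussen & Williams, eq. 2.30)
loglik = -0.5*(y'*alpha) - sum(log(diag(L))) - numel(x)/2*log(2*pi);
nparam = 5;                                 % rough count of kernel + noise hyperparameters
bic = -2*loglik + nparam*log(numel(x));     % lower is better under this sign convention

% predictions from the GP posterior, given the (here fixed) kernel:
xs = linspace(0, 12, 200)';                 % test inputs
Ks = k(x, xs);                              % train-by-test cross-covariance
mu = Ks' * alpha;                           % posterior predictive mean
v  = L \ Ks;
vs = diag(k(xs, xs)) + sn2 - sum(v.^2, 1)'; % posterior predictive variance
plot(x, y, '.', xs, mu, '-', xs, mu + 2*sqrt(vs), 'g--', xs, mu - 2*sqrt(vs), 'g--');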

{763}
ref: work-2999 tags: autocorrelation poisson process test neural data ISI synchrony DBS date: 02-16-2012 17:53 gmt

I recently wrote a matlab script to measure & plot the autocorrelation of a spike train; to test it, I generated a series of timestamps from a homogeneous Poisson process:

function [x, isi] = homopoisson(dur, rate)
% function [x, isi] = homopoisson(dur, rate)
% generate one instance of a homogeneous Poisson point process, unbinned.
% dur is the duration in seconds, rate in spikes/sec.
% x is the vector of timestamps, isi the intervals between them.

num = ceil(dur * rate * 3);               % oversample, then trim to the requested duration below.
isi = -(1/rate) .* log(1 - rand(num, 1)); % exponential ISIs via inverse-CDF sampling.
x = cumsum(isi);
%% keep only the timestamps that fall within the requested duration.
index = find(x > dur);
x = x(1:index(1,1)-1, 1);
isi = isi(1:index(1,1)-1, 1);
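
For reference, the core autocorrelation estimate can be done roughly like this (a sketch, not the exact script; bin the spike train at 100 Hz and use xcorr from the signal-processing toolbox; the high-pass filtering mentioned below is omitted):

fs = 100;                            % bin rate, Hz (as used for the figures below)
T  = 1000;                           % duration, seconds
[x, isi] = homopoisson(T, 20);       % toy 20 spikes/sec train
counts = histc(x, 0:(1/fs):T);       % binned spike counts
counts = counts(:) - mean(counts);   % remove the mean so xcorr reflects fluctuations
[ac, lags] = xcorr(counts, 2*fs, 'coeff');  % +/- 2 s of lags, normalized
plot(lags/fs, ac); xlabel('lag (s)'); ylabel('autocorrelation');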

The autocorrelation of a Poisson process is, as it should be, flat:

Above:

  • Red lines are the autocorrelations estimated from shuffled timestamps (measure the ISIs - interspike intervals - shuffle them, and take the cumsum to generate a new series of timestamps; sketched after this list). Hence, the red lines are a type of control.
  • Blue lines are the autocorrelations estimated from segments of the full timestamp series. They are used to check how stable the autocorrelation is over the recording.
  • Black line is the actual autocorrelation estimated from the full timestamp series.
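
The shuffle control (red lines) can be generated along these lines (a sketch, reusing x and isi from homopoisson above):

% permute the ISIs and rebuild timestamps: the ISI distribution is preserved,
% but any ordering / pairwise correlation between successive ISIs is destroyed.
isi_shuf = isi(randperm(numel(isi)));
x_shuf = cumsum(isi_shuf);
% x_shuf is then binned and autocorrelated exactly like the original timestamps.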

The problem with my recordings is that there is generally high long-range correlation, correlation which is destroyed by shuffling.

Above is a plot of 1/isi for a noise channel with very high mean 'firing rate' (> 100 Hz) in blue. Behind it, in red, is 1/(shuffled isi). Noise and changes in the experimental setup (bad!) make the channel very non-stationary.

Above is the autocorrelation plotted in the same way as figure 1. Normally, the firing rate is binned at 100 Hz and high-pass filtered at 0.005 Hz so that long-range correlation is removed, but I turned this off for the plot. Note that the shuffled data has a number of different offsets, primarily due to differing long-range correlations / nonstationarities.

Same plot as figure 3, with high-pass filtering turned on. Shuffled data still has far more local correlation - why?

The answer seems to be in the relation between individual ISIs. Shuffling the ISI order obviously does not destroy the distribution of ISIs, but it does destroy the ordering or pair-wise correlation between isi(n) and isi(n+1). To check this, I plotted these two distributions:

-- Original log(isi(n)) vs. log(isi(n+1))

-- Shuffled log(isi_shuf(n)) vs. log(isi_shuf(n+1))

-- Close-up of log(isi(n)) vs. log(isi(n+1)) using alpha-blending for a channel that seems heavily corrupted with electro-cauterizer noise.
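
The scatter plots above can be produced roughly like this (a sketch, assuming the isi vector from above; the alpha-blending in the close-up needs extra plot options):

% return map of successive ISIs; ordering structure shows up here and is destroyed by shuffling.
loglog(isi(1:end-1), isi(2:end), 'b.');
hold on;
isi_shuf = isi(randperm(numel(isi)));
loglog(isi_shuf(1:end-1), isi_shuf(2:end), 'r.');
xlabel('isi(n)'); ylabel('isi(n+1)');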

{735}
ref: -0 tags: processing javascript vector graphics web date: 05-03-2009 18:20 gmt

http://www.mattryall.net/blog/2008/11/wiki-visualisations-with-javascript -- way cool!!

{381}
ref: notes-0 tags: low-power microprocessor design techniques ieee DSP date: 05-29-2007 03:30 gmt

http://hardm.ath.cx:88/pdf/lowpowermicrocontrollers.pdf

also see IBM's eLite DSP project.

{37}
ref: bookmark-0 tags: Unscented sigma_point kalman filter speech processing machine_learning SDRE control UKF date: 0-0-2007 0:0