m8ta
{763} |
I recently wrote a Matlab script to measure & plot the autocorrelation of a spike train; to test it, I generated a series of timestamps from a homogeneous Poisson process:

function [x, isi] = homopoisson(len, rate)
% function [x, isi] = homopoisson(len, rate)
% generate an instance of a Poisson point process, unbinned.
% len in seconds, rate in spikes/sec.
% x is the timestamps, isi is the intervals between them.
% ('len' rather than 'length', to avoid shadowing the builtin.)
num = len * rate * 3; % overgenerate, then truncate below.
isi = -(1/rate) .* log(1 - rand(num, 1)); % inverse-CDF sampling of exponential intervals.
x = cumsum(isi);
% find the first timestamp greater than len & truncate there.
index = find(x > len);
x = x(1:index(1,1)-1, 1);
isi = isi(1:index(1,1)-1, 1);

The autocorrelation of a Poisson process is, as it should be, flat.
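For readers without Matlab, the same inverse-CDF trick translates to a short Python sketch. The function name and arguments mirror the script above; the `seed` argument and the generate-one-at-a-time loop are my own choices, not from the original:

```python
import math
import random

def homopoisson(duration, rate, seed=0):
    """Generate one realization of a homogeneous Poisson point process.

    duration in seconds, rate in events/sec.  Returns (timestamps, isis).
    Inter-event intervals are drawn by inverse-CDF sampling of the
    exponential distribution: isi = -(1/rate) * ln(1 - u), u ~ U(0,1).
    """
    rng = random.Random(seed)
    timestamps, isis = [], []
    t = 0.0
    while True:
        isi = -(1.0 / rate) * math.log(1.0 - rng.random())
        t += isi
        if t > duration:
            break  # stop at the first event past the requested duration
        timestamps.append(t)
        isis.append(isi)
    return timestamps, isis
```

Generating intervals one at a time avoids the Matlab script's heuristic of overgenerating 3x and truncating; both give the same process.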
The problem with my recordings is that there is generally high long-range correlation, correlation which is destroyed by shuffling. Above is a plot of 1/isi for a noise channel with a very high mean 'firing rate' (> 100 Hz) in blue; behind it, in red, is 1/shuffled isi. Noise and changes in the experimental setup (bad!) make the channel very non-stationary.

Above is the autocorrelation plotted in the same way as figure 1. Normally, the firing rate is binned at 100 Hz and high-pass filtered at 0.005 Hz so that long-range correlation is removed, but I turned this off for the plot. Note that the shuffled data has a number of different offsets, primarily due to differing long-range correlations / nonstationarities.

Same plot as figure 3, with highpass filtering turned on. Shuffled data still has far more local correlation - why? The answer seems to be in the relation between individual ISIs. Shuffling ISI order obviously does not destroy the distribution of ISIs, but it does destroy the ordering, i.e. the pair-wise correlation between isi(n) and isi(n+1). To check this, I plotted these distributions:
-- Original log(isi(n)) vs. log(isi(n+1))
-- Shuffled log(isi_shuf(n)) vs. log(isi_shuf(n+1))
-- Close-up of log(isi(n)) vs. log(isi(n+1)) using alpha-blending, for a channel that seems heavily corrupted with electro-cauterizer noise.
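A quick way to convince yourself of the isi(n) vs. isi(n+1) argument is a toy simulation: generate ISIs whose rate switches between slow and fast epochs (a crude stand-in for the nonstationarity described above, not the actual recordings), then compare the lag-1 correlation before and after shuffling. A Python sketch, with all names my own:

```python
import math
import random

def lag1_corr(xs):
    """Pearson correlation between isi(n) and isi(n+1)."""
    a, b = xs[:-1], xs[1:]
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((u - ma) * (v - mb) for u, v in zip(a, b))
    sa = math.sqrt(sum((u - ma) ** 2 for u in a))
    sb = math.sqrt(sum((v - mb) ** 2 for v in b))
    return cov / (sa * sb)

rng = random.Random(7)
# Nonstationary toy channel: the firing rate alternates between slow and
# fast epochs, so neighboring ISIs tend to be similar in size.
isis = []
for epoch in range(20):
    rate = 10.0 if epoch % 2 == 0 else 200.0
    isis += [-(1.0 / rate) * math.log(1.0 - rng.random())
             for _ in range(1000)]

shuffled = isis[:]
rng.shuffle(shuffled)  # identical ISI distribution, ordering destroyed
```

Shuffling leaves the set of ISIs unchanged but drives the lag-1 correlation to roughly zero, which is exactly the structure the scatter plots above are probing.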
{249} |
ref: notes-0
tags: sorting SNR correlation coefficient expectation maximization tlh24
date: 01-06-2012 03:07 gmt
revision:5
Description: red is the per-channel cross-validated correlation coefficient of prediction. Blue is the corresponding number of clusters that the unit was sorted into, divided by 10 to fit on the same axis. The variable being predicted is cartesian X position. Note 32 channels were dead (from PP). The last four (most predictive) channels were: 71 (1 unit), 64 (5 units), 73 (6 units), 67 (1 unit). Data from sql entry: clem 2007-03-08 18:59:27 timarm_log_20070308_185706.out ; looks like this data came from PMD region.

Description: same as above, but for the y-axis.

Description: same as above, but for the z-axis.

Conclusion: sorting seems to matter & have a non-negligible positive effect on predictive ability.
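For concreteness, here is one plausible reading of "cross-validated correlation coefficient of prediction" as a Python sketch: fit a decoder on one half of the data, then report the Pearson correlation between prediction and actual position on the held-out half. The helper names and the single-channel linear decoder are my assumptions; the original analysis presumably used a richer decoder over sorted units:

```python
import math

def pearson_r(a, b):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = math.sqrt(sum((x - ma) ** 2 for x in a))
    sb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (sa * sb)

def cv_corrcoef(rate, pos):
    """Fit pos ~ a*rate + b on the first half of the data, then return
    the Pearson correlation between prediction and actual position on
    the held-out second half."""
    half = len(rate) // 2
    rt, pt = rate[:half], pos[:half]
    mr, mp = sum(rt) / half, sum(pt) / half
    a = (sum((r - mr) * (p - mp) for r, p in zip(rt, pt))
         / sum((r - mr) ** 2 for r in rt))
    b = mp - a * mr
    pred = [a * r + b for r in rate[half:]]
    return pearson_r(pred, pos[half:])
```

A channel whose firing rate linearly tracks position scores near 1; a dead or unrelated channel scores near 0.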
{585} |
LMS-based adaptive decorrelator: xn is the noise, xs is the signal, len is the filter length, delay is the delay beyond which the autocorrelation function of the signal is zero but the acf of the noise is non-zero. The filter is very simple, and should be easy to implement in a DSP.

function [y, e, h] = lms_test(xn, xs, len, delay)
h = zeros(len, 1);
x = xn + xs;
for k = 1:length(x)-len-delay
	y(k) = x(k+delay:k+len-1+delay) * h;
	e(k) = x(k) - y(k);
	h = h + 0.0004 * e(k) * x(k+delay:k+len-1+delay)';
end

It works well if the noise source is predictable & stable (black = sinusoidal noise, red = output, green = error in output).

Now, what if the amplitude of the corrupting sinusoid changes (e.g. due to varying electrode properties during movement), and the changes per cycle are larger than the amplitude of the signal? The signal will be swamped! The solution to this is to adapt the decorrelating filter slowly, by adding an extra (multiplicative, nonlinear) gain term to track the error in terms of the absolute values of the signals (another nonlinearity). So, if the input signal is on average larger than the output, the gain goes up, and vice-versa. See the code:

function [y, e, h, g] = lms_test(xn, xs, len, delay)
h = zeros(len, 1);
x = xn + xs;
gain = 1;
e = zeros(size(x));
e2 = zeros(size(x));
for k = 1:length(x)-len-delay
	y(k) = x(k+delay:k+len-1+delay) * h;
	e(k) = x(k) - y(k);
	h = h + 0.0002 * e(k) * x(k+delay:k+len-1+delay)'; % slow adaptation.
	y2(k) = y(k) * gain;
	e2(k) = abs(x(k)) - abs(y2(k));
	gain = gain + 1 * e2(k);
	gain = abs(gain);
	if (gain > 3) gain = 3; end
	g(k) = gain;
end

If, like me, you are interested in only the abstract features of the signal, and not an accurate reconstruction of the waveform, then the gain signal (g above) reflects the signal in question (once the predictive filter has adapted).
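The basic (non-gain-tracking) decorrelator is easy to test outside Matlab. In this Python sketch the variable names follow the Matlab code; the sinusoidal test input, step size, and filter length are my own choices:

```python
import math
import random

def lms_decorrelate(x, flen, delay, mu):
    """LMS predictor: estimate x[k] from x[k+delay : k+delay+flen].
    With delay past the signal's correlation length, only the
    long-range-correlated noise is predictable, so the error e
    retains the signal while the noise is cancelled."""
    h = [0.0] * flen
    y, e = [], []
    for k in range(len(x) - flen - delay):
        window = x[k + delay : k + delay + flen]
        yk = sum(w * s for w, s in zip(h, window))   # prediction
        ek = x[k] - yk                               # prediction error
        h = [w + mu * ek * s for w, s in zip(h, window)]  # LMS update
        y.append(yk)
        e.append(ek)
    return y, e

rng = random.Random(3)
# Predictable corrupting noise: a pure sinusoid (period 20 samples).
noise = [math.sin(2 * math.pi * 0.05 * k) for k in range(20000)]
# Unpredictable small signal: white, ~1/30th the noise amplitude.
signal = [0.05 * (rng.random() - 0.5) for _ in range(20000)]
x = [n + s for n, s in zip(noise, signal)]
y, e = lms_decorrelate(x, flen=16, delay=16, mu=0.01)
```

After the filter adapts, the error e carries the unpredictable signal component while the sinusoid is largely cancelled; this is the "works well if the noise source is predictable & stable" case above.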
In my experiments with a length-16 filter delayed 16 samples, extracting the gain signal and filtering out out-of-band information yielded about +45 dB improvement in SNR. This was with a signal 1/100th the size of the disturbing amplitude-modulated noise - about twice as good as the human ear/auditory system in my tests.
It doesn't look like much, but it is just perfect for EMG signals corrupted by time-varying 60 Hz noise.
{862} |
ref: -0
tags: backpropagation cascade correlation neural networks
date: 12-20-2010 06:28 gmt
revision:1
The Cascade-Correlation Learning Architecture
{826} |
Scargle, J. D., "Studies in astronomical time series analysis. II - Statistical aspects of spectral analysis of unevenly spaced data"