m8ta

{1155}  
A frequent task in the lab is to sort spikes (extracellular neural action potentials) from background noise. In the lab we are working on doing this wirelessly; to minimize power consumption, spike sorting is done before the radio, so that only the times of spikes need be transmitted, saving bandwidth and power. (This necessitates a bidirectional radio protocol, but that is a worthy sacrifice.) In most sorting programs (e.g. Plexon), the raw signal is first thresholded, then waveform snippets (typically 32 samples long) are compared to a template to accept/reject them, or to sort them into different units. The comparison metric is usually the mean-squared error (MSE), a.k.a. the L2 norm. This makes sense if the spike shapes are assumed to be stereotyped (they may very well not be) and the noise white / uncorrelated (another debatable assumption).

On the headstage we are working with for wireless neural recording, jumps and memory moves are expensive operations, hence we've elected to do no waveform extraction and instead match continuously. By using the built-in MPEG compression opcodes, we can compute the L1 norm at a rate of 4 samples per clock, which is very efficient. However, this was motivated more by hardware considerations than by actual spike-sorting practice. The literature suggests that for isolating a fixed-pattern signal embedded in noise, the best solution is instead a matched filter.

Hence, a careful study of spike sorting was attempted in Matlab, given the following assumptions: a fixed spike shape (extracted from real data), and uncorrelated band-limited noise. The latter was just white noise passed through a bandpass filter, e.g. cheby1(3, 2, [500/15e3 7.5/15]), where the passband edges are 500 Hz and 7.5 kHz at a sampling rate of 30 kHz. (The actual rate is 31.25 kHz.) Since the spike times are known, we can rigorously compare the receiver operating characteristic (ROC) and the area under the curve (AUC) for different sorting algorithms.
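The comparison can be sketched in Python/numpy (mirroring the Matlab setup above). This is a minimal sketch, not the actual analysis code: the spike template here is a synthetic biphasic shape standing in for the one extracted from real data, and the SNR, spike spacing, and record length are arbitrary choices.

```python
import numpy as np
from scipy import signal

rng = np.random.default_rng(0)
fs, n = 30e3, 60000

# band-limited noise: white noise through the bandpass quoted above
b, a = signal.cheby1(3, 2, [500/15e3, 7.5/15], btype='bandpass')
noise = signal.lfilter(b, a, rng.standard_normal(n))
noise /= noise.std()

# synthetic biphasic spike shape, standing in for the real template
t = np.arange(32)
template = np.exp(-((t - 10) / 3.0)**2) - 0.6 * np.exp(-((t - 16) / 5.0)**2)
template /= np.abs(template).max()

snr = 2.0                                  # peak amplitude relative to noise std
spike_times = np.arange(1000, n - 1000, 600)
x = noise.copy()
for st in spike_times:
    x[st:st + 32] += snr * template

def auc(pos, neg):
    # Mann-Whitney U: P(random positive scores above random negative)
    ranks = np.concatenate([pos, neg]).argsort().argsort() + 1
    u = ranks[:len(pos)].sum() - len(pos) * (len(pos) + 1) / 2
    return u / (len(pos) * len(neg))

# detector output at every candidate alignment (no snippet extraction)
win = np.lib.stride_tricks.sliding_window_view(x, 32)
l1 = -np.abs(win - snr * template).sum(axis=1)    # higher = better match
l2 = -((win - snr * template)**2).sum(axis=1)     # Plexon-style MSE
mf = win @ template                               # FIR matched filter

neg_times = spike_times + 300                     # known noise-only alignments
for name, d in [('L1', l1), ('L2', l2), ('matched FIR', mf)]:
    print(name, round(auc(d[spike_times], d[neg_times]), 3))
```

Since the spike times are known, the AUC is computed directly from detector scores at spike alignments versus noise-only alignments, with no thresholding step.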
Four were tried: L1 (as mentioned above, motivated by the MPEG opcodes), L2 (Plexon), an FIR matched filter, and an IIR matched filter. The latter was very much an experiment: IIR filters are efficiently implemented on the Blackfin processor, and they generally require fewer taps than their equivalent FIR implementation. To find an IIR equivalent to a given FIR matched filter (whose impulse response closely resembles the actual wave shape, just time-reversed), the filter parameters were simply optimized to match the two impulse responses. To facilitate the search, the denominator was specified in terms of complex-conjugate pole locations (thereby constraining the form of the filter), while the numerator coefficients were individually optimized. Note that the objective here is not to maximize sorting quality; rather, it is to make the IIR filter impulse response as close as possible to that of the FIR matched filter, and hence computationally light. And yet: the IIR filter outperforms the FIR matched filter, even though the IIR filter has roughly 1/3 the coefficients (10 vs. 32)! Below is the AUC quality metric for the four methods, and here are representative ROC curves at varying spike SNRs. The remarkable thing is that even at very low SNR, the matched IIR filter can reliably sort spikes from noise. (Note that the acceptable false-positive rate here should be weighted more heavily; in the present analysis true positives and false positives are weighted equally, which is decidedly non-Bayesian, given that most of the time there is no spike.) The matched IIR filter is far superior to the usual MSE-to-template / L2-norm method; it seems we've been doing it wrong all along? As for reliably finding spikes / templates / filters when the SNR < 0 dB: the tests above, which assume an equal number of spike samples and non-spike samples, are highly biased; spikes are not normally sortable when the SNR < 0 dB. Upon looking at the code again, I realized three important things:
Including #1 above, as expected, dramatically increased the false-positive rate; this is how the filters will be used in the real world. #2 did not dramatically impact any of the discriminators, which is good. #3 closed the gap between the IIR and FIR filters, and indeed the FIR matched-filter performance now slightly exceeds that of the IIR matched filter. Below: the AUC metric for the four methods, and the corresponding ROC curves for 6 different SNRs (note that the SNRs sampled are slightly different, due to the higher false-positive rate). One thing to note: as implemented, the IIR filter requires careful matching of poles and zeros, and may not work with 1.15 fixed-point math on the Blackfin. The method really deserves to be tested in vivo, which I shall do shortly.

More updates: see www.aicit.org/jcit/ppl/JCIT0509_05.pdf, where they add an 'adjustment' function to the matched filter to account for variance in the amplitude of spikes, which adds a little performance at low SNRs: $F(t) = \left[ \frac{x(t)}{k \sigma} \cdot e^{1 - \frac{x(t)}{k \sigma}} \right]^n$ Here $\sigma$ is the standard deviation of $x(t)$, and $n$ and $k$ determine the 'zoom intensity' and 'zoom center'. The paper is not particularly well written (there are some typos, and the idea seems underjustified), but the references are interesting:
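The adjustment function is simple to state in code. A sketch, assuming $\sigma$ is supplied rather than estimated, and with arbitrary illustrative values for $k$ and $n$:

```python
import numpy as np

def adjustment(x, sigma, k=1.0, n=2):
    """Amplitude 'adjustment' F(t) from the paper: peaks at x = k*sigma
    (where F = 1, since u*exp(1-u) = 1 at u = 1) and falls off on either
    side, de-emphasizing samples far from the expected spike amplitude."""
    u = x / (k * sigma)
    return (u * np.exp(1.0 - u)) ** n

# expected spike amplitude at 2*sigma: F peaks there
print(adjustment(np.array([0.5, 1.0, 2.0, 4.0]), sigma=1.0, k=2.0, n=2))
```

With even $n$ the output is non-negative everywhere; $k$ sets where the "zoom" is centered and $n$ how sharply it falls off, consistent with the paper's description.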
Update: it is not too difficult to convert FIR filters to IIR filters using simple numerical optimization. Within my client program, this is done using simulated annealing; I have also tested it using fminsearch in Matlab. To investigate the IIR-filter fitting problem more fully, I sliced the 10-dimensional optimization space along pairs of dimensions about the optimum point found by fminsearch. The parameters are as follows:
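A minimal sketch of the FIR-to-IIR fit in Python/scipy, with Nelder-Mead standing in for fminsearch / simulated annealing. The 32-tap FIR target here is a synthetic time-reversed spike shape, and the split into 2 conjugate pole pairs plus 6 numerator taps (10 parameters) is illustrative, not the actual parameterization:

```python
import numpy as np
from scipy import signal, optimize

# target: a 32-tap FIR matched filter (synthetic time-reversed spike shape)
t = np.arange(32)
fir = (np.exp(-((t - 10) / 3.0)**2) - 0.6 * np.exp(-((t - 16) / 5.0)**2))[::-1]

npairs, nb = 2, 6   # 2 conjugate pole pairs + 6 numerator taps = 10 parameters

def impulse_response(p, length=32):
    # denominator built from complex-conjugate pole pairs (radius, angle),
    # constraining the filter's form; numerator taps are free
    radii, thetas, b = p[:npairs], p[npairs:2*npairs], p[2*npairs:]
    poles = np.concatenate([radii * np.exp(1j * thetas),
                            radii * np.exp(-1j * thetas)])
    a = np.real(np.poly(poles))
    return signal.lfilter(b, a, np.eye(1, length).ravel())

def objective(p):
    # clip pole radii inside the unit circle to keep the filter stable
    r = np.clip(np.abs(p[:npairs]), 0.0, 0.999)
    p = np.concatenate([r, p[npairs:]])
    return np.sum((impulse_response(p) - fir)**2)

p0 = np.concatenate([[0.7, 0.7], [0.5, 1.5], np.zeros(nb)])
res = optimize.minimize(objective, p0, method='Nelder-Mead',
                        options={'maxiter': 5000, 'maxfev': 10000})
print('impulse-response mismatch:', res.fun)
```

The objective is the squared impulse-response mismatch, matching the goal stated above: make the IIR response resemble the FIR matched filter, not directly maximize sorting quality.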
The figure below plots the objective over ±1 about the optimum for each axis pair. Click for the full-resolution image. Note that the last parameter is discrete, hence the steps in the objective function. Also note that the problem is perfectly quadratic in the numerator coefficients, as expected, which is why LMS works so well. For the denominator pole locations, the basin around the optimum is small, and there are interesting features beyond it; some slices show multiple optima. The next figure plots ±0.1 about the optimum for each axis vs. every other. It shows that, at least on a small scale, the problem becomes very nearly quadratic in all axes, hence amenable to line search or conjugate-gradient search. Moving away from planes that pass through a found optimum, what does the space look like? E.g., from a naive start, how hard is it to find at least one workable solution? To test this, I perturbed the found optimum with white noise in the parameters (std 0.2) and plotted the objective function as before, albeit at higher resolution (600 x 600 points for each slice). These figures show that there can be several optima in the denominator, but again it appears that a rough exploration followed by gradient descent should arrive at a usable optimum.
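The slicing used for these figures is generic and easy to sketch. Here it is with a toy stand-in objective (the real objective is the FIR/IIR impulse-response mismatch); the grid span and resolution are arbitrary:

```python
import numpy as np

def slice_grid(f, p_opt, i, j, span=1.0, npts=61):
    """Evaluate f on a 2-D slice through p_opt along dimensions i and j,
    holding all other parameters fixed."""
    u = np.linspace(-span, span, npts)
    g = np.empty((npts, npts))
    for a, du in enumerate(u):
        for c, dv in enumerate(u):
            p = p_opt.copy()
            p[i] += du
            p[j] += dv
            g[a, c] = f(p)
    return g

# toy stand-in: quadratic bowl plus a non-quadratic ripple along axis 0
f = lambda p: np.sum(p**2) + 0.3 * np.sin(3 * p[0])**2
g = slice_grid(f, np.zeros(4), 0, 1)
print(g.shape, g.min())
```

Looping over all axis pairs (i, j) and imaging each grid reproduces the pairwise-slice plots described above; perturbing `p_opt` with noise before slicing gives the naive-start view.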
{175}  
{15} 
ref: bookmark0
tags: monte_carlo MCMC particle_filter probability bayes filtering biblography
date: 002007 0:0
revision:0


http://www-sigproc.eng.cam.ac.uk/smc/papers.html : sequential Monte Carlo methods (bibliography)
