m8ta

{1052}  
IEEE5332822 (pdf) Neural prosthetic systems: Current problems and future directions
____References____ Chestek, C.A., Cunningham, J.P., Gilja, V., Nuyujukian, P., Ryu, S.I. and Shenoy, K.V. Neural prosthetic systems: Current problems and future directions. Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC 2009), pp. 3369-3375 (2009)  
{1155}  
A frequent task in the lab is to sort spikes (extracellular neural action potentials) from background noise. In the lab we are working on doing this wirelessly; to minimize power consumption, spike sorting is done before the radio, so that only the times of spikes need be transmitted, saving bandwidth and power. (This necessitates a bidirectional radio protocol, but that is a worthy sacrifice.)

In most sorting programs (e.g. Plexon), the raw signal is first thresholded, then waveform snippets (typically 32 samples long) are compared to a template to accept/reject them, or to sort them into different units. The comparison metric is usually the mean-squared error (MSE), aka the L2 norm. This makes sense if the spike shapes are stereotyped (they may very well not be) and the noise is white / uncorrelated (another debatable assumption).

On the headstage we are working with for wireless neural recording, jumps and memory moves are expensive operations, hence we've elected to do no waveform extraction and instead match continuously. By using the built-in MPEG compression opcodes, we can compute the L1 norm at a rate of 4 samples / clock: very efficient. However, this was motivated more by hardware considerations than by actual spike-sorting practice. The literature suggests that for isolating a fixed-pattern signal embedded in noise, the best solution is instead a matched filter.

Hence, a careful study of spike sorting was attempted in matlab, given the following assumptions: a fixed spike shape (extracted from real data) and uncorrelated band-limited noise. The latter was just white noise passed through a bandpass filter, e.g. cheby1(3, 2, [500/15e3 7.5/15]), where the passband edges are 500 Hz and 7.5 kHz at a sampling rate of 30 kHz. (The actual rate is 31.25 kHz.) Since the spike times are known, we can rigorously compare the Receiver Operating Characteristic (ROC) and the area under the curve (AUC) for different sorting algorithms. 
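The noise model described above can be reproduced in a few lines. This is a NumPy/SciPy sketch of the same cheby1 design (order 3, 2 dB passband ripple, 500 Hz and 7.5 kHz edges at 30 kHz), not the original matlab code:

```python
import numpy as np
from scipy import signal

# Band-limited noise as described above: white noise through a Chebyshev
# type-I bandpass. This mirrors the matlab call cheby1(3, 2, [500/15e3 7.5/15]):
# order 3, 2 dB passband ripple, edges at 500 Hz and 7.5 kHz, fs = 30 kHz.
fs = 30e3
b, a = signal.cheby1(3, 2, [500 / (fs / 2), 7.5e3 / (fs / 2)], btype='bandpass')

rng = np.random.default_rng(0)
white = rng.standard_normal(int(fs))   # one second of white noise
noise = signal.lfilter(b, a, white)    # band-limited noise
```

Since spikes are later added at known times, any detector's hits can be scored against ground truth.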
Four were tried: L1 (as mentioned above, motivated by the MPEG opcodes), L2 (Plexon), an FIR matched filter, and an IIR matched filter. The latter was very much an experiment: IIR filters are efficiently implemented on the Blackfin processor, and they generally require fewer taps than their equivalent FIR implementation. To find an IIR equivalent to a given FIR matched filter (whose impulse response closely resembles the actual waveshape, just time-reversed), the filter parameters were simply optimized to match the two impulse responses. To facilitate the search, the denominator was specified in terms of complex-conjugate pole locations (thereby constraining the form of the filter), while the numerator coefficients were individually optimized. Note that the objective here is not to maximize sorting quality; rather, it is to make the IIR filter impulse response as close as possible to that of the FIR matched filter, hence computationally light.

And yet: the IIR filter outperforms the FIR matched filter, even though the IIR filter has 1/3 the coefficients (10 vs 32)! Below is the AUC quality metric for the four methods, and here are representative ROC curves at varying spike SNRs. The remarkable thing is that even at very low SNR, the matched IIR filter can reliably sort cells from noise. (Note that the acceptable false positive rate here should be weighted more highly; in the present analysis true positives and false positives are weighted equally, which is decidedly non-Bayesian given that most of the time there is no spike.) The matched IIR filter is far superior to the usual MSE-to-template / L2 norm method; it seems we've been doing it wrong all along? As for reliably finding spikes / templates / filters when the SNR < 0: the tests above, which assume an equal number of spike samples and non-spike samples, are highly biased; spikes are not normally sortable when the SNR < 0. Upon looking at the code again, I realized three important things:
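Three of the four discriminators can be sketched and scored in a few lines. This is not the original matlab study: the biphasic template, spike amplitude, and spike count below are made-up stand-ins, and the IIR variant is omitted here.

```python
import numpy as np
from scipy import signal

rng = np.random.default_rng(1)
fs = 31.25e3
n = int(fs)                       # one second of data

# Band-limited noise, as in the notes (500 Hz .. 7.5 kHz passband).
b, a = signal.cheby1(3, 2, [500 / (fs / 2), 7.5e3 / (fs / 2)], btype='bandpass')
noise = signal.lfilter(b, a, rng.standard_normal(n))

# Stand-in 32-sample spike template (unit norm); the notes used a real one.
t = np.arange(32)
tmpl = np.sin(2 * np.pi * t / 32) * np.exp(-((t - 10) ** 2) / 40.0)
tmpl /= np.linalg.norm(tmpl)

amp = 2.0                         # spike amplitude: sets the SNR
labels = np.zeros(n - 31, dtype=bool)
x = noise.copy()
for st in rng.choice(np.arange(50, n - 82), 300, replace=False):
    x[st:st + 32] += amp * tmpl   # embed spikes at known times
    labels[st] = True

# One 32-sample window per sample: continuous matching, no snippet extraction.
win = np.lib.stride_tricks.sliding_window_view(x, 32)

score_l1 = -np.abs(win - amp * tmpl).sum(axis=1)   # L1 distance to template
score_l2 = -((win - amp * tmpl) ** 2).sum(axis=1)  # L2 / MSE distance
score_mf = win @ tmpl                              # FIR matched-filter output

def auc(score, pos):
    """Mann-Whitney (rank) estimate of the area under the ROC curve."""
    order = np.argsort(score)
    ranks = np.empty(len(score))
    ranks[order] = np.arange(1, len(score) + 1)
    n1, n0 = pos.sum(), (~pos).sum()
    return (ranks[pos].sum() - n1 * (n1 + 1) / 2) / (n1 * n0)

aucs = {'L1': auc(score_l1, labels),
        'L2': auc(score_l2, labels),
        'MF': auc(score_mf, labels)}
```

Sweeping `amp` reproduces AUC-vs-SNR curves like those discussed above.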
Including #1 above dramatically increased the false positive rate, which is to be expected and is how the filters will be used in the real world. #2 did not dramatically impact any of the discriminators, which is good. #3 closed the gap between the IIR and FIR filters, and indeed the FIR matched filter performance now slightly exceeds the IIR matched filter. Below, the AUC metric for the 4 methods, and the corresponding ROC for 6 different SNRs (note the SNRs sampled are slightly different, due to the higher false positive rate). One thing to note: as implemented, the IIR filter requires careful matching of poles and zeros, and may not work with 1.15 fixed-point math on the Blackfin. The method really deserves to be tested in vivo, which I shall do shortly.

More updates: see www.aicit.org/jcit/ppl/JCIT0509_05.pdf. They add an 'adjustment' function to the matched filter to account for variance in the amplitude of spikes, which adds a little performance at low SNRs: $F(t) = \left[ \frac{x(t)}{k \sigma} \cdot e^{\,1 - \frac{x(t)}{k \sigma}} \right]^n$ where $\sigma$ is the standard deviation of $x(t)$, and $n$ and $k$ determine the 'zoom intensity' and 'zoom center'. The paper is not particularly well written: there are some typos, and their idea seems unjustified. Still, the references are interesting:
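The adjustment function transcribes directly from the formula above; the default values of k and n here are illustrative, not from the paper ('zoom intensity' and 'zoom center' are their terms):

```python
import numpy as np

def adjust(x, k=3.0, n=2):
    """Amplitude 'adjustment' of matched-filter output x(t):
    F(t) = [ (x/(k*sigma)) * exp(1 - x/(k*sigma)) ]^n,
    where sigma is the standard deviation of x(t)."""
    sigma = np.std(x)
    u = x / (k * sigma)
    return (u * np.exp(1.0 - u)) ** n

# The inner term u*exp(1-u) peaks (at value 1) when x = k*sigma, so F
# de-emphasizes both small fluctuations and outsized artifacts relative
# to the expected spike amplitude.
```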
Update: It is not too difficult to convert FIR filters to IIR filters using simple numerical optimization. Within my client program this is done using simulated annealing; I have also tested it using fminsearch in matlab. To investigate the IIR-filter fitting problem more fully, I sliced the 10-dimensional optimization space along pairs of dimensions about the optimum point found by fminsearch. The parameters are as follows:
The figure below plots the objective over a ±1 range about the optimum for each axis pair. Click for the full-resolution image. Note that the last parameter is discrete, hence the steps in the objective function. Also note that the problem is perfectly quadratic in the numerator coefficients, as expected, which is why LMS works so well. For the denominator pole locations, the volume of the optimum is small, and there are interesting features beyond it; some slices have multiple optima. The next figure plots a ±0.1 range about the optimum for each axis vs. every other. It shows that, at least on a small scale, the problem becomes very nearly quadratic in all axes, hence amenable to line search or conjugate gradient search. Moving away from planes that pass through a found optimum, what does the space look like? E.g., from a naive start, how hard is it to find at least one workable solution? To test this, I perturbed the found optimum with white noise in the parameters (std 0.2) and plotted the objective function as before, albeit at higher resolution (600 x 600 points for each slice). These figures show that there can be several optima in the denominator, but again it appears that a very rough exploration followed by gradient descent should arrive at an optimum.  
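The FIR-to-IIR fit can be sketched with an off-the-shelf simplex search (Nelder-Mead, the algorithm behind matlab's fminsearch). The denominator is parameterized as complex-conjugate pole pairs, as in the notes; everything else (the stand-in 32-tap target, the 6-tap/2-pair split, the tanh radius squashing) is an illustrative choice, not the notes' actual parameterization.

```python
import numpy as np
from scipy import signal, optimize

fir = np.sin(2 * np.pi * np.arange(32) / 32)[::-1]  # stand-in matched filter
n_len = 6      # numerator taps
n_pairs = 2    # conjugate pole pairs -> denominator order 4

def unpack(p):
    """Parameter vector -> (num, den); poles come in conjugate pairs."""
    num = p[:n_len]
    poles = []
    for r, th in p[n_len:].reshape(-1, 2):
        r = np.tanh(r) * 0.999                 # keep poles inside unit circle
        poles += [r * np.exp(1j * th), r * np.exp(-1j * th)]
    den = np.real(np.poly(poles))              # real coefficients by symmetry
    return num, den

impulse = np.zeros(64)
impulse[0] = 1.0

def cost(p):
    """Squared error between IIR impulse response and the FIR target."""
    num, den = unpack(p)
    h = signal.lfilter(num, den, impulse)
    return np.sum((h[:len(fir)] - fir) ** 2)

p0 = np.concatenate([np.zeros(n_len), 0.5 * np.ones(2 * n_pairs)])
res = optimize.minimize(cost, p0, method='Nelder-Mead',
                        options={'maxiter': 4000, 'xatol': 1e-8, 'fatol': 1e-10})
num, den = unpack(res.x)
```

A rough random exploration of the pole parameters followed by a local search like this matches the conclusion above about the shape of the space.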
{1157}  
PMID22448159 Spike sorting of heterogeneous neuron types by multimodality-weighted PCA and explicit robust variational Bayes.
 
{1051}  
IEEE5226763 (pdf) An Implantable 64-Channel Wireless Microsystem for Single-Unit Neural Recording
____References____ Sodagar, A.M., Perlin, G.E., Ying Yao, Najafi, K. and Wise, K.D. An Implantable 64-Channel Wireless Microsystem for Single-Unit Neural Recording. IEEE Journal of Solid-State Circuits 44(9), 2591-2604 (2009)  
{1045}  
PMID95711 Spike separation in multiunit records: A multivariate analysis of spike descriptive parameters
 
{1037}  
IEEE1546254 (pdf) A three-dimensional neural recording microsystem with implantable data compression circuitry
____References____ Olsson, R.H., III and Wise, K.D. A three-dimensional neural recording microsystem with implantable data compression circuitry. IEEE Journal of Solid-State Circuits 40(12), 2796-2804 (2005)  
{947}  
PMID6392757 Instruments for sorting neuroelectric data: a review
 
{1044}  
PMID10522821 A 100-channel system for real time detection and storage of extracellular spike waveforms.
 
{249} 
ref: notes0
tags: sorting SNR correlation coefficient expectation maximization tlh24
date: 01-06-2012 03:07 gmt
revision:5


Description: red is the per-channel cross-validated correlation coefficient of prediction. Blue is the corresponding number of clusters that the unit was sorted into, divided by 10 to fit on the same axis. The variable being predicted is cartesian X position. Note that 32 channels were dead (from PP). The last four (most predictive) channels were: 71 (1 unit), 64 (5 units), 73 (6 units), 67 (1 unit). Data from sql entry: clem 20070308 18:59:27 timarm_log_20070308_185706.out ; looks like this data came from PMD region.
Description: same as above, but for the y-axis.
Description: same as above, but for the z-axis.
Conclusion: sorting seems to matter and has a non-negligible positive effect on predictive ability.  
{97}  
From Uncertain Spikes to Prosthetic Control: a PowerPoint presentation with a good overview of all that the Brown group has done  
{224} 
ref: notes0
tags: kmeans clustering neurophysiology sorting
date: 01-03-2012 06:51 gmt
revision:1


k-means is easy!

    % i want to do the k-means alg.
    v = [x y];
    nc = 7;                          % number of clusters
    dim = size(v, 2);
    n = size(v, 1);
    cent = rand(nc, dim);            % random initial centers
    d = zeros(n, nc);
    for k = 1:500
        % distance from every point to every center
        for s = 1:nc
            d(:, s) = sqrt(sum((v - repmat(cent(s, :), n, 1)).^2, 2));
        end
        % assign each point to its nearest center
        [~, g] = min(d, [], 2);
        % move each center to the mean of its assigned points
        for s = 1:nc
            if numel(find(g == s)) > 0
                cent(s, :) = mean(v(g == s, :), 1);
            end
        end
    end

real data from clementine:  
{948}  
PMID6396456 Computer separation of multiunit neuroelectric data: a review
 
{252}  
PMID15022843 A simulation study of information transmission by multiunit microelectrode recordings. Key idea:
{227} 
ref: notes0
tags: expectation maximization EM clustering autosorting
date: 06-16-2008 19:40 gmt
revision:5


So, I coded up the EM algorithm. It was not hard, though I did have to put the likelihood calculation in C++ because I couldn't figure out how to vectorize it properly. It fits the clusters pretty well, but it does not tell you how many clusters there are!
Clustering with 5 underlying gaussians:
Plot of the log-likelihood of fitted gaussian mixtures vs. number of gaussians:
The code is in subversion, of course. James has code for Gibbs sampling to the correct number of components! Here is an example of the output; it quickly removes the unnecessary gaussian components: images/227_4.pdf. The original CEM (classification expectation maximization) paper is from 1992, by Celeux and Govaert. Note that CEM with no variance estimation and gaussian clusters is the same as k-means; see {224}. See also http://klustakwik.sourceforge.net/  
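For reference, a minimal EM fit of a 1-D Gaussian mixture. The notes' implementation handled the general case (with the likelihood evaluation in C++); the two-component data, the min/max initialization, and the fixed 200 iterations here are made-up illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic bimodal data: two 1-D gaussians with different weights.
x = np.concatenate([rng.normal(-2.0, 0.5, 400), rng.normal(1.0, 1.0, 600)])

K = 2
mu = np.array([x.min(), x.max()])   # deterministic, well-separated init
var = np.ones(K)
w = np.ones(K) / K

for _ in range(200):
    # E step: responsibilities r[i,k] proportional to w_k N(x_i | mu_k, var_k)
    d = x[:, None] - mu[None, :]
    lik = w * np.exp(-0.5 * d**2 / var) / np.sqrt(2 * np.pi * var)
    r = lik / lik.sum(axis=1, keepdims=True)
    # M step: weighted means, variances, and mixing proportions
    nk = r.sum(axis=0)
    mu = (r * x[:, None]).sum(axis=0) / nk
    d = x[:, None] - mu[None, :]
    var = (r * d**2).sum(axis=0) / nk
    w = nk / len(x)

# Final data log-likelihood: the quantity plotted vs. number of gaussians
# above (EM maximizes it for fixed K, but does not choose K).
d = x[:, None] - mu[None, :]
lik = w * np.exp(-0.5 * d**2 / var) / np.sqrt(2 * np.pi * var)
loglik = np.log(lik.sum(axis=1)).sum()
```

As the notes observe, the log-likelihood keeps improving with more components, which is why model selection needs something beyond EM (e.g. the Gibbs-sampling approach mentioned).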
{258}  
PMID17271178 automatic spike sorting for neural decoding
{260}  
Friday March 30. Jen shared an interesting algorithm for spike sorting:

    dist = pdist(psi);
    % This finds the Euclidean distances between all of the points (waveforms)
    % in psi; dist is a row vector of length m(m-1)/2, where m is the number of
    % waveforms in psi. It could be converted into a distance matrix via the
    % squareform function, but that is computationally inefficient.
    link = linkage(dist);
    % This performs a nearest-neighbor linkage on the distances and returns a
    % matrix of size (m-1)x3. Cols 1 and 2 contain the indices of the objects
    % that were linked in pairs to form a new cluster; the new cluster formed
    % at row i is assigned the index value m+i. There are m-1 higher clusters,
    % corresponding to the interior nodes of the hierarchical cluster tree.
    % Col 3 contains the corresponding linkage distances between the objects
    % paired in the clusters at each row i.
    [H, T] = dendrogram(link, 0);
    % This creates a dendrogram; 0 instructs the function to plot all nodes in
    % the tree. H is a vector of line handles, and T a vector of the cluster
    % number assigned to each waveform in psi.

It looks real nice in theory, and computes very quickly on 2000 x 32 waveform data (provided you don't want to plot); however, I'm not sure if it works properly on synthetic data. Here are the commands that I tried:

    v = [randn(1000, 32); randn(1000, 32) + repmat(ones(1, 32), 1000, 1)];
    [coef, vec] = pca(v);
    vv = v * vec(:, 1:2);
    dist = pdist(vv);
    link = linkage(dist);
    [H, T] = dendrogram(link, 0);
    figure
    DensityPlotOpenGL(vv(:, 1), vv(:, 2))

the fitted dendrogram, without PCA
the fitted dendrogram, with PCA
the associated PCA plot of the data, clearly showing two clusters. Need to figure out how Jen made the colorized plots.  
{63}  
