 m8ta
{1430} hide / / print ref: -2017 tags: calcium imaging seeded iterative demixing light field microscopy mouse cortex hippocampus date: 02-13-2019 22:44 gmt revision:1 [head]

Tobias Nöbauer, Oliver Skocek, Alejandro J Pernía-Andrade, Lukas Weilguny, Francisca Martínez Traub, Maxim I Molodtsov & Alipasha Vaziri

Cell-scale imaging at video rates of hundreds of GCaMP6-labeled neurons with light-field imaging, followed by computationally efficient deconvolution and iterative demixing based on non-negative factorization in space and time. They used a hybrid light-field and 2p microscope, but did not use the latter to inform the SID algorithm.

Algorithm:
- Remove motion artifacts.
- Time iteration:
  - Compute the standard deviation versus time (subtract the temporal mean, then take the per-pixel standard deviation).
  - Deconvolve the standard-deviation image with the Richardson-Lucy algorithm, using non-negativity and sparsity constraints and a simulated PSF. This yields hotspots of activity, putative neurons.
  - Convolve these neuron locations with the PSF, thereby estimating each neuron's ballistic image on the LFM. Convert this to a binary mask of the pixels that contribute information to that neuron's activity, its 'footprint'.
  - Form a p × n matrix of these footprints, $S_0$ (p pixels, n neurons), along with the corresponding p × t image data matrix $Y$ (t time points).
  - Solve: minimize over $T$ $|| Y - ST||_2$ subject to $T \geq 0$. That is, find a non-negative matrix of temporal components $T$ which predicts the data $Y$ from the masks $S$.
- Space iteration:
  - Starting again from the masks $S$, find all sets $O^k$ of spatially overlapping components $s_i$ (i.e. where footprints overlap).
  - Extract the corresponding columns $t_i$ of $T$ (from the temporal step above) for each $O^k$ to yield $T^k$; each column is the temporal data for one component in the overlap set (additively?). Likewise extract the data matrix $Y^k$, the image data restricted to the overlapping regions.
  - Minimize over $S^k$: $|| Y^k - S^k T^k||_2$ subject to $S^k \geq 0$. That is, solve for the footprints $S^k$ that best predict the data from the corresponding temporal components $T^k$. They also impose spatial constraints on this non-negative least-squares problem (not explained).
- This process repeats.

Allegedly 1000x better than existing deconvolution / blind source segmentation algorithms, such as those used in CaImAn.
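The temporal update above, minimize $||Y - ST||_2$ over $T \geq 0$, decomposes into one small non-negative least-squares problem per frame, since the columns of $Y$ are independent given $S$. A minimal sketch with numpy/scipy, using invented synthetic footprints and traces in place of real LFM data (function and variable names here are illustrative, not from the paper's code):

```python
import numpy as np
from scipy.optimize import nnls

def temporal_step(S, Y):
    """Solve min_T ||Y - S T||_2 s.t. T >= 0, one time point at a time.

    S : (p, n) non-negative footprint matrix, one column per neuron.
    Y : (p, t) movie data, one column per frame.
    Returns T : (n, t) non-negative temporal components.
    """
    n = S.shape[1]
    t = Y.shape[1]
    T = np.zeros((n, t))
    for j in range(t):
        # each frame is an independent NNLS problem in the n footprints
        T[:, j], _ = nnls(S, Y[:, j])
    return T

# tiny synthetic check: two non-overlapping footprints, known traces
rng = np.random.default_rng(0)
S = np.zeros((6, 2))
S[:3, 0] = 1.0   # neuron 0 occupies pixels 0-2
S[3:, 1] = 1.0   # neuron 1 occupies pixels 3-5
T_true = rng.uniform(0.0, 1.0, size=(2, 5))
Y = S @ T_true
T_hat = temporal_step(S, Y)
```

With noiseless data and non-overlapping footprints, the NNLS solution recovers the true traces exactly; the per-frame decomposition is what makes the problem cheap at scale.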
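The spatial step can be sketched the same way: group footprints into the overlap sets $O^k$ via connected components of the footprint-adjacency graph, then refine each pixel's row of $S$ by NNLS against the group's traces (rows of $Y^k$ are independent given $T^k$). Again a hedged sketch with invented synthetic data, not the authors' implementation; their additional spatial constraints are omitted:

```python
import numpy as np
from scipy.optimize import nnls
from scipy.sparse.csgraph import connected_components

def spatial_step(S, T, Y):
    """Solve min ||Y^k - S^k T^k||_2 s.t. S^k >= 0 for each overlap group O^k.

    S : (p, n) current footprints, T : (n, t) traces, Y : (p, t) data.
    Returns an updated (p, n) footprint matrix.
    """
    # neurons i, j are adjacent if their footprints share at least one pixel
    adj = ((S.T @ S) > 0).astype(int)
    n_groups, labels = connected_components(adj, directed=False)
    S_new = np.zeros_like(S)
    for k in range(n_groups):
        cols = np.flatnonzero(labels == k)                 # components in O^k
        pix = np.flatnonzero(S[:, cols].sum(axis=1) > 0)   # their joint support
        Tk = T[cols, :]                                    # T^k, shape (n_k, t)
        for i in pix:
            # each pixel's footprint row is an independent NNLS in the traces
            S_new[i, cols], _ = nnls(Tk.T, Y[i, :])
    return S_new

# synthetic check: two footprints overlapping on pixels 2-3
rng = np.random.default_rng(1)
S = np.zeros((6, 2))
S[:4, 0] = 1.0
S[2:, 1] = 1.0
T = rng.uniform(0.1, 1.0, size=(2, 8))
Y = S @ T
S_hat = spatial_step(S, T, Y)
```

Restricting each NNLS to one overlap group keeps the subproblems small; components that never overlap are refined independently of one another.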