{1517}
ref: -2015 tags: spiking neural networks causality inference demixing date: 07-22-2020 18:13 gmt revision:1

PMID-26621426 Causal Inference and Explaining Away in a Spiking Network

  • Rubén Moreno-Bote & Jan Drugowitsch
  • Use linear non-negative mixing plus noise to generate a series of sensory stimuli.
  • Pass these through a one-layer spiking or non-spiking neural network with adaptive global inhibition and an adaptive reset voltage to solve the resulting quadratic programming problem with non-negativity constraints.
  • N causes, one observation: $\mu = \sum_{i=1}^{N} u_i r_i + \epsilon$,
    • $r_i \geq 0$ -- causes can be present or not present, but not negative.
    • cause coefficients drawn from a truncated (positive-only) Gaussian.
  • linear spiking network with symmetric weight matrix $J = -U^T U - \beta I$
    • That is ... J looks like a correlation matrix!
    • $U$ is M x N; columns are the mixing vectors.
    • $U$ is known beforehand and not learned.
      • That said, as a quasi-correlation matrix, it might not be so hard to learn. See ref [44].
  • Can solve this problem by minimizing the negative log-posterior function (see the numerical sketch at the end of this entry): $$ L(\mu, r) = \frac{1}{2}(\mu - Ur)^T(\mu - Ur) + \alpha 1^T r + \frac{\beta}{2} r^T r $$
    • That is, want to maximize the joint probability of the causes and observations under the probabilistic model $p(\mu, r) \propto \exp(-L(\mu, r)) \prod_{i=1}^{N} H(r_i)$, where $H$ is the Heaviside step function enforcing non-negativity.
    • The first term quadratically penalizes the difference between prediction and measurement.
    • The second term with $\alpha$ is an L1 regularization, and the third term with $\beta$ is an L2 regularization.
  • The negative log-likelihood is then converted to an energy function (linear algebra): with $W = -U^T U$ and $h = U^T \mu$, $$ E(r) = \frac{1}{2} r^T W r - r^T h + \alpha 1^T r + \frac{\beta}{2} r^T r $$
    • This is where they get the weight matrix J or W. If the vectors U are linearly independent, then $W$ is negative semidefinite.
  • The dynamics of individual neurons w/ global inhibition and a variable reset voltage serve to minimize this energy -- hence, solve the problem. (They gloss over this derivation in the main text; see the rate-dynamics sketch at the end of this entry.)
  • Next, they show that a spike-based network can similarly 'relax' or descend the objective gradient to arrive at the quadratic programming solution.
    • Network is N leaky integrate and fire neurons, with variable synaptic integration kernels.
    • $\alpha$ then translates to global inhibition, and $\beta$ to a lowered reset voltage.
  • Yes, it can solve the problem -- and does so in the presence of firing noise, in a finite period of time -- but it's a little bit meh, because the problem is not that hard, and there is no learning in the network.
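
For concreteness, a minimal numpy sketch of the setup and solution. The dimensions, noise level, and regularization constants are invented for illustration, and instead of the paper's spiking dynamics it solves the non-negative quadratic program directly, by projected gradient descent on $E(r)$:

```python
import numpy as np

rng = np.random.default_rng(0)
M, N = 20, 8                      # observation dim, number of causes (made up)

# Mixing vectors (columns of U) and ground-truth causes, both drawn from
# truncated (positive-only) Gaussians, per the generative model above.
U = np.abs(rng.normal(size=(M, N)))
r_true = np.abs(rng.normal(size=N))
mu = U @ r_true + 0.05 * rng.normal(size=M)   # mu = sum_i u_i r_i + noise

alpha, beta = 0.1, 0.01           # L1 / L2 regularization weights (made up)

def energy(r):
    """E(r) = 0.5||mu - U r||^2 + alpha 1^T r + 0.5 beta r^T r"""
    resid = mu - U @ r
    return 0.5 * resid @ resid + alpha * r.sum() + 0.5 * beta * (r @ r)

# Network quantities: recurrent weights J = -U^T U - beta I, feedforward
# drive h = U^T mu, global inhibition alpha.  grad E = G r - h + alpha.
h = U.T @ mu
G = U.T @ U + beta * np.eye(N)    # = -J
eta = 1.0 / np.linalg.norm(G, 2)  # step size from the spectral norm of G

r = np.zeros(N)
for _ in range(2000):
    r = np.maximum(0.0, r - eta * (G @ r - h + alpha))  # project onto r >= 0

print("E(r):", energy(r))
print("recovered:", np.round(r, 3))
print("true:     ", np.round(r_true, 3))
```

With $\alpha > 0$ the recovered $r$ is a slightly shrunk version of r_true, as expected from the L1 term.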
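And a sketch of how neuron-like dynamics can perform that descent. This is not the paper's spiking LIF network (whose derivation is glossed over); it is the related rate-based locally competitive algorithm (LCA) of Rozell et al., reusing N, h, G, and alpha from the sketch above. Its fixed points satisfy the same first-order optimality conditions, with $\alpha$ acting as a firing threshold and $-U^T U$ as lateral inhibition:

```python
# Continues the sketch above (reuses np, N, h, G, alpha).
u = np.zeros(N)                   # internal, membrane-potential-like state
tau, dt = 10.0, 0.1               # time constant and Euler step (made up)

for _ in range(5000):
    r = np.maximum(0.0, u - alpha)                     # rectified rate; threshold = alpha
    u += (dt / tau) * (-u + h - (G - np.eye(N)) @ r)   # leak + drive + lateral inhibition

print("network solution:", np.round(np.maximum(0.0, u - alpha), 3))
```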

{1430}
ref: -2017 tags: calcium imaging seeded iterative demixing light field microscopy mouse cortex hippocampus date: 02-13-2019 22:44 gmt revision:1

PMID-28650477 Video rate volumetric Ca2+ imaging across cortex using seeded iterative demixing (SID) microscopy

  • Tobias Nöbauer, Oliver Skocek, Alejandro J Pernía-Andrade, Lukas Weilguny, Francisca Martínez Traub, Maxim I Molodtsov & Alipasha Vaziri
  • Cell-scale, video-rate imaging of hundreds of GCaMP6-labeled neurons via light-field microscopy, followed by computationally efficient deconvolution and iterative demixing based on non-negative factorization in space and time.
  • Utilized a hybrid light-field and 2p microscope, but didn't use the latter to inform the SID algorithm.
  • Algorithm:
    • Remove motion artifacts
    • Time iteration:
      • Compute the standard deviation versus time (subtract the mean over time, then measure the standard deviation of each pixel).
      • Deconvolve the standard-deviation image using the Richardson-Lucy algo, with non-negativity, sparsity constraints, and a simulated PSF (see the sketch at the end of this entry).
      • Yields hotspots of activity, putative neurons.
      • These neuron locations are convolved with the PSF, thereby estimating each neuron's ballistic image on the LFM.
      • This is converted to a binary mask of pixels which contribute information to the activity of a given neuron, a 'footprint'.
        • Form a matrix of these footprints, $S_0$, p x n (p pixels, n neurons).
      • Also get the corresponding image data $Y$, p x t (t time points).
      • Solve: minimize over $T$ $|| Y - ST||_2$ subject to $T \geq 0$ (see the NNLS sketch at the end of this entry).
        • That is, find a non-negative matrix of temporal components $T$ which predicts data $Y$ from masks $S$.
    • Space iteration:
      • Start with the masks again, $S$; find all sets $O^k$ of spatially overlapping components $s_i$ (i.e. where footprints overlap).
      • Extract the corresponding data columns $t_i$ of $T$ (from the temporal step above) for each $O^k$ to yield $T^k$. Each column corresponds to temporal data for one spatial overlap set. (additively?)
      • Also get the data matrix $Y^k$, the image data in the overlapping regions, in the same way.
      • Minimize over $S^k$: $|| Y^k - S^k T^k||_2$ subject to $S^k \geq 0$.
        • That is, solve over the footprints $S^k$ to best predict the data from the corresponding temporal components $T^k$.
        • They also impose spatial constraints on this non-negative least squares problem (not explained).
    • This process repeats.
    • Allegedly 1000x better than existing deconvolution / blind source segmentation algorithms, such as those used in CaImAn.
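
A sketch of the Richardson-Lucy step, whose multiplicative update keeps the estimate non-negative by construction. The sparsity constraint and the simulated light-field PSF from the paper are omitted; the Gaussian stand-in PSF and random stand-in standard-deviation image are invented for illustration:

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(img, psf, n_iter=50, eps=1e-12):
    """Deconvolve img by psf; multiplicative updates preserve non-negativity."""
    psf_mirror = psf[::-1, ::-1]
    est = np.full_like(img, img.mean())       # flat, non-negative initialization
    for _ in range(n_iter):
        blurred = fftconvolve(est, psf, mode='same')
        ratio = img / (blurred + eps)         # data vs. current prediction
        est *= fftconvolve(ratio, psf_mirror, mode='same')
    return est

# Toy usage: a 2D Gaussian standing in for the simulated LFM PSF.
x = np.arange(-7, 8)
g = np.exp(-x**2 / 8.0)
psf = np.outer(g, g)
psf /= psf.sum()
sd_img = np.abs(np.random.default_rng(1).normal(size=(64, 64)))  # stand-in std-dev image
hotspots = richardson_lucy(sd_img, psf)       # peaks = putative neuron locations
```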
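And a simplified sketch of the two alternating demixing updates. This collapses the paper's per-overlap-set bookkeeping (the $O^k$, $T^k$, $Y^k$ above) into plain masked non-negative least squares -- one NNLS per time point for $T$, and one per pixel, restricted to the footprints covering that pixel, for $S$. Function names and loop structure are my own:

```python
import numpy as np
from scipy.optimize import nnls

def temporal_update(Y, S):
    """min_T ||Y - S T||_2 s.t. T >= 0, solved one time point at a time."""
    T = np.zeros((S.shape[1], Y.shape[1]))
    for j in range(Y.shape[1]):
        T[:, j], _ = nnls(S, Y[:, j])
    return T

def spatial_update(Y, T, support):
    """min_S ||Y - S T||_2 s.t. S >= 0, solved one pixel at a time, restricted
    to components whose binary footprint ('support') covers that pixel."""
    S = np.zeros((Y.shape[0], T.shape[0]))
    for i in range(Y.shape[0]):
        idx = np.flatnonzero(support[i])      # components overlapping pixel i
        if idx.size:
            S[i, idx], _ = nnls(T[idx].T, Y[i])
    return S

# Alternate the two updates, seeded from the deconvolved footprints S0
# (Y: p x t data, S0: p x n binary masks):
#   S = S0.astype(float)
#   for _ in range(5):
#       T = temporal_update(Y, S)
#       S = spatial_update(Y, T, support=S0 > 0)
```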