m8ta
{1371}
ref: -0 tags: nanotube tracking extracellular space fluorescent date: 02-02-2017 22:13 gmt revision:0 [head]

PMID-27870840 Single-nanotube tracking reveals the nanoscale organization of the extracellular space in the live brain

  • Extracellular space (ECS) takes up nearly a quarter of the volume of the brain (!!!)
  • Used the intrinsic near-infrared fluorescence of single-walled carbon nanotubes (~1 µm emission, 845 nm excitation), with super-resolution tracking of diffusion.
    • Nanotubes were coated in phospholipid-polyethylene glycol (PL-PEG), which displays low cytotoxicity compared to other encapsulants.
  • 5 µl at 3 µg/ml injected into the ventricles of young rats; allowed to diffuse for 30 minutes post-injection.
  • No apparent response of the microglia.
  • Diffusion tracking revealed substantial dead-space domains in the ECS.
    • As compared to patch-clamp loaded SWCNTs
  • Estimated from parallel and perpendicular diffusion rates that the characteristic scale of the ECS is 80 to 270 nm, or 150 ± 40 nm.
  • The ECS nanoscale dimensions visualized by tracking are similar in dimension and tortuosity to those seen with electron microscopy.
  • Viscosity of the extracellular matrix ranges from 1 to 50 mPa·s, up to two orders of magnitude higher than that of the CSF.
  • Positive control: hyaluronidase, given several hours to digest the hyaluronic acid.
    • But no observed changes in the morphology of the neurons via confocal .. interesting.
    • Enzyme digestion normalized the spatial heterogeneity of diffusion.

{796}
ref: work-0 tags: machine learning manifold detection subspace segregation linearization spectral clustering date: 10-29-2009 05:16 gmt revision:5 [4] [3] [2] [1] [0] [head]

An interesting field in ML is nonlinear dimensionality reduction: data may appear to lie in a high-dimensional space, but mostly falls along a lower-dimensional nonlinear subspace or manifold. (Linear subspaces are easily discovered with PCA or SVD(*).) Dimensionality reduction projects high-dimensional data into a low-dimensional space with minimum information loss -> maximal reconstruction accuracy; nonlinear dim reduction does this (surprise!) using nonlinear mappings. These techniques set out to find the manifold(s):

  • Spectral Clustering
  • Locally Linear Embedding
    • related: The manifold ways of perception
      • Would be interesting to run nonlinear dimensionality reduction algorithms on our data! What sort of space does the motor system inhabit? Would it help with prediction? Am quite sure people have looked at Kohonen maps for this purpose.
    • Random irrelevant thought: I haven't been watching TV lately, but when I do, I find it difficult to recognize otherwise recognizable actors. In real life, I have no difficulty recognizing people, even some whom I don't know personally - is this a data thing (little training data), or a mapping thing (not enough time training my TV-not-eyes facial recognition)?
  • A Global Geometric Framework for Nonlinear Dimensionality Reduction (Isomap); the method:
    • map the points into a graph by connecting each point with a certain number of its neighbors or all neighbors within a certain radius.
    • estimate geodesic distances between all points by finding the shortest-path distance through the graph
    • use MDS (multidimensional scaling) to embed the original data into a lower-dimensional Euclidean space while preserving as much of the original geometry as possible.
      • Doesn't look like a terribly fast algorithm!

(*) SVD maps into 'concept space', an interesting interpretation as per Leskovec's lecture presentation.
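The 'concept space' interpretation can be made concrete with a toy example in numpy (the ratings matrix and the sci-fi/romance grouping are made up for illustration, not from the lecture):

```python
import numpy as np

# toy user x movie ratings matrix: first 3 columns "sci-fi" movies,
# last 2 columns "romance" movies (hypothetical data)
A = np.array([
    [5, 5, 4, 0, 0],
    [4, 5, 5, 0, 0],
    [0, 1, 0, 5, 4],
    [0, 0, 0, 4, 5],
], dtype=float)

U, s, Vt = np.linalg.svd(A, full_matrices=False)

# the rows of Vt span 'concept space'; keep the two strongest concepts
k = 2
concepts = Vt[:k]                 # concept x movie
user_concepts = A @ concepts.T    # project each user into concept space
```

Each row of `user_concepts` describes a user by how strongly they load on each latent concept (here, roughly "likes sci-fi" vs. "likes romance") rather than by raw per-movie ratings.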