PMID 27870840: Single-nanotube tracking reveals the nanoscale organization of the extracellular space in the live brain
 Extracellular space (ECS) takes up nearly a quarter of the brain's volume (!!!)
 Used the intrinsic near-infrared fluorescence of single-walled carbon nanotubes (SWCNTs; ~1 um emission, 845 nm excitation), with super-resolution tracking of their diffusion.
 The SWCNTs were coated in phospholipid-polyethylene glycol (PL-PEG), which displays low cytotoxicity compared to other encapsulants.
 5 ul at 3 ug/ml injected into the ventricles of young rats; allowed to diffuse for 30 minutes post-injection.
 No apparent response of the microglia.
 Diffusion tracking revealed substantial dead-space domains in the ECS.
 As compared to patch-clamp-loaded SWCNTs.
 Estimate from parallel and perpendicular diffusion rates that the characteristic scale of the ECS dimension is 80 to 270 nm (150 ± 40 nm).
 The ECS nanoscale dimensions as visualized by tracking are similar in size and tortuosity to those measured by electron microscopy.
 Viscosity of the extracellular matrix ranges from 1 to 50 mPa·s, up to two orders of magnitude higher than that of the CSF.
 Positive control: hyaluronidase, applied for several hours to digest the hyaluronic acid.
 But no observed changes in the morphology of the neurons via confocal imaging... interesting.
 Enzyme digestion normalized the spatial heterogeneity of diffusion.
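 The "characteristic scale from diffusion" idea can be sketched numerically. This is my own toy illustration, not the paper's analysis: assuming a 1D confined-diffusion model, the mean-squared displacement (MSD) of a tracked particle saturates at L²/6 for a box of width L, so the plateau gives an estimate of the confinement length.

```python
import numpy as np

def msd(track):
    """MSD of a 1D trajectory (positions in meters) over lags up to n/2."""
    n = len(track)
    lags = np.arange(1, n // 2)
    return np.array([np.mean((track[lag:] - track[:-lag]) ** 2) for lag in lags])

def confinement_length(track):
    """Estimate box width L from the long-lag MSD plateau (plateau = L^2 / 6)."""
    m = msd(track)
    plateau = m[len(m) // 2:].mean()  # average the tail as the plateau
    return np.sqrt(6.0 * plateau)

# Simulated confined random walk in a 150 nm box (illustrative numbers,
# loosely inspired by the paper's reported ECS scale).
rng = np.random.default_rng(0)
L = 150e-9
x = np.empty(8000)
x[0] = L / 2
for i in range(1, len(x)):
    step = rng.normal(0.0, 10e-9)           # ~10 nm steps
    x[i] = np.clip(x[i - 1] + step, 0, L)   # crude walls: clamp at boundaries

print(f"estimated L ~ {confinement_length(x) * 1e9:.0f} nm")
```

The clamped walls bias the stationary distribution slightly toward the boundaries, so the estimate comes out a bit above the true box width; a reflecting boundary would be cleaner.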

An interesting field in ML is nonlinear dimensionality reduction: data may appear to be in a high-dimensional space, but mostly lies along a nonlinear lower-dimensional subspace or manifold. (Linear subspaces are easily discovered with PCA or SVD(*)). Dimensionality reduction projects high-dimensional data into a low-dimensional space with minimum information loss, i.e. maximal reconstruction accuracy; nonlinear dim reduction does this (surprise!) using nonlinear mappings. These techniques set out to find the manifold(s):
 Spectral Clustering
 Locally Linear Embedding
 related: The Manifold Ways of Perception
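 For reference, the linear case mentioned above (PCA via SVD) fits in a few lines. A minimal sketch on synthetic data (all names and numbers here are illustrative):

```python
import numpy as np

# Synthetic data: 500 points near a 2D plane embedded in 10D.
rng = np.random.default_rng(1)
latent = rng.normal(size=(500, 2))
basis = rng.normal(size=(2, 10))
X = latent @ basis + 0.01 * rng.normal(size=(500, 10))

# PCA via SVD: project centered data onto the top-k right singular vectors.
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
k = 2
Z = Xc @ Vt[:k].T                     # k-dimensional embedding

# Reconstruction is near-exact because the data is nearly planar.
X_hat = Z @ Vt[:k] + X.mean(axis=0)
print(np.abs(X - X_hat).max())
```

A curved manifold (e.g. a spiral) would defeat this; that's exactly the case the nonlinear methods below are for.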
 Would be interesting to run nonlinear dimensionality reduction algorithms on our data! What sort of space does the motor system inhabit? Would it help with prediction? Am quite sure people have looked at Kohonen maps for this purpose.
 Random irrelevant thought: I haven't been watching TV lately, but when I do, I find it difficult to recognize otherwise recognizable actors. In real life, I find no difficulty recognizing people, even some whom I don't know personally. Is this a data thing (little training data), or a mapping thing (not enough time training my TV-not-eyes facial recognition)?
 A Global Geometric Framework for Nonlinear Dimensionality Reduction (Isomap). Method:
 map the points into a graph by connecting each point to a fixed number of its nearest neighbors, or to all neighbors within a given radius.
 estimate geodesic distances between all pairs of points as shortest-path distances through the graph.
 use MDS (multidimensional scaling) to embed the original data into a lower-dimensional Euclidean space while preserving as much of the geodesic geometry as possible.
 Doesn't look like a terribly fast algorithm!
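 The three steps above can be sketched from scratch; a hedged, unoptimized illustration (scipy handles the shortest paths; the toy spiral dataset is my own):

```python
import numpy as np
from scipy.sparse.csgraph import shortest_path
from scipy.spatial.distance import pdist, squareform

def isomap(X, n_neighbors=8, n_components=2):
    n = len(X)
    D = squareform(pdist(X))                       # pairwise Euclidean distances

    # 1. k-NN graph: keep each point's k nearest neighbors (inf = no edge).
    G = np.full((n, n), np.inf)
    for i in range(n):
        nbrs = np.argsort(D[i])[1:n_neighbors + 1]
        G[i, nbrs] = D[i, nbrs]

    # 2. Geodesic distances = shortest paths through the graph.
    geo = shortest_path(G, method="D", directed=False)

    # 3. Classical MDS on the geodesic distance matrix.
    J = np.eye(n) - np.ones((n, n)) / n            # centering matrix
    B = -0.5 * J @ (geo ** 2) @ J                  # double-centered Gram matrix
    evals, evecs = np.linalg.eigh(B)
    idx = np.argsort(evals)[::-1][:n_components]   # top eigenpairs
    return evecs[:, idx] * np.sqrt(evals[idx])

# Toy data: a 1D manifold (planar spiral) embedded in 3D.
t = np.linspace(0, 3 * np.pi, 200)
X = np.c_[t * np.cos(t), t * np.sin(t), np.zeros_like(t)]
Y = isomap(X, n_neighbors=5, n_components=1)       # ~recovers arc length
```

And indeed it isn't fast: all-pairs shortest paths plus an n x n eigendecomposition, so roughly O(n² log n) to O(n³) in the number of points.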
(*) SVD maps into 'concept space', an interesting interpretation as per Leskovec's lecture presentation. 