ref: -0 tags: US employment top 100 bar chart date: 11-12-2018 00:02 gmt revision:1 [0] [head]

After briefly searching the web, I could not find a chart of the top 100 occupations in the US, so I downloaded the data from the US Bureau of Labor Statistics and made this chart:


Surprising how very service-heavy our economy is.

ref: -0 tags: cutting plane manifold learning classification date: 10-31-2018 23:49 gmt revision:0 [head]

Learning data manifolds with a Cutting Plane method

  • Looks approximately like SVM: perform binary classification on a high-dimensional manifold (or sets of manifolds in this case).
  • The general idea behind Mcp_simple is to start with a finite number of training examples, find the maximum margin solution for that training set, augment the training set by finding a point on the manifolds that violates the constraints, and iterate the process until a tolerance criterion is met.
  • The more complicated cutting plane SVM uses slack variables to allow solutions where the classes are not linearly separable.
    • Propose using one slack variable per manifold, plus a manifold center, which strictly obeys the margin (classification) constraint.
  • Much effort is put into proving the convergence properties of these algorithms; admittedly, I couldn't be bothered to read...
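The Mcp_simple loop described above can be sketched in a few lines. This is a toy sketch, not the paper's algorithm: the max-margin QP solver is replaced with crude hinge-loss subgradient descent, and each manifold is represented as a dense array of sample points.

```python
import numpy as np

def fit_max_margin(X, y, epochs=200, lr=0.1, lam=1e-3):
    # crude max-margin fit via hinge-loss subgradient descent (stand-in for a QP solver)
    w = np.zeros(X.shape[1]); b = 0.0
    for _ in range(epochs):
        margins = y * (X @ w + b)
        viol = margins < 1                          # points violating the margin
        w -= lr * (lam * w - (y[viol, None] * X[viol]).sum(0) / len(X))
        b += lr * y[viol].sum() / len(X)
    return w, b

def cutting_plane_manifold_svm(manifolds, labels, tol=0.1, max_iter=20):
    # manifolds: list of (n_samples, dim) arrays, each a dense sampling of one manifold
    X = np.vstack([m[:1] for m in manifolds])       # start with one point per manifold
    y = np.array(labels, float)
    for _ in range(max_iter):
        w, b = fit_max_margin(X, y)
        worst, new_pt, new_lab = -np.inf, None, None
        for m, lab in zip(manifolds, labels):       # find worst constraint violation
            margins = lab * (m @ w + b)
            i = margins.argmin()
            if 1 - margins[i] > worst:
                worst, new_pt, new_lab = 1 - margins[i], m[i], lab
        if worst < tol:                             # all manifold points within tolerance
            break
        X = np.vstack([X, new_pt])                  # augment the training set
        y = np.append(y, new_lab)
    return w, b
```

On two separable 1-D manifolds embedded in 2-D (e.g. wiggly vertical curves around x = +2 and x = -2), the loop terminates with a separating hyperplane after adding only the few points that actually bind the margin.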

ref: -0 tags: variational free energy inference learning bayes curiosity insight date: 10-31-2018 22:33 gmt revision:0 [head]

Active inference, curiosity and insight.

  • This has been my intuition for a while; you can learn abstract rules via active probing of the environment. This paper supports such intuitions with extensive scholarship.
  • “The basic theme of this article is that one can cast learning, inference, and decision making as processes that resolve uncertainty about the world.”
    • References Schmidhuber 1991
  • “A learner should choose a policy that also maximizes the learner’s predictive power. This makes the world both interesting and exploitable.” (Still and Precup 2012)
  • “Our approach rests on the free energy principle, which asserts that any sentient creature must minimize the entropy of its sensory exchanges with the world.” OK, that might be generalizing things too far.
  • Levels of uncertainty:
    • Perceptual inference, the causes of sensory outcomes under a particular policy
    • Uncertainty about policies or about future states of the world, outcomes, and the probabilistic contingencies that bind them.
  • For the last element (probabilistic contingencies between the world and outcomes), they employ Bayesian model selection / Bayesian model reduction
    • This can occur not only on the data, but also purely on the initial model itself, without new data.
    • “We use simulations of abstract rule learning to show that context-sensitive contingencies, which are manifest in a high-dimensional space of latent or hidden states, can be learned with straightforward variational principles (i.e. minimization of free energy).”
  • Assume that initial states and state transitions are known.
  • Perception or inference about hidden states (i.e. state estimation) corresponds to inverting a generative model given a sequence of outcomes, while learning involves updating the parameters of the model.
  • The actual task is quite simple: central fixation leads to a color cue. The cue plus a peripheral color then determines which way to saccade.
  • Gestalt: Good intuitions, but I’m left with the impression that the authors overexplain and / or make the description more complicated than it need be.
    • The actual number of parameters to be inferred is rather small -- 3 states in 4 (?) dimensions -- and these parameters are not hard to learn by minimizing the variational free energy:
    • F = D_{KL}[Q(x) || P(x)] - E_Q[ln P(o_t | x)], where D_{KL} is the Kullback-Leibler divergence.
      • Mean field approximation: Q(x) is fully factored (not here).
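The free energy decomposition above (complexity minus accuracy) is easy to check numerically for a small discrete model; the distributions below are made up purely for illustration.

```python
import numpy as np

# toy discrete hidden-state model: 3 hidden states x, one observed outcome o_t
Q = np.array([0.7, 0.2, 0.1])        # approximate posterior Q(x)
P = np.array([0.5, 0.3, 0.2])        # prior P(x)
lik = np.array([0.8, 0.4, 0.1])      # likelihood P(o_t | x) for the observed o_t

kl = np.sum(Q * np.log(Q / P))               # D_KL[Q(x) || P(x)] -- complexity
expected_ll = np.sum(Q * np.log(lik))        # E_Q[ln P(o_t | x)] -- accuracy
F = kl - expected_ll                         # variational free energy

# F upper-bounds surprise: F >= -ln P(o_t) = -ln sum_x P(o_t|x) P(x),
# with equality when Q(x) equals the true posterior
surprise = -np.log(np.sum(lik * P))
assert F >= surprise
```

Minimizing F with respect to Q therefore pushes Q toward the true posterior (inference); minimizing it with respect to model parameters is learning.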

ref: -0 tags: hahnloser zebrafinch LMAN HVC song learning internal model date: 10-12-2018 00:33 gmt revision:1 [0] [head]

PMID-24711417 Evidence for a causal inverse model in an avian cortico-basal ganglia circuit

  • Recorded and stimulated the LMAN (upstream, modulatory) region of the zebra finch song-production & learning pathway.
  • Found evidence, albeit weak, for a mirror arrangement or 'causal inverse' there: neurons fire bursts prior to syllable production with a motor delay of ~30 ms, and also fire single spikes with a ~10 ms delay relative to the same syllables.
    • This leads to an overall 'mirroring offset' of about 40 ms, which is sufficiently supported by the data.
    • The mirroring offset is quantified by looking at the cross-covariance of audio-synchronized motor and sensory firing rates.
  • Causal inverse: a sensory target input generates a motor activity pattern required to cause, or generate that same sensory target.
    • Similar to the idea of temporal inversion via memory.
  • Data is interesting, but not super strong; per the discussion, the authors were going for a much broader theory:
    • Normal Hebbian learning says that if a presynaptic neuron fires before a postsynaptic neuron, then the synapse is potentiated.
    • However, there is another side of the coin: if the presynaptic neuron fires after the postsynaptic neuron, the synapse can be similarly strengthened, permitting the learning of inverse models.
      • "This order allows sensory feedback arriving at motor neurons to be associated with past postsynaptic patterns of motor activity that could have caused this sensory feedback. " So: stimulate the sensory neuron (here hypothetically in LMAN) to get motor output; motor output is indexed in the sensory space.
      • In mammals, a similar rule has been found to describe synaptic connections from the cortex to the basal ganglia [37].
      • ... or, based on anatomy, a causal inverse could be connected to a dopaminergic VTA, thereby linking with reinforcement learning theories.
      • Simple reinforcement learning strategies can be enhanced with inverse models as a means to solve the structural credit assignment problem [49].
  • Need to review literature here, see how well these theories of cortical-> BG synapse match the data.
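The mirroring-offset measurement (the lag at which the cross-covariance of motor-related and sensory-evoked firing rates peaks) can be sketched on synthetic rate traces. The burst shape and the 40 ms delay below are invented for illustration, not taken from the paper's data.

```python
import numpy as np

def mirroring_offset(motor, sensory, dt):
    # lag (in seconds) at which the cross-covariance of the two rate traces peaks
    m = motor - motor.mean()
    s = sensory - sensory.mean()
    xcov = np.correlate(s, m, mode='full')        # index len-1 corresponds to zero lag
    lags = np.arange(-len(m) + 1, len(m)) * dt
    return lags[xcov.argmax()]

# synthetic check: the "sensory" trace is the "motor" trace delayed by 40 ms
dt = 0.001
t = np.arange(0, 2, dt)
motor = np.exp(-((t % 0.5) - 0.1) ** 2 / 0.001)   # bursty, song-locked firing rate
sensory = np.roll(motor, 40)                       # 40 samples = 40 ms delay
offset = mirroring_offset(motor, sensory, dt)      # recovers ~0.04 s
```

A positive peak lag means sensory activity trails motor activity, which is the signature the authors quantify.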

ref: -0 tags: deeplabcut markerless tracking DCN transfer learning date: 10-03-2018 23:56 gmt revision:0 [head]

Markerless tracking of user-defined features with deep learning

  • Human-level tracking with as few as 200 labeled frames.
  • No dynamics - could be even better with a Kalman filter.
  • Uses a Google-trained DCN, 50 or 101 layers deep.
    • Network has a distinct read-out layer per feature to localize the probability of a body part to a pixel location.
  • Uses the DeeperCut network architecture / algorithm for pose estimation.
  • These deep features were trained on ImageNet
  • Trained on examples with either only the readout layers (the rest fixed per ResNet) or end-to-end; the latter performs better, unsurprisingly.
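The readout-only vs end-to-end comparison can be illustrated with a toy two-layer network: a frozen, randomly "pretrained" backbone layer with a trainable readout, versus training both layers. This is a numpy stand-in, not the actual ResNet / DeeperCut setup.

```python
import numpy as np
rng = np.random.default_rng(0)

X = rng.normal(size=(256, 10))
y = np.tanh(X @ rng.normal(size=10))           # nonlinear regression target

W1 = rng.normal(size=(10, 32)) / np.sqrt(10)   # "pretrained" backbone weights
w2 = np.zeros(32)                              # readout layer, trained from scratch

def forward(X, W1, w2):
    h = np.tanh(X @ W1)
    return h @ w2, h

def train(end_to_end, steps=500, lr=0.05):
    W1_, w2_ = W1.copy(), w2.copy()
    for _ in range(steps):
        pred, h = forward(X, W1_, w2_)
        err = pred - y
        w2_ -= lr * h.T @ err / len(X)         # always train the readout
        if end_to_end:                         # optionally backprop into the backbone
            dh = np.outer(err, w2_) * (1 - h ** 2)
            W1_ -= lr * X.T @ dh / len(X)
    pred, _ = forward(X, W1_, w2_)
    return np.mean((pred - y) ** 2)

mse_readout, mse_e2e = train(False), train(True)
```

With features frozen, the readout can only recombine whatever the backbone already represents; letting gradients reach the backbone adapts the features to the task, so end-to-end reaches lower training error.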

ref: -0 tags: NMDA spike hebbian learning states pyramidal cell dendrites date: 10-03-2018 01:15 gmt revision:0 [head]

PMID-20544831 The decade of the dendritic NMDA spike.

  • NMDA spikes occur in the finer basal, oblique, and tuft dendrites.
  • Typically 40-50 mV, up to 100's of ms in duration.
  • Look similar to cortical up-down states.
  • Permit / form the substrate for spatially and temporally local computation on the dendrites that can enhance the representational or computational repertoire of individual neurons.

ref: -0 tags: kernel regression structure discovery fitting gaussian process date: 09-24-2018 22:09 gmt revision:1 [0] [head]

Structure discovery in Nonparametric Regression through Compositional Kernel Search

  • Use Gaussian process kernels (squared exponential, periodic, linear, and rational quadratic)
  • to model a kernel function k(x, x′), which specifies how similar or correlated outputs y and y′ are expected to be at two points x and x′.
    • By defining the measure of similarity between inputs, the kernel determines the pattern of inductive generalization.
    • This is different than modeling the mapping y=f(x) .
    • It's something more like y ~ N(m(x), k(x, x′)) -- check the appendix.
    • See also: http://rsta.royalsocietypublishing.org/content/371/1984/20110550
  • Gaussian process models use a kernel to define the covariance between any two function values: Cov(y, y′) = k(x, x′).
  • This kernel family is closed under addition and multiplication, and provides an interpretable structure.
  • Search for kernel structure greedily & compositionally,
    • then optimize parameters with conjugate gradients with restarts.
    • This seems straightforwardly intuitive...
  • Kernels are scored with the BIC.
  • C.f. {842} -- "Because we learn expressions describing the covariance structure rather than the functions themselves, we are able to capture structure which does not have a simple parametric form."
  • All their figure examples are 1-D time-series, which is kinda boring, but makes sense for creating figures.
    • Tested on multidimensional (d=4) synthetic data too.
    • Not sure how they back out modeling the covariance into actual predictions -- just draw (integrate) from the distribution?
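A minimal sketch of the greedy compositional search: base kernels with fixed hyperparameters (the paper instead optimizes them by conjugate gradients with restarts), composed by + and ×, scored by BIC on the GP log marginal likelihood. The hyperparameter values and the tiny search depth are illustrative assumptions.

```python
import numpy as np
from itertools import product

# base kernels with fixed, arbitrary hyperparameters
SE  = lambda a, b: np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2)
LIN = lambda a, b: a[:, None] * b[None, :]
PER = lambda a, b: np.exp(-2 * np.sin(np.pi * (a[:, None] - b[None, :])) ** 2)
BASE = {'SE': SE, 'LIN': LIN, 'PER': PER}

def log_marginal(k, x, y, noise=0.1):
    # GP log marginal likelihood log p(y | x, kernel)
    K = k(x, x) + noise ** 2 * np.eye(len(x))
    L = np.linalg.cholesky(K)
    a = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return -0.5 * y @ a - np.log(np.diag(L)).sum() - 0.5 * len(x) * np.log(2 * np.pi)

def bic(k, n_params, x, y):
    return n_params * np.log(len(x)) - 2 * log_marginal(k, x, y)

def greedy_kernel_search(x, y, depth=2):
    # greedily grow the kernel expression, keeping the BIC-best at each level
    best_name, best_k, best_n = min(
        ((nm, k, 1) for nm, k in BASE.items()),
        key=lambda t: bic(t[1], t[2], x, y))
    for _ in range(depth - 1):
        cands = [(best_name, best_k, best_n)]      # keeping the current best is allowed
        for (nm, k), op in product(BASE.items(), '+*'):
            if op == '+':
                cand = lambda a, b, k1=best_k, k2=k: k1(a, b) + k2(a, b)
            else:
                cand = lambda a, b, k1=best_k, k2=k: k1(a, b) * k2(a, b)
            cands.append((f'({best_name}{op}{nm})', cand, best_n + 1))
        best_name, best_k, best_n = min(cands, key=lambda t: bic(t[1], t[2], x, y))
    return best_name
```

On data with a clear linear trend, the search should select an expression containing the linear kernel, since the BIC rewards the structure that explains the covariance with few parameters.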

ref: work-0 tags: distilling free-form natural laws from experimental data Schmidt Cornell automatic programming genetic algorithms date: 09-14-2018 01:34 gmt revision:5 [4] [3] [2] [1] [0] [head]

Distilling free-form natural laws from experimental data

  • Their critical step was to use partial derivatives to evaluate the search for invariants. Even so, with a 4D data set the search for natural laws took ~30 hours.
    • Then again, how long did it take humans to figure out these invariants? (Went about it in a decidedly different way..)
    • Further, how long did it take for biology to discover similar invariants?
      • They claim elsewhere that the same algorithm has been applied to biological data - a metabolic pathway - with some success.
      • Of course, evolution had to explore a much larger space -- proteins and regulatory pathways, not simpler mathematical expressions / linkages.

ref: -0 tags: coevolution fitness prediction schmidt genetic algorithm date: 09-14-2018 01:34 gmt revision:8 [7] [6] [5] [4] [3] [2] [head]

Coevolution of Fitness Predictors

  • Michael D. Schmidt and Hod Lipson.
  • Fitness prediction is a technique to replace fitness evaluation in evolutionary algorithms with a light-weight approximation that adapts with the solution population.
    • Cannot approximate the full landscape, but shift focus during evolution.
    • Aka local caching.
    • Or adversarial techniques.
  • Instead use coevolution, with three populations:
    • 1) solutions to the original problem, evaluated using only fitness predictors;
    • 2) fitness predictors of the problem; and
    • 3) fitness trainers, whose exact fitness is used to train predictors.
      • Trainers are selected as high-variance solutions across the predictors, and predictors are trained on this subset.
  • Lightweight fitness predictors evolve faster than the solution population, so they cap the computational effort spent on them at 5% of total effort.
    • These fitness predictors are basically an array of integers which index the full training set -- very simple and linear. Maybe boring, but the simplest solution that works ...
    • They only sample 8 training examples for even complex 30-node solution functions (!!).
    • I guess, because the information introduced into the solution set is relatively small per generation, it makes little sense to over-sample or over-specify this; all that matters is that, on average, it's directionally correct and unbiased.
  • Used deterministic crowding selection as the evolutionary algorithm.
    • Similar individuals have to compete in tournaments for space.
  • Showed that the coevolution algorithm is capable of inferring even highly complex many-term functions.
    • And, it uses function evaluations more efficiently than the 'exact' (each solution evaluated exactly) algorithm.
  • Coevolution algorithm seems to induce less 'bloat' in the complexity of the solutions.
  • See also {842}
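A toy sketch of the three-population scheme: polynomial coefficients stand in for the paper's symbolic solutions, while predictors are, as in the paper, small arrays of indices into the full training set. Population sizes, mutation scale, and schedule below are invented for illustration.

```python
import numpy as np
rng = np.random.default_rng(1)

# toy stand-in for symbolic regression: evolve polynomial coefficients to fit data
X = np.linspace(-1, 1, 200)
Y = 1 + 2 * X - 3 * X ** 2                     # hidden target function

def exact_fitness(sol):                        # full-data evaluation (expensive in the paper)
    return -np.mean((np.polyval(sol, X) - Y) ** 2)

def predicted_fitness(sol, pred):              # predictor = small array of training indices
    return -np.mean((np.polyval(sol, X[pred]) - Y[pred]) ** 2)

sols  = rng.normal(size=(32, 3))               # population 1: candidate solutions
preds = rng.integers(0, len(X), (8, 8))        # population 2: fitness predictors

for gen in range(200):
    # evolve solutions, evaluated only through the current best predictor
    fit = np.array([predicted_fitness(s, preds[0]) for s in sols])
    parents = sols[np.argsort(fit)[-16:]]
    sols = np.vstack([parents, parents + rng.normal(0, 0.1, parents.shape)])
    if gen % 10 == 0:
        # population 3: trainers = solutions the predictors disagree on most
        disagree = np.array([[predicted_fitness(s, p) for p in preds] for s in sols])
        trainers = sols[disagree.var(axis=1).argsort()[-4:]]
        # evolve predictors to match exact fitness on the trainers (rare exact evals)
        target = np.array([exact_fitness(t) for t in trainers])
        err = [np.mean((np.array([predicted_fitness(t, p) for t in trainers]) - target) ** 2)
               for p in preds]
        keep = preds[np.argsort(err)[:4]]
        preds = np.vstack([keep, rng.integers(0, len(X), keep.shape)])

best = sols[np.argmax([exact_fitness(s) for s in sols])]
```

The point of the construction is visible even at this scale: almost all selection happens through 8-sample predictors, while exact (full-data) evaluations are confined to a handful of trainers every few generations.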

ref: -2018 tags: machine learning manifold deep neural net geometry regularization date: 08-29-2018 14:30 gmt revision:0 [head]

LDMNet: Low dimensional manifold regularized neural nets.

  • Synopsis of the math:
    • Fit a manifold formed from the concatenated input and output variables, and use this to set the loss of (hence, train) a deep convolutional neural network.
      • Manifold is fit via point integral method.
      • This requires both SGD and variational steps -- alternate between fitting the parameters, and fitting the manifold.
      • Uses a standard deep neural network.
    • Measure the dimensionality of this manifold to regularize the network, using an 'elegant trick', whatever that means.
  • Still, the results, in terms of error, seem not significantly better than previous work (compared to weight decay, which is weak sauce, and dropout).
    • That said, the results in terms of feature projection, figures 1 and 2, do look clearly better.
    • Of course, they apply the regularizer to the same image recognition / classification problems (MNIST); it might well be better adapted to something else.
  • Not completely thorough analysis, perhaps due to space and deadlines.

ref: -0 tags: tissue probe neural insertion force damage wound speed date: 06-02-2018 00:03 gmt revision:0 [head]

PMID-21896383 Effect of Insertion Speed on Tissue Response and Insertion Mechanics of a Chronically Implanted Silicon-Based Neural Probe

  • Two speeds, 10um/sec and 100um/sec, monitored out to 6 weeks.
  • Once the probes were fully advanced into the brain, we observed a decline in the compression force over time.
    • However, the compression force never decreased to zero.
    • This may indicate that chronically implanted probes experience a constant compression force when inserted in the brain, which may push the probe out of the brain over time if there is nothing to keep it in a fixed position.
      • Yet ... the Utah probe seems fine, up to many months in humans.
    • This may be a drawback for flexible probes [24], [25]. The approach to reduce tissue damage by reducing micromotion by not tethering the probe to the skull can also have this disadvantage [26]. Furthermore, the upward movement may lead to the inability of the contacts to record signals from the same neurons over long periods of time.
  • We did not observe a difference in initial insertion force, amount of dimpling, or the rest force after a 3-min rest period, but the force at the end of the insertion was significantly higher when inserting at 100 μm/s compared to 10 μm/s.
  • No significant difference in histological response observed between the two speeds.

ref: -0 tags: insertion speed needle neural electrodes force damage injury cassanova date: 06-01-2018 23:51 gmt revision:0 [head]

Effect of Needle Insertion Speed on Tissue Injury, Stress, and Backflow Distribution for Convection-Enhanced Delivery in the Rat Brain

  • Tissue damage, evaluated as the size of the hole left by the needle after retraction, bleeding, and tissue fracturing, was found to increase for increasing insertion speeds and was higher within white matter regions.
    • A statistically significant difference in hole areas with respect to insertion speed was found.
  • While there are no previous needle insertion speed studies with which to directly compare, previous electrode insertion studies have noted greater brain surface dimpling and insertion forces with increasing insertion speed [43–45]. These higher deformation and force measures may indicate greater brain tissue damage which is in agreement with the present study.
  • There are also studies which have found that fast insertion of sharp tip electrodes produced less blood vessel rupture and bleeding [28,29].
    • These differences in rate dependent damage may be due to differences in tip geometry (diameter and tip) or tissue region, since these electrode studies focus mainly on the cortex [28,29].
    • In the present study, hole measurements were small in the cortex, and no substantial bleeding was observed in the cortex except when it was produced during dura mater removal.
    • Any hemorrhage was observed primarily in white matter regions of the external capsule and the CPu.

ref: -0 tags: insertion speed neural electrodes force damage date: 06-01-2018 23:38 gmt revision:2 [1] [0] [head]

In vivo evaluation of needle force and friction stress during insertion at varying insertion speed into the brain

  • Targeted at CED procedures, but probably applicable elsewhere.
  • Used a blunted 32ga CA glue filled hypodermic needle.
  • Sprague-dawley rats.
  • Increased insertion speed corresponds with increased force, unlike cardiac tissue.
  • Greater surface dimpling before failure results in larger regions of deformed tissue and more energy storage before needle penetration.
  • In this study (blunt needle) dimpling increased with insertion speed, indicating that more energy was transferred over a larger region and increasing the potential for injury.
  • However, friction stresses likely decrease with insertion speed since larger tissue holes were measured with increasing insertion speeds indicating lower frictional stresses.
    • Rapid deformation results in greater pressurization of fluid filled spaces if fluid does not have time to redistribute, making the tissue effectively stiffer. This may occur in compacted tissues below or surrounding the needle and result in increasing needle forces with increasing needle speed.

ref: -2015 tags: ice charles lieber silicon nanowire probes su-8 microwire extracellular date: 05-30-2018 23:40 gmt revision:3 [2] [1] [0] [head]

PMID-26436341 Three-dimensional macroporous nanoelectronic networks as minimally invasive brain probes.

  • Xie C1, Liu J1, Fu TM1, Dai X1, Zhou W1, Lieber CM1,2.
  • Again, use silicon nanowire transistors as sensing elements. These seem rather good; can increase the signal, and do not suffer from shunt resistance / capacitance like wires.
    • They're getting a lot of mileage out of the technology; initial pub back in 2006.
  • Su-8, Cr/Pd/Cr (stress elements) and Cr/Au/Cr (conductor) spontaneously rolled into a ball, which they then froze in LN2; devices seemed robust to the freezing.
  • 300-500nm Su-8 passivation layers, as with the syringe injectable electrodes.
  • 3um trace / 7um insulation (better than us!)
  • Used 100nm Ni release layer; thin / stiff enough Su-8 with rigid Si support chip permitted wirebonding a connector (!!)
    • Might want to use this as well for our electrodes -- of course, then we'd have to use the dicing saw, and free-etch away a Ni (or Al?) polyimide adhesion layer -- or use Su-8 like them. See figure S-4
  • See also {1352}

ref: -0 tags: tissue response indwelling implants dialysis kozai date: 04-04-2018 00:28 gmt revision:1 [0] [head]

PMID-25546652 Brain Tissue Responses to Neural Implants Impact Signal Sensitivity and Intervention Strategies

  • (Interesting): eight identical electrode arrays implanted into the same region of different animals have shown that half the arrays continue to record neural signals for >14 weeks while in the other half of the arrays, single-unit yield rapidly degraded and ultimately failed over the same timescale.
  • In another study, aimed at uncovering the time course of insertion-related bleeding and coagulation, electrodes were implanted into the cortex of rats at varying time intervals (−120, −90, −60, −30, −15, and 0 min) using a micromanipulator and linear motor with an insertion speed of 2 mm/s [40]. The results showed dramatic variability in BBB leakage that washed out any trend (Figure 3), suggesting that a separate underlying cause was responsible for the large inter- and intra-animal variability.

ref: -0 tags: recurrent cortical model adaptation gain V1 LTD date: 03-27-2018 17:48 gmt revision:1 [0] [head]

PMID-18336081 Adaptive integration in the visual cortex by depressing recurrent cortical circuits.

  • Mainly focused on the experimental observation that decreasing contrast increases latency to both behavioral and neural response (latter in the later visual areas..)
  • Idea is that synaptic depression in recurrent cortical connections mediates this 'adaptive integration' time-constant to maintain reliability.
  • Model also explains persistent activity after a flashed stimulus.
  • No plasticity or learning, though.
  • Rather elegant and well explained.

ref: -2016 tags: somatostatin interneurons review date: 02-11-2018 18:08 gmt revision:0 [head]

PMID-27225074 Somatostatin-expressing neurons in cortical networks.

  • Urban-Ciecko J1, Barth AL1.
  • High (~10 Hz) tonic (constitutive) firing rate. All are GABAergic.
  • Somatostatin, a neuropeptide, is of ill-defined role. Unknown when it is released.
  • SST interneurons receive diffuse input from cortical pyramidal cells, but each synapse is of low strength.
  • SST interneurons are frequently electrically connected through gap junctions, but almost never through chemical synapses. The resulting network can extend for hundreds of microns, and has been shown to cause synchronized firing when cells are active.
  • Common anesthetics (isoflurane, urethane) profoundly silence the SSTs.
  • Wide diversity of axonal and dendritic branching patterns, targeting both apical (20%) and distal pyramidal cell dendrites.
  • SST neuron activity is reduced in Dravet syndrome.
  • SST neurons have also been implicated in schizophrenia; affected individuals show decreased SST mRNA and mislocalization of SST interneurons.

ref: -0 tags: NET probes SU-8 microfabrication sewing machine carbon fiber electrode insertion mice histology 2p date: 12-29-2017 04:38 gmt revision:1 [0] [head]

PMID-28246640 Ultraflexible nanoelectronic probes form reliable, glial scar–free neural integration

  • SU-8 asymptotic H2O absorption is 3.3% in PBS -- quite a bit higher than I expected, and higher than PI.
  • Faced yield problems with contact litho at 2-3um trace/space.
  • Good recordings out to 4 months!
  • 3 minutes / probe insertion.
  • Fab:
    • Ni release layer, Su-8 2000.5. "excellent tensile strength" --
      • Tensile strength 60 MPa
      • Youngs modulus 2.0 GPa
      • Elongation at break 6.5%
      • Water absorption, per spec sheet, 0.65% (but not PBS)
    • 500nm dielectric; < 1% crosstalk; see figure S12.
    • Pt or Au rec sites, 10um x 20um or 30 x 30um.
    • FFC connector, with Si substrate remaining.
  • Used transgenic mice, YFP expressed in neurons.
  • CA glue used before metabond, followed by Kwik-sil silicone.
  • Neuron yield not so great -- they need to plate the electrodes down to acceptable impedance. (figure S5)
    • Measured impedance ~ 1M at 1khz.
  • Unclear if 50um x 1um is really that much worse than 10um x 1.5um.
  • Histology looks really great (figure S10).
  • Manuscript did not mention (though they did at the poster) problems with electrode pull-out; they deal with it in the same way, application of ACSF.

ref: Kim-2008.01 tags: PEDOT review soft date: 12-29-2017 04:34 gmt revision:4 [3] [2] [1] [0] [head]

PMID-21204405 Soft, Fuzzy, and Bioactive Conducting Polymers for Improving the Chronic Performance of Neural Prosthetic Devices.

  • lays out the soft electrode approach (obviously).
  • Extensive discussion of conductive polymer plating methods for neural electrodes.


[0] Kim DH, Richardson-Burns S, Povlich L, Abidian MR, Spanninga S, Hendricks JL, Martin DC, "Soft, Fuzzy, and Bioactive Conducting Polymers for Improving the Chronic Performance of Neural Prosthetic Devices" (2008).