[0] Loewenstein Y, Seung HS, Operant matching is a generic outcome of synaptic plasticity based on the covariance between reward and neural activity.Proc Natl Acad Sci U S A 103:41, 15224-9 (2006 Oct 10)

ref: -0 tags: computational biology evolution metabolic networks andreas wagner genotype phenotype network date: 06-12-2017 19:35 gmt revision:1 [0] [head]

Evolutionary Plasticity and Innovations in Complex Metabolic Reaction Networks

  • ‘’João F. Matias Rodrigues, Andreas Wagner‘’
  • Our observations suggest that the robustness of the Escherichia coli metabolic network to mutations is typical of networks with the same phenotype.
  • We demonstrate that networks with the same phenotype form large sets that can be traversed through single mutations, and that single mutations of different genotypes with the same phenotype can yield very different novel phenotypes
  • Entirely computational study.
    • Examines what is possible given known metabolic building-blocks.
  • Methodology: collated a list of all metabolic reactions in E. coli (726 reactions, excluding 205 transport reactions) out of 5870 known reactions.
    • Then ran random-walk mutation experiments to see where the genotype + phenotype could move. Each genotype along the walk had to be viable on either a rich (many carbon sources) or minimal (glucose) growth medium.
    • Viability was determined by Flux-balance analysis (FBA).
      • In our work we use a set of biochemical precursors from E. coli [47-49] as the set of required compounds a network needs to synthesize; ‘’’by using linear programming to optimize the flux through a specific objective function’’’ -- in this case, the reaction representing the production of biomass precursors -- we are able to determine whether a specific metabolic network can synthesize the precursors or not.
      • Used COIN-OR and ILOG to optimize the metabolic concentrations (I think?) per given network.
    • This included the ability to synthesize all required precursor biomolecules; see supplementary information.
    • ‘’’“Viable” is highly permissive -- non-zero biomolecule concentration using FBA and linear programming. ‘’’
    • Genomic distance = Hamming distance between binary reaction vectors (1 = enzyme / reaction present, 0 = mutated off), normalized so that 0 = identical genotypes and 1 = completely different genotypes.
  • Between pairs of viable genetic-metabolic networks, only a minority (30-40%) of reactions are essential,
    • a fraction which naturally increases with increasing carbon-source diversity:
    • When they go back and examine networks that can sustain life on any of (up to) 60 carbon sources, and again measure the distance from the original E. coli genome, they find this added robustness does not significantly constrain network architecture.
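The genotype encoding and random-walk procedure can be sketched minimally in Python. The `is_viable` predicate below is a hypothetical stand-in for the paper's FBA check:

```python
import random

def hamming_fraction(g1, g2):
    """Genotype distance: fraction of differing reaction bits
    (0 = identical genotypes, 1 = completely different)."""
    assert len(g1) == len(g2)
    return sum(a != b for a, b in zip(g1, g2)) / len(g1)

def viable_random_walk(genotype, is_viable, steps, seed=0):
    """Flip one reaction on/off per step, keeping only viable genotypes."""
    rng = random.Random(seed)
    g = list(genotype)
    for _ in range(steps):
        i = rng.randrange(len(g))
        g[i] ^= 1                  # toggle one reaction
        if not is_viable(g):
            g[i] ^= 1              # revert lethal mutations
    return g
```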

Summary thoughts: This is a highly interesting study, insofar as the authors show substantial support for their hypotheses that phenotypes can be explored through random-walk non-lethal mutations of the genotype, and this is somewhat invariant to the source of carbon for known biochemical reactions. What gives me pause is the use of linear programming / optimization when setting the relative concentrations of biomolecules, and the permissive criteria for accepting these networks; real life (I would imagine) is far more constrained. Relative and absolute concentrations matter.

Still, the study does reflect some robustness. I suggest that a good control would be to ‘fuzz’ the list of available reactions based on statistical criteria, and see if the results still hold. Then, go back and make the reactions un-biological or less networked, and see if this destroys the measured degrees of robustness.

ref: -0 tags: David Kleinfeld cortical vasculature laser surgery network occlusion flow date: 09-23-2016 06:35 gmt revision:1 [0] [head]

Heller Lecture - Prof. David Kleinfeld

  • Also mentions the use of LIBS + q-switched laser for precisely drilling holes in the skull. Seems to work!
    • Use 20ns delay .. seems like there is still spectral broadening.
    • "Turn neuroscience into an industrial process, not an art form" After doing many surgeries, agreed!
  • Vasodilation & vasoconstriction is very highly regulated; there is not enough blood to go around.
    • Vessels distant from an energetic / stimulated site will (net) constrict.
  • Vascular network is almost entirely closed-loop, and not tree-like at all -- you can occlude one artery, or one capillary, and the network will route around the occlusion.
    • The density of the angio-architecture in the brain is unique in this.
  • Tested micro-occlusions by injecting rose bengal, which releases free radicals on light exposure (532 nm, 0.5 mW), causing coagulation.
  • "Blood flow on the surface arteriole network is insensitive to single occlusions"
  • Penetrating arterioles and venules are largely stubs -- single unbranching vessels, which again renders some immunity to blockage.
  • However! Occlusion of a penetrating arteriole retards flow within a 400 - 600um cylinder (larger than a cortical column!)
  • Occlusion of many penetrating vessels, unsurprisingly, leads to large swaths of dead cortex, "UBOs" in MRI parlance (unidentified bright objects).
  • Death and depolarizing depression can be effectively prevented by excitotoxicity inhibitors -- MK801 in the slides (NMDA blocker, systemically)

ref: -0 tags: hinton convolutional deep networks image recognition 2012 date: 01-11-2014 20:14 gmt revision:0 [head]

ImageNet Classification with Deep Convolutional Networks

ref: Ganguly-2011.05 tags: Carmena 2011 reversible cortical networks learning indirect BMI date: 01-23-2013 18:54 gmt revision:6 [5] [4] [3] [2] [1] [0] [head]

PMID-21499255[0] Reversible large-scale modification of cortical networks during neuroprosthetic control.

  • Split the group of recorded motor neurons into direct (decoded and controls the BMI) and indirect (passive) neurons.
  • Both groups showed changes in neuronal tuning / PD.
    • More PD. Is there no better metric?
  • Monkeys performed manual control before (MC1) and after (MC2) BMI training.
    • The majority of neurons reverted back to original tuning after BC; c.f. [1]
  • Monkeys were trained to rapidly switch between manual and brain control; still showed substantial changes in PD.
  • 'Near' (on same electrode as direct neurons) and 'far' neurons (different electrode) showed similar changes in PD.
    • Modulation Depth in indirect neurons was less in BC than manual control.
  • Prove (pretty well) that motor cortex neuronal spiking can be dissociated from movement.
  • Indirect neurons showed decreased modulation depth (MD) -> perhaps this is to decrease interference with direct neurons.
  • Quote "Studies of operant conditioning of single neurons found that unconditioned adjacent neurons were largely correlated with the conditioned neurons".
    • Well, also: Fetz and Baker showed that you can condition neurons recorded on the same electrode to covary or inversely vary.
  • Contrast with studies of motor learning in different force fields, where there is a dramatic memory trace.
    • Possibly this is from proprioception activating the cerebellum?

Other notes:

  • Scale bars on the waveforms are incorrect for figure 1.
  • Same monkeys as [2]


[0] Ganguly K, Dimitrov DF, Wallis JD, Carmena JM, Reversible large-scale modification of cortical networks during neuroprosthetic control.Nat Neurosci 14:5, 662-7 (2011 May)
[1] Gandolfo F, Li C, Benda BJ, Schioppa CP, Bizzi E, Cortical correlates of learning in monkeys adapting to a new dynamical environment.Proc Natl Acad Sci U S A 97:5, 2259-63 (2000 Feb 29)
[2] Ganguly K, Carmena JM, Emergence of a stable cortical map for neuroprosthetic control.PLoS Biol 7:7, e1000153 (2009 Jul)

hide / edit[0] / print
ref: -0 tags: Hinton google tech talk dropout deep neural networks Boltzmann date: 11-09-2012 18:01 gmt revision:0 [head]


  • Hinton believes in the power of crowds -- he thinks that the brain fits many, many different models to the data, then selects afterward.
    • Random forests, as used in Predator, are an example of this: they average many simple-to-fit, simple-to-run decision trees. (This is apparently what Kinect does.)
  • Talk focuses on dropout, a clever new form of model averaging where only half of the units in the hidden layers are trained for a given example.
    • He is inspired by biological evolution, where sexual reproduction often spontaneously adds or removes genes, hence individual genes or small linked genes must be self-sufficient. This equates to a 'rugged individualism' of units.
    • Likewise, dropout forces neurons to be robust to the loss of co-workers.
    • This is also great for parallelization: each unit or sub-network can be trained independently, on its own core, with little need for communication! Later, the units can be combined via genetic algorithms then re-trained.
  • Hinton then observes that sending a real value p (output of logistic function) with probability 0.5 is the same as sending 0.5 with probability p. Hence, it makes sense to try pure binary neurons, like biological neurons in the brain.
    • Indeed, if you replace the backpropagation with single bit propagation, the resulting neural network is trained more slowly and needs to be bigger, but it generalizes better.
    • Neurons (allegedly) do something very similar to this by poisson spiking. Hinton claims this is the right thing to do (rather than sending real numbers via precise spike timing) if you want to robustly fit models to data.
      • Sending stochastic spikes is a very good way to average over the large number of models fit to incoming data.
      • Yes but this really explains little in neuroscience...
  • Paper referred to in intro: Livnat, Papadimitriou and Feldman, PMID-19073912 and later by the same authors PMID-20080594
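A minimal sketch of the dropout idea. This uses the now-common "inverted" scaling at train time; Hinton's talk describes halving the weights at test time instead, which is equivalent in expectation:

```python
import random

def dropout(acts, p_drop=0.5, train=True, seed=0):
    """Zero each hidden unit with probability p_drop during training,
    scaling survivors so the expected activation matches test time."""
    if not train:
        return list(acts)
    rng = random.Random(seed)
    keep = 1.0 - p_drop
    return [a / keep if rng.random() >= p_drop else 0.0 for a in acts]
```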

ref: Dethier-2011.28 tags: BMI decoder spiking neural network Kalman date: 01-06-2012 00:20 gmt revision:1 [0] [head]

IEEE-5910570 (pdf) Spiking neural network decoder for brain-machine interfaces

  • Gold standard: Kalman filter.
  • Spiking neural network got within 1% of this standard.
  • The 'neuromorphic' approach.
  • Used Nengo, freely available neural simulator.
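For reference, the "gold standard" recursion in its simplest 1-D form. The BMI decoder uses the multivariate matrix version, with kinematics as the state and firing rates as observations; the constants here are purely illustrative:

```python
def kalman_step(x, P, z, A=1.0, H=1.0, Q=0.01, R=0.1):
    """One predict/update cycle of a 1-D Kalman filter.
    x: state estimate, P: its variance, z: new observation;
    A: state transition, H: observation model, Q/R: process/observation noise."""
    # predict
    x_pred = A * x
    P_pred = A * P * A + Q
    # update
    K = P_pred * H / (H * P_pred * H + R)   # Kalman gain
    x_new = x_pred + K * (z - H * x_pred)
    P_new = (1.0 - K * H) * P_pred
    return x_new, P_new
```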


Dethier, J. and Gilja, V. and Nuyujukian, P. and Elassaad, S.A. and Shenoy, K.V. and Boahen, K. Neural Engineering (NER), 2011 5th International IEEE/EMBS Conference on 396 -399 (2011)

ref: Sanchez-2005.06 tags: BMI Sanchez Nicolelis Wessberg recurrent neural network date: 01-01-2012 18:28 gmt revision:2 [1] [0] [head]

IEEE-1439548 (pdf) Interpreting spatial and temporal neural activity through a recurrent neural network brain-machine interface

  • Putting it here for the record.
  • Note they did a sensitivity analysis (via chain rule) of the recurrent neural network used for BMI predictions.
  • Used data (X,Y,Z) from 2 monkeys feeding.
  • Figure 6 is strange, data could be represented better.
  • Also see: IEEE-1300786 (pdf) Ascertaining the importance of neurons to develop better brain-machine interfaces Also by Justin Sanchez.
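The paper's sensitivity analysis propagates chain-rule derivatives through the trained recurrent network; a finite-difference stand-in for the same question (how much each input neuron moves the output) looks like:

```python
def input_sensitivity(f, x, eps=1e-5):
    """Finite-difference proxy for chain-rule sensitivity analysis:
    approximate d f / d x_i for each input i of model f."""
    base = f(x)
    sens = []
    for i in range(len(x)):
        xp = list(x)
        xp[i] += eps                      # perturb one input at a time
        sens.append((f(xp) - base) / eps)
    return sens
```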


Sanchez, J.C. and Erdogmus, D. and Nicolelis, M.A.L. and Wessberg, J. and Principe, J.C. Interpreting spatial and temporal neural activity through a recurrent neural network brain-machine interface Neural Systems and Rehabilitation Engineering, IEEE Transactions on 13 2 213 -219 (2005)

ref: Bassett-2009.07 tags: Weinberger cognitive efficiency beta band neuroimaging EEG task performance optimization network size effort date: 12-28-2011 20:39 gmt revision:1 [0] [head]

PMID-19564605[0] Cognitive fitness of cost-efficient brain functional networks.

  • Idea: smaller, tighter networks are correlated with better task performance
    • working memory task in normal subjects and schizophrenics.
  • Larger networks operate with higher beta frequencies (more effort?) and show less efficient task performance.
  • Not sure about the noisy data, but v. interesting theory!


[0] Bassett DS, Bullmore ET, Meyer-Lindenberg A, Apud JA, Weinberger DR, Coppola R, Cognitive fitness of cost-efficient brain functional networks.Proc Natl Acad Sci U S A 106:28, 11747-52 (2009 Jul 14)

ref: Loewenstein-2006.1 tags: reinforcement learning operant conditioning neural networks theory date: 12-07-2011 03:36 gmt revision:4 [3] [2] [1] [0] [head]

PMID-17008410[0] Operant matching is a generic outcome of synaptic plasticity based on the covariance between reward and neural activity

  • The probability of choosing an alternative in a long sequence of repeated choices is proportional to the total reward derived from that alternative, a phenomenon known as Herrnstein's matching law.
  • We hypothesize that there are forms of synaptic plasticity driven by the covariance between reward and neural activity and prove mathematically that matching (alternative to reward) is a generic outcome of such plasticity
    • models for learning that are based on the covariance between reward and choice are common in economics and are used phenomenologically to explain human behavior.
  • this model can be tested experimentally by making reward contingent not on the choices, but rather directly on neural activity.
  • Maximization is shown to be a generic outcome of synaptic plasticity driven by the sum of the covariances between reward and all past neural activities.
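A toy simulation of the covariance rule -- my own minimal reading of it, not the paper's model: two alternatives, a sigmoidal choice unit, and weight updates proportional to (reward - mean reward) × (activity - mean activity). It reproduces the qualitative prediction that the richer alternative comes to dominate:

```python
import math
import random

def covariance_learning(p_reward, trials=20000, eta=0.01, seed=0):
    """Two-alternative choice trained by a covariance rule:
    dw_i ~ (reward - <reward>) * (activity_i - <activity_i>)."""
    rng = random.Random(seed)
    w = [0.0, 0.0]
    r_bar = 0.5                                   # running mean reward
    for _ in range(trials):
        p0 = 1.0 / (1.0 + math.exp(w[1] - w[0]))  # P(choose alternative 0)
        choice = 0 if rng.random() < p0 else 1
        reward = 1.0 if rng.random() < p_reward[choice] else 0.0
        expect = (p0, 1.0 - p0)                   # expected activity per alternative
        for i in (0, 1):
            act = 1.0 if i == choice else 0.0
            w[i] += eta * (reward - r_bar) * (act - expect[i])
        r_bar += 0.01 * (reward - r_bar)
    return w
```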


ref: -0 tags: backpropagation cascade correlation neural networks date: 12-20-2010 06:28 gmt revision:1 [0] [head]

The Cascade-Correlation Learning Architecture

  • Much better - much more sensible, computationally cheaper, than backprop.
  • Units are added one by one; each is trained to be maximally correlated to the error of the existing, frozen neural network.
  • Uses quickprop to speed up gradient ascent learning.
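The correlation objective that each candidate unit is trained to maximize can be written out directly (a sketch of Fahlman & Lebiere's score S: the summed magnitude of covariance between the candidate's output and each output unit's residual error over the training patterns):

```python
def candidate_score(cand_out, residual_errors):
    """Cascade-correlation candidate objective: summed |covariance| between
    the candidate unit's output and each output's residual error.
    cand_out: candidate output per pattern; residual_errors: per-pattern
    list of residual errors, one per output unit."""
    n = len(cand_out)
    v_bar = sum(cand_out) / n
    n_out = len(residual_errors[0])
    score = 0.0
    for o in range(n_out):
        e_bar = sum(e[o] for e in residual_errors) / n
        score += abs(sum((v - v_bar) * (e[o] - e_bar)
                         for v, e in zip(cand_out, residual_errors)))
    return score
```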

ref: work-0 tags: emergent leabra QT neural networks GUI interface date: 10-21-2009 19:02 gmt revision:4 [3] [2] [1] [0] [head]

I've been reading Computational Explorations in Cognitive Neuroscience, and decided to try the code that comes with / is associated with the book. This used to be called "PDP+", but was re-written, and is now called Emergent. It's a rather large program - links to Qt, GSL, Coin3D, Quarter, Open Dynamics Library, and others. The GUI itself seems obtuse and too heavy; it's not clear why they need to make this so customized / paneled / tabbed. Also, it depends on relatively recent versions of each of these libraries - which made the install on my Debian Lenny system a bit of a chore (kinda like windows).

A really strange thing is that programs are stored in tree lists - woah - a natural folding editor built in! I've never seen a programming language that doesn't rely on simple text files. Not a bad idea, but still foreign to me. (But I guess programs are inherently hierarchical anyway.)

Below, a screenshot of the whole program - note they use a Coin3D window to graph things / interact with the model. The colored boxes in each network layer indicate local activations, and they update as the network is trained. I don't mind this interface, but again it seems a bit too 'heavy' for things that are inherently 2D (like 2D network activations and the output plot). It's good for seeing hierarchies, though, like the network model.

All in all looks like something that could be more easily accomplished with some python (or ocaml), where the language itself is used for customization, and not a GUI. With this approach, you spend more time learning about how networks work, and less time programming GUIs. On the other hand, if you use this program for teaching, the gui is essential for debugging your neural networks, or other people use it a lot, maybe then it is worth it ...

In any case, the book is very good. I've learned about GeneRec, which uses different activation phases to compute local errors for the purposes of error-minimization, as well as the virtues of using both Hebbian and error-based learning (like GeneRec). Specifically, the authors show that error-based learning can be rather 'lazy', purely moving down the error gradient, whereas Hebbian learning can internalize some of the correlational structure of the input space. You can look at this internalization as 'weight constraint' which limits the space that error-based learning has to search. Cool idea! Inhibition also is a constraint - one which constrains the network to be sparse.

To use his/their own words:

... given the explanation above about the network's poor generalization, it should be clear why both Hebbian learning and kWTA (k winner take all) inhibitory competition can improve generalization performance. At the most general level, they constitute additional biases that place important constraints on the learning and the development of representations. More specifically, Hebbian learning constrains the weights to represent the correlational structure of the inputs to a given unit, producing systematic weight patterns (e.g. cleanly separated clusters of strong correlations).

Inhibitory competition helps in two ways. First, it encourages individual units to specialize in representing a subset of items, thus parcelling up the task in a much cleaner and more systematic way than would occur in an otherwise unconstrained network. Second, inhibition greatly restricts the settling dynamics of the network, greatly constraining the number of states the network can settle into, and thus eliminating a large proportion of the attractors that can hijack generalization."
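The kWTA competition described in the excerpt can be sketched as a hard top-k gate. Emergent's actual kWTA computes an inhibition level between the k-th and (k+1)-th most excited unit; this hard version is the limiting case:

```python
def kwta(acts, k):
    """Hard k-winners-take-all: only the k most active units stay on.
    (Ties at the threshold may keep more than k units.)"""
    if k >= len(acts):
        return list(acts)
    thresh = sorted(acts, reverse=True)[k - 1]   # k-th largest activation
    return [a if a >= thresh else 0.0 for a in acts]
```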

ref: work-0 tags: neural networks course date: 09-01-2009 04:24 gmt revision:0 [head]

http://www.willamette.edu/~gorr/classes/cs449/intro.html -- decent resource, good explanation of the equations associated with artificial neural networks.

ref: Oskoei-2008.08 tags: EMG pattern analysis classification neural network date: 04-07-2009 21:10 gmt revision:2 [1] [0] [head]

  • EMG pattern analysis and classification by Neural Network
    • 1989!
    • short, simple paper; showed that 20 patterns can accurately be decoded with a backprop-trained neural network.
  • PMID-18632358 Support vector machine-based classification scheme for myoelectric control applied to upper limb.
    • myoelectric discrimination with SVM running on features in both the time and frequency domain.
    • a surface MES (myoelectric signal) is formed via the superposition of individual action potentials generated by irregular discharges of active motor units in a muscle fiber. Its amplitude, variance, energy, and frequency vary depending on contraction level.
    • Time domain features:
      • Mean absolute value (MAV)
      • root mean square (RMS)
      • waveform length (WL)
      • variance
      • zero crossings (ZC)
      • slope sign changes (SSC)
      • Willison amplitude.
    • Frequency domain features:
      • power spectrum
      • autoregressive coefficients order 2 and 6
      • mean signal frequency
      • median signal frequency
      • good performance with just RMS + AR2 for 50 or 100 ms segments. Used an SVM with an RBF kernel.
      • looks like you can just get away with time-domain metrics!!
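The time-domain features above are one-liners over a raw EMG window; a sketch of four of them (the zero-crossing count here uses a plain sign-change test; published definitions usually add an amplitude threshold to reject noise, exposed here as `zc_thresh`):

```python
def emg_time_features(x, zc_thresh=0.0):
    """Common EMG time-domain features from a 1-D signal window:
    mean absolute value (MAV), root mean square (RMS),
    waveform length (WL), zero crossings (ZC)."""
    n = len(x)
    mav = sum(abs(v) for v in x) / n
    rms = (sum(v * v for v in x) / n) ** 0.5
    wl = sum(abs(x[i + 1] - x[i]) for i in range(n - 1))
    zc = sum(1 for i in range(n - 1)
             if x[i] * x[i + 1] < 0 and abs(x[i + 1] - x[i]) > zc_thresh)
    return {"MAV": mav, "RMS": rms, "WL": wl, "ZC": zc}
```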

ref: -0 tags: alopex machine learning artificial neural networks date: 03-09-2009 22:12 gmt revision:0 [head]

Alopex: A Correlation-Based Learning Algorithm for Feed-Forward and Recurrent Neural Networks (1994)

  • read the abstract! rather than using the gradient error estimate as in backpropagation, it uses the correlation between changes in network weights and changes in the error + Gaussian noise.
    • backpropagation requires calculation of the derivatives of the transfer function from one neuron to the output. This is very non-local information.
    • one alternative is somewhat empirical: compute the derivatives wrt the weights through perturbations.
    • all these algorithms are solutions to the optimization problem: minimize an error measure, E, wrt the network weights.
  • all network weights are updated synchronously.
  • can be used to train both feedforward and recurrent networks.
  • algorithm apparently has a long history, especially in visual research.
  • the algorithm is quite simple! easy to understand.
    • use stochastic weight changes with a annealing schedule.
  • this is pre-pub: tables and figures at the end.
  • looks like it has comparable or faster convergence than backpropagation.
  • not sure how it will scale to problems with hundreds of neurons; though, they looked at an encoding task with 32 outputs.
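A sketch of the flavor of the update (my paraphrase; the 1994 paper's exact sign conventions and annealing schedule differ in detail). Each weight takes a fixed-size ±delta step, repeating its previous step with high probability when that step correlated with an error decrease:

```python
import math
import random

def alopex_step(w, dw_prev, d_err, T=0.1, seed=0):
    """One Alopex-style update: repeat each weight's previous +/-delta step
    with high probability when it correlated with an error decrease
    (d_err < 0), and flip it otherwise; temperature T sets the randomness."""
    rng = random.Random(seed)
    new_w, new_dw = [], []
    for wi, dwi in zip(w, dw_prev):
        c = dwi * d_err                          # < 0 when the last move helped
        p_repeat = 1.0 / (1.0 + math.exp(c / T))
        step = dwi if rng.random() < p_repeat else -dwi
        new_w.append(wi + step)
        new_dw.append(step)
    return new_w, new_dw
```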

ref: Pearlmutter-2009.06 tags: sleep network stability learning memory date: 02-05-2009 19:21 gmt revision:1 [0] [head]

PMID-19191602 A New Hypothesis for Sleep: Tuning for Criticality.

  • Their hypothesis: in the course of learning, the brain's networks move closer to instability, because learning and information storage require operating near instability.
    • That is, a perfectly stable network stores no information: output is the same independent of input; a highly unstable network can potentially store a lot of information, or be a very selective or critical system: output is highly sensitive to input.
  • Sleep serves to restore the stability of the network by exposing it to a variety of inputs, checking for runaway activity, and adjusting accordingly. (inhibition / glia? how?)
  • Say that when sleep is not possible, an emergency mechanism must come into play, namely tiredness, to prevent runaway behavior.
  • (From wikipedia:) a potentially serious side-effect of many antipsychotics is that they tend to lower an individual's seizure threshold. Recall that removal of all dopamine can inhibit REM sleep; it's all somehow consistent, but unclear how maintaining network stability and being able to move are related.
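The stability notion at the heart of the hypothesis is easy to make concrete for a linear recurrent network x(t+1) = Wx(t), which is stable exactly when the spectral radius of W is below 1. A pure-Python power-iteration estimate (assumes a dominant real eigenvalue):

```python
def spectral_radius(W, iters=200):
    """Estimate the largest |eigenvalue| of square matrix W by power
    iteration; a linear network x(t+1) = W x(t) is stable when this < 1."""
    n = len(W)
    v = [1.0] * n
    r = 0.0
    for _ in range(iters):
        u = [sum(W[i][j] * v[j] for j in range(n)) for i in range(n)]
        r = max(abs(x) for x in u)
        if r == 0.0:
            return 0.0
        v = [x / r for x in u]   # renormalize to avoid over/underflow
    return r
```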

ref: bookmark-0 tags: open source cellphone public network date: 11-13-2007 21:28 gmt revision:2 [1] [0] [head]


  • kinda high-level, rather amorphous, but generally in the right direction. The drive is there, the time is coming, but we are not quite there yet..
  • have some designs for wireless repeaters, based on 802.11g mini-pci cards in a SBC, 3 repeaters. total cost about $1000
  • also interesting: http://www.opencellphone.org/index.php?title=Main_Page

ref: bookmark-0 tags: book information_theory machine_learning bayes probability neural_networks mackay date: 0-0-2007 0:0 revision:0 [head]

http://www.inference.phy.cam.ac.uk/mackay/itila/book.html -- free! (but i liked the book, so I bought it :)

ref: bookmark-0 tags: neural_networks machine_learning matlab toolbox supervised_learning PCA perceptron SOM EM date: 0-0-2006 0:0 revision:0 [head]

http://www.ncrg.aston.ac.uk/netlab/index.php n.b. kinda old. (or does that just mean well established?)

ref: bookmark-0 tags: Numenta Bayesian_networks date: 0-0-2006 0:0 revision:0 [head]


  • shared, hierarchical representation reduces memory requirements, training time, and mirrors the structure of the world.
  • belief propagation techniques force the network into a set of mutually consistent beliefs.
  • a belief is a form of spatio-temporal quantization: ignore the unusual.
  • a cause is a persistent or recurring structure in the world - the root of a spatiotemporal pattern. This is a simple but important concept.
    • HTMs marginalize along space and time - they assume time patterns and space patterns, not both at the same time. Temporal parameterization follows spatial parameterization.

ref: bookmark-0 tags: Bayes Baysian_networks probability probabalistic_networks Kalman ICA PCA HMM Dynamic_programming inference learning date: 0-0-2006 0:0 revision:0 [head]

http://www.cs.ubc.ca/~murphyk/Bayes/bnintro.html very, very good! many references, well explained too.

ref: bookmark-0 tags: training neural_networks with kalman filters date: 0-0-2006 0:0 revision:0 [head]

with the extended kalman filter, from '92: http://ftp.ccs.neu.edu/pub/people/rjw/kalman-ijcnn-92.ps

with the unscented kalman filter : http://hardm.ath.cx/pdf/NNTrainingwithUnscentedKalmanFilter.pdf