m8ta

{1455}
ref: -0 tags: credit assignment distributed feedback alignment penn state MNIST fashion backprop date: 03-16-2019 02:21 gmt revision:1 [0] [head]

Conducting credit assignment by aligning local distributed representations

  • Alexander G. Ororbia, Ankur Mali, Daniel Kifer, C. Lee Giles
  • Propose two related algorithms: Local Representation Alignment (LRA)-diff and LRA-fdbk.
    • LRA-diff is basically a modified form of backprop.
    • LRA-fdbk is a modified version of feedback alignment. {1432} {1423}
  • Test on MNIST (easy -- many digits can be discriminated with one pixel!) and fashion-MNIST (harder -- humans only get about 85% right!)
  • Use a Cauchy or log-penalty loss at each layer, which is somewhat unique and interesting: L(z,y) = \sum_{i=1}^n \log(1 + (y_i - z_i)^2) .
    • This is hence a saturating loss.
  1. Normal multi-layer-perceptron feedforward network. Pre-activation h^\ell and post-activation z^\ell are stored.
  2. Update the weights to minimize loss. This gradient calculation is identical to backprop, only they constrain the update to have a norm no bigger than c_1 . Z and Y are actual and desired output of the layer, as commented. Gradient includes the derivative of the nonlinear activation function.
  3. Generate an update for the pre-nonlinearity h^{\ell-1} to minimize the loss in the layer above. This again is very similar to backprop; it's the chain rule -- but the derivatives are vectors, of course, so those should be element-wise multiplications, not outer products (I think).
    1. Note h is updated -- derivatives of two nonlinearities.
  4. Feedback-alignment version, with random matrix E_\ell (elements drawn from a Gaussian distribution, \sigma = 1 ish); a rough sketch of this step is at the end of this list.
    1. Only one nonlinearity derivative here -- bug?
  5. Move the rep and post activations in the specified gradient direction.
    1. The \bar{h}^{\ell-1} variables are temporary holding values -- but note that both lower and higher layers are updated.
  6. Do this K times, K = 1-50.
  • In practice K=1, with the LRA-fdbk algorithm, for the majority of the paper -- it works much better than LRA-diff (interesting .. bug?). Hence, this basically reduces to feedback alignment.
  • Demonstrate that LRA works much better with small initial weights, but basically because they tweak the algorithm to do this.
    • Need to see a positive control for this to be conclusive.
    • Again, why is FA so different from LRA-fdbk? Suspicious. Positive controls.
  • Attempted a network with Local Winner Take All (LWTA), which is a hard nonlinearity that LRA was able to account for & train through.
  • Also used Bernoulli neurons, and were able to successfully train. Unlike drop-out, these were stochastic at test time, and things still worked OK.
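
A rough sketch of how I read the LRA-fdbk step (K = 1, one hidden layer), in Matlab/Octave. The layer sizes, learning rate, and norm clip c_1 are placeholder values of mine, not the paper's; E is the fixed random feedback matrix from point 4 above.

% Sketch of one LRA-fdbk pass on a 2-layer MLP (my reading, not the authors' code; biases omitted).
phi  = @(x) tanh(x);                          % activation
dphi = @(x) 1 - tanh(x).^2;                   % its derivative
dLc  = @(z,y) 2*(z - y) ./ (1 + (y - z).^2);  % dL/dz of the log-penalty loss -- saturates for large errors

n0 = 784; n1 = 256; n2 = 10;                  % layer sizes (MNIST-ish)
W1 = 0.05*randn(n1, n0);  W2 = 0.05*randn(n2, n1);   % small initial weights
E2 = randn(n1, n2);                           % fixed random feedback matrix, sigma ~ 1
x  = rand(n0, 1);  y = zeros(n2, 1);  y(3) = 1;      % one dummy example

h1 = W1*x;   z1 = phi(h1);                    % forward pass, storing pre- and post-activations
h2 = W2*z1;  z2 = phi(h2);

e2 = dLc(z2, y);                              % top-layer error under the log-penalty loss
z1t = z1 - E2*e2;                             % hidden-layer target pushed down through E (K = 1)
e1 = dLc(z1, z1t);                            % local error at the hidden layer

eta = 0.01;  c1 = 1.0;                        % learning rate and update-norm clip
g2 = (e2 .* dphi(h2)) * z1';  g2 = g2 * min(1, c1/(norm(g2,'fro') + eps));
g1 = (e1 .* dphi(h1)) * x';   g1 = g1 * min(1, c1/(norm(g1,'fro') + eps));
W2 = W2 - eta*g2;  W1 = W1 - eta*g1;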

Lit review.
  • Logistic sigmoid can slow down learning, due to its non-zero mean (Glorot & Bengio 2010).
  • Recirculation algorithm (or generalized recirculation) is a precursor for target propagation.
  • Target propagation is all about the inverse of the forward propagation: if we had access to the inverse of the network of forward propagations, we could compute which input values at the lower levels of the network would result in better values at the top that would please the global cost.
    • This is a very different way of looking at it -- almost backwards!
    • And indeed, it's not really all that different from contrastive divergence. (even though CD doesn't work well with non-Bernoulli units)
  • Contrastive Hebbian learning also has two phases, one to fantasize, and one to try to make the fantasies look more like the input data.
  • Decoupled neural interfaces (Jaderberg et al 2016): learn a predictive model of error gradients (and inputs) instead of trying to use local information to estimate updated weights.

  • Yeah, call me a critic, but I'm not clear on the contribution of this paper; it smells precocious and over-sold.
    • Even the title. I was hoping for something more 'local' than per-layer computation. BP does that already!
  • They primarily report supportive tests, not discriminative or stressing tests; how does the algorithm fail?
    • Certainly a lot of work went into it..
  • I still don't see how the computation of a target through a random matrix, then using the delta/loss/error between that target and the feedforward activation to update the weights, is much different from propagating the errors directly through a random feedback matrix. E.g. subtract then multiply, or multiply then subtract?

{1441}
ref: -2018 tags: biologically inspired deep learning feedback alignment direct difference target propagation date: 03-15-2019 05:51 gmt revision:5 [4] [3] [2] [1] [0] [head]

Assessing the Scalability of Biologically-Motivated Deep Learning Algorithms and Architectures

  • Sergey Bartunov, Adam Santoro, Blake A. Richards, Luke Marris, Geoffrey E. Hinton, Timothy Lillicrap
  • As is known, many algorithms work well on MNIST, but fail on more complicated tasks, like CIFAR and ImageNet.
  • In their experiments, backprop still fares better than any of the biologically inspired / biologically plausible learning rules. This includes:
    • Feedback alignment {1432} {1423}
    • Vanilla target propagation
      • Problem: with convergent networks, layer inverses (top-down) will map all items of the same class to one target vector in each layer, which is very limiting.
      • Hence this algorithm was not directly investigated.
    • Difference target propagation (2015)
      • Uses the per-layer target as \hat{h}_l = g(\hat{h}_{l+1}; \lambda_{l+1}) + [h_l - g(h_{l+1}; \lambda_{l+1})]
      • Or: \hat{h}_l = h_l + g(\hat{h}_{l+1}; \lambda_{l+1}) - g(h_{l+1}; \lambda_{l+1}) , where \lambda_l are the parameters for the inverse model; g() is the sum and nonlinearity.
      • That is, the target is modified a la the delta rule by the difference between the inverse-propagated higher-layer target and the inverse-propagated higher-layer activity (a code sketch follows this list).
        • Why? h_l should approach \hat{h}_l as h_{l+1} approaches \hat{h}_{l+1} .
        • Otherwise, the parameters in lower layers continue to be updated even when low loss is reached in the upper layers. (from original paper).
      • The last-to-penultimate layer weights are trained via backprop to prevent the template impoverishment noted above.
    • Simplified difference target propagation
      • They substitute a biologically plausible learning rule for the penultimate layer:
      • \hat{h}_{L-1} = h_{L-1} + g(\hat{h}_L; \lambda_L) - g(h_L; \lambda_L) , where there are L layers.
      • It's the same rule as the other layers.
      • Hence subject to impoverishment problem with low-entropy labels.
    • Auxiliary output simplified difference target propagation
      • Add a vector z to the last layer activation, which carries information about the input vector.
      • z is just a set of random features from the activation h_{L-1} .
  • Used both fully connected and locally-connected (e.g. convolution without weight sharing) MLP.
  • It's not so great:
  • Target propagation seems like a weak learner, worse than feedback alignment; not only is the feedback limited, but it does not take advantage of the statistics of the input.
    • Hence, some of these schemes may work better when combined with unsupervised learning rules.
    • Still, in the original paper they use difference-target propagation with autoencoders, and get reasonable stroke features..
  • Their general result that networks and learning rules need to be tested on more difficult tasks rings true, and might well be the main point of this otherwise meh paper.
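
The difference-target-prop rule itself is only a couple of lines; a minimal sketch (mine, Matlab/Octave). The form of g() here -- a tanh of a learned linear map V -- is an assumption, and V would be trained separately on a layer-wise reconstruction loss.

% Difference target propagation: target computation for one layer (my sketch, not the paper's code).
% g(., V) is the learned approximate inverse of the forward layer above; it is trained elsewhere,
% e.g. to minimize || g(f(h_l)) - h_l ||^2 in denoising-autoencoder fashion.
g = @(h, V) tanh(V*h);                % assumed form of the inverse model

nl = 128;  nl1 = 64;                  % sizes of layer l and layer l+1
V       = 0.1*randn(nl, nl1);         % inverse-model parameters lambda_{l+1}
h_l     = randn(nl, 1);               % forward activity at layer l
h_lp1   = randn(nl1, 1);              % forward activity at layer l+1
hat_lp1 = randn(nl1, 1);              % target for layer l+1, handed down from above

% vanilla target prop would use:  hat_l = g(hat_lp1, V)
% difference target prop corrects it by the inverse model's own error at the current point:
hat_l = h_l + g(hat_lp1, V) - g(h_lp1, V);

% W_l is then updated to pull h_l toward hat_l with a local loss, e.g. ||h_l - hat_l||^2.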

{1432}
ref: -0 tags: feedback alignment Arild Nokland MNIST CIFAR date: 02-14-2019 02:15 gmt revision:0 [head]

Direct Feedback alignment provides learning in deep neural nets

  • from {1423}
  • Feedback alignment is able to provide zero training error even in convolutional networks and very deep networks, completely without error back-propagation.
  • Biologically plausible: error signal is entirely local, no symmetric or reciprocal weights required.
    • Still, it requires supervision.
  • Almost as good as backprop!
  • Clearly written, easy to follow math.
    • Though the proof that feedback-alignment direction is within 90 deg of backprop is a bit impenetrable, needs some reorganization or additional exposition / annotation.
  • 3x400 tanh network tested on MNIST; performs similarly to backprop, if faster.
  • Also able to train very deep networks on MNIST, CIFAR-10, CIFAR-100; 100 layers (which actually hurts this task). A toy sketch of the DFA update follows.
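
A toy sketch (mine, Matlab/Octave, not the paper's code) of the direct feedback alignment update: the output error is broadcast to every hidden layer through its own fixed random matrix, rather than back through the transposed forward weights.

% Direct feedback alignment on a tiny 2-hidden-layer net, one training example (my sketch).
% Backprop would send the error back through W3' and W2'; DFA instead projects the output
% error e straight to each hidden layer through fixed random matrices B1, B2.
phi = @(x) tanh(x);  dphi = @(x) 1 - tanh(x).^2;
n0 = 30; n1 = 40; n2 = 40; n3 = 10;
W1 = 0.1*randn(n1,n0);  W2 = 0.1*randn(n2,n1);  W3 = 0.1*randn(n3,n2);
B1 = randn(n1,n3);  B2 = randn(n2,n3);          % fixed random feedback matrices
eta = 0.01;
x = randn(n0,1);  y = zeros(n3,1);  y(1) = 1;   % one dummy example

for k = 1:500
  a1 = W1*x;   h1 = phi(a1);
  a2 = W2*h1;  h2 = phi(a2);
  yhat = W3*h2;                      % linear output layer, squared-error loss
  e  = yhat - y;                     % output error
  d2 = (B2*e) .* dphi(a2);           % error reaches layer 2 via B2 ...
  d1 = (B1*e) .* dphi(a1);           % ... and layer 1 via B1, not via W3', W2'
  W3 = W3 - eta * e  * h2';
  W2 = W2 - eta * d2 * h1';
  W1 = W1 - eta * d1 * x';
end
% the error on this single example shrinks steadily; norm(e) prints it.
norm(e)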

{1423}
ref: -2014 tags: Lillicrap Random feedback alignment weights synaptic learning backprop MNIST date: 02-14-2019 01:02 gmt revision:5 [4] [3] [2] [1] [0] [head]

PMID-27824044 Random synaptic feedback weights support error backpropagation for deep learning.

  • "Here we present a surprisingly simple algorithm for deep learning, which assigns blame by multiplying error signals by a random synaptic weights.
  • Backprop multiplies error signals e by the weight matrix W T W^T , the transpose of the forward synaptic weights.
  • But the feedback weights do not need to be exactly W T W^T ; any matrix B will suffice, so long as on average:
  • e TWBe>0 e^T W B e > 0
    • Meaning that the teaching signal Be B e lies within 90deg of the signal used by backprop, W Te W^T e
  • Feedback alignment actually seems to work better than backprop in some cases. This relies on starting the weights very small (can't be zero -- no output)

quote: "Our proof says that weights W0 and W evolve to equilibrium manifolds, but simulations (Fig. 4) and analytic results (Supplementary Proof 2) hint at something more specific: that when the weights begin near 0, feedback alignment encourages W to act like a local pseudoinverse of B around the error manifold. This fact is important because if B were exactly W^+ (the Moore-Penrose pseudoinverse of W), then the network would be performing Gauss-Newton optimization (Supplementary Proof 3). We call this update rule for the hidden units pseudobackprop and denote it by ∆h_PBP = W^+ e. Experiments with the linear network show that the angle ∆h_FA ∠ ∆h_PBP quickly becomes smaller than ∆h_FA ∠ ∆h_BP (Fig. 4b, c; see Methods). In other words feedback alignment, despite its simplicity, displays elements of second-order learning."
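
A quick numerical illustration (mine, Matlab/Octave) of the alignment claim on a two-layer linear network: the hidden-layer update uses a fixed random B in place of W^T, and over training the angle between the FA and BP deltas should drop well below 90 deg while e^T W B e goes positive. All hyperparameters are arbitrary toy values.

% Feedback alignment on a linear network y = W*A*x learning a random target map T (my sketch).
n_in = 20;  n_hid = 20;  n_out = 5;
T = randn(n_out, n_in) / sqrt(n_in);            % target linear map to be learned
A = 0.01*randn(n_hid, n_in);                    % first-layer weights, started very small
W = 0.01*randn(n_out, n_hid);                   % output weights
B = randn(n_hid, n_out) / sqrt(n_out);          % fixed random feedback matrix
eta = 1e-3;
for k = 1:10000
  x = randn(n_in, 1);
  e = W*A*x - T*x;                              % output error
  dFA = B*e;  dBP = W'*e;                       % hidden-layer deltas: feedback alignment vs backprop
  W = W - eta * e   * (A*x)';
  A = A - eta * dFA * x';                       % FA update -- uses B, never W'
  if k == 1 || mod(k, 2500) == 0
    ang = acosd( (dFA'*dBP) / (norm(dFA)*norm(dBP) + eps) );
    fprintf('step %5d: |e| = %.3f, angle(dFA, dBP) = %.1f deg, e''*W*B*e = %.3g\n', ...
            k, norm(e), ang, e'*W*B*e);
  end
end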

{1159}
ref: -0 tags: loops feedback arcs video game programming date: 04-30-2012 15:12 gmt revision:0 [head]

I highly agree with this philosophy / this deconstruction of the flow of information in human structures: http://www.lostgarden.com/2012/04/loops-and-arcs.html

On criticism as a meta-arc game:

"In the past I've discussed criticism as a game that attempts to revisit an arc repeatedly and embellish it with additional meaning. The game is to generate essays superficially based on some piece of existing art. In turn, other players generate additional essays based off the first essays. This acts as both a referee mechanism and judge. Score is accumulated via reference counts and by rising through an organization hierarchy. It is a deliciously political game of wit that is both impenetrable to outsiders and nearly independent of the actual source arcs. Here creating an arc becomes a move in the larger game. "

{1081}
ref: Hershey-2010.12 tags: DBS impulsivity STN feedback stability gonogo date: 02-22-2012 22:04 gmt revision:8 [7] [6] [5] [4] [3] [2] [head]

PMID-20855421[0] Mapping Go-No-Go performance within the subthalamic nucleus region.

  • Support the dorsal-ventral motor-cognitive model.
  • Only ventral subthalamic stimulation affected Go-No-Go accuracy.
    • Both ventral and dorsal stimulation showed positive motor effects.
  • On inhibition in the STN: (Aron and Poldrack 2006; Frank et al 2007).
    • Thought: if methamphetamine and L-Dopa have similar impulsivity / punding / hobbyism effects, why do they think that the function is localized exclusively in the STN? These behaviors seem a more general problem of dopamine dysregulation. Meth heads presumably have intact STNs. The pausing hypothesis (e.g. STN controls pausing in conflict situations) seems better to me (maybe); have to check the rat results.
    • Such is the problem with taking one thing out of a feedback loop and assuming the resultant deficit corresponds with the original 'function' insofar as one can be assigned. Think if you adjust the coefficients on a filter -- it gets all F'ed, with minor projection onto the frequency response.
    • Low-order systems are less sensitive to drastic parameter adjustment, but still purpose is obscured in feedback systems.
    • See {1082}
  • STN DBS can lead to impaired withholding of strong prepotent responses under strong response conflict
    • Such as the Stroop task (Jahanshahi et al 2000; Schroeder et al 2002; Witt et al 2004)
    • Stop signal task (Ray et al 2009)
    • Go-nogo tasks (Hershey et al 2004; Ballanger et al 2009).
    • Rats show the same deficit in inhibiting responses in strong conflict cases (Baunez et al 1995, 2001; Baunez and Robbins 1997).
  • Suggest that significant variability in treatment responses could be from the exact location of stimulation.
    • Ventral STN closer to SNr, and dorsal is closer to the ZI and thalamus.

____References____

[0] Hershey T, Campbell MC, Videen TO, Lugar HM, Weaver PM, Hartlein J, Karimi M, Tabbal SD, Perlmutter JS, Mapping Go-No-Go performance within the subthalamic nucleus region. Brain 133:Pt 12, 3625-34 (2010 Dec)

{1082}
ref: -0 tags: feedback stability resonance butterworth matlab date: 01-22-2012 03:46 gmt revision:4 [3] [2] [1] [0] [head]

Just fer kicks, I tested what happens to low-order butterworth filters when you maladjust one of the feedback coefficients.

% 2nd order Butterworth lowpass, cutoff 0.1 of Nyquist; A holds the feedback (denominator) coefficients.
[B, A] = butter(2, 0.1);
[h, w] = freqz(B,A);
A(2) = A(2) * 0.9;   % perturb one feedback coefficient by 10%
[h2, ~] = freqz(B,A);
hold off
subplot(1,2,1)
plot(w,abs(h))
hold on; plot(w,abs(h2), 'r')
title('10% change in one FB filter coef 2nd order butterworth')
xlabel('freq, rads / sample');
ylabel('filter response');

% do the same for a higher (3rd) order filter.
[B, A] = butter(3, 0.1);
[h, w] = freqz(B,A);
A(2) = A(2) * 0.9;   % same 10% perturbation of one feedback coefficient
[h2, ~] = freqz(B,A);
subplot(1,2,2)
hold on
plot(w,abs(h), 'b')
plot(w,abs(h2), 'r')
title('10% change in one FB filter coef 3rd order butterworth')
xlabel('freq, rads / sample');
ylabel('filter response');

The filters show a resonant peak, even though feedback was reduced. Not surprising, really; a lot of systems will show reduced phase margin and will begin to oscillate when poles are moved. Does this mean that a given coefficient (anatomical area) is responsible for resonance? By itself, of course not; one can not extrapolate one effect from one manipulation in a feedback system, especially a higher-order feedback system.

This, of course, holds in the mapping of digital (or analog) filters to pathology or anatomy. Pathology is likely reflective of how the loop is structured, not how one element functions (well, maybe).

For a paper, see {1083}

{1066}
ref: Hagbarth-1983.02 tags: piper rhythm oscillations feedback proprioception spinal reflex date: 01-19-2012 21:41 gmt revision:2 [1] [0] [head]

PMID-6869036[0] The Piper rhythm--a phenomenon related to muscle resonance characteristics?

  • Piper rhythm: the tendency towards rhythmical 40-60 Hz grouping of motor unit potentials in steadily contracting human muscles.
  • Recording of nerves in muscles did not support the idea that the Piper rhythm is dependent on afferent spindle pulses causing reflex entrainment (loop too slow).
  • This wouldn't make sense anyway, as the same rhythm appears in different muscles with markedly different mechanical properties.
  • Likely cause is the cerebrum, upper oscillations. Interesting!
  • See also: PMID-9862895[1] Cortical correlate of the Piper rhythm in humans.
    • MEG data is consistent with the cortex being the origin of the Piper rhythm.
  • And PMID-10203308[2] Rhythmical corticomotor communication.
    • The rhythmic modulation may form a tool for efficient driving of motor units but we express some reservations about the assumed binding and attention-related roles of the rolandic brain rhythms.
  • PMID-10622378[3] Cortical drives to human muscle: the Piper and related rhythms.
    • Alternately, oscillations may be a form of holding state.
    • They think gamma frequencies are a means of binding together simultaneously activated isometric muscles.
    • Inadequate output from the basal ganglia leads to a disappearance of the beta and piper drives to muscle.
    • Did we see any Piper-band osc activity? Did not look.

____References____

[0] Hagbarth KE, Jessop J, Eklund G, Wallin EU, The Piper rhythm--a phenomenon related to muscle resonance characteristics? Acta Physiol Scand 117:2, 263-71 (1983 Feb)
[1] Brown P, Salenius S, Rothwell JC, Hari R, Cortical correlate of the Piper rhythm in humans. J Neurophysiol 80:6, 2911-7 (1998 Dec)
[2] Hari R, Salenius S, Rhythmical corticomotor communication. Neuroreport 10:2, R1-10 (1999 Feb 5)
[3] Brown P, Cortical drives to human muscle: the Piper and related rhythms. Prog Neurobiol 60:1, 97-108 (2000 Jan)

{1045}
ref: Vibert-1979.08 tags: spike sorting recording depth extracellular glass electrodes active feedback original date: 01-15-2012 06:46 gmt revision:3 [2] [1] [0] [head]

PMID-95711[0] Spike separation in multiunit records: A multivariate analysis of spike descriptive parameters

  • Glass-coated tungsten microelectrodes have high capacitance; they compensate for this by spraying colloidal silver over the outside sheath of the glass, insulating that with varnish, and driving the shield in a positive-feedback way (stabilized in some way?). This negates the capacitance: 'low impedance capacitance compensated'.
    • Capacitance compensation really matters!!
  • Were able to record from single units for 40-100um range (average: 50um) with SNRs 2:1 to 7:1.
    • Some units had SNRs that could reach 15:1 (!!!), these could be recorded for 600 um of descent.
    • more than 3 units could usually be recognized at each recording point by visual inspection of the oscilloscope, and in some cases up to 6 units could be distinguished
    • Is there some clever RF way of neutralizing the capacitance of everything but the electrode tip? Hmm. Might as well try to minimize it.
  • Bandpass 300 Hz - 10 kHz.
  • When the signal crossed the threshold level, it was retained and assumed to be a spike if the duration of the first component was between 70 and 1000 us.
    • This 70 us lower limit was determined on a preliminary study as a fairly good rise time threshold for separation of fiber spikes from somatic or dendritic spikes.
    • I really need to do some single electrode recordings. Platt?
  • Would it be possible to implement this algorithm in realtime on the DSP?
  • Describe clustering based on PCA.
  • Programming this computer (PDP-12) must have been crazy!
  • They analyzed 20k spikes. Mango gives billions.
  • First principal component (F1) represented 60-65% of total information and was based mostly on amplitude.
  • Second principal component, 15-20% of total information, represented mainly time parameters.
  • Suggested 3 parameters: Vmax, Vmin, and T3 (time from max to min) -- sketched in code at the end of this entry.
  • Maybe they don't know what they are talking about:
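
A quick sketch (mine, Matlab/Octave) of the descriptive-parameter extraction they suggest, alongside the PCA projections; 'spikes' here is placeholder random data standing in for aligned threshold-crossing snippets, so the resulting clusters mean nothing.

% Spike descriptive parameters vs. PCA projections (my sketch; random placeholder waveforms).
fs = 25e3;  nspikes = 500;  nsamp = 32;
spikes = randn(nspikes, nsamp);                 % placeholder snippets, nspikes x nsamples
% principal components of the waveforms (F1 ~ amplitude, F2 ~ time course, per the paper)
Xc = spikes - repmat(mean(spikes,1), nspikes, 1);
[~, ~, V] = svd(Xc, 'econ');
score = Xc * V(:,1:2);                          % F1, F2 projections for each spike
% the three suggested parameters: Vmax, Vmin, and T3 (time from max to min)
[Vmax, imax] = max(spikes, [], 2);
[Vmin, imin] = min(spikes, [], 2);
T3 = (imin - imax) / fs;                        % time from peak to trough, seconds
feat = [Vmax, Vmin, T3];                        % cluster on these, or on the PCA scores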

____References____

[0] Vibert JF, Costa J, Spike separation in multiunit records: a multivariate analysis of spike descriptive parameters. Electroencephalogr Clin Neurophysiol 47:2, 172-82 (1979 Aug)

{907}
ref: Wyler-1980.1 tags: Wyler Robbins operant control feedback date: 01-07-2012 22:09 gmt revision:1 [0] [head]

PMID-7418770[0] Operant control of precentral neurons: the role of audio and visual feedback.

  • Central point: though in previous studies of operant conditioning of precentral neurons visual and auditory feedback was employed, this proved unnecessary for the monkeys to gain control of their neurons.
  • All that is required is reinforcement / feedback when the ISI is in the target range.
  • Consistent with the idea that the monkey gets feedback from periphery, and not from audio / visual feedback.

____References____

[0] Wyler AR, Robbins CA, Operant control of precentral neurons: the role of audio and visual feedback. Exp Neurol 70:1, 200-3 (1980 Oct)

{349}
ref: thesis-0 tags: clementine 042007 operant conditioning biofeedback tlh24 date: 01-06-2012 03:08 gmt revision:4 [3] [2] [1] [0] [head]

channel 29 controlled the X direction:

channel 81, the Y direction (this one was very highly modulated, and the monkey could get to a high rate, ~60 Hz; note that both units are sorted as one -- I ought to do the same on the other channels from now on, as this was rather predictive; this is duplicating Debbie Won's results):

However, when I ran a Wiener filter on the binned spike rates (these are not the rates as estimated through the polynomial filter; the fit is sketched after the bmi_sql output below), ch 81 was most predictive for target X position; ch 29, for target Y position (?). This is in agreement with population-wide predictions of target position: target X was predicted with low fidelity (1.12; cc = 0.35 or so); target Y was, apparently, unpredicted. I don't understand why this is, as I trained the monkey for 1/2 hour on just the opposite. Actually, this is because the targets were not in a random sequence -- they were in a CCW sequence, hence the neuronal activity was correlated to the last target, hence ch 81 to target X!

for reference, here is the output of bmi_sql:

order of columns: unit, channel, lag, snr, variable

ans =

    1.0000   80.0000    5.0000    1.0909    7.0000
    1.0000   80.0000    4.0000    1.0705    7.0000
    1.0000   80.0000    3.0000    1.0575    7.0000
    1.0000   80.0000    2.0000    1.0485    7.0000
    1.0000   80.0000    1.0000    1.0402    7.0000
    1.0000   28.0000    4.0000    1.0318    8.0000
    1.0000   76.0000    2.0000    1.0238   11.0000
    1.0000   76.0000    5.0000    1.0225   11.0000
    1.0000   17.0000         0    1.0209   11.0000
    1.0000   63.0000    3.0000    1.0202    8.0000

movies of the performance are here:

{912}
ref: Carlton-1981.1 tags: visual feedback 1981 error correction movement motor control reaction time date: 12-06-2011 06:35 gmt revision:1 [0] [head]

PMID-6457106 Processing visual feedback information for movement control.

  • Visual feedback can correct movement within 135 ms.
  • Measured this by simply timing the latency from presentation of visual error to initiation of corrective movement.

{896}
ref: Friston-2002.1 tags: neuroscience philosophy feedback top-down sensory integration inference date: 10-25-2011 23:24 gmt revision:0 [head]

PMID-12450490 Functional integration and inference in the brain

  • Extra-classical tuning: tuning is dependent on behavioral context (motor) or stimulus context (sensory). Author proposes that neuroimaging can be used to investigate it in humans.
  • "Information theory can, in principle, proceed using only forward connections. However, it turns out that this is only possible when processes generating sensory inputs are invertible and independent. Invertibility is precluded when the cause of a percept and the context in which it is engendered interact." -- proof? citations? Makes sense though.
  • Argues for the rather simplistic proof of backward connections via neuroimaging..

{715}
ref: Legenstein-2008.1 tags: Maass STDP reinforcement learning biofeedback Fetz synapse date: 04-09-2009 17:13 gmt revision:5 [4] [3] [2] [1] [0] [head]

PMID-18846203[0] A Learning Theory for Reward-Modulated Spike-Timing-Dependent Plasticity with Application to Biofeedback

  • (from abstract) The resulting learning theory predicts that even difficult credit-assignment problems, where it is very hard to tell which synaptic weights should be modified in order to increase the global reward for the system, can be solved in a self-organizing manner through reward-modulated STDP.
    • This yields an explanation for a fundamental experimental result on biofeedback in monkeys by Fetz and Baker.
  • STDP is prevalent in the cortex; however, it requires a second signal:
    • Dopamine seems to gate STDP in corticostriatal synapses
    • ACh does the same or similar in the cortex. -- see references 8-12
  • The simple learning rule they use: d/dt W_{ij}(t) = C_{ij}(t) D(t) , where C_{ij} is an STDP-driven eligibility trace and D(t) is the reward signal (toy sketch at the end of this list).
  • Their notes on the Fetz/Baker experiments: "Adjacent neurons tended to change their firing rate in the same direction, but also differential changes of directions of firing rates of pairs of neurons are reported in [17] (when these differential changes were rewarded). For example, it was shown in Figure 9 of [17] (see also Figure 1 in [19]) that pairs of neurons that were separated by no more than a few hundred microns could be independently trained to increase or decrease their firing rates."
  • Their result is actually really simple - there is no 'control' or biofeedback - there is no visual or sensory input, no real computation by the network (at least for this simulation). One neuron is simply reinforced, hence its firing rate increases.
    • Fetz & later Schmidt's work involved feedback and precise control of firing rate; this does not.
    • This also does not address the problem that their rule may allow other synapses to forget during reinforcement.
  • They do show that exact spike times can be rewarded, which is kinda interesting ... kinda.
  • Tried a pattern classification task where all of the information was in the relative spike timings.
    • Had to run the pattern through the network 1000 times. That's a bit unrealistic (?).
      • The problem with all these algorithms is that they require so many presentations for gradient descent (or similar) to work, whereas biological systems can and do learn after one or a few presentations.
  • Next tried to train neurons to classify spoken input
    • Audio stimuli were processed through a cochlear model
    • Maass previously has been able to train a network to perform speaker-independent classification.
    • Neuron model does, roughly, seem to discriminate between "one" and "two"... after 2000 trials (each with a presentation of 10 of the same digit utterance). I'm still not all that impressed. Feels like gradient descent / linear regression as per the original LSM.
  • A great many derivations in the Methods section... too much to follow.
  • Should read refs:
    • PMID-16907616[1] Gradient learning in spiking neural networks by dynamic perturbation of conductances.
    • PMID-17220510[2] Solving the distal reward problem through linkage of STDP and dopamine signaling.
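
A toy, single-synapse sketch (mine, Matlab/Octave) of the structure of that rule -- a pair-based STDP eligibility trace C gated by a sparse scalar reward D. The rates, time constants, and reward schedule are arbitrary, and this is nowhere near the scale of their spiking-network simulations.

% d/dt W_ij = C_ij(t) * D(t), in its simplest discrete-time form for one synapse (my sketch).
dt = 1e-3;  T = 5;  nsteps = round(T/dt);       % 5 s of simulated time, 1 ms steps
tau_c = 1.0;  tau_pre = 0.02;  tau_post = 0.02; % eligibility and STDP trace time constants (s)
A_plus = 1.0;  A_minus = 1.0;  eta = 0.05;
w = 0.5;  C = 0;  xpre = 0;  xpost = 0;
for k = 1:nsteps
  pre  = rand < 10*dt;                          % ~10 Hz Poisson presynaptic spike
  post = rand < 10*dt;                          % ~10 Hz Poisson postsynaptic spike
  xpre  = xpre  * exp(-dt/tau_pre)  + pre;      % decaying pre- and post-synaptic traces
  xpost = xpost * exp(-dt/tau_post) + post;
  stdp = A_plus * xpre * post - A_minus * xpost * pre;   % pair-based STDP increment
  C = C * exp(-dt/tau_c) + stdp;                % eligibility trace C_ij(t)
  D = double(mod(k, 1000) == 0);                % toy reward: a pulse once per second
  w = w + eta * C * D * dt;                     % the rule: dW_ij/dt = C_ij(t) * D(t)
end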

____References____

[0] Legenstein R, Pecevski D, Maass W, A learning theory for reward-modulated spike-timing-dependent plasticity with application to biofeedback. PLoS Comput Biol 4:10, e1000180 (2008 Oct)
[1] Fiete IR, Seung HS, Gradient learning in spiking neural networks by dynamic perturbation of conductances. Phys Rev Lett 97:4, 048104 (2006 Jul 28)
[2] Izhikevich EM, Solving the distal reward problem through linkage of STDP and dopamine signaling. Cereb Cortex 17:10, 2443-52 (2007 Oct)

{329}
ref: Fetz-2007.03 tags: hot fetz BMI biofeedback operant training learning date: 09-07-2008 18:56 gmt revision:7 [6] [5] [4] [3] [2] [1] [head]

PMID-17234689[0] Volitional control of neural activity: implications for brain-computer interfaces (part of a symposium)

  • Limits in the degree of accuracy of control in the latter studies can be attributed to several possible factors. Some of these factors, particularly limited practice time, can be addressed with long-term implanted BCIs. YES.
  • Accurate device control under diverse behavioral conditions depends significantly on the degree to which the neural activity can be volitionally modulated. YES again.
  • neurons (50%) in somatosensory (post central) cortex fire prior to volitional movement. interesting.
  • It should also be noted that the monkeys activated some motor cortex cells for operant reward without ever making any observed movements See: Fetz & Finocchio, 1975, PMID-810359.
    • Motor cortex neurons that were reliably associated with EMG activity in particular forelimb muscles could be readily dissociated from EMG when the rewarded pattern involved cell activity and muscle suppression.
    • This may be related to switching between real and imagined movements.
  • Biofeedback worked well for activating low-threshold motor units in isolation, but not high threshold units; attempts to reverse recruitment order of motor units largely failed to demonstrate violations of the size principle.
  • This (the typical BMI decoding strategy) interposes an intermediate stage that may complicate the relationship between neural activity and the final output control of the device
    • again, in other words: "First, the complex transforms of neural activity to output parameters may complicate the degree to which neural control can be learned."
    • quote: This flexibility of internal representations (e.g. to imagine moving your arm, train the BMI on that, and rapidly directly control the arm rather than going through the intermediate/training step) underlies the ability to cognitively incorporate external prosthetic devices into the body image, and explains the rapid conceptual adaptation to artificial environments, such as virtual reality or video games.
      • There is a high flexibility of input (sensory) and output (motor) for purposes of imagining / simulating movements.
  • adaptive learning algorithms may create a moving target for the robust learning algorithm; does it not make more sense to allow the cortex to work its magic?
  • Degree of independent control of cells may be inherently constrained by ensemble interactions
    • To the extent that internal representations depend on relationships between the activities of neurons in an ensemble, processing of these representations involves corresponding constraints on the independence of those activities.
  • quote: "These factors suggest that the range and reliability of neural control in BMI might increase significantly when prolonged stable recordings are achieved and the subject can practice under consistent conditions over extended periods of time."
  • Fetz agrees that the limitation is the goddamn technology. need to fix this!
  • there is evidence of favoritism in his citations (friends with Miguel??)

humm.. this paper came out a month ago, and despite the fact that he is much older and more experienced than i, we have arrived at the same conclusions by looking at the same set of data/papers. so: that's good, i guess.

____References____

[0] Fetz EE, Volitional control of neural activity: implications for brain-computer interfaces. J Physiol 579:Pt 3, 571-9 (2007 Mar 15)

{479}
ref: bookmark-0 tags: cybernetics introduction 1957 Ross Ashby feedback date: 10-26-2007 00:50 gmt revision:3 [2] [1] [0] [head]

http://pespmc1.vub.ac.be/books/IntroCyb.pdf -- dated, but still interesting, useful, a book in and of itself!

  • cybernetics = "the study of systems that are open to energy but closed to information and control"
    • cybernetics also = the study of systems whose complexity cannot be reduced away, or rather whose complexity is integral to its function, e.g. the human brain, the world economy. here simple examples have little explanatory power.
  • book, for the most part, avoids calculus, and deals instead with discrete time and sums (i think?)
  • with exercises!! for example, page 60 - cybernetics of a haunted house:)
  • random thought: a lot of this stuff seems dependent on the mathematics of statistical physics...

{106}
ref: Scott-2004.07 tags: Scott motor control optimal feedback cortex reaching dynamics review date: 04-09-2007 22:40 gmt revision:1 [0] [head]

PMID-15208695[0] Optimal feedback control and the neural basis of volitional motor control, by Stephen H. Scott

____References____

[0] Scott SH, Optimal feedback control and the neural basis of volitional motor control. Nat Rev Neurosci 5:7, 532-46 (2004 Jul)

{141}
ref: learning-0 tags: motor control primitives nonlinear feedback systems optimization date: 0-0-2007 0:0 revision:0 [head]

http://hardm.ath.cx:88/pdf/Schaal2003_LearningMotor.pdf not in pubmed.