m8ta
{1402}
ref: -0 tags: recurrent cortical model adaptation gain V1 LTD date: 03-27-2018 17:48 gmt revision:1 [0] [head]

PMID-18336081 Adaptive integration in the visual cortex by depressing recurrent cortical circuits.

  • Mainly focused on the experimental observation that decreasing contrast increases the latency of both behavioral and neural responses (the latter in later visual areas).
  • Idea is that synaptic depression in recurrent cortical connections mediates this 'adaptive integration' time-constant to maintain reliability.
  • Model also explains persistent activity after a flashed stimulus.
  • No plasticity or learning, though.
  • Rather elegant and well explained.
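The core mechanism can be sketched in a toy rate model (all parameters here are made up, not the paper's): a recurrent weight w gated by a depression variable x, with latency-to-threshold growing as the input 'contrast' I shrinks.

```python
import numpy as np

# Illustrative parameters, not fitted to the paper.
dt = 0.1                                  # ms
tau_r, tau_rec, U, w = 10.0, 200.0, 2.0, 0.95

def latency(I, theta=0.1, T=1000.0):
    """Time for the recurrent rate r to cross threshold theta given input I.
    x is the synaptic depression variable gating the recurrent weight w."""
    r, x = 0.0, 1.0
    for t in range(int(T / dt)):
        drive = max(w * x * r + I, 0.0)
        r += dt / tau_r * (-r + drive)
        x += dt * ((1.0 - x) / tau_rec - U * x * r / tau_rec)
        if r >= theta:
            return t * dt
    return T

# Lower input ('contrast') -> longer latency, as in the paper's observation.
t_low, t_high = latency(0.05), latency(0.5)
assert t_high < t_low
```

With the near-critical recurrent weight, the effective integration time constant is long, so weak inputs take much longer to reach threshold; depression then trims the gain as the rate grows.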

{1340}
ref: -2016 tags: 6-OHDA parkinsons model warren grill simulation date: 05-10-2016 23:30 gmt revision:4 [3] [2] [1] [0] [head]

PMID-26867734 A biophysical model of the cortex-basal ganglia-thalamus network in the 6-OHDA lesioned rat model of Parkinson’s disease

  • Kumaravelu K, Brocker DT, Grill WM
  • Background: Although animal models (6-OHDA rats, MPTP monkeys) are rendered parkinsonian by a common mechanism (loss of dopaminergic neurons), there is considerable variation in the neuronal activity underlying the pathophysiology, including differences in firing rates, firing patterns, responses to cortical stimulation, and neuronal synchronization across different basal ganglia (BG) structures (Kita and Kita 2011; Nambu et al. 2000).
    • Yep. Highly idiopathic disease.
    • Claim there are good models of the MPTP monkey:
      • PMID-20309620 Modeling shifts in the rate and pattern of subthalamopallidal network activity during deep brain stimulation.
      • PMID-22805068 Network effects of subthalamic deep brain stimulation drive a unique mixture of responses in basal ganglia output.
  • Biophysical model of the cortex - basal ganglia - thalamus circuit
    • Hodgkin-Huxley type.
      • Single compartment neurons.
    • Validated by comparing responses of the BG to CTX stimulation.
    • Details, should they be important:
      • Each rCortex (regularly spiking) neuron
        • excitatory input from one TH neuron
        • inhibitory input from four randomly selected iCortex neurons.
        • Izhikevich model.
      • Each iCortex (fast inhibitory) neuron
        • excitatory input from four randomly selected rCortex neurons.
      • Each dStr (direct, D1/D5, ex) neuron
        • excitatory input from one rCortex neuron
        • inhibitory axonal collaterals from three randomly selected dStr neurons.
      • Each idStr (indirect, D2, inhib) neuron
        • excitatory input from one rCortex neuron
        • inhibitory axonal collaterals from four randomly selected idStr neurons.
      • Each STN neuron
        • inhibitory input from two GPe neurons
        • excitatory input from two rCortex neurons.
        • DBS modeled as a somatic current.
      • Each GPe neuron
        • inhibitory axonal collaterals from any two other GPe neurons
        • inhibitory input from all idStr neurons.
      • Each GPi neuron
        • inhibitory input from two GPe neurons
        • inhibitory input from all dStr neurons.
      • Some GPe/GPi neurons receive
        • excitatory input from two STN neurons,
        • while others do not.
      • Each TH neuron receives inhibitory input from one GPi neuron.
  • Diseased state:
    • Loss of striatal dopamine is accompanied by an increase in acetylcholine levels (Ach) in the Str (Ikarashi et al. 1997)
      • This results in a reduction of M-type potassium current in both the direct and indirect MSNs. (2.6 -> 1.5)
    • Dopamine loss results in reduced sensitivity of direct Str MSN to cortical stimulation (Mallet et al. 2006)
      • corticostriatal synaptic conductance from 0.07 to 0.026
    • Striatal dopamine depletion causes an increase in the synaptic strength of intra-GPe axonal collaterals resulting in aberrant GPe firing (Miguelez et al. 2012)
      • Increase from 0.125 to 0.5.
  • Good match to experimental rats.
  • Ok, so this is a complicated model (they aim to be the most complete to-date). How sensitive is it to parameter perturbations?
    • Noticeable ~20 Hz oscillations in BG in PD condition
    • ~9 Hz in STN & GPi.
  • And how well do the firing rates match experiment?
    • Not very. Look at the error bars.
  • What does DBS (direct current injection into STN neurons) do?
    • See d,e,f: stochastic parameters; g,h,i: (semi) stochastic wiring.
  • Another check: NMDA antagonist into STN suppressed STN beta band oscillations in 6-OHDA lesioned rats (Pan et al. 2014).
    • Analysis of model GPi neurons revealed that episodes of beta band oscillatory activity interrupted alpha oscillatory activity in the PD state (Fig. 9a, b), consistent with experimental evidence that episodes of tremor-related oscillations desynchronized beta activity in PD patients (Levy et al. 2002).
  • What does DBS, at variable frequencies, do to oscillations in the circuit?
  • How might this underlie a mechanism of action?
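The wiring rules listed above are straightforward to encode; a sketch (the population size N is an assumption, not the paper's value):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 10  # neurons per population (assumed; see the paper for actual sizes)

def pick(pool, k, exclude=None):
    """Choose k distinct presynaptic indices, optionally excluding self."""
    options = [i for i in range(pool) if i != exclude]
    return rng.choice(options, size=k, replace=False)

# Each STN neuron: inhibitory input from two GPe neurons, excitatory input
# from two rCortex neurons (per the wiring list above).
stn_in = [{"GPe": pick(N, 2), "rCtx": pick(N, 2)} for _ in range(N)]

# Each GPe neuron: collaterals from any two *other* GPe neurons, plus
# inhibitory input from all idStr neurons.
gpe_in = [{"GPe": pick(N, 2, exclude=i), "idStr": np.arange(N)}
          for i in range(N)]

assert all(len(d["GPe"]) == 2 for d in stn_in)
assert all(i not in gpe_in[i]["GPe"] for i in range(N))
```

The remaining populations (dStr, idStr, GPi, TH) follow the same pattern; the asserts just confirm the in-degree constraints hold.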

Overall, not a bad paper. It's not very well organized, which is not helped by the large amount of information presented, but having slogged through the figures, I'm somewhat convinced that the model is good. This despite my general reservations about these models: the true validation would be to have it generate actual behavior (and learning)!

Lacking this, the approximations employed seem like a step forward in understanding how PD and DBS work. The results and discussion are consistent with {1255}, but not {711}, which found that STN projections from M1 (not the modulation of M1 projections to GPi, via efferents from STN) truly matter.

{1134}
ref: Penney-1983.01 tags: DBS parkinsons model review chorea review date: 03-02-2012 21:22 gmt revision:2 [1] [0] [head]

PMID-6838141[0] Speculations on the Functional Anatomy of Basal Ganglia Disorders

  • "We present a model based on the accumulating evidence that suggests the importance of a cortico-striato-pallido-thalamocortical feedback circuit as the major extrapyramidal influence on the motor system in man."
    • Behaviors generated from the cerebral cortex are focused and facilitated by projections through the basal ganglia.
    • The chorea of Huntington's disease and the bradykinesia of PD are opposite extremes of the dysfunction of this system.
      • Huntington's: inability to suppress unwanted movements.
      • Inadequate inhibitory modulation of ongoing movement by the nigrostriatal dopamine pathway.
  • Anatomy already described in Kemp & Powell 1971. More details have accrued in the subsequent 4 decades.
  • Kinnier Wilson 1929 -- astute observations on the nature of chorea, in Huntington's and others: how the movements appear to be purposeful, but objectively are not. He infers that it may be a disorder of the premotor cortex, since the primary cortex seems to control individual muscle contractions. Much data supports this now.
  • All dopamine agonists result in choreiform dyskinesias.
  • Tardive dyskinesia, which seems to result from drug-induced striatal dopamine receptor supersensitivity after long-term high-dose neuroleptic therapy, also manifests choreiform movements.
  • In huntington's disease, supersensitive GABA receptors develop in the globus pallidus following striatal deinnervation.
    • Likewise for PD: supersensitive dopamine receptors develop in the striatum (Lee et al 1978).
  • They mention neuromodulators (substance P, angiotensin II, cholecystokinin, leucine-enkephalin), which have been largely ignored in later work -- why?
  • Tremor is very responsive to muscarinic cholinergic agonists, hence striatal cholinergic neurons may play a role in the etiology of tremor.
    • Or the effect could be mediated through the cortex (my observation).
    • But then again this is inconsistent with the fact that pallidotomy is effective at mediating tremor in PD patients.
    • Tremor is unusual in diseases like Hallervorden-Spatz and other pallidal degenerations presumably because pallidothalamic pathways are necessary for the manifestation of PD tremor.
  • The descending SNr pathways to the tectum and midbrain tegmentum appear to be responsible for the rotatory behavior seen in models of parkinsonism in the rat (Morelli et al 1981).
    • Rotatory behavior exhibited by rats after lesioning of nigral dopamine neurons continues even in the absence of the telencephalon and thalamus (Papadopolous and Huston 1981).

Got some things completely wrong:

  • They say that the cells of the subthalamic nucleus are inhibitory on the cells of MGP (GPi)/SNr; the STN is in fact glutamatergic and excitatory.

____References____

[0] Penney JB Jr, Young AB, Speculations on the functional anatomy of basal ganglia disorders. Annu Rev Neurosci 6, 73-94 (1983)

{1083}
ref: Holgado-2010.09 tags: DBS oscillations beta globus pallidus simulation computational model date: 02-22-2012 18:36 gmt revision:4 [3] [2] [1] [0] [head]

PMID-20844130[0] Conditions for the Generation of Beta Oscillations in the Subthalamic Nucleus–Globus Pallidus Network

  • Modeled the globus pallidus external & STN; arrived at criteria in which the system shows beta-band oscillations.
    • STN is primarily glutamatergic and projects to GPe (along with many other areas..)
      • STN gets lots of cortical afferents, too.
    • GPe is GABAergic and projects profusely back to STN.
    • This inhibition leads to more accurate choices.
      • (Frank, 2006:
        • The present [neural network] model incorporates the STN and shows that by modulating when a response is executed, the STN reduces premature responding and therefore has substantial effects on which response is ultimately selected, particularly when there are multiple competing responses.
        • Increased cortical response conflict leads to dynamic adjustments in response thresholds via cortico-subthalamic-pallidal pathways.
        • the model accounts for the beneficial effects of STN lesions on these oscillations, but suggests that this benefit may come at the expense of impaired decision making.
        • Not totally convinced -- impulsivity is due to larger network effects. Delay in conflict situations is an emergent property, not localized to STN.
      • Frank 2007 {1077}.
  • Beta band: cite Boraud et al 2005.
  • Huh parameters drawn from Misha's work, among others + Kita 2004, 2005.
    • Striatum has a low spike rate but high modulation? Schultz and Romo 1988.
  • In their model there are a wide range of parameters (bidirectional weights) which lead to oscillation
  • In PD the striatum is hyperactive in the indirect path (Obeso et al 2000); their model duplicates this.
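The paper's basic claim -- sufficiently strong reciprocal STN-GPe coupling yields beta-band oscillations, while weak coupling settles to a steady state -- can be reproduced in a minimal delayed rate model (all parameters here are illustrative, not the paper's fitted values):

```python
import numpy as np

dt, T = 0.1, 2000.0                    # time step and duration (ms)
n = int(T / dt)
tau_s, tau_g = 6.0, 14.0               # STN / GPe time constants (ms)
d = int(6.0 / dt)                      # axonal delay in steps (6 ms)

def run(w_gs, w_sg, ctx=30.0, str_drive=20.0):
    """Euler-integrate a delayed STN (s) / GPe (g) rate model."""
    f = lambda x: np.clip(x, 0.0, 300.0)    # piecewise-linear activation
    s, g = np.zeros(n), np.zeros(n)
    for t in range(1, n):
        td = max(t - 1 - d, 0)              # delayed index
        s[t] = s[t-1] + dt / tau_s * (-s[t-1] + f(2.0 * ctx - w_gs * g[td]))
        g[t] = g[t-1] + dt / tau_g * (-g[t-1] + f(w_sg * s[td]
                                                  - 0.5 * g[td] - str_drive))
    return s

strong = run(w_gs=4.0, w_sg=5.0)   # high loop gain: sustained oscillation
weak = run(w_gs=0.5, w_sg=0.5)     # low loop gain: settles to a fixed point
half = n // 2
print(strong[half:].std(), weak[half:].std())
```

The delayed negative-feedback loop (STN excites GPe, GPe inhibits STN) goes unstable once the product of the coupling gains is large enough, and the saturating activation turns the instability into a limit cycle with a period in the tens of milliseconds.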

____References____

[0] Holgado AJ, Terry JR, Bogacz R, Conditions for the generation of beta oscillations in the subthalamic nucleus-globus pallidus network.J Neurosci 30:37, 12340-52 (2010 Sep 15)

{1103}
ref: -0 tags: micromotion electrode FEA model date: 01-27-2012 19:19 gmt revision:1 [0] [head]

PMID-16317234 A finite-element model of the mechanical effects of implantable microelectrodes in the cerebral cortex.

  • Postulate that mechanical strains induced around the implant site may be one of the leading factors responsible for the sustained tissue response in chronic implants
  • A tangential tethering force results in a 94% reduction in the strain value at the tip of the polyimide probe track in the tissue.
  • The simulated 'soft' probe induced two orders of magnitude smaller strain values than a simulated silicon probe.
  • Shows some insertion forces, as well as mechanical properties of the brain.

{1095}
ref: Dorval-2010.08 tags: DBS Dorval STN irregular regular basal ganglia model date: 01-24-2012 20:24 gmt revision:1 [0] [head]

PMID-20505125[0] Deep brain stimulation alleviates parkinsonian bradykinesia by regularizing pallidal activity.

  • Hypothesis: disorder in the STN leads to parkinsonian symptoms (tremor, akinesia).
  • finger tapping test.
  • Irregular DBS was less effective than regular DBS at eliminating bradykinesia.
  • computational model: this is because there are more transmission errors at thalamic output neurons.
    • computational model possibly fluffy to keep conclusion from being too short?
  • cf. [1][2] -- which includes an irregular stimulation protocol (at longer timescales).

____References____

[0] Dorval AD, Kuncel AM, Birdno MJ, Turner DA, Grill WM, Deep brain stimulation alleviates parkinsonian bradykinesia by regularizing pallidal activity.J Neurophysiol 104:2, 911-21 (2010 Aug)
[1] Rosin B, Slovik M, Mitelman R, Rivlin-Etzion M, Haber SN, Israel Z, Vaadia E, Bergman H, Closed-loop deep brain stimulation is superior in ameliorating parkinsonism.Neuron 72:2, 370-84 (2011 Oct 20)
[2] Santos FJ, Costa RM, Tecuapetla F, Stimulation on demand: closing the loop on deep brain stimulation.Neuron 72:2, 197-8 (2011 Oct 20)

{1059}
ref: -0 tags: electrode implantation spring modeling muscles sewing date: 01-16-2012 17:30 gmt revision:0 [head]

PMID-21719340 Modelization of a self-opening peripheral neural interface: a feasibility study.

  • Electrode is self-opening, and they outline the math behind it. This could be useful!

{888}
ref: tlh24-2011 tags: motor learning models BMI date: 01-06-2012 00:19 gmt revision:1 [0] [head]

Experiment: you have a key. You want that key to learn to control a BMI, but you do not want the BMI to learn how the key does things, as

  1. That is not applicable when you don't have training data - amputees, paraplegics.
  2. That does not tell much about motor learning, which is what we are interested in.

Given this, I propose a very simple groupweight: one axis is controlled by the summed action of a certain population of neurons, the other by a second, disjoint, population; a third population serves as control. The task of the key is to figure out what does what: how does the firing of a given unit translate to movement (forward model). Then the task during actual behavior is to invert this: given movement end, what sequence of firings should be generated? I assume, for now, that the brain has inbuilt mechanisms for inverting models (not that it isn't incredibly interesting -- and I'll venture a guess that it's related to replay, perhaps backwards replay of events). This leaves us with the task of inferring the tool-model from behavior, a task that can be done now with our modern (though here-mentioned quite simple) machine learning algorithms. Specifically, it can be done through supervised learning: we know the input (neural firing rates) and the output (cursor motion), and need to learn the transform between them. I can think of many ways of doing this on a computer:

  1. Linear regression -- This is obvious given the problem statement and knowledge that the model is inherently linear and separable (no multiplication factors between the input vectors). In MATLAB, you'd just use mldivide (the backslash operator) -- but! this requires storing all behavior to date. Does the brain do this? I doubt it, but this model, for a linear BMI, is optimal. (You could extend it to be Bayesian if you want confidence intervals -- but this won't make it faster).
  2. Gradient descent -- During online performance, you (or the brain) adjust the estimates of the weights per neuron to minimize error between observed and estimated behavior (the estimated behavior would constitute a forward model). This is just LMS; it works, but convergence is exponential and slow along low-variance input directions (for a linear model the error surface is convex, so there are no local minima to get stuck in). This model will make predictions on which neurons change relevance in the behavior (more needed for acquiring reward) based on continuous-time updates.
  3. Batched gradient descent -- Hypothetically, one could bolster the learning rate by running batches of data multiple times through a gradient descent algorithm. The brain very well could do this offline (during sleep), and we can observe this. Such a mechanism would improve performance after sleep, which has been observed behaviorally in people (and primates?).
  4. Gated Gradient Descent -- This is halfway between reinforcement learning and gradient descent. Basically, the brain only updates weights when something of motivational / sensory salience occurs, e.g. juice reward. It differs from raw reinforcement learning in that there is still multiplication between sensory and motor data + subsequent derivative.
  5. Reinforcement learning -- Neurons are 'rewarded' at the instant juice is delivered; they adjust their behavior based on behavioral context (a target), which presumably (given how long we train our keys), is present in the brain at the same time the cursor enters the target. Sensory data and model-building are largely absent.
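Options 1 and 2 can be compared directly on a simulated linear 'groupweight' BMI (population sizes, firing rates and noise level are arbitrary choices here):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical group weighting: units 0-7 drive the x axis, units 8-15
# drive the y axis, units 16-19 are unconnected controls.
n_units, n_trials = 20, 500
w_true = np.zeros((n_units, 2))
w_true[0:8, 0] = 1.0 / 8
w_true[8:16, 1] = 1.0 / 8

rates = rng.poisson(5.0, size=(n_trials, n_units)).astype(float)
cursor = rates @ w_true + 0.05 * rng.standard_normal((n_trials, 2))

# 1. Linear regression: optimal, but requires storing all behavior to date.
w_lsq, *_ = np.linalg.lstsq(rates, cursor, rcond=None)

# 2. LMS gradient descent: online; stores only the current weight estimate.
w_lms = np.zeros((n_units, 2))
eta = 1e-3
for x, y in zip(rates, cursor):
    err = y - x @ w_lms              # forward-model prediction error
    w_lms += eta * np.outer(x, err)

print(np.abs(w_lsq - w_true).max())  # near zero
print(np.abs(w_lms - w_true).max())  # larger, still shrinking with trials
```

The batch solution recovers the weights almost exactly; the online LMS estimate lags behind it, which is the behavioral signature the gradient-descent hypothesis would predict.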

{i need to think more about model-building, model inversion, and songbird learning?}

{929}
ref: Kim-2007.08 tags: Hyun Kim muscle activation method BMI model prediction kinarm impedance control date: 01-06-2012 00:19 gmt revision:1 [0] [head]

PMID-17694874[0] The muscle activation method: an approach to impedance control of brain-machine interfaces through a musculoskeletal model of the arm.

  • First BMI that successfully predicted interactions between the arm and a force field.
  • Previous BMIs are used to decode position, velocity, and acceleration, as each of these has been shown to be encoded in the motor cortex
  • Hyun talks about stiff tasks, like writing on paper, vs. pliant tasks, like handling an egg; both require a mixture of force and position control.
  • Georgopoulos = velocity; Evarts = force; Kalaska = movement and force in an isometric task; [17-19] = joint dependence;
  • Todorov "On the role of primary motor cortex in arm movement control" [20] = muscle activation, which reproduces Georgopoulos and Schwartz ("Direct cortical representation of drawing").
  • Kakei [19] "Muscle movement representations in the primary motor cortex" and Li [23] [1] show neurons correlate with both muscle activations and direction.
  • Argues that MAM is the best way to extract impedance information -- direct readout of impedance requires a supervised BMI to be trained on data where impedance is explicitly measured.
  • linear filter does not generalize to different force fields.
  • algorithm activity highly correlated with recorded EMG.
  • another interesting ref: [26] "Are complex control signals required for human arm movements?"

____References____

[0] Kim HK, Carmena JM, Biggs SJ, Hanson TL, Nicolelis MA, Srinivasan MA, The muscle activation method: an approach to impedance control of brain-machine interfaces through a musculoskeletal model of the arm.IEEE Trans Biomed Eng 54:8, 1520-9 (2007 Aug)
[1] Li CS, Padoa-Schioppa C, Bizzi E, Neuronal correlates of motor performance and motor learning in the primary motor cortex of monkeys adapting to an external force field.Neuron 30:2, 593-607 (2001 May)

{950}
ref: -0 tags: Todorov motor control models 2000 date: 12-22-2011 21:18 gmt revision:3 [2] [1] [0] [head]

PMID-10725930 Direct cortical control of muscle activation in voluntary arm movements: a model.

  • Argues that the observed high-level control of parameters (movement direction) is inconsistent with demonstrated low-level control (control of individual muscles / muscle groups, as revealed by STA [5] or force production [3]), but this inconsistency is false: the principle of low level control is correct, and high level control appears due to properties of the musculoskeletal system.
  • "Yet the same cells that encode hand velocity in movement tasks can also encode the forces exerted against external objects in both movement and isometric tasks [9,10]."
  • The following other correlations have been observed:
    • arm position [11]
    • acceleration [12]
    • movement preparation [13]
    • target position [14]
    • distance to target [15]
    • overall trajectory [16]
    • muscle coactivation [17]
    • serial order [18]
    • visual target position [19]
    • joint configuration [20]
    • instantaneous movement curvature [7]
    • time from movement onset [15]
  • although these models can fit the data well, they leave a crucial question unanswered, namely, how such a mixed signal can be useful for generating motor behavior.
    • What? No! The diversity of voices gives rise to robust, dynamic computation. I think this is what Miguel has written about, will need to find a reference.
  • Anyway, all the motor parameters are related by the laws of physics -- the actual dimensionality of real reaches is relatively low.
  • His model: muscle activity simply reflects M1 PTN activity.
  • If you include real muscle parameters, a lot of the observed correlations make sense: muscle force depends not only on activation, but also on muscle length and rate of change of length.
  • In this scientific problem, the output (motor behavior) specified by the motor task is easily measured, and the input (M1 firing) must be explained.
    • Due to the many-to-one mapping, there is a large null-space of the inverse transform, so individual neurons cannot be predicted. Hence focus on population vector average.
  • Cosine tuning is the only activation pattern that minimizes neuromotor noise (derived in methods via Parseval's theorem). Hence he uses force, velocity, and displacement tuning for his M1 cells.
  • Activity of M1 cells is constrained in endpoint space, hence depends only on behavioral parameters.
    • The muscles were "integrated out".
  • Using his equation, it is clear that for an isometric task, M1 activity is cosine tuned to force direction and magnitude -- x(t) is constant.
  • For hand kinematics in the physiological range with an experimentally measured inertia-to-damping ratio, the damping compensation signal dominates the acceleration signal.
    • Hence the population signal is dominated by the velocity term ẋ(t).
    • Muscle damping is asymmetric: predominant during shortening.
  • The population vector ... is equal not to the movement direction or velocity, but instead to the particular sum of position, velocity, acceleration, and force signals in eq. 1
  • PV reconstruction fails when movement and force direction are varied independently. [28]
  • Fig 4. Schwartz' drawing task -- {951} -- and shows how curvature, naturalistic velocity profiles, the resultant accelerations, and leading neuronal firing interact to distort the decoded PV.
    • Explains why, when assuming PV tuning, there seems to be a variable M1-to-movement delay. At high curvature PV tuning can apparently lag movement. Impossible!
  • Fig 5 reproduces [21]
    • Mean firing rate (mfr, used to derive the poisson process spike times) and r^2 based classification remarkably different -- smoothing + square root biases toward finding direction-tuned cells.
    • Plus, as P, V, and A are all linearly related, a sum of the 3 is closer to D than any of the three.
    • "Such biases raise the important question of how one can determine what an individual neuron controls"
  • PV reversals occur when the force/acceleration term exceeds the velocity scaling term -- which is 'equivalent' to the triphasic burst pattern observed in EMG. Ergo monkeys should be trained to make faster movements.
  • The structure of your model biases the analysis -- for example, firingrate = b_0 + b_x*X + b_y*Y + b_m*M biases analysis for direction, not magnitude; the correct model is multiplicative: firingrate = b_0 + b_xm*X*M + b_ym*Y*M.
  • "Most of these puzzling phenomena arise from the feedforward control of muscle viscoelasticity."
  • Implicit assumption is that for the simple, overtrained, unperturbed movements typically studied, feedforward neural control is quite accurate. When you get spinal reflexes involved things may change. Likewise for projections from the red nucleus.
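The central point -- the population vector recovers the mixed position/velocity/force drive of eq. 1, not the movement velocity -- falls out of a few lines (the tuning coefficients and test vectors here are arbitrary):

```python
import numpy as np

n = 100
theta = 2 * np.pi * np.arange(n) / n
pd = np.stack([np.cos(theta), np.sin(theta)], axis=1)  # preferred directions

# Hypothetical tuning in the spirit of eq. 1:
# rate = b0 + pd . (kp*x + kv*v + kf*f)
b0, kp, kv, kf = 20.0, 1.0, 2.0, 1.5
x = np.array([1.0, 0.0])       # hand position
v = np.array([0.0, 1.0])       # hand velocity
f = np.array([1.0, 1.0])       # end-point force
drive = kp * x + kv * v + kf * f
rates = b0 + pd @ drive

# Population vector: preferred directions weighted by rate minus baseline.
pv = ((rates - b0)[:, None] * pd).sum(axis=0)

# For uniformly spaced PDs, sum_i pd_i pd_i^T = (n/2) I, so the PV recovers
# the mixed drive signal -- not the movement velocity v.
assert np.allclose(pv, (n / 2) * drive)
assert not np.allclose(pv / np.linalg.norm(pv), v / np.linalg.norm(v))
```

Whenever force and velocity point in different directions, the decoded PV is pulled away from the movement direction, which is exactly the distortion the paper documents.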

{154}
ref: OReilly-2006.02 tags: computational model prefrontal_cortex basal_ganglia date: 12-07-2011 04:11 gmt revision:1 [0] [head]

PMID-16378516[0] Making Working Memory Work: A Computational Model of Learning in the Prefrontal Cortex and Basal Ganglia

found via: http://www.citeulike.org/tag/basal-ganglia

____References____

[0] O'Reilly RC, Frank MJ, Making working memory work: a computational model of learning in the prefrontal cortex and basal ganglia.Neural Comput 18:2, 283-328 (2006 Feb)

{683}
ref: KAli-2004.03 tags: hippocampus memory model Dayan replay learning memory date: 03-06-2009 17:53 gmt revision:1 [0] [head]

PMID-14983183[0] Off-line replay maintains declarative memories in a model of hippocampal-neocortical interactions

  • (i'm skimming the article)
  • The neocortex acts as a probabilistic generative model. unsupervised learning extracts categories, tendencies and correlations from the statistics of the inputs into the [synaptic weights].
  • Their hypothesis is that hippocampal replay is required for maintenance of episodic memories; their model and simulations support this.
  • quote: "However, the computational goal of episodic learning is storing individual events rather than discovering statistical structure, seemingly rendering consolidation inappropriate. If initial hippocampal storage of the episode already ensures that it can later be recalled episodically, then, barring practical advantages such as storage capacity (or perhaps efficiency), there seems little point in duplicating this capacity in neocortex." makes sense!

____References____

[0] Káli S, Dayan P, Off-line replay maintains declarative memories in a model of hippocampal-neocortical interactions. Nat Neurosci 7:3, 286-94 (2004 Mar)

{675}
ref: Yelnik-2008.12 tags: basal ganglia model review date: 02-17-2009 17:47 gmt revision:0 [head]

PMID-18808769 Modeling the organization of the basal ganglia.

  • wow, a concrete and descriptive model! nice!
  • can't get at the PDF / fulltext though.

{673}
ref: Vasilaki-2009.02 tags: associative learning prefrontal cortex model hebbian date: 02-17-2009 03:37 gmt revision:2 [1] [0] [head]

PMID-19153762 Learning flexible sensori-motor mappings in a complex network.

  • They were looking at a task, presented to monkeys over 10 years ago, where two images were shown and the monkeys had to associate leftward and rightward saccades with them.
  • The associations between saccade direction and image was periodically reversed. Unlike humans, who probably could very quickly change the association, the monkeys required on the order of 30 trials to learn the new association.
  • Interestingly, whenever the monkeys made a mistake, they effectively forgot previous pairings. That is, after an error, the monkeys were as likely to make another error as they were to choose correctly, independent of the number of correct trials preceding the error. Strange!
  • They implement and test reward-modulated hebbian learning (RAH), where:
    • The synaptic weights are changed based on the product of the presynaptic activity and the postsynaptic activity minus the probability of joint presynaptic and postsynaptic activity. This 'minus' effect seems similar to that of TD learning?
    • The synaptic weights are soft-bounded.
    • There is a stop-learning criterion, where the weights are not positively updated if the total neuron activity is strongly positive or strongly negative. This allows the network to ultimately attain perfection (at some point the weights are no longer changed upon reward), and explains some of the asymmetry of the reward / punishment.
  • Their model perhaps does not scale well for large / very complicated tasks... given the presence of only a single reward signal. And the lack of attention / recall? Still, it fits the experimental data quite well.
  • They also note that for all the problems they study, adding more layers to the network does not significantly affect learning - neither the rate nor the eventual performance.
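One plausible reading of the update rule described above (reward-gated Hebbian with soft bounds; the exact functional form in the paper may differ, and the sizes here are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_out = 4, 2                    # image units -> saccade units (assumed)
w = 0.1 * rng.random((n_out, n_in))
eta, wmax = 0.05, 1.0

def rah_update(w, pre, post, reward, p_joint):
    """One reward-modulated Hebbian step: dw ~ reward * pre * (post - p_joint),
    with updates scaled by distance to the bounds (soft-bounding)."""
    hebb = np.outer(post - p_joint, pre)
    soft = np.where(hebb > 0, wmax - w, w)   # shrink updates near the bounds
    return np.clip(w + eta * reward * soft * hebb, 0.0, wmax)

pre = np.array([1.0, 0.0, 1.0, 0.0])         # presented image
post = np.array([1.0, 0.0])                  # chosen saccade
w2 = rah_update(w, pre, post, reward=1.0, p_joint=0.5)
```

The soft bound is what keeps the weights in range without a hard clip dominating the dynamics; the subtracted p_joint term is what makes the rule look TD-like, as noted above.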

{618}
ref: Nakahara-2001.07 tags: basal ganglia model cerebral cortex motor learning date: 10-05-2008 02:38 gmt revision:0 [head]

PMID-11506661[0] Parallel cortico-basal ganglia mechanisms for acquisition and execution of visuomotor sequences - a computational approach.

  • Interesting model of parallel motor/visual learning, the motor through the posterior BG (the middle posterior part of the putamen) and supplementary motor areas, and the visual through the dorsolateral prefrontal cortex and the anterior BG (caudate head and rostral putamen).
  • visual tasks are learned quicker due to the simplicity of their transform.
  • require a 'coordinator' to adjust control of the visual and motor loops.
  • basal ganglia-thalamocortical loops are highly topographic; motor, oculomotor, prefrontal and limbic loops have been found.
  • pre-SMA, not the SMA, is connected to the prefrontal cortex.
  • pre-SMA receives connections from the rostral cingulate motor area.
  • used actor-critic architecture, where the critic learns to predict cumulative future rewards from state and the actor produces movements to maximize reward (motor) or transformations (sensory). visual and motor networks are actors in visual and motor representations, respectively.
  • used TD learning, where TD error is encoded via SNc.
  • more later, not finished writing (need dinner!)
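The critic half of the actor-critic, with the TD error standing in for the SNc dopamine signal, in a minimal form (the chain task is my own toy example, not the paper's):

```python
import numpy as np

# Tiny TD(0) critic on a 3-state chain (0 -> 1 -> terminal, reward 1 at the
# end), with the TD error delta playing the role of the modeled SNc signal.
gamma, alpha = 0.9, 0.1
V = np.zeros(2)                      # values of the two non-terminal states

for _ in range(300):
    # transition 0 -> 1: no reward
    delta = 0.0 + gamma * V[1] - V[0]
    V[0] += alpha * delta
    # transition 1 -> terminal: reward 1
    delta = 1.0 + gamma * 0.0 - V[1]
    V[1] += alpha * delta

# V converges to the discounted future reward: V[1] -> 1, V[0] -> gamma.
print(V)
```

In the full model the same delta would also gate the actor updates in the visual and motor loops.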

____References____

[0] Nakahara H, Doya K, Hikosaka O, Parallel cortico-basal ganglia mechanisms for acquisition and execution of visuomotor sequences - a computational approach. J Cogn Neurosci 13:5, 626-47 (2001 Jul 1)

{80}
ref: Chan-2006.12 tags: computational model primate arm musculoskeletal motor_control Moran date: 04-09-2007 22:35 gmt revision:1 [0] [head]

PMID-17124337[0] Computational Model of a Primate Arm: from hand position to joint angles, joint torques, and muscle forces

ideas:

  • no study so far has been able to incorporate all of these variables (global hand position & velocity, joint angles, joint angular velocities, joint torques, muscle activations)
  • they have a 3D, 7DOF model that translates actual motion to optimized muscle activations.
  • knock the old center-out research (nice!)
  • 38 musculoskeletal-tendon units
  • past research: people have found correlations to both forces and higher-level parameters, like position and velocity. these must be transformed via inverse dynamics to generate a motor plan / actually move the arm.
  • used SIMM to optimize the joint locations to replicate actual movements...
  • assume that the torso is the inertial frame.
  • used infrared Optotrak 3020
  • their model is consistent - they can use the inverse model to calculate muscle activations, which when fed back into the forward model, results in realistic movements. still yet, they do not compare to actual EMG.
  • for working with the dynamic model of the arm, they used AUTOLEV
    • I wish I could figure out what the Kane method was; they seem to leverage it here. (Kane's method is a formalism for deriving multibody equations of motion using generalized speeds and partial velocities.)
  • their inverse model is pretty clever:
  1. take the present attitude/orientation & velocity of the arm, and using parts of the forward model, calculate the contributions from gravity & coriolis forces.
  2. subtract this from the torques estimated via M*A (moment of inertia times angular acceleration) to yield the contributions of the muscles.
  3. perturb each of the joints / DOF & measure the resulting arm motion, integrated over the same period as measurement
  4. form a linear equation with the linearized torque-responses on the left, and the muscle torque contributions on the right. Invert this equation to get the actual joint torques. (presumably the matrix spans row space).
  5. to figure out the muscle contributions, do the same thing - apply activation, scaled by the PCSA, to each muscle & measure the resulting torque (this is effectively the moment arm).
  6. take the resulting 38x7 matrix & pseudo-inverse it, with the constraint that none of the muscle activations are negative, yielding a somewhat well-specified muscle activation. Not all that complicated of a method.
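Steps 4-6 amount to a non-negative least-squares problem; a sketch with random stand-ins for the moment-arm matrix and torque vector (the real ones come from the perturbation measurements above):

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical stand-ins: a moment-arm matrix mapping 38 muscle activations
# to 7 joint torques (step 5), and a desired torque vector (step 4).
M = rng.standard_normal((7, 38))
tau = rng.standard_normal(7)

# Step 6: least-squares activations under a non-negativity constraint,
# solved here by simple projected gradient descent
# (scipy.optimize.nnls would do the same job).
act = np.zeros(38)
step = 1.0 / np.linalg.norm(M.T @ M, 2)   # 1 / Lipschitz constant
for _ in range(5000):
    act = np.clip(act - step * M.T @ (M @ act - tau), 0.0, None)

assert (act >= 0).all()
assert np.linalg.norm(M @ act - tau) < 1e-3
```

With 38 muscles and only 7 torque constraints the system is heavily underdetermined, so the non-negativity constraint is what picks out a physiologically sensible solution.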

____References____

[0] Chan SS, Moran DW, Computational model of a primate arm: from hand position to joint angles, joint torques and muscle forces. J Neural Eng 3:4, 327-37 (2006 Dec)

{108}
ref: bookmark-0 tags: STDP hebbian learning dopamine reward robot model ISO date: 0-0-2007 0:0 revision:0 [head]

http://www.berndporr.me.uk/iso3_sab/

  • idea: have a gating signal for the hebbian learning.
    • pure hebbian learning is unstable; it will lead to endless amplification.
  • method: use a bunch of resonators near sub-critically damped.
  • application: a simple 2-d robot that learns to seek food. not super interesting, but still good.
  • Uses ISO learning - Isotropic sequence order learning.
  • somewhat related: runbot!

{36}
ref: bookmark-0 tags: spiking neuron models learning SRM spike response model date: 0-0-2006 0:0 revision:0 [head]

http://diwww.epfl.ch/~gerstner/SPNM/SPNM.html

{81}
ref: Stapleton-2006.04 tags: Stapleton Lavine poisson prediction gustatory discrimination statistical_model rats bayes BUGS date: 0-0-2006 0:0 revision:0 [head]

PMID-16611830

http://www.jneurosci.org/cgi/content/full/26/15/4126