m8ta
[0] Mojarradi M, Binkley D, Blalock B, Andersen R, Ulshoefer N, Johnson T, Del Castillo L, A miniaturized neuroprosthesis suitable for implantation into the brain. IEEE Trans Neural Syst Rehabil Eng 11:1, 38-42 (2003 Mar)

[0] Musallam S, Corneil BD, Greger B, Scherberger H, Andersen RA, Cognitive control signals for neural prosthetics. Science 305:5681, 258-62 (2004 Jul 9)

[0] Jackson A, Mavoori J, Fetz EE, Long-term motor cortex plasticity induced by an electronic neural implant. Nature 444:7115, 56-60 (2006 Nov 2)

[0] Gandolfo F, Mussa-Ivaldi FA, Bizzi E, Motor learning by field approximation. Proc Natl Acad Sci U S A 93:9, 3843-6 (1996 Apr 30)
[1] Mussa-Ivaldi FA, Giszter SF, Vector field approximation: a computational paradigm for motor control and learning. Biol Cybern 67:6, 491-500 (1992)

[0] Churchland MM, Afshar A, Shenoy KV, A central source of movement variability. Neuron 52:6, 1085-96 (2006 Dec 21)

{1511}
ref: -2020 tags: evolution neutral drift networks random walk entropy population date: 04-08-2020 00:48 gmt revision:0 [head]

Localization of neutral evolution: selection for mutational robustness and the maximal entropy random walk

  • The take-away of the paper is that, with larger populations, random mutation and recombination make areas of the graph that take several steps to reach (in the figure, Maynard Smith's four-letter mutation word game) less likely to be visited.
  • This is because recombination makes the population adhere more closely to the 'giant' component. In Maynard Smith's game, this is the 2268 words, of 2405 meaningful words, that can be reached by successive single-letter changes.
  • The author extends this to van Nimwegen's 1999 paper on RNA genotype / secondary structure. It's not as bad as Maynard Smith's game, but still has much lower graph-theoretic entropy than the actual population.
    • He suggests that if the entropic size of the giant component is much smaller than its dictionary size, then populations are likely to be trapped there.

  • Interesting, but I'd prefer to have an expert peer-review it first :)

{1468}
ref: -2013 tags: microscopy space bandwidth product imaging resolution UCSF date: 06-17-2019 14:45 gmt revision:0 [head]

How much information does your microscope transmit?

  • Typical objectives 1x - 5x, about 200 Mpix!

{1454}
ref: -2011 tags: Andrew Ng high level unsupervised autoencoders date: 03-15-2019 06:09 gmt revision:7 [6] [5] [4] [3] [2] [1] [head]

Building High-level Features Using Large Scale Unsupervised Learning

  • Quoc V. Le, Marc'Aurelio Ranzato, Rajat Monga, Matthieu Devin, Kai Chen, Greg S. Corrado, Jeff Dean, Andrew Y. Ng
  • Input data: 10M random 200x200 frames from YouTube. Each video contributes only one frame.
  • Used local receptive fields, to reduce the communication requirements. 1000 computers, 16 cores each, 3 days.
  • "Strongly influenced by" Olshausen & Field {1448} -- but this is limited to a shallow architecture.
  • Lee et al 2008 show that stacked RBMs can model simple functions of the cortex.
  • Lee et al 2009 show that a convolutional DBN trained on faces can learn a face detector.
  • Their architecture: sparse deep autoencoder with
    • Local receptive fields: each feature of the autoencoder can connect to only a small region of the lower layer (i.e. not convolutional).
      • Purely linear layer.
      • More biologically plausible & allows the learning of invariances other than translational invariance (Le et al 2010).
      • No weight sharing means the network is extra large == 1 billion weights.
        • Still, the human visual cortex is about a million times larger in neurons and synapses.
    • L2 pooling (Hyvarinen et al 2009) which allows the learning of invariant features.
      • E.g. this is the square root of the sum of the squares of its inputs. Square root nonlinearity.
    • Local contrast normalization -- subtractive and divisive (Jarrett et al 2009)
  • Encoding weights W_1 and decoding weights W_2 are adjusted to minimize the reconstruction error, penalized by 0.1 * the sparse pooling-layer activation. The latter term encourages the network to find invariances.
  • minimize(W_1, W_2) \sum_{i=1}^m ( ||W_2 W_1^T x^{(i)} - x^{(i)}||^2_2 + \lambda \sum_{j=1}^k \sqrt{\epsilon + H_j (W_1^T x^{(i)})^2} )
    • H_j are the weights to the j-th pooling element, \lambda = 0.1; m examples; k pooling units.
    • This is also known as reconstruction Topographic Independent Component Analysis.
    • Weights are updated through asynchronous SGD.
    • Minibatch size 100.
    • Note deeper autoencoders don't fare consistently better.
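The objective above is easy to evaluate on toy data. A numpy sketch (my own, not the paper's code; the sizes, random weights, and the fixed pooling matrix H are all made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_feat, n_pool = 64, 32, 16              # toy sizes, not the paper's
x = rng.normal(size=n_in)                      # one flattened input example
W1 = 0.1 * rng.normal(size=(n_in, n_feat))     # encoding weights
W2 = 0.1 * rng.normal(size=(n_in, n_feat))     # decoding weights
H = np.abs(rng.normal(size=(n_pool, n_feat)))  # pooling weights (fixed here)
lam, eps = 0.1, 1e-6

h = W1.T @ x                                   # feature activations
recon = W2 @ h                                 # reconstruction W2 W1^T x
# L2-pooling sparsity term: sum_j sqrt(eps + sum_i H_ji * h_i^2)
pool = np.sqrt(eps + H @ h**2)
loss = np.sum((recon - x) ** 2) + lam * np.sum(pool)
print(loss)
```

In the paper both W_1, W_2, and H are learned (H constrained non-negative); here H is frozen just to show the shape of the loss.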

{1426}
ref: -2019 tags: Arild Nokland local error signals backprop neural networks mnist cifar VGG date: 02-15-2019 03:15 gmt revision:6 [5] [4] [3] [2] [1] [0] [head]

Training neural networks with local error signals

  • Arild Nokland and Lars H Eidnes
  • Idea is to use one+ supplementary neural networks to measure within-batch matching loss between transformed hidden-layer output and one-hot label data to produce layer-local learning signals (gradients) for improving local representation.
  • Hence, no backprop. Error signals are all local, and inter-layer dependencies are not explicitly accounted for (! I think).
  • L_{sim}: given a mini-batch of hidden-layer activations H = (h_1, ..., h_n) and a one-hot encoded label matrix Y = (y_1, ..., y_n),
    • L_{sim} = || S(NeuralNet(H)) - S(Y) ||^2_F (F: Frobenius norm)
    • NeuralNet() is a convolutional neural net (trained how?) 3*3, stride 1, reduces output to 2.
    • S() is the cosine similarity matrix, or correlation matrix, of a mini-batch.
  • L_{pred} = CrossEntropy(Y, W^T H) where W is a weight matrix, dim hidden_size * n_classes.
    • Cross-entropy is H(Y, W^T H) = -\Sigma_{i,j} [ Y_{i,j} log((W^T H)_{i,j}) + (1-Y_{i,j}) log(1-(W^T H)_{i,j}) ]
  • Sim-bio loss: replace NeuralNet() with an average-pooling and a standard-deviation op. Plus, the one-hot target is replaced with a random transformation of the same target vector.
  • Overall loss: 99% L_{sim}, 1% L_{pred}
    • Despite the unequal weighting, both seem to improve test prediction on all examples.
  • VGG like network, with dropout and cutout (blacking out square regions of input space), batch size 128.
  • Tested on all the relevant datasets: MNIST, Fashion-MNIST, Kuzushiji-MNIST, CIFAR-10, CIFAR-100, STL-10, SVHN.
  • Pretty decent review of similarity matching measures at the beginning of the paper; not extensive but puts everything in context.
    • See for example non-negative matrix factorization using Hebbian and anti-Hebbian learning in and Chklovskii 2014.
  • Emphasis put on biologically realistic learning, including the use of feedback alignment {1423}
    • Yet: this was entirely supervised learning, as the labels were propagated back to each layer.
    • More likely that biology is setup to maximize available labels (not a new concept).
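My reading of the similarity-matching loss, as a numpy sketch (the raw hidden activations here stand in for the output of the paper's supplementary NeuralNet, which I'm omitting; sizes are arbitrary):

```python
import numpy as np

def cosine_sim_matrix(M):
    """Row-wise cosine similarity matrix S() of a mini-batch."""
    Mn = M / np.linalg.norm(M, axis=1, keepdims=True)
    return Mn @ Mn.T

rng = np.random.default_rng(1)
n, d, n_classes = 8, 32, 4           # toy mini-batch
H = rng.normal(size=(n, d))          # hidden-layer activations (stand-in
                                     # for NeuralNet(H) in the paper)
labels = rng.integers(0, n_classes, size=n)
Y = np.eye(n_classes)[labels]        # one-hot label matrix

# L_sim: squared Frobenius distance between the similarity matrices
L_sim = np.sum((cosine_sim_matrix(H) - cosine_sim_matrix(Y)) ** 2)
print(L_sim)
```

The gradient of this quantity with respect to the local layer's weights is what replaces the backpropagated error signal.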

{1432}
ref: -0 tags: feedback alignment Arild Nokland MNIST CIFAR date: 02-14-2019 02:15 gmt revision:0 [head]

Direct Feedback alignment provides learning in deep neural nets

  • from {1423}
  • Feedback alignment is able to provide zero training error even in convolutional networks and very deep networks, completely without error back-propagation.
  • Biologically plausible: error signal is entirely local, no symmetric or reciprocal weights required.
    • Still, it requires supervision.
  • Almost as good as backprop!
  • Clearly written, easy to follow math.
    • Though the proof that feedback-alignment direction is within 90 deg of backprop is a bit impenetrable, needs some reorganization or additional exposition / annotation.
  • 3x400 tanh network tested on MNIST; performs similarly to backprop, if faster.
  • Also able to train very deep networks on MNIST, CIFAR-10, CIFAR-100: 100 layers (which actually hurts this task).

{1423}
ref: -2014 tags: Lillicrap Random feedback alignment weights synaptic learning backprop MNIST date: 02-14-2019 01:02 gmt revision:5 [4] [3] [2] [1] [0] [head]

PMID-27824044 Random synaptic feedback weights support error backpropagation for deep learning.

  • "Here we present a surprisingly simple algorithm for deep learning, which assigns blame by multiplying error signals by random synaptic weights."
  • Backprop multiplies error signals e by the weight matrix W^T, the transpose of the forward synaptic weights.
  • But the feedback weights do not need to be exactly W^T; any matrix B will suffice, so long as on average:
  • e^T W B e > 0
    • Meaning that the teaching signal B e lies within 90° of the signal used by backprop, W^T e.
  • Feedback alignment actually seems to work better than backprop in some cases. This relies on starting the weights very small (they can't be zero -- no output).

"Our proof says that weights W0 and W evolve to equilibrium manifolds, but simulations (Fig. 4) and analytic results (Supplementary Proof 2) hint at something more specific: that when the weights begin near 0, feedback alignment encourages W to act like a local pseudoinverse of B around the error manifold. This fact is important because if B were exactly W+ (the Moore-Penrose pseudoinverse of W), then the network would be performing Gauss-Newton optimization (Supplementary Proof 3). We call this update rule for the hidden units pseudobackprop and denote it by ∆hPBP = W+ e. Experiments with the linear network show that the angle ∆hFA ∠ ∆hPBP quickly becomes smaller than ∆hFA ∠ ∆hBP (Fig. 4b, c; see Methods). In other words feedback alignment, despite its simplicity, displays elements of second-order learning."
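The basic mechanism is easy to demonstrate on a linear toy problem (my own sketch, not the paper's code; sizes, scales, and learning rate are arbitrary). The hidden-layer update uses a fixed random B in place of backprop's W^T:

```python
import numpy as np

rng = np.random.default_rng(2)
n_in, n_hid, n_out = 10, 20, 5
A = 0.01 * rng.normal(size=(n_hid, n_in))   # input->hidden (small, not zero)
W = 0.01 * rng.normal(size=(n_out, n_hid))  # hidden->output
B = 0.1 * rng.normal(size=(n_hid, n_out))   # fixed random feedback matrix
T = rng.normal(size=(n_out, n_in))          # target linear map to learn

def mse():
    return np.mean((T - W @ A) ** 2)

init = mse()
lr = 0.005
for _ in range(10000):
    x = rng.normal(size=n_in)
    e = T @ x - W @ (A @ x)            # output error
    W += lr * np.outer(e, A @ x)       # delta rule at the output layer
    A += lr * np.outer(B @ e, x)       # feedback alignment: B e, not W^T e
print(init, mse())
```

The error drops without any weight transport; per the quoted passage, W comes to act roughly like a local pseudoinverse of B.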

{1431}
ref: -0 tags: betzig sparse and composite coherent lattices date: 02-14-2019 00:00 gmt revision:1 [0] [head]

Sparse and composite coherent lattices

  • Focused on the math:
    • Linear algebra to find the wavevectors from the Bravais primitive vectors;
    • Iterative maximization @ lattice points to find the electric field phase and amplitude
    • (Read paper for details)
  • High NA objective naturally converts plane wave to a spherical wave; this can be used to create spherically-constrained lattices at the focal point of objectives.

{1391}
ref: -0 tags: computational biology evolution metabolic networks andreas wagner genotype phenotype network date: 06-12-2017 19:35 gmt revision:1 [0] [head]

Evolutionary Plasticity and Innovations in Complex Metabolic Reaction Networks

  • João F. Matias Rodrigues, Andreas Wagner
  • Our observations suggest that the robustness of the Escherichia coli metabolic network to mutations is typical of networks with the same phenotype.
  • We demonstrate that networks with the same phenotype form large sets that can be traversed through single mutations, and that single mutations of different genotypes with the same phenotype can yield very different novel phenotypes
  • Entirely computational study.
    • Examines what is possible given known metabolic building-blocks.
  • Methodology: collated a list of all metabolic reactions in E. coli (726 reactions, excluding 205 transport reactions) out of 5870 possible reactions.
    • Then ran random-walk mutation experiments to see where the genotype + phenotype could move. Each genotype along the walk had to be viable on either a rich (many carbon sources) or minimal (glucose) growth medium.
    • Viability was determined by Flux-balance analysis (FBA).
      • "In our work we use a set of biochemical precursors from E. coli [47-49] as the set of required compounds a network needs to synthesize; by using linear programming to optimize the flux through a specific objective function, in this case the reaction representing the production of biomass precursors, we are able to know if a specific metabolic network is able to synthesize the precursors or not."
      • Used Coin-OR and Ilog to optimize the metabolic concentrations (I think?) per given network.
    • This included the ability to synthesize all required precursor biomolecules; see supplementary information.
    • "Viable" is highly permissive -- non-zero biomolecule concentration using FBA and linear programming.
    • Genomic distances = Hamming distance between binary vectors, where 1 = enzyme / reaction present and 0 = mutated off; distance 0 = identical genotypes, 1 = completely different genotypes.
  • Between pairs of viable genetic-metabolic networks, only a minority (30-40%) of reactions are essential,
    • a fraction which naturally increases with increasing carbon-source diversity.
    • When they go back and examine networks that can sustain life on any of (up to) 60 carbon sources, and again measure the distance from the original E. coli genome, they find this added robustness does not significantly constrain network architecture.

Summary thoughts: This is a highly interesting study, insofar that the authors show substantial support for their hypotheses that phenotypes can be explored through random-walk non-lethal mutations of the genotype, and this is somewhat invariant to the source of carbon for known biochemical reactions. What gives me pause is the use of linear programming / optimization when setting the relative concentrations of biomolecules, and the permissive criteria for accepting these networks; real life (I would imagine) is far more constrained. Relative and absolute concentrations matter.

Still, the study does reflect some robustness. I suggest that a good control would be to ‘fuzz’ the list of available reactions based on statistical criteria, and see if the results still hold. Then, go back and make the reactions un-biological or less networked, and see if this destroys the measured degrees of robustness.
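A toy flux-balance-analysis viability check in the spirit of the paper's method (my own sketch using scipy; a 2-metabolite, 3-reaction network, nothing like the real E. coli model). Viability = non-zero optimal biomass flux at steady state (S v = 0); a "mutation" clamps one reaction's flux to zero:

```python
import numpy as np
from scipy.optimize import linprog

# Stoichiometric matrix S (metabolites x reactions), steady state S v = 0.
# Reactions: R0 uptake -> A, R1 A -> B, R2 B -> biomass (export).
S = np.array([[1.0, -1.0,  0.0],    # metabolite A
              [0.0,  1.0, -1.0]])   # metabolite B
c = [0.0, 0.0, -1.0]                # linprog minimizes, so maximize v2

def max_biomass(knockout=None):
    bounds = [(0, 10), (0, 1000), (0, 1000)]   # uptake limited to 10
    if knockout is not None:
        bounds[knockout] = (0, 0)              # "mutate off" a reaction
    res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds)
    return -res.fun

print(max_biomass())             # uptake-limited: viable
print(max_biomass(knockout=1))   # A->B knocked out: biomass flux is zero
```

Scaling this idea to 726 reactions with required precursor fluxes is exactly what the authors' Coin-OR / ILOG runs did.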

{1335}
ref: -0 tags: concentration of monoamine dopamine serotonin and norepinephrine in the brain date: 04-28-2016 19:38 gmt revision:3 [2] [1] [0] [head]

What are the concentrations of the monoamines in the brain? (Purpose: estimate the required electrochemical sensing area & efficiency)

  • Dopamine: 100 uM - 1 mM local, extracellular.
    • PMID-17709119 The Yin and Yang of dopamine release: a new perspective.
  • Serotonin (5-HT): 100 ng/g, 0.5 uM, whole brain (not extracellular!).
  • Norepinephrine / noradrenaline: 400 ng/g, 2.4 uM, again whole brain.
    • PMID-11744005 An enriched environment increases noradrenaline concentration in the mouse brain.
    • Also has whole-brain extracts for DA and 5HT, roughly:
      • 1200 ng/g DA
      • 400 ng/g NE
      • 350 ng/g 5-HT
  • So, one could imagine ~100 uM transient concentrations for all 3 monoamines.
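The ng/g to uM conversions above are easy to check, assuming ~1 g of tissue occupies ~1 mL (my arithmetic; molecular weights are the standard free-base values):

```python
# Convert whole-brain monoamine content from ng/g tissue to uM,
# assuming 1 g tissue ~ 1 mL. uM = (ng 1e-9 g / MW) per 1e-3 L * 1e6,
# which reduces to ng_per_g / MW.
mw = {'DA': 153.18, 'NE': 169.18, '5-HT': 176.22}   # g/mol, free base
content = {'DA': 1200, 'NE': 400, '5-HT': 350}      # ng/g, whole brain

for amine, ng_per_g in content.items():
    print(f"{amine}: {ng_per_g / mw[amine]:.1f} uM")
```

This reproduces the 2.4 uM figure for NE and the ~0.5 uM figure for 5-HT at 100 ng/g.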

{1318}
ref: -0 tags: standard enthalpy chemicals list pdf date: 06-25-2015 00:09 gmt revision:1 [0] [head]

Standard thermodynamic properties of chemical substances

{969}
ref: Weinberger-2009.09 tags: STN DBS PD oscillations beta band review date: 03-05-2012 16:32 gmt revision:5 [4] [3] [2] [1] [0] [head]

PMID-19460368[0] Pathological subthalamic nucleus oscillations in PD: can they be the cause of bradykinesia and akinesia?

  • Review of {1075}
  • Suppression of beta-band is correlated with the improvement in combined measures of bradykinesia and rigidity.
    • This does not mean that the oscillations cause rigidity! only that L-DOPA affects both. Focused entirely on Beta band.
  • Previously shown that the degree of beta oscillatory activity in the STN of PD patients correlates with the patients' benefit from dopaminergic medications, but not with baseline motor deficits. (the treatment but not the symptoms)
  • Levy 2000, 2001 for the existence of oscillatory activity in the STN & globus pallidus.
  • Prominent beta band activity in GPi & STN LFP. [Levy 2000, Levy 2001 , Brown 2001]
  • Short train HFS of the STN has been shown to decrease STN-cortex coherence for up to 25s after application. [Wingeier 2006] [Kuhn 2008]
    • Others disagree. [Foffani et al., 2006] and [Rossi et al., 2008] ).
  • In a response task, decrease in beta-band activity negatively correlates with reaction time. [Kuhn 2004]
    • Beta suppression is also correlated with increased motor planning [Williams 2005]
  • Beta band activity also present in healthy monkey striatum, human putamen, and cortex. (I wonder how? many references.)
  • Yet, to date there is no clear evidence that the degree of synchronization in the beta band directly accounts for the motor deficits in PD.
  • It has been recently shown that the percentage of neurons exhibiting oscillatory firing in the beta range correlates well (r squared = 0.62) with the degree by which PD motor symptoms improved after dopamine replacement therapy (Weinberger et al. 2006 PMID-17005611)
  • It should be noted that decrease in beta-band activity may be caused by -- rather than causal of -- decreased akinesia and rigidity.
    • That said, in rats treated with 6-OHDA, an increase in beta band activity took several days to appear after drug administration, and appeared at the same time as clinical symptoms.
  • Interesting! Activity-dependent plasticity was remarkably enhanced with a low dose of levodopa in the basal ganglia output of SNr and that there was a surprisingly good correlation (r squared = 0.81) between symptoms and the level of synaptic plasticity (Prescott et al., 2009) [2].
  • Other theory: exaggerated synchrony in the basal ganglia limits the ability to encode meaningful information, as all neurons are entrained to the same frequency hence undifferentiated.
    • Thought beta band may just be a non-coding resting state. Synaptic plasticity goes awry, and all neurons become entrained. Explains bradykinesia but not rigidity.

____References____

[0] Weinberger M, Hutchison WD, Dostrovsky JO, Pathological subthalamic nucleus oscillations in PD: can they be the cause of bradykinesia and akinesia? Exp Neurol 219:1, 58-61 (2009 Sep)
[1] Kühn AA, Tsui A, Aziz T, Ray N, Brücke C, Kupsch A, Schneider GH, Brown P, Pathological synchronisation in the subthalamic nucleus of patients with Parkinson's disease relates to both bradykinesia and rigidity. Exp Neurol 215:2, 380-7 (2009 Feb)
[2] Prescott IA, Dostrovsky JO, Moro E, Hodaie M, Lozano AM, Hutchison WD, Levodopa enhances synaptic plasticity in the substantia nigra pars reticulata of Parkinson's disease patients. Brain 132:Pt 2, 309-18 (2009 Feb)

{1125}
ref: -0 tags: active filter design Netherlands Gerrit Groenewold date: 02-17-2012 20:27 gmt revision:0 [head]

IEEE-04268406 (pdf) Noise and Group Delay in Active Filters

  • Relevant conclusion: the output noise spectrum is exactly proportional to the group delay.
  • Poschenrieder established a relationship between group delay and energy stored in a passive filter.
  • Fettweis proved from this that the noise generation of an active filter which is based on a passive filter is approximately proportional to the group delay. (!!!)

{806}
ref: work-0 tags: gaussian random variables mutual information SNR date: 01-16-2012 03:54 gmt revision:26 [25] [24] [23] [22] [21] [20] [head]

I've recently tried to determine the bit-rate conveyed by one Gaussian random process about another, in terms of the signal-to-noise ratio between the two. Assume x is the known signal to be predicted, and y is the prediction.

Let's define SNR(y) = Var(x) / Var(err), where err = x - y. Note this is a ratio of powers; the conventional SNR is SNR_{dB} = 10 \log_{10} [ Var(x) / Var(err) ]. Var(err) is also known as the mean-squared error (MSE).

Now, Var(err) = \sum (x - y - \bar{err})^2 = Var(x) + Var(y) - 2 Cov(x,y); assume x and y have unit variance (or scale them so that they do), then

(2 - SNR(y)^{-1}) / 2 = Cov(x,y)

We need the covariance because the mutual information between two jointly Gaussian zero-mean variables can be defined in terms of their covariance matrix (see http://www.springerlink.com/content/v026617150753x6q/ ). Here Q is the covariance matrix,

Q = [ Var(x), Cov(x,y) ; Cov(x,y), Var(y) ]

MI = \frac{1}{2} \log \frac{Var(x) Var(y)}{\det(Q)}

With unit variances, \det(Q) = 1 - Cov(x,y)^2

Then MI = -\frac{1}{2} \log_2 [ 1 - Cov(x,y)^2 ]

or MI = -\frac{1}{2} \log_2 [ SNR(y)^{-1} - \frac{1}{4} SNR(y)^{-2} ]

This agrees with intuition. If we have a SNR of 10db, or 10 (power ratio), then we would expect to be able to break a random variable into about 10 different categories or bins (recall stdev is the sqrt of the variance), with the probability of the variable being in the estimated bin to be 1/2. (This, at least in my mind, is where the 1/2 constant comes from - if there is gaussian noise, you won't be able to determine exactly which bin the random variable is in, hence log_2 is an overestimator.)

Here is a table with the respective values, including the amplitude (not power) ratio representation of SNR.

SNR (dB) | Amp. ratio | MI (bits)
      10 |        3.1 |       1.6
      20 |         10 |       3.3
      30 |         31 |       5.0
      40 |        100 |       6.6
      90 |       31e3 |        15

Note that at 90 dB you get about 15 bits of resolution. This makes sense, as 16-bit DACs and ADCs typically have 96 dB SNR. Good.

Now, to get the bitrate, you take the SNR, calculate the mutual information, and multiply it by the bandwidth (not the sampling rate in a discrete time system) of the signals. In our particular application, I think the bandwidth is between 1 and 2 Hz, hence we're getting 1.6-3.2 bits/second/axis, hence 3.2-6.4 bits/second for our normal 2D tasks. If you read this blog regularly, you'll notice that others have achieved 4bits/sec with one neuron and 6.5 bits/sec with dozens {271}.
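The SNR-to-MI conversion is easy to check numerically (my own sketch; the 10 dB row comes out 1.7 rather than the table's 1.6, a rounding difference):

```python
import math

def mi_bits(snr_db):
    """MI (bits) between two unit-variance jointly Gaussian signals,
    given the power SNR of the prediction in dB."""
    snr = 10 ** (snr_db / 10)      # dB -> power ratio
    # MI = -1/2 log2( SNR^-1 - 1/4 SNR^-2 ), from the derivation above
    return -0.5 * math.log2(1 / snr - 1 / (4 * snr * snr))

for db in (10, 20, 30, 40, 90):
    print(db, round(10 ** (db / 20), 1), round(mi_bits(db), 2))
```

Multiplying mi_bits by the signal bandwidth then gives the bit-rate, as described above.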

{316}
ref: Mojarradi-2003.03 tags: MEMS recording telemetry Normann Andersen wireless date: 01-15-2012 04:29 gmt revision:2 [1] [0] [head]

PMID-12797724[0] A miniaturized neuroprosthesis suitable for implantation into the brain.

  • Standard tricks: cascode configuration, deep-ohmic PMOS devices for resistive feedback, wide PMOS weak-inversion input stage for good transconductance and low noise.
  • Variable power for variable noise levels & bandwidths.
  • Wireless transceiver and power stage are in early concept stages.

____References____

{322}
ref: Musallam-2004.07 tags: cognitive BMI Musallam Andersen PRR MIP date: 01-08-2012 23:13 gmt revision:5 [4] [3] [2] [1] [0] [head]

PMID-15247483[0] Cognitive control signals for neural prosthetics

  • decode intended target from 200 to 1100ms of memory period (reward on correct, etc).
  • got good success rates with relatively few neurons (like 8 for 8 targets) -- yet decode rates were not that good, not at all as good as Fetz or Schmidt.
  • used the parietal reach region (PRR), a subsection of posterior parietal cortex (PPC), which represents the goals of a reach in visual coordinates. In the experiment, they implanted the medial intraparietal area (MIP).
    • It encodes the intended goal rather than the trajectory to achieve that goal.
    • PMd also seems to encode planning activity, though less is known about it.
  • used an adaptive database to map neuronal activity to targets; eventually, the database contained only (correct) brain-control trials.
  • neuronal responses were recorded from parietal reach region (PRR) with 64 microwire electrodes in 4 monkeys, plus 32 microwire electrodes in PMd
  • monkeys were trained to fixate on the center of the screen during the task, though free fixation was also tested and seemed to work ok.
  • monkeys had to press cue, fixate, observe target location, wait ~2 sec, and move to the (remembered) target location when cue disappeared.
  • they use a static or continually updated 'database' for predicting which of four targets the monkey wants to go to during the instructed delay task.
  • able to predict with moderate accuracy the expected value of the target as well as its (discrete) position.
  • predictions were made during the delay period while there was no motor movement.
  • predictions worked equally well for updated and static databases.
  • monkeys were able to increase their performance on the BMI trials over the course of training.
  • reward type or size modulated the tuning of BMI neurons in the expected way, though aversive stimuli did not increase the tuning - suggesting that the tuning is not a function of attention (maybe).
  • the database consisted of 900ms of spike recordings starting 200ms after cue, for 30 reach trials per target. Spike trains were projected onto Haar wavelets (sorta like a binary tree), and the filter coefficients were used to describe P(r), the probability of response, and P(r|s), the probability of response given the target. Then they used Bayes' rule (P(r) and P(r|s) were approximated with histograms, I think) to find P(s|r) - a discrete function - of which it is easy to find the maximum.
  • adding more trials offline improved the decode performance.
  • supporting online material.
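The histogram-based Bayes decode can be sketched like this (my toy with a single scalar feature per trial; the paper used vectors of Haar coefficients, and spike trains rather than a simulated Gaussian feature):

```python
import numpy as np

rng = np.random.default_rng(3)
n_targets, n_trials, n_bins = 4, 30, 8

# Simulate one scalar feature (think: a single Haar coefficient) per trial,
# with a target-dependent mean.
train = {s: rng.normal(loc=s, size=n_trials) for s in range(n_targets)}
edges = np.linspace(-3, n_targets + 2, n_bins + 1)

# P(r|s) estimated by histograms over the training trials, with
# add-one smoothing so no bin has zero probability.
p_r_given_s = np.array(
    [np.histogram(train[s], bins=edges)[0] + 1.0 for s in range(n_targets)])
p_r_given_s /= p_r_given_s.sum(axis=1, keepdims=True)

def decode(r):
    """argmax_s P(s|r); with a flat prior this is argmax_s P(r|s)."""
    b = int(np.clip(np.digitize(r, edges) - 1, 0, n_bins - 1))
    return int(np.argmax(p_r_given_s[:, b]))

print(decode(0.1), decode(2.9))
```

The adaptive-database idea then amounts to re-estimating these histograms from (correct) brain-control trials as they accumulate.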

PMID-15491902 Cognitive neural prosthetics

  • LFPs are easier to record and may last longer (but they are not as 'sexy').
  • suggest future electrodes will move automatically, perhaps with a piezo drive.
  • PRR receives direct visual projections & codes for reaches in visual coordinates relative to the current direction of gaze.
  • PRR can hold the plan for a movement in short-term memory.
  • 16 neurons peak..?
  • In area LIP of PPC Platt and Glimcher PMID-10421364 found cells that code the expected value of rewards.
    • A 20Hz beta-band oscillation indicated the behavioral state of the animal. While planning for a saccade it slowly increased, whereas at the time of movement it dramatically increased in amplitude.
    • LFP was better than spikes for a state decode.

____References____

{1004}
ref: Dabrowski-2003.1 tags: ASIC neural recording poland neuroplat pseudoresistor date: 01-03-2012 15:24 gmt revision:4 [3] [2] [1] [0] [head]

IEEE-1351853 (pdf) Development of integrated circuits for readout of microelectrode arrays to image neuronal activity in live retinal tissue

  • Use the Miller effect to increase capacitance for the HPF.
  • Resistors are long-channel PMOS, 3um / 500um, biased in the linear region @ 0V.
    • Transistors must be in the linear region: implement gate-following of the input signal. By varying this gate voltage, one can change the filter characteristics.
  • Amplifier looks rather clever.
  • 7uV RMS input-referred noise.

____References____

Dabrowski W, Grybos P, Hottowy P, Skoczen A, Swientek K, Bezayiff N, Grillo AA, Kachiguine S, Litke AM, Sher A, Nuclear Science Symposium Conference Record, 2003 IEEE, vol. 2, 956-960 (2003)

{984}
ref: ODoherty-2011 tags: Odoherty Nicolelis ICMS stimulation randomly patterned gamma distribution date: 01-03-2012 06:55 gmt revision:1 [0] [head]

IEEE-6114258 (pdf) Towards a Brain-Machine-Brain Interface: Virtual Active Touch Using Randomly Patterned Intracortical Microstimulation.

  • Key result: monkeys can discriminate between constant-frequency ICMS and aperiodic pulses, hence can discriminate some fine temporal aspects of ICMS.
  • Also discussed blanking methods for stimulating and recording at the same time (on different electrodes, using the randomized stimulation patterns).

____References____

O'Doherty J, Lebedev M, Li Z, Nicolelis M, Towards a Brain-Machine-Brain Interface: Virtual Active Touch Using Randomly Patterned Intracortical Microstimulation. IEEE Trans Neural Syst Rehabil Eng PP:99, 1 (2011)

{1002}
ref: Fan-2011.01 tags: TBSI wireless recordings system FM modulation multiplexing poland date: 01-03-2012 00:55 gmt revision:5 [4] [3] [2] [1] [0] [head]

PMID-21765934[0] A wireless multi-channel recording system for freely behaving mice and rats.

  • Light enough that rats can use it: 4.5g
  • 15 or 32 channels.
  • Good list of the competition; they note Szuts et al [31], [1], {1003}, [2], {1004}, {1005}
  • Why are there so many authors?
  • Morizio and Henry Yin last authors.

____References____

[0] Fan D, Rich D, Holtzman T, Ruther P, Dalley JW, Lopez A, Rossi MA, Barter JW, Salas-Meza D, Herwik S, Holzhammer T, Morizio J, Yin HH, A wireless multi-channel recording system for freely behaving mice and rats. PLoS One 6:7, e22033 (2011)
[1] no Title no Source no Volume no Issue no Pages no PubDate
[2] Szuts TA, Fadeyev V, Kachiguine S, Sher A, Grivich MV, Agrochão M, Hottowy P, Dabrowski W, Lubenov EV, Siapas AG, Uchida N, Litke AM, Meister M, A wireless multi-channel neural amplifier for freely moving animals. Nat Neurosci 14:2, 263-9 (2011 Feb)

{968}
ref: Bassett-2009.07 tags: Weinberger cognitive efficiency beta band neuroimaging EEG task performance optimization network size effort date: 12-28-2011 20:39 gmt revision:1 [0] [head]

PMID-19564605[0] Cognitive fitness of cost-efficient brain functional networks.

  • Idea: smaller, tighter networks are correlated with better task performance
    • working memory task in normal subjects and schizophrenics.
  • Larger networks operate with higher beta frequencies (more effort?) and show less efficient task performance.
  • Not sure about the noisy data, but v. interesting theory!

____References____

[0] Bassett DS, Bullmore ET, Meyer-Lindenberg A, Apud JA, Weinberger DR, Coppola R, Cognitive fitness of cost-efficient brain functional networks. Proc Natl Acad Sci U S A 106:28, 11747-52 (2009 Jul 14)

{69}
ref: Jackson-2006.11 tags: Fetz Andrew Jackson BMI motor learning microstimulation date: 12-16-2011 04:20 gmt revision:6 [5] [4] [3] [2] [1] [0] [head]

PMID-17057705 Long-term motor cortex plasticity induced by an electronic neural implant.

  • used an implanted neurochip.
  • record from site A in motor cortex (encodes movement A)
  • stimulate site B of motor cortex (encodes movement B)
  • after a few days of learning, stimulating A generated a mixture of AB- then B-type movements.
  • changes only occurred when stimuli were delivered within 50ms of recorded spikes.
  • quantified with measurement of radial/ulnar deviation and flexion/extension of the wrist.
  • stimulation at the target (site B) was completely sub-threshold (40 uA)
  • distance between recording and stimulation site did not matter.
  • they claim this is from Hebb's rule: if one neuron fires just before another (e.g. it contributes to the second's firing), then the connection between the two is strengthened. However, I originally thought this was because site A was controlling the Betz cells in B, therefore for consistency A's map was modified to agree with its /function/.
  • repetitive high-frequency stimulation has been shown to expand movement representations in the motor cortex of rats (hmm.. interesting)
  • motor cortex is highly active in REM

____References____

{300}
ref: Gandolfo-1996.04 tags: learning approximation kernel field Bizzi Gandolfo date: 12-07-2011 03:40 gmt revision:1 [0] [head]

Motor learning by field approximation.

  • PMID-8632977[0]
    • studied the generalization properties of force compensation in humans.
    • learning to compensate only occurs in regions of space where the subject actually experienced the force.
    • they posit that the CNS builds an internal model of the external world in order to predict and compensate for it. what a friggn surprise! eh well.
  • PMID-1472573[1] Vector field approximation: a computational paradigm for motor control and learning
    • Recent experiments in the spinalized frog (Bizzi et al. 1991) have shown that focal microstimulation of a site in the premotor layers in the lumbar grey matter of the spinal cord results in a field of forces acting on the frog's ankle and converging to a single equilibrium position
    • they propose that the process of generating movements is the process of combining basis functions/fields. these feilds may be optimized based on making it easy to achieve goals/move in reasonable ways.
  • alternatly, these basis functions could make movements invariant under a number of output transformations. yes...
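The combination idea is easy to sketch. Below, two hypothetical convergent fields (plain linear springs - not the actual measured frog fields) are summed with scalar weights; the summed field again converges, to an intermediate equilibrium:

```python
def basis_field(eq, x, stiffness=1.0):
    """A convergent 'force field' pulling toward equilibrium point eq
    (a linear spring here; the frog fields were roughly convergent)."""
    return [stiffness * (e - xi) for e, xi in zip(eq, x)]

def combined_field(bases, weights, x):
    """Movement generation as a weighted sum of basis fields."""
    f = [0.0, 0.0]
    for (eq, k), c in zip(bases, weights):
        fx = basis_field(eq, x, k)
        f = [a + c * b for a, b in zip(f, fx)]
    return f

# two spring-like fields with equal weights: the summed field has a
# single equilibrium at the average of the two equilibria.
bases = [((1.0, 0.0), 1.0), ((0.0, 1.0), 1.0)]
print(combined_field(bases, [0.5, 0.5], [0.5, 0.5]))  # [0.0, 0.0]
```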

____References____

{474}
hide / / print
ref: bookmark-0 tags: EMG SNR bits delsys differential amplifier bandwidth date: 12-07-2011 03:15 gmt revision:4 [3] [2] [1] [0] [head]

http://delsys.com/KnowledgeCenter/FAQ_EMGSensor.html

  • on a very good EMG recording the signal-to-noise ratio is 65 dB ≈ 11 bits
  • dynamic range of 5 µV to 10 mV.
  • differential measurement is essential.
  • googling 'EMG bandwidth' yields something around 20–500 Hz. study of this question
  • delsys wireless EMG system & logger - uses WLAN to transmit the data (up to 16 channels), passband 20–450 Hz, has a QVGA screen, 1 GB removable storage.
  • also see "grasp recognition from myoelectric signals" images/474_1.pdf
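The dB figures above interconvert easily (amplitude quantities, so 20 dB per decade). A quick sanity check, assuming the 65 dB figure is an amplitude SNR:

```python
import math

def snr_db_to_bits(snr_db):
    """Effective bits from an amplitude SNR in dB:
    ratio = 10^(dB/20), bits = log2(ratio)."""
    return math.log2(10 ** (snr_db / 20.0))

def dynamic_range_db(v_min, v_max):
    """Amplitude dynamic range in dB."""
    return 20.0 * math.log10(v_max / v_min)

print(round(snr_db_to_bits(65.0), 1))           # 10.8 -> ~11 bits
print(round(dynamic_range_db(5e-6, 10e-3), 1))  # 5 uV .. 10 mV -> 66.0 dB
```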

{698}
hide / / print
ref: bookmark-0 tags: typing keyboard bitrate probability reaching bandwidth date: 12-07-2011 02:35 gmt revision:3 [2] [1] [0] [head]

From Scott MacKenzie:

{914}
hide / / print
ref: Gandolfo-2000.02 tags: Gandolfo Bizzi dynamic environment force fields learning motor control MIT M1 date: 12-02-2011 00:10 gmt revision:1 [0] [head]

PMID-10681435 Cortical correlates of learning in monkey adapting to a new dynamical environment.

{886}
hide / / print
ref: -0 tags: iceland mountains lost adventure date: 07-08-2011 23:00 gmt revision:2 [1] [0] [head]

Just got back from a trek through the volcanic mountains of Iceland. The landscape is extremely dramatic; though it’s not nearly the scale of Alaska or the Rockies, it presents itself as such, as the largest plant is thick moss or stubble grass (in places); everything is bare, the vistas unobstructed. (What do you do if you get lost in an Icelandic forest? Stand up.) There are no trees for size reference; indeed, it seemed so alien for a bit that I was amazed that I could still breathe the air.

The first day of exploring I had a pretty serious scare. Was walking, very light and fast as usual, with just enough to protect against rain, just enough food to keep me from eating moss. I elected to take the less-popular route back, which led across a high muddy (no plants) gray (all the snow is ashen) scree-filled plain, to a hunchback of a mountain, and down into the river valley where I was camped. The first part was fine, though searingly desolate and wind-shorn. The problem came when I rounded the final peak and discovered that the trail was covered by a gray wind-sculpted snowmass. It was at an angle too steep for my shit shoes and lack of ice-tools, and the slopes everywhere else were critical: free a rock and it will tumble 100'. Free a Tim and he will also tumble 100 feet .. or more. I didn’t want to hike the 17 km back the way I came without an attempt at re-finding the trail, so I set off, gingerly, over the ice and gravel, alone.

The ash actually saved me, as it coated the snowfields, and made them passable in the late late afternoon warmth (the sun ‘sets’ around midnight and rises at 2.). This led to a pinnacle from which I could *see* the campsite! But there were only slide-to-death venues for descent, until I noticed a set of footprints heading up a steep snowbank to my left. I was elated - a trace of humanity! I set off with renewed vigor, and did a semi-controlled fall down the ice; the foot-holes kept me under control.

But they were not foot-holes. I noticed quickly that the holes were irregular in spacing and shape, and shortly after I passed the steepest wind-sculpted section of snowbank, realized that they were made by a large rock falling off the mountain, picking up speed as it dented the ice shell. I kept going, mostly because I could not stop, though eventually it leveled off. Had that rock not fallen, I don’t think I would have had the psychological wherewithal to try the slope, never mind foot purchase to slow my descent.

As a stream gets broader its slope generally decreases, given constant resistance from the rock / earth, so as I descended the valleys broadened and became less treacherous. I made the remainder of the way back on a riverbed, albeit with wet feet. It was exciting, and I felt fully in the world as I was trying to get off that trail-less mountain, but I’m not sure if I want to do it again; the following day while hiking up neighboring peaks I felt a heightened sense of caution, vertigo.

{882}
hide / / print
ref: -0 tags: switzerland travel plans date: 06-09-2011 04:56 gmt revision:6 [5] [4] [3] [2] [1] [0] [head]

  • 20 -- Land in Geneva June 10 mid-morning. Poke around, have lunch. Go to Lausanne. Stay at http://lausanne-guesthouse.ch/reservation/price_en.php
    • 40 miles (about)
    • Hostel expensive - $65 pp pn, reserved must cancel, charged on arrival (or even if we don't arrive).
    • Reserved Camping Vidy instead. half the price. Will stay in a bungalow. Must check to see if they have blankets & sheets.
  • 21 -- Lausanne to Fribourg. Should stop at Montreux along the way. Fribourg has an excellent music hall, Frison. Stay here?
    • 44 miles
    • Hostel reserved, paid.
  • 22 -- Fribourg to Neuchatel (must be visited, according to S.) to Bern. Put bikes on train to Interlaken. Once there, stay @ http://www.villa.ch/
    • 50 miles, depends.
    • Hostel reserved 2 nights, different rooms & rates for 2 nights. payment upon arrival.
  • 23 -- Stay in Interlaken. Looks gorgeous! We can go up Jungfrau, or other mountains, with the caveat that warm clothes will be needed.
  • 24 -- Interlaken to Luzern. Stay in this joint:
    • 43 miles, probably some serious hills, being that it crosses the Alps (not that Ireland was flat)
    • Hostel reserved, payment upon arrival.
  • 25 -- Luzern to Zurich. Stay here.
    • 33 miles direct shot, though we could go by the lake and make it 40 miles. Perhaps it will be warm enough to swim while there?
    • Hostel reserved, payment upon arrival.
  • 26 -- Zurich. You have an early flight; mine is at 3 pm.

{872}
hide / / print
ref: -0 tags: hike bynum wandering bushwacking date: 01-17-2011 16:32 gmt revision:1 [0] [head]

Excellent hike in Bynum NC starting at the old homestead down there; crossed a number of random properties, entered and left Haw River State Park, saw a good number of decomposing farmhouses, all on a gorgeous day. Route was taken clockwise; the jog at the end away from the main trail was to avoid a hunter in the main fields. This forced us to do a good bit of bushwhacking and gave the opportunity to meet some local horses, goats, and runners. Total distance about 9 miles.

{810}
hide / / print
ref: -0 tags: circular polarized antenna microstrip ultrawideband date: 02-03-2010 21:30 gmt revision:1 [0] [head]

excellent! Ultra-wideband circular polarized microstrip archimedean spiral

{805}
hide / / print
ref: notes-0 tags: ice hydrophones glissando recording sound date: 01-19-2010 16:41 gmt revision:0 [head]

http://silentlistening.wordpress.com/2008/05/09/dispersion-of-sound-waves-in-ice-sheets/ -- amazing!

{783}
hide / / print
ref: Chae-2009.08 tags: wireless neural recording UWB Chinese ultra-wideband RF date: 10-12-2009 21:07 gmt revision:2 [1] [0] [head]

PMID-19435684[0] A 128-channel 6 mW wireless neural recording IC with spike feature extraction and UWB transmitter.

  • The title basically says it all.
  • Great details - all of the sub-circuits needed.
  • Really impressive work!

____References____

[0] Chae MS, Yang Z, Yuce MR, Hoang L, Liu W, A 128-channel 6 mW wireless neural recording IC with spike feature extraction and UWB transmitter.IEEE Trans Neural Syst Rehabil Eng 17:4, 312-21 (2009 Aug)

{734}
hide / / print
ref: -0 tags: Vanity Fair American dream control theory in politics and society date: 05-03-2009 17:11 gmt revision:3 [2] [1] [0] [head]

Rethinking the American Dream by David Kamp

  • check out the lights in the frame at the bottom, and the kid taking a picture center-right (image courtesy of Kodak, hence.)

  • (quote:) "Still, we need to challenge some of the middle-class orthodoxies that have brought us to this point—not least the notion, widely promulgated throughout popular culture, that the middle class itself is a soul-suffocating dead end."
    • Perhaps they should teach expectations management in school? Sure, middle class should never die - I hope it will grow.
  • And yet, this is still rather depressing - we all want things to continuously, exponentially get better. I actually think this is almost possible; we just need to reason carefully about how this could happen: what changes in manufacturing, consumption, energy generation, transportation, and social organization would gradually effect widespread improvement.
    • Some time in individual lives (my own included!) is squandered in pursuit of the small pleasures which would be better used for purposeful endeavor. Seems we need to resurrect the idea of sacrifice towards the future (and it seems this meme itself is increasingly popular).
  • Realistically: nothing is for free; we are probably only enjoying this more recent economic boom because energy (and i mean oil, gas, coal, hydro, nuclear etc), which drives almost everything in society, is really cheap. If we can keep it this cheap, or make it cheaper through judicious investment in new technologies (and perhaps serendipity), then our standard of living can increase. That is not to say that it will - we need to put the caloric input to the economy to good use.
    • Currently our best system for enacting a general goal of efficiency is market-based capitalism. Now, the problem is that this is an inherently unstable system: there will be cheaters e.g. people who repackage crap mortgages as safe securities, companies who put lead paint on children's toys, companies who make unsafe products - and the capitalistic system, in and of itself, is imperfect at regulating these cheaters (*). Bureaucracy may not be the most efficient use of money or people's lives, but again it seems to be the best system for regulating/auditing cheaters. Examined from a control feedback point-of-view, bureaucracy 'tries' to control axes which pure capitalism does not directly address.
    • (*) Or is it? The largest problem with using consumer (or, more generally, individual) choice as the path to audit & evaluate production is that there is a large information gradient or knowledge difference between producers and consumers. It is the great (white?) hope of the internet generation that we can reduce this gradient, democratize information, and have everyone making better choices.
      • In this way, I'm very optimistic that things will get continuously better. (But recall that optimality-seeking requires time/money/energy - it ain't going to be free, and it certainly is not going to be 'natural'. Alternately, unstable-equilibrium-maintaining (servoing! auditing!) requires energy; democracy's big trick is that it takes advantage of a normal human behavior, bitching, as the feedstock. )
  • Finally (quote:) "I’m no champion of downward mobility, but the time has come to consider the idea of simple continuity: the perpetuation of a contented, sustainable middle-class way of life, where the standard of living remains happily constant from one generation to the next. "
    • Uh, you've had this coming: stick it. You can enjoy 'simple continuity'. My life is going to get better (or at least my life is going to change and be interesting/fun), and I expect the same for everybody else that I know. See logic above, and homoiconic's optimism

{178}
hide / / print
ref: Churchland-2006.12 tags: motor_noise CNS Churchland execution variance motor_planning 2006 date: 12-08-2008 22:50 gmt revision:2 [1] [0] [head]

PMID-17178410[0] A central source of movement variability.

  • Small variations in preparatory neural activity were predictive of small variations in the upcoming reach
    • About half of the noise in reaching movements seems to be from variability during the preparatory phase, as estimated from regressions between preparatory neural activity and variability in performance.
  • even for a highly practiced task, the ability to repeatedly plan the same movement limits our ability to repeatedly execute the same movement.
  • when cocontraction increases, EMG variability increases, but movement variability decreases. (Is this consistent with a Poisson-based noise source?)
  • see the related articles!!
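The 'about half' decomposition can be illustrated with a toy regression on synthetic trials (fabricated numbers, not the paper's data): if the planned state and downstream execution noise contribute equal variance to the endpoint, a linear fit of endpoint on preparatory state recovers R² ≈ 0.5.

```python
import random

def r_squared(x, y):
    """Fraction of variance in y explained by a linear fit on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return (sxy * sxy) / (sxx * syy)

# synthetic 'trials': endpoint = preparatory state + independent execution noise,
# each unit variance.
rng = random.Random(1)
prep = [rng.gauss(0, 1) for _ in range(5000)]
execn = [rng.gauss(0, 1) for _ in range(5000)]
endpoint = [p + e for p, e in zip(prep, execn)]
print(round(r_squared(prep, endpoint), 2))  # ~0.5: half the variance is 'planned'
```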

____References____

{590}
hide / / print
ref: notes-0 tags: ocaml run external command stdin date: 09-10-2008 19:32 gmt revision:1 [0] [head]

It is not obvious how to run an external command in OCaml and capture its output. Here is my hack, which simply polls the output of the program until there is nothing left to read. Not very highly tested, but I wanted to share, as I don't think there is an example of the same on pleac

(* run an external command and return its stdout as a string;
   output is truncated at 20000 bytes *)
let run_command cmd =
	let inch = Unix.open_process_in cmd in
	let infd = Unix.descr_of_in_channel inch in
	let buf = String.create 20000 in (* fixed-size output buffer *)
	let il = ref 1 in     (* bytes returned by the last read *)
	let offset = ref 0 in (* total bytes read so far *)
	(* Unix.read returns 0 on EOF, so loop until then *)
	while !il > 0 do (
		let inlen = Unix.read infd buf !offset (20000 - !offset) in
		il := inlen ;
		offset := !offset + inlen;
	) done;
	ignore(Unix.close_process_in inch);
	(* return only the bytes actually read, not the whole buffer *)
	if !offset = 0 then "" else String.sub buf 0 !offset
	;;

Note: Fixed a nasty string-termination/memory-reuse bug Sept 10 2008

{581}
hide / / print
ref: notes-0 tags: android google date: 07-09-2008 03:33 gmt revision:1 [0] [head]

brilliant!! source: android winners

{521}
hide / / print
ref: notes-0 tags: UWB ultrawideband radio bandwidth date: 12-10-2007 23:11 gmt revision:8 [7] [6] [5] [4] [3] [2] [head]

from: http://bwrc.eecs.berkeley.edu/Presentations/Retreats/Summer_Retreat_2004/WednesdayPM/Retreat_Jun04_talk-ianv0.ppt

Above, FCC limitations on UWB transmitted power levels in communication devices. Currently, only the US allows operation of UWB transceivers.

links:

{503}
hide / / print
ref: bookmark-0 tags: internet communication tax broadband election? date: 11-21-2007 22:18 gmt revision:6 [5] [4] [3] [2] [1] [0] [head]

quote:

Consumers also pay high taxes for telecommunication services, averaging about 13 percent on some telecom services, similar to the tax rate on tobacco and alcohol, Mehlman said. One tax on telecom service has remained in place since the 1898 Spanish-American War, when few U.S. residents had telephones, he noted.

"We think it's a mistake to treat telecom like a luxury and tax it like a sin," he said.

from: The internet could run out of capacity in two years

comments:

  • I bet this will turn into a great excuse for your next president not to invest on health, but rather on internet. --ana
  • Humm.. I think it is meant to be more of a wake-up call to the backhaul and ISP companies, which own most of the networking capacity (not the government). I imagine there will be some problems, people complain, it gets fixed.. hopefully soon. What is really amazing is the total amount of data the internet is expected to produce - 161 exabytes!! -- tlh
  • They won't upgrade their capacity. After all, the telcos spent a lot of money doing just that in the dot-bomb days. No, instead they will spend their money on technologies and laws that allow them to charge more for certain types of packets or for delivering some packets faster than others. You think it's a coincidence that Google is buying up dark fiber? --jeo

{480}
hide / / print
ref: bookmark-0 tags: RonPaul American presidential candidate libertarian date: 10-30-2007 22:38 gmt revision:0 [head]

http://www.grist.org/feature/2007/10/16/paul/?source=weekly

  • claims that the solution to our problems is to deregulate environmental control - e.g. disempower the EPA, maybe even dissolve it, and allow litigation and property rights to regulate pollution. That is, if a polluter destroys some resource say a river, then another user of same resource will sue them for damages & polluting.
    • This is retarded because it replaces one system (hopefully transparent laws) with another system (wasteful litigation), the latter of which will be codified anyway within the legal system. I would argue that it is more efficient to simply fix the original system directly, and eliminate this bureaucracy he complains about. Otherwise, it will take some time for the 'bugs' in the legal, litigation-based regulatory system to be eliminated.
      • a centralized authority is arguably more efficient & direct in deciding say which compounds are pollutants and which are not, whereas an iterative, litigation based system may be eventually more accurate but possibly more abstruse & opaque (e.g. you have to look up many many cases to figure out the 'law') and may possibly take more time.
    • Perhaps, though, he is correct on one thing: by making the end users (people subject to pollution) more directly involved, they will have more power, hence 'law' will more directly represent collective interest.
    • My conclusion: the present system includes some end-user litigation; it makes no sense to overhaul it. It only makes sense to tweak the 'coefficients' on the control paths, or possibly add other control paths.
      • however, has anyone proved that collective interest is sufficiently far-sighted, pragmatic, and free from spurious manipulation by the media? This is why we have a republic, I guess.
  • He does not support the Kyoto protocol. not the 'free-market'. well, he is a libertarian after all.
  • He thinks it is a good idea to de-regulate large polluters like coal fired electricity plants; he claims that in a free market economy the costs of a dirtier energy source will be internalized and the consumers will choose the optimal source.
    • This is naive, too. Companies will manipulate those affected by the pollution to make them forget about it, perhaps by simply bribing them. Besides, it makes sense to have a centralized regulator where the expertise, intelligence, and data can be concentrated. But, then again, this system was set up by the public (?), which therefore must have some degree of foresight; therefore the public can be responsible for holding companies responsible for selfish, greedy & polluting practices. (I would argue not - the public and/or those farsighted leaders have set up centralized agencies for offloading the effort of regulation & enforcement.)

{476}
hide / / print
ref: bookmark-0 tags: wideband oxygen sensor diffusion nernst lambda date: 10-22-2007 03:41 gmt revision:0 [head]

http://www.wbo2.com/lsu/lsuworks.htm

{403}
hide / / print
ref: bookmark-0 tags: blackfin ELF freestanding applications boot date: 08-01-2007 14:40 gmt revision:0 [head]

http://www.johanforrer.net/BLACKFIN/index.html

very good, very instructive.

{147}
hide / / print
ref: Blankertz-2003.06 tags: BMI BCI EEG error classification motor commands Blankertz date: 0-0-2007 0:0 revision:0 [head]

PMID-12899253 Boosting bit rates and error detection for the classification of fast-paced motor commands based on single-trial EEG analysis

  • want to minimize subject training and put the major learning load on the computer.
  • task: predict the laterality of imminent left/right hand finger movements in a natural keyboard-typing condition. they got ~15 bits/minute (in one subject, ~50 bits/minute!)
    • used non-oscillatory signals.
  • also detected 85% of error trials, while limiting false positives to ~2%
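The bit rates can be sanity-checked with the standard Wolpaw information-transfer-rate formula; the trial rate below is a made-up but plausible number, chosen to show how 2 classes at 90% accuracy lands near the quoted ~15 bits/minute.

```python
import math

def wolpaw_bits_per_trial(n, p):
    """Wolpaw ITR for N classes at accuracy p:
    log2(N) + p*log2(p) + (1-p)*log2((1-p)/(N-1))."""
    if p <= 0 or p >= 1:
        return math.log2(n) if p == 1 else 0.0
    return (math.log2(n) + p * math.log2(p)
            + (1 - p) * math.log2((1 - p) / (n - 1)))

# binary left/right classification at 90% accuracy, 30 trials/minute:
bits = wolpaw_bits_per_trial(2, 0.90)
print(round(bits * 30, 1))  # 15.9 bits/minute
```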

{72}
hide / / print
ref: abstract-0 tags: tlh24 error signals in the cortex and basal ganglia reinforcement_learning gradient_descent motor_learning date: 0-0-2006 0:0 revision:0 [head]

Title: Error signals in the cortex and basal ganglia.

Abstract: Numerous studies have found correlations between measures of neural activity, from single unit recordings to aggregate measures such as EEG, and motor behavior. Two general themes have emerged from this research: neurons are generally broadly tuned and are often arrayed in spatial maps. It is hypothesized that these are two features of a larger hierarchical structure of spatial and temporal transforms that allow mappings to produce complex behaviors from abstract goals, or similarly, complex sensory information to produce simple percepts. Much theoretical work has proved the suitability of this organization to both generate behavior and extract relevant information from the world. It is generally agreed that most transforms enacted by the cortex and basal ganglia are learned rather than genetically encoded. Therefore, it is the characterization of the learning process that describes the computational nature of the brain; the descriptions of the basis functions themselves are more reflective of the brain’s environment. Here we hypothesize that learning in the mammalian brain is a stochastic maximization of reward and transform predictability, and a minimization of transform complexity and latency. It is probable that the optimizations employed in learning include both components of gradient descent and competitive elimination, which are two large classes of algorithms explored extensively in the field of machine learning. The former method requires the existence of a vectoral error signal, while the latter is less restrictive, and requires at least a scalar evaluator. We will look for the existence of candidate error or evaluator signals in the cortex and basal ganglia during force-field learning where the motor error is task-relevant and explicitly provided to the subject.
By simultaneously recording large populations of neurons from multiple brain areas we can probe the existence of error or evaluator signals by measuring the stochastic relationship and predictive ability of neural activity to the provided error signal. From this data we will also be able to track dependence of neural tuning trajectory on trial-by-trial success; if the cortex operates under minimization principles, then tuning change will have a temporal relationship to reward. The overarching goal of this research is to look for one aspect of motor learning – the error signal – with the hope of using this data to better understand the normal function of the cortex and basal ganglia, and how this normal function is related to the symptoms caused by disease and lesions of the brain.

{75}
hide / / print
ref: bookmark-0 tags: linux command line tips rip record date: 0-0-2006 0:0 revision:0 [head]

http://www.pixelbeat.org/cmdline.html