m8ta

{842}
ref: work-0 tags: distilling free-form natural laws from experimental data Schmidt Cornell automatic programming genetic algorithms date: 09-14-2018 01:34 gmt revision:5 [4] [3] [2] [1] [0] [head]

Distilling free-form natural laws from experimental data

  • Their critical step was to use partial derivatives to evaluate the search for invariants (a toy sketch of this scoring idea follows this list). Even so, with a 4D data set the search for natural laws took ~30 hours.
    • Then again, how long did it take humans to figure out these invariants? (Went about it in a decidedly different way..)
    • Further, how long did it take for biology to discover similar invariants?
      • They claim elsewhere that the same algorithm has been applied to biological data - a metabolic pathway - with some success.
      • Of course evolution had to explore a much larger space - proteins and regulatory pathways, not simpler mathematical expressions / linkages.
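
A minimal sketch of the partial-derivative scoring idea as I read it (this is not the paper's algorithm -- the genetic-programming search over candidate expressions is omitted, and all names and the toy data are mine):

import numpy as np

def score_invariant(f, x, y, dt):
    """Lower is better: does the candidate f(x, y) stay constant along the data?"""
    # numerical time-derivatives of the measured series
    dxdt = np.gradient(x, dt)
    dydt = np.gradient(y, dt)
    # partial derivatives of the candidate, by finite differences
    eps = 1e-6
    dfdx = (f(x + eps, y) - f(x - eps, y)) / (2 * eps)
    dfdy = (f(x, y + eps) - f(x, y - eps)) / (2 * eps)
    # if f is conserved, dfdx*dxdt + dfdy*dydt = 0, so the data-derived ratio
    # dxdt/dydt should match the candidate-derived ratio -dfdy/dfdx
    data_ratio = dxdt / (dydt + 1e-12)
    pred_ratio = -dfdy / (dfdx + 1e-12)
    return np.mean(np.abs(np.log(np.abs(data_ratio) + 1e-12) -
                          np.log(np.abs(pred_ratio) + 1e-12)))

# toy data: harmonic oscillator; x^2 + y^2 is the conserved quantity
t = np.arange(0, 10, 0.01)
x, y = np.cos(t), np.sin(t)
print(score_invariant(lambda a, b: a**2 + b**2, x, y, 0.01))  # tiny (good invariant)
print(score_invariant(lambda a, b: a * b, x, y, 0.01))        # much larger (not invariant)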

{305}
ref: Schmidt-1978.09 tags: Schmidt BMI original operant conditioning cortex HOT pyramidal information antidromic date: 04-22-2013 18:21 gmt revision:10 [9] [8] [7] [6] [5] [4] [head]

PMID-101388[0] Fine control of operantly conditioned firing patterns of cortical neurons.

  • hand-arm area of M1, 11 or 12 chronic recording electrodes, 3 monkeys.
    • but, they only used one unit at a time in the conditioning task (i think)
  • conditioning in 77% of single units and 65% of combined units (multiunits?).
  • trained to move a handle to a position indicated by 8 annular cursor lights.
    • cursor was updated at 50hz -- this was just a series of lights! talk about simple feedback...
    • Investigated different smoothing: too fast, FR does not stay in target; too slow, cursor acquires target too slowly.
    • My gamma function is very similar to their lowpass filter used for smoothing the firing rates.
    • 4 or 8 target random tracking task
    • time out of 8 seconds
    • run of 40 trials
      • the conditioning reached a significant level of performance after 2.2 runs of 40 trials (in well-trained monkeys); typically, they did 18 runs/day.
  • recordings:
    • scalar mapping of unit firing rate to cursor position.
    • filtered 600-6kHz
    • each accepted spike triggered a generator that produced a pulse of constant amplitude and width -> this was fed into a lowpass filter (1.5 to 2.5 & 3.5Hz cutoff), a gain stage, then an ADC, then (presumably) the PDP.
      • can determine if these units were in the pyramidal tract by measuring antidromic delay (presumably by stimulating the pyramidal tract electrode mentioned below, not the muscles)
    • recorded one neuron for 108 days!!
      • neuronal activity is still being recorded from one monkey 24 months after chronic implantation of the microelectrodes.
    • average period in which conditioning was attempted was 3.12 days.
  • successful conditioning was always associated with specific repeatable limb movements
    • "However, what appears to be conditioned in these experiments is a movement, and the neuron under study is correlated with that movement." YES.
    • the monkeys clearly learned to make (increasingly refined) movement to modulate the firing activity of the recorded units.
    • the monkey learned to turn off certain units with specific limb positions; the monkey used exaggerated movements for these purposes.
      • e.g. finger and shoulder movements, isometric contraction in one case.
  • Trained some monkeys for > 15 months; animals got better at the task over time.
  • PDP-12 computer!
  • Information measure: 0 bits for missed targets, 2 bits for the 4-target task, 3 bits for the 8-target task; information rate = total number of bits / time to acquire targets (a toy calculation follows this list).
    • 3.85 bits/sec peak with 4 targets, 500ms hold time
    • with this, monkeys were able to exert fine control of firing rate.
    • damn! compare to Paninski! [1]
  • 4.29 bits/sec when the same task was performed with a manipulandum & wrist movement
  • they were able to condition 77% of individual neurons and 65% of combined units.
  • Implanted a pyramidal tract electrode in one monkey; both cells recorded at that time were pyramidal tract neurons, antidromic latencies of 1.2 - 1.3ms.
    • failures had no relation to overt movements of the monkey.
  • Fetz and Baker [2,3,4,5] found that 65% of precentral neurons could be conditioned for increased or decreased firing rates.
    • and it only took 6.5 minutes, on average, for the units to change firing rates!
  • Summarized in [1].
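
A minimal sketch of this information-rate measure; the trial counts and times below are invented for illustration, not taken from the paper:

import math

def info_rate(n_targets, hits, total_acquire_time_s):
    """bits/sec: log2(n_targets) bits per acquired target, 0 bits for misses."""
    bits = hits * math.log2(n_targets)   # 2 bits per hit with 4 targets, 3 with 8
    return bits / total_acquire_time_s

# e.g. a 40-trial run of the 4-target task: 35 hits, 20 s total acquisition time
print(info_rate(4, hits=35, total_acquire_time_s=20.0))   # 3.5 bits/sec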

____References____

[0] Schmidt EM, McIntosh JS, Durelli L, Bak MJ, Fine control of operantly conditioned firing patterns of cortical neurons. Exp Neurol 61:2, 349-69 (1978 Sep 1)
[1] Serruya MD, Hatsopoulos NG, Paninski L, Fellows MR, Donoghue JP, Instant neural control of a movement signal. Nature 416:6877, 141-2 (2002 Mar 14)
[2] Fetz EE, Operant conditioning of cortical unit activity. Science 163:870, 955-8 (1969 Feb 28)
[3] Fetz EE, Finocchio DV, Operant conditioning of specific patterns of neural and muscular activity. Science 174:7, 431-5 (1971 Oct 22)
[4] Fetz EE, Finocchio DV, Operant conditioning of isolated activity in specific muscles and precentral cells. Brain Res 40:1, 19-23 (1972 May 12)
[5] Fetz EE, Baker MA, Operantly conditioned patterns of precentral unit activity and correlated responses in adjacent cells and contralateral muscles. J Neurophysiol 36:2, 179-204 (1973 Mar)

{1207}
ref: -0 tags: Shenoy eye position BMI performance monitoring date: 01-25-2013 00:41 gmt revision:1 [0] [head]

PMID-18303802 Cortical neural prosthesis performance improves when eye position is monitored.

  • This proposal stems from recent discoveries that the direction of gaze influences neural activity in several areas that are commonly targeted for electrode implantation in neural prosthetics.
  • Can estimate eye position directly from neural activity & subtract it when performing BMI predictions.
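
A rough sketch of the estimate-and-subtract idea, assuming a plain least-squares gaze model and linear decoder (shapes and names are mine; the paper's actual decoder may differ):

import numpy as np

def lsq(X, Y):
    """Ordinary least-squares weights mapping X (T x p) onto Y (T x q)."""
    return np.linalg.lstsq(X, Y, rcond=None)[0]

def train(rates, eye, hand):
    W_eye = lsq(eye, rates)        # eye position -> predicted (gaze-related) rates
    resid = rates - eye @ W_eye    # firing rates with the gaze component removed
    W_dec = lsq(resid, hand)       # residual rates -> hand/cursor position
    return W_eye, W_dec

def decode(rates, eye, W_eye, W_dec):
    return (rates - eye @ W_eye) @ W_dec

# toy demo with random data, just to show the shapes
rng = np.random.default_rng(0)
rates, eye, hand = rng.normal(size=(500, 32)), rng.normal(size=(500, 2)), rng.normal(size=(500, 2))
W_eye, W_dec = train(rates, eye, hand)
print(decode(rates, eye, W_eye, W_dec).shape)   # (500, 2)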

{1087}
ref: Timmermann-2003.01 tags: DBS double tremor oscillations DICS beamforming parkinsons date: 02-29-2012 00:39 gmt revision:4 [3] [2] [1] [0] [head]

PMID-12477707[0] The cerebral oscillatory network of parkinsonian resting tremor.

  • Patients had idiopathic unilateral tremor-dominant PD.
  • MEG + EMG -> coherence analysis (+ DICS beamforming for deep sources); a toy coherence computation follows this list.
  • M1 correlated to EMG at tremor and double-tremor frequency following medication withdrawal overnight.
    • M1 leads by 15 - 25 ms, consistent with conduction delay.
  • Unlike other studies, they find that many cortical areas are also coherent / oscillating with M1, including:
    • Cingulate and supplementary motor area (CMA / SMA)
    • Lateral premotor cortex (PM).
    • SII
    • Posterior parietal cortex (PPC)
    • contralateral cerebellum - strongest at double frequency.
  • In contrast, the cerebellum, SMA/CMA and PM show little evidence for direct coupling with the peripheral EMG but seem to be connected with the periphery via other cerebral areas (e.g. M1)
  • Power spectral analysis of activity in all central areas indicated the strongest frequency coherence at double tremor frequency.
    • Especially cerebro-cerebro coupling.
  • These open-ended observation studies are useful!
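
A toy version of the cortex-EMG coherence computation, on synthetic signals (the study used MEG with DICS beamforming; this only illustrates coherence peaks at the tremor and double-tremor frequencies):

import numpy as np
from scipy.signal import coherence

fs = 1000.0                                  # Hz
t = np.arange(0, 60, 1 / fs)                 # 60 s of data
rng = np.random.default_rng(0)

tremor = np.sin(2 * np.pi * 5 * t)           # ~5 Hz "tremor" component
double = 0.5 * np.sin(2 * np.pi * 10 * t)    # double-tremor-frequency component
emg = tremor + double + rng.normal(size=t.size)
m1 = np.roll(tremor, int(0.02 * fs)) + double + rng.normal(size=t.size)  # ~20 ms shift

f, cxy = coherence(emg, m1, fs=fs, nperseg=2000)
for target in (5.0, 10.0):                   # tremor and double-tremor frequency
    print(target, "Hz coherence:", cxy[np.argmin(np.abs(f - target))])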

____References____

[0] Timmermann L, Gross J, Dirks M, Volkmann J, Freund HJ, Schnitzler A, The cerebral oscillatory network of parkinsonian resting tremor. Brain 126:Pt 1, 199-212 (2003 Jan)

{1132}
ref: -0 tags: mesh silk conformal coating date: 02-21-2012 20:03 gmt revision:0 [head]

PMID-20400953 Dissolvable films of silk fibroin for ultrathin conformal bio-integrated electronics.

  • Mounting such devices on tissue and then allowing the silk to dissolve and resorb initiates a spontaneous, conformal wrapping process driven by capillary forces at the biotic/abiotic interface.
  • Specialized mesh designs and ultrathin forms for the electronics ensure minimal stresses on the tissue and highly conformal coverage, even for complex curvilinear surfaces, as confirmed by experimental and theoretical studies.
    • Wow! cool!
  • polyimide electrode substrates 2.5 - 7.5 um thick. Electrodes were made of anisotropic conductive film.

{255}
ref: BarGad-2003.12 tags: information dimensionality reduction reinforcement learning basal_ganglia RDDR SNR globus pallidus date: 01-16-2012 19:18 gmt revision:3 [2] [1] [0] [head]

PMID-15013228[0] Information processing, dimensionality reduction, and reinforcement learning in the basal ganglia (2003)

  • long paper! looks like they used latex.
  • they focus on a 'new model' for the basal ganglia: reinforcement driven dimensionality reduction (RDDR)
  • in order to make sense of the system - according to them - any model must ignore huge amounts of information about the studied areas.
  • ventral striatum = nucleus accumbens!
  • striatum is broken into two rough parts: ventral and dorsal
    • dorsal striatum: the caudate and putamen
    • ventral striatum: the nucleus accumbens, medial and ventral portions of the caudate and putamen, and striatal cells of the olfactory tubercle (!) and anterior perforated substance.
  • ~90% of neurons in the striatum are medium spiny neurons
    • dendrites fill 0.5mm^3
    • cells have up and down states.
      • the states are controlled by intrinsic connections
      • project to GPe GPi & SNr (primarily), using GABA.
  • 1-2% of neurons in the striatum are tonically active neurons (TANs)
    • use acetylcholine (among others)
    • fewer spines
    • more sensitive to input
    • TANs encode information relevant to reinforcement or incentive behavior

____References____

[0] Bar-Gad I, Morris G, Bergman H, Information processing, dimensionality reduction and reinforcement learning in the basal ganglia. Prog Neurobiol 71:6, 439-73 (2003 Dec)

{806}
ref: work-0 tags: gaussian random variables mutual information SNR date: 01-16-2012 03:54 gmt revision:26 [25] [24] [23] [22] [21] [20] [head]

I've recently tried to determine the bit-rate conveyed by one Gaussian random process about another in terms of the signal-to-noise ratio between the two. Assume x is the known signal to be predicted, and y is the prediction.

Let's define $SNR(y) = \frac{Var(x)}{Var(err)}$ where $err = x - y$. Note this is a ratio of powers; for the conventional SNR, $SNR_{dB} = 10 \log_{10} \frac{Var(x)}{Var(err)}$. $Var(err)$ is also known as the mean squared error (MSE).

Now, $Var(err) = \langle (x - y - \overline{err})^2 \rangle = Var(x) + Var(y) - 2\,Cov(x,y)$; assume x and y have unit variance (or scale them so that they do), then

$Cov(x,y) = \frac{2 - SNR(y)^{-1}}{2} = 1 - \frac{1}{2\,SNR(y)}$

We need the covariance because the mutual information between two jointly Gaussian zero-mean variables can be defined in terms of their covariance matrix (see http://www.springerlink.com/content/v026617150753x6q/ ). Here $Q$ is the covariance matrix,

$Q = \begin{bmatrix} Var(x) & Cov(x,y) \\ Cov(x,y) & Var(y) \end{bmatrix}$

$MI = \frac{1}{2} \log_2 \frac{Var(x)\,Var(y)}{\det(Q)}$

$\det(Q) = 1 - Cov(x,y)^2$

Then $MI = -\frac{1}{2} \log_2 \left[ 1 - Cov(x,y)^2 \right]$

or $MI = -\frac{1}{2} \log_2 \left[ SNR(y)^{-1} - \tfrac{1}{4} SNR(y)^{-2} \right]$

This agrees with intuition. If we have a SNR of 10db, or 10 (power ratio), then we would expect to be able to break a random variable into about 10 different categories or bins (recall stdev is the sqrt of the variance), with the probability of the variable being in the estimated bin to be 1/2. (This, at least in my mind, is where the 1/2 constant comes from - if there is gaussian noise, you won't be able to determine exactly which bin the random variable is in, hence log_2 is an overestimator.)

Here is a table with the respective values, including the amplitude (not power) ratio representations of SNR.

SNR (dB)   Amp. ratio   MI (bits)
10         3.1          1.6
20         10           3.3
30         31           5.0
40         100          6.6
90         31e3         15

Note that at 90dB, you get about 15 bits of resolution. This makes sense, as 16-bit DACs and ADCs have (typically) 96dB SNR. good.

Now, to get the bitrate, you take the SNR, calculate the mutual information, and multiply it by the bandwidth (not the sampling rate in a discrete time system) of the signals. In our particular application, I think the bandwidth is between 1 and 2 Hz, hence we're getting 1.6-3.2 bits/second/axis, hence 3.2-6.4 bits/second for our normal 2D tasks. If you read this blog regularly, you'll notice that others have achieved 4bits/sec with one neuron and 6.5 bits/sec with dozens {271}.
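
A quick numerical check of the formula and table above (a throwaway sketch; names are mine):

import numpy as np

def mi_bits(snr_db):
    """Mutual information (bits) between x and y at a given SNR, per the formula above."""
    s = 10.0 ** (snr_db / 10.0)                  # dB -> power ratio
    return -0.5 * np.log2(1.0 / s - 0.25 / s**2)

for snr_db in (10, 20, 30, 40, 90):
    amp = 10.0 ** (snr_db / 20.0)                # amplitude ratio
    print(snr_db, round(amp, 1), round(mi_bits(snr_db), 1))
# prints 1.7, 3.3, 5.0, 6.6, 14.9 bits -- matching the table to rounding

# bit-rate = MI * bandwidth; e.g. 10 dB SNR at 2 Hz bandwidth:
print(2.0 * mi_bits(10))                         # ~3.4 bits/sec/axis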

{5}
ref: bookmark-0 tags: machine_learning research_blog parallel_computing bayes active_learning information_theory reinforcement_learning date: 12-31-2011 19:30 gmt revision:3 [2] [1] [0] [head]

hunch.net interesting posts:

  • debugging your brain - how to discover what you don't understand. a very intelligent viewpoint, worth rereading + the comments. look at the data, stupid
    • quote: how to represent the problem is perhaps even more important in research since human brains are not as adept as computers at shifting and using representations. Significant initial thought on how to represent a research problem is helpful. And when it’s not going well, changing representations can make a problem radically simpler.
  • automated labeling - great way to use a human 'oracle' to bootstrap us into good performance, esp. if the predictor can output a certainty value and hence ask the oracle all the 'tricky questions'.
  • The design of an optimal research environment
    • Quote: Machine learning is a victim of it’s common success. It’s hard to develop a learning algorithm which is substantially better than others. This means that anyone wanting to implement spam filtering can do so. Patents are useless here—you can’t patent an entire field (and even if you could it wouldn’t work).
  • More recently: http://hunch.net/?p=2016
    • Problem is that online courses only imperfectly emulate the social environment of a college, which IMHO is useful for cultivating diligence.
  • The unrealized potential of the research lab Quote: Muthu Muthukrishnan says “it’s the incentives”. In particular, people who invent something within a research lab have little personal incentive in seeing it’s potential realized so they fail to pursue it as vigorously as they might in a startup setting.
    • The motivation (money!) is just not there.

{968}
ref: Bassett-2009.07 tags: Weinberger congnitive efficiency beta band neuroimagaing EEG task performance optimization network size effort date: 12-28-2011 20:39 gmt revision:1 [0] [head]

PMID-19564605[0] Cognitive fitness of cost-efficient brain functional networks.

  • Idea: smaller, tighter networks are correlated with better task performance
    • working memory task in normal subjects and schizophrenics.
  • Larger networks operate with higher beta frequencies (more effort?) and show less efficient task performance.
  • Not sure about the noisy data, but v. interesting theory!

____References____

[0] Bassett DS, Bullmore ET, Meyer-Lindenberg A, Apud JA, Weinberger DR, Coppola R, Cognitive fitness of cost-efficient brain functional networks. Proc Natl Acad Sci U S A 106:28, 11747-52 (2009 Jul 14)

{922}
ref: Guenther-2009.12 tags: Guenther Kennedy 2009 neurotrophic electrode speech synthesize formant BMI date: 12-17-2011 02:12 gmt revision:2 [1] [0] [head]

PMID-20011034[0] A Wireless Brain-Machine Interface for Real-Time Speech Synthesis

  • Neurites grow into the glass electrode over the course of 3-4 months; the signals and neurons are henceforth stable, at least for the period prior to publication (>4 years).
  • Used an FM modulator to send out the broadband neural signal; powered the implanted electronics inductively.
  • Sorted 56 spike clusters (!!)
    • quote: "We chose to err on the side of overestimating the number of clusters in our BMI since our Kalman filter decoding technique is somewhat robust to noisy inputs, whereas a stricter criterion for cluster definition might leave out information-carrying spike clusters."
    • 27 units on one wire and 29 on the other.
  • Quote: "neurons in the implanted region of left ventral premotor cortex represent intended speech sounds in terms of formant frequency trajectories, and projections from these neurons to primary motor cortex transform the intended formant trajectories into motor commands to the speech articulators."
    • Thus speech can be represented as a trajectory through formant space.
    • plus there are many simple low-load formant-based sw synthesizers
  • Used supervised methods (ridge regression), where the user was asked to imagine making vowel sounds mimicking what he heard (a sketch of this calibration step follows this list).
    • only used the first 2 vowel formants; hence 2D task.
    • Supervised from 8 ~1-minute recording sessions.
  • 25 real-time feedback sessions over 5 months -- not much training time, why?
  • Video looks alright.
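
A minimal sketch of the ridge-regression calibration described above, mapping spike-cluster rates to the first two formants; shapes, names, the regularization strength, and the data here are my own (synthetic) assumptions:

import numpy as np

def ridge_fit(X, Y, lam=10.0):
    """Closed-form ridge regression: (X'X + lam*I)^{-1} X'Y."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ Y)

# rates: (T x 56) spike-cluster rates during the imagined-vowel calibration blocks
# formants: (T x 2) = [F1, F2] of the heard/imagined vowel
rng = np.random.default_rng(0)
rates = rng.poisson(5.0, size=(600, 56)).astype(float)
formants = rates @ rng.normal(size=(56, 2)) + rng.normal(scale=5.0, size=(600, 2))

W = ridge_fit(rates, formants)
f1f2_hat = rates @ W          # decoded formant trajectory, to be fed to the synthesizer
print(f1f2_hat.shape)         # (600, 2)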

____References____

[0] Guenther FH, Brumberg JS, Wright EJ, Nieto-Castanon A, Tourville JA, Panko M, Law R, Siebert SA, Bartels JL, Andreasen DS, Ehirim P, Mao H, Kennedy PR, A wireless brain-machine interface for real-time speech synthesis. PLoS One 4:12, e8218 (2009 Dec 9)

{252}
ref: Won-2004.02 tags: Debbie Won Wolf spike sorting mutual information tuning BMI date: 12-07-2011 02:58 gmt revision:3 [2] [1] [0] [head]

PMID-15022843[0] A simulation study of information transmission by multi-unit microelectrode recordings. Key idea:

  • when the units on a single channel are similarly tuned, you don't lose much information by grouping all spikes as coming from one source. And the opposite effect is true when you have very differently tuned neurons on the same channel - the information becomes more ambiguous.
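
A toy simulation of this pooling effect (my own construction, not the paper's model): two cosine-tuned Poisson units summed onto one channel, with the mutual information between target direction and the pooled spike count estimated from a joint histogram.

import numpy as np

rng = np.random.default_rng(0)
dirs = np.linspace(0, 2 * np.pi, 8, endpoint=False)      # 8 target directions

def pooled_mi(pd1, pd2, n_trials=20000, base=5.0, mod=4.0, max_count=40):
    d = rng.choice(len(dirs), n_trials)                   # direction index per trial
    rate1 = base + mod * np.cos(dirs[d] - pd1)
    rate2 = base + mod * np.cos(dirs[d] - pd2)
    count = np.clip(rng.poisson(rate1) + rng.poisson(rate2), 0, max_count)
    # plug-in MI estimate from the joint histogram of (direction, pooled count)
    joint = np.zeros((len(dirs), max_count + 1))
    np.add.at(joint, (d, count), 1)
    joint /= n_trials
    px = joint.sum(1, keepdims=True)
    py = joint.sum(0, keepdims=True)
    nz = joint > 0
    return np.sum(joint[nz] * np.log2(joint[nz] / (px @ py)[nz]))

print(pooled_mi(0.0, 0.0))      # similarly tuned units: pooling keeps most information
print(pooled_mi(0.0, np.pi))    # oppositely tuned units: pooled count is ~uninformative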

____References____

[0] Won DS, Wolf PD, A simulation study of information transmission by multi-unit microelectrode recordings. Network 15:1, 29-44 (2004 Feb)

{289}
ref: Li-2001.05 tags: Bizzi motor learning force field MIT M1 plasticity memory direction tuning transform date: 09-24-2008 22:49 gmt revision:5 [4] [3] [2] [1] [0] [head]

PMID-11395017[0] Neuronal correlates of motor performance and motor learning in the primary motor cortex of monkeys adapting to an external force field

  • this is concerned with memory cells, cells that 'remember' or remain permanently changed after learning the force-field.
  • In the above figure, the blue lines (or rather vertices of the blue lines) indicate the firing rate during the movement period (and 200ms before); angular position indicates the target of the movement. The force-field in this case was a curl field where force was proportional to velocity.
  • Preferred direction of the motor cortical units changed when the preferred direction of the EMGs changed.
  • evidence of encoding of an internal model in the changes in tuning properties of the cells.
    • this can support both online performance and motor learning.
    • but what mechanisms allow the motor cortex to change in this way???
  • also see [1]

____References____

[0] Li CS, Padoa-Schioppa C, Bizzi E, Neuronal correlates of motor performance and motor learning in the primary motor cortex of monkeys adapting to an external force field. Neuron 30:2, 593-607 (2001 May)
[1] Caminiti R, Johnson PB, Urbano A, Making arm movements within different parts of space: dynamic aspects in the primate motor cortex. J Neurosci 10:7, 2039-58 (1990 Jul)

{565}
ref: Walker-2005.12 tags: algae transfection transformation protein synthesis bioreactor date: 03-21-2008 17:22 gmt revision:1 [0] [head]

Microalgae as bioreactors PMID-16136314

{530}
ref: notes-0 tags: neuroscience ion channels information coding John Harris date: 01-07-2008 16:46 gmt revision:4 [3] [2] [1] [0] [head]

  • crazy idea: that neurons have a number of ion channel lines which can be selectively activated. That is, information is transmitted along longitudinal transmission channels which are selectively activated according to the message being transmitted.
  • has any evidence for such a fine structure been found?? I think not, due to binding studies, but who knows..
  • dude uses historical references (Neumann) to back up his ideas. I find these sorts of justifications interesting, but not logically substantive. Do not talk about the opinions of old philosophers (exclusively, at least), talk about their data.
  • interesting story about holography & the holograph of Dennis Gabor.
    • he does make interesting analogies to neuroscience & the importance of preserving spatial phase.
  • fourier images -- neato.
conclusion: interesting, but a bit kooky.

{520}
ref: bookmark-0 tags: DSP Benford's law Fourier transform book date: 12-07-2007 06:14 gmt revision:1 [0] [head]

http://www.dspguide.com/ch34.htm -- awesome!!

{344}
ref: Caminiti-1991.05 tags: transform motor control M1 3D population_vector premotor Caminiti date: 04-09-2007 20:10 gmt revision:2 [1] [0] [head]

PMID-2027042[0] Making arm movements within different parts of space: the premotor and motor cortical representation of a coordinate system for reaching to visual targets.

  • trained monkeys to make similar movements in different parts of external/extrinsic 3D space.
  • change of preferred direction was graded in an orderly manner across extrinsic space.
  • virtually no correlations found to endpoint static position: "virtually all cells were related to the direction and not to the end point of movement" - compare to Graziano!
  • yet the population vector remained an accurate predictor of movement: "Unlike the individual cell preferred directions upon which they are based, movement population vectors did not change their spatial orientation across the work space, suggesting that they remain good predictors of movement direction regardless of the region of space in which movements are made"

____References____

[0] Caminiti R, Johnson PB, Galli C, Ferraina S, Burnod Y, Making arm movements within different parts of space: the premotor and motor cortical representation of a coordinate system for reaching to visual targets. J Neurosci 11:5, 1182-97 (1991 May)

{294}
ref: Caminiti-1990.07 tags: transform motor control M1 3D population_vector premotor Caminiti date: 04-09-2007 20:07 gmt revision:4 [3] [2] [1] [0] [head]

PMID-2376768[0] Making arm movements within different parts of space: dynamic aspects in the primate motor cortex

  • monkeys made similar movements in different parts of external/extrinsic 3D space.
  • change of preferred direction was graded in an orderly manner across extrinsic space.
    • this change closely followed the changes in muscle activation required to effect the observed movements.
  • motor cortical cells can code direction of movement in a way which is dependent on the position of the arm in space
  • implies existence of mechanisms which facilitate the transformation between extrinsic (visual targets) and intrinsic coordinates
  • also see [1]

____References____

[0] Caminiti R, Johnson PB, Urbano A, Making arm movements within different parts of space: dynamic aspects in the primate motor cortex. J Neurosci 10:7, 2039-58 (1990 Jul)
[1] Caminiti R, Johnson PB, Galli C, Ferraina S, Burnod Y, Making arm movements within different parts of space: the premotor and motor cortical representation of a coordinate system for reaching to visual targets. J Neurosci 11:5, 1182-97 (1991 May)

{229}
ref: notes-0 tags: SNR MSE error multidimensional mutual information date: 03-08-2007 22:33 gmt revision:2 [1] [0] [head]

http://ieeexplore.ieee.org/iel5/516/3389/00116771.pdf or http://hardm.ath.cx:88/pdf/MultidimensionalSNR.pdf

  • the signal-to-noise ratio between two vectors is the ratio of the determinants of the correlation matrices. Just see equation 14.

{7}
ref: bookmark-0 tags: book information_theory machine_learning bayes probability neural_networks mackay date: 0-0-2007 0:0 revision:0 [head]

http://www.inference.phy.cam.ac.uk/mackay/itila/book.html -- free! (but i liked the book, so I bought it :)

{146}
ref: van-2004.11 tags: anterior cingulate cortex error performance monitoring 2004 date: 0-0-2007 0:0 revision:0 [head]

PMID-15518940 Errors without conflict: implications for performance monitoring theories of anterior cingulate cortex.

  • did an event-locked fMRI study to test whether the ACC would differentiate between correct and incorrect feedback stimuli in a time-estimation task.
  • ACC seems to be not involved in error detection, just conflict.
----
  • according to one theory, ERN is generated as part of a reinforcement learning process. (Holroyd and Coles 2002): behavior is monitored by an 'adaptive critic' in the basal ganglia.
    • in this theory, the ACC is used to select between mental processes competing to access the motor system.
    • ERN corresponds to a decrease in dopamine.
    • ERN occurs when the stimulus indicates that an error has occurred.
  • alternately, the ACC can monitor for the presence of conflict between simultaneously active but incompatible sensory/processing streams.
    • the ACC is active in correct trials in tasks that require conflict resolution. + it makes sense from a modeling standpoint: a high-energy state is equivalent to a state of conflict: many neurons are active at the same time.
    • that is, it is a conflict resolver between stimuli: e.g. the Stroop task.
  • some studies localize (and the authors here indicate that the source-analysis that localizes dipole sources is inaccurate) the error potential to the posterior cingulate cortex.
    • fMRI solves this problem.
  • from their figures, it seems that the right putamen + bilateral caudate are involved in their time-estimation task (subjects had to press a button 1 second after a stimulus cue; feedback then guided/misguided them toward/away from 1000ms; subjects, of course, adjusted their behavior)
    • no sign of ACC activation was shown - as hard as they could look - despite identical (more or less) experimental design to the ERN studies.
      • hence, ERN is generated by areas other than the ACC.
  • in contrast, the stroop task fully engaged the anterior cingulate cortex.
  • cool: perhaps, then, error feedback negativity is better conceived as the absence of a superimposed "correct feedback positivity", since no area was more active in error than in correct feedback.
  • of course, one is measuring brain activation through blood flow, and the other is measuring EEG signals.

{57}
ref: bookmark-0 tags: information entropy bit rate matlab code date: 0-0-2006 0:0 revision:0 [head]

http://www.cs.rug.nl/~rudy/matlab/

  • concise, well documented, useful.
  • number of bins = length of vector ^ (1/3).
  • information = sum(log (bincounts / prior) * bincounts) -- this is just the divergence, same as I do it.
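
A sketch of that estimator as I read the two bullets above (the MATLAB code at the link may differ in its details):

import numpy as np

def info_bits(x, prior=None):
    n_bins = max(2, int(round(len(x) ** (1.0 / 3.0))))    # number of bins = length^(1/3)
    counts, _ = np.histogram(x, bins=n_bins)
    p = counts / counts.sum()
    if prior is None:
        prior = np.full(n_bins, 1.0 / n_bins)              # uniform prior
    nz = p > 0
    # KL divergence of the empirical distribution from the prior, in bits
    return np.sum(p[nz] * np.log2(p[nz] / prior[nz]))

print(info_bits(np.random.default_rng(1).normal(size=1000)))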

{66}
ref: bookmark-0 tags: machine_learning classification entropy information date: 0-0-2006 0:0 revision:0 [head]

http://iridia.ulb.ac.be/~lazy/ -- Lazy Learning.