[0] Isoda M, Hikosaka O, Switching from automatic to controlled action by monkey medial frontal cortex. Nat Neurosci 10:2, 240-8 (2007 Feb)

hide / / print
ref: -0 tags: ocaml application functional programming date: 10-11-2022 21:36 gmt revision:2 [1] [0] [head]


From this I learned that in ocaml you can return not just functions (e.g. currying) but applications of yet-to-be-named functions.

let sum f = f 0 ;;
let arg a b c = c ( b + a ) ;;
let z a = a ;;


sum (arg 1) ;; 

is well-typed as (int -> 'a) -> 'a = <fun>, e.g. an application of a function that converts int to 'a. Think of it as the application of Xa to the argument ( 0 + 1 ), where Xa is a function supplied later as an argument (per the type signature). The zero is supplied by the definition of 'sum'.

 sum (arg 1) (arg 2);; 

can be parsed as

(sum (arg 1)) (arg 2) ;; 

'(arg 2)' is itself an application-in-waiting: given an int and a yet-to-be-determined function, it produces an 'a.

E.g. it's typed as int -> (int -> 'a) -> 'a = <fun>. So you can use it as the Xa passed above.

Or, Xa = Xb( ( 0 + 1 ) + 2)

where, again, Xb is a yet-to-be defined function that is supplied as an argument.

Therefore, you can collapse the whole chain with the identity function z. But, of course, it could be anything else -- square root perhaps for MSE?

All very clever.
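The collapse with z (and the square-root variant suggested above) can be checked directly in the toplevel. The definitions are repeated so the snippet is self-contained; 'msqrt' is a hypothetical name for a square-root collapser, not from the original note.

```ocaml
let sum f = f 0 ;;
let arg a b c = c ( b + a ) ;;
let z a = a ;;

(* collapsing a one-step chain with the identity: z (0 + 1) *)
sum (arg 1) z ;;

(* a longer chain: z (((0 + 1) + 2) + 3) *)
sum (arg 1) (arg 2) (arg 3) z ;;

(* hypothetical: collapse with a square root instead of the identity *)
let msqrt a = sqrt (float_of_int a) ;;
sum (arg 1) (arg 3) msqrt ;;    (* sqrt of (0 + 1) + 3 *)
```

Each (arg n) extends the pending sum by one term; whatever function is supplied last receives the accumulated integer.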

hide / / print
ref: -2019 tags: Piantadosi cogntion combinators function logic date: 09-05-2022 01:57 gmt revision:0 [head]

  • The Computational Origin of Representation (2019)
  • from Piantadosi, talks a big game... reviews some seminal literature ...
    • But the argument reduces to the established idea that you can represent boolean logic and arbitrary algorithms with Church encoding through S and K (and some tortuous symbol manipulation..)
    • It seems that Piantadosi was perhaps excited by discovering and understanding combinators?
      • It is indeed super neat (though I didn't wade in deep enough to really understand it), but the backtracking search procedure embodied in pyChuriso is scarcely close to anything happening in our brains (and such backtracking search is common in CS..)
      • It is overwhelmingly more likely that we approximate other Turing-complete computations, by (evolutionary) luck and education.
      • The last parts of the paper, describing a continuum between combinators, logic, calculus, tensor approximations, and neuroscience is ... very hand-wavey, with no implementation.
        • If you allow me to be hypercritical: this paper is an excellent literature review, but of limited impact for ML practitioners.
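The established idea referenced above (S and K plus Church-encoded booleans give you logic) can be sketched in a few lines of OCaml. This is a sketch only: OCaml's type system is more restrictive than the untyped SK calculus, so I eta-expand I = S K K to dodge the value restriction.

```ocaml
(* the two basic combinators *)
let s x y z = x z (y z)   (* S: distribute an argument to two functions *)
let k x y = x             (* K: constant function *)
let i x = s k k x         (* I = S K K, eta-expanded for the value restriction *)

(* Church booleans: true selects its first argument, false its second *)
let tru x y = x
let fls x y = y
let if_ b thn els = b thn els
let and_ p q = p q fls    (* if p then q else false *)
let or_  p q = p tru q    (* if p then true else q *)
```

Evaluating e.g. if_ (and_ tru tru) "yes" "no" reduces purely by function application, which is the whole point of the encoding.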

hide / / print
ref: -0 tags: automatic programming inductive functional igor date: 07-29-2014 02:07 gmt revision:0 [head]

Inductive Rule Learning on the Knowledge Level.

  • 2011.
  • v2 of their IGOR inductive-synthesis program.
  • Quote: The general idea of learning domain specific problem solving strategies is that first some small sample problems are solved by means of some planning or problem solving algorithm and that then a set of generalized rules are learned from this sample experience. This set of rules represents the competence to solve arbitrary problems in this domain.
  • My take is that, rather than using heuristic search to discover programs by testing specifications, they use memories of the output to select programs directly (?)
    • This is allegedly a compromise between the generate-and-test and analytic strategies.
  • Description is couched in CS lingo with which I am inexperienced, and is perhaps too high-level -- a sin I too am at times guilty of.
  • It seems like a good idea, though the examples are rather unimpressive as compared to MagicHaskeller.

hide / / print
ref: -0 tags: putamen functional organization basal ganglia date: 02-24-2012 21:01 gmt revision:0 [head]

PMID-6705861 Single cell studies of the primate putamen. I. Functional organization.

  • Cells in the striatum have very low levels of activity -- some are simply not spontaneously active.
  • Other cells are tonically active at 3-6Hz (cholinergic?)
  • Most cells were related to the direction of movement, not necessarily force.
  • Two types of load reactions: short latency (presumably sensory) and long-latency (motor -- related to the active return movement of the arm.)
  • Timing suggests that the striatum does not play a role in the earliest phases of movement, consistent with cooling studies, kainic acid lesions, or microstimulation. Only 19% of neurons were active before movement.
  • Many neurons were reactive to both active and passive movements in the same joint / direction.
    • The BG receive afferents from joint and not muscle receptors.

hide / / print
ref: Vitek-2008.03 tags: DBS function efferent STN date: 02-22-2012 18:39 gmt revision:2 [1] [0] [head]

PMID-18540149[0] Deep brain stimulation: how does it work?

  • MPTP monkey research suggests that activation of output and the resultant change in pattern of neuronal activity that permeates throughout the basal ganglia motor circuit is the mechanism responsible for symptom improvement.
    • Sensible network approach.
  • If pathological plasticity mechanisms are responsible for the symptoms, perhaps we should look for similarly slow treatments?


[0] Vitek JL, Deep brain stimulation: how does it work? Cleve Clin J Med 75 Suppl 2, S59-65 (2008 Mar)

hide / / print
ref: -0 tags: reinforcement learning basis function policy specialization date: 01-03-2012 02:37 gmt revision:1 [0] [head]

To read:

hide / / print
ref: Douglas-1991.01 tags: functional microcircuit cat visual cortex microstimulation date: 12-29-2011 05:12 gmt revision:3 [2] [1] [0] [head]

PMID-1666655[0] A functional microcircuit for cat visual cortex

  • Using in vivo stimulation and recording, they describe what may be a 'canonical' circuit for the cortex.
  • Not dominated by excitation / inhibition, but rather by cell dynamics.
  • Thalamic input is weaker than polysynaptic input from the cortex for excitation.
  • Focuses on Hubel and Wiesel style stuff. Cats, SUA.
  • Stimulated the geniculate body & observed the response using intracellular electrodes from 102 neurons.
  • Their traces show lots of long-duration inhibition.
  • Probably not relevant to my purposes.


[0] Douglas RJ, Martin KA, A functional microcircuit for cat visual cortex. J Physiol 440, 735-69 (1991)

hide / / print
ref: work-0 tags: differential evolution function optimization date: 07-09-2010 14:46 gmt revision:3 [2] [1] [0] [head]

Differential evolution (DE) is an optimization method, somewhat like Nelder-Mead or simulated annealing (SA). Much like genetic algorithms, it utilizes a population of solutions and selection to explore and optimize the objective function. However, instead of perturbing vectors randomly or greedily descending the objective function's gradient, it uses the difference between individual population vectors to update candidate solutions. See below for an illustration.

At my rather cursory reading, this serves to adapt the distribution of candidate solutions (or population of solutions, to use the evolutionary term) to the structure of the underlying function to be optimized. Judging from the paper by Price and Storn (the inventors; images/821_1.pdf), DE works in situations where simulated annealing (which I am using presently, in the robot vision system) fails, and is applicable to higher-dimensional problems than simplex methods or SA. The paper tests DE on 100-dimensional problems, and it is able to solve these with on the order of 50k function evaluations. Furthermore, they show that it finds function extrema quicker than stochastic differential equations (SDE, which dates from 1985), which uses the gradient of the function to be optimized.

I'm surprised that this method slipped under my radar for so long -- why hasn't anyone mentioned this? Is it because it has no proofs of convergence? Has it more recently been superseded? (The paper is from 1997.) Yet I'm pleased, because it means that there are many other algorithms equally clever and novel (and simple?) out there in the literature, or waiting to be discovered.
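For concreteness, here is a minimal sketch of the classic DE/rand/1/bin update in OCaml. The parameter values (np, f, cr, gens) and all names are illustrative defaults for this sketch, not taken from Price and Storn's implementation.

```ocaml
(* differential evolution, DE/rand/1/bin: each trial vector is built from a
   base vector [a] plus a scaled difference of two others, [b] - [c] *)
let de ?(np = 40) ?(f = 0.8) ?(cr = 0.9) ?(gens = 200) dim cost =
  Random.init 17 ;
  let pop = Array.init np (fun _ ->
    Array.init dim (fun _ -> Random.float 10.0 -. 5.0)) in
  let fit = Array.map cost pop in
  for _ = 1 to gens do
    for i = 0 to np - 1 do
      (* pick three distinct population members, all different from i *)
      let rec pick excl =
        let r = Random.int np in
        if List.mem r excl then pick excl else r in
      let a = pick [i] in
      let b = pick [i; a] in
      let c = pick [i; a; b] in
      let jr = Random.int dim in  (* guarantee at least one mutated gene *)
      let trial = Array.init dim (fun j ->
        if j = jr || Random.float 1.0 < cr
        then pop.(a).(j) +. f *. (pop.(b).(j) -. pop.(c).(j))
        else pop.(i).(j)) in
      (* greedy selection: trial replaces parent only if it is better *)
      let ft = cost trial in
      if ft < fit.(i) then (pop.(i) <- trial ; fit.(i) <- ft)
    done
  done ;
  let best = ref 0 in
  Array.iteri (fun i v -> if v < fit.(!best) then best := i) fit ;
  (pop.(!best), fit.(!best))
```

On a 5-D sphere function this drives the best fitness to near zero within the 8000 function evaluations above; note the update touches no gradient, only vector differences.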

hide / / print
ref: work-0 tags: functional programming compilation ocaml date: 08-24-2009 14:33 gmt revision:0 [head]

The implementation of functional programming languages - book!

hide / / print
ref: work-0 tags: ocaml mysql programming functional date: 07-03-2009 19:16 gmt revision:2 [1] [0] [head]

For my work I store a lot of analyzed data in SQL databases. In one of these, I have stored the anatomical target that the data was recorded from - namely, STN or VIM thalamus. After updating the analysis programs, I needed to copy the anatomical target data over to the new SQL tables. Where Perl may have been my previous go-to language for this task, I've had enough of its strange quirks, so I decided to try it in Ruby (worked, but was not so elegant, as I don't actually know Ruby!) and then Ocaml.

#use "topfind"
#require "mysql"

(* this function takes a query and a function that converts entries 
in a row to Ocaml tuples *)
let read_table db query rowfunc =
	let r = Mysql.exec db query in
	let col = Mysql.column r in
	let rec loop = function
		| None      -> []
		| Some x    -> rowfunc col x :: loop (Mysql.fetch r) in
	loop (Mysql.fetch r)

let _ = 
	let db = Mysql.quick_connect ~host:"crispy" ~database:"turner" ~password:"" ~user:"" () in
	let nn = Mysql.not_null in
	(* this function builds a table of files (recording sessions) from a given target, then 
	uses the mysql UPDATE command to propagate to the new SQL database. *)
	let propagate targ = 
		let t = read_table db 
			("SELECT file, COUNT(file) FROM `xcor2` WHERE target='"^targ^"' GROUP BY file")
			(fun col row -> (
				nn Mysql.str2ml (col ~key:"file" ~row), 
				nn Mysql.int2ml (col ~key:"COUNT(file)" ~row) ) ) in
		List.iter (fun (fname,_) -> 
			let query = "UPDATE `xcor3` SET `target`='"^targ^
				"' WHERE STRCMP(`file`,'"^fname^"')=0" in
			print_endline query ;
			ignore( Mysql.exec db query )
		) t in
	propagate "STN" ; 
	propagate "VIM" ; 
	propagate "CTX" ; 
	Mysql.disconnect db ;;

Interacting with MySQL is quite easy with Ocaml - though the type system adds a certain overhead, it's not too bad.

hide / / print
ref: work-0 tags: covariance matrix adaptation learning evolution continuous function normal gaussian statistics date: 06-30-2009 15:07 gmt revision:0 [head]


  • Details a method of sampling + covariance matrix approximation to find the extrema of a continuous (but intractable) fitness function
  • Has flavors of RLS / Kalman filtering. Indeed, I think that Kalman filtering may be a more principled method for optimization?
  • Can be used in high-dimensional optimization problems like finding optimal weights for a neural network.
  • Optimum-seeking is provided by weighting the stochastic samples (generated a la a particle filter or unscented Kalman filter) by their fitness.
  • Introductory material is quite good, actually...
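The sample-and-reweight idea can be sketched concretely: draw Gaussian samples around a mean, rank by fitness, and recombine the best half with log-rank weights. This is only the recombination step of CMA-ES; the covariance adaptation is omitted here (isotropic sigma, decayed geometrically instead), so treat it as an illustration of the weighting idea, not the reference algorithm.

```ocaml
(* fitness-weighted Gaussian sampling, CMA-ES-style recombination only *)
let rec search mean sigma cost iters =
  if iters = 0 then mean
  else begin
    let dim = Array.length mean in
    let lambda = 20 in                        (* samples per generation *)
    let pi = 4.0 *. atan 1.0 in
    let gauss () =                            (* Box-Muller standard normal *)
      let u1 = Random.float 1.0 +. 1e-12 and u2 = Random.float 1.0 in
      sqrt (-2.0 *. log u1) *. cos (2.0 *. pi *. u2) in
    let samples = Array.init lambda (fun _ ->
      Array.init dim (fun j -> mean.(j) +. sigma *. gauss ())) in
    (* rank by fitness; keep the best half, weighted by log-rank *)
    let scored = Array.map (fun x -> (cost x, x)) samples in
    Array.sort (fun (a, _) (b, _) -> compare a b) scored ;
    let mu = lambda / 2 in
    let w = Array.init mu (fun i ->
      log (float_of_int mu +. 0.5) -. log (float_of_int (i + 1))) in
    let wsum = Array.fold_left (+.) 0.0 w in
    let mean' = Array.init dim (fun j ->
      let s = ref 0.0 in
      for i = 0 to mu - 1 do s := !s +. w.(i) *. (snd scored.(i)).(j) done ;
      !s /. wsum) in
    (* crude stand-in for step-size adaptation: geometric decay *)
    search mean' (sigma *. 0.97) cost (iters - 1)
  end
```

The weighting is where "optimum-seeking" lives: better-ranked samples pull the mean harder, so the distribution drifts toward the extremum without any gradient information.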

hide / / print
ref: Isoda-2007.02 tags: SMA saccade basal_forebrain executive function 2007 microstimulation SUA cortex sclin date: 10-03-2008 17:12 gmt revision:2 [1] [0] [head]

PMID-17237780[0] Switching from automatic to controlled action by monkey medial frontal cortex.

  • SCLIN's blog entry
  • task: two monkeys were trained to saccade to one of two targets, left/right, pink/yellow. The choice was cued by the color of the central fixation target; when it changed, they were to saccade to the same-colored target.
    • usually, the saccade direction remained the same; sometimes, it switched.
    • the switch could either occur to the same side as the SUA recording (ipsilateral) or to the opposite (contralateral).
  • found cells in the pre-SMA that would fire when the monkey had to change his adapted behavior
    • both cells that increased firing upon an ipsi-switch and contra-switch
  • microstimulated in SMA, and increased the number of correct trials!
    • 60 uA, 0.2 ms, cathodal only
    • design: the stimulation simulated adaptive-response-related activity, slightly in advance of its natural timing
    • they don't actually have that many trials of this. humm?
  • they also did some go-nogo (no saccade) work, in which there were neurons responsive to inhibiting as well as facilitating saccades on both sides.
    • not a hell of a lot of neurons here nor trials, either - but i guess proper statistical design obviates the need for this.
  • I think if you recast this in terms of reward expectation it will make more sense and be less magical.
  • would like to do shadlen-similar type stuff in the STN
  1. how long did it take to train the monkeys to do this?
  2. what part of the nervous system looked at the planned action with visual context, and realized that the normal habitual basal-ganglia output would be wrong?
    1. probably the whole brain is involved in this.
    2. hypothetical path of error trials: visual system -> cortico-cortico projections + context activation -> preparatory motor activity -> basal ganglia + visual context (is there anatomical basis for this?) -> activation of some region that detects the motor plan is unlikely to result in reward -> SMA?


hide / / print
ref: bookmark-0 tags: optimization function search matlab linear nonlinear programming date: 08-09-2007 02:21 gmt revision:0 [head]


very nice collection of links!!

hide / / print
ref: Schaal-1998.11 tags: schaal local learning PLS partial least squares function approximation date: 0-0-2007 0:0 revision:0 [head]

PMID-9804671 Constructive incremental learning from only local information

hide / / print
ref: Nakanishi-2005.01 tags: schaal adaptive control function approximation error learning date: 0-0-2007 0:0 revision:0 [head]

PMID-15649663 Composite adaptive control with locally weighted statistical learning.

  • idea: they want error tracking plus locally-weighted piecewise-linear function approximation (though I didn't read it in all that much depth... it is complicated)