[0] Chan SS, Moran DW, Computational model of a primate arm: from hand position to joint angles, joint torques and muscle forces. J Neural Eng 3:4, 327-37 (2006 Dec)

ref: -0 tags: computational biology evolution metabolic networks andreas wagner genotype phenotype network date: 06-12-2017 19:35 gmt revision:1 [0] [head]

Evolutionary Plasticity and Innovations in Complex Metabolic Reaction Networks

  • João F. Matias Rodrigues, Andreas Wagner
  • Our observations suggest that the robustness of the Escherichia coli metabolic network to mutations is typical of networks with the same phenotype.
  • We demonstrate that networks with the same phenotype form large sets that can be traversed through single mutations, and that single mutations of different genotypes with the same phenotype can yield very different novel phenotypes
  • Entirely computational study.
    • Examines what is possible given known metabolic building-blocks.
  • Methodology: collated a list of all metabolic reactions in E. coli (726 reactions, excluding 205 transport reactions) out of 5870 known possible reactions.
    • Then ran random-walk mutation experiments to see where the genotype + phenotype could move. Each point in the genotype space had to be viable on either a rich (many carbon sources) or minimal (glucose) growth medium.
    • Viability was determined by Flux-balance analysis (FBA).
      • “In our work we use a set of biochemical precursors from E. coli [47-49] as the set of required compounds a network needs to synthesize; by using linear programming to optimize the flux through a specific objective function (in this case the reaction representing the production of biomass precursors) we are able to know if a specific metabolic network is able to synthesize the precursors or not.”
      • Used Coin-OR and ILOG to optimize the metabolic concentrations (I think?) per given network.
    • This included the ability to synthesize all required precursor biomolecules; see supplementary information.
    • “Viable” is highly permissive -- non-zero biomolecule concentration using FBA and linear programming.
    • Genomic distance = hamming distance between binary vectors, where 1 = enzyme / reaction present and 0 = mutated off; distances are normalized so that 0 = identical genotype and 1 = completely different genotype.
  • Between pairs of viable genetic-metabolic networks, only a minority (30 - 40%) of reactions are essential.
    • This fraction naturally increases with increasing carbon-source diversity.
    • When they go back and examine networks that can sustain life on any of (up to) 60 carbon sources, and again measure the distance from the original E. coli genome, they find this added robustness does not significantly constrain network architecture.
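The genotype encoding and random-walk procedure above can be sketched in code. This is a minimal sketch assuming the paper's conventions; `viable` is a hypothetical stand-in for the flux-balance-analysis check, which is far more involved:

```ocaml
(* Genotype = bit vector over the universe of known reactions;
   true = enzyme/reaction present, false = mutated off. *)
let n_reactions = 5870

let random_genotype () = Array.init n_reactions (fun _ -> Random.bool ())

(* Normalized hamming distance: 0. = identical, 1. = completely different. *)
let hamming a b =
  let diff = ref 0 in
  Array.iteri (fun i ai -> if ai <> b.(i) then incr diff) a;
  float_of_int !diff /. float_of_int (Array.length a)

(* One random-walk step: flip a random reaction on/off and keep the mutant
   only if it remains viable. [viable] is a hypothetical stand-in for the
   FBA / linear-programming viability test used in the paper. *)
let step viable genotype =
  let g = Array.copy genotype in
  let i = Random.int (Array.length g) in
  g.(i) <- not g.(i);
  if viable g then g else genotype
```

Iterating `step` under a fixed-phenotype viability predicate is exactly the traversal of a neutral network that the paper measures.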

Summary thoughts: This is a highly interesting study, insofar as the authors show substantial support for their hypothesis that phenotype space can be explored through random-walk, non-lethal mutations of the genotype, and that this is somewhat invariant to the carbon source for known biochemical reactions. What gives me pause is the use of linear programming / optimization when setting the relative concentrations of biomolecules, and the permissive criteria for accepting these networks; real life (I would imagine) is far more constrained. Relative and absolute concentrations matter.

Still, the study does reflect some robustness. I suggest that a good control would be to ‘fuzz’ the list of available reactions based on statistical criteria, and see if the results still hold. Then, go back and make the reactions un-biological or less networked, and see if this destroys the measured degrees of robustness.

ref: Holgado-2010.09 tags: DBS oscillations beta globus pallidus simulation computational model date: 02-22-2012 18:36 gmt revision:4 [3] [2] [1] [0] [head]

PMID-20844130[0] Conditions for the Generation of Beta Oscillations in the Subthalamic Nucleus–Globus Pallidus Network

  • Modeled the external globus pallidus (GPe) & STN; arrived at criteria under which the system shows beta-band oscillations.
    • STN is primarily glutamatergic and projects to GPe (along with many other areas).
      • STN gets lots of cortical afferents, too.
    • GPe is GABAergic and projects profusely back to STN.
    • This inhibition leads to more accurate choices.
      • (Frank, 2006; PMID not recorded:
        • The present [neural network] model incorporates the STN and shows that by modulating when a response is executed, the STN reduces premature responding and therefore has substantial effects on which response is ultimately selected, particularly when there are multiple competing responses.
        • Increased cortical response conflict leads to dynamic adjustments in response thresholds via cortico-subthalamic-pallidal pathways.
        • the model accounts for the beneficial effects of STN lesions on these oscillations, but suggests that this benefit may come at the expense of impaired decision making.
        • Not totally convinced -- impulsivity is due to larger network effects. Delay in conflict situations is an emergent property, not localized to STN.
      • Frank 2007 {1077}.
  • Beta band: cite Boraud et al 2005.
  • Huh -- parameters drawn from Misha's work, among others, plus Kita 2004, 2005.
    • Striatum has a low spike rate but high modulation? Schultz and Romo 1988.
  • In their model there is a wide range of parameters (bidirectional weights) that leads to oscillation
  • In PD the striatum is hyperactive in the indirect path (Obeso et al 2000); their model duplicates this.
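The flavor of such a model can be sketched as two coupled firing-rate populations with transmission delays: STN excites GPe, GPe inhibits STN. Everything below (weights, delays, inputs, time constants) is illustrative scaffolding of my own choosing, not the parameter set fitted in the paper:

```ocaml
(* Minimal delayed firing-rate loop in the spirit of the STN-GPe model.
   All parameters are illustrative, not the paper's fitted values. *)
let dt = 0.1            (* integration step, ms *)
let tau = 6.0           (* membrane time constant, ms *)
let delay_steps = 60    (* ~6 ms axonal/synaptic delay, in steps *)
let w_gs = 5.0          (* STN -> GPe weight (excitatory) *)
let w_sg = 5.0          (* GPe -> STN weight (inhibitory, applied with a minus sign) *)

let f x = max 0.0 x     (* rectifying activation function *)

(* Euler-integrate n steps; history arrays hold the delayed rates. *)
let simulate n =
  let s = Array.make (n + delay_steps) 1.0 in  (* STN rate history *)
  let g = Array.make (n + delay_steps) 1.0 in  (* GPe rate history *)
  for t = delay_steps to n + delay_steps - 1 do
    let ds = (-. s.(t - 1) +. f (1.0 -. w_sg *. g.(t - delay_steps))) /. tau in
    let dg = (-. g.(t - 1) +. f (w_gs *. s.(t - delay_steps) -. 1.0)) /. tau in
    s.(t) <- s.(t - 1) +. dt *. ds;
    g.(t) <- g.(t - 1) +. dt *. dg
  done;
  (s, g)
```

With sufficiently strong bidirectional weights and a long enough loop delay, rate models of this form oscillate; the paper's contribution is mapping out that parameter region for biologically constrained values.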


[0] Holgado AJ, Terry JR, Bogacz R, Conditions for the generation of beta oscillations in the subthalamic nucleus-globus pallidus network. J Neurosci 30:37, 12340-52 (2010 Sep 15)

ref: OReilly-2006.02 tags: computational model prefrontal_cortex basal_ganglia date: 12-07-2011 04:11 gmt revision:1 [0] [head]

PMID-16378516[0] Making Working Memory Work: A Computational Model of Learning in the Prefrontal Cortex and Basal Ganglia

found via: http://www.citeulike.org/tag/basal-ganglia


[0] O'Reilly RC, Frank MJ, Making working memory work: a computational model of learning in the prefrontal cortex and basal ganglia. Neural Comput 18:2, 283-328 (2006 Feb)

ref: work-0 tags: Ng computational learning theory machine date: 10-25-2009 19:14 gmt revision:0 [head]

Andrew Ng's notes on learning theory

  • goes over the bias / variance tradeoff.
    • variance = the gap between training and testing performance: the model fits the training set well but has a large testing / generalization error (overfitting).
    • bias = the expected generalization error even if the model is fit to a very large training set.
  • proves that, with a sufficiently large training set, the training error will be close to the generalization error.
    • also gives an upper bound on the generalization error in terms of the training error and the number of hypotheses available (for a finite, discrete hypothesis class)
    • this bound is only logarithmic in k, the number of hypotheses.
  • the training size m that a certain method or algorithm requires in order to achieve a certain level of performance is the algorithm's sample complexity.
  • shows that with infinite hypothesis space, the number of training examples needed is at most linear in the parameters of the model.
  • goes over the Vapnik-Chervonenkis dimension, VC(H) = the size of the largest set that is shattered by the hypothesis space H.
    • A hypothesis space shatters a set S if it can realize any labeling (binary, I think) on the points of S. see his diagram.
    • In order to prove that VC(H) is at least d, one only needs to show that there is at least one set of size d that H can shatter.
  • There are more notes in the containing directory - http://www.stanford.edu/class/cs229/notes/
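The finite-class bound above is concrete enough to compute. A sketch of the standard Hoeffding + union-bound result (with probability at least 1 - delta, all k hypotheses have training error within gamma of generalization error); the function names are mine, not Ng's:

```ocaml
(* Uniform-convergence bound for a finite hypothesis class of size k:
   with probability >= 1 - delta, every hypothesis satisfies
   |training error - generalization error| <= gamma. *)

(* The gap gamma guaranteed by m training examples. *)
let gamma ~m ~k ~delta =
  sqrt ((1.0 /. (2.0 *. float_of_int m))
        *. log (2.0 *. float_of_int k /. delta))

(* Sample complexity: m needed to guarantee a gap of at most gamma.
   Note it grows only logarithmically in k. *)
let sample_complexity ~gamma ~k ~delta =
  int_of_float (ceil ((1.0 /. (2.0 *. gamma *. gamma))
                      *. log (2.0 *. float_of_int k /. delta)))
```

Plugging in a thousand-fold larger k increases the required m by only a modest constant factor, which is the "only logarithmic in k" point in the notes.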

ref: -0 tags: computational geometry triangulation ocaml kicadocaml zone fill edge date: 01-26-2009 01:47 gmt revision:3 [2] [1] [0] [head]

I have been working hard to add zone support to kicadocaml, since the implementation in kicad's PCBnew is somewhat broken (at least for my boards). It is not a very easy task!

Roughly, the task is this: given a zone of copper pour, perhaps attached to the ground net, and a series of tracks, vias, and pads also on that layer of the PCB but not on the same net, form cutouts in the zone so that there is an even spacing between the tracks/vias and zone.

Currently I'm attacking the problem using triangles (not polygons, like the other PCB softwares). I chose triangles since I'm using OpenGL to display the PCB, and triangles are a very native mode of drawing in OpenGL. Points are added to the triangle mesh with an incremental algorithm, where the triangles are stored as a linked mesh: each triangle has a pointer (index #) to the neighboring triangle off each of its edges ab, bc, ca. This makes finding the containing triangle when inserting a point a matter of jumping between triangles; since many of the points to be inserted are close to each other, this is a relatively efficient algorithm. Once the triangle containing a point to be inserted is found, the triangle is split into three, the pointers are updated appropriately, and each triangle is tested to see if flipping with its pair would result in a net larger smallest interior angle between the two. (This is not the same as Delaunay's criterion, but it is simpler, and it produces equally beautiful pictures.)
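The flip test described above can be sketched as follows. This is illustrative scaffolding, not the kicadocaml source; points are 2D float pairs, and the quadrilateral a-b-c-d is assumed convex with b-d as the current shared diagonal:

```ocaml
(* 2D helpers (self-contained; kicadocaml has its own versions). *)
let sub (ax, ay) (bx, by) = (ax -. bx, ay -. by)
let len (x, y) = sqrt (x *. x +. y *. y)
let dot (ax, ay) (bx, by) = ax *. bx +. ay *. by

(* acos clamped against floating-point overshoot of the [-1,1] domain. *)
let safe_acos x = acos (max (-1.0) (min 1.0 x))

(* Interior angle at vertex [a] of triangle abc. *)
let angle a b c =
  let u = sub b a and v = sub c a in
  safe_acos (dot u v /. (len u *. len v))

let min_angle a b c =
  min (angle a b c) (min (angle b a c) (angle c a b))

(* Triangles abd and bcd share diagonal b-d; flipping to diagonal a-c
   yields abc and acd. Flip iff the flipped pair has a strictly larger
   smallest interior angle. *)
let should_flip a b c d =
  let before = min (min_angle a b d) (min_angle b c d) in
  let after  = min (min_angle a b c) (min_angle a c d) in
  after > before
```

For a symmetric quadrilateral the two diagonals tie and no flip happens; for a pair of skinny triangles sharing the long diagonal, the flip wins.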

The problem is when two triangles are allowed to overlap or a gap is allowed - this makes the search algorithm die or get into a loop, and is a major problem with the approach. In Guibas and Stolfi's paper, "Primitives for the manipulation of general subdivisions and the computation of Voronoi diagrams", they use an edge data structure rather than a triangle data structure, which I suppose avoids this problem. I was lazy when starting this project, and chose the more obvious triangle-centric way of storing the data.

The insertion of points is actually not so hard; the big problem is making sure the edges in the original list of polygons are represented in the list of edges in the triangle mesh. Otherwise, triangles will span edges, which will result in DRC violations (e.g. copper too close to vias). My inefficient way of doing this is to calculate, for all triangles, their intersections with the polygon segments, then add the intersection points to the mesh until all segments are represented in the edge list. This process, too, is prone to numerical instability.

Perhaps the solution is to move back to an edge-centric data representation, so that certain edges can be 'pinned' or frozen, and hence they are guaranteed to be in the triangle mesh's edge list. I don't know; need to think about this more.

Update: I got most of it working; at least the triangulation & making sure the edges are in the triangle mesh are working. Mostly there were issues with numerical precision with narrow / small triangles; I rewrote the inside triangle function to use the cross product, which helped (this seems like the simplest way, and it avoids divisions!):

(* true iff point d is strictly inside the CCW triangle abc:
   d must be to the left of each directed edge. *)
let insidetri a b c d =
	cross (sub b a) (sub d a) > 0.0 &&
	cross (sub c b) (sub d b) > 0.0 &&
	cross (sub a c) (sub d c) > 0.0

as well as the segment-segment intersection algorithm:

let intersect a b c d =
	(* see if segments ab and cd intersect; return the point of intersection too *)
	let ab = sub b a in
	(* work in a frame where a is the origin and ab is the x-axis *)
	let bp = length ab in
	let xx = norm ab in
	let yy = ((-1.) *. (snd xx), fst xx) in
	let project e =
		(dot (sub e a) xx), (dot (sub e a) yy) in
	let cp = project c in
	let dp = project d in
	let cd = sub dp cp in
	(* line through cp-dp as x = m*y + o; degenerate if cd is parallel to ab *)
	let m = (fst cd) /. (snd cd) in
	let o = (fst cp) -. m *. (snd cp) in
	let e = add (scl ab (o /. bp)) a in
	(* cp and dp must span the x-axis *)
	if ((snd cp) <= 0. && (snd dp) >= 0.) || ((snd cp) >= 0. && (snd dp) <= 0.) then (
		if o >= 0. && o <= bp then ( true, e )
		else ( false, e )
	) else ( false, e )

Everything was very sensitive to ">" vs. ">=" -- all must be correct. All triangles must be CCW, too, for the inside algorithm to work - this requires that points to be inserted close to a triangle edge be snapped to that edge, to avoid any possible CW triangles. (Determining if a triangle is CW or CCW is as simple as checking the sign of the cross product of two of its edges.) I tried, for a day or so, to include a specialized function to insert points along a triangle's edge, but that turned out not to matter; the normal flipping routine works fine. I also tried inserting auxiliary points to try to break up very small triangles, but that really didn't affect the stability of the algorithm much. It is either correct, or it is not, and my large board was a good test suite. I have, however, seeded the triangulation with a grid of (up to) 20x20 points (this depends on the aspect ratio of the region to be filled - the points are equally spaced in x and y). This adds (at most) 800 triangles, but it makes the algorithm more stable - fewer very narrow triangles - and we are working with sets of 10,000 triangles anyway for the large zones of copper.
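The CCW precondition mentioned above is a one-liner in the same style as insidetri (helpers repeated here so the snippet stands alone):

```ocaml
(* 2D helpers, matching the style used by insidetri above. *)
let sub (ax, ay) (bx, by) = (ax -. bx, ay -. by)
let cross (ax, ay) (bx, by) = ax *. by -. ay *. bx

(* Triangle abc is counter-clockwise iff the cross product of its
   first two edges is positive (zero means degenerate/collinear). *)
let ccw a b c = cross (sub b a) (sub c a) > 0.0
```

Running every candidate triangle through `ccw` before insertion is what makes the strict ">" comparisons in insidetri safe.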

Some corrections remain to be done regarding removing triangles based on DRC violations, and using the linked mesh of triangles when calculating edge-triangle edge intersections, but that should be relatively minor. Now I have to figure out how to store it in Kicad's ".brd" file format. Kicad uses the "Kbool" library for intersecting polygons - much faster than my triangle methods (well, it's in C, not ocaml) - and generates concave polygons, not triangles. I would prefer to do this so that I don't have to re-implement gerber export. (Of course, look at how much I have re-implemented! This was originally a project just to learn ocaml - well, gotta have some fun :-)

ref: Chan-2006.12 tags: computational model primate arm musculoskeletal motor_control Moran date: 04-09-2007 22:35 gmt revision:1 [0] [head]

PMID-17124337[0] Computational Model of a Primate Arm: from hand position to joint angles, joint torques, and muscle forces. Ideas:

  • no study so far has been able to incorporate all of these variables (global hand position & velocity, joint angles, joint angular velocities, joint torques, muscle activations)
  • they have a 3D, 7-DOF model that translates actual motion to optimized muscle activations.
  • knock the old center-out research (nice!)
  • 38 musculoskeletal-tendon units
  • past research: people have found correlations to both forces and higher-level parameters, like position and velocity. These must be transformed via inverse dynamics to generate a motor plan / actually move the arm.
  • used SIMM to optimize the joint locations to replicate actual movements...
  • assume that the torso is the inertial frame.
  • used infrared Optotrak 3020
  • their model is consistent - they can use the inverse model to calculate muscle activations which, when fed back into the forward model, result in realistic movements. Still, they do not compare to actual EMG.
  • for working with the dynamic model of the arm, they used AUTOLEV
    • I wish I could figure out what the Kane method is; they seem to leverage it here.
  • their inverse model is pretty clever:
  1. take the present attitude/orientation & velocity of the arm, and using parts of the forward model, calculate the contributions from gravity & coriolis forces.
  2. subtract this from the torques estimated via M*A (moment of inertia times angular acceleration) to yield the contributions of the muscles.
  3. perturb each of the joints / DOF & measure the resulting arm motion, integrated over the same period as measurement
  4. form a linear equation with the linearized torque-responses on the left, and the muscle torque contributions on the right. Invert this equation to get the actual joint torques. (presumably the matrix spans row space).
  5. to figure out the muscle contributions, do the same thing - apply activation, scaled by the PCSA, to each muscle & measure the resulting torque (this is effectively the moment arm).
  6. take the resulting 38x7 matrix & p-inverse, with the constraint that none of the muscle activations are negative, yielding a somewhat well-specified muscle activation. not all that complicated of a method
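In matrix form, the steps above amount to the following (my own shorthand, not the paper's notation):

```latex
% step 2: isolate the muscle torques from measured kinematics,
% using the forward model's gravity and Coriolis terms (step 1)
\tau_{\mathrm{muscle}} = M(\theta)\,\ddot{\theta}
  - \tau_{\mathrm{gravity}}(\theta)
  - \tau_{\mathrm{Coriolis}}(\theta, \dot{\theta})

% steps 5-6: A is the moment-arm matrix found by perturbing each
% PCSA-scaled muscle; solve for nonnegative activations a
A\,a = \tau_{\mathrm{muscle}}, \qquad
a^{*} = \arg\min_{a \ge 0} \,\lVert A\,a - \tau_{\mathrm{muscle}} \rVert_{2}
```

The constrained pseudo-inverse of step 6 is thus a nonnegative least-squares problem over the 38 muscle activations.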


ref: Flash-2001.12 tags: Flash Sejnowski 2001 computational motor control learning PRR date: 0-0-2007 0:0 revision:0 [head]

PMID-11741014 Computational approaches to motor control. Tamar Flash and Terry Sejnowski.

  • PRR = parietal reach region
  • essential controversies (to them):
    • the question of motor variables that are coded by neural populations.
    • equilibrium point control vs. inverse dynamics (the latter is obviously better/more correct)