ref: -0 tags: computational neuroscience opinion tony zador konrad kording lillicrap date: 07-30-2019 21:04 gmt

Two papers out recently on arXiv and bioRxiv:

  • A critique of pure learning: what artificial neural networks can learn from animal brains
    • Animals learn rapidly and robustly, without the need for labeled sensory data, largely through innate mechanisms as arrived at and encoded genetically through evolution.
    • Still, this cannot account for the connectivity of the human brain, which is much too large for the genome to specify directly; instead, there are canonical circuits and patterns of intra-area connectivity which act as the 'innate' learning biases.
    • Mice and men are not so far apart evolutionarily (I've heard this also from people doing FIB-SEM imaging of cortex), hence understanding one should appreciably help us understand the other. (I agree with this sentiment, but for the fact that lab mice are dumb and have pretty stereotyped behaviors.)
    • References Long short term memory and learning to learn in networks of spiking neurons -- which claims that a hybrid algorithm (BPTT with neuronal rewiring) with realistic neuronal dynamics markedly increases the computational power of spiking neural networks.
  • What does it mean to understand a neural network?
    • As has long been the intuition of many neuroscientists, the paper posits that we have to investigate the developmental rules (wiring and connectivity, same as above) plus the local-ish learning rules (synaptic, dendritic, even astrocytic).
      • The weights themselves, whether in biological neural networks or in ANNs, are not at all informative! (Duh.)
    • Emphasizes the concept of compressibility: how much information can be discarded without impacting performance? With some modern ANNs, 30-50x compression is possible. The authors argue that little compression is possible in the human brain -- the wealth of detail about the world is needed! In other words, no compact description is possible.
    • Hence, you need to understand how the network learns those details, and how it is structured so that important things are learned rapidly and robustly, as seen in animals (very similar to above).
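
The compressibility point can be illustrated with a toy magnitude-pruning sketch (my own illustration, not from the paper; the layer, weights, and numbers are made up): when most weights in a layer carry little signal, zeroing the small ones barely changes the output, which is the sense in which the network is compressible.

```python
import random

random.seed(0)

# A toy 'layer': most weights are near zero, a few carry the signal.
n = 200
weights = [random.gauss(0, 0.01) for _ in range(n)]
for i in random.sample(range(n), 10):
    weights[i] = random.gauss(0, 1.0)

def forward(w, x):
    """Output of a single linear unit with weight vector w and input x."""
    return sum(wi * xi for wi, xi in zip(w, x))

# Magnitude pruning: keep only the 10 largest-magnitude weights
# (a 20x compression if only the nonzero weights are stored).
keep = set(sorted(range(n), key=lambda i: -abs(weights[i]))[:n // 20])
pruned = [w if i in keep else 0.0 for i, w in enumerate(weights)]

x = [random.gauss(0, 1) for _ in range(n)]
print(f"full output: {forward(weights, x):.3f}, pruned output: {forward(pruned, x):.3f}")
```

The claim about the brain is then that its 'weight distribution' is not like this: there is no small subset of parameters carrying most of the signal.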

ref: Frank-2007.11 tags: horses PD STN DBS levodopa decision learning science date: 01-25-2012 00:50 gmt

PMID-17962524[0] Hold your horses: impulsivity, deep brain stimulation, and medication in parkinsonism.

  • While on DBS, patients actually sped up their decisions under high-conflict conditions. Wow!
    • This impulsivity was not affected by dopaminergic medication status.
    • Impulsivity may be the cognitive equivalent of excess grip force {88}.
  • Mathematical models of decision making suggest that individuals only execute a choice once the 'evidence' in its favor crosses a critical decision threshold.
    • people can adjust decision thresholds to meet current task demands
    • One theory is that the STN modulates decision thresholds (6), delaying decision-making when faced with conflict. The authors wanted to test this in a conflict situation.
    • Record from the STN in conflict task to see ??
  • Second, they wanted to test learning from negative outcomes.
    • Dopamine replacement therapy impairs patients' ability to learn from the negative outcomes of their decisions (11 - 13), which may account for pathological gambling behavior (14).
    • PD patients did indeed score worse on avoidance, slightly less accurate on AB choice, and about the same for the rest.
  • Made a network model.
    • Found that preSMA and STN coactivation is associated with slowed reaction times under decision conflict (25).
    • And that STN-DBS reduces coupling between cingulate and basal ganglia output (27).
    • In their model, they either lesioned the STN or overloaded it with high-frequency regular firing.
      • Either manipulation produced the same faster responses in high-conflict decisions.
  • STN dysfunction does not lead to impulsivity in all behavioral situations.
    • STN lesioned rats show enhanced preference for choices that lead to large delayed rewards compared to those that yield small immediate rewards (32,33). (This is not conflict, though -- rather reward -- but nonetheless illuminating)
  • Dopaminergic medication, by tonically elevating dopamine levels and stimulating D2 receptors, prevents learning from negative decision outcomes (11, 13, 18). Hence pathological gambling behavior (14).
  • Other studies show DBS-induced impairments in cognitive control (27 PMID-17119543, 36 PMID-15079009).
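
The threshold idea in these accumulate-to-bound accounts can be sketched with a minimal drift-diffusion simulation (my own toy sketch, not the paper's network model; drift, noise, and thresholds are made-up values): raising the decision threshold trades speed for accuracy, which is what the STN is hypothesized to do under conflict.

```python
import random

def ddm_trial(drift, threshold, dt=0.001, max_t=5.0):
    """One drift-diffusion trial: accumulate noisy evidence until it
    crosses +threshold (correct, since drift > 0) or -threshold (error).
    Returns (correct, reaction_time)."""
    x, t = 0.0, 0.0
    while abs(x) < threshold and t < max_t:
        x += drift * dt + random.gauss(0.0, dt ** 0.5)  # unit-variance noise
        t += dt
    return (x > 0), t

random.seed(1)
# A higher threshold - what the STN is hypothesized to impose under
# conflict - slows decisions but makes them more accurate.
for threshold in (0.5, 2.0):
    trials = [ddm_trial(drift=1.0, threshold=threshold) for _ in range(500)]
    acc = sum(c for c, _ in trials) / len(trials)
    rt = sum(t for _, t in trials) / len(trials)
    print(f"threshold={threshold}: accuracy={acc:.2f}, mean RT={rt:.2f}s")
```

On this reading, DBS knocking out the STN's threshold-raising role predicts exactly the fast, error-prone high-conflict choices the paper reports.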


[0] Frank MJ, Samanta J, Moustafa AA, Sherman SJ, Hold your horses: impulsivity, deep brain stimulation, and medication in parkinsonism. Science 318:5854, 1309-12 (2007 Nov 23)

ref: Friston-2002.1 tags: neuroscience philosophy feedback top-down sensory integration inference date: 10-25-2011 23:24 gmt

PMID-12450490 Functional integration and inference in the brain

  • Extra-classical tuning: tuning is dependent on behavioral context (motor) or stimulus context (sensory). Author proposes that neuroimaging can be used to investigate it in humans.
  • "Information theory can, in principle, proceed using only forward connections. However, it turns out that this is only possible when processes generating sensory inputs are invertible and independent. Invertibility is precluded when the cause of a percept and the context in which it is engendered interact." -- proof? citations? Makes sense though.
  • Argues for a rather simplistic proof of backward connections via neuroimaging.

ref: -0 tags: science decay truth observation bias Jonah Lehrer new yorker date: 12-20-2010 01:23 gmt

"The Truth Wears Off" by Jonah Lehrer, the New Yorker.

  • "The disturbing implication of the Crabbe study is that a lot of extraordinary scientific data are nothing but noise. [...] This suggests that the decline effect is actually a decline of illusion."
  • "The situation is even worse when a subject is fashionable. In recent years, for instance, there have been hundreds of studies on the various genes that control the differences in risk between men and women. These findings have ranged from the mutations responsible for the increased risk of schizophrenia to the genes underlying hypertension. Ioannidis and his colleagues looked at 432 of these claims. [...] The most troubling fact emerged when he looked at the test of replication: out of four hundred thirty-two claims, only a single one was consistently replicable. "This doesn't mean that none of these claims will turn out to be true," he says. "But, given that most of them were done badly, I wouldn't hold my breath.""
  • Some follow-up discussion on Wired Science.
  • Synopsis of the sources of this decline:
    • The original data was an outlier; we scientists are biased to look for interesting outliers & report them.
      • The decline is nothing more than regression to the mean.
    • Scientists have strong observation bias, especially when measuring difficult things, like the length of wing feathers (hypothesis being that symmetrical males mate more, are selected for by the females of their species).
    • Publishers have strong bias; they like to publish positive results.
      • Hell, we humans like/love positive results (what works!) which is good and normal.
    • This is a trace - an 'impulse response' - of the feedback system that is science. An idea is a fad for a few years, during which other scientists try to repeat and buttress it (which leads to a strong bias in publishing); then scientists seeking novelty attack it. The idea henceforth declines.
  • Anyway, have been thinking this for a while, good to see some evidence (meta-evidence?).
  • Richard Feynman quote, courtesy of Joey, which illustrates another side of the coin: "Millikan measured the charge on an electron by an experiment with falling oil drops, and got an answer which we now know not to be quite right. It’s a little bit off, because he had the incorrect value for the viscosity of air. It’s interesting to look at the history of measurements of the charge of the electron, after Millikan. If you plot them as a function of time, you find that one is a little bigger than Millikan’s, and the next one’s a little bit bigger than that, and the next one’s a little bit bigger than that, until finally they settle down to a number which is higher. Why didn’t they discover that the new number was higher right away? It’s a thing that scientists are ashamed of–this history–because it’s apparent that people did things like this: When they got a number that was too high above Millikan’s, they thought something must be wrong–and they would look for and find a reason why something might be wrong. When they got a number closer to Millikan’s value they didn’t look so hard. And so they eliminated the numbers that were too far off, and did other things like that."
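
The regression-to-the-mean account of the decline effect is easy to simulate (a toy sketch of my own, with made-up numbers, not from the article): if journals publish only striking initial results, replications of those same studies will, on average, shrink back toward the true effect.

```python
import random

random.seed(0)

# Each 'study' measures a true effect of 0.2 with Gaussian sampling error.
true_effect, noise_sd, n_studies = 0.2, 0.3, 10000

initial = [true_effect + random.gauss(0, noise_sd) for _ in range(n_studies)]
# Selection bias: only striking initial results (effect > 0.5) get published.
published = [e for e in initial if e > 0.5]
# Exact replications of the published studies: fresh noise, same true effect.
replications = [true_effect + random.gauss(0, noise_sd) for _ in published]

print(f"mean published initial effect: {sum(published) / len(published):.2f}")
print(f"mean replication effect:       {sum(replications) / len(replications):.2f}")
```

No decay of truth is needed: the selection step alone guarantees the published mean overstates the true effect, and the replications merely measure honestly.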

ref: -0 tags: sciences artificial Simon organizations economic rationality date: 12-01-2010 07:33 gmt

These are notes from reading Herbert A. Simon’s The Sciences of the Artificial, third edition, 1996 (though most of the material seems from the 70s). They are half quoted / half paraphrased (as needed when the original phrasing was clunky). I’ve added a few of my own observations, and reordered the ideas from the book.

“A large body of evidence shows that human choices are not consistent and transitive, as they would be if a utility function existed ... In general a large gain along one axis is required to compensate for a small loss along another.” HA Simon.

"Companies within a capitalist economy make almost negligible use of markets in their internal functioning" - HA Simon. E.g., they are internally command economies. (Later, p. 40:) "We take the frequent movability and indefiniteness of organizational boundaries as evidence that there is often a near balance between the advantages of markets and organizations."

  • Retail sales of automobiles are handled by dealerships
  • Many other commodities are sold directly to the consumer
  • In fast food there are direct outlets and franchises.
  • There are sole source suppliers that produce parts for much larger manufacturers.
I’m realizing / imagining a very flexible system of organizations, tied together and communicating via a liquid ‘blood’ of the market economy.

That said: organizations are not highly centralized structures in which all the important decisions are made at the center; this would exceed the limits of procedural rationality and lose many of the advantages attainable from the use of hierarchical authority. Business organizations, like markets, are vast distributed computers whose decision processes are substantially decentralized. In fact, the work of the head of a corporation is a market-like activity: allocating capital to promising or desirable projects.

In organizations, uncertainty is often a good reason to shift from markets to hierarchies in making decisions. If two different arms of a corporation - production and marketing - make different decisions on the uncertain number of units to be sold next year, there will be a problem. It is better for the management to share assumptions. “Left to the market, this kind of uncertainty leads directly to the dilemmas of rationality that we described earlier in terms of game theory and rational expectations”

I retain vivid memories of the astonishment and disbelief expressed by the architecture students to whom I taught urban land economics many years ago when I pointed to medieval cities as marvelously patterned systems that had mostly just 'grown' in response to myriads of individual human decisions. To my students a pattern implied a planner in whose mind it had been conceived and by whose hand it had been implemented. The idea that a city could acquire its pattern as naturally as a snowflake was foreign to them ... they reacted to it as many Christian fundamentalists responded to Darwin: no design without a Designer!

Markets appear to conserve information and calculation by assigning decisions to actors who can make them on the basis of information that is available to them locally. von Hayek: "The most significant fact about this system is the economy of knowledge with which it operates, of how little the individual participants need to know in order to take the right action." To maintain actual Pareto optimality in markets would require information and computation that are exceedingly burdensome and unrealistic (from The New Palgrave: A Dictionary of Economics).

Nelson and Winter observe that in economic evolution, in contrast to biological evolution, successful algorithms (business practices) may be borrowed from one firm by another. The hypothesized system is Lamarckian, because "any new idea can be incorporated in operating procedures as soon as its success is observed." This is also fortunate, as corporations don't have sexual reproduction / crossover.

ref: Inzlicht-2009.03 tags: uncertainty religion conviction decision science date: 02-02-2010 20:39 gmt

The Neural Markers of Religious Conviction PMID-19291205

Recently a friend pointed this article out to me, and while I found the scientific results interesting though slightly questionable - that religious people have less anterior cingulate cortex activation upon error - the introduction and discussion were stimulating. What follows are a few quotes and my interpretation and implications of the authors' viewpoint.

"The absence of a cognitive map providing clear standards and goals is uncomfortable and leads people to search for and assert belief systems that quell their anxiety by allowing for clearer goal pursuit (McGregor, Zanna, Holmes, & Spencer, 2001)." I would argue that uncertainty itself is highly uncomfortable - whether it is uncertainty as to how much food you will have in the future, or uncertainty as to the best behavior. In this sense, of course religion decreases anxiety - it provides a structured way to think about this disordered and highly undecidable world, a filter to remove or explain away many of the random parts of our lives. In my personal experience, conviction is usually easier than trying to hold accurate probabilistic models in your mind - conviction is pleasurable, even if it is wrong.

I find their short review of cognitive science in the introduction interesting - they claim that the septo-hippocampal system is concerned with the detection and correction of errors associated with concrete behaviors and goals, while in humans (and other primates?) the ACC allows error- and feedback-based operations on concepts and higher-order goals. The need for a higher-level error detection circuit makes sense in humans, as we are able to bootstrap our behavior to very complicated limits, but it also begs the question - what trains the ACC? To some degree, it must train itself via the typical loopy, feedback-based brain way, but this only goes so far, as (at least in the modern world) the space of all possible behaviors, long-term and short-term, given stochastic feedback, is too large to be either decidable or fully parseable/generalizable into an accurate global model, even given a lifetime of experience. Religion, as this paper and many others posit, provides this global model against which behaviors and perceptions can be measured.

But why does an uncertainty challenge cause a compensatory increase in the strength of convictions, almost to the point of zealousness? (How is this adaptive? Just as a means of reducing anxiety?) I've seen it happen, but why? From a Bayesian point of view, increased uncertainty necessitates decreased certainty, or fewer convictions. From a pragmatic point of view, increased uncertainty requires increased convictions, purely because the convictions have to make up for the lack of environmental information from which to make a decision. Any theory must include the cost of not making a decision, the cost of delaying a decision, and the principle of sunk costs.

There are other solutions to the 'undecidable' problems of life than religion - literary culture and science come to mind. The principle behind all may be that, while individual experience and intellect are possibly insufficient for generating global rules to guide behavior, the condensed experience of thousands/millions/billions of people is. This assumes that experience, as a random variable/signal, scales according to the law of large numbers - noise decreases monotonically as sample size increases. This may not actually be true; it depends on the structure of the distributions, the extent to which people's decisions/behaviors are orthogonal, and the fidelity of the communication / aggregation channels which operate on the data. I think the dimensionality increase afforded by larger sample size is slower than the concomitant noise decrease, hence (valid) global rules guiding behavior can be extracted from large populations of people. Regarding the communication channels, it seems there were always high-fidelity channels of experience - e.g. Homer, Benjamin Franklin's transatlantic trips, the Royal Society of London (forgive my western POV) - and now there are even more (the internet)! The latter invention should, at least within the framework here, allow larger groups of people to make 'harder' or 'more undecidable' decisions by virtue of greater information. Fairly standard rhetoric to the internet crowd (cf. forums), I know.
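
The law-of-large-numbers claim above - that noise in aggregated experience falls as the pool of people grows - can be sketched numerically (my own illustration; the 1/sqrt(n) scaling assumes independent, identically distributed 'experiences', which is exactly the orthogonality caveat raised above):

```python
import random
import statistics

random.seed(0)

def aggregate_noise(n, trials=2000):
    """Std. dev. of the mean of n independent unit-variance 'experiences',
    estimated over many trials."""
    means = [statistics.fmean(random.gauss(0, 1) for _ in range(n))
             for _ in range(trials)]
    return statistics.stdev(means)

# Noise in the aggregate falls roughly as 1/sqrt(n):
for n in (1, 4, 16, 64):
    print(f"n={n:3d}: noise ~ {aggregate_noise(n):.3f}")
```

When experiences are correlated rather than independent, the noise floor stops falling, which is one concrete way the scaling argument can fail.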

I would argue that this is better than using convictions... but the result of communication / aggregation is convictions anyway, so eh. Getting back to the uncertainty issue, the authors point out that in conservative cultures there is usually greater uncertainty (which way does the arrow of causality point?), and that increasing uncertainty bolsters support for zealous action, e.g. war.

"For example, contemporary social psychological research indicates that uncertainty threats can cause people to become more extreme in their opinions, so that they exaggerate their religious convictions and become more willing to support a war to defend those convictions (McGregor, Haji, Nash, & Teper, 2008). In fact, even nonbelievers bolster their personal convictions to near-religious levels in order to reduce uncertainty-related distress (McGregor et al., 2001). Thus, in terms of feedback-loop models, the standards and predictions provided by religious convictions are strong enough that they can resist any discrepant feedback that might alert the comparator system."

This, I believe, is fairly accurate, and it implies several dramatic things: if a despot or leader wishes to engender support for a war, particularly a religious war, then he should make the lives of his constituents uncertain. If their lives are stable and certain sans ideology, then they will be less likely to have the convictions ('the other side is bad!') to fight certain wars. (It of course depends on who/what the other side is!) Take Europe vs. America as an example - America has far fewer social support systems and greater uncertainty in life than Europe. The Economist frequently praises American businesses' penchant for hiring and firing people quickly and seemingly at whim, as it encourages creative reuse, economic flexibility, and better allocation of capital, but it has a clear downside - increased anxiety and uncertainty. We (well, not me, but many Americans) deal with this via religion, the article would argue (that said, I should guess that there are a great many other reasons people are religious). Still, western Europe has less uncertainty in life, is more secular, and is less tolerant of ideological wars. Hence the antidote for war is to give people stable, significant lives. More common-sense rhetoric.

On to another suggestive point made by the article: "In terms of feedback-loop models, this explanation suggests that the standards and predictions provided by religion are inadequate and should, in fact, result in prediction errors; however, because religious beliefs are rigid, inconsistent information is reinterpreted in such a way that it becomes assimilated to preexisting convictions, further sustaining beliefs (Park, 2005)."

I would be interested in an actual test of this hypothesis - if it is possible without bias (perhaps another EEG study? Perhaps it has already been done?). The authors actually prove the opposite point: that religious people are more likely to answer correctly on the Stroop test. They take more time, but seem to be more careful. This reminds me of Matteo Ricci, who allegedly used his Jesuit training in sustained concentration and memorization to master the Chinese language; clearly religion is far more than just a means of reducing perceived uncertainty about the world.

To loop the argument back on its tail - this is the 'meta' blog, after all - one may question whether the theory (looking at behavior in terms of the unpleasantness of uncertainty and the need for decidability) is a good way of looking at things, just as we questioned whether religion is a good theory of the world. I think it generalizes; for example, Solaiman mentioned that the European children of the revolution of 1968 had parents who notably applied very little guidance to their lives; they were like the American hippies. These people grew up disliking their parents, and sought far more structure in their lives and in parenting their own children. One may imagine that they disliked the vast uncertainty their parents bluntly exposed them to, and the paucity of guiding principles - something that the parents, after years of living in the world, probably had. Secondly, Solaiman recalled that all his favorite teachers were those who were strictest, strongest in their conviction, and most structured in their pedagogy. People seek to make decisions decidable, whether through parents, teachers, religion, science, or even art and literature.

To summarize, uncertainty engenders convictions by the pragmatic principle. The best thing we can do is either reduce uncertainty or found those convictions on aggregate data.(*)

(*) Google publication. The principle of data is our zeitgeist, but history suggests that, independent of what we think now, it will not be the last.


ref: notes-0 tags: neuroscience ion channels information coding John Harris date: 01-07-2008 16:46 gmt

  • crazy idea: that neurons have a number of ion channel lines which can be selectively activated. That is, information is transmitted by longitudinal transmission channels which are selectively activated based on the message being transmitted.
  • has any evidence for such a fine structure been found? I think not, due to binding studies, but who knows..
  • dude uses historical references (Neumann) to back up his ideas. I find these sorts of justifications interesting, but not logically substantive. Do not talk about the opinions of old philosophers (exclusively, at least); talk about their data.
  • interesting story about holography & the holograph of Dennis Gabor.
    • he does make interesting analogies to neuroscience & the importance of preserving spatial phase.
  • fourier images -- neato.
conclusion: interesting, but a bit kooky.

ref: bookmark-0 tags: postmodernism pseudoscience Alan Sokal date: 04-23-2007 03:47 gmt


  • idea: postmodernism attacks science by claiming that all observations are inherently subjective and unsupported by evidence; pseudoscience attacks science by claiming subjective, unsupported assertions to be supported by the method and corpus of scientific thought. One robs science of credibility; the other attempts to profit from scientific credibility, due to its success in predicting physical phenomena.
    • Usually, knowledge of the field in question is enough to highlight pseudoscience.. and postmodernism (by his definition) can be spotted by its claims/lack thereof.
  • claims Judaism & Christianity contain sections of pseudoscience
  • (pope) John Paul II is a pseudoscientist?