m8ta
{1472}
ref: -0 tags: computational neuroscience opinion tony zador konrad kording lillicrap date: 07-30-2019 21:04 gmt

Two papers out recently on arXiv and bioRxiv:

  • A critique of pure learning: what artificial neural networks can learn from animal brains
    • Animals learn rapidly and robustly, without the need for labeled sensory data, largely through innate mechanisms arrived at by evolution and encoded genetically.
    • Still, this cannot account for the full connectivity of the human brain, which is much too large for the genome to specify directly; instead, there are canonical circuits and patterns of intra-area connectivity which act as the 'innate' learning biases.
    • Mice and men are not so far apart evolutionarily. (I've heard this also from people doing FIB-SEM imaging of cortex.) Hence, understanding one should appreciably help us understand the other. (I agree with this sentiment, but for the fact that lab mice are dumb, and have pretty stereotyped behaviors.)
    • References Long short term memory and learning to learn in networks of spiking neurons -- which claims that a hybrid algorithm (BPTT with neuronal rewiring) with realistic neuronal dynamics markedly increases the computational power of spiking neural networks. (A toy sketch of BPTT through spiking units follows this list.)
  • What does it mean to understand a neural network?
    • In line with what many neuroscientists have probably intuited for a long time, posits that we have to investigate the developmental rules (wiring and connectivity, same as above) plus the local-ish learning rules (synaptic, dendritic, perhaps astrocytic).
      • The weights themselves, in either biological neural networks or in ANNs, are not at all informative! (Duh).
    • Emphasizes the concept of compressibility: how much information can be discarded without impacting performance? With some modern ANNs, 30-50x compression is possible. The authors here argue that little compression is possible in the human brain -- the wealth of all those details about the world is needed! In other words, no compact description is possible. (A toy pruning sketch also follows this list.)
    • Hence, you need to learn how the network learns those details, and how it's structured so that important things are learned rapidly and robustly, as seen in animals (very similar to above).
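
A minimal sketch of the BPTT-through-spikes idea referenced in the first paper's last bullet -- my own toy version, loosely in the spirit of the Bellec et al. LSNN work, and omitting their rewiring step entirely. A leaky integrate-and-fire (LIF) recurrent layer is unrolled in time and trained by backprop-through-time, with a boxcar surrogate gradient standing in for the non-differentiable spike. Sizes, task, and hyperparameters are made up for illustration.

    import torch
    import torch.nn as nn

    class SpikeFn(torch.autograd.Function):
        """Heaviside spike in the forward pass, boxcar surrogate gradient in the backward pass."""
        @staticmethod
        def forward(ctx, v):
            ctx.save_for_backward(v)
            return (v > 0).float()
        @staticmethod
        def backward(ctx, grad_out):
            (v,) = ctx.saved_tensors
            surrogate = (v.abs() < 0.5).float()   # crude stand-in for d(spike)/dv
            return grad_out * surrogate

    class LIFLayer(nn.Module):
        def __init__(self, n_in, n_rec, tau=20.0, dt=1.0):
            super().__init__()
            self.w_in = nn.Linear(n_in, n_rec, bias=False)
            self.w_rec = nn.Linear(n_rec, n_rec, bias=False)
            self.alpha = float(torch.exp(torch.tensor(-dt / tau)))   # membrane decay per step

        def forward(self, x):                      # x: (time, batch, n_in)
            T, B, _ = x.shape
            n_rec = self.w_rec.in_features
            v = torch.zeros(B, n_rec)              # membrane potentials
            s = torch.zeros(B, n_rec)              # spikes from the previous step
            spikes = []
            for t in range(T):
                v = self.alpha * v + self.w_in(x[t]) + self.w_rec(s) - s   # reset by subtraction
                s = SpikeFn.apply(v - 1.0)         # spike threshold at 1.0
                spikes.append(s)
            return torch.stack(spikes)             # (time, batch, n_rec)

    # toy usage: regress a target from mean firing rates; gradients flow back through the time loop (BPTT)
    lif, readout = LIFLayer(n_in=10, n_rec=50), nn.Linear(50, 1)
    opt = torch.optim.Adam(list(lif.parameters()) + list(readout.parameters()), lr=1e-3)
    x, target = torch.randn(100, 32, 10), torch.randn(32, 1)   # 100 time steps, batch of 32
    for step in range(200):
        loss = ((readout(lif(x).mean(dim=0)) - target) ** 2).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()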
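
The compressibility point can be poked at with plain magnitude pruning; the sketch below is my own toy version (architecture, data, and keep-fraction are arbitrary, not from the paper). Keeping ~3% of the weights corresponds to the ~33x compression regime mentioned above.

    import torch
    import torch.nn as nn

    def prune_by_magnitude(model, keep_fraction):
        """Zero every weight whose magnitude falls below the global (1 - keep_fraction) quantile."""
        with torch.no_grad():
            all_w = torch.cat([p.abs().flatten() for p in model.parameters()])
            threshold = torch.quantile(all_w, 1.0 - keep_fraction)
            for p in model.parameters():
                p *= (p.abs() >= threshold).float()

    # toy task: which half of the input vector has the larger sum?
    torch.manual_seed(0)
    x = torch.randn(2000, 20)
    y = (x[:, :10].sum(dim=1) > x[:, 10:].sum(dim=1)).long()

    model = nn.Sequential(nn.Linear(20, 256), nn.ReLU(), nn.Linear(256, 2))
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(500):
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()

    acc_full = (model(x).argmax(dim=1) == y).float().mean().item()
    prune_by_magnitude(model, keep_fraction=0.03)     # roughly 33x compression
    acc_pruned = (model(x).argmax(dim=1) == y).float().mean().item()
    print(f"accuracy with all weights: {acc_full:.3f}, with 3% of weights: {acc_pruned:.3f}")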

{896}
ref: Friston-2002.1 tags: neuroscience philosophy feedback top-down sensory integration inference date: 10-25-2011 23:24 gmt

PMID-12450490 Functional integration and inference in the brain

  • Extra-classical tuning: tuning is dependent on behavioral context (motor) or stimulus context (sensory). Author proposes that neuroimaging can be used to investigate it in humans.
  • "Information theory can, in principle, proceed using only forward connections. However, it turns out that this is only possible when processes generating sensory inputs are invertible and independent. Invertibility is precluded when the cause of a percept and the context in which it is engendered interact." -- proof? citations? Makes sense though.
  • Argues for a rather simplistic proof of backward connections via neuroimaging.
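
A toy way to see the invertibility point (my own example, not from the paper): if an input is generated multiplicatively by a cause and a context, the forward mapping is many-to-one and cannot be inverted from the input alone; recovering the cause needs an estimate of (or prior over) the context, which is what backward / contextual connections could supply.

    u = v_1 \, v_2 \quad (\text{cause} \times \text{context})
    (v_1, v_2) \mapsto u \text{ is many-to-one, so there is no inverse } u \mapsto v_1;
    \text{instead } p(v_1 \mid u) \propto \int p(u \mid v_1, v_2)\, p(v_1, v_2)\, dv_2 .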

{530}
ref: notes-0 tags: neuroscience ion channels information coding John Harris date: 01-07-2008 16:46 gmt

  • crazy idea: that neurons have a number of ion channel lines which can be selectively activated. That is, information is transmitted by longitudinal transmission channels which are selectively activated based on the message being transmitted.
  • has any evidence for such a fine structure been found?? I think not, due to binding studies, but who knows..
  • dude uses historical references (Neumann) to back up his ideas. I find these sorts of justifications interesting, but not logically substantive. Do not talk about the opinions of old philosophers (exclusively, at least), talk about their data.
  • interesting story about holography & the hologram of Dennis Gabor.
    • he does make interesting analogies to neuroscience & the importance of preserving spatial phase.
  • Fourier images -- neato. (A quick numerical sketch of the phase-swapping point is at the end of this entry.)
conclusion: interesting, but a bit kooky.
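
A quick numerical illustration of the "spatial phase matters" point above (mine, not from the talk): swap the Fourier phases of two images and each hybrid looks like the image that donated its phase, not the one that donated its amplitude spectrum. The arrays below are random placeholders; substitute real images to see the classic effect.

    import numpy as np

    rng = np.random.default_rng(0)
    img_a = rng.random((64, 64))        # stand-ins for two real images
    img_b = rng.random((64, 64))

    fa, fb = np.fft.fft2(img_a), np.fft.fft2(img_b)
    # amplitude of A combined with phase of B, and vice versa
    hybrid_ab = np.real(np.fft.ifft2(np.abs(fa) * np.exp(1j * np.angle(fb))))
    hybrid_ba = np.real(np.fft.ifft2(np.abs(fb) * np.exp(1j * np.angle(fa))))

    # with natural images the hybrid correlates far better with its phase donor than its amplitude donor
    corr = lambda u, v: np.corrcoef(u.ravel(), v.ravel())[0, 1]
    print("hybrid(amp=A, phase=B) vs B:", corr(hybrid_ab, img_b))
    print("hybrid(amp=A, phase=B) vs A:", corr(hybrid_ab, img_a))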