m8ta
[0] Is this the bionic man? Nature 442:7099, 109 (2006 Jul 13)
[1] Hochberg LR, Serruya MD, Friehs GM, Mukand JA, Saleh M, Caplan AH, Branner A, Chen D, Penn RD, Donoghue JP, Neuronal ensemble control of prosthetic devices by a human with tetraplegia. Nature 442:7099, 164-71 (2006 Jul 13)
[2] Santhanam G, Ryu SI, Yu BM, Afshar A, Shenoy KV, A high-performance brain-computer interface. Nature 442:7099, 195-8 (2006 Jul 13)
[3] Shenoy KV, Meeker D, Cao S, Kureshi SA, Pesaran B, Buneo CA, Batista AP, Mitra PP, Burdick JW, Andersen RA, Neural prosthetic control signals from plan activity. Neuroreport 14:4, 591-6 (2003 Mar 24)

[0] Scott SH, Optimal feedback control and the neural basis of volitional motor control. Nat Rev Neurosci 5:7, 532-46 (2004 Jul)

[0] Cabel DW, Cisek P, Scott SH, Neural activity in primary motor cortex related to mechanical loads applied to the shoulder and elbow during a postural task. J Neurophysiol 86:4, 2102-8 (2001 Oct)

{1578}
ref: -0 tags: superposition semantic LLM anthropic scott alexander date: 11-29-2023 23:58 gmt revision:1 [0] [head]

God Help us, let's try to understand AI monosemanticity

Commentary: To some degree, superposition seems like a geometric "hack" invented in the course of optimization to squeeze a great many (largely mutually exclusive) sparse features into a limited number of neurons. GPT-3 has a latent dimension of only 96 * 128 = 12288, and with 96 layers that is only ~1.18 M neurons (*). A fruit fly has ~100k neurons (and can't speak). All communication between layers must pass through that 12288-dimensional vector, which is passed through LayerNorm many times (**), so naturally the network learns to take advantage of locally linear subspaces.
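A minimal numpy sketch of this geometric picture (the sizes are made up for illustration, not GPT-3's actual feature count): many mutually exclusive sparse features can share far fewer neurons via nearly orthogonal random directions, with only small interference at readout.

```python
import numpy as np

rng = np.random.default_rng(0)
n_features, d, k = 2000, 128, 4   # many features, few "neurons", k-sparse activity

# Each feature gets a random unit direction in the d-dimensional residual stream.
W = rng.standard_normal((n_features, d))
W /= np.linalg.norm(W, axis=1, keepdims=True)

# A k-sparse activation: only k features are "on".
active = rng.choice(n_features, size=k, replace=False)
x = W[active].sum(axis=0)          # superposed representation in d dims

# Linear readout: dot with every feature direction.
scores = W @ x
recovered = np.argsort(scores)[-k:]
print(sorted(active), sorted(recovered))                # usually identical
print("typical interference:", np.abs(scores).mean())   # << 1, dominated by inactive features
```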

That said, the primate visual system does seem to use superposition, though not via local subspaces; instead, neurons seem to encode multiple axes roughly linearly (e.g. global spaces: position and object class combined linearly). Those results are a few years old, though, and I suspect newer work may contest them. The face area, for example, seems to do a good job of disentanglement.

Treating everything as high-dimensional vectors is great for analogy-making, like the wife - husband + king = queen example. But using fixed-size vectors to represent relationships of arbitrary dimensionality inevitably leads to compression ~= superposition. Provided those subspaces are semantically meaningful, it all works out from a generalization standpoint -- but this is then equivalent to allocating an additional axis for said relationship or attribute. Additional axes would also put less decoding burden on downstream layers, and make optimization easier.
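A toy sketch of that last point (the three axes and four words are invented for illustration): if the relationship gets its own embedding axis, the analogy arithmetic is literally just adding the attribute offset.

```python
import numpy as np

# Hypothetical 3-axis embedding: [femaleness, royalty, humanness]
emb = {
    "husband": np.array([0.0, 0.0, 1.0]),
    "wife":    np.array([1.0, 0.0, 1.0]),
    "king":    np.array([0.0, 1.0, 1.0]),
    "queen":   np.array([1.0, 1.0, 1.0]),
}

v = emb["wife"] - emb["husband"] + emb["king"]   # add the "femaleness" offset to king

def cos(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# Nearest neighbour by cosine similarity.
best = max(emb, key=lambda w: cos(v, emb[w]))
print(best)   # "queen"
```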

Google has demonstrated allocation in transformers; it's also prevalent in the cortex. The trick is getting it to work!

(*) GPT-4 is unlikely to have more than an order of magnitude more 'neurons'; PaLM-540B has only 2.17 M. Given that GPT-4 is something like 3-4x larger, it should have 6-8 M neurons, which is still 3 orders of magnitude fewer than the human neocortex (never mind the cerebellum ;-)

(**) I'm of two minds on LayerNorm. PV (parvalbumin) interneurons might be seen as doing something like this, but it's all local -- you don't need everything to be vector rotations. (LayerNorm's mean subtraction effectively removes one degree of freedom, so really it's a 12287-dimensional vector; the variance normalization further constrains it to a sphere.)
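A quick numpy check of the mean-subtraction claim, using a plain LayerNorm with no learned scale or bias (an assumption; GPT-style models add a learned affine on top):

```python
import numpy as np

def layer_norm(x, eps=1e-5):
    """Plain LayerNorm over the last axis, no learned scale/bias."""
    mu = x.mean(axis=-1, keepdims=True)
    var = x.var(axis=-1, keepdims=True)
    return (x - mu) / np.sqrt(var + eps)

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 12288))    # 8 token vectors at GPT-3's residual width
y = layer_norm(x)

print(np.abs(y.sum(axis=-1)).max())    # ~0: outputs are orthogonal to the all-ones direction
print(np.linalg.norm(y, axis=-1))      # ~sqrt(12288): also constrained to a sphere
```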

Update: After reading https://transformer-circuits.pub/2023/monosemantic-features/index.html, I find the idea of local manifolds / local codes to be quite appealing: why not represent sparse yet conditional features using superposition?  This also expands the possibility of pseudo-hierarchical representation, which is great.

{957}
ref: -0 tags: Scott M1 motor control pathlets filter EMG date: 12-22-2011 22:52 gmt revision:1 [0] [head]

PMID-19923243 Complex Spatiotemporal Tuning in Human Upper-Limb Muscles

  • Original idea: M1 neurons encode 'pathlets', sophisticated high-level movement trajectories, possibly through the action of both the musculoskeletal system and spinal cord circuitry.
  • Showed that muscle pathlets can be extracted from EMG data, reliably and consistently across subjects, implying that M1 reflects 'filter-like' properties of the body rather than high-level representations (a minimal regression sketch of this idea follows below).
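A minimal sketch of what "extracting a pathlet from EMG" could look like; this assumes a simple regression of EMG onto a window of past and future hand velocities (my simplification, not the paper's exact pipeline), where the fitted weights trace out the muscle's preferred spatiotemporal trajectory.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 5000
lags = np.arange(-10, 11)    # velocity lags, e.g. -100 ms to +100 ms in 10 ms bins

# Synthetic 2D hand velocity and one EMG channel that is a linear filter of it plus noise.
vel = rng.standard_normal((T, 2))
true_pathlet = rng.standard_normal((len(lags), 2)) * np.hanning(len(lags))[:, None]
emg = sum(np.roll(vel, -lag, axis=0) @ w for lag, w in zip(lags, true_pathlet))
emg += 0.5 * rng.standard_normal(T)

# Design matrix of lagged velocities; least squares recovers the spatiotemporal filter.
X = np.hstack([np.roll(vel, -lag, axis=0) for lag in lags])    # (T, 2 * len(lags))
w_hat, *_ = np.linalg.lstsq(X, emg, rcond=None)
pathlet_hat = w_hat.reshape(len(lags), 2)

print(np.corrcoef(pathlet_hat.ravel(), true_pathlet.ravel())[0, 1])   # close to 1
```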

{952}
ref: -0 tags: todorov PV M1 controversy Scott date: 12-22-2011 22:22 gmt revision:1 [0] [head]

PMID-10725914 Population vectors and motor cortex: neural coding or epiphenomenon?

  • Basic friendly editorial of {950}.
  • On the PV method: "These correlations have been interpreted as suggesting that the motor cortex controls higher-level features of hand movements, rather than the lower-level features related to the individual joints and muscles that bring about those movements." (A toy population-vector computation is sketched after this list.)
  • Implies that conversion of the relatively abstract representation to the concrete muscle activations would be done by the spinal cord.
    • This in turn implies that the spinal cord is capable of organizational-level learning, or that gross properties are genetically / developmentally encoded.
  • On Schwartz's drawing experiments: [the variable latency between PV tuning and arm motion] was interpreted as evidence that motor cortex is involved in controlling movements with high curvature, but not relatively straight movements.
  • Nice: "Engineers have to understand the plant before they can figure out how to control it. Why should it be any different when examining biological control?"
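For reference, a toy version of the population-vector computation under discussion, using idealized cosine-tuned model neurons (an assumption of the textbook PV method, not data from this editorial):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200                                  # model M1 neurons
pref = rng.uniform(0, 2 * np.pi, n)      # preferred directions
move = np.deg2rad(70)                    # actual movement direction

# Cosine tuning: rate = baseline + depth * cos(movement - preferred) + noise
rates = 20 + 15 * np.cos(move - pref) + 3 * rng.standard_normal(n)

# Population vector: preferred-direction unit vectors weighted by baseline-subtracted rates.
weights = rates - rates.mean()
pv = np.array([np.sum(weights * np.cos(pref)), np.sum(weights * np.sin(pref))])
print(np.rad2deg(np.arctan2(pv[1], pv[0])))    # ~70 degrees
```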

{41}
ref: bookmark-2006.07 tags: BMI BCI EEG bibliography Stephan Scott date: 09-07-2008 19:54 gmt revision:2 [1] [0] [head]

http://www.cs.colostate.edu/eeg/links.html

____References____

{106}
ref: Scott-2004.07 tags: Scott motor control optimal feedback cortex reaching dynamics review date: 04-09-2007 22:40 gmt revision:1 [0] [head]

PMID-15208695[0] Optimal feedback control and the neural basis of volitional motor control, by Stephen H. Scott

____References____

{279}
ref: Cabel-2001.1 tags: Stephen Scott Kinarm motor control date: 04-04-2007 21:51 gmt revision:0 [head]

PMID-11600665[] Neural Activity in Primary Motor Cortex Related to Mechanical Loads Applied to the Shoulder and Elbow During a Postural Task

  • Experiment with the KINARM, with Stephen Scott.
  • Roughly equal numbers of neurons were responsive to mechanical loads on the shoulder, the elbow, and both.
  • Neural activity was also strongly influenced by the specific motor patterns used to perform a given task.

____References____