m8ta
{1384}
hide / edit[0] / print
ref: -0 tags: NET probes SU-8 microfabrication sewing machine carbon fiber electrode insertion mice histology 2p date: 03-01-2017 23:20 gmt revision:0 [head]

Ultraflexible nanoelectronic probes form reliable, glial scar–free neural integration

  • SU-8 asymptotic H2O absorption is 3.3% in PBS -- quite a bit higher than I expected, and higher than PI.
  • Faced yield problems with contact litho at 2-3um trace/space.
  • Good recordings out to 4 months!
  • 3 minutes / probe insertion.
  • Fab:
    • Ni release layer, SU-8 2000.5. "Excellent tensile strength" --
      • Tensile strength 60 MPa
      • Young's modulus 2.0 GPa
      • Elongation at break 6.5%
      • Water absorption, per spec sheet, 0.65% (but not PBS)
    • 500nm dielectric; < 1% crosstalk; see figure S12.
    • Pt or Au rec sites, 10um x 20um or 30 x 30um.
    • FFC connector, with Si substrate remaining.
  • Used transgenic mice, YFP expressed in neurons.
  • CA glue used before metabond, followed by Kwik-sil silicone.
  • Neuron yield not so great -- they need to plate the electrodes down to acceptable impedance. (figure S5)
    • Measured impedance ~1M at 1kHz.
  • Unclear if 50um x 1um is really that much worse than 10um x 1.5um.
  • Histology looks really great (figure S10).
  • Manuscript did not mention (though they did at the poster) problems with electrode pull-out; they deal with it in the same way, by application of ACSF.

{1354}
hide / edit[1] / print
ref: -0 tags: David Kleinfeld penetrating arterioles perfusion cortex vasculature date: 10-17-2016 23:24 gmt revision:1 [0] [head]

PMID-17190804 Penetrating arterioles are a bottleneck in the perfusion of neocortex.

  • Focal photothrombosis was used to occlude single penetrating arterioles in rat parietal cortex, and the resultant changes in flow of red blood cells were measured with two-photon laser-scanning microscopy in individual subsurface microvessels that surround the occlusion.
  • We observed that the average flow of red blood cells nearly stalls adjacent to the occlusion and remains within 30% of its baseline value in vessels as far as 10 branch points downstream from the occlusion.
  • Preservation of average flow emerges 350 um away; this length scale is consistent with the spatial distribution of penetrating arterioles.
  • Rose bengal photosensitizer.
  • 2p laser scanning microscopy.
  • Downstream and connected arterioles show a dramatic reduction in blood flow, even 1-4 branches in; there is little redundancy (figure 2).
  • Measured a good number of vessels (and look at their density!); results are satisfactorily quantitative.
  • Vessel leakiness extends up to 1.1mm away (!) (figure 5).

{1348}
hide / edit[1] / print
ref: -0 tags: David Kleinfeld cortical vasculature laser surgery network occlusion flow date: 09-23-2016 06:35 gmt revision:1 [0] [head]

Heller Lecture - Prof. David Kleinfeld

    • Also mentions the use of LIBS + q-switched laser for precisely drilling holes in the skull. Seems to work!
    • Use 20ns delay .. seems like there is still spectral broadening.
    • "Turn neuroscience into an industrial process, not an art form" After doing many surgeries, agreed!
  • Vasodilation & vasoconstriction are very highly regulated; there is not enough blood to go around.
    • Vessels distant from an energetic / stimulated site will (net) constrict.
  • Vascular network is almost entirely closed-loop, and not tree-like at all -- you can occlude one artery, or one capillary, and the network will route around the occlusion.
    • The density of the angio-architecture in the brain is unique in this.
  • Tested micro-occlusions by injecting rose bengal, which releases free radicals on light exposure (532nm, 0.5mw), causing coagulation.
  • "Blood flow on the surface arteriole network is insensitive to single occlusions"
  • Penetrating arterioles and venules are largely stubs -- single unbranching vessels, which again renders some immunity to blockage.
  • However! Occlusion of a penetrating arteriole retards flow within a 400 - 600um cylinder (larger than a cortical column!)
  • Occlusion of many penetrating vessels, unsurprisingly, leads to large swaths of dead cortex, "UBOs" in MRI parlance (unidentified bright objects).
  • Death and depolarizing depression can be effectively prevented by excitotoxicity inhibitors -- MK801 in the slides (NMDA blocker, systemically)

{711}
hide / edit[8] / print
ref: Gradinaru-2009.04 tags: Deisseroth DBS STN optical stimulation 6-OHDA optogenetics date: 05-10-2016 23:48 gmt revision:8 [7] [6] [5] [4] [3] [2] [head]

PMID-19299587[0] Optical Deconstruction of Parkinsonian Neural Circuitry.

  • Viviana Gradinaru, Murtaza Mogri, Kimberly R. Thompson, Jaimie M. Henderson, Karl Deisseroth
  • DA depletion of the SN leads to abnormal activity in the BG; HFS (>90Hz) of the STN has been found to be therapeutic, but the mechanism is imperfectly understood.
    • lesions of the BG can also be therapeutic.
  • Used channelrhodopsin (a light-activated cation channel), expressed under cell-type-specific promoters (transgenic animals). Also used halorhodopsins, which are light-activated chloride pumps (inhibition).
    • optogenetics allows simultaneous optical stimulation and electrical recording without artifact.
  • Made PD rats by 6-hydroxydopamine unilaterally into the medial forebrain bundle of rats.
  • Then they injected eNpHR (inhibitory) opsin vector targeting excitatory neurons (under control of the CaMKIIa promoter) to the STN as identified stereotaxically & by firing pattern.
    • Electrical stimulation of this area alleviated rotational behavior (they were hemiparkinsonian rats), but optical inhibition of the STN did not.
  • Alternately, the glia in STN may be secreting molecules that modulate local circuit activity; it has been shown that glial-derived factor adenosine accumulates during DBS & seems to help with attenuation of tremor.
    • Tested this by activating glia with ChR2, which can pass small Ca2+ currents.
    • This worked: blue light halted firing in the STN; but, again, no behavioral trace of the silencing was found.
  • PD is characterized by pathological levels of beta oscillations in the BG; synchronizing the STN with the BG at gamma frequencies may ameliorate PD symptoms, while synchronization at beta frequencies worsens them -- see [1][2].
  • Therefore, they tried excitatory optical stimulation of excitatory STN neurons at the high frequencies used in DBS (90-130Hz).
    • HFS to STN failed, again, to produce any therapeutic effect!
  • Next expressed channelrhodopsin only in projection neurons (Thy1::ChR2, not excitatory cells in the STN), and again did optotrode (optical stim, electrical record) recordings.
    • HFS of afferent fibers to STN shut down most of the local circuitry there, with some residual low-amplitude high frequency burstiness.
    • Observed marked effects with this treatment! Afferent HFS alleviated Parkinsonian symptoms, profoundly, with immediate reversal once the laser was turned off.
    • LFS worsened PD symptoms, in accord with electrical stimulation.
    • The Thy1::ChR2 transgene only labeled excitatory projections; GABAergic projections from GPe were absent, and dopaminergic projections from SNc were not labeled either. However, M1 layer V projection neurons were strongly labeled.
      • M1 layer V neurons could be antidromically recruited by optical stimulation in the STN.
  • Selective M1 layer V HFS also alleviated PD symptoms; LFS had no effect; M2 (PMd/PMv?) LFS causes motor behavior.
  • Remind us that DBS can treat tremor, rigidity, and bradykinesia, but is ineffective at treating speech impairment, depression, and dementia.
  • Suggest that axon tract modulation could be a common theme in DBS (all the different types..), as activity in white matter represents the activity of larger regions compactly.
  • The result that the excitatory fibers of projections, mainly from the motor cortex, matter most in producing therapeutic effects of DBS is counterintuitive but important.
    • What do these neurons do normally, anyway? give a 'copy' of an action plan to the STN? What is their role in M1 / the BG? They should test with normal mice.

____References____

[0] Gradinaru V, Mogri M, Thompson KR, Henderson JM, Deisseroth K, Optical Deconstruction of Parkinsonian Neural Circuitry.Science (2009 Mar 19)
[1] Eusebio A, Brown P, Synchronisation in the beta frequency-band - The bad boy of parkinsonism or an innocent bystander?Exp Neurol (2009 Feb 20)
[2] Wingeier B, Tcheng T, Koop MM, Hill BC, Heit G, Bronte-Stewart HM, Intra-operative STN DBS attenuates the prominent beta rhythm in the STN in Parkinson's disease.Exp Neurol 197:1, 244-51 (2006 Jan)

{1334}
hide / edit[0] / print
ref: -0 tags: micro LEDS Buzaki silicon neural probes optogenetics date: 04-18-2016 18:00 gmt revision:0 [head]

PMID-26627311 Monolithically Integrated μLEDs on Silicon Neural Probes for High-Resolution Optogenetic Studies in Behaving Animals.

  • 12 uLEDs and 32 rec sites integrated into one probe.
  • InGaN monolithically integrated LEDs.
    • Si has ~ 5x higher thermal conductivity than sapphire, allowing better heat dissipation.
    • Use quantum-well epitaxial layers, 460nm emission, 5nm Ni / 5nm Au current injection w/ 75% transmittance @ design wavelength.
      • Think the n/p GaN epitaxy is done by an outside company, NOVAGAN.
    • Efficiency near 80% -- small LEDs have fewer defects!
    • SiO2 + ALD Al2O3 passivation.
    • 70um wide, 30um thick shanks.

{1287}
hide / edit[0] / print
ref: -0 tags: maleimide azobenzine glutamate photoswitch optogenetics date: 06-16-2014 21:19 gmt revision:0 [head]

PMID-16408092 Allosteric control of an ionotropic glutamate receptor with an optical switch

  • 2006
  • Use an azobenzene (two benzene rings linked by an N=N double bond) as a photo-switchable allosteric activator that reversibly presents glutamate to an ion channel.
  • PMID-17521567 Remote control of neuronal activity with a light-gated glutamate receptor.
    • The molecule, in use.
  • Likely the molecule is harder to produce than channelrhodopsin or halorhodopsin, hence less used. Still, it's quite a technology.

{1283}
hide / edit[0] / print
ref: -0 tags: optogenetics glutamate azobenzine date: 05-07-2014 19:51 gmt revision:0 [head]

PMID-17521567 Remote control of neuronal activity with a light-gated glutamate receptor.

  • Neuron 2007.
  • Azobenzenes undergo a cis-to-trans conformational change via illumination with one wavelength, and trans-to-cis via another. (neat!!)
  • This was used to create photo-controlled (on and off) glutamate channels.

{1269}
hide / edit[0] / print
ref: -0 tags: hinton convolutional deep networks image recognition 2012 date: 01-11-2014 20:14 gmt revision:0 [head]

ImageNet Classification with Deep Convolutional Neural Networks

{1257}
hide / edit[3] / print
ref: -0 tags: Anna Roe optogenetics artificial dura monkeys intrinisic imaging date: 09-30-2013 19:08 gmt revision:3 [2] [1] [0] [head]

PMID-23761700 Optogenetics through windows on the brain in nonhuman primates

  • technique paper.
  • The artificial dura was placed over the visual cortex.
  • Injected virus through the artificial dura -- micropipette, not CVD.
  • Strong expression.
  • See also: PMID-19409264 (Boyden, 2009)

{1255}
hide / edit[0] / print
ref: -0 tags: Disseroth Kreitzer parkinsons optogenetics D1 D2 6OHDA date: 09-30-2013 18:15 gmt revision:0 [head]

PMID-20613723 Regulation of parkinsonian motor behaviors by optogenetic control of basal ganglia circuitry

  • Kravitz AV, Freeze BS, Parker PR, Kay K, Thwin MT, Deisseroth K, Kreitzer AC.
  • Generated mouse lines with channelrhodopsin2, with Cre recombinase under control of regulatory elements for the dopamine D1 (direct) or D2 (indirect) receptor.
  • Optogenetic excitation of the indirect pathway elicited a parkinsonian state: increased freezing, bradykinesia, and decreased locomotor initiations.
  • Activation of the direct pathway decreased freezing and increased locomotion.
  • Then: 6OHDA depletion of striatal dopamine.
  • Optogenetic activation of direct pathway (D1 Cre/loxp) neurons restored behavior to pre-lesion levels.
    • Hence, this seems like a good target for therapy.

{1236}
hide / edit[8] / print
ref: -0 tags: optogenetics micro LED flexible electrodes date: 06-27-2013 19:31 gmt revision:8 [7] [6] [5] [4] [3] [2] [head]

PMID-23580530 Injectable, cellular-scale optoelectronics with applications for wireless optogenetics.

  • Supplementary materials
  • 21 authors, University of Illinois at Urbana-Champaign, Tufts, China, Northwestern, Miami ..
  • GaN blue and green LEDs fabricated on a flexible substrate with stiff inserter.
    • Inserter is released in 15 min via dissolving silk fibroin.
    • 250um thick SU-8 epoxy, reverse photocured on a glass slide.
  • GaN LEDS fabricated on a sapphire substrate & transfer printed via modified Karl-Suss mask aligner.
    • See supplemental materials for the intricate steps.
    • LEDs are 50um x 50um x 6.75um
  • Have integrated:
    • Temperature sensor (Pt serpentine resistor) / heater.
    • inorganic photodetector (IPD)
      • ultrathin silicon photodiode 1.25um thick, 200 x 200um^2, made on a SOI wafer
    • Pt extracellular recording electrode.
        • This is insulated via a further 2um of SU-8.
  • Layers are precisely aligned and assembled via 500nm layer of epoxy.
    • Layers made of 6um or 2.5um thick mylar (a polyester -- polyethylene terephthalate (PET))
    • Layers joined with SU-8 2.
    • Wiring patterned via lift-off.
  • Powered via RF scavenging at 910 MHz.
    • appeared to be simple, power in = light out; no data connection.
  • Tested vs control and fiber optic stimulation, staining for:
    • Tyrosine hydroxylase (makes l-DOPA)
    • c-fos, a neural activity marker
    • u-LEDs show significant activation.
  • Also tested for GFAP (astrocytes) and Iba1 (activated microglia); flexible & smaller devices had lower gliosis.
  • Next tested for behavior using a self-stimulation protocol; mice learned to self-stimulate to release DA.
  • Devices are somewhat reliable to 250 days!

{1177}
hide / edit[2] / print
ref: -0 tags: magnetic flexible insertion japan neural recording electrodes date: 01-28-2013 03:54 gmt revision:2 [1] [0] [head]

IEEE-1196780 (pdf) 3D flexible multichannel neural probe array

  • Shoji Takeuchi, Takafumi Suzuki, Kunihiko Mabuchi and Hiroyuki Fujita
  • wild -- they use a magnetic field to make the electrodes stand up!
  • Electrodes released with DRIE, as with Michigan probes.
  • As with many other electrodes, pretty high electrical impedance - 1.5M @ 1kHz.
    • 20x20um recording sites on 10um parylene.
  • Could push these into a rat and record extracellular APs, but nothing quantitative, no histology either.
  • Used a PEG coating to make them stiff enough to insert into the ctx (phantom in IEEE conference proceedings.)

{1214}
hide / edit[0] / print
ref: -0 tags: brain micromotion magnetic resonance imaging date: 01-28-2013 01:38 gmt revision:0 [head]

PMID-7972766 Brain and cerebrospinal fluid motion: real-time quantification with M-mode MR imaging.

  • Measured brain motion via a clever MR protocol. (beyond my present understanding...)
  • ventricles move at up to 1mm/sec
  • In the Valsalva maneuver the brainstem can move 2-3mm.
  • Coughing causes upswing of the CSF.

{54}
hide / edit[1] / print
ref: bookmark-0 tags: intrinsic evolution FPGA GPU optimization algorithm genetic date: 01-27-2013 22:27 gmt revision:1 [0] [head]


  • http://evolutioninmaterio.com/ - using FPGAs in intrinsic evolution, e.g. the device is actually programmed and tested.
  • Adrian Thompson's homepage. There are many PDFs of his work there.
  • Parallel genetic algorithms on programmable graphics hardware
    • basically deals with optimizing mutation and fitness evaluation using the parallel architecture of a GPU: larger populations can be evaluated at one time.
    • does not concern the intrinsic evolution of algorithms to the GPU, as in Adrian Thompson's work.
    • uses a linear congruential generator to produce random numbers (minimal sketch at the end of this entry).
    • used a really simple problem: the Colville minimization problem, which needs to search only a four-dimensional space.
  • Cellular genetic algorithms and local search for the 3-SAT problem on graphics hardware
    • concerning SAT (satisfiability): "many practical problems, such as graph coloring, job-shop scheduling, and real-world scheduling can be represented as a SAT problem."
    • 3-SAT refers to the length of the clauses; length 3 is apparently very hard.
    • they use a combination of greedy search (flip the bit that increases the fitness the largest amount) and random-walk via point mutations to keep the algorithm away from local minima.
    • also use a cellular genetic algorithm (which works better on a GPU): select the optimal neighbor, not the global optimal individual.
    • only used a GeForce 6200 GPU, but it was still 5x faster than a P4 2.4GHz.
  • Evolution of a robot controller using cartesian genetic programming
    • cartesian programming has many advantages over traditional tree based methods - e.g. bloat-free evolution & faster evolution through neutral search.
    • cartesian programming is characterized by its encoding of a graph as a string of integers that represent the functions and connections between graph nodes, and program inputs and outputs.
      • this encoding was developed in the course of evolving electronic circuits, e.g. above ?
      • can encode a non-connected graph. the genetic material that is not utilized is analogous to biological junk DNA.
    • even in converged populations, small mutations can produce large changes in phenotypic behavior.
    • in this work he only uses directed graphs - there are no cycles & an organized flow of information.
    • mentions automatically defined functions - what is this??
    • used diffusion to define the fitness values of particular locations in the map. the fewer particles there eventually were in a grid location, the higher the fitness value of the robot that managed to get there.
  • Hardware evolution: on the nature of artificially evolved circuits - doctoral dissertation.
    • because evolved circuits utilize the parasitic properties of devices, they have little tolerance for variation in component values. Reverse engineering the evolved circuits to improve their tolerance is not easy.
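Re the linear congruential generator mentioned in the GPU paper above: it's attractive on graphics hardware because each stream needs just one multiply-add per draw. A minimal version (constants from Numerical Recipes; the paper's own constants may differ):

  def lcg(seed, a=1664525, c=1013904223, m=2**32):
      # x_{n+1} = (a * x_n + c) mod m -- one multiply-add per random number.
      x = seed
      while True:
          x = (a * x + c) % m
          yield x / m    # uniform float in [0, 1)

  rand = lcg(42)
  print([round(next(rand), 3) for _ in range(5)])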

{913}
hide / edit[6] / print
ref: Ganguly-2011.05 tags: Carmena 2011 reversible cortical networks learning indirect BMI date: 01-23-2013 18:54 gmt revision:6 [5] [4] [3] [2] [1] [0] [head]

PMID-21499255[0] Reversible large-scale modification of cortical networks during neuroprosthetic control.

  • Split the group of recorded motor neurons into direct (decoded and controls the BMI) and indirect (passive) neurons.
  • Both groups showed changes in neuronal tuning / PD.
    • More PD. Is there no better metric?
  • Monkeys performed manual control before (MC1) and after (MC2) BMI training.
    • The majority of neurons reverted back to original tuning after BC; c.f. [1]
  • Monkeys were trained to rapidly switch between manual and brain control; still showed substantial changes in PD.
  • 'Near' (on same electrode as direct neurons) and 'far' neurons (different electrode) showed similar changes in PD.
    • Modulation Depth in indirect neurons was less in BC than manual control.
  • Prove (pretty well) that motor cortex neuronal spiking can be dissociated from movement.
  • Indirect neurons showed decreased modulation depth (MD) -> perhaps this is to decrease interference with direct neurons.
  • Quote "Studies of operant conditioning of single neurons found that conconditioned adjacent neurons were largely correlated with the conditioned neurons".
    • Well, also: Fetz and Baker showed that you can condition neurons recorded on the same electrode to covary or inversely vary.
  • Contrast with studies of motor learning in different force fields, where there is a dramatic memory trace.
    • Possibly this is from proprioception activating the cerebellum?

Other notes:

  • Scale bars on the waveforms are incorrect for figure 1.
  • Same monkeys as [2]

____References____

[0] Ganguly K, Dimitrov DF, Wallis JD, Carmena JM, Reversible large-scale modification of cortical networks during neuroprosthetic control.Nat Neurosci 14:5, 662-7 (2011 May)
[1] Gandolfo F, Li C, Benda BJ, Schioppa CP, Bizzi E, Cortical correlates of learning in monkeys adapting to a new dynamical environment.Proc Natl Acad Sci U S A 97:5, 2259-63 (2000 Feb 29)
[2] Ganguly K, Carmena JM, Emergence of a stable cortical map for neuroprosthetic control.PLoS Biol 7:7, e1000153 (2009 Jul)

{1058}
hide / edit[3] / print
ref: -0 tags: Purdue magnetic bullet electrode implantation date: 01-04-2013 00:51 gmt revision:3 [2] [1] [0] [head]

PMID-19596378 Magnetic insertion system for flexible electrode implantation.

  • Probes constructed from a sharp magnetic tip attached to a flexible tether.
  • Cite Polikov et al 2005. {781}.
  • Re micromotion: (Gilletti and Muthuswamy, 2006 {1102}; Lee et al., 2004; Subbaroyan et al., 2005 {1103}).
  • 0.6 mm (600 um!) diameter steel bullet, 4mm long, on the end of 38 gauge magnet wire. Mass 7.2 +- 0.4 mg.
  • Peak current 520 A from an 800V, 900uF capacitor which produces a maximum force of 10 N on the electrode, driving it at 126.25 m/s (energy sanity check at the end of this entry).
  • Did manage to get neural data.
  • Experimental evidence suggests that macrophages have difficulty adhering to and spreading on polymer fibers ranging between 2.1 and 5.9 um in diameter. PMID-8902241 Bernatchez et al. 1996 and {746}.
  • Shot through the dura.
  • Also reference magnetic stereotaxis for use in manipulating magnetic 'seeds' through cancers for hyperthermic destruction.
  • See also their 2011 AES abstract
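Quick consistency check on those numbers (mine): the capacitor bank stores far more energy than the bullet carries, so nearly all of it must go to coil heating and field losses.

  C, V = 900e-6, 800.0          # capacitor: 900 uF at 800 V
  m, v = 7.2e-6, 126.25         # electrode: 7.2 mg at 126.25 m/s
  E_cap = 0.5 * C * V**2        # 288 J stored
  E_kin = 0.5 * m * v**2        # ~0.057 J delivered to the electrode
  print(E_cap, E_kin, E_kin / E_cap)   # coupling efficiency ~2e-4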

{1183}
hide / edit[0] / print
ref: -0 tags: optical imaging neural recording diamond magnetic date: 01-02-2013 03:44 gmt revision:0 [head]

PMID-22574249 High spatial and temporal resolution wide-field imaging of neuron activity using quantum NV-diamond.

  • yikes: In this work we consider a fundamentally new form of wide-field imaging for neuronal networks based on the nanoscale magnetic field sensing properties of optically active spins in a diamond substrate.
  • Cultured neurons.
  • NV = nitrogen-vacancy defect centers.
    • "The NV centre is a remarkable optical defect in diamond which allows discrimination of its magnetic sublevels through its fluorescence under illumination. "
    • We show that the NV detection system is able to non-invasively capture the transmembrane potential activity in a series of near real-time images, with spatial resolution at the level of the individual neural compartments.
  • Did not actually perform neural measurements -- used a 10um microwire with mA of current running through it.
    • I would imagine that actual neurons have far less current!
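Rough sanity check on that objection (my numbers): the field of a long straight wire is B = mu0*I/(2*pi*r), so their test wire at mA currents produces a field many orders of magnitude larger than the ~nA-scale axial currents of a real neuron would at the same standoff.

  import numpy as np

  mu0 = 4e-7 * np.pi

  def B_wire(I, r):
      # Field (Tesla) at distance r (m) from a long straight wire carrying I (A).
      return mu0 * I / (2 * np.pi * r)

  print(B_wire(1e-3, 10e-6))   # their test wire: 1 mA at 10 um -> 2e-5 T (20 uT)
  print(B_wire(1e-9, 10e-6))   # ~nA-scale neuronal current   -> 2e-11 T (20 pT)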

{1174}
hide / edit[0] / print
ref: -0 tags: Hinton google tech talk dropout deep neural networks Boltzmann date: 11-09-2012 18:01 gmt revision:0 [head]

http://www.youtube.com/watch?v=DleXA5ADG78

  • Hinton believes in the power of crowds -- he thinks that the brain fits many, many different models to the data, then selects afterward.
    • Random forests, as used in Predator, are an example of this: they average many simple-to-fit, simple-to-run decision trees. (This is apparently what Kinect does.)
  • Talk focuses on dropout, a clever new form of model averaging where only half of the units in the hidden layers are trained for a given example (minimal numpy sketch at the end of this entry).
    • He is inspired by biological evolution, where sexual reproduction often spontaneously adds or removes genes, hence individual genes or small linked genes must be self-sufficient. This equates to a 'rugged individualism' of units.
    • Likewise, dropout forces neurons to be robust to the loss of co-workers.
    • This is also great for parallelization: each unit or sub-network can be trained independently, on its own core, with little need for communication! Later, the units can be combined via genetic algorithms then re-trained.
  • Hinton then observes that sending a real value p (output of logistic function) with probability 0.5 is the same as sending 0.5 with probability p. Hence, it makes sense to try pure binary neurons, like biological neurons in the brain.
    • Indeed, if you replace the backpropagation with single bit propagation, the resulting neural network is trained more slowly and needs to be bigger, but it generalizes better.
    • Neurons (allegedly) do something very similar to this by poisson spiking. Hinton claims this is the right thing to do (rather than sending real numbers via precise spike timing) if you want to robustly fit models to data.
      • Sending stochastic spikes is a very good way to average over the large number of models fit to incoming data.
      • Yes but this really explains little in neuroscience...
  • Paper referred to in intro: Livnat, Papadimitriou and Feldman, PMID-19073912 and later by the same authors PMID-20080594
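The core trick in two functions (my numpy sketch, not Hinton's code; I use a ReLU layer for brevity):

  import numpy as np

  rng = np.random.default_rng(0)

  def hidden_train(x, W):
      h = np.maximum(0.0, x @ W)            # one hidden layer
      mask = rng.random(h.shape) < 0.5      # keep each unit with probability 0.5
      return h * mask                       # dropped units see no forward/backward signal

  def hidden_test(x, W):
      # test time: keep all units, halve the activations -- approximately
      # averaging over the 2^N thinned networks trained above.
      return np.maximum(0.0, x @ W) * 0.5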

{1125}
hide / edit[0] / print
ref: -0 tags: active filter design Netherlands Gerrit Groenewold date: 02-17-2012 20:27 gmt revision:0 [head]

IEEE-04268406 (pdf) Noise and Group Delay in Active Filters

  • relevant conclusion: the output noise spectrum is exactly proportional to the group delay.
  • Poschenrieder established a relationship between group delay and energy stored in a passive filter.
  • Fettweis proved from this that the noise generation of an active filter which is based on a passive filter is approximately proportional to the group delay. (!!!)

{425}
hide / edit[3] / print
ref: bookmark-2007.08 tags: donoghue cyberkinetics BMI braingate date: 01-06-2012 03:09 gmt revision:3 [2] [1] [0] [head]

images/425_1.pdf August 2007

  • provides more extensive details on the braingate system.
  • including their automatic impedance tester (5mV, 10pA)
  • and the automatic spike sorter.
  • the different tests that were required, such as accelerated aging in 50-70 deg C saline baths
  • the long path to market - $30 - $40 million more (of course, they have since abandoned the product).

{1007}
hide / edit[1] / print
ref: Dethier-2011.28 tags: BMI decoder spiking neural network Kalman date: 01-06-2012 00:20 gmt revision:1 [0] [head]

IEEE-5910570 (pdf) Spiking neural network decoder for brain-machine interfaces

  • Gold standard: Kalman filter (a minimal decode-step sketch follows this list).
  • Spiking neural network got within 1% of this standard.
  • The 'neuromorphic' approach.
  • Used Nengo, a freely available neural simulator.
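The Kalman decode step they benchmark against is short enough to spell out (a generic position/velocity decoder sketch under a linear tuning model y = Cx + noise; this is the standard formulation, not necessarily the paper's exact parameterization):

  import numpy as np

  # x: kinematic state (e.g. position & velocity), P: state covariance,
  # y: binned firing rates. A, W: state transition and its noise covariance;
  # C, Q: linear tuning matrix and its noise covariance (all fit on training data).
  def kalman_step(x, P, y, A, W, C, Q):
      x_pred = A @ x                         # a priori state estimate
      P_pred = A @ P @ A.T + W
      K = P_pred @ C.T @ np.linalg.inv(C @ P_pred @ C.T + Q)   # Kalman gain
      x_new = x_pred + K @ (y - C @ x_pred)  # innovation update
      P_new = (np.eye(len(x)) - K @ C) @ P_pred
      return x_new, P_new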

____References____

Dethier J, Gilja V, Nuyujukian P, Elassaad SA, Shenoy KV, Boahen K, Spiking neural network decoder for brain-machine interfaces. 5th International IEEE/EMBS Conference on Neural Engineering (NER), 396-399 (2011)

{998}
hide / edit[0] / print
ref: -0 tags: bookmark Cory Doctorow EFF SOPA internet freedom date: 01-01-2012 21:51 gmt revision:0 [head]

The Coming War on General Computation "M.P.s and Congressmen and so on are elected to represent districts and people, not disciplines and issues. We don't have a Member of Parliament for biochemistry, and we don't have a Senator from the great state of urban planning, and we don't have an M.E.P. from child welfare. "

{993}
hide / edit[2] / print
ref: Sanchez-2005.06 tags: BMI Sanchez Nicolelis Wessberg recurrent neural network date: 01-01-2012 18:28 gmt revision:2 [1] [0] [head]

IEEE-1439548 (pdf) Interpreting spatial and temporal neural activity through a recurrent neural network brain-machine interface

  • Putting it here for the record.
  • Note they did a sensitivity analysis (via chain rule) of the recurrent neural network used for BMI predictions.
  • Used data (X,Y,Z) from 2 monkeys feeding.
  • Figure 6 is strange, data could be represented better.
  • Also see: IEEE-1300786 (pdf) Ascertaining the importance of neurons to develop better brain-machine interfaces, also by Justin Sanchez.

____References____

Sanchez JC, Erdogmus D, Nicolelis MAL, Wessberg J, Principe JC, Interpreting spatial and temporal neural activity through a recurrent neural network brain-machine interface. IEEE Trans Neural Syst Rehabil Eng 13:2, 213-219 (2005)

{968}
hide / edit[1] / print
ref: Bassett-2009.07 tags: Weinberger congnitive efficiency beta band neuroimagaing EEG task performance optimization network size effort date: 12-28-2011 20:39 gmt revision:1 [0] [head]

PMID-19564605[0] Cognitive fitness of cost-efficient brain functional networks.

  • Idea: smaller, tighter networks are correlated with better task performance
    • working memory task in normal subjects and schizophrenics.
  • Larger networks operate with higher beta frequencies (more effort?) and show less efficient task performance.
  • Not sure about the noisy data, but v. interesting theory!

____References____

[0] Bassett DS, Bullmore ET, Meyer-Lindenberg A, Apud JA, Weinberger DR, Coppola R, Cognitive fitness of cost-efficient brain functional networks.Proc Natl Acad Sci U S A 106:28, 11747-52 (2009 Jul 14)

{323}
hide / edit[4] / print
ref: Loewenstein-2006.1 tags: reinforcement learning operant conditioning neural networks theory date: 12-07-2011 03:36 gmt revision:4 [3] [2] [1] [0] [head]

PMID-17008410[0] Operant matching is a generic outcome of synaptic plasticity based on the covariance between reward and neural activity

  • The probability of choosing an alternative in a long sequence of repeated choices is proportional to the total reward derived from that alternative, a phenomenon known as Herrnstein's matching law.
  • We hypothesize that there are forms of synaptic plasticity driven by the covariance between reward and neural activity and prove mathematically that matching (alternative to reward) is a generic outcome of such plasticity
    • models for learning that are based on the covariance between reward and choice are common in economics and are used phenomenologically to explain human behavior.
  • this model can be tested experimentally by making reward contingent not on the choices, but rather directly on neural activity.
  • Maximization is shown to be a generic outcome of synaptic plasticity driven by the sum of the covariances between reward and all past neural activities.
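A toy simulation of the claim (my sketch, with made-up parameters -- not the authors' code): covariance-driven weight updates on a concurrent, baited (VI-like) two-alternative schedule, checking whether choice fractions come to match income fractions.

  import numpy as np

  rng = np.random.default_rng(0)
  p_bait = np.array([0.2, 0.05])   # baiting probabilities for alternatives A, B
  w = np.array([0.5, 0.5])         # 'synaptic weights' biasing each choice
  available = np.zeros(2, dtype=bool)
  eta, r_bar = 0.005, 0.0
  choices, rewards = [], []
  for t in range(50000):
      available |= rng.random(2) < p_bait    # baited rewards persist until collected
      a = w + 0.2 * rng.standard_normal(2)   # noisy neural activity
      c = int(np.argmax(a))                  # choose the more active alternative
      r = 1.0 if available[c] else 0.0
      available[c] = False
      r_bar += 0.01 * (r - r_bar)            # running mean reward
      w += eta * (r - r_bar) * (a - w)       # covariance rule: dw ~ cov(reward, activity)
      w = np.clip(w, 0.05, None)
      choices.append(c); rewards.append(r)
  ch, rw = np.array(choices[-10000:]), np.array(rewards[-10000:])
  print("choice fraction A:", np.mean(ch == 0))
  print("income fraction A:", rw[ch == 0].sum() / rw.sum())

If the rule is doing its job, the two printed fractions should be close (Herrnstein matching).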

____References____

[0] Loewenstein Y, Seung HS, Operant matching is a generic outcome of synaptic plasticity based on the covariance between reward and neural activity.Proc Natl Acad Sci U S A 103:41, 15224-9 (2006 Oct 10)

{883}
hide / edit[1] / print
ref: -0 tags: Lehrer internet culture community collapse groupthink date: 06-01-2011 02:22 gmt revision:1 [0] [head]

Response to Jonah Lehrer's The Web and the Wisdom of Crowds:

Lehrer is right on one thing: culture. We're all consuming similar things (e.g. Rebecca Black) via the strong positive feedback of sharing things that you like, liking things that you share, and becoming more like the things that are shared with you. Will this lead to a cultural convergence, or a stable n-ary system? Too early to tell, but probably not: likely this is nothing new. Would you expect music to collapse to a single genre? No way. Sure, there will be pop culture via the mechanisms Lehrer suggests, but meanwhile there is too much to explore, and we like novelty too much.

Regarding decision making through stochastic averaging as implemented in democracy, I have to agree with John Hawk here. The growing availability of knowledge, news, and other opinions should be a good thing. This ought to be more than enough to counteract the problem of everyone reading say the NYTimes instead of many varied local newspapers; there should be no impoverishment of opinion. Furthermore, we read blogs (like Lehrer's) which have to compete increasingly honestly in the attention economy. The cost of redirecting our attention has gone from that of a subscription to free. Plus, this attention economy ties communication to reality at more points - each reader, as opposed to each publisher, is partially responsible for information amplification and dissemination. (I mean, I just published this damn thing at almost zero cost - is that not a great thing?)

{862}
hide / edit[1] / print
ref: -0 tags: backpropagation cascade correlation neural networks date: 12-20-2010 06:28 gmt revision:1 [0] [head]

The Cascade-Correlation Learning Architecture

  • Much better -- much more sensible and computationally cheaper than backprop.
  • Units are added one by one; each is trained to be maximally correlated to the error of the existing, frozen neural network.
  • Uses quickprop to speed up gradient ascent learning.
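The candidate-training step is easy to sketch (my reconstruction from the paper's description, assuming a tanh candidate unit and a regression task; names are mine):

  import numpy as np

  # x: candidate inputs (original inputs + outputs of frozen hidden units), shape (n, d)
  # e: residual errors of the current frozen network over the training set, shape (n,)
  def train_candidate(x, e, steps=2000, lr=0.05, rng=np.random.default_rng(0)):
      n, d = x.shape
      w = 0.1 * rng.standard_normal(d)
      for _ in range(steps):
          v = np.tanh(x @ w)                            # candidate activations
          s = np.sum((v - v.mean()) * (e - e.mean()))   # correlation with residual error
          # gradient ascent on |s|; tanh'(net) = 1 - v^2
          grad = np.sign(s) * ((e - e.mean()) * (1 - v**2)) @ x
          w += lr * grad / n
      return w

Once the correlation stops improving, the candidate's input weights are frozen and it is installed as a new hidden unit; only the output-layer weights are then retrained (with quickprop, in the paper).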

{795}
hide / edit[1] / print
ref: work-0 tags: machine learning reinforcement genetic algorithms date: 10-26-2009 04:49 gmt revision:1 [0] [head]

I just had dinner with Jesse, and we had a good/productive discussion/brainstorm about algorithms, learning, and neurobio. Two things worth repeating, one simpler than the other:

1. Gradient descent / Newton-Rhapson like techniques should be tried with genetic algorithms. As of my current understanding, genetic algorithms perform a semi-directed search, randomly exploring the space of solutions with natural selection exerting a pressure to improve. What if you took the partial derivative of each of the organism's genes, and used that to direct mutation, rather than random selection of the mutated element? What if you looked before mating and crossover? Seems like this would speed up the algorithm greatly (though it might get it stuck in local minima, too). Not sure if this has been done before - if it has, edit this to indicate where!
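A sketch of point 1, assuming a differentiable loss in place of true fitness (which real GA problems usually lack -- that's the catch): bias each mutation along the downhill direction rather than drawing it isotropically.

  import numpy as np

  rng = np.random.default_rng(0)

  def mutate_directed(genome, loss_grad, sigma=0.1, bias=0.5):
      # loss_grad: hypothetical function returning the gradient of the loss
      # at this genome. bias = 0 recovers ordinary Gaussian mutation; larger
      # bias trades exploration for exploitation (the local-minima worry above).
      g = loss_grad(genome)
      g = g / (np.linalg.norm(g) + 1e-12)
      return genome + sigma * (rng.standard_normal(genome.shape) - bias * g)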

2. Most supervised machine learning algorithms seem to rely on one single, externally applied objective function which they then attempt to optimize. (Rather this is what convex programming is. Unsupervised learning of course exists, like PCA, ICA, and other means of learning correlative structure) There are a great many ways to do optimization, but all are exactly that - optimization, search through a space for some set of weights / set of rules / decision tree that maximizes or minimizes an objective function. What Jesse and I have arrived at is that there is no real utility function in the world, (Corollary #1: life is not an optimization problem (**)) -- we generate these utility functions, just as we generate our own behavior. What would happen if an algorithm iteratively estimated, checked, cross-validated its utility function based on the small rewards actually found in the world / its synthetic environment? Would we get generative behavior greater than the complexity of the inputs? (Jesse and I also had an in-depth talk about information generation / destruction in non-linear systems.)

Put another way, perhaps part of learning is to structure internal valuation / utility functions to set up reinforcement learning problems where the reinforcement signal comes according to satisfaction of sub-goals (= local utility functions). Or, the gradient signal comes by evaluating partial derivatives of actions wrt these local utility functions. Creating these goals is natural but not always easy, which is one reason (of very many!) sports are so great - the utility function is clean, external, and immutable. The recursive, introspective creation of valuation / utility functions is what drives a lot of my internal monologues, mixed with a hefty dose of taking partial derivatives (see {780}) based on models of the world. (Stated this way, they seem so similar that perhaps they are the same thing?)

To my limited knowledge, there has been some recent work on the creation of sub-goals in reinforcement learning. One paper I read used a system to look for states that had a high ratio of ultimately rewarded paths to unrewarded paths, and selected these as subgoals (e.g. rewarded the agent when this state was reached.) I'm not talking about these sorts of sub-goals. In these systems, there is an ultimate goal that the researcher wants the agent to achieve, and it is the algorithm's task to make a policy for generating/selecting behavior. Rather, I'm interested in even more unstructured tasks - make a utility function, and a behavioral policy, based on small continuous (possibly irrelevant?) rewards in the environment.

Why would I want to do this? The pet project I have in mind is a 'cognitive' PCB part placement / layout / routing algorithm to add to my pet project, kicadocaml, to finally get some people to use it (the attention economy :-) In the course of thinking about how to do this, I've realized that a substantial problem is simply determining what board layouts are good, and what are not. I have a rough aesthetic idea + some heuristics that I learned from my dad + some heuristics I've learned through practice of what is good layout and what is not - but, how to code these up? And what if these aren't the best rules, anyway? If i just code up the rules I've internalized as utility functions, then the board layout will be pretty much as I do it - boring!

Well, I've stated my sub-goal in the form of a problem statement and some criteria to meet. Now, to go and search for a decent solution to it. (Have to keep this blog m8ta!) (Or, realistically, to go back and see if the problem statement is sensible).

(**) Corollary #2 - There is no god. nod, Dawkins.

{789}
hide / edit[4] / print
ref: work-0 tags: emergent leabra QT neural networks GUI interface date: 10-21-2009 19:02 gmt revision:4 [3] [2] [1] [0] [head]

I've been reading Computational Explorations in Cognitive Neuroscience, and decided to try the code that comes with / is associated with the book. This used to be called "PDP+", but was re-written, and is now called Emergent. It's a rather large program - links to Qt, GSL, Coin3D, Quarter, Open Dynamics Library, and others. The GUI itself seems obtuse and too heavy; it's not clear why they need to make this so customized / panneled / tabbed. Also, it depends on relatively recent versions of each of these libraries - which made the install on my Debian Lenny system a bit of a chore (kinda like windows).

A really strange thing is that programs are stored in tree lists - woah - a natural folding editor built in! I've never seen a programming language that doesn't rely on simple text files. Not a bad idea, but still foreign to me. (But I guess programs are inherently hierarchical anyway.)

Below, a screenshot of the whole program - note they use a Coin3D window to graph things / interact with the model. The colored boxes in each network layer indicate local activations, and they update as the network is trained. I don't mind this interface, but again it seems a bit too 'heavy' for things that are inherently 2D (like 2D network activations and the output plot). It's good for seeing hierarchies, though, like the network model.

All in all looks like something that could be more easily accomplished with some python (or ocaml), where the language itself is used for customization, and not a GUI. With this approach, you spend more time learning about how networks work, and less time programming GUIs. On the other hand, if you use this program for teaching, the gui is essential for debugging your neural networks, or other people use it a lot, maybe then it is worth it ...

In any case, the book is very good. I've learned about GeneRec, which uses different activation phases to compute local errors for the purposes of error-minimization, as well as the virtues of using both Hebbian and error-based learning (like GeneRec). Specifically, the authors show that error-based learning can be rather 'lazy', purely moving down the error gradient, whereas Hebbian learning can internalize some of the correlational structure of the input space. You can look at this internalization as 'weight constraint' which limits the space that error-based learning has to search. Cool idea! Inhibition also is a constraint - one which constrains the network to be sparse.

To use his/their own words:

... given the explanation above about the network's poor generalization, it should be clear why both Hebbian learning and kWTA (k winner take all) inhibitory competition can improve generalization performance. At the most general level, they constitute additional biases that place important constraints on the learning and the development of representations. More specifically, Hebbian learning constrains the weights to represent the correlational structure of the inputs to a given unit, producing systematic weight patterns (e.g. cleanly separated clusters of strong correlations).

Inhibitory competition helps in two ways. First, it encourages individual units to specialize in representing a subset of items, thus parcelling up the task in a much cleaner and more systematic way than would occur in an otherwise unconstrained network. Second, inhibition greatly restricts the settling dynamics of the network, greatly constraining the number of states the network can settle into, and thus eliminating a large proportion of the attractors that can hijack generalization."

{787}
hide / edit[1] / print
ref: life-0 tags: IQ intelligence Flynn effect genetics facebook social utopia data machine learning date: 10-02-2009 14:19 gmt revision:1 [0] [head]

src

My theory on the Flynn effect - human intelligence IS increasing, and this is NOT stopping. Look at it from a ML perspective: there is more free time to get data, the data (and world) has almost unlimited complexity, the data is much higher quality and much easier to get (the vast internet & world!(travel)), there is (hopefully) more fuel to process that data (food!). Therefore, we are getting more complex, sophisticated, and intelligent. Also, the idea that less-intelligent people having more kids will somehow 'dilute' our genetic IQ is bullshit - intelligence is mostly a product of environment and education, and is tailored to the tasks we need to do; it is not (or only very weakly, except at the extremes) tied to the wetware. Besides, things are changing far too fast for genetics to follow.

Regarding this social media, like facebook and others, you could posit that social intelligence is increasing, along similar arguments to above: social data is seemingly more prevalent, more available, and people spend more time examining it. Yet this feels to be a weaker argument, as people have always been socializing, talking, etc., and I'm not sure if any of these social media have really increased it. Regardless, people enjoy it - that's the important part.

My utopia for today :-)

{690}
hide / edit[2] / print
ref: Chapin-1999.07 tags: chapin Nicolelis BMI neural net original SUNY rat date: 09-02-2009 23:11 gmt revision:2 [1] [0] [head]

PMID-10404201 Real-time control of a robot arm using simultaneously recorded neurons in the motor cortex.

  • Abstract: To determine whether simultaneously recorded motor cortex neurons can be used for real-time device control, rats were trained to position a robot arm to obtain water by pressing a lever. Mathematical transformations, including neural networks, converted multineuron signals into 'neuronal population functions' that accurately predicted lever trajectory. Next, these functions were electronically converted into real-time signals for robot arm control. After switching to this 'neurorobotic' mode, 4 of 6 animals (those with > 25 task-related neurons) routinely used these brain-derived signals to position the robot arm and obtain water. With continued training in neurorobotic mode, the animals' lever movement diminished or stopped. These results suggest a possible means for movement restoration in paralysis patients.
The basic idea of the experiment: the rat first positioned the robot arm (and so obtained water) by pressing a forelimb lever, then later learned to control the arm directly from the recorded neurons. They used an artificial neural network to decode the intended movement.

{776}
hide / edit[0] / print
ref: work-0 tags: neural networks course date: 09-01-2009 04:24 gmt revision:0 [head]

http://www.willamette.edu/~gorr/classes/cs449/intro.html -- decent resource, good explanation of the equations associated with artificial neural networks.

{756}
hide / edit[0] / print
ref: life-0 tags: education wikinomics internet age college university pedagogy date: 06-11-2009 12:52 gmt revision:0 [head]

Will universities stay relevant? and the rest of the wikinomics blog

  • Idea: for universities to remain relevant, they will have to change their teaching styles to match the impatient and interactive internet-raised generation.
  • Notable quotes:
    • [College students today] want to learn, but they want to learn only from what they have to learn, and they want to learn it in a style that is best for them.
    • In the old model, teachers taught and students were expected to absorb vast quantities of content. Education was about absorbing content and being able to recall it on exams. You graduated and you were set for life - just “keeping” up in your chosen field. Today when you graduate you’re set for say, 15 minutes. (heheh)
  • What matters now is a student's capacity for learning. Hence colleges should teach meta-learning: learning how to learn.
  • My opinion: Universities will not die, they are too useful given the collaborative nature of human learning: they bring many different people together for the purpose of learning (and perhaps doing research). This is essential, not just for professional learning, but for life-learning (learning from others' experience so you don't have to experience it). Sure, people can learn by consulting google or wikipedia, but it's not nearly as good as face-to-face lectures (where you can ask questions!) or office hours, because the teacher there has some idea what is going on in the student's mind as he/she learns, and can anticipate questions and give relevant guidance based on experience. Google and Wikipedia, for now, cannot do this as well as a good, thoughtful teacher or friend.

{724}
hide / edit[2] / print
ref: Oskoei-2008.08 tags: EMG pattern analysis classification neural network date: 04-07-2009 21:10 gmt revision:2 [1] [0] [head]

  • EMG pattern analysis and classification by Neural Network
    • 1989!
    • short, simple paper; showed that 20 patterns can be accurately decoded with a backprop-trained neural network.
  • PMID-18632358 Support vector machine-based classification scheme for myoelectric control applied to upper limb.
    • myoelectric discrimination with SVM running on features in both the time and frequency domain.
    • a surface MES (myoelectric signal) is formed via the superposition of individual action potentials generated by irregular discharges of active motor units in a muscle fiber. Its amplitude, variance, energy, and frequency vary depending on contraction level.
    • Time domain features (computed in the sketch at the end of this entry):
      • Mean absolute value (MAV)
      • root mean square (RMS)
      • waveform length (WL)
      • variance
      • zero crossings (ZC)
      • slope sign changes (SSC)
      • Willison amplitude.
    • Frequency domain features:
      • power spectrum
      • autoregressive coefficients order 2 and 6
      • mean signal frequency
      • median signal frequency
      • good performance with just RMS + AR2 for 50 or 100ms segments. Used an SVM with an RBF kernel.
      • looks like you can just get away with time-domain metrics!!
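Those time-domain features are cheap to compute; a sketch (thresholds and names are my own guesses; seg = one 50-100ms window of raw EMG):

  import numpy as np

  def td_features(seg, thresh=0.01):
      d = np.diff(seg)
      mav = np.mean(np.abs(seg))                 # mean absolute value
      rms = np.sqrt(np.mean(seg**2))             # root mean square
      wl = np.sum(np.abs(d))                     # waveform length
      var = np.var(seg)                          # variance
      zc = np.sum((seg[:-1] * seg[1:] < 0)       # zero crossings,
                  & (np.abs(d) > thresh))        #   noise-thresholded
      ssc = np.sum((d[:-1] * d[1:] < 0)          # slope sign changes
                   & (np.abs(d[:-1]) > thresh))
      wamp = np.sum(np.abs(d) > thresh)          # Willison amplitude
      return np.array([mav, rms, wl, var, zc, ssc, wamp])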

{695}
hide / edit[0] / print
ref: -0 tags: alopex machine learning artificial neural networks date: 03-09-2009 22:12 gmt revision:0 [head]

Alopex: A Correlation-Based Learning Algorithm for Feed-Forward and Recurrent Neural Networks (1994)

  • read the abstract! rather than using the gradient error estimate as in backpropagation, it uses the correlation between changes in network weights and changes in the error + gaussian noise.
    • backpropagation requires calculation of the derivatives of the transfer function from one neuron to the output. This is very non-local information.
    • one alternative is somewhat empirical: compute the derivatives wrt the weights through perturbations.
    • all these algorithms are solutions to the optimization problem: minimize an error measure, E, wrt the network weights.
  • all network weights are updated synchronously.
  • can be used to train both feedforward and recurrent networks.
  • algorithm apparently has a long history, especially in visual research.
  • the algorithm is quite simple! easy to understand.
    • use stochastic weight changes with an annealing schedule (sketch below).
  • this is pre-pub: tables and figures at the end.
  • looks like it has comparable or faster convergence then backpropagation.
  • not sure how it will scale to problems with hundreds of neurons; though, they looked at an encoding task with 32 outputs.
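The update rule in a few lines (my sketch from the description above, not the paper's pseudocode; initialize dw_prev with small random values):

  import numpy as np

  # w: weights; dw_prev: previous weight changes; dE: change in error on the
  # last step (scalar); delta: step size; T: annealing temperature.
  def alopex_step(w, dw_prev, dE, delta, T, rng):
      corr = dw_prev * dE                  # correlate each weight's change with dE
      p = 1.0 / (1.0 + np.exp(corr / T))   # corr < 0 (change helped) -> p > 0.5
      repeat = rng.random(w.shape) < p
      dw = delta * np.where(repeat, np.sign(dw_prev), -np.sign(dw_prev))
      return w + dw, dw

At high T the moves are a pure random walk; annealing T down lets the correlation term dominate, so each weight keeps stepping in whatever direction has been lowering the error.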

{669}
hide / edit[1] / print
ref: Pearlmutter-2009.06 tags: sleep network stability learning memory date: 02-05-2009 19:21 gmt revision:1 [0] [head]

PMID-19191602 A New Hypothesis for Sleep: Tuning for Criticality.

  • Their hypothesis: in the course of learning, the brain's networks move closer to instability, because learning and information storage themselves push the network toward the critical point.
    • That is, a perfectly stable network stores no information: output is the same independent of input; a highly unstable network can potentially store a lot of information, or be a very selective or critical system: output is highly sensitive to input.
  • Sleep serves to restore the stability of the network by exposing it to a variety of inputs, checking for runaway activity, and adjusting accordingly. (inhibition / glia? how?)
  • Say that when sleep is not possible, an emergency mechanism must come into play, namely tiredness, to prevent runaway behavior.
  • (From wikipedia:) a potentially serious side-effect of many antipsychotics is that they tend to lower an individual's seizure threshold. Recall that removal of all dopamine can inhibit REM sleep; it's all somehow consistent, but unclear how maintaining network stability and being able to move are related.

{503}
hide / edit[6] / print
ref: bookmark-0 tags: internet communication tax broadband election? date: 11-21-2007 22:18 gmt revision:6 [5] [4] [3] [2] [1] [0] [head]

quote:

Consumers also pay high taxes for telecommunication services, averaging about 13 percent on some telecom services, similar to the tax rate on tobacco and alcohol, Mehlman said. One tax on telecom service has remained in place since the 1898 Spanish-American War, when few U.S. residents had telephones, he noted.

"We think it's a mistake to treat telecom like a luxury and tax it like a sin," he said.

from: The internet could run out of capacity in two years

comments:

  • I bet this will turn into a great excuse for your next president not to invest on health, but rather on internet. --ana
  • Humm.. I think it is meant to be more of a wake-up call to the backhaul and ISP companies, which own most of the networking capacity (not the government). I imagine there will be some problems, people complain, it gets fixed.. hopefully soon. What is really amazing is the total amount of data the internet is expected to produce - 161 exabytes!! -- tlh
  • They won't upgrade their capacity. After all, the telcos spent a lot of money doing just that in the dot-bomb days. No, instead they will spend their money on technologies and laws that allow them to charge more for certain types of packets or for delivering some packets faster than others. You think it's a coincidence that Google is buying up dark fiber? --jeo

{497}
hide / edit[2] / print
ref: bookmark-0 tags: open source cellphone public network date: 11-13-2007 21:28 gmt revision:2 [1] [0] [head]

http://dotpublic.istumbler.net/

  • kinda high-level, rather amorphous, but generally in the right direction. The drive is there, the time is coming, but we are not quite there yet..
  • have some designs for wireless repeaters, based on 802.11g mini-pci cards in an SBC; 3 repeaters, total cost about $1000
  • also interesting: http://www.opencellphone.org/index.php?title=Main_Page

{479}
hide / edit[3] / print
ref: bookmark-0 tags: cybernetics introduction 1957 Ross Ashby feedback date: 10-26-2007 00:50 gmt revision:3 [2] [1] [0] [head]

http://pespmc1.vub.ac.be/books/IntroCyb.pdf -- dated, but still interesting, useful, a book in and of itself!

  • cybernetics = "the study of systems that are open to energy but closed to information and control"
    • cybernetics also = the study of systems whose complexity cannot be reduced away, or rather whose complexity is integral to its function, e.g. the human brain, the world economy. here simple examples have little explanatory power.
  • book, for the most part, avoids calculus, and deals instead with discrete time and sums (i think?)
  • with exercises!! for example, page 60 - cybernetics of a haunted house:)
  • random thought: a lot of this stuff seems dependent on the mathematics of statistical physics...

{467}
hide / edit[2] / print
ref: bookmark-0 tags: Saab water injection neuralnet 900 turbo date: 10-15-2007 16:09 gmt revision:2 [1] [0] [head]

Self-learning fuzzy neural network with optimal on-line learning for water injection control of a turbocharged automobile.

  • for a 1994 - 1998 Saab 900 SE (like mine).
  • also has details on the Trionic 5 ECU, including how Saab detects knock through pre-ignition ionization measurement, and how it subsequently modifies ignition timing & boost pressure.
  • images/467_1.pdf

{465}
hide / edit[1] / print
ref: notes-0 tags: CRC32 ethernet blackfin date: 10-10-2007 03:57 gmt revision:1 [0] [head]

good explanation of 32-bit CRC (from the blackfin BF537 hardware ref):
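For reference, a minimal bitwise version of the same (Ethernet / IEEE 802.3, reflected) CRC-32, sketched in Python:

  def crc32(data: bytes) -> int:
      # Reflected CRC-32, polynomial 0xEDB88320, init & final XOR 0xFFFFFFFF
      # (the parameters used by Ethernet and zlib).
      crc = 0xFFFFFFFF
      for byte in data:
          crc ^= byte
          for _ in range(8):
              crc = (crc >> 1) ^ (0xEDB88320 if crc & 1 else 0)
      return crc ^ 0xFFFFFFFF

  assert crc32(b"123456789") == 0xCBF43926   # standard check value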

{401}
hide / edit[2] / print
ref: bookmark-0 tags: RF penetration tissue 1978 date: 07-24-2007 04:15 gmt revision:2 [1] [0] [head]

http://hardm.ath.cx:88/pdf/RFpenetrationInTissue.pdf

  • from the perspective of NMR imaging.
  • gives the penetration depths & phase-shifts for RF waves from 1 - 100MHz. I can only assume that it is much worse for 400MHz and 2.4GHz.
    • that said, Zarlink's MICS transceiver works from the GI tract at 400MHz with low power, suggesting that the attenuation can't be too too great.
  • includes equations used to derive these figures.
  • document describing how various antenna types are effected by biological tissue, e.g. a human head.

even more interesting: wireless brain machine interface

{384}
hide / edit[1] / print
ref: bookmark-0 tags: magstripe magnetic stripe reader writer encoder date: 05-31-2007 02:49 gmt revision:1 [0] [head]

notes on reading magstripe cards:

{277}
hide / edit[6] / print
ref: Sergio-2005.1 tags: isometric motor control kinematics kinetics Kalaska date: 04-09-2007 22:33 gmt revision:6 [5] [4] [3] [2] [1] [0] [head]

PMID-15888522[0] Motor cortex neural correlates of output kinematics and kinetics during isometric-force and arm-reaching tasks.

  • see [1]
  • recorded 132 units from the caudal M1
  • two tasks: isometric and movement of a heavy mass, both to 8 peripheral targets.
    • target location was digitized using a 'sonic digitizer'. trajectories look really good - the monkey was well trained.
  • idea: part of M1 functions near the output (of course)
    • evidence supporting this: M1 rasters during movement of the heavy mass show a triphasic profile: one to accelerate the mass, one to decelerate it, and another to hold it steady on target. see [2,3,4,5,6,7,8,9,10]

____References____

[0] Sergio LE, Hamel-Paquet C, Kalaska JF, Motor cortex neural correlates of output kinematics and kinetics during isometric-force and arm-reaching tasks.J Neurophysiol 94:4, 2353-78 (2005 Oct)
[1] Hatsopoulos NG, Encoding in the motor cortex: was Evarts right after all? Focus on "motor cortex neural correlates of output kinematics and kinetics during isometric-force and arm-reaching tasks".J Neurophysiol 94:4, 2261-2 (2005 Oct)
[2] Cooke JD, Brown SH, Movement-related phasic muscle activation. II. Generation and functional role of the triphasic pattern.J Neurophysiol 63:3, 465-72 (1990 Mar)
[3] Almeida GL, Hong DA, Corcos D, Gottlieb GL, Organizing principles for voluntary movement: extending single-joint rules.J Neurophysiol 74:4, 1374-81 (1995 Oct)
[4] Gottlieb GL, Latash ML, Corcos DM, Liubinskas TJ, Agarwal GC, Organizing principles for single joint movements: V. Agonist-antagonist interactions.J Neurophysiol 67:6, 1417-27 (1992 Jun)
[5] Corcos DM, Agarwal GC, Flaherty BP, Gottlieb GL, Organizing principles for single-joint movements. IV. Implications for isometric contractions.J Neurophysiol 64:3, 1033-42 (1990 Sep)
[6] Gottlieb GL, Corcos DM, Agarwal GC, Latash ML, Organizing principles for single joint movements. III. Speed-insensitive strategy as a default.J Neurophysiol 63:3, 625-36 (1990 Mar)
[7] Corcos DM, Gottlieb GL, Agarwal GC, Organizing principles for single-joint movements. II. A speed-sensitive strategy.J Neurophysiol 62:2, 358-68 (1989 Aug)
[8] Gottlieb GL, Corcos DM, Agarwal GC, Organizing principles for single-joint movements. I. A speed-insensitive strategy.J Neurophysiol 62:2, 342-57 (1989 Aug)
[9] Ghez C, Gordon J, Trajectory control in targeted force impulses. I. Role of opposing muscles.Exp Brain Res 67:2, 225-40 (1987)
[10] Sainburg RL, Ghez C, Kalakanis D, Intersegmental dynamics are controlled by sequential anticipatory, error correction, and postural mechanisms.J Neurophysiol 81:3, 1045-56 (1999 Mar)

{230}
hide / edit[0] / print
ref: engineering notes-0 tags: homopolar generator motor superconducting magnet date: 03-09-2007 14:39 gmt revision:0 [head]

http://hardm.ath.cx:88/pdf/homopolar.pdf

  • the magnets are energized in 'opposite' directions, forcing the field lines to go normal to the rotor.
  • still need brushes - perhaps there is no way to avoid them in a homopolar generator.

{223}
hide / edit[0] / print
ref: physics notes-0 tags: plasma physics electromagnet tesla coil copper capillary tubing calculations date: 02-23-2007 16:01 gmt revision:0 [head]

calculations for a strong DC loop magnet using 1/8" copper capillary tubing:

  1. OD .125" = 3.175mm; ID 0.065" -> copper area = 23.2mm^2 ~= AWG 4
  2. AWG 4 = 0.8 ohms/km
  3. length of tubing: 30' ~= 40 turns @ 9" each (windings packed into a torus of major radius 1.5"; minor radius 0.5")
  4. water flow rate through copper capillary tubing: 1 liter/min; assuming we can heat it up from 30C -> 100C, this is 70KCal = 292 KJ/min = 4881 W total. (better pipe it into our hot water heater!)
  5. 4.8kw / 9m of tubing = 540 W/m
  6. 540W/m / 8e-4 = 821 A ; V = 821 * 9 * 8e-4 = 5.9V (!!! where the hall am i going to get that kind of power?)
  7. 821A * 40 turns = 32.8KA in a loop major radius 1.5" = 3.8cm
  8. magnetic field of a current loop -> B = 0.54T
  9. Larmor radius: 5eV electrons @ B = 0.54T: 15um; proton: 2.7cm; electrons @ 1keV ~= 2.66e8 m/s (this is close to the speed of light?) r = 3mm.
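Collecting the arithmetic above into one script (same assumptions: 0.8 ohm/km, 9m of tubing, 40 turns at 3.8cm radius, 1 liter/min of cooling water heated 30C -> 100C):

  import numpy as np

  mu0 = 4e-7 * np.pi
  R_per_m = 0.8e-3                    # ohm/m (~AWG 4)
  L_tube = 9.0                        # m of capillary tubing (~30 ft)
  P_cool = (1.0 / 60) * 4186 * 70     # W: 1 kg/min of water, dT = 70 C -> ~4.9 kW
  P_per_m = P_cool / L_tube           # ~540 W/m allowable dissipation
  I = np.sqrt(P_per_m / R_per_m)      # ~820 A
  V = I * L_tube * R_per_m            # ~5.9 V
  N, r = 40, 0.038                    # turns, loop radius (1.5")
  B = mu0 * N * I / (2 * r)           # field at the center of the loop
  print(f"I = {I:.0f} A, V = {V:.1f} V, P = {P_cool:.0f} W, B = {B:.2f} T")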

{7}
hide / edit[0] / print
ref: bookmark-0 tags: book information_theory machine_learning bayes probability neural_networks mackay date: 0-0-2007 0:0 revision:0 [head]

http://www.inference.phy.cam.ac.uk/mackay/itila/book.html -- free! (but i liked the book, so I bought it :)

{20}
hide / edit[0] / print
ref: bookmark-0 tags: neural_networks machine_learning matlab toolbox supervised_learning PCA perceptron SOM EM date: 0-0-2006 0:0 revision:0 [head]

http://www.ncrg.aston.ac.uk/netlab/index.php n.b. kinda old. (or does that just mean well established?)

{39}
hide / edit[0] / print
ref: bookmark-0 tags: Numenta Bayesian_networks date: 0-0-2006 0:0 revision:0 [head]

http://www.numenta.com/Numenta_HTM_Concepts.pdf

  • shared, hierarchal representation reduces memory requirements, training time, and mirrors the structure of the world.
  • belief propagation techniques force the network into a set of mutually consistent beliefs.
  • a belief is a form of spatio-temporal quantization: ignore the unusual.
  • a cause is a persistent or recurring structure in the world - the root of a spatiotemporal pattern. This is a simple but important concept.
    • HTM marginalize along space and time - they assume time patterns and space patterns, not both at the same time. Temporal parameterization follows spatial parameterization.

{40}
hide / edit[0] / print
ref: bookmark-0 tags: Bayes Baysian_networks probability probabalistic_networks Kalman ICA PCA HMM Dynamic_programming inference learning date: 0-0-2006 0:0 revision:0 [head]

http://www.cs.ubc.ca/~murphyk/Bayes/bnintro.html very, very good! many references, well explained too.

{92}
hide / edit[0] / print
ref: bookmark-0 tags: training neural_networks with kalman filters date: 0-0-2006 0:0 revision:0 [head]

with the extended kalman filter, from '92: http://ftp.ccs.neu.edu/pub/people/rjw/kalman-ijcnn-92.ps

with the unscented kalman filter : http://hardm.ath.cx/pdf/NNTrainingwithUnscentedKalmanFilter.pdf