m8ta
{842}
ref: work-0 tags: distilling free-form natural laws from experimental data Schmidt Cornell automatic programming genetic algorithms date: 09-14-2018 01:34 gmt revision:5

Distilling free-form natural laws from experimental data

  • The critical step was to use partial derivatives to evaluate the search for invariants. Even so, with a 4D data set the search for natural laws took ~30 hours.
    • Then again, how long did it take humans to figure out these invariants? (Went about it in a decidedly different way..)
    • Further, how long did it take for biology to discover similar invariants?
      • They claim elsewhere that the same algorithm has been applied to biological data - a metabolic pathway - with some success.
      • Of course evolution had to explore a much larger space - proteins and regulatory pathways, not simpler mathematical expressions / linkages.
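The derivative-ratio idea can be sketched in a few lines: a candidate invariant f(θ, ω) implies dθ/dω = -(∂f/∂ω)/(∂f/∂θ), which can be checked against numerically differenced data. Everything below (the pendulum system, the energy-like invariant, the Euler integrator) is an assumed toy stand-in, not the paper's actual setup:

```python
import math

# Toy stand-in for the paper's method: score a candidate conservation law by
# comparing the derivative ratio it implies against ratios estimated from data.
G_OVER_L = 9.81  # assumed g/L for a unit-length pendulum

def f(theta, omega):
    """Candidate invariant: energy-like quantity, conserved for an ideal pendulum."""
    return 0.5 * omega ** 2 - G_OVER_L * math.cos(theta)

def partial(fn, args, i, h=1e-6):
    """Central-difference partial derivative of fn w.r.t. argument i."""
    lo, hi = list(args), list(args)
    lo[i] -= h
    hi[i] += h
    return (fn(*hi) - fn(*lo)) / (2 * h)

def predicted_ratio(theta, omega):
    """d(theta)/d(omega) implied by f(theta, omega) = const."""
    return -partial(f, (theta, omega), 1) / partial(f, (theta, omega), 0)

# Generate "experimental" data by Euler integration of the pendulum dynamics.
theta, omega, dt = 0.5, 0.0, 1e-4
traj = [(theta, omega)]
for _ in range(20000):
    omega -= G_OVER_L * math.sin(theta) * dt
    theta += omega * dt
    traj.append((theta, omega))

# Score: mean disagreement between measured and predicted derivative ratios.
errors = []
for (t0, w0), (t1, w1) in zip(traj[:-1], traj[1:]):
    if abs(math.sin(t0)) < 1e-2:  # skip points where the ratio is near-singular
        continue
    errors.append(abs((t1 - t0) / (w1 - w0) - predicted_ratio(t0, w0)))
score = sum(errors) / len(errors)  # small score: the candidate law fits the data
```

A wrong candidate law would predict derivative ratios that disagree with the data, giving a much larger score; that is the pressure the symbolic-regression search can exploit.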

{1409}
ref: -0 tags: coevolution fitness prediction schmidt genetic algorithm date: 09-14-2018 01:34 gmt revision:8

Coevolution of Fitness Predictors

  • Michael D. Schmidt and Hod Lipson, Member, IEEE
  • Fitness prediction is a technique to replace fitness evaluation in evolutionary algorithms with a light-weight approximation that adapts with the solution population.
    • Predictors cannot approximate the full fitness landscape, but they shift focus during evolution.
    • Aka local caching.
    • Or adversarial techniques.
  • Instead use coevolution, with three populations:
    • 1) solutions to the original problem, evaluated using only fitness predictors;
    • 2) fitness predictors of the problem; and
    • 3) fitness trainers, whose exact fitness is used to train predictors.
      • Trainers are selected high variance solutions across the predictors, and predictors are trained on this subset.
  • Lightweight fitness predictors evolve faster than the solution population, so the authors cap the computational effort spent on them at 5% of the overall effort.
    • These fitness predictors are basically an array of integers which index the full training set -- very simple and linear. Maybe boring, but the simplest solution that works ...
    • They only sample 8 training examples for even complex 30-node solution functions (!!).
    • I guess, because the information introduced into the solution set is relatively small per generation, it makes little sense to over-sample or over-specify this; all that matters is that, on average, it's directionally correct and unbiased.
  • Used deterministic crowding selection as the evolutionary algorithm.
    • Similar individuals have to compete in tournaments for space.
  • Showed that the coevolution algorithm is capable of inferring even highly complex many-term functions.
    • And, it uses function evaluations more efficiently than the 'exact' (each solution evaluated exactly) algorithm.
  • Coevolution algorithm seems to induce less 'bloat' in the complexity of the solutions.
  • See also {842}
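The scheme above can be sketched minimally. All specifics below (the toy regression problem, population sizes, mutation rates) are assumptions for illustration; only the core structure follows the paper: solutions are scored by a predictor that is just 8 indices into the training set, and trainers are the solutions the predictors disagree on most.

```python
import random

random.seed(0)

# Toy problem (assumed): recover coefficients of y = 2x^2 - 3x + 1 from samples.
TRAIN = [(x / 10.0, 2 * (x / 10.0) ** 2 - 3 * (x / 10.0) + 1)
         for x in range(-100, 100)]

def exact_fitness(coeffs):                      # expensive: uses all 200 samples
    a, b, c = coeffs
    return -sum((a * x * x + b * x + c - y) ** 2 for x, y in TRAIN) / len(TRAIN)

def predicted_fitness(coeffs, predictor):       # cheap: uses only 8 samples
    a, b, c = coeffs
    pts = [TRAIN[i] for i in predictor]
    return -sum((a * x * x + b * x + c - y) ** 2 for x, y in pts) / len(pts)

def mutate_solution(s):
    s = list(s)
    s[random.randrange(3)] += random.gauss(0, 0.3)
    return tuple(s)

def mutate_predictor(p):
    p = list(p)
    p[random.randrange(len(p))] = random.randrange(len(TRAIN))
    return tuple(p)

solutions = [tuple(random.uniform(-5, 5) for _ in range(3)) for _ in range(30)]
predictors = [tuple(random.randrange(len(TRAIN)) for _ in range(8))
              for _ in range(10)]
trainers = solutions[:5]

for gen in range(300):
    # Predictor fitness: how well it reproduces exact fitness on the trainers.
    best_pred = max(predictors, key=lambda p: -sum(
        (predicted_fitness(t, p) - exact_fitness(t)) ** 2 for t in trainers))
    # Evolve solutions using only the cheap predictor (truncation selection).
    ranked = sorted(solutions, key=lambda s: predicted_fitness(s, best_pred),
                    reverse=True)
    solutions = ranked[:15] + [mutate_solution(random.choice(ranked[:15]))
                               for _ in range(15)]
    if gen % 10 == 0:
        # New trainers: solutions with the highest variance across predictors.
        def variance(s):
            fs = [predicted_fitness(s, p) for p in predictors]
            m = sum(fs) / len(fs)
            return sum((fi - m) ** 2 for fi in fs)
        trainers = sorted(solutions, key=variance, reverse=True)[:5]
        predictors = [best_pred] + [mutate_predictor(best_pred) for _ in range(9)]

best = max(solutions, key=exact_fitness)  # coefficients should approach (2, -3, 1)
```

Note how exact (full-data) evaluation only ever touches the 5 trainers, which is where the computational savings come from.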

{1236}
ref: -0 tags: optogenetics micro LED flexible electrodes PET rogers date: 12-28-2017 03:24 gmt revision:9

PMID-23580530 Injectable, cellular-scale optoelectronics with applications for wireless optogenetics.

  • Supplementary materials
  • 21 authors, University of Illinois at Urbana-Champaign, Tufts, China, Northwestern, Miami...
  • GaN blue and green LEDs fabricated on a flexible substrate with a stiff inserter.
    • The inserter is released in ~15 min by a dissolving silk fibroin adhesive.
    • Made of 250um thick SU-8 epoxy, reverse photocured on a glass slide.
  • GaN LEDS fabricated on a sapphire substrate & transfer printed via modified Karl-Suss mask aligner.
    • See supplemental materials for the intricate steps.
    • LEDs are 50um x 50um x 6.75um
  • Have integrated:
    • Temperature sensor (Pt serpentine resistor) / heater.
    • Inorganic photodetector (IPD)
      • Ultrathin silicon photodiode, 1.25um thick, 200 x 200um^2, made on an SOI wafer.
    • Pt extracellular recording electrode.
      • This is insulated with an additional 2um-thick layer of SU-8.
  • Layers are precisely aligned and assembled via 500nm layer of epoxy.
    • Layers made of 6um or 2.5um thick mylar (polyethylene terephthalate (PET))
    • Layers joined with SU-8.
    • Wiring patterned via lift-off.
  • Powered via RF scavenging at 910 MHz.
    • Appears to be simple, power in = light out; no data connection.
  • Tested vs control and fiber optic stimulation, staining for:
    • Tyrosine hydroxylase (makes l-DOPA)
    • c-fos, a neural activity marker
    • u-LEDs show significant activation.
  • Also tested for GFAP (astrocytes) and Iba1 (activated microglia); flexible & smaller devices had lower gliosis.
  • Next tested for behavior using a self-stimulation protocol; mice learned to self-stimulate to release DA.
  • Devices are somewhat reliable to 250 days!

{711}
ref: Gradinaru-2009.04 tags: Deisseroth DBS STN optical stimulation 6-OHDA optogenetics date: 05-10-2016 23:48 gmt revision:8

PMID-19299587[0] Optical Deconstruction of Parkinsonian Neural Circuitry.

  • Viviana Gradinaru, Murtaza Mogri, Kimberly R. Thompson, Jaimie M. Henderson, Karl Deisseroth
  • DA depletion of the SN leads to abnormal activity in the BG; HFS (>90Hz) of the STN has been found to be therapeutic, but the mechanism is imperfectly understood.
    • lesions of the BG can also be therapeutic.
  • Used channelrhodopsins (light-activated cation channels (+)) expressed under cell-type-specific promoters (transgenic animals). Also used halorhodopsins, which are light-activated chloride pumps (inhibition).
    • optogenetics allows simultaneous optical stimulation and electrical recording without artifact.
  • Made PD rats by injecting 6-hydroxydopamine unilaterally into the medial forebrain bundle of rats.
  • Then they injected an eNpHR (inhibitory) opsin vector targeting excitatory neurons (under control of the CaMKIIα promoter) into the STN, as identified stereotaxically & by firing pattern.
    • Electrical stimulation of this area alleviated rotational behavior (they were hemiparkinsonian rats), but optical inhibition of the STN did not.
  • Alternately, the glia in STN may be secreting molecules that modulate local circuit activity; it has been shown that glial-derived factor adenosine accumulates during DBS & seems to help with attenuation of tremor.
    • Tested this by activating glia with ChR2, which can pass small Ca2+ currents.
    • This worked: blue light halted firing in the STN; but, again, no behavioral trace of the silencing was found.
  • PD is characterized by pathological levels of beta oscillations in the BG; synchronizing the STN with the BG at gamma frequencies may ameliorate PD symptoms, while synchronization at beta frequencies worsens them -- see [1][2].
  • Therefore, they tried excitatory optical stimulation of excitatory STN neurons at the high frequencies used in DBS (90-130Hz).
    • HFS to STN failed, again, to produce any therapeutic effect!
  • Next they expressed channelrhodopsin only in projection neurons (Thy1::ChR2; not excitatory cells in the STN), and again did optotrode (optical stim, electrical record) recordings.
    • HFS of afferent fibers to STN shut down most of the local circuitry there, with some residual low-amplitude high frequency burstiness.
    • Observed marked effects with this treatment! Afferent HFS alleviated Parkinsonian symptoms, profoundly, with immediate reversal once the laser was turned off.
    • LFS worsened PD symptoms, in accord with electrical stimulation.
    • The Thy1::ChR2 labeling only affected excitatory projections; GABAergic projections from GPe were absent, and dopamine projections from the SNc were not affected either. However, M1 layer V projection neurons were strongly labeled.
      • M1 layer V neurons could be antidromically recruited by optical stimulation in the STN.
  • Selective M1 layer V HFS also alleviated PD symptoms; LFS had no effect; M2 (PMd/PMv?) LFS caused motor behavior.
  • They remind us that DBS can treat tremor, rigidity, and bradykinesia, but is ineffective at treating speech impairment, depression, and dementia.
  • Suggest that axon tract modulation could be a common theme in DBS (all the different types..), as activity in white matter represents the activity of larger regions compactly.
  • The result that the excitatory fibers of projections, mainly from the motor cortex, matter most in producing therapeutic effects of DBS is counterintuitive but important.
    • What do these neurons do normally, anyway? give a 'copy' of an action plan to the STN? What is their role in M1 / the BG? They should test with normal mice.

____References____

[0] Gradinaru V, Mogri M, Thompson KR, Henderson JM, Deisseroth K. Optical deconstruction of parkinsonian neural circuitry. Science (2009 Mar 19)
[1] Eusebio A, Brown P. Synchronisation in the beta frequency-band - the bad boy of parkinsonism or an innocent bystander? Exp Neurol (2009 Feb 20)
[2] Wingeier B, Tcheng T, Koop MM, Hill BC, Heit G, Bronte-Stewart HM. Intra-operative STN DBS attenuates the prominent beta rhythm in the STN in Parkinson's disease. Exp Neurol 197:1, 244-51 (2006 Jan)

{1334}
ref: -0 tags: micro LEDS Buzaki silicon neural probes optogenetics date: 04-18-2016 18:00 gmt revision:0

PMID-26627311 Monolithically Integrated μLEDs on Silicon Neural Probes for High-Resolution Optogenetic Studies in Behaving Animals.

  • 12 uLEDs and 32 recording sites integrated into one probe.
  • InGaN monolithically integrated LEDs.
    • Si has ~ 5x higher thermal conductivity than sapphire, allowing better heat dissipation.
    • Use quantum-well epitaxial layers, 460nm emission, 5nm Ni / 5nm Au current injection w/ 75% transmittance @ design wavelength.
      • Think the n/p GaN epitaxy is done by an outside company, NOVAGAN.
    • Efficiency near 80% -- small LEDs have fewer defects!
    • SiO2 + ALD Al2O3 passivation.
    • 70um wide, 30um thick shanks.

{1287}
ref: -0 tags: maleimide azobenzine glutamate photoswitch optogenetics date: 06-16-2014 21:19 gmt revision:0

PMID-16408092 Allosteric control of an ionotropic glutamate receptor with an optical switch

  • 2006
  • Uses an azobenzene (two benzene rings linked by an N=N double bond) as a photo-switchable allosteric tether that reversibly presents glutamate to an ion channel.
  • PMID-17521567 Remote control of neuronal activity with a light-gated glutamate receptor.
    • The molecule, in use.
  • Likely the molecule is harder to produce than channelrhodopsin or halorhodopsin, hence less used. Still, it's quite a technology.

{1283}
ref: -0 tags: optogenetics glutamate azobenzine date: 05-07-2014 19:51 gmt revision:0

PMID-17521567 Remote control of neuronal activity with a light-gated glutamate receptor.

  • Neuron 2007.
  • Azobenzenes undergo a cis-to-trans conformational change via illumination at one wavelength, and trans-to-cis via another. (neat!!)
  • This was used to create photo-controlled (on and off) glutamate channels.

{1257}
ref: -0 tags: Anna Roe optogenetics artificial dura monkeys intrinisic imaging date: 09-30-2013 19:08 gmt revision:3

PMID-23761700 Optogenetics through windows on the brain in nonhuman primates

  • technique paper.
  • placed over the visual cortex.
  • Injected virus through the artificial dura -- micropipette, not CVD.
  • Strong expression.
  • See also: PMID-19409264 (Boyden, 2009)

{1255}
ref: -0 tags: Disseroth Kreitzer parkinsons optogenetics D1 D2 6OHDA date: 09-30-2013 18:15 gmt revision:0

PMID-20613723 Regulation of parkinsonian motor behaviors by optogenetic control of basal ganglia circuitry

  • Kravitz AV, Freeze BS, Parker PR, Kay K, Thwin MT, Deisseroth K, Kreitzer AC.
  • Generated mouse lines with channelrhodopsin2, with Cre recombinase under control of regulatory elements for the dopamine D1 (direct) or D2 (indirect) receptor.
  • Optogenetic excitation of the indirect pathway elicited a parkinsonian state: increased freezing, bradykinesia, and decreased locomotor initiations.
  • Activation of the direct pathway decreased freezing and increased locomotion.
  • Then: 6-OHDA lesioning to deplete striatal dopamine.
  • Optogenetic activation of direct pathway (D1 Cre/loxp) neurons restored behavior to pre-lesion levels.
    • Hence, this seems like a good target for therapy.

{54}
ref: bookmark-0 tags: intrinsic evolution FPGA GPU optimization algorithm genetic date: 01-27-2013 22:27 gmt revision:1


  • http://evolutioninmaterio.com/ - using FPGAs in intrinsic evolution, i.e. the device is actually programmed and tested.
  • Adrian Thompson's homepage - there are many PDFs of his work there.
  • Parallel genetic algorithms on programmable graphics hardware
    • Basically deals with optimizing mutation and fitness evaluation using the parallel architecture of a GPU: larger populations can be evaluated at one time.
    • Does not concern the intrinsic evolution of algorithms on the GPU, as in Adrian Thompson's work.
    • Uses a linear congruential generator to produce random numbers.
    • Used a really simple problem: the Colville minimization problem, which requires searching only a four-dimensional space.
  • Cellular genetic algorithms and local search for 3-SAT problem on Graphic Hardware
    • Concerning SAT, i.e. satisfiability: "many practical problems, such as graph coloring, job-shop scheduling, and real-world scheduling can be represented as a SAT problem."
    • 3-SAT refers to the length of the clauses; length 3 is apparently very hard.
    • They use a combination of greedy search (flip the bit that increases the fitness the largest amount) and random-walk point mutations to keep the algorithm away from local minima.
    • Also use a cellular genetic algorithm (which works better on a GPU): selection is against the optimal neighbor, not the global optimum individual.
    • Only used a GeForce 6200 GPU, but it was still 5x faster than a P4 2.4GHz.
  • Evolution of a robot controller using cartesian genetic programming
    • Cartesian programming has many advantages over traditional tree-based methods - e.g. bloat-free evolution & faster evolution through neutral search.
    • Cartesian programming is characterized by its encoding of a graph as a string of integers that represent the functions and connections between graph nodes, and program inputs and outputs.
      • This encoding was developed in the course of evolving electronic circuits, e.g. above?
      • It can encode a non-connected graph; the genetic material that is not utilized is analogous to biological junk DNA.
    • Even in converged populations, small mutations can produce large changes in phenotypic behavior.
    • In this work he only uses directed acyclic graphs - there are no cycles & an organized flow of information.
    • Mentions automatically defined functions - what is this??
    • Used diffusion to define the fitness values of particular locations in the map: the fewer particles there eventually were in a grid location, the higher the fitness value of the robot that managed to get there.
  • Hardware evolution: on the nature of artificially evolved circuits - doctoral dissertation.
    • Because evolved circuits utilize the parasitic properties of devices, they have little tolerance for variation in component values. Reverse-engineering the evolved circuits to improve tolerance is not easy.
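The integer-string encoding described above can be made concrete with a tiny decode-and-evaluate sketch (generic Miller-style CGP; the function set and sizes here are assumptions): each node is a (function, source, source) triple, sources may point at inputs or earlier nodes, and unreferenced nodes are the "junk DNA".

```python
import operator

# Function lookup table for the genome's function genes.
FUNCS = [operator.add, operator.sub, operator.mul]

def evaluate(genome, inputs, n_nodes):
    """genome = n_nodes triples (func, src1, src2) + one output index.
    Sources may reference program inputs or any earlier node (a DAG)."""
    values = list(inputs)                       # inputs occupy indices 0..k-1
    for i in range(n_nodes):
        fn, s1, s2 = genome[3 * i], genome[3 * i + 1], genome[3 * i + 2]
        values.append(FUNCS[fn](values[s1], values[s2]))
    return values[genome[-1]]

# Two inputs (indices 0, 1) and three nodes (indices 2, 3, 4):
genome = [2, 0, 1,   # node 2 = in0 * in1
          0, 2, 0,   # node 3 = node2 + in0
          1, 3, 2,   # node 4 = node3 - node2  (never referenced: "junk DNA")
          3]         # program output = node 3
assert evaluate(genome, (3, 4), 3) == 3 * 4 + 3   # 15
```

A point mutation that changes the output gene from 3 to 4 suddenly expresses the dormant node - the mechanism behind "small mutations can produce large changes in phenotypic behavior".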

{795}
ref: work-0 tags: machine learning reinforcement genetic algorithms date: 10-26-2009 04:49 gmt revision:1

I just had dinner with Jesse, and we had a good/productive discussion/brainstorm about algorithms, learning, and neurobio. Two things worth repeating, one simpler than the other:

1. Gradient descent / Newton-Raphson like techniques should be tried with genetic algorithms. As of my current understanding, genetic algorithms perform a semi-directed search, randomly exploring the space of solutions with natural selection exerting a pressure to improve. What if you took the partial derivative of each of the organism's genes, and used that to direct mutation, rather than random selection of the mutated element? What if you looked before mating and crossover? Seems like this would speed up the algorithm greatly (though it might get it stuck in local minima, too). Not sure if this has been done before - if it has, edit this to indicate where!
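Idea (1) can be sketched directly: estimate ∂fitness/∂gene by finite differences and mutate the steepest gene uphill, instead of perturbing a random gene. The fitness function and all parameters below are assumed toys:

```python
import random

random.seed(1)

def fitness(genes):
    return -sum(g * g for g in genes)   # maximum at the origin

def directed_mutation(genes, step=0.1, h=1e-3):
    """Mutate the gene with the largest |d fitness / d gene|, uphill."""
    grads = []
    for i in range(len(genes)):
        bumped = list(genes)
        bumped[i] += h
        grads.append((fitness(bumped) - fitness(genes)) / h)
    i = max(range(len(genes)), key=lambda j: abs(grads[j]))
    child = list(genes)
    child[i] += step if grads[i] > 0 else -step
    return child

def random_mutation(genes, step=0.1):
    """The usual undirected alternative: perturb one random gene."""
    child = list(genes)
    child[random.randrange(len(child))] += random.uniform(-step, step)
    return child

def evolve(mutate, gens=200, pop=20, n=5):
    population = [[random.uniform(-2, 2) for _ in range(n)] for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=fitness, reverse=True)
        elite = population[:pop // 2]
        population = elite + [mutate(random.choice(elite))
                              for _ in range(pop - len(elite))]
    return max(fitness(p) for p in population)

best_directed = evolve(directed_mutation)
best_random = evolve(random_mutation)
```

Directed mutation costs extra fitness evaluations per child (the gradient estimate), and on rugged landscapes the gradient can drag the search into local minima - exactly the trade-off flagged above.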

2. Most supervised machine learning algorithms seem to rely on one single, externally applied objective function which they then attempt to optimize. (Rather, this is what convex programming is. Unsupervised learning of course exists, like PCA, ICA, and other means of learning correlative structure.) There are a great many ways to do optimization, but all are exactly that - optimization, search through a space for some set of weights / set of rules / decision tree that maximizes or minimizes an objective function. What Jesse and I have arrived at is that there is no real utility function in the world (Corollary #1: life is not an optimization problem (**)) -- we generate these utility functions, just as we generate our own behavior. What would happen if an algorithm iteratively estimated, checked, and cross-validated its utility function based on the small rewards actually found in the world / its synthetic environment? Would we get generative behavior greater than the complexity of the inputs? (Jesse and I also had an in-depth talk about information generation / destruction in non-linear systems.)

Put another way, perhaps part of learning is to structure internal valuation / utility functions to set up reinforcement learning problems, where the reinforcement signal comes according to satisfaction of sub-goals (= local utility functions), or the gradient signal comes by evaluating partial derivatives of actions w.r.t. those sub-goals. Creating these goals is natural but not always easy, which is one reason (of very many!) sports are so great - the utility function is clean, external, and immutable. The recursive, introspective creation of valuation / utility functions is what drives a lot of my internal monologues, mixed with a hefty dose of taking partial derivatives (see {780}) based on models of the world. (Stated this way, they seem so similar that perhaps they are the same thing?)

To my limited knowledge, there has been some recent work on the creation of sub-goals in reinforcement learning. One paper I read used a system to look for states that had a high ratio of ultimately rewarded paths to unrewarded paths, and selected these as subgoals (e.g. rewarded the agent when this state was reached.) I'm not talking about these sorts of sub-goals. In these systems, there is an ultimate goal that the researcher wants the agent to achieve, and it is the algorithm's task to make a policy for generating/selecting behavior. Rather, I'm interested in even more unstructured tasks - make a utility function, and a behavioral policy, based on small continuous (possibly irrelevant?) rewards in the environment.
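A heavily simplified sketch of that last idea - every specific below (the state space, the features, the reward structure) is an invented stand-in: the agent fits its own utility function to small observed rewards, then acts greedily on that learned utility rather than on an externally supplied objective.

```python
import random

random.seed(2)

N_STATES = 20
# Hidden environmental structure: reward rises gently with state index.
true_reward = {s: 0.1 * s + random.gauss(0, 0.05) for s in range(N_STATES)}

def features(s):
    x = s / N_STATES
    return [1.0, x, x * x]

def fit_utility(observations, lr=0.1, epochs=2000):
    """Least-squares fit of a linear-in-features utility via SGD."""
    w = [0.0, 0.0, 0.0]
    for _ in range(epochs):
        for s, r in observations:
            phi = features(s)
            err = sum(wi * p for wi, p in zip(w, phi)) - r
            w = [wi - lr * err * p for wi, p in zip(w, phi)]
    return w

# Exploration phase: observe small rewards in half the states, then fit.
observed = [(s, true_reward[s]) for s in random.sample(range(N_STATES), 10)]
w = fit_utility(observed)

def utility(s):
    return sum(wi * p for wi, p in zip(w, features(s)))

# "Behavior": head for the state the learned (not true) utility ranks highest.
best_state = max(range(N_STATES), key=utility)
```

Cross-validating the fitted utility on held-out observations, and refitting as new rewards arrive, would be the iterative estimate-check-refine loop described above.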

Why would I want to do this? The project I have in mind is a 'cognitive' PCB part placement / layout / routing algorithm to add to my pet project, kicadocaml, to finally get some people to use it (the attention economy :-). In the course of thinking about how to do this, I've realized that a substantial problem is simply determining which board layouts are good, and which are not. I have a rough aesthetic idea + some heuristics that I learned from my dad + some heuristics I've learned through practice of what is good layout and what is not - but how to code these up? And what if these aren't the best rules, anyway? If I just code up the rules I've internalized as utility functions, then the board layout will be pretty much as I do it - boring!

Well, I've stated my sub-goal in the form of a problem statement and some criteria to meet. Now, to go and search for a decent solution to it. (Have to keep this blog m8ta!) (Or, realistically, to go back and see if the problem statement is sensible).

(**) Corollary #2 - There is no god. nod, Dawkins.

{787}
ref: life-0 tags: IQ intelligence Flynn effect genetics facebook social utopia data machine learning date: 10-02-2009 14:19 gmt revision:1


My theory on the Flynn effect - human intelligence IS increasing, and this is NOT stopping. Look at it from an ML perspective: there is more free time to get data, the data (and world) has almost unlimited complexity, the data is much higher quality and much easier to get (the vast internet & world! (travel)), and there is (hopefully) more fuel to process that data (food!). Therefore, we are getting more complex, sophisticated, and intelligent. Also, the idea that less-intelligent people having more kids will somehow 'dilute' our genetic IQ is bullshit - intelligence is mostly a product of environment and education, and is tailored to the tasks we need to do; it is not (or only very weakly, except at the extremes) tied to the wetware. Besides, things are changing far too fast for genetics to follow.

Regarding social media like facebook and others, you could posit that social intelligence is increasing, along similar arguments to the above: social data is seemingly more prevalent, more available, and people spend more time examining it. Yet this feels like a weaker argument, as people have always been socializing, talking, etc., and I'm not sure if any of these social media have really increased it. Regardless, people enjoy it - that's the important part.

My utopia for today :-)