m8ta
{1169}
ref: -0 tags: artificial intelligence projection episodic memory reinforcement learning date: 08-15-2012 19:16 gmt revision:0 [head]

Projective simulation for artificial intelligence

  • Agent learns based on memory 'clips', which are combined using some pseudo-Bayesian method to trigger actions (see the sketch after this list).
    • These clips are learned from experience / observation.
    • Quote: "..more complex behavior seems to arise when an agent is able to “think for a while” before it “decides what to do next.” This means the agent somehow evaluates a given situation in the light of previous experience, whereby the type of evaluation is different from the execution of a simple reflex circuit"
    • Quote: "Learning is achieved by evaluating past experience, for example by simple reinforcement learning".
  • The forward exploration of learned action-stimulus patterns is seemingly a general problem-solving strategy (my generalization).
  • Pretty simple task:
    • Robot can only move left / right; shows a symbol to indicate which way it (might?) be going.
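
A minimal sketch of my reading of the projective-simulation idea (not the authors' code): memory clips form a network, a percept triggers a random walk over the clips with hop probabilities given by learned edge weights (h-values), the walk ends at an action clip, and reward strengthens the edges just traversed. The two-percept / two-action task, the damping rate GAMMA, and the single-hop simplification are my assumptions for illustration.

import random

PERCEPTS = ('symbol_left', 'symbol_right')   # symbol the robot displays
ACTIONS = ('move_left', 'move_right')

# h-values: edge weights from percept clips to action clips, initially uniform
h = {p: {a: 1.0 for a in ACTIONS} for p in PERCEPTS}
GAMMA = 0.01   # damping / forgetting rate (assumed value)

def act(percept):
    """Single-hop random walk: jump from the percept clip to an action clip,
    with probability proportional to the h-value of the connecting edge."""
    weights = h[percept]
    r = random.random() * sum(weights.values())
    for a, w in weights.items():
        r -= w
        if r <= 0:
            return a
    return a   # floating-point fallback

def learn(percept, action, reward):
    """Damp all h-values back toward 1, then reinforce the edge just used."""
    for p in h:
        for a in h[p]:
            h[p][a] -= GAMMA * (h[p][a] - 1.0)
    h[percept][action] += reward

# toy task: reward 1 if the movement matches the displayed symbol
for step in range(2000):
    percept = random.choice(PERCEPTS)
    action = act(percept)
    reward = 1.0 if action.endswith(percept.split('_')[1]) else 0.0
    learn(percept, action, reward)

print(h)   # the matching edges should now dominate

The full model allows multi-hop walks through intermediate clips and the composition of new clips from old ones; this sketch only captures the reinforced-random-walk core.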

{858}
ref: -0 tags: artificial intelligence machine learning education john toobey leda cosmides date: 12-13-2010 03:43 gmt revision:3 [2] [1] [0] [head]

Notes & responses to evolutionary psychologists John Tooby and Leda Cosmides - authors of The Adapted Mind - and their essay in This Will Change Everything

  • quote: Currently the most keenly awaited technological development is an all-purpose artificial intelligence-perhaps even an intelligence that would revise itself and grow at an ever-accelerating rate until it enacts millennial transformations. [...] Yet somehow this goal, like the horizon, keeps retreating as fast as it is approached.
  • AI's wrong turn was assuming that the best methods for reasoning and thinking are those that can be applied successfully to any problem domain.
    • But of course it must be possible - we are here, and we did evolve!
    • My opinion: the limit is codifying abstract, assumed, and ambiguous information into program function - e.g. embodying the world.
  • Their idea: intelligences use a number of domain-specific, specialized "hacks", that work for limited tasks; general intelligence appears as a result of the combination of all of these.
    • "Our mental programs can be fiendishly well engineered to solve some problems because they are not limited to using only those strategies that can be applied to all problems."
    • Given the content of the wikipedia page (above), it seems that they have latched onto this particular idea for at least 18 years. Strange how these sorts of things work.
  • Having accurate models of human intelligence would achieve two things:
    • It would enable humans to communicate more effectively with machines via shared knowledge and reasoning.
    • (me:) The AI would be enhanced by the tricks and hacks that evolution took millions of years, billions of individuals, and 10e?? (non-discrete) interactions between individuals and the environment to find. This constitutes an enormous store of information; to overlook it necessitates (probably - there may be serious shortcuts to biological evolution) re-simulating all of the steps that it took to get here. We exist as a cached output of the evolutionary algorithm; recomputing this particular function is energetically impossible.
  • "The long term ambition [of evolutionary psychology] is to develop a model of human nature as precise as if we had the engineering specifications for the control systems of a robot.
  • "Humanity will continue to be blind slaves to the programs evolution has built into our brains until we drag them into the light. Ordinarily, we inhabit only the versions of reality that they spontaneously construct for us -- the surfaces of things. Because we are unaware that we are in a theater, with our roles and our lines largely written for us by our mental programs, we are credulously swept up in these plays (such as the genocidal drama of us versus them). Endless chain reactions among these programs leave us the victims of history -- embedded in war and oppression, enveloped in mass delusions and cultural epidemics, mired in endless negative-sum conflict \\ If we understood these programs and the coordinated hallucinations they orchestrate in our minds, our species could awaken from the roles these programs assign to us. Yet this cannot happen if knowledge -- like quantum mechanics -- remains forever locked up in the minds of a few specialists, walled off by the years of study required to master it. " Exactly. Well said.
    • The solution, then: much much better education; education that utilizes the best knowledge about transferring knowledge.
    • The authors propose video games; this is already being tested, see {859}

{838}
ref: -0 tags: meta learning Artificial intelligence competent evolutionary programming Moshe Looks MOSES date: 08-07-2010 16:30 gmt revision:6 [5] [4] [3] [2] [1] [0] [head]

Competent Program Evolution

  • An excellent start: a good description + meta-description / review of the existing literature.
  • He thinks about things in a slightly different way - he separates what I call solutions and objective functions into "post-" and "pre-representational levels" (respectively).
  • The thesis focuses on post-representational search/optimization, not pre-representational (though I believe the two should meet in the middle - e.g. pre-representational levels / objective functions tuned iteratively during post-representational solution creation; this is what a human would do!).
  • The primary difficulty in competent program evolution is the intense non-decomposability of programs: every variable, constant, and branch affects the execution of every other little bit.
  • Competent program creation is possible - humans create programs significantly shorter than lookup tables - hence it should be possible to make a program to do the same job.
  • One solution to the problem is representation - formulate the program creation as a set of 'knobs' that can be twiddled (here he means both gradient-descent partial-derivative optimization and simplex or heuristic one-dimensional probabilistic search, of which there are many good algorithms.)
  • pp 27: outline of his MOSES program - read it for yourself.
  • The representation step above "explicitly addresses the underlying (semantic) structure of program space independently of the search for any kind of modularity or problem decomposition."
    • In MOSES, optimization does not operate directly on program space, but rather on subspaces defined by the representation-building process. These subspaces may be considered as being defined by templates assigning values to some of the underlying dimensions (e.g., they restrict the size and shape of any resulting trees).
  • In chapter 3 he examines the properties of the boolean programming space, which is claimed to be a good model of larger/more complicated programming spaces in that:
    • Simpler functions are much more heavily sampled - e.g. he generated 1e6 samples of 100-term boolean functions, then reduced them to minimal form using standard operators. The vast majority of the resultant minimum-length (compressed) functions were simple - tautologies or functions of a few terms (see the sketch after this entry's notes).
    • A corollary is that simply increasing syntactic sample length is insufficient for increasing program behavioral complexity / variety.
      • Actually, as random program length increases, the percentage with interesting behaviors decreases due to the structure of the minimum length function distribution.
  • Also tests random perturbations to large boolean formulae (variable replacement/removal, operator swapping) - ~90% of these do nothing.
    • These randomly perturbed programs show a similar structure to above: most of them have very similar behavior to their neighbors; only a few have unique behaviors. makes sense.
    • Run the other way: "syntactic space of large programs is nearly uniform with respect to semantic distance." Semantically similar (boolean) programs are not grouped together.
  • The results seem somewhat of a let-down: the program does not scale to even moderately large problem spaces. No loops, only functions with conditional evaluation - Jacques Pitrat's results are far more impressive. {815}
    • Seems that, still, there were a lot of meta-knobs to tweak in each implementation. Perhaps this is always the case?
  • My thought: perhaps you can run the optimization not on program representations, but rather on program codepaths. He claims that one problem is that behavior is loosely or at worst chaotically related to program structure - which is true - hence optimization on the program itself is very difficult. This is why Moshe runs optimization on the 'knobs' of a representational structure.
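
A rough sketch (my own toy version, not Looks' code) of the chapter-3 style sampling experiment: generate random boolean formulae over a few variables, take each formula's full truth table as its 'behavior', and see how concentrated the behaviors are. I skip the reduction-to-minimal-form step and just look at behaviors directly; the number of variables, formula size, and sample count are assumed values chosen for speed.

import itertools
import random
from collections import Counter

N_VARS = 4          # number of boolean inputs
N_LEAVES = 100      # roughly "100-term" random formulae
N_SAMPLES = 10000   # the thesis uses ~1e6; 1e4 is enough to see the effect

def random_formula(n_leaves):
    """Random binary tree of and/or over (possibly negated) variables."""
    if n_leaves == 1:
        leaf = ('var', random.randrange(N_VARS))
        return ('not', leaf) if random.random() < 0.5 else leaf
    k = random.randrange(1, n_leaves)
    return (random.choice(('and', 'or')),
            random_formula(k), random_formula(n_leaves - k))

def evaluate(f, x):
    tag = f[0]
    if tag == 'var':
        return x[f[1]]
    if tag == 'not':
        return not evaluate(f[1], x)
    a, b = evaluate(f[1], x), evaluate(f[2], x)
    return (a and b) if tag == 'and' else (a or b)

def behavior(f):
    """Truth table over all 2^N_VARS inputs -- the semantic 'behavior' of f."""
    return tuple(evaluate(f, x)
                 for x in itertools.product((False, True), repeat=N_VARS))

counts = Counter(behavior(random_formula(N_LEAVES)) for _ in range(N_SAMPLES))
n_rows = 2 ** N_VARS
constant = counts[tuple([True] * n_rows)] + counts[tuple([False] * n_rows)]
print("distinct behaviors:", len(counts), "of", 2 ** n_rows, "possible")
print("fraction constant (tautology/contradiction):", constant / N_SAMPLES)

If the syntactic-to-semantic concentration he describes holds, most of the large random formulae should land on constant or otherwise very simple behaviors, and raising N_LEAVES should not increase the behavioral variety.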

{837}
ref: -0 tags: artificial intelligence Hutters theorem date: 08-05-2010 05:06 gmt revision:0 [head]

Hutter's Theorem: there is a single algorithm that, for every well-defined problem and asymptotically large enough inputs, is within a factor of 5 in time of the fastest algorithm for that particular problem. http://www.hutter1.net/ai/pfastprg.htm
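
Roughly (my paraphrase of the linked paper, glossing over a term for evaluating the provable time bound): for every provably correct algorithm p with a provable time bound, Hutter's algorithm M satisfies

  time_M(x) <= 5 * time_p(x) + c_p

where c_p does not depend on the input x but can be astronomically large - which is why the result is of theoretical rather than practical interest.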

{787}
ref: life-0 tags: IQ intelligence Flynn effect genetics facebook social utopia data machine learning date: 10-02-2009 14:19 gmt revision:1 [0] [head]

src

My theory on the Flynn effect - human intelligence IS increasing, and this is NOT stopping. Look at it from an ML perspective: there is more free time in which to gather data, the data (and the world) has almost unlimited complexity, the data is of much higher quality and much easier to get (the vast internet & world! (travel)), and there is (hopefully) more fuel with which to process that data (food!). Therefore, we are getting more complex, sophisticated, and intelligent. Also, the idea that less-intelligent people having more kids will somehow 'dilute' our genetic IQ is bullshit - intelligence is mostly a product of environment and education, and is tailored to the tasks we need to do; it is not (or only very weakly, except at the extremes) tied to the wetware. Besides, things are changing far too fast for genetics to follow.

Regarding social media like Facebook and others, you could posit that social intelligence is increasing, along arguments similar to the above: social data is seemingly more prevalent, more available, and people spend more time examining it. Yet this feels like a weaker argument, as people have always been socializing, talking, etc., and I'm not sure whether any of these social media have really increased that. Regardless, people enjoy it - that's the important part.

My utopia for today :-)