m8ta
{1409}
ref: -0 tags: coevolution fitness prediction schmidt genetic algorithm date: 09-14-2018 01:34 gmt revision:8

Coevolution of Fitness Predictors

  • Michael D. Schmidt and Hod Lipson, Member, IEEE
  • Fitness prediction is a technique that replaces fitness evaluation in evolutionary algorithms with a lightweight approximation that adapts with the solution population.
    • Predictors cannot approximate the full fitness landscape, but they can shift focus as evolution proceeds.
    • Aka local caching.
    • Or adversarial techniques.
  • Instead use coevolution, with three populations:
    • 1) solutions to the original problem, evaluated using only fitness predictors;
    • 2) fitness predictors of the problem; and
    • 3) fitness trainers, whose exact fitness is used to train predictors.
      • Trainers are the solutions whose predicted fitness varies most across the predictors, and predictors are trained on this subset.
  • Lightweight fitness predictors evolve faster than the solution population, so the authors cap the computational effort spent on predictor evaluation at 5% of the overall effort.
    • These fitness predictors are basically an array of integers which index the full training set -- very simple and linear. Maybe boring, but the simplest solution that works ...
    • They only sample 8 training examples for even complex 30-node solution functions (!!).
    • I guess, because the information introduced into the solution set is relatively small per generation, it makes little sense to over-sample or over-specify this; all that matters is that, on average, it's directionally correct and unbiased.
  • Used deterministic crowding selection as the evolutionary algorithm.
    • Similar individuals have to compete in tournaments for space.
  • Showed that the coevolution algorithm is capable of inferring even highly complex many-term functions.
    • And, it uses function evaluations more efficiently than the 'exact' (each solution evaluated exactly) algorithm.
  • Coevolution algorithm seems to induce less 'bloat' in the complexity of the solutions.
  • See also {842}
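The three-population loop above can be sketched in Python. This is a minimal illustration, not the authors' implementation: the toy quadratic-fitting problem, population sizes, and mutation scheme are all invented, and exact fitness values would be cached in practice. As in the paper, a predictor is just a small array of indices into the training set.

```python
import random

random.seed(0)

# Toy problem: recover (a, b, c) of y = a*x^2 + b*x + c from data.
TRAIN_X = [i / 10 for i in range(-20, 21)]
TRUE = (2.0, -1.0, 0.5)
TRAIN_Y = [TRUE[0] * x * x + TRUE[1] * x + TRUE[2] for x in TRAIN_X]

def exact_fitness(sol):
    """Full evaluation over every training point (the expensive call)."""
    a, b, c = sol
    return -sum((a * x * x + b * x + c - y) ** 2
                for x, y in zip(TRAIN_X, TRAIN_Y)) / len(TRAIN_X)

def predicted_fitness(sol, pred):
    """Cheap evaluation on only the 8 points the predictor indexes."""
    a, b, c = sol
    return -sum((a * TRAIN_X[i] ** 2 + b * TRAIN_X[i] + c - TRAIN_Y[i]) ** 2
                for i in pred) / len(pred)

def predictor_fitness(pred, trainers):
    """A predictor is fit if it reproduces exact fitness on the trainers."""
    return -sum((predicted_fitness(t, pred) - exact_fitness(t)) ** 2
                for t in trainers)

def mutate_solution(s):
    return tuple(g + random.gauss(0, 0.1) for g in s)

def mutate_predictor(p):
    q = list(p)
    q[random.randrange(len(q))] = random.randrange(len(TRAIN_X))
    return q

solutions  = [tuple(random.uniform(-3, 3) for _ in range(3)) for _ in range(20)]
predictors = [[random.randrange(len(TRAIN_X)) for _ in range(8)] for _ in range(8)]
trainers   = solutions[:4]        # later chosen by variance across predictors

for gen in range(200):
    best_pred = max(predictors, key=lambda p: predictor_fitness(p, trainers))
    # 1) evolve solutions, scored only by the cheap predictor
    solutions = sorted(solutions, key=lambda s: predicted_fitness(s, best_pred),
                       reverse=True)[:10]
    solutions += [mutate_solution(s) for s in solutions]
    # 3) trainers = solutions the predictors disagree on most (high variance)
    def variance(s):
        f = [predicted_fitness(s, p) for p in predictors]
        m = sum(f) / len(f)
        return sum((v - m) ** 2 for v in f)
    trainers = sorted(solutions, key=variance, reverse=True)[:4]
    # 2) evolve predictors against the exact fitness of the trainers
    predictors = sorted(predictors, key=lambda p: predictor_fitness(p, trainers),
                        reverse=True)[:4]
    predictors += [mutate_predictor(p) for p in predictors]

best = max(solutions, key=exact_fitness)
```

Note the asymmetry: exact fitness is only ever computed for the four trainers, which is where the 5% effort cap comes from.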
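Deterministic crowding itself is simple to state in code. A minimal sketch (the one-max demo problem and operators are invented for illustration): each child competes only against the parent it most resembles, so similar individuals fight for the same slot.

```python
import random

def deterministic_crowding(pop, fitness, distance, crossover, mutate, rng):
    """One generation: pair parents, produce children, and let each child
    compete only with its most similar parent for a place in the population."""
    rng.shuffle(pop)
    nxt = []
    for p1, p2 in zip(pop[::2], pop[1::2]):
        c1, c2 = crossover(p1, p2, rng)
        c1, c2 = mutate(c1, rng), mutate(c2, rng)
        # match children to parents so total parent-child distance is minimal
        if distance(p1, c1) + distance(p2, c2) <= distance(p1, c2) + distance(p2, c1):
            pairs = [(p1, c1), (p2, c2)]
        else:
            pairs = [(p1, c2), (p2, c1)]
        for parent, child in pairs:
            nxt.append(child if fitness(child) >= fitness(parent) else parent)
    return nxt

# demo: one-max on 20-bit strings
rng = random.Random(1)
pop = [[rng.randint(0, 1) for _ in range(20)] for _ in range(20)]
fit = lambda s: sum(s)
dist = lambda a, b: sum(x != y for x, y in zip(a, b))   # Hamming distance

def xover(a, b, rng):
    k = rng.randrange(1, len(a))
    return a[:k] + b[k:], b[:k] + a[k:]

def mut(s, rng):
    s = s[:]
    if rng.random() < 0.5:
        s[rng.randrange(len(s))] ^= 1
    return s

start = max(map(fit, pop))
for _ in range(60):
    pop = deterministic_crowding(pop, fit, dist, xover, mut, rng)
```

Since a parent is only replaced by a child at least as fit, the best fitness in the population never decreases, while the niching keeps diverse individuals from displacing each other.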

{825}
ref: work-0 tags: no free lunch wolpert coevolution date: 07-19-2010 12:54 gmt revision:2

http://www.no-free-lunch.org/

  • Just discovered this. It makes perfect sense - bias-free learning is 'futile'; learning must be characterized by its biases, which enable faster or better results in particular problem domains.
  • Equivalently: any two algorithms are equivalent when their performance is averaged across all possible problems. (This is not as strong as it sounds, as most problems will never be encountered).
  • Wolpert 1996 provides an excellent geometric interpretation of this: the quality of the search/optimization algorithm within a particular domain is proportional to the inner product of its expected search stream with the actual (expected?) probability distribution of the data.
  • However! with coevolutionary algorithms, there can be a free lunch - "in coevolution some algorithms have better performance than other algorithms, averaged across all possible problems." Wolpert 2005
    • claims that this does not (??) hold in biological evolution, where there is no champion. Yet biology seems all about co-evolution.
    • coevolution of a backgammon player details how it may be coevolution + the structure of the backgammon game, not reinforcement learning alone, which led Tesauro to his championship-level player. Specifically, coevolutionary algorithms tend to get stuck in local minima - where both contestants play mediocre games and draw - but this is not possible in backgammon: there is only one winner, and the games must terminate eventually.
      • These authors introduce a very interesting twist to improve coevolutionary bootstrapping: Firstly, the games are played in pairs, with the order of play reversed and the same random seed used to generate the dice rolls for both games. This washes out some of the unfairness due to the dice rolls when the two networks are very close - in particular, if they were identical, the result would always be one win each.
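The averaging claim above can be checked by brute force on a tiny domain (a toy demonstration, not Wolpert's proof): enumerate every function from a 3-point domain to {0, 1} and compare two deterministic, non-revisiting searchers that differ only in visit order.

```python
from itertools import product

# Domain X = {0,1,2}, codomain Y = {0,1}: all 2^3 = 8 possible problems.
X, Y = [0, 1, 2], [0, 1]
all_functions = [dict(zip(X, ys)) for ys in product(Y, repeat=len(X))]

def best_after(f, visit_order, k):
    """Best value found after k distinct evaluations in the given order."""
    return max(f[x] for x in visit_order[:k])

# two deterministic, non-revisiting searchers with different visit orders
alg_a, alg_b = [0, 1, 2], [2, 0, 1]

avg_a = [sum(best_after(f, alg_a, k) for f in all_functions) / 8 for k in (1, 2, 3)]
avg_b = [sum(best_after(f, alg_b, k) for f in all_functions) / 8 for k in (1, 2, 3)]
# avg_a == avg_b: averaged over every possible problem, the two are identical
```

Either algorithm looks better on some subset of the eight functions, but over the full set their average best-so-far curves coincide exactly.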
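The paired-game trick is an instance of common random numbers. A toy sketch (the dice game here is invented and only mimics the seed-sharing structure, not backgammon): with identical strategies and the same seed, each side wins exactly one game of the pair, as the note observes.

```python
import random

def play(strategy_first, strategy_second, seed):
    """Toy two-player dice game: the same seed reproduces the same dice
    sequence; the first mover gets the even-indexed rolls."""
    rng = random.Random(seed)
    rolls = [rng.random() for _ in range(10)]
    score_first = strategy_first(rolls[0::2])
    score_second = strategy_second(rolls[1::2])
    return 0 if score_first > score_second else 1   # index of the winner

def paired_match(strat_a, strat_b, seed):
    """Play twice with the same dice, order reversed, to wash out dice luck."""
    g1 = play(strat_a, strat_b, seed)   # A moves first
    g2 = play(strat_b, strat_a, seed)   # B moves first, same dice
    wins_a = (g1 == 0) + (g2 == 1)
    return wins_a, 2 - wins_a

greedy = lambda rolls: sum(rolls)       # placeholder strategy
```

Reversing the order while reusing the seed means any edge from lucky rolls is handed to each player once, so score differences between close strategies reflect skill rather than dice.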