optimization – Black-box Optimization Algorithms

The two base classes

class pybrain.optimization.optimizer.BlackBoxOptimizer(evaluator=None, initEvaluable=None, **kwargs)

The super-class for learning algorithms that treat the problem as a black box. At each step they change the policy, and get a fitness value by invoking the FitnessEvaluator (provided as first argument upon initialization).

Evaluable objects can be lists or arrays of continuous values (optionally wrapped in a ParameterContainer) or subclasses of Evolvable (which implement the methods it defines).

__init__(evaluator=None, initEvaluable=None, **kwargs)
The evaluator is any callable object (e.g. a lambda function). Algorithm parameters can be set here if provided as keyword arguments.
setEvaluator(evaluator, initEvaluable=None)
If not provided upon construction, the objective function can be given through this method. If necessary, also provide an initial evaluable.
learn(additionalLearningSteps=None)
The main loop that does the learning.
minimize
Minimize cost or maximize fitness? By default, all functions are maximized.
maxEvaluations
Stopping criterion based on the number of evaluations.
maxLearningSteps
Stopping criterion based on the number of learning steps.
desiredEvaluation
Is there a known value of sufficient fitness?
verbose
Provide console output during learning.
storeAllEvaluations
Store all evaluations (in the ._allEvaluations list)?
storeAllEvaluated
Store all evaluated instances (in the ._allEvaluated list)?
numParameters
Dimension of the search space, if applicable.
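The interaction of these pieces — a callable evaluator, a stopping budget, a known-sufficient fitness, and a record of all evaluations — can be sketched with a toy maximization loop. This is an illustration only, not PyBrain's implementation; the function name and the perturbation size are made up:

```python
import random

def black_box_maximize(evaluator, x0, maxEvaluations=2000, desiredEvaluation=None):
    """Toy black-box loop: perturb the best point so far, evaluate, keep
    improvements; stop on an evaluation budget or a sufficient fitness."""
    best = list(x0)
    bestFitness = evaluator(best)
    allEvaluations = [bestFitness]              # analogous to ._allEvaluations
    while len(allEvaluations) < maxEvaluations:
        candidate = [x + random.gauss(0, 0.1) for x in best]
        fitness = evaluator(candidate)
        allEvaluations.append(fitness)
        if fitness > bestFitness:               # maximization by default
            best, bestFitness = candidate, fitness
        if desiredEvaluation is not None and bestFitness >= desiredEvaluation:
            break                               # known sufficient fitness reached
    return best, bestFitness

random.seed(1)
best, fit = black_box_maximize(lambda p: -(p[0] - 3.0) ** 2, [0.0])
```

The evaluator here is a lambda, matching the note above that any callable object works.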
class pybrain.optimization.optimizer.ContinuousOptimizer(evaluator=None, initEvaluable=None, **kwargs)

Bases: pybrain.optimization.optimizer.BlackBoxOptimizer

A more restricted class of black-box optimization algorithms that assume the parameters to be an array of continuous values (which can be wrapped in a ParameterContainer).

__init__(evaluator=None, initEvaluable=None, **kwargs)
The evaluator is any callable object (e.g. a lambda function). Algorithm parameters can be set here if provided as keyword arguments.

General Black-box optimizers

class pybrain.optimization.RandomSearch(evaluator=None, initEvaluable=None, **kwargs)
Every point is chosen randomly, independently of all previous ones.
class pybrain.optimization.HillClimber(evaluator=None, initEvaluable=None, **kwargs)
The simplest kind of stochastic search: hill-climbing in the fitness landscape.
class pybrain.optimization.StochasticHillClimber(evaluator=None, initEvaluable=None, **kwargs)

Stochastic hill-climbing always accepts a better point, but may also move to a worse one, with a probability that decreases with the size of the fitness drop (and depends on a temperature parameter).

The larger the temperature, the more explorative (less greedy) it behaves.
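The temperature's effect can be illustrated with a Metropolis-style acceptance rule, a common choice for stochastic hill-climbing (a sketch of the general scheme; the exact formula used by this class may differ):

```python
import math

def acceptance_probability(oldFitness, newFitness, temperature):
    """Metropolis-style rule for maximization: always accept improvements;
    accept a fitness drop of size delta with probability exp(-delta / T)."""
    if newFitness >= oldFitness:
        return 1.0
    delta = oldFitness - newFitness
    return math.exp(-delta / temperature)

# The same fitness drop is accepted far more often at high temperature.
p_cold = acceptance_probability(10.0, 8.0, temperature=0.5)   # exp(-4)
p_hot = acceptance_probability(10.0, 8.0, temperature=5.0)    # exp(-0.4)
```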

Continuous optimizers

class pybrain.optimization.NelderMead(evaluator=None, initEvaluable=None, **kwargs)
Do the optimization using a simple wrapper for scipy’s fmin.
class pybrain.optimization.CMAES(evaluator=None, initEvaluable=None, **kwargs)
CMA-ES: Evolution Strategy with Covariance Matrix Adaptation for nonlinear function minimization. This code is a close transcription of the original MATLAB implementation.
class pybrain.optimization.OriginalNES(evaluator=None, initEvaluable=None, **kwargs)
Reference implementation of the original Natural Evolution Strategies algorithm (CEC-2008).
class pybrain.optimization.ExactNES(evaluator=None, initEvaluable=None, **kwargs)

A new version of NES, using the exact instead of the approximate Fisher Information Matrix, as well as a number of other improvements (GECCO 2009).

baselineType
Type of baseline. The most robust one is also the default.
class pybrain.optimization.FEM(evaluator=None, initEvaluable=None, **kwargs)
Fitness Expectation-Maximization (PPSN 2008).

Finite difference methods

class pybrain.optimization.FiniteDifferences(evaluator=None, initEvaluable=None, **kwargs)

Basic finite difference method.

perturbation()
Produce a parameter perturbation.
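The idea behind finite differences can be sketched as a central-difference gradient estimate over the parameters (illustrative only; the class above estimates the gradient from random perturbations rather than one coordinate at a time):

```python
def fd_gradient(f, x, epsilon=1e-5):
    """Central-difference estimate of the gradient of f at point x."""
    grad = []
    for i in range(len(x)):
        up, down = list(x), list(x)
        up[i] += epsilon                        # step forward in coordinate i
        down[i] -= epsilon                      # and backward
        grad.append((f(up) - f(down)) / (2 * epsilon))
    return grad

# Gradient of f(x, y) = x**2 + 3*y at (2, 1) is (4, 3).
g = fd_gradient(lambda p: p[0] ** 2 + 3 * p[1], [2.0, 1.0])
```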
class pybrain.optimization.PGPE(evaluator=None, initEvaluable=None, **kwargs)

Bases: pybrain.optimization.finitedifference.fd.FiniteDifferences

Policy Gradients with Parameter Exploration (ICANN 2008).

epsilon
Initial value of sigmas.
exploration
Exploration type.
perturbation()
Generate a difference vector with the given standard deviations.
learningRateSigma
Specific settings for sigma updates.
wDecay
Lasso weight decay (0 to deactivate).
class pybrain.optimization.SimpleSPSA(evaluator=None, initEvaluable=None, **kwargs)

Bases: pybrain.optimization.finitedifference.fd.FiniteDifferences

Simultaneous Perturbation Stochastic Approximation.

This class follows the general SPSA scheme, but uses the likelihood gradient and a simpler exploration decay.
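SPSA's key trick is perturbing all parameters simultaneously, so one gradient estimate costs only two function evaluations regardless of dimension. A sketch of that estimate (the general scheme, not this class's exact update):

```python
import random

def spsa_gradient(f, x, c=0.1):
    """One SPSA estimate: perturb all coordinates at once along a random
    +-1 direction, using only two evaluations of f."""
    delta = [random.choice([-1.0, 1.0]) for _ in x]
    up = [xi + c * di for xi, di in zip(x, delta)]
    down = [xi - c * di for xi, di in zip(x, delta)]
    diff = (f(up) - f(down)) / (2 * c)
    return [diff / di for di in delta]

random.seed(0)
# Averaging many estimates for f(x, y) = x**2 + y**2 at (1, 2)
# approaches the true gradient (2, 4).
n = 2000
sums = [0.0, 0.0]
for _ in range(n):
    g = spsa_gradient(lambda p: p[0] ** 2 + p[1] ** 2, [1.0, 2.0])
    sums = [s + gi for s, gi in zip(sums, g)]
avg = [s / n for s in sums]
```

A single estimate is noisy; it is unbiased, which is why the average converges to the true gradient.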


class pybrain.optimization.ParticleSwarmOptimizer(evaluator=None, initEvaluable=None, **kwargs)

Particle Swarm Optimization

size determines the number of particles.

boundaries should be a list of (min, max) pairs, one per dimension of the vector to be optimized (default: ±10). Particles will be initialized with a position drawn uniformly from that interval.

memory indicates how much the velocity of a particle is affected by its previous best position.

sociality indicates how much the velocity of a particle is affected by its neighbours' best position.

inertia is a damping factor.

Return the particle with the best fitness from a list of particles.
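The roles of inertia, memory, and sociality can be sketched in a minimal one-dimensional swarm (an illustration, not PyBrain's implementation; the coefficient values are made up):

```python
import random

def particle_swarm(f, size=10, bounds=(-10.0, 10.0), iterations=100,
                   inertia=0.7, memory=1.5, sociality=1.5):
    """Tiny 1-D particle swarm maximizing f. Each velocity update combines
    the damped previous velocity (inertia), a pull toward the particle's own
    best position (memory), and a pull toward the swarm's best (sociality)."""
    lo, hi = bounds
    xs = [random.uniform(lo, hi) for _ in range(size)]
    vs = [0.0] * size
    pbest = list(xs)                   # each particle's best position so far
    pfit = [f(x) for x in xs]
    gi = max(range(size), key=lambda i: pfit[i])
    gbest, gfit = pbest[gi], pfit[gi]  # swarm's best position and fitness
    for _ in range(iterations):
        for i in range(size):
            vs[i] = (inertia * vs[i]
                     + memory * random.random() * (pbest[i] - xs[i])
                     + sociality * random.random() * (gbest - xs[i]))
            xs[i] += vs[i]
            fit = f(xs[i])
            if fit > pfit[i]:
                pbest[i], pfit[i] = xs[i], fit
                if fit > gfit:
                    gbest, gfit = xs[i], fit
    return gbest, gfit

random.seed(2)
gbest, gfit = particle_swarm(lambda x: -(x - 2.0) ** 2)
```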
class pybrain.optimization.GA(evaluator=None, initEvaluable=None, **kwargs)

Standard Genetic Algorithm.

crossOver(parents, nbChildren)
Generate a number of children by doing 1-point cross-over.
mutated(indiv)
Mutate some genes of the given individual.
mutationProb
Mutation probability.
produceOffspring()
Produce offspring by selection, mutation and crossover.
select()
Select some of the individuals of the population, taking into account their fitnesses.
Returns: list of selected parents.
selectionSize
The number of parents selected from the current population.
topProportion
Selection proportion.
tournament
Selection scheme.
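The two variation operators above can be sketched as follows (illustrative; the genome representation and mutation magnitude here are made up, while PyBrain operates on its own parameter arrays):

```python
import random

def one_point_crossover(parent1, parent2):
    """Cut both parents at the same random point and splice them into a child."""
    point = random.randint(1, len(parent1) - 1)   # cut strictly inside the genome
    return parent1[:point] + parent2[point:]

def mutated(genome, mutationProb=0.1, stdDev=0.5):
    """Perturb each gene independently with probability mutationProb."""
    return [g + random.gauss(0, stdDev) if random.random() < mutationProb else g
            for g in genome]

random.seed(3)
child = one_point_crossover([0.0, 0.0, 0.0, 0.0], [1.0, 1.0, 1.0, 1.0])
```

With distinguishable parents, as here, the child always starts with genes from the first parent and ends with genes from the second.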

Multi-objective Optimization

class pybrain.optimization.MultiObjectiveGA(evaluator=None, initEvaluable=None, **kwargs)
Multi-objective Genetic Algorithm: the fitness is a vector with one entry per objective. By default we use NSGA-II selection.
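With a vector-valued fitness there is no single "best" individual; selection is based on Pareto dominance instead. The core predicate behind NSGA-II-style selection can be sketched as:

```python
def dominates(a, b):
    """True if fitness vector a Pareto-dominates b (maximization): a is at
    least as good in every objective and strictly better in at least one."""
    return (all(ai >= bi for ai, bi in zip(a, b))
            and any(ai > bi for ai, bi in zip(a, b)))

def pareto_front(population):
    """Keep only the individuals that no other individual dominates."""
    return [p for p in population
            if not any(dominates(q, p) for q in population if q is not p)]

# (2, 2) is dominated by (3, 3); the rest trade off the two objectives.
front = pareto_front([(1.0, 5.0), (3.0, 3.0), (2.0, 2.0), (5.0, 1.0)])
```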