finantic.Optimizer

finantic.Optimizer is an optimizer extension package for WealthLab 7. The package contains the following optimizer algorithms.

Optimization becomes a challenge when a strategy has many parameters.

Random Search

The random search optimizer draws each parameter value at random between the min and max of the provided parameter range.
Roughly based on: http://www.jmlr.org/papers/volume13/bergstra12a/bergstra12a.pdf

Parameters

iterations: The number of iterations to perform

seed: Seed for random number generator
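
The core idea fits in a few lines. The following Python sketch is illustrative only; evaluate and bounds stand in for the strategy's scoring function and parameter ranges and are not part of the package's API:

    import random

    def random_search(evaluate, bounds, iterations, seed=None):
        # bounds: list of (min, max) pairs, one per strategy parameter
        rng = random.Random(seed)
        best_params, best_score = None, float("-inf")
        for _ in range(iterations):
            # Draw each parameter uniformly between its min and max
            params = [rng.uniform(lo, hi) for lo, hi in bounds]
            score = evaluate(params)
            if score > best_score:
                best_params, best_score = params, score
        return best_params, best_score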

Grid Search

Simple grid search that tries all combinations of the provided parameters.

if 1 parameter: the single axis gets iterations values
if 2 parameters: each axis gets sqrt(iterations) values
if 3 parameters: each axis gets iterations^(1/3) values
etc.
In general, with n parameters each axis gets about iterations^(1/n) values, so the total number of grid points stays close to the iteration budget.

Parameters

iterations: The number of iterations to perform
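
A minimal Python sketch of this axis-splitting scheme (illustrative only; the helper names are not part of the package's API):

    import itertools

    def grid_search(evaluate, bounds, iterations):
        # With n parameters, each axis gets about iterations^(1/n) values,
        # so the total grid size stays close to the iteration budget.
        n = len(bounds)
        per_axis = max(2, round(iterations ** (1.0 / n)))
        axes = [[lo + i * (hi - lo) / (per_axis - 1) for i in range(per_axis)]
                for lo, hi in bounds]
        best_params, best_score = None, float("-inf")
        for params in itertools.product(*axes):
            score = evaluate(list(params))
            if score > best_score:
                best_params, best_score = list(params), score
        return best_params, best_score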

Particle Swarm

Particle Swarm optimizer (PSO). PSO is initialized with a group of random particles
and then searches for optima by updating generations. In every iteration, each particle is updated by following two "best" values: the best solution found by that particle so far, and the global best value obtained by any particle in the population so far.
http://www.swarmintelligence.org/tutorials.php
https://en.wikipedia.org/wiki/Particle_swarm_optimization
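
A minimal Python sketch of these update rules. The inertia weight w and the coefficients c1 and c2 are assumed textbook values, not parameters exposed by this package:

    import random

    def particle_swarm(evaluate, bounds, particles=20, iterations=100,
                       w=0.7, c1=1.5, c2=1.5, seed=None):
        rng = random.Random(seed)
        dim = len(bounds)
        # A group of random particles with zero initial velocity
        pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(particles)]
        vel = [[0.0] * dim for _ in range(particles)]
        pbest = [p[:] for p in pos]                       # per-particle best
        pbest_score = [evaluate(p) for p in pos]
        g = max(range(particles), key=lambda i: pbest_score[i])
        gbest, gbest_score = pbest[g][:], pbest_score[g]  # global best

        for _ in range(iterations):
            for i in range(particles):
                for d in range(dim):
                    r1, r2 = rng.random(), rng.random()
                    # Pull toward the particle's own best and the global best
                    vel[i][d] = (w * vel[i][d]
                                 + c1 * r1 * (pbest[i][d] - pos[i][d])
                                 + c2 * r2 * (gbest[d] - pos[i][d]))
                    lo, hi = bounds[d]
                    pos[i][d] = min(max(pos[i][d] + vel[i][d], lo), hi)
                score = evaluate(pos[i])
                if score > pbest_score[i]:
                    pbest[i], pbest_score[i] = pos[i][:], score
                    if score > gbest_score:
                        gbest, gbest_score = pos[i][:], score
        return gbest, gbest_score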

Globalized Bounded Nelder-Mead

Globalized bounded Nelder-Mead method. This version of Nelder-Mead optimization
avoids some of the shortcomings of the standard implementation.
Specifically, its restart property makes it better suited for multi-modal optimization problems.
It also respects the bounds given by the provided parameter space.
Roughly based on:
http://home.ku.edu.tr/~daksen/2004-Nelder-Mead-Method-Wolff.pdf
and
http://www.emse.fr/~leriche/GBNM_SMO_1026_final.pdf

Parameters

maxRestarts: Maximum number of restarts (default is 8)
noImprovementThreshold: Minimum improvement required before it is accepted as an actual improvement (default is 0.001)
maxIterationsWithoutImprovement: Maximum number of iterations without an improvement (default is 5)
maxIterationsPrRestart: Maximum iterations per restart. 0 means no limit and runs to convergence (default is 0)
maxFunctionEvaluations: Maximum number of function evaluations. 0 means no limit and runs to convergence (default is 0)
alpha: Coefficient for reflection part of the algorithm (default is 1)
gamma: Coefficient for expansion part of the algorithm (default is 2)
rho: Coefficient for contraction part of the algorithm (default is -0.5)
sigma: Coefficient for shrink part of the algorithm (default is 0.5)
seed: Seed for random restarts
maxDegreeOfParallelism: Maximum number of concurrent operations (default is -1 (unlimited))
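
To illustrate how the alpha, gamma, rho and sigma coefficients enter the algorithm, here is a minimal Python sketch of a single Nelder-Mead step on a simplex (a simplified minimizing version; the restart logic and bound handling of the globalized variant are not shown, and f is re-evaluated rather than cached for brevity):

    def nelder_mead_step(f, simplex, alpha=1.0, gamma=2.0, rho=-0.5, sigma=0.5):
        # One minimization step on a simplex of n+1 points in n dimensions.
        simplex.sort(key=f)                     # best point first, worst last
        n = len(simplex) - 1
        centroid = [sum(p[d] for p in simplex[:-1]) / n for d in range(n)]
        worst = simplex[-1]
        # Reflection: mirror the worst point through the centroid
        refl = [c + alpha * (c - w) for c, w in zip(centroid, worst)]
        if f(simplex[0]) <= f(refl) < f(simplex[-2]):
            simplex[-1] = refl
        elif f(refl) < f(simplex[0]):
            # Expansion: the reflected point is the new best, push further
            exp = [c + gamma * (c - w) for c, w in zip(centroid, worst)]
            simplex[-1] = exp if f(exp) < f(refl) else refl
        else:
            # Contraction: pull the worst point toward the centroid
            cont = [c + rho * (c - w) for c, w in zip(centroid, worst)]
            if f(cont) < f(worst):
                simplex[-1] = cont
            else:
                # Shrink the whole simplex toward the best point
                best = simplex[0]
                simplex[1:] = [[b + sigma * (q - b) for b, q in zip(best, p)]
                               for p in simplex[1:]]
        return simplex

In the globalized variant this local search is restarted from random points (controlled by maxRestarts and seed) so that multi-modal problems do not get stuck in the first local optimum found.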

Bayesian Optimization

Bayesian optimization (BO) for global black box optimization problems. BO learns a model based on the initial parameter sets and scores.
This model is used to sample new promising parameter candidates which are evaluated and added to the existing parameter sets.
This process iterates several times. The method is computationally expensive, so it is most relevant for expensive problems
where each evaluation of the function to minimize takes a long time, such as hyperparameter tuning of a machine learning method.
In that case it can usually reach a good solution in fewer iterations than less sophisticated methods.
Implementation loosely based on:
http://www.cs.ubc.ca/~hutter/papers/10-TR-SMAC.pdf
https://papers.nips.cc/paper/4522-practical-bayesian-optimization-of-machine-learning-algorithms.pdf
https://papers.nips.cc/paper/4443-algorithms-for-hyper-parameter-optimization.pdf

Parameters

iterations: The number of iterations to perform.
iterations * functionEvaluationsPerIterationCount = total number of function evaluations
randomStartingPointCount: Number of random parameter sets used
for initialization (default is 20)
functionEvaluationsPerIterationCount: The number of function evaluations per iteration.
The parameter sets are included in order of most promising outcome (default is 1)
randomSearchPointCount: The number of random parameter sets
used when maximizing the expected improvement acquisition function (default is 1000)
seed: Seed for random number generator
runParallel: Use multithreading to speed up execution (default is false).
Note that the order of results returned by the Optimize method will not be reproducible when running in parallel.
The results themselves will be the same; only their order is not reproducible
maxDegreeOfParallelism: Maximum number of concurrent operations (default is -1 (unlimited))
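
A minimal Python sketch of this loop, using a Gaussian-process surrogate and an expected-improvement acquisition maximized over random candidate points. This is an illustrative assumption, not the package's implementation; the internal model may differ:

    import numpy as np
    from scipy.stats import norm
    from sklearn.gaussian_process import GaussianProcessRegressor

    def bayesian_optimization(evaluate, bounds, iterations,
                              random_starting_point_count=20,
                              random_search_point_count=1000, seed=None):
        rng = np.random.default_rng(seed)
        lo = np.array([b[0] for b in bounds])
        hi = np.array([b[1] for b in bounds])
        # Initialization: score a random set of parameter vectors
        X = rng.uniform(lo, hi, size=(random_starting_point_count, len(bounds)))
        y = np.array([evaluate(x) for x in X])
        for _ in range(iterations):
            model = GaussianProcessRegressor().fit(X, y)   # learn the surrogate
            # Maximize expected improvement over a cloud of random candidates
            cand = rng.uniform(lo, hi, size=(random_search_point_count, len(bounds)))
            mu, std = model.predict(cand, return_std=True)
            best = y.max()
            z = (mu - best) / np.where(std > 0, std, 1e-12)
            ei = (mu - best) * norm.cdf(z) + std * norm.pdf(z)
            x_new = cand[np.argmax(ei)]
            # Evaluate the most promising candidate and grow the data set
            X = np.vstack([X, x_new])
            y = np.append(y, evaluate(x_new))
        i = int(np.argmax(y))
        return X[i], y[i]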

SMAC: Sequential Model-Based Optimization for General Algorithm Configuration

Implementation of the SMAC algorithm.
Based on: Sequential Model-Based Optimization for General Algorithm Configuration:
https://ml.informatik.uni-freiburg.de/papers/11-LION5-SMAC.pdf
And ML.Net implementation:
https://github.com/dotnet/machinelearning/blob/master/src/Microsoft.ML.Sweeper/Algorithms/SmacSweeper.cs
Uses Bayesian optimization in tandem with a greedy local search on the top performing solutions.

Parameters

iterations: The number of iterations to perform.
iterations * functionEvaluationsPerIterationCount = total number of function evaluations
randomStartingPointCount: Number of random parameter sets used for initialization (default is 20)
functionEvaluationsPerIterationCount: The number of function evaluations per iteration.
The parameter sets are included in order of most promising outcome (default is 1)
localSearchPointCount: The number of top contenders
to use in the greedy local search (default is 10)
randomSearchPointCount: The number of random parameter sets
used when maximizing the expected improvement acquisition function (default is 1000)
epsilon: Threshold for ending local search (default is 0.00001)
seed: Seed for random number generator
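
As an illustration of the greedy local-search component, here is a minimal Python sketch that hill-climbs on a fitted surrogate model's predictions. The model argument, the relative step size and the coordinate-wise moves are assumptions for illustration; SMAC itself pairs this with a random-forest surrogate and runs it from the localSearchPointCount top contenders:

    import numpy as np

    def greedy_local_search(model, start, bounds, step=0.05, epsilon=1e-5):
        # Hill-climb on the surrogate's predicted score, one coordinate at a time
        x = np.array(start, dtype=float)
        current = model.predict(x.reshape(1, -1))[0]
        improved = True
        while improved:
            improved = False
            for d in range(len(x)):
                span = bounds[d][1] - bounds[d][0]
                for direction in (-1.0, 1.0):
                    cand = x.copy()
                    cand[d] = np.clip(cand[d] + direction * step * span,
                                      bounds[d][0], bounds[d][1])
                    score = model.predict(cand.reshape(1, -1))[0]
                    # Keep moving while the predicted gain exceeds epsilon
                    if score - current > epsilon:
                        x, current, improved = cand, score, True
        return x, current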