
Commit 0f4a30b

Wrap oneplusone optimizer from nevergrad (#576)

1 parent 26ba9cf commit 0f4a30b

File tree

4 files changed: +251 −18 lines changed

docs/source/algorithms.md

Lines changed: 74 additions & 0 deletions
@@ -4124,6 +4124,80 @@ package. To use it, you need to have
 - **n_restarts** (int): Number of times to restart the optimizer. Default is 1.
 ```
 
+```{eval-rst}
+.. dropdown:: nevergrad_oneplusone
+
+    .. code-block::
+
+        "nevergrad_oneplusone"
+
+    Minimize a scalar function using the One Plus One Evolutionary algorithm from Nevergrad.
+
+    The One Plus One evolutionary algorithm iterates to find a set of parameters that
+    minimizes the loss function. It does this by perturbing, or mutating, the parameters
+    from the last iteration (the parent). If the new (child) parameters yield a better
+    result, the child becomes the new parent whose parameters are perturbed, perhaps more
+    aggressively. If the parent yields a better result, it remains the parent and the next
+    perturbation is less aggressive. Originally proposed by :cite:`Rechenberg1973`. The
+    implementation in Nevergrad is based on the one-fifth adaptation rule, going back to
+    :cite:`Schumer1968`.
+
+    - **noise\_handling**: Method for handling the noise (Default: `None`). Can be
+      - "random": A random point is reevaluated regularly using the one-fifth adaptation rule.
+      - "optimistic": The best optimistic point is reevaluated regularly, embracing optimism in the face of uncertainty.
+      - A float coefficient can be provided to tune the regularity of these reevaluations (default is 0.05). E.g., with 0.05 each evaluation has a 5% chance (i.e., 1 in 20) of being repeated, meaning the same candidate solution is reevaluated to better estimate its performance.
+    - **n\_cores**: Number of cores to use.
+    - **stopping.maxfun**: Maximum number of function evaluations.
+    - **mutation**: Type of mutation to apply. Available options are (Default: `"gaussian"`):
+      - "gaussian": Standard mutation by adding a Gaussian random variable (with progressive widening) to the best pessimistic point.
+      - "cauchy": Same as Gaussian but using a Cauchy distribution.
+      - "discrete": Mutates a randomly drawn variable (mutation occurs with probability 1/d in d dimensions, hence ~1 variable per mutation).
+      - "discreteBSO": Follows brainstorm optimization by gradually decreasing the mutation rate from 1 to 1/d.
+      - "fastga": Fast Genetic Algorithm mutations from the current best.
+      - "doublefastga": Double-FastGA mutations from the current best :cite:`doerr2017`.
+      - "rls": Randomized Local Search, which mutates one and only one variable.
+      - "portfolio": Random number of mutated bits, known as uniform mixing :cite:`dang2016`.
+      - "lengler": Mutation rate is a function of dimension and iteration index.
+      - "lengler{2|3|half|fourth}": Variants of the Lengler mutation rate adaptation.
+    - **sparse**: Whether to apply random mutations that set variables to zero. Default is `False`.
+    - **smoother**: Whether to suggest smooth mutations. Default is `False`.
+    - **annealing**: Annealing schedule to apply to mutation amplitude or temperature-based control (Default: `"none"`). Options are:
+      - "none": No annealing is applied.
+      - "Exp0.9": Exponential decay with rate 0.9.
+      - "Exp0.99": Exponential decay with rate 0.99.
+      - "Exp0.9Auto": Exponential decay with rate 0.9, auto-scaled based on problem horizon.
+      - "Lin100.0": Linear decay from 1 to 0 over 100 iterations.
+      - "Lin1.0": Linear decay from 1 to 0 over 1 iteration.
+      - "LinAuto": Linearly decaying annealing automatically scaled to the problem horizon.
+    - **super\_radii**: Whether to apply extended radii beyond standard bounds for candidate generation, enabling broader exploration. Default is `False`.
+    - **roulette\_size**: Size of the roulette wheel used for selection in the evolutionary process. Affects the sampling diversity from past candidates. (Default: `64`)
+    - **antismooth**: Degree of anti-smoothing applied to prevent premature convergence in smooth landscapes. This alters the landscape by penalizing overly smooth improvements. (Default: `4`)
+    - **crossover**: Whether to include a genetic crossover step every other iteration. Default is `False`.
+    - **crossover\_type**: Method used for genetic crossover between individuals in the population. Available options are (Default: `"none"`):
+      - "none": No crossover is applied.
+      - "rand": Randomized selection of the crossover point.
+      - "max": Crossover at the point with maximum fitness gain.
+      - "min": Crossover at the point with minimum fitness gain.
+      - "onepoint": One-point crossover, splitting the genome at a single random point.
+      - "twopoint": Two-point crossover, splitting the genome at two points and exchanging the middle section.
+    - **tabu\_length**: Length of the tabu list used to prevent revisiting recently evaluated candidates in local search strategies. Helps in escaping local minima. (Default: `1000`)
+    - **rotation**: Whether to apply rotational transformations to the search space, promoting invariance to axis-aligned structures and enhancing search performance in rotated coordinate systems. (Default: `False`)
+    - **seed**: Seed for the random number generator for reproducibility.
+```
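The parent/child loop and the one-fifth adaptation rule described in the documentation above can be sketched in a few lines of plain Python. This is only an illustrative sketch, not Nevergrad's implementation; the function name, the growth/shrink factors, and the Gaussian mutation are illustrative choices (the 0.8/0.2 ratio makes the step size stationary at a one-in-five success rate).

```python
# Hypothetical sketch of a (1+1) evolution strategy with the one-fifth
# success rule. NOT Nevergrad's implementation; names and constants are
# illustrative.
import math
import random


def one_plus_one(loss, x0, sigma=1.0, maxfun=2000, seed=0):
    """Minimize `loss` by mutating a single parent with Gaussian noise.

    The step size sigma grows after a successful mutation and shrinks
    after a failure, so that roughly one in five mutations succeeds.
    """
    rng = random.Random(seed)
    parent = list(x0)
    f_parent = loss(parent)
    for _ in range(maxfun - 1):
        # Mutate: perturb every coordinate of the parent (Gaussian mutation).
        child = [xi + sigma * rng.gauss(0.0, 1.0) for xi in parent]
        f_child = loss(child)
        if f_child < f_parent:
            # Success: the child becomes the parent; perturb more aggressively.
            parent, f_parent = child, f_child
            sigma *= math.exp(0.8)
        else:
            # Failure: keep the parent; perturb less aggressively.
            # The 0.8/0.2 ratio targets a ~1/5 success rate.
            sigma *= math.exp(-0.2)
    return parent, f_parent


# Usage: minimize the sphere function in 3 dimensions.
best, f_best = one_plus_one(lambda x: sum(v * v for v in x), [5.0, -3.0, 2.0])
print(f_best)
```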
 
 ## References
 
 ```{eval-rst}
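The annealing schedules listed in the new option docs ("Exp0.9", "Lin100.0", and so on) are simple decay curves for the mutation amplitude. A minimal sketch, assuming the names encode a rate or a horizon as their suffix (Nevergrad's internals may differ):

```python
# Hypothetical sketch of exponential and linear annealing schedules.
# The mapping from option names to parameters is an assumption for
# illustration, not Nevergrad's actual parsing.

def exp_schedule(rate):
    """Amplitude after t iterations: rate ** t (e.g. "Exp0.9" -> rate=0.9)."""
    return lambda t: rate ** t


def lin_schedule(horizon):
    """Linear decay from 1 to 0 over `horizon` iterations (e.g. "Lin100.0")."""
    return lambda t: max(0.0, 1.0 - t / horizon)


exp09 = exp_schedule(0.9)
lin100 = lin_schedule(100.0)
print(exp09(10), lin100(50))  # amplitudes after 10 and 50 iterations
```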

docs/source/refs.bib

Lines changed: 42 additions & 0 deletions
@@ -927,6 +927,48 @@ @InProceedings{Zambrano2013
   doi = {10.1109/CEC.2013.6557848},
 }
 
+@Book{Rechenberg1973,
+  author    = {Rechenberg, Ingo},
+  title     = {Evolutionsstrategie: Optimierung technischer Systeme nach Prinzipien der biologischen Evolution},
+  publisher = {Frommann-Holzboog Verlag},
+  address   = {Stuttgart},
+  year      = {1973},
+  url       = {https://gwern.net/doc/reinforcement-learning/exploration/1973-rechenberg.pdf},
+  note      = {[Evolution Strategy: Optimization of Technical Systems According to the Principles of Biological Evolution]},
+}
+
+@Article{Schumer1968,
+  author  = {Schumer, M. and Steiglitz, K.},
+  title   = {Adaptive step size random search},
+  journal = {IEEE Transactions on Automatic Control},
+  year    = {1968},
+  volume  = {13},
+  number  = {3},
+  pages   = {270--276},
+  doi     = {10.1109/TAC.1968.1098903},
+}
+
+@Misc{doerr2017,
+  author        = {Benjamin Doerr and Huu Phuoc Le and Régis Makhmara and Ta Duy Nguyen},
+  title         = {Fast Genetic Algorithms},
+  year          = {2017},
+  eprint        = {1703.03334},
+  archivePrefix = {arXiv},
+  primaryClass  = {cs.NE},
+  url           = {https://arxiv.org/abs/1703.03334},
+}
+
+@Misc{dang2016,
+  author        = {Duc-Cuong Dang and Per Kristian Lehre},
+  title         = {Self-adaptation of Mutation Rates in Non-elitist Populations},
+  year          = {2016},
+  eprint        = {1606.05551},
+  archivePrefix = {arXiv},
+  primaryClass  = {cs.NE},
+  url           = {https://arxiv.org/abs/1606.05551},
+}
+
 @Misc{Nogueira2014,
   author = {Fernando Nogueira},
   title  = {{Bayesian Optimization}: Open source constrained global optimization tool for {Python}},

src/optimagic/algorithms.py

Lines changed: 0 additions & 17 deletions
@@ -12,7 +12,6 @@
 from typing import Type, cast
 
 from optimagic.optimization.algorithm import Algorithm
-from optimagic.optimizers.bayesian_optimizer import BayesOpt
 from optimagic.optimizers.bhhh import BHHH
 from optimagic.optimizers.fides import Fides
 from optimagic.optimizers.iminuit_migrad import IminuitMigrad
@@ -367,7 +366,6 @@ def Scalar(self) -> BoundedGlobalGradientFreeNonlinearConstrainedScalarAlgorithm
 
 @dataclass(frozen=True)
 class BoundedGlobalGradientFreeScalarAlgorithms(AlgoSelection):
-    bayes_opt: Type[BayesOpt] = BayesOpt
     nevergrad_pso: Type[NevergradPSO] = NevergradPSO
     nlopt_crs2_lm: Type[NloptCRS2LM] = NloptCRS2LM
     nlopt_direct: Type[NloptDirect] = NloptDirect
@@ -1034,7 +1032,6 @@ def Local(self) -> GradientBasedLocalNonlinearConstrainedScalarAlgorithms:
 
 @dataclass(frozen=True)
 class BoundedGlobalGradientFreeAlgorithms(AlgoSelection):
-    bayes_opt: Type[BayesOpt] = BayesOpt
     nevergrad_pso: Type[NevergradPSO] = NevergradPSO
     nlopt_crs2_lm: Type[NloptCRS2LM] = NloptCRS2LM
     nlopt_direct: Type[NloptDirect] = NloptDirect
@@ -1099,7 +1096,6 @@ def Scalar(self) -> GlobalGradientFreeNonlinearConstrainedScalarAlgorithms:
 
 @dataclass(frozen=True)
 class GlobalGradientFreeScalarAlgorithms(AlgoSelection):
-    bayes_opt: Type[BayesOpt] = BayesOpt
     nevergrad_pso: Type[NevergradPSO] = NevergradPSO
     nlopt_crs2_lm: Type[NloptCRS2LM] = NloptCRS2LM
     nlopt_direct: Type[NloptDirect] = NloptDirect
@@ -1309,7 +1305,6 @@ def Scalar(self) -> BoundedGradientFreeNonlinearConstrainedScalarAlgorithms:
 
 @dataclass(frozen=True)
 class BoundedGradientFreeScalarAlgorithms(AlgoSelection):
-    bayes_opt: Type[BayesOpt] = BayesOpt
     nag_pybobyqa: Type[NagPyBOBYQA] = NagPyBOBYQA
     nevergrad_pso: Type[NevergradPSO] = NevergradPSO
     nlopt_bobyqa: Type[NloptBOBYQA] = NloptBOBYQA
@@ -1534,7 +1529,6 @@ def Scalar(self) -> BoundedGlobalNonlinearConstrainedScalarAlgorithms:
 
 @dataclass(frozen=True)
 class BoundedGlobalScalarAlgorithms(AlgoSelection):
-    bayes_opt: Type[BayesOpt] = BayesOpt
     nevergrad_pso: Type[NevergradPSO] = NevergradPSO
     nlopt_crs2_lm: Type[NloptCRS2LM] = NloptCRS2LM
     nlopt_direct: Type[NloptDirect] = NloptDirect
@@ -2147,7 +2141,6 @@ def Local(self) -> GradientBasedLikelihoodLocalAlgorithms:
 
 @dataclass(frozen=True)
 class GlobalGradientFreeAlgorithms(AlgoSelection):
-    bayes_opt: Type[BayesOpt] = BayesOpt
     nevergrad_pso: Type[NevergradPSO] = NevergradPSO
     nlopt_crs2_lm: Type[NloptCRS2LM] = NloptCRS2LM
     nlopt_direct: Type[NloptDirect] = NloptDirect
@@ -2234,7 +2227,6 @@ def Scalar(self) -> GradientFreeLocalScalarAlgorithms:
 
 @dataclass(frozen=True)
 class BoundedGradientFreeAlgorithms(AlgoSelection):
-    bayes_opt: Type[BayesOpt] = BayesOpt
     nag_dfols: Type[NagDFOLS] = NagDFOLS
     nag_pybobyqa: Type[NagPyBOBYQA] = NagPyBOBYQA
     nevergrad_pso: Type[NevergradPSO] = NevergradPSO
@@ -2332,7 +2324,6 @@ def Scalar(self) -> GradientFreeNonlinearConstrainedScalarAlgorithms:
 
 @dataclass(frozen=True)
 class GradientFreeScalarAlgorithms(AlgoSelection):
-    bayes_opt: Type[BayesOpt] = BayesOpt
     nag_pybobyqa: Type[NagPyBOBYQA] = NagPyBOBYQA
     neldermead_parallel: Type[NelderMeadParallel] = NelderMeadParallel
     nevergrad_pso: Type[NevergradPSO] = NevergradPSO
@@ -2456,7 +2447,6 @@ def Scalar(self) -> GradientFreeParallelScalarAlgorithms:
 
 @dataclass(frozen=True)
 class BoundedGlobalAlgorithms(AlgoSelection):
-    bayes_opt: Type[BayesOpt] = BayesOpt
     nevergrad_pso: Type[NevergradPSO] = NevergradPSO
     nlopt_crs2_lm: Type[NloptCRS2LM] = NloptCRS2LM
     nlopt_direct: Type[NloptDirect] = NloptDirect
@@ -2539,7 +2529,6 @@ def Scalar(self) -> GlobalNonlinearConstrainedScalarAlgorithms:
 
 @dataclass(frozen=True)
 class GlobalScalarAlgorithms(AlgoSelection):
-    bayes_opt: Type[BayesOpt] = BayesOpt
     nevergrad_pso: Type[NevergradPSO] = NevergradPSO
     nlopt_crs2_lm: Type[NloptCRS2LM] = NloptCRS2LM
     nlopt_direct: Type[NloptDirect] = NloptDirect
@@ -2854,7 +2843,6 @@ def Scalar(self) -> BoundedNonlinearConstrainedScalarAlgorithms:
 
 @dataclass(frozen=True)
 class BoundedScalarAlgorithms(AlgoSelection):
-    bayes_opt: Type[BayesOpt] = BayesOpt
     fides: Type[Fides] = Fides
     iminuit_migrad: Type[IminuitMigrad] = IminuitMigrad
     ipopt: Type[Ipopt] = Ipopt
@@ -3167,7 +3155,6 @@ def Scalar(self) -> GradientBasedScalarAlgorithms:
 
 @dataclass(frozen=True)
 class GradientFreeAlgorithms(AlgoSelection):
-    bayes_opt: Type[BayesOpt] = BayesOpt
     nag_dfols: Type[NagDFOLS] = NagDFOLS
     nag_pybobyqa: Type[NagPyBOBYQA] = NagPyBOBYQA
     neldermead_parallel: Type[NelderMeadParallel] = NelderMeadParallel
@@ -3242,7 +3229,6 @@ def Scalar(self) -> GradientFreeScalarAlgorithms:
 
 @dataclass(frozen=True)
 class GlobalAlgorithms(AlgoSelection):
-    bayes_opt: Type[BayesOpt] = BayesOpt
     nevergrad_pso: Type[NevergradPSO] = NevergradPSO
     nlopt_crs2_lm: Type[NloptCRS2LM] = NloptCRS2LM
     nlopt_direct: Type[NloptDirect] = NloptDirect
@@ -3372,7 +3358,6 @@ def Scalar(self) -> LocalScalarAlgorithms:
 
 @dataclass(frozen=True)
 class BoundedAlgorithms(AlgoSelection):
-    bayes_opt: Type[BayesOpt] = BayesOpt
     fides: Type[Fides] = Fides
     iminuit_migrad: Type[IminuitMigrad] = IminuitMigrad
     ipopt: Type[Ipopt] = Ipopt
@@ -3510,7 +3495,6 @@ def Scalar(self) -> NonlinearConstrainedScalarAlgorithms:
 
 @dataclass(frozen=True)
 class ScalarAlgorithms(AlgoSelection):
-    bayes_opt: Type[BayesOpt] = BayesOpt
     fides: Type[Fides] = Fides
     iminuit_migrad: Type[IminuitMigrad] = IminuitMigrad
     ipopt: Type[Ipopt] = Ipopt
@@ -3687,7 +3671,6 @@ def Scalar(self) -> ParallelScalarAlgorithms:
 
 @dataclass(frozen=True)
 class Algorithms(AlgoSelection):
-    bayes_opt: Type[BayesOpt] = BayesOpt
     bhhh: Type[BHHH] = BHHH
     fides: Type[Fides] = Fides
     iminuit_migrad: Type[IminuitMigrad] = IminuitMigrad
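The hunks above all follow the same pattern: each algorithm is a field of a frozen `AlgoSelection` dataclass, and deleting the `bayes_opt` line removes it from that selection. A minimal sketch of the pattern, using stand-in classes rather than optimagic's actual implementations:

```python
# Minimal sketch of the frozen-dataclass selection pattern seen in the
# diff above. AlgoSelection and the algorithm classes are stand-ins,
# not optimagic's actual code.
from dataclasses import dataclass, fields
from typing import Type


class Algorithm:  # stand-in base class
    pass


class NevergradPSO(Algorithm):
    pass


class NloptDirect(Algorithm):
    pass


@dataclass(frozen=True)
class AlgoSelection:
    """Base class: expose the selected algorithms as a name -> class dict."""

    @property
    def _all(self) -> dict[str, Type[Algorithm]]:
        return {f.name: getattr(self, f.name) for f in fields(self)}


@dataclass(frozen=True)
class GlobalAlgorithms(AlgoSelection):
    # Each field maps an algorithm name to its class; deleting a field
    # (as the diff does for bayes_opt) removes it from the selection.
    nevergrad_pso: Type[NevergradPSO] = NevergradPSO
    nlopt_direct: Type[NloptDirect] = NloptDirect


selection = GlobalAlgorithms()
print(sorted(selection._all))
```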
