# Optimizers

Check out {ref}`how-to-select-algorithms` to see how to select an algorithm and specify
-`algo_options` when using `maximize` or `minimize`.
+`algo_options` when using `maximize` or `minimize`. The default algorithm options are
+discussed in {ref}`algo_options` and their type hints are documented in {ref}`typing`.

-## Optimizers from scipy
+## Optimizers from SciPy

(scipy-algorithms)=

-optimagic supports most `scipy` algorithms and scipy is automatically installed when you
-install optimagic.
+optimagic supports most [SciPy](https://scipy.org/) algorithms and SciPy is
+automatically installed when you install optimagic.

```{eval-rst}
.. dropdown:: scipy_lbfgsb

-    .. code-block::
-
-        "scipy_lbfgsb"
-
-    Minimize a scalar function of one or more variables using the L-BFGS-B algorithm.
-
-    The optimizer is taken from scipy, which calls the Fortran code written by the
-    original authors of the algorithm. The Fortran code includes the corrections
-    and improvements that were introduced in a follow-up paper.
-
-    lbfgsb is a limited-memory version of the original BFGS algorithm that deals
-    with lower and upper bounds via an active set approach.
+    **How to use this algorithm:**

-    The lbfgsb algorithm is well suited for differentiable scalar optimization
-    problems with up to several hundred parameters.
-
-    It is a quasi-Newton line search algorithm. At each trial point it evaluates the
-    criterion function and its gradient to find a search direction. It then
-    approximates the Hessian using the stored history of gradients and uses the
-    Hessian to calculate a candidate step size. It then uses a gradient-based line
-    search algorithm to determine the actual step length. Since the algorithm always
-    evaluates the gradient and criterion function jointly, the user should provide a
-    ``criterion_and_derivative`` function that exploits the synergies in the
-    calculation of criterion and gradient.
-
-    The lbfgsb algorithm is almost perfectly scale invariant, so it is not necessary
-    to scale the parameters.
-
-    - **convergence.ftol_rel** (float): Stop when the relative improvement between
-      two iterations is smaller than this. More formally, this is expressed as
+    .. code-block:: python

-      .. math::
+        import optimagic as om
+
+        om.minimize(
+            ...,
+            algorithm=om.algos.scipy_lbfgsb(stopping_maxiter=1_000, ...),
+        )
+
+    or
+
+    .. code-block:: python

-          \frac{f^k - f^{k+1}}{\max\{|f^k|, |f^{k+1}|, 1\}} \leq
-          \text{convergence.ftol\_rel}
+        om.minimize(
+            ...,
+            algorithm="scipy_lbfgsb",
+            algo_options={"stopping_maxiter": 1_000, ...},
+        )

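+    For example, a complete call could look like this (an illustrative sketch: the
+    quadratic objective and the start values are placeholders, not part of the
+    optimagic API):
+
+    .. code-block:: python
+
+        import numpy as np
+        import optimagic as om
+
+        def sphere(params):
+            # Simple convex test function with its minimum at the zero vector.
+            return params @ params
+
+        res = om.minimize(
+            fun=sphere,
+            params=np.arange(5.0),
+            algorithm=om.algos.scipy_lbfgsb(stopping_maxiter=1_000),
+        )
+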
+    **Description and available options:**

-    - **convergence.gtol_abs** (float): Stop if all elements of the projected
-      gradient are smaller than this.
-    - **stopping.maxfun** (int): If the maximum number of function evaluations is
-      reached, the optimization stops, but we do not count this as convergence.
-    - **stopping.maxiter** (int): If the maximum number of iterations is reached,
-      the optimization stops, but we do not count this as convergence.
-    - **limited_memory_storage_length** (int): Maximum number of saved gradients
-      used to approximate the Hessian matrix.
+    .. autoclass:: optimagic.optimizers.scipy_optimizers.ScipyLBFGSB

```
