**Derivative-free optimization** is a discipline in mathematical optimization that does not use derivative information in the classical sense to find optimal solutions. Sometimes information about the derivative of the objective function *f* is unavailable, unreliable, or impractical to obtain: *f* may be non-smooth, time-consuming to evaluate, or noisy, so that methods relying on derivatives, or on approximating them with finite differences, are of little use. The problem of finding optimal points in such situations is referred to as derivative-free optimization, and algorithms that use neither derivatives nor finite differences are called **derivative-free algorithms** (though this classification is not precise). Derivative-free optimization is closely related to black-box optimization.^{1}

## Algorithms

Evolution strategies and natural evolution strategies (CMA-ES, xNES, SNES)
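Evolution strategies adapt a search distribution using only objective values, never gradients. As a minimal illustration (not an implementation of CMA-ES, xNES, or SNES, which adapt a full covariance matrix), the sketch below shows a (1+1) evolution strategy with the classic 1/5th success rule for step-size control; all names and constants here are illustrative choices.

```python
import random

def one_plus_one_es(f, x0, sigma=1.0, iterations=2000, seed=0):
    """Minimal (1+1) evolution strategy with the 1/5th success rule.

    f: objective mapping a list of floats to a float (evaluated
    pointwise only; no gradient of f is ever computed).
    x0: starting point. Illustrative sketch, not a production ES.
    """
    rng = random.Random(seed)
    x, fx = list(x0), f(x0)
    for _ in range(iterations):
        # Mutate: add isotropic Gaussian noise scaled by sigma.
        y = [xi + sigma * rng.gauss(0.0, 1.0) for xi in x]
        fy = f(y)
        if fy <= fx:
            # Success: accept the offspring and widen the search.
            x, fx = y, fy
            sigma *= 1.22
        else:
            # Failure: shrink the step size (0.95 ** -4 ≈ 1.22, so the
            # step size is stationary at a 1/5 success rate).
            sigma *= 0.95
    return x, fx

# Usage: minimize the 2-D sphere function without derivatives.
sphere = lambda v: sum(c * c for c in v)
best, best_val = one_plus_one_es(sphere, [3.0, -2.0])
```

The 1/5th rule is the simplest form of step-size adaptation; the modern strategies listed above replace it with full covariance-matrix adaptation.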

Powell's COBYLA, UOBYQA, NEWUOA, BOBYQA, and LINCOA algorithms
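Powell's methods build local models of the objective from function values alone. COBYLA, for example, constructs linear approximations of the objective and constraints inside a shrinking trust region. One readily available implementation is `scipy.optimize.minimize` with `method="COBYLA"`; the sketch below is a usage example with an illustrative constrained problem (minimize the sphere function subject to x + y ≥ 1, whose solution is (0.5, 0.5)).

```python
import numpy as np
from scipy.optimize import minimize

# Objective: sphere function, evaluated only pointwise -- no gradient
# (jac) argument is supplied, and COBYLA would not use one anyway.
def f(v):
    return float(np.sum(v ** 2))

# Inequality constraint g(v) >= 0, here x + y - 1 >= 0.
cons = [{"type": "ineq", "fun": lambda v: v[0] + v[1] - 1.0}]

res = minimize(f, x0=np.array([2.0, 0.0]), method="COBYLA",
               constraints=cons, tol=1e-8)
# res.x ≈ [0.5, 0.5], res.fun ≈ 0.5
```

Note that COBYLA uses linear models, so it handles constraints naturally but converges slowly on strongly curved objectives; the quadratic-model methods (UOBYQA, NEWUOA, BOBYQA, LINCOA) address this.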

## References

**Sources**

Note: the first version of this page is based entirely on the first citation.

(1) “Derivative-Free Optimization.” *Wikipedia*, 19 March 2018, https://en.wikipedia.org/wiki/Derivative-free_optimization