Hyperparameter optimization

This article is about hyperparameters in machine learning. For hyperparameters in Bayesian statistics, see Hyperparameter.

In the context of machine learning, hyperparameter optimization or model selection is the problem of choosing a set of hyperparameters for a learning algorithm, usually with the goal of optimizing a measure of the algorithm's performance on an independent data set. Often cross-validation is used to estimate this generalization performance.[1] Hyperparameter optimization contrasts with actual learning problems, which are also often cast as optimization problems but optimize a loss function on the training set alone. In effect, learning algorithms learn parameters that model or reconstruct their inputs well, while hyperparameter optimization ensures that the model does not overfit its training data, for example by tuning the amount of regularization.

Algorithms for hyperparameter optimization

Grid search

The traditional way of performing hyperparameter optimization has been grid search, or a parameter sweep, which is simply an exhaustive searching through a manually specified subset of the hyperparameter space of a learning algorithm. A grid search algorithm must be guided by some performance metric, typically measured by cross-validation on the training set[2] or evaluation on a held-out validation set.

Since the parameter space of a machine learner may include real-valued or unbounded parameters, manually set bounds and discretization may be necessary before applying grid search.

For example, a typical soft-margin SVM classifier equipped with an RBF kernel has at least two hyperparameters that need to be tuned for good performance on unseen data: a regularization constant C and a kernel hyperparameter γ. Both parameters are continuous, so to perform grid search, one selects a finite set of "reasonable" values for each, say C ∈ {10, 100, 1000} and γ ∈ {0.1, 0.2, 0.5, 1.0}.

Grid search then trains an SVM for each pair (C, γ) in the Cartesian product of these two sets and evaluates its performance on a held-out validation set (or by internal cross-validation on the training set, in which case multiple SVMs are trained per pair). Finally, the grid search algorithm outputs the settings that achieved the highest score in the validation procedure.
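A concrete sketch of this procedure, assuming scikit-learn (with its bundled iris data standing in for a real problem, and the example C and γ grids above reused purely as illustrative values):

```python
# Minimal grid-search sketch for an RBF-kernel SVM, assuming scikit-learn.
# The C and gamma grids below are arbitrary illustrative values.
from sklearn import datasets
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.svm import SVC

X, y = datasets.load_iris(return_X_y=True)        # toy data set, purely for illustration
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

param_grid = {
    "C": [10, 100, 1000],                 # regularization constant candidates
    "gamma": [0.1, 0.2, 0.5, 1.0],        # RBF kernel width candidates
}

# Every (C, gamma) pair in the Cartesian product is evaluated by 5-fold
# cross-validation on the training set; n_jobs=-1 runs settings in parallel.
search = GridSearchCV(SVC(kernel="rbf"), param_grid, cv=5, n_jobs=-1)
search.fit(X_train, y_train)

print(search.best_params_)                # settings with the highest CV score
print(search.score(X_test, y_test))       # performance on held-out data
```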

Grid search suffers from the curse of dimensionality, but is often embarrassingly parallel because typically the hyperparameter settings it evaluates are independent of each other.[1]

Bayesian optimization

Main article: Bayesian optimization

Bayesian optimization is a methodology for the global optimization of noisy black-box functions. Applied to hyperparameter optimization, it builds a statistical model of the function mapping hyperparameter values to the objective evaluated on a validation set. Intuitively, the methodology assumes that there is some smooth but noisy function that maps hyperparameters to the objective. Bayesian optimization aims to gather observations so as to evaluate the machine learning model as few times as possible while revealing as much information as possible about this function and, in particular, about the location of the optimum. It relies on a very general prior over functions which, when combined with observed hyperparameter values and the corresponding outputs, yields a distribution over functions. The methodology proceeds by iteratively picking hyperparameters to observe (experiments to run) in a manner that trades off exploration (hyperparameters for which the outcome is most uncertain) and exploitation (hyperparameters expected to have a good outcome). In practice, Bayesian optimization has been shown[3][4][5][6] to obtain better results in fewer experiments than grid search and random search, because it can reason about the quality of experiments before they are run.
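A minimal sketch of this loop, assuming a Gaussian process surrogate from scikit-learn, the expected-improvement acquisition function, and a placeholder one-dimensional objective standing in for "train the model and report the validation error":

```python
# Minimal Bayesian-optimization sketch: a Gaussian-process surrogate plus an
# expected-improvement acquisition function. Assumes scikit-learn and SciPy;
# `objective` is a placeholder for training a model and returning its validation error.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def objective(log_c):
    # Placeholder black-box function of one hyperparameter (here log10 of C).
    return np.sin(3 * log_c) + 0.1 * log_c ** 2

bounds = (-2.0, 4.0)                     # illustrative search range for log10(C)
rng = np.random.default_rng(0)

# Start from a few random evaluations of the objective.
X = rng.uniform(*bounds, size=(3, 1))
y = np.array([objective(x[0]) for x in X])

gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)

def expected_improvement(candidates, gp, best_y):
    mu, sigma = gp.predict(candidates, return_std=True)
    sigma = np.maximum(sigma, 1e-9)
    improvement = best_y - mu            # the objective is being minimized
    z = improvement / sigma
    return improvement * norm.cdf(z) + sigma * norm.pdf(z)

for _ in range(20):                      # budget of 20 model evaluations
    gp.fit(X, y)                         # update the distribution over functions
    candidates = rng.uniform(*bounds, size=(1000, 1))
    ei = expected_improvement(candidates, gp, y.min())
    x_next = candidates[np.argmax(ei)]   # trades off exploration vs. exploitation
    X = np.vstack([X, x_next])
    y = np.append(y, objective(x_next[0]))

print("best hyperparameter:", X[np.argmin(y)], "objective:", y.min())
```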

Random search

Main article: Random search

Since grid search is an exhaustive and therefore potentially expensive method, several alternatives have been proposed. In particular, a randomized search that simply samples parameter settings a fixed number of times has been found to be more effective in high-dimensional spaces than exhaustive search. This is because some hyperparameters often turn out not to affect the loss significantly; random search evaluates a distinct value of every hyperparameter in each trial, whereas a grid spends many trials varying only the unimportant hyperparameters while repeating the same few values of the important ones.[1]
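A minimal sketch of random search for the same SVM hyperparameters, assuming scikit-learn's RandomizedSearchCV and SciPy's log-uniform distribution; the sampling ranges and trial budget are illustrative assumptions:

```python
# Minimal random-search sketch for the RBF-kernel SVM. Each of the 20 trials
# samples a fresh (C, gamma) pair from log-uniform distributions, so every
# trial uses distinct values of every hyperparameter.
from scipy.stats import loguniform
from sklearn import datasets
from sklearn.model_selection import RandomizedSearchCV
from sklearn.svm import SVC

X, y = datasets.load_iris(return_X_y=True)     # toy data set, purely for illustration

param_distributions = {
    "C": loguniform(1e-1, 1e3),                # illustrative sampling ranges
    "gamma": loguniform(1e-3, 1e1),
}

search = RandomizedSearchCV(
    SVC(kernel="rbf"), param_distributions,
    n_iter=20, cv=5, random_state=0, n_jobs=-1,
)
search.fit(X, y)
print(search.best_params_, search.best_score_)
```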

Gradient-based optimization

For specific learning algorithms, it is possible to compute the gradient with respect to hyperparameters and then optimize the hyperparameters using gradient descent. The first usage of these techniques was focused on neural networks.[7] Since then, these methods have been extended to other models such as support vector machines[8] or logistic regression.[9]
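As a minimal illustration of the idea, and not a reconstruction of any of the cited methods, consider ridge regression: the trained weights have a closed form, so the gradient of the validation loss with respect to the regularization strength λ can be derived analytically and followed by ordinary gradient descent. The synthetic data, step size, and iteration count below are arbitrary assumptions.

```python
# Minimal sketch of gradient-based hyperparameter optimization for ridge
# regression. Because the trained weights have a closed form, the
# "hypergradient" of the validation loss with respect to the regularization
# strength lam can be computed analytically.
import numpy as np

rng = np.random.default_rng(0)
X_tr, X_val = rng.normal(size=(80, 5)), rng.normal(size=(40, 5))
w_true = rng.normal(size=5)
y_tr = X_tr @ w_true + 0.5 * rng.normal(size=80)
y_val = X_val @ w_true + 0.5 * rng.normal(size=40)

def train(lam):
    # Closed-form ridge solution: w = (X'X + lam*I)^-1 X'y
    A = X_tr.T @ X_tr + lam * np.eye(X_tr.shape[1])
    w = np.linalg.solve(A, X_tr.T @ y_tr)
    return w, A

def val_loss_and_hypergradient(lam):
    w, A = train(lam)
    resid = X_val @ w - y_val
    loss = np.mean(resid ** 2)
    dw_dlam = -np.linalg.solve(A, w)                 # dw/dlam = -(X'X + lam*I)^-1 w
    dloss_dlam = 2.0 * resid @ X_val @ dw_dlam / len(y_val)
    return loss, dloss_dlam

lam = 1.0
for _ in range(50):                                  # gradient descent on lam itself
    loss, grad = val_loss_and_hypergradient(lam)
    lam = max(lam - 0.1 * grad, 1e-6)                # keep lam positive
print("lambda:", lam, "validation loss:", loss)
```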

A different approach to obtaining a gradient with respect to hyperparameters is to differentiate the steps of an iterative optimization algorithm using automatic differentiation.[10][11]

References

  1. Bergstra, James; Bengio, Yoshua (2012). "Random Search for Hyper-Parameter Optimization" (PDF). J. Machine Learning Research. 13: 281–305.
  2. Hsu, Chih-Wei; Chang, Chih-Chung; Lin, Chih-Jen (2010). A practical guide to support vector classification. Technical Report, National Taiwan University.
  3. Hutter, Frank; Hoos, Holger; Leyton-Brown, Kevin (2011), "Sequential model-based optimization for general algorithm configuration" (PDF), Learning and Intelligent Optimization
  4. Bergstra, James; Bardenet, Remi; Bengio, Yoshua; Kegl, Balazs (2011), "Algorithms for hyper-parameter optimization" (PDF), Advances in Neural Information Processing Systems
  5. Snoek, Jasper; Larochelle, Hugo; Adams, Ryan (2012), "Practical Bayesian Optimization of Machine Learning Algorithms" (PDF), Advances in Neural Information Processing Systems
  6. Thornton, Chris; Hutter, Frank; Hoos, Holger; Leyton-Brown, Kevin (2013), "Auto-WEKA: Combined selection and hyperparameter optimization of classification algorithms" (PDF), Knowledge discovery and data mining
  7. Larsen, Jan; Hansen, Lars Kai; Svarer, Claus; Ohlsson, M (1996). "Design and regularization of neural networks: the optimal use of a validation set". Proceedings of the 1996 IEEE Signal Processing Society Workshop.
  8. Chapelle, Olivier; Vapnik, Vladimir; Bousquet, Olivier; Mukherjee, Sayan (2002). "Choosing multiple parameters for support vector machines" (PDF). Machine Learning. 46: 131–159. doi:10.1023/a:1012450327387.
  9. Do, Chuong B.; Foo, Chuan-Sheng; Ng, Andrew Y. (2008). "Efficient multiple hyperparameter learning for log-linear models". Advances in Neural Information Processing Systems 20.
  10. Domke, Justin (2012). "Generic Methods for Optimization-Based Modeling" (PDF). AISTATS. 22.
  11. Maclaurin, Douglas; Duvenaud, David; Adams, Ryan P. (2015). "Gradient-based Hyperparameter Optimization through Reversible Learning". arXiv:1502.03492 [stat.ML].
  12. Gorissen, Dirk; Crombecq, Karel; Couckuyt, Ivo; Demeester, Piet; Dhaene, Tom (2010). "A Surrogate Modeling and Adaptive Sampling Toolbox for Computer Based Design" (PDF). J. Machine Learning Research. 11: 2051–2055.