W4995 Applied Machine Learning

Parameter Tuning and AutoML

03/30/20

Andreas C. Müller

1 / 41

Motivation

  • Need to select among Models
  • Need to select Hyper-Parameters
  • Need to select among preprocessing methods
2 / 41

Conditional Hyper-Parameters

  • Kernels (e.g. degree only matters if kernel='poly')
  • Neural Nets (e.g. units per layer only matter for layers that exist)
  • Pipelines (a step's parameters only matter if the step is used)
3 / 41

Formulating model-selection as Hyperparameter Optimization

  • One big search, many conditional Hyper-Parameters
  • Categorical, integer, continuous, conditional
  • Different distributions
4 / 41

CASH problem

  • Combined Algorithm Selection and Hyperparameter optimization
  • Find the best configuration
  • Global optimization over a complex, high-dimensional space
5 / 41

Issues with Grid-Search

  • Need to define the grid
  • Exponential in the number of dimensions (e.g. 10 values for each of 4 parameters is already 10,000 configurations)
6 / 41

Black-Box Search Procedures

$\Lambda^* = \arg\max_\Lambda f(\Lambda)$ with hyperparameter configuration $\Lambda$ and model evaluation $f$.

7 / 41

General optimization of an unknown, non-differentiable, possibly non-smooth function f. NP-hard in general. The function f is very slow to evaluate: think training a neural net for a week.

Random Search

8 / 41

Random Search with scikit-learn

# specify parameters and distributions to sample from
from scipy.stats import randint
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RandomizedSearchCV

clf = RandomForestClassifier(n_estimators=20)
param_dist = {"max_depth": [3, None],
              "max_features": randint(1, 11),
              "min_samples_split": randint(2, 11),
              "bootstrap": [True, False],
              "criterion": ["gini", "entropy"]}
random_search = RandomizedSearchCV(clf,
                                   param_distributions=param_dist,
                                   n_iter=200)
  • Use lists or objects with an rvs method (such as scipy.stats distributions)
  • Use continuous distributions for the biggest advantage over grid search
9 / 41

Bayesian Optimization, SMBO

  • Fit a 'cheap' probabilistic surrogate model to the black-box function

  • Pick the next point using an exploration / exploitation trade-off

  • Implemented as an acquisition function (see the sketch below)
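
A minimal SMBO sketch using scikit-optimize's gp_minimize (one of the implementations listed two slides on); the quadratic objective is just a stand-in for an expensive model evaluation:

from skopt import gp_minimize

def objective(params):
    # stand-in for an expensive evaluation, e.g. cross-validation
    x, = params
    return (x - 2) ** 2

res = gp_minimize(objective,        # black-box function to minimize
                  [(-10.0, 10.0)],  # search space: one continuous dimension
                  acq_func="EI",    # acquisition: expected improvement
                  n_calls=30,       # total objective evaluations
                  random_state=0)
print(res.x, res.fun)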

10 / 41

Surrogate functions

11 / 41

Evolutionary Methods: TPOT

12 / 41

Implementations

  • SMAC: Random Forest model (Hutter group)
  • spearmint: GP (Snoek et al)
  • hyperopt: TPE (Bergstra, not maintained)
  • scikit-optimize: GP, tree, etc
  • GPyOpt: GP based on GPy (Lawrence group)
13 / 41

Criticism


14 / 41

Beyond Black-Box

  • Hyperparameter gradient descent
  • Multi-Fidelity optimization
  • Meta-learning
  • (others...)
16 / 41

Multi-Fidelity Search

Approximate the expensive function with a similar, cheaper function

17 / 41

Top: subsample the dataset; bottom: use fewer trees in the forest.

Multi-Fidelity Bayesian Optimization

  • Fit model to performance given parameters and budget
  • Choose parameters and budget for best exploration / exploitation
  • Related to multi-armed bandits and A/B testing
  • Differences to bandits:
    • non-stationary distributions
    • receiving loss (computing validation error) is expensive
    • possibly infinitely many arms (continuous parameters)
18 / 41

Successive Halving

  • Given $n$ configurations and a budget $B$
  • Pick $\eta=2$ or $\eta=3$ (the wording below follows $\eta=2$)
  • Each iteration, keep the best half (top $1/\eta$) of the configurations
  • After $k=\log_\eta(n) + 1$ iterations, a single configuration is left
  • Initially allocate $\frac{B}{kn}$ to each configuration, then double it each iteration (the exact budget is slightly more complicated; see the algorithm)
19 / 41

Successive Halving Example

  • $n=81$ configurations
  • total budget $B=20000$

    train 81 configurations with resources 41
    train 27 configurations with resources 123
    train 9 configurations with resources 370
    train 3 configurations with resources 1111
    train 1 configuration with resources 3333
    total resources used: 16638
21 / 41

Think of resources as "maximum number of trees built in total" or "maximum number of data points used in total".
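
A minimal sketch of the schedule described above, using the simple "allocate B/(kn), multiply by eta each round" rule; the exact numbers on the slide come from the slightly more involved budgeting in the full algorithm:

def successive_halving_schedule(n, B, eta=3):
    # count the number of rounds k: n, n/eta, ..., 1
    k, m = 0, n
    while m >= 1:
        k += 1
        m //= eta
    r = B / (k * n)  # initial per-configuration resources
    total = 0
    while n >= 1:
        print(f"train {n} configurations with resources {int(r)}")
        total += n * int(r)
        n //= eta
        r *= eta
    print("total resources used:", total)

successive_halving_schedule(n=81, B=20000, eta=3)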

Successive Halving: Another Example

22 / 41

Hyperband

23 / 41

Hyperband
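
A sketch of Hyperband's outer loop following Li et al.: it hedges over how aggressively to halve by running successive-halving brackets with different trade-offs between the number of configurations n and the initial resource r (get_random_config and run_successive_halving are hypothetical helpers):

import math

def hyperband(R, eta=3):
    # R: maximum resources for a single configuration
    s_max = round(math.log(R, eta))  # assumes R is a power of eta
    B = (s_max + 1) * R              # budget per bracket
    for s in reversed(range(s_max + 1)):
        n = math.ceil(B / R * eta ** s / (s + 1))  # initial configurations
        r = R * eta ** -s                          # initial resource each
        print(f"bracket s={s}: {n} configs, {r:.1f} resources each")
        # configs = [get_random_config() for _ in range(n)]    # hypothetical
        # run_successive_halving(configs, r, eta, rounds=s+1)  # hypothetical

hyperband(R=81, eta=3)

The most aggressive bracket (s = s_max) is plain successive halving with maximal subsampling; the least aggressive (s = 0) trains few configurations on the full resource, like random search.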

24 / 41

BOHB / HpBandSter

25 / 41

In Practice

"With the exception of the LeNet experiment (Section 3.3) and the 117 Datasets experi- ment (Section 4.2.1), the most aggressive bracket of SuccessiveHalving outperformed Hyperband in all of our experiments." Li et. al.

  • Soon (?) in sklearn (under discussion)
  • HpBandSter: distributed implementation, some custom code required
  • scikit-hyperband looks good, but doesn't do subsampling out of the box
  • Successive halving is really easy to implement yourself (see the sketch below)
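
A sketch of the DIY version, using dataset subsampling as the resource; candidates is assumed to be a list of scikit-learn estimators and X, y numpy arrays:

import numpy as np
from sklearn.base import clone
from sklearn.model_selection import cross_val_score

def diy_successive_halving(candidates, X, y, eta=2, min_frac=0.125):
    """Keep the best 1/eta of candidates on successively larger subsamples."""
    rng = np.random.RandomState(0)
    frac = min_frac
    while len(candidates) > 1 and frac <= 1.0:
        idx = rng.choice(len(X), int(frac * len(X)), replace=False)
        scores = [cross_val_score(clone(est), X[idx], y[idx], cv=3).mean()
                  for est in candidates]
        order = np.argsort(scores)[::-1]  # best candidates first
        keep = max(1, len(candidates) // eta)
        candidates = [candidates[i] for i in order[:keep]]
        frac *= eta
    return candidates[0]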
26 / 41

Meta-learning

Learning from experience (other datasets)

27 / 41

Ranking

  • Run many algorithms on a large array of datasets
  • Rank by "best on average" (illustrated below)
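
As an illustration, average ranks can be computed from a (dataset x algorithm) score table; the scores here are made up:

import pandas as pd

# rows: datasets, columns: algorithms; made-up scores for illustration
scores = pd.DataFrame({"rf": [0.91, 0.85, 0.70],
                       "svm": [0.89, 0.88, 0.75],
                       "knn": [0.80, 0.82, 0.72]},
                      index=["data1", "data2", "data3"])
ranks = scores.rank(axis=1, ascending=False)  # rank algorithms per dataset
print(ranks.mean())  # lower average rank = better on average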
28 / 41

Portfolios

  • PoSH auto-sklearn

  • Create a diverse set so that a good configuration is among the top k

  • Submodular optimization problem
  • Greedy approximation (sketched below)
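
A sketch of the greedy construction, assuming perf is a (configurations x datasets) score matrix; each step adds the configuration that most improves the best score achieved so far on each dataset:

import numpy as np

def greedy_portfolio(perf, k):
    """Greedily pick k rows of a (n_configs, n_datasets) score matrix."""
    n_configs, n_datasets = perf.shape
    best_so_far = np.full(n_datasets, -np.inf)
    portfolio = []
    for _ in range(k):
        # value of each candidate: per-dataset best after adding it
        gains = np.maximum(perf, best_so_far).sum(axis=1)
        gains[portfolio] = -np.inf  # don't re-pick selected configurations
        choice = int(np.argmax(gains))
        portfolio.append(choice)
        best_so_far = np.maximum(best_so_far, perf[choice])
    return portfolio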
29 / 41

Discretization

Are continuous parameters actually important?

  • Anecdotal evidence suggests they are not.
30 / 41

Meta-Features and Meta-Models

31 / 41

Multi-Task Bayesian Optimization

  • Create an estimate over all datasets at the same time!
  • Not scalable with Gaussian Processes
  • Maybe scalable with Neural Networks?
33 / 41

Ensemble Models

34 / 41

Auto-sklearn

  • End-to-end AutoML
  • Searches a fixed sklearn pipeline (4 steps)
  • Warm-starting with meta-features & KNN
  • Bayesian Optimization with SMAC

https://automl.github.io/auto-sklearn/stable/index.html

35 / 41

Playing around with auto-sklearn

import autosklearn.classification
import sklearn.model_selection
import sklearn.datasets
from sklearn.metrics import accuracy_score

X, y = sklearn.datasets.load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = \
    sklearn.model_selection.train_test_split(X, y, random_state=1)
# Bayesian optimization over sklearn pipelines, warm-started with
# meta-features; builds an ensemble of the best models found
automl = autosklearn.classification.AutoSklearnClassifier()
automl.fit(X_train, y_train)
y_hat = automl.predict(X_test)
print("Accuracy score", accuracy_score(y_test, y_hat))

"This will run for one hour and should result in an accuracy above 0.98."

36 / 41

Practical Recommendations

  • Multi-Fidelity! Simple, effective!
  • Portfolios
  • BOHB / HpBandSter
  • auto-sklearn
  • TPOT?

Seems promising:

  • Transferring surrogates / ensembles
  • Collaborative filtering / active testing
37 / 41

Criticisms

  • Do we need 100 classifiers?
  • Do we need Complex Pipelines?
  • Creates complex models and ensembles
  • "Making it too easy"?
38 / 41

Although we already reduced the space of considered ML algorithms substantially compared to our previous Auto-sklearn (4 vs. 15 classifiers), we could have reduced this set even further since, in the end, only XGBoost models ended up in the final ensembles for the challenge.

Feurer et al., PoSH auto-sklearn

39 / 41

dabl

https://dabl.github.io/

Implements a portfolio classifier with successive halving:
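
A minimal usage sketch, assuming dabl's SimpleClassifier interface:

import dabl
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)
# runs a portfolio of candidate models with successive halving
sc = dabl.SimpleClassifier().fit(X_train, y_train)
print(sc.score(X_test, y_test))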

40 / 41

Questions?

41 / 41
