index (int64, 0-731k) | package (string, 2-98 chars, nullable) | name (string, 1-76 chars) | docstring (string, 0-281k chars, nullable) | code (string, 4-1.07M chars, nullable) | signature (string, 2-42.8k chars, nullable) |
---|---|---|---|---|---|
52,725 |
flaml.automl.automl
|
retrain_from_log
|
Retrain from log file.
This function is intended to retrain the logged configurations.
NOTE: In some rare cases, the last config is early-stopped to meet time_budget and is also the best config,
but the logged config's ITER_HP (e.g., n_estimators) is not reduced.
Args:
log_file_name: A string of the log file name.
X_train: A numpy array or dataframe of training data in shape n*m.
For time series forecast tasks, the first column of X_train must be the timestamp column (datetime type). Other columns in the dataframe are assumed to be exogenous variables (categorical or numeric).
y_train: A numpy array or series of labels in shape n*1.
dataframe: A dataframe of training data including label column.
For time series forecast tasks, dataframe must be specified and should
have at least two columns: timestamp and label, where the first
column is the timestamp column (datetime type). Other columns
in the dataframe are assumed to be exogenous variables
(categorical or numeric).
label: A str of the label column name, e.g., 'label';
Note: If X_train and y_train are provided,
dataframe and label are ignored;
If not, dataframe and label must be provided.
time_budget: A float number of the time budget in seconds.
task: A string of the task type, e.g.,
'classification', 'regression', 'ts_forecast', 'rank',
'seq-classification', 'seq-regression', 'summarization',
or an instance of the Task class.
eval_method: A string of the resampling strategy, one of
['auto', 'cv', 'holdout'].
split_ratio: A float of the validation data percentage for holdout.
n_splits: An integer of the number of folds for cross-validation.
split_type: str or splitter object, default="auto" | the data split type.
* A valid splitter object is an instance of a derived class of scikit-learn
[KFold](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.KFold.html#sklearn.model_selection.KFold)
and has ``split`` and ``get_n_splits`` methods with the same signatures.
Set eval_method to "cv" to use the splitter object.
* Valid str options depend on different tasks.
For classification tasks, valid choices are
["auto", 'stratified', 'uniform', 'time', 'group']. "auto" -> stratified.
For regression tasks, valid choices are ["auto", 'uniform', 'time'].
"auto" -> uniform.
For time series forecast tasks, must be "auto" or 'time'.
For ranking task, must be "auto" or 'group'.
groups: None or array-like | Group labels (with matching length to
y_train) or group counts (with sum equal to the length of y_train)
for training data.
n_jobs: An integer of the number of threads for training | default=-1.
Use all available resources when n_jobs == -1.
train_best: A boolean of whether to train the best config in the
time budget; if false, train the last config in the budget.
train_full: A boolean of whether to train on the full data. If true,
eval_method and sample_size in the log file will be ignored.
record_id: the ID of the training log record from which the model will
be retrained. By default `record_id = -1`, which means this will be
ignored. `record_id = 0` corresponds to the first trial, and
when `record_id >= 0`, `time_budget` will be ignored.
auto_augment: boolean, default=True | Whether to automatically
augment rare classes.
custom_hp: dict, default=None | The custom search space specified by the user.
Each key is an estimator name; each value is a dict of the custom search space for that estimator. Note that the
domain of the custom search space can either be a value or a sample.Domain object.
```python
custom_hp = {
"transformer_ms": {
"model_path": {
"domain": "albert-base-v2",
},
"learning_rate": {
"domain": tune.choice([1e-4, 1e-5]),
}
}
}
```
fit_kwargs_by_estimator: dict, default=None | The user-specified keyword arguments, grouped by estimator name.
e.g.,
```python
fit_kwargs_by_estimator = {
"transformer": {
"output_dir": "test/data/output/",
"fp16": False,
}
}
```
**fit_kwargs: Other keyword arguments to pass to the fit() function of
the searched learners, such as sample_weight. Below are a few examples of
estimator-specific parameters:
period: int | forecast horizon for all time series forecast tasks.
gpu_per_trial: float, default = 0 | A float of the number of gpus per trial,
only used by TransformersEstimator, XGBoostSklearnEstimator, and
TemporalFusionTransformerEstimator.
group_ids: list of strings of column names identifying a time series, only
used by TemporalFusionTransformerEstimator, required for
'ts_forecast_panel' task. `group_ids` is a parameter for TimeSeriesDataSet object
from PyTorchForecasting.
For other parameters to describe your dataset, refer to
[TimeSeriesDataSet PyTorchForecasting](https://pytorch-forecasting.readthedocs.io/en/stable/api/pytorch_forecasting.data.timeseries.TimeSeriesDataSet.html).
To specify your variables, use `static_categoricals`, `static_reals`,
`time_varying_known_categoricals`, `time_varying_known_reals`,
`time_varying_unknown_categoricals`, `time_varying_unknown_reals`,
`variable_groups`. To provide more information on your data, use
`max_encoder_length`, `min_encoder_length`, `lags`.
log_dir: str, default = "lightning_logs" | Folder into which to log results
for tensorboard, only used by TemporalFusionTransformerEstimator.
max_epochs: int, default = 20 | Maximum number of epochs to run training,
only used by TemporalFusionTransformerEstimator.
batch_size: int, default = 64 | Batch size for training model, only
used by TemporalFusionTransformerEstimator.
|
def retrain_from_log(
self,
log_file_name,
X_train=None,
y_train=None,
dataframe=None,
label=None,
time_budget=np.inf,
task: Optional[Union[str, Task]] = None,
eval_method=None,
split_ratio=None,
n_splits=None,
split_type=None,
groups=None,
n_jobs=-1,
# gpu_per_trial=0,
train_best=True,
train_full=False,
record_id=-1,
auto_augment=None,
custom_hp=None,
skip_transform=None,
preserve_checkpoint=True,
fit_kwargs_by_estimator=None,
**fit_kwargs,
):
"""Retrain from log file.
This function is intended to retrain the logged configurations.
NOTE: In some rare cases, the last config is early-stopped to meet time_budget and is also the best config,
but the logged config's ITER_HP (e.g., n_estimators) is not reduced.
Args:
log_file_name: A string of the log file name.
X_train: A numpy array or dataframe of training data in shape n*m.
For time series forecast tasks, the first column of X_train must be the timestamp column (datetime type). Other columns in the dataframe are assumed to be exogenous variables (categorical or numeric).
y_train: A numpy array or series of labels in shape n*1.
dataframe: A dataframe of training data including label column.
For time series forecast tasks, dataframe must be specified and should
have at least two columns: timestamp and label, where the first
column is the timestamp column (datetime type). Other columns
in the dataframe are assumed to be exogenous variables
(categorical or numeric).
label: A str of the label column name, e.g., 'label';
Note: If X_train and y_train are provided,
dataframe and label are ignored;
If not, dataframe and label must be provided.
time_budget: A float number of the time budget in seconds.
task: A string of the task type, e.g.,
'classification', 'regression', 'ts_forecast', 'rank',
'seq-classification', 'seq-regression', 'summarization',
or an instance of the Task class.
eval_method: A string of the resampling strategy, one of
['auto', 'cv', 'holdout'].
split_ratio: A float of the validation data percentage for holdout.
n_splits: An integer of the number of folds for cross-validation.
split_type: str or splitter object, default="auto" | the data split type.
* A valid splitter object is an instance of a derived class of scikit-learn
[KFold](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.KFold.html#sklearn.model_selection.KFold)
and has ``split`` and ``get_n_splits`` methods with the same signatures.
Set eval_method to "cv" to use the splitter object.
* Valid str options depend on different tasks.
For classification tasks, valid choices are
["auto", 'stratified', 'uniform', 'time', 'group']. "auto" -> stratified.
For regression tasks, valid choices are ["auto", 'uniform', 'time'].
"auto" -> uniform.
For time series forecast tasks, must be "auto" or 'time'.
For ranking task, must be "auto" or 'group'.
groups: None or array-like | Group labels (with matching length to
y_train) or group counts (with sum equal to the length of y_train)
for training data.
n_jobs: An integer of the number of threads for training | default=-1.
Use all available resources when n_jobs == -1.
train_best: A boolean of whether to train the best config in the
time budget; if false, train the last config in the budget.
train_full: A boolean of whether to train on the full data. If true,
eval_method and sample_size in the log file will be ignored.
record_id: the ID of the training log record from which the model will
be retrained. By default `record_id = -1`, which means this will be
ignored. `record_id = 0` corresponds to the first trial, and
when `record_id >= 0`, `time_budget` will be ignored.
auto_augment: boolean, default=True | Whether to automatically
augment rare classes.
custom_hp: dict, default=None | The custom search space specified by the user.
Each key is an estimator name; each value is a dict of the custom search space for that estimator. Note that the
domain of the custom search space can either be a value or a sample.Domain object.
```python
custom_hp = {
"transformer_ms": {
"model_path": {
"domain": "albert-base-v2",
},
"learning_rate": {
"domain": tune.choice([1e-4, 1e-5]),
}
}
}
```
fit_kwargs_by_estimator: dict, default=None | The user-specified keyword arguments, grouped by estimator name.
e.g.,
```python
fit_kwargs_by_estimator = {
"transformer": {
"output_dir": "test/data/output/",
"fp16": False,
}
}
```
**fit_kwargs: Other keyword arguments to pass to the fit() function of
the searched learners, such as sample_weight. Below are a few examples of
estimator-specific parameters:
period: int | forecast horizon for all time series forecast tasks.
gpu_per_trial: float, default = 0 | A float of the number of gpus per trial,
only used by TransformersEstimator, XGBoostSklearnEstimator, and
TemporalFusionTransformerEstimator.
group_ids: list of strings of column names identifying a time series, only
used by TemporalFusionTransformerEstimator, required for
'ts_forecast_panel' task. `group_ids` is a parameter for TimeSeriesDataSet object
from PyTorchForecasting.
For other parameters to describe your dataset, refer to
[TimeSeriesDataSet PyTorchForecasting](https://pytorch-forecasting.readthedocs.io/en/stable/api/pytorch_forecasting.data.timeseries.TimeSeriesDataSet.html).
To specify your variables, use `static_categoricals`, `static_reals`,
`time_varying_known_categoricals`, `time_varying_known_reals`,
`time_varying_unknown_categoricals`, `time_varying_unknown_reals`,
`variable_groups`. To provide more information on your data, use
`max_encoder_length`, `min_encoder_length`, `lags`.
log_dir: str, default = "lightning_logs" | Folder into which to log results
for tensorboard, only used by TemporalFusionTransformerEstimator.
max_epochs: int, default = 20 | Maximum number of epochs to run training,
only used by TemporalFusionTransformerEstimator.
batch_size: int, default = 64 | Batch size for training model, only
used by TemporalFusionTransformerEstimator.
"""
task = task or self._settings.get("task")
if isinstance(task, str):
task = task_factory(task)
eval_method = eval_method or self._settings.get("eval_method")
split_ratio = split_ratio or self._settings.get("split_ratio")
n_splits = n_splits or self._settings.get("n_splits")
split_type = split_type or self._settings.get("split_type")
auto_augment = self._settings.get("auto_augment") if auto_augment is None else auto_augment
self._state.task = task
self._estimator_type = "classifier" if task.is_classification() else "regressor"
self._state.fit_kwargs = fit_kwargs
self._state.custom_hp = custom_hp or self._settings.get("custom_hp")
self._skip_transform = self._settings.get("skip_transform") if skip_transform is None else skip_transform
self._state.fit_kwargs_by_estimator = fit_kwargs_by_estimator or self._settings.get("fit_kwargs_by_estimator")
self.preserve_checkpoint = (
self._settings.get("preserve_checkpoint") if preserve_checkpoint is None else preserve_checkpoint
)
task.validate_data(self, self._state, X_train, y_train, dataframe, label, groups=groups)
logger.info("log file name {}".format(log_file_name))
best_config = None
best_val_loss = float("+inf")
best_estimator = None
sample_size = None
time_used = 0.0
training_duration = 0
best = None
with training_log_reader(log_file_name) as reader:
if record_id >= 0:
best = reader.get_record(record_id)
else:
for record in reader.records():
time_used = record.wall_clock_time
if time_used > time_budget:
break
training_duration = time_used
val_loss = record.validation_loss
if val_loss <= best_val_loss or not train_best:
if val_loss == best_val_loss and train_best:
size = record.sample_size
if size > sample_size:
best = record
best_val_loss = val_loss
sample_size = size
else:
best = record
size = record.sample_size
best_val_loss = val_loss
sample_size = size
if not training_duration:
logger.warning(f"No estimator found within time_budget={time_budget}")
from .model import BaseEstimator as Estimator
self._trained_estimator = Estimator()
return training_duration
if not best:
return
best_estimator = best.learner
best_config = best.config
sample_size = len(self._y_train_all) if train_full else best.sample_size
this_estimator_kwargs = self._state.fit_kwargs_by_estimator.get(best_estimator)
if this_estimator_kwargs:
this_estimator_kwargs = (
this_estimator_kwargs.copy()
) # make another shallow copy of the value (a dict obj), so user's fit_kwargs_by_estimator won't be updated
this_estimator_kwargs.update(self._state.fit_kwargs)
self._state.fit_kwargs_by_estimator[best_estimator] = this_estimator_kwargs
else:
self._state.fit_kwargs_by_estimator[best_estimator] = self._state.fit_kwargs
logger.info(
"estimator = {}, config = {}, #training instances = {}".format(best_estimator, best_config, sample_size)
)
# Partially copied from fit() function
# Initialize some attributes required for retrain_from_log
self._split_type = task.decide_split_type(
split_type,
self._y_train_all,
self._state.fit_kwargs,
self._state.groups,
)
eval_method = self._decide_eval_method(eval_method, time_budget)
self.modelcount = 0
self._auto_augment = auto_augment
self._prepare_data(eval_method, split_ratio, n_splits)
self._state.time_budget = -1
self._state.free_mem_ratio = 0
self._state.n_jobs = n_jobs
import os
self._state.resources_per_trial = (
{
"cpu": max(1, os.cpu_count() >> 1),
"gpu": fit_kwargs.get("gpu_per_trial", 0),
}
if self._state.n_jobs < 0
else {"cpu": self._state.n_jobs, "gpu": fit_kwargs.get("gpu_per_trial", 0)}
)
self._trained_estimator = self._state._train_with_config(
best_estimator,
best_config,
sample_size=sample_size,
)[0]
logger.info("retrain from log succeeded")
return training_duration
|
(self, log_file_name, X_train=None, y_train=None, dataframe=None, label=None, time_budget=inf, task: Union[str, flaml.automl.task.task.Task, NoneType] = None, eval_method=None, split_ratio=None, n_splits=None, split_type=None, groups=None, n_jobs=-1, train_best=True, train_full=False, record_id=-1, auto_augment=None, custom_hp=None, skip_transform=None, preserve_checkpoint=True, fit_kwargs_by_estimator=None, **fit_kwargs)
|
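For orientation, here is a hedged usage sketch of `retrain_from_log`: run a budgeted `AutoML.fit` with `log_file_name` set, then retrain the best logged config on the full data. The dataset, budget, and file name are illustrative assumptions, not from the source.
```python
# Sketch: search with logging enabled, then retrain from the log.
from flaml import AutoML
from sklearn.datasets import load_iris

X, y = load_iris(return_X_y=True)
automl = AutoML()
automl.fit(
    X_train=X,
    y_train=y,
    task="classification",
    time_budget=10,              # seconds (illustrative)
    log_file_name="automl.log",  # required so retrain_from_log has records to read
)
automl.retrain_from_log(
    log_file_name="automl.log",
    X_train=X,
    y_train=y,
    task="classification",
    time_budget=10,
    train_full=True,  # ignore eval_method/sample_size in the log; train on all data
)
```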
52,726 |
flaml.automl.automl
|
save_best_config
| null |
def save_best_config(self, filename):
best = {
"class": self.best_estimator,
"hyperparameters": self.best_config,
}
os.makedirs(os.path.dirname(filename), exist_ok=True)
with open(filename, "w") as f:
json.dump(best, f)
|
(self, filename)
|
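As a hedged follow-up, the JSON written by `save_best_config` has the shape `{"class": ..., "hyperparameters": ...}` per the code above; reading it back is plain `json.load`. The path below is an illustrative assumption (the method calls `os.makedirs` on the file's directory, so a path with a parent directory is assumed).
```python
import json
from flaml import AutoML
from sklearn.datasets import load_iris

X, y = load_iris(return_X_y=True)
automl = AutoML()
automl.fit(X_train=X, y_train=y, task="classification", time_budget=5)
automl.save_best_config("artifacts/best_config.json")  # creates artifacts/ if needed
with open("artifacts/best_config.json") as f:
    best = json.load(f)
print(best["class"])            # name of the best estimator, e.g. "lgbm"
print(best["hyperparameters"])  # dict of its best hyperparameter values
```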
52,727 |
flaml.automl.automl
|
score
| null |
def score(
self,
X: Union[DataFrame, psDataFrame],
y: Union[Series, psSeries],
**kwargs,
):
estimator = getattr(self, "_trained_estimator", None)
if estimator is None:
logger.warning("No estimator is trained. Please run fit with enough budget.")
return None
X = self._state.task.preprocess(X, self._transformer)
if self._label_transformer:
y = self._label_transformer.transform(y)
return estimator.score(X, y, **kwargs)
|
(self, X: Union[pandas.core.frame.DataFrame, flaml.automl.spark.psDataFrame], y: Union[pandas.core.series.Series, flaml.automl.spark.psSeries], **kwargs)
|
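A hedged sketch of how `score` is meant to be used: per the code above, it preprocesses `X`, transforms `y` if a label transformer exists, and delegates to the trained estimator's `score`, returning None with a warning when no estimator was trained. The train/test split below is an illustrative assumption.
```python
from flaml import AutoML
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
automl = AutoML()
automl.fit(X_train=X_train, y_train=y_train, task="classification", time_budget=5)
# Prints None (with a warning) if no estimator finished within the budget.
print(automl.score(X_test, y_test))
```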
52,728 |
flaml.onlineml.autovw
|
AutoVW
|
Class for the AutoVW algorithm.
|
class AutoVW:
"""Class for the AutoVW algorithm."""
WARMSTART_NUM = 100
AUTOMATIC = "_auto"
VW_INTERACTION_ARG_NAME = "interactions"
def __init__(
self,
max_live_model_num: int,
search_space: dict,
init_config: Optional[dict] = {},
min_resource_lease: Optional[Union[str, float]] = "auto",
automl_runner_args: Optional[dict] = {},
scheduler_args: Optional[dict] = {},
model_select_policy: Optional[str] = "threshold_loss_ucb",
metric: Optional[str] = "mae_clipped",
random_seed: Optional[int] = None,
model_selection_mode: Optional[str] = "min",
cb_coef: Optional[float] = None,
):
"""Constructor.
Args:
max_live_model_num: An int to specify the maximum number of
'live' models, which, in other words, is the maximum number
of models allowed to update in each learning iteration.
search_space: A dictionary of the search space. This search space
includes both hyperparameters we want to tune and fixed
hyperparameters. In the latter case, the value is a fixed value.
init_config: A dictionary of a partial or full initial config,
e.g. {'interactions': set(), 'learning_rate': 0.5}
min_resource_lease: string or float | The minimum resource lease
assigned to a particular model/trial. If set as 'auto', it will
be calculated automatically.
automl_runner_args: A dictionary of configuration for the OnlineTrialRunner.
If set to {}, default values will be used, which is equivalent to using
the following config.
Example:
```python
automl_runner_args = {
"champion_test_policy": 'loss_ucb', # the statistic test for a better champion
"remove_worse": False, # whether to do worse than test
}
```
scheduler_args: A dictionary of configuration for the scheduler.
If set to {}, default values will be used, which is equivalent to using the
following config.
Example:
```python
scheduler_args = {
"keep_challenger_metric": 'ucb', # what metric to use when deciding the top performing challengers
"keep_challenger_ratio": 0.5, # denotes the ratio of top performing challengers to keep live
"keep_champion": True, # specifcies whether to keep the champion always running
}
```
model_select_policy: A string in ['threshold_loss_ucb',
'threshold_loss_lcb', 'threshold_loss_avg', 'loss_ucb', 'loss_lcb',
'loss_avg'] to specify how to select one model to do prediction from
the live model pool. Default value is 'threshold_loss_ucb'.
metric: A string in ['mae_clipped', 'mae', 'mse', 'absolute_clipped',
'absolute', 'squared'] to specify the name of the loss function used
for calculating the progressive validation loss in ChaCha.
random_seed: An integer of the random seed used in the searcher
(more specifically, this is the random seed for the ConfigOracle).
model_selection_mode: A string in ['min', 'max'] to specify the objective as
minimization or maximization.
cb_coef: A float coefficient (optional) used in the sample complexity bound.
"""
self._max_live_model_num = max_live_model_num
self._search_space = search_space
self._init_config = init_config
self._online_trial_args = {
"metric": metric,
"min_resource_lease": min_resource_lease,
"cb_coef": cb_coef,
}
self._automl_runner_args = automl_runner_args
self._scheduler_args = scheduler_args
self._model_select_policy = model_select_policy
self._model_selection_mode = model_selection_mode
self._random_seed = random_seed
self._trial_runner = None
self._best_trial = None
# code for debugging purpose
self._prediction_trial_id = None
self._iter = 0
def _setup_trial_runner(self, vw_example):
"""Set up the _trial_runner based on one vw_example."""
# setup the default search space for the namespace interaction hyperparameter
search_space = self._search_space.copy()
for k, v in self._search_space.items():
if k == self.VW_INTERACTION_ARG_NAME and v == self.AUTOMATIC:
raw_namespaces = self.get_ns_feature_dim_from_vw_example(vw_example).keys()
search_space[k] = polynomial_expansion_set(init_monomials=set(raw_namespaces))
# setup the init config based on the input _init_config and search space
init_config = self._init_config.copy()
for k, v in search_space.items():
if k not in init_config.keys():
if isinstance(v, PolynomialExpansionSet):
init_config[k] = set()
elif not isinstance(v, Categorical) and not isinstance(v, Float):
init_config[k] = v
searcher_args = {
"init_config": init_config,
"space": search_space,
"random_seed": self._random_seed,
"online_trial_args": self._online_trial_args,
}
logger.info("original search_space %s", self._search_space)
logger.info("original init_config %s", self._init_config)
logger.info("searcher_args %s", searcher_args)
logger.info("scheduler_args %s", self._scheduler_args)
logger.info("automl_runner_args %s", self._automl_runner_args)
searcher = ChampionFrontierSearcher(**searcher_args)
scheduler = ChaChaScheduler(**self._scheduler_args)
self._trial_runner = OnlineTrialRunner(
max_live_model_num=self._max_live_model_num,
searcher=searcher,
scheduler=scheduler,
**self._automl_runner_args,
)
def predict(self, data_sample):
"""Predict on the input data sample.
Args:
data_sample: one data example in vw format.
"""
if self._trial_runner is None:
self._setup_trial_runner(data_sample)
self._best_trial = self._select_best_trial()
self._y_predict = self._best_trial.predict(data_sample)
# code for debugging purpose
if self._prediction_trial_id is None or self._prediction_trial_id != self._best_trial.trial_id:
self._prediction_trial_id = self._best_trial.trial_id
logger.info(
"prediction trial id changed to %s at iter %s, resource used: %s",
self._prediction_trial_id,
self._iter,
self._best_trial.result.resource_used,
)
return self._y_predict
def learn(self, data_sample):
"""Perform one online learning step with the given data sample.
Args:
data_sample: one data example in vw format. It will be used to
update the vw model.
"""
self._iter += 1
self._trial_runner.step(data_sample, (self._y_predict, self._best_trial))
def _select_best_trial(self):
"""Select a best trial from the running trials according to the _model_select_policy."""
best_score = float("+inf") if self._model_selection_mode == "min" else float("-inf")
new_best_trial = None
for trial in self._trial_runner.running_trials:
if trial.result is not None and (
"threshold" not in self._model_select_policy or trial.result.resource_used >= self.WARMSTART_NUM
):
score = trial.result.get_score(self._model_select_policy)
if ("min" == self._model_selection_mode and score < best_score) or (
"max" == self._model_selection_mode and score > best_score
):
best_score = score
new_best_trial = trial
if new_best_trial is not None:
logger.debug("best_trial resource used: %s", new_best_trial.result.resource_used)
return new_best_trial
else:
# This branch will be triggered when the resource consumption of all trials is smaller
# than the WARMSTART_NUM threshold. In this case, we will select the _best_trial
# selected in the previous iteration.
if self._best_trial is not None and self._best_trial.status == Trial.RUNNING:
logger.debug("old best trial %s", self._best_trial.trial_id)
return self._best_trial
else:
# this will be triggered in the first iteration or in the iteration where we want
# to select the trial from the previous iteration but that trial has been paused
# (i.e., self._best_trial.status != Trial.RUNNING) by the scheduler.
logger.debug(
"using champion trial: %s",
self._trial_runner.champion_trial.trial_id,
)
return self._trial_runner.champion_trial
@staticmethod
def get_ns_feature_dim_from_vw_example(vw_example) -> dict:
"""Get a dictionary of feature dimensionality for each namespace singleton."""
return get_ns_feature_dim_from_vw_example(vw_example)
|
(max_live_model_num: int, search_space: dict, init_config: Optional[dict] = {}, min_resource_lease: Union[str, float, NoneType] = 'auto', automl_runner_args: Optional[dict] = {}, scheduler_args: Optional[dict] = {}, model_select_policy: Optional[str] = 'threshold_loss_ucb', metric: Optional[str] = 'mae_clipped', random_seed: Optional[int] = None, model_selection_mode: Optional[str] = 'min', cb_coef: Optional[float] = None)
|
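To show how the pieces above fit together, here is a hedged sketch of the online predict-then-learn loop; the search space, the two-item data stream, and the `vowpalwabbit` dependency are illustrative assumptions.
```python
from flaml import AutoVW
from flaml.tune import loguniform

# '_auto' (AutoVW.AUTOMATIC) asks AutoVW to build the namespace-interaction
# search space from the first data sample it sees.
autovw = AutoVW(
    max_live_model_num=5,
    search_space={
        "learning_rate": loguniform(lower=2e-10, upper=1.0),
        "interactions": AutoVW.AUTOMATIC,
    },
)
data_stream = ["1 |a x:0.5 |b y:1.0", "-1 |a x:0.1 |b y:2.0"]  # assumed VW-format strings
for vw_example in data_stream:
    y_pred = autovw.predict(vw_example)  # predict with the currently selected best trial
    autovw.learn(vw_example)             # one online update step for the live models
```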
52,729 |
flaml.onlineml.autovw
|
__init__
|
Constructor.
Args:
max_live_model_num: An int to specify the maximum number of
'live' models, which, in other words, is the maximum number
of models allowed to update in each learning iteration.
search_space: A dictionary of the search space. This search space
includes both hyperparameters we want to tune and fixed
hyperparameters. In the latter case, the value is a fixed value.
init_config: A dictionary of a partial or full initial config,
e.g. {'interactions': set(), 'learning_rate': 0.5}
min_resource_lease: string or float | The minimum resource lease
assigned to a particular model/trial. If set as 'auto', it will
be calculated automatically.
automl_runner_args: A dictionary of configuration for the OnlineTrialRunner.
If set to {}, default values will be used, which is equivalent to using
the following config.
Example:
```python
automl_runner_args = {
"champion_test_policy": 'loss_ucb', # the statistic test for a better champion
"remove_worse": False, # whether to do worse than test
}
```
scheduler_args: A dictionary of configuration for the scheduler.
If set to {}, default values will be used, which is equivalent to using the
following config.
Example:
```python
scheduler_args = {
"keep_challenger_metric": 'ucb', # what metric to use when deciding the top performing challengers
"keep_challenger_ratio": 0.5, # denotes the ratio of top performing challengers to keep live
"keep_champion": True, # specifcies whether to keep the champion always running
}
```
model_select_policy: A string in ['threshold_loss_ucb',
'threshold_loss_lcb', 'threshold_loss_avg', 'loss_ucb', 'loss_lcb',
'loss_avg'] to specify how to select one model to do prediction from
the live model pool. Default value is 'threshold_loss_ucb'.
metric: A string in ['mae_clipped', 'mae', 'mse', 'absolute_clipped',
'absolute', 'squared'] to specify the name of the loss function used
for calculating the progressive validation loss in ChaCha.
random_seed: An integer of the random seed used in the searcher
(more specifically, this is the random seed for the ConfigOracle).
model_selection_mode: A string in ['min', 'max'] to specify the objective as
minimization or maximization.
cb_coef: A float coefficient (optional) used in the sample complexity bound.
|
def __init__(
self,
max_live_model_num: int,
search_space: dict,
init_config: Optional[dict] = {},
min_resource_lease: Optional[Union[str, float]] = "auto",
automl_runner_args: Optional[dict] = {},
scheduler_args: Optional[dict] = {},
model_select_policy: Optional[str] = "threshold_loss_ucb",
metric: Optional[str] = "mae_clipped",
random_seed: Optional[int] = None,
model_selection_mode: Optional[str] = "min",
cb_coef: Optional[float] = None,
):
"""Constructor.
Args:
max_live_model_num: An int to specify the maximum number of
'live' models, which, in other words, is the maximum number
of models allowed to update in each learning iteration.
search_space: A dictionary of the search space. This search space
includes both hyperparameters we want to tune and fixed
hyperparameters. In the latter case, the value is a fixed value.
init_config: A dictionary of a partial or full initial config,
e.g. {'interactions': set(), 'learning_rate': 0.5}
min_resource_lease: string or float | The minimum resource lease
assigned to a particular model/trial. If set as 'auto', it will
be calculated automatically.
automl_runner_args: A dictionary of configuration for the OnlineTrialRunner.
If set to {}, default values will be used, which is equivalent to using
the following config.
Example:
```python
automl_runner_args = {
"champion_test_policy": 'loss_ucb', # the statistic test for a better champion
"remove_worse": False, # whether to do worse than test
}
```
scheduler_args: A dictionary of configuration for the scheduler.
If set to {}, default values will be used, which is equivalent to using the
following config.
Example:
```python
scheduler_args = {
"keep_challenger_metric": 'ucb', # what metric to use when deciding the top performing challengers
"keep_challenger_ratio": 0.5, # denotes the ratio of top performing challengers to keep live
"keep_champion": True, # specifcies whether to keep the champion always running
}
```
model_select_policy: A string in ['threshold_loss_ucb',
'threshold_loss_lcb', 'threshold_loss_avg', 'loss_ucb', 'loss_lcb',
'loss_avg'] to specify how to select one model to do prediction from
the live model pool. Default value is 'threshold_loss_ucb'.
metric: A string in ['mae_clipped', 'mae', 'mse', 'absolute_clipped',
'absolute', 'squared'] to specify the name of the loss function used
for calculating the progressive validation loss in ChaCha.
random_seed: An integer of the random seed used in the searcher
(more specifically, this is the random seed for the ConfigOracle).
model_selection_mode: A string in ['min', 'max'] to specify the objective as
minimization or maximization.
cb_coef: A float coefficient (optional) used in the sample complexity bound.
"""
self._max_live_model_num = max_live_model_num
self._search_space = search_space
self._init_config = init_config
self._online_trial_args = {
"metric": metric,
"min_resource_lease": min_resource_lease,
"cb_coef": cb_coef,
}
self._automl_runner_args = automl_runner_args
self._scheduler_args = scheduler_args
self._model_select_policy = model_select_policy
self._model_selection_mode = model_selection_mode
self._random_seed = random_seed
self._trial_runner = None
self._best_trial = None
# code for debugging purpose
self._prediction_trial_id = None
self._iter = 0
|
(self, max_live_model_num: int, search_space: dict, init_config: Optional[dict] = {}, min_resource_lease: Union[str, float, NoneType] = 'auto', automl_runner_args: Optional[dict] = {}, scheduler_args: Optional[dict] = {}, model_select_policy: Optional[str] = 'threshold_loss_ucb', metric: Optional[str] = 'mae_clipped', random_seed: Optional[int] = None, model_selection_mode: Optional[str] = 'min', cb_coef: Optional[float] = None)
|
52,730 |
flaml.onlineml.autovw
|
_select_best_trial
|
Select a best trial from the running trials according to the _model_select_policy.
|
def _select_best_trial(self):
"""Select a best trial from the running trials according to the _model_select_policy."""
best_score = float("+inf") if self._model_selection_mode == "min" else float("-inf")
new_best_trial = None
for trial in self._trial_runner.running_trials:
if trial.result is not None and (
"threshold" not in self._model_select_policy or trial.result.resource_used >= self.WARMSTART_NUM
):
score = trial.result.get_score(self._model_select_policy)
if ("min" == self._model_selection_mode and score < best_score) or (
"max" == self._model_selection_mode and score > best_score
):
best_score = score
new_best_trial = trial
if new_best_trial is not None:
logger.debug("best_trial resource used: %s", new_best_trial.result.resource_used)
return new_best_trial
else:
# This branch will be triggered when the resource consumption of all trials is smaller
# than the WARMSTART_NUM threshold. In this case, we will select the _best_trial
# selected in the previous iteration.
if self._best_trial is not None and self._best_trial.status == Trial.RUNNING:
logger.debug("old best trial %s", self._best_trial.trial_id)
return self._best_trial
else:
# this will be triggered in the first iteration or in the iteration where we want
# to select the trial from the previous iteration but that trial has been paused
# (i.e., self._best_trial.status != Trial.RUNNING) by the scheduler.
logger.debug(
"using champion trial: %s",
self._trial_runner.champion_trial.trial_id,
)
return self._trial_runner.champion_trial
|
(self)
|
52,731 |
flaml.onlineml.autovw
|
_setup_trial_runner
|
Set up the _trial_runner based on one vw_example.
|
def _setup_trial_runner(self, vw_example):
"""Set up the _trial_runner based on one vw_example."""
# setup the default search space for the namespace interaction hyperparameter
search_space = self._search_space.copy()
for k, v in self._search_space.items():
if k == self.VW_INTERACTION_ARG_NAME and v == self.AUTOMATIC:
raw_namespaces = self.get_ns_feature_dim_from_vw_example(vw_example).keys()
search_space[k] = polynomial_expansion_set(init_monomials=set(raw_namespaces))
# setup the init config based on the input _init_config and search space
init_config = self._init_config.copy()
for k, v in search_space.items():
if k not in init_config.keys():
if isinstance(v, PolynomialExpansionSet):
init_config[k] = set()
elif not isinstance(v, Categorical) and not isinstance(v, Float):
init_config[k] = v
searcher_args = {
"init_config": init_config,
"space": search_space,
"random_seed": self._random_seed,
"online_trial_args": self._online_trial_args,
}
logger.info("original search_space %s", self._search_space)
logger.info("original init_config %s", self._init_config)
logger.info("searcher_args %s", searcher_args)
logger.info("scheduler_args %s", self._scheduler_args)
logger.info("automl_runner_args %s", self._automl_runner_args)
searcher = ChampionFrontierSearcher(**searcher_args)
scheduler = ChaChaScheduler(**self._scheduler_args)
self._trial_runner = OnlineTrialRunner(
max_live_model_num=self._max_live_model_num,
searcher=searcher,
scheduler=scheduler,
**self._automl_runner_args,
)
|
(self, vw_example)
|
52,732 |
flaml.onlineml.autovw
|
get_ns_feature_dim_from_vw_example
|
Get a dictionary of feature dimensionality for each namespace singleton.
|
@staticmethod
def get_ns_feature_dim_from_vw_example(vw_example) -> dict:
"""Get a dictionary of feature dimensionality for each namespace singleton."""
return get_ns_feature_dim_from_vw_example(vw_example)
|
(vw_example) -> dict
|
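To make "namespace singleton" concrete, a hedged sketch follows; the VW-format string and the exact return values are illustrative assumptions about VW's text format.
```python
from flaml import AutoVW

# In VW's text format, '|a' and '|b' open namespaces; here namespace 'a'
# carries two features and 'b' carries one, so the expected dimensionality
# map is {'a': 2, 'b': 1}.
vw_example = "1 |a x:0.5 y:1.0 |b z:2.0"
print(AutoVW.get_ns_feature_dim_from_vw_example(vw_example))
```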
52,733 |
flaml.onlineml.autovw
|
learn
|
Perform one online learning step with the given data sample.
Args:
data_sample: one data example in vw format. It will be used to
update the vw model.
|
def learn(self, data_sample):
"""Perform one online learning step with the given data sample.
Args:
data_sample: one data example in vw format. It will be used to
update the vw model.
"""
self._iter += 1
self._trial_runner.step(data_sample, (self._y_predict, self._best_trial))
|
(self, data_sample)
|
52,734 |
flaml.onlineml.autovw
|
predict
|
Predict on the input data sample.
Args:
data_sample: one data example in vw format.
|
def predict(self, data_sample):
"""Predict on the input data sample.
Args:
data_sample: one data example in vw format.
"""
if self._trial_runner is None:
self._setup_trial_runner(data_sample)
self._best_trial = self._select_best_trial()
self._y_predict = self._best_trial.predict(data_sample)
# code for debugging purpose
if self._prediction_trial_id is None or self._prediction_trial_id != self._best_trial.trial_id:
self._prediction_trial_id = self._best_trial.trial_id
logger.info(
"prediction trial id changed to %s at iter %s, resource used: %s",
self._prediction_trial_id,
self._iter,
self._best_trial.result.resource_used,
)
return self._y_predict
|
(self, data_sample)
|
52,735 |
flaml.tune.searcher.blendsearch
|
BlendSearch
|
class for BlendSearch algorithm.
|
class BlendSearch(Searcher):
"""class for BlendSearch algorithm."""
lagrange = "_lagrange" # suffix for lagrange-modified metric
LocalSearch = FLOW2
def __init__(
self,
metric: Optional[str] = None,
mode: Optional[str] = None,
space: Optional[dict] = None,
low_cost_partial_config: Optional[dict] = None,
cat_hp_cost: Optional[dict] = None,
points_to_evaluate: Optional[List[dict]] = None,
evaluated_rewards: Optional[List] = None,
time_budget_s: Union[int, float] = None,
num_samples: Optional[int] = None,
resource_attr: Optional[str] = None,
min_resource: Optional[float] = None,
max_resource: Optional[float] = None,
reduction_factor: Optional[float] = None,
global_search_alg: Optional[Searcher] = None,
config_constraints: Optional[List[Tuple[Callable[[dict], float], str, float]]] = None,
metric_constraints: Optional[List[Tuple[str, str, float]]] = None,
seed: Optional[int] = 20,
cost_attr: Optional[str] = "auto",
cost_budget: Optional[float] = None,
experimental: Optional[bool] = False,
lexico_objectives: Optional[dict] = None,
use_incumbent_result_in_evaluation=False,
allow_empty_config=False,
):
"""Constructor.
Args:
metric: A string of the metric name to optimize for.
mode: A string in ['min', 'max'] to specify the objective as
minimization or maximization.
space: A dictionary to specify the search space.
low_cost_partial_config: A dictionary from a subset of
controlled dimensions to the initial low-cost values.
E.g., ```{'n_estimators': 4, 'max_leaves': 4}```.
cat_hp_cost: A dictionary from a subset of categorical dimensions
to the relative cost of each choice.
E.g., ```{'tree_method': [1, 1, 2]}```.
I.e., the relative cost of the three choices of 'tree_method'
is 1, 1 and 2 respectively.
points_to_evaluate: Initial parameter suggestions to be run first.
evaluated_rewards (list): If you have previously evaluated the
parameters passed in as points_to_evaluate, you can avoid
re-running those trials by passing in the reward attributes
as a list so the optimizer can be told the results without
needing to re-compute the trial. Must be the same or shorter length than
points_to_evaluate. When provided, `mode` must be specified.
time_budget_s: int or float | Time budget in seconds.
num_samples: int | The number of configs to try. -1 means no limit on the
number of configs to try.
resource_attr: A string to specify the resource dimension; the best
performance is assumed to be at the max_resource.
min_resource: A float of the minimal resource to use for the resource_attr.
max_resource: A float of the maximal resource to use for the resource_attr.
reduction_factor: A float of the reduction factor used for
incremental pruning.
global_search_alg: A Searcher instance as the global search
instance. If omitted, Optuna is used. The following algos have
known issues when used as global_search_alg:
- HyperOptSearch sometimes raises exceptions
- TuneBOHB has its own scheduler
config_constraints: A list of config constraints to be satisfied.
E.g., ```config_constraints = [(mem_size, '<=', 1024**3)]```.
`mem_size` is a function which produces a float number for the bytes
needed for a config.
It is used to skip configs which do not fit in memory.
metric_constraints: A list of metric constraints to be satisfied.
E.g., `[('precision', '>=', 0.9)]`. The sign can be ">=" or "<=".
seed: An integer of the random seed.
cost_attr: None or str to specify the attribute to evaluate the cost of different trials.
Default is "auto", which means that we will automatically choose the cost attribute to use (depending
on the nature of the resource budget). When cost_attr is set to None, cost differences between different trials will be omitted
in our search algorithm. When cost_attr is set to a str different from "auto" and "time_total_s",
this cost_attr must be available in the result dict of the trial.
cost_budget: A float of the cost budget. Only valid when cost_attr is a str different from "auto" and "time_total_s".
lexico_objectives: dict, default=None | It specifies information needed to perform multi-objective
optimization with lexicographic preferences. This is only supported in CFO currently.
When lexico_objectives is not None, the arguments metric and mode will be invalid.
This dictionary shall contain the following fields of key-value pairs:
- "metrics": a list of optimization objectives with the orders reflecting the priorities/preferences of the
objectives.
- "modes" (optional): a list of optimization modes (each mode either "min" or "max") corresponding to the
objectives in the metric list. If not provided, we use "min" as the default mode for all the objectives.
- "targets" (optional): a dictionary to specify the optimization targets on the objectives. The keys are the
metric names (provided in "metric"), and the values are the numerical target values.
- "tolerances" (optional): a dictionary to specify the optimality tolerances on objectives. The keys are the metric names (provided in "metrics"), and the values are the absolute/percentage tolerance in the form of numeric/string.
E.g.,
```python
lexico_objectives = {
"metrics": ["error_rate", "pred_time"],
"modes": ["min", "min"],
"tolerances": {"error_rate": 0.01, "pred_time": 0.0},
"targets": {"error_rate": 0.0},
}
```
We also support percentage tolerance.
E.g.,
```python
lexico_objectives = {
"metrics": ["error_rate", "pred_time"],
"modes": ["min", "min"],
"tolerances": {"error_rate": "5%", "pred_time": "0%"},
"targets": {"error_rate": 0.0},
}
```
experimental: A bool of whether to use experimental features.
"""
self._eps = SEARCH_THREAD_EPS
self._input_cost_attr = cost_attr
if cost_attr == "auto":
if time_budget_s is not None:
self.cost_attr = TIME_TOTAL_S
else:
self.cost_attr = None
self._cost_budget = None
else:
self.cost_attr = cost_attr
self._cost_budget = cost_budget
self.penalty = PENALTY # penalty term for constraints
self._metric, self._mode = metric, mode
self._use_incumbent_result_in_evaluation = use_incumbent_result_in_evaluation
self.lexico_objectives = lexico_objectives
init_config = low_cost_partial_config or {}
if not init_config:
logger.info(
"No low-cost partial config given to the search algorithm. "
"For cost-frugal search, "
"consider providing low-cost values for cost-related hps via "
"'low_cost_partial_config'. More info can be found at "
"https://microsoft.github.io/FLAML/docs/FAQ#about-low_cost_partial_config-in-tune"
)
if evaluated_rewards:
assert mode, "mode must be specified when evaluated_rewards is provided."
self._points_to_evaluate = []
self._evaluated_rewards = []
n = len(evaluated_rewards)
self._evaluated_points = points_to_evaluate[:n]
new_points_to_evaluate = points_to_evaluate[n:]
self._all_rewards = evaluated_rewards
best = max(evaluated_rewards) if mode == "max" else min(evaluated_rewards)
# only keep the best points as start points
for i, r in enumerate(evaluated_rewards):
if r == best:
p = points_to_evaluate[i]
self._points_to_evaluate.append(p)
self._evaluated_rewards.append(r)
self._points_to_evaluate.extend(new_points_to_evaluate)
else:
self._points_to_evaluate = points_to_evaluate or []
self._evaluated_rewards = evaluated_rewards or []
self._config_constraints = config_constraints
self._metric_constraints = metric_constraints
if metric_constraints:
assert all(x[1] in ["<=", ">="] for x in metric_constraints), "sign of metric constraints must be <= or >=."
# metric modified by lagrange
metric += self.lagrange
self._cat_hp_cost = cat_hp_cost or {}
if space:
add_cost_to_space(space, init_config, self._cat_hp_cost)
self._ls = self.LocalSearch(
init_config,
metric,
mode,
space,
resource_attr,
min_resource,
max_resource,
reduction_factor,
self.cost_attr,
seed,
self.lexico_objectives,
)
if global_search_alg is not None:
self._gs = global_search_alg
elif getattr(self, "__name__", None) != "CFO":
if space and self._ls.hierarchical:
from functools import partial
gs_space = partial(define_by_run_func, space=space)
evaluated_rewards = None # not supported by define-by-run
else:
gs_space = space
gs_seed = seed - 10 if (seed - 10) >= 0 else seed - 11 + (1 << 32)
self._gs_seed = gs_seed
if experimental:
import optuna as ot
sampler = ot.samplers.TPESampler(seed=gs_seed, multivariate=True, group=True)
else:
sampler = None
try:
assert evaluated_rewards
self._gs = GlobalSearch(
space=gs_space,
metric=metric,
mode=mode,
seed=gs_seed,
sampler=sampler,
points_to_evaluate=self._evaluated_points,
evaluated_rewards=evaluated_rewards,
)
except (AssertionError, ValueError):
self._gs = GlobalSearch(
space=gs_space,
metric=metric,
mode=mode,
seed=gs_seed,
sampler=sampler,
)
self._gs.space = space
else:
self._gs = None
self._experimental = experimental
if getattr(self, "__name__", None) == "CFO" and points_to_evaluate and len(self._points_to_evaluate) > 1:
# use the best config in points_to_evaluate as the start point
self._candidate_start_points = {}
self._started_from_low_cost = not low_cost_partial_config
else:
self._candidate_start_points = None
self._time_budget_s, self._num_samples = time_budget_s, num_samples
self._allow_empty_config = allow_empty_config
if space is not None:
self._init_search()
def set_search_properties(
self,
metric: Optional[str] = None,
mode: Optional[str] = None,
config: Optional[Dict] = None,
**spec,
) -> bool:
metric_changed = mode_changed = False
if metric and self._metric != metric:
metric_changed = True
self._metric = metric
if self._metric_constraints:
# metric modified by lagrange
metric += self.lagrange
# TODO: don't change metric for global search methods that
# can handle constraints already
if mode and self._mode != mode:
mode_changed = True
self._mode = mode
if not self._ls.space:
# the search space can be set only once
if self._gs is not None:
# define-by-run is not supported via set_search_properties
self._gs.set_search_properties(metric, mode, config)
self._gs.space = config
if config:
add_cost_to_space(config, self._ls.init_config, self._cat_hp_cost)
self._ls.set_search_properties(metric, mode, config)
self._init_search()
else:
if metric_changed or mode_changed:
# reset search when metric or mode changed
self._ls.set_search_properties(metric, mode)
if self._gs is not None:
self._gs = GlobalSearch(
space=self._gs._space,
metric=metric,
mode=mode,
seed=self._gs_seed,
)
self._gs.space = self._ls.space
self._init_search()
if spec:
# CFO doesn't need these settings
if "time_budget_s" in spec:
self._time_budget_s = spec["time_budget_s"] # budget from now
now = time.time()
self._time_used += now - self._start_time
self._start_time = now
self._set_deadline()
if self._input_cost_attr == "auto" and self._time_budget_s:
self.cost_attr = self._ls.cost_attr = TIME_TOTAL_S
if "metric_target" in spec:
self._metric_target = spec.get("metric_target")
num_samples = spec.get("num_samples")
if num_samples is not None:
self._num_samples = (
(num_samples + len(self._result) + len(self._trial_proposed_by))
if num_samples > 0 # 0 is currently treated the same as -1
else num_samples
)
return True
def _set_deadline(self):
if self._time_budget_s is not None:
self._deadline = self._time_budget_s + self._start_time
self._set_eps()
else:
self._deadline = np.inf
def _set_eps(self):
"""set eps for search threads according to time budget"""
self._eps = max(min(self._time_budget_s / 1000.0, 1.0), 1e-9)
def _init_search(self):
"""initialize the search"""
self._start_time = time.time()
self._time_used = 0
self._set_deadline()
self._is_ls_ever_converged = False
self._subspace = {} # the subspace for each trial id
self._metric_target = np.inf * self._ls.metric_op
self._search_thread_pool = {
# id: int -> thread: SearchThread
0: SearchThread(self._ls.mode, self._gs, self.cost_attr, self._eps)
}
self._thread_count = 1 # total # threads created
self._init_used = self._ls.init_config is None
self._trial_proposed_by = {} # trial_id: str -> thread_id: int
self._ls_bound_min = normalize(
self._ls.init_config.copy(),
self._ls.space,
self._ls.init_config,
{},
recursive=True,
)
self._ls_bound_max = normalize(
self._ls.init_config.copy(),
self._ls.space,
self._ls.init_config,
{},
recursive=True,
)
self._gs_admissible_min = self._ls_bound_min.copy()
self._gs_admissible_max = self._ls_bound_max.copy()
if self._metric_constraints:
self._metric_constraint_satisfied = False
self._metric_constraint_penalty = [self.penalty for _ in self._metric_constraints]
else:
self._metric_constraint_satisfied = True
self._metric_constraint_penalty = None
self.best_resource = self._ls.min_resource
i = 0
# config_signature: tuple -> result: Dict
self._result = {}
self._cost_used = 0
while self._evaluated_rewards:
# go over the evaluated rewards
trial_id = f"trial_for_evaluated_{i}"
self.suggest(trial_id)
i += 1
def save(self, checkpoint_path: str):
"""save states to a checkpoint path."""
self._time_used += time.time() - self._start_time
self._start_time = time.time()
save_object = self
with open(checkpoint_path, "wb") as outputFile:
pickle.dump(save_object, outputFile)
def restore(self, checkpoint_path: str):
"""restore states from checkpoint."""
with open(checkpoint_path, "rb") as inputFile:
state = pickle.load(inputFile)
self.__dict__ = state.__dict__
self._start_time = time.time()
self._set_deadline()
@property
def metric_target(self):
return self._metric_target
@property
def is_ls_ever_converged(self):
return self._is_ls_ever_converged
def on_trial_complete(self, trial_id: str, result: Optional[Dict] = None, error: bool = False):
"""search thread updater and cleaner."""
metric_constraint_satisfied = True
if result and not error and self._metric_constraints:
# account for metric constraints if any
objective = result[self._metric]
for i, constraint in enumerate(self._metric_constraints):
metric_constraint, sign, threshold = constraint
value = result.get(metric_constraint)
if value:
sign_op = 1 if sign == "<=" else -1
violation = (value - threshold) * sign_op
if violation > 0:
# add penalty term to the metric
objective += self._metric_constraint_penalty[i] * violation * self._ls.metric_op
metric_constraint_satisfied = False
if self._metric_constraint_penalty[i] < self.penalty:
self._metric_constraint_penalty[i] += violation
result[self._metric + self.lagrange] = objective
if metric_constraint_satisfied and not self._metric_constraint_satisfied:
# found a feasible point
self._metric_constraint_penalty = [1 for _ in self._metric_constraints]
self._metric_constraint_satisfied |= metric_constraint_satisfied
thread_id = self._trial_proposed_by.get(trial_id)
if thread_id in self._search_thread_pool:
self._search_thread_pool[thread_id].on_trial_complete(trial_id, result, error)
del self._trial_proposed_by[trial_id]
if result:
config = result.get("config", {})
if not config:
for key, value in result.items():
if key.startswith("config/"):
config[key[7:]] = value
if self._allow_empty_config and not config:
return
signature = self._ls.config_signature(config, self._subspace.get(trial_id, {}))
if error: # remove from result cache
del self._result[signature]
else: # add to result cache
self._cost_used += result.get(self.cost_attr, 0)
self._result[signature] = result
# update target metric if improved
objective = result[self._ls.metric]
if (objective - self._metric_target) * self._ls.metric_op < 0:
self._metric_target = objective
if self._ls.resource:
self._best_resource = config[self._ls.resource_attr]
if thread_id:
if not self._metric_constraint_satisfied:
# no point has been found to satisfy metric constraint
self._expand_admissible_region(
self._ls_bound_min,
self._ls_bound_max,
self._subspace.get(trial_id, self._ls.space),
)
if self._gs is not None and self._experimental and (not self._ls.hierarchical):
self._gs.add_evaluated_point(flatten_dict(config), objective)
# TODO: recover when supported
# converted = convert_key(config, self._gs.space)
# logger.info(converted)
# self._gs.add_evaluated_point(converted, objective)
elif metric_constraint_satisfied and self._create_condition(result):
# thread creator
thread_id = self._thread_count
self._started_from_given = self._candidate_start_points and trial_id in self._candidate_start_points
if self._started_from_given:
del self._candidate_start_points[trial_id]
else:
self._started_from_low_cost = True
self._create_thread(config, result, self._subspace.get(trial_id, self._ls.space))
# reset admissible region to ls bounding box
self._gs_admissible_min.update(self._ls_bound_min)
self._gs_admissible_max.update(self._ls_bound_max)
# cleaner
if thread_id and thread_id in self._search_thread_pool:
# local search thread
self._clean(thread_id)
if trial_id in self._subspace and not (
self._candidate_start_points and trial_id in self._candidate_start_points
):
del self._subspace[trial_id]
def _create_thread(self, config, result, space):
if self.lexico_objectives is None:
obj = result[self._ls.metric]
else:
obj = {k: result[k] for k in self.lexico_objectives["metrics"]}
self._search_thread_pool[self._thread_count] = SearchThread(
self._ls.mode,
self._ls.create(
config,
obj,
cost=result.get(self.cost_attr, 1),
space=space,
),
self.cost_attr,
self._eps,
)
self._thread_count += 1
self._update_admissible_region(
unflatten_dict(config),
self._ls_bound_min,
self._ls_bound_max,
space,
self._ls.space,
)
def _update_admissible_region(
self,
config,
admissible_min,
admissible_max,
subspace: Dict = {},
space: Dict = {},
):
# update admissible region
normalized_config = normalize(config, subspace, config, {})
for key in admissible_min:
value = normalized_config[key]
if isinstance(admissible_max[key], list):
domain = space[key]
choice = indexof(domain, value)
self._update_admissible_region(
value,
admissible_min[key][choice],
admissible_max[key][choice],
subspace[key],
domain[choice],
)
if len(admissible_max[key]) > len(domain.categories):
# points + index
normal = (choice + 0.5) / len(domain.categories)
admissible_max[key][-1] = max(normal, admissible_max[key][-1])
admissible_min[key][-1] = min(normal, admissible_min[key][-1])
elif isinstance(value, dict):
self._update_admissible_region(
value,
admissible_min[key],
admissible_max[key],
subspace[key],
space[key],
)
else:
if value > admissible_max[key]:
admissible_max[key] = value
elif value < admissible_min[key]:
admissible_min[key] = value
def _create_condition(self, result: Dict) -> bool:
"""create thread condition"""
if len(self._search_thread_pool) < 2:
return True
obj_median = np.median([thread.obj_best1 for id, thread in self._search_thread_pool.items() if id])
return result[self._ls.metric] * self._ls.metric_op < obj_median
def _clean(self, thread_id: int):
"""delete thread and increase admissible region if converged,
merge local threads if they are close
"""
assert thread_id
todelete = set()
for id in self._search_thread_pool:
if id and id != thread_id:
if self._inferior(id, thread_id):
todelete.add(id)
for id in self._search_thread_pool:
if id and id != thread_id:
if self._inferior(thread_id, id):
todelete.add(thread_id)
break
create_new = False
if self._search_thread_pool[thread_id].converged:
self._is_ls_ever_converged = True
todelete.add(thread_id)
self._expand_admissible_region(
self._ls_bound_min,
self._ls_bound_max,
self._search_thread_pool[thread_id].space,
)
if self._candidate_start_points:
if not self._started_from_given:
# remove start points whose perf is worse than the converged
obj = self._search_thread_pool[thread_id].obj_best1
worse = [
trial_id
for trial_id, r in self._candidate_start_points.items()
if r and r[self._ls.metric] * self._ls.metric_op >= obj
]
# logger.info(f"remove candidate start points {worse} than {obj}")
for trial_id in worse:
del self._candidate_start_points[trial_id]
if self._candidate_start_points and self._started_from_low_cost:
create_new = True
for id in todelete:
del self._search_thread_pool[id]
if create_new:
self._create_thread_from_best_candidate()
def _create_thread_from_best_candidate(self):
# find the best start point
best_trial_id = None
obj_best = None
for trial_id, r in self._candidate_start_points.items():
if r and (best_trial_id is None or r[self._ls.metric] * self._ls.metric_op < obj_best):
best_trial_id = trial_id
obj_best = r[self._ls.metric] * self._ls.metric_op
if best_trial_id:
# create a new thread
config = {}
result = self._candidate_start_points[best_trial_id]
for key, value in result.items():
if key.startswith("config/"):
config[key[7:]] = value
self._started_from_given = True
del self._candidate_start_points[best_trial_id]
self._create_thread(config, result, self._subspace.get(best_trial_id, self._ls.space))
def _expand_admissible_region(self, lower, upper, space):
"""expand the admissible region for the subspace `space`"""
for key in upper:
ub = upper[key]
if isinstance(ub, list):
choice = space[key].get("_choice_")
if choice:
self._expand_admissible_region(lower[key][choice], upper[key][choice], space[key])
elif isinstance(ub, dict):
self._expand_admissible_region(lower[key], ub, space[key])
else:
upper[key] += self._ls.STEPSIZE
lower[key] -= self._ls.STEPSIZE
def _inferior(self, id1: int, id2: int) -> bool:
"""whether thread id1 is inferior to id2"""
t1 = self._search_thread_pool[id1]
t2 = self._search_thread_pool[id2]
if t1.obj_best1 < t2.obj_best2:
return False
elif t1.resource and t1.resource < t2.resource:
return False
elif t2.reach(t1):
return True
return False
def on_trial_result(self, trial_id: str, result: Dict):
"""receive intermediate result."""
if trial_id not in self._trial_proposed_by:
return
thread_id = self._trial_proposed_by[trial_id]
if thread_id not in self._search_thread_pool:
return
if result and self._metric_constraints:
result[self._metric + self.lagrange] = result[self._metric]
self._search_thread_pool[thread_id].on_trial_result(trial_id, result)
def suggest(self, trial_id: str) -> Optional[Dict]:
"""choose thread, suggest a valid config."""
if self._init_used and not self._points_to_evaluate:
if self._cost_budget and self._cost_used >= self._cost_budget:
return None
choice, backup = self._select_thread()
config = self._search_thread_pool[choice].suggest(trial_id)
if not choice and config is not None and self._ls.resource:
config[self._ls.resource_attr] = self.best_resource
elif choice and config is None:
# local search thread finishes
if self._search_thread_pool[choice].converged:
self._expand_admissible_region(
self._ls_bound_min,
self._ls_bound_max,
self._search_thread_pool[choice].space,
)
del self._search_thread_pool[choice]
return
# preliminary check; not checking config validation
space = self._search_thread_pool[choice].space
skip = self._should_skip(choice, trial_id, config, space)
use_rs = 0
if skip:
if choice:
return
# use rs when BO fails to suggest a config
config, space = self._ls.complete_config({})
skip = self._should_skip(-1, trial_id, config, space)
if skip:
return
use_rs = 1
if choice or self._valid(
config,
self._ls.space,
space,
self._gs_admissible_min,
self._gs_admissible_max,
):
# LS or valid or no backup choice
self._trial_proposed_by[trial_id] = choice
self._search_thread_pool[choice].running += use_rs
else: # invalid config proposed by GS
if choice == backup:
# use CFO's init point
init_config = self._ls.init_config
config, space = self._ls.complete_config(init_config, self._ls_bound_min, self._ls_bound_max)
self._trial_proposed_by[trial_id] = choice
self._search_thread_pool[choice].running += 1
else:
thread = self._search_thread_pool[backup]
config = thread.suggest(trial_id)
space = thread.space
skip = self._should_skip(backup, trial_id, config, space)
if skip:
return
self._trial_proposed_by[trial_id] = backup
choice = backup
if not choice: # global search
# temporarily relax admissible region for parallel proposals
self._update_admissible_region(
config,
self._gs_admissible_min,
self._gs_admissible_max,
space,
self._ls.space,
)
else:
self._update_admissible_region(
config,
self._ls_bound_min,
self._ls_bound_max,
space,
self._ls.space,
)
self._gs_admissible_min.update(self._ls_bound_min)
self._gs_admissible_max.update(self._ls_bound_max)
signature = self._ls.config_signature(config, space)
self._result[signature] = {}
self._subspace[trial_id] = space
else: # use init config
if self._candidate_start_points is not None and self._points_to_evaluate:
self._candidate_start_points[trial_id] = None
reward = None
if self._points_to_evaluate:
init_config = self._points_to_evaluate.pop(0)
if self._evaluated_rewards:
reward = self._evaluated_rewards.pop(0)
else:
init_config = self._ls.init_config
if self._allow_empty_config and not init_config:
assert reward is None, "Empty config can't have reward."
return init_config
config, space = self._ls.complete_config(init_config, self._ls_bound_min, self._ls_bound_max)
config_signature = self._ls.config_signature(config, space)
if reward is None:
result = self._result.get(config_signature)
if result: # tried before
return
elif result is None: # not tried before
if self._violate_config_constriants(config, config_signature):
# violate config constraints
return
self._result[config_signature] = {}
else: # running but no result yet
return
self._init_used = True
self._trial_proposed_by[trial_id] = 0
self._search_thread_pool[0].running += 1
self._subspace[trial_id] = space
if reward is not None:
result = {self._metric: reward, self.cost_attr: 1, "config": config}
# result = self._result[config_signature]
self.on_trial_complete(trial_id, result)
return
if self._use_incumbent_result_in_evaluation:
if self._trial_proposed_by[trial_id] > 0:
choice_thread = self._search_thread_pool[self._trial_proposed_by[trial_id]]
config[INCUMBENT_RESULT] = choice_thread.best_result
return config
def _violate_config_constriants(self, config, config_signature):
"""check if config violates config constraints.
If so, set the result to the worst value and return True.
"""
if not self._config_constraints:
return False
for constraint in self._config_constraints:
func, sign, threshold = constraint
value = func(config)
if (
sign == "<="
and value > threshold
or sign == ">="
and value < threshold
or sign == ">"
and value <= threshold
or sign == "<"
and value > threshold
):
self._result[config_signature] = {
self._metric: np.inf * self._ls.metric_op,
"time_total_s": 1,
}
return True
return False
def _should_skip(self, choice, trial_id, config, space) -> bool:
"""if config is None or config's result is known or constraints are violated
return True; otherwise return False
"""
if config is None:
return True
config_signature = self._ls.config_signature(config, space)
exists = config_signature in self._result
if not exists:
# check constraints
exists = self._violate_config_constriants(config, config_signature)
if exists: # suggested before (including violate constraints)
if choice >= 0: # not fallback to rs
result = self._result.get(config_signature)
if result: # finished
self._search_thread_pool[choice].on_trial_complete(trial_id, result, error=False)
if choice:
# local search thread
self._clean(choice)
# else: # running
# # tell the thread there is an error
# self._search_thread_pool[choice].on_trial_complete(
# trial_id, {}, error=True)
return True
return False
def _select_thread(self) -> Tuple:
"""thread selector; use can_suggest to check LS availability"""
# calculate min_eci according to the budget left
min_eci = np.inf
if self.cost_attr == TIME_TOTAL_S:
now = time.time()
min_eci = self._deadline - now
if min_eci <= 0:
# return -1, -1
# keep proposing new configs assuming no budget left
min_eci = 0
elif self._num_samples and self._num_samples > 0:
# estimate time left according to num_samples limitation
num_finished = len(self._result)
num_proposed = num_finished + len(self._trial_proposed_by)
num_left = max(self._num_samples - num_proposed, 0)
if num_proposed > 0:
time_used = now - self._start_time + self._time_used
min_eci = min(min_eci, time_used / num_finished * num_left)
# print(f"{min_eci}, {time_used / num_finished * num_left}, {num_finished}, {num_left}")
elif self.cost_attr is not None and self._cost_budget:
min_eci = max(self._cost_budget - self._cost_used, 0)
elif self._num_samples and self._num_samples > 0:
num_finished = len(self._result)
num_proposed = num_finished + len(self._trial_proposed_by)
min_eci = max(self._num_samples - num_proposed, 0)
# update priority
max_speed = 0
for thread in self._search_thread_pool.values():
if thread.speed > max_speed:
max_speed = thread.speed
for thread in self._search_thread_pool.values():
thread.update_eci(self._metric_target, max_speed)
if thread.eci < min_eci:
min_eci = thread.eci
for thread in self._search_thread_pool.values():
thread.update_priority(min_eci)
top_thread_id = backup_thread_id = 0
priority1 = priority2 = self._search_thread_pool[0].priority
for thread_id, thread in self._search_thread_pool.items():
if thread_id and thread.can_suggest:
priority = thread.priority
if priority > priority1:
priority1 = priority
top_thread_id = thread_id
if priority > priority2 or backup_thread_id == 0:
priority2 = priority
backup_thread_id = thread_id
return top_thread_id, backup_thread_id
def _valid(self, config: Dict, space: Dict, subspace: Dict, lower: Dict, upper: Dict) -> bool:
"""config validator"""
normalized_config = normalize(config, subspace, config, {})
for key, lb in lower.items():
if key in config:
value = normalized_config[key]
if isinstance(lb, list):
domain = space[key]
index = indexof(domain, value)
nestedspace = subspace[key]
lb = lb[index]
ub = upper[key][index]
elif isinstance(lb, dict):
nestedspace = subspace[key]
domain = space[key]
ub = upper[key]
else:
nestedspace = None
if nestedspace:
valid = self._valid(value, domain, nestedspace, lb, ub)
if not valid:
return False
elif value + self._ls.STEPSIZE < lower[key] or value > upper[key] + self._ls.STEPSIZE:
return False
return True
@property
def results(self) -> List[Dict]:
"""A list of dicts of results for each evaluated configuration.
Each dict has "config" and metric names as keys.
The returned list includes the initial results provided via `evaluated_rewards`.
"""
return [x for x in getattr(self, "_result", {}).values() if x]
|
(metric: Optional[str] = None, mode: Optional[str] = None, space: Optional[dict] = None, low_cost_partial_config: Optional[dict] = None, cat_hp_cost: Optional[dict] = None, points_to_evaluate: Optional[List[dict]] = None, evaluated_rewards: Optional[List] = None, time_budget_s: Union[int, float] = None, num_samples: Optional[int] = None, resource_attr: Optional[str] = None, min_resource: Optional[float] = None, max_resource: Optional[float] = None, reduction_factor: Optional[float] = None, global_search_alg: Optional[flaml.tune.searcher.suggestion.Searcher] = None, config_constraints: Optional[List[Tuple[Callable[[dict], float], str, float]]] = None, metric_constraints: Optional[List[Tuple[str, str, float]]] = None, seed: Optional[int] = 20, cost_attr: Optional[str] = 'auto', cost_budget: Optional[float] = None, experimental: Optional[bool] = False, lexico_objectives: Optional[dict] = None, use_incumbent_result_in_evaluation=False, allow_empty_config=False)
|
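For context, here is a minimal usage sketch of the searcher class documented above, assuming the standard `flaml.tune` entry point; the objective function, search space, and budgets are illustrative only, not taken from the source:
```python
from flaml import tune
from flaml.tune.searcher.blendsearch import BlendSearch

# toy objective: report a loss for each sampled config
def evaluation_function(config):
    return {"loss": (config["x"] - 2) ** 2 + config["y"]}

searcher = BlendSearch(
    metric="loss",
    mode="min",
    space={"x": tune.uniform(-5, 5), "y": tune.randint(0, 10)},
    low_cost_partial_config={"y": 0},  # cheap starting point
)
analysis = tune.run(
    evaluation_function,
    search_alg=searcher,
    num_samples=20,
    time_budget_s=10,
)
# the `results` property returns one dict per evaluated configuration
print(searcher.results[:3])
```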
52,736 |
flaml.tune.searcher.blendsearch
|
__init__
|
Constructor.
Args:
metric: A string of the metric name to optimize for.
mode: A string in ['min', 'max'] to specify the objective as
minimization or maximization.
space: A dictionary to specify the search space.
low_cost_partial_config: A dictionary from a subset of
controlled dimensions to the initial low-cost values.
E.g., ```{'n_estimators': 4, 'max_leaves': 4}```.
cat_hp_cost: A dictionary from a subset of categorical dimensions
to the relative cost of each choice.
E.g., ```{'tree_method': [1, 1, 2]}```.
I.e., the relative cost of the three choices of 'tree_method'
is 1, 1 and 2 respectively.
points_to_evaluate: Initial parameter suggestions to be run first.
evaluated_rewards (list): If you have previously evaluated the
parameters passed in as points_to_evaluate you can avoid
re-running those trials by passing in the reward attributes
as a list so the optimiser can be told the results without
needing to re-compute the trial. Must be the same or shorter length than
points_to_evaluate. When provided, `mode` must be specified.
time_budget_s: int or float | Time budget in seconds.
num_samples: int | The number of configs to try. -1 means no limit on the
number of configs to try.
resource_attr: A string to specify the resource dimension; the best
performance is assumed to be at the max_resource.
min_resource: A float of the minimal resource to use for the resource_attr.
max_resource: A float of the maximal resource to use for the resource_attr.
reduction_factor: A float of the reduction factor used for
incremental pruning.
global_search_alg: A Searcher instance as the global search
instance. If omitted, Optuna is used. The following algos have
known issues when used as global_search_alg:
- HyperOptSearch may raise exceptions
- TuneBOHB has its own scheduler
config_constraints: A list of config constraints to be satisfied.
E.g., ```config_constraints = [(mem_size, '<=', 1024**3)]```.
`mem_size` is a function which produces a float number for the bytes
needed for a config.
It is used to skip configs which do not fit in memory.
metric_constraints: A list of metric constraints to be satisfied.
E.g., `[('precision', '>=', 0.9)]`. The sign can be ">=" or "<=".
seed: An integer of the random seed.
cost_attr: None or str to specify the attribute to evaluate the cost of different trials.
Default is "auto", which means that we will automatically choose the cost attribute to use (depending
on the nature of the resource budget). When cost_attr is set to None, cost differences between different trials will be omitted
in our search algorithm. When cost_attr is set to a str different from "auto" and "time_total_s",
this cost_attr must be available in the result dict of the trial.
cost_budget: A float of the cost budget. Only valid when cost_attr is a str different from "auto" and "time_total_s".
lexico_objectives: dict, default=None | It specifies the information needed to perform multi-objective
optimization with lexicographic preferences. This is currently supported only in CFO.
When lexico_objectives is not None, the arguments metric and mode are ignored.
This dictionary should contain the following key-value pairs:
- "metrics": a list of optimization objectives with the orders reflecting the priorities/preferences of the
objectives.
- "modes" (optional): a list of optimization modes (each mode either "min" or "max") corresponding to the
objectives in the metric list. If not provided, we use "min" as the default mode for all the objectives.
- "targets" (optional): a dictionary to specify the optimization targets on the objectives. The keys are the
metric names (provided in "metrics"), and the values are the numerical target values.
- "tolerances" (optional): a dictionary to specify the optimality tolerances on objectives. The keys are the metric names (provided in "metrics"), and the values are the absolute/percentage tolerance in the form of numeric/string.
E.g.,
```python
lexico_objectives = {
"metrics": ["error_rate", "pred_time"],
"modes": ["min", "min"],
"tolerances": {"error_rate": 0.01, "pred_time": 0.0},
"targets": {"error_rate": 0.0},
}
```
We also support percentage tolerance.
E.g.,
```python
lexico_objectives = {
"metrics": ["error_rate", "pred_time"],
"modes": ["min", "min"],
"tolerances": {"error_rate": "5%", "pred_time": "0%"},
"targets": {"error_rate": 0.0},
}
```
experimental: A bool of whether to use experimental features.
|
def __init__(
self,
metric: Optional[str] = None,
mode: Optional[str] = None,
space: Optional[dict] = None,
low_cost_partial_config: Optional[dict] = None,
cat_hp_cost: Optional[dict] = None,
points_to_evaluate: Optional[List[dict]] = None,
evaluated_rewards: Optional[List] = None,
time_budget_s: Union[int, float] = None,
num_samples: Optional[int] = None,
resource_attr: Optional[str] = None,
min_resource: Optional[float] = None,
max_resource: Optional[float] = None,
reduction_factor: Optional[float] = None,
global_search_alg: Optional[Searcher] = None,
config_constraints: Optional[List[Tuple[Callable[[dict], float], str, float]]] = None,
metric_constraints: Optional[List[Tuple[str, str, float]]] = None,
seed: Optional[int] = 20,
cost_attr: Optional[str] = "auto",
cost_budget: Optional[float] = None,
experimental: Optional[bool] = False,
lexico_objectives: Optional[dict] = None,
use_incumbent_result_in_evaluation=False,
allow_empty_config=False,
):
"""Constructor.
Args:
metric: A string of the metric name to optimize for.
mode: A string in ['min', 'max'] to specify the objective as
minimization or maximization.
space: A dictionary to specify the search space.
low_cost_partial_config: A dictionary from a subset of
controlled dimensions to the initial low-cost values.
E.g., ```{'n_estimators': 4, 'max_leaves': 4}```.
cat_hp_cost: A dictionary from a subset of categorical dimensions
to the relative cost of each choice.
E.g., ```{'tree_method': [1, 1, 2]}```.
I.e., the relative cost of the three choices of 'tree_method'
is 1, 1 and 2 respectively.
points_to_evaluate: Initial parameter suggestions to be run first.
evaluated_rewards (list): If you have previously evaluated the
parameters passed in as points_to_evaluate you can avoid
re-running those trials by passing in the reward attributes
as a list so the optimiser can be told the results without
needing to re-compute the trial. Must be the same or shorter length than
points_to_evaluate. When provided, `mode` must be specified.
time_budget_s: int or float | Time budget in seconds.
num_samples: int | The number of configs to try. -1 means no limit on the
number of configs to try.
resource_attr: A string to specify the resource dimension; the best
performance is assumed to be at the max_resource.
min_resource: A float of the minimal resource to use for the resource_attr.
max_resource: A float of the maximal resource to use for the resource_attr.
reduction_factor: A float of the reduction factor used for
incremental pruning.
global_search_alg: A Searcher instance as the global search
instance. If omitted, Optuna is used. The following algos have
known issues when used as global_search_alg:
- HyperOptSearch may raise exceptions
- TuneBOHB has its own scheduler
config_constraints: A list of config constraints to be satisfied.
E.g., ```config_constraints = [(mem_size, '<=', 1024**3)]```.
`mem_size` is a function which produces a float number for the bytes
needed for a config.
It is used to skip configs which do not fit in memory.
metric_constraints: A list of metric constraints to be satisfied.
E.g., `[('precision', '>=', 0.9)]`. The sign can be ">=" or "<=".
seed: An integer of the random seed.
cost_attr: None or str to specify the attribute to evaluate the cost of different trials.
Default is "auto", which means that we will automatically choose the cost attribute to use (depending
on the nature of the resource budget). When cost_attr is set to None, cost differences between different trials will be omitted
in our search algorithm. When cost_attr is set to a str different from "auto" and "time_total_s",
this cost_attr must be available in the result dict of the trial.
cost_budget: A float of the cost budget. Only valid when cost_attr is a str different from "auto" and "time_total_s".
lexico_objectives: dict, default=None | It specifies the information needed to perform multi-objective
optimization with lexicographic preferences. This is currently supported only in CFO.
When lexico_objectives is not None, the arguments metric and mode are ignored.
This dictionary should contain the following key-value pairs:
- "metrics": a list of optimization objectives with the orders reflecting the priorities/preferences of the
objectives.
- "modes" (optional): a list of optimization modes (each mode either "min" or "max") corresponding to the
objectives in the metric list. If not provided, we use "min" as the default mode for all the objectives.
- "targets" (optional): a dictionary to specify the optimization targets on the objectives. The keys are the
metric names (provided in "metrics"), and the values are the numerical target values.
- "tolerances" (optional): a dictionary to specify the optimality tolerances on objectives. The keys are the metric names (provided in "metrics"), and the values are the absolute/percentage tolerance in the form of numeric/string.
E.g.,
```python
lexico_objectives = {
"metrics": ["error_rate", "pred_time"],
"modes": ["min", "min"],
"tolerances": {"error_rate": 0.01, "pred_time": 0.0},
"targets": {"error_rate": 0.0},
}
```
We also support percentage tolerance.
E.g.,
```python
lexico_objectives = {
"metrics": ["error_rate", "pred_time"],
"modes": ["min", "min"],
"tolerances": {"error_rate": "5%", "pred_time": "0%"},
"targets": {"error_rate": 0.0},
}
```
experimental: A bool of whether to use experimental features.
"""
self._eps = SEARCH_THREAD_EPS
self._input_cost_attr = cost_attr
if cost_attr == "auto":
if time_budget_s is not None:
self.cost_attr = TIME_TOTAL_S
else:
self.cost_attr = None
self._cost_budget = None
else:
self.cost_attr = cost_attr
self._cost_budget = cost_budget
self.penalty = PENALTY # penalty term for constraints
self._metric, self._mode = metric, mode
self._use_incumbent_result_in_evaluation = use_incumbent_result_in_evaluation
self.lexico_objectives = lexico_objectives
init_config = low_cost_partial_config or {}
if not init_config:
logger.info(
"No low-cost partial config given to the search algorithm. "
"For cost-frugal search, "
"consider providing low-cost values for cost-related hps via "
"'low_cost_partial_config'. More info can be found at "
"https://microsoft.github.io/FLAML/docs/FAQ#about-low_cost_partial_config-in-tune"
)
if evaluated_rewards:
assert mode, "mode must be specified when evaluated_rewards is provided."
self._points_to_evaluate = []
self._evaluated_rewards = []
n = len(evaluated_rewards)
self._evaluated_points = points_to_evaluate[:n]
new_points_to_evaluate = points_to_evaluate[n:]
self._all_rewards = evaluated_rewards
best = max(evaluated_rewards) if mode == "max" else min(evaluated_rewards)
# only keep the best points as start points
for i, r in enumerate(evaluated_rewards):
if r == best:
p = points_to_evaluate[i]
self._points_to_evaluate.append(p)
self._evaluated_rewards.append(r)
self._points_to_evaluate.extend(new_points_to_evaluate)
else:
self._points_to_evaluate = points_to_evaluate or []
self._evaluated_rewards = evaluated_rewards or []
self._config_constraints = config_constraints
self._metric_constraints = metric_constraints
if metric_constraints:
assert all(x[1] in ["<=", ">="] for x in metric_constraints), "sign of metric constraints must be <= or >=."
# metric modified by lagrange
metric += self.lagrange
self._cat_hp_cost = cat_hp_cost or {}
if space:
add_cost_to_space(space, init_config, self._cat_hp_cost)
self._ls = self.LocalSearch(
init_config,
metric,
mode,
space,
resource_attr,
min_resource,
max_resource,
reduction_factor,
self.cost_attr,
seed,
self.lexico_objectives,
)
if global_search_alg is not None:
self._gs = global_search_alg
elif getattr(self, "__name__", None) != "CFO":
if space and self._ls.hierarchical:
from functools import partial
gs_space = partial(define_by_run_func, space=space)
evaluated_rewards = None # not supported by define-by-run
else:
gs_space = space
gs_seed = seed - 10 if (seed - 10) >= 0 else seed - 11 + (1 << 32)
self._gs_seed = gs_seed
if experimental:
import optuna as ot
sampler = ot.samplers.TPESampler(seed=gs_seed, multivariate=True, group=True)
else:
sampler = None
try:
assert evaluated_rewards
self._gs = GlobalSearch(
space=gs_space,
metric=metric,
mode=mode,
seed=gs_seed,
sampler=sampler,
points_to_evaluate=self._evaluated_points,
evaluated_rewards=evaluated_rewards,
)
except (AssertionError, ValueError):
self._gs = GlobalSearch(
space=gs_space,
metric=metric,
mode=mode,
seed=gs_seed,
sampler=sampler,
)
self._gs.space = space
else:
self._gs = None
self._experimental = experimental
if getattr(self, "__name__", None) == "CFO" and points_to_evaluate and len(self._points_to_evaluate) > 1:
# use the best config in points_to_evaluate as the start point
self._candidate_start_points = {}
self._started_from_low_cost = not low_cost_partial_config
else:
self._candidate_start_points = None
self._time_budget_s, self._num_samples = time_budget_s, num_samples
self._allow_empty_config = allow_empty_config
if space is not None:
self._init_search()
|
(self, metric: Optional[str] = None, mode: Optional[str] = None, space: Optional[dict] = None, low_cost_partial_config: Optional[dict] = None, cat_hp_cost: Optional[dict] = None, points_to_evaluate: Optional[List[dict]] = None, evaluated_rewards: Optional[List] = None, time_budget_s: Union[int, float, NoneType] = None, num_samples: Optional[int] = None, resource_attr: Optional[str] = None, min_resource: Optional[float] = None, max_resource: Optional[float] = None, reduction_factor: Optional[float] = None, global_search_alg: Optional[flaml.tune.searcher.suggestion.Searcher] = None, config_constraints: Optional[List[Tuple[Callable[[dict], float], str, float]]] = None, metric_constraints: Optional[List[Tuple[str, str, float]]] = None, seed: Optional[int] = 20, cost_attr: Optional[str] = 'auto', cost_budget: Optional[float] = None, experimental: Optional[bool] = False, lexico_objectives: Optional[dict] = None, use_incumbent_result_in_evaluation=False, allow_empty_config=False)
|
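A hedged sketch of warm-starting the constructor above with previously evaluated points; per the docstring, `evaluated_rewards` must not be longer than `points_to_evaluate` and `mode` must be given. The space and reward values here are invented for illustration:
```python
from flaml import tune
from flaml.tune.searcher.blendsearch import BlendSearch

searcher = BlendSearch(
    metric="loss",
    mode="min",
    space={"lr": tune.loguniform(1e-5, 1e-1), "bs": tune.choice([16, 32, 64])},
    # two configs already tried elsewhere, with their observed losses;
    # the best of these becomes a local-search start point
    points_to_evaluate=[{"lr": 1e-3, "bs": 32}, {"lr": 1e-2, "bs": 16}],
    evaluated_rewards=[0.21, 0.35],
    low_cost_partial_config={"bs": 16},
)
```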
52,737 |
flaml.tune.searcher.blendsearch
|
_clean
|
delete thread and increase admissible region if converged,
merge local threads if they are close
|
def _clean(self, thread_id: int):
"""delete thread and increase admissible region if converged,
merge local threads if they are close
"""
assert thread_id
todelete = set()
for id in self._search_thread_pool:
if id and id != thread_id:
if self._inferior(id, thread_id):
todelete.add(id)
for id in self._search_thread_pool:
if id and id != thread_id:
if self._inferior(thread_id, id):
todelete.add(thread_id)
break
create_new = False
if self._search_thread_pool[thread_id].converged:
self._is_ls_ever_converged = True
todelete.add(thread_id)
self._expand_admissible_region(
self._ls_bound_min,
self._ls_bound_max,
self._search_thread_pool[thread_id].space,
)
if self._candidate_start_points:
if not self._started_from_given:
# remove start points whose perf is worse than the converged
obj = self._search_thread_pool[thread_id].obj_best1
worse = [
trial_id
for trial_id, r in self._candidate_start_points.items()
if r and r[self._ls.metric] * self._ls.metric_op >= obj
]
# logger.info(f"remove candidate start points {worse} than {obj}")
for trial_id in worse:
del self._candidate_start_points[trial_id]
if self._candidate_start_points and self._started_from_low_cost:
create_new = True
for id in todelete:
del self._search_thread_pool[id]
if create_new:
self._create_thread_from_best_candidate()
|
(self, thread_id: int)
|
52,738 |
flaml.tune.searcher.blendsearch
|
_create_condition
|
create thread condition
|
def _create_condition(self, result: Dict) -> bool:
"""create thread condition"""
if len(self._search_thread_pool) < 2:
return True
obj_median = np.median([thread.obj_best1 for id, thread in self._search_thread_pool.items() if id])
return result[self._ls.metric] * self._ls.metric_op < obj_median
|
(self, result: Dict) -> bool
|
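To make the condition above concrete, a small worked example with hypothetical objective values: a new local-search thread is created only when the sign-adjusted result beats the median of the existing local threads' best objectives (thread 0, the global thread, is excluded):
```python
import numpy as np

# best objectives of existing local threads, already multiplied by
# metric_op so that smaller is always better
obj_best1 = [0.30, 0.25, 0.40]
obj_median = np.median(obj_best1)  # 0.30

new_result = 0.28  # metric * metric_op of the finished trial
create_thread = new_result < obj_median
print(create_thread)  # True: 0.28 beats the median 0.30
```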
52,739 |
flaml.tune.searcher.blendsearch
|
_create_thread
| null |
def _create_thread(self, config, result, space):
if self.lexico_objectives is None:
obj = result[self._ls.metric]
else:
obj = {k: result[k] for k in self.lexico_objectives["metrics"]}
self._search_thread_pool[self._thread_count] = SearchThread(
self._ls.mode,
self._ls.create(
config,
obj,
cost=result.get(self.cost_attr, 1),
space=space,
),
self.cost_attr,
self._eps,
)
self._thread_count += 1
self._update_admissible_region(
unflatten_dict(config),
self._ls_bound_min,
self._ls_bound_max,
space,
self._ls.space,
)
|
(self, config, result, space)
|
52,740 |
flaml.tune.searcher.blendsearch
|
_create_thread_from_best_candidate
| null |
def _create_thread_from_best_candidate(self):
# find the best start point
best_trial_id = None
obj_best = None
for trial_id, r in self._candidate_start_points.items():
if r and (best_trial_id is None or r[self._ls.metric] * self._ls.metric_op < obj_best):
best_trial_id = trial_id
obj_best = r[self._ls.metric] * self._ls.metric_op
if best_trial_id:
# create a new thread
config = {}
result = self._candidate_start_points[best_trial_id]
for key, value in result.items():
if key.startswith("config/"):
config[key[7:]] = value
self._started_from_given = True
del self._candidate_start_points[best_trial_id]
self._create_thread(config, result, self._subspace.get(best_trial_id, self._ls.space))
|
(self)
|
52,741 |
flaml.tune.searcher.blendsearch
|
_expand_admissible_region
|
expand the admissible region for the subspace `space`
|
def _expand_admissible_region(self, lower, upper, space):
"""expand the admissible region for the subspace `space`"""
for key in upper:
ub = upper[key]
if isinstance(ub, list):
choice = space[key].get("_choice_")
if choice:
self._expand_admissible_region(lower[key][choice], upper[key][choice], space[key])
elif isinstance(ub, dict):
self._expand_admissible_region(lower[key], ub, space[key])
else:
upper[key] += self._ls.STEPSIZE
lower[key] -= self._ls.STEPSIZE
|
(self, lower, upper, space)
|
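A simplified illustration of the expansion above for a flat numeric subspace, assuming the FLOW2 default `STEPSIZE = 0.1` shown later in this file; the recursive handling of nested dicts and `_choice_` entries is omitted:
```python
STEPSIZE = 0.1  # FLOW2.STEPSIZE

def expand(lower, upper):
    # widen the admissible interval of every normalized dimension
    for key in upper:
        upper[key] += STEPSIZE
        lower[key] -= STEPSIZE

lower, upper = {"x": 0.4, "y": 0.5}, {"x": 0.6, "y": 0.7}
expand(lower, upper)
# x: [0.4, 0.6] -> approximately [0.3, 0.7]
# y: [0.5, 0.7] -> approximately [0.4, 0.8]
print(lower, upper)
```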
52,742 |
flaml.tune.searcher.blendsearch
|
_inferior
|
whether thread id1 is inferior to id2
|
def _inferior(self, id1: int, id2: int) -> bool:
"""whether thread id1 is inferior to id2"""
t1 = self._search_thread_pool[id1]
t2 = self._search_thread_pool[id2]
if t1.obj_best1 < t2.obj_best2:
return False
elif t1.resource and t1.resource < t2.resource:
return False
elif t2.reach(t1):
return True
return False
|
(self, id1: int, id2: int) -> bool
|
52,743 |
flaml.tune.searcher.blendsearch
|
_init_search
|
initialize the search
|
def _init_search(self):
"""initialize the search"""
self._start_time = time.time()
self._time_used = 0
self._set_deadline()
self._is_ls_ever_converged = False
self._subspace = {} # the subspace for each trial id
self._metric_target = np.inf * self._ls.metric_op
self._search_thread_pool = {
# id: int -> thread: SearchThread
0: SearchThread(self._ls.mode, self._gs, self.cost_attr, self._eps)
}
self._thread_count = 1 # total # threads created
self._init_used = self._ls.init_config is None
self._trial_proposed_by = {} # trial_id: str -> thread_id: int
self._ls_bound_min = normalize(
self._ls.init_config.copy(),
self._ls.space,
self._ls.init_config,
{},
recursive=True,
)
self._ls_bound_max = normalize(
self._ls.init_config.copy(),
self._ls.space,
self._ls.init_config,
{},
recursive=True,
)
self._gs_admissible_min = self._ls_bound_min.copy()
self._gs_admissible_max = self._ls_bound_max.copy()
if self._metric_constraints:
self._metric_constraint_satisfied = False
self._metric_constraint_penalty = [self.penalty for _ in self._metric_constraints]
else:
self._metric_constraint_satisfied = True
self._metric_constraint_penalty = None
self.best_resource = self._ls.min_resource
i = 0
# config_signature: tuple -> result: Dict
self._result = {}
self._cost_used = 0
while self._evaluated_rewards:
# go over the evaluated rewards
trial_id = f"trial_for_evaluated_{i}"
self.suggest(trial_id)
i += 1
|
(self)
|
52,744 |
flaml.tune.searcher.blendsearch
|
_select_thread
|
thread selector; use can_suggest to check LS availability
|
def _select_thread(self) -> Tuple:
"""thread selector; use can_suggest to check LS availability"""
# calculate min_eci according to the budget left
min_eci = np.inf
if self.cost_attr == TIME_TOTAL_S:
now = time.time()
min_eci = self._deadline - now
if min_eci <= 0:
# return -1, -1
# keep proposing new configs assuming no budget left
min_eci = 0
elif self._num_samples and self._num_samples > 0:
# estimate time left according to num_samples limitation
num_finished = len(self._result)
num_proposed = num_finished + len(self._trial_proposed_by)
num_left = max(self._num_samples - num_proposed, 0)
if num_proposed > 0:
time_used = now - self._start_time + self._time_used
min_eci = min(min_eci, time_used / num_finished * num_left)
# print(f"{min_eci}, {time_used / num_finished * num_left}, {num_finished}, {num_left}")
elif self.cost_attr is not None and self._cost_budget:
min_eci = max(self._cost_budget - self._cost_used, 0)
elif self._num_samples and self._num_samples > 0:
num_finished = len(self._result)
num_proposed = num_finished + len(self._trial_proposed_by)
min_eci = max(self._num_samples - num_proposed, 0)
# update priority
max_speed = 0
for thread in self._search_thread_pool.values():
if thread.speed > max_speed:
max_speed = thread.speed
for thread in self._search_thread_pool.values():
thread.update_eci(self._metric_target, max_speed)
if thread.eci < min_eci:
min_eci = thread.eci
for thread in self._search_thread_pool.values():
thread.update_priority(min_eci)
top_thread_id = backup_thread_id = 0
priority1 = priority2 = self._search_thread_pool[0].priority
for thread_id, thread in self._search_thread_pool.items():
if thread_id and thread.can_suggest:
priority = thread.priority
if priority > priority1:
priority1 = priority
top_thread_id = thread_id
if priority > priority2 or backup_thread_id == 0:
priority2 = priority
backup_thread_id = thread_id
return top_thread_id, backup_thread_id
|
(self) -> Tuple
|
52,745 |
flaml.tune.searcher.blendsearch
|
_set_deadline
| null |
def _set_deadline(self):
if self._time_budget_s is not None:
self._deadline = self._time_budget_s + self._start_time
self._set_eps()
else:
self._deadline = np.inf
|
(self)
|
52,746 |
flaml.tune.searcher.blendsearch
|
_set_eps
|
set eps for search threads according to time budget
|
def _set_eps(self):
"""set eps for search threads according to time budget"""
self._eps = max(min(self._time_budget_s / 1000.0, 1.0), 1e-9)
|
(self)
|
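The formula above clamps eps to the range [1e-9, 1.0]; two hypothetical budgets make the behavior concrete:
```python
def set_eps(time_budget_s):
    # mirrors _set_eps: eps scales with the budget but is clamped
    return max(min(time_budget_s / 1000.0, 1.0), 1e-9)

print(set_eps(120))   # 0.12 -> short budget, small eps
print(set_eps(5000))  # 1.0  -> capped at 1.0
```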
52,747 |
flaml.tune.searcher.blendsearch
|
_should_skip
|
if config is None or config's result is known or constraints are violated
return True; o.w. return False
|
def _should_skip(self, choice, trial_id, config, space) -> bool:
"""if config is None or config's result is known or constraints are violated
return True; o.w. return False
"""
if config is None:
return True
config_signature = self._ls.config_signature(config, space)
exists = config_signature in self._result
if not exists:
# check constraints
exists = self._violate_config_constriants(config, config_signature)
if exists: # suggested before (including violate constraints)
if choice >= 0: # not fallback to rs
result = self._result.get(config_signature)
if result: # finished
self._search_thread_pool[choice].on_trial_complete(trial_id, result, error=False)
if choice:
# local search thread
self._clean(choice)
# else: # running
# # tell the thread there is an error
# self._search_thread_pool[choice].on_trial_complete(
# trial_id, {}, error=True)
return True
return False
|
(self, choice, trial_id, config, space) -> bool
|
52,748 |
flaml.tune.searcher.blendsearch
|
_update_admissible_region
| null |
def _update_admissible_region(
self,
config,
admissible_min,
admissible_max,
subspace: Dict = {},
space: Dict = {},
):
# update admissible region
normalized_config = normalize(config, subspace, config, {})
for key in admissible_min:
value = normalized_config[key]
if isinstance(admissible_max[key], list):
domain = space[key]
choice = indexof(domain, value)
self._update_admissible_region(
value,
admissible_min[key][choice],
admissible_max[key][choice],
subspace[key],
domain[choice],
)
if len(admissible_max[key]) > len(domain.categories):
# points + index
normal = (choice + 0.5) / len(domain.categories)
admissible_max[key][-1] = max(normal, admissible_max[key][-1])
admissible_min[key][-1] = min(normal, admissible_min[key][-1])
elif isinstance(value, dict):
self._update_admissible_region(
value,
admissible_min[key],
admissible_max[key],
subspace[key],
space[key],
)
else:
if value > admissible_max[key]:
admissible_max[key] = value
elif value < admissible_min[key]:
admissible_min[key] = value
|
(self, config, admissible_min, admissible_max, subspace: Dict = {}, space: Dict = {})
|
52,749 |
flaml.tune.searcher.blendsearch
|
_valid
|
config validator
|
def _valid(self, config: Dict, space: Dict, subspace: Dict, lower: Dict, upper: Dict) -> bool:
"""config validator"""
normalized_config = normalize(config, subspace, config, {})
for key, lb in lower.items():
if key in config:
value = normalized_config[key]
if isinstance(lb, list):
domain = space[key]
index = indexof(domain, value)
nestedspace = subspace[key]
lb = lb[index]
ub = upper[key][index]
elif isinstance(lb, dict):
nestedspace = subspace[key]
domain = space[key]
ub = upper[key]
else:
nestedspace = None
if nestedspace:
valid = self._valid(value, domain, nestedspace, lb, ub)
if not valid:
return False
elif value + self._ls.STEPSIZE < lower[key] or value > upper[key] + self._ls.STEPSIZE:
return False
return True
|
(self, config: Dict, space: Dict, subspace: Dict, lower: Dict, upper: Dict) -> bool
|
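The scalar branch of the validator above accepts a normalized value if it lies within the admissible interval widened by `STEPSIZE` on both sides; a toy check with invented values:
```python
STEPSIZE = 0.1  # FLOW2.STEPSIZE

def in_admissible(value, lower, upper):
    # mirrors: invalid if value + STEPSIZE < lower or value > upper + STEPSIZE
    return not (value + STEPSIZE < lower or value > upper + STEPSIZE)

print(in_admissible(0.25, 0.3, 0.6))  # True: within lower - STEPSIZE
print(in_admissible(0.75, 0.3, 0.6))  # False: above upper + STEPSIZE
```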
52,750 |
flaml.tune.searcher.blendsearch
|
_violate_config_constriants
|
check if config violates config constraints.
If so, set the result to the worst value and return True.
|
def _violate_config_constriants(self, config, config_signature):
"""check if config violates config constraints.
If so, set the result to the worst value and return True.
"""
if not self._config_constraints:
return False
for constraint in self._config_constraints:
func, sign, threshold = constraint
value = func(config)
if (
sign == "<="
and value > threshold
or sign == ">="
and value < threshold
or sign == ">"
and value <= threshold
or sign == "<"
and value > threshold
):
self._result[config_signature] = {
self._metric: np.inf * self._ls.metric_op,
"time_total_s": 1,
}
return True
return False
|
(self, config, config_signature)
|
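A sketch of the `config_constraints` format that the check above consumes; `mem_size` is a hypothetical cost function and the threshold and config values are illustrative:
```python
# each constraint is (func, sign, threshold); a config is rejected when
# func(config) falls on the wrong side of the threshold
def mem_size(config):
    # hypothetical estimate of bytes needed by a config
    return config["n_estimators"] * config["max_leaves"] * 8_000

config_constraints = [(mem_size, "<=", 1024**3)]

func, sign, threshold = config_constraints[0]
config = {"n_estimators": 500, "max_leaves": 512}
violated = sign == "<=" and func(config) > threshold
print(violated)  # True: the ~2.0 GB estimate exceeds the 1 GiB budget
```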
52,751 |
flaml.tune.searcher.blendsearch
|
on_trial_complete
|
search thread updater and cleaner.
|
def on_trial_complete(self, trial_id: str, result: Optional[Dict] = None, error: bool = False):
"""search thread updater and cleaner."""
metric_constraint_satisfied = True
if result and not error and self._metric_constraints:
# account for metric constraints if any
objective = result[self._metric]
for i, constraint in enumerate(self._metric_constraints):
metric_constraint, sign, threshold = constraint
value = result.get(metric_constraint)
if value:
sign_op = 1 if sign == "<=" else -1
violation = (value - threshold) * sign_op
if violation > 0:
# add penalty term to the metric
objective += self._metric_constraint_penalty[i] * violation * self._ls.metric_op
metric_constraint_satisfied = False
if self._metric_constraint_penalty[i] < self.penalty:
self._metric_constraint_penalty[i] += violation
result[self._metric + self.lagrange] = objective
if metric_constraint_satisfied and not self._metric_constraint_satisfied:
# found a feasible point
self._metric_constraint_penalty = [1 for _ in self._metric_constraints]
self._metric_constraint_satisfied |= metric_constraint_satisfied
thread_id = self._trial_proposed_by.get(trial_id)
if thread_id in self._search_thread_pool:
self._search_thread_pool[thread_id].on_trial_complete(trial_id, result, error)
del self._trial_proposed_by[trial_id]
if result:
config = result.get("config", {})
if not config:
for key, value in result.items():
if key.startswith("config/"):
config[key[7:]] = value
if self._allow_empty_config and not config:
return
signature = self._ls.config_signature(config, self._subspace.get(trial_id, {}))
if error: # remove from result cache
del self._result[signature]
else: # add to result cache
self._cost_used += result.get(self.cost_attr, 0)
self._result[signature] = result
# update target metric if improved
objective = result[self._ls.metric]
if (objective - self._metric_target) * self._ls.metric_op < 0:
self._metric_target = objective
if self._ls.resource:
self._best_resource = config[self._ls.resource_attr]
if thread_id:
if not self._metric_constraint_satisfied:
# no point has been found to satisfy metric constraint
self._expand_admissible_region(
self._ls_bound_min,
self._ls_bound_max,
self._subspace.get(trial_id, self._ls.space),
)
if self._gs is not None and self._experimental and (not self._ls.hierarchical):
self._gs.add_evaluated_point(flatten_dict(config), objective)
# TODO: recover when supported
# converted = convert_key(config, self._gs.space)
# logger.info(converted)
# self._gs.add_evaluated_point(converted, objective)
elif metric_constraint_satisfied and self._create_condition(result):
# thread creator
thread_id = self._thread_count
self._started_from_given = self._candidate_start_points and trial_id in self._candidate_start_points
if self._started_from_given:
del self._candidate_start_points[trial_id]
else:
self._started_from_low_cost = True
self._create_thread(config, result, self._subspace.get(trial_id, self._ls.space))
# reset admissible region to ls bounding box
self._gs_admissible_min.update(self._ls_bound_min)
self._gs_admissible_max.update(self._ls_bound_max)
# cleaner
if thread_id and thread_id in self._search_thread_pool:
# local search thread
self._clean(thread_id)
if trial_id in self._subspace and not (
self._candidate_start_points and trial_id in self._candidate_start_points
):
del self._subspace[trial_id]
|
(self, trial_id: str, result: Optional[Dict] = None, error: bool = False)
|
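A worked example of the metric-constraint penalty applied above, with hypothetical numbers (minimization, so `metric_op = 1`):
```python
penalty = 1.0                # current _metric_constraint_penalty[i]
metric_op = 1                # "min" mode
threshold, value = 0.9, 0.8  # constraint: ("precision", ">=", 0.9)
sign_op = -1                 # ">=" constraint flips the sign

violation = (value - threshold) * sign_op  # (0.8 - 0.9) * -1 = 0.1 > 0

objective = 0.25  # raw metric of the trial
objective += penalty * violation * metric_op
print(round(objective, 4))  # 0.35: the lagrange-adjusted metric stored in result
```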
52,752 |
flaml.tune.searcher.blendsearch
|
on_trial_result
|
receive intermediate result.
|
def on_trial_result(self, trial_id: str, result: Dict):
"""receive intermediate result."""
if trial_id not in self._trial_proposed_by:
return
thread_id = self._trial_proposed_by[trial_id]
if thread_id not in self._search_thread_pool:
return
if result and self._metric_constraints:
result[self._metric + self.lagrange] = result[self._metric]
self._search_thread_pool[thread_id].on_trial_result(trial_id, result)
|
(self, trial_id: str, result: Dict)
|
52,753 |
flaml.tune.searcher.blendsearch
|
restore
|
restore states from checkpoint.
|
def restore(self, checkpoint_path: str):
"""restore states from checkpoint."""
with open(checkpoint_path, "rb") as inputFile:
state = pickle.load(inputFile)
self.__dict__ = state.__dict__
self._start_time = time.time()
self._set_deadline()
|
(self, checkpoint_path: str)
|
52,754 |
flaml.tune.searcher.blendsearch
|
save
|
save states to a checkpoint path.
|
def save(self, checkpoint_path: str):
"""save states to a checkpoint path."""
self._time_used += time.time() - self._start_time
self._start_time = time.time()
save_object = self
with open(checkpoint_path, "wb") as outputFile:
pickle.dump(save_object, outputFile)
|
(self, checkpoint_path: str)
|
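A minimal sketch of checkpointing the searcher with the two methods above; the search space and path are illustrative:
```python
from flaml import tune
from flaml.tune.searcher.blendsearch import BlendSearch

space = {"x": tune.uniform(0, 1)}
searcher = BlendSearch(metric="loss", mode="min", space=space)
searcher.save("/tmp/blendsearch.ckpt")  # pickles the whole searcher state

# later, possibly after a restart: state is restored and the deadline reset
restored = BlendSearch(metric="loss", mode="min", space=space)
restored.restore("/tmp/blendsearch.ckpt")
```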
52,755 |
flaml.tune.searcher.blendsearch
|
set_search_properties
| null |
def set_search_properties(
self,
metric: Optional[str] = None,
mode: Optional[str] = None,
config: Optional[Dict] = None,
**spec,
) -> bool:
metric_changed = mode_changed = False
if metric and self._metric != metric:
metric_changed = True
self._metric = metric
if self._metric_constraints:
# metric modified by lagrange
metric += self.lagrange
# TODO: don't change metric for global search methods that
# can handle constraints already
if mode and self._mode != mode:
mode_changed = True
self._mode = mode
if not self._ls.space:
# the search space can be set only once
if self._gs is not None:
# define-by-run is not supported via set_search_properties
self._gs.set_search_properties(metric, mode, config)
self._gs.space = config
if config:
add_cost_to_space(config, self._ls.init_config, self._cat_hp_cost)
self._ls.set_search_properties(metric, mode, config)
self._init_search()
else:
if metric_changed or mode_changed:
# reset search when metric or mode changed
self._ls.set_search_properties(metric, mode)
if self._gs is not None:
self._gs = GlobalSearch(
space=self._gs._space,
metric=metric,
mode=mode,
seed=self._gs_seed,
)
self._gs.space = self._ls.space
self._init_search()
if spec:
# CFO doesn't need these settings
if "time_budget_s" in spec:
self._time_budget_s = spec["time_budget_s"] # budget from now
now = time.time()
self._time_used += now - self._start_time
self._start_time = now
self._set_deadline()
if self._input_cost_attr == "auto" and self._time_budget_s:
self.cost_attr = self._ls.cost_attr = TIME_TOTAL_S
if "metric_target" in spec:
self._metric_target = spec.get("metric_target")
num_samples = spec.get("num_samples")
if num_samples is not None:
self._num_samples = (
(num_samples + len(self._result) + len(self._trial_proposed_by))
if num_samples > 0 # 0 is currently treated the same as -1
else num_samples
)
return True
|
(self, metric: Optional[str] = None, mode: Optional[str] = None, config: Optional[Dict] = None, **spec) -> bool
|
52,756 |
flaml.tune.searcher.blendsearch
|
suggest
|
choose thread, suggest a valid config.
|
def suggest(self, trial_id: str) -> Optional[Dict]:
"""choose thread, suggest a valid config."""
if self._init_used and not self._points_to_evaluate:
if self._cost_budget and self._cost_used >= self._cost_budget:
return None
choice, backup = self._select_thread()
config = self._search_thread_pool[choice].suggest(trial_id)
if not choice and config is not None and self._ls.resource:
config[self._ls.resource_attr] = self.best_resource
elif choice and config is None:
# local search thread finishes
if self._search_thread_pool[choice].converged:
self._expand_admissible_region(
self._ls_bound_min,
self._ls_bound_max,
self._search_thread_pool[choice].space,
)
del self._search_thread_pool[choice]
return
# preliminary check; not checking config validation
space = self._search_thread_pool[choice].space
skip = self._should_skip(choice, trial_id, config, space)
use_rs = 0
if skip:
if choice:
return
# use rs when BO fails to suggest a config
config, space = self._ls.complete_config({})
skip = self._should_skip(-1, trial_id, config, space)
if skip:
return
use_rs = 1
if choice or self._valid(
config,
self._ls.space,
space,
self._gs_admissible_min,
self._gs_admissible_max,
):
# LS or valid or no backup choice
self._trial_proposed_by[trial_id] = choice
self._search_thread_pool[choice].running += use_rs
else: # invalid config proposed by GS
if choice == backup:
# use CFO's init point
init_config = self._ls.init_config
config, space = self._ls.complete_config(init_config, self._ls_bound_min, self._ls_bound_max)
self._trial_proposed_by[trial_id] = choice
self._search_thread_pool[choice].running += 1
else:
thread = self._search_thread_pool[backup]
config = thread.suggest(trial_id)
space = thread.space
skip = self._should_skip(backup, trial_id, config, space)
if skip:
return
self._trial_proposed_by[trial_id] = backup
choice = backup
if not choice: # global search
# temporarily relax admissible region for parallel proposals
self._update_admissible_region(
config,
self._gs_admissible_min,
self._gs_admissible_max,
space,
self._ls.space,
)
else:
self._update_admissible_region(
config,
self._ls_bound_min,
self._ls_bound_max,
space,
self._ls.space,
)
self._gs_admissible_min.update(self._ls_bound_min)
self._gs_admissible_max.update(self._ls_bound_max)
signature = self._ls.config_signature(config, space)
self._result[signature] = {}
self._subspace[trial_id] = space
else: # use init config
if self._candidate_start_points is not None and self._points_to_evaluate:
self._candidate_start_points[trial_id] = None
reward = None
if self._points_to_evaluate:
init_config = self._points_to_evaluate.pop(0)
if self._evaluated_rewards:
reward = self._evaluated_rewards.pop(0)
else:
init_config = self._ls.init_config
if self._allow_empty_config and not init_config:
assert reward is None, "Empty config can't have reward."
return init_config
config, space = self._ls.complete_config(init_config, self._ls_bound_min, self._ls_bound_max)
config_signature = self._ls.config_signature(config, space)
if reward is None:
result = self._result.get(config_signature)
if result: # tried before
return
elif result is None: # not tried before
if self._violate_config_constriants(config, config_signature):
# violate config constraints
return
self._result[config_signature] = {}
else: # running but no result yet
return
self._init_used = True
self._trial_proposed_by[trial_id] = 0
self._search_thread_pool[0].running += 1
self._subspace[trial_id] = space
if reward is not None:
result = {self._metric: reward, self.cost_attr: 1, "config": config}
# result = self._result[config_signature]
self.on_trial_complete(trial_id, result)
return
if self._use_incumbent_result_in_evaluation:
if self._trial_proposed_by[trial_id] > 0:
choice_thread = self._search_thread_pool[self._trial_proposed_by[trial_id]]
config[INCUMBENT_RESULT] = choice_thread.best_result
return config
|
(self, trial_id: str) -> Optional[Dict]
|
52,757 |
flaml.tune.searcher.blendsearch
|
BlendSearchTuner
|
Tuner class for NNI.
|
class BlendSearchTuner(BlendSearch, NNITuner):
"""Tuner class for NNI."""
def receive_trial_result(self, parameter_id, parameters, value, **kwargs):
"""Receive trial's final result.
Args:
parameter_id: int.
parameters: object created by `generate_parameters()`.
value: final metrics of the trial, including default metric.
"""
result = {
"config": parameters,
self._metric: extract_scalar_reward(value),
self.cost_attr: 1 if isinstance(value, float) else value.get(self.cost_attr, value.get("sequence", 1)),
# if nni does not report training cost,
# using sequence as an approximation.
# if no sequence, using a constant 1
}
self.on_trial_complete(str(parameter_id), result)
...
def generate_parameters(self, parameter_id, **kwargs) -> Dict:
"""Returns a set of trial (hyper-)parameters, as a serializable object.
Args:
parameter_id: int.
"""
return self.suggest(str(parameter_id))
...
def update_search_space(self, search_space):
"""Required by NNI.
Tuners are advised to support updating search space at run-time.
If a tuner can only set search space once before generating first hyper-parameters,
it should explicitly document this behaviour.
Args:
search_space: JSON object created by experiment owner.
"""
config = {}
for key, value in search_space.items():
v = value.get("_value")
_type = value["_type"]
if _type == "choice":
config[key] = choice(v)
elif _type == "randint":
config[key] = randint(*v)
elif _type == "uniform":
config[key] = uniform(*v)
elif _type == "quniform":
config[key] = quniform(*v)
elif _type == "loguniform":
config[key] = loguniform(*v)
elif _type == "qloguniform":
config[key] = qloguniform(*v)
elif _type == "normal":
config[key] = randn(*v)
elif _type == "qnormal":
config[key] = qrandn(*v)
else:
raise ValueError(f"unsupported type in search_space {_type}")
# low_cost_partial_config is passed to constructor,
# which is before update_search_space() is called
init_config = self._ls.init_config
add_cost_to_space(config, init_config, self._cat_hp_cost)
self._ls = self.LocalSearch(
init_config,
self._ls.metric,
self._mode,
config,
self._ls.resource_attr,
self._ls.min_resource,
self._ls.max_resource,
self._ls.resource_multiple_factor,
cost_attr=self.cost_attr,
seed=self._ls.seed,
lexico_objectives=self.lexico_objectives,
)
if self._gs is not None:
self._gs = GlobalSearch(
space=config,
metric=self._metric,
mode=self._mode,
sampler=self._gs._sampler,
)
self._gs.space = config
self._init_search()
|
(metric: Optional[str] = None, mode: Optional[str] = None, space: Optional[dict] = None, low_cost_partial_config: Optional[dict] = None, cat_hp_cost: Optional[dict] = None, points_to_evaluate: Optional[List[dict]] = None, evaluated_rewards: Optional[List] = None, time_budget_s: Union[int, float] = None, num_samples: Optional[int] = None, resource_attr: Optional[str] = None, min_resource: Optional[float] = None, max_resource: Optional[float] = None, reduction_factor: Optional[float] = None, global_search_alg: Optional[flaml.tune.searcher.suggestion.Searcher] = None, config_constraints: Optional[List[Tuple[Callable[[dict], float], str, float]]] = None, metric_constraints: Optional[List[Tuple[str, str, float]]] = None, seed: Optional[int] = 20, cost_attr: Optional[str] = 'auto', cost_budget: Optional[float] = None, experimental: Optional[bool] = False, lexico_objectives: Optional[dict] = None, use_incumbent_result_in_evaluation=False, allow_empty_config=False)
|
52,773 |
flaml.tune.searcher.blendsearch
|
generate_parameters
|
Returns a set of trial (hyper-)parameters, as a serializable object.
Args:
parameter_id: int.
|
def generate_parameters(self, parameter_id, **kwargs) -> Dict:
"""Returns a set of trial (hyper-)parameters, as a serializable object.
Args:
parameter_id: int.
"""
return self.suggest(str(parameter_id))
|
(self, parameter_id, **kwargs) -> Dict
|
52,776 |
flaml.tune.searcher.blendsearch
|
receive_trial_result
|
Receive trial's final result.
Args:
parameter_id: int.
parameters: object created by `generate_parameters()`.
value: final metrics of the trial, including default metric.
|
def receive_trial_result(self, parameter_id, parameters, value, **kwargs):
"""Receive trial's final result.
Args:
parameter_id: int.
parameters: object created by `generate_parameters()`.
value: final metrics of the trial, including default metric.
"""
result = {
"config": parameters,
self._metric: extract_scalar_reward(value),
self.cost_attr: 1 if isinstance(value, float) else value.get(self.cost_attr, value.get("sequence", 1)),
# if nni does not report training cost,
# using sequence as an approximation.
# if no sequence, using a constant 1
}
self.on_trial_complete(str(parameter_id), result)
|
(self, parameter_id, parameters, value, **kwargs)
|
52,781 |
flaml.tune.searcher.blendsearch
|
update_search_space
|
Required by NNI.
Tuners are advised to support updating search space at run-time.
If a tuner can only set search space once before generating first hyper-parameters,
it should explicitly document this behaviour.
Args:
search_space: JSON object created by experiment owner.
|
def update_search_space(self, search_space):
"""Required by NNI.
Tuners are advised to support updating search space at run-time.
If a tuner can only set search space once before generating first hyper-parameters,
it should explicitly document this behaviour.
Args:
search_space: JSON object created by experiment owner.
"""
config = {}
for key, value in search_space.items():
v = value.get("_value")
_type = value["_type"]
if _type == "choice":
config[key] = choice(v)
elif _type == "randint":
config[key] = randint(*v)
elif _type == "uniform":
config[key] = uniform(*v)
elif _type == "quniform":
config[key] = quniform(*v)
elif _type == "loguniform":
config[key] = loguniform(*v)
elif _type == "qloguniform":
config[key] = qloguniform(*v)
elif _type == "normal":
config[key] = randn(*v)
elif _type == "qnormal":
config[key] = qrandn(*v)
else:
raise ValueError(f"unsupported type in search_space {_type}")
# low_cost_partial_config is passed to constructor,
# which is before update_search_space() is called
init_config = self._ls.init_config
add_cost_to_space(config, init_config, self._cat_hp_cost)
self._ls = self.LocalSearch(
init_config,
self._ls.metric,
self._mode,
config,
self._ls.resource_attr,
self._ls.min_resource,
self._ls.max_resource,
self._ls.resource_multiple_factor,
cost_attr=self.cost_attr,
seed=self._ls.seed,
lexico_objectives=self.lexico_objectives,
)
if self._gs is not None:
self._gs = GlobalSearch(
space=config,
metric=self._metric,
mode=self._mode,
sampler=self._gs._sampler,
)
self._gs.space = config
self._init_search()
|
(self, search_space)
|
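For reference, a hypothetical NNI search-space JSON in the `_type`/`_value` format that the loop above translates into `flaml.tune` sampling domains:
```python
search_space = {
    "optimizer": {"_type": "choice", "_value": ["adam", "sgd"]},
    "lr": {"_type": "loguniform", "_value": [1e-5, 1e-1]},
    "hidden": {"_type": "randint", "_value": [32, 256]},
}
# after update_search_space(search_space), the tuner's space is roughly:
#   {"optimizer": choice(["adam", "sgd"]),
#    "lr": loguniform(1e-5, 1e-1),
#    "hidden": randint(32, 256)}
```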
52,782 |
flaml.tune.searcher.blendsearch
|
CFO
|
class for CFO algorithm.
|
class CFO(BlendSearchTuner):
"""class for CFO algorithm."""
__name__ = "CFO"
def suggest(self, trial_id: str) -> Optional[Dict]:
# Number of threads is 1 or 2. Thread 0 is a vacuous thread
assert len(self._search_thread_pool) < 3, len(self._search_thread_pool)
if len(self._search_thread_pool) < 2:
# When a local thread converges, the number of threads is 1
# Need to restart
self._init_used = False
return super().suggest(trial_id)
def _select_thread(self) -> Tuple:
for key in self._search_thread_pool:
if key:
return key, key
def _create_condition(self, result: Dict) -> bool:
"""create thread condition"""
if self._points_to_evaluate:
# still evaluating user-specified init points
# we evaluate all candidate start points before we
# create the first local search thread
return False
if len(self._search_thread_pool) == 2:
return False
if self._candidate_start_points and self._thread_count == 1:
# result needs to match or exceed the best candidate start point
obj_best = min(
(self._ls.metric_op * r[self._ls.metric] for r in self._candidate_start_points.values() if r),
default=-np.inf,
)
return result[self._ls.metric] * self._ls.metric_op <= obj_best
else:
return True
def on_trial_complete(self, trial_id: str, result: Optional[Dict] = None, error: bool = False):
super().on_trial_complete(trial_id, result, error)
if self._candidate_start_points and trial_id in self._candidate_start_points:
# the trial is a candidate start point
self._candidate_start_points[trial_id] = result
if len(self._search_thread_pool) < 2 and not self._points_to_evaluate:
self._create_thread_from_best_candidate()
|
(metric: Optional[str] = None, mode: Optional[str] = None, space: Optional[dict] = None, low_cost_partial_config: Optional[dict] = None, cat_hp_cost: Optional[dict] = None, points_to_evaluate: Optional[List[dict]] = None, evaluated_rewards: Optional[List] = None, time_budget_s: Union[int, float] = None, num_samples: Optional[int] = None, resource_attr: Optional[str] = None, min_resource: Optional[float] = None, max_resource: Optional[float] = None, reduction_factor: Optional[float] = None, global_search_alg: Optional[flaml.tune.searcher.suggestion.Searcher] = None, config_constraints: Optional[List[Tuple[Callable[[dict], float], str, float]]] = None, metric_constraints: Optional[List[Tuple[str, str, float]]] = None, seed: Optional[int] = 20, cost_attr: Optional[str] = 'auto', cost_budget: Optional[float] = None, experimental: Optional[bool] = False, lexico_objectives: Optional[dict] = None, use_incumbent_result_in_evaluation=False, allow_empty_config=False)
|
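A minimal sketch of running the CFO class above, which keeps at most one local-search thread alongside the vacuous thread 0; names and budgets are illustrative:
```python
from flaml import tune
from flaml.tune.searcher.blendsearch import CFO

def evaluation_function(config):
    return {"loss": (config["x"] - 1) ** 2}

analysis = tune.run(
    evaluation_function,
    search_alg=CFO(
        metric="loss",
        mode="min",
        space={"x": tune.uniform(-10, 10)},
        low_cost_partial_config={"x": 0},  # cheap starting point
    ),
    num_samples=30,
)
```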
52,785 |
flaml.tune.searcher.blendsearch
|
_create_condition
|
create thread condition
|
def _create_condition(self, result: Dict) -> bool:
"""create thread condition"""
if self._points_to_evaluate:
# still evaluating user-specified init points
# we evaluate all candidate start points before we
# create the first local search thread
return False
if len(self._search_thread_pool) == 2:
return False
if self._candidate_start_points and self._thread_count == 1:
# the result needs to be at least as good as the best candidate start point
obj_best = min(
(self._ls.metric_op * r[self._ls.metric] for r in self._candidate_start_points.values() if r),
default=-np.inf,
)
return result[self._ls.metric] * self._ls.metric_op <= obj_best
else:
return True
|
(self, result: Dict) -> bool
|
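A plain-Python walk-through of the condition above, with invented values (not FLAML API): under mode="min" (metric_op = 1), the first local-search thread is created only once an incoming result matches or beats the best already-evaluated candidate start point.
```python
import numpy as np

metric_op = 1.0  # minimizing
candidate_start_points = {"t1": {"loss": 0.30}, "t2": {"loss": 0.25}, "t3": None}
obj_best = min(
    (metric_op * r["loss"] for r in candidate_start_points.values() if r),
    default=-np.inf,
)
result = {"loss": 0.25}
print(result["loss"] * metric_op <= obj_best)  # True: 0.25 <= 0.25, create the thread
```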
52,791 |
flaml.tune.searcher.blendsearch
|
_select_thread
| null |
def _select_thread(self) -> Tuple:
for key in self._search_thread_pool:
if key:
return key, key
|
(self) -> Tuple
|
52,799 |
flaml.tune.searcher.blendsearch
|
on_trial_complete
| null |
def on_trial_complete(self, trial_id: str, result: Optional[Dict] = None, error: bool = False):
super().on_trial_complete(trial_id, result, error)
if self._candidate_start_points and trial_id in self._candidate_start_points:
# the trial is a candidate start point
self._candidate_start_points[trial_id] = result
if len(self._search_thread_pool) < 2 and not self._points_to_evaluate:
self._create_thread_from_best_candidate()
|
(self, trial_id: str, result: Optional[Dict] = None, error: bool = False)
|
52,805 |
flaml.tune.searcher.blendsearch
|
suggest
| null |
def suggest(self, trial_id: str) -> Optional[Dict]:
# Number of threads is 1 or 2. Thread 0 is a vacuous thread
assert len(self._search_thread_pool) < 3, len(self._search_thread_pool)
if len(self._search_thread_pool) < 2:
# When a local thread converges, the number of threads is 1
# Need to restart
self._init_used = False
return super().suggest(trial_id)
|
(self, trial_id: str) -> Optional[Dict]
|
52,807 |
flaml.tune.searcher.flow2
|
FLOW2
|
Local search algorithm FLOW2, with adaptive step size.
|
class FLOW2(Searcher):
"""Local search algorithm FLOW2, with adaptive step size."""
STEPSIZE = 0.1
STEP_LOWER_BOUND = 0.0001
def __init__(
self,
init_config: dict,
metric: Optional[str] = None,
mode: Optional[str] = None,
space: Optional[dict] = None,
resource_attr: Optional[str] = None,
min_resource: Optional[float] = None,
max_resource: Optional[float] = None,
resource_multiple_factor: Optional[float] = None,
cost_attr: Optional[str] = "time_total_s",
seed: Optional[int] = 20,
lexico_objectives=None,
):
"""Constructor.
Args:
init_config: a dictionary of a partial or full initial config,
e.g., from a subset of controlled dimensions
to the initial low-cost values.
E.g., {'epochs': 1}.
metric: A string of the metric name to optimize for.
mode: A string in ['min', 'max'] to specify the objective as
minimization or maximization.
space: A dictionary to specify the search space.
resource_attr: A string to specify the resource dimension and the best
performance is assumed to be at the max_resource.
min_resource: A float of the minimal resource to use for the resource_attr.
max_resource: A float of the maximal resource to use for the resource_attr.
resource_multiple_factor: A float of the multiplicative factor
used for increasing resource.
cost_attr: A string of the attribute used for cost.
seed: An integer of the random seed.
lexico_objectives: dict, default=None | It specifies the information needed to perform multi-objective
optimization with lexicographic preferences. When lexico_objectives is not None, the arguments metric and
mode are ignored. This dictionary shall contain the following fields of key-value pairs:
- "metrics": a list of optimization objectives with the orders reflecting the priorities/preferences of the
objectives.
- "modes" (optional): a list of optimization modes (each mode either "min" or "max") corresponding to the
objectives in the "metrics" list. If not provided, we use "min" as the default mode for all the objectives.
- "targets" (optional): a dictionary to specify the optimization targets on the objectives. The keys are the
metric names (provided in "metrics"), and the values are the numerical target values.
- "tolerances" (optional): a dictionary to specify the optimality tolerances on objectives. The keys are the metric names (provided in "metrics"), and the values are the absolute/percentage tolerance in the form of numeric/string.
E.g.,
```python
lexico_objectives = {
"metrics": ["error_rate", "pred_time"],
"modes": ["min", "min"],
"tolerances": {"error_rate": 0.01, "pred_time": 0.0},
"targets": {"error_rate": 0.0},
}
```
We also support percentage tolerance.
E.g.,
```python
lexico_objectives = {
"metrics": ["error_rate", "pred_time"],
"modes": ["min", "min"],
"tolerances": {"error_rate": "5%", "pred_time": "0%"},
"targets": {"error_rate": 0.0},
}
```
"""
if mode:
assert mode in ["min", "max"], "`mode` must be 'min' or 'max'."
else:
mode = "min"
super(FLOW2, self).__init__(metric=metric, mode=mode)
# internally minimizes, so "max" => -1
if mode == "max":
self.metric_op = -1.0
elif mode == "min":
self.metric_op = 1.0
self.space = space or {}
self._space = flatten_dict(self.space, prevent_delimiter=True)
self._random = np.random.RandomState(seed)
self.rs_random = sample._BackwardsCompatibleNumpyRng(seed + 19823)
self.seed = seed
self.init_config = init_config
self.best_config = flatten_dict(init_config)
self.resource_attr = resource_attr
self.min_resource = min_resource
self.lexico_objectives = lexico_objectives
if self.lexico_objectives is not None:
if "modes" not in self.lexico_objectives.keys():
self.lexico_objectives["modes"] = ["min"] * len(self.lexico_objectives["metrics"])
for t_metric, t_mode in zip(self.lexico_objectives["metrics"], self.lexico_objectives["modes"]):
if t_metric not in self.lexico_objectives["tolerances"].keys():
self.lexico_objectives["tolerances"][t_metric] = 0
if t_metric not in self.lexico_objectives["targets"].keys():
self.lexico_objectives["targets"][t_metric] = -float("inf") if t_mode == "min" else float("inf")
self.resource_multiple_factor = resource_multiple_factor or SAMPLE_MULTIPLY_FACTOR
self.cost_attr = cost_attr
self.max_resource = max_resource
self._resource = None
self._f_best = None  # only used for lexico_compare; the best value achieved by lexico_flow.
self._step_lb = np.inf
self._histories = None  # only used for lexico_compare; records the results of historical configurations.
if space is not None:
self._init_search()
def _init_search(self):
self._tunable_keys = []
self._bounded_keys = []
self._unordered_cat_hp = {}
hier = False
for key, domain in self._space.items():
assert not (
isinstance(domain, dict) and "grid_search" in domain
), f"{key}'s domain is grid search, not supported in FLOW^2."
if callable(getattr(domain, "get_sampler", None)):
self._tunable_keys.append(key)
sampler = domain.get_sampler()
# the step size lower bound for uniform variables doesn't depend
# on the current config
if isinstance(sampler, sample.Quantized):
q = sampler.q
sampler = sampler.get_sampler()
if str(sampler) == "Uniform":
self._step_lb = min(self._step_lb, q / (domain.upper - domain.lower + 1))
elif isinstance(domain, sample.Integer) and str(sampler) == "Uniform":
self._step_lb = min(self._step_lb, 1.0 / (domain.upper - domain.lower))
if isinstance(domain, sample.Categorical):
if not domain.ordered:
self._unordered_cat_hp[key] = len(domain.categories)
if not hier:
for cat in domain.categories:
if isinstance(cat, dict):
hier = True
break
if str(sampler) != "Normal":
self._bounded_keys.append(key)
if not hier:
self._space_keys = sorted(self._tunable_keys)
self.hierarchical = hier
if self.resource_attr and self.resource_attr not in self._space and self.max_resource:
self.min_resource = self.min_resource or self._min_resource()
self._resource = self._round(self.min_resource)
if not hier:
self._space_keys.append(self.resource_attr)
else:
self._resource = None
self.incumbent = {}
self.incumbent = self.normalize(self.best_config) # flattened
self.best_obj = self.cost_incumbent = None
self.dim = len(self._tunable_keys)  # total number of tunable dimensions
self._direction_tried = None
self._num_complete4incumbent = self._cost_complete4incumbent = 0
self._num_allowed4incumbent = 2 * self.dim
self._proposed_by = {} # trial_id: int -> incumbent: Dict
self.step_ub = np.sqrt(self.dim)
self.step = self.STEPSIZE * self.step_ub
lb = self.step_lower_bound
if lb > self.step:
self.step = lb * 2
# upper bound
self.step = min(self.step, self.step_ub)
# maximal number of consecutive proposals without improvement
self.dir = 2 ** (min(9, self.dim))
self._configs = {} # dict from trial_id to (config, stepsize)
self._K = 0
self._iter_best_config = 1
self.trial_count_proposed = self.trial_count_complete = 1
self._num_proposedby_incumbent = 0
self._reset_times = 0
# record intermediate trial cost
self._trial_cost = {}
self._same = False # whether the proposed config is the same as best_config
self._init_phase = True # initial phase to increase initial stepsize
self._trunc = 0
# no truncation by default. when > 0, it means how many
# non-zero dimensions to keep in the random unit vector
@property
def step_lower_bound(self) -> float:
step_lb = self._step_lb
for key in self._tunable_keys:
if key not in self.best_config:
continue
domain = self._space[key]
sampler = domain.get_sampler()
# the stepsize lower bound for log uniform variables depends on the
# current config
if isinstance(sampler, sample.Quantized):
q = sampler.q
sampler_inner = sampler.get_sampler()
if str(sampler_inner) == "LogUniform":
step_lb = min(
step_lb,
np.log(1.0 + q / self.best_config[key]) / np.log(domain.upper / domain.lower),
)
elif isinstance(domain, sample.Integer) and str(sampler) == "LogUniform":
step_lb = min(
step_lb,
np.log(1.0 + 1.0 / self.best_config[key]) / np.log((domain.upper - 1) / domain.lower),
)
if np.isinf(step_lb):
step_lb = self.STEP_LOWER_BOUND
else:
step_lb *= self.step_ub
return step_lb
@property
def resource(self) -> float:
return self._resource
def _min_resource(self) -> float:
"""automatically decide minimal resource"""
return self.max_resource / np.power(self.resource_multiple_factor, 5)
def _round(self, resource) -> float:
"""round the resource to self.max_resource if close to it"""
if resource * self.resource_multiple_factor > self.max_resource:
return self.max_resource
return resource
def rand_vector_gaussian(self, dim, std=1.0):
return self._random.normal(0, std, dim)
def complete_config(
self,
partial_config: Dict,
lower: Optional[Dict] = None,
upper: Optional[Dict] = None,
) -> Tuple[Dict, Dict]:
"""Generate a complete config from the partial config input.
Add minimal resource to config if available.
"""
disturb = self._reset_times and partial_config == self.init_config
# if not the first time to complete init_config, use random gaussian
config, space = complete_config(partial_config, self.space, self, disturb, lower, upper)
if partial_config == self.init_config:
self._reset_times += 1
if self._resource:
config[self.resource_attr] = self.min_resource
return config, space
def create(self, init_config: Dict, obj: float, cost: float, space: Dict) -> Searcher:
# space is the subspace where the init_config is located
flow2 = self.__class__(
init_config,
self.metric,
self.mode,
space,
self.resource_attr,
self.min_resource,
self.max_resource,
self.resource_multiple_factor,
self.cost_attr,
self.seed + 1,
self.lexico_objectives,
)
if self.lexico_objectives is not None:
flow2.best_obj = {}
for k, v in obj.items():
flow2.best_obj[k] = (
-v if self.lexico_objectives["modes"][self.lexico_objectives["metrics"].index(k)] == "max" else v
)
else:
flow2.best_obj = obj * self.metric_op # minimize internally
flow2.cost_incumbent = cost
self.seed += 1
return flow2
def normalize(self, config, recursive=False) -> Dict:
"""normalize each dimension in config to [0,1]."""
return normalize(config, self._space, self.best_config, self.incumbent, recursive)
def denormalize(self, config):
"""denormalize each dimension in config from [0,1]."""
return denormalize(config, self._space, self.best_config, self.incumbent, self._random)
def set_search_properties(
self,
metric: Optional[str] = None,
mode: Optional[str] = None,
config: Optional[Dict] = None,
) -> bool:
if metric:
self._metric = metric
if mode:
assert mode in ["min", "max"], "`mode` must be 'min' or 'max'."
self._mode = mode
if mode == "max":
self.metric_op = -1.0
elif mode == "min":
self.metric_op = 1.0
if config:
self.space = config
self._space = flatten_dict(self.space)
self._init_search()
return True
def update_fbest(
self,
):
obj_initial = self.lexico_objectives["metrics"][0]
feasible_index = np.array([*range(len(self._histories[obj_initial]))])
for k_metric in self.lexico_objectives["metrics"]:
k_values = np.array(self._histories[k_metric])
feasible_value = k_values.take(feasible_index)
self._f_best[k_metric] = np.min(feasible_value)
if not isinstance(self.lexico_objectives["tolerances"][k_metric], str):
tolerance_bound = self._f_best[k_metric] + self.lexico_objectives["tolerances"][k_metric]
else:
assert (
self.lexico_objectives["tolerances"][k_metric][-1] == "%"
), "String tolerance of {} should use %% as the suffix".format(k_metric)
tolerance_bound = self._f_best[k_metric] * (
1 + 0.01 * float(self.lexico_objectives["tolerances"][k_metric].replace("%", ""))
)
feasible_index_filter = np.where(
feasible_value
<= max(
tolerance_bound,
self.lexico_objectives["targets"][k_metric],
)
)[0]
feasible_index = feasible_index.take(feasible_index_filter)
def lexico_compare(self, result) -> bool:
if self._histories is None:
self._histories, self._f_best = defaultdict(list), {}
for k in self.lexico_objectives["metrics"]:
self._histories[k].append(result[k])
self.update_fbest()
return True
else:
for k in self.lexico_objectives["metrics"]:
self._histories[k].append(result[k])
self.update_fbest()
for k_metric, k_mode in zip(self.lexico_objectives["metrics"], self.lexico_objectives["modes"]):
k_target = (
self.lexico_objectives["targets"][k_metric]
if k_mode == "min"
else -self.lexico_objectives["targets"][k_metric]
)
if not isinstance(self.lexico_objectives["tolerances"][k_metric], str):
tolerance_bound = self._f_best[k_metric] + self.lexico_objectives["tolerances"][k_metric]
else:
assert (
self.lexico_objectives["tolerances"][k_metric][-1] == "%"
), "String tolerance of {} should use %% as the suffix".format(k_metric)
tolerance_bound = self._f_best[k_metric] * (
1 + 0.01 * float(self.lexico_objectives["tolerances"][k_metric].replace("%", ""))
)
if (result[k_metric] < max(tolerance_bound, k_target)) and (
self.best_obj[k_metric]
< max(
tolerance_bound,
k_target,
)
):
continue
elif result[k_metric] < self.best_obj[k_metric]:
return True
else:
return False
for k_metr in self.lexico_objectives["metrics"]:
if result[k_metr] == self.best_obj[k_metr]:
continue
elif result[k_metr] < self.best_obj[k_metr]:
return True
else:
return False
def on_trial_complete(self, trial_id: str, result: Optional[Dict] = None, error: bool = False):
"""
Compare with incumbent.
If better, move, reset num_complete and num_proposed.
If not better and num_complete >= 2*dim, num_allowed += 2.
"""
self.trial_count_complete += 1
if not error and result:
obj = (
result.get(self._metric)
if self.lexico_objectives is None
else {k: result[k] for k in self.lexico_objectives["metrics"]}
)
if obj:
obj = (
{
k: -obj[k] if m == "max" else obj[k]
for k, m in zip(
self.lexico_objectives["metrics"],
self.lexico_objectives["modes"],
)
}
if isinstance(obj, dict)
else obj * self.metric_op
)
if (
self.best_obj is None
or (self.lexico_objectives is None and obj < self.best_obj)
or (self.lexico_objectives is not None and self.lexico_compare(obj))
):
self.best_obj = obj
self.best_config, self.step = self._configs[trial_id]
self.incumbent = self.normalize(self.best_config)
self.cost_incumbent = result.get(self.cost_attr, 1)
if self._resource:
self._resource = self.best_config[self.resource_attr]
self._num_complete4incumbent = 0
self._cost_complete4incumbent = 0
self._num_proposedby_incumbent = 0
self._num_allowed4incumbent = 2 * self.dim
self._proposed_by.clear()
if self._K > 0:
self.step *= np.sqrt(self._K / self._oldK)
self.step = min(self.step, self.step_ub)
self._iter_best_config = self.trial_count_complete
if self._trunc:
self._trunc = min(self._trunc + 1, self.dim)
return
elif self._trunc:
self._trunc = max(self._trunc >> 1, 1)
proposed_by = self._proposed_by.get(trial_id)
if proposed_by == self.incumbent:
self._num_complete4incumbent += 1
cost = result.get(self.cost_attr, 1) if result else self._trial_cost.get(trial_id)
if cost:
self._cost_complete4incumbent += cost
if self._num_complete4incumbent >= 2 * self.dim and self._num_allowed4incumbent == 0:
self._num_allowed4incumbent = 2
if self._num_complete4incumbent == self.dir and (not self._resource or self._resource == self.max_resource):
self._num_complete4incumbent -= 2
self._num_allowed4incumbent = max(self._num_allowed4incumbent, 2)
def on_trial_result(self, trial_id: str, result: Dict):
"""Early update of incumbent."""
if result:
obj = (
result.get(self._metric)
if self.lexico_objectives is None
else {k: result[k] for k in self.lexico_objectives["metrics"]}
)
if obj:
obj = (
{
k: -obj[k] if m == "max" else obj[k]
for k, m in zip(
self.lexico_objectives["metrics"],
self.lexico_objectives["modes"],
)
}
if isinstance(obj, dict)
else obj * self.metric_op
)
if (
self.best_obj is None
or (self.lexico_objectives is None and obj < self.best_obj)
or (self.lexico_objectives is not None and self.lexico_compare(obj))
):
self.best_obj = obj
config = self._configs[trial_id][0]
if self.best_config != config:
self.best_config = config
if self._resource:
self._resource = config[self.resource_attr]
self.incumbent = self.normalize(self.best_config)
self.cost_incumbent = result.get(self.cost_attr, 1)
self._cost_complete4incumbent = 0
self._num_complete4incumbent = 0
self._num_proposedby_incumbent = 0
self._num_allowed4incumbent = 2 * self.dim
self._proposed_by.clear()
self._iter_best_config = self.trial_count_complete
cost = result.get(self.cost_attr, 1)
# record the cost in case it is pruned and cost info is lost
self._trial_cost[trial_id] = cost
def rand_vector_unit_sphere(self, dim, trunc=0) -> np.ndarray:
vec = self._random.normal(0, 1, dim)
if 0 < trunc < dim:
vec[np.abs(vec).argsort()[: dim - trunc]] = 0
mag = np.linalg.norm(vec)
return vec / mag
def suggest(self, trial_id: str) -> Optional[Dict]:
"""Suggest a new config, one of the following cases:
1. same incumbent, increase resource.
2. same resource, move from the incumbent to a random direction.
3. same resource, move from the incumbent to the opposite direction.
"""
# TODO: better decouple FLOW2 config suggestion and stepsize update
self.trial_count_proposed += 1
if (
self._num_complete4incumbent > 0
and self.cost_incumbent
and self._resource
and self._resource < self.max_resource
and (self._cost_complete4incumbent >= self.cost_incumbent * self.resource_multiple_factor)
):
return self._increase_resource(trial_id)
self._num_allowed4incumbent -= 1
move = self.incumbent.copy()
if self._direction_tried is not None:
# return negative direction
for i, key in enumerate(self._tunable_keys):
move[key] -= self._direction_tried[i]
self._direction_tried = None
else:
# propose a new direction
self._direction_tried = self.rand_vector_unit_sphere(self.dim, self._trunc) * self.step
for i, key in enumerate(self._tunable_keys):
move[key] += self._direction_tried[i]
self._project(move)
config = self.denormalize(move)
self._proposed_by[trial_id] = self.incumbent
self._configs[trial_id] = (config, self.step)
self._num_proposedby_incumbent += 1
best_config = self.best_config
if self._init_phase:
if self._direction_tried is None:
if self._same:
same = not any(key not in best_config or value != best_config[key] for key, value in config.items())
if same:
# increase step size
self.step += self.STEPSIZE
self.step = min(self.step, self.step_ub)
else:
same = not any(key not in best_config or value != best_config[key] for key, value in config.items())
self._same = same
if self._num_proposedby_incumbent == self.dir and (not self._resource or self._resource == self.max_resource):
# check stuck condition if using max resource
self._num_proposedby_incumbent -= 2
self._init_phase = False
if self.step < self.step_lower_bound:
return None
# decrease step size
self._oldK = self._K or self._iter_best_config
self._K = self.trial_count_proposed + 1
self.step *= np.sqrt(self._oldK / self._K)
if self._init_phase:
return unflatten_dict(config)
if self._trunc == 1 and self._direction_tried is not None:
# random
for i, key in enumerate(self._tunable_keys):
if self._direction_tried[i] != 0:
for _, generated in generate_variants_compatible(
{"config": {key: self._space[key]}}, random_state=self.rs_random
):
if generated["config"][key] != best_config[key]:
config[key] = generated["config"][key]
return unflatten_dict(config)
break
elif len(config) == len(best_config):
for key, value in best_config.items():
if value != config[key]:
return unflatten_dict(config)
# print('move to', move)
self.incumbent = move
return unflatten_dict(config)
def _increase_resource(self, trial_id):
# consider increasing resource using sum eval cost of complete
# configs
old_resource = self._resource
self._resource = self._round(self._resource * self.resource_multiple_factor)
self.cost_incumbent *= self._resource / old_resource
config = self.best_config.copy()
config[self.resource_attr] = self._resource
self._direction_tried = None
self._configs[trial_id] = (config, self.step)
return unflatten_dict(config)
def _project(self, config):
"""project normalized config in the feasible region and set resource_attr"""
for key in self._bounded_keys:
value = config[key]
config[key] = max(0, min(1, value))
if self._resource:
config[self.resource_attr] = self._resource
@property
def can_suggest(self) -> bool:
"""Can't suggest if 2*dim configs have been proposed for the incumbent
while fewer are completed.
"""
return self._num_allowed4incumbent > 0
def config_signature(self, config, space: Dict = None) -> tuple:
"""Return the signature tuple of a config."""
config = flatten_dict(config)
space = flatten_dict(space) if space else self._space
value_list = []
# self._space_keys doesn't contain keys with const values,
# e.g., "eval_metric": ["logloss", "error"].
keys = sorted(config.keys()) if self.hierarchical else self._space_keys
for key in keys:
value = config[key]
if key == self.resource_attr:
value_list.append(value)
else:
# key must be in space
domain = space[key]
if self.hierarchical and not (
domain is None or type(domain) in (str, int, float) or isinstance(domain, sample.Domain)
):
# not domain or hashable
# get rid of list type for hierarchical search space.
continue
if isinstance(domain, sample.Integer):
value_list.append(int(round(value)))
else:
value_list.append(value)
return tuple(value_list)
@property
def converged(self) -> bool:
"""Whether the local search has converged."""
if self._num_complete4incumbent < self.dir - 2:
return False
# check stepsize after enough configs are completed
return self.step < self.step_lower_bound
def reach(self, other: Searcher) -> bool:
"""whether the incumbent can reach the incumbent of other."""
config1, config2 = self.best_config, other.best_config
incumbent1, incumbent2 = self.incumbent, other.incumbent
if self._resource and config1[self.resource_attr] > config2[self.resource_attr]:
# resource will not decrease
return False
for key in self._unordered_cat_hp:
# unordered cat choice is hard to reach by chance
if config1[key] != config2.get(key):
return False
delta = np.array([incumbent1[key] - incumbent2.get(key, np.inf) for key in self._tunable_keys])
return np.linalg.norm(delta) <= self.step
|
(init_config: dict, metric: Optional[str] = None, mode: Optional[str] = None, space: Optional[dict] = None, resource_attr: Optional[str] = None, min_resource: Optional[float] = None, max_resource: Optional[float] = None, resource_multiple_factor: Optional[float] = None, cost_attr: Optional[str] = 'time_total_s', seed: Optional[int] = 20, lexico_objectives=None)
|
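A minimal sketch of exercising FLOW2 directly as a Searcher (it is normally driven by CFO/BlendSearch); the search space, metric name, and reported result below are invented for illustration.
```python
from flaml import tune
from flaml.tune.searcher.flow2 import FLOW2

space = {"lr": tune.loguniform(1e-5, 1e-1), "batch_size": tune.randint(8, 128)}
flow2 = FLOW2(
    init_config={"lr": 1e-4, "batch_size": 8},  # low-cost starting point
    metric="val_loss",
    mode="min",
    space=space,
)
config = flow2.suggest("trial_0")  # a config near the (normalized) incumbent
# report a made-up result; cost_attr defaults to "time_total_s"
flow2.on_trial_complete("trial_0", result={"val_loss": 0.42, "time_total_s": 3.0})
print(config, flow2.best_config)
```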
52,808 |
flaml.tune.searcher.flow2
|
__init__
|
Constructor.
Args:
init_config: a dictionary of a partial or full initial config,
e.g., from a subset of controlled dimensions
to the initial low-cost values.
E.g., {'epochs': 1}.
metric: A string of the metric name to optimize for.
mode: A string in ['min', 'max'] to specify the objective as
minimization or maximization.
space: A dictionary to specify the search space.
resource_attr: A string to specify the resource dimension and the best
performance is assumed to be at the max_resource.
min_resource: A float of the minimal resource to use for the resource_attr.
max_resource: A float of the maximal resource to use for the resource_attr.
resource_multiple_factor: A float of the multiplicative factor
used for increasing resource.
cost_attr: A string of the attribute used for cost.
seed: An integer of the random seed.
lexico_objectives: dict, default=None | It specifies the information needed to perform multi-objective
optimization with lexicographic preferences. When lexico_objectives is not None, the arguments metric and
mode are ignored. This dictionary shall contain the following fields of key-value pairs:
- "metrics": a list of optimization objectives with the orders reflecting the priorities/preferences of the
objectives.
- "modes" (optional): a list of optimization modes (each mode either "min" or "max") corresponding to the
objectives in the "metrics" list. If not provided, we use "min" as the default mode for all the objectives.
- "targets" (optional): a dictionary to specify the optimization targets on the objectives. The keys are the
metric names (provided in "metrics"), and the values are the numerical target values.
- "tolerances" (optional): a dictionary to specify the optimality tolerances on objectives. The keys are the metric names (provided in "metrics"), and the values are the absolute/percentage tolerance in the form of numeric/string.
E.g.,
```python
lexico_objectives = {
"metrics": ["error_rate", "pred_time"],
"modes": ["min", "min"],
"tolerances": {"error_rate": 0.01, "pred_time": 0.0},
"targets": {"error_rate": 0.0},
}
```
We also support percentage tolerance.
E.g.,
```python
lexico_objectives = {
"metrics": ["error_rate", "pred_time"],
"modes": ["min", "min"],
"tolerances": {"error_rate": "5%", "pred_time": "0%"},
"targets": {"error_rate": 0.0},
}
```
|
def __init__(
self,
init_config: dict,
metric: Optional[str] = None,
mode: Optional[str] = None,
space: Optional[dict] = None,
resource_attr: Optional[str] = None,
min_resource: Optional[float] = None,
max_resource: Optional[float] = None,
resource_multiple_factor: Optional[float] = None,
cost_attr: Optional[str] = "time_total_s",
seed: Optional[int] = 20,
lexico_objectives=None,
):
"""Constructor.
Args:
init_config: a dictionary of a partial or full initial config,
e.g., from a subset of controlled dimensions
to the initial low-cost values.
E.g., {'epochs': 1}.
metric: A string of the metric name to optimize for.
mode: A string in ['min', 'max'] to specify the objective as
minimization or maximization.
space: A dictionary to specify the search space.
resource_attr: A string to specify the resource dimension and the best
performance is assumed to be at the max_resource.
min_resource: A float of the minimal resource to use for the resource_attr.
max_resource: A float of the maximal resource to use for the resource_attr.
resource_multiple_factor: A float of the multiplicative factor
used for increasing resource.
cost_attr: A string of the attribute used for cost.
seed: An integer of the random seed.
lexico_objectives: dict, default=None | It specifies the information needed to perform multi-objective
optimization with lexicographic preferences. When lexico_objectives is not None, the arguments metric and
mode are ignored. This dictionary shall contain the following fields of key-value pairs:
- "metrics": a list of optimization objectives with the orders reflecting the priorities/preferences of the
objectives.
- "modes" (optional): a list of optimization modes (each mode either "min" or "max") corresponding to the
objectives in the "metrics" list. If not provided, we use "min" as the default mode for all the objectives.
- "targets" (optional): a dictionary to specify the optimization targets on the objectives. The keys are the
metric names (provided in "metrics"), and the values are the numerical target values.
- "tolerances" (optional): a dictionary to specify the optimality tolerances on objectives. The keys are the metric names (provided in "metrics"), and the values are the absolute/percentage tolerance in the form of numeric/string.
E.g.,
```python
lexico_objectives = {
"metrics": ["error_rate", "pred_time"],
"modes": ["min", "min"],
"tolerances": {"error_rate": 0.01, "pred_time": 0.0},
"targets": {"error_rate": 0.0},
}
```
We also support percentage tolerance.
E.g.,
```python
lexico_objectives = {
"metrics": ["error_rate", "pred_time"],
"modes": ["min", "min"],
"tolerances": {"error_rate": "5%", "pred_time": "0%"},
"targets": {"error_rate": 0.0},
}
```
"""
if mode:
assert mode in ["min", "max"], "`mode` must be 'min' or 'max'."
else:
mode = "min"
super(FLOW2, self).__init__(metric=metric, mode=mode)
# internally minimizes, so "max" => -1
if mode == "max":
self.metric_op = -1.0
elif mode == "min":
self.metric_op = 1.0
self.space = space or {}
self._space = flatten_dict(self.space, prevent_delimiter=True)
self._random = np.random.RandomState(seed)
self.rs_random = sample._BackwardsCompatibleNumpyRng(seed + 19823)
self.seed = seed
self.init_config = init_config
self.best_config = flatten_dict(init_config)
self.resource_attr = resource_attr
self.min_resource = min_resource
self.lexico_objectives = lexico_objectives
if self.lexico_objectives is not None:
if "modes" not in self.lexico_objectives.keys():
self.lexico_objectives["modes"] = ["min"] * len(self.lexico_objectives["metrics"])
for t_metric, t_mode in zip(self.lexico_objectives["metrics"], self.lexico_objectives["modes"]):
if t_metric not in self.lexico_objectives["tolerances"].keys():
self.lexico_objectives["tolerances"][t_metric] = 0
if t_metric not in self.lexico_objectives["targets"].keys():
self.lexico_objectives["targets"][t_metric] = -float("inf") if t_mode == "min" else float("inf")
self.resource_multiple_factor = resource_multiple_factor or SAMPLE_MULTIPLY_FACTOR
self.cost_attr = cost_attr
self.max_resource = max_resource
self._resource = None
self._f_best = None  # only used for lexico_compare; the best value achieved by lexico_flow.
self._step_lb = np.inf
self._histories = None  # only used for lexico_compare; records the results of historical configurations.
if space is not None:
self._init_search()
|
(self, init_config: dict, metric: Optional[str] = None, mode: Optional[str] = None, space: Optional[dict] = None, resource_attr: Optional[str] = None, min_resource: Optional[float] = None, max_resource: Optional[float] = None, resource_multiple_factor: Optional[float] = None, cost_attr: Optional[str] = 'time_total_s', seed: Optional[int] = 20, lexico_objectives=None)
|
52,809 |
flaml.tune.searcher.flow2
|
_increase_resource
| null |
def _increase_resource(self, trial_id):
# consider increasing resource using sum eval cost of complete
# configs
old_resource = self._resource
self._resource = self._round(self._resource * self.resource_multiple_factor)
self.cost_incumbent *= self._resource / old_resource
config = self.best_config.copy()
config[self.resource_attr] = self._resource
self._direction_tried = None
self._configs[trial_id] = (config, self.step)
return unflatten_dict(config)
|
(self, trial_id)
|
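A back-of-the-envelope illustration of the bookkeeping above in plain Python (assumed numbers, not FLAML API): doubling the resource scales the incumbent's expected cost proportionally.
```python
def _round(resource, factor, max_resource):
    # snap to max_resource once one more multiplication would overshoot it
    return max_resource if resource * factor > max_resource else resource

factor, max_resource = 2.0, 10000
resource, cost_incumbent = 2500.0, 8.0  # current resource and its observed cost

old = resource
resource = _round(resource * factor, factor, max_resource)
cost_incumbent *= resource / old
print(resource, cost_incumbent)  # 5000.0 16.0
```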
52,810 |
flaml.tune.searcher.flow2
|
_init_search
| null |
def _init_search(self):
self._tunable_keys = []
self._bounded_keys = []
self._unordered_cat_hp = {}
hier = False
for key, domain in self._space.items():
assert not (
isinstance(domain, dict) and "grid_search" in domain
), f"{key}'s domain is grid search, not supported in FLOW^2."
if callable(getattr(domain, "get_sampler", None)):
self._tunable_keys.append(key)
sampler = domain.get_sampler()
# the step size lower bound for uniform variables doesn't depend
# on the current config
if isinstance(sampler, sample.Quantized):
q = sampler.q
sampler = sampler.get_sampler()
if str(sampler) == "Uniform":
self._step_lb = min(self._step_lb, q / (domain.upper - domain.lower + 1))
elif isinstance(domain, sample.Integer) and str(sampler) == "Uniform":
self._step_lb = min(self._step_lb, 1.0 / (domain.upper - domain.lower))
if isinstance(domain, sample.Categorical):
if not domain.ordered:
self._unordered_cat_hp[key] = len(domain.categories)
if not hier:
for cat in domain.categories:
if isinstance(cat, dict):
hier = True
break
if str(sampler) != "Normal":
self._bounded_keys.append(key)
if not hier:
self._space_keys = sorted(self._tunable_keys)
self.hierarchical = hier
if self.resource_attr and self.resource_attr not in self._space and self.max_resource:
self.min_resource = self.min_resource or self._min_resource()
self._resource = self._round(self.min_resource)
if not hier:
self._space_keys.append(self.resource_attr)
else:
self._resource = None
self.incumbent = {}
self.incumbent = self.normalize(self.best_config) # flattened
self.best_obj = self.cost_incumbent = None
self.dim = len(self._tunable_keys)  # total number of tunable dimensions
self._direction_tried = None
self._num_complete4incumbent = self._cost_complete4incumbent = 0
self._num_allowed4incumbent = 2 * self.dim
self._proposed_by = {} # trial_id: int -> incumbent: Dict
self.step_ub = np.sqrt(self.dim)
self.step = self.STEPSIZE * self.step_ub
lb = self.step_lower_bound
if lb > self.step:
self.step = lb * 2
# upper bound
self.step = min(self.step, self.step_ub)
# maximal number of consecutive proposals without improvement
self.dir = 2 ** (min(9, self.dim))
self._configs = {} # dict from trial_id to (config, stepsize)
self._K = 0
self._iter_best_config = 1
self.trial_count_proposed = self.trial_count_complete = 1
self._num_proposedby_incumbent = 0
self._reset_times = 0
# record intermediate trial cost
self._trial_cost = {}
self._same = False # whether the proposed config is the same as best_config
self._init_phase = True # initial phase to increase initial stepsize
self._trunc = 0
# no truncation by default. when > 0, it means how many
# non-zero dimensions to keep in the random unit vector
|
(self)
|
52,811 |
flaml.tune.searcher.flow2
|
_min_resource
|
automatically decide minimal resource
|
def _min_resource(self) -> float:
"""automatically decide minimal resource"""
return self.max_resource / np.power(self.resource_multiple_factor, 5)
|
(self) -> float
|
52,812 |
flaml.tune.searcher.flow2
|
_project
|
project normalized config in the feasible region and set resource_attr
|
def _project(self, config):
"""project normalized config in the feasible region and set resource_attr"""
for key in self._bounded_keys:
value = config[key]
config[key] = max(0, min(1, value))
if self._resource:
config[self.resource_attr] = self._resource
|
(self, config)
|
52,813 |
flaml.tune.searcher.flow2
|
_round
|
round the resource to self.max_resource if close to it
|
def _round(self, resource) -> float:
"""round the resource to self.max_resource if close to it"""
if resource * self.resource_multiple_factor > self.max_resource:
return self.max_resource
return resource
|
(self, resource) -> float
|
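Taken together with _min_resource, the rounding above yields a geometric resource schedule. A quick sketch, assuming max_resource=10000 and resource_multiple_factor=4:
```python
import numpy as np

factor, max_resource = 4, 10000
r = max_resource / np.power(factor, 5)  # _min_resource
schedule = [r]
while r < max_resource:
    nxt = r * factor
    r = max_resource if nxt * factor > max_resource else nxt  # _round(nxt)
    schedule.append(r)
print(schedule)  # [9.765625, 39.0625, 156.25, 625.0, 2500.0, 10000.0]
```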
52,814 |
flaml.tune.searcher.flow2
|
complete_config
|
Generate a complete config from the partial config input.
Add minimal resource to config if available.
|
def complete_config(
self,
partial_config: Dict,
lower: Optional[Dict] = None,
upper: Optional[Dict] = None,
) -> Tuple[Dict, Dict]:
"""Generate a complete config from the partial config input.
Add minimal resource to config if available.
"""
disturb = self._reset_times and partial_config == self.init_config
# if not the first time to complete init_config, use random gaussian
config, space = complete_config(partial_config, self.space, self, disturb, lower, upper)
if partial_config == self.init_config:
self._reset_times += 1
if self._resource:
config[self.resource_attr] = self.min_resource
return config, space
|
(self, partial_config: Dict, lower: Optional[Dict] = None, upper: Optional[Dict] = None) -> Tuple[Dict, Dict]
|
52,815 |
flaml.tune.searcher.flow2
|
config_signature
|
Return the signature tuple of a config.
|
def config_signature(self, config, space: Dict = None) -> tuple:
"""Return the signature tuple of a config."""
config = flatten_dict(config)
space = flatten_dict(space) if space else self._space
value_list = []
# self._space_keys doesn't contain keys with const values,
# e.g., "eval_metric": ["logloss", "error"].
keys = sorted(config.keys()) if self.hierarchical else self._space_keys
for key in keys:
value = config[key]
if key == self.resource_attr:
value_list.append(value)
else:
# key must be in space
domain = space[key]
if self.hierarchical and not (
domain is None or type(domain) in (str, int, float) or isinstance(domain, sample.Domain)
):
# not domain or hashable
# get rid of list type for hierarchical search space.
continue
if isinstance(domain, sample.Integer):
value_list.append(int(round(value)))
else:
value_list.append(value)
return tuple(value_list)
|
(self, config, space: Optional[Dict] = None) -> tuple
|
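A hypothetical de-duplication sketch using the signature tuples above, reusing the flow2 instance and space from the FLOW2 sketch earlier. Because Integer dimensions are rounded, near-identical configs collide on the same signature.
```python
seen = set()
for cfg in ({"lr": 1e-3, "batch_size": 31.6}, {"lr": 1e-3, "batch_size": 32.4}):
    sig = flow2.config_signature(cfg)
    if sig in seen:
        continue  # duplicate trial skipped: both batch sizes round to 32
    seen.add(sig)
```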
52,816 |
flaml.tune.searcher.flow2
|
create
| null |
def create(self, init_config: Dict, obj: float, cost: float, space: Dict) -> Searcher:
# space is the subspace where the init_config is located
flow2 = self.__class__(
init_config,
self.metric,
self.mode,
space,
self.resource_attr,
self.min_resource,
self.max_resource,
self.resource_multiple_factor,
self.cost_attr,
self.seed + 1,
self.lexico_objectives,
)
if self.lexico_objectives is not None:
flow2.best_obj = {}
for k, v in obj.items():
flow2.best_obj[k] = (
-v if self.lexico_objectives["modes"][self.lexico_objectives["metrics"].index(k)] == "max" else v
)
else:
flow2.best_obj = obj * self.metric_op # minimize internally
flow2.cost_incumbent = cost
self.seed += 1
return flow2
|
(self, init_config: Dict, obj: float, cost: float, space: Dict) -> flaml.tune.searcher.suggestion.Searcher
|
52,817 |
flaml.tune.searcher.flow2
|
denormalize
|
denormalize each dimension in config from [0,1].
|
def denormalize(self, config):
"""denormalize each dimension in config from [0,1]."""
return denormalize(config, self._space, self.best_config, self.incumbent, self._random)
|
(self, config)
|
52,818 |
flaml.tune.searcher.flow2
|
lexico_compare
| null |
def lexico_compare(self, result) -> bool:
if self._histories is None:
self._histories, self._f_best = defaultdict(list), {}
for k in self.lexico_objectives["metrics"]:
self._histories[k].append(result[k])
self.update_fbest()
return True
else:
for k in self.lexico_objectives["metrics"]:
self._histories[k].append(result[k])
self.update_fbest()
for k_metric, k_mode in zip(self.lexico_objectives["metrics"], self.lexico_objectives["modes"]):
k_target = (
self.lexico_objectives["targets"][k_metric]
if k_mode == "min"
else -self.lexico_objectives["targets"][k_metric]
)
if not isinstance(self.lexico_objectives["tolerances"][k_metric], str):
tolerance_bound = self._f_best[k_metric] + self.lexico_objectives["tolerances"][k_metric]
else:
assert (
self.lexico_objectives["tolerances"][k_metric][-1] == "%"
), "String tolerance of {} should use %% as the suffix".format(k_metric)
tolerance_bound = self._f_best[k_metric] * (
1 + 0.01 * float(self.lexico_objectives["tolerances"][k_metric].replace("%", ""))
)
if (result[k_metric] < max(tolerance_bound, k_target)) and (
self.best_obj[k_metric]
< max(
tolerance_bound,
k_target,
)
):
continue
elif result[k_metric] < self.best_obj[k_metric]:
return True
else:
return False
for k_metr in self.lexico_objectives["metrics"]:
if result[k_metr] == self.best_obj[k_metr]:
continue
elif result[k_metr] < self.best_obj[k_metr]:
return True
else:
return False
|
(self, result) -> bool
|
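A worked illustration of the tolerance logic above in plain Python, assuming a single absolute tolerance and the default targets (so max(tolerance_bound, target) reduces to the tolerance bound): two configs within tolerance on error_rate are treated as tied, and pred_time breaks the tie.
```python
f_best = {"error_rate": 0.10}  # best error_rate observed so far
tolerance_bound = f_best["error_rate"] + 0.01  # numeric (absolute) tolerance

incumbent = {"error_rate": 0.105, "pred_time": 0.8}
challenger = {"error_rate": 0.108, "pred_time": 0.5}

tied = challenger["error_rate"] < tolerance_bound and incumbent["error_rate"] < tolerance_bound
better = tied and challenger["pred_time"] < incumbent["pred_time"]
print(better)  # True: both within 0.11 on error_rate, and the challenger is faster
```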
52,819 |
flaml.tune.searcher.flow2
|
normalize
|
normalize each dimension in config to [0,1].
|
def normalize(self, config, recursive=False) -> Dict:
"""normalize each dimension in config to [0,1]."""
return normalize(config, self._space, self.best_config, self.incumbent, recursive)
|
(self, config, recursive=False) -> Dict
|
52,820 |
flaml.tune.searcher.flow2
|
on_trial_complete
|
Compare with incumbent.
If better, move, reset num_complete and num_proposed.
If not better and num_complete >= 2*dim, num_allowed += 2.
|
def on_trial_complete(self, trial_id: str, result: Optional[Dict] = None, error: bool = False):
"""
Compare with incumbent.
If better, move, reset num_complete and num_proposed.
If not better and num_complete >= 2*dim, num_allowed += 2.
"""
self.trial_count_complete += 1
if not error and result:
obj = (
result.get(self._metric)
if self.lexico_objectives is None
else {k: result[k] for k in self.lexico_objectives["metrics"]}
)
if obj:
obj = (
{
k: -obj[k] if m == "max" else obj[k]
for k, m in zip(
self.lexico_objectives["metrics"],
self.lexico_objectives["modes"],
)
}
if isinstance(obj, dict)
else obj * self.metric_op
)
if (
self.best_obj is None
or (self.lexico_objectives is None and obj < self.best_obj)
or (self.lexico_objectives is not None and self.lexico_compare(obj))
):
self.best_obj = obj
self.best_config, self.step = self._configs[trial_id]
self.incumbent = self.normalize(self.best_config)
self.cost_incumbent = result.get(self.cost_attr, 1)
if self._resource:
self._resource = self.best_config[self.resource_attr]
self._num_complete4incumbent = 0
self._cost_complete4incumbent = 0
self._num_proposedby_incumbent = 0
self._num_allowed4incumbent = 2 * self.dim
self._proposed_by.clear()
if self._K > 0:
self.step *= np.sqrt(self._K / self._oldK)
self.step = min(self.step, self.step_ub)
self._iter_best_config = self.trial_count_complete
if self._trunc:
self._trunc = min(self._trunc + 1, self.dim)
return
elif self._trunc:
self._trunc = max(self._trunc >> 1, 1)
proposed_by = self._proposed_by.get(trial_id)
if proposed_by == self.incumbent:
self._num_complete4incumbent += 1
cost = result.get(self.cost_attr, 1) if result else self._trial_cost.get(trial_id)
if cost:
self._cost_complete4incumbent += cost
if self._num_complete4incumbent >= 2 * self.dim and self._num_allowed4incumbent == 0:
self._num_allowed4incumbent = 2
if self._num_complete4incumbent == self.dir and (not self._resource or self._resource == self.max_resource):
self._num_complete4incumbent -= 2
self._num_allowed4incumbent = max(self._num_allowed4incumbent, 2)
|
(self, trial_id: str, result: Optional[Dict] = None, error: bool = False)
|
52,821 |
flaml.tune.searcher.flow2
|
on_trial_result
|
Early update of incumbent.
|
def on_trial_result(self, trial_id: str, result: Dict):
"""Early update of incumbent."""
if result:
obj = (
result.get(self._metric)
if self.lexico_objectives is None
else {k: result[k] for k in self.lexico_objectives["metrics"]}
)
if obj:
obj = (
{
k: -obj[k] if m == "max" else obj[k]
for k, m in zip(
self.lexico_objectives["metrics"],
self.lexico_objectives["modes"],
)
}
if isinstance(obj, dict)
else obj * self.metric_op
)
if (
self.best_obj is None
or (self.lexico_objectives is None and obj < self.best_obj)
or (self.lexico_objectives is not None and self.lexico_compare(obj))
):
self.best_obj = obj
config = self._configs[trial_id][0]
if self.best_config != config:
self.best_config = config
if self._resource:
self._resource = config[self.resource_attr]
self.incumbent = self.normalize(self.best_config)
self.cost_incumbent = result.get(self.cost_attr, 1)
self._cost_complete4incumbent = 0
self._num_complete4incumbent = 0
self._num_proposedby_incumbent = 0
self._num_allowed4incumbent = 2 * self.dim
self._proposed_by.clear()
self._iter_best_config = self.trial_count_complete
cost = result.get(self.cost_attr, 1)
# record the cost in case it is pruned and cost info is lost
self._trial_cost[trial_id] = cost
|
(self, trial_id: str, result: Dict)
|
52,822 |
flaml.tune.searcher.flow2
|
rand_vector_gaussian
| null |
def rand_vector_gaussian(self, dim, std=1.0):
return self._random.normal(0, std, dim)
|
(self, dim, std=1.0)
|
52,823 |
flaml.tune.searcher.flow2
|
rand_vector_unit_sphere
| null |
def rand_vector_unit_sphere(self, dim, trunc=0) -> np.ndarray:
vec = self._random.normal(0, 1, dim)
if 0 < trunc < dim:
vec[np.abs(vec).argsort()[: dim - trunc]] = 0
mag = np.linalg.norm(vec)
return vec / mag
|
(self, dim, trunc=0) -> numpy.ndarray
|
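A quick numpy check of the truncation behavior above: with trunc=2, only the two largest-magnitude coordinates survive before normalization, and the result still has unit norm.
```python
import numpy as np

rng = np.random.RandomState(42)
dim, trunc = 5, 2
vec = rng.normal(0, 1, dim)
vec[np.abs(vec).argsort()[: dim - trunc]] = 0  # zero the 3 smallest entries
vec /= np.linalg.norm(vec)
print(np.count_nonzero(vec), round(float(np.linalg.norm(vec)), 6))  # 2 1.0
```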
52,824 |
flaml.tune.searcher.flow2
|
reach
|
whether the incumbent can reach the incumbent of other.
|
def reach(self, other: Searcher) -> bool:
"""whether the incumbent can reach the incumbent of other."""
config1, config2 = self.best_config, other.best_config
incumbent1, incumbent2 = self.incumbent, other.incumbent
if self._resource and config1[self.resource_attr] > config2[self.resource_attr]:
# resource will not decrease
return False
for key in self._unordered_cat_hp:
# unordered cat choice is hard to reach by chance
if config1[key] != config2.get(key):
return False
delta = np.array([incumbent1[key] - incumbent2.get(key, np.inf) for key in self._tunable_keys])
return np.linalg.norm(delta) <= self.step
|
(self, other: flaml.tune.searcher.suggestion.Searcher) -> bool
|
52,825 |
flaml.tune.searcher.flow2
|
set_search_properties
| null |
def set_search_properties(
self,
metric: Optional[str] = None,
mode: Optional[str] = None,
config: Optional[Dict] = None,
) -> bool:
if metric:
self._metric = metric
if mode:
assert mode in ["min", "max"], "`mode` must be 'min' or 'max'."
self._mode = mode
if mode == "max":
self.metric_op = -1.0
elif mode == "min":
self.metric_op = 1.0
if config:
self.space = config
self._space = flatten_dict(self.space)
self._init_search()
return True
|
(self, metric: Optional[str] = None, mode: Optional[str] = None, config: Optional[Dict] = None) -> bool
|
52,826 |
flaml.tune.searcher.flow2
|
suggest
|
Suggest a new config, one of the following cases:
1. same incumbent, increase resource.
2. same resource, move from the incumbent to a random direction.
3. same resource, move from the incumbent to the opposite direction.
|
def suggest(self, trial_id: str) -> Optional[Dict]:
"""Suggest a new config, one of the following cases:
1. same incumbent, increase resource.
2. same resource, move from the incumbent to a random direction.
3. same resource, move from the incumbent to the opposite direction.
"""
# TODO: better decouple FLOW2 config suggestion and stepsize update
self.trial_count_proposed += 1
if (
self._num_complete4incumbent > 0
and self.cost_incumbent
and self._resource
and self._resource < self.max_resource
and (self._cost_complete4incumbent >= self.cost_incumbent * self.resource_multiple_factor)
):
return self._increase_resource(trial_id)
self._num_allowed4incumbent -= 1
move = self.incumbent.copy()
if self._direction_tried is not None:
# return negative direction
for i, key in enumerate(self._tunable_keys):
move[key] -= self._direction_tried[i]
self._direction_tried = None
else:
# propose a new direction
self._direction_tried = self.rand_vector_unit_sphere(self.dim, self._trunc) * self.step
for i, key in enumerate(self._tunable_keys):
move[key] += self._direction_tried[i]
self._project(move)
config = self.denormalize(move)
self._proposed_by[trial_id] = self.incumbent
self._configs[trial_id] = (config, self.step)
self._num_proposedby_incumbent += 1
best_config = self.best_config
if self._init_phase:
if self._direction_tried is None:
if self._same:
same = not any(key not in best_config or value != best_config[key] for key, value in config.items())
if same:
# increase step size
self.step += self.STEPSIZE
self.step = min(self.step, self.step_ub)
else:
same = not any(key not in best_config or value != best_config[key] for key, value in config.items())
self._same = same
if self._num_proposedby_incumbent == self.dir and (not self._resource or self._resource == self.max_resource):
# check stuck condition if using max resource
self._num_proposedby_incumbent -= 2
self._init_phase = False
if self.step < self.step_lower_bound:
return None
# decrease step size
self._oldK = self._K or self._iter_best_config
self._K = self.trial_count_proposed + 1
self.step *= np.sqrt(self._oldK / self._K)
if self._init_phase:
return unflatten_dict(config)
if self._trunc == 1 and self._direction_tried is not None:
# random
for i, key in enumerate(self._tunable_keys):
if self._direction_tried[i] != 0:
for _, generated in generate_variants_compatible(
{"config": {key: self._space[key]}}, random_state=self.rs_random
):
if generated["config"][key] != best_config[key]:
config[key] = generated["config"][key]
return unflatten_dict(config)
break
elif len(config) == len(best_config):
for key, value in best_config.items():
if value != config[key]:
return unflatten_dict(config)
# print('move to', move)
self.incumbent = move
return unflatten_dict(config)
|
(self, trial_id: str) -> Optional[Dict]
|
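A one-dimensional sketch of the propose/mirror pattern above (plain numpy, invented numbers): the first call steps the incumbent along a random unit direction (case 2), and the next call tries the opposite direction (case 3) before a fresh direction is drawn.
```python
import numpy as np

rng = np.random.RandomState(0)
incumbent, step = 0.5, 0.1
direction = rng.normal(0, 1, 1)
direction /= np.linalg.norm(direction)     # rand_vector_unit_sphere in 1-D
trial_a = incumbent + step * direction[0]  # case 2: random direction
trial_b = incumbent - step * direction[0]  # case 3: opposite direction
print(round(trial_a, 3), round(trial_b, 3))  # 0.6 0.4
```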
52,827 |
flaml.tune.searcher.flow2
|
update_fbest
| null |
def update_fbest(
self,
):
obj_initial = self.lexico_objectives["metrics"][0]
feasible_index = np.array([*range(len(self._histories[obj_initial]))])
for k_metric in self.lexico_objectives["metrics"]:
k_values = np.array(self._histories[k_metric])
feasible_value = k_values.take(feasible_index)
self._f_best[k_metric] = np.min(feasible_value)
if not isinstance(self.lexico_objectives["tolerances"][k_metric], str):
tolerance_bound = self._f_best[k_metric] + self.lexico_objectives["tolerances"][k_metric]
else:
assert (
self.lexico_objectives["tolerances"][k_metric][-1] == "%"
), "String tolerance of {} should use %% as the suffix".format(k_metric)
tolerance_bound = self._f_best[k_metric] * (
1 + 0.01 * float(self.lexico_objectives["tolerances"][k_metric].replace("%", ""))
)
feasible_index_filter = np.where(
feasible_value
<= max(
tolerance_bound,
self.lexico_objectives["targets"][k_metric],
)
)[0]
feasible_index = feasible_index.take(feasible_index_filter)
|
(self)
|
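A numeric walk-through of the filtering above in plain numpy (invented histories; targets omitted since they default to -inf for minimization): a config must stay within tolerance of the best error_rate before pred_time is considered at all.
```python
import numpy as np

histories = {
    "error_rate": np.array([0.12, 0.10, 0.11]),
    "pred_time": np.array([0.4, 0.9, 0.5]),
}
feasible = np.arange(3)
f_best = {}
for metric, tol in (("error_rate", 0.01), ("pred_time", 0.0)):
    values = histories[metric].take(feasible)
    f_best[metric] = float(values.min())
    feasible = feasible.take(np.where(values <= f_best[metric] + tol)[0])
print(f_best, feasible)  # {'error_rate': 0.1, 'pred_time': 0.5} [2]
```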
52,828 |
flaml.tune.searcher.blendsearch
|
RandomSearch
|
Class for random search.
|
class RandomSearch(CFO):
"""Class for random search."""
def suggest(self, trial_id: str) -> Optional[Dict]:
if self._points_to_evaluate:
return super().suggest(trial_id)
config, _ = self._ls.complete_config({})
return config
def on_trial_complete(self, trial_id: str, result: Optional[Dict] = None, error: bool = False):
return
def on_trial_result(self, trial_id: str, result: Dict):
return
|
(metric: Optional[str] = None, mode: Optional[str] = None, space: Optional[dict] = None, low_cost_partial_config: Optional[dict] = None, cat_hp_cost: Optional[dict] = None, points_to_evaluate: Optional[List[dict]] = None, evaluated_rewards: Optional[List] = None, time_budget_s: Union[int, float] = None, num_samples: Optional[int] = None, resource_attr: Optional[str] = None, min_resource: Optional[float] = None, max_resource: Optional[float] = None, reduction_factor: Optional[float] = None, global_search_alg: Optional[flaml.tune.searcher.suggestion.Searcher] = None, config_constraints: Optional[List[Tuple[Callable[[dict], float], str, float]]] = None, metric_constraints: Optional[List[Tuple[str, str, float]]] = None, seed: Optional[int] = 20, cost_attr: Optional[str] = 'auto', cost_budget: Optional[float] = None, experimental: Optional[bool] = False, lexico_objectives: Optional[dict] = None, use_incumbent_result_in_evaluation=False, allow_empty_config=False)
|
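For contrast with the CFO sketch earlier, a minimal RandomSearch usage sketch over the same invented toy objective: as the no-op callbacks above show, it ignores trial feedback and simply samples the space.
```python
from flaml import tune
from flaml.tune.searcher.blendsearch import RandomSearch

def evaluate(config):
    return {"loss": (config["x"] - 3) ** 2 + (config["y"] + 1) ** 2}

rs = RandomSearch(
    metric="loss",
    mode="min",
    space={"x": tune.uniform(-10, 10), "y": tune.uniform(-10, 10)},
)
analysis = tune.run(evaluate, search_alg=rs, metric="loss", mode="min", num_samples=50)
```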
52,845 |
flaml.tune.searcher.blendsearch
|
on_trial_complete
| null |
def on_trial_complete(self, trial_id: str, result: Optional[Dict] = None, error: bool = False):
return
|
(self, trial_id: str, result: Optional[Dict] = None, error: bool = False)
|
52,846 |
flaml.tune.searcher.blendsearch
|
on_trial_result
| null |
def on_trial_result(self, trial_id: str, result: Dict):
return
|
(self, trial_id: str, result: Dict)
|
52,851 |
flaml.tune.searcher.blendsearch
|
suggest
| null |
def suggest(self, trial_id: str) -> Optional[Dict]:
if self._points_to_evaluate:
return super().suggest(trial_id)
config, _ = self._ls.complete_config({})
return config
|
(self, trial_id: str) -> Optional[Dict]
|
52,860 |
duckduckgo_search.duckduckgo_search_async
|
AsyncDDGS
|
DuckDuckgo_search async class to get search results from duckduckgo.com.
|
class AsyncDDGS:
"""DuckDuckgo_search async class to get search results from duckduckgo.com."""
_executor: Optional[ThreadPoolExecutor] = None
def __init__(
self,
headers: Optional[Dict[str, str]] = None,
proxy: Optional[str] = None,
proxies: Union[Dict[str, str], str, None] = None, # deprecated
timeout: Optional[int] = 10,
) -> None:
"""Initialize the AsyncDDGS object.
Args:
headers (dict, optional): Dictionary of headers for the HTTP client. Defaults to None.
proxy (str, optional): proxy for the HTTP client, supports http/https/socks5 protocols.
example: "http://user:[email protected]:3128". Defaults to None.
timeout (int, optional): Timeout value for the HTTP client. Defaults to 10.
"""
self.proxy: Optional[str] = proxy
assert self.proxy is None or isinstance(self.proxy, str), "proxy must be a str"
if not proxy and proxies:
warnings.warn("'proxies' is deprecated, use 'proxy' instead.", stacklevel=1)
self.proxy = proxies.get("http") or proxies.get("https") if isinstance(proxies, dict) else proxies
self._asession = requests.AsyncSession(
headers=headers,
proxy=self.proxy,
timeout=timeout,
impersonate="chrome",
allow_redirects=False,
)
self._asession.headers["Referer"] = "https://duckduckgo.com/"
self._exception_event = asyncio.Event()
async def __aenter__(self) -> "AsyncDDGS":
return self
async def __aexit__(
self,
exc_type: Optional[Type[BaseException]] = None,
exc_val: Optional[BaseException] = None,
exc_tb: Optional[TracebackType] = None,
) -> None:
await self._asession.__aexit__(exc_type, exc_val, exc_tb)
def __del__(self) -> None:
if hasattr(self, "_asession") and self._asession._closed is False:
with suppress(RuntimeError, RuntimeWarning):
asyncio.create_task(self._asession.close())
@cached_property
def parser(self) -> Optional["LHTMLParser"]:
"""Get HTML parser."""
return LHTMLParser(remove_blank_text=True, remove_comments=True, remove_pis=True, collect_ids=False)
@classmethod
def _get_executor(cls, max_workers: int = 1) -> ThreadPoolExecutor:
"""Get ThreadPoolExecutor. Default max_workers=1, because >=2 leads to a big overhead"""
if cls._executor is None:
cls._executor = ThreadPoolExecutor(max_workers=max_workers)
return cls._executor
@property
def executor(cls) -> Optional[ThreadPoolExecutor]:
return cls._get_executor()
async def _aget_url(
self,
method: str,
url: str,
data: Optional[Union[Dict[str, str], bytes]] = None,
params: Optional[Dict[str, str]] = None,
) -> bytes:
if self._exception_event.is_set():
raise DuckDuckGoSearchException("Exception occurred in previous call.")
try:
resp = await self._asession.request(method, url, data=data, params=params)
except Exception as ex:
self._exception_event.set()
if "time" in str(ex).lower():
raise TimeoutException(f"{url} {type(ex).__name__}: {ex}") from ex
raise DuckDuckGoSearchException(f"{url} {type(ex).__name__}: {ex}") from ex
logger.debug(f"_aget_url() {resp.url} {resp.status_code} {resp.elapsed:.2f} {len(resp.content)}")
if resp.status_code == 200:
return cast(bytes, resp.content)
self._exception_event.set()
if resp.status_code in (202, 301, 403):
raise RatelimitException(f"{resp.url} {resp.status_code} Ratelimit")
raise DuckDuckGoSearchException(f"{resp.url} return None. {params=} {data=}")
async def _aget_vqd(self, keywords: str) -> str:
"""Get vqd value for a search query."""
resp_content = await self._aget_url("POST", "https://duckduckgo.com", data={"q": keywords})
return _extract_vqd(resp_content, keywords)
async def text(
self,
keywords: str,
region: str = "wt-wt",
safesearch: str = "moderate",
timelimit: Optional[str] = None,
backend: str = "api",
max_results: Optional[int] = None,
) -> List[Dict[str, str]]:
"""DuckDuckGo text search generator. Query params: https://duckduckgo.com/params.
Args:
keywords: keywords for query.
region: wt-wt, us-en, uk-en, ru-ru, etc. Defaults to "wt-wt".
safesearch: on, moderate, off. Defaults to "moderate".
timelimit: d, w, m, y. Defaults to None.
backend: api, html, lite. Defaults to api.
api - collect data from https://duckduckgo.com,
html - collect data from https://html.duckduckgo.com,
lite - collect data from https://lite.duckduckgo.com.
max_results: max number of results. If None, returns results only from the first response. Defaults to None.
Returns:
List of dictionaries with search results.
Raises:
DuckDuckGoSearchException: Base exception for duckduckgo_search errors.
RatelimitException: Inherits from DuckDuckGoSearchException, raised for exceeding API request rate limits.
TimeoutException: Inherits from DuckDuckGoSearchException, raised for API request timeouts.
"""
if LXML_AVAILABLE is False and backend != "api":
backend = "api"
warnings.warn("lxml is not installed. Using backend='api'.", stacklevel=2)
if backend == "api":
results = await self._text_api(keywords, region, safesearch, timelimit, max_results)
elif backend == "html":
results = await self._text_html(keywords, region, safesearch, timelimit, max_results)
elif backend == "lite":
results = await self._text_lite(keywords, region, timelimit, max_results)
return results
async def _text_api(
self,
keywords: str,
region: str = "wt-wt",
safesearch: str = "moderate",
timelimit: Optional[str] = None,
max_results: Optional[int] = None,
) -> List[Dict[str, str]]:
"""DuckDuckGo text search generator. Query params: https://duckduckgo.com/params.
Args:
keywords: keywords for query.
region: wt-wt, us-en, uk-en, ru-ru, etc. Defaults to "wt-wt".
safesearch: on, moderate, off. Defaults to "moderate".
timelimit: d, w, m, y. Defaults to None.
max_results: max number of results. If None, returns results only from the first response. Defaults to None.
Returns:
List of dictionaries with search results.
Raises:
DuckDuckGoSearchException: Base exception for duckduckgo_search errors.
RatelimitException: Inherits from DuckDuckGoSearchException, raised for exceeding API request rate limits.
TimeoutException: Inherits from DuckDuckGoSearchException, raised for API request timeouts.
"""
assert keywords, "keywords is mandatory"
vqd = await self._aget_vqd(keywords)
payload = {
"q": keywords,
"kl": region,
"l": region,
"p": "",
"s": "0",
"df": "",
"vqd": vqd,
"ex": "",
}
safesearch = safesearch.lower()
if safesearch == "moderate":
payload["ex"] = "-1"
elif safesearch == "off":
payload["ex"] = "-2"
elif safesearch == "on": # strict
payload["p"] = "1"
if timelimit:
payload["df"] = timelimit
cache = set()
results: List[Optional[Dict[str, str]]] = [None] * 1100
async def _text_api_page(s: int, page: int) -> None:
priority = page * 100
payload["s"] = f"{s}"
resp_content = await self._aget_url("GET", "https://links.duckduckgo.com/d.js", params=payload)
page_data = _text_extract_json(resp_content, keywords)
for row in page_data:
href = row.get("u", None)
if href and href not in cache and href != f"http://www.google.com/search?q={keywords}":
cache.add(href)
body = _normalize(row["a"])
if body:
priority += 1
result = {
"title": _normalize(row["t"]),
"href": _normalize_url(href),
"body": body,
}
results[priority] = result
tasks = [asyncio.create_task(_text_api_page(0, 0))]
if max_results:
max_results = min(max_results, 500)
tasks.extend(
asyncio.create_task(_text_api_page(s, i)) for i, s in enumerate(range(23, max_results, 50), start=1)
)
try:
await asyncio.gather(*tasks)
except Exception as e:
for task in tasks:
task.cancel()
await asyncio.gather(*tasks, return_exceptions=True)
raise e
return list(islice(filter(None, results), max_results))
async def _text_html(
self,
keywords: str,
region: str = "wt-wt",
safesearch: str = "moderate",
timelimit: Optional[str] = None,
max_results: Optional[int] = None,
) -> List[Dict[str, str]]:
"""DuckDuckGo text search generator. Query params: https://duckduckgo.com/params.
Args:
keywords: keywords for query.
region: wt-wt, us-en, uk-en, ru-ru, etc. Defaults to "wt-wt".
safesearch: on, moderate, off. Defaults to "moderate".
timelimit: d, w, m, y. Defaults to None.
max_results: max number of results. If None, returns results only from the first response. Defaults to None.
Returns:
List of dictionaries with search results.
Raises:
DuckDuckGoSearchException: Base exception for duckduckgo_search errors.
RatelimitException: Inherits from DuckDuckGoSearchException, raised for exceeding API request rate limits.
TimeoutException: Inherits from DuckDuckGoSearchException, raised for API request timeouts.
"""
assert keywords, "keywords is mandatory"
self._asession.headers["Referer"] = "https://html.duckduckgo.com/"
safesearch_base = {"on": "1", "moderate": "-1", "off": "-2"}
payload = {
"q": keywords,
"kl": region,
"p": safesearch_base[safesearch.lower()],
"o": "json",
"api": "d.js",
}
if timelimit:
payload["df"] = timelimit
if max_results and max_results > 20:
vqd = await self._aget_vqd(keywords)
payload["vqd"] = vqd
cache = set()
results: List[Optional[Dict[str, str]]] = [None] * 1100
async def _text_html_page(s: int, page: int) -> None:
priority = page * 100
payload["s"] = f"{s}"
resp_content = await self._aget_url("POST", "https://html.duckduckgo.com/html", data=payload)
if b"No results." in resp_content:
return
tree = await self._asession.loop.run_in_executor(
self.executor, partial(document_fromstring, resp_content, self.parser)
)
for e in tree.xpath("//div[h2]"):
href = e.xpath("./a/@href")
href = href[0] if href else None
if (
href
and href not in cache
and not href.startswith(
("http://www.google.com/search?q=", "https://duckduckgo.com/y.js?ad_domain")
)
):
cache.add(href)
title = e.xpath("./h2/a/text()")
body = e.xpath("./a//text()")
priority += 1
result = {
"title": _normalize(title[0]),
"href": _normalize_url(href),
"body": _normalize("".join(body)),
}
results[priority] = result
tasks = [asyncio.create_task(_text_html_page(0, 0))]
if max_results:
max_results = min(max_results, 500)
tasks.extend(
asyncio.create_task(_text_html_page(s, i)) for i, s in enumerate(range(23, max_results, 50), start=1)
)
try:
await asyncio.gather(*tasks)
except Exception as e:
for task in tasks:
task.cancel()
await asyncio.gather(*tasks, return_exceptions=True)
raise e
return list(islice(filter(None, results), max_results))
async def _text_lite(
self,
keywords: str,
region: str = "wt-wt",
timelimit: Optional[str] = None,
max_results: Optional[int] = None,
) -> List[Dict[str, str]]:
"""DuckDuckGo text search generator. Query params: https://duckduckgo.com/params.
Args:
keywords: keywords for query.
region: wt-wt, us-en, uk-en, ru-ru, etc. Defaults to "wt-wt".
timelimit: d, w, m, y. Defaults to None.
max_results: max number of results. If None, returns results only from the first response. Defaults to None.
Returns:
List of dictionaries with search results.
Raises:
DuckDuckGoSearchException: Base exception for duckduckgo_search errors.
RatelimitException: Inherits from DuckDuckGoSearchException, raised for exceeding API request rate limits.
TimeoutException: Inherits from DuckDuckGoSearchException, raised for API request timeouts.
"""
assert keywords, "keywords is mandatory"
self._asession.headers["Referer"] = "https://lite.duckduckgo.com/"
payload = {
"q": keywords,
"o": "json",
"api": "d.js",
"kl": region,
}
if timelimit:
payload["df"] = timelimit
cache = set()
results: List[Optional[Dict[str, str]]] = [None] * 1100
async def _text_lite_page(s: int, page: int) -> None:
priority = page * 100
payload["s"] = f"{s}"
resp_content = await self._aget_url("POST", "https://lite.duckduckgo.com/lite/", data=payload)
if b"No more results." in resp_content:
return
tree = await self._asession.loop.run_in_executor(
self.executor, partial(document_fromstring, resp_content, self.parser)
)
data = zip(cycle(range(1, 5)), tree.xpath("//table[last()]//tr"))
for i, e in data:
if i == 1:
href = e.xpath(".//a//@href")
href = href[0] if href else None
if (
href is None
or href in cache
or href.startswith(("http://www.google.com/search?q=", "https://duckduckgo.com/y.js?ad_domain"))
):
[next(data, None) for _ in range(3)] # skip block(i=1,2,3,4)
else:
cache.add(href)
title = e.xpath(".//a//text()")[0]
elif i == 2:
body = e.xpath(".//td[@class='result-snippet']//text()")
body = "".join(body).strip()
elif i == 3:
priority += 1
result = {
"title": _normalize(title),
"href": _normalize_url(href),
"body": _normalize(body),
}
results[priority] = result
tasks = [asyncio.create_task(_text_lite_page(0, 0))]
if max_results:
max_results = min(max_results, 500)
tasks.extend(
asyncio.create_task(_text_lite_page(s, i)) for i, s in enumerate(range(23, max_results, 50), start=1)
)
try:
await asyncio.gather(*tasks)
except Exception as e:
for task in tasks:
task.cancel()
await asyncio.gather(*tasks, return_exceptions=True)
raise e
return list(islice(filter(None, results), max_results))
async def images(
self,
keywords: str,
region: str = "wt-wt",
safesearch: str = "moderate",
timelimit: Optional[str] = None,
size: Optional[str] = None,
color: Optional[str] = None,
type_image: Optional[str] = None,
layout: Optional[str] = None,
license_image: Optional[str] = None,
max_results: Optional[int] = None,
) -> List[Dict[str, str]]:
"""DuckDuckGo images search. Query params: https://duckduckgo.com/params.
Args:
keywords: keywords for query.
region: wt-wt, us-en, uk-en, ru-ru, etc. Defaults to "wt-wt".
safesearch: on, moderate, off. Defaults to "moderate".
timelimit: Day, Week, Month, Year. Defaults to None.
size: Small, Medium, Large, Wallpaper. Defaults to None.
color: color, Monochrome, Red, Orange, Yellow, Green, Blue,
Purple, Pink, Brown, Black, Gray, Teal, White. Defaults to None.
type_image: photo, clipart, gif, transparent, line.
Defaults to None.
layout: Square, Tall, Wide. Defaults to None.
license_image: any (All Creative Commons), Public (PublicDomain),
Share (Free to Share and Use), ShareCommercially (Free to Share and Use Commercially),
Modify (Free to Modify, Share, and Use), ModifyCommercially (Free to Modify, Share, and
Use Commercially). Defaults to None.
max_results: max number of results. If None, returns results only from the first response. Defaults to None.
Returns:
List of dictionaries with images search results.
Raises:
DuckDuckGoSearchException: Base exception for duckduckgo_search errors.
RatelimitException: Inherits from DuckDuckGoSearchException, raised for exceeding API request rate limits.
TimeoutException: Inherits from DuckDuckGoSearchException, raised for API request timeouts.
"""
assert keywords, "keywords is mandatory"
vqd = await self._aget_vqd(keywords)
safesearch_base = {"on": "1", "moderate": "1", "off": "-1"}
timelimit = f"time:{timelimit}" if timelimit else ""
size = f"size:{size}" if size else ""
color = f"color:{color}" if color else ""
type_image = f"type:{type_image}" if type_image else ""
layout = f"layout:{layout}" if layout else ""
license_image = f"license:{license_image}" if license_image else ""
payload = {
"l": region,
"o": "json",
"q": keywords,
"vqd": vqd,
"f": f"{timelimit},{size},{color},{type_image},{layout},{license_image}",
"p": safesearch_base[safesearch.lower()],
}
cache = set()
results: List[Optional[Dict[str, str]]] = [None] * 600
async def _images_page(s: int, page: int) -> None:
priority = page * 100
payload["s"] = f"{s}"
resp_content = await self._aget_url("GET", "https://duckduckgo.com/i.js", params=payload)
resp_json = json_loads(resp_content)
page_data = resp_json.get("results", [])
for row in page_data:
image_url = row.get("image")
if image_url and image_url not in cache:
cache.add(image_url)
priority += 1
result = {
"title": row["title"],
"image": _normalize_url(image_url),
"thumbnail": _normalize_url(row["thumbnail"]),
"url": _normalize_url(row["url"]),
"height": row["height"],
"width": row["width"],
"source": row["source"],
}
results[priority] = result
tasks = [asyncio.create_task(_images_page(0, page=0))]
if max_results:
max_results = min(max_results, 500)
tasks.extend(
asyncio.create_task(_images_page(s, i)) for i, s in enumerate(range(100, max_results, 100), start=1)
)
try:
await asyncio.gather(*tasks)
except Exception as e:
for task in tasks:
task.cancel()
await asyncio.gather(*tasks, return_exceptions=True)
raise e
return list(islice(filter(None, results), max_results))
async def videos(
self,
keywords: str,
region: str = "wt-wt",
safesearch: str = "moderate",
timelimit: Optional[str] = None,
resolution: Optional[str] = None,
duration: Optional[str] = None,
license_videos: Optional[str] = None,
max_results: Optional[int] = None,
) -> List[Dict[str, str]]:
"""DuckDuckGo videos search. Query params: https://duckduckgo.com/params.
Args:
keywords: keywords for query.
region: wt-wt, us-en, uk-en, ru-ru, etc. Defaults to "wt-wt".
safesearch: on, moderate, off. Defaults to "moderate".
timelimit: d, w, m. Defaults to None.
resolution: high, standart. Defaults to None.
duration: short, medium, long. Defaults to None.
license_videos: creativeCommon, youtube. Defaults to None.
max_results: max number of results. If None, returns results only from the first response. Defaults to None.
Returns:
List of dictionaries with videos search results.
Raises:
DuckDuckGoSearchException: Base exception for duckduckgo_search errors.
RatelimitException: Inherits from DuckDuckGoSearchException, raised for exceeding API request rate limits.
TimeoutException: Inherits from DuckDuckGoSearchException, raised for API request timeouts.
"""
assert keywords, "keywords is mandatory"
vqd = await self._aget_vqd(keywords)
safesearch_base = {"on": "1", "moderate": "-1", "off": "-2"}
timelimit = f"publishedAfter:{timelimit}" if timelimit else ""
resolution = f"videoDefinition:{resolution}" if resolution else ""
duration = f"videoDuration:{duration}" if duration else ""
license_videos = f"videoLicense:{license_videos}" if license_videos else ""
payload = {
"l": region,
"o": "json",
"q": keywords,
"vqd": vqd,
"f": f"{timelimit},{resolution},{duration},{license_videos}",
"p": safesearch_base[safesearch.lower()],
}
cache = set()
results: List[Optional[Dict[str, str]]] = [None] * 700
async def _videos_page(s: int, page: int) -> None:
priority = page * 100
payload["s"] = f"{s}"
resp_content = await self._aget_url("GET", "https://duckduckgo.com/v.js", params=payload)
resp_json = json_loads(resp_content)
page_data = resp_json.get("results", [])
for row in page_data:
if row["content"] not in cache:
cache.add(row["content"])
priority += 1
results[priority] = row
tasks = [asyncio.create_task(_videos_page(0, 0))]
if max_results:
max_results = min(max_results, 400)
tasks.extend(
asyncio.create_task(_videos_page(s, i)) for i, s in enumerate(range(59, max_results, 59), start=1)
)
try:
await asyncio.gather(*tasks)
except Exception as e:
for task in tasks:
task.cancel()
await asyncio.gather(*tasks, return_exceptions=True)
raise e
return list(islice(filter(None, results), max_results))
async def news(
self,
keywords: str,
region: str = "wt-wt",
safesearch: str = "moderate",
timelimit: Optional[str] = None,
max_results: Optional[int] = None,
) -> List[Dict[str, str]]:
"""DuckDuckGo news search. Query params: https://duckduckgo.com/params.
Args:
keywords: keywords for query.
region: wt-wt, us-en, uk-en, ru-ru, etc. Defaults to "wt-wt".
safesearch: on, moderate, off. Defaults to "moderate".
timelimit: d, w, m. Defaults to None.
max_results: max number of results. If None, returns results only from the first response. Defaults to None.
Returns:
List of dictionaries with news search results.
Raises:
DuckDuckGoSearchException: Base exception for duckduckgo_search errors.
RatelimitException: Inherits from DuckDuckGoSearchException, raised for exceeding API request rate limits.
TimeoutException: Inherits from DuckDuckGoSearchException, raised for API request timeouts.
"""
assert keywords, "keywords is mandatory"
vqd = await self._aget_vqd(keywords)
safesearch_base = {"on": "1", "moderate": "-1", "off": "-2"}
payload = {
"l": region,
"o": "json",
"noamp": "1",
"q": keywords,
"vqd": vqd,
"p": safesearch_base[safesearch.lower()],
}
if timelimit:
payload["df"] = timelimit
cache = set()
results: List[Optional[Dict[str, str]]] = [None] * 700
async def _news_page(s: int, page: int) -> None:
priority = page * 100
payload["s"] = f"{s}"
resp_content = await self._aget_url("GET", "https://duckduckgo.com/news.js", params=payload)
resp_json = json_loads(resp_content)
page_data = resp_json.get("results", [])
for row in page_data:
if row["url"] not in cache:
cache.add(row["url"])
image_url = row.get("image", None)
priority += 1
result = {
"date": datetime.fromtimestamp(row["date"], timezone.utc).isoformat(),
"title": row["title"],
"body": _normalize(row["excerpt"]),
"url": _normalize_url(row["url"]),
"image": _normalize_url(image_url),
"source": row["source"],
}
results[priority] = result
tasks = [asyncio.create_task(_news_page(0, 0))]
if max_results:
max_results = min(max_results, 200)
tasks.extend(
asyncio.create_task(_news_page(s, i)) for i, s in enumerate(range(29, max_results, 29), start=1)
)
try:
await asyncio.gather(*tasks)
except Exception as e:
for task in tasks:
task.cancel()
await asyncio.gather(*tasks, return_exceptions=True)
raise e
return list(islice(filter(None, results), max_results))
async def answers(self, keywords: str) -> List[Dict[str, str]]:
"""DuckDuckGo instant answers. Query params: https://duckduckgo.com/params.
Args:
keywords: keywords for query.
Returns:
List of dictionaries with instant answers results.
Raises:
DuckDuckGoSearchException: Base exception for duckduckgo_search errors.
RatelimitException: Inherits from DuckDuckGoSearchException, raised for exceeding API request rate limits.
TimeoutException: Inherits from DuckDuckGoSearchException, raised for API request timeouts.
"""
assert keywords, "keywords is mandatory"
payload = {
"q": f"what is {keywords}",
"format": "json",
}
resp_content = await self._aget_url("GET", "https://api.duckduckgo.com/", params=payload)
page_data = json_loads(resp_content)
results = []
answer = page_data.get("AbstractText")
url = page_data.get("AbstractURL")
if answer:
results.append(
{
"icon": None,
"text": answer,
"topic": None,
"url": url,
}
)
# related
payload = {
"q": f"{keywords}",
"format": "json",
}
resp_content = await self._aget_url("GET", "https://api.duckduckgo.com/", params=payload)
resp_json = json_loads(resp_content)
page_data = resp_json.get("RelatedTopics", [])
for row in page_data:
topic = row.get("Name")
if not topic:
icon = row["Icon"].get("URL")
results.append(
{
"icon": f"https://duckduckgo.com{icon}" if icon else "",
"text": row["Text"],
"topic": None,
"url": row["FirstURL"],
}
)
else:
for subrow in row["Topics"]:
icon = subrow["Icon"].get("URL")
results.append(
{
"icon": f"https://duckduckgo.com{icon}" if icon else "",
"text": subrow["Text"],
"topic": topic,
"url": subrow["FirstURL"],
}
)
return results
async def suggestions(self, keywords: str, region: str = "wt-wt") -> List[Dict[str, str]]:
"""DuckDuckGo suggestions. Query params: https://duckduckgo.com/params.
Args:
keywords: keywords for query.
region: wt-wt, us-en, uk-en, ru-ru, etc. Defaults to "wt-wt".
Returns:
List of dictionaries with suggestions results.
Raises:
DuckDuckGoSearchException: Base exception for duckduckgo_search errors.
RatelimitException: Inherits from DuckDuckGoSearchException, raised for exceeding API request rate limits.
TimeoutException: Inherits from DuckDuckGoSearchException, raised for API request timeouts.
"""
assert keywords, "keywords is mandatory"
payload = {
"q": keywords,
"kl": region,
}
resp_content = await self._aget_url("GET", "https://duckduckgo.com/ac/", params=payload)
page_data = json_loads(resp_content)
return [r for r in page_data]
async def maps(
self,
keywords: str,
place: Optional[str] = None,
street: Optional[str] = None,
city: Optional[str] = None,
county: Optional[str] = None,
state: Optional[str] = None,
country: Optional[str] = None,
postalcode: Optional[str] = None,
latitude: Optional[str] = None,
longitude: Optional[str] = None,
radius: int = 0,
max_results: Optional[int] = None,
) -> List[Dict[str, str]]:
"""DuckDuckGo maps search. Query params: https://duckduckgo.com/params.
Args:
keywords: keywords for query
place: if set, the other parameters are not used. Defaults to None.
street: house number/street. Defaults to None.
city: city of search. Defaults to None.
county: county of search. Defaults to None.
state: state of search. Defaults to None.
country: country of search. Defaults to None.
postalcode: postalcode of search. Defaults to None.
latitude: geographic coordinate (north-south position). Defaults to None.
longitude: geographic coordinate (east-west position); if latitude and
longitude are set, the other parameters are not used. Defaults to None.
radius: expand the search square by the distance in kilometers. Defaults to 0.
max_results: max number of results. If None, returns results only from the first response. Defaults to None.
Returns:
List of dictionaries with maps search results.
Raises:
DuckDuckGoSearchException: Base exception for duckduckgo_search errors.
RatelimitException: Inherits from DuckDuckGoSearchException, raised for exceeding API request rate limits.
TimeoutException: Inherits from DuckDuckGoSearchException, raised for API request timeouts.
"""
assert keywords, "keywords is mandatory"
vqd = await self._aget_vqd(keywords)
# if latitude and longitude are specified, skip the bbox request to the nominatim api
if latitude and longitude:
lat_t = Decimal(latitude.replace(",", "."))
lat_b = Decimal(latitude.replace(",", "."))
lon_l = Decimal(longitude.replace(",", "."))
lon_r = Decimal(longitude.replace(",", "."))
if radius == 0:
radius = 1
# otherwise, request the bbox from the nominatim api
else:
if place:
params = {
"q": place,
"polygon_geojson": "0",
"format": "jsonv2",
}
else:
params = {
"polygon_geojson": "0",
"format": "jsonv2",
}
if street:
params["street"] = street
if city:
params["city"] = city
if county:
params["county"] = county
if state:
params["state"] = state
if country:
params["country"] = country
if postalcode:
params["postalcode"] = postalcode
# request nominatim api to get coordinates box
resp_content = await self._aget_url(
"GET",
"https://nominatim.openstreetmap.org/search.php",
params=params,
)
if resp_content == b"[]":
raise DuckDuckGoSearchException("maps() Сoordinates are not found, check function parameters.")
resp_json = json_loads(resp_content)
coordinates = resp_json[0]["boundingbox"]
lat_t, lon_l = Decimal(coordinates[1]), Decimal(coordinates[2])
lat_b, lon_r = Decimal(coordinates[0]), Decimal(coordinates[3])
# if a radius is specified, expand the search square
lat_t += Decimal(radius) * Decimal(0.008983)
lat_b -= Decimal(radius) * Decimal(0.008983)
lon_l -= Decimal(radius) * Decimal(0.008983)
lon_r += Decimal(radius) * Decimal(0.008983)
logger.debug(f"bbox coordinates\n{lat_t} {lon_l}\n{lat_b} {lon_r}")
cache = set()
results: List[Dict[str, str]] = []
async def _maps_page(
bbox: Tuple[Decimal, Decimal, Decimal, Decimal],
) -> Optional[List[Dict[str, str]]]:
if max_results and len(results) >= max_results:
return None
lat_t, lon_l, lat_b, lon_r = bbox
params = {
"q": keywords,
"vqd": vqd,
"tg": "maps_places",
"rt": "D",
"mkexp": "b",
"wiki_info": "1",
"is_requery": "1",
"bbox_tl": f"{lat_t},{lon_l}",
"bbox_br": f"{lat_b},{lon_r}",
"strict_bbox": "1",
}
resp_content = await self._aget_url("GET", "https://duckduckgo.com/local.js", params=params)
resp_json = json_loads(resp_content)
page_data = resp_json.get("results", [])
page_results = []
for res in page_data:
r_name = f'{res["name"]} {res["address"]}'
if r_name in cache:
continue
else:
cache.add(r_name)
result = {
"title": res["name"],
"address": res["address"],
"country_code": res["country_code"],
"url": _normalize_url(res["website"]),
"phone": res["phone"] or "",
"latitude": res["coordinates"]["latitude"],
"longitude": res["coordinates"]["longitude"],
"source": _normalize_url(res["url"]),
"image": x.get("image", "") if (x := res["embed"]) else "",
"desc": x.get("description", "") if (x := res["embed"]) else "",
"hours": res["hours"] or "",
"category": res["ddg_category"] or "",
"facebook": f"www.facebook.com/profile.php?id={x}" if (x := res["facebook_id"]) else "",
"instagram": f"https://www.instagram.com/{x}" if (x := res["instagram_id"]) else "",
"twitter": f"https://twitter.com/{x}" if (x := res["twitter_id"]) else "",
}
page_results.append(result)
return page_results
# search squares (bboxes)
start_bbox = (lat_t, lon_l, lat_b, lon_r)
work_bboxes = [start_bbox]
while work_bboxes:
queue_bboxes = []  # bboxes for the next iteration; work_bboxes is replaced with this at the end of the loop body
tasks = []
for bbox in work_bboxes:
tasks.append(asyncio.create_task(_maps_page(bbox)))
# if distance between coordinates > 1, divide the square into 4 parts and save them in queue_bboxes
if _calculate_distance(lat_t, lon_l, lat_b, lon_r) > 1:
lat_t, lon_l, lat_b, lon_r = bbox
lat_middle = (lat_t + lat_b) / 2
lon_middle = (lon_l + lon_r) / 2
bbox1 = (lat_t, lon_l, lat_middle, lon_middle)
bbox2 = (lat_t, lon_middle, lat_middle, lon_r)
bbox3 = (lat_middle, lon_l, lat_b, lon_middle)
bbox4 = (lat_middle, lon_middle, lat_b, lon_r)
queue_bboxes.extend([bbox1, bbox2, bbox3, bbox4])
# gather tasks using asyncio.wait_for and timeout
with suppress(Exception):
work_bboxes_results = await asyncio.gather(*[asyncio.wait_for(task, timeout=10) for task in tasks])
for x in work_bboxes_results:
if isinstance(x, list):
results.extend(x)
elif isinstance(x, dict):
results.append(x)
work_bboxes = queue_bboxes
if not max_results or len(results) >= max_results or len(work_bboxes_results) == 0:
break
return list(islice(results, max_results))
async def translate(
self, keywords: Union[List[str], str], from_: Optional[str] = None, to: str = "en"
) -> List[Dict[str, str]]:
"""DuckDuckGo translate.
Args:
keywords: string or list of strings to translate.
from_: language to translate from (detected automatically if None). Defaults to None.
to: language to translate to. Defaults to "en".
Returns:
List of dictionaries with translated keywords.
Raises:
DuckDuckGoSearchException: Base exception for duckduckgo_search errors.
RatelimitException: Inherits from DuckDuckGoSearchException, raised for exceeding API request rate limits.
TimeoutException: Inherits from DuckDuckGoSearchException, raised for API request timeouts.
"""
assert keywords, "keywords is mandatory"
vqd = await self._aget_vqd("translate")
payload = {
"vqd": vqd,
"query": "translate",
"to": to,
}
if from_:
payload["from"] = from_
results = []
async def _translate_keyword(keyword: str) -> None:
resp_content = await self._aget_url(
"POST",
"https://duckduckgo.com/translation.js",
params=payload,
data=keyword.encode(),
)
page_data = json_loads(resp_content)
page_data["original"] = keyword
results.append(page_data)
if isinstance(keywords, str):
keywords = [keywords]
tasks = [asyncio.create_task(_translate_keyword(keyword)) for keyword in keywords]
try:
await asyncio.gather(*tasks)
except Exception as e:
for task in tasks:
task.cancel()
await asyncio.gather(*tasks, return_exceptions=True)
raise e
return results
|
(headers: Optional[Dict[str, str]] = None, proxy: Optional[str] = None, proxies: Union[Dict[str, str], str, NoneType] = None, timeout: Optional[int] = 10) -> None
|
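The full AsyncDDGS class above is easiest to drive through its async context manager. A minimal usage sketch, assuming the duckduckgo_search package is installed (the query string is illustrative):
import asyncio
from duckduckgo_search import AsyncDDGS

async def main() -> None:
    # The context manager guarantees the underlying HTTP session is closed.
    async with AsyncDDGS() as addgs:
        results = await addgs.text("python asyncio", region="wt-wt", max_results=10)
        for r in results:
            print(r["title"], r["href"])

asyncio.run(main())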
52,861 |
duckduckgo_search.duckduckgo_search_async
|
__aenter__
| null |
def __init__(
self,
headers: Optional[Dict[str, str]] = None,
proxy: Optional[str] = None,
proxies: Union[Dict[str, str], str, None] = None, # deprecated
timeout: Optional[int] = 10,
) -> None:
"""Initialize the AsyncDDGS object.
Args:
headers (dict, optional): Dictionary of headers for the HTTP client. Defaults to None.
proxy (str, optional): proxy for the HTTP client, supports http/https/socks5 protocols.
example: "http://user:[email protected]:3128". Defaults to None.
timeout (int, optional): Timeout value for the HTTP client. Defaults to 10.
"""
self.proxy: Optional[str] = proxy
assert self.proxy is None or isinstance(self.proxy, str), "proxy must be a str"
if not proxy and proxies:
warnings.warn("'proxies' is deprecated, use 'proxy' instead.", stacklevel=1)
self.proxy = proxies.get("http") or proxies.get("https") if isinstance(proxies, dict) else proxies
self._asession = requests.AsyncSession(
headers=headers,
proxy=self.proxy,
timeout=timeout,
impersonate="chrome",
allow_redirects=False,
)
self._asession.headers["Referer"] = "https://duckduckgo.com/"
self._exception_event = asyncio.Event()
|
(self) -> duckduckgo_search.duckduckgo_search_async.AsyncDDGS
|
52,863 |
duckduckgo_search.duckduckgo_search_async
|
__del__
| null |
def __del__(self) -> None:
if hasattr(self, "_asession") and self._asession._closed is False:
with suppress(RuntimeError, RuntimeWarning):
asyncio.create_task(self._asession.close())
|
(self) -> NoneType
|
52,864 |
duckduckgo_search.duckduckgo_search_async
|
__init__
|
Initialize the AsyncDDGS object.
Args:
headers (dict, optional): Dictionary of headers for the HTTP client. Defaults to None.
proxy (str, optional): proxy for the HTTP client, supports http/https/socks5 protocols.
example: "http://user:[email protected]:3128". Defaults to None.
timeout (int, optional): Timeout value for the HTTP client. Defaults to 10.
|
def __init__(
self,
headers: Optional[Dict[str, str]] = None,
proxy: Optional[str] = None,
proxies: Union[Dict[str, str], str, None] = None, # deprecated
timeout: Optional[int] = 10,
) -> None:
"""Initialize the AsyncDDGS object.
Args:
headers (dict, optional): Dictionary of headers for the HTTP client. Defaults to None.
proxy (str, optional): proxy for the HTTP client, supports http/https/socks5 protocols.
example: "http://user:[email protected]:3128". Defaults to None.
timeout (int, optional): Timeout value for the HTTP client. Defaults to 10.
"""
self.proxy: Optional[str] = proxy
assert self.proxy is None or isinstance(self.proxy, str), "proxy must be a str"
if not proxy and proxies:
warnings.warn("'proxies' is deprecated, use 'proxy' instead.", stacklevel=1)
self.proxy = proxies.get("http") or proxies.get("https") if isinstance(proxies, dict) else proxies
self._asession = requests.AsyncSession(
headers=headers,
proxy=self.proxy,
timeout=timeout,
impersonate="chrome",
allow_redirects=False,
)
self._asession.headers["Referer"] = "https://duckduckgo.com/"
self._exception_event = asyncio.Event()
|
(self, headers: Optional[Dict[str, str]] = None, proxy: Optional[str] = None, proxies: Union[Dict[str, str], str, NoneType] = None, timeout: Optional[int] = 10) -> NoneType
|
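Per the __init__ docstring above, headers, proxy, and timeout are forwarded to the HTTP client. A hedged construction sketch (the proxy URL and header value are placeholders, not values from the source):
from duckduckgo_search import AsyncDDGS

addgs = AsyncDDGS(
    headers={"User-Agent": "my-app/0.1"},       # placeholder header
    proxy="socks5://user:pass@localhost:1080",  # placeholder proxy; http/https/socks5 supported
    timeout=20,
)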
52,865 |
duckduckgo_search.duckduckgo_search_async
|
_aget_url
| null |
@property
def executor(cls) -> Optional[ThreadPoolExecutor]:
return cls._get_executor()
|
(self, method: str, url: str, data: Union[Dict[str, str], bytes, NoneType] = None, params: Optional[Dict[str, str]] = None) -> bytes
|
52,866 |
duckduckgo_search.duckduckgo_search_async
|
_aget_vqd
|
Get vqd value for a search query.
|
@property
def executor(cls) -> Optional[ThreadPoolExecutor]:
return cls._get_executor()
|
(self, keywords: str) -> str
|
52,867 |
duckduckgo_search.duckduckgo_search_async
|
_text_api
|
DuckDuckGo text search. Query params: https://duckduckgo.com/params.
Args:
keywords: keywords for query.
region: wt-wt, us-en, uk-en, ru-ru, etc. Defaults to "wt-wt".
safesearch: on, moderate, off. Defaults to "moderate".
timelimit: d, w, m, y. Defaults to None.
max_results: max number of results. If None, returns results only from the first response. Defaults to None.
Returns:
List of dictionaries with search results.
Raises:
DuckDuckGoSearchException: Base exception for duckduckgo_search errors.
RatelimitException: Inherits from DuckDuckGoSearchException, raised for exceeding API request rate limits.
TimeoutException: Inherits from DuckDuckGoSearchException, raised for API request timeouts.
|
@property
def executor(cls) -> Optional[ThreadPoolExecutor]:
return cls._get_executor()
|
(self, keywords: str, region: str = 'wt-wt', safesearch: str = 'moderate', timelimit: Optional[str] = None, max_results: Optional[int] = None) -> List[Dict[str, str]]
|
52,869 |
duckduckgo_search.duckduckgo_search_async
|
_text_lite
|
DuckDuckGo text search. Query params: https://duckduckgo.com/params.
Args:
keywords: keywords for query.
region: wt-wt, us-en, uk-en, ru-ru, etc. Defaults to "wt-wt".
timelimit: d, w, m, y. Defaults to None.
max_results: max number of results. If None, returns results only from the first response. Defaults to None.
Returns:
List of dictionaries with search results.
Raises:
DuckDuckGoSearchException: Base exception for duckduckgo_search errors.
RatelimitException: Inherits from DuckDuckGoSearchException, raised for exceeding API request rate limits.
TimeoutException: Inherits from DuckDuckGoSearchException, raised for API request timeouts.
|
@property
def executor(cls) -> Optional[ThreadPoolExecutor]:
return cls._get_executor()
|
(self, keywords: str, region: str = 'wt-wt', timelimit: Optional[str] = None, max_results: Optional[int] = None) -> List[Dict[str, str]]
|
52,870 |
duckduckgo_search.duckduckgo_search_async
|
answers
|
DuckDuckGo instant answers. Query params: https://duckduckgo.com/params.
Args:
keywords: keywords for query.
Returns:
List of dictionaries with instant answers results.
Raises:
DuckDuckGoSearchException: Base exception for duckduckgo_search errors.
RatelimitException: Inherits from DuckDuckGoSearchException, raised for exceeding API request rate limits.
TimeoutException: Inherits from DuckDuckGoSearchException, raised for API request timeouts.
|
@property
def executor(cls) -> Optional[ThreadPoolExecutor]:
return cls._get_executor()
|
(self, keywords: str) -> List[Dict[str, str]]
|
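A short sketch of the answers endpoint documented above; the keys follow the result dicts built in the class code (icon, text, topic, url):
import asyncio
from duckduckgo_search import AsyncDDGS

async def main() -> None:
    async with AsyncDDGS() as addgs:
        hits = await addgs.answers("gravity")
        for h in hits:
            print(h["topic"], h["text"][:80])

asyncio.run(main())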
52,871 |
duckduckgo_search.duckduckgo_search_async
|
images
|
DuckDuckGo images search. Query params: https://duckduckgo.com/params.
Args:
keywords: keywords for query.
region: wt-wt, us-en, uk-en, ru-ru, etc. Defaults to "wt-wt".
safesearch: on, moderate, off. Defaults to "moderate".
timelimit: Day, Week, Month, Year. Defaults to None.
size: Small, Medium, Large, Wallpaper. Defaults to None.
color: color, Monochrome, Red, Orange, Yellow, Green, Blue,
Purple, Pink, Brown, Black, Gray, Teal, White. Defaults to None.
type_image: photo, clipart, gif, transparent, line.
Defaults to None.
layout: Square, Tall, Wide. Defaults to None.
license_image: any (All Creative Commons), Public (PublicDomain),
Share (Free to Share and Use), ShareCommercially (Free to Share and Use Commercially),
Modify (Free to Modify, Share, and Use), ModifyCommercially (Free to Modify, Share, and
Use Commercially). Defaults to None.
max_results: max number of results. If None, returns results only from the first response. Defaults to None.
Returns:
List of dictionaries with images search results.
Raises:
DuckDuckGoSearchException: Base exception for duckduckgo_search errors.
RatelimitException: Inherits from DuckDuckGoSearchException, raised for exceeding API request rate limits.
TimeoutException: Inherits from DuckDuckGoSearchException, raised for API request timeouts.
|
@property
def executor(cls) -> Optional[ThreadPoolExecutor]:
return cls._get_executor()
|
(self, keywords: str, region: str = 'wt-wt', safesearch: str = 'moderate', timelimit: Optional[str] = None, size: Optional[str] = None, color: Optional[str] = None, type_image: Optional[str] = None, layout: Optional[str] = None, license_image: Optional[str] = None, max_results: Optional[int] = None) -> List[Dict[str, str]]
|
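The images docstring above enumerates the filter values; a sketch combining a few of them (the filter choices are illustrative):
import asyncio
from duckduckgo_search import AsyncDDGS

async def main() -> None:
    async with AsyncDDGS() as addgs:
        imgs = await addgs.images(
            "sunset",
            size="Large",
            license_image="Public",
            max_results=20,
        )
        for img in imgs:
            print(img["image"], img["source"])

asyncio.run(main())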
52,872 |
duckduckgo_search.duckduckgo_search_async
|
maps
|
DuckDuckGo maps search. Query params: https://duckduckgo.com/params.
Args:
keywords: keywords for query
place: if set, the other parameters are not used. Defaults to None.
street: house number/street. Defaults to None.
city: city of search. Defaults to None.
county: county of search. Defaults to None.
state: state of search. Defaults to None.
country: country of search. Defaults to None.
postalcode: postalcode of search. Defaults to None.
latitude: geographic coordinate (north-south position). Defaults to None.
longitude: geographic coordinate (east-west position); if latitude and
longitude are set, the other parameters are not used. Defaults to None.
radius: expand the search square by the distance in kilometers. Defaults to 0.
max_results: max number of results. If None, returns results only from the first response. Defaults to None.
Returns:
List of dictionaries with maps search results.
Raises:
DuckDuckGoSearchException: Base exception for duckduckgo_search errors.
RatelimitException: Inherits from DuckDuckGoSearchException, raised for exceeding API request rate limits.
TimeoutException: Inherits from DuckDuckGoSearchException, raised for API request timeouts.
|
@property
def executor(cls) -> Optional[ThreadPoolExecutor]:
return cls._get_executor()
|
(self, keywords: str, place: Optional[str] = None, street: Optional[str] = None, city: Optional[str] = None, county: Optional[str] = None, state: Optional[str] = None, country: Optional[str] = None, postalcode: Optional[str] = None, latitude: Optional[str] = None, longitude: Optional[str] = None, radius: int = 0, max_results: Optional[int] = None) -> List[Dict[str, str]]
|
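As the maps docstring above notes, place alone overrides the address fields; a minimal sketch (the place value is illustrative):
import asyncio
from duckduckgo_search import AsyncDDGS

async def main() -> None:
    async with AsyncDDGS() as addgs:
        places = await addgs.maps("coffee shop", place="Berlin", max_results=10)
        for p in places:
            print(p["title"], p["address"])

asyncio.run(main())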
52,873 |
duckduckgo_search.duckduckgo_search_async
|
news
|
DuckDuckGo news search. Query params: https://duckduckgo.com/params.
Args:
keywords: keywords for query.
region: wt-wt, us-en, uk-en, ru-ru, etc. Defaults to "wt-wt".
safesearch: on, moderate, off. Defaults to "moderate".
timelimit: d, w, m. Defaults to None.
max_results: max number of results. If None, returns results only from the first response. Defaults to None.
Returns:
List of dictionaries with news search results.
Raises:
DuckDuckGoSearchException: Base exception for duckduckgo_search errors.
RatelimitException: Inherits from DuckDuckGoSearchException, raised for exceeding API request rate limits.
TimeoutException: Inherits from DuckDuckGoSearchException, raised for API request timeouts.
|
@property
def executor(cls) -> Optional[ThreadPoolExecutor]:
return cls._get_executor()
|
(self, keywords: str, region: str = 'wt-wt', safesearch: str = 'moderate', timelimit: Optional[str] = None, max_results: Optional[int] = None) -> List[Dict[str, str]]
|
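A sketch of the news endpoint documented above; timelimit="d" restricts results to the past day, and the date field is an ISO timestamp per the class code:
import asyncio
from duckduckgo_search import AsyncDDGS

async def main() -> None:
    async with AsyncDDGS() as addgs:
        items = await addgs.news("open source", timelimit="d", max_results=10)
        for n in items:
            print(n["date"], n["title"])

asyncio.run(main())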
52,874 |
duckduckgo_search.duckduckgo_search_async
|
suggestions
|
DuckDuckGo suggestions. Query params: https://duckduckgo.com/params.
Args:
keywords: keywords for query.
region: wt-wt, us-en, uk-en, ru-ru, etc. Defaults to "wt-wt".
Returns:
List of dictionaries with suggestions results.
Raises:
DuckDuckGoSearchException: Base exception for duckduckgo_search errors.
RatelimitException: Inherits from DuckDuckGoSearchException, raised for exceeding API request rate limits.
TimeoutException: Inherits from DuckDuckGoSearchException, raised for API request timeouts.
|
@property
def executor(cls) -> Optional[ThreadPoolExecutor]:
return cls._get_executor()
|
(self, keywords: str, region: str = 'wt-wt') -> List[Dict[str, str]]
|
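A sketch of the suggestions endpoint above; the autocomplete service returns small dicts, and the "phrase" key used here is an assumption about that payload, not something stated in the record:
import asyncio
from duckduckgo_search import AsyncDDGS

async def main() -> None:
    async with AsyncDDGS() as addgs:
        sugg = await addgs.suggestions("pyth")
        print([s.get("phrase") for s in sugg])  # "phrase" key assumed from the live endpoint

asyncio.run(main())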
52,875 |
duckduckgo_search.duckduckgo_search_async
|
text
|
DuckDuckGo text search. Query params: https://duckduckgo.com/params.
Args:
keywords: keywords for query.
region: wt-wt, us-en, uk-en, ru-ru, etc. Defaults to "wt-wt".
safesearch: on, moderate, off. Defaults to "moderate".
timelimit: d, w, m, y. Defaults to None.
backend: api, html, lite. Defaults to api.
api - collect data from https://duckduckgo.com,
html - collect data from https://html.duckduckgo.com,
lite - collect data from https://lite.duckduckgo.com.
max_results: max number of results. If None, returns results only from the first response. Defaults to None.
Returns:
List of dictionaries with search results.
Raises:
DuckDuckGoSearchException: Base exception for duckduckgo_search errors.
RatelimitException: Inherits from DuckDuckGoSearchException, raised for exceeding API request rate limits.
TimeoutException: Inherits from DuckDuckGoSearchException, raised for API request timeouts.
|
@property
def executor(cls) -> Optional[ThreadPoolExecutor]:
return cls._get_executor()
|
(self, keywords: str, region: str = 'wt-wt', safesearch: str = 'moderate', timelimit: Optional[str] = None, backend: str = 'api', max_results: Optional[int] = None) -> List[Dict[str, str]]
|
52,876 |
duckduckgo_search.duckduckgo_search_async
|
translate
|
DuckDuckGo translate.
Args:
keywords: string or list of strings to translate.
from_: language to translate from (detected automatically if None). Defaults to None.
to: language to translate to. Defaults to "en".
Returns:
List of dictionaries with translated keywords.
Raises:
DuckDuckGoSearchException: Base exception for duckduckgo_search errors.
RatelimitException: Inherits from DuckDuckGoSearchException, raised for exceeding API request rate limits.
TimeoutException: Inherits from DuckDuckGoSearchException, raised for API request timeouts.
|
@property
def executor(cls) -> Optional[ThreadPoolExecutor]:
return cls._get_executor()
|
(self, keywords: Union[List[str], str], from_: Optional[str] = None, to: str = 'en') -> List[Dict[str, str]]
|
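A sketch of translate as documented above; leaving from_ unset lets the service detect the source language, and each result dict gains an "original" key per the class code:
import asyncio
from duckduckgo_search import AsyncDDGS

async def main() -> None:
    async with AsyncDDGS() as addgs:
        out = await addgs.translate(["hello", "world"], to="de")
        print(out)

asyncio.run(main())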
52,877 |
duckduckgo_search.duckduckgo_search_async
|
videos
|
DuckDuckGo videos search. Query params: https://duckduckgo.com/params.
Args:
keywords: keywords for query.
region: wt-wt, us-en, uk-en, ru-ru, etc. Defaults to "wt-wt".
safesearch: on, moderate, off. Defaults to "moderate".
timelimit: d, w, m. Defaults to None.
resolution: high, standart. Defaults to None.
duration: short, medium, long. Defaults to None.
license_videos: creativeCommon, youtube. Defaults to None.
max_results: max number of results. If None, returns results only from the first response. Defaults to None.
Returns:
List of dictionaries with videos search results.
Raises:
DuckDuckGoSearchException: Base exception for duckduckgo_search errors.
RatelimitException: Inherits from DuckDuckGoSearchException, raised for exceeding API request rate limits.
TimeoutException: Inherits from DuckDuckGoSearchException, raised for API request timeouts.
|
@property
def executor(cls) -> Optional[ThreadPoolExecutor]:
return cls._get_executor()
|
(self, keywords: str, region: str = 'wt-wt', safesearch: str = 'moderate', timelimit: Optional[str] = None, resolution: Optional[str] = None, duration: Optional[str] = None, license_videos: Optional[str] = None, max_results: Optional[int] = None) -> List[Dict[str, str]]
|
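A sketch of the videos endpoint above; the raw API rows are returned as-is, so keys beyond "content" (used for deduplication in the class code) are accessed defensively:
import asyncio
from duckduckgo_search import AsyncDDGS

async def main() -> None:
    async with AsyncDDGS() as addgs:
        vids = await addgs.videos("chess", resolution="high", duration="short", max_results=10)
        for v in vids:
            print(v.get("title"), v.get("content"))

asyncio.run(main())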
52,878 |
duckduckgo_search.duckduckgo_search
|
DDGS
| null |
class DDGS(AsyncDDGS):
_loop: asyncio.AbstractEventLoop = asyncio.new_event_loop()
Thread(target=_loop.run_forever, daemon=True).start() # Start the event loop run in a separate thread.
def __init__(
self,
headers: Optional[Dict[str, str]] = None,
proxy: Optional[str] = None,
proxies: Union[Dict[str, str], str, None] = None, # deprecated
timeout: Optional[int] = 10,
) -> None:
"""Initialize the DDGS object.
Args:
headers (dict, optional): Dictionary of headers for the HTTP client. Defaults to None.
proxy (str, optional): proxy for the HTTP client, supports http/https/socks5 protocols.
example: "http://user:[email protected]:3128". Defaults to None.
timeout (int, optional): Timeout value for the HTTP client. Defaults to 10.
"""
super().__init__(headers=headers, proxy=proxy, proxies=proxies, timeout=timeout)
def __enter__(self) -> "DDGS":
return self
def __exit__(
self,
exc_type: Optional[Type[BaseException]],
exc_val: Optional[BaseException],
exc_tb: Optional[TracebackType],
) -> None:
self._close_session()
def __del__(self) -> None:
self._close_session()
def _close_session(self) -> None:
"""Close the curl-cffi async session."""
if hasattr(self, "_asession") and self._asession._closed is False:
self._run_async_in_thread(self._asession.close())
def _run_async_in_thread(self, coro: Awaitable[Any]) -> Any:
"""Runs an async coroutine in a separate thread."""
future: Future[Any] = asyncio.run_coroutine_threadsafe(coro, self._loop)
result = future.result()
return result
def text(self, *args: Any, **kwargs: Any) -> Any:
return self._run_async_in_thread(super().text(*args, **kwargs))
def images(self, *args: Any, **kwargs: Any) -> Any:
return self._run_async_in_thread(super().images(*args, **kwargs))
def videos(self, *args: Any, **kwargs: Any) -> Any:
return self._run_async_in_thread(super().videos(*args, **kwargs))
def news(self, *args: Any, **kwargs: Any) -> Any:
return self._run_async_in_thread(super().news(*args, **kwargs))
def answers(self, *args: Any, **kwargs: Any) -> Any:
return self._run_async_in_thread(super().answers(*args, **kwargs))
def suggestions(self, *args: Any, **kwargs: Any) -> Any:
return self._run_async_in_thread(super().suggestions(*args, **kwargs))
def maps(self, *args: Any, **kwargs: Any) -> Any:
return self._run_async_in_thread(super().maps(*args, **kwargs))
def translate(self, *args: Any, **kwargs: Any) -> Any:
return self._run_async_in_thread(super().translate(*args, **kwargs))
|
(headers: Optional[Dict[str, str]] = None, proxy: Optional[str] = None, proxies: Union[Dict[str, str], str, NoneType] = None, timeout: Optional[int] = 10) -> None
|
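The DDGS record above wraps every coroutine in _run_async_in_thread, so the synchronous API needs no asyncio boilerplate on the caller's side:
from duckduckgo_search import DDGS

with DDGS(timeout=20) as ddgs:
    results = ddgs.text("duckduckgo api", max_results=5)
    for r in results:
        print(r["title"])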
52,881 |
duckduckgo_search.duckduckgo_search
|
__del__
| null |
def __del__(self) -> None:
self._close_session()
|
(self) -> NoneType
|
52,882 |
duckduckgo_search.duckduckgo_search
|
__enter__
| null |
def __enter__(self) -> "DDGS":
return self
|
(self) -> duckduckgo_search.duckduckgo_search.DDGS
|
52,883 |
duckduckgo_search.duckduckgo_search
|
__exit__
| null |
def __exit__(
self,
exc_type: Optional[Type[BaseException]],
exc_val: Optional[BaseException],
exc_tb: Optional[TracebackType],
) -> None:
self._close_session()
|
(self, exc_type: Optional[Type[BaseException]], exc_val: Optional[BaseException], exc_tb: Optional[traceback]) -> NoneType
|
52,884 |
duckduckgo_search.duckduckgo_search
|
__init__
|
Initialize the DDGS object.
Args:
headers (dict, optional): Dictionary of headers for the HTTP client. Defaults to None.
proxy (str, optional): proxy for the HTTP client, supports http/https/socks5 protocols.
example: "http://user:[email protected]:3128". Defaults to None.
timeout (int, optional): Timeout value for the HTTP client. Defaults to 10.
|
def __init__(
self,
headers: Optional[Dict[str, str]] = None,
proxy: Optional[str] = None,
proxies: Union[Dict[str, str], str, None] = None, # deprecated
timeout: Optional[int] = 10,
) -> None:
"""Initialize the DDGS object.
Args:
headers (dict, optional): Dictionary of headers for the HTTP client. Defaults to None.
proxy (str, optional): proxy for the HTTP client, supports http/https/socks5 protocols.
example: "http://user:[email protected]:3128". Defaults to None.
timeout (int, optional): Timeout value for the HTTP client. Defaults to 10.
"""
super().__init__(headers=headers, proxy=proxy, proxies=proxies, timeout=timeout)
|
(self, headers: Optional[Dict[str, str]] = None, proxy: Optional[str] = None, proxies: Union[Dict[str, str], str, NoneType] = None, timeout: Optional[int] = 10) -> NoneType
|
52,887 |
duckduckgo_search.duckduckgo_search
|
_close_session
|
Close the curl-cffi async session.
|
def _close_session(self) -> None:
"""Close the curl-cffi async session."""
if hasattr(self, "_asession") and self._asession._closed is False:
self._run_async_in_thread(self._asession.close())
|
(self) -> NoneType
|
52,888 |
duckduckgo_search.duckduckgo_search
|
_run_async_in_thread
|
Runs an async coroutine in a separate thread.
|
def _run_async_in_thread(self, coro: Awaitable[Any]) -> Any:
"""Runs an async coroutine in a separate thread."""
future: Future[Any] = asyncio.run_coroutine_threadsafe(coro, self._loop)
result = future.result()
return result
|
(self, coro: Awaitable[Any]) -> Any
|
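The record above is the heart of the sync wrapper: a background event loop in a daemon thread plus run_coroutine_threadsafe. A standalone sketch of the same pattern, using only the standard library (all names here are illustrative):
import asyncio
from threading import Thread

loop = asyncio.new_event_loop()
Thread(target=loop.run_forever, daemon=True).start()  # loop lives in a daemon thread

async def work(x: int) -> int:
    await asyncio.sleep(0.1)
    return x * 2

# Submit the coroutine to the background loop and block until it completes.
future = asyncio.run_coroutine_threadsafe(work(21), loop)
print(future.result())  # 42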
52,892 |
duckduckgo_search.duckduckgo_search
|
answers
| null |
def answers(self, *args: Any, **kwargs: Any) -> Any:
return self._run_async_in_thread(super().answers(*args, **kwargs))
|
(self, *args: Any, **kwargs: Any) -> Any
|
52,893 |
duckduckgo_search.duckduckgo_search
|
images
| null |
def images(self, *args: Any, **kwargs: Any) -> Any:
return self._run_async_in_thread(super().images(*args, **kwargs))
|
(self, *args: Any, **kwargs: Any) -> Any
|
52,894 |
duckduckgo_search.duckduckgo_search
|
maps
| null |
def maps(self, *args: Any, **kwargs: Any) -> Any:
return self._run_async_in_thread(super().maps(*args, **kwargs))
|
(self, *args: Any, **kwargs: Any) -> Any
|
52,895 |
duckduckgo_search.duckduckgo_search
|
news
| null |
def news(self, *args: Any, **kwargs: Any) -> Any:
return self._run_async_in_thread(super().news(*args, **kwargs))
|
(self, *args: Any, **kwargs: Any) -> Any
|
52,896 |
duckduckgo_search.duckduckgo_search
|
suggestions
| null |
def suggestions(self, *args: Any, **kwargs: Any) -> Any:
return self._run_async_in_thread(super().suggestions(*args, **kwargs))
|
(self, *args: Any, **kwargs: Any) -> Any
|
52,897 |
duckduckgo_search.duckduckgo_search
|
text
| null |
def text(self, *args: Any, **kwargs: Any) -> Any:
return self._run_async_in_thread(super().text(*args, **kwargs))
|
(self, *args: Any, **kwargs: Any) -> Any
|
52,898 |
duckduckgo_search.duckduckgo_search
|
translate
| null |
def translate(self, *args: Any, **kwargs: Any) -> Any:
return self._run_async_in_thread(super().translate(*args, **kwargs))
|
(self, *args: Any, **kwargs: Any) -> Any
|
52,899 |
duckduckgo_search.duckduckgo_search
|
videos
| null |
def videos(self, *args: Any, **kwargs: Any) -> Any:
return self._run_async_in_thread(super().videos(*args, **kwargs))
|
(self, *args: Any, **kwargs: Any) -> Any
|
52,908 |
argparse_ext
|
HelpFormatter
|
formatter for generating usage messages and argument help strings;
improvements over super class:
- default indent increment is 4 (instead of 2);
- default max help position is 48 (instead of 24);
- short and long options are formatted together;
- do not list options in usage;
- do not wrap usage;
- enclose metavars of mandatory arguments in braces;
- do not format choices metavar;
- do not capitalize default optional metavar;
|
class HelpFormatter(argparse.HelpFormatter):
'''
formatter for generating usage messages and argument help strings;
improvements over super class:
- default indent increment is 4 (instead of 2);
- default max help position is 48 (instead of 24);
- short and long options are formatted together;
- do not list options in usage;
- do not wrap usage;
- enclose metavars of mandatory arguments in braces;
- do not format choices metavar;
- do not capitalize default optional metavar;
'''
def __init__(self, prog, indent_increment=4, max_help_position=48,
width=None):
return super().__init__(
prog=prog,
indent_increment=indent_increment,
max_help_position=max_help_position,
width=width,
)
def _format_action_invocation(self, action):
if not action.option_strings:
default = self._get_default_metavar_for_positional(action)
args_string = self._format_args(action, default)
if args_string:
return args_string
return ', '.join(self._metavar_formatter(action, default)(1))
else:
if action.nargs == 0:
return '{}{}'.format(
' ' * 4 * int(action.option_strings[0].startswith('--')),
', '.join(action.option_strings),
)
else:
default = self._get_default_metavar_for_optional(action)
args_string = self._format_args(action, default)
return '{}{}'.format(
' ' * 4 * int(action.option_strings[0].startswith('--')),
', '.join(action.option_strings),
) + ' ' + args_string
def _metavar_formatter(self, action, default_metavar):
if action.metavar is not None:
result = action.metavar
# elif action.choices is not None:
# choice_strs = [str(choice) for choice in action.choices]
# result = '%s' % ','.join(choice_strs)
else:
result = default_metavar
def format(tuple_size):
if isinstance(result, tuple):
return result
else:
return (result, ) * tuple_size
return format
def _format_args(self, action, default_metavar):
get_metavar = self._metavar_formatter(action, default_metavar)
if action.nargs is None:
result = '{%s}' % get_metavar(1)
elif action.nargs == OPTIONAL:
result = '[%s]' % get_metavar(1)
elif action.nargs == ZERO_OR_MORE:
result = '[%s [%s ...]]' % get_metavar(2)
elif action.nargs == ONE_OR_MORE:
result = '%s [%s ...]' % get_metavar(2)
elif action.nargs == REMAINDER:
result = '...'
elif action.nargs == PARSER:
result = '%s ...' % get_metavar(1)
else:
formats = ['{%s}' for _ in range(action.nargs)]
result = ' '.join(formats) % get_metavar(action.nargs)
return result
def _format_usage(self, usage, actions, groups, prefix):
if prefix is None:
prefix = _('usage: ')
# if usage is specified, use that
if usage is not None:
usage = usage % dict(prog=self._prog)
# if no optionals or positionals are available, usage is just prog
elif usage is None and not actions:
usage = '%(prog)s' % dict(prog=self._prog)
# if optionals and positionals are available, calculate usage
elif usage is None:
prog = '%(prog)s' % dict(prog=self._prog)
# split optionals from positionals
optionals = []
positionals = []
for action in actions:
if action.option_strings:
optionals.append(action)
else:
positionals.append(action)
# build full usage string
format = self._format_actions_usage
action_usage = format(positionals, groups)
usage = ' '.join([s for s in [
prog, '[options]' if optionals else '', action_usage
] if s])
# prefix with 'usage:'
return '%s%s\n\n' % (prefix, usage)
def _get_default_metavar_for_optional(self, action):
return action.dest
|
(prog, indent_increment=4, max_help_position=48, width=None)
|
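The HelpFormatter record above plugs into argparse via formatter_class; a minimal sketch, assuming the argparse_ext package is importable (the arguments are illustrative):
import argparse
from argparse_ext import HelpFormatter

parser = argparse.ArgumentParser(prog="demo", formatter_class=HelpFormatter)
parser.add_argument("-v", "--verbose", action="store_true", help="enable verbose output")
parser.add_argument("path", help="input file")
parser.print_help()  # usage stays on one line; optionals are collapsed to "[options]"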
52,909 |
argparse_ext
|
__init__
| null |
def __init__(self, prog, indent_increment=4, max_help_position=48,
width=None):
return super().__init__(
prog=prog,
indent_increment=indent_increment,
max_help_position=max_help_position,
width=width,
)
|
(self, prog, indent_increment=4, max_help_position=48, width=None)
|
52,915 |
argparse_ext
|
_format_action_invocation
| null |
def _format_action_invocation(self, action):
if not action.option_strings:
default = self._get_default_metavar_for_positional(action)
args_string = self._format_args(action, default)
if args_string:
return args_string
return ', '.join(self._metavar_formatter(action, default)(1))
else:
if action.nargs == 0:
return '{}{}'.format(
' ' * 4 * int(action.option_strings[0].startswith('--')),
', '.join(action.option_strings),
)
else:
default = self._get_default_metavar_for_optional(action)
args_string = self._format_args(action, default)
return '{}{}'.format(
' ' * 4 * int(action.option_strings[0].startswith('--')),
', '.join(action.option_strings),
) + ' ' + args_string
|
(self, action)
|