Optimizer API#
Optimizers are much like optimizers in PyTorch, but they optimize queries and search rather than model parameters. Each optimizer performs a series of steps that collectively guide the search towards the optimal trajectory.
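In practice, an optimizer is driven by repeatedly calling its step() method until it raises StopIteration. Below is a minimal driver-loop sketch; `optimizer` stands for any already-constructed optimizer from this module, and the result accumulation is purely illustrative.

    from collections.abc import Mapping

    # Sketch of a generic driver loop; `optimizer` is any already-constructed
    # optimizer from bocoel.core.optim.
    results: dict[int, float] = {}

    while True:
        try:
            # Each step returns a mapping of storage indices to scores.
            scores: Mapping[int, float] = optimizer.step()
        except StopIteration:
            # Raised once the optimizer has no more steps to perform.
            break
        results.update(scores)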
- class bocoel.core.optim.AxServiceOptimizer(index_eval: IndexEvaluator, index: Index, *, sobol_steps: int = 0, device: str | device = 'cpu', task: Task = Task.EXPLORE, acqf: str | AcquisitionFunc = AcquisitionFunc.AUTO, surrogate: str | SurrogateModel = SurrogateModel.AUTO, surrogate_kwargs: SurrogateOptions | None = None)[source]#
The Ax optimizer that uses the service API. See https://ax.dev/tutorials/gpei_hartmann_service.html
- __init__(index_eval: IndexEvaluator, index: Index, *, sobol_steps: int = 0, device: str | device = 'cpu', task: Task = Task.EXPLORE, acqf: str | AcquisitionFunc = AcquisitionFunc.AUTO, surrogate: str | SurrogateModel = SurrogateModel.AUTO, surrogate_kwargs: SurrogateOptions | None = None) None [source]#
- Parameters:
index_eval – The evaluator to use for the query.
index – The index to use for querying.
sobol_steps – The number of steps to use for the Sobol sequence.
device – The device to use for the optimization.
task – The task to use for the optimization.
acqf – The acquisition function to use for the optimization.
surrogate – The surrogate model to use for the optimization.
surrogate_kwargs – The keyword arguments to pass to the surrogate model.
- property task: Task#
The task to use for the optimization.
- Returns:
One of Task.EXPLORE or Task.MINIMIZE or Task.MAXIMIZE.
- step() Mapping[int, float] [source]#
Optimize one step with the ax optimizer.
Note
With recent versions of Ax, get_next_trials seems to crash when workers > 1, so that option has been removed.
- Raises:
StopIteration – When there are no more steps.
- Returns:
A mapping from the resulting trial IDs to their evaluation results.
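A minimal construction sketch, using only the parameters documented above; `index_eval` and `index` are assumed to have been built elsewhere, and the remaining arguments keep their defaults (Task.EXPLORE, AUTO acquisition function and surrogate).

    from bocoel.core.optim import AxServiceOptimizer

    # Sketch: `index_eval` (an IndexEvaluator) and `index` (an Index) are
    # assumed to exist already.
    optimizer = AxServiceOptimizer(
        index_eval,
        index,
        sobol_steps=5,  # warm up with a few quasi-random Sobol trials
        device="cpu",
    )

    results = optimizer.step()  # one round of trials: trial id -> evaluation result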
- class bocoel.core.optim.BruteForceOptimizer(index_eval: IndexEvaluator, index: Index, *, total: int, batch_size: int)[source]#
The brute force optimizer that exhaustively evaluates indices, batch_size at a time, until total indices have been scored.
- property task: Task#
The task to use for the optimization.
- Returns:
One of Task.EXPLORE or Task.MINIMIZE or Task.MAXIMIZE.
- step() Mapping[int, float] [source]#
Perform a single step of optimization. This is a shortcut into the optimization process. For methods that evaluate the entire search space at once, each step outputs a slice of that search.
- Returns:
A mapping of storage indices to the corresponding scores.
- Raises:
StopIteration – If the optimization is complete.
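A construction sketch under the same assumptions as above (pre-existing `index_eval` and `index`); with these arguments the optimizer presumably walks through indices 0 through 999, 64 per step, before raising StopIteration.

    from bocoel.core.optim import BruteForceOptimizer

    # Sketch: exhaustively score `total` indices, `batch_size` at a time.
    optimizer = BruteForceOptimizer(index_eval, index, total=1000, batch_size=64)
    first_batch = optimizer.step()  # the first slice: storage index -> score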
- class bocoel.core.optim.CorpusEvaluator(corpus: Corpus, adaptor: Adaptor)[source]#
Evaluates the corpus with the given adaptor.
- class bocoel.core.optim.CachedIndexEvaluator(index_eval: IndexEvaluator, /)[source]#
Since there might be duplicate indices (and a lot of them during evaluation), this utility evaluator caches results and only computes the unseen indices. This makes evaluation with larger models considerably faster.
- __init__(index_eval: IndexEvaluator, /) None [source]#
- class bocoel.core.optim.IndexEvaluator(*args, **kwargs)[source]#
A protocol for evaluating with the indices.
- __init__(*args, **kwargs)#
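As a sketch of how the two classes above fit together: the toy class below is assumed to satisfy the IndexEvaluator protocol as a callable mapping a batch of storage indices to one score per index (the exact call signature is not spelled out on this page), and wrapping it in CachedIndexEvaluator is assumed to preserve that call convention while deduplicating repeated indices.

    import numpy as np
    from numpy.typing import NDArray

    from bocoel.core.optim import CachedIndexEvaluator

    # Toy evaluator: the score of an index is just its square.
    class SquaredScore:
        def __call__(self, indices: NDArray, /) -> NDArray:
            return np.asarray(indices, dtype=float) ** 2

    evaluator = CachedIndexEvaluator(SquaredScore())
    evaluator(np.array([1, 2, 3]))  # computes and caches scores for 1, 2, 3
    evaluator(np.array([2, 3, 4]))  # only index 4 should need a fresh computation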
- class bocoel.core.optim.Optimizer(*args, **kwargs)[source]#
The protocol for optimizers. Optimizers are used for optimizing the search space, i.e., finding the best exploration sequence for a given task.
- __init__(*args, **kwargs)#
- abstract property task: Task#
The task to use for the optimization.
- Returns:
One of Task.EXPLORE or Task.MINIMIZE or Task.MAXIMIZE.
- abstract step() Mapping[int, float] [source]#
Perform a single step of optimization. This is a shortcut into the optimization process. For methods that evaluate the entire search space at once, each step outputs a slice of that search.
- Returns:
A mapping of storage indices to the corresponding scores.
- Raises:
StopIteration – If the optimization is complete.
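Since Optimizer is a protocol, any class with a task property and a step() method honoring the contract above should conform to it structurally (for type-checking purposes). The sketch below walks a fixed list of indices, one per step; the evaluation callable and the task value are injected because their construction is not covered on this page.

    from collections.abc import Mapping

    # Sketch: a minimal structurally-conforming optimizer.
    class FixedOrderOptimizer:
        def __init__(self, evaluate, indices: list[int], task) -> None:
            self._evaluate = evaluate      # callable: batch of indices -> scores
            self._indices = list(indices)
            self._task = task
            self._cursor = 0

        @property
        def task(self):
            return self._task

        def step(self) -> Mapping[int, float]:
            if self._cursor >= len(self._indices):
                raise StopIteration  # optimization is complete
            idx = self._indices[self._cursor]
            self._cursor += 1
            return {idx: float(self._evaluate([idx])[0])}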
- class bocoel.core.optim.QueryEvaluator(*args, **kwargs)[source]#
A protocol for evaluating the query results.
- __init__(*args, **kwargs)#
- class bocoel.core.optim.SearchEvaluator(*args, **kwargs)[source]#
A protocol for evaluating the search results.
- __init__(*args, **kwargs)#
- class bocoel.core.optim.RemainingSteps(count: int | float)[source]#
A simple counter that counts down the number of steps remaining.
- step(size: int = 1) None [source]#
Perform a single step.
- Parameters:
size – The number of steps to perform.
- property done: bool#
Whether the counter has run out of steps.
- Returns:
True if no steps remain, False otherwise.
- classmethod infinite() RemainingSteps [source]#
Create a counter that never ends.
- Returns:
A counter that never ends.
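A small usage sketch of the counter, exercising only the members documented above:

    from bocoel.core.optim import RemainingSteps

    remaining = RemainingSteps(3)
    while not remaining.done:
        remaining.step()      # counts down by 1; pass size=... for larger jumps

    assert remaining.done     # True after three steps

    endless = RemainingSteps.infinite()
    assert not endless.done   # a counter that never ends stays not-done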
- class bocoel.core.optim.RandomOptimizer(index_eval: IndexEvaluator, index: Index, *, samples: int, batch_size: int)[source]#
The random optimizer that uses random search.
- __init__(index_eval: IndexEvaluator, index: Index, *, samples: int, batch_size: int) None [source]#
- Parameters:
index_eval – The evaluator to use for the storage.
index – The index to use for the query.
samples – The number of samples to use for the optimization.
batch_size – The number of samples to evaluate at once.
- property task: Task#
The task to use for the optimization.
- Returns:
One of Task.EXPLORE or Task.MINIMIZE or Task.MAXIMIZE.
- step() Mapping[int, float] [source]#
Perform a single step of optimization. This is a shortcut into the optimization process. For methods that evaluate the entire search space at once, each step outputs a slice of that search.
- Returns:
A mapping of storage indices to the corresponding scores.
- Raises:
StopIteration – If the optimization is complete.
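A construction sketch, again assuming a pre-existing `index_eval` and `index`; 256 random samples evaluated 32 at a time should amount to roughly eight calls to step().

    from bocoel.core.optim import RandomOptimizer

    # Sketch: random search over the index.
    optimizer = RandomOptimizer(index_eval, index, samples=256, batch_size=32)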
- class bocoel.core.optim.KMeansOptimizer(index_eval: IndexEvaluator, index: Index, *, batch_size: int, embeddings: ndarray[Any, dtype[_ScalarType_co]], model_kwargs: KMeansOptions)[source]#
The KMeans optimizer that uses clustering algorithms.
- __init__(index_eval: IndexEvaluator, index: Index, *, batch_size: int, embeddings: ndarray[Any, dtype[_ScalarType_co]], model_kwargs: KMeansOptions) None [source]#
- Parameters:
index_eval – The evaluator to use on the storage.
index – The index to use for the query.
batch_size – The number of embeddings to evaluate at once.
embeddings – The embeddings to cluster.
model_kwargs – The keyword arguments to pass to the KMeans model.
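A construction sketch; the embeddings below are a random placeholder standing in for the embeddings that back the index, and model_kwargs is assumed to accept the usual scikit-learn KMeans arguments such as n_clusters.

    import numpy as np

    from bocoel.core.optim import KMeansOptimizer

    # Placeholder embeddings; in practice these would come from the corpus/index.
    embeddings = np.random.rand(1024, 32).astype(np.float32)

    optimizer = KMeansOptimizer(
        index_eval,
        index,
        batch_size=16,
        embeddings=embeddings,
        model_kwargs={"n_clusters": 8},
    )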
- class bocoel.core.optim.KMedoidsOptimizer(index_eval: IndexEvaluator, index: Index, *, batch_size: int, embeddings: ndarray[Any, dtype[_ScalarType_co]], model_kwargs: KMedoidsOptions)[source]#
The KMedoids optimizer that uses clustering algorithms.
- __init__(index_eval: IndexEvaluator, index: Index, *, batch_size: int, embeddings: ndarray[Any, dtype[_ScalarType_co]], model_kwargs: KMedoidsOptions) None [source]#
- Parameters:
index_eval – The evaluator to use for the index.
index – The index to use for the query.
batch_size – The number of embeddings to evaluate at once.
embeddings – The embeddings to cluster.
model_kwargs – The keyword arguments to pass to the KMedoids model.
- class bocoel.core.optim.ScikitLearnOptimizer(index_eval: IndexEvaluator, index: Index, embeddings: ndarray[Any, dtype[_ScalarType_co]], model: ScikitLearnCluster, batch_size: int)[source]#
The sklearn optimizer that uses clustering algorithms. See the following page for available options: https://scikit-learn.org/stable/modules/generated/sklearn.cluster.KMeans.html
- __init__(index_eval: IndexEvaluator, index: Index, embeddings: ndarray[Any, dtype[_ScalarType_co]], model: ScikitLearnCluster, batch_size: int) None [source]#
- Parameters:
index_eval – The evaluator to use for the query.
index – The index to use for the query.
embeddings – The embeddings to cluster.
model – The model to use for the optimization.
batch_size – The number of embeddings to evaluate at once.
- property task: Task#
The task to use for the optimization.
- Returns:
One of Task.EXPLORE or Task.MINIMIZE or Task.MAXIMIZE.
- step() Mapping[int, float] [source]#
Perform a single step of optimization. This is a shortcut into the optimization process. For methods that evaluate the entire search space at once, each step outputs a slice of that search.
- Returns:
A mapping of storage indices to the corresponding scores.
- Raises:
StopIteration – If the optimization is complete.
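A construction sketch passing a scikit-learn clusterer directly; the model is assumed to be an unfitted estimator that the optimizer fits on the provided embeddings, which are again a random placeholder here.

    import numpy as np
    from sklearn.cluster import KMeans

    from bocoel.core.optim import ScikitLearnOptimizer

    embeddings = np.random.rand(1024, 32).astype(np.float32)
    model = KMeans(n_clusters=8)  # any clusterer configured per the linked page

    optimizer = ScikitLearnOptimizer(index_eval, index, embeddings, model, batch_size=16)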
- class bocoel.core.optim.UniformOptimizer(index_eval: IndexEvaluator, index: Index, *, grids: Sequence[int], batch_size: int)[source]#
The uniform optimizer that uses grid-based search.
- __init__(index_eval: IndexEvaluator, index: Index, *, grids: Sequence[int], batch_size: int) None [source]#
- Parameters:
index_eval – The evaluator to use for the storage.
index – The index to use for the query.
grids – The number of grid points to use along each dimension.
batch_size – The number of grid points to evaluate at once.
- property task: Task#
The task to use for the optimization.
- Returns:
One of Task.EXPLORE or Task.MINIMIZE or Task.MAXIMIZE.
- step() Mapping[int, float] [source]#
Perform a single step of optimization. This is a shortcut into the optimization process. For methods that evaluate the entire search space at once, each step outputs a slice of that search.
- Returns:
A mapping of storage indices to the corresponding scores.
- Raises:
StopIteration – If the optimization is complete.
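A construction sketch; grids is assumed to give the number of grid points along each dimension of the index, here a two-dimensional index sampled on a 10 x 10 grid and evaluated 20 points at a time.

    from bocoel.core.optim import UniformOptimizer

    # Sketch: grid-based search with 10 points per dimension of a 2-D index.
    optimizer = UniformOptimizer(index_eval, index, grids=[10, 10], batch_size=20)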