Optimizer API#

Optimizers are much like optimizers in PyTorch, but they optimize queries and search rather than model parameters. Each optimizer performs a series of steps that collectively guide the search towards the optimal trajectory.

class bocoel.core.optim.AcquisitionFunc(value, names=<not given>, *values, module=None, qualname=None, type=None, start=1, boundary=None)[source]#
class bocoel.core.optim.AxServiceOptimizer(index_eval: IndexEvaluator, index: Index, *, sobol_steps: int = 0, device: str | device = 'cpu', workers: int = 1, task: Task = Task.EXPLORE, acqf: str | AcquisitionFunc = AcquisitionFunc.AUTO, surrogate: str | SurrogateModel = SurrogateModel.AUTO, surrogate_kwargs: SurrogateOptions | None = None)[source]#

The Ax optimizer that uses the service API. See https://ax.dev/tutorials/gpei_hartmann_service.html

__init__(index_eval: IndexEvaluator, index: Index, *, sobol_steps: int = 0, device: str | device = 'cpu', workers: int = 1, task: Task = Task.EXPLORE, acqf: str | AcquisitionFunc = AcquisitionFunc.AUTO, surrogate: str | SurrogateModel = SurrogateModel.AUTO, surrogate_kwargs: SurrogateOptions | None = None) None[source]#
Parameters:
  • index_eval – The evaluator to use for the query.

  • index – The index to use for querying.

  • sobol_steps – The number of steps to use for the Sobol sequence.

  • device – The device to use for the optimization.

  • workers – The number of workers to use for the optimization.

  • task – The task to use for the optimization.

  • acqf – The acquisition function to use for the optimization.

  • surrogate – The surrogate model to use for the optimization.

  • surrogate_kwargs – The keyword arguments to pass to the surrogate model.

property task: Task#

The task to use for the optimization.

Returns:

One of Task.EXPLORE, Task.MINIMIZE, or Task.MAXIMIZE.

step() Mapping[int, float][source]#

Perform a single step of optimization. This is the entry point into the optimization process. For methods that evaluate the entire search space at once, successive calls to this method yield slices of that evaluation.

Returns:

A mapping of storage indices to the corresponding scores.

Raises:

StopIteration – If the optimization is complete.

class bocoel.core.optim.AxServiceParameter[source]#

The parameter for the AxServiceOptimizer.

name: str#

The name of the parameter.

type: str#

The type of the parameter.

bounds: tuple[float, float]#

The boundaries of the parameter.

value_type: NotRequired[str]#

The value type of the parameter.

log_scale: NotRequired[bool]#

Whether the parameter is on a log scale.
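Put together, a parameter matching these fields might be declared as a plain dict (a sketch; the name and values are illustrative, and whether Ax accepts this exact dict depends on its version):

```python
# A hypothetical parameter declaration following the fields above
# ("name", "type", "bounds" are required; "value_type" and "log_scale"
# are NotRequired and may be omitted).
param = {
    "name": "x0",
    "type": "range",
    "bounds": (0.0, 1.0),
    "value_type": "float",
    "log_scale": False,
}
```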

class bocoel.core.optim.BruteForceOptimizer(index_eval: IndexEvaluator, index: Index, *, total: int, batch_size: int)[source]#
__init__(index_eval: IndexEvaluator, index: Index, *, total: int, batch_size: int) None[source]#
Parameters:
  • index_eval – The index evaluator. Evaluates the items at the given storage indices.

  • index – The index that contains information about the domain.

  • total – The total number of indices to evaluate.

  • batch_size – The number of indices to evaluate at once.

property task: Task#

The task to use for the optimization.

Returns:

One of Task.EXPLORE, Task.MINIMIZE, or Task.MAXIMIZE.

step() Mapping[int, float][source]#

Perform a single step of optimization. This is the entry point into the optimization process. For methods that evaluate the entire search space at once, successive calls to this method yield slices of that evaluation.

Returns:

A mapping of storage indices to the corresponding scores.

Raises:

StopIteration – If the optimization is complete.

class bocoel.core.optim.CorpusEvaluator(corpus: Corpus, adaptor: Adaptor)[source]#

Evaluates the corpus with the given adaptor.

__init__(corpus: Corpus, adaptor: Adaptor) None[source]#
class bocoel.core.optim.CachedIndexEvaluator(index_eval: IndexEvaluator, /)[source]#

Since there may be many duplicate indices during evaluation, this utility evaluator caches results and only computes scores for unseen indices. This speeds up evaluation considerably, especially for larger models.

__init__(index_eval: IndexEvaluator, /) None[source]#
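The caching behaviour can be sketched with a plain dict-backed wrapper (an illustrative stand-in, not bocoel's actual implementation):

```python
from typing import Callable, Sequence


class MemoizedEvaluator:
    """Caches scores so repeated indices are evaluated only once."""

    def __init__(self, index_eval: Callable[[Sequence[int]], Sequence[float]]) -> None:
        self._index_eval = index_eval
        self._cache: dict[int, float] = {}

    def __call__(self, indices: Sequence[int]) -> list[float]:
        # Deduplicate and keep only indices we have not scored before.
        unseen = list(dict.fromkeys(i for i in indices if i not in self._cache))
        if unseen:
            for i, score in zip(unseen, self._index_eval(unseen)):
                self._cache[i] = score
        # Answer every requested index from the cache.
        return [self._cache[i] for i in indices]
```

Wrapping an expensive evaluator this way means a batch like `[1, 2, 1]` triggers only two underlying evaluations, and later batches reuse earlier results.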
class bocoel.core.optim.IndexEvaluator(*args, **kwargs)[source]#

A protocol for evaluating with the indices.

__init__(*args, **kwargs)#
class bocoel.core.optim.Optimizer(index_eval: IndexEvaluator, index: Index, **kwargs: Any)[source]#

The protocol for optimizers. Optimizers are used for optimizing over the search space, finding the best exploration sequence for a given task.

__init__(index_eval: IndexEvaluator, index: Index, **kwargs: Any) None[source]#
Parameters:
  • index_eval – The index evaluator. Evaluates the items at the given storage indices.

  • index – The index that contains information about the domain.

  • **kwargs – The keyword arguments.

abstract property task: Task#

The task to use for the optimization.

Returns:

One of Task.EXPLORE, Task.MINIMIZE, or Task.MAXIMIZE.

abstract step() Mapping[int, float][source]#

Perform a single step of optimization. This is the entry point into the optimization process. For methods that evaluate the entire search space at once, successive calls to this method yield slices of that evaluation.

Returns:

A mapping of storage indices to the corresponding scores.

Raises:

StopIteration – If the optimization is complete.
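The step() / StopIteration contract above suggests a driver loop like the following (a sketch with a toy optimizer standing in for a real implementation; none of these names are part of bocoel):

```python
from collections.abc import Mapping


class ToyOptimizer:
    """Scores a fixed set of indices, one batch per step() call."""

    def __init__(self, scores: dict[int, float], batch_size: int) -> None:
        self._scores = scores
        self._pending = list(scores)
        self._batch_size = batch_size

    def step(self) -> Mapping[int, float]:
        # Signal completion the same way the protocol documents it.
        if not self._pending:
            raise StopIteration
        batch = self._pending[: self._batch_size]
        self._pending = self._pending[self._batch_size :]
        return {i: self._scores[i] for i in batch}


def run(optimizer) -> dict[int, float]:
    """Drive any step()-based optimizer until it signals completion."""
    results: dict[int, float] = {}
    while True:
        try:
            results.update(optimizer.step())
        except StopIteration:
            break
    return results
```

The driver accumulates each step's mapping of storage indices to scores until the optimizer raises StopIteration.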

class bocoel.core.optim.QueryEvaluator(*args, **kwargs)[source]#

A protocol for evaluating the query results.

__init__(*args, **kwargs)#
class bocoel.core.optim.SearchEvaluator(*args, **kwargs)[source]#

A protocol for evaluating the search results.

__init__(*args, **kwargs)#
class bocoel.core.optim.RemainingSteps(count: int | float)[source]#

A simple counter that counts down the number of steps remaining.

__init__(count: int | float) None[source]#
Parameters:

count – The number of steps remaining.

property count: int | float#

The number of steps remaining.

step(size: int = 1) None[source]#

Perform a single step.

Parameters:

size – The number of steps to perform.

property done: bool#

Whether the countdown has finished.

Returns:

True if no steps remain, False otherwise.

classmethod infinite() RemainingSteps[source]#

Create a counter that never ends.

Returns:

A counter that never ends.
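The interface above can be mirrored by a minimal countdown (an illustrative stand-in, with math.inf backing the infinite() case):

```python
import math


class Countdown:
    """Counts down a number of steps; math.inf makes it never finish."""

    def __init__(self, count: float) -> None:
        self._count = count

    @property
    def count(self) -> float:
        return self._count

    @property
    def done(self) -> bool:
        return self._count <= 0

    def step(self, size: int = 1) -> None:
        self._count -= size

    @classmethod
    def infinite(cls) -> "Countdown":
        # Infinity minus any finite number of steps is still infinity,
        # so done never becomes True.
        return cls(math.inf)
```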

class bocoel.core.optim.RandomOptimizer(index_eval: IndexEvaluator, index: Index, *, samples: int, batch_size: int)[source]#

The random optimizer that uses random search.

__init__(index_eval: IndexEvaluator, index: Index, *, samples: int, batch_size: int) None[source]#
Parameters:
  • index_eval – The evaluator to use for the storage.

  • index – The index to use for the query.

  • samples – The number of samples to use for the optimization.

  • batch_size – The number of samples to evaluate at once.

property task: Task#

The task to use for the optimization.

Returns:

One of Task.EXPLORE, Task.MINIMIZE, or Task.MAXIMIZE.

step() Mapping[int, float][source]#

Perform a single step of optimization. This is the entry point into the optimization process. For methods that evaluate the entire search space at once, successive calls to this method yield slices of that evaluation.

Returns:

A mapping of storage indices to the corresponding scores.

Raises:

StopIteration – If the optimization is complete.
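Random search over storage indices can be sketched as follows (an illustrative stand-in for RandomOptimizer, not its actual implementation):

```python
import random
from typing import Callable, Mapping, Sequence


def random_search(
    index_eval: Callable[[Sequence[int]], Sequence[float]],
    num_indices: int,
    samples: int,
    batch_size: int,
    seed: int = 0,
) -> Mapping[int, float]:
    """Score `samples` randomly chosen indices, `batch_size` at a time."""
    rng = random.Random(seed)
    # Sample distinct storage indices up front.
    chosen = rng.sample(range(num_indices), k=samples)
    results: dict[int, float] = {}
    for start in range(0, samples, batch_size):
        batch = chosen[start : start + batch_size]
        for i, score in zip(batch, index_eval(batch)):
            results[i] = score
    return results
```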

class bocoel.core.optim.KMeansOptimizer(index_eval: IndexEvaluator, index: Index, *, batch_size: int, embeddings: ndarray[Any, dtype[_ScalarType_co]], model_kwargs: KMeansOptions)[source]#

The KMeans optimizer that uses clustering algorithms.

__init__(index_eval: IndexEvaluator, index: Index, *, batch_size: int, embeddings: ndarray[Any, dtype[_ScalarType_co]], model_kwargs: KMeansOptions) None[source]#
Parameters:
  • index_eval – The evaluator to use on the storage.

  • index – The index to use for the query.

  • batch_size – The number of embeddings to evaluate at once.

  • embeddings – The embeddings to cluster.

  • model_kwargs – The keyword arguments to pass to the KMeans model.
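As an illustration of the clustering idea (not bocoel's actual implementation, which delegates to scikit-learn), a tiny 1-D k-means can pick one representative index per cluster to evaluate:

```python
def kmeans_representatives(values: list[float], k: int, iters: int = 10) -> list[int]:
    """Cluster 1-D values with a tiny k-means and return, per cluster,
    the index of the value closest to its centroid."""
    # Naive init: spread centroids across the sorted values.
    order = sorted(range(len(values)), key=lambda i: values[i])
    centroids = [values[order[j * len(values) // k]] for j in range(k)]
    clusters: list[list[int]] = []
    for _ in range(iters):
        # Assign each value to its nearest centroid.
        clusters = [[] for _ in range(k)]
        for i, v in enumerate(values):
            nearest = min(range(k), key=lambda c: abs(v - centroids[c]))
            clusters[nearest].append(i)
        # Recompute centroids as cluster means (keep empty clusters fixed).
        centroids = [
            sum(values[i] for i in cluster) / len(cluster) if cluster else centroids[c]
            for c, cluster in enumerate(clusters)
        ]
    # Representative = the member nearest to each centroid.
    return [
        min(cluster, key=lambda i: abs(values[i] - centroids[c]))
        for c, cluster in enumerate(clusters)
        if cluster
    ]
```

Evaluating only these representatives is the point of the clustering optimizers: a handful of cluster-central indices stand in for the full embedding set.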

class bocoel.core.optim.KMeansOptions[source]#
class bocoel.core.optim.KMedoidsOptimizer(index_eval: IndexEvaluator, index: Index, *, batch_size: int, embeddings: ndarray[Any, dtype[_ScalarType_co]], model_kwargs: KMedoidsOptions)[source]#

The KMedoids optimizer that uses clustering algorithms.

__init__(index_eval: IndexEvaluator, index: Index, *, batch_size: int, embeddings: ndarray[Any, dtype[_ScalarType_co]], model_kwargs: KMedoidsOptions) None[source]#
Parameters:
  • index_eval – The evaluator to use for the index.

  • index – The index to use for the query.

  • batch_size – The number of embeddings to evaluate at once.

  • embeddings – The embeddings to cluster.

  • model_kwargs – The keyword arguments to pass to the KMedoids model.

class bocoel.core.optim.KMedoidsOptions[source]#
class bocoel.core.optim.ScikitLearnOptimizer(index_eval: IndexEvaluator, index: Index, embeddings: ndarray[Any, dtype[_ScalarType_co]], model: ScikitLearnCluster, batch_size: int)[source]#

The sklearn optimizer that uses clustering algorithms. See the following page for available options: https://scikit-learn.org/stable/modules/generated/sklearn.cluster.KMeans.html

__init__(index_eval: IndexEvaluator, index: Index, embeddings: ndarray[Any, dtype[_ScalarType_co]], model: ScikitLearnCluster, batch_size: int) None[source]#
Parameters:
  • index_eval – The evaluator to use for the query.

  • index – The index to use for the query.

  • embeddings – The embeddings to cluster.

  • model – The model to use for the optimization.

  • batch_size – The number of embeddings to evaluate at once.

property task: Task#

The task to use for the optimization.

Returns:

One of Task.EXPLORE, Task.MINIMIZE, or Task.MAXIMIZE.

step() Mapping[int, float][source]#

Perform a single step of optimization. This is the entry point into the optimization process. For methods that evaluate the entire search space at once, successive calls to this method yield slices of that evaluation.

Returns:

A mapping of storage indices to the corresponding scores.

Raises:

StopIteration – If the optimization is complete.

class bocoel.core.optim.UniformOptimizer(index_eval: IndexEvaluator, index: Index, *, grids: Sequence[int], batch_size: int)[source]#

The uniform optimizer that uses grid-based search.

__init__(index_eval: IndexEvaluator, index: Index, *, grids: Sequence[int], batch_size: int) None[source]#
Parameters:
  • index_eval – The evaluator to use for the storage.

  • index – The index to use for the query.

  • grids – The number of grid points along each dimension of the search space.

  • batch_size – The number of grid points to evaluate at once.

property task: Task#

The task to use for the optimization.

Returns:

One of Task.EXPLORE, Task.MINIMIZE, or Task.MAXIMIZE.

step() Mapping[int, float][source]#

Perform a single step of optimization. This is the entry point into the optimization process. For methods that evaluate the entire search space at once, successive calls to this method yield slices of that evaluation.

Returns:

A mapping of storage indices to the corresponding scores.

Raises:

StopIteration – If the optimization is complete.
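As a sketch of the grid idea, assuming grids gives the number of points along each dimension of a unit hypercube (an assumption, not confirmed by this page), evenly spaced query points could be generated as:

```python
from itertools import product
from typing import Sequence


def uniform_grid(grids: Sequence[int]) -> list[tuple[float, ...]]:
    """Generate evenly spaced query points in the unit hypercube,
    with grids[d] points along dimension d (placed at cell midpoints)."""
    axes = [[(i + 0.5) / n for i in range(n)] for n in grids]
    # Cartesian product combines the per-dimension axes into grid points.
    return [tuple(p) for p in product(*axes)]
```

With grids=[2, 3] this yields 2 × 3 = 6 query points, which a grid optimizer could then evaluate in batches.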