baxus package

Submodules

baxus.baxus module

class baxus.baxus.BAxUS(f: baxus.benchmarks.benchmark_function.Benchmark, target_dim: int, n_init: int, max_evals: int, behavior: baxus.util.behaviors.baxus_configuration.BaxusBehavior = BaxusBehavior(initial_base_length=0.8, max_base_length=1.6, min_base_length=0.0078125, success_tolerance=3, acquisition_function=<AcquisitionFunctionType.THOMPSON_SAMPLING: 2>, noise=0.0, embedding_type=<EmbeddingType.BAXUS: 0>, success_decision_factor=0.001, n_new_bins=3, budget_until_input_dim=0, adjust_initial_target_dim=True), gp_behaviour: baxus.util.behaviors.gp_configuration.GPBehaviour = GPBehaviour(mll_estimation=<MLLEstimation.LHS_PICK_BEST_START_GD: 2>, n_initial_samples=50, n_best_on_lhs_selection=5, n_mle_training_steps=50), verbose=True, use_ard=True, max_cholesky_size=2000, dtype='float64', run_dir='.', conf_name: Optional[str] = None)

Bases: EmbeddedTuRBO

BAxUS main class.

Parameters
  • f – the function to optimize

  • target_dim – the latent dimensionality

  • n_init – number of initial samples

  • max_evals – max number of function evaluations

  • behavior – behavior configuration

  • gp_behaviour – the behavior of the associated Gaussian Process

  • verbose – whether to enable verbose logging

  • use_ard – whether the GP should use an ARD kernel (note: this should arguably be part of gp_behaviour)

  • max_cholesky_size – If the size of a LazyTensor is less than max_cholesky_size, then root_decomposition and inv_matmul of LazyTensor will use Cholesky rather than Lanczos/CG.

  • dtype – the data type (float32 or float64)

  • run_dir – the directory to which to write the run results

  • conf_name – the name of the configuration of the optimization run
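
Example

A minimal usage sketch. MyBenchmark and its import path are hypothetical stand-ins for a concrete subclass of baxus.benchmarks.benchmark_function.Benchmark, and all numbers are illustrative.

    from baxus.baxus import BAxUS
    from my_project.benchmarks import MyBenchmark  # hypothetical Benchmark subclass

    # Optimize a hypothetical 100-dimensional benchmark, starting from a
    # 10-dimensional target space, with a budget of 300 evaluations.
    opt = BAxUS(
        f=MyBenchmark(dim=100),
        target_dim=10,
        n_init=10,
        max_evals=300,
        run_dir="results",
    )
    opt.optimize()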

property evaluations_since_last_split: int

The number of function evaluations since the last split.

Returns: The number of function evaluations since the last split, or the total number of evaluations if there has been no split yet.

property failtol: float

The fail tolerance for the BAxUS algorithm. It is computed dynamically for the current split, since the fail tolerance depends on the current target dimensionality.

Returns: the fail tolerance for the BAxUS algorithm

property length_init: float

The initial base length of the trust region.

Returns: The initial base length of the trust region.

property length_max: float

The maximum base length of the trust region.

Returns: The maximum base length of the trust region.

property length_min: float

The minimum base length of the trust region.

Returns: The minimum base length of the trust region.

optimize() → None

Run the optimization.

Returns: None

property splits: int

The number of splits in the current trust region.

Returns: The number of splits in the current trust region.

property target_dim: int

The target dimensionality.

Returns: the target dimensionality

property target_dim_increases: int

Returns the number of times the target dimensionality was increased. Note that this is in general not equal to the current target dimensionality minus the initial target dimensionality, since a single increase can add more than one dimension (see n_new_bins in BaxusBehavior).

Returns: The number of times the target dimensionality was increased.

baxus.benchmark_runner module

baxus.benchmark_runner.main(argstring: List[str]) → None

Parse the argstring and run the algorithms it defines.

Note

This function should not be called directly but is called by benchmark_runner.py in the project root.

Parameters

argstring – the argument string

Returns: None
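
Example

A hedged sketch of what the benchmark_runner.py entry point in the project root presumably looks like; the actual script may differ, but per the note above it forwards the command line to main.

    import sys

    from baxus.benchmark_runner import main

    if __name__ == "__main__":
        main(sys.argv[1:])  # parse the argstring and run the requested algorithms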

baxus.embeddedturbo module

class baxus.embeddedturbo.EmbeddedTuRBO(f: baxus.benchmarks.benchmark_function.Benchmark, target_dim: int, n_init: int, max_evals: int, behavior: baxus.util.behaviors.embedded_turbo_configuration.EmbeddedTuRBOBehavior = EmbeddedTuRBOBehavior(initial_base_length=0.8, max_base_length=1.6, min_base_length=0.0078125, success_tolerance=3, acquisition_function=<AcquisitionFunctionType.THOMPSON_SAMPLING: 2>, noise=0.0, embedding_type=<EmbeddingType.BAXUS: 0>, success_decision_factor=0.001), gp_behaviour: baxus.util.behaviors.gp_configuration.GPBehaviour = GPBehaviour(mll_estimation=<MLLEstimation.LHS_PICK_BEST_START_GD: 2>, n_initial_samples=50, n_best_on_lhs_selection=5, n_mle_training_steps=50), verbose=True, use_ard=True, max_cholesky_size=2000, dtype='float64', run_dir: str = '.', conf_name: Optional[str] = None)

Bases: OptimizationMethod

Embedded TuRBO is the base class for BAxUS. It is the implementation used for our ablation studies and runs TuRBO in an embedded space.

Parameters
  • f – the benchmark function

  • target_dim – the target dimensionality

  • n_init – the number of initial samples

  • max_evals – the maximum number of evaluations

  • behavior – the behavior configuration of the algorithm

  • gp_behaviour – the behavior of the GP

  • verbose – whether to print verbose log messages

  • use_ard – whether to use an ARD kernel

  • max_cholesky_size – If the size of a LazyTensor is less than max_cholesky_size, then root_decomposition and inv_matmul of LazyTensor will use Cholesky rather than Lanczos/CG.

  • dtype – the data type to use

  • run_dir – the directory to write run information to

  • conf_name – the name of the current configuration
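
Example

Constructed analogously to BAxUS above; a sketch using the same hypothetical MyBenchmark stand-in for a concrete Benchmark subclass.

    from baxus.embeddedturbo import EmbeddedTuRBO
    from my_project.benchmarks import MyBenchmark  # hypothetical Benchmark subclass

    opt = EmbeddedTuRBO(
        f=MyBenchmark(dim=100),
        target_dim=10,  # dimensionality of the embedded space TuRBO runs in
        n_init=10,
        max_evals=300,
    )
    opt.optimize()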

property conf_dict: Dict[str, Any]

The current behavior configuration as a dictionary

Returns: the current behavior configuration as a dictionary

property failtol: float

The fail tolerance of the current trust region.

Returns: the fail tolerance (=max(4, current target dimensionality))

property input_dim: int

The input dimensionality

Returns: the input dimensionality

property length_init: float

The initial base length of the trust region.

Returns: The initial base length of the trust region.

property length_max: float

The maximum base length of the trust region.

Returns: The maximum base length of the trust region.

property length_min: float

The minimum base length of the trust region.

Returns: The minimum base length of the trust region.

property n_cand: int

The number of candidates for the discrete Thompson sampling

Returns: the number of candidates for the discrete Thompson sampling

optimization_results_raw() → Tuple[Optional[ndarray], ndarray]

The observations in the input space and their function values.

Returns: The observations in the input space and their function values.
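
For example (a sketch; opt is assumed to be an EmbeddedTuRBO or BAxUS instance after optimize(), and minimization is assumed):

    xs, ys = opt.optimization_results_raw()
    if xs is not None:  # xs may be None (note the Optional return type)
        best = ys.argmin()
        print(xs[best], ys[best])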

optimize() → None

Run the optimization until the maximum number of evaluations is reached or the optimum is found.

Returns: None

reset() → None

Reset the state of the current instance (re-initialize the projector, reset global observations, reset local observations, reset failure and success counts). Does not reset the target dimensionality.

Returns: None

property target_dim: int

The target dimensionality.

Returns: the target dimensionality

baxus.gp module

class baxus.gp.GP(train_x, train_y, likelihood, ard_dims, lengthscale_constraint=None, outputscale_constraint=None)

Bases: SingleTaskGP

Extension of a single-task GP for our purposes.

Parameters
  • train_x – the x-values of the training points

  • train_y – the function values of the training points

  • likelihood – the likelihood to use

  • ard_dims – the number of ARD dimensions

  • lengthscale_constraint – the constraints for the lengthscales

  • outputscale_constraint – the constraints for the signal variances

forward(x: Tensor) → MultivariateNormal

Call the GP

Parameters

x – the input points

Returns: MultivariateNormal distribution

property lengthscales: ndarray

Return the lengthscales of the base kernel, depending on the kernel type.

baxus.gp.train_gp(train_x: torch.Tensor, train_y: torch.Tensor, use_ard: bool, gp_behaviour: baxus.util.behaviors.gp_configuration.GPBehaviour = GPBehaviour(mll_estimation=<MLLEstimation.LHS_PICK_BEST_START_GD: 2>, n_initial_samples=50, n_best_on_lhs_selection=5, n_mle_training_steps=50), hypers=None) → Tuple[GP, Dict[str, Any]]

Fit a GP where train_x is in [-1, 1]^D

Parameters
  • train_x – the training inputs

  • train_y – the training targets (function values)

  • use_ard – whether to use an automatic relevance determination (ARD) kernel

  • gp_behaviour – the configuration of the GP

  • hypers – hyperparameters for the GP, if passed, the GP won’t be re-trained

Returns: the fitted GP and a dictionary of its hyperparameters
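
Example

A small sketch of fitting the GP on synthetic data in [-1, 1]^D and re-using the learned hyperparameters. Data shapes are assumptions, and the posterior query at the end follows standard GPyTorch usage rather than anything documented in this module.

    import torch

    from baxus.gp import train_gp

    D = 5
    train_x = torch.rand(20, D, dtype=torch.float64) * 2 - 1  # in [-1, 1]^D
    train_y = torch.sin(train_x).sum(dim=-1)                  # synthetic targets

    gp, hypers = train_gp(train_x, train_y, use_ard=True)

    # Passing the returned hyperparameters back in skips re-training
    gp2, _ = train_gp(train_x, train_y, use_ard=True, hypers=hypers)

    # Posterior at a new point (standard GPyTorch usage)
    gp.eval()
    with torch.no_grad():
        dist = gp(torch.zeros(1, D, dtype=torch.float64))
        mean, var = dist.mean, dist.variance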

Module contents