baxus.util.behaviors package¶
Submodules¶
baxus.util.behaviors.baxus_configuration module¶
- class baxus.util.behaviors.baxus_configuration.BaxusBehavior(initial_base_length: float = 0.8, max_base_length: float = 1.6, min_base_length: float = 0.0078125, success_tolerance: int = 3, acquisition_function: AcquisitionFunctionType = AcquisitionFunctionType.THOMPSON_SAMPLING, noise: float = 0.0, embedding_type: EmbeddingType = EmbeddingType.BAXUS, success_decision_factor: float = 0.001, n_new_bins: int = 3, budget_until_input_dim: int = 0, adjust_initial_target_dim: bool = True)¶
Bases:
EmbeddedTuRBOBehavior
The behavior of the BAxUS algorithm.
- adjust_initial_target_dim: bool = True¶
Whether to adjust the initial target dimension such that the final split is as close as possible to the ambient dimension.
- budget_until_input_dim: int = 0¶
The budget after which the input dimension is reached, under the assumption that every split fails. If zero, use the entire evaluation budget.
- property conf_dict: Dict[str, Any]¶
The configuration as a dictionary.
Returns: The configuration as a dictionary.
- n_new_bins: int = 3¶
Number of new bins after a splitting (default: 3).
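The documented defaults above can be mirrored in a plain dataclass. The following is an illustrative stand-in (stdlib only, not the actual `BaxusBehavior` class) that also shows how a `conf_dict`-style dictionary view can be derived:

```python
from dataclasses import dataclass, asdict

# Illustrative stand-in mirroring the documented defaults of
# baxus.util.behaviors.baxus_configuration.BaxusBehavior; not the real class.
@dataclass
class BaxusBehaviorSketch:
    initial_base_length: float = 0.8
    max_base_length: float = 1.6
    min_base_length: float = 0.0078125
    success_tolerance: int = 3
    noise: float = 0.0
    success_decision_factor: float = 0.001
    n_new_bins: int = 3
    budget_until_input_dim: int = 0
    adjust_initial_target_dim: bool = True

    @property
    def conf_dict(self):
        # A conf_dict-like view: the configuration as a plain dictionary.
        return asdict(self)

behavior = BaxusBehaviorSketch(n_new_bins=4)
print(behavior.conf_dict["n_new_bins"])  # -> 4
```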
baxus.util.behaviors.embedded_turbo_configuration module¶
- class baxus.util.behaviors.embedded_turbo_configuration.EmbeddedTuRBOBehavior(initial_base_length: float = 0.8, max_base_length: float = 1.6, min_base_length: float = 0.0078125, success_tolerance: int = 3, acquisition_function: AcquisitionFunctionType = AcquisitionFunctionType.THOMPSON_SAMPLING, noise: float = 0.0, embedding_type: EmbeddingType = EmbeddingType.BAXUS, success_decision_factor: float = 0.001)¶
Bases:
object
The behavior of the embedded TuRBO algorithm.
- acquisition_function: AcquisitionFunctionType = 2¶
The acquisition function to use in a multi-batch setting (default: only Thompson sampling).
- property conf_dict: Dict[str, Any]¶
The configuration as a dictionary.
Returns: The configuration as a dictionary.
- embedding_type: EmbeddingType = 0¶
Uniform bin sizing means that all target bins have approx. equally many contributing input dimensions. Random bin sizing means that a random target dimension is chosen for each input dimension (standard HeSBO behavior).
- initial_base_length: float = 0.8¶
The initial base side length (see TuRBO paper)
- max_base_length: float = 1.6¶
The maximum base side length (see TuRBO paper)
- min_base_length: float = 0.0078125¶
The minimum base side length (see TuRBO paper). If the base length falls below this value, the trust region dies out.
- noise: float = 0.0¶
The noise of the problem.
- pretty_print() → str¶
A human-readable string of the configuration.
Returns: A human-readable string of the configuration.
- success_decision_factor: float = 0.001¶
The improvement with respect to the current incumbent solution required for a new point to be considered a success.
- success_tolerance: int = 3¶
The number of consecutive improvements required before the trust region is expanded (initial value).
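The base-length attributes above interact through TuRBO's trust-region schedule: expand after enough consecutive successes, shrink after enough consecutive failures. This is a simplified sketch of that rule, assumed from the TuRBO paper rather than taken from the baxus source:

```python
def update_base_length(length, successes, failures,
                       success_tolerance=3, failure_tolerance=3,
                       max_base_length=1.6, min_base_length=0.0078125):
    """Simplified TuRBO-style trust-region update (illustrative assumption).

    Doubles the base side length after `success_tolerance` consecutive
    successes (capped at `max_base_length`) and halves it after
    `failure_tolerance` consecutive failures. Returns the new length and
    reset counters; a length below `min_base_length` means the trust
    region dies out and the run restarts.
    """
    if successes >= success_tolerance:
        return min(2.0 * length, max_base_length), 0, 0
    if failures >= failure_tolerance:
        return length / 2.0, 0, 0
    return length, successes, failures

# After three consecutive successes the base length doubles, capped at 1.6.
length, s, f = update_base_length(0.8, successes=3, failures=0)
print(length)  # -> 1.6
```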
baxus.util.behaviors.embedding_configuration module¶
- class baxus.util.behaviors.embedding_configuration.EmbeddingType(value)¶
Bases:
Enum
An enumeration.
- BAXUS = 0¶
BAxUS embedding where each target bin has approx. the same number of contributing input dimensions.
- HESBO = 1¶
HeSBO embedding where a target dimension is sampled for each input dimension.
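The two embedding types differ only in how input dimensions are assigned to target bins. The following is an illustrative stdlib-only sketch of the two assignment rules, not the baxus implementation (round-robin dealing stands in for BAxUS's uniform bin sizing):

```python
import random
from collections import Counter

def assign_bins(input_dim, target_dim, embedding_type="BAXUS", seed=0):
    """Assign each input dimension to a target bin (illustrative sketch).

    BAXUS: shuffle the input dimensions and deal them round-robin, so every
    target bin receives approximately the same number of contributing inputs.
    HESBO: sample a target bin uniformly at random for each input dimension.
    """
    rng = random.Random(seed)
    if embedding_type == "BAXUS":
        dims = list(range(input_dim))
        rng.shuffle(dims)
        return {d: i % target_dim for i, d in enumerate(dims)}
    return {d: rng.randrange(target_dim) for d in range(input_dim)}

bins = assign_bins(input_dim=10, target_dim=3, embedding_type="BAXUS")
size_list = sorted(Counter(bins.values()).values())
print(size_list)  # -> [3, 3, 4]: bin sizes differ by at most one
```

With `HESBO`, by contrast, bin sizes are random and can be badly unbalanced, which is exactly the difference the two enum members encode.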
baxus.util.behaviors.gp_configuration module¶
- class baxus.util.behaviors.gp_configuration.GPBehaviour(mll_estimation: MLLEstimation = MLLEstimation.LHS_PICK_BEST_START_GD, n_initial_samples: int = 50, n_best_on_lhs_selection: int = 5, n_mle_training_steps: int = 50)¶
Bases:
object
- mll_estimation: MLLEstimation = 2¶
The maximum-likelihood-estimation method.
- n_best_on_lhs_selection: int = 5¶
The number of best samples on which to start the gradient-based optimizer.
- n_initial_samples: int = 50¶
The number of initial samples.
- n_mle_training_steps: int = 50¶
The number of gradient updates.
- class baxus.util.behaviors.gp_configuration.MLLEstimation(value)¶
Bases:
Enum
An enumeration.
- LHS_PICK_BEST_START_GD = 2¶
Sample a number of points and start gradient-based optimization on the best initial points.
- MULTI_START_GRADIENT_DESCENT = 1¶
Sample a number of points and start gradient-based optimization on every point.
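The LHS_PICK_BEST_START_GD strategy, combined with the GPBehaviour parameters above, can be sketched on a toy 1-D objective: draw n_initial_samples candidate starts, keep the n_best_on_lhs_selection best, and run n_mle_training_steps gradient steps from each. This is an illustrative stdlib-only sketch (plain uniform sampling stands in for Latin hypercube sampling, and a quadratic stands in for the negative marginal log-likelihood):

```python
import random

def negative_mll(x):
    # Toy stand-in for the negative marginal log-likelihood: minimized at x = 2.
    return (x - 2.0) ** 2

def grad(x):
    # Gradient of the toy objective.
    return 2.0 * (x - 2.0)

def lhs_pick_best_start_gd(n_initial_samples=50, n_best_on_lhs_selection=5,
                           n_mle_training_steps=50, lr=0.1, seed=0):
    """Sample starting points, keep the best few, run gradient descent from each."""
    rng = random.Random(seed)
    samples = [rng.uniform(-10.0, 10.0) for _ in range(n_initial_samples)]
    # Keep the n_best_on_lhs_selection points with the lowest objective value.
    starts = sorted(samples, key=negative_mll)[:n_best_on_lhs_selection]
    results = []
    for x in starts:
        for _ in range(n_mle_training_steps):
            x -= lr * grad(x)
        results.append(x)
    # Return the best point found across all restarts.
    return min(results, key=negative_mll)

best = lhs_pick_best_start_gd()
print(round(best, 3))  # -> 2.0 (the minimizer of the toy objective)
```

MULTI_START_GRADIENT_DESCENT would instead run gradient descent from every sampled point, trading extra training steps for robustness to bad starts.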