Optimizers

Submodules

optimizer base module

Base class for Optimizers.

class tequila_code.optimizers.optimizer_base.Optimizer(backend: str = None, maxiter: int = None, samples: int = None, device: str = None, noise=None, save_history: bool = True, silent: bool | int = False, print_level: int = 99, *args, **kwargs)[source]

Bases: object

The base optimizer class, from which other optimizers inherit.

backend

The quantum backend to use (None means autopick)

maxiter

Maximum number of iterations to perform.

silent

whether or not to print output during calls or on initialization.

samples

number of samples with which to call objectives during optimization.

print_level

Allows customization of printout in derived classes; set to 0 if silent is True.

save_history

whether or not to save history.

history

a history object, saving information during optimization.

noise

what noise (e.g., a NoiseModel) to apply to simulations during optimization.

device

the device that sampling (real or emulated) should be performed on.

reset_history:

reset the optimizer history.

initialize_variables:

convenience: format the variables of an objective and segregate active variables from passive ones.

compile_objective:

convenience: compile an objective.

compile_gradient:

convenience: build and compile (i.e., render callable) the gradient of an objective.

compile_hessian:

convenience: build and compile (i.e., render callable) the hessian of an objective.

compile_gradient(objective: Objective, variables: List[Variable], gradient=None, *args, **kwargs) Tuple[Dict, Dict][source]

convenience function to compile gradient objects and relevant types. For use by inheritors.

Parameters:
  • objective (Objective:) – the objective whose gradient is to be calculated.

  • variables (list:) – the variables to take gradients with respect to.

  • gradient (optional:) – special argument to change what structure is used to calculate the gradient, like numerical, or QNG. Default: use regular, analytic gradients.

  • args

  • kwargs

Returns:

both the uncompiled and compiled gradients of the objective, w.r.t. the variables.

Return type:

tuple
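Example (a minimal sketch of the public-API workflow this convenience method wraps; it assumes the package is importable as tequila, commonly import tequila as tq, and uses an illustrative one-parameter circuit):

import tequila as tq

# toy objective: <Z> after a single Ry rotation parametrized by 'a'
U = tq.gates.Ry(angle="a", target=0)
H = tq.paulis.Z(0)
E = tq.ExpectationValue(U=U, H=H)

dE_da = tq.grad(objective=E, variable="a")   # uncompiled gradient objective w.r.t. 'a'
compiled = tq.compile(dE_da)                 # compiled, i.e. callable, gradient
print(compiled(variables={"a": 0.5}))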

compile_hessian(variables: List[Variable], grad_obj: Dict[Variable, Objective], comp_grad_obj: Dict[Variable, Objective], hessian: dict = None, *args, **kwargs) tuple[source]

convenience function to compile hessians for optimizers which require it.

Parameters:
  • variables (list:) – the variables of the hessian.

  • grad_obj (dict:) – the gradient object, to be differentiated once more.

  • comp_grad_obj (dict:) – the compiled gradient object, used for further compilation of the hessian.

  • hessian (optional:) – extra information to modulate compilation of the hessian.

  • args

  • kwargs

Returns:

uncompiled and compiled hessian objects, in that order

Return type:

tuple

compile_objective(objective: Objective, *args, **kwargs)[source]

convenience function to wrap over compile; for use by inheritors.

Parameters:
  • objective (Objective:) – an objective to compile.

  • args

  • kwargs

Returns:

a compiled Objective. Types vary.

Return type:

Objective

initialize_variables(objective, initial_values, variables)[source]

Convenience function to format the variables of some objective received in calls to optimizers.

Parameters:
  • objective (Objective:) – the objective being optimized.

  • initial_values (dict or string:) – initial values for the variables of objective. If a dict: used as given. If a string: may be 'zero' or 'random'. If a callable: a custom function that provides initial values when the variable keys are passed. If None: random initialization between 0 and 2pi (not recommended).

  • variables (list:) – the variables being optimized over.

Returns:

active_angles: a dict of the variables being optimized; passive_angles: a dict of the variables NOT being optimized; variables: a formatted list of the variables being optimized.

Return type:

tuple
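Example (a minimal sketch of the active/passive split as seen from the user side: only the variables passed to minimize are optimized, the rest are held at their initial values; the circuit, names and the tequila import are illustrative assumptions):

import tequila as tq

U = tq.gates.Ry(angle="a", target=0) + tq.gates.Rz(angle="b", target=0)
H = tq.paulis.X(0)
E = tq.ExpectationValue(U=U, H=H)

# 'a' is active (optimized), 'b' is passive (kept fixed at 0.7)
result = tq.minimize(objective=E, variables=["a"], initial_values={"a": 0.1, "b": 0.7})
print(result.variables)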

reset_history()[source]

replace self.history with a blank history.

Return type:

None

class tequila_code.optimizers.optimizer_base.OptimizerHistory(energies: List[Real] = <factory>, gradients: List[Dict[str, Real]] = <factory>, angles: List[Dict[str, Number]] = <factory>, energy_calls: List[Real] = <factory>, gradient_calls: List[Dict[str, Real]] = <factory>, angles_calls: List[Dict[str, Number]] = <factory>)[source]

Bases: object

A class representing the history of optimizers over time. Has a variety of convenience functions attached to it.

angles: List[Dict[str, Number]]
angles_calls: List[Dict[str, Number]]
energies: List[Real]
property energies_calls
property energies_evaluations
energy_calls: List[Real]
extract_angles(key: str) Dict[Integral, Real][source]

convenience function to get the value of some variable out of the history.

Parameters:

key (str:) – name of the variable whose values are sought

Returns:

a dictionary, representing the value of variable ‘key’ over time.

Return type:

dict

extract_energies(*args, **kwargs) Dict[Integral, Real][source]

convenience function to get the energies back as a dictionary.

extract_gradients(key: str) Dict[Integral, Real][source]

convenience function to get the gradients of some variable out of the history.

Parameters:

key (str:) – the name of the variable whose gradients are sought

Returns:

a dictionary, representing the gradient of variable ‘key’ over time.

Return type:

dict

gradient_calls: List[Dict[str, Real]]
gradients: List[Dict[str, Real]]
property iterations
plot(property: str | List[str] = 'energies', key: str = None, filename=None, baselines: Dict[str, float] = None, *args, **kwargs)[source]

Convenience function to plot the progress of the optimizer over time.

Parameters:
  • property (str or list of str:) – which property (e.g. angles, energies, gradients) to plot. Default: plot energies over time.

  • key (str, optional:) – if property is 'angles' or 'gradients', key allows you to plot just an individual variable's property. Default: plot everything.

  • filename (optional:) – if given, plot to this file; else, plot to terminal. Default: plot to terminal.

  • baselines (dict, optional:) – dictionary of plotting axis baseline information. Default: use whatever matplotlib auto-generates.

  • args – args.

  • kwargs – kwargs.

Return type:

None
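Example (a minimal sketch of inspecting and plotting a history after an optimization run; it assumes matplotlib is installed, the package is importable as tequila, and uses an illustrative one-parameter objective):

import tequila as tq

U = tq.gates.Ry(angle="a", target=0)
H = tq.paulis.Z(0)
E = tq.ExpectationValue(U=U, H=H)

result = tq.minimize(objective=E, method="bfgs", initial_values={"a": 0.5}, save_history=True)
history = result.history
print(history.extract_energies())          # energy per iteration
print(history.extract_angles(key="a"))     # value of 'a' per iteration
history.plot(property="energies")          # plot energy convergence to terminal
history.plot(property="angles", key="a", filename="angle_a")   # plot a single variable to file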

class tequila_code.optimizers.optimizer_base.OptimizerResults(energy: float = None, history: tequila_code.optimizers.optimizer_base.OptimizerHistory = None, variables: dict = None)[source]

Bases: object

property angles
energy: float = None
history: OptimizerHistory = None
variables: dict = None
exception tequila_code.optimizers.optimizer_base.TequilaOptimizerException(msg)[source]

Bases: TequilaException

optimizer gradient module

class tequila_code.optimizers.optimizer_gd.DIIS(ndiis: int = 8, min_vectors: int = 3, tol: float = 0.05, drop: str = 'error')[source]

Bases: object

do_diis() bool[source]

Return whether DIIS should be performed.

drop_error(p: Sequence[ndarray], e: Sequence[ndarray]) Tuple[List[ndarray], List[ndarray]][source]

Return P,E with the largest magnitude error vector removed.

drop_first(p: Sequence[ndarray], e: Sequence[ndarray]) Tuple[List[ndarray], List[ndarray]][source]

Return P,E with the first element removed.

push(param_vector: ndarray, error_vector: ndarray) None[source]

Update DIIS calculator with parameter and error vectors.

reset() None[source]

Reset containers.

update() ndarray | None[source]

Get the updated parameter vector from a DIIS iteration, or None if DIIS cannot be performed yet.
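Example (a minimal sketch of driving the DIIS helper directly; the toy error vector and the fixed-point fallback update are illustrative assumptions, not part of tequila, and the import follows the module path documented here):

import numpy as np
from tequila_code.optimizers.optimizer_gd import DIIS

diis = DIIS(ndiis=8, min_vectors=3, tol=0.05)
params = np.array([0.5, -0.2])

for _ in range(20):
    error = 0.1 * params                 # stand-in for a gradient / error vector
    diis.push(params, error)             # record the latest parameter and error vectors
    new_params = diis.update()           # DIIS extrapolation, or None if not yet doable
    params = new_params if new_params is not None else params - error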

class tequila_code.optimizers.optimizer_gd.GDResults(energy: float = None, history: tequila_code.optimizers.optimizer_base.OptimizerHistory = None, variables: dict = None, moments: dict = None, num_iteration: int = 0)[source]

Bases: OptimizerResults

moments: dict = None
num_iteration: int = 0
class tequila_code.optimizers.optimizer_gd.OptimizerGD(maxiter=100, method='sgd', tol: Real = None, lr: Real | List[Real] = 0.1, alpha: Real = None, gamma: Real = None, beta: Real = 0.9, rho: Real = 0.999, c: Real | List[Real] = 0.2, epsilon: Real = 1e-07, diis: dict | None = None, backend=None, samples=None, device=None, noise=None, silent=True, calibrate_lr: bool = False, **kwargs)[source]

Bases: Optimizer

The gradient descent optimizer for tequila.

OptimizerGD allows for two modalities: it can either function as a 'stepper', simply calculating updated parameter values for a given objective, or it can be called to perform an entire optimization. The former is used to accomplish the latter and gives users more fine-grained control of the optimization. See Optimizer for details on the inherited attributes and methods; there are several.

f

function which performs an optimization step.

gradient_lookup

dictionary mapping object ids as strings to said object’s callable gradient

active_key_lookup

dictionary mapping object ids as strings to said object’s active keys, itself a dict, of variables to optimize.

moments_lookup

dictionary mapping object ids as strings to said object’s current stored moments; a pair of lists of floats, namely running tallies of gradient momenta. said momenta are used to SCALE or REDIRECT gradient descent steps.

moments_trajectory

dictionary mapping object ids as strings to said object’s momenta at ALL steps; that is, a list of all the moments of a given object, in order.

step_lookup

dictionary mapping object ids as strings to an int; how many optimization steps have been performed for a given object. Relevant only to the Adam optimizer.

diis

Dictionary of parameters for the DIIS accelerator.

lr

a float or list of floats. Hyperparameter: The learning rate (unscaled) to be used in each update; in some literature, called a step size.

alpha

a float. Hyperparameter: used to adjust the learning rate each iteration using the formula: lr := original_lr / (iteration ** alpha). Default: None. If alpha is not specified, or lr is given as a list, lr will not be adjusted.

gamma

a float. Hyperparameter: used to adjust the gradient step for the SPSA method each iteration via: c := original_c / (iteration ** gamma). Default: None. If gamma is not specified, or c is given as a list, c will not be adjusted.

beta

a float. Hyperparameter: scales (perhaps nonlinearly) all first moment terms in any relevant method.

rho

a float. Hyperparameter: scales (perhaps nonlinearly) all second moment terms in any relevant method. In some literature, may be referred to as 'beta_2'.

c

a float or list of floats. Hyperparameter: the step size used in the SPSA gradient. If it is a list, the step size will change each iteration until the last item of the list is reached.

epsilon

a float. Hyperparameter: used to prevent division by zero in some methods.

tol

a float. If specified, __call__ aborts when the difference in energies between two steps is smaller than tol.

calibrate_lr

a boolean. Whether to calibrate the lr value for the SPSA method.

iteration

an integer. The index of the iteration currently being run.

prepare:

perform all necessary compilation and registration of a given objective. Must be called before step is used on that objective.

step:

perform a single optimization step on a compiled objective, starting from a given point.

reset_stepper:

wipe all stored information about all prepared objectives.

reset_momenta:

reset all moment information about all prepared objectives, but do not erase compiled gradients.

reset_momenta_for:

reset all moment information about a given objective, but do not erase compiled gradients.

classmethod available_diis()[source]
Returns:

All tested methods that can be diis accelerated

classmethod available_methods()[source]
Returns:

All tested available methods

nextLearningRate()[source]

Return the learning rate to use

Return type:

float representing the learning rate to use

prepare(objective: Objective, initial_values: dict = None, variables: list = None, gradient=None)[source]

perform all initialization for an objective, register it with lookup tables, and return it compiled. MUST be called before step is used.

Parameters:
  • objective (Objective:) – the objective to ready for optimization.

  • initial_values (dict, optional:) – the initial values with which to prepare the optimizer. Default: choose randomly.

  • variables (list, optional:) – which variables to optimize over, and hence prepare gradients for. Default value: optimize over all variables in objective.

  • gradient (optional:) – extra keyword; information used to compile alternate gradients. Default: prepare the standard, analytical gradient.

Returns:

compiled version of objective.

Return type:

Objective

reset_momenta()[source]

reset moment information about all prepared objectives.

Return type:

None

reset_momenta_for(objective: Objective)[source]

reset moment information about a specific objective.

Parameters:

objective (Objective:) – the objective whose information should be reset.

Return type:

None

reset_stepper()[source]

reset all information about all prepared objectives.

Return type:

None

step(objective: Objective, parameters: Dict[Variable, Real]) Dict[Variable, Real][source]

perform a single optimization step and return suggested parameters.

Parameters:
  • objective (Objective:) – the compiled objective, to perform an optimization step for. MUST be one returned by prepare.

  • parameters (dict:) – the parameters to use in performing the optimization step.

Returns:

dict of new suggested parameters.

Return type:

dict
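Example (a minimal sketch of the 'stepper' modality: prepare an objective once, then drive the update loop manually with step; the circuit, hyperparameters and the tequila import for the toy objective are illustrative assumptions, and the optimizer import follows the module path documented here):

import tequila as tq
from tequila_code.optimizers.optimizer_gd import OptimizerGD

a = tq.Variable("a")
U = tq.gates.Ry(angle=a, target=0)
H = tq.paulis.Z(0)
E = tq.ExpectationValue(U=U, H=H)

opt = OptimizerGD(method="adam", lr=0.05, maxiter=100, silent=True)
params = {a: 0.5}
compiled = opt.prepare(objective=E, initial_values=params)    # register and compile; required before step
for _ in range(50):
    params = opt.step(objective=compiled, parameters=params)  # one update; returns suggested parameters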

tequila_code.optimizers.optimizer_gd.minimize(objective: Objective, lr: float | List[float] = 0.1, method='sgd', initial_values: Dict[Hashable, Real] = None, variables: List[Hashable] = None, gradient: str = None, samples: int = None, maxiter: int = 100, diis: int = None, backend: str = None, noise: NoiseModel = None, device: str = None, tol: float = None, silent: bool = False, save_history: bool = True, alpha: float = None, gamma: float = None, beta: float = 0.9, rho: float = 0.999, c: float | List[float] = 0.2, epsilon: float = 1e-07, calibrate_lr: bool = False, *args, **kwargs) GDResults[source]

Initialize and call the GD optimizer.

Parameters:
  • objective (Objective:) – The tequila objective to optimize

  • lr (float or list of floats >0:) – the learning rate. Default 0.1.

  • alpha (float >0:) – scaling factor to adjust the learning rate each iteration. Default None.

  • gamma (float >0:) – scaling factor to adjust the gradient step for the SPSA method each iteration. Default None.

  • beta (float >0:) – scaling factor for first moments. Default 0.9.

  • rho (float >0:) – scaling factor for second moments. Default 0.999.

  • c (float or list of floats:) – step size for the gradient of the SPSA method.

  • epsilon (float >0:) – small float for stability of division. Default 10^-7.

  • method (string : Default = 'sgd':) – which variation on Gradient Descent to use. Options include 'sgd', 'adam', 'nesterov', 'adagrad', 'rmsprop', etc.

  • initial_values (dict, optional:) – Initial values as dictionary of Hashable types (variable keys) and floating point numbers. If given None, they will all be set to zero.

  • variables (List[Hashable], optional:) – List of Variables to optimize

  • gradient (optional:) – the gradient to use. If None, calculated in the usual way. if str=’qng’, then the qng is calculated. If a dictionary of objectives, those objectives are used. If another dictionary, an attempt will be made to interpret that dictionary to get, say, numerical gradients.

  • samples (int, optional:) – samples/shots to take in every run of the quantum circuits (None activates full wavefunction simulation)

  • maxiter (int : Default = 100:) – the maximum number of iterations to run.

  • diis (int, optional:) – Number of iterations before starting DIIS acceleration.

  • backend (str, optional:) – Simulation backend which will be automatically chosen if set to None

  • noise (NoiseModel, optional:) – a NoiseModel to apply to all expectation values in the objective.

  • device (optional:) – the device from which to (potentially, simulatedly) sample all quantum circuits employed in optimization.

  • tol (float : Default = 10^-4) – Convergence tolerance for optimization; if abs(delta f) smaller than tol, stop.

  • silent (bool : Default = False:) – No printout if True

  • save_history (bool: Default = True:) – Save the history throughout the optimization

  • calibrate_lr (bool: Default = False:) – Calibrates the value of the learning rate

Note

optional kwargs may include beta, beta2, and rho, parameters which affect (but do not need to be altered) the various method algorithms.

Returns:

the results of an optimization.

Return type:

GDResults
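Example (a minimal sketch of a full gradient-descent run via this function; the hyperparameter values, circuit and tequila import are illustrative assumptions, and the optimizer import follows the module path documented here):

import tequila as tq
from tequila_code.optimizers.optimizer_gd import minimize as minimize_gd

U = tq.gates.Ry(angle="a", target=0)
H = tq.paulis.Z(0)
E = tq.ExpectationValue(U=U, H=H)

result = minimize_gd(objective=E, method="adam", lr=0.05, maxiter=200,
                     initial_values={"a": 0.5}, silent=True)
print(result.energy, result.variables, result.num_iteration)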

optimizer gpyopt module

optimizer scipy module

class tequila_code.optimizers.optimizer_scipy.OptimizerSciPy(method: str = 'L-BFGS-B', tol: Real = None, method_options=None, method_bounds=None, method_constraints=None, **kwargs)[source]

Bases: Optimizer

Class wrapping over the scipy optimizer for use by Tequila.

method

The scipy optimization method passed as string.

tol

See scipy documentation for the method you picked

method_options

See scipy documentation for the method you picked

method_bounds

See scipy documentation for the method you picked

method_constraints

See scipy documentation for the method you picked

silent

if False, the optimizer prints out all evaluated energies

classmethod available_methods()[source]
Returns:

All tested available methods

gradient_based_methods = ['L-BFGS-B', 'BFGS', 'CG', 'TNC']
gradient_free_methods = ['NELDER-MEAD', 'COBYLA', 'POWELL', 'SLSQP']
hessian_based_methods = ['TRUST-KRYLOV', 'NEWTON-CG', 'DOGLEG', 'TRUST-NCG', 'TRUST-EXACT', 'TRUST-CONSTR']
class tequila_code.optimizers.optimizer_scipy.SciPyResults(energy: float = None, history: tequila_code.optimizers.optimizer_base.OptimizerHistory = None, variables: dict = None, scipy_result: scipy.optimize._optimize.OptimizeResult = None)[source]

Bases: OptimizerResults

scipy_result: OptimizeResult = None
exception tequila_code.optimizers.optimizer_scipy.TequilaScipyException(msg)[source]

Bases: TequilaException

tequila_code.optimizers.optimizer_scipy.available_methods(energy=True, gradient=True, hessian=True) List[str][source]

Convenience function.

Parameters:
  • energy – (Default value = True)

  • gradient – (Default value = True)

  • hessian – (Default value = True)

Return type:

Available methods of the scipy optimizer, a list of strings.

tequila_code.optimizers.optimizer_scipy.minimize(objective: Objective, gradient: str | Dict[Variable, Objective] = None, hessian: str | Dict[Tuple[Variable, Variable], Objective] = None, initial_values: Dict[Hashable, Real] = None, variables: List[Hashable] = None, samples: int = None, maxiter: int = 100, backend: str = None, backend_options: dict = None, noise: NoiseModel = None, device: str = None, method: str = 'BFGS', tol: float = 0.001, method_options: dict = None, method_bounds: Dict[Hashable, Real] = None, method_constraints=None, silent: bool = False, save_history: bool = True, *args, **kwargs) SciPyResults[source]
Parameters:
  • objective (Objective :) – The tequila objective to optimize

  • gradient (Union[str, Dict[Variable, Objective], None] : Default value = None):) – ‘2-point’, ‘cs’ or ‘3-point’ for numerical gradient evaluation (does not work in combination with all optimizers), dictionary of variables and tequila objective to define own gradient, None for automatic construction (default) Other options include ‘qng’ to use the quantum natural gradient.

  • hessian (Union[str, Dict[Variable, Objective], None], optional:) – ‘2-point’, ‘cs’ or ‘3-point’ for numerical gradient evaluation (does not work in combination with all optimizers), dictionary (keys:tuple of variables, values:tequila objective) to define own gradient, None for automatic construction (default)

  • initial_values (Dict[Hashable, numbers.Real], optional:) – Initial values as dictionary of Hashable types (variable keys) and floating point numbers. If given None they will all be set to zero

  • variables (List[Hashable], optional:) – List of Variables to optimize

  • samples (int, optional:) – samples/shots to take in every run of the quantum circuits (None activates full wavefunction simulation)

  • maxiter (int : (Default value = 100):) – max iters to use.

  • backend (str, optional:) – Simulator backend, will be automatically chosen if set to None

  • backend_options (dict, optional:) – Additional options for the backend Will be unpacked and passed to the compiled objective in every call

  • noise (NoiseModel, optional:) – a NoiseModel to apply to all expectation values in the objective.

  • method (str : (Default = "BFGS"):) – Optimization method (see scipy documentation, or ‘available methods’)

  • tol (float : (Default = 1.e-3):) – Convergence tolerance for optimization (see scipy documentation)

  • method_options (dict, optional:) – Dictionary of options (see scipy documentation)

  • method_bounds (Dict[Hashable, Tuple[float, float]], optional:) – bounds for the variables (see scipy documentation)

  • method_constraints (optional:) – (see scipy documentation)

  • silent (bool :) – No printout if True

  • save_history (bool:) – Save the history throughout the optimization

Returns:

the results of optimization

Return type:

SciPyResults
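Example (a minimal sketch of calling the SciPy wrapper with variable bounds and a numerical gradient as described above; the bound values, circuit and tequila import are illustrative assumptions, and the optimizer import follows the module path documented here):

import tequila as tq
from tequila_code.optimizers.optimizer_scipy import minimize as minimize_scipy

U = tq.gates.Ry(angle="a", target=0)
H = tq.paulis.Z(0)
E = tq.ExpectationValue(U=U, H=H)

result = minimize_scipy(objective=E, method="L-BFGS-B",
                        gradient="2-point",                 # numerical gradient, see the gradient parameter above
                        initial_values={"a": 0.5},
                        method_bounds={"a": (0.0, 6.28)},
                        tol=1.e-4, silent=True)
print(result.energy, result.variables)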

Module contents

tequila_code.optimizers.minimize(objective, method: str = 'bfgs', variables: list = None, initial_values: dict | Number | Callable = 0.0, maxiter: int = None, *args, **kwargs)[source]
Parameters:
  • method (str:) – The optimization method (e.g. bfgs, cobyla, nelder-mead, …) see ‘tq.optimizers.show_available_methods()’ for an overview

  • objective (tq.Objective:) – The abstract tequila objective to be optimized

  • variables (list of names:) – The variables which shall be optimized given as list Can be passed as list of names or list of tq variables

  • initial_values (dict:) – Initial values for the optimization, passed as dictionary with the variable names as keys. Alternatively zero, random or a single number are accepted

  • maxiter – maximum number of iterations

  • kwargs

    further keyword arguments for the actual minimization functions. These functions can also be called directly as tq.minimize_modulename, e.g. tq.minimize_scipy; see their documentation for more details.

    example: the gradient keyword (Default value: None) gives instructions for gradient compilation. It can be a dictionary of tequila objectives representing the gradients, or a string/dictionary giving instructions for numerical gradients, for example:

    gradient = '2-point'
    gradient = {'method': '2-point', 'stepsize': 1.e-4}
    gradient = {'method': Callable, 'stepsize': 1.e-4}
    (see optimizer_base.py for method examples)

    gradient = None: analytical gradients are compiled
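Example (a minimal sketch of the two gradient modes listed above, using an illustrative one-parameter objective; the tequila import is an assumption):

import tequila as tq

U = tq.gates.Ry(angle="a", target=0)
H = tq.paulis.Z(0)
E = tq.ExpectationValue(U=U, H=H)

# default: analytical gradients are compiled
result = tq.minimize(objective=E, method="bfgs", initial_values={"a": 0.5})

# numerical two-point gradients with an explicit step size
result = tq.minimize(objective=E, method="bfgs", initial_values={"a": 0.5},
                     gradient={"method": "2-point", "stepsize": 1.e-4})
print(result.energy)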

tequila_code.optimizers.show_available_optimizers(module=None)[source]
Returns:

  • A list of available optimization methods

  • The list depends on optimization packages installed in your system
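Example (a minimal sketch; the import follows the package path documented above):

from tequila_code.optimizers import show_available_optimizers

# list (or print) the optimizers and methods available in this installation,
# which depends on the optimization packages installed on your system
show_available_optimizers()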