Training Distributions with KL Divergence
=========================================

| Short tutorial on how to form objectives like in this paper
| https://arxiv.org/pdf/1801.07686.pdf

First some imports, and the advice to use ``tq.numpy`` for functions
inside objectives (to avoid issues in automatic differentiation) and
the regular ``numpy`` for everything else (to avoid issues with jax
where you don't want them).

.. code:: ipython3

    import tequila as tq
    import numpy

First we define the :math:`\max(x,\epsilon)` function we will need
further down the road, and some global variables for this example.

.. code:: ipython3

    # global variables, change here if you want
    eps = 1.e-8
    n_qubits = 5
    n_layers = 1

    def my_max(x):
        # return max(x, eps): clip x from below at eps
        if x < eps:
            return eps
        return x
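As a brief illustration (not part of the original tutorial): the
clipping in ``my_max`` matters once probabilities enter a logarithm,
as in the Kullback-Leibler divergence
:math:`D_{KL}(P\|Q) = \sum_x p(x)\log\frac{p(x)}{q(x)}`. The sketch
below shows one way such a term could be written with ``tq.numpy``;
the helper ``kl_divergence`` and the sample distributions are
illustrative assumptions, not the tutorial's own code.

.. code:: ipython3

    # illustrative sketch (an assumption, not the tutorial's code):
    # clip every probability with my_max before it enters the log,
    # so that tq.numpy.log never receives an exact zero
    def kl_divergence(p, q):
        return sum(my_max(pi) * tq.numpy.log(my_max(pi) / my_max(qi))
                   for pi, qi in zip(p, q))

    # quick sanity check with two hypothetical distributions
    p = [0.5, 0.5, 0.0, 0.0]      # contains zeros on purpose
    q = [0.25, 0.25, 0.25, 0.25]  # uniform reference distribution
    print(kl_divergence(p, q))

In a real objective the entries of ``p`` would presumably come from
circuit measurements rather than plain lists; the lists here only
exercise the clipping behaviour.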