When summarizing a paper by Ralph and Pienaar (see my post about it), I wondered whether one could apply their method of interpolating between desired Hamiltonians somewhere else.

Their basic idea is something like this:

• Express a quantum field theory (i.e. all its operators) in terms of annihilation and creation operators.
• Notice that nonlinear combinations can be computed by evaluating those ladder operators and their commutators (so one only has to compute the commutators of the creation/annihilation operators).
• Introduce some parameter $a$ that interpolates between $[A,B]_{a=1}=\text{desired effect}$ and $[A,B]_{a=0}=[A,B]$, where $[A,B]$ is the standard commutator $AB-BA$.
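To make the last step concrete, here is a toy numerical sketch of such a deformation. This is *not* the construction from the paper, just the simplest linear interpolation one could write down; the matrices and the "desired" bracket $D$ below are made-up examples:

```python
# Toy sketch: interpolate between the standard commutator (a = 0) and some
# "desired effect" D (a = 1). The 4-level ladder operators and the choice
# D = identity are purely illustrative, not taken from the paper.
import numpy as np

def commutator(A, B):
    """Standard commutator [A, B] = AB - BA."""
    return A @ B - B @ A

def interpolated_commutator(A, B, D, a):
    """Linear interpolation: (1 - a) * [A, B] + a * D."""
    return (1 - a) * commutator(A, B) + a * D

n = 4
annihilate = np.diag(np.sqrt(np.arange(1, n)), k=1)   # truncated annihilation op
create = annihilate.T                                  # truncated creation op
D = np.eye(n)                                          # stand-in "desired effect"

print(interpolated_commutator(create, annihilate, D, 0.0))  # = [a†, a]
print(interpolated_commutator(create, annihilate, D, 1.0))  # = D
```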

Seen this way, it is natural to look for an interpolation that smooths the potential (i.e. the Hamiltonian), removing many of the local minima that prevent the use of techniques like the one presented in my bachelor’s thesis and that tend to cause problems for numerical methods like gradient descent, which in turn is the foundation of backpropagation, an algorithm widely used for optimizing neural networks.
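To see the problem concretely: plain gradient descent on a one-dimensional wiggly potential (an arbitrary example of my own choosing) ends up in different local minima depending on where it starts:

```python
# Gradient descent on a wiggly potential V(x) = x^2 + 3 sin(4x) gets trapped
# in whichever local minimum is nearest its starting point. Potential, step
# size and iteration count are arbitrary illustrative choices.
import numpy as np

def V(x):
    return x**2 + 3.0 * np.sin(4.0 * x)

def grad_V(x):
    return 2.0 * x + 12.0 * np.cos(4.0 * x)

def gradient_descent(x0, lr=0.01, steps=2000):
    x = x0
    for _ in range(steps):
        x -= lr * grad_V(x)
    return x

print(gradient_descent(-3.0))  # settles in a local minimum on the negative side
print(gradient_descent(3.0))   # settles in a different local minimum
```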

The issue in optimization is that an approach like “take some kind of generalized Fourier transform and truncate it after $N$ terms to get a smoothed version” does not really work in practice, since it would require integrating the potential over a high-dimensional space, i.e. evaluating it at a number of points that scales exponentially with the space’s dimension!
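For intuition, here is what such a smoothing looks like when one *can* afford a full grid: convolving a one-dimensional example potential with a Gaussian kernel via the FFT (the potential, grid and kernel width are all illustrative choices). The last line is the punchline: the same brute-force route in $d$ dimensions needs $N^d$ evaluations.

```python
# Grid-based smoothing of a 1D potential by Gaussian convolution (via FFT).
# This removes the wiggles, but it requires V on a full grid -- and a grid
# with N points per axis has N**d points in d dimensions.
import numpy as np

def V(x):
    return x**2 + 3.0 * np.sin(8.0 * x)   # arbitrary wiggly example

N = 1024
x = np.linspace(-4.0, 4.0, N)

# Normalized Gaussian kernel; larger sigma smooths away more local minima.
sigma = 0.5
kernel = np.exp(-0.5 * (x / sigma) ** 2)
kernel /= kernel.sum()

# Circular convolution via FFT: multiply the spectra, transform back.
V_smooth = np.real(np.fft.ifft(np.fft.fft(V(x)) *
                               np.fft.fft(np.fft.ifftshift(kernel))))

d = 10
print(f"grid points needed in {d} dimensions: {N**d:.2e}")
```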

With the above considerations in mind, one realizes that it is an interesting task to smooth a potential without having to evaluate it. (Note to myself: Could this have something to do with gauge invariance? See the note concerning PDEs at the end of Terry Tao’s introduction.)

So my (naïve) approach was the following:

• Quantize the potential function of an optimization problem by interpreting it as the Hamiltonian of a system and replacing its arguments by the corresponding multiplication operators (see the position operators of a multi-particle system; note that the eigenfunction corresponding to the lowest eigenvalue now translates naturally to the minimizer $\operatorname{argmin}_x V(x)$ of the original potential function $V$).
• Express the Hamiltonian in terms of commutators (as in the paper mentioned above: take the Hamiltonian of a harmonic oscillator and write it in terms of creation/annihilation operators).
• Introduce some parameter $a$ for the commutators like discussed above, with the “desired effect” being a smoothed version.
• Find the minimum of this smoothed version and translate it back using the results of the previous calculations.
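The first step can at least be checked numerically without any of the commutator machinery: discretize $H = -\tfrac{\hbar^2}{2}\partial_x^2 + V(x)$ with finite differences (a standard sketch; the grid, the example potential and the small effective $\hbar^2$ are my own illustrative choices) and verify that the lowest eigenvector indeed concentrates near $\operatorname{argmin}_x V(x)$:

```python
# Sketch of the quantization step: build H = -(hbar2/2) d^2/dx^2 + V(x) on a
# grid and check that the ground state peaks near the global minimum of V.
import numpy as np

def V(x):
    return (x - 1.0) ** 2 + 0.5 * np.sin(6.0 * x)   # arbitrary wiggly example

N = 400
x = np.linspace(-4.0, 6.0, N)
dx = x[1] - x[0]

# Central finite-difference Laplacian (Dirichlet boundaries).
laplacian = (np.diag(np.full(N, -2.0)) +
             np.diag(np.ones(N - 1), 1) +
             np.diag(np.ones(N - 1), -1)) / dx**2

# Small effective hbar^2 so the ground state localizes sharply in the
# deepest well (illustrative choice).
hbar2 = 0.01
H = -0.5 * hbar2 * laplacian + np.diag(V(x))

eigvals, eigvecs = np.linalg.eigh(H)
ground_state = eigvecs[:, 0]

x_peak = x[np.argmax(np.abs(ground_state))]
x_min = x[np.argmin(V(x))]
print(f"ground-state peak at {x_peak:.2f}, grid argmin of V at {x_min:.2f}")
```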

However, by coincidence, I just learned about the adiabatic theorem (and the corresponding technique of applying it), which does something similar, in the sense of interpolating between an easier/already solved problem and the original one:

• Assert that every solution of an optimization problem corresponds to a ground state of an appropriate Hamiltonian.
• Prepare a physical system given by some “well-known” Hamiltonian $H_0$ in its ground state and interpolate between $H_0$ and $H_1$, where $H_1$ is the Hamiltonian corresponding to the original (and usually complicated) potential function. The simplest choice would be some linear interpolation $H(t)=(1-t)H_0 + tH_1$ for $t\in [0,1]$.
• If one evolves the initial ground state under $H(t)$ from $t=0$ to $t=1$ “slowly enough” ($t$ should be interpreted as time scaled by a factor that depends on the gap between the lowest two eigenvalues; see the sources mentioned above), the ground state one started with evolves into the ground state of $H(1)=H_1$.
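A minimal toy simulation of this recipe (a two-level system of my own choosing, with $H_0=-\sigma_x$ and $H_1=-\sigma_z$, not any particular optimization problem) shows the overlap with the target ground state approaching 1 as the evolution is slowed down:

```python
# Toy adiabatic evolution: prepare the ground state of H0, evolve under the
# linear interpolation H(s) = (1 - s) H0 + s H1 for total time T, and compare
# the result with the true ground state of H1.
import numpy as np

sx = np.array([[0.0, 1.0], [1.0, 0.0]])
sz = np.array([[1.0, 0.0], [0.0, -1.0]])

H0 = -sx   # "easy" Hamiltonian; ground state (1, 1)/sqrt(2)
H1 = -sz   # "hard" Hamiltonian; ground state (1, 0)

def adiabatic_evolve(T, steps=4000):
    """Evolve the ground state of H0 under H(s), s = t/T, for total time T."""
    psi = np.array([1.0, 1.0], dtype=complex) / np.sqrt(2.0)
    dt = T / steps
    for k in range(steps):
        s = (k + 0.5) / steps
        w, v = np.linalg.eigh((1 - s) * H0 + s * H1)
        # exact evolution under the piecewise-constant Hamiltonian for time dt
        psi = v @ (np.exp(-1j * w * dt) * (v.conj().T @ psi))
    return psi

target = np.array([1.0, 0.0])          # true ground state of H1
for T in (0.5, 50.0):
    overlap = abs(target @ adiabatic_evolve(T)) ** 2
    print(f"T = {T:5.1f}: overlap with ground state of H1 = {overlap:.3f}")
```

Run too fast ($T=0.5$), the state has no time to follow and the overlap stays near the initial value of $0.5$; run slowly ($T=50$), it tracks the instantaneous ground state and the overlap is close to 1.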