# gpflow.kullback_leiblers¶

## gpflow.kullback_leiblers.Dispatcher¶

class gpflow.kullback_leiblers.Dispatcher(name, doc=None)[source]

Bases: multipledispatch.dispatcher.Dispatcher

multipledispatch.Dispatcher uses a generator to yield the desired function implementation, which is problematic because TensorFlow's AutoGraph is not able to compile code that passes through generators.

This class overwrites the problematic method in the original Dispatcher and solely makes use of simple for-loops, which are compilable by AutoGraph.

Attributes

- `doc`
- `funcs`
- `name`
- `ordering`

Methods

| Method | Description |
| --- | --- |
| `__call__(*args, **kwargs)` | Call self as a function. |
| `add(signature, func)` | Add a new types/method pair to the dispatcher. |
| `dispatch(*types)` | Returns the matching function for the given types; returns `None` if no match exists. |
| `get_first_occurrence(*types)` | Returns the first occurrence of a matching function. |
| `get_func_annotations(func)` | Get the annotations of a function's positional parameters. |
| `help(*args, **kwargs)` | Print the docstring for the function corresponding to the inputs. |
| `register(*types, **kwargs)` | Register the dispatcher with a new implementation. |
| `resolve(types)` | Determine the appropriate implementation for this type signature. |
| `source(*args, **kwargs)` | Print the source code for the function corresponding to the inputs. |
| `dispatch_iter` | |
| `get_func_params` | |
| `reorder` | |
dispatch(*types)[source]

Returns the matching function for the given types; returns None if no match exists.

get_first_occurrence(*types)[source]

Returns the first occurrence of a matching function.

Based on multipledispatch.Dispatcher.dispatch_iter, which returns an iterator of matching functions. This method uses the same logic to select functions, but simply returns the first element of the iterator. If no matching function is found, None is returned.
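The first-match, for-loop style of dispatch described above can be sketched in plain Python. This is a simplified, hypothetical illustration (the `funcs` and `ordering` names mirror the attributes listed above), not GPflow's actual implementation, which inherits most of its behaviour from multipledispatch:

```python
class SimpleDispatcher:
    """Minimal sketch of first-match dispatch using a plain for-loop."""

    def __init__(self, name):
        self.name = name
        self.funcs = {}      # maps type signatures to implementations
        self.ordering = []   # signatures in registration order

    def register(self, *types):
        def decorator(func):
            self.funcs[types] = func
            self.ordering.append(types)
            return func
        return decorator

    def get_first_occurrence(self, *types):
        # A plain for-loop instead of a generator, so tracing tools such as
        # TensorFlow's AutoGraph can compile through the dispatch.
        for signature in self.ordering:
            if len(signature) == len(types) and all(
                issubclass(t, s) for t, s in zip(types, signature)
            ):
                return self.funcs[signature]
        return None  # no matching implementation registered

    def __call__(self, *args, **kwargs):
        func = self.get_first_occurrence(*(type(a) for a in args))
        if func is None:
            raise NotImplementedError(f"No implementation for {self.name}")
        return func(*args, **kwargs)
```

Registered implementations are then selected by the runtime types of the arguments, with the first matching signature winning.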

## gpflow.kullback_leiblers.gauss_kl¶

gpflow.kullback_leiblers.gauss_kl(q_mu, q_sqrt, K=None, *, K_cholesky=None)[source]

Compute the KL divergence KL[q || p] between

q(x) = N(q_mu, q_sqrt^2)

and

p(x) = N(0, K)  if K is not None
p(x) = N(0, I)  if K is None

We assume L multiple independent distributions, given by the columns of q_mu and the first or last dimension of q_sqrt. Returns the sum of the divergences.

q_mu is a matrix ([M, L]); each column contains a mean.

q_sqrt can be a 3D tensor ([L, M, M]); each matrix within is a lower-triangular square-root matrix of the covariance of q.

q_sqrt can be a matrix ([M, L]); each column represents the diagonal of a square-root matrix of the covariance of q.

K is the covariance of p (positive-definite matrix). The K matrix can be passed either directly as K, or as its Cholesky factor, K_cholesky. In either case, it can be a single matrix [M, M], in which case the sum of the L KL divergences is computed by broadcasting, or L different covariances [L, M, M].

Note: if no K matrix is given (both K and K_cholesky are None), gauss_kl computes the KL divergence from p(x) = N(0, I) instead.
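For intuition, the quantity gauss_kl computes in the whitened case (K is None, a single latent with a full lower-triangular q_sqrt) follows the standard closed form KL[N(m, LLᵀ) || N(0, I)] = ½ (mᵀm + tr(LLᵀ) − M − log|LLᵀ|). The NumPy sketch below illustrates that formula only; it is not GPflow's TensorFlow implementation:

```python
import numpy as np

def gauss_kl_white(q_mu, q_sqrt):
    """KL[N(q_mu, q_sqrt q_sqrt^T) || N(0, I)] for a single latent function.

    q_mu: [M] mean vector; q_sqrt: [M, M] lower-triangular factor.
    """
    M = q_mu.shape[0]
    mahalanobis = q_mu @ q_mu                  # q_mu^T I^{-1} q_mu
    trace = np.sum(q_sqrt ** 2)                # tr(q_sqrt q_sqrt^T)
    # log|S| via the diagonal of the triangular factor
    logdet_q = 2.0 * np.sum(np.log(np.abs(np.diag(q_sqrt))))
    return 0.5 * (mahalanobis + trace - M - logdet_q)

# A standard-normal posterior has zero divergence from the N(0, I) prior:
kl = gauss_kl_white(np.zeros(3), np.eye(3))
# kl is 0.0
```

With L independent latents, the columns of q_mu (and the corresponding slices of q_sqrt) each contribute one such term, and the sum is returned.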

## gpflow.kullback_leiblers.prior_kl¶

This function uses multiple dispatch; the implementation invoked depends on the types of the arguments passed in:

gpflow.kullback_leiblers.prior_kl( InducingVariables, Kernel, object, object )
# dispatch to -> gpflow.kullback_leiblers._(...)

gpflow.kullback_leiblers._(inducing_variable, kernel, q_mu, q_sqrt, whiten=False)[source]

## gpflow.kullback_leiblers.to_default_float¶

gpflow.kullback_leiblers.to_default_float(x)[source]
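No docstring is reproduced here. In GPflow, to_default_float casts its argument to the library's configured default float type (float64 unless reconfigured via gpflow.config), which avoids dtype mismatches when mixing integer constants with float64 tensors. A rough NumPy analogue, assuming a fixed float64 default:

```python
import numpy as np

# Assumed default float type; in GPflow this is configurable and
# defaults to float64.
DEFAULT_FLOAT = np.float64

def to_default_float(x):
    # Cast scalars and arrays to the default float dtype so that, e.g.,
    # integer shape constants used in KL terms match float64 tensors.
    return np.asarray(x, dtype=DEFAULT_FLOAT)

to_default_float(3).dtype  # dtype('float64')
```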