# gpflow.models.sgpr¶

## gpflow.models.sgpr.SGPRBase¶

class gpflow.models.sgpr.SGPRBase(data, kernel, inducing_variable, *, mean_function=None, num_latent_gps=None, noise_variance=1.0)[source]

Bases: gpflow.models.model.GPModel, gpflow.models.training_mixins.InternalDataTrainingLossMixin

Common base class for SGPR and GPRFITC that provides the common __init__ and upper_bound() methods.

Attributes

• parameters
• trainable_parameters

Methods

• `__call__(self, *args, **kw)`: Call self as a function.
• `calc_num_latent_gps(kernel, likelihood, …)`: Calculates the number of latent GPs required given the number of outputs output_dim and the type of likelihood and kernel.
• `calc_num_latent_gps_from_data(data, kernel, …)`: Calculates the number of latent GPs required based on the data as well as the type of kernel and likelihood.
• `log_posterior_density(self, *args, **kwargs)`: This may be the posterior with respect to the hyperparameters (e.g. …
• `log_prior_density(self)`: Sum of the log prior probability densities of all (constrained) variables in this model.
• `maximum_log_likelihood_objective(self, …)`: Objective for maximum likelihood estimation.
• `predict_f_samples(self, Xnew, num_samples, …)`: Produce samples from the posterior latent function(s) at the input points.
• `predict_log_density(self, data, …)`: Compute the log density of the data at the new data points.
• `predict_y(self, Xnew, full_cov, full_output_cov)`: Compute the mean and variance of the held-out data at the input points.
• `training_loss(self)`: Returns the training loss for this model.
• `training_loss_closure(self, *[, compile])`: Convenience method.
• `upper_bound(self)`: Upper bound for the sparse GP regression marginal likelihood.
• `predict_f`
Parameters
• data (Tuple[tensorflow.Tensor, tensorflow.Tensor]) –

• kernel (Kernel) –

• inducing_variable (InducingPoints) –

• mean_function (Optional[MeanFunction]) –

• num_latent_gps (Optional[int]) –

• noise_variance (float) –

__init__(self, data: Tuple[tensorflow.Tensor, tensorflow.Tensor], kernel: gpflow.kernels.base.Kernel, inducing_variable: gpflow.inducing_variables.inducing_variables.InducingPoints, *, mean_function: Union[gpflow.mean_functions.MeanFunction, NoneType] = None, num_latent_gps: Union[int, NoneType] = None, noise_variance: float = 1.0)[source]
data: a tuple of (X, Y), where the inputs X have shape [N, D] and the outputs Y have shape [N, R].

inducing_variable: an InducingPoints instance or a matrix of the pseudo-inputs Z, of shape [M, D].

kernel, mean_function are appropriate GPflow objects.

This method only works with a Gaussian likelihood; its variance is initialized to noise_variance.

Parameters
• data (Tuple[tensorflow.Tensor, tensorflow.Tensor]) –

• kernel (Kernel) –

• inducing_variable (InducingPoints) –

• mean_function (Optional[MeanFunction]) –

• num_latent_gps (Optional[int]) –

• noise_variance (float) –

upper_bound(self) → tensorflow.Tensor[source]

Upper bound for the sparse GP regression marginal likelihood. Note that the same inducing points are used for calculating the upper bound, as are used for computing the likelihood approximation. This may not lead to the best upper bound. The upper bound can be tightened by optimising Z, just like the lower bound. This is especially important in FITC, as FITC is known to produce poor inducing point locations. An optimisable upper bound can be found in https://github.com/markvdw/gp_upper.

The key reference is

@misc{titsias_2014,
title={Variational Inference for Gaussian and Determinantal Point Processes},
url={http://www2.aueb.gr/users/mtitsias/papers/titsiasNipsVar14.pdf},
publisher={Workshop on Advances in Variational Inference (NIPS 2014)},
author={Titsias, Michalis K.},
year={2014},
month={Dec}
}


The key quantity, the trace term, can be computed via

>>> _, v = conditionals.conditional(X, model.inducing_variable.Z, model.kernel,
...                                 np.zeros((len(model.inducing_variable), 1)))


which computes each individual element of the trace term.

Return type

tensorflow.Tensor

## gpflow.models.sgpr.data_input_to_tensor¶

gpflow.models.sgpr.data_input_to_tensor(structure)[source]

Converts non-tensor elements of a structure to TensorFlow tensors while retaining the structure itself. The function does not keep the original elements' dtypes; it forcefully converts them to GPflow's default float type.

## gpflow.models.sgpr.inducingpoint_wrapper¶

gpflow.models.sgpr.inducingpoint_wrapper(inducing_variable: Union[gpflow.inducing_variables.inducing_variables.InducingVariables, tensorflow.Tensor, numpy.ndarray]) → gpflow.inducing_variables.inducing_variables.InducingVariables[source]

This wrapper allows one to transparently pass either an InducingVariables object or an array specifying the positions of inducing points.

Parameters

inducing_variable (Union[InducingVariables, tensorflow.Tensor, ndarray]) –

Return type

InducingVariables