gpflow.likelihoods

gpflow.likelihoods.Bernoulli

class gpflow.likelihoods.Bernoulli(invlink=<function inv_probit>, **kwargs)[source]

Bases: gpflow.likelihoods.base.ScalarLikelihood

Attributes
parameters
trainable_parameters

Methods

__call__(self, *args, **kw)

Call self as a function.

conditional_mean(self, F)

The conditional mean of Y|F: [E[Y₁|F], …, E[Yₖ|F]] where K = observation_dim

conditional_variance(self, F)

The conditional marginal variance of Y|F: [var(Y₁|F), …, var(Yₖ|F)] where K = observation_dim

log_prob(self, F, Y)

The log probability density log p(Y|F)

predict_density(self, Fmu, Fvar, Y)

Deprecated: see predict_log_density

predict_log_density(self, Fmu, Fvar, Y)

Given a Normal distribution for the latent function, and a datum Y, compute the log predictive density of Y,

predict_mean_and_var(self, Fmu, Fvar)

Given a Normal distribution for the latent function, return the mean and marginal variance of Y,

variational_expectations(self, Fmu, Fvar, Y)

Compute the expected log density of the data, given a Gaussian distribution for the function values,

gpflow.likelihoods.Beta

class gpflow.likelihoods.Beta(invlink=<function inv_probit>, scale=1.0, **kwargs)[source]

Bases: gpflow.likelihoods.base.ScalarLikelihood

This uses a reparameterisation of the Beta density. We have the mean of the Beta distribution given by the transformed process:

m = invlink(f)

and a scale parameter. The familiar α, β parameters are given by

m = α / (α + β)
scale = α + β

so:

α = scale * m
β = scale * (1 - m)
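
For concreteness, a minimal NumPy sketch of this reparameterisation, using the standard normal CDF for the default inv_probit link (scipy is used here only for the CDF):

import numpy as np
from scipy.stats import norm

f, scale = 0.3, 2.0
m = norm.cdf(f)                 # m = invlink(f), the inverse probit
alpha = scale * m               # α = scale * m
beta = scale * (1.0 - m)        # β = scale * (1 - m)
assert np.isclose(alpha / (alpha + beta), m)   # mean of Beta(α, β) recovers m
assert np.isclose(alpha + beta, scale)         # α + β recovers the scale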

Attributes
parameters
trainable_parameters

Methods

__call__(self, *args, **kw)

Call self as a function.

conditional_mean(self, F)

The conditional mean of Y|F: [E[Y₁|F], …, E[Yₖ|F]] where K = observation_dim

conditional_variance(self, F)

The conditional marginal variance of Y|F: [var(Y₁|F), …, var(Yₖ|F)] where K = observation_dim

log_prob(self, F, Y)

The log probability density log p(Y|F)

predict_density(self, Fmu, Fvar, Y)

Deprecated: see predict_log_density

predict_log_density(self, Fmu, Fvar, Y)

Given a Normal distribution for the latent function, and a datum Y, compute the log predictive density of Y,

predict_mean_and_var(self, Fmu, Fvar)

Given a Normal distribution for the latent function, return the mean and marginal variance of Y,

variational_expectations(self, Fmu, Fvar, Y)

Compute the expected log density of the data, given a Gaussian distribution for the function values,

gpflow.likelihoods.Exponential

class gpflow.likelihoods.Exponential(invlink=tensorflow.exp, **kwargs)[source]

Bases: gpflow.likelihoods.base.ScalarLikelihood

Attributes
parameters
trainable_parameters

Methods

__call__(self, *args, **kw)

Call self as a function.

conditional_mean(self, F)

The conditional mean of Y|F: [E[Y₁|F], …, E[Yₖ|F]] where K = observation_dim

conditional_variance(self, F)

The conditional marginal variance of Y|F: [var(Y₁|F), …, var(Yₖ|F)] where K = observation_dim

log_prob(self, F, Y)

The log probability density log p(Y|F)

predict_density(self, Fmu, Fvar, Y)

Deprecated: see predict_log_density

predict_log_density(self, Fmu, Fvar, Y)

Given a Normal distribution for the latent function, and a datum Y, compute the log predictive density of Y,

predict_mean_and_var(self, Fmu, Fvar)

Given a Normal distribution for the latent function, return the mean and marginal variance of Y,

variational_expectations(self, Fmu, Fvar, Y)

Compute the expected log density of the data, given a Gaussian distribution for the function values,

gpflow.likelihoods.Gamma

class gpflow.likelihoods.Gamma(invlink=tensorflow.exp, **kwargs)[source]

Bases: gpflow.likelihoods.base.ScalarLikelihood

Use the transformed GP to give the scale (inverse rate) of the Gamma
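
A minimal usage sketch (assuming GPflow 2.x as documented here): with the default exp inverse link, the GP sets the Gamma scale, so the conditional mean is the shape parameter times exp(f).

import tensorflow as tf
import gpflow

lik = gpflow.likelihoods.Gamma()                     # scale = exp(f); shape is a likelihood parameter
F = tf.constant([[0.0], [1.0]], dtype=tf.float64)    # [N, latent_dim]
print(lik.conditional_mean(F))                       # shape * exp(F), with shape defaulting to 1.0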

Attributes
parameters
trainable_parameters

Methods

__call__(self, *args, **kw)

Call self as a function.

conditional_mean(self, F)

The conditional mean of Y|F: [E[Y₁|F], …, E[Yₖ|F]] where K = observation_dim

conditional_variance(self, F)

The conditional marginal variance of Y|F: [var(Y₁|F), …, var(Yₖ|F)] where K = observation_dim

log_prob(self, F, Y)

The log probability density log p(Y|F)

predict_density(self, Fmu, Fvar, Y)

Deprecated: see predict_log_density

predict_log_density(self, Fmu, Fvar, Y)

Given a Normal distribution for the latent function, and a datum Y, compute the log predictive density of Y,

predict_mean_and_var(self, Fmu, Fvar)

Given a Normal distribution for the latent function, return the mean and marginal variance of Y,

variational_expectations(self, Fmu, Fvar, Y)

Compute the expected log density of the data, given a Gaussian distribution for the function values,

gpflow.likelihoods.Gaussian

class gpflow.likelihoods.Gaussian(variance=1.0, variance_lower_bound=1e-06, **kwargs)[source]

Bases: gpflow.likelihoods.base.ScalarLikelihood

The Gaussian likelihood is appropriate where uncertainties associated with the data are believed to follow a normal distribution, with constant variance.

Very small uncertainties can lead to numerical instability during the optimization process. A lower bound of 1e-6 is therefore imposed on the likelihood variance by default.

Attributes
parameters
trainable_parameters

Methods

__call__(self, *args, **kw)

Call self as a function.

conditional_mean(self, F)

The conditional mean of Y|F: [E[Y₁|F], …, E[Yₖ|F]] where K = observation_dim

conditional_variance(self, F)

The conditional marginal variance of Y|F: [var(Y₁|F), …, var(Yₖ|F)] where K = observation_dim

log_prob(self, F, Y)

The log probability density log p(Y|F)

predict_density(self, Fmu, Fvar, Y)

Deprecated: see predict_log_density

predict_log_density(self, Fmu, Fvar, Y)

Given a Normal distribution for the latent function, and a datum Y, compute the log predictive density of Y,

predict_mean_and_var(self, Fmu, Fvar)

Given a Normal distribution for the latent function, return the mean and marginal variance of Y,

variational_expectations(self, Fmu, Fvar, Y)

Compute the expected log density of the data, given a Gaussian distribution for the function values,

__init__(self, variance=1.0, variance_lower_bound=1e-06, **kwargs)[source]
Parameters
  • variance – The noise variance; must be greater than variance_lower_bound.

  • variance_lower_bound – The lower (exclusive) bound of variance.

  • kwargs – Keyword arguments forwarded to ScalarLikelihood.
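
A minimal usage sketch, assuming GPflow 2.x as documented on this page:

import tensorflow as tf
import gpflow

# Construct the likelihood with a noise variance above the default
# lower bound of 1e-6.
lik = gpflow.likelihoods.Gaussian(variance=0.1)

F = tf.constant([[0.0], [1.0]], dtype=tf.float64)   # [N, latent_dim]
Y = tf.constant([[0.1], [1.2]], dtype=tf.float64)   # [N, observation_dim]
print(lik.log_prob(F, Y))                           # log N(Y | F, 0.1), shape [N]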

gpflow.likelihoods.GaussianMC

class gpflow.likelihoods.GaussianMC(*args, **kwargs)[source]

Bases: gpflow.likelihoods.base.MonteCarloLikelihood, gpflow.likelihoods.scalar_continuous.Gaussian

Stochastic version of Gaussian likelihood for demonstration purposes only.

Attributes
parameters
trainable_parameters

Methods

__call__(self, *args, **kw)

Call self as a function.

conditional_mean(self, F)

The conditional mean of Y|F: [E[Y₁|F], …, E[Yₖ|F]] where K = observation_dim

conditional_variance(self, F)

The conditional marginal variance of Y|F: [var(Y₁|F), …, var(Yₖ|F)] where K = observation_dim

log_prob(self, F, Y)

The log probability density log p(Y|F)

predict_density(self, Fmu, Fvar, Y)

Deprecated: see predict_log_density

predict_log_density(self, Fmu, Fvar, Y)

Given a Normal distribution for the latent function, and a datum Y, compute the log predictive density of Y,

predict_mean_and_var(self, Fmu, Fvar)

Given a Normal distribution for the latent function, return the mean and marginal variance of Y,

variational_expectations(self, Fmu, Fvar, Y)

Compute the expected log density of the data, given a Gaussian distribution for the function values,

gpflow.likelihoods.Likelihood

class gpflow.likelihoods.Likelihood(latent_dim, observation_dim)[source]

Bases: gpflow.base.Module

Attributes
parameters
trainable_parameters

Methods

__call__(self, *args, **kw)

Call self as a function.

conditional_mean(self, F)

The conditional mean of Y|F: [E[Y₁|F], …, E[Yₖ|F]] where K = observation_dim

conditional_variance(self, F)

The conditional marginal variance of Y|F: [var(Y₁|F), …, var(Yₖ|F)] where K = observation_dim

log_prob(self, F, Y)

The log probability density log p(Y|F)

predict_density(self, Fmu, Fvar, Y)

Deprecated: see predict_log_density

predict_log_density(self, Fmu, Fvar, Y)

Given a Normal distribution for the latent function, and a datum Y, compute the log predictive density of Y,

predict_mean_and_var(self, Fmu, Fvar)

Given a Normal distribution for the latent function, return the mean and marginal variance of Y,

variational_expectations(self, Fmu, Fvar, Y)

Compute the expected log density of the data, given a Gaussian distribution for the function values,


__init__(self, latent_dim: int, observation_dim: int)[source]

A base class for likelihoods, which specifies an observation model connecting the latent functions (‘F’) to the data (‘Y’).

All of the members of this class are expected to obey some shape conventions, as specified by latent_dim and observation_dim.

If we’re operating on an array of function values ‘F’, then the last dimension represents multiple functions (preceding dimensions could represent different data points, or different random samples, for example). Similarly, the last dimension of Y holds the elements of a single observation. We check that the dimensions are as this object expects.

The return shape of every function in this class is the broadcast of the argument shapes, excluding the last dimension of each argument.

Parameters
  • latent_dim (int) – the dimension of the vector F of latent functions for a single data point

  • observation_dim (int) – the dimension of the observation vector Y for a single data point
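
To make these conventions concrete, a small sketch (GPflow 2.x assumed): for a scalar likelihood, latent_dim == observation_dim == 1, leading dimensions broadcast, and the returned statistic drops the last dimension.

import tensorflow as tf
import gpflow

lik = gpflow.likelihoods.Gaussian()                 # scalar likelihood
F = tf.random.normal([5, 3, 1], dtype=tf.float64)   # […, latent_dim]
Y = tf.random.normal([5, 3, 1], dtype=tf.float64)   # […, observation_dim]
print(lik.log_prob(F, Y).shape)                     # (5, 3): last dimension reduced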

_check_data_dims(self, Y)[source]

Ensure that a tensor of data Y has observation_dim as right-most dimension.

Parameters

Y – observation Tensor, with shape […, observation_dim]

_check_last_dims_valid(self, F, Y)[source]

Assert that the dimensions of the latent functions F and the data Y are compatible.

Parameters
  • F – function evaluation Tensor, with shape […, latent_dim]

  • Y – observation Tensor, with shape […, observation_dim]

_check_latent_dims(self, F)[source]

Ensure that a tensor of latent functions F has latent_dim as right-most dimension.

Parameters

F – function evaluation Tensor, with shape […, latent_dim]

_check_return_shape(self, result, F, Y)[source]

Check that the shape of a computed statistic of the data is the broadcasted shape from F and Y.

Parameters
  • result – result Tensor, with shape […]

  • F – function evaluation Tensor, with shape […, latent_dim]

  • Y – observation Tensor, with shape […, observation_dim]

conditional_mean(self, F)[source]

The conditional mean of Y|F: [E[Y₁|F], …, E[Yₖ|F]] where K = observation_dim

Parameters

F – function evaluation Tensor, with shape […, latent_dim]

Returns

mean […, observation_dim]

conditional_variance(self, F)[source]

The conditional marginal variance of Y|F: [var(Y₁|F), …, var(Yₖ|F)] where K = observation_dim

Parameters

F – function evaluation Tensor, with shape […, latent_dim]

Returns

variance […, observation_dim]

log_prob(self, F, Y)[source]

The log probability density log p(Y|F)

Parameters
  • F – function evaluation Tensor, with shape […, latent_dim]

  • Y – observation Tensor, with shape […, observation_dim]

Returns

log pdf, with shape […]

predict_density(self, Fmu, Fvar, Y)[source]

Deprecated: see predict_log_density

predict_log_density(self, Fmu, Fvar, Y)[source]

Given a Normal distribution for the latent function, and a datum Y, compute the log predictive density of Y,

i.e. if

q(f) = N(Fmu, Fvar)

and this object represents

p(y|f)

then this method computes the predictive density

log ∫ p(y=Y|f) q(f) df

Parameters
  • Fmu – mean function evaluation Tensor, with shape […, latent_dim]

  • Fvar – variance of function evaluation Tensor, with shape […, latent_dim]

  • Y – observation Tensor, with shape […, observation_dim]

Returns

log predictive density, with shape […]
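
For the Gaussian likelihood this integral is available in closed form, log N(Y | Fmu, Fvar + noise variance); a quick sketch (GPflow 2.x assumed):

import tensorflow as tf
import gpflow

lik = gpflow.likelihoods.Gaussian(variance=0.1)
Fmu = tf.constant([[0.0]], dtype=tf.float64)
Fvar = tf.constant([[0.2]], dtype=tf.float64)
Y = tf.constant([[0.5]], dtype=tf.float64)
print(lik.predict_log_density(Fmu, Fvar, Y))   # log N(0.5 | 0.0, 0.2 + 0.1), shape [1]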

predict_mean_and_var(self, Fmu, Fvar)[source]

Given a Normal distribution for the latent function, return the mean and marginal variance of Y,

i.e. if

q(f) = N(Fmu, Fvar)

and this object represents

p(y|f)

then this method computes the predictive mean

∫∫ y p(y|f)q(f) df dy

and the predictive variance

∫∫ y² p(y|f)q(f) df dy - [ ∫∫ y p(y|f)q(f) df dy ]²

Parameters
  • Fmu – mean function evaluation Tensor, with shape […, latent_dim]

  • Fvar – variance of function evaluation Tensor, with shape […, latent_dim]

Returns

mean and variance, both with shape […, observation_dim]
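
For the Gaussian likelihood these two integrals reduce to E[y] = Fmu and Var[y] = Fvar + noise variance; a quick sketch (GPflow 2.x assumed):

import tensorflow as tf
import gpflow

lik = gpflow.likelihoods.Gaussian(variance=0.1)
Fmu = tf.constant([[0.5]], dtype=tf.float64)
Fvar = tf.constant([[0.2]], dtype=tf.float64)
mean, var = lik.predict_mean_and_var(Fmu, Fvar)
print(mean.numpy(), var.numpy())   # [[0.5]] and [[0.3]]: var = Fvar + noise variance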

variational_expectations(self, Fmu, Fvar, Y)[source]

Compute the expected log density of the data, given a Gaussian distribution for the function values,

i.e. if

q(f) = N(Fmu, Fvar)

and this object represents

p(y|f)

then this method computes

∫ log(p(y=Y|f)) q(f) df.

This only works if the broadcasting dimensions of the statistics of q(f) (mean and variance) are broadcastable with those of the data Y.

Parameters
  • Fmu – mean function evaluation Tensor, with shape […, latent_dim]

  • Fvar – variance of function evaluation Tensor, with shape […, latent_dim]

  • Y – observation Tensor, with shape […, observation_dim]

Returns

expected log density of the data given q(F), with shape […]
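
A rough cross-check of this integral by naive Monte Carlo (GPflow 2.x assumed; the sample count is arbitrary):

import numpy as np
import tensorflow as tf
import gpflow

lik = gpflow.likelihoods.Bernoulli()
Fmu = tf.constant([[0.3]], dtype=tf.float64)
Fvar = tf.constant([[0.5]], dtype=tf.float64)
Y = tf.constant([[1.0]], dtype=tf.float64)

expected = lik.variational_expectations(Fmu, Fvar, Y)  # default Gauss-Hermite quadrature
f = np.random.randn(100000, 1) * np.sqrt(0.5) + 0.3    # draws from q(f) = N(0.3, 0.5)
mc = tf.reduce_mean(lik.log_prob(tf.constant(f), Y))   # ≈ ∫ log p(y=1|f) q(f) df
print(float(expected), float(mc))                      # should roughly agree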

gpflow.likelihoods.MonteCarloLikelihood

class gpflow.likelihoods.MonteCarloLikelihood(*args, **kwargs)[source]

Bases: gpflow.likelihoods.base.Likelihood

Attributes
parameters
trainable_parameters

Methods

__call__(self, *args, **kw)

Call self as a function.

conditional_mean(self, F)

The conditional mean of Y|F: [E[Y₁|F], …, E[Yₖ|F]] where K = observation_dim

conditional_variance(self, F)

The conditional marginal variance of Y|F: [var(Y₁|F), …, var(Yₖ|F)] where K = observation_dim

log_prob(self, F, Y)

The log probability density log p(Y|F)

predict_density(self, Fmu, Fvar, Y)

Deprecated: see predict_log_density

predict_log_density(self, Fmu, Fvar, Y)

Given a Normal distribution for the latent function, and a datum Y, compute the log predictive density of Y,

predict_mean_and_var(self, Fmu, Fvar)

Given a Normal distribution for the latent function, return the mean and marginal variance of Y,

variational_expectations(self, Fmu, Fvar, Y)

Compute the expected log density of the data, given a Gaussian distribution for the function values,

_predict_log_density(self, Fmu, Fvar, Y, epsilon=None)[source]

Given a Normal distribution for the latent function, and a datum Y, compute the log predictive density of Y.

i.e. if

q(f) = N(Fmu, Fvar)

and this object represents

p(y|f)

then this method computes the predictive density

log ∫ p(y=Y|f)q(f) df

Here, we implement a default Monte Carlo routine.
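
A NumPy sketch of such a Monte Carlo routine (the function and argument names are illustrative, not GPflow internals): sample f ~ N(Fmu, Fvar), evaluate log p(Y|f) per sample, and average inside the logarithm with a numerically stable log-mean-exp.

import numpy as np

def mc_predict_log_density(log_prob, Fmu, Fvar, Y, num_samples=1000):
    eps = np.random.randn(num_samples, *np.shape(Fmu))
    f = Fmu + np.sqrt(Fvar) * eps          # samples from q(f) = N(Fmu, Fvar)
    logp = log_prob(f, Y)                  # shape [num_samples, ...]
    m = logp.max(axis=0)                   # log-mean-exp for numerical stability
    return m + np.log(np.mean(np.exp(logp - m), axis=0))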

_predict_mean_and_var(self, Fmu, Fvar, epsilon=None)[source]

Given a Normal distribution for the latent function, return the mean and marginal variance of Y

if

q(f) = N(Fmu, Fvar)

and this object represents

p(y|f)

then this method computes the predictive mean

∫∫ y p(y|f)q(f) df dy

and the predictive variance

∫∫ y² p(y|f)q(f) df dy - [ ∫∫ y p(y|f)q(f) df dy ]²

Here, we implement a default Monte Carlo routine.

_variational_expectations(self, Fmu, Fvar, Y, epsilon=None)[source]

Compute the expected log density of the data, given a Gaussian distribution for the function values.

if

q(f) = N(Fmu, Fvar), where Fmu and Fvar have shape [N, D]

and this object represents

p(y|f), where Y has shape [N, 1]

then this method computes

∫ (log p(y|f)) q(f) df.

Here, we implement a default Monte Carlo quadrature routine.

gpflow.likelihoods.MultiClass

class gpflow.likelihoods.MultiClass(num_classes, invlink=None, **kwargs)[source]

Bases: gpflow.likelihoods.base.Likelihood

Attributes
parameters
trainable_parameters

Methods

__call__(self, *args, **kw)

Call self as a function.

conditional_mean(self, F)

The conditional mean of Y|F: [E[Y₁|F], …, E[Yₖ|F]] where K = observation_dim

conditional_variance(self, F)

The conditional marginal variance of Y|F: [var(Y₁|F), …, var(Yₖ|F)] where K = observation_dim

log_prob(self, F, Y)

The log probability density log p(Y|F)

predict_density(self, Fmu, Fvar, Y)

Deprecated: see predict_log_density

predict_log_density(self, Fmu, Fvar, Y)

Given a Normal distribution for the latent function, and a datum Y, compute the log predictive density of Y,

predict_mean_and_var(self, Fmu, Fvar)

Given a Normal distribution for the latent function, return the mean and marginal variance of Y,

variational_expectations(self, Fmu, Fvar, Y)

Compute the expected log density of the data, given a Gaussian distribution for the function values,

__init__(self, num_classes, invlink=None, **kwargs)[source]

A likelihood for multi-way classification. Currently the only valid choice of inverse-link function (invlink) is an instance of RobustMax.

For most problems, the stochastic Softmax likelihood may be more appropriate (note that you then cannot use the Scipy optimizer).
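
A minimal construction sketch (GPflow 2.x assumed), using the RobustMax inverse link documented below:

import gpflow

num_classes = 3
invlink = gpflow.likelihoods.RobustMax(num_classes)               # the only supported invlink
lik = gpflow.likelihoods.MultiClass(num_classes, invlink=invlink)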

gpflow.likelihoods.Ordinal

class gpflow.likelihoods.Ordinal(bin_edges, **kwargs)[source]

Bases: gpflow.likelihoods.base.ScalarLikelihood

A likelihood for doing ordinal regression.

The data are integer values from 0 to K, and the user must specify K ‘bin edges’ which define the points at which the labels switch. Let the bin edges be [a₀, a₁, …, aₖ₋₁]; then the likelihood is

p(Y=0|F) = ɸ((a₀ - F) / σ)
p(Y=1|F) = ɸ((a₁ - F) / σ) - ɸ((a₀ - F) / σ)
p(Y=2|F) = ɸ((a₂ - F) / σ) - ɸ((a₁ - F) / σ)
…
p(Y=K|F) = 1 - ɸ((aₖ₋₁ - F) / σ)

where ɸ is the cumulative distribution function of a standard Gaussian (the inverse probit function) and σ is a parameter to be learned. A reference is:

@article{chu2005gaussian,
  title={Gaussian processes for ordinal regression},
  author={Chu, Wei and Ghahramani, Zoubin},
  journal={Journal of Machine Learning Research},
  volume={6},
  number={Jul},
  pages={1019--1041},
  year={2005}
}

Attributes
parameters
trainable_parameters

Methods

__call__(self, *args, **kw)

Call self as a function.

conditional_mean(self, F)

The conditional mean of Y|F: [E[Y₁|F], …, E[Yₖ|F]] where K = observation_dim

conditional_variance(self, F)

The conditional marginal variance of Y|F: [var(Y₁|F), …, var(Yₖ|F)] where K = observation_dim

log_prob(self, F, Y)

The log probability density log p(Y|F)

predict_density(self, Fmu, Fvar, Y)

Deprecated: see predict_log_density

predict_log_density(self, Fmu, Fvar, Y)

Given a Normal distribution for the latent function, and a datum Y, compute the log predictive density of Y,

predict_mean_and_var(self, Fmu, Fvar)

Given a Normal distribution for the latent function, return the mean and marginal variance of Y,

variational_expectations(self, Fmu, Fvar, Y)

Compute the expected log density of the data, given a Gaussian distribution for the function values,

__init__(self, bin_edges, **kwargs)[source]

bin_edges is a numpy array specifying the function values at which the output label switches. If the possible Y values are 0…K, then bin_edges should have size K.
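
A minimal construction sketch (GPflow 2.x assumed): labels in {0, 1, 2, 3} are separated by the three bin edges below, and σ is learned.

import numpy as np
import gpflow

bin_edges = np.array([-1.0, 0.0, 1.0])      # K = 3 edges for labels 0…3
lik = gpflow.likelihoods.Ordinal(bin_edges)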

_make_phi(self, F)[source]

A helper function for making predictions. Constructs a probability matrix in which each row gives the probabilities of the corresponding label, with the rows matching the entries of F.

Note that a matrix of F values is flattened.

gpflow.likelihoods.Poisson

class gpflow.likelihoods.Poisson(invlink=tensorflow.exp, binsize=1.0, **kwargs)[source]

Bases: gpflow.likelihoods.base.ScalarLikelihood

Poisson likelihood for use with count data, where the rate is given by the (transformed) GP.

let g(.) be the inverse-link function, then this likelihood represents

p(yᵢ | fᵢ) = Poisson(yᵢ | g(fᵢ) * binsize)

Note on binsize: this is intended for use in a log Gaussian Cox process (doubly stochastic model), where the rate function of an inhomogeneous Poisson process is given by a GP. The intractable likelihood can be approximated via a Riemann sum (with bins of width binsize) using this Poisson likelihood.
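
A minimal sketch of the Riemann-sum setup described above (GPflow 2.x assumed; the bin width is illustrative):

import gpflow

# Counts per bin of width 0.1, so the Poisson rate per bin is exp(f) * 0.1.
lik = gpflow.likelihoods.Poisson(binsize=0.1)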

Attributes
parameters
trainable_parameters

Methods

__call__(self, *args, **kw)

Call self as a function.

conditional_mean(self, F)

The conditional mean of Y|F: [E[Y₁|F], …, E[Yₖ|F]] where K = observation_dim

conditional_variance(self, F)

The conditional marginal variance of Y|F: [var(Y₁|F), …, var(Yₖ|F)] where K = observation_dim

log_prob(self, F, Y)

The log probability density log p(Y|F)

predict_density(self, Fmu, Fvar, Y)

Deprecated: see predict_log_density

predict_log_density(self, Fmu, Fvar, Y)

Given a Normal distribution for the latent function, and a datum Y, compute the log predictive density of Y,

predict_mean_and_var(self, Fmu, Fvar)

Given a Normal distribution for the latent function, return the mean and marginal variance of Y,

variational_expectations(self, Fmu, Fvar, Y)

Compute the expected log density of the data, given a Gaussian distribution for the function values,

gpflow.likelihoods.RobustMax

class gpflow.likelihoods.RobustMax(num_classes, epsilon=0.001, **kwargs)[source]

Bases: gpflow.base.Module

This class represents a multi-class inverse-link function. Given a vector f = [f_1, f_2, … f_k], the result of the mapping is

y = [y_1 … y_k]

with

y_i = 1 - epsilon,       if i == argmax(f)
y_i = epsilon / (k - 1), otherwise

where k is the number of classes.
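
A NumPy sketch of the mapping itself (illustrative, not the GPflow implementation): the argmax class receives probability 1 - epsilon and the remaining classes share epsilon equally.

import numpy as np

def robustmax(f, epsilon=1e-3):
    k = len(f)
    y = np.full(k, epsilon / (k - 1))   # epsilon / (k - 1) for every class
    y[np.argmax(f)] = 1.0 - epsilon     # except the argmax, which gets 1 - epsilon
    return y

print(robustmax(np.array([0.2, 1.5, -0.3])))   # [0.0005, 0.999, 0.0005]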

Attributes
eps_k1
parameters
trainable_parameters

Methods

__call__(self, F)

Call self as a function.

prob_is_largest

safe_sqrt

__init__(self, num_classes, epsilon=0.001, **kwargs)[source]

epsilon represents the fraction of ‘errors’ in the labels of the dataset. This may be a hard parameter to optimize, so by default it is fixed (non-trainable) at a small value.

gpflow.likelihoods.ScalarLikelihood

class gpflow.likelihoods.ScalarLikelihood(**kwargs)[source]

Bases: gpflow.likelihoods.base.Likelihood

A likelihood class that helps with scalar likelihood functions: likelihoods where each scalar latent function is associated with a single scalar observation variable.

If there are multiple latent functions, then there must be a corresponding number of data: we check for this.

The Likelihood class contains methods to compute marginal statistics of functions of the latents and the data ϕ(y,f):

  • variational_expectations: ϕ(y,f) = log p(y|f)

  • predict_log_density: ϕ(y,f) = p(y|f)

Those statistics are computed after having first marginalized the latent processes f under a multivariate normal distribution q(f) that is fully factorized.

Some univariate integrals can be done by quadrature: we implement quadrature routines for 1D integrals in this class, though they may be overridden by inheriting classes where those integrals are available in closed form.
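
A NumPy sketch of the 1D Gauss-Hermite rule used as the default (illustrative, not GPflow's implementation): with the change of variables f = μ + √(2·var)·x, the Gaussian expectation becomes a weighted sum over the Hermite nodes.

import numpy as np

def gauss_hermite_expectation(phi, mu, var, num_points=20):
    x, w = np.polynomial.hermite.hermgauss(num_points)
    # ∫ phi(f) N(f | mu, var) df ≈ (1/√π) Σᵢ wᵢ phi(mu + √(2 var) xᵢ)
    return np.sum(w * phi(mu + np.sqrt(2.0 * var) * x)) / np.sqrt(np.pi)

print(gauss_hermite_expectation(lambda f: f**2, 0.0, 1.0))   # ≈ 1.0 = E[f²] under N(0, 1)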

Attributes
parameters
trainable_parameters

Methods

__call__(self, *args, **kw)

Call self as a function.

conditional_mean(self, F)

The conditional mean of Y|F: [E[Y₁|F], …, E[Yₖ|F]] where K = observation_dim

conditional_variance(self, F)

The conditional marginal variance of Y|F: [var(Y₁|F), …, var(Yₖ|F)] where K = observation_dim

log_prob(self, F, Y)

The log probability density log p(Y|F)

predict_density(self, Fmu, Fvar, Y)

Deprecated: see predict_log_density

predict_log_density(self, Fmu, Fvar, Y)

Given a Normal distribution for the latent function, and a datum Y, compute the log predictive density of Y,

predict_mean_and_var(self, Fmu, Fvar)

Given a Normal distribution for the latent function, return the mean and marginal variance of Y,

variational_expectations(self, Fmu, Fvar, Y)

Compute the expected log density of the data, given a Gaussian distribution for the function values,

_check_last_dims_valid(self, F, Y)[source]

Assert that the dimensions of the latent functions and the data are compatible.

Parameters
  • F – function evaluation Tensor, with shape […, latent_dim]

  • Y – observation Tensor, with shape […, latent_dim]

_log_prob(self, F, Y)[source]

Compute log p(Y|F), where by convention we sum out the last axis as it represents independent latent functions and observations.

Parameters
  • F – function evaluation Tensor, with shape […, latent_dim]

  • Y – observation Tensor, with shape […, latent_dim]

_predict_log_density(self, Fmu, Fvar, Y)[source]

Here, we implement a default Gauss-Hermite quadrature routine, but some likelihoods (Gaussian, Poisson) implement specific cases in closed form.

Parameters
  • Fmu – mean function evaluation Tensor, with shape […, latent_dim]

  • Fvar – variance of function evaluation Tensor, with shape […, latent_dim]

  • Y – observation Tensor, with shape […, latent_dim]

Returns

log predictive density, with shape […]

_predict_mean_and_var(self, Fmu, Fvar)[source]

Here, we implement a default Gauss-Hermite quadrature routine, but some likelihoods (e.g. Gaussian) will implement specific cases.

Parameters
  • Fmu – mean function evaluation Tensor, with shape […, latent_dim]

  • Fvar – variance of function evaluation Tensor, with shape […, latent_dim]

Returns

mean and variance, both with shape […, observation_dim]

_variational_expectations(self, Fmu, Fvar, Y)[source]

Here, we implement a default Gauss-Hermite quadrature routine, but some likelihoods (Gaussian, Poisson) implement specific cases in closed form.

Parameters
  • Fmu – mean function evaluation Tensor, with shape […, latent_dim]

  • Fvar – variance of function evaluation Tensor, with shape […, latent_dim]

  • Y – observation Tensor, with shape […, latent_dim]

Returns

variational expectations, with shape […]

gpflow.likelihoods.Softmax

class gpflow.likelihoods.Softmax(num_classes, **kwargs)[source]

Bases: gpflow.likelihoods.base.MonteCarloLikelihood

The soft-max multi-class likelihood. It can only provide a stochastic Monte-Carlo estimate of the variational expectations term, but this added variance tends to be small compared to that due to mini-batching (when using the SVGP model).
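
A minimal construction sketch (GPflow 2.x assumed): one latent function per class, with Monte Carlo estimates of the expectations, so a stochastic optimizer such as Adam is the natural fit.

import gpflow

lik = gpflow.likelihoods.Softmax(num_classes=3)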

Attributes
parameters
trainable_parameters

Methods

__call__(self, *args, **kw)

Call self as a function.

conditional_mean(self, F)

The conditional mean of Y|F: [E[Y₁|F], …, E[Yₖ|F]] where K = observation_dim

conditional_variance(self, F)

The conditional marginal variance of Y|F: [var(Y₁|F), …, var(Yₖ|F)] where K = observation_dim

log_prob(self, F, Y)

The log probability density log p(Y|F)

predict_density(self, Fmu, Fvar, Y)

Deprecated: see predict_log_density

predict_log_density(self, Fmu, Fvar, Y)

Given a Normal distribution for the latent function, and a datum Y, compute the log predictive density of Y,

predict_mean_and_var(self, Fmu, Fvar)

Given a Normal distribution for the latent function, return the mean and marginal variance of Y,

variational_expectations(self, Fmu, Fvar, Y)

Compute the expected log density of the data, given a Gaussian distribution for the function values,

gpflow.likelihoods.StudentT

class gpflow.likelihoods.StudentT(scale=1.0, df=3.0, **kwargs)[source]

Bases: gpflow.likelihoods.base.ScalarLikelihood

Attributes
parameters
trainable_parameters

Methods

__call__(self, *args, **kw)

Call self as a function.

conditional_mean(self, F)

The conditional mean of Y|F: [E[Y₁|F], …, E[Yₖ|F]] where K = observation_dim

conditional_variance(self, F)

The conditional marginal variance of Y|F: [var(Y₁|F), …, var(Yₖ|F)] where K = observation_dim

log_prob(self, F, Y)

The log probability density log p(Y|F)

predict_density(self, Fmu, Fvar, Y)

Deprecated: see predict_log_density

predict_log_density(self, Fmu, Fvar, Y)

Given a Normal distribution for the latent function, and a datum Y, compute the log predictive density of Y,

predict_mean_and_var(self, Fmu, Fvar)

Given a Normal distribution for the latent function, return the mean and marginal variance of Y,

variational_expectations(self, Fmu, Fvar, Y)

Compute the expected log density of the data, given a Gaussian distribution for the function values,

__init__(self, scale=1.0, df=3.0, **kwargs)[source]
Parameters
  • scale (float) – scale parameter

  • df (float) – degrees of freedom

gpflow.likelihoods.SwitchedLikelihood

class gpflow.likelihoods.SwitchedLikelihood(likelihood_list, **kwargs)[source]

Bases: gpflow.likelihoods.base.ScalarLikelihood

Attributes
parameters
trainable_parameters

Methods

__call__(self, *args, **kw)

Call self as a function.

conditional_mean(self, F)

The conditional mean of Y|F: [E[Y₁|F], …, E[Yₖ|F]] where K = observation_dim

conditional_variance(self, F)

The conditional marginal variance of Y|F: [var(Y₁|F), …, var(Yₖ|F)] where K = observation_dim

log_prob(self, F, Y)

The log probability density log p(Y|F)

predict_density(self, Fmu, Fvar, Y)

Deprecated: see predict_log_density

predict_log_density(self, Fmu, Fvar, Y)

Given a Normal distribution for the latent function, and a datum Y, compute the log predictive density of Y,

predict_mean_and_var(self, Fmu, Fvar)

Given a Normal distribution for the latent function, return the mean and marginal variance of Y,

variational_expectations(self, Fmu, Fvar, Y)

Compute the expected log density of the data, given a Gaussian distribution for the function values,

__init__(self, likelihood_list, **kwargs)[source]

In this likelihood, we assume an extra column of Y, which contains integers specifying which likelihood from the list applies to each data point.
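
A minimal sketch (GPflow 2.x assumed): the final column of Y selects which likelihood handles each row.

import numpy as np
import gpflow

lik = gpflow.likelihoods.SwitchedLikelihood(
    [gpflow.likelihoods.Gaussian(), gpflow.likelihoods.StudentT()]
)
Y = np.array([[0.3, 0],    # handled by the Gaussian (index 0)
              [1.2, 1]])   # handled by the StudentT (index 1)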

_partition_and_stitch(self, args, func_name)[source]

args is a list of tensors, to be passed to self.likelihoods.<func_name>

args[-1] is the ‘Y’ argument, which contains the indices into self.likelihoods.

This function splits up the args using dynamic_partition, calls the relevant function on the likelihoods, and re-combines the result.

gpflow.likelihoods.utils