{
"cells": [
{

"cell_type": "markdown", "metadata": {}, "source": [

"GP Regression with GPflow\n", "--\n", "\n", "James Hensman, 2015, 2016\n", "\n", "GP regression (with Gaussian noise) is the most straightforward GP model in GPflow. Because of the conjugacy of the latent process and the noise, the marginal likelihood $p(\\mathbf y\\,|\\,\\theta)$ can be computed exactly.\n", "\n", "This notebook shows how to build a GPR model and estimate the parameters $\\theta$ by both maximum likelihood and MCMC."

]

}, {

"cell_type": "code",
"execution_count": 2,

"metadata": {}, "outputs": [], "source": [

"import gpflow\n", "import numpy as np\n", "import matplotlib\n", "%matplotlib inline\n", "matplotlib.rcParams['figure.figsize'] = (12, 6)\n", "plt = matplotlib.pyplot"

]

}, {

"cell_type": "markdown", "metadata": {}, "source": [

"First build a simple data set."

]

}, {

"cell_type": "code",
"execution_count": 3,

"metadata": {}, "outputs": [

{
"data": {
"text/plain": [
"[<matplotlib.lines.Line2D at 0x11d821940>]"

]

}, "execution_count": 3, "metadata": {}, "output_type": "execute_result"

}, {

"data": {

"text/plain": [
"<matplotlib.figure.Figure at 0x11d1ef048>"
]

}, "metadata": {}, "output_type": "display_data"

}

], "source": [

"N = 12\n", "X = np.random.rand(N,1)\n", "Y = np.sin(12*X) + 0.66*np.cos(25*X) + np.random.randn(N,1)*0.1 + 3\n", "plt.plot(X, Y, 'kx', mew=2)"

]

}, {

"cell_type": "markdown", "metadata": {}, "source": [

"### Model construction\n", "\n", "A GPflow model is created by instantiating one of the GPflow model classes, in this case GPR. We'll make a kernel k and instantiate a GPR object using the generated data and the kernel. We'll set the variance of the likelihood to a sensible initial guess, too."

]

}, {

"cell_type": "code",
"execution_count": 4,
"metadata": {
"collapsed": true

}, "outputs": [], "source": [

"k = gpflow.kernels.Matern52(1, lengthscales=0.3)\n", "m = gpflow.models.GPR(X, Y, kern=k)\n", "m.likelihood.variance = 0.01"

]

}, {

"cell_type": "markdown", "metadata": {}, "source": [

"### Prediction\n", "\n", "GPflow models have several prediction methods:"

]

}, {

"cell_type": "markdown", "metadata": {}, "source": [

" - m.predict_f returns the mean and variance of the latent function (f) at the points Xnew.\n", "\n", " - m.predict_f_full_cov additionally returns the full covariance matrix of the prediction.\n", "\n", " - m.predict_y returns the mean and variance of a new data point (i.e. it includes the noise variance). In the case of non-Gaussian likelihoods, the variance is computed by (numerically) integrating the non-Gaussian likelihood.\n", "\n", " - m.predict_f_samples returns samples of the latent function.\n", "\n", " - m.predict_density returns the log-density of the points Ynew at Xnew.\n", "\n", "We'll use predict_y to make a simple plotting function."

]

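}, {

"cell_type": "markdown", "metadata": {}, "source": [

"As a small illustrative check (an addition to the text above, assuming the Gaussian likelihood used in this notebook): predict_y's variance should exceed predict_f's variance by exactly the likelihood's noise variance."

]

}, {

"cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [

"xx = np.linspace(-0.1, 1.1, 100).reshape(100, 1)\n", "f_mean, f_var = m.predict_f(xx)   # latent function f at xx\n", "y_mean, y_var = m.predict_y(xx)   # new data points at xx (adds observation noise)\n", "# for a Gaussian likelihood, y_var - f_var is the (constant) noise variance\n", "print((y_var - f_var)[:3, 0])"

]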
}, {

"cell_type": "code",
"execution_count": 5,

"metadata": {}, "outputs": [

{
"data": {

"text/plain": [
"<matplotlib.figure.Figure at 0x120caaf98>"
]

}, "metadata": {}, "output_type": "display_data"

}

], "source": [

"def plot(m):\n", "    xx = np.linspace(-0.1, 1.1, 100).reshape(100, 1)\n", "    mean, var = m.predict_y(xx)\n", "    plt.figure(figsize=(12, 6))\n", "    plt.plot(X, Y, 'kx', mew=2)\n", "    plt.plot(xx, mean, 'C0', lw=2)\n", "    plt.fill_between(xx[:,0],\n", "                     mean[:,0] - 2*np.sqrt(var[:,0]),\n", "                     mean[:,0] + 2*np.sqrt(var[:,0]),\n", "                     color='C0', alpha=0.2)\n", "    plt.xlim(-0.1, 1.1)\n", "\n", "plot(m)"

]

}, {

"cell_type": "markdown", "metadata": {}, "source": [

"### Using mean functions\n", "\n", "All GPflow models can have parameterized mean functions; some simple ones are provided in gpflow.mean_functions. Here's a model with a Linear mean function."

]

}, {

"cell_type": "code",
"execution_count": 6,

"metadata": {}, "outputs": [], "source": [

"k = gpflow.kernels.Matern52(1, lengthscales=0.3)\n", "meanf = gpflow.mean_functions.Linear(1.0, 0.0)\n", "m = gpflow.models.GPR(X, Y, k, meanf)\n", "m.likelihood.variance = 0.01"

]

}, {

"cell_type": "markdown", "metadata": {}, "source": [

"### Maximum Likelihood estimation of $\\theta$\n", "\n", "Getting the ML estimate of the parameters is a simple call to the minimize method of an optimizer instance, e.g. gpflow.train.ScipyOptimizer(). By default, GPflow plugs into the L-BFGS-B algorithm via scipy. Here are the parameters before optimization:"

]

}, {

"cell_type": "code",

"execution_count": 8, "metadata": {}, "outputs": [

{
"data": {

"text/plain": [

"                              class prior transform trainable   shape  \\\n", "GPR/mean_function/A       Parameter  None    (none)      True  (1, 1)   \n", "GPR/mean_function/b       Parameter  None    (none)      True      ()   \n", "GPR/kern/variance         Parameter  None       +ve      True      ()   \n", "GPR/kern/lengthscales     Parameter  None       +ve      True      ()   \n", "GPR/likelihood/variance   Parameter  None       +ve      True      ()   \n", "\n", "                        fixed_shape    value  \n", "GPR/mean_function/A            True  [[1.0]]  \n", "GPR/mean_function/b            True      0.0  \n", "GPR/kern/variance              True      1.0  \n", "GPR/kern/lengthscales          True      0.3  \n", "GPR/likelihood/variance        True     0.01  "

]

}, "execution_count": 8, "metadata": {}, "output_type": "execute_result"

}

], "source": [

"m.as_pandas_table()"

]

}, {

"cell_type": "markdown", "metadata": {}, "source": [

"Here are the parameters after optimization, and a new plot."

]

}, {

"cell_type": "code",
"execution_count": 9,
"metadata": {
"scrolled": false

}, "outputs": [

{

"name": "stdout", "output_type": "stream", "text": [

"INFO:tensorflow:Optimization terminated with:\n", "  Message: b'CONVERGENCE: NORM_OF_PROJECTED_GRADIENT_<=_PGTOL'\n", "  Objective function value: 3.205102\n", "  Number of iterations: 19\n", "  Number of functions evaluations: 20\n"

]

}, {

"data": {

"text/plain": [

"                              class prior transform trainable   shape  \\\n", "GPR/mean_function/A       Parameter  None    (none)      True  (1, 1)   \n", "GPR/mean_function/b       Parameter  None    (none)      True      ()   \n", "GPR/kern/variance         Parameter  None       +ve      True      ()   \n", "GPR/kern/lengthscales     Parameter  None       +ve      True      ()   \n", "GPR/likelihood/variance   Parameter  None       +ve      True      ()   \n", "\n", "                        fixed_shape                  value  \n", "GPR/mean_function/A            True     [[0.779783699914]]  \n", "GPR/mean_function/b            True     2.8279224174073008  \n", "GPR/kern/variance              True     0.8160885604514175  \n", "GPR/kern/lengthscales          True    0.11059796926993924  \n", "GPR/likelihood/variance        True  0.0020621774056935995  "

]

}, "execution_count": 9, "metadata": {}, "output_type": "execute_result"

}, {

"data": {

"text/plain": [
"<matplotlib.figure.Figure at 0x11d7cbc88>"
]

}, "metadata": {}, "output_type": "display_data"

}

], "source": [

"gpflow.train.ScipyOptimizer().minimize(m)\n", "plot(m)\n", "m.as_pandas_table()"

]

}, {

"cell_type": "markdown", "metadata": {}, "source": [

"### MCMC for $\\theta$\n", "\n", "Here's a quick demonstration of how to obtain posteriors over the hyper-parameters in GP regression. We'll set some priors on the kernel parameters, then run MCMC and see how much posterior uncertainty there is in the parameters.\n", "\n", "First we'll choose rather arbitrary priors.\n"

]

}, {

"cell_type": "code",
"execution_count": 10,
"metadata": {
"scrolled": false

}, "outputs": [

{
"data": {

"text/plain": [

"                              class            prior transform trainable  \\\n", "GPR/mean_function/A       Parameter  N([ 0.],[ 10.])    (none)      True   \n", "GPR/mean_function/b       Parameter  N([ 0.],[ 10.])    (none)      True   \n", "GPR/kern/variance         Parameter  Ga([ 1.],[ 1.])       +ve      True   \n", "GPR/kern/lengthscales     Parameter  Ga([ 1.],[ 1.])       +ve      True   \n", "GPR/likelihood/variance   Parameter  Ga([ 1.],[ 1.])       +ve      True   \n", "\n", "                          shape fixed_shape                  value  \n", "GPR/mean_function/A      (1, 1)        True     [[0.779783699914]]  \n", "GPR/mean_function/b          ()        True     2.8279224174073008  \n", "GPR/kern/variance            ()        True     0.8160885604514175  \n", "GPR/kern/lengthscales        ()        True    0.11059796926993924  \n", "GPR/likelihood/variance      ()        True  0.0020621774056935995  "

]

}, "execution_count": 10, "metadata": {}, "output_type": "execute_result"

}

], "source": [

"m.clear()\n", "m.kern.lengthscales.prior = gpflow.priors.Gamma(1., 1.)\n", "m.kern.variance.prior = gpflow.priors.Gamma(1., 1.)\n", "m.likelihood.variance.prior = gpflow.priors.Gamma(1., 1.)\n", "m.mean_function.A.prior = gpflow.priors.Gaussian(0., 10.)\n", "m.mean_function.b.prior = gpflow.priors.Gaussian(0., 10.)\n", "m.compile()\n", "m.as_pandas_table()"

]

}, {

"cell_type": "code",
"execution_count": 11,

"metadata": {}, "outputs": [], "source": [

"sampler = gpflow.train.HMC()\n", "samples = sampler.sample(m, num_samples=500, epsilon=0.05, lmin=10, lmax=20, logprobs=False)"

]

}, {

"cell_type": "code",
"execution_count": 12,

"metadata": {}, "outputs": [

{
"data": {
"text/plain": [
"<matplotlib.text.Text at 0x1244d3fd0>"

]

}, "execution_count": 12, "metadata": {}, "output_type": "execute_result"

}, {

"data": {

"text/plain": [
"<matplotlib.figure.Figure at 0x1244cab70>"
]

}, "metadata": {}, "output_type": "display_data"

}

], "source": [

"for i, col in samples.iteritems():\n", "    plt.plot(col, label=col.name)\n", "plt.legend(loc=0)\n", "plt.xlabel('hmc iteration')\n", "plt.ylabel('parameter value')"

]

}, {

"cell_type": "markdown", "metadata": {}, "source": [

"Note that the sampler runs in unconstrained space (so that positive parameters remain positive, and parameters that are not trainable are ignored), but GPflow returns a dataframe with values in the true units.\n", "\n", "For serious analysis you most certainly want to run the sampler longer, with multiple chains and convergence checks. This will do for illustration though!"

]

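}, {

"cell_type": "markdown", "metadata": {}, "source": [

"As a quick numeric companion to the trace plot (a minimal sketch; samples is the pandas dataframe returned by the HMC sampler above), the posterior mean and spread of each parameter can be read straight off the dataframe:"

]

}, {

"cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [

"# posterior summaries, in the constrained (true) units\n", "print(samples.mean())\n", "print(samples.std())"

]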
}, {

"cell_type": "code",
"execution_count": 13,

"metadata": {}, "outputs": [

{
"data": {
"text/plain": [
"<matplotlib.text.Text at 0x12466bb38>"

]

}, "execution_count": 13, "metadata": {}, "output_type": "execute_result"

}, {

"data": {

"text/plain": [
"<matplotlib.figure.Figure at 0x124d5f908>"
]

}, "metadata": {}, "output_type": "display_data"

}

], "source": [

"f, axs = plt.subplots(1, 3, figsize=(12, 4))\n", "\n", "axs[0].plot(samples['GPR/likelihood/variance'],\n", "            samples['GPR/kern/variance'], 'k.', alpha=0.15)\n", "axs[0].set_xlabel('noise_variance')\n", "axs[0].set_ylabel('signal_variance')\n", "\n", "axs[1].plot(samples['GPR/likelihood/variance'],\n", "            samples['GPR/kern/lengthscales'], 'k.', alpha=0.15)\n", "axs[1].set_xlabel('noise_variance')\n", "axs[1].set_ylabel('lengthscale')\n", "\n", "axs[2].plot(samples['GPR/kern/lengthscales'],\n", "            samples['GPR/kern/variance'], 'k.', alpha=0.1)\n", "axs[2].set_xlabel('lengthscale')\n", "axs[2].set_ylabel('signal_variance')"

]

}, {

"cell_type": "markdown", "metadata": {}, "source": [

"To plot the posterior of predictions, we'll iterate through the samples and set the model state with each sample. Then, for that state (set of hyper-parameters), we'll draw some samples from the prediction function."

]

}, {

"cell_type": "code",
"execution_count": 15,
"metadata": {
"scrolled": false

}, "outputs": [

{
"data": {

"text/plain": [
"<matplotlib.figure.Figure at 0x124ecc278>"
]

}, "metadata": {}, "output_type": "display_data"

}

], "source": [

"# plot the function posterior\n", "xx = np.linspace(-0.1, 1.1, 100)[:,None]\n", "plt.figure(figsize=(12, 6))\n", "for i, s in samples.iloc[::20].iterrows():\n", "    m.assign(s)\n", "    f = m.predict_f_samples(xx, 1)\n", "    plt.plot(xx, f[0,:,:], 'C0', lw=2, alpha=0.1)\n", "\n", "plt.plot(X, Y, 'kx', mew=2)\n", "_ = plt.xlim(xx.min(), xx.max())\n", "_ = plt.ylim(0, 6)"

]

}, {

"cell_type": "markdown", "metadata": {}, "source": [

"Note the bimodal posterior: the data can be explained by a very smooth function (with lots of noise) or by a rough, short-lengthscale function."

]

}

], "metadata": {

"anaconda-cloud": {}, "kernelspec": {

"display_name": "Python [default]", "language": "python", "name": "python3"

}, "language_info": {

"codemirror_mode": {
"name": "ipython", "version": 3

}, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.6.0"

}

}, "nbformat": 4, "nbformat_minor": 1

}