Unnamed: 0 | text_prompt | code_prompt
---|---|---|
2,500 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Normalizing Flows - Introduction (Part 1)
This tutorial introduces Pyro's normalizing flow library. It is independent of much of Pyro, but users may want to read about distribution shapes in the Tensor Shapes Tutorial.
Introduction
In standard probabilistic modeling practice, we represent our beliefs over unknown continuous quantities with simple parametric distributions like the normal, exponential, and Laplacian distributions. However, using such simple forms, which are commonly symmetric and unimodal (or have a fixed number of modes when we take a mixture of them), restricts the performance and flexibility of our methods. For instance, standard variational inference in the Variational Autoencoder uses independent univariate normal distributions to represent the variational family. The true posterior is neither independent nor normally distributed, which results in suboptimal inference and simplifies the model that is learnt. In other scenarios, we are likewise restricted by not being able to model multimodal distributions and heavy or light tails.
Normalizing Flows [1-4] are a family of methods for constructing flexible learnable probability distributions, often with neural networks, which allow us to surpass the limitations of simple parametric forms. Pyro contains state-of-the-art normalizing flow implementations, and this tutorial explains how you can use this library for learning complex models and performing flexible variational inference. We introduce the main idea of Normalizing Flows (NFs) and demonstrate learning simple univariate distributions with element-wise, multivariate, and conditional flows.
Univariate Distributions
Background
Normalizing Flows are a family of methods for constructing flexible distributions. Let's first restrict our attention to representing univariate distributions. The basic idea is that a simple source of noise, for example a variable with a standard normal distribution, $X\sim\mathcal{N}(0,1)$, is passed through a bijective (i.e. invertible) function, $g(\cdot)$ to produce a more complex transformed variable $Y=g(X)$.
For a given random variable, we typically want to perform two operations: sampling and scoring.
Step1: A variety of bijective transformations live in the pyro.distributions.transforms module, and the classes to define transformed distributions live in pyro.distributions. We first create the base distribution of $X$ and the class encapsulating the transform $\text{exp}(\cdot)$
Step2: The class ExpTransform derives from Transform and defines the forward, inverse, and log-absolute-derivative operations for this transform,
\begin{align}
g(x) &= \exp(x)\\
g^{-1}(y) &= \log(y)\\
\log\left(\left|\frac{dg}{dx}\right|\right) &= x.
\end{align}
In general, a transform class defines these three operations, from which it is sufficient to perform sampling and scoring.
The class TransformedDistribution takes a base distribution of simple noise and a list of transforms, and encapsulates the distribution formed by applying these transformations in sequence. We use it as
Step3: Now, plotting samples from both to verify that we have produced the log-normal distribution
Step4: Our example uses a single transform. However, we can compose transforms to produce more expressive distributions. For instance, if we apply an affine transformation we can produce the general log-normal distribution,
\begin{align}
X &\sim \mathcal{N}(0,1)\\
Y &= \exp(\mu+\sigma X).
\end{align}
or rather, $Y\sim\text{LogNormal}(\mu,\sigma^2)$. In Pyro this is accomplished, e.g. for $\mu=3, \sigma=0.5$, as follows
Step5: For the forward operation, transformations are applied in the order of the list that is the second argument to TransformedDistribution. In this case, first AffineTransform is applied to the base distribution and then ExpTransform.
Learnable Univariate Distributions in Pyro
Having introduced the interface for invertible transforms and transformed distributions, we now show how to represent learnable transforms and use them for density estimation. Our dataset in this section and the next will comprise samples along two concentric circles. Examining the joint and marginal distributions
Step6: Standard transforms derive from the Transform class and are not designed to contain learnable parameters. Learnable transforms, on the other hand, derive from TransformModule, which is a torch.nn.Module and registers parameters with the object.
We will learn the marginals of the above distribution using such a transform, Spline [5,6], defined on a two-dimensional input
Step7: This transform passes each dimension of its input through a separate monotonically increasing function known as a spline. At a high level, a spline is a flexible parametrizable curve for which we can define specific points, known as knots, that it passes through, as well as the derivatives at those knots. The knots and their derivatives are parameters that can be learnt, e.g., through stochastic gradient descent on a maximum likelihood objective, as we now demonstrate
Step8: Note that we call flow_dist.clear_cache() after each optimization step to clear the transform's forward-inverse cache. This is required because flow_dist's spline_transform is a stateful TransformModule rather than a purely stateless Transform object. Purely functional Pyro code typically creates Transform objects on each model execution, then discards them after .backward(), effectively clearing the transform caches. By contrast, in this tutorial we create stateful module objects and need to manually clear their cache after each update.
Plotting samples drawn from the transformed distribution after learning
Step9: As we can see, we have learnt close approximations to the marginal distributions, $p(x_1),p(x_2)$. It would have been challenging to fit the irregularly shaped marginals with standard methods, e.g., a mixture of normal distributions. As expected, since there is a dependency between the two dimensions, we do not learn a good representation of the joint, $p(x_1,x_2)$. In the next section, we explain how to learn multivariate distributions whose dimensions are not independent.
Multivariate Distributions
Background
The fundamental idea of normalizing flows also applies to multivariate random variables, and this is where its value is most clearly seen: representing complex high-dimensional distributions. In this case, a simple multivariate source of noise, for example a standard i.i.d. normal distribution, $X\sim\mathcal{N}(\mathbf{0},I_{D\times D})$, is passed through a vector-valued bijection, $g:\mathbb{R}^D\rightarrow\mathbb{R}^D$, to produce the more complex transformed variable $Y=g(X)$.
Step10: Similarly to before, we train this distribution on the toy dataset and plot the results
Step11: We see from the output that this normalizing flow has successfully learnt both the univariate marginals and the bivariate distribution.
Conditional versus Joint Distributions
Background
In many cases, we wish to represent conditional rather than joint distributions. For instance, in performing variational inference, the variational family is a class of conditional distributions,
$$
\begin{align}
\{q_\psi(\mathbf{z}\mid\mathbf{x})\mid\psi\in\Psi\},
\end{align}
$$
where $\mathbf{z}$ is the latent variable and $\mathbf{x}$ the observed one, that hopefully contains a member close to the true posterior of the model, $p(\mathbf{z}\mid\mathbf{x})$. In other cases, we may wish to learn to generate an object $\mathbf{x}$ conditioned on some context $\mathbf{c}$ using $p_\theta(\mathbf{x}\mid\mathbf{c})$ and observations $\{(\mathbf{x}_n,\mathbf{c}_n)\}_{n=1}^{N}$. For instance, $\mathbf{x}$ may be a spoken sentence and $\mathbf{c}$ a number of speech features.
The theory of Normalizing Flows is easily generalized to conditional distributions. We denote the variable to condition on by $C=\mathbf{c}\in\mathbb{R}^M$. A simple multivariate source of noise, for example a standard i.i.d. normal distribution, $X\sim\mathcal{N}(\mathbf{0},I_{D\times D})$, is passed through a vector-valued bijection that also conditions on $C$, $g:\mathbb{R}^D\times\mathbb{R}^M\rightarrow\mathbb{R}^D$, to produce the more complex transformed variable $Y=g(X;C=\mathbf{c})$.
Step12: A conditional transformed distribution is created by passing the base distribution and list of conditional and non-conditional transforms to the ConditionalTransformedDistribution class
Step13: You will notice that we pass the dimension of the context variable, $M=1$, to the conditional spline helper function.
Until we condition on a value of $x_1$, the ConditionalTransformedDistribution object is merely a placeholder and cannot be used for sampling or scoring. By calling its .condition(context) method, we obtain a TransformedDistribution for which all its conditional transforms have been conditioned on context.
For example, to draw a sample from $x_2\mid x_1=1$
Step14: In general, the context variable may have batch dimensions and these dimensions must broadcast over the batch dimensions of the input variable.
Now, combining the two distributions and training it on the toy dataset | Python Code:
import torch
import pyro
import pyro.distributions as dist
import pyro.distributions.transforms as T
import matplotlib.pyplot as plt
import seaborn as sns
import os
smoke_test = ('CI' in os.environ)
Explanation: Normalizing Flows - Introduction (Part 1)
This tutorial introduces Pyro's normalizing flow library. It is independent of much of Pyro, but users may want to read about distribution shapes in the Tensor Shapes Tutorial.
Introduction
In standard probabilistic modeling practice, we represent our beliefs over unknown continuous quantities with simple parametric distributions like the normal, exponential, and Laplacian distributions. However, using such simple forms, which are commonly symmetric and unimodal (or have a fixed number of modes when we take a mixture of them), restricts the performance and flexibility of our methods. For instance, standard variational inference in the Variational Autoencoder uses independent univariate normal distributions to represent the variational family. The true posterior is neither independent nor normally distributed, which results in suboptimal inference and simplifies the model that is learnt. In other scenarios, we are likewise restricted by not being able to model multimodal distributions and heavy or light tails.
Normalizing Flows [1-4] are a family of methods for constructing flexible learnable probability distributions, often with neural networks, which allow us to surpass the limitations of simple parametric forms. Pyro contains state-of-the-art normalizing flow implementations, and this tutorial explains how you can use this library for learning complex models and performing flexible variational inference. We introduce the main idea of Normalizing Flows (NFs) and demonstrate learning simple univariate distributions with element-wise, multivariate, and conditional flows.
Univariate Distributions
Background
Normalizing Flows are a family of methods for constructing flexible distributions. Let's first restrict our attention to representing univariate distributions. The basic idea is that a simple source of noise, for example a variable with a standard normal distribution, $X\sim\mathcal{N}(0,1)$, is passed through a bijective (i.e. invertible) function, $g(\cdot)$ to produce a more complex transformed variable $Y=g(X)$.
For a given random variable, we typically want to perform two operations: sampling and scoring. Sampling $Y$ is trivial. First, we sample $X=x$, then calculate $y=g(x)$. Scoring $Y$, or rather, evaluating the log-density $\log(p_Y(y))$, is more involved. How does the density of $Y$ relate to the density of $X$? We can use the substitution rule of integral calculus to answer this. Suppose we want to evaluate the expectation of some function of $X$. Then,
\begin{align}
\mathbb{E}_{p_X(\cdot)}\left[f(X)\right] &= \int_{\text{supp}(X)}f(x)p_X(x)dx\\
&= \int_{\text{supp}(Y)}f(g^{-1}(y))p_X(g^{-1}(y))\left|\frac{dx}{dy}\right|dy\\
&= \mathbb{E}_{p_Y(\cdot)}\left[f(g^{-1}(Y))\right],
\end{align}
where $\text{supp}(X)$ denotes the support of $X$, which in this case is $(-\infty,\infty)$. Crucially, we used the fact that $g$ is bijective to apply the substitution rule in going from the first to the second line. Equating the last two lines we get,
\begin{align}
\log(p_Y(y)) &= \log(p_X(g^{-1}(y)))+\log\left(\left|\frac{dx}{dy}\right|\right)\\
&= \log(p_X(g^{-1}(y)))-\log\left(\left|\frac{dy}{dx}\right|\right).
\end{align}
Intuitively, this equation says that the density of $Y$ is equal to the density at the corresponding point in $X$ plus a term that corrects for the warp in volume of an infinitesimally small length around $Y$ caused by the transformation.
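As a quick sanity check of this formula (a minimal sketch that is not part of the original tutorial, assuming torch and pyro.distributions as dist have been imported as in the adjacent code cell), for $Y=\exp(X)$ the right-hand side should match the log-density of a standard log-normal:
y = torch.tensor(2.0)
base = dist.Normal(0.0, 1.0)
# log p_X(g^{-1}(y)) - log|dy/dx|, with g^{-1}(y) = log(y) and log|dy/dx| = log(y)
manual_log_prob = base.log_prob(torch.log(y)) - torch.log(y)
print(manual_log_prob, dist.LogNormal(0.0, 1.0).log_prob(y))  # the two values should agree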
If $g$ is cleverly constructed (and we will see several examples shortly), we can produce distributions that are more complex than standard normal noise and yet have easy sampling and computationally tractable scoring. Moreover, we can compose such bijective transformations to produce even more complex distributions. By an inductive argument, if we have $L$ transforms $g_{(0)}, g_{(1)},\ldots,g_{(L-1)}$, then the log-density of the transformed variable $Y=(g_{(0)}\circ g_{(1)}\circ\cdots\circ g_{(L-1)})(X)$ is
\begin{align}
\log(p_Y(y)) &= \log\left(p_X\left(\left(g_{(L-1)}^{-1}\circ\cdots\circ g_{(0)}^{-1}\right)\left(y\right)\right)\right)+\sum^{L-1}_{l=0}\log\left(\left|\frac{dg^{-1}_{(l)}(y_{(l)})}{dy'}\right|\right),
\end{align}
where we've defined $y_{(0)}=x$, $y_{(L-1)}=y$ for convenience of notation.
In a later section, we will see how to generalize this method to multivariate $X$. The field of Normalizing Flows aims to construct such $g$ for multivariate $X$ to transform simple i.i.d. standard normal noise into complex, learnable, high-dimensional distributions. The methods have been applied to such diverse applications as image modeling, text-to-speech, unsupervised language induction, data compression, and modeling molecular structures. As probability distributions are the most fundamental component of probabilistic modeling, we will likely see many more exciting state-of-the-art applications in the near future.
Fixed Univariate Transforms in Pyro
PyTorch contains classes for representing fixed univariate bijective transformations, and sampling/scoring from transformed distributions derived from these. Pyro extends this with a comprehensive library of learnable univariate and multivariate transformations using the latest developments in the field. As Pyro imports all of PyTorch's distributions and transformations, we will work solely with Pyro. We also note that the NF components in Pyro can be used independently of the probabilistic programming functionality of Pyro, which is what we will be doing in the first two tutorials.
Let us begin by showing how to represent and manipulate a simple transformed distribution,
\begin{align}
X &\sim \mathcal{N}(0,1)\\
Y &= \exp(X).
\end{align}
You may have recognized that this is by definition, $Y\sim\text{LogNormal}(0,1)$.
We begin by importing the relevant libraries:
End of explanation
dist_x = dist.Normal(torch.zeros(1), torch.ones(1))
exp_transform = T.ExpTransform()
Explanation: A variety of bijective transformations live in the pyro.distributions.transforms module, and the classes to define transformed distributions live in pyro.distributions. We first create the base distribution of $X$ and the class encapsulating the transform $\text{exp}(\cdot)$:
End of explanation
dist_y = dist.TransformedDistribution(dist_x, [exp_transform])
Explanation: The class ExpTransform derives from Transform and defines the forward, inverse, and log-absolute-derivative operations for this transform,
\begin{align}
g(x) &= \exp(x)\\
g^{-1}(y) &= \log(y)\\
\log\left(\left|\frac{dg}{dx}\right|\right) &= x.
\end{align}
In general, a transform class defines these three operations, from which it is sufficient to perform sampling and scoring.
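As a brief aside (a hedged sketch, not part of the original tutorial), these three operations can be called directly on the exp_transform object created above; note that log_abs_det_jacobian takes both $x$ and $y=g(x)$:
x = torch.tensor(1.5)
y = exp_transform(x)                              # forward: exp(1.5)
x_back = exp_transform.inv(y)                     # inverse: log(y), recovering 1.5
ladj = exp_transform.log_abs_det_jacobian(x, y)   # equals x for the exp transform
print(y, x_back, ladj)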
The class TransformedDistribution takes a base distribution of simple noise and a list of transforms, and encapsulates the distribution formed by applying these transformations in sequence. We use it as:
End of explanation
plt.subplot(1, 2, 1)
plt.hist(dist_x.sample([1000]).numpy(), bins=50)
plt.title('Standard Normal')
plt.subplot(1, 2, 2)
plt.hist(dist_y.sample([1000]).numpy(), bins=50)
plt.title('Standard Log-Normal')
plt.show()
Explanation: Now, plotting samples from both to verify that we have produced the log-normal distribution:
End of explanation
dist_x = dist.Normal(torch.zeros(1), torch.ones(1))
affine_transform = T.AffineTransform(loc=3, scale=0.5)
exp_transform = T.ExpTransform()
dist_y = dist.TransformedDistribution(dist_x, [affine_transform, exp_transform])
plt.subplot(1, 2, 1)
plt.hist(dist_x.sample([1000]).numpy(), bins=50)
plt.title('Standard Normal')
plt.subplot(1, 2, 2)
plt.hist(dist_y.sample([1000]).numpy(), bins=50)
plt.title('Log-Normal')
plt.show()
Explanation: Our example uses a single transform. However, we can compose transforms to produce more expressive distributions. For instance, if we apply an affine transformation we can produce the general log-normal distribution,
\begin{align}
X &\sim \mathcal{N}(0,1)\\
Y &= \exp(\mu+\sigma X).
\end{align}
or rather, $Y\sim\text{LogNormal}(\mu,\sigma^2)$. In Pyro this is accomplished, e.g. for $\mu=3, \sigma=0.5$, as follows:
End of explanation
import numpy as np
from sklearn import datasets
from sklearn.preprocessing import StandardScaler
n_samples = 1000
X, y = datasets.make_circles(n_samples=n_samples, factor=0.5, noise=0.05)
X = StandardScaler().fit_transform(X)
plt.title(r'Samples from $p(x_1,x_2)$')
plt.xlabel(r'$x_1$')
plt.ylabel(r'$x_2$')
plt.scatter(X[:,0], X[:,1], alpha=0.5)
plt.show()
plt.subplot(1, 2, 1)
sns.distplot(X[:,0], hist=False, kde=True,
bins=None,
hist_kws={'edgecolor':'black'},
kde_kws={'linewidth': 2})
plt.title(r'$p(x_1)$')
plt.subplot(1, 2, 2)
sns.distplot(X[:,1], hist=False, kde=True,
bins=None,
hist_kws={'edgecolor':'black'},
kde_kws={'linewidth': 2})
plt.title(r'$p(x_2)$')
plt.show()
Explanation: For the forward operation, transformations are applied in the order of the list that is the second argument to TransformedDistribution. In this case, first AffineTransform is applied to the base distribution and then ExpTransform.
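To see that the order matters, one could (as a short aside not in the original tutorial) reverse the list, which instead produces $Y = 3 + 0.5\,\exp(X)$ rather than the log-normal above:
dist_y_swapped = dist.TransformedDistribution(dist_x, [exp_transform, affine_transform])
print(dist_y_swapped.sample([5]))  # samples of 3 + 0.5*exp(X), not exp(3 + 0.5*X)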
Learnable Univariate Distributions in Pyro
Having introduced the interface for invertible transforms and transformed distributions, we now show how to represent learnable transforms and use them for density estimation. Our dataset in this section and the next will comprise samples along two concentric circles. Examining the joint and marginal distributions:
End of explanation
base_dist = dist.Normal(torch.zeros(2), torch.ones(2))
spline_transform = T.Spline(2, count_bins=16)
flow_dist = dist.TransformedDistribution(base_dist, [spline_transform])
Explanation: Standard transforms derive from the Transform class and are not designed to contain learnable parameters. Learnable transforms, on the other hand, derive from TransformModule, which is a torch.nn.Module and registers parameters with the object.
We will learn the marginals of the above distribution using such a transform, Spline [5,6], defined on a two-dimensional input:
End of explanation
%%time
steps = 1 if smoke_test else 1001
dataset = torch.tensor(X, dtype=torch.float)
optimizer = torch.optim.Adam(spline_transform.parameters(), lr=1e-2)
for step in range(steps):
optimizer.zero_grad()
loss = -flow_dist.log_prob(dataset).mean()
loss.backward()
optimizer.step()
flow_dist.clear_cache()
if step % 200 == 0:
print('step: {}, loss: {}'.format(step, loss.item()))
Explanation: This transform passes each dimension of its input through a separate monotonically increasing function known as a spline. At a high level, a spline is a flexible parametrizable curve for which we can define specific points, known as knots, that it passes through, as well as the derivatives at those knots. The knots and their derivatives are parameters that can be learnt, e.g., through stochastic gradient descent on a maximum likelihood objective, as we now demonstrate:
End of explanation
X_flow = flow_dist.sample(torch.Size([1000,])).detach().numpy()
plt.title(r'Joint Distribution')
plt.xlabel(r'$x_1$')
plt.ylabel(r'$x_2$')
plt.scatter(X[:,0], X[:,1], label='data', alpha=0.5)
plt.scatter(X_flow[:,0], X_flow[:,1], color='firebrick', label='flow', alpha=0.5)
plt.legend()
plt.show()
plt.subplot(1, 2, 1)
sns.distplot(X[:,0], hist=False, kde=True,
bins=None,
hist_kws={'edgecolor':'black'},
kde_kws={'linewidth': 2},
label='data')
sns.distplot(X_flow[:,0], hist=False, kde=True,
bins=None, color='firebrick',
hist_kws={'edgecolor':'black'},
kde_kws={'linewidth': 2},
label='flow')
plt.title(r'$p(x_1)$')
plt.subplot(1, 2, 2)
sns.distplot(X[:,1], hist=False, kde=True,
bins=None,
hist_kws={'edgecolor':'black'},
kde_kws={'linewidth': 2},
label='data')
sns.distplot(X_flow[:,1], hist=False, kde=True,
bins=None, color='firebrick',
hist_kws={'edgecolor':'black'},
kde_kws={'linewidth': 2},
label='flow')
plt.title(r'$p(x_2)$')
plt.show()
Explanation: Note that we call flow_dist.clear_cache() after each optimization step to clear the transform's forward-inverse cache. This is required because flow_dist's spline_transform is a stateful TransformModule rather than a purely stateless Transform object. Purely functional Pyro code typically creates Transform objects on each model execution, then discards them after .backward(), effectively clearing the transform caches. By contrast, in this tutorial we create stateful module objects and need to manually clear their cache after each update.
Plotting samples drawn from the transformed distribution after learning:
End of explanation
base_dist = dist.Normal(torch.zeros(2), torch.ones(2))
spline_transform = T.spline_coupling(2, count_bins=16)
flow_dist = dist.TransformedDistribution(base_dist, [spline_transform])
Explanation: As we can see, we have learnt close approximations to the marginal distributions, $p(x_1),p(x_2)$. It would have been challenging to fit the irregularly shaped marginals with standard methods, e.g., a mixture of normal distributions. As expected, since there is a dependency between the two dimensions, we do not learn a good representation of the joint, $p(x_1,x_2)$. In the next section, we explain how to learn multivariate distributions whose dimensions are not independent.
Multivariate Distributions
Background
The fundamental idea of normalizing flows also applies to multivariate random variables, and this is where its value is most clearly seen: representing complex high-dimensional distributions. In this case, a simple multivariate source of noise, for example a standard i.i.d. normal distribution, $X\sim\mathcal{N}(\mathbf{0},I_{D\times D})$, is passed through a vector-valued bijection, $g:\mathbb{R}^D\rightarrow\mathbb{R}^D$, to produce the more complex transformed variable $Y=g(X)$.
Sampling $Y$ is again trivial and involves evaluation of the forward pass of $g$. We can score $Y$ using the multivariate substitution rule of integral calculus,
\begin{align}
\mathbb{E}_{p_X(\cdot)}\left[f(X)\right] &= \int_{\text{supp}(X)}f(\mathbf{x})p_X(\mathbf{x})d\mathbf{x}\\
&= \int_{\text{supp}(Y)}f(g^{-1}(\mathbf{y}))p_X(g^{-1}(\mathbf{y}))\left|\det\frac{d\mathbf{x}}{d\mathbf{y}}\right|d\mathbf{y}\\
&= \mathbb{E}_{p_Y(\cdot)}\left[f(g^{-1}(Y))\right],
\end{align}
where $d\mathbf{x}/d\mathbf{y}$ denotes the Jacobian matrix of $g^{-1}(\mathbf{y})$. Equating the last two lines we get,
\begin{align}
\log(p_Y(y)) &= \log(p_X(g^{-1}(y)))+\log\left(\left|\det\frac{d\mathbf{x}}{d\mathbf{y}}\right|\right)\\
&= \log(p_X(g^{-1}(y)))-\log\left(\left|\det\frac{d\mathbf{y}}{d\mathbf{x}}\right|\right).
\end{align}
Intuitively, this equation says that the density of $Y$ is equal to the density at the corresponding point in $X$ plus a term that corrects for the warp in volume of an infinitesimally small volume around $Y$ caused by the transformation. For instance, in $2$ dimensions, the geometric interpretation of the absolute value of the determinant of the Jacobian is that it represents the area of a parallelogram with edges defined by the columns of the Jacobian. In $n$ dimensions, the geometric interpretation of the absolute value of the determinant of the Jacobian is that it represents the hyper-volume of a parallelepiped with $n$ edges defined by the columns of the Jacobian (see a calculus reference such as [7] for more details).
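A tiny numeric illustration of this interpretation (not in the original tutorial): the absolute determinant of a $2\times 2$ Jacobian equals the area of the parallelogram spanned by its columns.
J = torch.tensor([[2., 1.],
                  [0., 3.]])
print(torch.abs(torch.det(J)))  # 6.0: area of the parallelogram with edges (2, 0) and (1, 3)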
Similar to the univariate case, we can compose such bijective transformations to produce even more complex distributions. By an inductive argument, if we have $L$ transforms $g_{(0)}, g_{(1)},\ldots,g_{(L-1)}$, then the log-density of the transformed variable $Y=(g_{(0)}\circ g_{(1)}\circ\cdots\circ g_{(L-1)})(X)$ is
\begin{align}
\log(p_Y(y)) &= \log\left(p_X\left(\left(g_{(L-1)}^{-1}\circ\cdots\circ g_{(0)}^{-1}\right)\left(y\right)\right)\right)+\sum^{L-1}_{l=0}\log\left(\left|\frac{dg^{-1}_{(l)}(y_{(l)})}{dy'}\right|\right),
\end{align}
where we've defined $y_{(0)}=x$, $y_{(L-1)}=y$ for convenience of notation.
The main challenge is in designing parametrizable multivariate bijections that have closed-form expressions for both $g$ and $g^{-1}$, that have a tractable Jacobian whose calculation scales with $O(D)$ rather than $O(D^3)$, and that can express a flexible class of functions.
Multivariate Transforms in Pyro
Up to this point we have used element-wise transforms in Pyro. These are indicated by having the property transform.event_dim == 0 set on the transform object. Such element-wise transforms can only be used to represent univariate distributions and multivariate distributions whose dimensions are independent (known in variational inference as the mean-field approximation).
The power of Normalizing Flows, however, is most apparent in their ability to model complex high-dimensional distributions with neural networks, and Pyro contains several such flows for accomplishing this. Transforms that operate on vectors have the property transform.event_dim == 1, transforms on matrices have transform.event_dim == 2, and so on. In general, the event_dim property of a transform indicates how many dependent dimensions there are in the output of a transform.
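As a quick check (a hedged sketch, not in the original tutorial), the event_dim property can be inspected directly; the exp transform is element-wise, whereas a spline coupling layer is expected to report a single dependent event dimension:
print(T.ExpTransform().event_dim)      # 0: element-wise
print(T.spline_coupling(2).event_dim)  # expected to be 1: operates on whole vectors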
In this section, we show how to use SplineCoupling to learn the bivariate toy distribution from our running example. A coupling transform [8, 9] divides the input variable into two parts and applies an element-wise bijection to the second half whose parameters are a function of the first. Optionally, an element-wise bijection is also applied to the first half. Dividing the inputs at $d$, the transform is,
\begin{align}
\mathbf{y}_{1:d} &= g_\theta(\mathbf{x}_{1:d})\\
\mathbf{y}_{(d+1):D} &= h_\phi(\mathbf{x}_{(d+1):D};\mathbf{x}_{1:d}),
\end{align}
where $\mathbf{x}_{1:d}$ represents the first $d$ elements of the inputs, $g_\theta$ is either the identity function or an element-wise bijection with parameters $\theta$, and $h_\phi$ is an element-wise bijection whose parameters are a function of $\mathbf{x}_{1:d}$.
This type of transform is easily invertible. We invert the first half, $\mathbf{y}_{1:d}$, then use the resulting $\mathbf{x}_{1:d}$ to evaluate $\phi$ and invert the second half,
\begin{align}
\mathbf{x}_{1:d} &= g_\theta^{-1}(\mathbf{y}_{1:d})\\
\mathbf{x}_{(d+1):D} &= h_\phi^{-1}(\mathbf{y}_{(d+1):D};\mathbf{x}_{1:d}).
\end{align}
Different choices for $g$ and $h$ form different types of coupling transforms. When both are monotonic rational splines, the transform is the spline coupling layer of Neural Spline Flow [5,6], which is represented in Pyro by the SplineCoupling class. As shown in the references, when we combine a sequence of coupling layers sandwiched between random permutations, so that dependencies are introduced between all dimensions, we can model complex multivariate distributions.
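As a hedged illustration of that idea (the permutation and layer count here are arbitrary choices, not taken from the tutorial), several coupling layers can be interleaved with fixed permutations:
transforms = []
for _ in range(3):
    transforms.append(T.spline_coupling(2, count_bins=16))
    transforms.append(T.Permute(torch.tensor([1, 0])))  # swap the two dimensions between layers
deep_flow_dist = dist.TransformedDistribution(base_dist, transforms)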
Most of the learnable transforms in Pyro have a corresponding helper function that takes care of constructing a neural network for the transform with the correct output shape. This neural network outputs the parameters of the transform and is known as a hypernetwork [10]. The helper functions are represented by lower-case versions of the corresponding class name, and usually take, at the very least, the input dimension or shape of the distribution to model. For instance, the helper function corresponding to SplineCoupling is spline_coupling. We create a bivariate flow with a single spline coupling layer as follows:
End of explanation
%%time
steps = 1 if smoke_test else 5001
dataset = torch.tensor(X, dtype=torch.float)
optimizer = torch.optim.Adam(spline_transform.parameters(), lr=5e-3)
for step in range(steps+1):
optimizer.zero_grad()
loss = -flow_dist.log_prob(dataset).mean()
loss.backward()
optimizer.step()
flow_dist.clear_cache()
if step % 500 == 0:
print('step: {}, loss: {}'.format(step, loss.item()))
X_flow = flow_dist.sample(torch.Size([1000,])).detach().numpy()
plt.title(r'Joint Distribution')
plt.xlabel(r'$x_1$')
plt.ylabel(r'$x_2$')
plt.scatter(X[:,0], X[:,1], label='data', alpha=0.5)
plt.scatter(X_flow[:,0], X_flow[:,1], color='firebrick', label='flow', alpha=0.5)
plt.legend()
plt.show()
plt.subplot(1, 2, 1)
sns.distplot(X[:,0], hist=False, kde=True,
bins=None,
hist_kws={'edgecolor':'black'},
kde_kws={'linewidth': 2},
label='data')
sns.distplot(X_flow[:,0], hist=False, kde=True,
bins=None, color='firebrick',
hist_kws={'edgecolor':'black'},
kde_kws={'linewidth': 2},
label='flow')
plt.title(r'$p(x_1)$')
plt.subplot(1, 2, 2)
sns.distplot(X[:,1], hist=False, kde=True,
bins=None,
hist_kws={'edgecolor':'black'},
kde_kws={'linewidth': 2},
label='data')
sns.distplot(X_flow[:,1], hist=False, kde=True,
bins=None, color='firebrick',
hist_kws={'edgecolor':'black'},
kde_kws={'linewidth': 2},
label='flow')
plt.title(r'$p(x_2)$')
plt.show()
Explanation: Similarly to before, we train this distribution on the toy dataset and plot the results:
End of explanation
dist_base = dist.Normal(torch.zeros(1), torch.ones(1))
x1_transform = T.spline(1)
dist_x1 = dist.TransformedDistribution(dist_base, [x1_transform])
Explanation: We see from the output that this normalizing flow has successfully learnt both the univariate marginals and the bivariate distribution.
Conditional versus Joint Distributions
Background
In many cases, we wish to represent conditional rather than joint distributions. For instance, in performing variational inference, the variational family is a class of conditional distributions,
$$
\begin{align}
\{q_\psi(\mathbf{z}\mid\mathbf{x})\mid\psi\in\Psi\},
\end{align}
$$
where $\mathbf{z}$ is the latent variable and $\mathbf{x}$ the observed one, that hopefully contains a member close to the true posterior of the model, $p(\mathbf{z}\mid\mathbf{x})$. In other cases, we may wish to learn to generate an object $\mathbf{x}$ conditioned on some context $\mathbf{c}$ using $p_\theta(\mathbf{x}\mid\mathbf{c})$ and observations $\{(\mathbf{x}_n,\mathbf{c}_n)\}_{n=1}^{N}$. For instance, $\mathbf{x}$ may be a spoken sentence and $\mathbf{c}$ a number of speech features.
The theory of Normalizing Flows is easily generalized to conditional distributions. We denote the variable to condition on by $C=\mathbf{c}\in\mathbb{R}^M$. A simple multivariate source of noise, for example a standard i.i.d. normal distribution, $X\sim\mathcal{N}(\mathbf{0},I_{D\times D})$, is passed through a vector-valued bijection that also conditions on C, $g:\mathbb{R}^D\times\mathbb{R}^M\rightarrow\mathbb{R}^D$, to produce the more complex transformed variable $Y=g(X;C=\mathbf{c})$. In practice, this is usually accomplished by making the parameters for a known normalizing flow bijection $g$ the output of a hypernet neural network that inputs $\mathbf{c}$.
Sampling of conditional transforms simply involves evaluating $Y=g(X; C=\mathbf{c})$. Conditioning the bijections on $\mathbf{c}$, the same formula holds for scoring as for the joint multivariate case.
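Spelling that out under the same notation as above (this restatement is an addition, not text from the original tutorial), the conditional log-density is
\begin{align}
\log(p_{Y\mid C=\mathbf{c}}(\mathbf{y})) &= \log\left(p_X\left(g^{-1}(\mathbf{y};\mathbf{c})\right)\right)-\log\left(\left|\det\frac{dg(\mathbf{x};\mathbf{c})}{d\mathbf{x}}\right|\right),
\end{align}
with the Jacobian evaluated at $\mathbf{x}=g^{-1}(\mathbf{y};\mathbf{c})$.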
Conditional Transforms in Pyro
In Pyro, most learnable transforms have a corresponding conditional version that derives from ConditionalTransformModule. For instance, the conditional version of the spline transform is ConditionalSpline with helper function conditional_spline.
In this section, we will show how we can learn our toy dataset as the decomposition of the product of a conditional and a univariate distribution,
$$
\begin{align}
p(x_1,x_2) &= p(x_2\mid x_1)p(x_1).
\end{align}
$$
First, we create the univariate distribution for $x_1$ as shown previously,
End of explanation
x2_transform = T.conditional_spline(1, context_dim=1)
dist_x2_given_x1 = dist.ConditionalTransformedDistribution(dist_base, [x2_transform])
Explanation: A conditional transformed distribution is created by passing the base distribution and list of conditional and non-conditional transforms to the ConditionalTransformedDistribution class:
End of explanation
x1 = torch.ones(1)
print(dist_x2_given_x1.condition(x1).sample())
Explanation: You will notice that we pass the dimension of the context variable, $M=1$, to the conditional spline helper function.
Until we condition on a value of $x_1$, the ConditionalTransformedDistribution object is merely a placeholder and cannot be used for sampling or scoring. By calling its .condition(context) method, we obtain a TransformedDistribution for which all its conditional transforms have been conditioned on context.
For example, to draw a sample from $x_2\mid x_1=1$:
End of explanation
%%time
steps = 1 if smoke_test else 5001
modules = torch.nn.ModuleList([x1_transform, x2_transform])
optimizer = torch.optim.Adam(modules.parameters(), lr=3e-3)
x1 = dataset[:,0][:,None]
x2 = dataset[:,1][:,None]
for step in range(steps):
optimizer.zero_grad()
ln_p_x1 = dist_x1.log_prob(x1)
ln_p_x2_given_x1 = dist_x2_given_x1.condition(x1.detach()).log_prob(x2.detach())
loss = -(ln_p_x1 + ln_p_x2_given_x1).mean()
loss.backward()
optimizer.step()
dist_x1.clear_cache()
dist_x2_given_x1.clear_cache()
if step % 500 == 0:
print('step: {}, loss: {}'.format(step, loss.item()))
X = torch.cat((x1, x2), dim=-1)
x1_flow = dist_x1.sample(torch.Size([1000,]))
x2_flow = dist_x2_given_x1.condition(x1_flow).sample(torch.Size([1000,]))
X_flow = torch.cat((x1_flow, x2_flow), dim=-1)
plt.title(r'Joint Distribution')
plt.xlabel(r'$x_1$')
plt.ylabel(r'$x_2$')
plt.scatter(X[:,0], X[:,1], label='data', alpha=0.5)
plt.scatter(X_flow[:,0], X_flow[:,1], color='firebrick', label='flow', alpha=0.5)
plt.legend()
plt.show()
plt.subplot(1, 2, 1)
sns.distplot(X[:,0], hist=False, kde=True,
bins=None,
hist_kws={'edgecolor':'black'},
kde_kws={'linewidth': 2},
label='data')
sns.distplot(X_flow[:,0], hist=False, kde=True,
bins=None, color='firebrick',
hist_kws={'edgecolor':'black'},
kde_kws={'linewidth': 2},
label='flow')
plt.title(r'$p(x_1)$')
plt.subplot(1, 2, 2)
sns.distplot(X[:,1], hist=False, kde=True,
bins=None,
hist_kws={'edgecolor':'black'},
kde_kws={'linewidth': 2},
label='data')
sns.distplot(X_flow[:,1], hist=False, kde=True,
bins=None, color='firebrick',
hist_kws={'edgecolor':'black'},
kde_kws={'linewidth': 2},
label='flow')
plt.title(r'$p(x_2)$')
plt.show()
Explanation: In general, the context variable may have batch dimensions and these dimensions must broadcast over the batch dimensions of the input variable.
Now, combining the two distributions and training it on the toy dataset:
End of explanation |
2,501 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Introduction
This work is inspired by this paper from Elie and Celine Bursztein and will try to reproduce their findings applying some different ideas.
Step1: Cards data
Load the collectible cards inside JSON data
Let's start loading the game cards data from Hearthstone JSON and loading it to python using the json library.
Step2: Card types
Step3: Card attributes
Step4: Card mechanics
Step5: The model
Card analysis will be done based on the following base model equation
Step6: Vanilla minions modelling
To test the model, let's extract the coefficients for attack and health with only the minions with no text (vanilla minions).
Step7: Let the pricing begin! Results will be stored in a new price attribute in each card. Also, coeffs are computed taking into account that a card costs $2 \cdot cost + 1$.
Step8: With these coeffs, we can define a ratio attribute with the ratio between the real price and cost
Step9: These results will look wrong to anyone with some experience, because Wisp is listed in first place, far ahead of the second, while River Crocolisk (a good vanilla) is near the bottom. But this was only an example of how the model works. More complex examples below.
Adding simple mechanics
To enrich the model, let's add simple mechanics to a minion-only matrix.
Processing the cards mechanics
Cards have to be processed to extract the card mechanics (Charge, Stealth, Windfury, Taunt, Divine Shield) from the text, adding the text_mechanics attribute with the complex mechanics. All cards with unknown mechanics are discarded when processed.
Step10: Let's price this bunch of minions as described before.
Step11: Results
0 cost minions
Step12: 1 cost minions
Step13: 2 cost minions
Step14: 3 cost minions
Step15: 4 cost minions
Step16: 5 cost minions
Step17: 6 cost minions
Step18: 7+ cost minions
Step19: Some statistics
Deathrattle minions
Let's see what is the expected damage from a Scarlet Purifier.
Step20: 2-cost minions
Let's see what is the expected minion stats from a Piloted Shredder deathrattle. | Python Code:
from hearthpricer import hearthpricer
import numpy
import os.path
import pandas
Explanation: Introduction
This work is inspired by this paper from Elie and Celine Bursztein and will try to reproduce their findings applying some different ideas.
End of explanation
all_sets_filename = os.path.join('data', 'AllSets.json')
# Uncomment the following lines to update the date file
#import urllib
#urllib.urlretrieve ('http://hearthstonejson.com/json/AllSets.json', all_sets_filename)
all_collectible_cards = hearthpricer.load_json(all_sets_filename)
print('# of collectible cards:', len(all_collectible_cards))
Explanation: Cards data
Load the collectible cards inside JSON data
Let's start loading the game cards data from Hearthstone JSON and loading it to python using the json library.
End of explanation
print('Card types:', ', '.join(set((x['type'] for x in all_collectible_cards))))
Explanation: Card types
End of explanation
print('Card attributes:', ', '.join(set(sum((list(x.keys()) for x in all_collectible_cards), list()))))
Explanation: Card attributes
End of explanation
print('Card mechanics:', ', '.join(
set(sum((x['mechanics'] for x in all_collectible_cards if 'mechanics' in x), list()))))
Explanation: Card mechanics
End of explanation
all_collectible_cards_df = pandas.DataFrame(all_collectible_cards)
all_collectible_cards_df.info()
Explanation: The model
Card analysis will be done based on the following base model equation:
$$cost = \sum (attribute_i \cdot coeff_i) + intrinsic$$
The intrinsic value represents the cost of having that card in your deck and also can be viewed as the slot_cost.
Modelling the cards
With the previous model, let's create a matrix with all the information to work with.
End of explanation
vanilla_minions_df = pandas.DataFrame(
all_collectible_cards_df[(all_collectible_cards_df['type'] == 'Minion') &
(all_collectible_cards_df['text'].isnull())])
vanilla_minions_df.info()
Explanation: Vanilla minions modelling
To test the model, let's extract the coefficients for attack and health with only the minions with no text (vanilla minions).
End of explanation
vanilla_columns = ['attack', 'health']
vanilla_coeffs = hearthpricer.pricing(vanilla_minions_df, vanilla_columns, debug=True)
Explanation: Let the pricing begin! Results will be stored in a new price attribute in each card. Also, coeffs are computed taking into account that a card costs $2 \cdot cost + 1$, although the resulting price values remain comparable to cost.
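For intuition only, the coefficients of such a linear model could in principle be obtained by ordinary least squares on the vanilla minions; the sketch below is an illustrative guess at the idea and does not reproduce the internals of hearthpricer.pricing, which are not shown here:
A = numpy.column_stack([numpy.ones(len(vanilla_minions_df)),
                        vanilla_minions_df[['attack', 'health']].values.astype(float)])
target = 2 * vanilla_minions_df['cost'].values.astype(float) + 1  # the 2*cost + 1 convention mentioned above
illustrative_coeffs, *_ = numpy.linalg.lstsq(A, target, rcond=None)
print(illustrative_coeffs)  # [intrinsic, attack coefficient, health coefficient]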
End of explanation
intrinsic = vanilla_coeffs[0][0]
vanilla_minions_df['ratio'] = (vanilla_minions_df['price'] - vanilla_minions_df['cost']) / \
(vanilla_minions_df['cost'] - intrinsic)
vanilla_minions_df[['name', 'cost', 'price', 'ratio']].sort('ratio', ascending=False)
Explanation: With these coeffs, we can define a ratio attribute with the ratio between the real price and cost:
$$ratio = \frac{(price - intrinsic) - (cost - intrinsic)}{cost - intrinsic} = \frac{price - cost}{cost - intrinsic}$$
Then sort the results from the best to the worst in terms of ratio.
End of explanation
all_mechanics_cards = hearthpricer.process_mechanics(all_collectible_cards)
print('# of processed cards: {} ({:.2%})'.format(
len(all_mechanics_cards), 1.0 * len(all_mechanics_cards) / len(all_collectible_cards)))
Explanation: These results will look wrong to anyone with some experience, because Wisp is listed in first place, far ahead of the second, while River Crocolisk (a good vanilla) is near the bottom. But this was only an example of how the model works. More complex examples below.
Adding simple mechanics
To enrich the model, let's add simple mechanics to a minion-only matrix.
Processing the cards mechanics
Cards have to be processed to extract the card mechanics (Charge, Stealth, Windfury, Taunt, Divine Shield) from the text, adding the text_mechanics attribute with the complex mechanics. All cards with unknown mechanics are discarded when processed.
End of explanation
all_mechanics_cards_df = pandas.DataFrame(all_mechanics_cards)
mechanics_coeffs = hearthpricer.pricing(all_mechanics_cards_df, debug=True)
all_processed_cards = hearthpricer.process_mechanics(all_collectible_cards,
discard_unknown_mechanics=False)
text_mechanics = dict()
for card in all_processed_cards:
if 'text_mechanics' in card:
text_mechanics[card['text_mechanics']] = text_mechanics.get(card['text_mechanics'], 0) + 1
sorted(((y, x) for x, y in text_mechanics.items()), reverse=True)
intrinsic = vanilla_coeffs[0][0]
all_mechanics_cards_df['ratio'] = (all_mechanics_cards_df['price'] - all_mechanics_cards_df['cost']) / \
(all_mechanics_cards_df['cost'] + intrinsic)
results_df = all_mechanics_cards_df[['playerClass', 'name', 'cost', 'price', 'ratio']].sort(
'ratio', ascending=False)
Explanation: Let's price this bunch of minions as described before.
End of explanation
results_df[results_df.cost == 0]
Explanation: Results
0 cost minions
End of explanation
results_df[results_df.cost == 1]
Explanation: 1 cost minions
End of explanation
results_df[results_df.cost == 2]
Explanation: 2 cost minions
End of explanation
results_df[results_df.cost == 3]
Explanation: 3 cost minions
End of explanation
results_df[results_df.cost == 4]
Explanation: 4 cost minions
End of explanation
results_df[results_df.cost == 5]
Explanation: 5 cost minions
End of explanation
results_df[results_df.cost == 6]
Explanation: 6 cost minions
End of explanation
results_df[results_df.cost >= 7]
Explanation: 7+ cost minions
End of explanation
minions_df = all_collectible_cards_df[(all_collectible_cards_df['type'] == 'Minion')]
deathrattle_minions_df = minions_df[minions_df.apply(
lambda x: x['mechanics'] is not numpy.nan and ('Deathrattle' in x['mechanics']), axis=1)]
print('Deathrattle minions population: {} ({:.2%})'.format(
len(deathrattle_minions_df), 1.0 * len(deathrattle_minions_df) / len(minions_df)))
Explanation: Some statistics
Deathrattle minions
Let's see what is the expected damage from a Scarlet Purifier.
End of explanation
two_cost_minions_df = all_collectible_cards_df[(all_collectible_cards_df['type'] == 'Minion') &
(all_collectible_cards_df['cost'] == 2)]
print('2-cost minion mean attack: {:.2f}'.format(two_cost_minions_df.attack.mean()))
print('2-cost minion mean health: {:.2f}'.format(two_cost_minions_df.health.mean()))
Explanation: 2-cost minions
Let's see what is the expected minion stats from a Piloted Shredder deathrattle.
End of explanation |
2,502 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Class Session 17 - Date Hubs and Party Hubs
Comparing the histograms of local clustering coefficients of date hubs and party hubs
In this class, we will analyze the protein-protein interaction network for two classes of yeast proteins, "date hubs" and "party hubs" as defined by Han et al. in their 2004 study of protein-interaction networks and gene expression (Han et al., Nature, v430, p88, 2004). The authors of that study claimed that there is no difference in the local clustering density, between "date hubs" and "party hubs". We will put this to the test. We for each of the "date hub" and "party hub" proteins, we will compute its local clustering coefficient (Ci) in the protein-protein interaction network. We will then histogram the Ci values for the two sets of hubs, so that we can compare the distributions of local clustering coefficients for "date hubs" and "party hubs". We will use a statistical test (Kolmogorov-Smirnov) to compare the two distributions of Ci values.
To get started, we load the modules that we will require
Step1: Next, we'll load the file of hub types shared/han_hub_data.txt (which is a two-column TSV file in which the first column is the protein name and the second column contains the string date or party for each row; the first row of the file contains the column headers), using our old friend pandas.read_csv. This file has a header so pass header=0 to read_csv.
Step2: Let's take a peek at the structure of the hub_data data frame, using head and shape. Here's what it should look like
Step3: Let's take a peek at the data frame edge_df, using head and shape
Step4: As always, we'll use igraph.Graph.summary to sanity check the Graph object
Step5: Make a dataframe containing the protein names (as column "Protein") and the vertex IDs (as column "order")
Step6: Let's take a peek at this data frame
Step7: Having merged the hub type information into graph_vertices_df, let's take a peek at it using head and shape
Step8: Let's take a peek at this numpy.array that we have just created
Step9: Use numpy.where in order to find the index numbers of the proteins that are "date hubs" and that are "party hubs"
Step10: Use the igraph.Graph.transitivity_local_undirected function in igraph to compute the local clustering coefficients for every vertex in the graph. Make a numpy.array from the resulting list of Ci values
Step11: Let's take a peek at the ci_values_np array that you have just created. What are the nan values, and what do they signify? Is this normal?
Step12: Make a numpy.array of the Ci values of the date hubs (ci_values_date_hubs) and the Ci values of the party hubs (ci_values_party_hubs)
Step13: Plot the histograms of the local clustering coefficients of the "date hubs" and the "party hubs". | Python Code:
import igraph
import numpy
import pandas
import matplotlib.pyplot
Explanation: Class Session 17 - Date Hubs and Party Hubs
Comparing the histograms of local clustering coefficients of date hubs and party hubs
In this class, we will analyze the protein-protein interaction network for two classes of yeast proteins, "date hubs" and "party hubs", as defined by Han et al. in their 2004 study of protein-interaction networks and gene expression (Han et al., Nature, v430, p88, 2004). The authors of that study claimed that there is no difference in the local clustering density between "date hubs" and "party hubs". We will put this to the test. For each of the "date hub" and "party hub" proteins, we will compute its local clustering coefficient (Ci) in the protein-protein interaction network. We will then histogram the Ci values for the two sets of hubs, so that we can compare the distributions of local clustering coefficients for "date hubs" and "party hubs". We will use a statistical test (Kolmogorov-Smirnov) to compare the two distributions of Ci values.
To get started, we load the modules that we will require:
End of explanation
hub_data =
Explanation: Next, we'll load the file of hub types shared/han_hub_data.txt (which is a two-column TSV file in which the first column is the protein name and the second column contains the string date or party for each row; the first row of the file contains the column headers), using our old friend pandas.read_csv. This file has a header so pass header=0 to read_csv.
End of explanation
edge_data =
Explanation: Let's take a peek at the structure of the hub_data data frame, using head and shape. Here's what it should look like:
Next, let's load the file of yeat protein-protein interaction network edges shared/han_network_edges.txt (which is a two-column file, with first column is the first protein in the interacting pair, and the second column is the second protein in the interacting pair).This file has a header so pass header=0 to read_csv.
End of explanation
ppi_graph =
Explanation: Let's take a peek at the data frame edge_df, using head and shape:
make an undirected igraph Graph from the edgelist data; show summary data on the graph as a sanity-check
It will be convenient to let igraph compute the local clustering coefficients. So, we'll want to make an undirected igraph.Graph object from the edgelist data, using our old friend igraph.Graph.TupleList:
End of explanation
graph_vertices =
graph_vertices[0:10]
[graph_vertices[i] for i in range(0,10)]
Explanation: As always, we'll use igraph.Graph.summary to sanity check the Graph object:
Generate a list of the names of the proteins in the order of the proteins' corresponding vertices in the igraph Graph object
End of explanation
graph_vertices_df =
graph_vertices_df.columns = ["Protein"]
graph_vertices_df["order"]=graph_vertices_df.index
Explanation: Make a dataframe containing the protein names (as column "Protein") and the vertex IDs (as column "order"):
End of explanation
graph_vertices_df_merged = ## fill in the merge call here
graph_vertices_df_merged = graph_vertices_df_merged.sort_values("order")
Explanation: Let's take a peek at this data frame:
Let's use the pandas.DataFrame.merge method on the graph_vertices_df object to pull in the hub type (date or party) for vertices that are hubs, by passing hub_data to merge. Don't forget to specify how='outer' and on="Protein":
End of explanation
vertex_types_np =
Explanation: Having merged the hub type information into graph_vertices_df, let's take a peek at it using head and shape:
Let's pull out the HubType column as a numpy array, using column indexing (["HubType"]) and then values.tolist():
End of explanation
vertex_types_np
Explanation: Let's take a peek at this numpy.array that we have just created:
End of explanation
date_hub_inds =
party_hub_inds =
Explanation: Use numpy.where in order to find the index numbers of the proteins that are "date hubs" and that are "party hubs":
End of explanation
ci_values =
ci_values_np =
Explanation: Use the igraph.Graph.transitivity_local_undirected function in igraph to compute the local clustering coefficients for every vertex in the graph. Make a numpy.array from the resulting list of Ci values:
End of explanation
ci_values_np
Explanation: Let's take a peek at the ci_values_np array that you have just created. What are the nan values, and what do they signify? Is this normal?
End of explanation
ci_values_date_hubs =
ci_values_party_hubs =
Explanation: Make a numpy.array of the Ci values of the date hubs (ci_values_date_hubs) and the Ci values of the party hubs (ci_values_party_hubs)
End of explanation
matplotlib.pyplot.hist(ci_values_date_hubs, normed=1, alpha=0.5, label="date")
matplotlib.pyplot.hist(ci_values_party_hubs, normed=1, alpha=0.5, label="party")
matplotlib.pyplot.legend(loc="upper center")
matplotlib.pyplot.xlabel("Ci")
matplotlib.pyplot.ylabel("frequency")
matplotlib.pyplot.show()
Explanation: Plot the histograms of the local clustering coefficients of the "date hubs" and the "party hubs".
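The introduction mentions comparing the two distributions with a Kolmogorov-Smirnov test. A minimal sketch of that comparison (assuming scipy is available and the two arrays above have been filled in; this cell is an addition, not part of the original class notebook) could be:
import scipy.stats
# drop nan values (vertices with degree < 2) before comparing the two samples
date_ci = ci_values_date_hubs[~numpy.isnan(ci_values_date_hubs)]
party_ci = ci_values_party_hubs[~numpy.isnan(ci_values_party_hubs)]
print(scipy.stats.ks_2samp(date_ci, party_ci))  # KS statistic and p-value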
End of explanation |
2,503 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Toplevel
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Flux Correction
3. Key Properties --> Genealogy
4. Key Properties --> Software Properties
5. Key Properties --> Coupling
6. Key Properties --> Tuning Applied
7. Key Properties --> Conservation --> Heat
8. Key Properties --> Conservation --> Fresh Water
9. Key Properties --> Conservation --> Salt
10. Key Properties --> Conservation --> Momentum
11. Radiative Forcings
12. Radiative Forcings --> Greenhouse Gases --> CO2
13. Radiative Forcings --> Greenhouse Gases --> CH4
14. Radiative Forcings --> Greenhouse Gases --> N2O
15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
17. Radiative Forcings --> Greenhouse Gases --> CFC
18. Radiative Forcings --> Aerosols --> SO4
19. Radiative Forcings --> Aerosols --> Black Carbon
20. Radiative Forcings --> Aerosols --> Organic Carbon
21. Radiative Forcings --> Aerosols --> Nitrate
22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
24. Radiative Forcings --> Aerosols --> Dust
25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
27. Radiative Forcings --> Aerosols --> Sea Salt
28. Radiative Forcings --> Other --> Land Use
29. Radiative Forcings --> Other --> Solar
1. Key Properties
Key properties of the model
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 2. Key Properties --> Flux Correction
Flux correction properties of the model
2.1. Details
Is Required
Step7: 3. Key Properties --> Genealogy
Genealogy and history of the model
3.1. Year Released
Is Required
Step8: 3.2. CMIP3 Parent
Is Required
Step9: 3.3. CMIP5 Parent
Is Required
Step10: 3.4. Previous Name
Is Required
Step11: 4. Key Properties --> Software Properties
Software properties of model
4.1. Repository
Is Required
Step12: 4.2. Code Version
Is Required
Step13: 4.3. Code Languages
Is Required
Step14: 4.4. Components Structure
Is Required
Step15: 4.5. Coupler
Is Required
Step16: 5. Key Properties --> Coupling
**
5.1. Overview
Is Required
Step17: 5.2. Atmosphere Double Flux
Is Required
Step18: 5.3. Atmosphere Fluxes Calculation Grid
Is Required
Step19: 5.4. Atmosphere Relative Winds
Is Required
Step20: 6. Key Properties --> Tuning Applied
Tuning methodology for model
6.1. Description
Is Required
Step21: 6.2. Global Mean Metrics Used
Is Required
Step22: 6.3. Regional Metrics Used
Is Required
Step23: 6.4. Trend Metrics Used
Is Required
Step24: 6.5. Energy Balance
Is Required
Step25: 6.6. Fresh Water Balance
Is Required
Step26: 7. Key Properties --> Conservation --> Heat
Global heat convervation properties of the model
7.1. Global
Is Required
Step27: 7.2. Atmos Ocean Interface
Is Required
Step28: 7.3. Atmos Land Interface
Is Required
Step29: 7.4. Atmos Sea-ice Interface
Is Required
Step30: 7.5. Ocean Seaice Interface
Is Required
Step31: 7.6. Land Ocean Interface
Is Required
Step32: 8. Key Properties --> Conservation --> Fresh Water
Global fresh water convervation properties of the model
8.1. Global
Is Required
Step33: 8.2. Atmos Ocean Interface
Is Required
Step34: 8.3. Atmos Land Interface
Is Required
Step35: 8.4. Atmos Sea-ice Interface
Is Required
Step36: 8.5. Ocean Seaice Interface
Is Required
Step37: 8.6. Runoff
Is Required
Step38: 8.7. Iceberg Calving
Is Required
Step39: 8.8. Endoreic Basins
Is Required
Step40: 8.9. Snow Accumulation
Is Required
Step41: 9. Key Properties --> Conservation --> Salt
Global salt convervation properties of the model
9.1. Ocean Seaice Interface
Is Required
Step42: 10. Key Properties --> Conservation --> Momentum
Global momentum convervation properties of the model
10.1. Details
Is Required
Step43: 11. Radiative Forcings
Radiative forcings of the model for historical and scenario (aka Table 12.1 IPCC AR5)
11.1. Overview
Is Required
Step44: 12. Radiative Forcings --> Greenhouse Gases --> CO2
Carbon dioxide forcing
12.1. Provision
Is Required
Step45: 12.2. Additional Information
Is Required
Step46: 13. Radiative Forcings --> Greenhouse Gases --> CH4
Methane forcing
13.1. Provision
Is Required
Step47: 13.2. Additional Information
Is Required
Step48: 14. Radiative Forcings --> Greenhouse Gases --> N2O
Nitrous oxide forcing
14.1. Provision
Is Required
Step49: 14.2. Additional Information
Is Required
Step50: 15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
Tropospheric ozone forcing
15.1. Provision
Is Required
Step51: 15.2. Additional Information
Is Required
Step52: 16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
Stratospheric ozone forcing
16.1. Provision
Is Required
Step53: 16.2. Additional Information
Is Required
Step54: 17. Radiative Forcings --> Greenhouse Gases --> CFC
Ozone-depleting and non-ozone-depleting fluorinated gases forcing
17.1. Provision
Is Required
Step55: 17.2. Equivalence Concentration
Is Required
Step56: 17.3. Additional Information
Is Required
Step57: 18. Radiative Forcings --> Aerosols --> SO4
SO4 aerosol forcing
18.1. Provision
Is Required
Step58: 18.2. Additional Information
Is Required
Step59: 19. Radiative Forcings --> Aerosols --> Black Carbon
Black carbon aerosol forcing
19.1. Provision
Is Required
Step60: 19.2. Additional Information
Is Required
Step61: 20. Radiative Forcings --> Aerosols --> Organic Carbon
Organic carbon aerosol forcing
20.1. Provision
Is Required
Step62: 20.2. Additional Information
Is Required
Step63: 21. Radiative Forcings --> Aerosols --> Nitrate
Nitrate forcing
21.1. Provision
Is Required
Step64: 21.2. Additional Information
Is Required
Step65: 22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
Cloud albedo effect forcing (RFaci)
22.1. Provision
Is Required
Step66: 22.2. Aerosol Effect On Ice Clouds
Is Required
Step67: 22.3. Additional Information
Is Required
Step68: 23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
Cloud lifetime effect forcing (ERFaci)
23.1. Provision
Is Required
Step69: 23.2. Aerosol Effect On Ice Clouds
Is Required
Step70: 23.3. RFaci From Sulfate Only
Is Required
Step71: 23.4. Additional Information
Is Required
Step72: 24. Radiative Forcings --> Aerosols --> Dust
Dust forcing
24.1. Provision
Is Required
Step73: 24.2. Additional Information
Is Required
Step74: 25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
Tropospheric volcanic forcing
25.1. Provision
Is Required
Step75: 25.2. Historical Explosive Volcanic Aerosol Implementation
Is Required
Step76: 25.3. Future Explosive Volcanic Aerosol Implementation
Is Required
Step77: 25.4. Additional Information
Is Required
Step78: 26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
Stratospheric volcanic forcing
26.1. Provision
Is Required
Step79: 26.2. Historical Explosive Volcanic Aerosol Implementation
Is Required
Step80: 26.3. Future Explosive Volcanic Aerosol Implementation
Is Required
Step81: 26.4. Additional Information
Is Required
Step82: 27. Radiative Forcings --> Aerosols --> Sea Salt
Sea salt forcing
27.1. Provision
Is Required
Step83: 27.2. Additional Information
Is Required
Step84: 28. Radiative Forcings --> Other --> Land Use
Land use forcing
28.1. Provision
Is Required
Step85: 28.2. Crop Change Only
Is Required
Step86: 28.3. Additional Information
Is Required
Step87: 29. Radiative Forcings --> Other --> Solar
Solar forcing
29.1. Provision
Is Required
Step88: 29.2. Additional Information
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'messy-consortium', 'sandbox-3', 'toplevel')
Explanation: ES-DOC CMIP6 Model Properties - Toplevel
MIP Era: CMIP6
Institute: MESSY-CONSORTIUM
Source ID: SANDBOX-3
Sub-Topics: Radiative Forcings.
Properties: 85 (42 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:10
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
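For illustration only, a filled-in call would look like the commented line below; the name and email are hypothetical placeholders, not actual authors of this document.
# Hypothetical example -- substitute the real document authors:
# DOC.set_author("Jane Doe", "jane.doe@example.org")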
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Flux Correction
3. Key Properties --> Genealogy
4. Key Properties --> Software Properties
5. Key Properties --> Coupling
6. Key Properties --> Tuning Applied
7. Key Properties --> Conservation --> Heat
8. Key Properties --> Conservation --> Fresh Water
9. Key Properties --> Conservation --> Salt
10. Key Properties --> Conservation --> Momentum
11. Radiative Forcings
12. Radiative Forcings --> Greenhouse Gases --> CO2
13. Radiative Forcings --> Greenhouse Gases --> CH4
14. Radiative Forcings --> Greenhouse Gases --> N2O
15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
17. Radiative Forcings --> Greenhouse Gases --> CFC
18. Radiative Forcings --> Aerosols --> SO4
19. Radiative Forcings --> Aerosols --> Black Carbon
20. Radiative Forcings --> Aerosols --> Organic Carbon
21. Radiative Forcings --> Aerosols --> Nitrate
22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
24. Radiative Forcings --> Aerosols --> Dust
25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
27. Radiative Forcings --> Aerosols --> Sea Salt
28. Radiative Forcings --> Other --> Land Use
29. Radiative Forcings --> Other --> Solar
1. Key Properties
Key properties of the model
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Top level overview of coupled model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of coupled model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.flux_correction.details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Flux Correction
Flux correction properties of the model
2.1. Details
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how flux corrections are applied in the model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.year_released')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Genealogy
Genealogy and history of the model
3.1. Year Released
Is Required: TRUE Type: STRING Cardinality: 1.1
Year the model was released
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP3_parent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.2. CMIP3 Parent
Is Required: FALSE Type: STRING Cardinality: 0.1
CMIP3 parent if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP5_parent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.3. CMIP5 Parent
Is Required: FALSE Type: STRING Cardinality: 0.1
CMIP5 parent if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.previous_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.4. Previous Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Previously known as
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Software Properties
Software properties of model
4.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.components_structure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.4. Components Structure
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how model realms are structured into independent software components (coupled via a coupler) and internal software components.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.coupler')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OASIS"
# "OASIS3-MCT"
# "ESMF"
# "NUOPC"
# "Bespoke"
# "Unknown"
# "None"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 4.5. Coupler
Is Required: FALSE Type: ENUM Cardinality: 0.1
Overarching coupling framework for model.
End of explanation
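For illustration, an ENUM property with cardinality 0.1 is recorded by passing one string from the Valid Choices list above; the commented value below is hypothetical, not a statement about this model.
# Hypothetical example -- pick the choice that matches your model:
# DOC.set_value("OASIS3-MCT")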
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Coupling
**
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of coupling in the model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_double_flux')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 5.2. Atmosphere Double Flux
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the atmosphere passing a double flux to the ocean and sea ice (as opposed to a single one)?
End of explanation
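For illustration, a BOOLEAN property takes an unquoted Python boolean; the commented value below is hypothetical, not a statement about this model.
# Hypothetical example -- use the value that applies to your model:
# DOC.set_value(False)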
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_fluxes_calculation_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Atmosphere grid"
# "Ocean grid"
# "Specific coupler grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 5.3. Atmosphere Fluxes Calculation Grid
Is Required: FALSE Type: ENUM Cardinality: 0.1
Where are the air-sea fluxes calculated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_relative_winds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 5.4. Atmosphere Relative Winds
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are relative or absolute winds used to compute the flux? I.e. do ocean surface currents enter the wind stress calculation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Key Properties --> Tuning Applied
Tuning methodology for model
6.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics/diagnostics retained. Document the relative weight given to climate performance metrics/diagnostics versus process oriented metrics/diagnostics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics/diagnostics of the global mean state used in tuning model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics/diagnostics of mean state (e.g THC, AABW, regional means etc) used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics/diagnostics used in tuning model/component (such as 20th century)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.energy_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.5. Energy Balance
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how energy balance was obtained in the full system: in the various components independently or at the components coupling stage?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.fresh_water_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.6. Fresh Water Balance
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how fresh water balance was obtained in the full system: in the various components independently or at the components coupling stage?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.global')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Key Properties --> Conservation --> Heat
Global heat conservation properties of the model
7.1. Global
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how heat is conserved globally
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.2. Atmos Ocean Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the atmosphere/ocean coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_land_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.3. Atmos Land Interface
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how heat is conserved at the atmosphere/land coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_sea-ice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.4. Atmos Sea-ice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the atmosphere/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.5. Ocean Seaice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the ocean/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.land_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.6. Land Ocean Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the land/ocean coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.global')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Key Properties --> Conservation --> Fresh Water
Global fresh water conservation properties of the model
8.1. Global
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how fresh water is conserved globally
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.2. Atmos Ocean Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how fresh water is conserved at the atmosphere/ocean coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_land_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.3. Atmos Land Interface
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how fresh water is conserved at the atmosphere/land coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_sea-ice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.4. Atmos Sea-ice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how fresh water is conserved at the atmosphere/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.5. Ocean Seaice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how fresh water is conserved at the ocean/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.runoff')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.6. Runoff
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how runoff is distributed and conserved
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.iceberg_calving')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.7. Iceberg Calving
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how iceberg calving is modeled and conserved
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.endoreic_basins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.8. Endoreic Basins
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how endoreic basins (no ocean access) are treated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.snow_accumulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.9. Snow Accumulation
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how snow accumulation over land and over sea-ice is treated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.salt.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Key Properties --> Conservation --> Salt
Global salt conservation properties of the model
9.1. Ocean Seaice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how salt is conserved at the ocean/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.momentum.details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10. Key Properties --> Conservation --> Momentum
Global momentum conservation properties of the model
10.1. Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how momentum is conserved in the model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11. Radiative Forcings
Radiative forcings of the model for historical and scenario (aka Table 12.1 IPCC AR5)
11.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of radiative forcings (GHG and aerosols) implementation in model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12. Radiative Forcings --> Greenhouse Gases --> CO2
Carbon dioxide forcing
12.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
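For illustration, an ENUM property with cardinality 1.N would typically be filled with one set_value call per selected choice (an assumption based on the PROPERTY VALUE(S) comment above); the commented values below are hypothetical, not statements about this model.
# Hypothetical example -- assumes repeated set_value calls accumulate for
# N-cardinality properties:
# DOC.set_value("Y")
# DOC.set_value("E")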
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13. Radiative Forcings --> Greenhouse Gases --> CH4
Methane forcing
13.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 13.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14. Radiative Forcings --> Greenhouse Gases --> N2O
Nitrous oxide forcing
14.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
Tropospheric ozone forcing
15.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
Stratospheric ozone forcing
16.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 16.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17. Radiative Forcings --> Greenhouse Gases --> CFC
Ozone-depleting and non-ozone-depleting fluorinated gases forcing
17.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.equivalence_concentration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "Option 1"
# "Option 2"
# "Option 3"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.2. Equivalence Concentration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Details of any equivalence concentrations used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.3. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18. Radiative Forcings --> Aerosols --> SO4
SO4 aerosol forcing
18.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 18.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 19. Radiative Forcings --> Aerosols --> Black Carbon
Black carbon aerosol forcing
19.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 20. Radiative Forcings --> Aerosols --> Organic Carbon
Organic carbon aerosol forcing
20.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 20.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 21. Radiative Forcings --> Aerosols --> Nitrate
Nitrate forcing
21.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 21.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
Cloud albedo effect forcing (RFaci)
22.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.aerosol_effect_on_ice_clouds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 22.2. Aerosol Effect On Ice Clouds
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Radiative effects of aerosols on ice clouds are represented?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.3. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
Cloud lifetime effect forcing (ERFaci)
23.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.aerosol_effect_on_ice_clouds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 23.2. Aerosol Effect On Ice Clouds
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Radiative effects of aerosols on ice clouds are represented?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.RFaci_from_sulfate_only')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 23.3. RFaci From Sulfate Only
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Radiative forcing from aerosol cloud interactions from sulfate aerosol only?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 23.4. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 24. Radiative Forcings --> Aerosols --> Dust
Dust forcing
24.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 24.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
Tropospheric volcanic forcing
25.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.historical_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.2. Historical Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in historical simulations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.future_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.3. Future Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in future simulations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 25.4. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
Stratospheric volcanic forcing
26.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.historical_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26.2. Historical Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in historical simulations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.future_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26.3. Future Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in future simulations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 26.4. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 27. Radiative Forcings --> Aerosols --> Sea Salt
Sea salt forcing
27.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 28. Radiative Forcings --> Other --> Land Use
Land use forcing
28.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.crop_change_only')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 28.2. Crop Change Only
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Land use change represented via crop change only?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 28.3. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "irradiance"
# "proton"
# "electron"
# "cosmic ray"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 29. Radiative Forcings --> Other --> Solar
Solar forcing
29.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How solar forcing is provided
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 29.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation |
2,504 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Building an image classification model using very little data
Based on the tutorial by Francois Chollet @fchollet https
Step1: Imports
Step2: Small Conv Net
Model architecture definition
Step3: Training
Step4: If your model successfully runs at one epoch, go back and run it for 30 epochs by changing nb_epoch above. I was able to get to a val_acc of 0.71 at 30 epochs.
A copy of a pretrained network is available in the pretrained folder.
Evaluating on validation set
Computing loss and accuracy
Step5: Evolution of accuracy on training (blue) and validation (green) sets for 1 to 32 epochs
Step6: Evaluating on validation set
Computing loss and accuracy
Step7: Evolution of accuracy on training (blue) and validation (green) sets for 1 to 100 epochs
Step8: Loading VGG16 weights
This part is a bit complicated because the structure of our model is not exactly the same as the one used when the weights were trained.
Otherwise, we would use the model.load_weights() method.
Note
Step9: Using the VGG16 model to process samples
Step10: This is a long process, so we save the output of the VGG16 once and for all.
Step11: Now we can load it...
Step12: And define and train the custom fully connected neural network
Step13: The training process of this small neural network is very fast
Step14: Bottleneck model evaluation
Step15: Loss and accuracy
Step16: Evolution of accuracy on training (blue) and validation (green) sets for 1 to 32 epochs
Step17: Start by instantiating the VGG base and loading its weights.
Step18: Build a classifier model to put on top of the convolutional model. For the fine tuning, we start with a fully trained classifier. We will use the weights from the earlier model, and then we will add this model on top of the convolutional base.
Step19: For fine tuning, we only want to train a few layers. This line will set the first 25 layers (up to the conv block) to non-trainable.
Step20: Evaluating on validation set
Computing loss and accuracy | Python Code:
##This notebook is built around using tensorflow as the backend for keras
#!pip install pillow
!KERAS_BACKEND=tensorflow python -c "from keras import backend"
import os
import numpy as np
from keras.models import Sequential
from keras.layers import Activation, Dropout, Flatten, Dense
from keras.preprocessing.image import ImageDataGenerator
from keras.layers import Conv2D, Convolution2D, MaxPooling2D, ZeroPadding2D
from keras import optimizers
# dimensions of our images.
img_width, img_height = 150, 150
train_data_dir = 'data/train'
validation_data_dir = 'data/validation'
Explanation: Building an image classification model using very little data
Based on the tutorial by Francois Chollet @fchollet https://blog.keras.io/building-powerful-image-classification-models-using-very-little-data.html and the workbook by Guillaume Dominici https://github.com/gggdominici/keras-workshop
This tutorial presents several ways to build an image classifier using keras from just a few hundred or thousand pictures from each class you want to be able to recognize.
We will go over the following options:
training a small network from scratch (as a baseline)
using the bottleneck features of a pre-trained network
fine-tuning the top layers of a pre-trained network
This will lead us to cover the following Keras features:
fit_generator for training Keras a model using Python data generators
ImageDataGenerator for real-time data augmentation
layer freezing and model fine-tuning
...and more.
Data
Data can be downloaded at:
https://www.kaggle.com/c/dogs-vs-cats/data
All you need is the train set
The recommended folder structure is:
Folder structure
python
data/
train/
dogs/ ### 1024 pictures
dog001.jpg
dog002.jpg
...
cats/ ### 1024 pictures
cat001.jpg
cat002.jpg
...
validation/
dogs/ ### 416 pictures
dog001.jpg
dog002.jpg
...
cats/ ### 416 pictures
cat001.jpg
cat002.jpg
...
Note: for this example we only consider 2x1000 training images and 2x400 validation images among the 2x12500 available.
The github repo includes about 1500 images for this model. The original Kaggle dataset is much larger. The purpose of this demo is to show how you can build models with smaller size datasets. You should be able to improve this model by using more data.
Data loading
End of explanation
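Before building the generators, it can help to sanity-check the folder layout; the short sketch below simply counts the image files found in each class sub-folder (it assumes the data/train and data/validation structure shown above).
import os
# Quick sanity check of the folder layout described above (one sub-folder per class).
for split_dir in (train_data_dir, validation_data_dir):
    for class_name in sorted(os.listdir(split_dir)):
        class_dir = os.path.join(split_dir, class_name)
        if os.path.isdir(class_dir):
            print("%s/%s: %d images" % (split_dir, class_name, len(os.listdir(class_dir))))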
# used to rescale the pixel values from [0, 255] to [0, 1] interval
datagen = ImageDataGenerator(rescale=1./255)
# automagically retrieve images and their classes for train and validation sets
train_generator = datagen.flow_from_directory(
train_data_dir,
target_size=(img_width, img_height),
batch_size=16,
class_mode='binary')
validation_generator = datagen.flow_from_directory(
validation_data_dir,
target_size=(img_width, img_height),
batch_size=32,
class_mode='binary')
Explanation: Imports
End of explanation
model = Sequential()
model.add(Conv2D(32,(3,3), input_shape=(img_width, img_height,3)))
#model.add(Convolution2D(32, 3, 3, input_shape=(img_width, img_height,3)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Conv2D(32,(3,3)))
#model.add(Convolution2D(32, 3, 3))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Conv2D(64,(3,3)))
#model.add(Convolution2D(64, 3, 3))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Flatten())
model.add(Dense(64))
model.add(Activation('relu'))
model.add(Dropout(0.5))
model.add(Dense(1))
model.add(Activation('sigmoid'))
model.compile(loss='binary_crossentropy',
optimizer='rmsprop',
metrics=['accuracy'])
Explanation: Small Conv Net
Model architecture definition
End of explanation
nb_epoch = 30
nb_train_samples = 2048
nb_validation_samples = 832
model.fit_generator(
train_generator,
samples_per_epoch=nb_train_samples,
nb_epoch=nb_epoch,
validation_data=validation_generator,
nb_val_samples=nb_validation_samples)
model.save_weights('models/basic_cnn_20_epochs.h5')
#model.load_weights('models_trained/basic_cnn_20_epochs.h5')
Explanation: Training
End of explanation
model.evaluate_generator(validation_generator, nb_validation_samples)
Explanation: If your model successfully runs at one epoch, go back and run it for 30 epochs by changing nb_epoch above. I was able to get to a val_acc of 0.71 at 30 epochs.
A copy of a pretrained network is available in the pretrained folder.
Evaluating on validation set
Computing loss and accuracy:
End of explanation
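Beyond the aggregate metrics, it is often useful to score a single picture. The sketch below uses a hypothetical file path; which class the sigmoid output refers to is given by the indices that flow_from_directory assigned, available in train_generator.class_indices.
import numpy as np
from keras.preprocessing.image import load_img, img_to_array
# Hypothetical example image -- replace with any file you want to classify.
img = load_img('data/validation/dogs/dog001.jpg', target_size=(img_width, img_height))
x = img_to_array(img) / 255.          # same rescaling as the generators
x = np.expand_dims(x, axis=0)         # shape (1, img_width, img_height, 3)
print(model.predict(x)[0][0])         # sigmoid output in [0, 1]
print(train_generator.class_indices)  # mapping from class name to index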
train_datagen_augmented = ImageDataGenerator(
rescale=1./255, # normalize pixel values to [0,1]
shear_range=0.2, # randomly applies shearing transformation
zoom_range=0.2, # randomly applies shearing transformation
horizontal_flip=True) # randomly flip the images
# same code as before
train_generator_augmented = train_datagen_augmented.flow_from_directory(
train_data_dir,
target_size=(img_width, img_height),
batch_size=32,
class_mode='binary')
nb_epoch = 30
model.fit_generator(
train_generator_augmented,
samples_per_epoch=nb_train_samples,
nb_epoch=nb_epoch,
validation_data=validation_generator,
nb_val_samples=nb_validation_samples)
model.save_weights('models/augmented_30_epochs.h5')
#model.load_weights('models_trained/augmented_30_epochs.h5')
Explanation: Evolution of accuracy on training (blue) and validation (green) sets for 1 to 32 epochs:
After ~10 epochs the neural network reaches ~70% accuracy. We can see overfitting: no further progress is made on the validation set in the following epochs.
Data augmentation for improving the model
By applying random transformations to our train set, we artificially enhance our dataset with new unseen images.
This will hopefully reduce overfitting and allow better generalization for our network.
Example of data augmentation applied to a picture:
End of explanation
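To see what the augmented pictures actually look like, one can ask an ImageDataGenerator to write a few transformed copies of a single image to disk. This is only a sketch: the input image and the preview folder are hypothetical, and a separate generator without rescaling is used so the saved files look like ordinary pictures.
import os
from keras.preprocessing.image import load_img, img_to_array
preview_datagen = ImageDataGenerator(shear_range=0.2, zoom_range=0.2, horizontal_flip=True)
if not os.path.exists('preview'):     # hypothetical output folder
    os.makedirs('preview')
img = img_to_array(load_img('data/train/cats/cat001.jpg'))  # hypothetical input image
img = img.reshape((1,) + img.shape)
i = 0
for batch in preview_datagen.flow(img, batch_size=1, save_to_dir='preview',
                                  save_prefix='aug', save_format='jpeg'):
    i += 1
    if i >= 8:                        # stop after a few augmented samples
        break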
model.evaluate_generator(validation_generator, nb_validation_samples)
Explanation: Evaluating on validation set
Computing loss and accuracy:
End of explanation
model_vgg = Sequential()
model_vgg.add(ZeroPadding2D((1, 1), input_shape=(img_width, img_height,3)))
model_vgg.add(Convolution2D(64, 3, 3, activation='relu', name='conv1_1'))
model_vgg.add(ZeroPadding2D((1, 1)))
model_vgg.add(Convolution2D(64, 3, 3, activation='relu', name='conv1_2'))
model_vgg.add(MaxPooling2D((2, 2), strides=(2, 2)))
model_vgg.add(ZeroPadding2D((1, 1)))
model_vgg.add(Convolution2D(128, 3, 3, activation='relu', name='conv2_1'))
model_vgg.add(ZeroPadding2D((1, 1)))
model_vgg.add(Convolution2D(128, 3, 3, activation='relu', name='conv2_2'))
model_vgg.add(MaxPooling2D((2, 2), strides=(2, 2)))
model_vgg.add(ZeroPadding2D((1, 1)))
model_vgg.add(Convolution2D(256, 3, 3, activation='relu', name='conv3_1'))
model_vgg.add(ZeroPadding2D((1, 1)))
model_vgg.add(Convolution2D(256, 3, 3, activation='relu', name='conv3_2'))
model_vgg.add(ZeroPadding2D((1, 1)))
model_vgg.add(Convolution2D(256, 3, 3, activation='relu', name='conv3_3'))
model_vgg.add(MaxPooling2D((2, 2), strides=(2, 2)))
model_vgg.add(ZeroPadding2D((1, 1)))
model_vgg.add(Convolution2D(512, 3, 3, activation='relu', name='conv4_1'))
model_vgg.add(ZeroPadding2D((1, 1)))
model_vgg.add(Convolution2D(512, 3, 3, activation='relu', name='conv4_2'))
model_vgg.add(ZeroPadding2D((1, 1)))
model_vgg.add(Convolution2D(512, 3, 3, activation='relu', name='conv4_3'))
model_vgg.add(MaxPooling2D((2, 2), strides=(2, 2)))
model_vgg.add(ZeroPadding2D((1, 1)))
model_vgg.add(Convolution2D(512, 3, 3, activation='relu', name='conv5_1'))
model_vgg.add(ZeroPadding2D((1, 1)))
model_vgg.add(Convolution2D(512, 3, 3, activation='relu', name='conv5_2'))
model_vgg.add(ZeroPadding2D((1, 1)))
model_vgg.add(Convolution2D(512, 3, 3, activation='relu', name='conv5_3'))
model_vgg.add(MaxPooling2D((2, 2), strides=(2, 2)))
Explanation: Evolution of accuracy on training (blue) and validation (green) sets for 1 to 100 epochs:
Thanks to data augmentation, the accuracy on the validation set improved to ~80%.
Using a pre-trained model
The process of training a convolutional neural network can be very time-consuming and requires a lot of data.
We can go beyond the previous models in terms of performance and efficiency by using a general-purpose, pre-trained image classifier. This example uses VGG16, a model trained on the ImageNet dataset - which contains millions of images classified in 1000 categories.
On top of it, we add a small multi-layer perceptron and we train it on our dataset.
VGG16 + small MLP
VGG16 model architecture definition
End of explanation
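As an aside, newer Keras releases ship a ready-made VGG16 in keras.applications that can download the ImageNet weights itself. The sketch below is an alternative only and is not used in the rest of this notebook, which keeps the hand-built model so the layer names match the weight file loaded next; it assumes a Keras version that provides keras.applications.
# Alternative (not used below) -- requires a Keras version with keras.applications;
# the ImageNet weights are downloaded automatically on first use.
from keras.applications.vgg16 import VGG16
vgg_base = VGG16(weights='imagenet', include_top=False,
                 input_shape=(img_width, img_height, 3))
vgg_base.summary()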
import h5py
f = h5py.File('models/vgg/vgg16_weights.h5')
for k in range(f.attrs['nb_layers']):
if k >= len(model_vgg.layers) - 1:
# we don't look at the last two layers in the savefile (fully-connected and activation)
break
g = f['layer_{}'.format(k)]
weights = [g['param_{}'.format(p)] for p in range(g.attrs['nb_params'])]
layer = model_vgg.layers[k]
if layer.__class__.__name__ in ['Convolution1D', 'Convolution2D', 'Convolution3D', 'AtrousConvolution2D']:
weights[0] = np.transpose(weights[0], (2, 3, 1, 0))
layer.set_weights(weights)
f.close()
Explanation: Loading VGG16 weights
This part is a bit complicated because the structure of our model is not exactly the same as the one used when the weights were trained.
Otherwise, we would use the model.load_weights() method.
Note : the VGG16 weights file (~500MB) is not included in this repository. You can download from here :
https://gist.github.com/baraldilorenzo/07d7802847aaad0a35d3
End of explanation
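Before copying weights layer by layer, it can help to inspect what the HDF5 file actually contains. A quick sanity check along these lines (reusing only the attributes already used in the loop above) could look like this:
import h5py
with h5py.File('models/vgg/vgg16_weights.h5', 'r') as f:
    for k in range(f.attrs['nb_layers']):
        g = f['layer_{}'.format(k)]
        # print the parameter shapes stored for each layer group in the file
        shapes = [g['param_{}'.format(p)].shape for p in range(g.attrs['nb_params'])]
        print(k, shapes)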
train_generator_bottleneck = datagen.flow_from_directory(
train_data_dir,
target_size=(img_width, img_height),
batch_size=32,
class_mode=None,
shuffle=False)
validation_generator_bottleneck = datagen.flow_from_directory(
validation_data_dir,
target_size=(img_width, img_height),
batch_size=32,
class_mode=None,
shuffle=False)
Explanation: Using the VGG16 model to process samples
End of explanation
bottleneck_features_train = model_vgg.predict_generator(train_generator_bottleneck, nb_train_samples)
np.save(open('models/bottleneck_features_train.npy', 'wb'), bottleneck_features_train)
bottleneck_features_validation = model_vgg.predict_generator(validation_generator_bottleneck, nb_validation_samples)
np.save(open('models/bottleneck_features_validation.npy', 'wb'), bottleneck_features_validation)
Explanation: This is a long process, so we save the output of the VGG16 once and for all.
End of explanation
train_data = np.load(open('models/bottleneck_features_train.npy', 'rb'))
train_labels = np.array([0] * (nb_train_samples // 2) + [1] * (nb_train_samples // 2))
validation_data = np.load(open('models/bottleneck_features_validation.npy', 'rb'))
validation_labels = np.array([0] * (nb_validation_samples // 2) + [1] * (nb_validation_samples // 2))
Explanation: Now we can load it...
End of explanation
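The hard-coded half-0 / half-1 label arrays rely on the alphabetical, non-shuffled ordering of flow_from_directory. One possible alternative (assuming the bottleneck generators from the earlier cell are still in scope and that your Keras version exposes a classes attribute on directory iterators) is to reuse the class indices they recorded:
# sketch only: reuse the class indices recorded by the non-shuffled generators
train_labels = train_generator_bottleneck.classes
validation_labels = validation_generator_bottleneck.classes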
model_top = Sequential()
model_top.add(Flatten(input_shape=train_data.shape[1:]))
model_top.add(Dense(256, activation='relu'))
model_top.add(Dropout(0.5))
model_top.add(Dense(1, activation='sigmoid'))
model_top.compile(optimizer='rmsprop', loss='binary_crossentropy', metrics=['accuracy'])
nb_epoch=40
model_top.fit(train_data, train_labels,
nb_epoch=nb_epoch, batch_size=32,
validation_data=(validation_data, validation_labels))
Explanation: And define and train the custom fully connected neural network :
End of explanation
model_top.save_weights('models/bottleneck_40_epochs.h5')
Explanation: The training process of this small neural network is very fast : ~2s per epoch
End of explanation
#model_top.load_weights('models/with-bottleneck/1000-samples--100-epochs.h5')
#model_top.load_weights('/notebook/Data1/Code/keras-workshop/models/with-bottleneck/1000-samples--100-epochs.h5')
Explanation: Bottleneck model evaluation
End of explanation
model_top.evaluate(validation_data, validation_labels)
Explanation: Loss and accuracy :
End of explanation
## Fine-tuning the top layers of a pre-trained network
Explanation: Evolution of accuracy on training (blue) and validation (green) sets for 1 to 32 epochs :
We reached ~90% accuracy on the validation set after about one minute of training (~20 epochs), using only 8% of the samples originally available in the Kaggle competition!
End of explanation
model_vgg = Sequential()
model_vgg.add(ZeroPadding2D((1, 1), input_shape=(img_width, img_height,3)))
model_vgg.add(Convolution2D(64, 3, 3, activation='relu', name='conv1_1'))
model_vgg.add(ZeroPadding2D((1, 1)))
model_vgg.add(Convolution2D(64, 3, 3, activation='relu', name='conv1_2'))
model_vgg.add(MaxPooling2D((2, 2), strides=(2, 2)))
model_vgg.add(ZeroPadding2D((1, 1)))
model_vgg.add(Convolution2D(128, 3, 3, activation='relu', name='conv2_1'))
model_vgg.add(ZeroPadding2D((1, 1)))
model_vgg.add(Convolution2D(128, 3, 3, activation='relu', name='conv2_2'))
model_vgg.add(MaxPooling2D((2, 2), strides=(2, 2)))
model_vgg.add(ZeroPadding2D((1, 1)))
model_vgg.add(Convolution2D(256, 3, 3, activation='relu', name='conv3_1'))
model_vgg.add(ZeroPadding2D((1, 1)))
model_vgg.add(Convolution2D(256, 3, 3, activation='relu', name='conv3_2'))
model_vgg.add(ZeroPadding2D((1, 1)))
model_vgg.add(Convolution2D(256, 3, 3, activation='relu', name='conv3_3'))
model_vgg.add(MaxPooling2D((2, 2), strides=(2, 2)))
model_vgg.add(ZeroPadding2D((1, 1)))
model_vgg.add(Convolution2D(512, 3, 3, activation='relu', name='conv4_1'))
model_vgg.add(ZeroPadding2D((1, 1)))
model_vgg.add(Convolution2D(512, 3, 3, activation='relu', name='conv4_2'))
model_vgg.add(ZeroPadding2D((1, 1)))
model_vgg.add(Convolution2D(512, 3, 3, activation='relu', name='conv4_3'))
model_vgg.add(MaxPooling2D((2, 2), strides=(2, 2)))
model_vgg.add(ZeroPadding2D((1, 1)))
model_vgg.add(Convolution2D(512, 3, 3, activation='relu', name='conv5_1'))
model_vgg.add(ZeroPadding2D((1, 1)))
model_vgg.add(Convolution2D(512, 3, 3, activation='relu', name='conv5_2'))
model_vgg.add(ZeroPadding2D((1, 1)))
model_vgg.add(Convolution2D(512, 3, 3, activation='relu', name='conv5_3'))
model_vgg.add(MaxPooling2D((2, 2), strides=(2, 2)))
import h5py
f = h5py.File('models/vgg/vgg16_weights.h5')
for k in range(f.attrs['nb_layers']):
if k >= len(model_vgg.layers) - 1:
# we don't look at the last two layers in the savefile (fully-connected and activation)
break
g = f['layer_{}'.format(k)]
weights = [g['param_{}'.format(p)] for p in range(g.attrs['nb_params'])]
layer = model_vgg.layers[k]
if layer.__class__.__name__ in ['Convolution1D', 'Convolution2D', 'Convolution3D', 'AtrousConvolution2D']:
weights[0] = np.transpose(weights[0], (2, 3, 1, 0))
layer.set_weights(weights)
f.close()
Explanation: Start by instantiating the VGG base and loading its weights.
End of explanation
top_model = Sequential()
top_model.add(Flatten(input_shape=model_vgg.output_shape[1:]))
top_model.add(Dense(256, activation='relu'))
top_model.add(Dropout(0.5))
top_model.add(Dense(1, activation='sigmoid'))
top_model.load_weights('models/bottleneck_40_epochs.h5')
model_vgg.add(top_model)
Explanation: Build a classifier model to put on top of the convolutional model. For fine-tuning, we start with a fully trained classifier. We will use the weights from the earlier model, and then add this model on top of the convolutional base.
End of explanation
for layer in model_vgg.layers[:25]:
layer.trainable = False
# compile the model with a SGD/momentum optimizer
# and a very slow learning rate.
model_vgg.compile(loss='binary_crossentropy',
optimizer=optimizers.SGD(lr=1e-4, momentum=0.9),
metrics=['accuracy'])
# prepare the data augmentation configuration
train_datagen = ImageDataGenerator(
rescale=1./255,
shear_range=0.2,
zoom_range=0.2,
horizontal_flip=True)
test_datagen = ImageDataGenerator(rescale=1./255)
train_generator = train_datagen.flow_from_directory(
train_data_dir,
target_size=(img_height, img_width),
batch_size=32,
class_mode='binary')
validation_generator = test_datagen.flow_from_directory(
validation_data_dir,
target_size=(img_height, img_width),
batch_size=32,
class_mode='binary')
# fine-tune the model
model_vgg.fit_generator(
train_generator,
samples_per_epoch=nb_train_samples,
nb_epoch=nb_epoch,
validation_data=validation_generator,
nb_val_samples=nb_validation_samples)
model_vgg.save_weights('models/finetuning_20epochs_vgg.h5')
model_vgg.load_weights('models/finetuning_20epochs_vgg.h5')
Explanation: For fine-tuning, we only want to train a few layers. This line will set the first 25 layers (up to the conv block) to non-trainable.
End of explanation
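A quick way to double-check which layers were actually frozen before launching a long fine-tuning run (not part of the original notebook, just a small inspection sketch):
# list each layer's index, type and trainable flag after freezing
for i, layer in enumerate(model_vgg.layers):
    print(i, layer.__class__.__name__, layer.trainable)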
model_vgg.evaluate_generator(validation_generator, nb_validation_samples)
model.evaluate_generator(validation_generator, nb_validation_samples)
model_top.evaluate(validation_data, validation_labels)
Explanation: Evaluating on validation set
Computing loss and accuracy :
End of explanation |
2,505 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
阅读笔记
作者:方跃文
Email
Step1: Tab 键自动完成
在python shell中,输入表达式时候,只要按下Tab键,当前命名空间中任何已输入的字符串相匹配的变量(对象、函数等)就会被找出来:
Step2: 此外,我们还可以在任何对象之后输入一个句点来方便地补全方法和属性的输入:
Step3: Tab键自动完成成功不只可以搜索命名空间和自动完成对象或模块属性。当我们输入任何看上去像文件路径的东西时(即便是在一个Python字符串中),按下Tab键即可找出电脑文件系统中与之匹配的东西。
Step4: 内省
在变量的前面或者后面加上一个问号就可以将有关该对象的一些通用信息显示出来,这个就是内省,即object introspection.
Step6: 上面执行完,jupyter会跳出一个小窗口并且显示如下:
Type
Step7: ?还有一个用法,即搜索IPython的命名空间,类似于标准UNIX或者Windows命令行中的那种用法。一些字符再配以通配符即可显示出所有与该通配符表达式相匹配的名称。例如我们可以列出NumPy顶级命名空间中含有“load"的所有函数。
Step8: 上述执行后,jupyter notebooK会给出:
np.loader
np.load
np.loads
np.loadtxt
np.pkgload
%run命令
在IPython会话中,所有文件都可以通过%run命令当作Python程序来运行。假设当前目录下的chapter03文件夹中有个simple01.py的脚本,其中内容为
Step9: 上述脚本simple01.py是在一个空的命名空间中运行的,没有任何import,也没有定义任何其他的变量,所以其行为跟在命令行运行是一样的。此后,该脚本中所定义的变量(包括脚本中的import、函数、全局变量)就可以在当前jupyter notebook中进行访问(除非有其他错误或则异常)
Step10: 如果Python脚本中需要用到命令行参数(通过 sys.argv访问),可以将参数放到文件路径的后面,就像在命令行执行那样。
如果希望脚本执行的时候会访问当前jupyter notebook中的变量,应该用…%run -i script.py,例如
我在chapter03文件夹中写下
x = 32
add = x + result
print('add is %d' % (add))
Step11: 中断执行的代码
任何代码在执行时候(无论是通过%run执行的脚本还是长时间运行的命令),只要按下按下“Ctrl+C”,就会引发一个keyboardInterrupt。出一些特殊的情况之外,绝代分python程序都将因此立即停止执行。
例如:
当python代码已经调用了某个已编译的扩展模块时,按下Ctrl+C将无法立即停止执行。
在这种情况下,要么需要等待python解释器重新获得控制权,要么只能通过
操作系统的任务管理器强制执行终止python进程。
执行剪贴板中的代码
在IPython shell(注意,我这里强调一下,并不是在jupyter notebook中,而是ipython shell,虽然我有时候
把他们两个说的好像等效一样,但是两者还是不同的)中执行代码的最简单方式就是粘贴剪贴板中的代码。虽然这种做法很粗糙,
但是在实际工作中就很有用。比如,在开发一个复杂或耗时的程序时候,我们可能需要一段
一段地执行脚本,以便查看各个阶段所加载的数据以及产生的结果。又比如说,在网上找到了
一个何用的代码,但是又不想专门为其新建一个.py文件。
多数情况下,我们可以通过“Ctrl-Shift-V”将粘贴版中的代码片段粘贴出来(windows中)。
%paste 和 %cpaste 这两个魔术函数可以粘贴剪贴板中的一切文本。在ipython shell中这两个函数
可以帮助粘贴。后者%cpaste相比于%paste只是多了粘贴代码的特殊提示符,可以一行一行粘贴。
Step12: IPython 跟编辑器和IDE之间的交互
某些文本编辑器(EMACS, VIM)带有一些能将代码块直接发送到ipython shell的第三方扩展。某些IDE中也
预装有ipython。
对于我自己而言,我喜欢用git bash,然后在里面折腾vim. 当然我有时候也用IDE
键盘快捷键
IPython提供了许多用于提示符导航(Emacs文本编辑器或者UNIX bash shell的用户对此会很熟悉)和查阅历史shell命令的快捷键。因为我不喜欢在ipython shell中写code,所以我就跳过了。如果有人读到我的笔记发现这里没有什么记录的话请自行查找原书。
异常和跟踪
如果%run某段脚本或执行某条语句发生了一场,IPython默认会输出整个调用栈跟踪traceback,其中还会附上调用栈各点附近的几行代码作为上下文参考。
Step13: 拥有额外的上下文代码参考是它相对于标准python解释器的一大优势。上下文代码参考的数量可以通过%mode魔术命令进行控制,既可以少(与标准python解释器相同)也可以多(带有函数参数值以及其他信息)。本章后面还会讲到如果在出现异常之后进入跟踪栈进行交互式的事后调试post-mortem debugging.
魔术命令
IPython有一些特殊命令(魔术命令Magic Command),它们有的为常见任务提供便利,有的则使你能够轻松控制IPtython系统的行为。魔术命令是以百分号%为前缀的命令。例如,我们可以通过 %timeit 这个魔术命令检测任意Python语句(如矩阵乘法)的执行时间(稍后对此进行详细讲解):
Step14: 魔术命令可以看作运行于IPython系统中的命令行程序。它们大都还有一些“命令行”,使用“?”即可查看其选项
Step15: 上面执行后,会跳出它的docstring
Step16: 常用的python魔术命令
| 命令 | 功能|
| ------------- | ------------- |
| %quickref | 显示IPython的快速参考 |
| %magic | 显示所有魔术命令的详细文档 |
| %debug | 从最新的异常跟踪的底部进入交互式调试器|
| %hist | #打印命令的输入(可选输出)历史 |
| %pdb | 在异常发生后自动进入调试器 |
| %paste | 执行粘贴版中的python代码 |
| %reset | 删除interactive命名空间中的全部变量、名称 |
| %page OBJECT | 通过分页器打印输出 OBJECT |
| %run script.py | 在IPython中执行脚本 |
| %run statement | 通过cProfile执行statement,并打印分析器的输出结果|
| %time statement | 报告statement的执行时间|
| %timeit statement | 多次执行statement以计算系统平均执行时间。对那些执行时间非常小的代码很有用|
| %who、%who_ls、%whos | 显示interactive命名空间中定义的变量,信息级别、冗余度可变 |
| %xdel variable | 删除variable,并尝试清楚其在IPython中的对象上的一切引用 |
基于Qt的富GUI控制台
IPython团队开发了一个基于Qt框架(其摩的是为终端应用程序提供诸如内嵌图片、多行编辑、语法高亮之类的富文本编辑功能)的GUI控制平台。如果你已经安装了PyQt或者Pyside,使用下面命令来启动的话即可为其添加绘图功能。
ipython qtconsole --pylab=inline
Qt控制台可以通过标签页的形式启动多个IPython进程,这就使得我们可以在多个任务之间轻松地切换。它也开业跟IPython HTML Notebote (即我现在用的jupyter noteboo)共享同一个进程,稍后我们对此进行演示说明。
matplotlib 集成与pylab模式
导致Ipython广泛应用于科学计算领域的部分原因是它跟matplotlib这样的库以及GUI工具集默契配合。
通常我们通过在启动Ipython时候添加--pylab标记来集成matplotlib
Step17: 上述的操作会导致几个结果:
IPython 会启动默认GUI后台集成,这样matplotib绘图窗口创建就不会出现问题;
Numpy和matplotlib的大部分功能会被引入到最顶层的interactive命名空间以产生一个交互式的计算环境(类似matlab等)。也可以通过%gui对此进行手工设置(详情请执行%gui?)
Step18: 使用命令历史
IPython 维护着一个位于硬盘上的小型数据库。其中含有你执行过的每条命令的文本。这样做有几个目的:
只需很少的按键次数即可搜索、自动完成并执行之前已经执行过的命令
在会话间持久化历史命令
将输入/输出历史纪录到日志中去
搜索并重用命令历史
IPython倡导迭代、交互的开发模式:我们常常发现自己总是重复一些命令,假设我们已经执行了
Step19: 如果我们想在修改了simple01.py(当然也可以不改)后再次执行上面的操作,只需要输入 %run 命令的前几个字符并按下“ctrl+P”键或者向上箭头就会在命令历史的第一个发现它. (可能是因为我用的是git bash on windows,我自己并未测试成功书中的这个操作;但是在Linux中,我测试是有效的)。此外,ctrl-R可以实现部分增量搜索,跟Unix shell中的readline所提供的功能一样,并且ctrl-R将会循环搜索命令历史中每一条与输入相符的行。
例如,第一次ctrl-R后,我输入了c,ipython返回给我的是:
In [6]
Step20: 输入的文本被保存在名为 _iX 的变量中,其中X是输入行的行号。每个输入变量都有一个对应的输出变量 _X。例如:
Step21: 由于输入变量是字符串,因此可用python的exec关键字重新执行
Step22: %reset 用于清空 interactive 命名空间,并可选择是否清空输入和输出缓存。%xdel 用于从IPython系统中移除特定对象的一切引用。
Step23: 注意:在处理大数据集时,需注意IPython的输入和输出历史,它会导致所有对象引用都无法被垃圾收集器处理(即释放内存),即使用del关键字将变量从interactive命名空间中删除也不行。对于这种情况,谨慎地使用%xdel和%reset将有助于避免出现内存方面的问题。
记录输入和输出
IPython能够记录整个控制台会话,包括输入和输出。执行 %logstart 即可开始记录日志
Step24: IPython的日志功能开在任何时刻开气,以便记录整个会话。%logstart的具体选项可以参考帮助文档。此外还可以看看几个与之配套的命令:%logoff, %logon, %logstate, 以及 %logstop
Step25: 与操作系统交互
IPython 的另一重要特点就是它跟操作系统的shell结合地非常紧密。即我们可以直接在IPython中实现标准的Windows或unix命令行活动。例如,执行shell命令、更改目录、将命令的执行结果保存在Python对象中等。此外,它还提供了shell命令别名以及目录书签等功能。
下表总结了用于调用shell命令的魔术命令及其语法。本笔记后面还会介绍这些功能。
| 命令 | 说明|
| ------------- | ------------- |
| !cmd | 在系统shell中执行cmd |
| output = !cmd args | 执行cmd,将stdout存放在output中|
| %alias alias_name cmd | 为系统shell命令定义别名|
| %bookmark | 使用IPtyhon的目录书签功能|
| %cd directory | 将系统工作目录更改为directory|
| %pwd | 返回当前工作目录 |
| %pushed directory | 将当前目录入栈,并转向目标目录 (这个不懂??)|
| %popd | 弹出栈顶目录,并转向该目录 |
| %dirs | 返回一个含有当前目录栈的列表 |
| %dhist | 打印目录访问历史 |
| %env | 以dict形式返回系统环境变量 |
shell 命令和别名
在 IPython 中,以感叹号开头的命令行表示其后的所有内容需要在系统shell中执行。In other words, 我们可以删除文件(如rm或者del)、修改目录或执行任意其他处理过程。甚至我们还可启动一些将控制权从IPython手中夺走的进程(比如另外再启动一个Python解释器):
yang@comet-1.edu ~ 19
Step26: 在使用!时,IPython 还允许使用当前环境中定义的python值。只需在变量名前面加上美元符号($)即可:
Step27: 魔术命令 %alias 可以为shell命令自定义简称。例:
In [3]
Step28: 定义好之后就可以在ipython shell(或jupyter notebook)中使用魔术命令%cd db来使用这些标签
如果书签的名字与当前工作目录中某个名字冲突时,可通过 -b 标记(起作用是覆写)使用书签目录。%bookmark的 -l 选项的作用是列出所有书签。
Step29: 软件开发工具
IPython 不仅是交互式环境和数据分析环境,同时也非常适合做开发环境。在数据分析应用程序中,最重要的是要拥有正确的代码。IPython继承了Python内置的 pdb 调试器。 此外,IPython 提供了一些简单易用的代码运行时间以及性能分析的工具。
交互式调试器
IPython的调试器增加了 pdb ,如 Tab 键自动完成、语法高亮、为异常跟踪的每条信息添加上下文参考等。调试代码的最佳时机之一就是错误刚发生的时候。 %debug 命令(在发生异常之后立即输入)将会条用那个“时候”调试器,并直接跳转到发生异常的那个 栈帧 (stack frame)
Step30: 在这个 pdb 调试器中,我们可以执行任意Python 代码并查看各个栈帧中的一切对象和数据,这就相当于解释器还留了条后路给我们。默认是从最低级开始的,即错误发生的地方,在上面ipdb>后面输入u (up) 或者 d (down) 即可在栈跟踪的各级别之间进行切换。
Step31: 此外调试器还能为代码开发提供帮助,尤其当我们想设置断点或者对函数/脚本进行单步调试时。实现这个目的的方式如下所述。
用带有 -d 选项的 %run 命令,这将会在执行脚本文件中的代码之前先打开调试器。必须立即输入 s(或step)才能进入脚本:
Step32: 在此之后,上述文件执行的方式就全凭我们自己说了算了。比如说,在上面那个异常中,我们可以在调用 works_fine 方法的地方设置一个断点,然后输入 c (或者 continue) 使脚本一直运行下去直到该断点时为止。
Step33: 如果想精通这个调试器,必须经过大量的实践。
虽然大部分 IDE 都会自带调试器,但是 IPython 中调试程序的方法往往会带来更高的生产率。
下面是常用的 IPython 调试器命令
|命令 | 功能 |
|------| ------|
| h(elp) | 显示命令列表 |
| help command | 显示 command 的文档 |
| c(ontinue) | 恢复程序的执行 |
| q(uit) | 推出调试器,不再执行任何代码 |
| b(reak) number | 在当前文件的第 number 行设置一个断点 |
| b path/to/file.py
Step34: 测试代码执行的时间: %time 和 %timeit
对于大规模数据分析,我们有时候需要对时间有个规划和预测。特别是对于其中最耗时的函数。IPython中可以轻松应对这种情况。
使用内置的 time 模块,以及 time.clock 和 time.time 函数 手工测试代码执行时间是令人烦闷的事情,因为我们必须编写许多一样的公式化代码:
Step35: 由于这是一个非常常用的功能,所以IPython提供了两个魔术工具 %time 和 %timeit 来自动完成该过程。%time 一次执行一条语句,然后报告总的执行时间。假设我们有一大堆字符串,希望对几个“能选出具有特殊前缀的字符串”的函数进行比较。下面是一个拥有60万字字符串的数组,以及两个不同的“能够选出其中以foo开头的字符串”的方法:
Step36: Wall time是我们感兴趣的数字。所以,看上去第一个方法耗费了接近2倍的时间,但是这并非一个非常精确的结果。如果我们队相同语句多次执行%time的话,就会发现其结果是变化的。为了得到更加精确的结果,我们需要使用魔术函数 %timeit。对于任意语句,它会自动多次执行以产生一个非常精确的平均执行时间
Step37: 这个很平淡无奇的离子告诉我们这样一个道理:我们有必要了解Python标准库、Numpy、Pandas 以及 本书所用其他库的性能特点。在大型数据分析中,这些不起眼的毫秒数会不断累积产生蝴蝶效应。
对于那些执行时间非常短(甚至是微妙 1e-6 s;或者 纳秒 1e-9 s)的分析语句和函数而言,%timeit 是非常有用的。虽然对于单次执行而言,这些时间小到几乎可以忽略不计。但是我们只要举一个例子,就会发现我们很有必要“分秒必争”:
同样执行100万次一个20微妙的函数,所化时间要比一个5微妙的多出15秒。
在上面我运行的那个例子中,我们可以直接对两个字符串运算进行比较,以了解其性能特点:
Step38: 基本性能分析
Step39: 我们将上述脚本内容写入 simple03.py (目录为当前目录下的chapter03目录中),并且执行
Step40: 即使这里不明白脚本里面具体做的事情,那也没有关系,反正先这么照着书里先做着,感受下cProfile的作用。
我们可以看到,输出结果是按照函数名排序的(ordered by standard name)。这样就比较难看出哪些地方是最花时间的,因此通常用 -s 标记,换一种排序的规则:
Step41: 我们看到此时的排序规则为 Ordered by
Step42: 在ipython terminal中,执行 %run -p -s cumulative chapter03/simple03.py也能达到上述效果,但是却无法退出IPython。
逐行分析函数性能
有时候,%prun (或者其他基于cProfile的性能分析手段)所得到的信息要么不足以说明函数的执行时间,要么难以理解(按函数名聚合?)。对于这种情形,我们可以使用一个叫做 line_profiler 的小型库。其中有一个新的魔术函数 %lprun,它可以对一个或者多个函数进行逐行性能分析。我们需要修改 IPython 配置以启用这个扩展
For IPython 0.11+, you can install it by editing the IPython configuration file ~/.ipython/profile_default/ipython_config.py to add the 'line_profiler' item to the extensions list
Step43: line_profiler 可以通过编程方式使用,但是其更强大的一面在于与 Ipython 的交互使用。
假设我们有一个名为 prof_mod 的模块,其代码内容为(我们把prof_mode.py 保存在 chapter03目录下)
Step44: 如果我们想了解 add_and_sum 函数的性能,%prun 会给出如下所示的结果
Step45: 执行的结果为:
当我们启用 line_profiler 这个扩展后,就会出现新的魔术命令 %lprun。 用法上唯一的区别就是: 必须为 %lprun 指明想要测试哪个或哪些函数。%lprun 的通用语法为:
Step46: 在本例子中,我们想要测试 add_and_sum,于是执行
Step47: 网上找了下别人也遇到了和我一样的错误,stackoverflow上面有解决方案:
Step48: 然后我们再执行一次
Step49: 这个结果就容易理解了许多。这里我们测试的只是 add_and_sum 这个函数。上面那个模块中还有一个call_function 函数,我们可以结合 add_and_sum 一起测试,于是最终我们的命令成为了这个样子:
Step50: 通常我们会用 %prun (cProfile) 做宏观性能分析,而用 %lprun 来做 微观的性能分析。
注意,在使用 %lprun 时,之所以必须显示指明待测试函数的函数名,是因为“跟踪”每一行代码的时间代价是巨大的。对不感兴趣的函数进行跟踪会对分析结果产生很显著的影响。
IPython HTML Notebook
IPthon HTML Notebook,即现在的 jupyter notebook。这个其实在我整个笔记中都已经在使用了。notebook项目最初由 Brian Graner 领导的 Ipython 团队从 2011 年开始开发。目前已被广泛使用于开发和数据分析。
首先来看个导入图标的例子,其实这个笔记的开头,我也已经展示过部分这样的功能
Step51: 此处补充一个书上 导入图片的一个例子:
Step52: jupyter notebook是一种基于 JSON 的文档格式 .ipynb, 这种格式是的我们可以轻松分享代码,分析结果,特别是展示图标。目前在各种 Python 研讨会上,一种流行的演示手段就是使用 IPython Notebook,然后再讲 .ipynb 文件发布到网上供所有人参考。
Jupyter Notebook 是一个运行于命令行上的轻量级服务器进程。执行下面代码即可启动
Step53: 如果想要图标以inline方式展示,可以在打开notebook后加入 %matplotlib --inline 或者 %pylab --inline
利用IPython 提高代码开发效率的几点提示
使用 IPython,可以让代码的结果更容易交互和亦欲查看。特别是当执行出现错误的时候,IPython 的交互性可以带来极大的便利
重新加载模块依赖项
在 Python 中,当我们输入 import some_lib 时候,some_lib 中的代码就会被执行,且其中所有的变量、函数和引入项都会保存在一个新建立的 some_lib 模块命名空间中。下次再输入 import some_lib 时,就会得到这个模块命名空间的一个引用。而这对于 IPython 的交互式代码开发模式就会有一个问题。
比如,用 %run 执行的某段脚本中包含了某个刚刚做了修改的模块。假设我们有一个 sample_script.py 文件,其中有如下代码:
Step54: 如果在执行了 %run sample_script.py 后又对 some_lib.py 进行了修改,下次再执行 %run sample_script.py 时候,将仍然会使用老版本的some_lib。其原因在于python是一种“一次加载”系统。不像 matplab等,它会自动应用代码修改。
那么怎么解决这个问题呢?
第一个办法是使用内置的reload函数,即将 sample_script.py 修改成
Step55: 这样,就可以保证每次执行 sample_script.py 时候都能使用最新的 some_lib 了。不过这个办法有个问题,当依赖变得更强时,就需要在很多地插入 reload.
第二个办法可以弥补上述第一个办法的弊端。IPython 提供了一个特殊的 dreload 函数 (非魔术函数) 来解决模块的“深度”重加载。如果执行 import some_lib 之后在输入 derealod(some_lib),则它会尝试重新加载 some_lib 及其所有的依赖项。遗憾的是,这个办法也不是“屡试不爽”的,但倘若失效的,重新启动 IPython 就可以解决所有加载问题。
代码设计提示
作者说这个问题不好讲,但是他在日常生活中的确发现了一些高层次的原则。
保留有意义的对象和数据
扁平结构要比嵌套结构好:嵌套结构犹如洋葱,想要调试需要剥掉好多层。(这种思想源自于Zen of Python by Tim Peters. 在jupyter notebook中输入 import this可以看到这首诗)
无惧大文件。这样可以减少模块的反复加载,编辑脚本时候也可以减少跳转。维护也更加方便。维护更大的模块会更实用且更加符合python的特点。
Step56: 高级python功能
让你的类对IPython更加友好
IPython 力求为各种对象呈现一个友好的字符串表示。对于许多对象(如字典、列表、组等),内置的pprint 模块就能给出漂亮的格式。但是对于我们自己所定义的那些类,必须自己格式化进行输出。假设我们以后下面这个简单的类:
Step57: 如果像下面这样写,我们会发现这个类的默认输出很不好看:
Step58: 由于IPython会获取__repr__方法返回的字符串(具体方法是 output = repr(obj)),并将其显示到控制台上。因此,我们可以为上面那个类添加一个简单的 repr 方法以得到一个更有意义的输出形式:
Step59: 个性化和配置
IPython shell 在外观和行为方面的大部分内容都是可以进行配置的。下面是能够通过配置做的部分事情:
修改颜色方案
修改输入输出提示符
去掉 out 提示符跟下一个 In 提示符之间的空行
执行任意 Python 语句。这些语句可以用于引入所有常用的东西,还可以做一些你希望每次启动 IPython 都发生的事情。
启用 IPython 扩展,如 line_profiler 中的魔术命令 %lprun
定义我们自己的魔术命令或者系统别名
所有这些设置都在一个叫做 ipython_config.py 的文件中,可以在 ~/.config/ipython 目录中找到。Linux和windows系统目录略有点小区别。对于我自己来说,我在git bash on windows 上的目录是:~/.ipython/profile_default/ipython_config.py
一个实用的功能是,利用 ipython_config.py,我们可以拥有多个个性化设置。假设我们想专门为某个特定程序或者项目量身定做 IPython 配置。输入下面这样的命令即可新建一个新的个性化配置文件:
Step60: 然后编辑新建的这个 profile_secret_project 中的配置文件,再用如下方式启动它: | Python Code:
a = 5
a
import numpy as np
from numpy.random import randn
data = {i: randn() for i in range(7)}
print(data)
data1 = {j: j**2 for j in range(5)}
print(data1)
Explanation: 阅读笔记
作者:方跃文
Email: fyuewen@gmail.com
时间:始于2017年9月12日
第三章笔记始于2017年9月28日23:38,结束于 2017年10月17日
第三章 IPtyhon: 一种交互式计算和开发环境
IPython鼓励一种“执行探索——execute explore”精神,这就区别于传统的“编辑——编译——执行 edit——complie——run”
IPython 基础
End of explanation
an_apple = 27
an_example = 42
an_ #按下tab键就会看到之前定义的变量会被显示出来,方便我们做出选择。
Explanation: Tab 键自动完成
在python shell中,输入表达式时候,只要按下Tab键,当前命名空间中任何已输入的字符串相匹配的变量(对象、函数等)就会被找出来:
End of explanation
import IPython
print(IPython.sys_info())
a = [1,2,3]
a.append(0)
a
import datetime
dt = datetime.time(22,2,2)
dd = datetime.date(2017,2,2)
print("%s %s" % (dt,dd))
Explanation: 此外,我们还可以在任何对象之后输入一个句点来方便地补全方法和属性的输入:
End of explanation
./ #按下Tab键, 如果你当前目录下有文件或者目录,会给出提示。
Explanation: Tab键自动完成成功不只可以搜索命名空间和自动完成对象或模块属性。当我们输入任何看上去像文件路径的东西时(即便是在一个Python字符串中),按下Tab键即可找出电脑文件系统中与之匹配的东西。
End of explanation
b=[1,2,3]
b?
Explanation: 内省
在变量的前面或者后面加上一个问号就可以将有关该对象的一些通用信息显示出来,这个就是内省,即object introspection.
End of explanation
def add_numbers(a,b):
#引号部分则为docstring
    """
    Add two numbers together

    Returns
    -------
    the_sum: type of arguments
    """
return a+b
add_numbers(1,2)
add_numbers?
#加一个问号执行会显示上述我已经编写好的docstring,这样在忘记函数作用的时候还是很不错的功能。
add_numbers??
#加两个问号则会显示该函数的源代码
Explanation: 上面执行完,jupyter会跳出一个小窗口并且显示如下:
Type: list
String form: [1, 2, 3]
Length: 3
Docstring:
list() -> new empty list
list(iterable) -> new list initialized from iterable's items
如果对象是一个函数或者实例方法,则它的docstring(如果有的话)也会显示出来。例如:
End of explanation
import numpy as np
np.*load*?
Explanation: ?还有一个用法,即搜索IPython的命名空间,类似于标准UNIX或者Windows命令行中的那种用法。一些字符再配以通配符即可显示出所有与该通配符表达式相匹配的名称。例如我们可以列出NumPy顶级命名空间中含有“load"的所有函数。
End of explanation
def f(x,y,z):
return (x+y)/z
a=5
b=6
c=8
result = f(a,b,c)
print(result)
#执行
%run ./chapter03/simple01.py
Explanation: 上述执行后,jupyter notebooK会给出:
np.loader
np.load
np.loads
np.loadtxt
np.pkgload
%run命令
在IPython会话中,所有文件都可以通过%run命令当作Python程序来运行。假设当前目录下的chapter03文件夹中有个simple01.py的脚本,其中内容为
End of explanation
result
Explanation: 上述脚本simple01.py是在一个空的命名空间中运行的,没有任何import,也没有定义任何其他的变量,所以其行为跟在命令行运行是一样的。此后,该脚本中所定义的变量(包括脚本中的import、函数、全局变量)就可以在当前jupyter notebook中进行访问(除非有其他错误或则异常)
End of explanation
%run -i ./chapter03/simple02.py #-i即interactive
Explanation: 如果Python脚本中需要用到命令行参数(通过 sys.argv访问),可以将参数放到文件路径的后面,就像在命令行执行那样。
如果希望脚本执行的时候会访问当前jupyter notebook中的变量,应该用…%run -i script.py,例如
我在chapter03文件夹中写下
x = 32
add = x + result
print('add is %d' % (add))
End of explanation
#下面我把我在ipython中执行的代码
$ ipython
Python 3.6.1 |Anaconda custom (64-bit)| (default, May 11 2017, 13:25:24) [MSC v.1900 64 bit (AMD64)]
Type 'copyright', 'credits' or 'license' for more information
IPython 6.2.0 -- An enhanced Interactive Python. Type '?' for help.
In [1]: %cpaste
Pasting code; enter '--' alone on the line to stop or use Ctrl-D.
:x = 1
:if x == 1:
: print("x is 1.")
:
:--
x is 1.
Explanation: 中断执行的代码
任何代码在执行时候(无论是通过%run执行的脚本还是长时间运行的命令),只要按下按下“Ctrl+C”,就会引发一个keyboardInterrupt。出一些特殊的情况之外,绝代分python程序都将因此立即停止执行。
例如:
当python代码已经调用了某个已编译的扩展模块时,按下Ctrl+C将无法立即停止执行。
在这种情况下,要么需要等待python解释器重新获得控制权,要么只能通过
操作系统的任务管理器强制执行终止python进程。
执行剪贴板中的代码
在IPython shell(注意,我这里强调一下,并不是在jupyter notebook中,而是ipython shell,虽然我有时候
把他们两个说的好像等效一样,但是两者还是不同的)中执行代码的最简单方式就是粘贴剪贴板中的代码。虽然这种做法很粗糙,
但是在实际工作中就很有用。比如,在开发一个复杂或耗时的程序时候,我们可能需要一段
一段地执行脚本,以便查看各个阶段所加载的数据以及产生的结果。又比如说,在网上找到了
一个何用的代码,但是又不想专门为其新建一个.py文件。
多数情况下,我们可以通过“Ctrl-Shift-V”将粘贴版中的代码片段粘贴出来(windows中)。
%paste 和 %cpaste 这两个魔术函数可以粘贴剪贴板中的一切文本。在ipython shell中这两个函数
可以帮助粘贴。后者%cpaste相比于%paste只是多了粘贴代码的特殊提示符,可以一行一行粘贴。
End of explanation
%run ./pydata-book/ch03/ipython_bug.py
Explanation: IPython 跟编辑器和IDE之间的交互
某些文本编辑器(EMACS, VIM)带有一些能将代码块直接发送到ipython shell的第三方扩展。某些IDE中也
预装有ipython。
对于我自己而言,我喜欢用git bash,然后在里面折腾vim. 当然我有时候也用IDE
键盘快捷键
IPython提供了许多用于提示符导航(Emacs文本编辑器或者UNIX bash shell的用户对此会很熟悉)和查阅历史shell命令的快捷键。因为我不喜欢在ipython shell中写code,所以我就跳过了。如果有人读到我的笔记发现这里没有什么记录的话请自行查找原书。
异常和跟踪
如果%run某段脚本或执行某条语句发生了一场,IPython默认会输出整个调用栈跟踪traceback,其中还会附上调用栈各点附近的几行代码作为上下文参考。
End of explanation
import numpy as np
from numpy.random import randn
a = randn(3,3,3)
a
%timeit np.dot(a,a)
Explanation: 拥有额外的上下文代码参考是它相对于标准python解释器的一大优势。上下文代码参考的数量可以通过%mode魔术命令进行控制,既可以少(与标准python解释器相同)也可以多(带有函数参数值以及其他信息)。本章后面还会讲到如果在出现异常之后进入跟踪栈进行交互式的事后调试post-mortem debugging.
魔术命令
IPython有一些特殊命令(魔术命令Magic Command),它们有的为常见任务提供便利,有的则使你能够轻松控制IPtython系统的行为。魔术命令是以百分号%为前缀的命令。例如,我们可以通过 %timeit 这个魔术命令检测任意Python语句(如矩阵乘法)的执行时间(稍后对此进行详细讲解):
End of explanation
%reset?
Explanation: 魔术命令可以看作运行于IPython系统中的命令行程序。它们大都还有一些“命令行”,使用“?”即可查看其选项
End of explanation
a = 1
a
'a' in get_ipython().user_ns  # in newer IPython the old _ip object is gone; use get_ipython().user_ns
%reset -f
'a' in get_ipython().user_ns
Explanation: 上面执行后,会跳出它的docstring
End of explanation
#在terminal 输入
ipython --pylab
#回显中会出现部分关于matplotlib的字段
#IPython 6.2.0 -- An enhanced Interactive Python. Type '?' for help.
#Using matplotlib backend: Qt5Agg
Explanation: 常用的python魔术命令
| 命令 | 功能|
| ------------- | ------------- |
| %quickref | 显示IPython的快速参考 |
| %magic | 显示所有魔术命令的详细文档 |
| %debug | 从最新的异常跟踪的底部进入交互式调试器|
| %hist | #打印命令的输入(可选输出)历史 |
| %pdb | 在异常发生后自动进入调试器 |
| %paste | 执行粘贴版中的python代码 |
| %reset | 删除interactive命名空间中的全部变量、名称 |
| %page OBJECT | 通过分页器打印输出 OBJECT |
| %run script.py | 在IPython中执行脚本 |
| %run statement | 通过cProfile执行statement,并打印分析器的输出结果|
| %time statement | 报告statement的执行时间|
| %timeit statement | 多次执行statement以计算系统平均执行时间。对那些执行时间非常小的代码很有用|
| %who、%who_ls、%whos | 显示interactive命名空间中定义的变量,信息级别、冗余度可变 |
| %xdel variable | 删除variable,并尝试清楚其在IPython中的对象上的一切引用 |
基于Qt的富GUI控制台
IPython团队开发了一个基于Qt框架(其摩的是为终端应用程序提供诸如内嵌图片、多行编辑、语法高亮之类的富文本编辑功能)的GUI控制平台。如果你已经安装了PyQt或者Pyside,使用下面命令来启动的话即可为其添加绘图功能。
ipython qtconsole --pylab=inline
Qt控制台可以通过标签页的形式启动多个IPython进程,这就使得我们可以在多个任务之间轻松地切换。它也开业跟IPython HTML Notebote (即我现在用的jupyter noteboo)共享同一个进程,稍后我们对此进行演示说明。
matplotlib 集成与pylab模式
导致Ipython广泛应用于科学计算领域的部分原因是它跟matplotlib这样的库以及GUI工具集默契配合。
通常我们通过在启动Ipython时候添加--pylab标记来集成matplotlib
End of explanation
#原书给了一个在ipython命令行的例子
#但是,我这里用jupyter notebook来进行演示
# 我这里的代码跟原书可能不是很相同,
#我参考的是matplotlib image tutorial
%matplotlib inline
import matplotlib.image as mpimg
import numpy as np
import matplotlib.pyplot as plt
img=mpimg.imread('pydata-book/ch03/stinkbug.png')
plt.imshow(img)
#Here, we use Pillow library to resize the figure
from PIL import Image
import matplotlib.pyplot as plt
img = Image.open('pydata-book/ch03/stinkbug.png')
img1 = img
img.thumbnail((64,64), Image.ANTIALIAS) ## resizes image in-place
img1.thumbnail((256,256), Image.ANTIALIAS)
imgplot = plt.imshow(img)
img1plot = plt.imshow(img1)
%matplotlib inline
import matplotlib.pylab as plab
from numpy.random import randn
plab.plot(randn(1000).cumsum())
Explanation: 上述的操作会导致几个结果:
IPython 会启动默认GUI后台集成,这样matplotib绘图窗口创建就不会出现问题;
Numpy和matplotlib的大部分功能会被引入到最顶层的interactive命名空间以产生一个交互式的计算环境(类似matlab等)。也可以通过%gui对此进行手工设置(详情请执行%gui?)
End of explanation
#在ipython terminal执行
%run chapter03/simple01.py
Explanation: 使用命令历史
IPython 维护着一个位于硬盘上的小型数据库。其中含有你执行过的每条命令的文本。这样做有几个目的:
只需很少的按键次数即可搜索、自动完成并执行之前已经执行过的命令
在会话间持久化历史命令
将输入/输出历史纪录到日志中去
搜索并重用命令历史
IPython倡导迭代、交互的开发模式:我们常常发现自己总是重复一些命令,假设我们已经执行了
End of explanation
a=3
a
b=4
b
__
c=5
c
_
Explanation: 如果我们想在修改了simple01.py(当然也可以不改)后再次执行上面的操作,只需要输入 %run 命令的前几个字符并按下“ctrl+P”键或者向上箭头就会在命令历史的第一个发现它. (可能是因为我用的是git bash on windows,我自己并未测试成功书中的这个操作;但是在Linux中,我测试是有效的)。此外,ctrl-R可以实现部分增量搜索,跟Unix shell中的readline所提供的功能一样,并且ctrl-R将会循环搜索命令历史中每一条与输入相符的行。
例如,第一次ctrl-R后,我输入了c,ipython返回给我的是:
In [6]: c=a+b
I-search backward: c
再按依次ctrl-R,则变成了历史中含c这个关键字的另一个命令
In [6]: c = c + 1
I-search backward: c
输入和输出变量
IPython shell和jupyter notebook中,最近的两个输出分别保存在 _ 和 __ 两个变量中
End of explanation
foo = 'bar'
foo
_i9
_9
Explanation: 输入的文本被保存在名为 _iX 的变量中,其中X是输入行的行号。每个输入变量都有一个对应的输出变量 _X。例如:
End of explanation
%hist
Explanation: 由于输入变量是字符串,因此可用python的exec关键字重新执行: exec _i9
有几个魔术命令可用于输入、输出历史。%hist用于打印全部或部分历史,可以选择是否带行号
End of explanation
%reset
a #由于上面已经清理了命名空间,所以python并不知道a是多少。
Explanation: %reset 用于清空 interactive 命名空间,并可选择是否清空输入和输出缓存。%xdel 用于从IPython系统中移除特定对象的一切引用。
End of explanation
%logstart
Explanation: 注意:在处理大数据集时,需注意IPython的输入和输出历史,它会导致所有对象引用都无法被垃圾收集器处理(即释放内存),即使用del关键字将变量从interactive命名空间中删除也不行。对于这种情况,谨慎地使用%xdel和%reset将有助于避免出现内存方面的问题。
记录输入和输出
IPython能够记录整个控制台会话,包括输入和输出。执行 %logstart 即可开始记录日志
End of explanation
%logstart?
Explanation: IPython的日志功能开在任何时刻开气,以便记录整个会话。%logstart的具体选项可以参考帮助文档。此外还可以看看几个与之配套的命令:%logoff, %logon, %logstate, 以及 %logstop
End of explanation
In [4]: my_current_dir = !pwd
In [5]: my_current_dir
Out[5]: ['/home/ywfang']
返回的python对象my_current_dir实际上是一个含有控制台输出结果的自定义列表类型。
Explanation: 与操作系统交互
IPython 的另一重要特点就是它跟操作系统的shell结合地非常紧密。即我们可以直接在IPython中实现标准的Windows或unix命令行活动。例如,执行shell命令、更改目录、将命令的执行结果保存在Python对象中等。此外,它还提供了shell命令别名以及目录书签等功能。
下表总结了用于调用shell命令的魔术命令及其语法。本笔记后面还会介绍这些功能。
| 命令 | 说明|
| ------------- | ------------- |
| !cmd | 在系统shell中执行cmd |
| output = !cmd args | 执行cmd,将stdout存放在output中|
| %alias alias_name cmd | 为系统shell命令定义别名|
| %bookmark | 使用IPtyhon的目录书签功能|
| %cd directory | 将系统工作目录更改为directory|
| %pwd | 返回当前工作目录 |
| %pushed directory | 将当前目录入栈,并转向目标目录 (这个不懂??)|
| %popd | 弹出栈顶目录,并转向该目录 |
| %dirs | 返回一个含有当前目录栈的列表 |
| %dhist | 打印目录访问历史 |
| %env | 以dict形式返回系统环境变量 |
shell 命令和别名
在 IPython 中,以感叹号开头的命令行表示其后的所有内容需要在系统shell中执行。In other words, 我们可以删除文件(如rm或者del)、修改目录或执行任意其他处理过程。甚至我们还可启动一些将控制权从IPython手中夺走的进程(比如另外再启动一个Python解释器):
yang@comet-1.edu ~ 19:17:51 >ipython
Python 3.6.1 |Continuum Analytics, Inc.| (default, May 11 2017, 13:09:58)
Type 'copyright', 'credits' or 'license' for more information
IPython 6.1.0 -- An enhanced Interactive Python. Type '?' for help.
In [1]:
In [1]: !python
Python 3.6.1 |Continuum Analytics, Inc.| (default, May 11 2017, 13:09:58)
[GCC 4.4.7 20120313 (Red Hat 4.4.7-1)] on linux
Type "help", "copyright", "credits" or "license" for more information.
此外,还可将shell命令的控制台输出存放到变量中,只需要将 !开头的表达式赋值给变量即可。例如在Linux中
End of explanation
#在ipython shell中
In [1]: foo = 'note*'
In [2]: !ls $foo
notebook.log
Explanation: 在使用!时,IPython 还允许使用当前环境中定义的python值。只需在变量名前面加上美元符号($)即可:
End of explanation
%bookmark db D:\PhDinECNU #windows中的写法;如果是Linux,应该为/d/PhDinECNU/
%bookmark dt D:\temp
%cd db
Explanation: 魔术命令 %alias 可以为shell命令自定义简称。例:
In [3]: %alias ll ls -l
In [4]: ll
total 426
drwxr-xr-x 1 YWFANG 197609 0 9月 21 22:47 appendix-A
-rw-r--r-- 1 YWFANG 197609 47204 10月 8 10:48 appendix-A-note.ipynb
可以一次执行多条命令,只需要将她们写在一行并以分号隔开(在Windows中,这个可能不可行,但是Linux可以通过)
In [3]: %alias test_fang (ls -l; cd ml; ls -l; cd ..)
In [4]: test_fang
total 211
drwxr-xr-x 2 ywfang yun112 2 Aug 22 18:45 Desktop
-rw-r--r-- 1 ywfang yun112 11148 Jul 2 22:02 bashrc-fang-20170703
drwxr-xr-x 9 ywfang yun112 9 Jul 7 01:45 glibc-2.14
drwxr-xr-x 3 ywfang yun112 3 Jul 2 23:10 intel
-rwxr-xr-x 1 ywfang yun112 645 Sep 19 04:51 jupter_notebook
drwxr-xr-x 3 ywfang yun112 5 Jul 7 18:51 materials
drwxr-xr-x 20 ywfang yun112 21 Aug 22 18:02 miniconda3
drwxr-xr-x 3 ywfang yun112 3 Sep 4 18:39 ml
-rw-r--r-- 1 ywfang yun112 826 Sep 30 08:35 notebook.log
drwxr-xr-x 3 ywfang yun112 4 Aug 22 18:21 pwwork
drwxr-xr-x 6 ywfang yun112 14 Aug 22 19:04 software
drwxr-xr-x 5 ywfang yun112 6 Sep 4 18:56 tensorflow
drwxr-xr-x 5 ywfang yun112 6 Sep 4 18:53 tf1.2-py3.6
drwxr-xr-x 5 ywfang yun112 6 Sep 4 18:54 tf12-py36
drwxr-xr-x 6 ywfang yun112 518 Jun 20 01:33 tool
total 1
drwxr-xr-x 3 ywfang yun112 3 Sep 4 18:39 tensorflow
注意,IPython会在会话结束时立即"忘记"我们前面所定义的一切别名。如果要进行永久性的别名设置,需要使用配置系统。之后会进行介绍。
目录书签系统
IPython 有一个简单的目录书签系统,它使我们能保存常用目录的别名以便方便地快速跳转。比如,作为一个狂热的dropbox用户,为了能够快速地转到dropbox目录,可以定义一个书签:
End of explanation
%bookmark -l
Explanation: 定义好之后就可以在ipython shell(或jupyter notebook)中使用魔术命令%cd db来使用这些标签
如果书签的名字与当前工作目录中某个名字冲突时,可通过 -b 标记(起作用是覆写)使用书签目录。%bookmark的 -l 选项的作用是列出所有书签。
End of explanation
%reset
%cd D:\PhDinECNU\readingnotes\readingnotes\machine-learning\McKinney-pythonbook2013
%run pydata-book/ch03/ipython_bug.py
%debug
Explanation: 软件开发工具
IPython 不仅是交互式环境和数据分析环境,同时也非常适合做开发环境。在数据分析应用程序中,最重要的是要拥有正确的代码。IPython继承了Python内置的 pdb 调试器。 此外,IPython 提供了一些简单易用的代码运行时间以及性能分析的工具。
交互式调试器
IPython的调试器增加了 pdb ,如 Tab 键自动完成、语法高亮、为异常跟踪的每条信息添加上下文参考等。调试代码的最佳时机之一就是错误刚发生的时候。 %debug 命令(在发生异常之后立即输入)将会条用那个“时候”调试器,并直接跳转到发生异常的那个 栈帧 (stack frame)
End of explanation
执行%pdb命令可以让IPython在出现异常之后直接调用调试器,很多人都认为这一功能很实用。
Explanation: 在这个 pdb 调试器中,我们可以执行任意Python 代码并查看各个栈帧中的一切对象和数据,这就相当于解释器还留了条后路给我们。默认是从最低级开始的,即错误发生的地方,在上面ipdb>后面输入u (up) 或者 d (down) 即可在栈跟踪的各级别之间进行切换。
End of explanation
%run -d ./pydata-book/ch03/ipython_bug.py
Explanation: 此外调试器还能为代码开发提供帮助,尤其当我们想设置断点或者对函数/脚本进行单步调试时。实现这个目的的方式如下所述。
用带有 -d 选项的 %run 命令,这将会在执行脚本文件中的代码之前先打开调试器。必须立即输入 s(或step)才能进入脚本:
End of explanation
%run -d ./pydata-book/ch03/ipython_bug.py
Explanation: 在此之后,上述文件执行的方式就全凭我们自己说了算了。比如说,在上面那个异常中,我们可以在调用 works_fine 方法的地方设置一个断点,然后输入 c (或者 continue) 使脚本一直运行下去直到该断点时为止。
End of explanation
%run ./pydata-book/ch03/ipython_bug.py
%debug
Explanation: 如果想精通这个调试器,必须经过大量的实践。
虽然大部分 IDE 都会自带调试器,但是 IPython 中调试程序的方法往往会带来更高的生产率。
下面是常用的 IPython 调试器命令
|命令 | 功能 |
|------| ------|
| h(elp) | 显示命令列表 |
| help command | 显示 command 的文档 |
| c(ontinue) | 恢复程序的执行 |
| q(uit) | 推出调试器,不再执行任何代码 |
| b(reak) number | 在当前文件的第 number 行设置一个断点 |
| b path/to/file.py:number | 在制定文件的第 numbe 行设置一个断点 |
| s(tep) | 单步进入函数调用 |
| n(ext) | 执行当前行,并前进到当前级别的下一行 |
| u(p)/d(own) | 在函数调用栈中向上或者向下移动 |
| a(rgs) | 显示当前函数的参数 |
| debug statement | 在新的(递归)调试其中调用语句 statement |
| l(ist) statement | 显示当前行,以及当前栈级别上的上下文参考代码 |
| w(here) | 打印当前位置的完整栈跟踪 (包括上下文参考代码) |
调试器的其他使用场景
第一,使用 set_trace 这个特别的函数(以 pdb.set_trace 命名),这差不多可算作一种 “穷人的断点”(意思是这种断点方式很随便,是硬编码的)。下面这两个方法可能会在我们的日常工作中排上用场(我们也可像作者一样直接将其加入IPython配置中):
第一个函数 set_trace 很简单。我们可以将其放在代码中任何希望停下来查看一番的地方,尤其是那些发生异常的地方:
End of explanation
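The set_trace and debug helpers referred to here are not reproduced in this notebook. Below is a rough sketch of what they could look like on a reasonably recent IPython; the exact import path and behaviour should be treated as assumptions rather than the book's exact code.
import sys
from IPython.core.debugger import Pdb

def set_trace():
    # drop into the IPython-flavoured pdb at the caller's frame
    Pdb().set_trace(sys._getframe().f_back)

def debug(f, *args, **kwargs):
    # run an arbitrary function under the debugger
    pdb = Pdb()
    return pdb.runcall(f, *args, **kwargs)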
import time
iterations = 100                      # placeholder number of repetitions
start = time.time()
for i in range(iterations):
    pass                              # to do something
elapsed_per = (time.time() - start) / iterations
Explanation: 测试代码执行的时间: %time 和 %timeit
对于大规模数据分析,我们有时候需要对时间有个规划和预测。特别是对于其中最耗时的函数。IPython中可以轻松应对这种情况。
使用内置的 time 模块,以及 time.clock 和 time.time 函数 手工测试代码执行时间是令人烦闷的事情,因为我们必须编写许多一样的公式化代码:
End of explanation
# a huge string array
strings = ['foo', 'foobar', 'baz', 'qux', 'python', 'God']*100000
method1 = [x for x in strings if x.startswith('foo')]
method2 = [x for x in strings if x[:3]=='foo']
#These two methos look almost same, but their performances are different.
# See below, I use %time to calculate the excutable time.
%time method1 = [x for x in strings if x.startswith('foo')]
%time method2 = [x for x in strings if x[:3]=='foo']
Explanation: 由于这是一个非常常用的功能,所以IPython提供了两个魔术工具 %time 和 %timeit 来自动完成该过程。%time 一次执行一条语句,然后报告总的执行时间。假设我们有一大堆字符串,希望对几个“能选出具有特殊前缀的字符串”的函数进行比较。下面是一个拥有60万字字符串的数组,以及两个不同的“能够选出其中以foo开头的字符串”的方法:
End of explanation
%timeit method = [ x for x in strings if x.startswith('foo')]
%timeit method = [x for x in strings if x[:0]=='foo']
Explanation: Wall time是我们感兴趣的数字。所以,看上去第一个方法耗费了接近2倍的时间,但是这并非一个非常精确的结果。如果我们队相同语句多次执行%time的话,就会发现其结果是变化的。为了得到更加精确的结果,我们需要使用魔术函数 %timeit。对于任意语句,它会自动多次执行以产生一个非常精确的平均执行时间
End of explanation
x = 'foobar'
y = 'foo'
%timeit x.startswith(y)
%timeit x[:3]==y
Explanation: 这个很平淡无奇的离子告诉我们这样一个道理:我们有必要了解Python标准库、Numpy、Pandas 以及 本书所用其他库的性能特点。在大型数据分析中,这些不起眼的毫秒数会不断累积产生蝴蝶效应。
对于那些执行时间非常短(甚至是微妙 1e-6 s;或者 纳秒 1e-9 s)的分析语句和函数而言,%timeit 是非常有用的。虽然对于单次执行而言,这些时间小到几乎可以忽略不计。但是我们只要举一个例子,就会发现我们很有必要“分秒必争”:
同样执行100万次一个20微妙的函数,所化时间要比一个5微妙的多出15秒。
在上面我运行的那个例子中,我们可以直接对两个字符串运算进行比较,以了解其性能特点:
End of explanation
import numpy as np
from numpy.linalg import eigvals
def run_experiment(niter = 100):
K = 100
results = []
for _ in range(niter):
mat = np.random.randn(K,K)
max_eigenvalue = np.abs(eigvals(mat)).max()
results.append(max_eigenvalue)
return results
some_results = run_experiment()
print('Largest one we saw: %s' %(np.max(some_results)))
Explanation: 基本性能分析: %run 和 %run -p
代码的性能分析跟代码执行时间密切关联,只是它关注的是耗费时间的位置。Python中,cProfile模块主要用来分析代码性能,它并非转为python设计。cProfile在执行一个程序代码或代码块时,会记录各函数所耗费的时间。
cProfile一般是在命令行上使用的,它将执行整个程序然后输出各个函数的执行时间。
下面,我们就给出了一个简单的例子:在一个循环中执行一些线性代数运算(即计算一个100 * 100 的矩阵的最大本征值绝对值)
End of explanation
!python -m cProfile chapter03/simple03.py
Explanation: 我们将上述脚本内容写入 simple03.py (目录为当前目录下的chapter03目录中),并且执行
End of explanation
!python -m cProfile -s cumulative chapter03/simple03.py
Explanation: 即使这里不明白脚本里面具体做的事情,那也没有关系,反正先这么照着书里先做着,感受下cProfile的作用。
我们可以看到,输出结果是按照函数名排序的(ordered by standard name)。这样就比较难看出哪些地方是最花时间的,因此通常用 -s 标记,换一种排序的规则:
End of explanation
%prun -l 7 -s cumulative run_experiment()
Explanation: 我们看到此时的排序规则为 Ordered by: cumulative time,这样我们只需要看 cumtime 列即可发现各函数所耗费的总计时间。 注意如果一个函数A调用了函数B,计时器并不会停止而重新计时。cProfile记录的是各函数调用的起始和结束时间,并依次计算总时间。
除了命令行用法外,cProfile 还可以通过编程的方式分析任意代码块的性能。IPython为此提供了一个方便的借口,即 %prun 命令和带 -p 选项的 %run。 %prun的格式跟 cProfile 的差不多,但它分析的是 Python 语句 而不是整个 .py 文件:
End of explanation
# A list of dotted module names of IPython extensions to load.
c.TerminalIPythonApp.extensions = [
'line_profiler',
]
#这个代码可以确认 line_profiler 是否被正常的安装和load
import line_profiler
line_profiler
Explanation: 在ipython terminal中,执行 %run -p -s cumulative chapter03/simple03.py也能达到上述效果,但是却无法退出IPython。
逐行分析函数性能
有时候,%prun (或者其他基于cProfile的性能分析手段)所得到的信息要么不足以说明函数的执行时间,要么难以理解(按函数名聚合?)。对于这种情形,我们可以使用一个叫做 line_profiler 的小型库。其中有一个新的魔术函数 %lprun,它可以对一个或者多个函数进行逐行性能分析。我们需要修改 IPython 配置以启用这个扩展.
For IPython 0.11+, you can install it by editing the IPython configuration file ~/.ipython/profile_default/ipython_config.py to add the 'line_profiler' item to the extensions list:
End of explanation
from numpy.random import randn
def add_and_sum(x,y):
added = x + y
summed = added.sum(axis=1)
return summed
def call_function():
x = randn(1000,1000)
y = randn(1000,1000)
return add_and_sum(x,y)
Explanation: line_profiler 可以通过编程方式使用,但是其更强大的一面在于与 Ipython 的交互使用。
假设我们有一个名为 prof_mod 的模块,其代码内容为(我们把prof_mode.py 保存在 chapter03目录下)
End of explanation
%run chapter03/prof_mode.py
x = randn(3000, 3000)
y = randn(3000,3000)
%prun add_and_sum(x,y) #因为我们这里只是测试 add_and_sum 这个函数,所以必须给它实参,所以上面我们给出了 x和y
Explanation: 如果我们想了解 add_and_sum 函数的性能,%prun 会给出如下所示的结果
End of explanation
%lprun -f func1 -f func2 statement_to_profile
Explanation: 执行的结果为:
当我们启用 line_profiler 这个扩展后,就会出现新的魔术命令 %lprun。 用法上唯一的区别就是: 必须为 %lprun 指明想要测试哪个或哪些函数。%lprun 的通用语法为:
End of explanation
%lprun -f add_and_sum add_and_sum(x,y)
Explanation: 在本例子中,我们想要测试 add_and_sum,于是执行
End of explanation
%load_ext line_profiler
Explanation: 网上找了下别人也遇到了和我一样的错误,stackoverflow上面有解决方案:
End of explanation
%lprun -f add_and_sum add_and_sum(x,y)
Explanation: 然后我们再执行一次
End of explanation
%lprun -f add_and_sum -f call_function call_function()
综上,当我们需要测试一个程序中的某些函数时,我们需要使用这两行代码:
%load_ext line_profiler
%lprun -f func1 -f func2 statement_to_profile
Explanation: 这个结果就容易理解了许多。这里我们测试的只是 add_and_sum 这个函数。上面那个模块中还有一个call_function 函数,我们可以结合 add_and_sum 一起测试,于是最终我们的命令成为了这个样子:
End of explanation
import numpy as np
import pandas as pd
print('hello world!')
Explanation: 通常我们会用 %prun (cProfile) 做宏观性能分析,而用 %lprun 来做 微观的性能分析。
注意,在使用 %lprun 时,之所以必须显示指明待测试函数的函数名,是因为“跟踪”每一行代码的时间代价是巨大的。对不感兴趣的函数进行跟踪会对分析结果产生很显著的影响。
IPython HTML Notebook
IPthon HTML Notebook,即现在的 jupyter notebook。这个其实在我整个笔记中都已经在使用了。notebook项目最初由 Brian Graner 领导的 Ipython 团队从 2011 年开始开发。目前已被广泛使用于开发和数据分析。
首先来看个导入图标的例子,其实这个笔记的开头,我也已经展示过部分这样的功能
End of explanation
import numpy as np
import pandas as pd
tips = pd.read_csv('./pydata-book/ch08/tips.csv')
tips.head()
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
img = plt.imread('pydata-book/ch03/stinkbug.png')
img(figsize = (4,4))
plt.imshow(img)
Explanation: 此处补充一个书上 导入图片的一个例子:
End of explanation
jupyter notebook
Explanation: jupyter notebook是一种基于 JSON 的文档格式 .ipynb, 这种格式是的我们可以轻松分享代码,分析结果,特别是展示图标。目前在各种 Python 研讨会上,一种流行的演示手段就是使用 IPython Notebook,然后再讲 .ipynb 文件发布到网上供所有人参考。
Jupyter Notebook 是一个运行于命令行上的轻量级服务器进程。执行下面代码即可启动
End of explanation
import some_lib
x = 4
y = [1,34,5,6]
result = some_lib.get_answer(x,y)
Explanation: 如果想要图标以inline方式展示,可以在打开notebook后加入 %matplotlib --inline 或者 %pylab --inline
利用IPython 提高代码开发效率的几点提示
使用 IPython,可以让代码的结果更容易交互和亦欲查看。特别是当执行出现错误的时候,IPython 的交互性可以带来极大的便利
重新加载模块依赖项
在 Python 中,当我们输入 import some_lib 时候,some_lib 中的代码就会被执行,且其中所有的变量、函数和引入项都会保存在一个新建立的 some_lib 模块命名空间中。下次再输入 import some_lib 时,就会得到这个模块命名空间的一个引用。而这对于 IPython 的交互式代码开发模式就会有一个问题。
比如,用 %run 执行的某段脚本中包含了某个刚刚做了修改的模块。假设我们有一个 sample_script.py 文件,其中有如下代码:
End of explanation
import some_lib
reload(some_lib)
x = 4
y = [1,34,5,6]
result = some_lib.get_answer(x,y)
Explanation: 如果在执行了 %run sample_script.py 后又对 some_lib.py 进行了修改,下次再执行 %run sample_script.py 时候,将仍然会使用老版本的some_lib。其原因在于python是一种“一次加载”系统。不像 matplab等,它会自动应用代码修改。
那么怎么解决这个问题呢?
第一个办法是使用内置的reload函数,即将 sample_script.py 修改成
End of explanation
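A small caveat for the reload-based version above: in Python 3, reload() is no longer a builtin, so the equivalent (assuming the same hypothetical some_lib module used in the example) would go through importlib instead.
import importlib
import some_lib            # hypothetical module, as in the example above
importlib.reload(some_lib)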
import this
Explanation: 这样,就可以保证每次执行 sample_script.py 时候都能使用最新的 some_lib 了。不过这个办法有个问题,当依赖变得更强时,就需要在很多地插入 reload.
第二个办法可以弥补上述第一个办法的弊端。IPython 提供了一个特殊的 dreload 函数 (非魔术函数) 来解决模块的“深度”重加载。如果执行 import some_lib 之后在输入 derealod(some_lib),则它会尝试重新加载 some_lib 及其所有的依赖项。遗憾的是,这个办法也不是“屡试不爽”的,但倘若失效的,重新启动 IPython 就可以解决所有加载问题。
代码设计提示
作者说这个问题不好讲,但是他在日常生活中的确发现了一些高层次的原则。
保留有意义的对象和数据
扁平结构要比嵌套结构好:嵌套结构犹如洋葱,想要调试需要剥掉好多层。(这种思想源自于Zen of Python by Tim Peters. 在jupyter notebook中输入 import this可以看到这首诗)
无惧大文件。这样可以减少模块的反复加载,编辑脚本时候也可以减少跳转。维护也更加方便。维护更大的模块会更实用且更加符合python的特点。
End of explanation
class Message:
def __init__(self, msg):
self.msg = msg
Explanation: 高级python功能
让你的类对IPython更加友好
IPython 力求为各种对象呈现一个友好的字符串表示。对于许多对象(如字典、列表、组等),内置的pprint 模块就能给出漂亮的格式。但是对于我们自己所定义的那些类,必须自己格式化进行输出。假设我们以后下面这个简单的类:
End of explanation
x = Message('I have secret')
x
Explanation: 如果像下面这样写,我们会发现这个类的默认输出很不好看:
End of explanation
class Message:
def __init__(self,msg):
self.msg = msg
def __repr__(self):
return('Message: %s' % self.msg)
x = Message('I have a secret')
x
Explanation: 由于IPython会获取__repr__方法返回的字符串(具体方法是 output = repr(obj)),并将其显示到控制台上。因此,我们可以为上面那个类添加一个简单的 repr 方法以得到一个更有意义的输出形式:
End of explanation
ipython profile create secret_project
#这会创建一个新的配置文件,目录在 :~/.ipython/profile_secret_project/ipython_config.py
Explanation: 个性化和配置
IPython shell 在外观和行为方面的大部分内容都是可以进行配置的。下面是能够通过配置做的部分事情:
修改颜色方案
修改输入输出提示符
去掉 out 提示符跟下一个 In 提示符之间的空行
执行任意 Python 语句。这些语句可以用于引入所有常用的东西,还可以做一些你希望每次启动 IPython 都发生的事情。
启用 IPython 扩展,如 line_profiler 中的魔术命令 %lprun
定义我们自己的魔术命令或者系统别名
所有这些设置都在一个叫做 ipython_config.py 的文件中,可以在 ~/.config/ipython 目录中找到。Linux和windows系统目录略有点小区别。对于我自己来说,我在git bash on windows 上的目录是:~/.ipython/profile_default/ipython_config.py
一个实用的功能是,利用 ipython_config.py,我们可以拥有多个个性化设置。假设我们想专门为某个特定程序或者项目量身定做 IPython 配置。输入下面这样的命令即可新建一个新的个性化配置文件:
End of explanation
ipython --profile=secret_project
Explanation: 然后编辑新建的这个 profile_secret_project 中的配置文件,再用如下方式启动它:
End of explanation |
2,506 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Example 5
Step1: A very simple pipeline to show how registers are inferred.
Step2: Simulation of the core | Python Code:
import pyrtl
pyrtl.reset_working_block()
class SimplePipeline(object):
def __init__(self):
self._pipeline_register_map = {}
self._current_stage_num = 0
stage_list = [method for method in dir(self) if method.startswith('stage')]
for stage in sorted(stage_list):
stage_method = getattr(self, stage)
stage_method()
self._current_stage_num += 1
def __getattr__(self, name):
try:
return self._pipeline_register_map[self._current_stage_num][name]
except KeyError:
raise pyrtl.PyrtlError(
'error, no pipeline register "%s" defined for stage %d'
% (name, self._current_stage_num))
def __setattr__(self, name, value):
if name.startswith('_'):
# do not do anything tricky with variables starting with '_'
object.__setattr__(self, name, value)
else:
next_stage = self._current_stage_num + 1
pipereg_id = str(self._current_stage_num) + 'to' + str(next_stage)
rname = 'pipereg_' + pipereg_id + '_' + name
new_pipereg = pyrtl.Register(bitwidth=len(value), name=rname)
if next_stage not in self._pipeline_register_map:
self._pipeline_register_map[next_stage] = {}
self._pipeline_register_map[next_stage][name] = new_pipereg
new_pipereg.next <<= value
Explanation: Example 5: Making use of PyRTL and Introspection.
The following example shows how pyrtl can be used to make some interesting
hardware structures using python introspection. In particular, this example
makes a N-stage pipeline structure. Any specific pipeline is then a derived
class of SimplePipeline where methods with names starting with "stage" are
stages, and new members with names not starting with "_" are to be registered
for the next stage.
Pipeline builder with auto generation of pipeline registers.
End of explanation
class SimplePipelineExample(SimplePipeline):
def __init__(self):
self._loopback = pyrtl.WireVector(1, 'loopback')
super(SimplePipelineExample, self).__init__()
def stage0(self):
self.n = ~ self._loopback
def stage1(self):
self.n = self.n
def stage2(self):
self.n = self.n
def stage3(self):
self.n = self.n
def stage4(self):
self._loopback <<= self.n
Explanation: A very simple pipeline to show how registers are inferred.
End of explanation
simplepipeline = SimplePipelineExample()
sim_trace = pyrtl.SimulationTrace()
sim = pyrtl.Simulation(tracer=sim_trace)
for cycle in range(15):
sim.step({})
sim_trace.render_trace()
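# Optional check (a sketch, not part of the original example): list the registers that
# the pipeline builder inferred, to see the generated pipereg_<i>to<i+1>_<name> naming.
# It assumes PyRTL's working_block()/wirevector_subset API behaves as expected here.
for reg in pyrtl.working_block().wirevector_subset(pyrtl.Register):
    print(reg.name, len(reg))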
Explanation: Simulation of the core
End of explanation |
2,507 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
[Your name]
Homework 1
The maximum score of this homework is 100+20 points. Grading is listed in this table
Step1: 1.2 Replace rare words (10 points)
Write a function that takes a text and a number $N$ as parameters and replaces every word other than the most common $N$ in the text with a common symbol. The symbol by default is __RARE__ but it can be redefined.
Step2: 1.3 Levenshtein distance (10 points)
Write a function that returns the Levenshtein distance of two strings.
https://en.wikipedia.org/wiki/Levenshtein_distance
Step3: Exercise 2, Mutable string (40 points)
Python strings are immutable. Create a mutable string class.
Implement the following features (see the tests below).
initialization from str.
assignment (i.e. modifying a character),
if the index is out of range, it should fill the blanks with spaces (see the tests below)
conversion to built-in str and list. The latter is a list of the characters.
addition with other MutableString instances and built-in strings,
multiplication with integers. Multiplying a string with 3 means repeating the string 3 times.
built-in len function,
comparison with strings,
iteration.
Step4: Exercise 3 - Text generation (30+20 points)
3.1 (Same as a laboratory exercise) Write a function that computes N-gram frequencies in a string. (0 point)
Step5: 3.2 Define a text generator function. (25 points)
The function takes 4 arguments | Python Code:
def group_by_retval(sequence, grouper_func):
# TODO
l = ["ab", 12, "cd", "d", 3]
assert(group_by_retval(l, lambda x: isinstance(x, str)) == {True: ["ab", "cd", "d"], False: [12, 3]})
assert(group_by_retval([1, 1, 2, 3, 4], lambda x: x % 3) == {0: [3], 1: [1, 1, 4], 2: [2]})
Explanation: [Your name]
Homework 1
The maximum score of this homework is 100+20 points. Grading is listed in this table:
| Grade | Score range |
| --- | --- |
| 5 | 85+ |
| 4 | 70-84 |
| 3 | 55-69 |
| 2 | 40-54 |
| 1 | 0-39 |
Most exercises include tests which should pass if your solution is correct.
However successful test do not guarantee that your solution is correct.
You are free to add more tests.
Exercise 1, small exercises (30 points)
1.1 Groupby function (10 points)
Write a function that takes a sequene and a callable as parameters. The function should call its second parameter on every element on the sequence and group them by return value. It should return a dictionary whose keys are the return values of the callable and values are lists of sequence elements that the callable return that value to.
End of explanation
#TODO
assert(replace_rare_words("a b a b b c", 2) == "a b a b b __RARE__")
assert(replace_rare_words("a b a b b c", 2, rare_symbol="rare") == "a b a b b rare")
Explanation: 1.2 Replace rare words (10 points)
Write a function that takes a text and a number $N$ as parameters and replaces every word other than the most common $N$ in the text with a common symbol. The symbol by default is __RARE__ but it can be redefined.
End of explanation
# TODO
assert(levenshtein("abc", "ab") == 1)
assert(levenshtein("abc", "abc") == 0)
assert(levenshtein("abc", "ab c") == 1)
assert(levenshtein("", "abc") == 3)
Explanation: 1.3 Levenshtein distance (10 points)
Write a function that returns the Levenshtein distance of two strings.
https://en.wikipedia.org/wiki/Levenshtein_distance
End of explanation
class MutableString(object):
#TODO
m1 = MutableString("abc")
m1[1] = "d"
assert(m1[1] == "d")
m1[1] = "b"
m1[4] = "d"
assert(m1[3] == " " and m1[4] == "d" and len(m1) == 5)
assert(list(m1) == list("abc d"))
assert(str(m1) == "abc d")
m1 = MutableString("abc")
m2 = m1 + "def"
assert(isinstance(m2, MutableString))
assert(m2 == "abcdef")
m3 = m1 * 3
assert(isinstance(m3, MutableString) and m3 == "abcabcabc")
m2[0] = "A" # modifying m2 should not change m1
assert(m1 == "abc")
# right addition with strings
m1 = MutableString("abc")
m2 = "def" + m1
assert(m2 == "defabc")
Explanation: Exercise 2, Mutable string (40 points)
Python strings are immutable. Create a mutable string class.
Implement the following features (see the tests below).
initialization from str.
assignment (i.e. modifying a character),
if the index is out of range, it should fill the blanks with spaces (see the tests below)
conversion to built-in str and list. The latter is a list of the characters.
addition with other MutableString instances and built-in strings,
multiplication with integers. Multiplying a string with 3 means repeating the string 3 times.
built-in len function,
comparison with strings,
iteration.
End of explanation
# TODO
assert(count_ngram_freqs("abcc", 1) == {"a": 1, "b": 1, "c": 2})
assert(count_ngram_freqs("abccab", 2) == {"ab": 2, "bc": 1, "cc": 1, "ca": 1})
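# Illustrative sketch for this ungraded 3.1 exercise only (one possible way to fill the
# TODO above; kept under a different name so it does not replace your own solution).
from collections import Counter

def count_ngram_freqs_example(text, N):
    # slide a window of length N over the string and count each substring
    return dict(Counter(text[i:i + N] for i in range(len(text) - N + 1)))

assert(count_ngram_freqs_example("abcc", 1) == {"a": 1, "b": 1, "c": 2})
assert(count_ngram_freqs_example("abccab", 2) == {"ab": 2, "bc": 1, "cc": 1, "ca": 1})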
Explanation: Exercise 3 - Text generation (30+20 points)
3.1 (Same as a laboratory exercise) Write a function that computes N-gram frequencies in a string. (0 point)
End of explanation
# TODO
toy_freqs = count_ngram_freqs("abcabcda", 3)
gen = generate_text("abc", 5, toy_freqs, 3)
assert(len(gen) == 5)
assert(set(gen) <= set("abcd"))
Explanation: 3.2 Define a text generator function. (25 points)
The function takes 4 arguments:
starting text (at least $N-1$ long,
target length: length of the output string,
n-gram frequency dictionary,
N, length of the n-grams.
The function generates one character at a time given the last $N-1$ characters.
The probability of c being generated after ab is defined as:
$$
P(c | a b ) = \frac{\text{freq}(a b c)}{\text{freq}(a b)},
$$
where $\text{freq}(a b c)$ is obtained by counting how many times abc occurs in the training corpus (count_ngram_freqs function).
If the generated text ends with a $N-1$-gram that does not occur in the training data, generate the next character from the full distribution.
End of explanation |
2,508 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Statsmodels - glm, mixed models, survival analysis
Data scientists normally use R for statistical heavy lifting, with a few exceptions
Step1: Above, the Region variable was treated as categorical, and the central (C) region was treated as the intercept. To explicitly force categorical treatment and exclude the intercept, as well as exemplify advanced functions, we can do | Python Code:
import statsmodels.api as sm
import statsmodels.formula.api as smf
import numpy as np
import pandas
df = sm.datasets.get_rdataset("Guerry", "HistData").data
df = df[['Lottery', 'Literacy', 'Wealth', 'Region']].dropna()
df.head()
mod = smf.ols(formula='Lottery ~ Literacy + Wealth + Region', data=df)
res = mod.fit()
print(res.summary())
Explanation: Statsmodels - glm, mixed models, survival analysis
Data scientists normally use R for statistical heavy lifting, with a few exceptions:
- When a statistical method requires a mature, production level language (Java, Python, C)
- When a newer method is better implemented in a newer language (linear mixed models in Julia for example)
- When the method is needed as part of a library foreign to R (deep learning with Python)
So there is quite a lot of benefit to getting advanced statistical models working in Python! Statsmodels has something for everyone; here are a few examples:
- Ordinary and generalized linear models, with R-style formulas
- Linear (and generalized) mixed models
- Time series analysis
- Survival analysis
If you have a background in statistics with R, then you will appreciate the formula API. For example, let us redo the ordinary least squares example, but this time using multiple variables and the formula API. A small GLM sketch also follows below as an extra illustration.
End of explanation
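As a small illustration of the generalized-linear-model capability listed above, here is a sketch of the same kind of model fitted through the formula interface; the Gaussian family and the exact variable choice are arbitrary assumptions, not part of the original example.
glm_mod = smf.glm(formula='Lottery ~ Literacy + Wealth + C(Region)',
                  data=df, family=sm.families.Gaussian())
glm_res = glm_mod.fit()
print(glm_res.summary())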
def log_plus_1(x):
return np.log(x) + 1.0
res = smf.ols(formula='Lottery ~ log_plus_1(Literacy)*np.log(Wealth) + C(Region) - 1', data=df).fit()
print(res.params)
Explanation: Above, the Region variable was treated as categorical, and the central (C) region was treated as the intercept. To explicitly force categorical treatment and exclude the intercept, as well as exemplify advanced functions, we can do:
End of explanation |
2,509 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Loading and Formatting the Required Data
Step2: Helper Functions For The PreProcessing
Taken from the original letter merging python script
Step3: Compute the RV Coefficient on the Log Change Matrix
Step4: Plotting The Dissimilarity Scores
Without sorting
Step5: Sorted by median value
Step6: Sorted by 75th percentile
Step7: Normalize The Obtained Values
By the maximum median value obtained for the different letters
Step8: By the maximum 75h percentile obtained for the different letters
Step9: Plot one with respect to the other
Step10: Saving the obtained difficulty Metric
Here the one with the 75th percentile | Python Code:
# Load the database of letters and numbers
subject_folders_path = os.path.join(os.getcwd(), "DB_wacomPaper_v2")
subject_folders = os.listdir(subject_folders_path)
letters_db = dict()
trajectories = dict()
for subject in tqdm(subject_folders):
letters_db[subject] = dict()
trajectories[subject] = dict()
letters_path = os.path.join(subject_folders_path, subject)
letters_csv = [x for x in os.listdir(letters_path) if ".csv" in x]
for letter in letters_csv:
letter_path = os.path.join(letters_path, letter)
letter_name = letter.split(".csv")[0]
try:
letters_db[subject][letter_name] = pd.read_csv(letter_path)
x = letters_db[subject][letter_name]["x"]
y = -letters_db[subject][letter_name]["y"]
time = letters_db[subject][letter_name]["time"]
trajectories[subject][letter_name] = {"x":list(x), "y":list(-y), "t": list(time)}
except:
print("Failure : Letter {} from subject {}".format(letter_name, subject))
letters = list(trajectories[subject_folders[0]].keys())
unique_letters = list(set([l[0] for l in letters]))
print(unique_letters)
letters_db = dict()
for letter in unique_letters:
letters_db[letter[0]] = list()
for subject in trajectories.keys():
for letter in trajectories[subject]:
letters_db[letter[0]].append(trajectories[subject][letter])
Explanation: Loading and Formatting the Required Data
End of explanation
def distance(x1,y1,x2,y2):
return np.sqrt((x1-x2)**2+(y1-y2)**2)
def remove_redundant_points(x_pts, y_pts):
dists = [distance(x1, y1, x2, y2) for x1, y1, x2, y2 in zip(x_pts[:-1], y_pts[:-1], x_pts[1:], y_pts[1:])]
same_idx = [i for i in range(len(dists)) if dists[i] == 0]
x = [x_pts[i] for i in range(len(x_pts)) if i not in same_idx]
y = [y_pts[i] for i in range(len(y_pts)) if i not in same_idx]
return x, y
def evenly_spaced_interpolation(x1,y1,t1,x2,y2,t2, step = 0.01):
dx, dy = x2-x1, y2-y1
theta = math.atan2(dy, dx)
dist = np.sqrt(dx**2+dy**2)
if dist<step:
x = [x1,x2]
y = [y1,y2]
t = [t1, t2]
else:
n_pts = int(np.round(dist/step))+1
new_step = dist/(n_pts-1)
x_pts = [x1+i*new_step*math.cos(theta) for i in range(n_pts)]
y_pts = [y1+i*new_step*math.sin(theta) for i in range(n_pts)]
x, y = remove_redundant_points(x_pts, y_pts)
t = [i*(t2-t1)/len(x_pts)+t1 for i in range(len(x_pts))]
return {"x":x, "y":y, "t":t}
def uniformize_with_specific_step(x_pts, y_pts, t_pts, desired_step = 0.01):
densified_stroke = [evenly_spaced_interpolation(x1,y1,t1,x2,y2,t2) for x1, y1,t1, x2, y2, t2
in zip(x_pts[:-1], y_pts[:-1], t_pts[:-1], x_pts[1:], y_pts[1:], t_pts[1:])]
x, y = [s["x"] for s in densified_stroke], [s["y"] for s in densified_stroke]
t = [s["t"] for s in densified_stroke]
x, y, t = sum(x, []), sum(y, []), sum(t, [])
return x,y,t
def normalize_wrt_max(x_pts, y_pts):
dx = max(x_pts)-min(x_pts)
dy = max(y_pts)-min(y_pts)
x_pts = [x/max([dx,dy]) for x in x_pts]
y_pts = [y/max([dx,dy]) for y in y_pts]
x_pts = [x-min(x_pts)+0.0001 for x in x_pts]
y_pts = [y-min(y_pts)+0.0001 for y in y_pts]
return x_pts, y_pts
def interp(vector, numDesiredPoints):
if len(vector)>2:
t_current = np.linspace(0, 1, len(vector))
t_desired = np.linspace(0, 1, numDesiredPoints)
f = interpolate.interp1d(t_current, vector, kind='linear')
vector = f(t_desired).tolist()
return vector
def downsampleShape(x, y, t, numDesiredPoints):
    """change the length of a stroke with interpolation"""
if len(x)>2:
x = interp(x, numDesiredPoints)
y = interp(y, numDesiredPoints)
t = interp(t, numDesiredPoints)
return x,y,t
Explanation: Helper Functions For The PreProcessing
Taken from the original letter merging python script
End of explanation
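A quick sanity check on the helpers (a sketch, assuming letters_db has at least one letter with at least one trajectory): run a single recorded trajectory through the same normalize/upsample/downsample chain used below and confirm the resampled length.
_letter = sorted(letters_db.keys())[0]
_traj = letters_db[_letter][0]
_x, _y = normalize_wrt_max(_traj["x"], _traj["y"])
_t = [ti - min(_traj["t"]) + 1 for ti in _traj["t"]]
_x, _y, _t = uniformize_with_specific_step(_x, _y, _t, 0.001)
_x, _y, _t = downsampleShape(_x, _y, _t, 100)
print(_letter, len(_x), len(_y), len(_t))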
log_change = dict()
similarity_metric = dict()
for letter in tqdm(letters_db.keys()):
log_change[letter] = list()
# Compute the log change matrix
for traj in letters_db[letter]:
try:
x, y, t = traj["x"],traj["y"],[t-min(traj["t"])+1 for t in traj["t"]]
x, y = normalize_wrt_max(x, y) # Normalize wrt max
x, y, t = uniformize_with_specific_step(x, y, t, 0.001) # Upsample
x, y, t = downsampleShape(x, y, t, 100) # Downsample
vx = np.array([math.log(xi)-math.log(xj) for xi, xj in zip(x[1:], x[:-1])])
vy = np.array([math.log(yi)-math.log(yj) for yi, yj in zip(y[1:], y[:-1])])
vt = np.array([math.log(ti)-math.log(tj) for ti, tj in zip(t[1:], t[:-1])])
mat = np.vstack((vx, vy))
mat = np.vstack((mat, vt))
cov = np.cov(mat)
log_change[letter].append({"vx":vx, "vy":vy, "vt":vt, "mat":mat, "cov": cov})
except:
print("Error")
n_demos = len(log_change[letter])
# Compute the dissimilarity metric using the RV Coefficient
similarity_metric[letter] = list()
for i in range(n_demos):
X = log_change[letter][i]["mat"]
cov_ii = log_change[letter][i]["cov"]
trace_covii2 = np.trace(np.matmul(cov_ii,cov_ii.transpose()))
for j in range(i+1, n_demos):
Y = log_change[letter][j]["mat"]
cov_jj = log_change[letter][j]["cov"]
trace_covjj2 = np.trace(np.matmul(cov_jj,cov_jj.transpose()))
covij = np.matmul(X, Y.transpose())
covji = np.matmul(Y, X.transpose())
numerator = np.trace(np.matmul(covij,covji))
denominator = np.sqrt(trace_covii2*trace_covjj2)
metric_val = numerator/denominator
similarity_metric[letter].append(metric_val)
Explanation: Compute the RV Coefficient on the Log Change Matrix
End of explanation
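For reference, the RV coefficient between two data matrices $X$ and $Y$ is commonly defined through their covariance and cross-covariance matrices as
$
\begin{equation}
\mathrm{RV}(X, Y) = \frac{\operatorname{tr}(\Sigma_{XY}\,\Sigma_{YX})}{\sqrt{\operatorname{tr}(\Sigma_{XX}^{2})\,\operatorname{tr}(\Sigma_{YY}^{2})}}
\end{equation}
$
The computation above follows this structure, building the cross terms directly from the per-demonstration log-change matrices.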
labels = list(similarity_metric.keys())
data = list(similarity_metric.values())
df_labels = [[label for i in range(len(data[i]))] for i, label in enumerate(labels)]
df_labels = sum(df_labels, [])
df_data = sum(data, [])
df = pd.DataFrame({'value':df_data, 'group':df_labels})
ax = df.boxplot(column='value', by='group', showfliers=True,
positions=range(df.group.unique().shape[0]))
sns.pointplot(x='group', y='value', data=df.groupby('group', as_index=False).mean(), ax=ax)
Explanation: Plotting The Dissimilarity Scores
Without sorting
End of explanation
grouped = df.groupby(["group"])
df2 = pd.DataFrame({col:vals['value'] for col,vals in grouped})
meds = df2.median()
meds = meds.sort_values()
df2 = df2[meds.index]
ax = df2.boxplot(figsize=(10,5))
plt.xlabel("Letter")
plt.ylabel("Dissimilarity Metric")
plt.title("Dissimilarity Metric Sorted By Median Value")
plt.show()
fig = ax.get_figure()
fig.savefig("Dissimilarity_by_median.svg")
Explanation: Sorted by median value
End of explanation
quantile_letters = df2.quantile(0.75)
quantile_letters = quantile_letters.sort_values()
df2 = df2[quantile_letters.index]
ax = df2.boxplot(figsize=(10,5))
plt.xlabel("Letter")
plt.ylabel("Dissimilarity Metric")
plt.title("Dissimilarity Metric Sorted By the 75th Percentile")
plt.show()
fig = ax.get_figure()
fig.savefig("Dissimilarity_by_percentile.svg")
Explanation: Sorted by 75th percentile
End of explanation
median_similarities = dict()
for letter in similarity_metric.keys():
med_val = np.median(similarity_metric[letter])
median_similarities[letter] = med_val
sorted_by_value = sorted(median_similarities.items(), key=lambda kv: kv[1])
max_val = max(list(median_similarities.values()))
normalized_median_similarities = dict()
for letter in similarity_metric.keys():
med_val = np.median(similarity_metric[letter])
normalized_median_similarities[letter] = med_val/max_val
sorted_normalized_by_value_med = sorted(normalized_median_similarities.items(), key=lambda kv: kv[1])
Explanation: Normalize The Obtained Values
By the maximum median value obtained for the different letters
End of explanation
quantile_similarities = dict()
for letter in similarity_metric.keys():
quantile_similarities[letter] = np.percentile(similarity_metric[letter], 75)
sorted_by_value = sorted(quantile_similarities.items(), key=lambda kv: kv[1])
max_val = max(list(quantile_similarities.values()))
normalized_quantile_similarities = dict()
for letter in similarity_metric.keys():
normalized_quantile_similarities[letter] = np.percentile(similarity_metric[letter], 75)/max_val
sorted_normalized_by_value_quant = sorted(normalized_quantile_similarities.items(), key=lambda kv: kv[1])
Explanation: By the maximum 75th percentile obtained for the different letters
End of explanation
normalized_med_vals = [x[1] for x in sorted_normalized_by_value_med]
normalized_med_labels = [x[0] for x in sorted_normalized_by_value_med]
normalized_quant_vals = [x[1] for x in sorted_normalized_by_value_quant]
normalized_quant_labels = [x[0] for x in sorted_normalized_by_value_quant]
fig, ax = plt.subplots(figsize = (10,6))
plt.ylim((-6, 6))
plt.xlim((-0.05, 1.05))
med = ax.scatter(normalized_med_vals, [1 for i in range(len(normalized_med_vals))])
quant = ax.scatter(normalized_quant_vals, [-1 for i in range(len(normalized_quant_vals))])
y_offset = 0.25
prev_xpos = -100
prev_ypos = -100
for i, val in enumerate(normalized_med_vals):
x = normalized_med_vals[i]-0.005
y = 1+y_offset
if (x-prev_xpos<0.02):
y = prev_ypos+0.5
ax.annotate(normalized_med_labels[i], (x,y), size=12)
prev_xpos = x
prev_ypos = y
prev_xpos = -100
prev_ypos = -100
y_offset = 0.5
for i, val in enumerate(normalized_quant_vals):
x = normalized_quant_vals[i]-0.005
y = -1-y_offset
if (x-prev_xpos<0.02):
y = prev_ypos-0.55
ax.annotate(normalized_quant_labels[i], (x,y), size=12)
prev_xpos = x
prev_ypos = y
plt.xlabel("Letter Difficulty")
plt.legend((med, quant), ("Median value", "75th Percentile"), scatterpoints = 1)
plt.title("Letter Difficulty Based On Median Value Versus The 75th Percentile")
ax.axes.get_yaxis().set_visible(False)
fig.savefig("Dissimilarity_median_versus_percentile.svg")
Explanation: Plot one with respect to the other
End of explanation
import pickle
letter_difficulty_metric = dict(zip(normalized_quant_labels, normalized_quant_vals))
pkl_file = open("letter_difficulty_metric.pkl", 'wb')
data = pickle.dump(letter_difficulty_metric, pkl_file)
pkl_file.close()
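# The saved metric can later be loaded back with, e.g.:
# with open("letter_difficulty_metric.pkl", 'rb') as f:
#     letter_difficulty_metric = pickle.load(f)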
Explanation: Saving the obtained difficulty metric
Here we save the version based on the 75th percentile.
End of explanation |
2,510 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Hey there! Here's more-or-less the steps you'll be taking to reduce our data, and, using those reduced data, extract some flux-calibrated lightcurves of WR 124!
First things first, copy this file, standards.txt, adapt.py, and phot_tools.py into the directory where you want to do your work.
Step1: This notebook is organized into four sections
Step2: While this part is implicit in the reduction steps, keep in mind that all of our images have an 'overscan region,' which we'll need to fit and subtract from each image, and the order of the polynomial used to fit the overscan is a free parameter. Now let's start making master calibration images.
Let's first measure the bias level in all of our images --- i.e., the little bit of signal that is inherent to every exposure --- by taking the median of a series of zero-second exposures.
Step3: Load up the bias and take a look at it. Does it look like there are any systematic trends in the bias -- e.g., the top of the image has more bias than the bottom? Try messing with overscan_fit_degree. You can look at the image in ds9 or QFitsView, or just try loading it up in here!
Now, because our detector isn't cooled to absolute zero, it adds a little bit of signal (called the dark current), which gets stronger with time. If we 'expose' the detector (without letting any light get to it) for the same amount of time as our actual images, we'll have an estimate of the dark current in those images. Because the darks are also affected by the bias level, master_dark will subtract the bias from the dark frames before combining them.
Step4: Now load up the 30s dark. What is the typical value of the dark current? Is it higher or lower than you expected? Is there any structure in the image (i.e., does one part of the detector have more dark current than another?) If so, do you see the same structure in the bias?
Our final step in the calibration process is called 'flat fielding.' This takes into account the fact that the efficiency of our detector is a function of both what pixel you're looking at and of color. For example, the outer parts of the detector receive less light than the inner parts (this is called vignetting) or the filter may only cover part of the chip (most relevant for our H$\alpha$ images). Some pixels are just less efficient than others, and the efficiency is a function of wavelength! These effects imprint themselves on the science images, so we need to 'flatten' them out. We can construct flats by exposing the detector to a uniform source of light. After subtracting the bias level and dark current, any variation in the flat images is due to these effects. Let's construct a flat field for our H$\alpha$ images!
Step5: What does the master flat look like? What are the typical values of the pixels? Can you see the residual image of the filter?
Alright, we've made our calibration images. Let's reduce the Ha science images of WR124! The basic steps are
1. Fit and subtract the overscan region, then trim it off.
2. Subtract the residual bias level.
3. Subtract the dark current, scaled to the exposure times.
4. Divide by the normalized master_flat image. Why do we divide? If we think of the flat field like the 'efficiency' of the camera, then the measured image is the 'true' image times the flat field. To back out the true image, we just divide the measured image by the flat!
Step6: Load up one of the reduced images... what do you see?! You might need to mess with the scale parameters to see the entirety of the nebula. Does the rest of the image look 'flat'? I.e., if you ignore the stars, the sky should be uniformly bright.
Ok, now that you've messed with the reduction of a few images, and you like what the code is giving you, let's run these steps for every science image. run_pipeline_run is a function that first assembles lists of darks/biases/flats/science/etc, then creates master cals, and finally reduces all of the science images.
Step7: 2. Now that we've reduced our data, you can focus solely on the final images in reddir. This step involves going from measurements we might make of our images (which are specific to the instrument and night that the data were taken on) to calibrated values.
We want to perform aperture photometry on our images to measure the brightness of the objects in them. It's fairly straightforward, and the code in phot_tools.py does a lot of this for you, but you should know what it does. The basic steps are
Step8: To extract the photometry of an object at some position in some image, we'll use extract_photometry which you can call like this
Step9: Repeat the previous few steps for r and i!
Step10: 3. Now let's extract lightcurves of WR124. A lightcurve consists of three components | Python Code:
#Now, let's import some useful libraries
import numpy as np
from matplotlib import pyplot as plt
from adapt import *
from phot_tools import *
from glob import glob
import os
from astropy.io import fits
from astropy.coordinates import SkyCoord
from astropy.table import vstack, Table
%matplotlib inline
#What do all of these libraries do? If you aren't familiar with any of them, please ask!
Explanation: Hey there! Here's more-or-less the steps you'll be taking to reduce our data, and, using those reduced data, extract some flux-calibrated lightcurves of WR 124!
First things first, copy this file, standards.txt, adapt.py, and phot_tools.py into the directory where you want to do your work.
End of explanation
#First, point this notebook to the directory where your raw data are in (datadir), and
#directories where you want the reduced master calibrations (caldir) and reduced science
#images (reddir) to go. You'll probably want to make the directories first. The trailing
#slash in the name is important (the code breaks if you don't give it the trailing slash...)
datadir = '/path/to/the/data/'
caldir = '/path/to/caldir/'
reddir = '/path/to/reddir/'
Explanation: This notebook is organized into four sections:
1. Data reduction
2. Deriving photometric zero points
3. Extracting lightcurves and calibrating them
4. Searching for systematic trends.
1. First things first, we need to reduce our data; that is, convert from the number that is stored in each pixel (which takes into account aaaaallllll of the optics and disturbances and quantum mechanics that are between the detector and the sky) to a number that we really hope corresponds to the number of photons that actually hit the detector.
End of explanation
#Use glob to assemble lists of biases...
biaslist = glob(datadir+'string that glob can use to find biases. use * as a wildcard!')
print(biaslist) #this should be all biases...
#Now create the master bias. You'll have to decide some parameters. Play with
#overscan_fit_degree and see how it affects the output bias (which will now be
#in caldir/master_bias.fits). Do you notice any trend that affects each column of
#the bias? Try messing with overscan_fit_degree til it goes away. Overwrite determines
#what will happen if the bias already exists.
master_bias(biaslist=biaslist,overscan_fit_degree=?,caldir=caldir,overwrite=?)
Explanation: While this part is implicit in the reduction steps, keep in mind that all of our images have an 'overscan region,' which we'll need to fit and subtract from each image, and the order of the polynomial used to fit the overscan is a free parameter. Now let's start making master calibration images.
Let's first measure the bias level in all of our images --- i.e., the little bit of signal that is inherent to every exposure --- by taking the median of a series of zero-second exposures.
End of explanation
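For intuition, the overscan correction inside adapt.py amounts to fitting a low-order polynomial to the overscan level and subtracting it row by row before trimming. A rough sketch of the idea (assuming a 2D `image` array whose last `n_over` columns form the overscan strip -- the real implementation may differ in the details):
overscan_level = image[:, -n_over:].mean(axis=1)   # mean overscan value per row
rows = np.arange(image.shape[0])
fit = np.polyval(np.polyfit(rows, overscan_level, overscan_fit_degree), rows)
image_corrected = image[:, :-n_over] - fit[:, None]   # subtract the fit, trim the overscan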
#Because darks are dependent on exposure time, we'll have to make one dark for each exposure
#time. For now, just make a 30s dark
darklist = glob(datadir+'string that glob can use to find 30s darks.')
print(darklist) #did you only pick out the 30s darks?
#Now create the master 30s dark. The free parameters are what the exposure time is, the
#overscan_fit_degree (this should be the same as when you created the bias!), and
#whether or not you want to overwrite the output
master_dark(darklist=darklist,exptime=?,overscan_fit_degree=?,caldir=caldir,overwrite=?)
Explanation: Load up the bias and take a look at it. Does it look like there are any systematic trends in the bias -- e.g., the top of the image has more bias than the bottom? Try messing with overscan_fit_degree. You can look at the image in ds9 or QFitsView, or just try loading it up in here!
Now, because our detector isn't cooled to absolute zero, it adds a little bit of signal (called the dark current), which gets stronger with time. If we 'expose' the detector (without letting any light get to it) for the same amount of time as our actual images, we'll have an estimate of the dark current in those images. Because the darks are also affected by the bias level, master_dark will subtract the bias from the dark frames before combining them.
End of explanation
#Flat-fielding in Astronomy can be quite contentious, so let's take a careful look at one
#of the flat images before we do anything else. What is the exposure time listed in the image
#header? Does it match up with the exposure time of the dark we made? If not, there's a nice
#little function in adapt.py that will just scale the longest master_dark we made (which has
#the highest signal) to the exposure time of the flat images. This only works assuming the dark
#current scales linearly with time, which we hope it does...
#Next, do you see any weird structure in the flat fields? Turns out the H-alpha filter was
#placed into the instrument kind of wonky. That square you see on the image IS the filter!
#This means that, for the H-alpha images, anything outside of that square doesn't have the
#filter on it, so it should be ignored.
#Just like the darks, we'll have to select a subset of the flat fields in datadir:
flatlist_ha = glob(datadir+'string that glob can use to find the H-alpha flats')
print(flatlist_ha) #Did you pick out just the H-alpha flats?
#Now let's make the master flat! overscan_fit_degree and overwrite do the same thing here.
#filt is a string that is mostly just to help name the file that gets made.
master_flat(flatlist=flatlist_ha,filt=?,overscan_fit_degree=?,caldir=caldir,overwrite=?)
Explanation: Now load up the 30s dark. What is the typical value of the dark current? Is it higher or lower than you expected? Is there any structure in the image (i.e., does one part of the detector have more dark current than another?) If so, do you see the same structure in the bias?
Our final step in the calibration process is called 'flat fielding.' This takes into account the fact that the efficiency of our detector is a function of both what pixel you're looking at and of color. For example, the outer parts of the detector receive less light than the inner parts (this is called vignetting) or the filter may only cover part of the chip (most relevant for our H$\alpha$ images). Some pixels are just less efficient than others, and the efficiency is a function of wavelength! These effects imprint themselves on the science images, so we need to 'flatten' them out. We can construct flats by exposing the detector to a uniform source of light. After subtracting the bias level and dark current, any variation in the flat images is due to these effects. Let's construct a flat field for our H$\alpha$ images!
End of explanation
#Let's construct a list of H-alpha images to feed into reduce_science.
sciencelist = glob(datadir+'string to just get the images we want')
print(sciencelist)
#Now reduce the science! reduce_science uses a couple helper functions to access the correct
#dark and flat images, so all you need to worry about are the overscan fit degree, the
#overwriting behavior, and out_pref, which is a string that gets prepended to the filename
#to distinguish it from the raw image. The default is 'red_'
reduce_science(sciencelist=sciencelist,overscan_fit_degree=?,caldir=caldir,
reddir=reddir,out_pref=?,overwrite=?)
Explanation: What does the master flat look like? What are the typical values of the pixels? Can you see the residual image of the filter?
Alright, we've made our calibration images. Let's reduce the Ha science images of WR124! The basic steps are
1. Fit and subtract the overscan region, then trim it off.
2. Subtract the residual bias level.
3. Subtract the dark current, scaled to the exposure times.
4. Divide by the normalized master_flat image. Why do we divide? If we think of the flat field like the 'efficiency' of the camera, then the measured image is the 'true' image times the flat field. To back out the true image, we just divide the measured image by the flat!
End of explanation
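Per pixel, those four steps amount to something like the line below (schematic only; reduce_science() in adapt.py does the real bookkeeping, including the overscan fit and trimming):
reduced = (science - master_bias - master_dark * (t_science / t_dark)) / master_flat_normalized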
#You should be familiar with all of the free parameters at this point...
run_pipeline_run(datadir=datadir,caldir=caldir,reddir=reddir,overscan_fit_degree=?,
out_pref=?,overwrite=?)
Explanation: Load up one of the reduced images... what do you see?! You might need to mess with the scale parameters to see the entirety of the nebula. Does the rest of the image look 'flat'? I.e., if you ignore the stars, the sky should be uniformly bright.
Ok, now that you've messed with the reduction of a few images, and you like what the code is giving you, let's run these steps for every science image. run_pipeline_run is a function that first assembles lists of darks/biases/flats/science/etc, then creates master cals, and finally reduces all of the science images.
End of explanation
#First up: let's look up the true (calibrated) magnitude of BD+28 4211. Go ahead and search
#through standards.txt to find the row with BD+28 4211 in it. Standards.txt has a list of
#standards with their magnitude in the r filter, and a bunch of colors (i.e., the difference
#of the magnitude of an object in two different filters). We want to know how bright the star
#is in g, r, and i. Go ahead and calculate then record those values in variables:
g_stan = ?
r_stan = ?
i_stan = ?
#Now open up one of the reduced images of the standard star in ds9. It should be the brightest
#star towards the center of the image. Zoom in close, and put your mouse over what appears to
#be the center of the star. Record the Right Ascension (ra, or alpha) and Declination (dec
#or delta) in the following line of code, following the example format
stan_coords = SkyCoord(ra='1h2m3s', dec='+4d5m6s')
#This is a SkyCoord object, which has some pretty useful features. phot_tools.py uses them
#to create apertures to do photometry!
Explanation: 2. Now that we've reduced our data, you can focus solely on the final images in reddir. This step involves going from measurements we might make of our images (which are specific to the instrument and night that the data were taken on) to calibrated values.
We want to perform aperture photometry on our images to measure the brightness of the objects in them. It's fairly straightforward, and the code in phot_tools.py does a lot of this for you, but you should know what it does. The basic steps are:
1. Define apertures centered on an object. Essentially you want to make a circle that you think captures the light from the entire object. The size of the circle depends on the optics of the telescope and atmospheric turbulence that blurs the image slightly (called seeing). Then you want to make an annulus (a bullseye shape with the center taken out) around that circle that doesn't have any objects in it (called the background or sky). We'll call these two apertures src (for source) and bkg (for background)
2. Sum up all of the photons (or counts) in the src and bkg apertures.
3. Because the measured src counts are the true object counts, plus the brightness of the background, we'll use the bkg counts to remove that background. But the counts in the bkg aperture depends on the size of the aperture (a bigger region captures more photons!), so we scale the bkg counts by the ratio of the areas of the src and bkg apertures.
4. The net counts are thus the src counts minus the scaled bkg counts. Because all of these numbers should scale with the exposure time, if we divide the net counts by the exposure time, we get the net count rate.
5. Now we calculate the instrumental magnitude ($m_{inst}$), which is defined to be
$
\begin{equation}
m_{inst} = -2.5\log_{10}({\rm net\:count\:rate})
\end{equation}
$
This is a ridiculous formula, and I'm really sorry on behalf of all astronomy. Magnitudes are silly. Like actually, something with a smaller magnitude is brighter, how does that make sense?! The only good thing about magnitudes is that they are logarithmic. So if you take the difference of two magnitudes, you're actually taking the ratio of the count rates. We use this fact in the next step:
Some of our observations weren't of WR124. They were of a star called HIP 107864, also known as BD+28 4211. This object is a standard star, or a star whose brightness is a known quanitity. This means we can transform from $m_{inst}$ (which depends on the telescope setup, the weather, manufacturing imperfections in the filters, what you had for breakfast, etc.) to calibrated magnitudes ($m_{cal}$). We call the difference $Z = m_{cal}-m_{inst}$ the photometric zero point (in reality, we also need to correct for the fact that $m_{inst}$ depends on how high the object is in the sky, but because our observations only cover about an hour of time, that factor doesn't change significantly, but we'll still need to keep it in mind). Because we know both $m_{cal}$ and $m_{inst}$ for our standard star, we can derive $Z$, which we can then add to our measurements of $m_{inst}$ for WR124 to get $m_{cal}$. Unfortunately $Z$ depends on wavelength, so we'll need to calculate $Z$ for each filter we want to do science with (in this case, only three filters). Let's do that!
End of explanation
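In code, steps 3-5 and the zero-point correction boil down to a few lines like these (schematic; extract_photometry() in phot_tools.py computes these quantities for you, and the variable names here are only illustrative):
net_counts = src_counts - bkg_counts * (src_area / bkg_area)
net_count_rate = net_counts / exposure_time
m_inst = -2.5 * np.log10(net_count_rate)
Z = m_cal_standard - m_inst_standard   # zero point measured on the standard star
m_cal = m_inst + Z                     # calibrated magnitude for any other target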
# extract photometry command goes here:
extract_photometry()
#Ok now we're ready to do all of our g images.
g_images = glob(reddir+'string that glob can use to find all of the g images')
print(g_images)
#We have three observations in g. Write some code that loops over those observations, does
#extract_photometry on each, and saves the measured instrumental magnitude from each to an
#array. Also save the error on the measured instrumental magnitude.
#code goes here.
#Now take the average of all three measurements, and save it in a variable, along with the
#error of the average.
g_inst = ?
g_inst_err = ?
#Finally calculate the photometric zero point for our g observations, and the error in that
#measurement
Z_g = ?
Z_g_err = ?
Explanation: To extract the photometry of an object at some position in some image, we'll use extract_photometry which you can call like this:
extract_photometry(filename,approx_location,centering_width=?,ap_rad=?,in_rad=?,out_rad=?)
filename is a string with the name of the file (pick one of the standard observations in the g filter), approx_location is a SkyCoord object. Because we'll want to be really precise with our apertures, extract_photometry uses the function generate_regions() to search within a small number of pixels (centering_width) for the centroid of the object. It then makes a src aperture with radius ap_rad (measured in arcseconds), and a bkg aperture with inner radius in_rad and outer radius out_rad. It returns an astropy Table object with a whole bunch of information; take a look at the output and see what you get!
To test that you chose the right size parameters, open up the same image in ds9, and create a new region with the center and radius that extract_photometry calculates. Does it capture the entire star? Does it look huge? Is it more-or-less centered? You want to be just large enough to get all of the flux, so adjust the region until it looks ok. Do the same up an annular region for the background. It should be big enough to get a decent chunk of sky, but not contain any sources in it.
Run extract_photometry again with the modified parameters, and then open the image and double check that the apertures look good.
End of explanation
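For example, a first attempt might look something like the call below; the numbers are only placeholders (centering_width in pixels, the radii in arcseconds) to be tuned until the apertures look right in ds9:
phot_table = extract_photometry(g_images[0], stan_coords, centering_width=20,
                                ap_rad=8.0, in_rad=12.0, out_rad=20.0)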
#initial extract_photometry to test parameter values for r
#Use glob to make a list of r images
#Loop over images, extract instrumental mags and errors
#Take the average and the error
#Calculate zero point for r
#Repeat for i
Explanation: Repeat the previous few steps for r and i!
End of explanation
# Step 1: open up one of the WR124 images. Our star is the bright one in the bottom right
#quadrant. Estimate its coordinates and make a SkyCoords object just like you did for the
#standard star.
# Step 2: Use glob to make a list of WR124 images that are all in the same band. Note that a
#images were taken without the diffuser (they have _phot or _guide) in their names, so try to
#exclude them
# Step 3: Loop over images, for each one do extract_photometry, and record the time in the
#middle of the observation, the instrumental magnitude, and the error.
# Step 4: To each point add the corresponding zero point, and make sure to modify the
#associated error.
# Step 5: Save the array of times, magnitudes, errors to a file. Move on to the next band!
Explanation: 3. Now let's extract lightcurves of WR124. A lightcurve consists of three components: a list of times, a list of magnitudes, and a list of errors.
Now that you've gotten some experience with extract_photometry, we can extract a lightcurve of WR124. These data are slightly different, because they were taken with the diffuser: the diffuser spreads the light from each star out, which is ordinarily bad, but in this case it makes the size of the star very consistent from observation to observation, so the default values for centering_width, ap_rad, in_rad, and out_rad should work just fine.
End of explanation |
2,511 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<!--<img width=700px; src="../img/logoUPSayPlusCDS_990.png"> -->
<p style="margin-top
Step1: 1. Let's start with a showcase
Case 1
Step2: Starting from reading this dataset, to answering questions about this data in a few lines of code
Step3: How does the survival rate of the passengers differ between sexes?
Step4: Or how does it differ between the different classes?
Step5: All the needed functionality for the above examples will be explained throughout this tutorial.
Case 2
Step6: to answering questions about this data in a few lines of code
Step7: What is the difference in diurnal profile between weekdays and weekend?
Step8: We will come back to these example, and build them up step by step.
2. Pandas
Step9: Attributes of the DataFrame
A DataFrame has besides a index attribute, also a columns attribute
Step10: To check the data types of the different columns
Step11: An overview of that information can be given with the info() method
Step12: Also a DataFrame has a values attribute, but attention
Step13: Apart from importing your data from an external source (text file, excel, database, ..), one of the most common ways of creating a dataframe is from a dictionary of arrays or lists.
Note that in the IPython notebook, the dataframe will display in a rich HTML view
Step14: One-dimensional data
Step15: Attributes of a Series
Step16: You can access the underlying numpy array representation with the .values attribute
Step17: We can access series values via the index, just like for NumPy arrays
Step18: Unlike the NumPy array, though, this index can be something other than integers
Step19: but with the power of numpy arrays. Many things you can do with numpy arrays, can also be applied on DataFrames / Series.
Eg element-wise operations
Step20: A range of methods
Step21: Fancy indexing, like indexing with a list or boolean indexing
Step22: But also a lot of pandas specific methods, e.g.
Step23: <div class="alert alert-success">
<b>EXERCISE</b>
Step24: <div class="alert alert-success">
<b>EXERCISE</b>
Step25: 3. Data import and export
A wide range of input/output formats are natively supported by pandas
Step26: Very powerful csv reader
Step27: Luckily, if we have a well formed csv file, we don't need many of those arguments
Step28: <div class="alert alert-success">
<b>EXERCISE</b>
Step29: 4. Exploration
Some useful methods
Step30: info()
Step31: Getting some basic summary statistics about the data with describe
Step32: Quickly visualizing the data
Step33: <div class="alert alert-success">
<b>EXERCISE</b>
Step34: The default plot (when not specifying kind) is a line plot of all columns
Step35: This does not say too much ..
We can select part of the data (eg the latest 500 data points)
Step36: Or we can use some more advanced time series features -> see further in this notebook!
5. Selecting and filtering data
<div class="alert alert-warning">
<b>ATTENTION!</b>
Step37: df[] provides some convenience shortcuts
For a DataFrame, basic indexing selects the columns.
Selecting a single column
Step38: or multiple columns
Step39: But, slicing accesses the rows
Step40: Systematic indexing with loc and iloc
When using [] like above, you can only select from one axis at once (rows or columns, not both). For more advanced indexing, you have some extra attributes
Step41: Selecting by position with iloc works similar as indexing numpy arrays
Step42: The different indexing methods can also be used to assign data
Step43: Boolean indexing (filtering)
Often, you want to select rows based on a certain condition. This can be done with 'boolean indexing' (like a where clause in SQL) and comparable to numpy.
The indexer (or boolean mask) should be 1-dimensional and the same length as the thing being indexed.
Step44: <div class="alert alert-success">
<b>EXERCISE</b>
Step45: <div class="alert alert-success">
<b>EXERCISE</b>
Step46: 6. The group-by operation
Some 'theory'
Step47: Recap
Step48: However, in many cases your data has certain groups in it, and in that case, you may want to calculate this statistic for each of the groups.
For example, in the above dataframe df, there is a column 'key' which has three possible values
Step49: This becomes very verbose when having multiple groups. You could make the above a bit easier by looping over the different values, but still, it is not very convenient to work with.
What we did above, applying a function on different groups, is a "groupby operation", and pandas provides some convenient functionality for this.
Groupby
Step50: And many more methods are available.
Step51: Application of the groupby concept on the titanic data
We go back to the titanic passengers survival data
Step52: <div class="alert alert-success">
<b>EXERCISE</b>
Step53: <div class="alert alert-success">
<b>EXERCISE</b>
Step54: <div class="alert alert-success">
<b>EXERCISE</b>
Step55: <div class="alert alert-success">
<b>EXERCISE</b>
Step56: <div class="alert alert-success">
<b>EXERCISE</b>
Step57: <div class="alert alert-success">
<b>EXERCISE</b>
Step58: 7. Working with time series data
Step59: When we ensure the DataFrame has a DatetimeIndex, time-series related functionality becomes available
Step60: Indexing a time series works with strings
Step61: A nice feature is "partial string" indexing, so you don't need to provide the full datetime string.
E.g. all data of January up to March 2012
Step62: Time and date components can be accessed from the index
Step63: Converting your time series with resample
A very powerfull method is resample
Step64: The time series has a frequency of 1 hour. I want to change this to daily
Step65: Above I take the mean, but as with groupby I can also specify other methods
Step66: The string to specify the new time frequency
Step67: <div class="alert alert-success">
<b>EXERCISE</b>
Step68: <div class="alert alert-success">
<b>EXERCISE</b>
Step69: Now, we can calculate the mean of each month over the different years
Step70: <div class="alert alert-success">
<b>EXERCISE</b>
Step71: <div class="alert alert-success">
<b>EXERCISE</b>
Step72: Add a column indicating week/weekend
Step73: Now we can groupby the hour of the day and the weekend (or use pivot_table)
Step74: <div class="alert alert-success">
<b>EXERCISE</b> | Python Code:
%matplotlib inline
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
pd.options.display.max_rows = 8
Explanation: <!--<img width=700px; src="../img/logoUPSayPlusCDS_990.png"> -->
<p style="margin-top: 3em; margin-bottom: 2em;"><b><big><big><big><big>Introduction to Pandas</big></big></big></big></b></p>
End of explanation
df = pd.read_csv("data/titanic.csv")
df.head()
Explanation: 1. Let's start with a showcase
Case 1: titanic survival data
End of explanation
df['Age'].hist()
Explanation: Starting from reading this dataset, to answering questions about this data in a few lines of code:
What is the age distribution of the passengers?
End of explanation
df.groupby('Sex')[['Survived']].aggregate(lambda x: x.sum() / len(x))
Explanation: How does the survival rate of the passengers differ between sexes?
End of explanation
df.groupby('Pclass')['Survived'].aggregate(lambda x: x.sum() / len(x)).plot(kind='bar')
Explanation: Or how does it differ between the different classes?
End of explanation
data = pd.read_csv('data/20000101_20161231-NO2.csv', sep=';', skiprows=[1], na_values=['n/d'], index_col=0, parse_dates=True)
data.head()
Explanation: All the needed functionality for the above examples will be explained throughout this tutorial.
Case 2: air quality measurement timeseries
AirBase (The European Air quality dataBase): hourly measurements of all air quality monitoring stations from Europe
Starting from these hourly data for different stations:
End of explanation
data['1999':].resample('M').mean().plot(ylim=[0,120])
data['1999':].resample('A').mean().plot(ylim=[0,100])
Explanation: to answering questions about this data in a few lines of code:
Does the air pollution show a decreasing trend over the years?
End of explanation
data['weekday'] = data.index.weekday
data['weekend'] = data['weekday'].isin([5, 6])
data_weekend = data.groupby(['weekend', data.index.hour])['BASCH'].mean().unstack(level=0)
data_weekend.plot()
Explanation: What is the difference in diurnal profile between weekdays and weekend?
End of explanation
df
Explanation: We will come back to these examples, and build them up step by step.
2. Pandas: data analysis in python
For data-intensive work in Python the Pandas library has become essential.
What is pandas?
Pandas can be thought of as NumPy arrays with labels for rows and columns, and better support for heterogeneous data types, but it's also much, much more than that.
Pandas can also be thought of as R's data.frame in Python.
Powerful for working with missing data, working with time series data, for reading and writing your data, for reshaping, grouping, merging your data, ...
Its documentation: http://pandas.pydata.org/pandas-docs/stable/
When do you need pandas?
When working with tabular or structured data (like R dataframe, SQL table, Excel spreadsheet, ...):
Import data
Clean up messy data
Explore data, gain insight into data
Process and prepare your data for analysis
Analyse your data (together with scikit-learn, statsmodels, ...)
<div class="alert alert-warning">
<b>ATTENTION!</b>: <br><br>
Pandas is great for working with heterogeneous and tabular 1D/2D data, but not all types of data fit in such structures!
<ul>
<li>When working with array data (e.g. images, numerical algorithms): just stick with numpy</li>
<li>When working with multidimensional labeled data (e.g. climate data): have a look at [xarray](http://xarray.pydata.org/en/stable/)</li>
</ul>
</div>
2. The pandas data structures: DataFrame and Series
A DataFrame is a tabular data structure (multi-dimensional object to hold labeled data) comprised of rows and columns, akin to a spreadsheet, database table, or R's data.frame object. You can think of it as multiple Series objects which share the same index.
<img align="left" width=50% src="img/schema-dataframe.svg">
End of explanation
df.index
df.columns
Explanation: Attributes of the DataFrame
A DataFrame has besides a index attribute, also a columns attribute:
End of explanation
df.dtypes
Explanation: To check the data types of the different columns:
End of explanation
df.info()
Explanation: An overview of that information can be given with the info() method:
End of explanation
df.values
Explanation: Also a DataFrame has a values attribute, but attention: when you have heterogeneous data, all values will be upcasted:
End of explanation
data = {'country': ['Belgium', 'France', 'Germany', 'Netherlands', 'United Kingdom'],
'population': [11.3, 64.3, 81.3, 16.9, 64.9],
'area': [30510, 671308, 357050, 41526, 244820],
'capital': ['Brussels', 'Paris', 'Berlin', 'Amsterdam', 'London']}
df_countries = pd.DataFrame(data)
df_countries
Explanation: Apart from importing your data from an external source (text file, excel, database, ..), one of the most common ways of creating a dataframe is from a dictionary of arrays or lists.
Note that in the IPython notebook, the dataframe will display in a rich HTML view:
End of explanation
df['Age']
age = df['Age']
Explanation: One-dimensional data: Series (a column of a DataFrame)
A Series is a basic holder for one-dimensional labeled data.
End of explanation
age.index
Explanation: Attributes of a Series: index and values
The Series has also an index and values attribute, but no columns
End of explanation
age.values[:10]
Explanation: You can access the underlying numpy array representation with the .values attribute:
End of explanation
age[0]
Explanation: We can access series values via the index, just like for NumPy arrays:
End of explanation
df = df.set_index('Name')
df
age = df['Age']
age
age['Dooley, Mr. Patrick']
Explanation: Unlike the NumPy array, though, this index can be something other than integers:
End of explanation
age * 1000
Explanation: but with the power of numpy arrays. Many things you can do with numpy arrays can also be applied to DataFrames / Series.
Eg element-wise operations:
End of explanation
age.mean()
Explanation: A range of methods:
End of explanation
age[age > 70]
Explanation: Fancy indexing, like indexing with a list or boolean indexing:
End of explanation
df['Embarked'].value_counts()
Explanation: But also a lot of pandas specific methods, e.g.
End of explanation
# %load snippets/01-pandas_introduction31.py
# %load snippets/01-pandas_introduction32.py
Explanation: <div class="alert alert-success">
<b>EXERCISE</b>:
<ul>
<li>What is the maximum Fare that was paid? And the median?</li>
</ul>
</div>
End of explanation
# %load snippets/01-pandas_introduction33.py
Explanation: <div class="alert alert-success">
<b>EXERCISE</b>:
<ul>
<li>Calculate the average survival ratio for all passengers (note: the 'Survived' column indicates whether someone survived (1) or not (0)).</li>
</ul>
</div>
End of explanation
#pd.read
#df.to
Explanation: 3. Data import and export
A wide range of input/output formats are natively supported by pandas:
CSV, text
SQL database
Excel
HDF5
json
html
pickle
sas, stata
(parquet)
...
End of explanation
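Writing data out works the same way through the corresponding `to_*` methods; for example (a quick sketch, the file name is arbitrary):
df.to_csv("data/titanic_copy.csv")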
pd.read_csv?
Explanation: Very powerful csv reader:
End of explanation
df = pd.read_csv("data/titanic.csv")
df.head()
Explanation: Luckily, if we have a well formed csv file, we don't need many of those arguments:
End of explanation
# %load snippets/01-pandas_introduction39.py
no2
Explanation: <div class="alert alert-success">
<b>EXERCISE</b>: Read the `data/20000101_20161231-NO2.csv` file into a DataFrame `no2`
<br><br>
Some aspects about the file:
<ul>
<li>Which separator is used in the file?</li>
<li>The second row includes unit information and should be skipped (check `skiprows` keyword)</li>
<li>For missing values, it uses the `'n/d'` notation (check `na_values` keyword)</li>
<li>We want to parse the 'timestamp' column as datetimes (check the `parse_dates` keyword)</li>
</ul>
</div>
End of explanation
no2.head(3)
no2.tail()
Explanation: 4. Exploration
Some useful methods:
head and tail
End of explanation
no2.info()
Explanation: info()
End of explanation
no2.describe()
Explanation: Getting some basic summary statistics about the data with describe:
End of explanation
no2.plot(kind='box', ylim=[0,250])
no2['BASCH'].plot(kind='hist', bins=50)
Explanation: Quickly visualizing the data
End of explanation
# %load snippets/01-pandas_introduction47.py
Explanation: <div class="alert alert-success">
<b>EXERCISE</b>:
<ul>
<li>Plot the age distribution of the titanic passengers</li>
</ul>
</div>
End of explanation
no2.plot(figsize=(12,6))
Explanation: The default plot (when not specifying kind) is a line plot of all columns:
End of explanation
no2[-500:].plot(figsize=(12,6))
Explanation: This does not say too much ..
We can select part of the data (eg the latest 500 data points):
End of explanation
df = pd.read_csv("data/titanic.csv")
Explanation: Or we can use some more advanced time series features -> see further in this notebook!
5. Selecting and filtering data
<div class="alert alert-warning">
<b>ATTENTION!</b>: <br><br>
One of pandas' basic features is the labeling of rows and columns, but this also makes indexing a bit more complex compared to numpy. <br><br> We now have to distinguish between:
<ul>
<li>selection by **label**</li>
<li>selection by **position**</li>
</ul>
</div>
End of explanation
df['Age']
Explanation: df[] provides some convenience shortcuts
For a DataFrame, basic indexing selects the columns.
Selecting a single column:
End of explanation
df[['Age', 'Fare']]
Explanation: or multiple columns:
End of explanation
df[10:15]
Explanation: But, slicing accesses the rows:
End of explanation
df = df.set_index('Name')
df.loc['Bonnell, Miss. Elizabeth', 'Fare']
df.loc['Bonnell, Miss. Elizabeth':'Andersson, Mr. Anders Johan', :]
Explanation: Systematic indexing with loc and iloc
When using [] like above, you can only select from one axis at once (rows or columns, not both). For more advanced indexing, you have some extra attributes:
loc: selection by label
iloc: selection by position
End of explanation
df.iloc[0:2,1:3]
Explanation: Selecting by position with iloc works similarly to indexing numpy arrays:
End of explanation
df.loc['Braund, Mr. Owen Harris', 'Survived'] = 100
df
Explanation: The different indexing methods can also be used to assign data:
End of explanation
df['Fare'] > 50
df[df['Fare'] > 50]
Explanation: Boolean indexing (filtering)
Often, you want to select rows based on a certain condition. This can be done with 'boolean indexing' (like a where clause in SQL) and comparable to numpy.
The indexer (or boolean mask) should be 1-dimensional and the same length as the thing being indexed.
End of explanation
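Conditions can also be combined with `&` (and) and `|` (or); each condition then needs its own parentheses. A quick extra illustration (not part of the exercises below):
df[(df['Fare'] > 50) & (df['Pclass'] == 1)]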
df = pd.read_csv("data/titanic.csv")
# %load snippets/01-pandas_introduction63.py
# %load snippets/01-pandas_introduction64.py
# %load snippets/01-pandas_introduction65.py
Explanation: <div class="alert alert-success">
<b>EXERCISE</b>:
<ul>
<li>Based on the titanic data set, select all rows for male passengers and calculate the mean age of those passengers. Do the same for the female passengers</li>
</ul>
</div>
End of explanation
# %load snippets/01-pandas_introduction66.py
# %load snippets/01-pandas_introduction67.py
Explanation: <div class="alert alert-success">
<b>EXERCISE</b>:
<ul>
<li>Based on the titanic data set, how many passengers older than 70 were on the Titanic?</li>
</ul>
</div>
End of explanation
df = pd.DataFrame({'key':['A','B','C','A','B','C','A','B','C'],
'data': [0, 5, 10, 5, 10, 15, 10, 15, 20]})
df
Explanation: 6. The group-by operation
Some 'theory': the groupby operation (split-apply-combine)
End of explanation
df['data'].sum()
Explanation: Recap: aggregating functions
When analyzing data, you often calculate summary statistics (aggregations like the mean, max, ...). As we have seen before, we can easily calculate such a statistic for a Series or column using one of the many available methods. For example:
End of explanation
for key in ['A', 'B', 'C']:
print(key, df[df['key'] == key]['data'].sum())
Explanation: However, in many cases your data has certain groups in it, and in that case, you may want to calculate this statistic for each of the groups.
For example, in the above dataframe df, there is a column 'key' which has three possible values: 'A', 'B' and 'C'. When we want to calculate the sum for each of those groups, we could do the following:
End of explanation
df.groupby('key').sum()
df.groupby('key').aggregate(np.sum) # 'sum'
Explanation: This becomes very verbose when having multiple groups. You could make the above a bit easier by looping over the different values, but still, it is not very convenient to work with.
What we did above, applying a function on different groups, is a "groupby operation", and pandas provides some convenient functionality for this.
Groupby: applying functions per group
The "group by" concept: we want to apply the same function on subsets of your dataframe, based on some key to split the dataframe in subsets
This operation is also referred to as the "split-apply-combine" operation, involving the following steps:
Splitting the data into groups based on some criteria
Applying a function to each group independently
Combining the results into a data structure
<img src="img/splitApplyCombine.png">
Similar to SQL GROUP BY
Instead of doing the manual filtering as above
df[df['key'] == "A"].sum()
df[df['key'] == "B"].sum()
...
pandas provides the groupby method to do exactly this:
End of explanation
df.groupby('key')['data'].sum()
Explanation: And many more methods are available.
End of explanation
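For instance, several aggregations can be computed at once (one possible illustration):
df.groupby('key')['data'].agg(['mean', 'sum', 'count'])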
df = pd.read_csv("data/titanic.csv")
df.head()
Explanation: Application of the groupby concept on the titanic data
We go back to the titanic passengers survival data:
End of explanation
# %load snippets/01-pandas_introduction76.py
Explanation: <div class="alert alert-success">
<b>EXERCISE</b>:
<ul>
<li>Calculate the average age for each sex again, but now using groupby.</li>
</ul>
</div>
End of explanation
# %load snippets/01-pandas_introduction77.py
Explanation: <div class="alert alert-success">
<b>EXERCISE</b>:
<ul>
<li>Calculate the average survival ratio for all passengers.</li>
</ul>
</div>
End of explanation
# %load snippets/01-pandas_introduction78.py
Explanation: <div class="alert alert-success">
<b>EXERCISE</b>:
<ul>
<li>Calculate this survival ratio for all passengers younger than 25 (remember: filtering/boolean indexing).</li>
</ul>
</div>
End of explanation
# %load snippets/01-pandas_introduction79.py
Explanation: <div class="alert alert-success">
<b>EXERCISE</b>:
<ul>
<li>What is the difference in the survival ratio between the sexes?</li>
</ul>
</div>
End of explanation
# %load snippets/01-pandas_introduction80.py
Explanation: <div class="alert alert-success">
<b>EXERCISE</b>:
<ul>
<li>Or how does it differ between the different classes? Make a bar plot visualizing the survival ratio for the 3 classes.</li>
</ul>
</div>
End of explanation
df['AgeClass'] = pd.cut(df['Age'], bins=np.arange(0,90,10))
# %load snippets/01-pandas_introduction82.py
Explanation: <div class="alert alert-success">
<b>EXERCISE</b>:
<ul>
<li>Make a bar plot to visualize the average Fare paid by people depending on their age. The age column is divided into separate classes using the `pd.cut` function as provided below.</li>
</ul>
</div>
End of explanation
no2 = pd.read_csv('data/20000101_20161231-NO2.csv', sep=';', skiprows=[1], na_values=['n/d'], index_col=0, parse_dates=True)
Explanation: 7. Working with time series data
End of explanation
no2.index
Explanation: When we ensure the DataFrame has a DatetimeIndex, time-series related functionality becomes available:
End of explanation
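If the index had not been parsed as datetimes when reading the file (e.g. `parse_dates` was forgotten), it could still be converted afterwards -- a small sketch:
no2.index = pd.to_datetime(no2.index)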
no2["2010-01-01 09:00": "2010-01-01 12:00"]
Explanation: Indexing a time series works with strings:
End of explanation
no2['2012-01':'2012-03']
Explanation: A nice feature is "partial string" indexing, so you don't need to provide the full datetime string.
E.g. all data of January up to March 2012:
End of explanation
no2.index.hour
no2.index.year
Explanation: Time and date components can be accessed from the index:
End of explanation
no2.plot()
Explanation: Converting your time series with resample
A very powerful method is resample: converting the frequency of the time series (e.g. from hourly to daily data).
Remember the air quality data:
End of explanation
no2.head()
no2.resample('D').mean().head()
Explanation: The time series has a frequency of 1 hour. I want to change this to daily:
End of explanation
no2.resample('D').max().head()
Explanation: Above I take the mean, but as with groupby I can also specify other methods:
End of explanation
no2.resample('M').mean().plot() # 'A'
# no2['2012'].resample('D').plot()
# %load snippets/01-pandas_introduction95.py
Explanation: The string to specify the new time frequency: http://pandas.pydata.org/pandas-docs/dev/timeseries.html#offset-aliases
These strings can also be combined with numbers, eg '10D'.
Further exploring the data:
End of explanation
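For example, 10-daily averages (just to illustrate such a combined string):
no2.resample('10D').mean().head()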
# %load snippets/01-pandas_introduction96.py
Explanation: <div class="alert alert-success">
<b>EXERCISE</b>: The evolution of the yearly averages with, and the overall mean of all stations
<ul>
<li>Use `resample` and `plot` to plot the yearly averages for the different stations.</li>
<li>The overall mean of all stations can be calculated by taking the mean of the different columns (`.mean(axis=1)`).</li>
</ul>
</div>
End of explanation
# %load snippets/01-pandas_introduction97.py
Explanation: <div class="alert alert-success">
<b>EXERCISE</b>: What does the *typical monthly profile* look like for the different stations?
<ul>
<li>Add a 'month' column to the dataframe.</li>
<li>Group by the month to obtain the typical monthly averages over the different years.</li>
</ul>
</div>
First, we add a column to the dataframe that indicates the month (integer value of 1 to 12):
End of explanation
# %load snippets/01-pandas_introduction98.py
# %load snippets/01-pandas_introduction99.py
Explanation: Now, we can calculate the mean of each month over the different years:
End of explanation
# %load snippets/01-pandas_introduction100.py
Explanation: <div class="alert alert-success">
<b>EXERCISE</b>: The typical diurnal profile for the different stations
<ul>
<li>Similar as for the month, you can now group by the hour of the day.</li>
</ul>
</div>
End of explanation
no2.index.weekday?
# %load snippets/01-pandas_introduction102.py
Explanation: <div class="alert alert-success">
<b>EXERCISE</b>: What is the difference in the typical diurnal profile between week and weekend days for the 'BASCH' station.
<ul>
<li>Add a column 'weekday' defining the different days in the week.</li>
<li>Add a column 'weekend' defining if a day is in the weekend (i.e. days 5 and 6) or not (True/False).</li>
<li>You can groupby on multiple items at the same time. In this case you would need to group by both weekend/weekday and hour of the day.</li>
</ul>
</div>
Add a column indicating the weekday:
End of explanation
# %load snippets/01-pandas_introduction103.py
Explanation: Add a column indicating week/weekend
End of explanation
# %load snippets/01-pandas_introduction104.py
# %load snippets/01-pandas_introduction105.py
# %load snippets/01-pandas_introduction106.py
# %load snippets/01-pandas_introduction107.py
Explanation: Now we can groupby the hour of the day and the weekend (or use pivot_table):
End of explanation
# re-reading the data to have a clean version
no2 = pd.read_csv('data/20000101_20161231-NO2.csv', sep=';', skiprows=[1], na_values=['n/d'], index_col=0, parse_dates=True)
# %load snippets/01-pandas_introduction109.py
# %load snippets/01-pandas_introduction110.py
# %load snippets/01-pandas_introduction111.py
Explanation: <div class="alert alert-success">
<b>EXERCISE</b>: What is the number of exceedances of hourly values above the European limit 200 µg/m3?
Count the number of exceedances of hourly values above the European limit 200 µg/m3 for each year and station after 2005. Make a barplot of the counts. Add a horizontal line indicating the maximum number of exceedances (18) allowed per year.
<br><br>
Hints:
<ul>
<li>Create a new DataFrame, called `exceedances`, (with boolean values) indicating if the threshold is exceeded or not</li>
<li>Remember that the sum of True values can be used to count elements. Do this using groupby for each year.</li>
<li>Adding a horizontal line can be done with the matplotlib function `ax.axhline`.</li>
</ul>
</div>
End of explanation |
2,512 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Table of Contents
Preface
Step1: Introductory textbook for Kalman filters and Bayesian filters. The book is written using Jupyter Notebook so you may read the book in your browser and also run and modify the code, seeing the results inside the book. What better way to learn?
Kalman and Bayesian Filters
Sensors are noisy. The world is full of data and events that we want to measure and track, but we cannot rely on sensors to give us perfect information. The GPS in my car reports altitude. Each time I pass the same point in the road it reports a slightly different altitude. My kitchen scale gives me different readings if I weigh the same object twice.
In simple cases the solution is obvious. If my scale gives slightly different readings I can just take a few readings and average them. Or I can replace it with a more accurate scale. But what do we do when the sensor is very noisy, or the environment makes data collection difficult? We may be trying to track the movement of a low flying aircraft. We may want to create an autopilot for a drone, or ensure that our farm tractor seeded the entire field. I work on computer vision, and I need to track moving objects in images, and the computer vision algorithms create very noisy and unreliable results.
This book teaches you how to solve these sorts of filtering problems. I use many different algorithms, but they are all based on Bayesian probability. In simple terms Bayesian probability determines what is likely to be true based on past information.
If I asked you the heading of my car at this moment you would have no idea. You'd proffer a number between 1$^\circ$ and 360$^\circ$ degrees, and have a 1 in 360 chance of being right. Now suppose I told you that 2 seconds ago its heading was 243$^\circ$. In 2 seconds my car could not turn very far so you could make a far more accurate prediction. You are using past information to more accurately infer information about the present or future.
The world is also noisy. That prediction helps you make a better estimate, but it also subject to noise. I may have just braked for a dog or swerved around a pothole. Strong winds and ice on the road are external influences on the path of my car. In control literature we call this noise though you may not think of it that way.
There is more to Bayesian probability, but you have the main idea. Knowledge is uncertain, and we alter our beliefs based on the strength of the evidence. Kalman and Bayesian filters blend our noisy and limited knowledge of how a system behaves with the noisy and limited sensor readings to produce the best possible estimate of the state of the system. Our principle is to never discard information.
Say we are tracking an object and a sensor reports that it suddenly changed direction. Did it really turn, or is the data noisy? It depends. If this is a jet fighter we'd be very inclined to believe the report of a sudden maneuver. If it is a freight train on a straight track we would discount it. We'd further modify our belief depending on how accurate the sensor is. Our beliefs depend on the past and on our knowledge of the system we are tracking and on the characteristics of the sensors.
The Kalman filter was invented by Rudolf Emil Kálmán to solve this sort of problem in a mathematically optimal way. Its first use was on the Apollo missions to the moon, and since then it has been used in an enormous variety of domains. There are Kalman filters in aircraft, on submarines, and on cruise missiles. Wall street uses them to track the market. They are used in robots, in IoT (Internet of Things) sensors, and in laboratory instruments. Chemical plants use them to control and monitor reactions. They are used to perform medical imaging and to remove noise from cardiac signals. If it involves a sensor and/or time-series data, a Kalman filter or a close relative to the Kalman filter is usually involved.
Motivation for this Book
I'm a software engineer that spent almost two decades in aerospace, and so I have always been 'bumping elbows' with the Kalman filter, but never implemented one. They've always had a fearsome reputation for difficulty. The theory is beautiful, but quite difficult to learn if you are not already well trained in topics such as signal processing, control theory, probability and statistics, and guidance and control theory. As I moved into solving tracking problems with computer vision the need to implement them myself became urgent.
There are excellent textbooks in the field, such as Grewal and Andrew's Kalman Filtering. But sitting down and trying to read many of these books is a dismal and trying experience if you do not have the necessary background. Typically the first few chapters fly through several years of undergraduate math, blithely referring you to textbooks on Itō calculus, and presenting an entire semester's worth of statistics in a few brief paragraphs. They are textbooks for an upper undergraduate or graduate level course, and an invaluable reference to researchers and professionals, but the going is truly difficult for the more casual reader. Notation is introduced without explanation, different texts use different words and variable names for the same concept, and the books are almost devoid of examples or worked problems. I often found myself able to parse the words and comprehend the mathematics of a definition, but had no idea as to what real world phenomena these words and math were attempting to describe. "But what does that mean?" was my repeated thought. Here are typical examples which once puzzled me
Step2: It has become an industry standard to use import numpy as np.
You can also use tuples
Step3: Create multidimensional arrays with nested brackets
Step4: You can create arrays of 3 or more dimensions, but we have no need for that here, and so I will not elaborate.
By default the arrays use the data type of the values in the list; if there are multiple types then it will choose the type that most accurately represents all the values. So, for example, if your list contains a mix of int and float the data type of the array would be of type float. You can override this with the dtype parameter.
Step5: You can access the array elements using subscript location
Step6: You can access a column or row by using slices. A colon (
Step7: We can get the second row with
Step8: Get the last two elements of the second row with
Step9: As with Python lists, you can use negative indexes to refer to the end of the array. -1 refers to the last index. So another way to get the last two elements of the second (last) row would be
Step10: You can perform matrix addition with the + operator, but matrix multiplication requires the dot method or function. The * operator performs element-wise multiplication, which is not what you want for linear algebra.
Step11: Python 3.5 introduced the @ operator for matrix multiplication.
```python
x @ x
[[ 7.0 10.0]
[ 15.0 22.0]]
```
This will only work if you are using Python 3.5+. So, as much as I prefer this notation to np.dot(x, x) I will not use it in this book.
You can get the transpose with .T, and the inverse with numpy.linalg.inv. The SciPy package also provides the inverse function.
Step12: There are helper functions like zeros to create a matrix of all zeros, ones to get all ones, and eye to get the identity matrix. If you want a multidimensional array, use a tuple to specify the shape.
Step13: We have functions to create equally spaced data. arange works much like Python's range function, except it returns a NumPy array. linspace works slightly differently, you call it with linspace(start, stop, num), where num is the length of the array that you want.
Step14: Now let's plot some data. For the most part it is very simple. Matplotlib contains a plotting library pyplot. It is industry standard to import it as plt. Once imported, plot numbers by calling plt.plot with a list or array of numbers. If you make multiple calls it will plot multiple series, each with a different color.
Step15: The output [<matplotlib.lines.Line2D at 0x2ba160bed68>] is because plt.plot returns the object that was just created. Ordinarily we do not want to see that, so I add a ; to my last plotting command to suppress that output.
By default plot assumes that the x-series is incremented by one. You can provide your own x-series by passing in both x and y.
Step16: There are many more features to these packages which I use in this book. Normally I will introduce them without explanation, trusting that you can infer the usage from context, or search online for an explanation. As always, if you are unsure, create a new cell in the Notebook or fire up a Python console and experiment!
Exercise - Create arrays
I want you to create a NumPy array of 10 elements with each element containing 1/10. There are several ways to do this; try to implement as many as you can think of.
Step17: Solution
Here are three ways to do this. The first one is the one I want you to know. I used the '/' operator to divide all of the elements of the array with 10. We will shortly use this to convert the units of an array from meters to km.
Step18: Here is one I haven't covered yet. The function numpy.asarray() will convert its argument to an ndarray if it isn't already one. If it is, the data is unchanged. This is a handy way to write a function that can accept either Python lists or ndarrays, and it is very efficient if the type is already ndarray as nothing new is created. | Python Code:
from __future__ import division, print_function
%matplotlib inline
#format the book
import book_format
book_format.set_style()
Explanation: Table of Contents
Preface
End of explanation
import numpy as np
x = np.array([1, 2, 3])
print(type(x))
x
Explanation: Introductory textbook for Kalman filters and Bayesian filters. The book is written using Jupyter Notebook so you may read the book in your browser and also run and modify the code, seeing the results inside the book. What better way to learn?
Kalman and Bayesian Filters
Sensors are noisy. The world is full of data and events that we want to measure and track, but we cannot rely on sensors to give us perfect information. The GPS in my car reports altitude. Each time I pass the same point in the road it reports a slightly different altitude. My kitchen scale gives me different readings if I weigh the same object twice.
In simple cases the solution is obvious. If my scale gives slightly different readings I can just take a few readings and average them. Or I can replace it with a more accurate scale. But what do we do when the sensor is very noisy, or the environment makes data collection difficult? We may be trying to track the movement of a low flying aircraft. We may want to create an autopilot for a drone, or ensure that our farm tractor seeded the entire field. I work on computer vision, and I need to track moving objects in images, and the computer vision algorithms create very noisy and unreliable results.
This book teaches you how to solve these sorts of filtering problems. I use many different algorithms, but they are all based on Bayesian probability. In simple terms Bayesian probability determines what is likely to be true based on past information.
If I asked you the heading of my car at this moment you would have no idea. You'd proffer a number between 1$^\circ$ and 360$^\circ$ degrees, and have a 1 in 360 chance of being right. Now suppose I told you that 2 seconds ago its heading was 243$^\circ$. In 2 seconds my car could not turn very far so you could make a far more accurate prediction. You are using past information to more accurately infer information about the present or future.
The world is also noisy. That prediction helps you make a better estimate, but it is also subject to noise. I may have just braked for a dog or swerved around a pothole. Strong winds and ice on the road are external influences on the path of my car. In control literature we call this noise though you may not think of it that way.
There is more to Bayesian probability, but you have the main idea. Knowledge is uncertain, and we alter our beliefs based on the strength of the evidence. Kalman and Bayesian filters blend our noisy and limited knowledge of how a system behaves with the noisy and limited sensor readings to produce the best possible estimate of the state of the system. Our principle is to never discard information.
Say we are tracking an object and a sensor reports that it suddenly changed direction. Did it really turn, or is the data noisy? It depends. If this is a jet fighter we'd be very inclined to believe the report of a sudden maneuver. If it is a freight train on a straight track we would discount it. We'd further modify our belief depending on how accurate the sensor is. Our beliefs depend on the past and on our knowledge of the system we are tracking and on the characteristics of the sensors.
The Kalman filter was invented by Rudolf Emil Kálmán to solve this sort of problem in a mathematically optimal way. Its first use was on the Apollo missions to the moon, and since then it has been used in an enormous variety of domains. There are Kalman filters in aircraft, on submarines, and on cruise missiles. Wall street uses them to track the market. They are used in robots, in IoT (Internet of Things) sensors, and in laboratory instruments. Chemical plants use them to control and monitor reactions. They are used to perform medical imaging and to remove noise from cardiac signals. If it involves a sensor and/or time-series data, a Kalman filter or a close relative to the Kalman filter is usually involved.
Motivation for this Book
I'm a software engineer that spent almost two decades in aerospace, and so I have always been 'bumping elbows' with the Kalman filter, but never implemented one. They've always had a fearsome reputation for difficulty. The theory is beautiful, but quite difficult to learn if you are not already well trained in topics such as signal processing, control theory, probability and statistics, and guidance and control theory. As I moved into solving tracking problems with computer vision the need to implement them myself became urgent.
There are excellent textbooks in the field, such as Grewal and Andrew's Kalman Filtering. But sitting down and trying to read many of these books is a dismal and trying experience if you do not have the necessary background. Typically the first few chapters fly through several years of undergraduate math, blithely referring you to textbooks on Itō calculus, and presenting an entire semester's worth of statistics in a few brief paragraphs. They are textbooks for an upper undergraduate or graduate level course, and an invaluable reference to researchers and professionals, but the going is truly difficult for the more casual reader. Notation is introduced without explanation, different texts use different words and variable names for the same concept, and the books are almost devoid of examples or worked problems. I often found myself able to parse the words and comprehend the mathematics of a definition, but had no idea as to what real world phenomena these words and math were attempting to describe. "But what does that mean?" was my repeated thought. Here are typical examples which once puzzled me:
$$\begin{aligned}
\hat{x}_{k} &= \Phi_{k}\hat{x}_{k-1} + G_k u_{k-1} + K_k [z_k - H \Phi_{k} \hat{x}_{k-1} - H G_k u_{k-1}] \\
\mathbf{P}_{k\mid k} &= (I - \mathbf{K}_k \mathbf{H}_{k})\textrm{cov}(\mathbf{x}_k - \hat{\mathbf{x}}_{k\mid k-1})(I - \mathbf{K}_k \mathbf{H}_{k})^{\text{T}} + \mathbf{K}_k\textrm{cov}(\mathbf{v}_k)\mathbf{K}_k^{\text{T}}
\end{aligned}$$
However, as I began to finally understand the Kalman filter I realized the underlying concepts are quite straightforward. If you know a few simple probability rules, and have some intuition about how we fuse uncertain knowledge, the concepts of the Kalman filter are accessible. Kalman filters have a reputation for difficulty, but shorn of much of the formal terminology the beauty of the subject and of their math became clear to me, and I fell in love with the topic.
As I began to understand the math and theory, more difficulties appeared. A book or paper will make some statement of fact and present a graph as proof. Unfortunately, why the statement is true is not clear to me, or I cannot reproduce the plot. Or maybe I wonder "is this true if R=0?" Or the author provides pseudocode at such a high level that the implementation is not obvious. Some books offer Matlab code, but I do not have a license to that expensive package. Finally, many books end each chapter with many useful exercises. Exercises which you need to understand if you want to implement Kalman filters for yourself, but exercises with no answers. If you are using the book in a classroom, perhaps this is okay, but it is terrible for the independent reader. I loathe that an author withholds information from me, presumably to avoid 'cheating' by the student in the classroom.
All of this impedes learning. I want to track an image on a screen, or write some code for my Arduino project. I want to know how the plots in the book are made, and to choose different parameters than the author chose. I want to run simulations. I want to inject more noise into the signal and see how a filter performs. There are thousands of opportunities for using Kalman filters in everyday code, and yet this fairly straightforward topic is the province of rocket scientists and academics.
I wrote this book to address all of those needs. This is not the sole book for you if you design military radars. Go get a Masters or PhD at a great STEM school, because you'll need it. This book is for the hobbyist, the curious, and the working engineer that needs to filter or smooth data. If you are a hobbyist this book should provide everything you need. If you are serious about Kalman filters you'll need more. My intention is to introduce enough of the concepts and mathematics to make the textbooks and papers approachable.
This book is interactive. While you can read it online as static content, I urge you to use it as intended. It is written using Jupyter Notebook. This allows me to combine text, math, Python, and Python output in one place. Every plot, every piece of data in this book is generated from Python inside the notebook. Want to double the value of a parameter? Just change the parameter's value, and press CTRL-ENTER. A new plot or printed output will appear.
This book has exercises, but it also has the answers. I trust you. If you just need an answer, go ahead and read the answer. If you want to internalize this knowledge, try to implement the exercise before you read the answer. Since the book is interactive, you enter and run your solution inside the book - you don't have to move to a different environment, or deal with importing a bunch of stuff before starting.
This book is free. I've spent several thousand dollars on Kalman filtering books. I cannot believe they are within the reach of someone in a depressed economy or a financially struggling student. I have gained so much from free software like Python, and free books like those from Allen B. Downey [1]. It's time to repay that. So, the book is free, it is hosted on free servers at GitHub, and it uses only free and open software such as IPython and MathJax.
Reading Online
<b>GitHub</b>
The book is hosted on GitHub, and you can read any chapter by clicking on its name. GitHub statically renders Jupyter Notebooks. You will not be able to run or alter the code, but you can read all of the content.
The GitHub pages for this project are at
https://github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python
<b>binder</b>
binder serves interactive notebooks online, so you can run the code and change the code within your browser without downloading the book or installing Jupyter. Use this link to access the book via binder:
http://mybinder.org/repo/rlabbe/Kalman-and-Bayesian-Filters-in-Python
<b>nbviewer</b>
The nbviewer website will render any Notebook in a static format. I find it does a slightly better job than the GitHub renderer, but it is slightly harder to use. It accesses GitHub directly; whatever I have checked into GitHub will be rendered by nbviewer.
You may access this book via nbviewer here:
http://nbviewer.ipython.org/github/rlabbe/Kalman-and-Bayesian-Filters-in-Python/blob/master/table_of_contents.ipynb
PDF Version
I periodically generate a PDF of the book from the notebooks. You can access it here:
https://drive.google.com/file/d/0By_SW19c1BfhSVFzNHc0SjduNzg/view?usp=sharing
Downloading and Running the Book
However, this book is intended to be interactive and I recommend using it in that form. It's a little more effort to set up, but worth it. If you install IPython and some supporting libraries on your computer and then clone this book you will be able to run all of the code in the book yourself. You can perform experiments, see how filters react to different data, see how different filters react to the same data, and so on. I find this sort of immediate feedback both vital and invigorating. You do not have to wonder "what happens if". Try it and see!
Instructions for installation can be found in the Installation appendix, found here.
Once the software is installed you can navigate to the installation directory and run Jupyter notebook with the command line instruction
jupyter notebook
This will open a browser window showing the contents of the base directory. The book is organized into chapters. Each chapter is named xx-name.ipynb, where xx is the chapter number. .ipynb is the Notebook file extension. To read Chapter 2, click on the link for chapter 2. This will cause the browser to open that subdirectory. In each subdirectory there will be one or more IPython Notebooks (all notebooks have a .ipynb file extension). The chapter contents are in the notebook with the same name as the chapter name. There are sometimes supporting notebooks for doing things like generating animations that are displayed in the chapter. These are not intended to be read by the end user, but of course if you are curious as to how an animation is made go ahead and take a look.
Admittedly this is a cumbersome interface to a book. I am following in the footsteps of several other projects that are re-purposing Jupyter Notebook to generate entire books. I feel the slight annoyances have a huge payoff - instead of having to download a separate code base and run it in an IDE while you try to read a book, all of the code and text is in one place. If you want to alter the code, you may do so and immediately see the effects of your change. If you find a bug, you can make a fix, and push it back to my repository so that everyone in the world benefits. And, of course, you will never encounter a problem I face all the time with traditional books - the book and the code are out of sync with each other, and you are left scratching your head as to which source to trust.
Jupyter
First, some words about using Jupyter Notebooks with this book. This book is interactive. If you want to run code examples, and especially if you want to see animated plots, you will need to run the code cells. I cannot teach you everything about Jupyter Notebooks. However, a few things trip readers up. You can go to http://jupyter.org/ for detailed documentation.
First, you must always run the topmost code cell, the one with the comment #format the book. It is directly above. This does not just set up formatting, which you might not care about, but it also loads some necessary modules and makes some global settings regarding plotting and printing. So, always run this cell unless you are just passively reading. The import from __future__ helps Python 2.7 work like Python 3.X. Division of integers will return a float (3/10 == 0.3) instead of an int (3/10 == 0), and printing requires parens: print(3), not print 3. The line
python
%matplotlib inline
causes plots to be displayed inside the notebook. Matplotlib is a plotting package which is described below. For reasons I don't understand the default behavior of Jupyter Notebooks is to generate plots in an external window.
The percent sign in %matplotlib is used for IPython magic - these are commands to the kernel to do things that are not part of the Python language. There are many useful magic commands, and you can read about them here: http://ipython.readthedocs.io/en/stable/interactive/magics.html
Running the code inside a cell is easy. Click on it so that it has focus (a box will be drawn around it), and then press CTRL-Enter.
Second, cells must be run in order. I break problems up over several cells; if you try to just skip down and run the tenth code cell it almost certainly won't work. If you haven't run anything yet just choose Run All Above from the Cell menu item. That's the easiest way to ensure everything has been run.
Once cells are run you can often jump around and rerun cells in different orders, but not always. I'm trying to fix this, but there is a tradeoff. I'll define a variable in cell 10 (say), and then run code that modifies that variable in cells 11 and 12. If you go back and run cell 11 again the variable will have the value that was set in cell 12, and the code expects the value that was set in cell 10. So, occasionally you'll get weird results if you run cells out of order. My advice is to backtrack a bit, and run cells in order again to get back to a proper state. It's annoying, but the interactive aspect of Jupyter notebooks more than makes up for it. Better yet, submit an issue on GitHub so I know about the problem and fix it!
Finally, some readers have reported problems with the animated plotting features in some browsers. I have not been able to reproduce this. In parts of the book I use the %matplotlib notebook magic, which enables interactive plotting. If these plots are not working for you, try changing this to read %matplotlib inline. You will lose the animated plotting, but it seems to work on all platforms and browsers.
SciPy, NumPy, and Matplotlib
SciPy is an open source collection of software for mathematics. Included in SciPy are NumPy, which provides array objects, linear algebra, random numbers, and more. Matplotlib provides plotting of NumPy arrays. SciPy's modules duplicate some of the functionality in NumPy while adding features such as optimization, image processing, and more.
To keep my efforts for this book manageable I have elected to assume that you know how to program in Python, and that you also are familiar with these packages. Nonetheless, I will take a few moments to illustrate a few features of each; realistically you will have to find outside sources to teach you the details. The home page for SciPy, https://scipy.org, is the perfect starting point, though you will soon want to search for relevant tutorials and/or videos.
NumPy, SciPy, and Matplotlib do not come with the default Python distribution; see the Installation Appendix if you do not have them installed.
I use NumPy's array data structure throughout the book, so let's learn about them now. I will teach you enough to get started; refer to NumPy's documentation if you want to become an expert.
numpy.array implements arrays of one or more dimensions. Its type is numpy.ndarray, and we will refer to this as an ndarray for short. You can construct it with any list-like object. The following constructs a 1-D array from a list:
End of explanation
x = np.array((4,5,6))
x
Explanation: It has become an industry standard to use import numpy as np.
You can also use tuples:
End of explanation
x = np.array([[1, 2, 3],
[4, 5, 6]])
print(x)
Explanation: Create multidimensional arrays with nested brackets:
End of explanation
x = np.array([1, 2, 3], dtype=float)
print(x)
Explanation: You can create arrays of 3 or more dimensions, but we have no need for that here, and so I will not elaborate.
By default the arrays use the data type of the values in the list; if there are multiple types then it will choose the type that most accurately represents all the values. So, for example, if your list contains a mix of int and float the data type of the array would be of type float. You can override this with the dtype parameter.
End of explanation
x = np.array([[1, 2, 3],
[4, 5, 6]])
print(x[1,2])
Explanation: You can access the array elements using subscript location:
End of explanation
x[:, 0]
Explanation: You can access a column or row by using slices. A colon (:) used as a subscript is shorthand for all data in that row or column. So x[:,0] returns an array of all data in the first column (the 0 specifies the first column):
End of explanation
x[1, :]
Explanation: We can get the second row with:
End of explanation
x[1, 1:]
Explanation: Get the last two elements of the second row with:
End of explanation
x[-1, -2:]
Explanation: As with Python lists, you can use negative indexes to refer to the end of the array. -1 refers to the last index. So another way to get the last two elements of the second (last) row would be:
End of explanation
x = np.array([[1., 2.],
[3., 4.]])
print('addition:\n', x + x)
print('\nelement-wise multiplication\n', x * x)
print('\nmultiplication\n', np.dot(x, x))
print('\ndot is also a member of np.array\n', x.dot(x))
Explanation: You can perform matrix addition with the + operator, but matrix multiplication requires the dot method or function. The * operator performs element-wise multiplication, which is not what you want for linear algebra.
End of explanation
import scipy.linalg as linalg
print('transpose\n', x.T)
print('\nNumPy inverse\n', np.linalg.inv(x))
print('\nSciPy inverse\n', linalg.inv(x))
Explanation: Python 3.5 introduced the @ operator for matrix multiplication.
```python
x @ x
[[ 7.0 10.0]
[ 15.0 22.0]]
```
This will only work if you are using Python 3.5+. So, as much as I prefer this notation to np.dot(x, x) I will not use it in this book.
You can get the transpose with .T, and the inverse with numpy.linalg.inv. The SciPy package also provides the inverse function.
End of explanation
print('zeros\n', np.zeros(7))
print('\nzeros(3x2)\n', np.zeros((3, 2)))
print('\neye\n', np.eye(3))
Explanation: There are helper functions like zeros to create a matrix of all zeros, ones to get all ones, and eye to get the identity matrix. If you want a multidimensional array, use a tuple to specify the shape.
End of explanation
np.arange(0, 2, 0.1)
np.linspace(0, 2, 20)
Explanation: We have functions to create equally spaced data. arange works much like Python's range function, except it returns a NumPy array. linspace works slightly differently, you call it with linspace(start, stop, num), where num is the length of the array that you want.
End of explanation
import matplotlib.pyplot as plt
a = np.array([6, 3, 5, 2, 4, 1])
plt.plot([1, 4, 2, 5, 3, 6])
plt.plot(a)
Explanation: Now let's plot some data. For the most part it is very simple. Matplotlib contains a plotting library pyplot. It is industry standard to import it as plt. Once imported, plot numbers by calling plt.plot with a list or array of numbers. If you make multiple calls it will plot multiple series, each with a different color.
End of explanation
plt.plot(np.arange(0,1, 0.1), [1,4,3,2,6,4,7,3,4,5]);
Explanation: The output [<matplotlib.lines.Line2D at 0x2ba160bed68>] is because plt.plot returns the object that was just created. Ordinarily we do not want to see that, so I add a ; to my last plotting command to suppress that output.
By default plot assumes that the x-series is incremented by one. You can provide your own x-series by passing in both x and y.
End of explanation
# your solution
Explanation: There are many more features to these packages which I use in this book. Normally I will introduce them without explanation, trusting that you can infer the usage from context, or search online for an explanation. As always, if you are unsure, create a new cell in the Notebook or fire up a Python console and experiment!
Exercise - Create arrays
I want you to create a NumPy array of 10 elements with each element containing 1/10. There are several ways to do this; try to implement as many as you can think of.
End of explanation
print(np.ones(10) / 10.)
print(np.array([.1, .1, .1, .1, .1, .1, .1, .1, .1, .1]))
print(np.array([.1]*10))
Explanation: Solution
Here are three ways to do this. The first one is the one I want you to know. I used the '/' operator to divide all of the elements of the array with 10. We will shortly use this to convert the units of an array from meters to km.
End of explanation
def one_tenth(x):
x = np.asarray(x)
return x / 10.
print(one_tenth([1, 2, 3])) # I work!
print(one_tenth(np.array([4, 5, 6]))) # so do I!
Explanation: Here is one I haven't covered yet. The function numpy.asarray() will convert its argument to an ndarray if it isn't already one. If it is, the data is unchanged. This is a handy way to write a function that can accept either Python lists or ndarrays, and it is very efficient if the type is already ndarray as nothing new is created.
End of explanation |
2,513 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Screenshots and Movies with WebGL
One can use the REBOUND WebGL ipython widget to capture screenshots of a simulation. These screenshots can then be easily compiled into a movie.
The widget is using the ipywidgets package which needs to be installed and enabled. More information on this can be found in the ipywidgets documentation at https
Step1: You can now drag the widget with your mouse or touchpad to look at the simulation from a different angle. Keep the shift key pressed while you drag to zoom in or out.
To take a single screenshot, all you have to do is call the takeScreenshot function of the widget.
Step2: You will see that there is now a file screenshot00000.png in the current directory. It shows the same view as the WebGL widget in the notebook. To get a larger image, increase the size of the widget (see the documentation for the widget for all possible options).
We could now rotate the widget or integrate the simulation. If we then execute the same takeScreenshot command again, we will get another file screenshot00001.png.
Consider the following code
Step3: This will not produce the desired outcome (in fact it will throw an exception). The reason is complex. In short, ipywidgets provides no blocking calls to wait for updates of a widget because the widget updates make use of the ipython event loop which does not get run during an execution of a cell.
Thus, to capture multiple screenshots at different times, one either needs to take one screenshot per cell, or use the following more convenient way | Python Code:
import rebound
sim = rebound.Simulation()
sim.add(m=1) # add a star
for i in range(10):
sim.add(m=1e-3,a=0.4+0.1*i,inc=0.03*i,omega=5.*i) # Jupiter mass planets on close orbits
sim.move_to_com() # Move to the centre of mass frame
w = sim.getWidget()
w
Explanation: Screenshots and Movies with WebGL
One can use the REBOUND WebGL ipython widget to capture screenshots of a simulation. These screenshots can then be easily compiled into a movie.
The widget is using the ipywidgets package which needs to be installed and enabled. More information on this can be found in the ipywidgets documentation at https://ipywidgets.readthedocs.io/en/latest/user_install.html. You also need a browser and a graphics card that supports WebGL.
Note that this is a new feature and might not work on all systems. We've tested it on python 3.5.2.
Let's first create a simulation and display it using the REBOUND WebGL widget.
End of explanation
w.takeScreenshot()
Explanation: You can now drag the widget with your mouse or touchpad to look at the simulation from a different angle. Keep the shift key pressed while you drag to zoom in or out.
To take a single screenshot, all you have to do is call the takeScreenshot function of the widget.
End of explanation
# w.takeScreenshot()
# sim.integrate(10)
# w.takeScreenshot()
Explanation: You will see that there is now a file screenshot00000.png in the current directory. It shows the same view as the WebGL widget in the notebook. To get a larger image, increase the size of the widget (see the documentation for the widget for all possible options).
We could now rotate the widget or integrate the simulation. If we then execute the same takeScreenshot command again, we will get another file screenshot00001.png.
Consider the following code:
End of explanation
times = [0,10,100]
w.takeScreenshot(times)
Explanation: This will not produce the desired outcome (in fact it will throw an exception). The reason is complex. In short, ipywidgets provides no blocking calls to wait for updates of a widget because the widget updates make use of the ipython event loop which does not get run during an execution of a cell.
Thus, to capture multiple screenshots at different times, one either needs to take one screenshot per cell, or use the following more convenient way:
End of explanation |
2,514 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Your first neural network
In this project, you'll build your first neural network and use it to predict daily bike rental ridership. We've provided some of the code, but left the implementation of the neural network up to you (for the most part). After you've submitted this project, feel free to explore the data and the model more.
Step1: Load and prepare the data
A critical step in working with neural networks is preparing the data correctly. Variables on different scales make it difficult for the network to efficiently learn the correct weights. Below, we've written the code to load and prepare the data. You'll learn more about this soon!
Step2: Checking out the data
This dataset has the number of riders for each hour of each day from January 1 2011 to December 31 2012. The number of riders is split between casual and registered, summed up in the cnt column. You can see the first few rows of the data above.
Below is a plot showing the number of bike riders over the first 10 days in the data set. You can see the hourly rentals here. This data is pretty complicated! The weekends have lower overall ridership and there are spikes when people are biking to and from work during the week. Looking at the data above, we also have information about temperature, humidity, and windspeed, all of these likely affecting the number of riders. You'll be trying to capture all this with your model.
Step3: Dummy variables
Here we have some categorical variables like season, weather, month. To include these in our model, we'll need to make binary dummy variables. This is simple to do with Pandas thanks to get_dummies().
Step4: Scaling target variables
To make training the network easier, we'll standardize each of the continuous variables. That is, we'll shift and scale the variables such that they have zero mean and a standard deviation of 1.
The scaling factors are saved so we can go backwards when we use the network for predictions.
Step5: Splitting the data into training, testing, and validation sets
We'll save the last 21 days of the data to use as a test set after we've trained the network. We'll use this set to make predictions and compare them with the actual number of riders.
Step6: We'll split the data into two sets, one for training and one for validating as the network is being trained. Since this is time series data, we'll train on historical data, then try to predict on future data (the validation set).
Step7: Time to build the network
Below you'll build your network. We've built out the structure and the backwards pass. You'll implement the forward pass through the network. You'll also set the hyperparameters
Step8: Training the network
Here you'll set the hyperparameters for the network. The strategy here is to find hyperparameters such that the error on the training set is low, but you're not overfitting to the data. If you train the network too long or have too many hidden nodes, it can become overly specific to the training set and will fail to generalize to the validation set. That is, the loss on the validation set will start increasing as the training set loss drops.
You'll also be using a method known as Stochastic Gradient Descent (SGD) to train the network. The idea is that for each training pass, you grab a random sample of the data instead of using the whole data set. You use many more training passes than with normal gradient descent, but each pass is much faster. This ends up training the network more efficiently. You'll learn more about SGD later.
Choose the number of epochs
This is the number of times the dataset will pass through the network, each time updating the weights. As the number of epochs increases, the network becomes better and better at predicting the targets in the training set. You'll need to choose enough epochs to train the network well but not too many or you'll be overfitting.
Choose the learning rate
This scales the size of weight updates. If this is too big, the weights tend to explode and the network fails to fit the data. A good choice to start at is 0.1. If the network has problems fitting the data, try reducing the learning rate. Note that the lower the learning rate, the smaller the steps are in the weight updates and the longer it takes for the neural network to converge.
Choose the number of hidden nodes
The more hidden nodes you have, the more accurate predictions the model will make. Try a few different numbers and see how it affects the performance. You can look at the losses dictionary for a metric of the network performance. If the number of hidden units is too low, then the model won't have enough space to learn and if it is too high there are too many options for the direction that the learning can take. The trick here is to find the right balance in number of hidden units you choose.
Step9: Check out your predictions
Here, use the test data to view how well your network is modeling the data. If something is completely wrong here, make sure each step in your network is implemented correctly.
Step10: Thinking about your results
Answer these questions about your results. How well does the model predict the data? Where does it fail? Why does it fail where it does?
Note | Python Code:
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
Explanation: Your first neural network
In this project, you'll build your first neural network and use it to predict daily bike rental ridership. We've provided some of the code, but left the implementation of the neural network up to you (for the most part). After you've submitted this project, feel free to explore the data and the model more.
End of explanation
data_path = 'Bike-Sharing-Dataset/hour.csv'
rides = pd.read_csv(data_path)
rides.head()
Explanation: Load and prepare the data
A critical step in working with neural networks is preparing the data correctly. Variables on different scales make it difficult for the network to efficiently learn the correct weights. Below, we've written the code to load and prepare the data. You'll learn more about this soon!
End of explanation
rides[:24*10].plot(x='dteday', y='cnt')
Explanation: Checking out the data
This dataset has the number of riders for each hour of each day from January 1 2011 to December 31 2012. The number of riders is split between casual and registered, summed up in the cnt column. You can see the first few rows of the data above.
Below is a plot showing the number of bike riders over the first 10 days in the data set. You can see the hourly rentals here. This data is pretty complicated! The weekends have lower overall ridership and there are spikes when people are biking to and from work during the week. Looking at the data above, we also have information about temperature, humidity, and windspeed, all of these likely affecting the number of riders. You'll be trying to capture all this with your model.
End of explanation
dummy_fields = ['season', 'weathersit', 'mnth', 'hr', 'weekday']
for each in dummy_fields:
dummies = pd.get_dummies(rides[each], prefix=each, drop_first=False)
rides = pd.concat([rides, dummies], axis=1)
fields_to_drop = ['instant', 'dteday', 'season', 'weathersit',
'weekday', 'atemp', 'mnth', 'workingday', 'hr']
data = rides.drop(fields_to_drop, axis=1)
data.head()
Explanation: Dummy variables
Here we have some categorical variables like season, weather, month. To include these in our model, we'll need to make binary dummy variables. This is simple to do with Pandas thanks to get_dummies().
End of explanation
quant_features = ['casual', 'registered', 'cnt', 'temp', 'hum', 'windspeed']
# Store scalings in a dictionary so we can convert back later
scaled_features = {}
for each in quant_features:
mean, std = data[each].mean(), data[each].std()
scaled_features[each] = [mean, std]
data.loc[:, each] = (data[each] - mean)/std
data.head()
Explanation: Scaling target variables
To make training the network easier, we'll standardize each of the continuous variables. That is, we'll shift and scale the variables such that they have zero mean and a standard deviation of 1.
The scaling factors are saved so we can go backwards when we use the network for predictions.
End of explanation
# Save the last 21 days
test_data = data[-21*24:]
data = data[:-21*24]
# Separate the data into features and targets
target_fields = ['cnt', 'casual', 'registered']
features, targets = data.drop(target_fields, axis=1), data[target_fields]
test_features, test_targets = test_data.drop(target_fields, axis=1), test_data[target_fields]
targets.head()
Explanation: Splitting the data into training, testing, and validation sets
We'll save the last 21 days of the data to use as a test set after we've trained the network. We'll use this set to make predictions and compare them with the actual number of riders.
End of explanation
# Hold out the last 60 days of the remaining data as a validation set
train_features, train_targets = features[:-60*24], targets[:-60*24]
val_features, val_targets = features[-60*24:], targets[-60*24:]
Explanation: We'll split the data into two sets, one for training and one for validating as the network is being trained. Since this is time series data, we'll train on historical data, then try to predict on future data (the validation set).
End of explanation
def sigmoid(x):
return 1 / (1 + np.exp(-x))
class NeuralNetwork(object):
def __init__(self, input_nodes, hidden_nodes, output_nodes, learning_rate):
# Set number of nodes in input, hidden and output layers.
self.input_nodes = input_nodes
self.hidden_nodes = hidden_nodes
self.output_nodes = output_nodes
# Initialize weights
self.weights_input_to_hidden = np.random.normal(0.0, self.hidden_nodes**-0.5,
(self.hidden_nodes, self.input_nodes))
self.weights_hidden_to_output = np.random.normal(0.0, self.output_nodes**-0.5,
(self.output_nodes, self.hidden_nodes))
self.lr = learning_rate
#### Set this to your implemented sigmoid function ####
# Activation function is the sigmoid function
self.activation_function = sigmoid
def train(self, inputs_list, targets_list):
# Convert inputs list to 2d array
inputs = np.array(inputs_list, ndmin=2).T
targets = np.array(targets_list, ndmin=2).T
#### Implement the forward pass here ####
### Forward pass ###
# TODO: Hidden layer
hidden_inputs = np.dot(self.weights_input_to_hidden, inputs)
hidden_outputs = sigmoid(hidden_inputs)
# TODO: Output layer
final_inputs = np.dot(self.weights_hidden_to_output, hidden_outputs)
final_outputs = 1 * final_inputs
#### Implement the backward pass here ####
### Backward pass ###
# TODO: Output error
output_errors = targets - final_outputs
# TODO: Backpropagated error
hidden_errors = np.dot(output_errors, self.weights_hidden_to_output)
hidden_grad = (hidden_outputs * (1 - hidden_outputs)).T
# TODO: Update the weights
#self.weights_hidden_to_output += self.lr * output_errors * hidden_outputs.T
#self.weights_input_to_hidden += self.lr * (hidden_errors * hidden_grad * inputs).T
#Adjustment after review (should work also with multiple samples per batch)
#REVIEWER COMMENT
#Nice work here! I ran your code and tried with my own implementation and your code runs like a charm!
#Although your answer is right for this case scenario as we use a single training example for
#each weight update (SGD), if we were to use a complete batch your code might start throwing errors.
#To avoid such errors I would suggest using np.dot instead of using * directly in weight updates.
#Please note the difference between the np.dot function and * in numpy -
#np.dot performs matrix multiplication and has a very good property of broadcasting.
#Whereas * does element wise multiplication.
#My suggestion is to use something like this:
#self.weight_input_to_hidden += self.learning_rate * np.dot(hidden_errors * hidden_grad, inputs.T)
#do take care that the dimensionality of the arrays are preserved in your implementations -
#change the order of arguments in dot to do that.
self.weights_hidden_to_output += self.lr * np.dot(output_errors, hidden_outputs.T)
self.weights_input_to_hidden += self.lr * np.dot((hidden_errors * hidden_grad).T, inputs.T)
def run(self, inputs_list):
# Run a forward pass through the network
inputs = np.array(inputs_list, ndmin=2).T
#### Implement the forward pass here ####
# TODO: Hidden layer
hidden_inputs = np.dot(self.weights_input_to_hidden, inputs)
hidden_outputs = sigmoid(hidden_inputs)
# TODO: Output layer
final_inputs = np.dot(self.weights_hidden_to_output, hidden_outputs)
final_outputs = 1 * final_inputs
return final_outputs
def MSE(y, Y):
return np.mean((y-Y)**2)
Explanation: Time to build the network
Below you'll build your network. We've built out the structure and the backwards pass. You'll implement the forward pass through the network. You'll also set the hyperparameters: the learning rate, the number of hidden units, and the number of training passes.
The network has two layers, a hidden layer and an output layer. The hidden layer will use the sigmoid function for activations. The output layer has only one node and is used for the regression; the output of the node is the same as the input of the node. That is, the activation function is $f(x)=x$. A function that takes the input signal and generates an output signal, but takes into account the threshold, is called an activation function. We work through each layer of our network, calculating the outputs for each neuron. All of the outputs from one layer become inputs to the neurons on the next layer. This process is called forward propagation.
We use the weights to propagate signals forward from the input to the output layers in a neural network. We use the weights to also propagate error backwards from the output back into the network to update our weights. This is called backpropagation.
Hint: You'll need the derivative of the output activation function ($f(x) = x$) for the backpropagation implementation. If you aren't familiar with calculus, this function is equivalent to the equation $y = x$. What is the slope of that equation? That is the derivative of $f(x)$.
Below, you have these tasks:
1. Implement the sigmoid function to use as the activation function. Set self.activation_function in __init__ to your sigmoid function.
2. Implement the forward pass in the train method.
3. Implement the backpropagation algorithm in the train method, including calculating the output error.
4. Implement the forward pass in the run method.
End of explanation
import sys
### Set the hyperparameters here ###
epochs = 300
learning_rate = 0.1
hidden_nodes = 50
output_nodes = 1
N_i = train_features.shape[1]
network = NeuralNetwork(N_i, hidden_nodes, output_nodes, learning_rate)
losses = {'train':[], 'validation':[]}
for e in range(epochs):
# Go through a random batch of 128 records from the training data set
batch = np.random.choice(train_features.index, size=128)
for record, target in zip(train_features.ix[batch].values,
train_targets.ix[batch]['cnt']):
network.train(record, target)
# Printing out the training progress
train_loss = MSE(network.run(train_features), train_targets['cnt'].values)
val_loss = MSE(network.run(val_features), val_targets['cnt'].values)
sys.stdout.write("\rProgress: " + str(100 * e/float(epochs))[:4] \
+ "% ... Training loss: " + str(train_loss)[:5] \
+ " ... Validation loss: " + str(val_loss)[:5])
losses['train'].append(train_loss)
losses['validation'].append(val_loss)
plt.plot(losses['train'], label='Training loss')
plt.plot(losses['validation'], label='Validation loss')
plt.legend()
plt.ylim(ymax=1.0)
Explanation: Training the network
Here you'll set the hyperparameters for the network. The strategy here is to find hyperparameters such that the error on the training set is low, but you're not overfitting to the data. If you train the network too long or have too many hidden nodes, it can become overly specific to the training set and will fail to generalize to the validation set. That is, the loss on the validation set will start increasing as the training set loss drops.
You'll also be using a method known as Stochastic Gradient Descent (SGD) to train the network. The idea is that for each training pass, you grab a random sample of the data instead of using the whole data set. You use many more training passes than with normal gradient descent, but each pass is much faster. This ends up training the network more efficiently. You'll learn more about SGD later.
Choose the number of epochs
This is the number of times the dataset will pass through the network, each time updating the weights. As the number of epochs increases, the network becomes better and better at predicting the targets in the training set. You'll need to choose enough epochs to train the network well but not too many or you'll be overfitting.
Choose the learning rate
This scales the size of weight updates. If this is too big, the weights tend to explode and the network fails to fit the data. A good choice to start at is 0.1. If the network has problems fitting the data, try reducing the learning rate. Note that the lower the learning rate, the smaller the steps are in the weight updates and the longer it takes for the neural network to converge.
Choose the number of hidden nodes
The more hidden nodes you have, the more accurate predictions the model will make. Try a few different numbers and see how it affects the performance. You can look at the losses dictionary for a metric of the network performance. If the number of hidden units is too low, then the model won't have enough space to learn and if it is too high there are too many options for the direction that the learning can take. The trick here is to find the right balance in number of hidden units you choose.
End of explanation
fig, ax = plt.subplots(figsize=(16,6))
mean, std = scaled_features['cnt']
predictions = network.run(test_features)*std + mean
ax.plot(predictions[0], label='Prediction')
ax.plot((test_targets['cnt']*std + mean).values, label='Data')
ax.set_xlim(right=len(predictions))
ax.legend()
dates = pd.to_datetime(rides.ix[test_data.index]['dteday'])
dates = dates.apply(lambda d: d.strftime('%b %d'))
ax.set_xticks(np.arange(len(dates))[12::24])
_ = ax.set_xticklabels(dates[12::24], rotation=45)
Explanation: Check out your predictions
Here, use the test data to view how well your network is modeling the data. If something is completely wrong here, make sure each step in your network is implemented correctly.
End of explanation
import unittest
inputs = [0.5, -0.2, 0.1]
targets = [0.4]
test_w_i_h = np.array([[0.1, 0.4, -0.3],
[-0.2, 0.5, 0.2]])
test_w_h_o = np.array([[0.3, -0.1]])
class TestMethods(unittest.TestCase):
##########
# Unit tests for data loading
##########
def test_data_path(self):
# Test that file path to dataset has been unaltered
self.assertTrue(data_path.lower() == 'bike-sharing-dataset/hour.csv')
def test_data_loaded(self):
# Test that data frame loaded
self.assertTrue(isinstance(rides, pd.DataFrame))
##########
# Unit tests for network functionality
##########
def test_activation(self):
network = NeuralNetwork(3, 2, 1, 0.5)
# Test that the activation function is a sigmoid
self.assertTrue(np.all(network.activation_function(0.5) == 1/(1+np.exp(-0.5))))
def test_train(self):
# Test that weights are updated correctly on training
network = NeuralNetwork(3, 2, 1, 0.5)
network.weights_input_to_hidden = test_w_i_h.copy()
network.weights_hidden_to_output = test_w_h_o.copy()
network.train(inputs, targets)
self.assertTrue(np.allclose(network.weights_hidden_to_output,
np.array([[ 0.37275328, -0.03172939]])))
self.assertTrue(np.allclose(network.weights_input_to_hidden,
np.array([[ 0.10562014, 0.39775194, -0.29887597],
[-0.20185996, 0.50074398, 0.19962801]])))
def test_run(self):
# Test correctness of run method
network = NeuralNetwork(3, 2, 1, 0.5)
network.weights_input_to_hidden = test_w_i_h.copy()
network.weights_hidden_to_output = test_w_h_o.copy()
self.assertTrue(np.allclose(network.run(inputs), 0.09998924))
suite = unittest.TestLoader().loadTestsFromModule(TestMethods())
unittest.TextTestRunner().run(suite)
Explanation: Thinking about your results
Answer these questions about your results. How well does the model predict the data? Where does it fail? Why does it fail where it does?
Note: You can edit the text in this cell by double clicking on it. When you want to render the text, press control + enter
The configuration with 50 hidden neurons, 300 epochs and 0.1 learning rate is the best among my trials on the validation set (loss <= 0.150). Furthermore, it also follows the data quite well on the test set. The most difficult days to predict are 13-14 October and 20-21 October, maybe special days of the week (separated by exactly one week; checking the calendar they should be Thursday and Friday). This final network performs better than other trials, but it is still wrong on those days and fails to predict some rapid spikes (the prediction is more conservative). On the other hand, the periodicity seems easy for the network to capture.
Comment added after review
I don't know why, when I submitted the result, the graph showed October rather than the final days of December. Probably I ran the test_features splitting multiple times, which changes the original data variable. On the real test set we have errors on the last part of the year (the holiday pattern is not predicted).
Unit tests
Run these unit tests to check the correctness of your network implementation. These tests must all be successful to pass the project.
End of explanation |
2,515 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Predicting sentiment from product reviews
The goal of this first notebook is to explore logistic regression and feature engineering with existing GraphLab functions.
In this notebook you will use product review data from Amazon.com to predict whether the sentiments about a product (from its reviews) are positive or negative.
Use SFrames to do some feature engineering
Train a logistic regression model to predict the sentiment of product reviews.
Inspect the weights (coefficients) of a trained logistic regression model.
Make a prediction (both class and probability) of sentiment for a new product review.
Given the logistic regression weights, predictors and ground truth labels, write a function to compute the accuracy of the model.
Inspect the coefficients of the logistic regression model and interpret their meanings.
Compare multiple logistic regression models.
Let's get started!
Fire up GraphLab Create
Make sure you have the latest version of GraphLab Create.
Step1: Data preparation
We will use a dataset consisting of baby product reviews on Amazon.com.
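A minimal sketch of this step (the file name amazon_baby.gl/ is an assumption; use whatever path the course materials provide):
```python
import graphlab

# Load the saved SFrame of Amazon baby product reviews
products = graphlab.SFrame('amazon_baby.gl/')
```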
Step2: Now, let us see a preview of what the dataset looks like.
Step3: Build the word count vector for each review
Let us explore a specific example of a baby product.
Step4: Now, we will perform 2 simple data transformations
Step5: Now, let us explore what the sample example above looks like after these 2 transformations. Here, each entry in the word_count column is a dictionary where the key is the word and the value is a count of the number of times the word occurs.
Step6: Extract sentiments
We will ignore all reviews with rating = 3, since they tend to have a neutral sentiment.
Step7: Now, we will assign reviews with a rating of 4 or higher to be positive reviews, while the ones with rating of 2 or lower are negative. For the sentiment column, we use +1 for the positive class label and -1 for the negative class label.
Step8: Now, we can see that the dataset contains an extra column called sentiment which is either positive (+1) or negative (-1).
Split data into training and test sets
Let's perform a train/test split with 80% of the data in the training set and 20% of the data in the test set. We use seed=1 so that everyone gets the same result.
Step9: Train a sentiment classifier with logistic regression
We will now use logistic regression to create a sentiment classifier on the training data. This model will use the column word_count as a feature and the column sentiment as the target. We will use validation_set=None to obtain same results as everyone else.
Note
Step10: Aside. You may get a warning to the effect of "Terminated due to numerical difficulties --- this model may not be ideal". It means that the quality metric (to be covered in Module 3) failed to improve in the last iteration of the run. The difficulty arises as the sentiment model puts too much weight on extremely rare words. A way to rectify this is to apply regularization, to be covered in Module 4. Regularization lessens the effect of extremely rare words. For the purpose of this assignment, however, please proceed with the model above.
Now that we have fitted the model, we can extract the weights (coefficients) as an SFrame as follows
Step11: There are a total of 121713 coefficients in the model. Recall from the lecture that positive weights $w_j$ correspond to weights that cause positive sentiment, while negative weights correspond to negative sentiment.
Fill in the following block of code to calculate how many weights are positive ( >= 0). (Hint
Step12: Quiz Question
Step13: Let's dig deeper into the first row of the sample_test_data. Here's the full review
Step14: That review seems pretty positive.
Now, let's see what the next row of the sample_test_data looks like. As we could guess from the sentiment (-1), the review is quite negative.
Step15: We will now make a class prediction for the sample_test_data. The sentiment_model should predict +1 if the sentiment is positive and -1 if the sentiment is negative. Recall from the lecture that the score (sometimes called margin) for the logistic regression model is defined as
Step16: Predicting sentiment
These scores can be used to make class predictions as follows
Step17: Checkpoint
Step18: Quiz Question
Step19: Now, let's compute the classification accuracy of the sentiment_model on the test_data.
Step20: Quiz Question
Step21: For each review, we will use the word_count column and trim out all words that are not in the significant_words list above. We will use the SArray dictionary trim by keys functionality. Note that we are performing this on both the training and test set.
Step22: Let's see what the first example of the dataset looks like
Step23: The word_count column we had been working with before looks like the following
Step24: Since we are only working with a subset of these words, the column word_count_subset is a subset of the above dictionary. In this example, only 2 significant words are present in this review.
Step25: Train a logistic regression model on a subset of data
We will now build a classifier with word_count_subset as the feature and sentiment as the target.
Step26: We can compute the classification accuracy using the get_classification_accuracy function you implemented earlier.
Step27: Now, we will inspect the weights (coefficients) of the simple_model
Step28: Let's sort the coefficients (in descending order) by the value to obtain the coefficients with the most positive effect on the sentiment.
Step29: Quiz Question | Python Code:
from __future__ import division
import graphlab
import math
import string
Explanation: Predicting sentiment from product reviews
The goal of this first notebook is to explore logistic regression and feature engineering with existing GraphLab functions.
In this notebook you will use product review data from Amazon.com to predict whether the sentiments about a product (from its reviews) are positive or negative.
Use SFrames to do some feature engineering
Train a logistic regression model to predict the sentiment of product reviews.
Inspect the weights (coefficients) of a trained logistic regression model.
Make a prediction (both class and probability) of sentiment for a new product review.
Given the logistic regression weights, predictors and ground truth labels, write a function to compute the accuracy of the model.
Inspect the coefficients of the logistic regression model and interpret their meanings.
Compare multiple logistic regression models.
Let's get started!
Fire up GraphLab Create
Make sure you have the latest version of GraphLab Create.
End of explanation
products = graphlab.SFrame('amazon_baby.gl/')
Explanation: Data preparation
We will use a dataset consisting of baby product reviews on Amazon.com.
End of explanation
products
Explanation: Now, let us see a preview of what the dataset looks like.
End of explanation
products[269]
Explanation: Build the word count vector for each review
Let us explore a specific example of a baby product.
End of explanation
def remove_punctuation(text):
import string
return text.translate(None, string.punctuation)
review_without_punctuation = products['review'].apply(remove_punctuation)
products['word_count'] = graphlab.text_analytics.count_words(review_without_punctuation)
Explanation: Now, we will perform 2 simple data transformations:
Remove punctuation using Python's built-in string functionality.
Transform the reviews into word-counts.
Aside. In this notebook, we remove all punctuation for the sake of simplicity. A smarter approach to punctuation would preserve phrases such as "I'd", "would've", "hadn't" and so forth. See this page for an example of smart handling of punctuation.
End of explanation
products[269]['word_count']
Explanation: Now, let us explore what the sample example above looks like after these 2 transformations. Here, each entry in the word_count column is a dictionary where the key is the word and the value is a count of the number of times the word occurs.
End of explanation
products = products[products['rating'] != 3]
len(products)
Explanation: Extract sentiments
We will ignore all reviews with rating = 3, since they tend to have a neutral sentiment.
End of explanation
products['sentiment'] = products['rating'].apply(lambda rating : +1 if rating > 3 else -1)
products
Explanation: Now, we will assign reviews with a rating of 4 or higher to be positive reviews, while the ones with rating of 2 or lower are negative. For the sentiment column, we use +1 for the positive class label and -1 for the negative class label.
End of explanation
train_data, test_data = products.random_split(.8, seed=1)
print len(train_data)
print len(test_data)
Explanation: Now, we can see that the dataset contains an extra column called sentiment which is either positive (+1) or negative (-1).
Split data into training and test sets
Let's perform a train/test split with 80% of the data in the training set and 20% of the data in the test set. We use seed=1 so that everyone gets the same result.
End of explanation
sentiment_model = graphlab.logistic_classifier.create(train_data,
target = 'sentiment',
features=['word_count'],
validation_set=None)
sentiment_model
Explanation: Train a sentiment classifier with logistic regression
We will now use logistic regression to create a sentiment classifier on the training data. This model will use the column word_count as a feature and the column sentiment as the target. We will use validation_set=None to obtain same results as everyone else.
Note: This line may take 1-2 minutes.
End of explanation
weights = sentiment_model.coefficients
weights.column_names()
Explanation: Aside. You may get a warning to the effect of "Terminated due to numerical difficulties --- this model may not be ideal". It means that the quality metric (to be covered in Module 3) failed to improve in the last iteration of the run. The difficulty arises as the sentiment model puts too much weight on extremely rare words. A way to rectify this is to apply regularization, to be covered in Module 4. Regularization lessens the effect of extremely rare words. For the purpose of this assignment, however, please proceed with the model above.
Now that we have fitted the model, we can extract the weights (coefficients) as an SFrame as follows:
End of explanation
num_positive_weights = ...
num_negative_weights = ...
print "Number of positive weights: %s " % num_positive_weights
print "Number of negative weights: %s " % num_negative_weights
Explanation: There are a total of 121713 coefficients in the model. Recall from the lecture that positive weights $w_j$ correspond to weights that cause positive sentiment, while negative weights correspond to negative sentiment.
Fill in the following block of code to calculate how many weights are positive ( >= 0). (Hint: The 'value' column in SFrame weights must be positive ( >= 0)).
End of explanation
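As a sketch of one possible completion (not the official solution), the counts could be obtained with standard SFrame boolean filtering on the weights table extracted above:
# Count coefficients with non-negative and negative values in the weights SFrame.
num_positive_weights = len(weights[weights['value'] >= 0])
num_negative_weights = len(weights[weights['value'] < 0])
print "Number of positive weights: %s " % num_positive_weights
print "Number of negative weights: %s " % num_negative_weights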
sample_test_data = test_data[10:13]
print sample_test_data['rating']
sample_test_data
Explanation: Quiz Question: How many weights are >= 0?
Making predictions with logistic regression
Now that a model is trained, we can make predictions on the test data. In this section, we will explore this in the context of 3 examples in the test dataset. We refer to this set of 3 examples as the sample_test_data.
End of explanation
sample_test_data[0]['review']
Explanation: Let's dig deeper into the first row of the sample_test_data. Here's the full review:
End of explanation
sample_test_data[1]['review']
Explanation: That review seems pretty positive.
Now, let's see what the next row of the sample_test_data looks like. As we could guess from the sentiment (-1), the review is quite negative.
End of explanation
scores = sentiment_model.predict(sample_test_data, output_type='margin')
print scores
Explanation: We will now make a class prediction for the sample_test_data. The sentiment_model should predict +1 if the sentiment is positive and -1 if the sentiment is negative. Recall from the lecture that the score (sometimes called margin) for the logistic regression model is defined as:
$$
\mbox{score}_i = \mathbf{w}^T h(\mathbf{x}_i)
$$
where $h(\mathbf{x}_i)$ represents the features for example $i$. We will write some code to obtain the scores using GraphLab Create. For each row, the score (or margin) is a number in the range [-inf, inf].
End of explanation
print "Class predictions according to GraphLab Create:"
print sentiment_model.predict(sample_test_data)
Explanation: Predicting sentiment
These scores can be used to make class predictions as follows:
$$
\hat{y} =
\left\{
\begin{array}{ll}
+1 & \mathbf{w}^T h(\mathbf{x}_i) > 0 \\
-1 & \mathbf{w}^T h(\mathbf{x}_i) \leq 0 \\
\end{array}
\right.
$$
Using scores, write code to calculate $\hat{y}$, the class predictions:
Run the following code to verify that the class predictions obtained by your calculations are the same as that obtained from GraphLab Create.
End of explanation
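As a sketch of one way to compute these class predictions by hand (assuming scores is the SArray of margins from the previous cell; the name predicted_sentiment is ours):
# Decision rule: predict +1 when the margin is positive, -1 otherwise.
predicted_sentiment = scores.apply(lambda score: +1 if score > 0 else -1)
print predicted_sentiment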
print "Class predictions according to GraphLab Create:"
print sentiment_model.predict(sample_test_data, output_type='probability')
Explanation: Checkpoint: Make sure your class predictions match with the one obtained from GraphLab Create.
Probability predictions
Recall from the lectures that we can also calculate the probability predictions from the scores using:
$$
P(y_i = +1 | \mathbf{x}_i,\mathbf{w}) = \frac{1}{1 + \exp(-\mathbf{w}^T h(\mathbf{x}_i))}.
$$
Using the variable scores calculated previously, write code to calculate the probability that a sentiment is positive using the above formula. For each row, the probabilities should be a number in the range [0, 1].
Checkpoint: Make sure your probability predictions match the ones obtained from GraphLab Create.
End of explanation
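A minimal sketch of the manual probability calculation (again assuming the scores variable from above; SArray.apply applies the sigmoid element-wise):
import math
# Sigmoid of the margin: P(y = +1 | x, w) = 1 / (1 + exp(-score)).
probabilities = scores.apply(lambda score: 1.0 / (1.0 + math.exp(-score)))
print probabilities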
def get_classification_accuracy(model, data, true_labels):
# First get the predictions
## YOUR CODE HERE
...
# Compute the number of correctly classified examples
## YOUR CODE HERE
...
# Then compute accuracy by dividing num_correct by total number of examples
## YOUR CODE HERE
...
return accuracy
Explanation: Quiz Question: Of the three data points in sample_test_data, which one (first, second, or third) has the lowest probability of being classified as a positive review?
Find the most positive (and negative) review
We now turn to examining the full test dataset, test_data, and use GraphLab Create to form predictions on all of the test data points for faster performance.
Using the sentiment_model, find the 20 reviews in the entire test_data with the highest probability of being classified as a positive review. We refer to these as the "most positive reviews."
To calculate these top-20 reviews, use the following steps:
1. Make probability predictions on test_data using the sentiment_model. (Hint: When you call .predict to make predictions on the test data, use option output_type='probability' to output the probability rather than just the most likely class.)
2. Sort the data according to those predictions and pick the top 20. (Hint: You can use the .topk method on an SFrame to find the top k rows sorted according to the value of a specified column.)
Quiz Question: Which of the following products are represented in the 20 most positive reviews? [multiple choice]
Now, let us repeat this exercise to find the "most negative reviews." Use the prediction probabilities to find the 20 reviews in the test_data with the lowest probability of being classified as a positive review. Repeat the same steps above but make sure you sort in the opposite order.
Quiz Question: Which of the following products are represented in the 20 most negative reviews? [multiple choice]
Compute accuracy of the classifier
We will now evaluate the accuracy of the trained classifier. Recall that the accuracy is given by
$$
\mbox{accuracy} = \frac{\mbox{# correctly classified examples}}{\mbox{# total examples}}
$$
This can be computed as follows:
Step 1: Use the trained model to compute class predictions (Hint: Use the predict method)
Step 2: Count the number of data points when the predicted class labels match the ground truth labels (called true_labels below).
Step 3: Divide the total number of correct predictions by the total number of data points in the dataset.
Complete the function below to compute the classification accuracy:
End of explanation
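A sketch of one possible completion of the accuracy function, plus the topk idiom described above (variable names such as predicted_probability are illustrative, not part of the assignment):
def get_classification_accuracy(model, data, true_labels):
    # First get the +1/-1 class predictions from the model.
    predictions = model.predict(data)
    # Count the examples where the prediction matches the ground truth.
    num_correct = (predictions == true_labels).sum()
    # Accuracy = correct predictions / total number of examples.
    accuracy = num_correct / len(data)
    return accuracy
# Most positive / most negative reviews: attach probabilities and take the top 20.
test_data['predicted_probability'] = sentiment_model.predict(test_data, output_type='probability')
most_positive = test_data.topk('predicted_probability', k=20)
most_negative = test_data.topk('predicted_probability', k=20, reverse=True)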
get_classification_accuracy(sentiment_model, test_data, test_data['sentiment'])
Explanation: Now, let's compute the classification accuracy of the sentiment_model on the test_data.
End of explanation
significant_words = ['love', 'great', 'easy', 'old', 'little', 'perfect', 'loves',
'well', 'able', 'car', 'broke', 'less', 'even', 'waste', 'disappointed',
'work', 'product', 'money', 'would', 'return']
len(significant_words)
Explanation: Quiz Question: What is the accuracy of the sentiment_model on the test_data? Round your answer to 2 decimal places (e.g. 0.76).
Quiz Question: Does a higher accuracy value on the training_data always imply that the classifier is better?
Learn another classifier with fewer words
There were a lot of words in the model we trained above. We will now train a simpler logistic regression model using only a subset of words that occur in the reviews. For this assignment, we selected 20 words to work with. These are:
End of explanation
train_data['word_count_subset'] = train_data['word_count'].dict_trim_by_keys(significant_words, exclude=False)
test_data['word_count_subset'] = test_data['word_count'].dict_trim_by_keys(significant_words, exclude=False)
Explanation: For each review, we will use the word_count column and trim out all words that are not in the significant_words list above. We will use the SArray dictionary trim by keys functionality. Note that we are performing this on both the training and test set.
End of explanation
train_data[0]['review']
Explanation: Let's see what the first example of the dataset looks like:
End of explanation
print train_data[0]['word_count']
Explanation: The word_count column we had been working with before looks like the following:
End of explanation
print train_data[0]['word_count_subset']
Explanation: Since we are only working with a subset of these words, the column word_count_subset is a subset of the above dictionary. In this example, only 2 significant words are present in this review.
End of explanation
simple_model = graphlab.logistic_classifier.create(train_data,
target = 'sentiment',
features=['word_count_subset'],
validation_set=None)
simple_model
Explanation: Train a logistic regression model on a subset of data
We will now build a classifier with word_count_subset as the feature and sentiment as the target.
End of explanation
get_classification_accuracy(simple_model, test_data, test_data['sentiment'])
Explanation: We can compute the classification accuracy using the get_classification_accuracy function you implemented earlier.
End of explanation
simple_model.coefficients
Explanation: Now, we will inspect the weights (coefficients) of the simple_model:
End of explanation
simple_model.coefficients.sort('value', ascending=False).print_rows(num_rows=21)
Explanation: Let's sort the coefficients (in descending order) by the value to obtain the coefficients with the most positive effect on the sentiment.
End of explanation
num_positive = (train_data['sentiment'] == +1).sum()
num_negative = (train_data['sentiment'] == -1).sum()
print num_positive
print num_negative
Explanation: Quiz Question: Consider the coefficients of simple_model. There should be 21 of them, an intercept term + one for each word in significant_words. How many of the 20 coefficients (corresponding to the 20 significant_words and excluding the intercept term) are positive for the simple_model?
Quiz Question: Are the positive words in the simple_model (let us call them positive_significant_words) also positive words in the sentiment_model?
Comparing models
We will now compare the accuracy of the sentiment_model and the simple_model using the get_classification_accuracy method you implemented above.
First, compute the classification accuracy of the sentiment_model on the train_data:
Now, compute the classification accuracy of the simple_model on the train_data:
Quiz Question: Which model (sentiment_model or simple_model) has higher accuracy on the TRAINING set?
Now, we will repeat this exercise on the test_data. Start by computing the classification accuracy of the sentiment_model on the test_data:
Next, we will compute the classification accuracy of the simple_model on the test_data:
Quiz Question: Which model (sentiment_model or simple_model) has higher accuracy on the TEST set?
Baseline: Majority class prediction
It is quite common to use the majority class classifier as a baseline (or reference) model for comparison with your classifier model. The majority class classifier simply predicts the majority class for all data points. At the very least, you should comfortably beat the majority class classifier; otherwise, the model is (usually) pointless.
What is the majority class in the train_data?
End of explanation |
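As a sketch, the majority-class baseline accuracy on the test set could be computed along these lines (assuming the positive class is the majority, which the counts above suggest):
# Baseline: always predict the majority class (+1) and measure accuracy on test_data.
num_positive_test = (test_data['sentiment'] == +1).sum()
baseline_accuracy = num_positive_test / len(test_data)
print "Majority class baseline accuracy: ", baseline_accuracy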
2,516 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Convenience
This is an example of aneris' convenience module. This module doesn't have anywhere near the error checking of aneris' other features, but it does make it slightly simpler to calibrate timeseries and it adds unit handling onto aneris' harmonisation.
Step1: We start by loading some dummy data.
Step2: The data must be set up slightly differently to use the convenience methods (it should match the format provided by scmdata and pyam aka the IAMC style).
Step3: We're also going to only harmonise the World data.
Step4: Finally, we alter the units of the historical data.
Step5: Now we harmonise the data using the convenience methods. Note how the historical data's units have been converted to the input data's units before harmonisation.
Step6: Make a plot to examine (doing this without scmdata/pyam is fiddly).
Step7: The above plot makes clear that the default harmonisation method is reduce_ratio_2080. We can override this using the overrides argument, which takes a pandas DataFrame as input.
Step8: A quick plot shows the change in output as a result of overriding the method. | Python Code:
import matplotlib.pyplot as plt
import pandas as pd
import aneris.tutorial
import aneris.convenience
plt.rcParams["figure.figsize"] = (12, 8)
Explanation: Convenience
This is an example of aneris' convenience module. This module doesn't have anywhere near the error checking of aneris' other features, but it does make it slightly simpler to calibrate timeseries and it adds unit handling onto aneris' harmonisation.
End of explanation
model, hist, _ = aneris.tutorial.load_data()
model
hist
Explanation: We start by loading some dummy data.
End of explanation
def convert_to_iamc_style(inp, idx=("model", "scenario", "region", "variable", "unit")):
out = inp.copy()
out.columns = out.columns.str.lower()
out = out.set_index(list(idx))
out.columns = out.columns.map(int)
return out
hist_iamc_style = convert_to_iamc_style(hist)
model_iamc_style = convert_to_iamc_style(model)
model_iamc_style
Explanation: The data must be set up slightly differently to use the convenience methods (it should match the format provided by scmdata and pyam aka the IAMC style).
End of explanation
hist_iamc_style = hist_iamc_style[hist_iamc_style.index.get_level_values("region") == "World"]
model_iamc_style = model_iamc_style[model_iamc_style.index.get_level_values("region") == "World"]
model_iamc_style
Explanation: We're also going to only harmonise the World data.
End of explanation
hist_iamc_style *= 1000
hist_iamc_style.index = hist_iamc_style.index.set_levels(["kt BC / yr"], level="unit")
hist_iamc_style
Explanation: Finally, we alter the units of the historical data.
End of explanation
model_harmonised = aneris.convenience.harmonise_all(
scenarios=model_iamc_style,
history=hist_iamc_style,
harmonisation_year=2005,
)
model_harmonised
Explanation: Now we harmonise the data using the convenience methods. Note how the historical data's units have been converted to the input data's units before harmonisation.
End of explanation
model_iamc_style_pdf = model_iamc_style.copy()
model_iamc_style_pdf["harmonised"] = False
model_harmonised_pdf = model_harmonised.copy()
model_harmonised_pdf["harmonised"] = True
pd.concat([model_iamc_style_pdf, model_harmonised_pdf]).groupby(["harmonised", "variable"]).mean().T.plot()
Explanation: Make a plot to examine (doing this without scmdata/pyam is fiddly).
End of explanation
overrides = pd.DataFrame([
{"variable": "prefix|Emissions|BC|suffix", "method": "reduce_offset_2030"},
{"variable": "prefix|Emissions|BC|sector1|suffix", "method": "reduce_ratio_2100"},
])
overrides
model_harmonised_overrides = aneris.convenience.harmonise_all(
scenarios=model_iamc_style,
history=hist_iamc_style,
harmonisation_year=2005,
overrides=overrides
)
model_harmonised_overrides
Explanation: The above plot makes clear that the default harmonisation method is reduce_ratio_2080. We can override this using the overrides argument, which takes a pandas DataFrame as input.
End of explanation
model_iamc_style_pdf = model_iamc_style.copy()
model_iamc_style_pdf["harmonised"] = False
model_harmonised_overrides_pdf = model_harmonised_overrides.copy()
model_harmonised_overrides_pdf["harmonised"] = True
pd.concat([model_iamc_style_pdf, model_harmonised_overrides_pdf]).groupby(["harmonised", "variable"]).mean().T.plot()
Explanation: A quick plot shows the change in output as a result of overriding the method.
End of explanation |
2,517 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Flowers Image Classification with TensorFlow on Cloud ML Engine
This notebook demonstrates how to do image classification from scratch on a flowers dataset using the Estimator API.
Step1: Input functions to read JPEG images
The key difference between this notebook and the MNIST one is in the input function.
In the input function here, we are doing the following
Step2: Now, let's do it on ML Engine. Note the --model parameter
Step3: Monitor training with TensorBoard
To activate TensorBoard within the JupyterLab UI navigate to "<b>File</b>" - "<b>New Launcher</b>". Then double-click the 'Tensorboard' icon on the bottom row.
TensorBoard 1 will appear in the new tab. Navigate through the three tabs to see the active TensorBoard. The 'Graphs' and 'Projector' tabs offer very interesting information including the ability to replay the tests.
You may close the TensorBoard tab when you are finished exploring.
Deploying and predicting with model
Deploy the model
Step4: To predict with the model, let's take one of the example images that is available on Google Cloud Storage <img src="http
Step5: Send it to the prediction service | Python Code:
import os
PROJECT = "cloud-training-demos" # REPLACE WITH YOUR PROJECT ID
BUCKET = "cloud-training-demos-ml" # REPLACE WITH YOUR BUCKET NAME
REGION = "us-central1" # REPLACE WITH YOUR BUCKET REGION e.g. us-central1
MODEL_TYPE = "cnn"
# do not change these
os.environ["PROJECT"] = PROJECT
os.environ["BUCKET"] = BUCKET
os.environ["REGION"] = REGION
os.environ["MODEL_TYPE"] = MODEL_TYPE
os.environ["TFVERSION"] = "1.13" # Tensorflow version
%%bash
gcloud config set project $PROJECT
gcloud config set compute/region $REGION
Explanation: Flowers Image Classification with TensorFlow on Cloud ML Engine
This notebook demonstrates how to do image classification from scratch on a flowers dataset using the Estimator API.
End of explanation
%%bash
rm -rf flowersmodel.tar.gz flowers_trained
gcloud ai-platform local train \
--module-name=flowersmodel.task \
--package-path=${PWD}/flowersmodel \
-- \
--output_dir=${PWD}/flowers_trained \
--train_steps=5 \
--learning_rate=0.01 \
--batch_size=2 \
--model=$MODEL_TYPE \
--augment \
--train_data_path=gs://cloud-ml-data/img/flower_photos/train_set.csv \
--eval_data_path=gs://cloud-ml-data/img/flower_photos/eval_set.csv
Explanation: Input functions to read JPEG images
The key difference between this notebook and the MNIST one is in the input function.
In the input function here, we are doing the following:
* Reading JPEG images, rather than 2D integer arrays.
* Reading in batches of batch_size images rather than slicing our in-memory structure to be batch_size images.
* Resizing the images to the expected HEIGHT, WIDTH. Because this is a real-world dataset, the images are of different sizes. We need to preprocess the data to, at the very least, resize them to constant size.
Run as a Python module
Since we want to run our code on Cloud ML Engine, we've packaged it as a python module.
The model.py and task.py containing the model code is in <a href="flowersmodel">flowersmodel</a>
Complete the TODOs in model.py before proceeding!
Once you've completed the TODOs, run it locally for a few steps to test the code.
End of explanation
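The actual input function lives in flowersmodel/model.py and is not reproduced here; purely as an illustrative sketch of the decode-and-resize step described above (HEIGHT and WIDTH are placeholder values, not the module's real constants):
import tensorflow as tf
HEIGHT, WIDTH = 299, 299  # placeholder values for illustration only
def decode_and_resize(image_bytes):
    # Decode JPEG bytes, scale pixels to [0, 1], and resize to a fixed shape.
    image = tf.image.decode_jpeg(image_bytes, channels=3)
    image = tf.image.convert_image_dtype(image, dtype=tf.float32)
    image = tf.image.resize_images(image, [HEIGHT, WIDTH])
    return image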
%%bash
OUTDIR=gs://${BUCKET}/flowers/trained_${MODEL_TYPE}
JOBNAME=flowers_${MODEL_TYPE}_$(date -u +%y%m%d_%H%M%S)
echo $OUTDIR $REGION $JOBNAME
gsutil -m rm -rf $OUTDIR
gcloud ai-platform jobs submit training $JOBNAME \
--region=$REGION \
--module-name=flowersmodel.task \
--package-path=${PWD}/flowersmodel \
--job-dir=$OUTDIR \
--staging-bucket=gs://$BUCKET \
--scale-tier=BASIC_GPU \
--runtime-version=$TFVERSION \
-- \
--output_dir=$OUTDIR \
--train_steps=1000 \
--learning_rate=0.01 \
--batch_size=40 \
--model=$MODEL_TYPE \
--augment \
--batch_norm \
--train_data_path=gs://cloud-ml-data/img/flower_photos/train_set.csv \
--eval_data_path=gs://cloud-ml-data/img/flower_photos/eval_set.csv
Explanation: Now, let's do it on ML Engine. Note the --model parameter
End of explanation
%%bash
MODEL_NAME="flowers"
MODEL_VERSION=${MODEL_TYPE}
MODEL_LOCATION=$(gsutil ls gs://${BUCKET}/flowers/trained_${MODEL_TYPE}/export/exporter | tail -1)
echo "Deleting and deploying $MODEL_NAME $MODEL_VERSION from $MODEL_LOCATION ... this will take a few minutes"
#gcloud ai-platform versions delete --quiet ${MODEL_VERSION} --model ${MODEL_NAME}
#gcloud ai-platform models delete ${MODEL_NAME}
gcloud ai-platform models create ${MODEL_NAME} --regions $REGION
gcloud ai-platform versions create ${MODEL_VERSION} --model ${MODEL_NAME} --origin ${MODEL_LOCATION} --runtime-version=$TFVERSION
Explanation: Monitor training with TensorBoard
To activate TensorBoard within the JupyterLab UI navigate to "<b>File</b>" - "<b>New Launcher</b>". Then double-click the 'Tensorboard' icon on the bottom row.
TensorBoard 1 will appear in the new tab. Navigate through the three tabs to see the active TensorBoard. The 'Graphs' and 'Projector' tabs offer very interesting information including the ability to replay the tests.
You may close the TensorBoard tab when you are finished exploring.
Deploying and predicting with model
Deploy the model:
End of explanation
%%bash
IMAGE_URL=gs://cloud-ml-data/img/flower_photos/sunflowers/1022552002_2b93faf9e7_n.jpg
# Copy the image to local disk.
gsutil cp $IMAGE_URL flower.jpg
# Base64 encode and create request message in json format.
python -c 'import base64, sys, json; img = base64.b64encode(open("flower.jpg", "rb").read()).decode(); print(json.dumps({"image_bytes":{"b64": img}}))' &> request.json
Explanation: To predict with the model, let's take one of the example images that is available on Google Cloud Storage <img src="http://storage.googleapis.com/cloud-ml-data/img/flower_photos/sunflowers/1022552002_2b93faf9e7_n.jpg" />
The online prediction service expects images to be base64 encoded as described here.
End of explanation
%%bash
gcloud ai-platform predict \
--model=flowers \
--version=${MODEL_TYPE} \
--json-instances=./request.json
Explanation: Send it to the prediction service
End of explanation |
2,518 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Examining racial discrimination in the US job market
Background
Racial discrimination continues to be pervasive in cultures throughout the world. Researchers examined the level of racial discrimination in the United States labor market by randomly assigning identical résumés black-sounding or white-sounding names and observing the impact on requests for interviews from employers.
Data
In the dataset provided, each row represents a resume. The 'race' column has two values, 'b' and 'w', indicating black-sounding and white-sounding. The column 'call' has two values, 1 and 0, indicating whether the resume received a call from employers or not.
Note that the 'b' and 'w' values in race are assigned randomly to the resumes.
Exercise
You will perform a statistical analysis to establish whether race has a significant impact on the rate of callbacks for resumes.
Answer the following questions in this notebook below and submit to your Github account.
What test is appropriate for this problem? Does CLT apply?
What are the null and alternate hypotheses?
Compute margin of error, confidence interval, and p-value.
Discuss statistical significance.
You can include written notes in notebook cells using Markdown
Step1: The outcome variable here is binary, so this might be treated in several ways. First, it might be possible to apply the normal approximation to the binomial distribution. In this case, the distribution proportions is $\mathcal{N}(np,np(1-p))$
There are a number of guidelines as to whether this is a suitable approximation (see Wikipedia for a list of such conditions), some of which include
Step2: So, the difference in probability of a call-back is statistically significant here.
Plotting the distribution for call-backs with black-sounding names, it looks fairly symmetrical and well-behaved, so it's quite likely that the normal approximation is fairly reasonable here.
Step3: Alternatives
Because the normal distribution is only an approximation, the assumptions don't always work out for a particular data set. There are several methods for calculating confidence intervals around the estimated proportion. For example, with a significance level of $\alpha$, the Jeffrey's interval is defined as the $\frac{\alpha}{2}$ and 1-$\frac{\alpha}{2}$ quantiles of a beta$(x+\frac{1}{2}, n-x+\frac{1}{2})$ distribution. Using scipy
Step4: The complete lack of overlap in the intervals here implies a significant difference with $p\lt 0.05$ (Cumming & Finch,2005). Given that this particular interval can be interpreted as a Bayesian credible interval, this is a fairly comfortable conclusion.
Calculating credible intervals using Markov Chain Monte Carlo
Slightly different method of calculating approximately the same thing (the beta distribution used above the posterior distribution given given the observations with a Jeffreys prior)
Step5: Estimating rough 95% credible intervals
Step6: So, this method gives a result that fits quite nicely with previous results, while allowing more flexible specification of priors.
Interval for sampled differences in proportions
Step7: And this interval does not include 0, so that we're left fairly confident that black-sounding names get less call-backs, although the estimated differences in proportions are fairly small (significant in the technical sense isn't really the right word to describe this part).
Accounting for additional factors
Step8: Checking to see if computer skills have a significant effect on call-backs
Step9: The effect might be described as marginal, but probably best not to over-interpret. But maybe the combination of race and computer skills makes a difference? Apparently not in this data (not even an improvement to the model log-likelihood or other measures of model fit) | Python Code:
%matplotlib inline
from __future__ import division
import matplotlib
matplotlib.rcParams['figure.figsize'] = (15.0,5.0)
import pandas as pd
import numpy as np
from scipy import stats
data = pd.io.stata.read_stata('data/us_job_market_discrimination.dta')
print "Total count: ",len(data)
print "race == 'b': ",len(data[data.race=='b'])
print "race == 'w': ",len(data[data.race=='w'])
data.head()
# number of callbacks and proportion of callbacks
print "Callback count for black-sounding names: ",sum(data[data.race=='b'].call)
print "Callback proportion for black-sounding names: ",sum(data[data.race=='b'].call)/len(data[data.race=='b'])
print "Callback count for white-sounding names: ",sum(data[data.race=='w'].call)
print "Callback proportion for white-sounding names: ",sum(data[data.race=='w'].call)/len(data[data.race=='w'])
Explanation: Examining racial discrimination in the US job market
Background
Racial discrimination continues to be pervasive in cultures throughout the world. Researchers examined the level of racial discrimination in the United States labor market by randomly assigning identical résumés black-sounding or white-sounding names and observing the impact on requests for interviews from employers.
Data
In the dataset provided, each row represents a resume. The 'race' column has two values, 'b' and 'w', indicating black-sounding and white-sounding. The column 'call' has two values, 1 and 0, indicating whether the resume received a call from employers or not.
Note that the 'b' and 'w' values in race are assigned randomly to the resumes.
Exercise
You will perform a statistical analysis to establish whether race has a significant impact on the rate of callbacks for resumes.
Answer the following questions in this notebook below and submit to your Github account.
What test is appropriate for this problem? Does CLT apply?
What are the null and alternate hypotheses?
Compute margin of error, confidence interval, and p-value.
Discuss statistical significance.
You can include written notes in notebook cells using Markdown:
- In the control panel at the top, choose Cell > Cell Type > Markdown
- Markdown syntax: http://nestacms.com/docs/creating-content/markdown-cheat-sheet
Resources
Experiment information and data source: http://www.povertyactionlab.org/evaluation/discrimination-job-market-united-states
Scipy statistical methods: http://docs.scipy.org/doc/scipy/reference/stats.html
Markdown syntax: http://nestacms.com/docs/creating-content/markdown-cheat-sheet
End of explanation
xb = sum(data[data.race=='b'].call)
nb = len(data[data.race=='b'])
xw = sum(data[data.race=='w'].call)
nw = len(data[data.race=='w'])
pHat = (nb*(xb/nb) + nw*(xw/nw))/(nb+nw)
se = np.sqrt(pHat*(1-pHat)*(1/nb + 1/nw))
z = (xb/nb -xw/nw)/se
print "z-score:",round(z,3),"p =", round(stats.norm.sf(abs(z))*2,6)
Explanation: The outcome variable here is binary, so this might be treated in several ways. First, it might be possible to apply the normal approximation to the binomial distribution. In this case, the distribution proportions is $\mathcal{N}(np,np(1-p))$
There are a number of guidelines as to whether this is a suitable approximation (see Wikipedia for a list of such conditions), some of which include:
n > 20 (or 30)
np > 5, np(1-p) > 5 (or 10)
But these conditions can be roughly summed up as not too small of a sample and an estimated proportion far enough from 0 and 1 that the distribution isn't overly skewed. If the normal approximation is reasonable, a z-test can be used, with the following standard error calculation:
$$SE = \sqrt{\hat{p}(1-\hat{p})\left(\frac{1}{n_1}+\frac{1}{n_2}\right)}$$
where $$\hat{p}=\frac{n_1 p_1+n_2 p_2}{n_1+n_2}$$
giving
$$z = \frac{p_1-p_2}{SE}$$
End of explanation
pb = xb/nb
x = np.arange(110,210)
matplotlib.pyplot.vlines(x,0,stats.binom.pmf(x,nb,pb))
Explanation: So, the difference in probability of a call-back is statistically significant here.
Plotting the distribution for call-backs with black-sounding names, it looks fairly symmetrical and well-behaved, so it's quite likely that the normal approximation is fairly reasonable here.
End of explanation
intervalB = (stats.beta.ppf(0.025,xb+0.5,nb-xb+0.5),stats.beta.ppf(0.975,xb+0.5,nb-xb+0.5))
intervalW = (stats.beta.ppf(0.025,xw+0.5,nw-xw+0.5),stats.beta.ppf(0.975,xw+0.5,nw-xw+0.5))
print "Interval for black-sounding names: ",map(lambda x: round(x,3),intervalB)
print "Interval for white-sounding names: ",map(lambda x: round(x,3),intervalW)
Explanation: Alternatives
Because the normal distribution is only an approximation, the assumptions don't always work out for a particular data set. There are several methods for calculating confidence intervals around the estimated proportion. For example, with a significance level of $\alpha$, the Jeffreys interval is defined as the $\frac{\alpha}{2}$ and $1-\frac{\alpha}{2}$ quantiles of a beta$(x+\frac{1}{2}, n-x+\frac{1}{2})$ distribution. Using scipy:
End of explanation
import pystan
modelCode = '''
data {
int<lower=0> N;
int<lower=1,upper=2> G[N];
int<lower=0,upper=1> y[N];
}
parameters {
real<lower=0,upper=1> theta[2];
}
model {
# beta(0.5,0.5) prior
theta ~ beta(0.5,0.5);
# bernoulli likelihood
# This could be modified to use a binomial with successes and counts instead
for (i in 1:N)
y[i] ~ bernoulli(theta[G[i]]);
}
generated quantities {
real diff;
// difference in proportions:
diff <- theta[1]-theta[2];
}
'''
model = pystan.StanModel(model_code=modelCode)
dataDict = dict(N=len(data),G=np.where(data.race=='b',1,2),y=map(int,data.call))
fit = model.sampling(data=dataDict)
print fit
samples = fit.extract(permuted=True)
MCMCIntervalB = np.percentile(samples['theta'].transpose()[0],[2.5,97.5])
MCMCIntervalW = np.percentile(samples['theta'].transpose()[1],[2.5,97.5])
fit.plot().show()
Explanation: The complete lack of overlap in the intervals here implies a significant difference with $p\lt 0.05$ (Cumming & Finch,2005). Given that this particular interval can be interpreted as a Bayesian credible interval, this is a fairly comfortable conclusion.
Calculating credible intervals using Markov Chain Monte Carlo
A slightly different method of calculating approximately the same thing (the beta distribution used above is the posterior distribution of the proportion given the observations with a Jeffreys prior):
End of explanation
print map(lambda x: round(x,3),MCMCIntervalB)
print map(lambda x: round(x,3),MCMCIntervalW)
Explanation: Estimating rough 95% credible intervals:
End of explanation
print map(lambda x: round(x,3),np.percentile(samples['diff'],[2.5,97.5]))
Explanation: So, this method gives a result that fits quite nicely with previous results, while allowing more flexible specification of priors.
Interval for sampled differences in proportions:
End of explanation
data.columns
# The data is balanced by design, and this mostly isn't a problem for relatively simple models.
# For example:
pd.crosstab(data.computerskills,data.race)
import statsmodels.formula.api as smf
Explanation: And this interval does not include 0, so that we're left fairly confident that black-sounding names get less call-backs, although the estimated differences in proportions are fairly small (significant in the technical sense isn't really the right word to describe this part).
Accounting for additional factors:
A next step here would be to check whether other factors influence the proportion of call-backs. This can be done using logistic regression, although there will be a limit to the complexity of the model that can be fit, given that the proportion of call-backs is quite small, potentially leading to small cell counts and unstable estimates (one rule of thumb is that n > 30 per cell is reasonably safe).
End of explanation
glm = smf.Logit.from_formula(formula="call~race+computerskills",data=data).fit()
glm.summary()
Explanation: Checking to see if computer skills have a significant effect on call-backs:
End of explanation
glm2 = smf.Logit.from_formula(formula="call~race*computerskills",data=data).fit()
glm2.summary()
Explanation: The effect might be described as marginal, but probably best not to over-interpret. But maybe the combination of race and computer skills makes a difference? Apparently not in this data (not even an improvement to the model log-likelihood or other measures of model fit):
End of explanation |
2,519 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Atmoschem
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Key Properties --> Timestep Framework
4. Key Properties --> Timestep Framework --> Split Operator Order
5. Key Properties --> Tuning Applied
6. Grid
7. Grid --> Resolution
8. Transport
9. Emissions Concentrations
10. Emissions Concentrations --> Surface Emissions
11. Emissions Concentrations --> Atmospheric Emissions
12. Emissions Concentrations --> Concentrations
13. Gas Phase Chemistry
14. Stratospheric Heterogeneous Chemistry
15. Tropospheric Heterogeneous Chemistry
16. Photo Chemistry
17. Photo Chemistry --> Photolysis
1. Key Properties
Key properties of the atmospheric chemistry
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Chemistry Scheme Scope
Is Required
Step7: 1.4. Basic Approximations
Is Required
Step8: 1.5. Prognostic Variables Form
Is Required
Step9: 1.6. Number Of Tracers
Is Required
Step10: 1.7. Family Approach
Is Required
Step11: 1.8. Coupling With Chemical Reactivity
Is Required
Step12: 2. Key Properties --> Software Properties
Software properties of aerosol code
2.1. Repository
Is Required
Step13: 2.2. Code Version
Is Required
Step14: 2.3. Code Languages
Is Required
Step15: 3. Key Properties --> Timestep Framework
Timestepping in the atmospheric chemistry model
3.1. Method
Is Required
Step16: 3.2. Split Operator Advection Timestep
Is Required
Step17: 3.3. Split Operator Physical Timestep
Is Required
Step18: 3.4. Split Operator Chemistry Timestep
Is Required
Step19: 3.5. Split Operator Alternate Order
Is Required
Step20: 3.6. Integrated Timestep
Is Required
Step21: 3.7. Integrated Scheme Type
Is Required
Step22: 4. Key Properties --> Timestep Framework --> Split Operator Order
**
4.1. Turbulence
Is Required
Step23: 4.2. Convection
Is Required
Step24: 4.3. Precipitation
Is Required
Step25: 4.4. Emissions
Is Required
Step26: 4.5. Deposition
Is Required
Step27: 4.6. Gas Phase Chemistry
Is Required
Step28: 4.7. Tropospheric Heterogeneous Phase Chemistry
Is Required
Step29: 4.8. Stratospheric Heterogeneous Phase Chemistry
Is Required
Step30: 4.9. Photo Chemistry
Is Required
Step31: 4.10. Aerosols
Is Required
Step32: 5. Key Properties --> Tuning Applied
Tuning methodology for atmospheric chemistry component
5.1. Description
Is Required
Step33: 5.2. Global Mean Metrics Used
Is Required
Step34: 5.3. Regional Metrics Used
Is Required
Step35: 5.4. Trend Metrics Used
Is Required
Step36: 6. Grid
Atmospheric chemistry grid
6.1. Overview
Is Required
Step37: 6.2. Matches Atmosphere Grid
Is Required
Step38: 7. Grid --> Resolution
Resolution in the atmospheric chemistry grid
7.1. Name
Is Required
Step39: 7.2. Canonical Horizontal Resolution
Is Required
Step40: 7.3. Number Of Horizontal Gridpoints
Is Required
Step41: 7.4. Number Of Vertical Levels
Is Required
Step42: 7.5. Is Adaptive Grid
Is Required
Step43: 8. Transport
Atmospheric chemistry transport
8.1. Overview
Is Required
Step44: 8.2. Use Atmospheric Transport
Is Required
Step45: 8.3. Transport Details
Is Required
Step46: 9. Emissions Concentrations
Atmospheric chemistry emissions
9.1. Overview
Is Required
Step47: 10. Emissions Concentrations --> Surface Emissions
**
10.1. Sources
Is Required
Step48: 10.2. Method
Is Required
Step49: 10.3. Prescribed Climatology Emitted Species
Is Required
Step50: 10.4. Prescribed Spatially Uniform Emitted Species
Is Required
Step51: 10.5. Interactive Emitted Species
Is Required
Step52: 10.6. Other Emitted Species
Is Required
Step53: 11. Emissions Concentrations --> Atmospheric Emissions
TO DO
11.1. Sources
Is Required
Step54: 11.2. Method
Is Required
Step55: 11.3. Prescribed Climatology Emitted Species
Is Required
Step56: 11.4. Prescribed Spatially Uniform Emitted Species
Is Required
Step57: 11.5. Interactive Emitted Species
Is Required
Step58: 11.6. Other Emitted Species
Is Required
Step59: 12. Emissions Concentrations --> Concentrations
TO DO
12.1. Prescribed Lower Boundary
Is Required
Step60: 12.2. Prescribed Upper Boundary
Is Required
Step61: 13. Gas Phase Chemistry
Atmospheric chemistry transport
13.1. Overview
Is Required
Step62: 13.2. Species
Is Required
Step63: 13.3. Number Of Bimolecular Reactions
Is Required
Step64: 13.4. Number Of Termolecular Reactions
Is Required
Step65: 13.5. Number Of Tropospheric Heterogenous Reactions
Is Required
Step66: 13.6. Number Of Stratospheric Heterogenous Reactions
Is Required
Step67: 13.7. Number Of Advected Species
Is Required
Step68: 13.8. Number Of Steady State Species
Is Required
Step69: 13.9. Interactive Dry Deposition
Is Required
Step70: 13.10. Wet Deposition
Is Required
Step71: 13.11. Wet Oxidation
Is Required
Step72: 14. Stratospheric Heterogeneous Chemistry
Atmospheric chemistry startospheric heterogeneous chemistry
14.1. Overview
Is Required
Step73: 14.2. Gas Phase Species
Is Required
Step74: 14.3. Aerosol Species
Is Required
Step75: 14.4. Number Of Steady State Species
Is Required
Step76: 14.5. Sedimentation
Is Required
Step77: 14.6. Coagulation
Is Required
Step78: 15. Tropospheric Heterogeneous Chemistry
Atmospheric chemistry tropospheric heterogeneous chemistry
15.1. Overview
Is Required
Step79: 15.2. Gas Phase Species
Is Required
Step80: 15.3. Aerosol Species
Is Required
Step81: 15.4. Number Of Steady State Species
Is Required
Step82: 15.5. Interactive Dry Deposition
Is Required
Step83: 15.6. Coagulation
Is Required
Step84: 16. Photo Chemistry
Atmospheric chemistry photo chemistry
16.1. Overview
Is Required
Step85: 16.2. Number Of Reactions
Is Required
Step86: 17. Photo Chemistry --> Photolysis
Photolysis scheme
17.1. Method
Is Required
Step87: 17.2. Environmental Conditions
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'test-institute-1', 'sandbox-1', 'atmoschem')
Explanation: ES-DOC CMIP6 Model Properties - Atmoschem
MIP Era: CMIP6
Institute: TEST-INSTITUTE-1
Source ID: SANDBOX-1
Topic: Atmoschem
Sub-Topics: Transport, Emissions Concentrations, Gas Phase Chemistry, Stratospheric Heterogeneous Chemistry, Tropospheric Heterogeneous Chemistry, Photo Chemistry.
Properties: 84 (39 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:43
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Key Properties --> Timestep Framework
4. Key Properties --> Timestep Framework --> Split Operator Order
5. Key Properties --> Tuning Applied
6. Grid
7. Grid --> Resolution
8. Transport
9. Emissions Concentrations
10. Emissions Concentrations --> Surface Emissions
11. Emissions Concentrations --> Atmospheric Emissions
12. Emissions Concentrations --> Concentrations
13. Gas Phase Chemistry
14. Stratospheric Heterogeneous Chemistry
15. Tropospheric Heterogeneous Chemistry
16. Photo Chemistry
17. Photo Chemistry --> Photolysis
1. Key Properties
Key properties of the atmospheric chemistry
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of atmospheric chemistry model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of atmospheric chemistry model code.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.chemistry_scheme_scope')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "troposhere"
# "stratosphere"
# "mesosphere"
# "mesosphere"
# "whole atmosphere"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.3. Chemistry Scheme Scope
Is Required: TRUE Type: ENUM Cardinality: 1.N
Atmospheric domains covered by the atmospheric chemistry model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.basic_approximations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: STRING Cardinality: 1.1
Basic approximations made in the atmospheric chemistry model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.prognostic_variables_form')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "3D mass/mixing ratio for gas"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.5. Prognostic Variables Form
Is Required: TRUE Type: ENUM Cardinality: 1.N
Form of prognostic variables in the atmospheric chemistry component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.number_of_tracers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 1.6. Number Of Tracers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of advected tracers in the atmospheric chemistry model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.family_approach')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 1.7. Family Approach
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Atmospheric chemistry calculations (not advection) generalized into families of species?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.coupling_with_chemical_reactivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 1.8. Coupling With Chemical Reactivity
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Atmospheric chemistry transport scheme turbulence is coupled with chemical reactivity?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Software Properties
Software properties of aerosol code
2.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Operator splitting"
# "Integrated"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Timestep Framework
Timestepping in the atmospheric chemistry model
3.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Mathematical method deployed to solve the evolution of a given variable
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_advection_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.2. Split Operator Advection Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for chemical species advection (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_physical_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.3. Split Operator Physical Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for physics (in seconds).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_chemistry_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.4. Split Operator Chemistry Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for chemistry (in seconds).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_alternate_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 3.5. Split Operator Alternate Order
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.integrated_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.6. Integrated Timestep
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Timestep for the atmospheric chemistry model (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.integrated_scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Implicit"
# "Semi-implicit"
# "Semi-analytic"
# "Impact solver"
# "Back Euler"
# "Newton Raphson"
# "Rosenbrock"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3.7. Integrated Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the type of timestep scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.turbulence')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Timestep Framework --> Split Operator Order
**
4.1. Turbulence
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for turbulence scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.convection')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.2. Convection
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for convection scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.precipitation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.3. Precipitation
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for precipitation scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.emissions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.4. Emissions
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for emissions scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.5. Deposition
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for deposition scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.gas_phase_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.6. Gas Phase Chemistry
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for gas phase chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.tropospheric_heterogeneous_phase_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.7. Tropospheric Heterogeneous Phase Chemistry
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for tropospheric heterogeneous phase chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.stratospheric_heterogeneous_phase_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.8. Stratospheric Heterogeneous Phase Chemistry
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for stratospheric heterogeneous phase chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.photo_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.9. Photo Chemistry
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for photo chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.aerosols')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.10. Aerosols
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for aerosols scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Tuning Applied
Tuning methodology for atmospheric chemistry component
5.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics of the global mean state used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics of mean state used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Grid
Atmospheric chemistry grid
6.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general structure of the atmospheric chemistry grid
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.matches_atmosphere_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.2. Matches Atmosphere Grid
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the atmospheric chemistry grid match the atmosphere grid?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Grid --> Resolution
Resolution in the atmospheric chemistry grid
7.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.2. Canonical Horizontal Resolution
Is Required: FALSE Type: STRING Cardinality: 0.1
Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 7.3. Number Of Horizontal Gridpoints
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 7.4. Number Of Vertical Levels
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Number of vertical levels resolved on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 7.5. Is Adaptive Grid
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Default is False. Set true if grid resolution changes during execution.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.transport.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Transport
Atmospheric chemistry transport
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview of transport implementation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.transport.use_atmospheric_transport')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 8.2. Use Atmospheric Transport
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is transport handled by the atmosphere, rather than within atmospheric chemistry?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.transport.transport_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.3. Transport Details
Is Required: FALSE Type: STRING Cardinality: 0.1
If transport is handled within the atmospheric chemistry scheme, describe it.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Emissions Concentrations
Atmospheric chemistry emissions
9.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview atmospheric chemistry emissions
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Vegetation"
# "Soil"
# "Sea surface"
# "Anthropogenic"
# "Biomass burning"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10. Emissions Concentrations --> Surface Emissions
**
10.1. Sources
Is Required: FALSE Type: ENUM Cardinality: 0.N
Sources of the chemical species emitted at the surface that are taken into account in the emissions scheme
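Sketch only, assuming repeated DOC.set_value calls accumulate entries for a 0.N ENUM; the sources below are placeholders drawn from the valid choices listed above:
```
# placeholder selections - repeat for each source represented in the model
DOC.set_value("Vegetation")
DOC.set_value("Anthropogenic")
```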
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Climatology"
# "Spatially uniform mixing ratio"
# "Spatially uniform concentration"
# "Interactive"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10.2. Method
Is Required: FALSE Type: ENUM Cardinality: 0.N
Methods used to define chemical species emitted directly into model layers above the surface (several methods allowed because the different species may not use the same method).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10.3. Prescribed Climatology Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted at the surface and prescribed via a climatology, and the nature of the climatology (E.g. CO (monthly), C2H6 (constant))
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10.4. Prescribed Spatially Uniform Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted at the surface and prescribed as spatially uniform
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10.5. Interactive Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted at the surface and specified via an interactive method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10.6. Other Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted at the surface and specified via any other method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Aircraft"
# "Biomass burning"
# "Lightning"
# "Volcanos"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11. Emissions Concentrations --> Atmospheric Emissions
TO DO
11.1. Sources
Is Required: FALSE Type: ENUM Cardinality: 0.N
Sources of chemical species emitted in the atmosphere that are taken into account in the emissions scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Climatology"
# "Spatially uniform mixing ratio"
# "Spatially uniform concentration"
# "Interactive"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.2. Method
Is Required: FALSE Type: ENUM Cardinality: 0.N
Methods used to define the chemical species emitted in the atmosphere (several methods allowed because the different species may not use the same method).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.3. Prescribed Climatology Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted in the atmosphere and prescribed via a climatology (E.g. CO (monthly), C2H6 (constant))
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.4. Prescribed Spatially Uniform Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted in the atmosphere and prescribed as spatially uniform
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.5. Interactive Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted in the atmosphere and specified via an interactive method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.6. Other Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted in the atmosphere and specified via an "other method"
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.concentrations.prescribed_lower_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12. Emissions Concentrations --> Concentrations
TO DO
12.1. Prescribed Lower Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the lower boundary.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.concentrations.prescribed_upper_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.2. Prescribed Upper Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the upper boundary.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 13. Gas Phase Chemistry
Atmospheric chemistry gas phase chemistry
13.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview gas phase atmospheric chemistry
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HOx"
# "NOy"
# "Ox"
# "Cly"
# "HSOx"
# "Bry"
# "VOCs"
# "isoprene"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.2. Species
Is Required: FALSE Type: ENUM Cardinality: 0.N
Species included in the gas phase chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_bimolecular_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 13.3. Number Of Bimolecular Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of bi-molecular reactions in the gas phase chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_termolecular_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 13.4. Number Of Termolecular Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of ter-molecular reactions in the gas phase chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_tropospheric_heterogenous_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 13.5. Number Of Tropospheric Heterogenous Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of reactions in the tropospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_stratospheric_heterogenous_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 13.6. Number Of Stratospheric Heterogenous Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of reactions in the stratospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_advected_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 13.7. Number Of Advected Species
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of advected species in the gas phase chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_steady_state_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 13.8. Number Of Steady State Species
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of gas phase species for which the concentration is updated in the chemical solver assuming photochemical steady state
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.interactive_dry_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 13.9. Interactive Dry Deposition
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is dry deposition interactive (as opposed to prescribed)? Dry deposition describes the dry processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.wet_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 13.10. Wet Deposition
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is wet deposition included? Wet deposition describes the moist processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.wet_oxidation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 13.11. Wet Oxidation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is wet oxidation included? Oxidation describes the loss of electrons or an increase in oxidation state by a molecule
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14. Stratospheric Heterogeneous Chemistry
Atmospheric chemistry stratospheric heterogeneous chemistry
14.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview stratospheric heterogeneous atmospheric chemistry
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.gas_phase_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Cly"
# "Bry"
# "NOy"
# TODO - please enter value(s)
Explanation: 14.2. Gas Phase Species
Is Required: FALSE Type: ENUM Cardinality: 0.N
Gas phase species included in the stratospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.aerosol_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Polar stratospheric ice"
# "NAT (Nitric acid trihydrate)"
# "NAD (Nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particule))"
# TODO - please enter value(s)
Explanation: 14.3. Aerosol Species
Is Required: FALSE Type: ENUM Cardinality: 0.N
Aerosol species included in the stratospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.number_of_steady_state_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.4. Number Of Steady State Species
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of steady state species in the stratospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.sedimentation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 14.5. Sedimentation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is sedimentation included in the stratospheric heterogeneous chemistry scheme or not?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.coagulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 14.6. Coagulation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is coagulation included in the stratospheric heterogeneous chemistry scheme or not?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15. Tropospheric Heterogeneous Chemistry
Atmospheric chemistry tropospheric heterogeneous chemistry
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview tropospheric heterogeneous atmospheric chemistry
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.gas_phase_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.2. Gas Phase Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of gas phase species included in the tropospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.aerosol_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Nitrate"
# "Sea salt"
# "Dust"
# "Ice"
# "Organic"
# "Black carbon/soot"
# "Polar stratospheric ice"
# "Secondary organic aerosols"
# "Particulate organic matter"
# TODO - please enter value(s)
Explanation: 15.3. Aerosol Species
Is Required: FALSE Type: ENUM Cardinality: 0.N
Aerosol species included in the tropospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.number_of_steady_state_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.4. Number Of Steady State Species
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of steady state species in the tropospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.interactive_dry_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 15.5. Interactive Dry Deposition
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is dry deposition interactive (as opposed to prescribed)? Dry deposition describes the dry processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.coagulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 15.6. Coagulation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is coagulation included in the tropospheric heterogeneous chemistry scheme or not?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 16. Photo Chemistry
Atmospheric chemistry photo chemistry
16.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview atmospheric photo chemistry
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.number_of_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 16.2. Number Of Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of reactions in the photo-chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.photolysis.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Offline (clear sky)"
# "Offline (with clouds)"
# "Online"
# TODO - please enter value(s)
Explanation: 17. Photo Chemistry --> Photolysis
Photolysis scheme
17.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Photolysis scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.photolysis.environmental_conditions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.2. Environmental Conditions
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe any environmental conditions taken into account by the photolysis scheme (e.g. whether pressure- and temperature-sensitive cross-sections and quantum yields in the photolysis calculations are modified to reflect the modelled conditions.)
End of explanation |
2,520 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Dogs vs Cats
https
Step1: データ整形
https
Step2: 訓練データからランダムに選んだ2000画像をvalidationデータとする
Step3: PyTorchで読み込みやすいようにクラスごとにサブディレクトリを作成する
Kaggleのテストデータは正解ラベルがついていないため unknown というサブディレクトリにいれる
Step4: VGG16 の出力層のみ置き換える
分類層を除いたネットワークのパラメータを固定する
分類層のパラメータのみ学習対象
Step5: 層の置き換え
下のように (classifier) の (6) だけを置き換えることはできないみたい
```
最後のfc層のみ2クラス分類できるように置き換える
num_features = vgg16.classifier[6].in_features
vgg16.classifier[6] = nn.Linear(num_features, 2) # <= この代入はできない!
```
classifierをまるごと置き換える必要がある
Step6: VGG用のデータ変換を定義
訓練もテストも (224, 224) にサイズ変更のみ
正方形の画像でないので Resize(224) は動作しない
最初はデータ拡張は使わないで試す
Step7: データをロード
Step8: クラスはアルファベット順?
Step9: モデル訓練
optimizerには更新対象のパラメータのみ渡す必要がある!
requires_grad = False している vgg16.parameters() を指定するとエラーになる | Python Code:
mkdir
%matplotlib inline
Explanation: Dogs vs Cats
https://blog.keras.io/building-powerful-image-classification-models-using-very-little-data.html
https://www.kaggle.com/c/dogs-vs-cats-redux-kernels-edition
http://aidiary.hatenablog.com/entry/20170108/1483876657
http://aidiary.hatenablog.com/entry/20170603/1496493646
End of explanation
!ls data/
import os
current_dir = os.getcwd()
data_dir = os.path.join(current_dir, 'data', 'dogscats')
train_dir = os.path.join(data_dir, 'train')
valid_dir = os.path.join(data_dir, 'valid')
test_dir = os.path.join(data_dir, 'test')
!mkdir $data_dir
!unzip train.zip -d $data_dir
!unzip test.zip -d $data_dir
!ls -1 $train_dir | wc -l
!ls -1 $test_dir | wc -l
Explanation: データ整形
https://www.kaggle.com/c/dogs-vs-cats-redux-kernels-edition
train.zipとtest.zipをカレントディレクトリにダウンロードしておく
End of explanation
!mkdir $valid_dir
%cd $train_dir
import os
from glob import glob
import numpy as np
g = glob('*.jpg')
shuf = np.random.permutation(g)
for i in range(2000):
os.rename(shuf[i], os.path.join(valid_dir, shuf[i]))
!ls -1 $valid_dir | wc -l
Explanation: 訓練データからランダムに選んだ2000画像をvalidationデータとする
End of explanation
# train
%cd $train_dir
%mkdir cats dogs
%mv cat.*.jpg cats/
%mv dog.*.jpg dogs/
# valid
%cd $valid_dir
%mkdir cats dogs
%mv cat.*.jpg cats/
%mv dog.*.jpg dogs/
# test
%cd $test_dir
%mkdir unknown
%mv *.jpg unknown
Explanation: PyTorchで読み込みやすいようにクラスごとにサブディレクトリを作成する
Kaggleのテストデータは正解ラベルがついていないため unknown というサブディレクトリにいれる
End of explanation
vgg16 = models.vgg16(pretrained=True)
vgg16.eval() # eval mode!
Explanation: VGG16 の出力層のみ置き換える
分類層を除いたネットワークのパラメータを固定する
分類層のパラメータのみ学習対象
End of explanation
# 全層のパラメータを固定
for param in vgg16.parameters():
param.requires_grad = False
vgg16.classifier = nn.Sequential(
nn.Linear(25088, 4096),
nn.ReLU(inplace=True),
nn.Dropout(0.5),
nn.Linear(4096, 4096),
nn.ReLU(inplace=True),
nn.Dropout(0.5),
nn.Linear(4096, 2)
)
use_gpu = torch.cuda.is_available()
if use_gpu:
vgg16 = vgg16.cuda()
print(vgg16)
Explanation: 層の置き換え
下のように (classifier) の (6) だけを置き換えることはできないみたい
```
最後のfc層のみ2クラス分類できるように置き換える
num_features = vgg16.classifier[6].in_features
vgg16.classifier[6] = nn.Linear(num_features, 2) # <= この代入はできない!
```
classifierをまるごと置き換える必要がある
End of explanation
train_preprocess = transforms.Compose([
transforms.Resize((224, 224)),
transforms.ToTensor(),
transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
])
test_preprocess = transforms.Compose([
transforms.Resize((224, 224)),
transforms.ToTensor(),
transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
])
Explanation: VGG用のデータ変換を定義
訓練もテストも (224, 224) にサイズ変更のみ
正方形の画像でないので Resize(224) は動作しない
最初はデータ拡張は使わないで試す
End of explanation
train_dataset = datasets.ImageFolder(train_dir, train_preprocess)
valid_dataset = datasets.ImageFolder(valid_dir, test_preprocess)
test_dataset = datasets.ImageFolder(test_dir, test_preprocess)
# DataSetのlenはサンプル数
print(len(train_dataset))
print(len(valid_dataset))
print(len(test_dataset))
Explanation: データをロード
End of explanation
classes = train_dataset.classes
print(train_dataset.classes)
print(valid_dataset.classes)
print(test_dataset.classes)
train_loader = torch.utils.data.DataLoader(train_dataset,
batch_size=128,
shuffle=True)
valid_loader = torch.utils.data.DataLoader(valid_dataset,
batch_size=128,
shuffle=False)
test_loader = torch.utils.data.DataLoader(test_dataset,
batch_size=128,
shuffle=False)
# DataLoaderのlenはミニバッチ数
print(len(train_loader))
print(len(valid_loader))
print(len(test_loader))
def imshow(images, title=None):
images = images.numpy().transpose((1, 2, 0)) # (h, w, c)
# denormalize
mean = np.array([0.485, 0.456, 0.406])
std = np.array([0.229, 0.224, 0.225])
images = std * images + mean
images = np.clip(images, 0, 1)
plt.imshow(images)
if title is not None:
plt.title(title)
images, classes = next(iter(train_loader))
print(images.size(), classes.size())
images = torchvision.utils.make_grid(images[:25], nrow=5)
imshow(images)
Explanation: クラスはアルファベット順?
End of explanation
if use_gpu:
vgg16 = vgg16.cuda()
criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(vgg16.classifier.parameters(), lr=0.001, momentum=0.9)
def train(model, criterion, optimizer, train_loader):
model.train()
running_loss = 0
for batch_idx, (images, labels) in enumerate(train_loader):
if use_gpu:
images = Variable(images.cuda())
labels = Variable(labels.cuda())
else:
images = Variable(images)
labels = Variable(labels)
optimizer.zero_grad()
outputs = model(images)
loss = criterion(outputs, labels)
running_loss += loss.data[0]
loss.backward()
optimizer.step()
train_loss = running_loss / len(train_loader)
return train_loss
def valid(model, criterion, valid_loader):
model.eval()
running_loss = 0
correct = 0
total = 0
for batch_idx, (images, labels) in enumerate(valid_loader):
if use_gpu:
images = Variable(images.cuda())
labels = Variable(labels.cuda())
else:
images = Variable(images)
labels = Variable(labels)
outputs = model(images)
loss = criterion(outputs, labels)
running_loss += loss.data[0]
_, predicted = torch.max(outputs.data, 1)
correct += (predicted == labels.data).sum()
total += labels.size(0)
val_loss = running_loss / len(valid_loader)
val_acc = correct / total
return val_loss, val_acc
%mkdir logs
num_epochs = 5
log_dir = './logs'
best_acc = 0
loss_list = []
val_loss_list = []
val_acc_list = []
for epoch in range(num_epochs):
loss = train(vgg16, criterion, optimizer, train_loader)
val_loss, val_acc = valid(vgg16, criterion, valid_loader)
print('epoch %d, loss: %.4f val_loss: %.4f val_acc: %.4f'
% (epoch, loss, val_loss, val_acc))
if val_acc > best_acc:
print('val_acc improved from %.5f to %.5f!' % (best_acc, val_acc))
best_acc = val_acc
model_file = 'epoch%03d-%.3f-%.3f.pth' % (epoch, val_loss, val_acc)
torch.save(vgg16.state_dict(), os.path.join(log_dir, model_file))
# logging
loss_list.append(loss)
val_loss_list.append(val_loss)
val_acc_list.append(val_acc)
Explanation: モデル訓練
optimizerには更新対象のパラメータのみ渡す必要がある!
requires_grad = False している vgg16.parameters() を指定するとエラーになる
End of explanation |
2,521 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Processing Multiple Pandas Series in Parallel
Introduction
Python's Pandas library for data processing is great for all sorts of data-processing tasks. However, one thing it doesn't support out of the box is parallel processing across multiple cores.
I've been wanting a simple way to process Pandas DataFrames in parallel, and recently I found this truly awesome blog post.. It shows how to apply an arbitrary Python function to each object in a sequence, in parallel, using Pool.map from the Multiprocessing library.
The author's example involves running urllib2.urlopen() across a list of urls, to scrape html from several web sites in parallel. But the principle applies equally to mapping a function across several columns in a Pandas DataFrame. Here's an example of how useful that can be.
A simple multiprocessing wrapper
Here's some code which will accept a Pandas DataFrame and a function, apply the function to each column in the DataFrame, and return the results (as a new dataframe). It also allows the caller to specify the number of processes to run in parallel, but uses a sensible default when not provided.
Step1: Hopefully the code above looks pretty straightforward, but if it looks a bit confusing at first glance, ultimately the key is these two lines
Step2: the rest was just setting the default number of processes to run in parallel, getting a 'sequence of columns' from our input dataframe, and concatenating the list of results we get back from pool.map
A function to measure parallel performance gains with
To measure the speed boost from wrapping a bit of Pandas processing in this multiprocessing wrapper, I'm going to load the Quora Duplicate Questions dataset, and the vectorized text-tokenizing function from my last blog post on using vectorized Pandas functions.
Step3: To see what this does "tokenizing" function does, here's a few unprocessed quora questions, followed by their outputs from the tokenizer
Step4: Clocking Performance Gains of Using Multiprocessing, 2 Cores
The two functions below clock the time elapsed from tokenizing our two question columns in series or in parallel.
Defining these tests as their own functions means we're not creating any new global-scope variables when we measure performance. All the intermediate results (like the new dataframes of processed questions) are garbage-collected after the function returns its results (an elapsed time). This is important to maintain an apples-to-apples performance comparison; otherwise, performance tests run later in the notebook would have less RAM available than the first test we run.
Step5: And now to measure our results
Step6: So processing the two columns in parallel cut our processing time from 23.7 seconds down to 14.7 seconds, a decrease of 38%. The theoretical maximum reduction we might have expected with no multiprocessing overhead would of course been a 50% reduction, so this is not bad.
Comparing Performance with 4 Cores
I have four cores on this laptop, and I'd like to see how the performance gains scale here from two to four cores. Below, I'll make copies of our q1 and q2 so we have four total text columns, then re-run the comparison by passing this new 4-column dataframe to the testing function defined above. | Python Code:
from multiprocessing import Pool, cpu_count
def process_Pandas_data(func, df, num_processes=None):
''' Apply a function separately to each column in a dataframe, in parallel.'''
# If num_processes is not specified, default to minimum(#columns, #machine-cores)
if num_processes==None:
num_processes = min(df.shape[1], cpu_count())
# 'with' context manager takes care of pool.close() and pool.join() for us
with Pool(num_processes) as pool:
# we need a sequence to pass pool.map; this line creates a generator (lazy iterator) of columns
seq = [df[col_name] for col_name in df.columns]
# pool.map returns results as a list
results_list = pool.map(func, seq)
# return list of processed columns, concatenated together as a new dataframe
return pd.concat(results_list, axis=1)
Explanation: Processing Multiple Pandas Series in Parallel
Introduction
Python's Pandas library for data processing is great for all sorts of data-processing tasks. However, one thing it doesn't support out of the box is parallel processing across multiple cores.
I've been wanting a simple way to process Pandas DataFrames in parallel, and recently I found this truly awesome blog post.. It shows how to apply an arbitrary Python function to each object in a sequence, in parallel, using Pool.map from the Multiprocessing library.
The author's example involves running urllib2.urlopen() across a list of urls, to scrape html from several web sites in parallel. But the principle applies equally to mapping a function across several columns in a Pandas DataFrame. Here's an example of how useful that can be.
A simple multiprocessing wrapper
Here's some code which will accept a Pandas DataFrame and a function, apply the function to each column in the DataFrame, and return the results (as a new dataframe). It also allows the caller to specify the number of processes to run in parallel, but uses a sensible default when not provided.
End of explanation
#!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
# UNCOMMENT IN MARKDOWN BEFORE PUSHING LIVE
# !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
# (commented out so can run notebook in one click.)
#with Pool(num_processes) as pool:
# ...
# results_list = pool.map(func, seq)
Explanation: Hopefully the code above looks pretty straightforward, but if it looks a bit confusing at first glance, ultimately the key is these two lines:
End of explanation
import pandas as pd
df = pd.read_csv('datasets/quora_kaggle.csv')
df.head(3)
import re
from nltk.corpus import stopwords
def tokenize_column(text_series):
''' Accept a series of strings, returns list of words (lowercased) without punctuation or stopwords'''
# lowercase everything
text_series = text_series.astype(str).str.lower()
# remove punctuation (r'\W' is regex, matches any non-alphanumeric character)
text_series = text_series.str.replace(r'\W', ' ')
# return list of words, without stopwords
sw = stopwords.words('english')
return text_series.apply(lambda row: [word for word in row.split() if word not in sw])
Explanation: the rest was just setting the default number of processes to run in parallel, getting a 'sequence of columns' from our input dataframe, and concatenating the list of results we get back from pool.map
A function to measure parallel performance gains with
To measure the speed boost from wrapping a bit of Pandas processing in this multiprocessing wrapper, I'm going to load the Quora Duplicate Questions dataset, and the vectorized text-tokenizing function from my last blog post on using vectorized Pandas functions.
End of explanation
print(df.question1.head(3), '\n\n', tokenize_column(df.question1.head(3)))
Explanation: To see what this does "tokenizing" function does, here's a few unprocessed quora questions, followed by their outputs from the tokenizer
End of explanation
from datetime import datetime
def clock_tokenize_in_series(df):
'''Calc time to process in series'''
# Initialize dataframe to hold processed questions, and start clock
qs_processed = pd.DataFrame()
start = datetime.now()
# process question columns in series
for col in df.columns:
qs_processed[col] = tokenize_column(df[col])
# return time elapsed
return datetime.now() - start
def clock_tokenize_in_parallel(df):
'''Calc time to process in parallel'''
# Initialize dataframe to hold processed questions, and start clock
qs_processed = pd.DataFrame()
start = datetime.now()
# process question columns in parallel
qs_processed2 = process_Pandas_data(tokenize_column, df)
# return time elapsed
return datetime.now() - start
Explanation: Clocking Performance Gains of Using Multiprocessing, 2 Cores
The two functions below clock the time elapsed from tokenizing our two question columns in series or in parallel.
Defining these tests as their own functions means we're not creating any new global-scope variables when we measure performance. All the intermediate results (like the new dataframes of processed questions) are garbage-collected after the function returns its results (an elapsed time). This is important to maintain an apples-to-apples performance comparison; otherwise, performance tests run later in the notebook would have less RAM available than the first test we run.
End of explanation
# Print Time Results
no_parallel = clock_tokenize_in_series(df[['question1', 'question2']])
parallel = clock_tokenize_in_parallel(df[['question1', 'question2']])
print('Time elapsed for processing 2 questions in series :', no_parallel)
print('Time elapsed for processing 2 questions in parallel :', parallel)
Explanation: And now to measure our results:
End of explanation
# Column-bind two questions with copies of themselves for 4 text columns
four_qs = pd.concat([df[['question1','question2']],
df[['question1','question2']]], axis=1)
four_qs.columns = ['q1', 'q2', 'q1copy', 'q2copy']
four_qs.head(2)
# Print Results for running tokenizer on 4 questions in series, then in parallel
no_parallel = clock_tokenize_in_series(four_qs)
parallel = clock_tokenize_in_parallel(four_qs)
print('Time elapsed for processing 4 questions in series :', no_parallel)
print('Time elapsed for processing 4 questions in parallel :', parallel)
Explanation: So processing the two columns in parallel cut our processing time from 23.7 seconds down to 14.7 seconds, a decrease of 38%. The theoretical maximum reduction we might have expected with no multiprocessing overhead would of course been a 50% reduction, so this is not bad.
Comparing Performance with 4 Cores
I have four cores on this laptop, and I'd like to see how the performance gains scale here from two to four cores. Below, I'll make copies of our q1 and q2 so we have four total text columns, then re-run the comparison by passing this new 4-column dataframe to the testing function defined above.
End of explanation |
2,522 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
iPython Cookbook - Monte Carlo III - Principal Components
Generating a Monte Carlo vector using eigenvector decomposition
Theory
Before we go into the implementation, a bit of theory on Monte Carlo and linear algebra, and in particular the eigenvalue / eigenvector decomposition. Assume we have a Standard Gaussian vector $Z=(Z_i)$ and we want a general Gaussian vector $X=(X_i)$ with correlation matrix $C = (C_{ij})$.
We know that there are vectors $e^i=(e^i_j)$ such that $e^i \cdot C = \lambda_i e^i$, the so called eigenvectors, with the $\lambda_i$ being the eigenvalues (we use row vectors, hence we multiply from the left). We know that those eigenvectors are orthonormal (they are orthogonal actually, but we can choose them to be of unit length), ie $e^i \cdot e^j = \delta_{ij}$ where $\delta$ is the well known Kronecker delta.
We now take our Standard Gaussian vector $Z=(Z_i)$ and we define the vector $X=(X_i)$ through
$$
X = \sum_\mu \sqrt{\lambda_\mu} e^\mu Z_\mu
$$
We can compute the covariance of the $X$ that we want to call $\bar{C}$ for the time being
$$
\bar{C}{ij} = E[X_i X_j] = \sum{\mu\nu} \sqrt{\lambda_\mu \lambda_\nu} e^\mu_i e^\nu_j E[Z_\mu Z_\nu]=\sum_{\mu}\lambda_\mu e^\mu_i e^\mu_j
$$
We now multiply the vector $e^i$ from the left
$$
(e^i \bar{C})j = \sum\nu e^i_\nu \bar{C}{\nu j} = \sum{\nu\mu}\lambda_\mu e^i_\nu e^\mu_\nu e^\mu_j = \sum_\mu \lambda_\mu \delta_{i\mu} e^\mu_j = \lambda_i e^i_j
$$
and we find that the matrix $\bar{C}$ satisfies for all $e^i$ the above eigenvector equation $e^i \cdot \bar{C} = \lambda_i e^i$. Because the $e^i$ form a basis we know that $\bar{C}=C$.
Implementation
Generating a covariance matrix
First we generat a covariance matrix. This is not entirely trivial - the matrix must be symmetric and positive definite - and one way going about is to simply write $C = R^tR$ where $R$ is any random matrix (note that this is not a particularly good covariance matrix, because it is pretty close to the one-systemic-factor model)
Step1: Decomposing the covariance matrix
We are given a covariance matrix $C$ and we want to find its eigenvalues and eigenvectors. In Python the function that does this is scipy.linalg.eigh(). It returns a tuple, the first component being a row-vector containing the eigenvalues, and the second one being a matrix whose columns correspond to the eigenvectors (which we transpose, ie in evm the eigenvectors are in rows!).
Step2: Generating $z$
We now generate our Standard Gaussian $z$, as usual one row being one observation ($N$ is the number of rows)
Step3: Generating $x$
We have matrix of row-vectors $z$. Each row corresponds to one draw of all random variables, and each column corresponds to all draws of one random variable. We also have the matrix of eigenvectors, where every row corresponds to one eigenvector. We also construct a matrix that on the diagonal has the square-roots of the eigenvalues
$$
lm = \begin{pmatrix}
\sqrt{\lambda_0} \
& \sqrt{\lambda_1} \
& & \ddots \
& & & \sqrt{\lambda_{d-1}}
\end{pmatrix}
$$
We then compute $x$ from the $z$ as
$$
x = z.lm.evm
$$
Step4: Check
We now check that the ex-post covariance matrix $C1$ is reasonably close to the ex-ante matrix $C$
Step5: Licence and version
(c) Stefan Loesch / oditorium 2014; all rights reserved
(license) | Python Code:
import numpy as np
d = 3
R = np.random.uniform(-1,1,(d,d))+np.eye(d)
C = np.dot(R.T, R)
#C = np.array(((5,2,3),(2,5,4),(3,4,5)))
C
Explanation: iPython Cookbook - Monte Carlo III - Principal Components
Generating a Monte Carlo vector using eigenvector decomposition
Theory
Before we go into the implementation, a bit of theory on Monte Carlo and linear algebra, and in particular the eigenvalue / eigenvector decomposition. Assume we have a Standard Gaussian vector $Z=(Z_i)$ and we want a general Gaussian vector $X=(X_i)$ with correlation matrix $C = (C_{ij})$.
We know that there are vectors $e^i=(e^i_j)$ such that $e^i \cdot C = \lambda_i e^i$, the so called eigenvectors, with the $\lambda_i$ being the eigenvalues (we use row vectors, hence we multiply from the left). We know that those eigenvectors are orthonormal (they are orthogonal actually, but we can choose them to be of unit length), ie $e^i \cdot e^j = \delta_{ij}$ where $\delta$ is the well known Kronecker delta.
We now take our Standard Gaussian vector $Z=(Z_i)$ and we define the vector $X=(X_i)$ through
$$
X = \sum_\mu \sqrt{\lambda_\mu} e^\mu Z_\mu
$$
We can compute the covariance of the $X$ that we want to call $\bar{C}$ for the time being
$$
\bar{C}{ij} = E[X_i X_j] = \sum{\mu\nu} \sqrt{\lambda_\mu \lambda_\nu} e^\mu_i e^\nu_j E[Z_\mu Z_\nu]=\sum_{\mu}\lambda_\mu e^\mu_i e^\mu_j
$$
We now multiply the vector $e^i$ from the left
$$
(e^i \bar{C})j = \sum\nu e^i_\nu \bar{C}{\nu j} = \sum{\nu\mu}\lambda_\mu e^i_\nu e^\mu_\nu e^\mu_j = \sum_\mu \lambda_\mu \delta_{i\mu} e^\mu_j = \lambda_i e^i_j
$$
and we find that the matrix $\bar{C}$ satisfies for all $e^i$ the above eigenvector equation $e^i \cdot \bar{C} = \lambda_i e^i$. Because the $e^i$ form a basis we know that $\bar{C}=C$.
Implementation
Generating a covariance matrix
First we generat a covariance matrix. This is not entirely trivial - the matrix must be symmetric and positive definite - and one way going about is to simply write $C = R^tR$ where $R$ is any random matrix (note that this is not a particularly good covariance matrix, because it is pretty close to the one-systemic-factor model)
End of explanation
from scipy.linalg import eigh
lam, evm = eigh(C)
evm = evm.T
lam, evm
#np.dot(evm[0],C), lam[0] * evm[0]
Explanation: Decomposing the covariance matrix
We are given a covariance matrix $C$ and we want to find its eigenvalues and eigenvectors. In Python the function that does this is scipy.linalg.eigh(). It returns a tuple, the first component being a row-vector containing the eigenvalues, and the second one being a matrix whose columns correspond to the eigenvectors (which we transpose, ie in evm the eigenvectors are in rows!).
End of explanation
N = 10000
z = np.random.standard_normal((N, d))
z
Explanation: Generating $z$
We now generate our Standard Gaussian $z$, as usual one row being one observation ($N$ is the number of rows)
End of explanation
lm = np.diag(np.sqrt(lam))
lm
x = np.dot (z,lm)
x = np.dot (x, evm)
x
Explanation: Generating $x$
We have matrix of row-vectors $z$. Each row corresponds to one draw of all random variables, and each column corresponds to all draws of one random variable. We also have the matrix of eigenvectors, where every row corresponds to one eigenvector. We also construct a matrix that on the diagonal has the square-roots of the eigenvalues
$$
lm = \begin{pmatrix}
\sqrt{\lambda_0} \
& \sqrt{\lambda_1} \
& & \ddots \
& & & \sqrt{\lambda_{d-1}}
\end{pmatrix}
$$
We then compute $x$ from the $z$ as
$$
x = z.lm.evm
$$
End of explanation
C1 = np.cov(x, rowvar=0, bias=1)
C1, C, np.sort(np.linalg.eigvalsh(C1))[::-1], np.sort(np.linalg.eigvalsh(C))[::-1]
Explanation: Check
We now check that the ex-post covariance matrix $C1$ is reasonably close to the ex-ante matrix $C$
End of explanation
import sys
print(sys.version)
Explanation: Licence and version
(c) Stefan Loesch / oditorium 2014; all rights reserved
(license)
End of explanation |
2,523 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2019 The TensorFlow IO Authors.
Step1: 解码用于医学成像的 DICOM 文件
<table class="tfo-notebook-buttons" align="left">
<td><a target="_blank" href="https
Step2: 安装要求的软件包,然后重新启动运行时
Step3: 解码 DICOM 图像
Step4: 解码 DICOM 元数据和使用标记
decode_dicom_data 用于解码标记信息。dicom_tags 包含有用的信息,如患者的年龄和性别,因此可以使用 dicom_tags.PatientsAge 和 dicom_tags.PatientsSex 等 DICOM 标记。tensorflow_io 借用了 pydicom dicom 软件包的标记法。 | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2019 The TensorFlow IO Authors.
End of explanation
!curl -OL https://github.com/tensorflow/io/raw/master/docs/tutorials/dicom/dicom_00000001_000.dcm
!ls -l dicom_00000001_000.dcm
Explanation: Decoding DICOM files for medical imaging
<table class="tfo-notebook-buttons" align="left">
<td><a target="_blank" href="https://tensorflow.google.cn/io/tutorials/dicom"><img src="https://tensorflow.google.cn/images/tf_logo_32px.png">在 TensorFlow.org 上查看 </a></td>
<td><a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/zh-cn/io/tutorials/dicom.ipynb"><img src="https://tensorflow.google.cn/images/colab_logo_32px.png">在 Google Colab 中运行 </a></td>
<td><a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/zh-cn/io/tutorials/dicom.ipynb"><img src="https://tensorflow.google.cn/images/GitHub-Mark-32px.png">在 GitHub 中查看源代码</a></td>
<td><a href="https://storage.googleapis.com/tensorflow_docs/io/docs/tutorials/dicom.ipynb">Download notebook</a></td>
</table>
Overview
This tutorial shows how to use tfio.image.decode_dicom_image in TensorFlow IO to decode DICOM files with TensorFlow.
Setup and usage
Download the DICOM image
The DICOM image used in this tutorial comes from the NIH Chest X-Ray dataset.
The NIH Chest X-Ray dataset contains 100,000 de-identified PNG images of chest X-ray examinations, provided by the NIH Clinical Center, which can be downloaded via this link.
Google Cloud also provides a DICOM version of the images, available in Cloud Storage.
In this tutorial, you will download a sample file of the dataset from the GitHub repository.
Note: For more information about the dataset, see the following reference:
Xiaosong Wang, Yifan Peng, Le Lu, Zhiyong Lu, Mohammadhadi Bagheri, Ronald Summers, ChestX-ray8: Hospital-scale Chest X-ray Database and Benchmarks on Weakly-Supervised Classification and Localization of Common Thorax Diseases, IEEE CVPR, pp. 3462-3471, 2017
End of explanation
try:
# Use the Colab's preinstalled TensorFlow 2.x
%tensorflow_version 2.x
except:
pass
!pip install tensorflow-io
Explanation: Install the required package and then restart the runtime
End of explanation
import matplotlib.pyplot as plt
import numpy as np
import tensorflow as tf
import tensorflow_io as tfio
image_bytes = tf.io.read_file('dicom_00000001_000.dcm')
image = tfio.image.decode_dicom_image(image_bytes, dtype=tf.uint16)
skipped = tfio.image.decode_dicom_image(image_bytes, on_error='skip', dtype=tf.uint8)
lossy_image = tfio.image.decode_dicom_image(image_bytes, scale='auto', on_error='lossy', dtype=tf.uint8)
fig, axes = plt.subplots(1,2, figsize=(10,10))
axes[0].imshow(np.squeeze(image.numpy()), cmap='gray')
axes[0].set_title('image')
axes[1].imshow(np.squeeze(lossy_image.numpy()), cmap='gray')
axes[1].set_title('lossy image');
Explanation: Decode the DICOM image
End of explanation
tag_id = tfio.image.dicom_tags.PatientsAge
tag_value = tfio.image.decode_dicom_data(image_bytes,tag_id)
print(tag_value)
print(f"PatientsAge : {tag_value.numpy().decode('UTF-8')}")
tag_id = tfio.image.dicom_tags.PatientsSex
tag_value = tfio.image.decode_dicom_data(image_bytes,tag_id)
print(f"PatientsSex : {tag_value.numpy().decode('UTF-8')}")
Explanation: Decode DICOM metadata and work with tags
decode_dicom_data decodes tag information. dicom_tags contains useful information such as the patient's age and sex, so DICOM tags such as dicom_tags.PatientsAge and dicom_tags.PatientsSex can be used. tensorflow_io borrows the tag notation from the pydicom dicom package.
End of explanation |
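A small convenience wrapper around the calls above could look like this (a sketch; it only uses the two tags already shown):
def read_tag(image_bytes, tag_id):
    # decode a single DICOM tag and return it as a Python string
    return tfio.image.decode_dicom_data(image_bytes, tag_id).numpy().decode('UTF-8')

print(read_tag(image_bytes, tfio.image.dicom_tags.PatientsAge))
print(read_tag(image_bytes, tfio.image.dicom_tags.PatientsSex))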
2,524 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
CS228 Python Tutorial
Adapted from the CS231n Python tutorial by Justin Johnson (http
Step1: Python versions
There are currently two different supported versions of Python, 2.7 and 3.6. Somewhat confusingly, Python 3.X introduced many backwards-incompatible changes to the language, so code written for 2.7 may not work under 3.6 and vice versa. The code shown in this tutorial uses the Python 3 print() function throughout.
You can check your Python version at the command line by running python --version.
Basic data types
Numbers
Integers and floats work as you would expect from other languages
Step2: Note that unlike many languages, Python does not have unary increment (x++) or decrement (x--) operators.
Python also has built-in types for long integers and complex numbers; you can find all of the details in the documentation.
Booleans
Python implements all of the usual operators for Boolean logic, but uses English words rather than symbols (&&, ||, etc.)
Step3: Now we let's look at the operations
Step4: Strings
Step5: String objects have a bunch of useful methods; for example
Step6: You can find a list of all string methods in the documentation.
Containers
Python includes several built-in container types
Step7: As usual, you can find all the gory details about lists in the documentation.
Slicing
In addition to accessing list elements one at a time, Python provides concise syntax to access sublists; this is known as slicing
Step8: Loops
You can loop over the elements of a list like this
Step9: If you want access to the index of each element within the body of a loop, use the built-in enumerate function
Step10: List comprehensions
Step11: You can make this code simpler using a list comprehension
Step12: List comprehensions can also contain conditions
Step13: Dictionaries
A dictionary stores (key, value) pairs, similar to a Map in Java or an object in Javascript. You can use it like this
Step14: You can find all you need to know about dictionaries in the documentation.
It is easy to iterate over the keys in a dictionary
Step15: If you want access to keys and their corresponding values, use the iteritems method
Step16: Dictionary comprehensions
Step17: Sets
A set is an unordered collection of distinct elements. As a simple example, consider the following
Step18: Loops
Step19: Functions
Python functions are defined using the def keyword. For example
Step20: We will often define functions to take optional keyword arguments, like this
Step21: Classes
The syntax for defining classes in Python is straightforward
Step22: Numpy
Numpy is the core library for scientific computing in Python. It provides a high-performance multidimensional array object, and tools for working with these arrays. If you are already familiar with MATLAB, you might find this tutorial useful to get started with Numpy.
To use Numpy, we first need to import the numpy package
Step23: Arrays
A numpy array is a grid of values, all of the same type, and is indexed by a tuple of nonnegative integers. The number of dimensions is the rank of the array; the shape of an array is a tuple of integers giving the size of the array along each dimension.
We can initialize numpy arrays from nested Python lists, and access elements using square brackets
Step24: Numpy also provides many functions to create arrays
Step25: Array indexing
Numpy offers several ways to index into arrays.
Slicing
Step26: A slice of an array is a view into the same data, so modifying it will modify the original array.
Step27: You can also mix integer indexing with slice indexing. However, doing so will yield an array of lower rank than the original array. Note that this is quite different from the way that MATLAB handles array slicing
Step28: Two ways of accessing the data in the middle row of the array.
Mixing integer indexing with slices yields an array of lower rank,
while using only slices yields an array of the same rank as the
original array
Step29: Integer array indexing
Step30: One useful trick with integer array indexing is selecting or mutating one element from each row of a matrix
Step31: Boolean array indexing
Step32: For brevity we have left out a lot of details about numpy array indexing; if you want to know more you should read the documentation.
Datatypes
Every numpy array is a grid of elements of the same type. Numpy provides a large set of numeric datatypes that you can use to construct arrays. Numpy tries to guess a datatype when you create an array, but functions that construct arrays usually also include an optional argument to explicitly specify the datatype. Here is an example
Step33: You can read all about numpy datatypes in the documentation.
Array math
Basic mathematical functions operate elementwise on arrays, and are available both as operator overloads and as functions in the numpy module
Step34: Note that unlike MATLAB, * is elementwise multiplication, not matrix multiplication. We instead use the dot function to compute inner products of vectors, to multiply a vector by a matrix, and to multiply matrices. dot is available both as a function in the numpy module and as an instance method of array objects
Step35: Numpy provides many useful functions for performing computations on arrays; one of the most useful is sum
Step36: You can find the full list of mathematical functions provided by numpy in the documentation.
Apart from computing mathematical functions using arrays, we frequently need to reshape or otherwise manipulate data in arrays. The simplest example of this type of operation is transposing a matrix; to transpose a matrix, simply use the T attribute of an array object
Step37: Broadcasting
Broadcasting is a powerful mechanism that allows numpy to work with arrays of different shapes when performing arithmetic operations. Frequently we have a smaller array and a larger array, and we want to use the smaller array multiple times to perform some operation on the larger array.
For example, suppose that we want to add a constant vector to each row of a matrix. We could do it like this
Step38: This works; however when the matrix x is very large, computing an explicit loop in Python could be slow. Note that adding the vector v to each row of the matrix x is equivalent to forming a matrix vv by stacking multiple copies of v vertically, then performing elementwise summation of x and vv. We could implement this approach like this
Step39: Numpy broadcasting allows us to perform this computation without actually creating multiple copies of v. Consider this version, using broadcasting
Step40: The line y = x + v works even though x has shape (4, 3) and v has shape (3,) due to broadcasting; this line works as if v actually had shape (4, 3), where each row was a copy of v, and the sum was performed elementwise.
Broadcasting two arrays together follows these rules
Step41: Broadcasting typically makes your code more concise and faster, so you should strive to use it where possible.
This brief overview has touched on many of the important things that you need to know about numpy, but is far from complete. Check out the numpy reference to find out much more about numpy.
Matplotlib
Matplotlib is a plotting library. In this section give a brief introduction to the matplotlib.pyplot module, which provides a plotting system similar to that of MATLAB.
Step42: By running this special iPython command, we will be displaying plots inline
Step43: Plotting
The most important function in matplotlib is plot, which allows you to plot 2D data. Here is a simple example
Step44: With just a little bit of extra work we can easily plot multiple lines at once, and add a title, legend, and axis labels
Step45: Subplots
You can plot different things in the same figure using the subplot function. Here is an example
Step46: You can read much more about the subplot function in the documentation.
Pandas
Step47: Images
Step48: KNN Classifier | Python Code:
def quicksort(arr):
if len(arr) <= 1:
return arr
pivot = arr[int(len(arr) / 2)]
left = [x for x in arr if x < pivot]
middle = [x for x in arr if x == pivot]
right = [x for x in arr if x > pivot]
return quicksort(left) + middle + quicksort(right)
print (quicksort([3,6,8,10,1,2]))
Explanation: CS228 Python Tutorial
Adapted from the CS231n Python tutorial by Justin Johnson (http://cs231n.github.io/python-numpy-tutorial/).
Introduction
Python is a great general-purpose programming language on its own, but with the help of a few popular libraries (numpy, scipy, matplotlib) it becomes a powerful environment for scientific computing.
We expect that many of you will have some experience with Python and numpy; for the rest of you, this section will serve as a quick crash course both on the Python programming language and on the use of Python for scientific computing.
Some of you may have previous knowledge in Matlab, in which case we also recommend the numpy for Matlab users page (https://docs.scipy.org/doc/numpy-dev/user/numpy-for-matlab-users.html).
In this tutorial, we will cover:
Basic Python: Basic data types (Containers, Lists, Dictionaries, Sets, Tuples), Functions, Classes
Numpy: Arrays, Array indexing, Datatypes, Array math, Broadcasting
Matplotlib: Plotting, Subplots, Images
Basics of Python
Python is a high-level, dynamically typed multiparadigm programming language. Python code is often said to be almost like pseudocode, since it allows you to express very powerful ideas in very few lines of code while being very readable. As an example, here is an implementation of the classic quicksort algorithm in Python:
End of explanation
x,y = 3,4
print (x,y)
# type of variable
print(type(x))
print (x + 1) # Addition;
print (x - 1) # Subtraction;
print (x * 2) # Multiplication;
print (x ** 2) # Exponentiation;
x += 1
print (x) # Prints "4"
x *= 2
print (x) # Prints "8"
y = 2.5
print (type(y)) # Prints "<type 'float'>"
print (y, y + 1, y * 2, y ** 2) # Prints "2.5 3.5 5.0 6.25"
Explanation: Python versions
There are currently two different supported versions of Python, 2.7 and 3.6. Somewhat confusingly, Python 3.X introduced many backwards-incompatible changes to the language, so code written for 2.7 may not work under 3.6 and vice versa. The code shown in this tutorial uses the Python 3 print() function throughout.
You can check your Python version at the command line by running python --version.
Basic data types
Numbers
Integers and floats work as you would expect from other languages:
End of explanation
t, f = True, False
print (type(t)) # Prints "<type 'bool'>"
Explanation: Note that unlike many languages, Python does not have unary increment (x++) or decrement (x--) operators.
Python also has built-in types for long integers and complex numbers; you can find all of the details in the documentation.
Booleans
Python implements all of the usual operators for Boolean logic, but uses English words rather than symbols (&&, ||, etc.):
End of explanation
print (t and f) # Logical AND;
print (t or f) # Logical OR;
print (not t) # Logical NOT;
print (t != f) # Logical XOR;
Explanation: Now we let's look at the operations:
End of explanation
hello = 'hello' # String literals can use single quotes
world = "world" # or double quotes; it does not matter.
print (hello, len(hello))
hw = hello + ' ' + world # String concatenation
print (hw) # prints "hello world"
hw12 = '%s %s %d' % (hello, world, 12) # sprintf style string formatting
print (hw12) # prints "hello world 12"
Explanation: Strings
End of explanation
s = "hello"
print (s.capitalize()) # Capitalize a string; prints "Hello"
print (s.upper()) # Convert a string to uppercase; prints "HELLO"
print (s.rjust(7)) # Right-justify a string, padding with spaces; prints " hello"
print (s.center(7)) # Center a string, padding with spaces; prints " hello "
print (s.replace('l', '(ell)')) # Replace all instances of one substring with another;
# prints "he(ell)(ell)o"
print (' world '.strip()) # Strip leading and trailing whitespace; prints "world"
Explanation: String objects have a bunch of useful methods; for example:
End of explanation
xs = [3, 1, 2] # Create a list
print (xs, xs[2])
print (xs[-1]) # Negative indices count from the end of the list; prints "2"
ys = [[1,2,3],[2,3,4]]
print(ys)
print(ys[1][2])
xs[2] = 'foo' # Lists can contain elements of different types
print (xs)
xs.append('bar') # Add a new element to the end of the list
print (xs)
x = xs.pop() # Remove and return the last element of the list
print (x, xs)
Explanation: You can find a list of all string methods in the documentation.
Containers
Python includes several built-in container types: lists, dictionaries, sets, and tuples.
Lists
A list is the Python equivalent of an array, but is resizeable and can contain elements of different types:
End of explanation
# nums = range(5) # range is a built-in function that creates a list of integers
nums = [2,3,5,1,2,8]
print (nums) # Prints "[2, 3, 5, 1, 2, 8]"
print (nums[2:4]) # Get a slice from index 2 to 4 (exclusive); prints "[5, 1]"
print (nums[2:]) # Get a slice from index 2 to the end; prints "[5, 1, 2, 8]"
print (nums[:2]) # Get a slice from the start to index 2 (exclusive); prints "[2, 3]"
print (nums[:]) # Get a slice of the whole list; prints "[2, 3, 5, 1, 2, 8]"
print (nums[:-2]) # Slice indices can be negative; prints "[2, 3, 5, 1]"
Explanation: As usual, you can find all the gory details about lists in the documentation.
Slicing
In addition to accessing list elements one at a time, Python provides concise syntax to access sublists; this is known as slicing:
End of explanation
animals = ['cat', 'dog', 'monkey']
for animal in animals:
print (animal)
print(1)
x =1
print(x)
Explanation: Loops
You can loop over the elements of a list like this:
End of explanation
animals = ['cat', 'dog', 'monkey']
for idx, animal in enumerate(animals):
print ('#%d: %s' % (idx + 1, animal))
Explanation: If you want access to the index of each element within the body of a loop, use the built-in enumerate function:
End of explanation
nums = [0, 1, 2, 3, 4]
squares = []
for x in nums:
squares.append(x ** 2)
print (squares)
Explanation: List comprehensions:
When programming, frequently we want to transform one type of data into another. As a simple example, consider the following code that computes square numbers:
End of explanation
nums = [0, 1, 2, 3, 4]
squares = [x ** 2 for x in nums]
print (squares)
Explanation: You can make this code simpler using a list comprehension:
End of explanation
nums = [0, 1, 2, 3, 4]
even_squares = [x ** 2 for x in nums if x % 2 == 0]
print (even_squares)
Explanation: List comprehensions can also contain conditions:
End of explanation
d = {'cat': 'cute', 'dog': 'furry'} # Create a new dictionary with some data
print (d['cat']) # Get an entry from a dictionary; prints "cute"
print ('cat' in d) # Check if a dictionary has a given key; prints "True"
d['fish'] = 'wet' # Set an entry in a dictionary
print (d['fish']) # Prints "wet"
# print (d['monkey']) # Uncommenting this raises KeyError: 'monkey' is not a key of d
print (d.get('monkey', 'N/A')) # Get an element with a default; prints "N/A"
print (d.get('fish', 'N/A')) # Get an element with a default; prints "wet"
del d['fish'] # Remove an element from a dictionary
print (d.get('fish', 'N/A')) # "fish" is no longer a key; prints "N/A"
Explanation: Dictionaries
A dictionary stores (key, value) pairs, similar to a Map in Java or an object in Javascript. You can use it like this:
End of explanation
d = {'person': 2, 'cat': 4, 'spider': 8}
for animal in d:
legs = d[animal]
print ('A %s has %d legs' % (animal, legs))
Explanation: You can find all you need to know about dictionaries in the documentation.
It is easy to iterate over the keys in a dictionary:
End of explanation
d = {'person': 2, 'cat': 4, 'spider': 8}
for animal, legs in d.items():
print ('A %s has %d legs' % (animal, legs))
Explanation: If you want access to keys and their corresponding values, use the iteritems method:
End of explanation
nums = [0, 1, 2, 3, 4]
even_num_to_square = {x: x ** 2 for x in nums if x % 2 == 0}
print (even_num_to_square)
Explanation: Dictionary comprehensions: These are similar to list comprehensions, but allow you to easily construct dictionaries. For example:
End of explanation
animals = {'cat', 'dog'}
print ('cat' in animals) # Check if an element is in a set; prints "True"
print ('fish' in animals) # prints "False"
animals.add('fish') # Add an element to a set
print ('fish' in animals)
print (animals) # Number of elements in a set;
animals.add('cat') # Adding an element that is already in the set does nothing
print (animals)
animals.remove('cat') # Remove an element from a set
print (animals)
Explanation: Sets
A set is an unordered collection of distinct elements. As a simple example, consider the following:
End of explanation
animals = {'cat', 'dog', 'fish'}
for idx, animal in enumerate(animals):
print ('#%d: %s' % (idx + 1, animal))
# Prints "#1: fish", "#2: dog", "#3: cat"
Explanation: Loops: Iterating over a set has the same syntax as iterating over a list; however since sets are unordered, you cannot make assumptions about the order in which you visit the elements of the set:
End of explanation
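Set comprehensions: like lists and dictionaries, we can easily construct sets using set comprehensions (an extra illustration in the same spirit as the examples above):
from math import sqrt
nums = {int(sqrt(x)) for x in range(30)}
print (nums)  # Prints "{0, 1, 2, 3, 4, 5}" (order may vary)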
def sign(x):
if x > 0:
return 'positive'
elif x < 0:
return 'negative'
else:
return 'zero'
for x in [-1, 0, 1]:
print (sign(x))
Explanation: Functions
Python functions are defined using the def keyword. For example:
End of explanation
def hello(name, loud=False):
if loud:
print ('HELLO, %s' % name.upper())
else:
print ('Hello, %s!' % name)
hello('Bob')
hello('Fred', loud=True)
Explanation: We will often define functions to take optional keyword arguments, like this:
End of explanation
class Greeter:
# Constructor
def __init__(self, name):
self.name = name # Create an instance variable
# Instance method
def greet(self, loud=False):
if loud:
print ('HELLO, %s!' % self.name.upper())
else:
print ('Hello, %s' % self.name)
g = Greeter('Fred') # Construct an instance of the Greeter class
g.greet() # Call an instance method; prints "Hello, Fred"
g.greet(loud=True) # Call an instance method; prints "HELLO, FRED!"
Explanation: Classes
The syntax for defining classes in Python is straightforward:
End of explanation
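Classes can also inherit from one another; here is a minimal sketch that extends the Greeter class above (an extra illustration):
class LoudGreeter(Greeter):
    # Override the instance method so that it always shouts
    def greet(self, loud=True):
        Greeter.greet(self, loud=loud)

lg = LoudGreeter('Fred')
lg.greet()  # prints "HELLO, FRED!"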
import numpy as np
Explanation: Numpy
Numpy is the core library for scientific computing in Python. It provides a high-performance multidimensional array object, and tools for working with these arrays. If you are already familiar with MATLAB, you might find this tutorial useful to get started with Numpy.
To use Numpy, we first need to import the numpy package:
End of explanation
a = np.array([1, 2, 3]) # Create a rank 1 array
print (type(a), a.shape, a[0], a[1], a[2])
a[0] = 5 # Change an element of the array
print (a)
b = np.array([[1,2,3],[4,5,6]]) # Create a rank 2 array
print (b)
print (b.shape)
print (b[0, 0], b[0, 1], b[1, 0])
Explanation: Arrays
A numpy array is a grid of values, all of the same type, and is indexed by a tuple of nonnegative integers. The number of dimensions is the rank of the array; the shape of an array is a tuple of integers giving the size of the array along each dimension.
We can initialize numpy arrays from nested Python lists, and access elements using square brackets:
End of explanation
a = np.zeros((2,2)) # Create an array of all zeros
print (a)
b = np.ones((1,2)) # Create an array of all ones
print (b)
c = np.full((2,2), 7) # Create a constant array
print (c)
d = np.eye(2) # Create a 2x2 identity matrix
print (d)
e = np.random.random((2,2)) # Create an array filled with random values
print (e)
Explanation: Numpy also provides many functions to create arrays:
End of explanation
import numpy as np
# Create the following rank 2 array with shape (3, 4)
# [[ 1 2 3 4]
# [ 5 6 7 8]
# [ 9 10 11 12]]
a = np.array([[1,2,3,4], [5,6,7,8], [9,10,11,12]])
# Use slicing to pull out the subarray consisting of the first 2 rows
# and columns 1 and 2; b is the following array of shape (2, 2):
# [[2 3]
# [6 7]]
b = a[:2, 1:3]
print (b)
Explanation: Array indexing
Numpy offers several ways to index into arrays.
Slicing: Similar to Python lists, numpy arrays can be sliced. Since arrays may be multidimensional, you must specify a slice for each dimension of the array:
End of explanation
print (a[0, 1])
b[0, 0] = 77 # b[0, 0] is the same piece of data as a[0, 1]
print (a[0, 1])
Explanation: A slice of an array is a view into the same data, so modifying it will modify the original array.
End of explanation
# Create the following rank 2 array with shape (3, 4)
a = np.array([[1,2,3,4], [5,6,7,8], [9,10,11,12]])
print (a)
print(a.shape)
Explanation: You can also mix integer indexing with slice indexing. However, doing so will yield an array of lower rank than the original array. Note that this is quite different from the way that MATLAB handles array slicing:
End of explanation
row_r1 = a[1, :] # Rank 1 view of the second row of a
row_r2 = a[1:2, :] # Rank 2 view of the second row of a
row_r3 = a[[1], :] # Rank 2 view of the second row of a
print (row_r1, row_r1.shape)
print (row_r2, row_r2.shape)
print (row_r3, row_r3.shape)
# We can make the same distinction when accessing columns of an array:
col_r1 = a[:, 1]
col_r2 = a[:, 1:2]
print (col_r1, col_r1.shape)
print (col_r2, col_r2.shape)
Explanation: Two ways of accessing the data in the middle row of the array.
Mixing integer indexing with slices yields an array of lower rank,
while using only slices yields an array of the same rank as the
original array:
End of explanation
a = np.array([[1,2], [3, 4], [5, 6]])
# An example of integer array indexing.
# The returned array will have shape (3,) and
print (a[[0, 1, 2], [0, 1, 0]])
# The above example of integer array indexing is equivalent to this:
print (np.array([a[0, 0], a[1, 1], a[2, 0]]))
# When using integer array indexing, you can reuse the same
# element from the source array:
print (a[[0, 0], [1, 1]])
# Equivalent to the previous integer array indexing example
print (np.array([a[0, 1], a[0, 1]]))
Explanation: Integer array indexing: When you index into numpy arrays using slicing, the resulting array view will always be a subarray of the original array. In contrast, integer array indexing allows you to construct arbitrary arrays using the data from another array. Here is an example:
End of explanation
# Create a new array from which we will select elements
a = np.array([[1,2,3], [4,5,6], [7,8,9], [10, 11, 12]])
print (a)
# Create an array of indices
b = np.array([0, 2, 0, 1])
# Select one element from each row of a using the indices in b
print (a[np.arange(4), b]) # Prints "[ 1 6 7 11]"
# Mutate one element from each row of a using the indices in b
a[np.arange(4), b] += 10
print (a)
Explanation: One useful trick with integer array indexing is selecting or mutating one element from each row of a matrix:
End of explanation
import numpy as np
a = np.array([[1,2], [3, 4], [5, 6]])
bool_idx = (a > 2) # Find the elements of a that are bigger than 2;
# this returns a numpy array of Booleans of the same
# shape as a, where each slot of bool_idx tells
# whether that element of a is > 2.
print (bool_idx)
# We use boolean array indexing to construct a rank 1 array
# consisting of the elements of a corresponding to the True values
# of bool_idx
print (a[bool_idx])
# We can do all of the above in a single concise statement:
print (a[a > 2])
Explanation: Boolean array indexing: Boolean array indexing lets you pick out arbitrary elements of an array. Frequently this type of indexing is used to select the elements of an array that satisfy some condition. Here is an example:
End of explanation
x = np.array([1, 2]) # Let numpy choose the datatype
y = np.array([1.0, 2.0]) # Let numpy choose the datatype
z = np.array([1, 2], dtype=np.int64) # Force a particular datatype
print (x.dtype, y.dtype, z.dtype)
Explanation: For brevity we have left out a lot of details about numpy array indexing; if you want to know more you should read the documentation.
Datatypes
Every numpy array is a grid of elements of the same type. Numpy provides a large set of numeric datatypes that you can use to construct arrays. Numpy tries to guess a datatype when you create an array, but functions that construct arrays usually also include an optional argument to explicitly specify the datatype. Here is an example:
End of explanation
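An existing array can also be converted to another datatype with astype (an extra illustration):
x = np.array([1.7, 2.2])
print (x.dtype)             # Prints "float64"
print (x.astype(np.int64))  # Prints "[1 2]"; the cast truncates toward zero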
x = np.array([[1,2],[3,4]], dtype=np.float64)
y = np.array([[5,6],[7,8]], dtype=np.float64)
# Elementwise sum; both produce the array
print (x + y)
print (np.add(x, y))
# Elementwise difference; both produce the array
print (x - y)
print (np.subtract(x, y))
# Elementwise product; both produce the array
print (x * y)
print (np.multiply(x, y))
# Elementwise division; both produce the array
# [[ 0.2 0.33333333]
# [ 0.42857143 0.5 ]]
print (x / y)
print (np.divide(x, y))
# Elementwise square root; produces the array
# [[ 1. 1.41421356]
# [ 1.73205081 2. ]]
print (np.sqrt(x))
Explanation: You can read all about numpy datatypes in the documentation.
Array math
Basic mathematical functions operate elementwise on arrays, and are available both as operator overloads and as functions in the numpy module:
End of explanation
x = np.array([[1,2],[3,4]])
y = np.array([[5,6],[7,8]])
v = np.array([9,10])
w = np.array([11, 12])
# Inner product of vectors; both produce 219
print (v.dot(w))
print (np.dot(v, w))
# Matrix / vector product; both produce the rank 1 array [29 67]
print (x.dot(v))
print (np.dot(x, v))
# Matrix / matrix product; both produce the rank 2 array
# [[19 22]
# [43 50]]
print (x.dot(y))
print (np.dot(x, y))
Explanation: Note that unlike MATLAB, * is elementwise multiplication, not matrix multiplication. We instead use the dot function to compute inner products of vectors, to multiply a vector by a matrix, and to multiply matrices. dot is available both as a function in the numpy module and as an instance method of array objects:
End of explanation
x = np.array([[1,2],[3,4]])
print (np.sum(x)) # Compute sum of all elements; prints "10"
print (np.sum(x, axis=0)) # Compute sum of each column; prints "[4 6]"
print (np.sum(x, axis=1)) # Compute sum of each row; prints "[3 7]"
Explanation: Numpy provides many useful functions for performing computations on arrays; one of the most useful is sum:
End of explanation
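A few other commonly used reductions work the same way (an extra illustration using the same x):
print (np.mean(x))         # Mean of all elements; prints "2.5"
print (np.max(x, axis=0))  # Maximum of each column; prints "[3 4]"
print (np.argmax(x))       # Index of the maximum in the flattened array; prints "3"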
print (x)
print (x.T)
v = np.array([[1,2,3]])
print (v)
print (v.T)
Explanation: You can find the full list of mathematical functions provided by numpy in the documentation.
Apart from computing mathematical functions using arrays, we frequently need to reshape or otherwise manipulate data in arrays. The simplest example of this type of operation is transposing a matrix; to transpose a matrix, simply use the T attribute of an array object:
End of explanation
# We will add the vector v to each row of the matrix x,
# storing the result in the matrix y
x = np.array([[1,2,3], [4,5,6], [7,8,9], [10, 11, 12]])
v = np.array([1, 0, 1])
y = np.empty_like(x) # Create an empty matrix with the same shape as x
# Add the vector v to each row of the matrix x with an explicit loop
for i in range(4):
y[i, :] = x[i, :] + v
print (y)
Explanation: Broadcasting
Broadcasting is a powerful mechanism that allows numpy to work with arrays of different shapes when performing arithmetic operations. Frequently we have a smaller array and a larger array, and we want to use the smaller array multiple times to perform some operation on the larger array.
For example, suppose that we want to add a constant vector to each row of a matrix. We could do it like this:
End of explanation
vv = np.tile(v, (4, 1)) # Stack 4 copies of v on top of each other
print (vv) # Prints "[[1 0 1]
# [1 0 1]
# [1 0 1]
# [1 0 1]]"
y = x + vv # Add x and vv elementwise
print (y)
Explanation: This works; however when the matrix x is very large, computing an explicit loop in Python could be slow. Note that adding the vector v to each row of the matrix x is equivalent to forming a matrix vv by stacking multiple copies of v vertically, then performing elementwise summation of x and vv. We could implement this approach like this:
End of explanation
import numpy as np
# We will add the vector v to each row of the matrix x,
# storing the result in the matrix y
x = np.array([[1,2,3], [4,5,6], [7,8,9], [10, 11, 12]])
v = np.array([1, 0, 1])
y = x + v # Add v to each row of x using broadcasting
print (y)
Explanation: Numpy broadcasting allows us to perform this computation without actually creating multiple copies of v. Consider this version, using broadcasting:
End of explanation
# Compute outer product of vectors
v = np.array([1,2,3]) # v has shape (3,)
w = np.array([4,5]) # w has shape (2,)
# To compute an outer product, we first reshape v to be a column
# vector of shape (3, 1); we can then broadcast it against w to yield
# an output of shape (3, 2), which is the outer product of v and w:
print (np.reshape(v, (3, 1)) * w)
# Add a vector to each row of a matrix
x = np.array([[1,2,3], [4,5,6]])
# x has shape (2, 3) and v has shape (3,) so they broadcast to (2, 3),
# giving the following matrix:
print (x + v)
# Add a vector to each column of a matrix
# x has shape (2, 3) and w has shape (2,).
# If we transpose x then it has shape (3, 2) and can be broadcast
# against w to yield a result of shape (3, 2); transposing this result
# yields the final result of shape (2, 3) which is the matrix x with
# the vector w added to each column. Gives the following matrix:
print ((x.T + w).T)
# Another solution is to reshape w to be a row vector of shape (2, 1);
# we can then broadcast it directly against x to produce the same
# output.
print (x + np.reshape(w, (2, 1)))
# Multiply a matrix by a constant:
# x has shape (2, 3). Numpy treats scalars as arrays of shape ();
# these can be broadcast together to shape (2, 3), producing the
# following array:
print (x * 2)
Explanation: The line y = x + v works even though x has shape (4, 3) and v has shape (3,) due to broadcasting; this line works as if v actually had shape (4, 3), where each row was a copy of v, and the sum was performed elementwise.
Broadcasting two arrays together follows these rules:
If the arrays do not have the same rank, prepend the shape of the lower rank array with 1s until both shapes have the same length.
The two arrays are said to be compatible in a dimension if they have the same size in the dimension, or if one of the arrays has size 1 in that dimension.
The arrays can be broadcast together if they are compatible in all dimensions.
After broadcasting, each array behaves as if it had shape equal to the elementwise maximum of shapes of the two input arrays.
In any dimension where one array had size 1 and the other array had size greater than 1, the first array behaves as if it were copied along that dimension
If this explanation does not make sense, try reading the explanation from the documentation or this explanation.
Functions that support broadcasting are known as universal functions. You can find the list of all universal functions in the documentation.
Here are some applications of broadcasting:
End of explanation
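One quick way to see these rules in action is to inspect the broadcast shape directly (an extra illustration):
a = np.ones((4, 3))
b = np.ones(3)
print (np.broadcast(a, b).shape)  # Prints "(4, 3)"
print ((a + b).shape)             # Prints "(4, 3)"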
import matplotlib.pyplot as plt
Explanation: Broadcasting typically makes your code more concise and faster, so you should strive to use it where possible.
This brief overview has touched on many of the important things that you need to know about numpy, but is far from complete. Check out the numpy reference to find out much more about numpy.
Matplotlib
Matplotlib is a plotting library. In this section give a brief introduction to the matplotlib.pyplot module, which provides a plotting system similar to that of MATLAB.
End of explanation
%matplotlib inline
Explanation: By running this special iPython command, we will be displaying plots inline:
End of explanation
# Compute the x and y coordinates for points on a sine curve
x = np.arange(0, 3 * np.pi, 0.1)
y = np.sin(x)
# Plot the points using matplotlib
plt.scatter(x, y)
Explanation: Plotting
The most important function in matplotlib is plot, which allows you to plot 2D data. Here is a simple example:
End of explanation
y_cos = np.cos(x)
# Plot the points using matplotlib
plt.plot(x, y)
plt.plot(x, y_cos)
plt.xlabel('x axis label')
plt.ylabel('y axis label')
plt.title('Sine and Cosine')
plt.legend(['Sine', 'Cosine'])
Explanation: With just a little bit of extra work we can easily plot multiple lines at once, and add a title, legend, and axis labels:
End of explanation
# Compute the x and y coordinates for points on sine and cosine curves
x = np.arange(0, 3 * np.pi, 0.1)
y_sin = np.sin(x)
y_cos = np.cos(x)
# Set up a subplot grid that has height 2 and width 1,
# and set the first such subplot as active.
plt.subplot(2, 1, 1)
# Make the first plot
plt.plot(x, y_sin)
plt.title('Sine')
# Set the second subplot as active, and make the second plot.
plt.subplot(2, 1, 2)
plt.plot(x, y_cos)
plt.title('Cosine')
# Show the figure.
plt.show()
Explanation: Subplots
You can plot different things in the same figure using the subplot function. Here is an example:
End of explanation
import pandas as pd
# read_csv() is the function (or feature) from pandas we want to use to load the file into memory
dframe = pd.read_csv("lectures/datasets/titanic_dataset.csv")
# .head(num_of_rows) is a method that displays the first few (num_of_rows) rows, not counting column headers
dframe.head(5)
# rows and columns in dataset
dframe.shape
# check columns in dataset
dframe.columns
# select a row
hundredth_row = dframe.loc[99]
print(hundredth_row)
# select multiple rows
print("Rows 3, 4, 5 and 6")
print(dframe.loc[3:6])
# select specific columns
cols = ['survived','sex','age']
specific_cols = dframe[cols]
specific_cols.head()
# check statistics of the data
dframe.describe()
# check histogram of age
dframe.hist(column='age', bins=10)
# Replace all the occurences of male with the number 0 and female with 1
dframe.loc[dframe["sex"] == "male", "sex"] = 0
dframe.loc[dframe["sex"] == "female", "sex"] = 1
Explanation: You can read much more about the subplot function in the documentation.
Pandas
End of explanation
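Two more pandas operations that are often useful on this dataset (a sketch that only uses columns already present above):
# count passengers per class
print(dframe['pclass'].value_counts())
# survival rate per sex (0 = male, 1 = female after the replacement above)
print(dframe.groupby('sex')['survived'].mean())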
from IPython.display import Image
Image(filename='lectures/images/01_02.png', width=500)
Image(filename='lectures/images/01_01.png', width=500)
Explanation: Images
End of explanation
# read X and y
# cols = ['pclass','sex','age','fare']
cols = ['pclass','sex','age']
X = dframe[cols]
y = dframe[["survived"]]
dframe.head()
# Use scikit-learn KNN classifier to predict survival probability
from sklearn.neighbors import KNeighborsClassifier
neigh = KNeighborsClassifier(n_neighbors=3)
neigh.fit(X, y)
# check accuracy
neigh.score(X,y)
# define a passenger
passenger = [1,1,29]
# predict survival label
print(neigh.predict([passenger]))
# predict survival probability
print(neigh.predict_proba([passenger]))
# find k-nearest neighbors (kneighbors expects a 2D array, hence the extra brackets)
neigh.kneighbors([passenger], 3)
# Let's create some data for DiCaprio and Winslet and you
import numpy as np
colsidx = [0,2,3];
dicaprio = np.array([3, 'Jack Dawson', 0, 19, 0, 0, 'N/A', 5.0000])
winslet = np.array([1, 'Rose DeWitt Bukater', 1, 17, 1, 2, 'N/A', 100.0000])
you = np.array([1, 'user', 1, 24, 0, 2, 'N/A', 50.0000])
# Preprocess data
dicaprio = dicaprio[colsidx]
winslet = winslet[colsidx]
you = you[colsidx]
# # Predict surviving chances (class 1 results)
pred = neigh.predict([dicaprio, winslet, you])
prob = neigh.predict_proba([dicaprio, winslet, you])
print("DiCaprio Surviving:", pred[0], " with probability", prob[0])
print("Winslet Surviving Rate:", pred[1], " with probability", prob[2])
print("user Surviving Rate:", pred[2], " with probability", prob[2])
Explanation: KNN Classifier
End of explanation |
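Note that the accuracy above is computed on the training data itself; a more honest estimate uses a held-out test set (a sketch reusing the X and y defined above):
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
knn = KNeighborsClassifier(n_neighbors=3)
knn.fit(X_train, y_train)
print(knn.score(X_test, y_test))  # accuracy on passengers the model has not seen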
2,525 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
automaton.is_functional
Whether the automaton is functional, i.e. each input (string) is transduced to a unique output (string). There may be multiple paths, however, that contain this input and output string pair.
Precondition
Step1: Simple Cases
Step2: This transducer is functional, as can also be seen from its series (computed thanks to automaton.shortest)
Step3: However, the following transducer is not functional, as it maps ab to both xy and xz, again, as demonstrated by shortest.
Step4: A More Complex Example
The following example (Figure 3 from beal.2003.tcs) shows a transducer whose input automaton is ambiguous, yet the transducer is functional.
Step5: This transducer is functional
Step6: If we focus on the "input automaton", in other words, on tape 0 of this transducer, we can see that it is ambiguous. | Python Code:
import vcsn
Explanation: automaton.is_functional
Whether the automaton is functional, i.e. each input (string) is transduced to a unique output (string). There may be multiple paths, however, that contain this input and output string pair.
Precondition:
- The automaton is a transducer
Examples
End of explanation
%%automaton a
context = "lat<lal_char(abc),lal_char(xyz)>, b"
$ -> 0
0 -> 1 a|x
0 -> 2 a|x
1 -> 3 b|y
2 -> 3 b|y
3 -> $
Explanation: Simple Cases
End of explanation
a.is_functional()
a.shortest(10)
Explanation: This transducer is functional, as can also be seen from its series (computed thanks to automaton.shortest): it uniquely maps ab to xy.
End of explanation
%%automaton a
context = "lat<lal_char(abc),lal_char(xyz)>, b"
$ -> 0
0 -> 1 a|x
0 -> 2 a|x
1 -> 3 b|y
2 -> 3 b|z
3 -> $
a.is_functional()
a.shortest(10)
Explanation: However, the following transducer is not functional, as it maps ab to both xy and xz, again, as demonstrated by shortest.
End of explanation
%%automaton a
context = "lat<lal_char(a),law_char(x)>, b"
$ -> 0
0 -> $
0 -> 1 a|x
0 -> 2 a|xxx
1 -> 2 a|xxxx
1 -> 3 a|xxx
2 -> 3 a|x
3 -> 0 a|xx
$ -> 3
3 -> $
Explanation: A More Complex Example
The following example (Figure 3 from beal.2003.tcs) shows a transducer whose input automaton is ambiguous, yet the transducer is functional.
End of explanation
a.is_functional()
Explanation: This transducer is functional:
End of explanation
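To see a few of the transduced pairs for this larger example as well, shortest can be used again (an extra illustration):
a.shortest(6)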
b = a.focus(0)
b
b.is_ambiguous()
Explanation: If we focus on the "input automaton", in other words, on tape 0 of this transducer, we can see that it is ambiguous.
End of explanation |
2,526 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Introduction to the decision tree implementation in sklearn
0. Preliminaries
This article gives a brief analysis of how the code modules involved in decision trees relate to each other in scikit-learn/scikit-learn.
The version of the code analyzed is:
```shell
~/W/s/sklearn ❯❯❯ git log -n 1 study/analyses_decision_tree
commit d161bfaa1a42da75f4940464f7f1c524ef53484f
Author
Step1: Tree.py defines the BaseDecisionTree base class, which implements the complete classification and regression functionality; the derived subclasses mainly wrap the initialization parameters. The two kinds of subclasses differ in that the DecisionTree* classes sweep over features and values to find the best split point, while the ExtraTree* classes draw features and values at random to look for a split point.
The training method fit of the base class proceeds as follows:
Check the parameters.
Set up the criterion (impurity measure).
Create the splitter: the class is chosen according to whether the data is sparse or dense.
Create the tree: the number of leaf nodes decides between depth-first and best-first construction.
Call the tree builder to grow the decision tree.
The code is as follows, with details folded away:
python
72 class BaseDecisionTree(six.with_metaclass(ABCMeta, BaseEstimator,
73 _LearntSelectorMixin))
Step5: For classification problems, sklearn provides two criteria, Gini and Entropy;
Gini is used by default.
Decision Trees
Step6: The Splitter base class is specialized by data storage format (dense or sparse) into BaseDenseSplitter and BaseSparseSplitter. Below that, two further kinds are distinguished by how the threshold is searched for: the Best*Splitter classes sweep over the possible values of a feature, while the Random*Splitter classes draw them at random.
2.2 How the tree is assembled
_tree.* are the files related to assembling the tree.
The class relationships are shown in the following diagram:
SVG("./res/uml/Model__tree_0.svg")
Explanation: Introduction to the decision tree implementation in sklearn
0. Preliminaries
This article gives a brief analysis of how the code modules involved in decision trees relate to each other in scikit-learn/scikit-learn.
The version of the code analyzed is:
```shell
~/W/s/sklearn ❯❯❯ git log -n 1 study/analyses_decision_tree
commit d161bfaa1a42da75f4940464f7f1c524ef53484f
Author: John B Nelson jnelso11@gmu.edu
Date: Thu May 26 18:36:37 2016 -0400
Add missing double quote (#6831)
```
This article assumes the reader already knows the basic concepts of decision trees; reading sklearn - Decision Trees is a quick way to get up to speed.
1. Overview
The decision tree code lives under the scikit-learn/sklearn/tree directory; the purpose of each file is roughly as follows:
tree
+-- __init__.py
+-- setup.py
+-- tree.py main module
+-- export.py exporting the tree model
+-- _tree.* classes that assemble the tree
+-- _splitter.* split strategies
+-- _criterion.* impurity criteria
+-- _utils.* helper data structures: stack and min-heap
+-- tests/
+-- __init__.py
+-- test_tree.py
+-- test_export.py
The rough relationships between the classes are as follows:
End of explanation
SVG("./res/uml/Model___criterion_1.svg")
Explanation: Tree.py defines the BaseDecisionTree base class, which implements the complete classification and regression functionality; the derived subclasses mainly wrap the initialization parameters. The two kinds of subclasses differ in that the DecisionTree* classes sweep over features and values to find the best split point, while the ExtraTree* classes draw features and values at random to look for a split point.
The training method fit of the base class proceeds as follows:
Check the parameters.
Set up the criterion (impurity measure).
Create the splitter: the class is chosen according to whether the data is sparse or dense.
Create the tree: the number of leaf nodes decides between depth-first and best-first construction.
Call the tree builder to grow the decision tree.
The code is as follows, with details folded away:
python
72 class BaseDecisionTree(six.with_metaclass(ABCMeta, BaseEstimator,
73 _LearntSelectorMixin)):
74 Base class for decision trees.
75 #+-- 3 lines: Warning: This class should not be used directly.-------------------
78
79
80 @abstractmethod
81 def __init__(self,
82 #+-- 30 lines: criterion,---------------------------------------------------------
112
113 def fit(self, X, y, sample_weight=None, check_input=True,
114 X_idx_sorted=None):
115 Build a decision tree from the training set (X, y).
116 #+-- 34 lines: Parameters---------------------------------------------------------
150
151
152 #+--180 lines: random_state = check_random_state(self.random_state)---------------
332
333 # Build tree
334 criterion = self.criterion
335 #+-- 6 lines: if not isinstance(criterion, Criterion):---------------------------
341
342 SPLITTERS = SPARSE_SPLITTERS if issparse(X) else DENSE_SPLITTERS
343 #+-- 9 lines: splitter = self.splitter-------------------------------------------
352
353 self.tree_ = Tree(self.n_features_, self.n_classes_, self.n_outputs_)
354
355 # Use BestFirst if max_leaf_nodes given; use DepthFirst otherwise
356 if max_leaf_nodes < 0:
357 builder = DepthFirstTreeBuilder(splitter, min_samples_split,
358 min_samples_leaf,
359 min_weight_leaf,
360 max_depth)
361 else:
362 builder = BestFirstTreeBuilder(splitter, min_samples_split,
363 min_samples_leaf,
364 min_weight_leaf,
365 max_depth,
366 max_leaf_nodes)
367
368 builder.build(self.tree_, X, y, sample_weight, X_idx_sorted)
369
370 #+-- 3 lines: if self.n_outputs_ == 1:-------------------------------------------
373
374 return self
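# Illustrative sketch (not part of the quoted sklearn source): fit a small tree and
# inspect the low-level Tree object that the builder fills in.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

iris = load_iris()
clf = DecisionTreeClassifier(max_depth=3).fit(iris.data, iris.target)
t = clf.tree_                          # the _tree.Tree instance built above
print(t.node_count, t.max_depth)       # number of nodes and the tree depth
print(t.feature[:5], t.threshold[:5])  # per-node split feature / threshold (-2 marks leaves)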
The prediction method predict is very simple: it calls tree_.predict() to obtain the predicted values. For a classification problem it outputs the class with the largest predicted value; for a regression problem it outputs the values directly.
Python
398 def predict(self, X, check_input=True):
399 Predict class or regression value for X.
400 +-- 20 lines: For a classification model, the predicted class for each sample in X
420
421
422 X = self._validate_X_predict(X, check_input)
423 proba = self.tree_.predict(X)
424 n_samples = X.shape[0]
425
426 # Classification
427 if isinstance(self, ClassifierMixin):
428 if self.n_outputs_ == 1:
429 return self.classes_.take(np.argmax(proba, axis=1), axis=0)
430
431 +--- 9 lines: else:--------------------------------------------------------------
440
441 # Regression
442 else:
443 if self.n_outputs_ == 1:
444 return proba[:, 0]
445 +--- 3 lines: else:--------------------------------------------------------------
sklearn's decision tree is the CART (Classification and Regression Trees) algorithm: a classification problem is turned into a regression problem that predicts probabilities, so both kinds of problems are handled in the same way, and the main difference lies in the criterion.
2 The modules in brief
2.0 Criteria
_criterion.* are the files related to the criteria; they are implemented in Cython, where the .pxd and .pyx files play the roles of .h and .c files in C.
The class relationships are shown in the following diagram:
End of explanation
SVG("./res/uml/Model___splitter_2.svg")
Explanation: For classification problems, sklearn provides two criteria, Gini and Entropy;
Gini is used by default.
Decision Trees: “Gini” vs. “Entropy” criteria
For regression problems, MSE (mean squared error) and FriedmanMSE are provided.
MSE is used by default.
FriedmanMSE is used for gradient boosting.
In practice, we should try the different criteria on our own data.
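A quick way to compare the two classification criteria on a concrete dataset (an illustrative sketch, not from the original article):
from sklearn.model_selection import cross_val_score
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

iris = load_iris()
for crit in ('gini', 'entropy'):
    scores = cross_val_score(DecisionTreeClassifier(criterion=crit, random_state=0),
                             iris.data, iris.target, cv=5)
    print(crit, scores.mean())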
2.1 Split strategies
_splitter.* are the files related to the split strategies.
The class relationships are shown in the following diagram:
End of explanation
SVG("./res/uml/Model___tree_3.svg")
Explanation: The Splitter base class is specialized by data storage format (dense or sparse) into BaseDenseSplitter and BaseSparseSplitter. Below that, two further kinds are distinguished by how the threshold is searched for: the Best*Splitter classes sweep over the possible values of a feature, while the Random*Splitter classes draw them at random.
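The choice of splitter is exposed through the estimator's splitter parameter; as a small illustration (not from the original article), the defaults of the two public classes differ exactly along this line:
from sklearn.tree import DecisionTreeClassifier, ExtraTreeClassifier

print(DecisionTreeClassifier().splitter)  # 'best'   -> Best*Splitter
print(ExtraTreeClassifier().splitter)     # 'random' -> Random*Splitter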
2.2 How the tree is assembled
_tree.* are the files related to assembling the tree.
The class relationships are shown in the following diagram:
End of explanation |
2,527 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
DKRZ data ingest information handling
The submission_forms package provides a collection of components to support the management of information related to data ingest related activities (data transport, data checking, data publication and data archival)
Step1: 2) Explore the structure of a workflow Form object
(i.e. submission workflow json file)
The workflow is structured according to the following workflow steps
Step2: each workflow step is structured according to
Step3: agent related information
this is generally defined in the dkrz_forms.config.workflow_steps.py templates
(see source code on github
Step5: activity related information
again the generic definition is defined in the dkrz_forms.workflow_steps.py templates.
for example the quality assurance (qua) related activity information is defined as
Step7: workflow step report documents
each workflow step produces an output associated to the entity_out keyword.
To each output a user defined dictionary can be attached as report
so e.g.
my_form.sub.entity_out.report contains all the user input provided e.g. by mail or in a excel
sheet or provided via a (jupyter notebook) form
my_form.qua.entity_out.report contains the quality_assurance tool json output as dictionary
etc. | Python Code:
## the following libraries are needed to interact with
## json based form submissions
from dkrz_forms import form_handler, utils, checks,wflow_handler
from datetime import datetime
## info_file = "path to json file"
info_file = "../Forms/../xxx.json"
# load json file and convert to Form object for simple updating
my_form = utils.load_workflow_form(info_file)
# use "tab" completion to view the attributes
# every form has a project and has the above workflow steps associated
my_form.
# evaluate to see doc string of submission part
?my_form
Explanation: DKRZ data ingest information handling
The submission_forms package provides a collection of components to support the management of information related to data ingest related activities (data transport, data checking, data publication and data archival):
data submission related information management
who, when, what, for which project, data characteristics
data management related information collection
ingest, quality assurance, publication, archiving
The information is stored in structured json files which are 1-to-1 mapped to Form objects to simplify information handling. In the following it is assumed that an initial structured json file was generated. For the different ways to generate initial structured json files see the Workflow_Form_Generation.ipynb notebook:
DKRZ ingest workflow system
Approach:
* Data management related information is managed in structured json files
* To simplify interactive information updates etc. json files are converted to Form objects
* There are multiple possibilities to populate the json files (and associated Form objects):
* DKRZ served jupyter notebooks (e.g. in DKRZ jupyterhub http://data-forms.dkrz.de:8080)
* Client side jupyter notebooks (submission via email, rt ticket, git commit)
* Client side excel sheets (submission via email, rt ticket)
* Unstructured email exchange (json population done by data managers)
* A toolset to manage Form objects (specially structured json files) along a well defined workflow
* A toolset to search and intercorrelate data submission information
* Support for W3C prov standard exposure of the structured json files
1) Get a Form object for information stored in a json file
End of explanation
# evaluate to view associated documentation string
?my_form.sub
# use "tab" completion
my_form.sub.
Explanation: 2) Explore the structure of a workflow Form object
(i.e. submission workflow json file)
The workflow is structured according to the following workflow steps:
'sub': data submission related information (client side: who, what, how, .., manager side: who, status,.. )
'rev': data submission review information
'ing': data ingest related information
'qua': data quality assurance related information
'pub': data publication related information
'lta': data long term archival and data citation related information
information on the form objects can be retrieved interactively in ipython
in jupyter notebooks - use again "tab" for completion and ? to retrieve
docstring documentation.
Examples:
End of explanation
# example: "tab" completion to view attributes of agent
# thus - agent has an email, first_name and last_name
my_form.sub.agent.
Explanation: each workflow step is structured according to:
agent: step related person or software tool information
activity: step execution related information
entity_in: input information for this workflow step
entity_out: output information for this workflow step
these parts have to be filled for each workflow step to characterize who (agent), did what (activity) using which input information (entity_in) to produce which output information (entity_out). These parts align with the WC3 Prov model allowing for a translation of all collected information based on the W3C prov standard (see the provenance.ipynb notebook for an example).
End of explanation
# e.g. set email of person responsible for data submission:
my_form.sub.agent.email = 'franz_mustermann@hzg.de'
Explanation: agent related information
this is generally defined in the dkrz_forms.config.workflow_steps.py templates
(see source code on github: https://github.com/IS-ENES/submission_forms/dkrz_forms/config/workflow_steps.py)
for example the agent responsible for data submission this is SUBMISSION_AGENT, which is defined as:
SUBMISSION_AGENT = {
'doc': Attributes characterizing the person responsible for form completion and submission:
- last_name: Last name of the person responsible for the submission form content
- first_name: Corresponding first name
- email: Valid user email address: all follow up activities will use this email to contact end user
- keyword : user provided key word to remember and separate submission
,
'i_name': 'submission_agent',
'last_name' : 'mandatory',
'first_name' : 'mandatory',
'keyword': 'mandatory',
'email': 'mandatory',
'responsible_person':'mandatory'
}
All entries characterized as 'mandatory' have to be filled.
End of explanation
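The remaining mandatory agent attributes can be filled in the same way (the values below are placeholders):
my_form.sub.agent.first_name = 'Franz'            # placeholder
my_form.sub.agent.last_name = 'Mustermann'        # placeholder
my_form.sub.agent.keyword = 'my-test-submission'  # placeholder keyword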
## back to example: submission related activity information
import pprint
pprint.pprint(my_form.sub.activity.__doc__)
Explanation: activity related information
again the generic definition is defined in the dkrz_forms.workflow_steps.py templates.
for example the quality assurance (qua) related activity information is defined as:
QUA_ACTIVITY= {
'doc':
Attributes characterizing the data quality assurance activity:
- status: status information
- start_time, end_time: data ingest timing information
- comment : free text comment
- ticket_id: related RT ticket number
- follow_up_ticket: in case new data has to be provided
- quality_report: dictionary with quality related information (tbd.)
,
'i_name':'qua_activity',
'status':ACTIVITY_STATUS,
'error_status':ERROR_STATUS,
'qua_tool_version':"mandatory",
"start_time":"mandatory",
"end_time":"optional",
"comment":"optional",
"ticket_id": "mandatory",
"follow_up_ticket": 'optional', # qa feedback to users, follow up actions
}
End of explanation
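Analogously, the quality assurance activity attributes could be filled like this (a sketch; the attribute names follow the template above and the values are placeholders):
my_form.qua.activity.qua_tool_version = '0.5'           # placeholder version string
my_form.qua.activity.start_time = str(datetime.now())   # datetime was imported above
my_form.qua.activity.ticket_id = '12345'                # placeholder RT ticket number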
# view the submission related information provided by the end user:
pprint.pprint(my_form.sub.entity_out.report.__dict__)
## Example for the quality assurance workflow step (qua):
my_form.qua.entity_out.report = {
"QA_conclusion": "PASS",
"project": "CORDEX",
"institute": "CLMcom",
"model": "CLMcom-CCLM4-8-17-CLM3-5",
"domain": "AUS-44",
"driving_experiment": [ "ICHEC-EC-EARTH"],
"experiment": [ "history", "rcp45", "rcp85"],
"ensemble_member": [ "r12i1p1" ],
"frequency": [ "day", "mon", "sem" ],
"annotation":
[
{
"scope": ["mon", "sem"],
"variable": [ "tasmax", "tasmin", "sfcWindmax" ],
"caption": "attribute <variable>:cell_methods for climatologies requires <time>:climatology instead of time_bnds",
"comment": "due to the format of the data, climatology is equivalent to time_bnds",
"severity": "note"
}
]
}
Explanation: workflow step report documents
each workflow step produces an output associated to the entity_out keyword.
To each output a user defined dictionary can be attached as report
so e.g.
my_form.sub.entity_out.report contains all the user input provided e.g. by mail or in a excel
sheet or provided via a (jupyter notebook) form
my_form.qua.entity_out.report contains the quality_assurance tool json output as dictionary
etc.
End of explanation |
2,528 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
2A.ML101.8
Step1: Hyperparameters, Over-fitting, and Under-fitting
The issues associated with validation and
cross-validation are some of the most important
aspects of the practice of machine learning. Selecting the optimal model
for your data is vital, and is a piece of the problem that is not often
appreciated by machine learning practitioners.
Of core importance is the following question
Step2: In real-life situations, we have noise (e.g. measurement noise) in our data
Step3: As we can see, our linear model captures and amplifies the noise in the data. It displays a lot of variance.
We can use another linear estimator that uses regularization, the Ridge estimator. This estimator regularizes the coefficients by shrinking them to zero, under the assumption that very high correlations are often spurious. The alpha parameter controls the amount of shrinkage used.
Step4: As we can see, the estimator displays much less variance. However it systematically under-estimates the coefficient. It displays a biased behavior.
This is a typical example of a bias/variance tradeoff
Step5: In the above figure, we see fits for three different values of d.
For d = 1, the data is under-fit. This means that the model is too
simplistic
Step6: In order to quantify the effects of bias and variance and construct
the best possible estimator, we will split our training data into
a training set and a validation set. As a general rule, the
training set should be about 60% of the samples.
The overarching idea is as follows. The model parameters (in our case,
the coefficients of the polynomials) are learned using the training
set as above. The error is evaluated on the validation set,
and the meta-parameters (in our case, the degree of the polynomial)
are adjusted so that this validation error is minimized.
Finally, the labels are predicted for the test set. These labels
are used to evaluate how well the algorithm can be expected to
perform on unlabeled data.
The validation error of our polynomial classifier can be visualized
by plotting the error as a function of the polynomial degree d. We can do
this as follows
Step7: This figure compactly shows the reason that validation is
important. On the left side of the plot, we have very low-degree
polynomial, which under-fits the data. This leads to a very high
error for both the training set and the validation set. On
the far right side of the plot, we have a very high degree
polynomial, which over-fits the data. This can be seen in the fact
that the training error is very low, while the validation
error is very high. Plotted for comparison is the intrinsic error
(this is the scatter artificially added to the data
Step8: Now that we've defined this function, we can plot the learning curve.
But first, take a moment to think about what we're going to see
Step9: Notice that the validation error generally decreases with a growing training set,
while the training error generally increases with a growing training set. From
this we can infer that as the training size increases, they will converge to a single
value.
From the above discussion, we know that d = 1 is a high-bias estimator which
under-fits the data. This is indicated by the fact that both the
training and validation errors are very high. When confronted with this type of learning curve,
we can expect that adding more training data will not help matters | Python Code:
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
Explanation: 2A.ML101.8: Parameter selection, Validation & Testing
The content in this section is adapted from Andrew Ng's excellent Coursera course.
Source: Course on machine learning with scikit-learn by Gaël Varoquaux
End of explanation
X = np.c_[ .5, 1].T
y = [.5, 1]
X_test = np.c_[ 0, 2].T
X
y
X_test
from sklearn import linear_model
regr = linear_model.LinearRegression()
regr.fit(X, y)
plt.plot(X, y, 'o')
plt.plot(X_test, regr.predict(X_test));
Explanation: Hyperparameters, Over-fitting, and Under-fitting
The issues associated with validation and
cross-validation are some of the most important
aspects of the practice of machine learning. Selecting the optimal model
for your data is vital, and is a piece of the problem that is not often
appreciated by machine learning practitioners.
Of core importance is the following question:
If our estimator is underperforming, how should we move forward?
Use simpler or more complicated model?
Add more features to each observed data point?
Add more training samples?
The answer is often counter-intuitive. In particular, Sometimes using a
more complicated model will give worse results. Also, Sometimes adding
training data will not improve your results. The ability to determine
what steps will improve your model is what separates the successful machine
learning practitioners from the unsuccessful.
Bias-variance trade-off: illustration on a simple regression problem
For this section, we'll work with a simple 1D regression problem. This will help us to
easily visualize the data and the model, and the results generalize easily to higher-dimensional
datasets. We'll explore a simple linear regression problem.
This can be accomplished within scikit-learn with the sklearn.linear_model module.
We consider the situation where we have only 2 data points:
End of explanation
np.random.seed(0)
for _ in range(6):
noise = np.random.normal(loc=0, scale=.1, size=X.shape)
noisy_X = X + noise
plt.plot(noisy_X, y, 'o')
regr.fit(noisy_X, y)
plt.plot(X_test, regr.predict(X_test))
Explanation: In real-life situations, we have noise (e.g. measurement noise) in our data:
End of explanation
regr = linear_model.Ridge(alpha=.1)
np.random.seed(0)
for _ in range(6):
noise = np.random.normal(loc=0, scale=.1, size=X.shape)
noisy_X = X + noise
plt.plot(noisy_X, y, 'o')
regr.fit(noisy_X, y)
plt.plot(X_test, regr.predict(X_test))
Explanation: As we can see, our linear model captures and amplifies the noise in the data. It displays a lot of variance.
We can use another linear estimator that uses regularization, the Ridge estimator. This estimator regularizes the coefficients by shrinking them to zero, under the assumption that very high correlations are often spurious. The alpha parameter controls the amount of shrinkage used.
End of explanation
from plot_bias_variance import plot_bias_variance
plot_bias_variance(8, random_seed=42)
Explanation: As we can see, the estimator displays much less variance. However it systematically under-estimates the coefficient. It displays a biased behavior.
This is a typical example of the bias/variance tradeoff: non-regularized estimators are not biased, but they can display a lot of variance. Highly-regularized models have little variance, but high bias. This bias is not necessarily a bad thing: in practice what matters is choosing the tradeoff between bias and variance that leads to the best prediction performance. For a specific dataset there is a sweet spot corresponding to the highest complexity that the data can support, depending on the amount of noise and of observations available.
Learning Curves and the Bias/Variance Tradeoff
One way to address this issue is to use what are often called Learning Curves.
Given a particular dataset and a model we'd like to fit (e.g. a polynomial), we'd
like to tune our value of the hyperparameter d to give us the best fit.
We'll imagine we have a simple regression problem: given the size of a house, we'd
like to predict how much it's worth. We'll fit it with our polynomial regression
model.
Run the following code to see an example plot:
End of explanation
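# Aside (not from the original course material): scikit-learn's validation_curve can scan
# the Ridge `alpha` parameter directly, which is one way to locate the bias/variance sweet
# spot discussed above. The toy data here is made up purely for illustration.
from sklearn.linear_model import Ridge
from sklearn.model_selection import validation_curve
rng = np.random.RandomState(0)
X_demo = rng.rand(60, 1)
y_demo = 2 * X_demo.ravel() + rng.normal(scale=0.1, size=60)
alphas = np.logspace(-3, 2, 10)
train_scores, valid_scores = validation_curve(
    Ridge(), X_demo, y_demo, param_name="alpha", param_range=alphas, cv=5)
# Very small alpha -> more variance; very large alpha -> more bias.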
def test_func(x, err=0.5):
return np.random.normal(10 - 1. / (x + 0.1), err)
def compute_error(x, y, p):
yfit = np.polyval(p, x)
return np.sqrt(np.mean((y - yfit) ** 2))
from sklearn.model_selection import train_test_split
N = 200
test_size = 0.4
error = 1.0
# randomly sample the data
np.random.seed(1)
x = np.random.random(N)
y = test_func(x, error)
# split into training, validation, and testing sets.
xtrain, xtest, ytrain, ytest = train_test_split(x, y, test_size=test_size)
# show the training and validation sets
plt.scatter(xtrain, ytrain, color='red')
plt.scatter(xtest, ytest, color='blue');
Explanation: In the above figure, we see fits for three different values of d.
For d = 1, the data is under-fit. This means that the model is too
simplistic: no straight line will ever be a good fit to this data. In
this case, we say that the model suffers from high bias. The model
itself is biased, and this will be reflected in the fact that the data
is poorly fit. At the other extreme, for d = 6 the data is over-fit.
This means that the model has too many free parameters (6 in this case)
which can be adjusted to perfectly fit the training data. If we add a
new point to this plot, though, chances are it will be very far from
the curve representing the degree-6 fit. In this case, we say that the
model suffers from high variance. The reason for the term "high variance" is that if
any of the input points are varied slightly, it could result in a very different model.
In the middle, for d = 2, we have found a good mid-point. It fits
the data fairly well, and does not suffer from the bias and variance
problems seen in the figures on either side. What we would like is a
way to quantitatively identify bias and variance, and optimize the
metaparameters (in this case, the polynomial degree d) in order to
determine the best algorithm. This can be done through a process
called validation.
Validation Curves
We'll create a dataset like in the example above, and use this to test our
validation scheme. First we'll define some utility routines:
End of explanation
# suppress warnings from Polyfit
import warnings
warnings.filterwarnings('ignore', message='Polyfit*')
degrees = np.arange(21)
train_err = np.zeros(len(degrees))
validation_err = np.zeros(len(degrees))
for i, d in enumerate(degrees):
p = np.polyfit(xtrain, ytrain, d)
train_err[i] = compute_error(xtrain, ytrain, p)
validation_err[i] = compute_error(xtest, ytest, p)
fig, ax = plt.subplots()
ax.plot(degrees, validation_err, lw=2, label = 'cross-validation error')
ax.plot(degrees, train_err, lw=2, label = 'training error')
ax.plot([0, 20], [error, error], '--k', label='intrinsic error')
ax.legend(loc=0)
ax.set_xlabel('degree of fit')
ax.set_ylabel('rms error');
Explanation: In order to quantify the effects of bias and variance and construct
the best possible estimator, we will split our training data into
a training set and a validation set. As a general rule, the
training set should be about 60% of the samples.
The overarching idea is as follows. The model parameters (in our case,
the coefficients of the polynomials) are learned using the training
set as above. The error is evaluated on the validation set,
and the meta-parameters (in our case, the degree of the polynomial)
are adjusted so that this validation error is minimized.
Finally, the labels are predicted for the test set. These labels
are used to evaluate how well the algorithm can be expected to
perform on unlabeled data.
The validation error of our polynomial classifier can be visualized
by plotting the error as a function of the polynomial degree d. We can do
this as follows:
End of explanation
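# Added aside: with the arrays computed above, the polynomial degree that minimises the
# validation error can be read off directly.
best_d = degrees[np.argmin(validation_err)]
print("degree chosen by the validation set:", best_d)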
# suppress warnings from Polyfit
import warnings
warnings.filterwarnings('ignore', message='Polyfit*')
def plot_learning_curve(d, N=200):
n_sizes = 50
n_runs = 10
sizes = np.linspace(2, N, n_sizes).astype(int)
train_err = np.zeros((n_runs, n_sizes))
validation_err = np.zeros((n_runs, n_sizes))
for i in range(n_runs):
for j, size in enumerate(sizes):
xtrain, xtest, ytrain, ytest = train_test_split(
x, y, test_size=test_size, random_state=j)
# Train on only the first `size` points
p = np.polyfit(xtrain[:size], ytrain[:size], d)
# Validation error is on the *entire* validation set
validation_err[i, j] = compute_error(xtest, ytest, p)
# Training error is on only the points used for training
train_err[i, j] = compute_error(xtrain[:size], ytrain[:size], p)
fig, ax = plt.subplots()
ax.plot(sizes, validation_err.mean(axis=0), lw=2, label='mean validation error')
ax.plot(sizes, train_err.mean(axis=0), lw=2, label='mean training error')
ax.plot([0, N], [error, error], '--k', label='intrinsic error')
    ax.set_xlabel('training set size')
ax.set_ylabel('rms error')
ax.legend(loc=0)
ax.set_xlim(0, N-1)
ax.set_title('d = %i' % d)
Explanation: This figure compactly shows the reason that validation is
important. On the left side of the plot, we have very low-degree
polynomial, which under-fits the data. This leads to a very high
error for both the training set and the validation set. On
the far right side of the plot, we have a very high degree
polynomial, which over-fits the data. This can be seen in the fact
that the training error is very low, while the validation
error is very high. Plotted for comparison is the intrinsic error
(this is the scatter artificially added to the data: click on the
above image to see the source code). For this toy dataset,
error = 1.0 is the best we can hope to attain. Choosing d=6 in
this case gets us very close to the optimal error.
The astute reader will realize that something is amiss here: in
the above plot, d = 6 gives the best results. But in the previous
plot, we found that d = 6 vastly over-fits the data. What’s going
on here? The difference is the number of training points used.
In the previous example, there were only eight training points.
In this example, we have 100. As a general rule of thumb, the more
training points used, the more complicated model can be used.
But how can you determine for a given model whether more training
points will be helpful? A useful diagnostic for this are learning curves.
Learning Curves
A learning curve is a plot of the training and validation
error as a function of the number of training points. Note that
when we train on a small subset of the training data, the training
error is computed using this subset, not the full training set.
These plots can give a quantitative view into how beneficial it
will be to add training samples.
End of explanation
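# Aside (not in the original notebook): scikit-learn also ships a learning_curve helper
# that computes equivalent curves; a minimal sketch with a degree-1 polynomial model on
# the same toy data defined above.
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import learning_curve
deg1_model = make_pipeline(PolynomialFeatures(degree=1), LinearRegression())
sizes, tr_scores, va_scores = learning_curve(
    deg1_model, x.reshape(-1, 1), y, train_sizes=np.linspace(0.1, 1.0, 5), cv=5)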
plot_learning_curve(d=1)
Explanation: Now that we've defined this function, we can plot the learning curve.
But first, take a moment to think about what we're going to see:
Questions:
As the number of training samples are increased, what do you expect to see for the training error? For the validation error?
Would you expect the training error to be higher or lower than the validation error? Would you ever expect this to change?
We can run the following code to plot the learning curve for a d=1 model:
End of explanation
plot_learning_curve(d=20, N=100)
plt.ylim(0, 15)
Explanation: Notice that the validation error generally decreases with a growing training set,
while the training error generally increases with a growing training set. From
this we can infer that as the training size increases, they will converge to a single
value.
From the above discussion, we know that d = 1 is a high-bias estimator which
under-fits the data. This is indicated by the fact that both the
training and validation errors are very high. When confronted with this type of learning curve,
we can expect that adding more training data will not help matters: both
lines will converge to a relatively high error.
When the learning curves have converged to a high error, we have a high bias model.
A high-bias model can be improved by:
Using a more sophisticated model (i.e. in this case, increase d)
Gathering more features for each sample.
Decreasing regularization in a regularized model.
A high-bias model cannot be improved, however, by increasing the number of training
samples (do you see why?)
Now let's look at a high-variance (i.e. over-fit) model:
End of explanation |
2,529 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
with open('inputcode.txt',encoding="utf8") as f
Step1: def build_dataset(words)
Step2: #testing
#symbols_in_keys = [ [dictionary[ str(training_data[i])]] for i in range(offset, offset+n_input) ]
#symbols_in_keys = np.reshape(np.array(symbols_in_keys), [-1, n_input, 1])
#input_set = np.reshape(training_data, [-1, n_input, 1])
offset = 8
symbols_out_onehot = np.zeros([vocab_size], dtype=float)
#symbols_out_onehot[dictionary[str(training_data[offset+n_input])]] = 1.0
symbols_out_onehot[int(training_data[offset+n_input]) - 1] = 1.0
print(symbols_out_onehot)
#symbols_out_onehot[training_data[offset+n_input]] = 1.0
#symbols_out_onehot = np.reshape(symbols_out_onehot,[1,-1])
symbols_in_keys = [ [dictionary[ str(training_data[i])]] for i in range(offset, offset+n_input) ]
symbols_in_keys = np.reshape(np.array(symbols_in_keys), [-1, n_input, 1])
input_data = [training_data[i] for i in range(offset, offset+n_input)]
input_set = np.reshape(input_data, [-1, n_input, 1])
print(input_set) | Python Code:
training_data = read_data(training_file)
print("Loaded training data...")
print(training_data)
training_data = list(map(int, training_data))
print(training_data)
print(training_data[:10])
print(len(training_data))
Explanation: with open('inputcode.txt',encoding="utf8") as f:
content = f.read()
data = content.split(',')
print(data)
content = list(content)
content = [content[i].split(',') for i in range(len(content))]
print(split(content))
End of explanation
# Parameters
learning_rate = 0.001
training_iters = 50000
display_step = 1000
n_input = 3
vocab = [1,2,3]
vocab_size = 3
# number of units in RNN cell
n_hidden = 512
py_flag = False
# tf Graph input
x = tf.placeholder("float", [1, n_input, 1])
y = tf.placeholder("float", [1, vocab_size])
# RNN output node weights and biases
weights = {
'out': tf.Variable(tf.random_normal([n_hidden, vocab_size]))
}
biases = {
'out': tf.Variable(tf.random_normal([vocab_size]))
}
#tf.reset_default_graph()
#print(x)
print(tf.split(x,n_input,1))
def RNN(x, weights, biases):
#print("1", x)
# reshape to [1, n_input]
x = tf.reshape(x, [-1, n_input])
#print("2", x, x.shape)
# Generate a n_input-element sequence of inputs
# (eg. [had] [a] [general] -> [20] [6] [33])
x = tf.split(x,n_input,1)
#print("3", x)
# 1-layer LSTM with n_hidden units.
rnn_cell = rnn.BasicLSTMCell(n_hidden)
# generate prediction
outputs, states = rnn.static_rnn(rnn_cell, x, dtype=tf.float32)
print("4", outputs)
# there are n_input outputs but
# we only want the last output
return tf.matmul(outputs[-1], weights['out']) + biases['out']
pred = RNN(x, weights, biases)
# Loss and optimizer
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=pred, labels=y))
optimizer = tf.train.RMSPropOptimizer(learning_rate=learning_rate).minimize(cost)
# Model evaluation
correct_pred = tf.equal(tf.argmax(pred,1), tf.argmax(y,1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))
# Initializing the variables
init = tf.global_variables_initializer()
Explanation: def build_dataset(words):
count = collections.Counter(words).most_common()
dictionary = dict()
for word, _ in count:
dictionary[word] = len(dictionary)
reverse_dictionary = dict(zip(dictionary.values(), dictionary.keys()))
return dictionary, reverse_dictionary
dictionary, reverse_dictionary = build_dataset(training_data)
vocab_size = len(dictionary)
print(vocab_size)
print(dictionary)
print(training_data)
vocab = list(set(training_data))
print(vocab)
vocab1 = ['1','2','3']
print(vocab1)
End of explanation
# Launch the graph
with tf.Session() as session:
session.run(init)
step = 0
offset = random.randint(0,n_input+1)
end_offset = n_input + 1
acc_total = 0
loss_total = 0
writer.add_graph(session.graph)
while step < training_iters: #training_iters=50000
# Generate a minibatch. Add some randomness on selection process.
if offset > (len(training_data)-end_offset):
offset = random.randint(0, n_input+1)
#symbols_in_keys = [ [dictionary[ str(training_data[i])]] for i in range(offset, offset+n_input) ]
#symbols_in_keys = np.reshape(np.array(symbols_in_keys), [-1, n_input, 1])
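        # Build the input window: the n_input consecutive symbols starting at `offset`
        # are fed to the LSTM one value per timestep (shape [1, n_input, 1]).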
input_data = [training_data[i] for i in range(offset, offset+n_input)]
input_set = np.reshape(input_data, [-1, n_input, 1])
#input_set = np.reshape(training_data, [-1, n_input, 1])
#print(input_set, input_set.shape)
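        # One-hot encode the symbol that immediately follows the window; labels run
        # from 1 to vocab_size, hence the `- 1` when indexing.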
symbols_out_onehot = np.zeros([vocab_size], dtype=float)
symbols_out_onehot[int(training_data[offset+n_input]) - 1] = 1.0
symbols_out_onehot = np.reshape(symbols_out_onehot,[1,-1])
#print(symbols_out_onehot, symbols_out_onehot.shape)
_, acc, loss, onehot_pred = session.run([optimizer, accuracy, cost, pred], \
feed_dict={x: input_set, y: symbols_out_onehot})
loss_total += loss
acc_total += acc
if (step+1) % display_step == 0:
print("Iter= " + str(step+1) + ", Average Loss= " + \
"{:.6f}".format(loss_total/display_step) + ", Average Accuracy= " + \
"{:.2f}%".format(100*acc_total/display_step))
acc_total = 0
loss_total = 0
symbols_in = [training_data[i] for i in range(offset, offset + n_input)]
symbols_out = training_data[offset + n_input]
#symbols_out_pred = reverse_dictionary[int(tf.argmax(onehot_pred, 1).eval())]
symbols_out_pred = int(tf.argmax(onehot_pred, 1).eval())
print("%s - [%s] vs [%s]" % (symbols_in,symbols_out,symbols_out_pred))
step += 1
offset += (n_input+1)
print("Optimization Finished!")
print("Elapsed time: ", elapsed(time.time() - start_time))
print("Run on command line.")
print("\ttensorboard --logdir=%s" % (logs_path))
print("Point your web browser to: http://localhost:6006/")
'''
while True:
prompt = "%s words: " % n_input
sentence = input(prompt)
sentence = sentence.strip()
words = sentence.split(' ')
if len(words) != n_input:
continue
try:
symbols_in_keys = [dictionary[str(words[i])] for i in range(len(words))]
for i in range(32):
keys = np.reshape(np.array(symbols_in_keys), [-1, n_input, 1])
onehot_pred = session.run(pred, feed_dict={x: keys})
onehot_pred_index = int(tf.argmax(onehot_pred, 1).eval())
sentence = "%s %s" % (sentence,reverse_dictionary[onehot_pred_index])
symbols_in_keys = symbols_in_keys[1:]
symbols_in_keys.append(onehot_pred_index)
print(sentence)
except:
print("Word not in dictionary")'''
Explanation: #testing
#symbols_in_keys = [ [dictionary[ str(training_data[i])]] for i in range(offset, offset+n_input) ]
#symbols_in_keys = np.reshape(np.array(symbols_in_keys), [-1, n_input, 1])
#input_set = np.reshape(training_data, [-1, n_input, 1])
offset = 8
symbols_out_onehot = np.zeros([vocab_size], dtype=float)
#symbols_out_onehot[dictionary[str(training_data[offset+n_input])]] = 1.0
symbols_out_onehot[int(training_data[offset+n_input]) - 1] = 1.0
print(symbols_out_onehot)
#symbols_out_onehot[training_data[offset+n_input]] = 1.0
#symbols_out_onehot = np.reshape(symbols_out_onehot,[1,-1])
symbols_in_keys = [ [dictionary[ str(training_data[i])]] for i in range(offset, offset+n_input) ]
symbols_in_keys = np.reshape(np.array(symbols_in_keys), [-1, n_input, 1])
input_data = [training_data[i] for i in range(offset, offset+n_input)]
input_set = np.reshape(input_data, [-1, n_input, 1])
print(input_set)
End of explanation |
2,530 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Mixed Invasion Percolation
Until now we have demonstrated percolation that assumed that the important entry pressures are determined by the throat connections, i.e. Bond Percolation. When modelling imbibiton the reverse can be true and in-fact the often larger pores can become the most resistive parts of the network. Occasionally processes may exist that require analysis of the phase configuration within both pores and throats and an associated entry pressure can be attributed to both elements of the network. For example, co-operative pore filling is primarily a throat driven process where a phase may be present in more than one of the connected throats for a given pore. The phases may coalesce at pressures lesser than their individual entry pressures by bulging into the pore and touching each other or other solid objects. This can be accounted for with a conditional evaluation of the the combined throat occupancy at each pore thus providing a throat and pore dependend invasion mechanism. The phenomenon is discussed in greater detail in Tranter 2017 and the paper recreation notebooks.
Step1: The Mixed Invasion Percolation algorithm therefore requires a physics associated with its invading phase that contains both a pore and throat entry pressure. Initially we can set the pore entry pressure to be zero, in which case the behaviour should be identical to the normal invasion percolation algorithm.
Step2: The intrusion data for Mixed Invasion Percolation is shown as an invasion pressure envelope, as ordinary percolation would but we can still compare the two plots.
Step3: Like invasion percolation, it is possible to apply trapping
Step4: Now we show an example where a characteristic entry pressure is applied to both pores and throats
Step5: We can use the basic plotting tools in OpenPNM to show that pores and throats are invaded individually by incrementing the invasion sequence
Step6: We can simulate drainage and imbibition using the pore entry pressure on two phases. Here we set up a new network and the appropriate phase and physics objects. We use the contact angle in the air phase as 180 - the contact angle in the water phase but these values can be changed (and often are) to represent contact angle hysteresis.
Step7: Normally, an algorithm proceeds from the initial condition that the network is completley occupied with defender phase but it is also possible to start with a partially saturated network where a proportion is already invaded with residual saturation.
Step8: First we define an injection algorithm and use the max_pressure argument for the run method to stop the invasion algorithm once all the elements have been invaded with entry pressure lower than this threshold.
Step9: Now we can run the next step using the results from the injection algorithm as residual saturation. Water withdrawal is equivalent to air invasion.
Step10: Firstly we can verify that the initial condition for air invasion is the inverse of the final condition for water invasion
Step11: Now we can plot both saturation curves, remembering to multiply the capillary pressure value by -1 for withdrawal at it represents pressure in the invading phases but capillary pressure is defined classically as Pc_nwp - Pc_wp. And also remembering to invert the phase occupancy for withdrawal to make it consistent with the water volume fraction | Python Code:
import warnings
import numpy as np
import openpnm as op
%config InlineBackend.figure_formats = ['svg']
from openpnm.algorithms import MixedInvasionPercolation as mp
import matplotlib as mpl
import matplotlib.pyplot as plt
from ipywidgets import interact, IntSlider
%load_ext autoreload
%autoreload 2
%matplotlib inline
mpl.rcParams["image.interpolation"] = "None"
warnings.simplefilter("ignore")
np.random.seed(10)
ws = op.Workspace()
ws.settings['loglevel'] = 50
Explanation: Mixed Invasion Percolation
Until now we have demonstrated percolation that assumed that the important entry pressures are determined by the throat connections, i.e. Bond Percolation. When modelling imbibition the reverse can be true and in fact the often larger pores can become the most resistive parts of the network. Occasionally processes may exist that require analysis of the phase configuration within both pores and throats, and an associated entry pressure can be attributed to both elements of the network. For example, co-operative pore filling is primarily a throat driven process where a phase may be present in more than one of the connected throats for a given pore. The phases may coalesce at pressures lower than their individual entry pressures by bulging into the pore and touching each other or other solid objects. This can be accounted for with a conditional evaluation of the combined throat occupancy at each pore, thus providing a throat- and pore-dependent invasion mechanism. The phenomenon is discussed in greater detail in Tranter 2017 and the paper recreation notebooks.
End of explanation
N = 100
net = op.network.Cubic(shape=[N, N, 1], spacing=2.5e-5)
geom = op.geometry.SpheresAndCylinders(network=net, pores=net.Ps, throats=net.Ts)
water = op.phases.Water(network=net)
phys = op.physics.Standard(network=net, phase=water, geometry=geom)
phys['pore.entry_pressure'] = 0.0
fig, ax = plt.subplots(figsize=[5, 5])
ax.hist(phys['throat.entry_pressure'])
plt.show()
def run_mp(trapping=False, residual=None, snap_off=False, plot=True, flowrate=None, phase=None):
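    # Helper (comment added for clarity): run MixedInvasionPercolation from the 'left'
    # face, optionally with residual saturation, snap-off and trapping, then plot the
    # final invasion pattern and return the algorithm object.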
alg = mp(network=net)
if snap_off:
alg.settings['snap_off'] = 'throat.snap_off'
alg.setup(phase=phase)
alg.set_inlets(pores=net.pores('left'))
if residual is not None:
alg.set_residual(pores=residual)
alg.run()
if trapping:
alg.set_outlets(net.pores('right'))
alg.apply_trapping()
inv_points = np.arange(0, 100, 1)
# returns data as well as plotting
alg_data = alg.get_intrusion_data(inv_points=inv_points)
water.update(alg.results(Pc=inv_points.max()))
if plot:
fig, ax = plt.subplots(figsize=[5, 5])
L = np.sqrt(net.Np).astype(int)
ax.imshow(alg['pore.invasion_sequence'].reshape([L, L]),
cmap=plt.get_cmap('Blues'))
plt.show()
if flowrate is not None:
alg.apply_flow(flowrate=flowrate)
return alg
alg1 = run_mp(phase=water)
Explanation: The Mixed Invasion Percolation algorithm therefore requires a physics associated with its invading phase that contains both a pore and throat entry pressure. Initially we can set the pore entry pressure to be zero, in which case the behaviour should be identical to the normal invasion percolation algorithm.
End of explanation
alg_ip = op.algorithms.InvasionPercolation(network=net, phase=water)
alg_ip.set_inlets(pores=net.pores('left'))
alg_ip.run()
ip_data = alg_ip.get_intrusion_data()
mip_data = alg1.get_intrusion_data()
fig, ax = plt.subplots(figsize=[4*1.25, 3*1.25])
ax.plot(ip_data.Pcap, ip_data.S_tot);
ax.plot(mip_data.Pcap, mip_data.S_tot);
Explanation: The intrusion data for Mixed Invasion Percolation is shown as an invasion pressure envelope, as ordinary percolation would but we can still compare the two plots.
End of explanation
alg2 = run_mp(phase=water, trapping=True)
fig, ax = plt.subplots(figsize=[4*1.25, 3*1.25])
alg1.plot_intrusion_curve(ax=ax)
alg2.plot_intrusion_curve(ax=ax)
Explanation: Like invasion percolation, it is possible to apply trapping
End of explanation
N = 10
net = op.network.Cubic(shape=[N, N, 1], spacing=2.5e-5)
geom = op.geometry.SpheresAndCylinders(network=net, pores=net.Ps, throats=net.Ts)
water = op.phases.Water(network=net)
phys = op.physics.Standard(network=net, phase=water, geometry=geom)
phys.add_model(propname='pore.entry_pressure',
model=op.models.physics.capillary_pressure.washburn,
diameter='pore.diameter')
fig, ax = plt.subplots(figsize=[5, 5])
ax.hist(phys['throat.entry_pressure'])
ax.hist(phys['pore.entry_pressure'])
plt.show()
alg1 = run_mp(phase=water, plot=False)
Explanation: Now we show an example where a characteristic entry pressure is applied to both pores and throats
End of explanation
from openpnm.topotools import plot_coordinates, plot_connections
alg1.props()
def plot_invasion_sequence(seq):
from openpnm.topotools import get_shape
pmask = alg1['pore.invasion_sequence'] < seq
tmask = alg1['throat.invasion_sequence'] < seq
fig, ax = plt.subplots(figsize=[5, 5])
# Uncomment the next 3 lines for a more rigorous plot
# plot_connections(network=net, throats=net.Ts[~tmask], c='k', linestyle='dashed', ax=ax)
# plot_connections(network=net, throats=net.Ts[tmask], c='b', ax=ax)
# plot_coordinates(network=net, pores=net.Ps[pmask], c='b', ax=ax)
# Comment the next line if using the above 3 lines for plotting
ax.imshow(pmask.reshape(get_shape(net)).squeeze().T)
ax.set_title(f'# invaded pores: {pmask.sum()}, throats: {tmask.sum()}')
plt.show()
slider = IntSlider(min=1, max=alg1['throat.invasion_sequence'].max(), step=10, value=2300)
interact(plot_invasion_sequence, seq=slider);
Explanation: We can use the basic plotting tools in OpenPNM to show that pores and throats are invaded individually by incrementing the invasion sequence
End of explanation
N = 100
net = op.network.Cubic(shape=[N, N, 1], spacing=2.5e-5)
geom = op.geometry.SpheresAndCylinders(network=net, pores=net.Ps, throats=net.Ts)
water = op.phases.Water(network=net)
air = op.phases.Air(network=net)
water['pore.contact_angle'] = 120
air['pore.contact_angle'] = 60
phys_w = op.physics.Standard(network=net, phase=water, geometry=geom)
phys_w.add_model(propname='pore.entry_pressure',
model=op.models.physics.capillary_pressure.washburn,
diameter='pore.diameter')
phys_a = op.physics.Standard(network=net, phase=air, geometry=geom)
phys_a.add_model(propname='pore.entry_pressure',
model=op.models.physics.capillary_pressure.washburn,
diameter='pore.diameter')
phys_w['throat.entry_pressure'] = -1e9
phys_a['throat.entry_pressure'] = -1e9
fig, ax = plt.subplots(figsize=[5, 5])
ax.hist(phys_w['pore.entry_pressure'])
ax.hist(phys_a['pore.entry_pressure'])
geom['pore.volume'][net['pore.surface']] = 0.0
geom['throat.volume'] = 0.0
plt.show()
Explanation: We can simulate drainage and imbibition using the pore entry pressure on two phases. Here we set up a new network and the appropriate phase and physics objects. We use the contact angle in the air phase as 180 - the contact angle in the water phase but these values can be changed (and often are) to represent contact angle hysteresis.
End of explanation
residual = np.zeros([N, N], dtype='bool')
residual[:50, :] = True
alg1 = run_mp(phase=water, plot=True, residual=residual.flatten())
res_data = alg1.get_intrusion_data()
fig, ax = plt.subplots(figsize=[5, 5])
ax.plot(res_data.Pcap, res_data.S_tot)
ax.set_xlim(0, 30000)
ax.set_ylim(0, 1.0)
Explanation: Normally, an algorithm proceeds from the initial condition that the network is completely occupied with the defender phase, but it is also possible to start with a partially saturated network where a proportion is already invaded with residual saturation.
End of explanation
Pc_max = 14000
inj = mp(network=net)
inj.setup(phase=water)
inj.set_inlets(pores=net.pores('left'))
#inj.set_residual(pores=phase['pore.occupancy'])
inj.run(max_pressure=Pc_max)
inj.set_outlets(net.pores(['back', 'front', 'right']))
#inj.apply_trapping()
inv_points = np.arange(0, 100, 1)
# returns data as well as plotting
alg_data = inj.get_intrusion_data(inv_points=inv_points)
fig, ax = plt.subplots(figsize=[5, 5])
L = np.sqrt(net.Np).astype(int)
mask = inj['pore.invasion_sequence'] > -1
ax.imshow(mask.reshape([L, L]), cmap=plt.get_cmap('Blues'))
plt.show()
inj_data = inj.get_intrusion_data()
Explanation: First we define an injection algorithm and use the max_pressure argument for the run method to stop the invasion algorithm once all the elements have been invaded with entry pressure lower than this threshold.
End of explanation
air['pore.occupancy'] = inj['pore.invasion_sequence'] == -1
withdrawal = mp(network=net)
withdrawal.setup(phase=air)
withdrawal.set_inlets(pores=net.pores(['back', 'front', 'right']))
withdrawal.set_residual(pores=air['pore.occupancy'])
withdrawal.run()
withdrawal.set_outlets(net.pores(['right']))
# inj.apply_trapping()
# inv_points = np.arange(0, 100, 1)
# returns data as well as plotting
wth_data = withdrawal.get_intrusion_data()
Explanation: Now we can run the next step using the results from the injection algorithm as residual saturation. Water withdrawal is equivalent to air invasion.
End of explanation
fig, ax = plt.subplots(figsize=[5, 5])
L = np.sqrt(net.Np).astype(int)
mask = withdrawal['pore.invasion_sequence'] == 0
ax.imshow(mask.reshape([L, L]),
cmap=plt.get_cmap('Blues'));
Explanation: Firstly we can verify that the initial condition for air invasion is the inverse of the final condition for water invasion
End of explanation
fig, ax = plt.subplots(figsize=[5, 5])
ax.plot(inj_data.Pcap, inj_data.S_tot)
ax.plot(-wth_data.Pcap, 1-wth_data.S_tot)
ax.set_xlim(0, Pc_max)
Explanation: Now we can plot both saturation curves, remembering to multiply the capillary pressure value by -1 for withdrawal, as it represents pressure in the invading phase, whereas capillary pressure is defined classically as Pc_nwp - Pc_wp. And also remembering to invert the phase occupancy for withdrawal to make it consistent with the water volume fraction
End of explanation |
2,531 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Work through Geometric Factor for Sullivan 1971
How do the results depend on stackup?
Both the full formula and a bounded formula
How do the results depend on diameter?
Both the full formula and a bounded formula
$G=\frac{1}{2}\pi^2 \left[R_1^2+R_2^2+l^2 -\left\{\left(R_1^2+R_2^2+l^2\right)^2-4R_1^2R_2^2\right\}^{\frac{1}{2}} \right]$
$G \ge \frac{A_1A_2}{R_1^2+R_2^2+l^2}$
Step2: Just thickness
Stack up five collimator discs to compute G
Step3: As a function of measurement uncertainty | Python Code:
from pprint import pprint
import numpy as np
import pymc3 as pm
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
sns.set(font_scale=1.5)
sns.set_context("notebook", rc={"lines.linewidth": 3})
%matplotlib inline
def getBoundedNormal_dist(mean=None, FWHM=None, name=None, lower=0, upper=1e6):
    """Make a bounded normal distribution.

    NOTE: https://github.com/pymc-devs/pymc3/issues/1672 bounded dist fail until 3.1 on
    non array bounds!!
    """
assert mean is not None
assert FWHM is not None
assert name is not None
    BoundedNormal = pm.Bound(pm.Normal, lower=lower, upper=upper)
    # FWHMtoSD_Normal is assumed to be defined in an earlier, elided cell.
    return BoundedNormal('{0}'.format(name), mu=mean, sd=FWHMtoSD_Normal(mean * (FWHM / 100.)))
def Sullivan_Bound(R1, R2, l):
A1 = np.pi*R1**2
A2 = np.pi*R2**2
top = A1*A2
bottom = R1**2+R2**2+l**2
return top/bottom
def Sullivan(R1, R2, l):
f = 0.5*np.pi**2
t1 = R1**2+R2**2+l**2
t2 = 4*R1**2*R2**2
G = f*(t1 - (t1**2-t2)**0.5 )
return G
def frac_bounds(trace):
med = np.median(trace)
bounds = np.percentile(trace, (2.5, 97.5))
frac = (med-bounds[0])/med
return med, frac*100
Explanation: Work through Geometric Factor for Sullivan 1971
How do the results depend on stackup?
Both the full formula and a bounded formula
How do the results depend on diameter?
Both the full formula and a bounded formula
$G=\frac{1}{2}\pi^2 \left[R_1^2+R_2^2+l^2 -\left\{\left(R_1^2+R_2^2+l^2\right)^2-4R_1^2R_2^2\right\}^{\frac{1}{2}} \right]$
$G \ge \frac{A_1A_2}{R_1^2+R_2^2+l^2}$
End of explanation
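# Quick numerical cross-check (added for illustration): for a representative geometry the
# simple bound should sit just below the full Sullivan expression.
print(Sullivan(0.5, 0.5, 5.0), Sullivan_Bound(0.5, 0.5, 5.0))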
Sullivan(4, 5, 10)
R1 = 0.5
R2 = 0.5
l = np.linspace(2, 20, 100)
fig, ax = plt.subplots(ncols=2, figsize=(8,4))
ax[0].plot(l, Sullivan(R1, R2, l), lw=2)
ax[0].set_xlabel('Distance between [$cm$]')
ax[0].set_ylabel('GF [$cm^2sr$]');
ax[1].loglog(l, Sullivan(R1, R2, l), lw=2)
ax[1].set_xlabel('Distance between [$cm$]');
ax[0].grid(True, which='both')
ax[1].grid(True, which='both')
with pm.Model() as model1:
T1 = pm.Normal('T1', 1.0, 0.1e-2) # 1cm +/- 0.1mm
T2 = pm.Normal('T2', 1.0, 0.1e-2) # 1cm +/- 0.1mm
T3 = pm.Normal('T3', 1.0, 0.1e-2) # 1cm +/- 0.1mm
T4 = pm.Normal('T4', 1.0, 0.1e-2) # 1cm +/- 0.1mm
T5 = pm.Normal('T5', 1.0, 0.1e-2) # 1cm +/- 0.1mm
R1 = 0.5
R2 = 0.5
R3 = 0.5
G = pm.Deterministic('G', Sullivan(R1, R3, T1+T2+T3+T4+T5))
Gbound = pm.Deterministic('Gbound', Sullivan_Bound(R1, R3, T1+T2+T3+T4+T5))
trace = pm.sample(1000, chains=4, target_accept=0.9)
pm.summary(trace).round(3)
pm.traceplot(trace, combined=False);
gf = frac_bounds(trace['G'])
gbf = frac_bounds(trace['Gbound'])
print("G={:.5f} +/- {:.2f}%".format(gf[0], gf[1]))
print("Gbound={:.5f} +/- {:.2f}%".format(gbf[0], gbf[1]))
Explanation: Just thickness
Stack up five collimator discs to compute G
End of explanation
sigmas = np.logspace(-3, -1, 10)  # renamed so the pymc3 alias `pm` is not shadowed
ans = {}
for ii, p in enumerate(sigmas):
    print(p, ii+1, len(sigmas))
    with pm.Model() as model2:
        T1 = pm.Normal('T1', 1.0, tau=p**-2)  # 1 cm with sd = p cm
        T2 = pm.Normal('T2', 1.0, tau=p**-2)  # 1 cm with sd = p cm
        T3 = pm.Normal('T3', 1.0, tau=p**-2)  # 1 cm with sd = p cm
        T4 = pm.Normal('T4', 1.0, tau=p**-2)  # 1 cm with sd = p cm
        T5 = pm.Normal('T5', 1.0, tau=p**-2)  # 1 cm with sd = p cm
        R1 = 0.5
        R2 = 0.5
        R3 = 0.5
        G = pm.Deterministic('G', Sullivan(R1, R3, T1+T2+T3+T4+T5))
        Gbound = pm.Deterministic('Gbound', Sullivan_Bound(R1, R3, T1+T2+T3+T4+T5))
        start = pm.find_MAP()
        # note: pm.sample takes `cores`; the original `jobs=2` is not a pymc3 keyword
        trace = pm.sample(10000, start=start, cores=2)
    ans[p] = gf = frac_bounds(trace['G'])
pprint(ans)
vals = np.asarray(list(ans.keys()))
gs = np.asarray([ans[v][0] for v in ans ])
gse = np.asarray([ans[v][1] for v in ans ])
valsf = (vals/1.0)*100
plt.errorbar(valsf, gs, yerr=gse, elinewidth=1, capsize=2, barsabove=True)
plt.ylim([0,15])
# plt.xscale('log')
plt.plot(valsf, gse)
Explanation: As a function of measurement uncertainty
End of explanation |
2,532 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
WMI Win32_Process Class and Create Method for Remote Execution
Metadata
| Metadata | Value |
|
Step1: Download & Process Security Dataset
Step2: Analytic I
Look for wmiprvse.exe spawning processes that are part of non-system account sessions.
| Data source | Event Provider | Relationship | Event |
|
Step3: Analytic II
Look for wmiprvse.exe spawning processes that are part of non-system account sessions.
| Data source | Event Provider | Relationship | Event |
|
Step4: Analytic III
Look for non-system accounts leveraging WMI over the netwotk to execute code
| Data source | Event Provider | Relationship | Event |
| | Python Code:
from openhunt.mordorutils import *
spark = get_spark()
Explanation: WMI Win32_Process Class and Create Method for Remote Execution
Metadata
| Metadata | Value |
|:------------------|:---|
| collaborators | ['@Cyb3rWard0g', '@Cyb3rPandaH'] |
| creation date | 2019/08/10 |
| modification date | 2020/09/20 |
| playbook related | [] |
Hypothesis
Adversaries might be leveraging WMI Win32_Process class and method Create to execute code remotely across my environment
Technical Context
WMI is the Microsoft implementation of the Web-Based Enterprise Management (WBEM) and Common Information Model (CIM).
Both standards aim to provide an industry-agnostic means of collecting and transmitting information related to any managed component in an enterprise.
An example of a managed component in WMI would be a running process, registry key, installed service, file information, etc.
At a high level, Microsoft's implementation of these standards can be summarized as follows. Managed Components: managed components are represented as WMI objects — class instances representing highly structured operating system data. Microsoft provides a wealth of WMI objects that communicate information related to the operating system. E.g. Win32_Process, Win32_Service, AntiVirusProduct, Win32_StartupCommand, etc.
Offensive Tradecraft
One well known lateral movement technique is performed via the WMI object — class Win32_Process and its method Create.
This is because the Create method allows a user to create a process either locally or remotely.
One thing to notice is that when the Create method is used on a remote system, the method is run under a host process named "Wmiprvse.exe".
The process WmiprvSE.exe is what spawns the process defined in the CommandLine parameter of the Create method. Therefore, the new process created remotely will have Wmiprvse.exe as a parent. WmiprvSE.exe is a DCOM server and it is spawned underneath the DCOM service host svchost.exe with the following parameters C:\WINDOWS\system32\svchost.exe -k DcomLaunch -p.
From a logon session perspective, on the target, WmiprvSE.exe is spawned in a different logon session by the DCOM service host. However, whatever is executed by WmiprvSE.exe occurs on the new network type (3) logon session created by the user that authenticated from the network.
Additional Reading
* https://github.com/OTRF/ThreatHunter-Playbook/tree/master/docs/library/windows/logon_session.md
Security Datasets
| Metadata | Value |
|:----------|:----------|
| docs | https://securitydatasets.com/notebooks/atomic/windows/lateral_movement/SDWIN-200921001437.html |
| link | https://raw.githubusercontent.com/OTRF/Security-Datasets/master/datasets/atomic/windows/lateral_movement/host/empire_wmi_dcerpc_wmi_IWbemServices_ExecMethod.zip |
Analytics
Initialize Analytics Engine
End of explanation
sd_file = "https://raw.githubusercontent.com/OTRF/Security-Datasets/master/datasets/atomic/windows/lateral_movement/host/empire_wmi_dcerpc_wmi_IWbemServices_ExecMethod.zip"
registerMordorSQLTable(spark, sd_file, "sdTable")
Explanation: Download & Process Security Dataset
End of explanation
df = spark.sql(
'''
SELECT `@timestamp`, Hostname, SubjectUserName, TargetUserName, NewProcessName, CommandLine
FROM sdTable
WHERE LOWER(Channel) = "security"
AND EventID = 4688
AND lower(ParentProcessName) LIKE "%wmiprvse.exe"
AND NOT TargetLogonId = "0x3e7"
'''
)
df.show(10,False)
Explanation: Analytic I
Look for wmiprvse.exe spawning processes that are part of non-system account sessions.
| Data source | Event Provider | Relationship | Event |
|:------------|:---------------|--------------|-------|
| Process | Microsoft-Windows-Security-Auditing | Process created Process | 4688 |
| Process | Microsoft-Windows-Security-Auditing | User created Process | 4688 |
End of explanation
df = spark.sql(
'''
SELECT `@timestamp`, Hostname, User, Image, CommandLine
FROM sdTable
WHERE Channel = "Microsoft-Windows-Sysmon/Operational"
AND EventID = 1
AND lower(ParentImage) LIKE "%wmiprvse.exe"
AND NOT LogonId = "0x3e7"
'''
)
df.show(10,False)
Explanation: Analytic II
Look for wmiprvse.exe spawning processes that are part of non-system account sessions.
| Data source | Event Provider | Relationship | Event |
|:------------|:---------------|--------------|-------|
| Process | Microsoft-Windows-Sysmon/Operational | Process created Process | 1 |
| Process | Microsoft-Windows-Sysmon/Operational | User created Process | 1 |
End of explanation
df = spark.sql(
'''
SELECT o.`@timestamp`, o.Hostname, o.SubjectUserName, o.TargetUserName, o.NewProcessName, o.CommandLine, a.IpAddress
FROM sdTable o
INNER JOIN (
SELECT Hostname,TargetUserName,TargetLogonId,IpAddress
FROM sdTable
WHERE LOWER(Channel) = "security"
AND EventID = 4624
AND LogonType = 3
AND NOT TargetUserName LIKE "%$"
) a
ON o.TargetLogonId = a.TargetLogonId
WHERE LOWER(o.Channel) = "security"
AND o.EventID = 4688
AND lower(o.ParentProcessName) LIKE "%wmiprvse.exe"
AND NOT o.TargetLogonId = "0x3e7"
'''
)
df.show(10,False)
Explanation: Analytic III
Look for non-system accounts leveraging WMI over the network to execute code
| Data source | Event Provider | Relationship | Event |
|:------------|:---------------|--------------|-------|
| Process | Microsoft-Windows-Security-Auditing | Process created Process | 4688 |
| Process | Microsoft-Windows-Security-Auditing | User created Process | 4688 |
| Authentication log | Microsoft-Windows-Security-Auditing | User authenticated Host | 4624 |
End of explanation |
2,533 | Given the following text description, write Python code to implement the functionality described.
Description:
Return median of elements in the list l.
This is how the function will work:
median([3, 1, 2, 4, 5])
3
This is how the function will work:
median([-10, 4, 6, 1000, 10, 20])
15.0
| Python Code:
def median(l: list):
l = sorted(l)
if len(l) % 2 == 1:
return l[len(l) // 2]
else:
return (l[len(l) // 2 - 1] + l[len(l) // 2]) / 2.0 |
2,534 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2020 The Cirq Developers
Step1: Protocols
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https
Step2: Introduction
Cirq's protocols are very similar concept to Python's built-in protocols that were introduced in PEP 544.
Python's built-in protocols are extremely convenient. For example, behind all the for loops and list comprehensions you can find the Iterator protocol.
As long as an object has the __iter__() magic method that returns an iterator object, it has iterator support.
An iterator object has to define __iter__() and __next__() magic methods, that defines the iterator protocol.
The iter(val) builtin function returns an iterator for val if it defines the above methods, otherwise throws a TypeError. Cirq protocols work similarly.
A canonical Cirq protocol example is the unitary protocol that allows to check the unitary matrix of values that support the protocol by calling cirq.unitary(val).
Step3: When an object does not support a given protocol, an error is thrown.
Step4: What is a protocol?
A protocol is a combination of the following two items
Step5: Mixture
The *mixture protocol should be implemented by operators that are unitary-mixtures. These probabilistic operators are represented by a list of tuples ($p_i$, $U_i$), where each unitary effect $U_i$ occurs with a certain probability $p_i$, and $\sum p_i = 1$. Probabilities are a Python float between 0.0 and 1.0, and the unitary matrices are numpy arrays.
Constructing simple probabilistic gates in Cirq is easiest with the with_probability method.
Step6: In case an operator does not implement SupportsMixture, but does implement SupportsUnitary, *mixture functions fall back to the *unitary methods. It is easy to see that a unitary operator $U$ is just a "mixture" of a single unitary with probability $p=1$.
Step7: Channel
The kraus representation is the operator sum representation of a quantum operator (a channel)
Step8: In case the operator does not implement SupportsKraus, but it does implement SupportsMixture, the *kraus protocol will generate the Kraus operators based on the *mixture representation.
$$
((p_0, U_0),(p_1, U_1),\ldots,(p_n, U_n)) \rightarrow (\sqrt{p_0}U_0, \sqrt{p_1}U_1, \ldots, \sqrt{p_n}U_n)
$$
Thus for example ((0.25, X), (0.75, I)) -> (0.5 X, sqrt(0.75) I)
Step9: In the simplest case of a unitary operator, cirq.kraus returns a one-element tuple with the same unitary as returned by cirq.unitary | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2020 The Cirq Developers
End of explanation
try:
import cirq
except ImportError:
print("installing cirq...")
!pip install --quiet cirq
print("installed cirq.")
import cirq
Explanation: Protocols
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://quantumai.google/cirq/protocols"><img src="https://quantumai.google/site-assets/images/buttons/quantumai_logo_1x.png" />View on QuantumAI</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/quantumlib/Cirq/blob/master/docs/protocols.ipynb"><img src="https://quantumai.google/site-assets/images/buttons/colab_logo_1x.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/quantumlib/Cirq/blob/master/docs/protocols.ipynb"><img src="https://quantumai.google/site-assets/images/buttons/github_logo_1x.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/Cirq/docs/protocols.ipynb"><img src="https://quantumai.google/site-assets/images/buttons/download_icon_1x.png" />Download notebook</a>
</td>
</table>
End of explanation
print(cirq.X)
print("cirq.X unitary:\n", cirq.unitary(cirq.X))
a, b = cirq.LineQubit.range(2)
circuit = cirq.Circuit(cirq.X(a), cirq.Y(b))
print(circuit)
print("circuit unitary:\n", cirq.unitary(circuit))
Explanation: Introduction
Cirq's protocols are a very similar concept to Python's built-in protocols that were introduced in PEP 544.
Python's built-in protocols are extremely convenient. For example, behind all the for loops and list comprehensions you can find the Iterator protocol.
As long as an object has the __iter__() magic method that returns an iterator object, it has iterator support.
An iterator object has to define the __iter__() and __next__() magic methods, which define the iterator protocol.
The iter(val) builtin function returns an iterator for val if it defines the above methods, otherwise throws a TypeError. Cirq protocols work similarly.
A canonical Cirq protocol example is the unitary protocol, which allows checking the unitary matrix of values that support the protocol by calling cirq.unitary(val).
End of explanation
try:
print(cirq.unitary(a)) ## error!
except Exception as e:
print("As expected, a qubit does not have a unitary. The error: ")
print(e)
Explanation: When an object does not support a given protocol, an error is thrown.
End of explanation
print(cirq.unitary(cirq.Y))
Explanation: What is a protocol?
A protocol is a combination of the following two items:
- a SupportsXYZ class, which defines and documents all the magic functions that need to be implemented in order to support that given protocol
- the entrypoint function(s), which are exposed to the main cirq namespace as cirq.xyz()
Note: While the protocol is technically both of these things, we refer to the public utility functions interchangeably as protocols. See the list of them below.
Cirq's protocols
For a complete list of Cirq protocols, refer to the cirq.protocols package.
Here we provide a list of frequently used protocols for debugging, simulation and testing.
| Protocol | Description |
|----------|-------|
|cirq.act_on| Allows an object (operations or gates) to act on a state, particularly within simulators. |
|cirq.apply_channel| High performance evolution under a channel evolution. |
|cirq.apply_mixture| High performance evolution under a mixture of unitaries evolution. |
|cirq.apply_unitaries| Apply a series of unitaries onto a state tensor. |
|cirq.apply_unitary| High performance left-multiplication of a unitary effect onto a tensor. |
|cirq.approx_eq| Approximately compares two objects. |
|cirq.circuit_diagram_info| Retrieves information for drawing operations within circuit diagrams. |
|cirq.commutes| Determines whether two values commute. |
|cirq.control_keys| Gets the keys that the value is classically controlled by. |
|cirq.definitely_commutes| Determines whether two values definitely commute. |
|cirq.decompose| Recursively decomposes a value into cirq.Operations meeting a criteria. |
|cirq.decompose_once| Decomposes a value into operations, if possible. |
|cirq.decompose_once_with_qubits| Decomposes a value into operations on the given qubits. |
|cirq.equal_up_to_global_phase| Determine whether two objects are equal up to global phase. |
|cirq.has_kraus| Returns whether the value has a Kraus representation. |
|cirq.has_mixture| Returns whether the value has a mixture representation. |
|cirq.has_stabilizer_effect| Returns whether the input has a stabilizer effect. |
|cirq.has_unitary| Determines whether the value has a unitary effect. |
|cirq.inverse| Returns the inverse val**-1 of the given value, if defined. |
|cirq.is_measurement| Determines whether or not the given value is a measurement. |
|cirq.is_parameterized| Returns whether the object is parameterized with any Symbols. |
|cirq.kraus| Returns a Kraus representation of the given channel. |
|cirq.measurement_key| Get the single measurement key for the given value. |
|cirq.measurement_keys| Gets the measurement keys of measurements within the given value. |
|cirq.mixture| Return a sequence of tuples representing a probabilistic unitary. |
|cirq.num_qubits| Returns the number of qubits, qudits, or qids val operates on. |
|cirq.parameter_names| Returns parameter names for this object. |
|cirq.parameter_symbols| Returns parameter symbols for this object. |
|cirq.pauli_expansion| Returns coefficients of the expansion of val in the Pauli basis. |
|cirq.phase_by| Returns a phased version of the effect. |
|cirq.pow| Returns val**factor of the given value, if defined. |
|cirq.qasm| Returns QASM code for the given value, if possible. |
|cirq.qid_shape| Returns a tuple describing the number of quantum levels of each |
|cirq.quil| Returns the QUIL code for the given value. |
|cirq.read_json| Read a JSON file that optionally contains cirq objects. |
|cirq.resolve_parameters| Resolves symbol parameters in the effect using the param resolver. |
|cirq.to_json| Write a JSON file containing a representation of obj. |
|cirq.trace_distance_bound| Returns a maximum on the trace distance between this effect's input |
|cirq.trace_distance_from_angle_list| Given a list of arguments of the eigenvalues of a unitary matrix, |
|cirq.unitary| Returns a unitary matrix describing the given value. |
|cirq.validate_mixture| Validates that the mixture's tuple are valid probabilities. |
Quantum operator representation protocols
The following family of protocols is an important and frequently used set of features of Cirq and it is worthwhile mentioning them and and how they interact with each other. They are, in the order of increasing generality:
*unitary
*kraus
*mixture
All these protocols make it easier to work with different representations of quantum operators, namely:
- finding that representation (unitary, kraus, mixture),
- determining whether the operator has that representation (has_*)
- and applying them (apply_*) on a state vector.
Unitary
The *unitary protocol is the least generic, as only unitary operators should implement it. The cirq.unitary function returns the matrix representation of the operator in the computational basis. We saw an example of the unitary protocol above, but let's see the unitary matrix of the Pauli-Y operator as well:
End of explanation
probabilistic_x = cirq.X.with_probability(.3)
for p, op in cirq.mixture(probabilistic_x):
print(f"probability: {p}")
print("operator:")
print(op)
Explanation: Mixture
The *mixture protocol should be implemented by operators that are unitary-mixtures. These probabilistic operators are represented by a list of tuples ($p_i$, $U_i$), where each unitary effect $U_i$ occurs with a certain probability $p_i$, and $\sum p_i = 1$. Probabilities are a Python float between 0.0 and 1.0, and the unitary matrices are numpy arrays.
Constructing simple probabilistic gates in Cirq is easiest with the with_probability method.
End of explanation
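# Added check: the mixture probabilities returned by cirq.mixture should sum to 1.
print(sum(p for p, _ in cirq.mixture(probabilistic_x)))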
# cirq.Y has a unitary effect but does not implement SupportsMixture
# thus mixture protocols will return ((1, cirq.unitary(Y)))
print(cirq.mixture(cirq.Y))
print(cirq.has_mixture(cirq.Y))
Explanation: In case an operator does not implement SupportsMixture, but does implement SupportsUnitary, *mixture functions fall back to the *unitary methods. It is easy to see that a unitary operator $U$ is just a "mixture" of a single unitary with probability $p=1$.
End of explanation
cirq.kraus(cirq.DepolarizingChannel(p=0.3))
Explanation: Channel
The kraus representation is the operator sum representation of a quantum operator (a channel):
$$
\rho \rightarrow \sum_{k=0}^{r-1} A_k \rho A_k^\dagger
$$
These matrices are required to satisfy the trace preserving condition
$$
\sum_{k=0}^{r-1} A_k^\dagger A_k = I
$$
where $I$ is the identity matrix. The matrices $A_k$ are sometimes called Kraus or noise operators.
The cirq.kraus function returns a tuple of numpy arrays, one for each of the Kraus operators:
End of explanation
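As a quick sanity check (a sketch that is not part of the original notebook), the Kraus operators returned for the depolarizing channel above satisfy the trace-preserving condition numerically:
import numpy as np
import cirq

kraus_ops = cirq.kraus(cirq.DepolarizingChannel(p=0.3))
completeness = sum(np.conj(A).T @ A for A in kraus_ops)  # sum_k A_k^dagger A_k
print(np.allclose(completeness, np.eye(2)))  # expected: True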
cirq.kraus(cirq.X.with_probability(0.25))
Explanation: In case the operator does not implement SupportsKraus, but it does implement SupportsMixture, the *kraus protocol will generate the Kraus operators based on the *mixture representation.
$$
((p_0, U_0),(p_1, U_1),\ldots,(p_n, U_n)) \rightarrow (\sqrt{p_0}U_0, \sqrt{p_1}U_1, \ldots, \sqrt{p_n}U_n)
$$
Thus for example ((0.25, X), (0.75, I)) -> (0.5 X, sqrt(0.75) I):
End of explanation
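A quick numerical spot-check of this conversion (a sketch; it assumes the fallback preserves the ordering of the mixture tuples):
import numpy as np
import cirq

probabilistic_x = cirq.X.with_probability(0.25)
mix = cirq.mixture(probabilistic_x)
ops = cirq.kraus(probabilistic_x)
print(all(np.allclose(A, np.sqrt(p) * u) for A, (p, u) in zip(ops, mix)))  # expected: True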
print(cirq.kraus(cirq.Y))
print(cirq.unitary(cirq.Y))
print(cirq.has_kraus(cirq.Y))
Explanation: In the simplest case of a unitary operator, cirq.kraus returns a one-element tuple with the same unitary as returned by cirq.unitary:
End of explanation |
2,535 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Toplevel
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Flux Correction
3. Key Properties --> Genealogy
4. Key Properties --> Software Properties
5. Key Properties --> Coupling
6. Key Properties --> Tuning Applied
7. Key Properties --> Conservation --> Heat
8. Key Properties --> Conservation --> Fresh Water
9. Key Properties --> Conservation --> Salt
10. Key Properties --> Conservation --> Momentum
11. Radiative Forcings
12. Radiative Forcings --> Greenhouse Gases --> CO2
13. Radiative Forcings --> Greenhouse Gases --> CH4
14. Radiative Forcings --> Greenhouse Gases --> N2O
15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
17. Radiative Forcings --> Greenhouse Gases --> CFC
18. Radiative Forcings --> Aerosols --> SO4
19. Radiative Forcings --> Aerosols --> Black Carbon
20. Radiative Forcings --> Aerosols --> Organic Carbon
21. Radiative Forcings --> Aerosols --> Nitrate
22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
24. Radiative Forcings --> Aerosols --> Dust
25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
27. Radiative Forcings --> Aerosols --> Sea Salt
28. Radiative Forcings --> Other --> Land Use
29. Radiative Forcings --> Other --> Solar
1. Key Properties
Key properties of the model
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 2. Key Properties --> Flux Correction
Flux correction properties of the model
2.1. Details
Is Required
Step7: 3. Key Properties --> Genealogy
Genealogy and history of the model
3.1. Year Released
Is Required
Step8: 3.2. CMIP3 Parent
Is Required
Step9: 3.3. CMIP5 Parent
Is Required
Step10: 3.4. Previous Name
Is Required
Step11: 4. Key Properties --> Software Properties
Software properties of model
4.1. Repository
Is Required
Step12: 4.2. Code Version
Is Required
Step13: 4.3. Code Languages
Is Required
Step14: 4.4. Components Structure
Is Required
Step15: 4.5. Coupler
Is Required
Step16: 5. Key Properties --> Coupling
**
5.1. Overview
Is Required
Step17: 5.2. Atmosphere Double Flux
Is Required
Step18: 5.3. Atmosphere Fluxes Calculation Grid
Is Required
Step19: 5.4. Atmosphere Relative Winds
Is Required
Step20: 6. Key Properties --> Tuning Applied
Tuning methodology for model
6.1. Description
Is Required
Step21: 6.2. Global Mean Metrics Used
Is Required
Step22: 6.3. Regional Metrics Used
Is Required
Step23: 6.4. Trend Metrics Used
Is Required
Step24: 6.5. Energy Balance
Is Required
Step25: 6.6. Fresh Water Balance
Is Required
Step26: 7. Key Properties --> Conservation --> Heat
Global heat conservation properties of the model
7.1. Global
Is Required
Step27: 7.2. Atmos Ocean Interface
Is Required
Step28: 7.3. Atmos Land Interface
Is Required
Step29: 7.4. Atmos Sea-ice Interface
Is Required
Step30: 7.5. Ocean Seaice Interface
Is Required
Step31: 7.6. Land Ocean Interface
Is Required
Step32: 8. Key Properties --> Conservation --> Fresh Water
Global fresh water conservation properties of the model
8.1. Global
Is Required
Step33: 8.2. Atmos Ocean Interface
Is Required
Step34: 8.3. Atmos Land Interface
Is Required
Step35: 8.4. Atmos Sea-ice Interface
Is Required
Step36: 8.5. Ocean Seaice Interface
Is Required
Step37: 8.6. Runoff
Is Required
Step38: 8.7. Iceberg Calving
Is Required
Step39: 8.8. Endoreic Basins
Is Required
Step40: 8.9. Snow Accumulation
Is Required
Step41: 9. Key Properties --> Conservation --> Salt
Global salt conservation properties of the model
9.1. Ocean Seaice Interface
Is Required
Step42: 10. Key Properties --> Conservation --> Momentum
Global momentum conservation properties of the model
10.1. Details
Is Required
Step43: 11. Radiative Forcings
Radiative forcings of the model for historical and scenario (aka Table 12.1 IPCC AR5)
11.1. Overview
Is Required
Step44: 12. Radiative Forcings --> Greenhouse Gases --> CO2
Carbon dioxide forcing
12.1. Provision
Is Required
Step45: 12.2. Additional Information
Is Required
Step46: 13. Radiative Forcings --> Greenhouse Gases --> CH4
Methane forcing
13.1. Provision
Is Required
Step47: 13.2. Additional Information
Is Required
Step48: 14. Radiative Forcings --> Greenhouse Gases --> N2O
Nitrous oxide forcing
14.1. Provision
Is Required
Step49: 14.2. Additional Information
Is Required
Step50: 15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
Tropospheric ozone forcing
15.1. Provision
Is Required
Step51: 15.2. Additional Information
Is Required
Step52: 16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
Stratospheric ozone forcing
16.1. Provision
Is Required
Step53: 16.2. Additional Information
Is Required
Step54: 17. Radiative Forcings --> Greenhouse Gases --> CFC
Ozone-depleting and non-ozone-depleting fluorinated gases forcing
17.1. Provision
Is Required
Step55: 17.2. Equivalence Concentration
Is Required
Step56: 17.3. Additional Information
Is Required
Step57: 18. Radiative Forcings --> Aerosols --> SO4
SO4 aerosol forcing
18.1. Provision
Is Required
Step58: 18.2. Additional Information
Is Required
Step59: 19. Radiative Forcings --> Aerosols --> Black Carbon
Black carbon aerosol forcing
19.1. Provision
Is Required
Step60: 19.2. Additional Information
Is Required
Step61: 20. Radiative Forcings --> Aerosols --> Organic Carbon
Organic carbon aerosol forcing
20.1. Provision
Is Required
Step62: 20.2. Additional Information
Is Required
Step63: 21. Radiative Forcings --> Aerosols --> Nitrate
Nitrate forcing
21.1. Provision
Is Required
Step64: 21.2. Additional Information
Is Required
Step65: 22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
Cloud albedo effect forcing (RFaci)
22.1. Provision
Is Required
Step66: 22.2. Aerosol Effect On Ice Clouds
Is Required
Step67: 22.3. Additional Information
Is Required
Step68: 23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
Cloud lifetime effect forcing (ERFaci)
23.1. Provision
Is Required
Step69: 23.2. Aerosol Effect On Ice Clouds
Is Required
Step70: 23.3. RFaci From Sulfate Only
Is Required
Step71: 23.4. Additional Information
Is Required
Step72: 24. Radiative Forcings --> Aerosols --> Dust
Dust forcing
24.1. Provision
Is Required
Step73: 24.2. Additional Information
Is Required
Step74: 25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
Tropospheric volcanic forcing
25.1. Provision
Is Required
Step75: 25.2. Historical Explosive Volcanic Aerosol Implementation
Is Required
Step76: 25.3. Future Explosive Volcanic Aerosol Implementation
Is Required
Step77: 25.4. Additional Information
Is Required
Step78: 26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
Stratospheric volcanic forcing
26.1. Provision
Is Required
Step79: 26.2. Historical Explosive Volcanic Aerosol Implementation
Is Required
Step80: 26.3. Future Explosive Volcanic Aerosol Implementation
Is Required
Step81: 26.4. Additional Information
Is Required
Step82: 27. Radiative Forcings --> Aerosols --> Sea Salt
Sea salt forcing
27.1. Provision
Is Required
Step83: 27.2. Additional Information
Is Required
Step84: 28. Radiative Forcings --> Other --> Land Use
Land use forcing
28.1. Provision
Is Required
Step85: 28.2. Crop Change Only
Is Required
Step86: 28.3. Additional Information
Is Required
Step87: 29. Radiative Forcings --> Other --> Solar
Solar forcing
29.1. Provision
Is Required
Step88: 29.2. Additional Information
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'nasa-giss', 'sandbox-3', 'toplevel')
Explanation: ES-DOC CMIP6 Model Properties - Toplevel
MIP Era: CMIP6
Institute: NASA-GISS
Source ID: SANDBOX-3
Sub-Topics: Radiative Forcings.
Properties: 85 (42 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:21
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Flux Correction
3. Key Properties --> Genealogy
4. Key Properties --> Software Properties
5. Key Properties --> Coupling
6. Key Properties --> Tuning Applied
7. Key Properties --> Conservation --> Heat
8. Key Properties --> Conservation --> Fresh Water
9. Key Properties --> Conservation --> Salt
10. Key Properties --> Conservation --> Momentum
11. Radiative Forcings
12. Radiative Forcings --> Greenhouse Gases --> CO2
13. Radiative Forcings --> Greenhouse Gases --> CH4
14. Radiative Forcings --> Greenhouse Gases --> N2O
15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
17. Radiative Forcings --> Greenhouse Gases --> CFC
18. Radiative Forcings --> Aerosols --> SO4
19. Radiative Forcings --> Aerosols --> Black Carbon
20. Radiative Forcings --> Aerosols --> Organic Carbon
21. Radiative Forcings --> Aerosols --> Nitrate
22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
24. Radiative Forcings --> Aerosols --> Dust
25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
27. Radiative Forcings --> Aerosols --> Sea Salt
28. Radiative Forcings --> Other --> Land Use
29. Radiative Forcings --> Other --> Solar
1. Key Properties
Key properties of the model
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Top level overview of coupled model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of coupled model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.flux_correction.details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Flux Correction
Flux correction properties of the model
2.1. Details
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how flux corrections are applied in the model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.year_released')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Genealogy
Genealogy and history of the model
3.1. Year Released
Is Required: TRUE Type: STRING Cardinality: 1.1
Year the model was released
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP3_parent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.2. CMIP3 Parent
Is Required: FALSE Type: STRING Cardinality: 0.1
CMIP3 parent if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP5_parent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.3. CMIP5 Parent
Is Required: FALSE Type: STRING Cardinality: 0.1
CMIP5 parent if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.previous_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.4. Previous Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Previously known as
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Software Properties
Software properties of model
4.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.components_structure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.4. Components Structure
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how model realms are structured into independent software components (coupled via a coupler) and internal software components.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.coupler')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OASIS"
# "OASIS3-MCT"
# "ESMF"
# "NUOPC"
# "Bespoke"
# "Unknown"
# "None"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 4.5. Coupler
Is Required: FALSE Type: ENUM Cardinality: 0.1
Overarching coupling framework for model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Coupling
**
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of coupling in the model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_double_flux')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 5.2. Atmosphere Double Flux
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the atmosphere passing a double flux to the ocean and sea ice (as opposed to a single one)?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_fluxes_calculation_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Atmosphere grid"
# "Ocean grid"
# "Specific coupler grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 5.3. Atmosphere Fluxes Calculation Grid
Is Required: FALSE Type: ENUM Cardinality: 0.1
Where are the air-sea fluxes calculated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_relative_winds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 5.4. Atmosphere Relative Winds
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are relative or absolute winds used to compute the flux? I.e. do ocean surface currents enter the wind stress calculation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Key Properties --> Tuning Applied
Tuning methodology for model
6.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics/diagnostics retained. Document the relative weight given to climate performance metrics/diagnostics versus process oriented metrics/diagnostics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics/diagnostics of the global mean state used in tuning model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics/diagnostics of mean state (e.g THC, AABW, regional means etc) used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics/diagnostics used in tuning model/component (such as 20th century)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.energy_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.5. Energy Balance
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how energy balance was obtained in the full system: in the various components independently or at the components coupling stage?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.fresh_water_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.6. Fresh Water Balance
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how fresh_water balance was obtained in the full system: in the various components independently or at the components coupling stage?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.global')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Key Properties --> Conservation --> Heat
Global heat conservation properties of the model
7.1. Global
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how heat is conserved globally
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.2. Atmos Ocean Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the atmosphere/ocean coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_land_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.3. Atmos Land Interface
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how heat is conserved at the atmosphere/land coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_sea-ice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.4. Atmos Sea-ice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the atmosphere/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.5. Ocean Seaice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the ocean/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.land_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.6. Land Ocean Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the land/ocean coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.global')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Key Properties --> Conservation --> Fresh Water
Global fresh water conservation properties of the model
8.1. Global
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how fresh_water is conserved globally
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.2. Atmos Ocean Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how fresh_water is conserved at the atmosphere/ocean coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_land_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.3. Atmos Land Interface
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how fresh water is conserved at the atmosphere/land coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_sea-ice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.4. Atmos Sea-ice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how fresh water is conserved at the atmosphere/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.5. Ocean Seaice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how fresh water is conserved at the ocean/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.runoff')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.6. Runoff
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how runoff is distributed and conserved
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.iceberg_calving')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.7. Iceberg Calving
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how iceberg calving is modeled and conserved
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.endoreic_basins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.8. Endoreic Basins
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how endoreic basins (no ocean access) are treated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.snow_accumulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.9. Snow Accumulation
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how snow accumulation over land and over sea-ice is treated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.salt.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Key Properties --> Conservation --> Salt
Global salt conservation properties of the model
9.1. Ocean Seaice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how salt is conserved at the ocean/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.momentum.details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10. Key Properties --> Conservation --> Momentum
Global momentum conservation properties of the model
10.1. Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how momentum is conserved in the model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11. Radiative Forcings
Radiative forcings of the model for historical and scenario (aka Table 12.1 IPCC AR5)
11.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of radiative forcings (GHG and aerosols) implementation in model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12. Radiative Forcings --> Greenhouse Gases --> CO2
Carbon dioxide forcing
12.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13. Radiative Forcings --> Greenhouse Gases --> CH4
Methane forcing
13.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 13.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14. Radiative Forcings --> Greenhouse Gases --> N2O
Nitrous oxide forcing
14.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
Tropospheric ozone forcing
15.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
Stratospheric ozone forcing
16.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 16.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17. Radiative Forcings --> Greenhouse Gases --> CFC
Ozone-depleting and non-ozone-depleting fluorinated gases forcing
17.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.equivalence_concentration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "Option 1"
# "Option 2"
# "Option 3"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.2. Equivalence Concentration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Details of any equivalence concentrations used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.3. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18. Radiative Forcings --> Aerosols --> SO4
SO4 aerosol forcing
18.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 18.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 19. Radiative Forcings --> Aerosols --> Black Carbon
Black carbon aerosol forcing
19.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 20. Radiative Forcings --> Aerosols --> Organic Carbon
Organic carbon aerosol forcing
20.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 20.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 21. Radiative Forcings --> Aerosols --> Nitrate
Nitrate forcing
21.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 21.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
Cloud albedo effect forcing (RFaci)
22.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.aerosol_effect_on_ice_clouds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 22.2. Aerosol Effect On Ice Clouds
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Radiative effects of aerosols on ice clouds are represented?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.3. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
Cloud lifetime effect forcing (ERFaci)
23.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.aerosol_effect_on_ice_clouds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 23.2. Aerosol Effect On Ice Clouds
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Radiative effects of aerosols on ice clouds are represented?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.RFaci_from_sulfate_only')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 23.3. RFaci From Sulfate Only
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Radiative forcing from aerosol cloud interactions from sulfate aerosol only?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 23.4. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 24. Radiative Forcings --> Aerosols --> Dust
Dust forcing
24.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 24.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
Tropospheric volcanic forcing
25.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.historical_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.2. Historical Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in historical simulations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.future_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.3. Future Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in future simulations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 25.4. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
Stratospheric volcanic forcing
26.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.historical_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26.2. Historical Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in historical simulations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.future_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26.3. Future Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in future simulations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 26.4. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 27. Radiative Forcings --> Aerosols --> Sea Salt
Sea salt forcing
27.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 28. Radiative Forcings --> Other --> Land Use
Land use forcing
28.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.crop_change_only')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 28.2. Crop Change Only
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Land use change represented via crop change only?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 28.3. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "irradiance"
# "proton"
# "electron"
# "cosmic ray"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 29. Radiative Forcings --> Other --> Solar
Solar forcing
29.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How solar forcing is provided
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 29.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation |
2,536 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Know your customer (KYC) - [Lead Scoring]
Marketing a new product to customers
In this short note we discuss customer targeting through telemarketing phone calls to sell long-term deposits. More specifically, within a campaign, the human agents execute phone calls to a list of clients to sell the deposit (outbound clients) or, if the client meanwhile calls the contact-center for any other reason, they are asked to subscribe to the deposit (inbound clients). Thus, the result is binary: the client either subscribes to a term deposit ('yes') or does not ('no').
The data set we use is provided by the UCI ML Repository and has 41188 examples of customer (and prospect) responses in this telemarketing campaign, plus 20 other attributes. These attributes describe their personal characteristics (e.g. age, type of job, marital status, educational level), their credit and loan data (e.g. credit in default, existence of housing and/or personal loan), details concerning their behavior during the telemarketing campaign (e.g. number of contacts performed, number of days that passed by after the client was last contacted, number of contacts performed before this campaign) and some important socioeconomic indicators (e.g. CPI, CCI, euribor 3 month rate). These response data are ordered by date (from May 2008 to November 2010) and are very close to the data analyzed in Moro et al., 2014.
Data Source / Bibliography
Step1: Data Dictionary
The original dataset has the following attribute information
Step2: It is also important to note that the original data set has many more prospects (36548) than existing customers (4640). However, it may be a bad idea to make a stratified split over this data set, since we would lose the time dimension of the problem that way. In order to better check whether the time dimension is important for this problem and the record provided, we need to re-create the missing calendar dates and transform the original data set into a timeseries object.
Data Transformation and train/test split
In the few lines of code below
Step3: The provided data set, bank_marketing, has 41188 record lines describing various customer and prospect attributes, as well as their response in the telemarketing campaign of interest. The percentage of unique calendar dates across this record is low, whereas many more people seem to respond positively as time goes by. However, some months are missing from the data set, and adding a time dimension to this problem cannot help provide better predictions.
Step4: In order to evaluate our learning algorithms later, we need to make a train/test split of the bank_marketing SFrame. However, due to the class imbalance observed in contacts' response (there are many more prospects than existing customers), we had better do so in a stratified way.
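A minimal sketch of one way to do such a stratified split with scikit-learn's StratifiedKFold (only a sketch, not necessarily the split used in the original notebook; it assumes the response column of the UCI data set is named 'y', and the fold settings are illustrative):
import graphlab as gl
from sklearn.cross_validation import StratifiedKFold

df = bank_marketing.to_dataframe()   # bank_marketing: the SFrame loaded earlier
folds = StratifiedKFold(df['y'].values, n_folds=5, shuffle=True, random_state=1)
train_idx, test_idx = next(iter(folds))
train = gl.SFrame(df.iloc[train_idx])
test = gl.SFrame(df.iloc[test_idx])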
Step5: ROI Calculation
Step6: Call everyone (assuming we have the budget & time to do so), ROI is 10.27%
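For reference, ROI here is simply (revenue - cost) / cost over the contacted list; a hypothetical helper of the following form illustrates the arithmetic (the per-call cost and per-conversion value are placeholders, not the original notebook's assumptions):
def campaign_roi(n_contacted, n_converted, cost_per_call=1.0, value_per_conversion=25.0):
    # placeholder unit economics; plug in the campaign's real figures
    cost = n_contacted * cost_per_call
    revenue = n_converted * value_per_conversion
    return (revenue - cost) / cost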
Step7: Lead Scoring Modeling
Part 1
Step8: A large proportion of customers who opened deposit accounts were employed (not students), under 38
Target them as leads and measure our ROI
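A rough sketch of this rule-based targeting on the SFrame (the exact filtering rule in the original notebook may differ; the column names follow the data dictionary):
leads = bank_marketing[(bank_marketing['job'] != 'student') &
                       (bank_marketing['job'] != 'unemployed') &
                       (bank_marketing['age'] < 38)]
print(len(leads))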
Step9: Result
Step10: The toolkit automatically evaluates several types of algorithms, including
Step11: This initial model can be considered accurate given that it correctly predicts the purchasing decisions of ~90% of the contacts. However, the toolkit_model leaves room for improvement. Specifically only ~66% of predicted sales actually convert to sales. Furthermore, only ~24% of actual sales were actually predicted by the model. In order to better understand the model we can review the importance of the input features.
Step12: Lead score the contact list and measure our ROI
Step13: Result
Step14: Next we add quadratic interactions between the four features below
Step15: and re-train the GraphLab Create AutoML Classifier for this new data set qtrain.
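A sketch of these two steps with GraphLab Create's feature engineering and AutoML classifier (the four feature names are placeholders for the ones listed in the original notebook, 'y' is assumed to be the response column, and train/test are the stratified split from the earlier step):
import graphlab as gl
from graphlab import feature_engineering as fe

quad = fe.create(train, fe.QuadraticFeatures(features=['feat_a', 'feat_b', 'feat_c', 'feat_d']))
qtrain = quad.transform(train)
qtest = quad.transform(test)
new_toolkit_model = gl.classifier.create(qtrain, target='y')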
Step16: Next, we evaluate the new AutoML Classifier, new_toolkit_model, on the test data set.
Step17: Note that this model is almost as accurate as the previous one, with similar precision (~66% of the predicted sales were actually converted to sales) and recall (~24% of actual sales were actually predicted by the model). However, to get a better feel for the model just trained (new_toolkit_model) and how it differs from the previous one (toolkit_model), we can review the importance of the input features in these two cases.
Step18: By comparing these two models we note that
Step19: Result
Step20: To group the age values of our contacts we leverage the FeatureBinner method of the feature_engineering toolkit of GraphLab Create as shown below.
Step21: Let's now train a boosted trees classifier model using this enriched data set, qtrain1. We have also tweaked its parameters to achieve better predictive performance.
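A sketch of these two steps (FeatureBinner is left at its default settings here, and the boosted-trees hyperparameters are illustrative guesses rather than the original notebook's values):
import graphlab as gl
from graphlab import feature_engineering as fe

binner = fe.create(qtrain, fe.FeatureBinner(features=['age']))
qtrain1 = binner.transform(qtrain)
qtest1 = binner.transform(qtest)
new_boostedtrees_model = gl.boosted_trees_classifier.create(
    qtrain1, target='y',
    max_iterations=100, max_depth=6,
    row_subsample=0.8, column_subsample=0.8)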
Step22: Next we evaluate the new_boostedtrees_model on the test data set.
Step23: This new model (new_boostedtrees_model) is almost as accurate as the previous one, has higher precision (~66% of the predicted sales were actually converted to sales) and similar recall (~23% of actual sales were actually predicted by the model). To get a better feel for the model just trained (new_boostedtrees_model) and how it differs from the previous one (new_toolkit_model), we can review the importance of the input features in these two cases.
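For the boosted trees model this review can be done directly (a sketch; for the AutoML model the equivalent information depends on which underlying model the toolkit selected):
print(new_boostedtrees_model.get_feature_importance())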
Step24: By comparing these two cases, we note that
Step25: Conclusion | Python Code:
import graphlab as gl
import pandas as pd
from datetime import datetime
from sklearn.cross_validation import StratifiedKFold
## load data set from a locally saved csv file
bank_marketing = gl.SFrame.read_csv('./../../../04.UCI.ML.REPO/Bank_Marketing/bank-additional/bank-additional-full.csv',
delimiter=';')
## other methods of loading data sets...
# data = gl.SFrame('s3://' or 'hdfs://')
# data # pySpark RDD or SchemaRDD / Spark DataFrame
# data = gl.SFrame.read_json('')
# With a DB: configure ODBC manager / driver on the machine
# data = gl.connect_odbc?
# data = gl.from_sql?
bank_marketing.head()
Explanation: Know your customer (KYC) - [Lead Scoring]
Marketing a new product to customers
In this short note we discuss customer targeting through telemarketing phone calls to sell long-term deposits. More specifically, within a campaign, the human agents execute phone calls to a list of clients to sell the deposit (outbound clients) or, if meanwhile the client calls the contact-center for any other reason, he is asked to subscribe the deposit (inbound client). Thus, the result is a binary one, i.e. the client can either subscribe for a term deposit ('yes') or not ('no').
The data set we use is provided by the UCI ML Repository and has 41188 examples of customer (and prospect) response in this telemarketing campaign, plus 20 other attributes. These attributes describe their personal characteristics (e.g. age, type of job, marital status, educational level), their credit and loan data (e.g. credit in default, existence of housing and/or personal loan), details concerning their behavior during the telemarketing campaign (e.g. number of contacts performed, number of days that passed by after the client was last contacted, number of contacts performed before this campaign) and some important socioeconomic indicators (e.g. CPI, CCI, euribor 3 month rate). These response data have been ordered by date (from May 2008 to November 2010), and they are very close to the data analyzed in Moro et al., 2014 some years ago.
Data Source / Bibliography:
[Moro et al., 2014] S. Moro, P. Cortez and P. Rita. A Data-Driven Approach to Predict the Success of Bank Telemarketing. Decision Support Systems, Elsevier, 62:22-31, June 2014
Dataset has been provided from the UCI ML Repository: https://archive.ics.uci.edu/ml/datasets/Bank+Marketing
Libraries and Necessary Data Transformation
First we fire up GraphLab Create, all the other necessary libraries and load the bank-marketing data set in an SFrame.
End of explanation
gl.canvas.set_target('ipynb')
bank_marketing.show()
Explanation: Data Dictionary
The original dataset has the following attribute information:
| Field Num | Field Name | Description |
|---|---|---|
| 1 | age | (numeric) |
| 2 | job | type of job (categorical: 'admin.', 'blue-collar', 'entrepreneur', 'housemaid', 'management', 'retired', 'self-employed', 'services', 'student', 'technician', 'unemployed', 'unknown')|
| 3 | marital | marital status (categorical: 'divorced', 'married', 'single', 'unknown'; note: 'divorced' means divorced or widowed) |
| 4 | education | (categorical: 'basic.4y', 'basic.6y', 'basic.9y', 'high.school', 'illiterate', 'professional.course', 'university.degree', 'unknown') |
| 5 | default | has credit in default? (categorical: 'no', 'yes', 'unknown') |
| 6 | housing | has housing loan? (categorical: 'no', 'yes', 'unknown') |
| 7 | loan | has personal loan? (categorical: 'no', 'yes', 'unknown') |
|---|---|---|
| 8 | contact | contact communication type (categorical: 'cellular', 'telephone') |
| 9 | month | last contact month of year (categorical: 'jan', 'feb', 'mar', ..., 'nov', 'dec') |
| 10 | day_of_week | last contact day of the week (categorical: 'mon', 'tue', 'wed', 'thu', 'fri') |
| 11 | duration | last contact duration, in seconds (numeric). Important note: this attribute highly affects the output target (e.g., if duration=0 then y='no'). Yet, the duration is not known before a call is performed. Also, after the end of the call y is obviously known. Thus, this input should only be included for benchmark purposes and should be discarded if the intention is to have a realistic predictive model. |
|---|---|---|
| 12 | campaign | number of contacts performed during this campaign and for this client (numeric, includes last contact) |
| 13 | pdays | number of days that passed by after the client was last contacted from a previous campaign (numeric; 999 means client was not previously contacted) |
| 14 | previous | number of contacts performed before this campaign and for this client (numeric) |
| 15 | poutcome | outcome of the previous marketing campaign (categorical: 'failure', 'nonexistent' , 'success') |
|---|---|---|
| 16 | emp.var.rate | employment variation rate - quarterly indicator (numeric) |
| 17 | cons.price.idx | consumer price index - monthly indicator (numeric) |
| 18 | cons.conf.idx | consumer confidence index - monthly indicator (numeric) |
| 19 | euribor3m | euribor 3 month rate - daily indicator (numeric) |
| 20 | nr.employed | number of employees - quarterly indicator (numeric) |
|---|---|---|
| 21 | y | has the client subscribed a term deposit? (binary: 'yes', 'no') [outcome of the marketing campaign]|
Exploratory Data Analysis
As shown below, there is no undefined value in any of the provided record lines, nor any strange set of values or outliers different from the expected ones for any of the attributes.
End of explanation
from helper_functions import *
def _month_to_number(x):
from dateutil import parser
return parser.parse(x).strftime('%m')
def _wkday_to_number(x):
from dateutil import parser
return parser.parse(x).strftime('%w')
def _str_to_datetime(x):
import datetime
import pytz
from dateutil import parser
return parser.parse(x).strftime('%Y-%m-%d')
def _unix_timestamp_to_datetime(x):
import time
import datetime
import pytz
from dateutil import parser
return parser.parse(x)
bank_marketing['y'] = bank_marketing['y'].apply(lambda x: 1 if x=='yes' else 0)
bank_marketing['month_nr'] = bank_marketing['month'].apply(_month_to_number)
bank_marketing['wkday_nr'] = bank_marketing['day_of_week'].apply(_wkday_to_number)
bank_marketing['year'] = add_running_year(bank_marketing['month_nr'], 2008)
bank_marketing['date'] = add_running_date(bank_marketing, 'year', 'month_nr', 'wkday_nr')
bank_marketing['date'] = bank_marketing.apply(lambda row: '-'.join(map(str,(row['year'], row['month_nr'], row['date']))))
bank_marketing['date'] = bank_marketing['date'].apply(_str_to_datetime)
bank_marketing['date'] = bank_marketing['date'].apply(_unix_timestamp_to_datetime)
bank_marketing
bank_marketing = gl.TimeSeries(bank_marketing, index='date')
Explanation: It is also important to note that the original data set has many more prospects (36548) than existing customers (4640). However, it may be a bad idea to make a stratified split over this data set, since that way we would lose the time dimension of the problem. In order to better check whether the time dimension is important for this problem and the records provided, we need to re-create the missing calendar dates and transform the original data set into a timeseries object.
Data Transformation and train/test split
In the few lines of code below:
We add calendar dates (date) for the year, month and day of week of each provided record line, and produce the corresponding datetimes.
Transform the data set into a TimeSeries object, and
Take a second look in the data we have available to check if the time-dimension is necessary for this problem.
End of explanation
print 'Number of record lines [bank_marketing]: %d' % len(bank_marketing)
print 'Unique calendar dates across data set [bank_marketing]: %d' % len(bank_marketing['date'].unique())
unique_dates_pct = (len(bank_marketing['date'].unique())*100/float(len(bank_marketing)))
print 'Percentage of unique calendar dates across data set [bank_marketing]: %.2f%%'% unique_dates_pct
bank_marketing.filter_by(2008,'year')['month_nr'].unique().sort()
print 'Full Data Set [year: 2008]:'
print '------------------------------'
bank_marketing_2008 = bank_marketing.filter_by(2008,'year')
customers = len(bank_marketing_2008[bank_marketing_2008['y']==1])
prospects = len(bank_marketing_2008[bank_marketing_2008['y']==0])
print 'Number of examples in year segment [bank_marketing]: %d' % len(bank_marketing_2008)
print 'Number of existent customers: %d (%.2f%%)' % (customers, 100*customers/float(len(bank_marketing_2008)))
print 'Number of prospects: %d (%.2f%%)\n' % (prospects, 100*prospects/float(len(bank_marketing_2008)))
bank_marketing.filter_by(2009,'year')['month_nr'].unique().sort()
print 'Full Data Set [year: 2009]:'
print '------------------------------'
bank_marketing_2009 = bank_marketing.filter_by(2009,'year')
customers = len(bank_marketing_2009[bank_marketing_2009['y']==1])
prospects = len(bank_marketing_2009[bank_marketing_2009['y']==0])
print 'Number of examples in year segment [bank_marketing]: %d' % len(bank_marketing_2009)
print 'Number of existent customers: %d (%.2f%%)' % (customers, 100*customers/float(len(bank_marketing_2009)))
print 'Number of prospects: %d (%.2f%%)\n' % (prospects, 100*prospects/float(len(bank_marketing_2009)))
bank_marketing.filter_by(2010,'year')['month_nr'].unique().sort()
print 'Full Data Set [year: 2010]:'
print '------------------------------'
bank_marketing_2010 = bank_marketing.filter_by(2010,'year')
customers = len(bank_marketing_2010[bank_marketing_2010['y']==1])
prospects = len(bank_marketing_2010[bank_marketing_2010['y']==0])
print 'Number of examples in year segment [bank_marketing]: %d' % len(bank_marketing_2010)
print 'Number of existent customers: %d (%.2f%%)' % (customers, 100*customers/float(len(bank_marketing_2010)))
print 'Number of prospects: %d (%.2f%%)' % (prospects, 100*prospects/float(len(bank_marketing_2010)))
Explanation: The provided data set, bank_marketing, has 41188 record lines describing various customer and prospect attributes, as well as their response in the telemarketing campaign of interest. The percentage of unique calendar dates across this record is low, whereas many more people seem to respond positively as time goes by. However, some months are missing from the data set, and adding the time dimension to this problem cannot help provide better predictions.
End of explanation
## remove the time dimension of the problem
## transform the Timeseries object in a Numpy array
bank_marketing = bank_marketing.to_sframe().remove_column('date')
features = bank_marketing.column_names()
bank_marketing_np = bank_marketing.to_numpy()
## provide the stratified train/test split
skf = StratifiedKFold(bank_marketing['y'], n_folds=2, shuffle=True, random_state=1)
for train_idx, test_idx in skf:
train, test = bank_marketing_np[train_idx], bank_marketing_np[test_idx]
train = pd.DataFrame(train, index=train_idx, columns=features)
train = gl.SFrame(train, format='dataframe')
test = pd.DataFrame(test, index=test_idx, columns=features)
test = gl.SFrame(test, format='dataframe')
## restore original dtypes
for attrib in features:
train[attrib] = train[attrib].astype(bank_marketing[attrib].dtype())
test[attrib] = test[attrib].astype(bank_marketing[attrib].dtype())
print 'Training Data Set:'
print '---------------------'
train_customers = len(train[train['y']==1])
train_prospects = len(train[train['y']==0])
print 'Number of examples in training set [train]: %d' % len(train)
print 'Number of existent customers: %d (%.2f%%)' % (train_customers, 100*train_customers/float(len(train)))
print 'Number of prospects: %d (%.2f%%)\n' % (train_prospects, 100*train_prospects/float(len(train)))
print 'Test Data Set:'
print '-----------------'
test_customers = len(test[test['y']==1])
test_prospects = len(test[test['y']==0])
print 'Number of examples in validation set [test]: %d' % len(test)
print 'Number of existent customers: %d (%.2f%%)' % (test_customers, 100*test_customers/float(len(test)))
print 'Number of prospects: %d (%.2f%%)' % (test_prospects, 100*test_prospects/float(len(test)))
Explanation: In order to evaluate our learning algorithms later, we need to make a train/test split of the bank_marketing SFrame. However, due to the class imbalance observed in contacts' response (there are many more prospects than existing customers), we had better do so in a stratified way.
End of explanation
def calc_call_roi(contact_list, lead_score, pct_tocall):
#assumptions
cost_ofcall = 1.00
cust_ltv = 100.00 #customer lifetime value
num_calls = int(len(contact_list) * pct_tocall)
if 'lead_score' in contact_list.column_names():
contact_list.remove_column('lead_score')
contact_list = contact_list.add_column(lead_score, name='lead_score')
sorted_bymodel = contact_list.sort('lead_score', ascending=False)
call_list = sorted_bymodel[:num_calls]
num_subscriptions = len(call_list[call_list['y']==1])
roi = (num_subscriptions * cust_ltv - num_calls * cost_ofcall) / float(num_calls * cost_ofcall)
return roi
Explanation: ROI Calculation: Classical Use Case
Measuring the effectiveness of our lead scoring model
Before we start, let's assume that each phone call to a contact costs 1 USD and that the customer lifetime value for a contact that purchases a term deposit is 100 USD. Then the ROI for calling all the customers in our training dataset is:
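As a rough back-of-the-envelope check of these assumptions (approximate figures, derived from the stratified 50/50 split above): the test half holds about 20594 contacts, roughly 2320 of whom subscribe, so calling everyone costs ~20594 USD and returns ~232000 USD of lifetime value, i.e. (2320 * 100 - 20594 * 1) / (20594 * 1) ≈ 10.27, in line with the 10.27% reported by calc_call_roi below.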
End of explanation
init_leadscores = gl.SArray([1 for _ in test])
init_roi = calc_call_roi(test, init_leadscores, 1)
print 'ROI for calling all contacts [test]: %.2f%%' % init_roi
Explanation: Call everyone (assuming we have the budget & time to do so), ROI is 10.27%
End of explanation
num_customers = float(len(train))
numY = gl.Sketch(train['y']).frequency_count(1)
print "%.2f%% of contacts in training set opened long-term deposit accounts." % (numY/num_customers * 100.0)
median_age = gl.Sketch(train['age']).quantile(0.5)
num_purchasing_emp_under_median_age = sum(train.apply(lambda x: 1 if x['age']<median_age
and ((x['job']!='unemployed') &
(x['job']!='student') &
(x['job']!='unknown'))
and x['y']==1 else 0))
probY_emp_under_median_age = (num_purchasing_emp_under_median_age / float(numY)) * 100.0
print "%.2f%% of the clients who opened long-term deposit accounts, were employed (but not students) and had age < %d (median)." % (probY_emp_under_median_age, median_age)
Explanation: Lead Scoring Modeling
Part 1: Informed Decision
Targeting employed contacts with age less than 38 (median)
Usually middle-aged, employed people with good annual earnings are much better prospects to contact and keep informed about a new product. Indeed, as shown below, 11.27% of the contacts in the training set opened a long-term deposit account, and 43.06% of those who did were employed (but not students) with age less than 38.
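As a quick arithmetic check of the first figure (approximate, from the stratified split above): the training half holds roughly 20594 contacts, about 2320 of whom subscribed, and 2320 / 20594 ≈ 0.1127, i.e. the 11.27% quoted above.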
End of explanation
target_leadscore = test.apply(lambda x: 1 if x['age']<median_age
and ((x['job']!='unemployed') & (x['job']!='student') & (x['job']!='unknown'))
and x['y']==1 else 0)
age_targeting_roi = calc_call_roi(test, target_leadscore, 0.2)
print 'ROI for targeted calls [employed (not students) and age < %d (median)] to 20%% of contacts: %.2f%%' % (median_age, age_targeting_roi)
Explanation: A large proportion of customers who opened deposit accounts were employed (not students), under 38
Target them as leads and measure our ROI: Major improvement, 28.75%
End of explanation
## remove features that introduce noise in ML prediction
features = train.column_names()
features.remove('duration')
features.remove('y')
features.remove('month_nr')
features.remove('wkday_nr')
features.remove('year')
## GLC AutoML Classifier
toolkit_model = gl.classifier.create(train, features=features, target='y')
Explanation: Result:
ROI for 20% of targeted contacts (employed, not students, and aged under 38): 28.75%
Major improvement over the ROI achieved by calling EVERYONE in the list
Calling everyone in the list does not give a greater ROI!
Part 2: Train a Machine Learning model instead
Learn from ALL features, not just age or status of employment
GraphLab Create AutoML to choose the most effective classifier model automatically!
Dato's classifier toolkit can choose the most effective classifier model automatically.
End of explanation
results = toolkit_model.evaluate(test)
print "accuracy: %.5f, precision: %.5f, recall: %.5f" % (results['accuracy'], results['precision'], results['recall'])
Explanation: The toolkit automatically evaluates several types of algorithms, including: Boosted Trees, Random Forests, Decision Trees, Support Vector Machines, Logistic Regression - with intelligent default parameters. Based on a validation set, it chooses the most accurate model, which in our case is a Boosted Trees Classifier. We can then evaluate this model on the test data set.
End of explanation
toolkit_model.get_feature_importance()
Explanation: This initial model can be considered accurate given that it correctly predicts the purchasing decisions of ~90% of the contacts. However, the toolkit_model leaves room for improvement. Specifically, only ~66% of predicted sales actually convert to sales. Furthermore, only ~24% of actual sales were predicted by the model. In order to better understand the model we can review the importance of the input features.
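To make the two metrics concrete, here is a tiny illustrative calculation - the counts below are hypothetical (chosen only to roughly reproduce the ~66% and ~24% figures), not taken from the model above:
tp, fp, fn = 560, 290, 1760        # hypothetical true positives, false positives, false negatives
precision = tp / float(tp + fp)    # ~0.66: share of predicted sales that actually convert
recall = tp / float(tp + fn)       # ~0.24: share of actual sales the model manages to predict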
End of explanation
toolkit_leadscore = toolkit_model.predict(test,output_type='probability')
toolkit_roi = calc_call_roi(test, toolkit_leadscore, 0.2 )
print 'ROI for calling 20%% of highest predicted contacts: %.2f%%' % toolkit_roi
Explanation: Lead score the contact list and measure our ROI: Major improvement again, 34.87%
After scoring the list by probability to purchase, the ROI for calling the top 20% of the list is:
End of explanation
import matplotlib.pyplot as plt
%matplotlib inline
qfeatures0 = ['emp.var.rate','cons.price.idx','cons.conf.idx','euribor3m']
plt.figure(figsize=(10,10))
subplot_idx = 1
for attrib1 in qfeatures0:
for attrib2 in qfeatures0:
if(attrib2 != attrib1):
if subplot_idx < 5:
plt.subplot(2,2,subplot_idx)
plt.scatter(train[attrib1], train[attrib2])
plt.xlabel(attrib1)
plt.ylabel(attrib2)
plt.title('\'%s\' vs \'%s\'' % (attrib1, attrib2))
subplot_idx +=1
plt.show()
Explanation: Result:
ROI for 20% of contacts as sorted by the descending lead score which was returned by the AutoML Classifier: 34.87%.
Huge improvement (3x greater) over the ROI achieved by calling EVERYONE in the list.
Improved ROI (6.12 percentage points greater) than the ROI achieved by calling 20% of the targeted contacts of Part 1 above (employed, middle-aged people).
Part 3: Tweak the ML Classifier
Adding quadratic interactions, age grouping, tweaking parameters
The scatter plot diagrams among the various int and float variables of the train data set, and more specifically the four plots below, suggest investigating whether interactions exist between these features:
python
qfeatures0 = ['emp.var.rate','cons.price.idx','cons.conf.idx','euribor3m']
End of explanation
## define a quadratic transformer object
quadratic_transformer = gl.feature_engineering.QuadraticFeatures(features=qfeatures0)
## fit the quadratic transformer object over the train set
quadratic = gl.feature_engineering.create(train, quadratic_transformer)
## transform the train data set
qtrain = quadratic.transform(train)
## remove the features that may worse our predictions
qfeatures = qtrain.column_names()
qfeatures.remove('duration')
qfeatures.remove('y')
qfeatures.remove('month_nr')
qfeatures.remove('wkday_nr')
qfeatures.remove('year')
qtrain.head(5)
Explanation: Next we add quadratic interactions between the four features below:
python
qfeatures0 = ['emp.var.rate','cons.price.idx','cons.conf.idx','euribor3m']
End of explanation
new_toolkit_model = gl.classifier.create(qtrain, target='y', features=qfeatures)
Explanation: and re-train the GraphLab Create AutoML Classifier for this new data set qtrain.
End of explanation
results = new_toolkit_model.evaluate(quadratic.transform(test))
print "accuracy: %.5f, precision: %.5f, recall: %.5f" % (results['accuracy'], results['precision'], results['recall'])
Explanation: Next, we evaluate the new AutoML Classifier, new_toolkit_model, on the test data set.
End of explanation
print '\'newtoolkit_model\'\n[GLC AutoML Classifier wt quadratic interactions]:\n'
print new_toolkit_model.get_feature_importance()
print '\'toolkit_model\'\n[GLC AutoML Classifier wo quadratic interactions]:\n'
print toolkit_model.get_feature_importance()
Explanation: Note that this model is almost as accurate as the previous one, with similar precision (~66% of the predicted sales were actually converted to sales) and recall (~24% of actual sales were predicted by the model). However, to get a better feel for the model just trained (new_toolkit_model) and how it differs from the previous one (toolkit_model), we can review the importance of the input features in these two cases.
End of explanation
## show ROI for experimentation model
newtoolkit_leadscore = new_toolkit_model.predict(quadratic.transform(test),output_type='probability')
newtoolkit_roi = calc_call_roi(quadratic.transform(test), newtoolkit_leadscore, 0.2)
print 'ROI for calling predicted contacts: %.2f%%' % newtoolkit_roi
Explanation: By comparing these two models we note that:
1. The quadratic interactions:
emp.var.rate ∗ euribor3m
cons.conf.idx ∗ euribor3m
seem to be important if we want to build a more accurate model and should not be neglected.
2. age, euribor3m and campaign features are significant in both cases.
3. The number of days that passed by after the client was last contacted from a previous campaign (pdays), the number of contacts performed before this campaign and for this client (previous), as well as the channel through which the contact has been made (contact: 'telephone') are important for both models.
4. The specific day_of_week ('mon') that the contact has been made seems to be important for both models.
Lead score the contact list and measure our ROI: New improvement achieved, 35.01%
As a last step of evaluating the new_toolkit_model, let's compute our ROI if we again contact the top 20% of the leads that this new model scored.
End of explanation
qtrain['age'].show()
Explanation: Result:
ROI for 20% of the leads returned by the AutoML Classifier [wt quadratic interactions]: 35.01%.
Improved ROI (0.14 percentage points greater) than the ROI achieved by calling 20% of the leads returned by the AutoML Classifier [wo quadratic interactions] (34.87%).
GraphLab Create Boosted Trees Classifier:
Contacts' age grouping, hyperparameters fine-tuning
As is obvious from the histogram below, most of our contacts were between 30 and 36 years old, 4518 contacts were between 36 and 43, 2931 people between 43 and 50, and about 2500 people between 23 and 30 and between 50 and 56 years old. The remaining contacts were either younger or older than these ages, but certainly not more than 1000 cases or so in a specific age group. Therefore, grouping the age values of our contacts into a pre-defined number of bins may be beneficial for the learning algorithm of choice, and may improve the ROI of our telemarketing campaign even further.
End of explanation
## define a binning transformer for the age attribute of contacts
age_binning_transformer = gl.feature_engineering.FeatureBinner(features='age', strategy='quantile', num_bins=12)
## fit the age binning transformer over the train set
age_binning = gl.feature_engineering.create(train, age_binning_transformer)
## transform the train data set
qtrain1 = age_binning.transform(qtrain)
## remove the features that may worse our predictions
qfeatures1 = qtrain1.column_names()
qfeatures1.remove('duration')
qfeatures1.remove('y')
qfeatures1.remove('month_nr')
qfeatures1.remove('wkday_nr')
qfeatures1.remove('year')
qtrain1['age'].show()
Explanation: To group the age values of our contacts we leverage the FeatureBinner method of the feature_engineering toolkit of GraphLab Create as shown below.
End of explanation
## We create a boosted trees classifier with the enriched dataset.
new_boostedtrees_model = gl.boosted_trees_classifier.create(qtrain1, target='y', features = qfeatures1,
max_iterations = 100,
max_depth=5,
step_size=0.1,
min_child_weight=0.06,
random_seed=1,
early_stopping_rounds=10)
Explanation: Let's now train a boosted trees classifier model using this enriched data set, qtrain1. We have also tweaked its parameters to achieve better predictive performance.
End of explanation
results = new_boostedtrees_model.evaluate(age_binning.transform(quadratic.transform(test)))
print "accuracy: %.5f, precision: %.5f, recall: %.5f" % (results['accuracy'], results['precision'], results['recall'])
Explanation: Next we evaluate the new_boostedtrees_model on the test data set.
End of explanation
print '\'new_boostedtrees_model\'\n[GLC Boosted Trees Classifier wt quadratic interactions,\
age grouping & hyperparams tuned]:\n'
new_boostedtrees_model.get_feature_importance().print_rows(num_rows=20)
print '\'newtoolkit_model\'\n[GLC AutoML Classifier wt quadratic interactions]:\n'
print new_toolkit_model.get_feature_importance()
Explanation: This new model (new_boostedtrees_model) is almost as accurate as the previous one, has higher precision (~66% of the predicted sales were actually converted to sales) and similar recall (~23% of actual sales were predicted by the model). To get a better feel for the model just trained (new_boostedtrees_model) and how it differs from the previous one (new_toolkit_model), we can review the importance of the input features in these two cases.
End of explanation
## show ROI for experimentation model
test1 = age_binning.transform(quadratic.transform(test))
boostedtrees_leadscore = new_boostedtrees_model.predict(test1, output_type='probability')
boostedtrees_roi = calc_call_roi(test1, boostedtrees_leadscore, 0.2)
print 'ROI for calling predicted contacts: %.2f%%' % boostedtrees_roi
Explanation: By comparing these two cases, we note that:
These two quadratic interactions:
cons.price.idx ∗ euribor3m
cons.price.idx ∗ emp.var.rate
enter the new_boostedtrees_model as significant attributes.
euribor3m and campaign features are significant in both cases whereas age is far less important in this new tweaked model.
Various characteristics concerning the campaign, such as the number of days that passed by after the client was last contacted from a previous campaign (pdays), the number of contacts performed before this campaign and for this client (previous), the channel through which the contact has been made (contact: 'telephone'), as well as the specific day_of_week ('mon') on which this contact occurred, seem to be important for both models.
In this tweaked new_boostedtrees_model, attributes describing whether the contact has a personal loan (loan) or credit in default (default), his/her educational (education) and employment status (job), as well as the outcome of the previous marketing campaign (poutcome), get greater importance than before.
Lead score the contact list and measure our ROI: New Improvement achieved, 35.13%
As a last step of evaluating the new_boostedtrees_model, let's compute our ROI if we again contact 20% of the contact leads as scored by this new model.
End of explanation
pct_tocall = 0.2
boostedtrees_list = test1.sort('lead_score', ascending=False)
num_calls = int(len(boostedtrees_list)*pct_tocall)
print 'Assuming we have time and resources to call %d%% of the lead scored contact list, we\
need to make %d phone calls.\n' % (pct_tocall*100, num_calls)
print 'Lead Scored Contact List:'
boostedtrees_list['lead_score', 'age','campaign','euribor3m','job','loan', 'default', 'poutcome'].\
print_rows(num_rows=50, max_row_width=100)
Explanation: Conclusion:
We will choose the new_boostedtrees_model to lead score the contact list. By doing so, we can achieve a 35.13% ROI by only contacting 20% of the people in this list. Assuming we had the budget and time to contact everyone in the list, we would have an ROI of only 10.27%, a fact that emphasizes the importance of lead scoring as a method. Furthermore, the ROI achieved by our tuned new_boostedtrees_model is significantly greater than the ROI returned by simply targeting employed, middle-aged people, which was found to be 28.75%.
Ranked List for Marketing / Sales Teams as returned by the best ML Model
Who should be prioritized to be called next!
Assuming we have time and resources to call 20% of the lead scored contact list, these would be the first 30 people that we should call:
End of explanation |
2,537 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Setup
We're going to download the collected works of Nietzsche to use as our data for this class.
Step1: Sometimes it's useful to have a zero value in the dataset, e.g. for padding
Step2: Map from chars to indices and back again
Step3: idx will be the data we use from now on - it simply converts all the characters to their index (based on the mapping above)
Step4: 3 char model
Create inputs
Create a list of every 4th character, starting at the 0th, 1st, 2nd, then 3rd characters
Step5: Our inputs
Step6: Our output
Step7: The first 4 inputs and outputs
Step8: The number of latent factors to create (i.e. the size of the embedding matrix)
Step9: Create inputs and embedding outputs for each of our 3 character inputs
Step10: Create and train model
Pick a size for our hidden state
Step11: This is the 'green arrow' from our diagram - the layer operation from input to hidden.
Step12: Our first hidden activation is simply this function applied to the result of the embedding of the first character.
Step13: This is the 'orange arrow' from our diagram - the layer operation from hidden to hidden.
Step14: Our second and third hidden activations sum up the previous hidden state (after applying dense_hidden) to the new input state.
Step15: This is the 'blue arrow' from our diagram - the layer operation from hidden to output.
Step16: The third hidden state is the input to our output layer.
Step17: Test model
Step18: Our first RNN!
Create inputs
This is the size of our unrolled RNN.
Step19: For each of 0 through 7, create a list of every 8th character with that starting point. These will be the 8 inputs to our model.
Step20: Then create a list of the next character in each of these series. This will be the labels for our model.
Step21: So each column below is one series of 8 characters from the text.
Step22: ...and this is the next character after each sequence.
Step23: Create and train model
Step24: The first character of each sequence goes through dense_in(), to create our first hidden activations.
Step25: Then for each successive layer we combine the output of dense_in() on the next character with the output of dense_hidden() on the current hidden state, to create the new hidden state.
Step26: Putting the final hidden state through dense_out() gives us our output.
Step27: So now we can create our model.
Step28: Test model
Step29: Our first RNN with keras!
Step30: This is nearly exactly equivalent to the RNN we built ourselves in the previous section.
Step31: Returning sequences
Create inputs
To use a sequence model, we can leave our input unchanged - but we have to change our output to a sequence (of course!)
Here, c_out_dat is identical to c_in_dat, but moved across 1 character.
Step32: Reading down each column shows one set of inputs and outputs.
Step33: Create and train model
Step34: We're going to pass a vector of all zeros as our starting point - here's our input layers for that
Step35: Test model
Step36: Sequence model with keras
Step37: To convert our previous keras model into a sequence model, simply add the 'return_sequences=True' parameter, and add TimeDistributed() around our dense layer.
Step38: One-hot sequence model with keras
This is the keras version of the theano model that we're about to create.
Step39: Stateful model with keras
Step40: A stateful model is easy to create (just add "stateful=True") but harder to train. We had to add batchnorm and use LSTM to get reasonable results.
When using stateful in keras, you have to also add 'batch_input_shape' to the first layer, and fix the batch size there.
Step41: Since we're using a fixed batch shape, we have to ensure our inputs and outputs are a even multiple of the batch size.
Step42: Theano RNN
Step43: Using raw theano, we have to create our weight matrices and bias vectors ourselves - here are the functions we'll use to do so (using glorot initialization).
The return values are wrapped in shared(), which is how we tell theano that it can manage this data (copying it to and from the GPU as necessary).
Step44: We return the weights and biases together as a tuple. For the hidden weights, we'll use an identity initialization (as recommended by Hinton.)
Step45: Theano doesn't actually do any computations until we explicitly compile and evaluate the function (at which point it'll be turned into CUDA code and sent off to the GPU). So our job is to describe the computations that we'll want theano to do - the first step is to tell theano what inputs we'll be providing to our computation
Step46: Now we're ready to create our initial weight matrices.
Step47: Theano handles looping by using the GPU scan operation. We have to tell theano what to do at each step through the scan - this is the function we'll use, which does a single forward pass for one character
Step48: Now we can provide everything necessary for the scan operation, so we can set that up - we have to pass in the function to call at each step, the sequence to step through, the initial values of the outputs, and any other arguments to pass to the step function.
Step49: We can now calculate our loss function, and all of our gradients, with just a couple of lines of code!
Step50: We even have to show theano how to do SGD - so we set up this dictionary of updates to complete after every forward pass, which applies the standard SGD update rule to every weight.
Step51: We're finally ready to compile the function!
Step52: To use it, we simply loop through our input data, calling the function compiled above, and printing our progress from time to time.
Step53: Pure python RNN!
Set up basic functions
Now we're going to try to repeat the above theano RNN, using just pure python (and numpy). Which means, we have to do everything ourselves, including defining the basic functions of a neural net! Below are all of the definitions, along with tests to check that they give the same answers as theano. The functions ending in _d are the derivatives of each function.
Step54: We also have to define our own scan function. Since we're not worrying about running things in parallel, it's very simple to implement
Step55: ...for instance, scan on + is the cumulative sum.
Step56: Set up training
Let's now build the functions to do the forward and backward passes of our RNN. First, define our data and shape.
Step57: Here's the function to do a single forward pass of an RNN, for a single character.
Step58: We use scan to apply the above to a whole sequence of characters.
Step59: Now we can define the backward step. We use a loop to go through every element of the sequence. The derivatives apply the chain rule at each step, accumulating the gradients across the sequence.
Step60: Now we can set up our initial weight matrices. Note that we're not using bias at all in this example, in order to keep things simpler.
Step61: Our loop looks much like the theano loop in the previous section, except that we have to call the backwards step ourselves.
Step62: Keras GRU
Identical to the last keras rnn, but a GRU!
Step63: Theano GRU
Separate weights
The theano GRU looks just like the simple theano RNN, except for the use of the reset and update gates. Each of these gates requires its own hidden and input weights, so we add those to our weight matrices.
Step64: Here's the definition of a gate - it's just a sigmoid applied to the addition of the dot products of the input vectors.
Step65: Our step is nearly identical to before, except that we multiply our hidden state by our reset gate, and we update our hidden state based on the update gate.
Step66: Everything from here on is identical to our simple RNN in theano.
Step67: Combined weights
We can make the previous section simpler and faster by concatenating the hidden and input matrices and inputs together. We're not going to step through this cell by cell - you'll see it's identical to the previous section except for this concatenation. | Python Code:
path = get_file('nietzsche.txt', origin="https://s3.amazonaws.com/text-datasets/nietzsche.txt")
text = open(path).read()
print('corpus length:', len(text))
chars = sorted(list(set(text)))
vocab_size = len(chars)+1
print('total chars:', vocab_size)
Explanation: Setup
We're going to download the collected works of Nietzsche to use as our data for this class.
End of explanation
chars.insert(0, "\0")
''.join(chars[1:-6])
Explanation: Sometimes it's useful to have a zero value in the dataset, e.g. for padding
End of explanation
char_indices = dict((c, i) for i, c in enumerate(chars))
indices_char = dict((i, c) for i, c in enumerate(chars))
Explanation: Map from chars to indices and back again
End of explanation
idx = [char_indices[c] for c in text]
idx[:10]
''.join(indices_char[i] for i in idx[:70])
Explanation: idx will be the data we use from now on - it simply converts all the characters to their index (based on the mapping above)
End of explanation
cs=3
c1_dat = [idx[i] for i in xrange(0, len(idx)-1-cs, cs)]
c2_dat = [idx[i+1] for i in xrange(0, len(idx)-1-cs, cs)]
c3_dat = [idx[i+2] for i in xrange(0, len(idx)-1-cs, cs)]
c4_dat = [idx[i+3] for i in xrange(0, len(idx)-1-cs, cs)]
Explanation: 3 char model
Create inputs
Create a list of every 4th character, starting at the 0th, 1st, 2nd, then 3rd characters
End of explanation
x1 = np.stack(c1_dat[:-2])
x2 = np.stack(c2_dat[:-2])
x3 = np.stack(c3_dat[:-2])
Explanation: Our inputs
End of explanation
y = np.stack(c4_dat[:-2])
Explanation: Our output
End of explanation
x1[:4], x2[:4], x3[:4]
y[:4]
x1.shape, y.shape
Explanation: The first 4 inputs and outputs
End of explanation
n_fac = 42
Explanation: The number of latent factors to create (i.e. the size of the embedding matrix)
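As a quick size check (using the values set throughout this notebook, vocab_size = 86 and n_fac = 42): each embedding matrix below therefore holds 86 * 42 = 3612 learnable weights.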
End of explanation
def embedding_input(name, n_in, n_out):
inp = Input(shape=(1,), dtype='int64', name=name)
emb = Embedding(n_in, n_out, input_length=1)(inp)
return inp, Flatten()(emb)
c1_in, c1 = embedding_input('c1', vocab_size, n_fac)
c2_in, c2 = embedding_input('c2', vocab_size, n_fac)
c3_in, c3 = embedding_input('c3', vocab_size, n_fac)
Explanation: Create inputs and embedding outputs for each of our 3 character inputs
End of explanation
n_hidden = 256
Explanation: Create and train model
Pick a size for our hidden state
End of explanation
dense_in = Dense(n_hidden, activation='relu')
Explanation: This is the 'green arrow' from our diagram - the layer operation from input to hidden.
End of explanation
c1_hidden = dense_in(c1)
Explanation: Our first hidden activation is simply this function applied to the result of the embedding of the first character.
End of explanation
dense_hidden = Dense(n_hidden, activation='tanh')
Explanation: This is the 'orange arrow' from our diagram - the layer operation from hidden to hidden.
End of explanation
c2_dense = dense_in(c2)
hidden_2 = dense_hidden(c1_hidden)
c2_hidden = merge([c2_dense, hidden_2])
c3_dense = dense_in(c3)
hidden_3 = dense_hidden(c2_hidden)
c3_hidden = merge([c3_dense, hidden_3])
Explanation: Our second and third hidden activations sum up the previous hidden state (after applying dense_hidden) to the new input state.
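Spelled out (a sketch using e1, e2, e3 for the three character embeddings, and relying on Keras' default merge mode of 'sum'), the three hidden states computed above are roughly:
h1 = relu(dot(e1, W_in) + b_in)
h2 = relu(dot(e2, W_in) + b_in) + tanh(dot(h1, W_h) + b_h)
h3 = relu(dot(e3, W_in) + b_in) + tanh(dot(h2, W_h) + b_h)
where W_in/b_in belong to dense_in and W_h/b_h to dense_hidden.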
End of explanation
dense_out = Dense(vocab_size, activation='softmax')
Explanation: This is the 'blue arrow' from our diagram - the layer operation from hidden to output.
End of explanation
c4_out = dense_out(c3_hidden)
model = Model([c1_in, c2_in, c3_in], c4_out)
model.compile(loss='sparse_categorical_crossentropy', optimizer=Adam())
model.optimizer.lr=0.000001
model.fit([x1, x2, x3], y, batch_size=64, nb_epoch=4)
model.optimizer.lr=0.01
model.fit([x1, x2, x3], y, batch_size=64, nb_epoch=4)
model.optimizer.lr.set_value(0.000001)
model.fit([x1, x2, x3], y, batch_size=64, nb_epoch=4)
model.optimizer.lr.set_value(0.01)
model.fit([x1, x2, x3], y, batch_size=64, nb_epoch=4)
Explanation: The third hidden state is the input to our output layer.
End of explanation
def get_next(inp):
idxs = [char_indices[c] for c in inp]
arrs = [np.array(i)[np.newaxis] for i in idxs]
p = model.predict(arrs)
i = np.argmax(p)
return chars[i]
get_next('phi')
get_next(' th')
get_next(' an')
Explanation: Test model
End of explanation
cs=8
Explanation: Our first RNN!
Create inputs
This is the size of our unrolled RNN.
End of explanation
c_in_dat = [[idx[i+n] for i in xrange(0, len(idx)-1-cs, cs)]
for n in range(cs)]
Explanation: For each of 0 through 7, create a list of every 8th character with that starting point. These will be the 8 inputs to our model.
End of explanation
c_out_dat = [idx[i+cs] for i in xrange(0, len(idx)-1-cs, cs)]
xs = [np.stack(c[:-2]) for c in c_in_dat]
len(xs), xs[0].shape
y = np.stack(c_out_dat[:-2])
Explanation: Then create a list of the next character in each of these series. This will be the labels for our model.
End of explanation
[xs[n][:cs] for n in range(cs)]
Explanation: So each column below is one series of 8 characters from the text.
End of explanation
y[:cs]
n_fac = 42
Explanation: ...and this is the next character after each sequence.
End of explanation
def embedding_input(name, n_in, n_out):
inp = Input(shape=(1,), dtype='int64', name=name+'_in')
emb = Embedding(n_in, n_out, input_length=1, name=name+'_emb')(inp)
return inp, Flatten()(emb)
c_ins = [embedding_input('c'+str(n), vocab_size, n_fac) for n in range(cs)]
n_hidden = 256
dense_in = Dense(n_hidden, activation='relu')
dense_hidden = Dense(n_hidden, activation='relu', init='identity')
dense_out = Dense(vocab_size, activation='softmax')
Explanation: Create and train model
End of explanation
hidden = dense_in(c_ins[0][1])
Explanation: The first character of each sequence goes through dense_in(), to create our first hidden activations.
End of explanation
for i in range(1,cs):
c_dense = dense_in(c_ins[i][1])
hidden = dense_hidden(hidden)
hidden = merge([c_dense, hidden])
Explanation: Then for each successive layer we combine the output of dense_in() on the next character with the output of dense_hidden() on the current hidden state, to create the new hidden state.
End of explanation
c_out = dense_out(hidden)
Explanation: Putting the final hidden state through dense_out() gives us our output.
End of explanation
model = Model([c[0] for c in c_ins], c_out)
model.compile(loss='sparse_categorical_crossentropy', optimizer=Adam())
model.fit(xs, y, batch_size=64, nb_epoch=12)
Explanation: So now we can create our model.
End of explanation
def get_next(inp):
idxs = [np.array(char_indices[c])[np.newaxis] for c in inp]
p = model.predict(idxs)
return chars[np.argmax(p)]
get_next('for thos')
get_next('part of ')
get_next('queens a')
Explanation: Test model
End of explanation
n_hidden, n_fac, cs, vocab_size = (256, 42, 8, 86)
Explanation: Our first RNN with keras!
End of explanation
model=Sequential([
Embedding(vocab_size, n_fac, input_length=cs),
SimpleRNN(n_hidden, activation='relu', inner_init='identity'),
Dense(vocab_size, activation='softmax')
])
model.summary()
model.compile(loss='sparse_categorical_crossentropy', optimizer=Adam())
model.fit(np.stack(xs,1), y, batch_size=64, nb_epoch=8)
def get_next_keras(inp):
idxs = [char_indices[c] for c in inp]
arrs = np.array(idxs)[np.newaxis,:]
p = model.predict(arrs)[0]
return chars[np.argmax(p)]
get_next_keras('this is ')
get_next_keras('part of ')
get_next_keras('queens a')
Explanation: This is nearly exactly equivalent to the RNN we built ourselves in the previous section.
End of explanation
#c_in_dat = [[idx[i+n] for i in xrange(0, len(idx)-1-cs, cs)]
# for n in range(cs)]
c_out_dat = [[idx[i+n] for i in xrange(1, len(idx)-cs, cs)]
for n in range(cs)]
ys = [np.stack(c[:-2]) for c in c_out_dat]
Explanation: Returning sequences
Create inputs
To use a sequence model, we can leave our input unchanged - but we have to change our output to a sequence (of course!)
Here, c_out_dat is identical to c_in_dat, but moved across 1 character.
End of explanation
[xs[n][:cs] for n in range(cs)]
[ys[n][:cs] for n in range(cs)]
Explanation: Reading down each column shows one set of inputs and outputs.
End of explanation
dense_in = Dense(n_hidden, activation='relu')
dense_hidden = Dense(n_hidden, activation='relu', init='identity')
dense_out = Dense(vocab_size, activation='softmax', name='output')
Explanation: Create and train model
End of explanation
inp1 = Input(shape=(n_fac,), name='zeros')
hidden = dense_in(inp1)
outs = []
for i in range(cs):
c_dense = dense_in(c_ins[i][1])
hidden = dense_hidden(hidden)
hidden = merge([c_dense, hidden], mode='sum')
# every layer now has an output
outs.append(dense_out(hidden))
model = Model([inp1] + [c[0] for c in c_ins], outs)
model.compile(loss='sparse_categorical_crossentropy', optimizer=Adam())
zeros = np.tile(np.zeros(n_fac), (len(xs[0]),1))
zeros.shape
model.fit([zeros]+xs, ys, batch_size=64, nb_epoch=12)
Explanation: We're going to pass a vector of all zeros as our starting point - here's our input layers for that:
End of explanation
def get_nexts(inp):
idxs = [char_indices[c] for c in inp]
arrs = [np.array(i)[np.newaxis] for i in idxs]
p = model.predict([np.zeros(n_fac)[np.newaxis,:]] + arrs)
print(list(inp))
return [chars[np.argmax(o)] for o in p]
get_nexts(' this is')
get_nexts(' part of')
Explanation: Test model
End of explanation
n_hidden, n_fac, cs, vocab_size
Explanation: Sequence model with keras
End of explanation
model=Sequential([
Embedding(vocab_size, n_fac, input_length=cs),
SimpleRNN(n_hidden, return_sequences=True, activation='relu', inner_init='identity'),
TimeDistributed(Dense(vocab_size, activation='softmax')),
])
model.summary()
model.compile(loss='sparse_categorical_crossentropy', optimizer=Adam())
xs[0].shape
x_rnn=np.stack(xs, axis=1)
y_rnn=np.expand_dims(np.stack(ys, axis=1), -1)
x_rnn.shape, y_rnn.shape
model.fit(x_rnn, y_rnn, batch_size=64, nb_epoch=8)
def get_nexts_keras(inp):
idxs = [char_indices[c] for c in inp]
arr = np.array(idxs)[np.newaxis,:]
p = model.predict(arr)[0]
print(list(inp))
return [chars[np.argmax(o)] for o in p]
get_nexts_keras(' this is')
Explanation: To convert our previous keras model into a sequence model, simply add the 'return_sequences=True' parameter, and add TimeDistributed() around our dense layer.
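For reference, a sketch of the tensor shapes flowing through this model for a batch of size b:
Embedding: (b, 8) -> (b, 8, 42)
SimpleRNN with return_sequences=True: (b, 8, 42) -> (b, 8, 256)
TimeDistributed(Dense): (b, 8, 256) -> (b, 8, 86)
Without return_sequences, the RNN would instead emit only its final (b, 256) hidden state, as in the previous model.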
End of explanation
model=Sequential([
SimpleRNN(n_hidden, return_sequences=True, input_shape=(cs, vocab_size),
activation='relu', inner_init='identity'),
TimeDistributed(Dense(vocab_size, activation='softmax')),
])
model.compile(loss='categorical_crossentropy', optimizer=Adam())
oh_ys = [to_categorical(o, vocab_size) for o in ys]
oh_y_rnn=np.stack(oh_ys, axis=1)
oh_xs = [to_categorical(o, vocab_size) for o in xs]
oh_x_rnn=np.stack(oh_xs, axis=1)
oh_x_rnn.shape, oh_y_rnn.shape
model.fit(oh_x_rnn, oh_y_rnn, batch_size=64, nb_epoch=8)
def get_nexts_oh(inp):
idxs = np.array([char_indices[c] for c in inp])
arr = to_categorical(idxs, vocab_size)
p = model.predict(arr[np.newaxis,:])[0]
print(list(inp))
return [chars[np.argmax(o)] for o in p]
get_nexts_oh(' this is')
Explanation: One-hot sequence model with keras
This is the keras version of the theano model that we're about to create.
End of explanation
bs=64
Explanation: Stateful model with keras
End of explanation
model=Sequential([
Embedding(vocab_size, n_fac, input_length=cs, batch_input_shape=(bs,8)),
BatchNormalization(),
LSTM(n_hidden, return_sequences=True, stateful=True),
TimeDistributed(Dense(vocab_size, activation='softmax')),
])
model.compile(loss='sparse_categorical_crossentropy', optimizer=Adam())
Explanation: A stateful model is easy to create (just add "stateful=True") but harder to train. We had to add batchnorm and use LSTM to get reasonable results.
When using stateful in keras, you have to also add 'batch_input_shape' to the first layer, and fix the batch size there.
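One practical consequence (a minimal usage sketch, assuming the standard Keras stateful-RNN API): the final hidden state of each batch is carried over as the initial state of the next one, which is why training below uses shuffle=False, and why you clear the state manually whenever you want a fresh pass:
model.reset_states()  # drop the hidden state carried over from previous batches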
End of explanation
mx = len(x_rnn)//bs*bs
model.fit(x_rnn[:mx], y_rnn[:mx], batch_size=bs, nb_epoch=4, shuffle=False)
model.optimizer.lr=1e-4
model.fit(x_rnn[:mx], y_rnn[:mx], batch_size=bs, nb_epoch=4, shuffle=False)
model.fit(x_rnn[:mx], y_rnn[:mx], batch_size=bs, nb_epoch=4, shuffle=False)
Explanation: Since we're using a fixed batch shape, we have to ensure our inputs and outputs are a even multiple of the batch size.
End of explanation
n_input = vocab_size
n_output = vocab_size
Explanation: Theano RNN
End of explanation
def init_wgts(rows, cols):
scale = math.sqrt(2/rows)
return shared(normal(scale=scale, size=(rows, cols)).astype(np.float32))
def init_bias(rows):
return shared(np.zeros(rows, dtype=np.float32))
Explanation: Using raw theano, we have to create our weight matrices and bias vectors ourselves - here are the functions we'll use to do so (using glorot initialization).
The return values are wrapped in shared(), which is how we tell theano that it can manage this data (copying it to and from the GPU as necessary).
End of explanation
def wgts_and_bias(n_in, n_out):
return init_wgts(n_in, n_out), init_bias(n_out)
def id_and_bias(n):
return shared(np.eye(n, dtype=np.float32)), init_bias(n)
Explanation: We return the weights and biases together as a tuple. For the hidden weights, we'll use an identity initialization (as recommended by Hinton.)
End of explanation
t_inp = T.matrix('inp')
t_outp = T.matrix('outp')
t_h0 = T.vector('h0')
lr = T.scalar('lr')
all_args = [t_h0, t_inp, t_outp, lr]
Explanation: Theano doesn't actually do any computations until we explicitly compile and evaluate the function (at which point it'll be turned into CUDA code and sent off to the GPU). So our job is to describe the computations that we'll want theano to do - the first step is to tell theano what inputs we'll be providing to our computation:
End of explanation
W_h = id_and_bias(n_hidden)
W_x = wgts_and_bias(n_input, n_hidden)
W_y = wgts_and_bias(n_hidden, n_output)
w_all = list(chain.from_iterable([W_h, W_x, W_y]))
Explanation: Now we're ready to create our initial weight matrices.
End of explanation
def step(x, h, W_h, b_h, W_x, b_x, W_y, b_y):
# Calculate the hidden activations
h = nnet.relu(T.dot(x, W_x) + b_x + T.dot(h, W_h) + b_h)
# Calculate the output activations
y = nnet.softmax(T.dot(h, W_y) + b_y)
# Return both (the 'Flatten()' is to work around a theano bug)
return h, T.flatten(y, 1)
Explanation: Theano handles looping by using the GPU scan operation. We have to tell theano what to do at each step through the scan - this is the function we'll use, which does a single forward pass for one character:
End of explanation
[v_h, v_y], _ = theano.scan(step, sequences=t_inp,
outputs_info=[t_h0, None], non_sequences=w_all)
Explanation: Now we can provide everything necessary for the scan operation, so we can set that up - we have to pass in the function to call at each step, the sequence to step through, the initial values of the outputs, and any other arguments to pass to the step function.
End of explanation
error = nnet.categorical_crossentropy(v_y, t_outp).sum()
g_all = T.grad(error, w_all)
Explanation: We can now calculate our loss function, and all of our gradients, with just a couple of lines of code!
End of explanation
def upd_dict(wgts, grads, lr):
return OrderedDict({w: w-g*lr for (w,g) in zip(wgts,grads)})
upd = upd_dict(w_all, g_all, lr)
Explanation: We even have to show theano how to do SGD - so we set up this dictionary of updates to complete after every forward pass, which applies the standard SGD update rule to every weight.
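In other words, after each sequence every weight is updated with the plain SGD rule w <- w - lr * d(error)/d(w), which is exactly what the dictionary comprehension above encodes.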
End of explanation
fn = theano.function(all_args, error, updates=upd, allow_input_downcast=True)
X = oh_x_rnn
Y = oh_y_rnn
X.shape, Y.shape
Explanation: We're finally ready to compile the function!
End of explanation
err=0.0; l_rate=0.01
for i in range(len(X)):
err+=fn(np.zeros(n_hidden), X[i], Y[i], l_rate)
if i % 1000 == 999:
print ("Error:{:.3f}".format(err/1000))
err=0.0
f_y = theano.function([t_h0, t_inp], v_y, allow_input_downcast=True)
pred = np.argmax(f_y(np.zeros(n_hidden), X[6]), axis=1)
act = np.argmax(X[6], axis=1)
[indices_char[o] for o in act]
[indices_char[o] for o in pred]
Explanation: To use it, we simply loop through our input data, calling the function compiled above, and printing our progress from time to time.
End of explanation
def sigmoid(x): return 1/(1+np.exp(-x))
def sigmoid_d(x):
output = sigmoid(x)
return output*(1-output)
def relu(x): return np.maximum(0., x)
def relu_d(x): return (x > 0.)*1.
relu(np.array([3.,-3.])), relu_d(np.array([3.,-3.]))
def dist(a,b): return pow(a-b,2)
def dist_d(a,b): return 2*(a-b)
import pdb
eps = 1e-7
def x_entropy(pred, actual):
return -np.sum(actual * np.log(np.clip(pred, eps, 1-eps)))
def x_entropy_d(pred, actual): return -actual/pred
def softmax(x): return np.exp(x)/np.exp(x).sum()
def softmax_d(x):
sm = softmax(x)
res = np.expand_dims(-sm,-1)*sm
res[np.diag_indices_from(res)] = sm*(1-sm)
return res
test_preds = np.array([0.2,0.7,0.1])
test_actuals = np.array([0.,1.,0.])
nnet.categorical_crossentropy(test_preds, test_actuals).eval()
x_entropy(test_preds, test_actuals)
test_inp = T.dvector()
test_out = nnet.categorical_crossentropy(test_inp, test_actuals)
test_grad = theano.function([test_inp], T.grad(test_out, test_inp))
test_grad(test_preds)
x_entropy_d(test_preds, test_actuals)
pre_pred = random(oh_x_rnn[0][0].shape)
preds = softmax(pre_pred)
actual = oh_x_rnn[0][0]
np.allclose(softmax_d(pre_pred).dot(loss_d(preds,actual)), preds-actual)
softmax(test_preds)
nnet.softmax(test_preds).eval()
test_out = T.flatten(nnet.softmax(test_inp))
test_grad = theano.function([test_inp], theano.gradient.jacobian(test_out, test_inp))
test_grad(test_preds)
softmax_d(test_preds)
act=relu
act_d = relu_d
loss=x_entropy
loss_d=x_entropy_d
Explanation: Pure python RNN!
Set up basic functions
Now we're going to try to repeat the above theano RNN, using just pure python (and numpy). Which means, we have to do everything ourselves, including defining the basic functions of a neural net! Below are all of the definitions, along with tests to check that they give the same answers as theano. The functions ending in _d are the derivatives of each function.
End of explanation
def scan(fn, start, seq):
res = []
prev = start
for s in seq:
app = fn(prev, s)
res.append(app)
prev = app
return res
Explanation: We also have to define our own scan function. Since we're not worrying about running things in parallel, it's very simple to implement:
End of explanation
scan(lambda prev,curr: prev+curr, 0, range(5))
Explanation: ...for instance, scan on + is the cumulative sum.
End of explanation
inp = oh_x_rnn
outp = oh_y_rnn
n_input = vocab_size
n_output = vocab_size
inp.shape, outp.shape
Explanation: Set up training
Let's now build the functions to do the forward and backward passes of our RNN. First, define our data and shape.
End of explanation
def one_char(prev, item):
# Previous state
tot_loss, pre_hidden, pre_pred, hidden, ypred = prev
# Current inputs and output
x, y = item
pre_hidden = np.dot(x,w_x) + np.dot(hidden,w_h)
hidden = act(pre_hidden)
pre_pred = np.dot(hidden,w_y)
ypred = softmax(pre_pred)
return (
# Keep track of loss so we can report it
tot_loss+loss(ypred, y),
# Used in backprop
pre_hidden, pre_pred,
# Used in next iteration
hidden,
# To provide predictions
ypred)
Explanation: Here's the function to do a single forward pass of an RNN, for a single character.
End of explanation
def get_chars(n): return zip(inp[n], outp[n])
def one_fwd(n): return scan(one_char, (0,0,0,np.zeros(n_hidden),0), get_chars(n))
Explanation: We use scan to apply the above to a whole sequence of characters.
End of explanation
# "Columnify" a vector
def col(x): return x[:,newaxis]
def one_bkwd(args, n):
global w_x,w_y,w_h
i=inp[n] # 8x86
o=outp[n] # 8x86
d_pre_hidden = np.zeros(n_hidden) # 256
for p in reversed(range(len(i))):
totloss, pre_hidden, pre_pred, hidden, ypred = args[p]
x=i[p] # 86
y=o[p] # 86
d_pre_pred = softmax_d(pre_pred).dot(loss_d(ypred,y)) # 86
d_pre_hidden = (np.dot(d_pre_hidden, w_h.T)
+ np.dot(d_pre_pred,w_y.T)) * act_d(pre_hidden) # 256
# d(loss)/d(w_y) = d(loss)/d(pre_pred) * d(pre_pred)/d(w_y)
w_y -= col(hidden) * d_pre_pred * alpha
# d(loss)/d(w_h) = d(loss)/d(pre_hidden[p-1]) * d(pre_hidden[p-1])/d(w_h)
if (p>0): w_h -= args[p-1][3].dot(d_pre_hidden) * alpha
w_x -= col(x)*d_pre_hidden * alpha
return d_pre_hidden
Explanation: Now we can define the backward step. We use a loop to go through every element of the sequence. The derivatives apply the chain rule at each step, accumulating the gradients across the sequence.
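Written out (a sketch that mirrors the code above, with alpha as the learning rate):
d_pre_pred = softmax_d(pre_pred).dot(x_entropy_d(ypred, y)), which simplifies to ypred - y (as verified numerically earlier);
d_pre_hidden = (dot(d_pre_hidden_next, w_h.T) + dot(d_pre_pred, w_y.T)) * relu_d(pre_hidden);
and the updates are w_y -= col(hidden) * d_pre_pred * alpha, w_h -= hidden_prev.dot(d_pre_hidden) * alpha, and w_x -= col(x) * d_pre_hidden * alpha.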
End of explanation
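To see why the weight updates use the col helper, note that the outer product of a hidden-state vector and an output gradient has exactly the shape of w_y (the values below are placeholders, just for the shapes):
d_pre_pred_demo = np.ones(n_output)   # placeholder gradient w.r.t. the pre-softmax output
hidden_demo = np.ones(n_hidden)       # placeholder hidden state
(col(hidden_demo) * d_pre_pred_demo).shape   # (n_hidden, n_output), same shape as w_y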
scale=math.sqrt(2./n_input)
w_x = normal(scale=scale, size=(n_input,n_hidden))
w_y = normal(scale=scale, size=(n_hidden, n_output))
w_h = np.eye(n_hidden, dtype=np.float32)
Explanation: Now we can set up our initial weight matrices. Note that we're not using bias at all in this example, in order to keep things simpler.
End of explanation
overallError=0
alpha=0.0001
for n in range(10000):
res = one_fwd(n)
overallError+=res[-1][0]
deriv = one_bkwd(res, n)
if(n % 1000 == 999):
print ("Error:{:.4f}; Gradient:{:.5f}".format(
overallError/1000, np.linalg.norm(deriv)))
overallError=0
Explanation: Our loop looks much like the theano loop in the previous section, except that we have to call the backwards step ourselves.
End of explanation
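Once trained, the same forward pass can be reused to get predictions for a sequence; a rough sketch (ypred is the last element of each step's tuple):
res = one_fwd(0)
seq_preds = np.array([step[4] for step in res])   # per-character softmax outputs
seq_preds.argmax(axis=1)                          # predicted character indices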
model=Sequential([
GRU(n_hidden, return_sequences=True, input_shape=(cs, vocab_size),
activation='relu', inner_init='identity'),
TimeDistributed(Dense(vocab_size, activation='softmax')),
])
model.compile(loss='categorical_crossentropy', optimizer=Adam())
model.fit(oh_x_rnn, oh_y_rnn, batch_size=64, nb_epoch=8)
get_nexts_oh(' this is')
Explanation: Keras GRU
Identical to the last keras rnn, but a GRU!
End of explanation
W_h = id_and_bias(n_hidden)
W_x = init_wgts(n_input, n_hidden)
W_y = wgts_and_bias(n_hidden, n_output)
rW_h = init_wgts(n_hidden, n_hidden)
rW_x = wgts_and_bias(n_input, n_hidden)
uW_h = init_wgts(n_hidden, n_hidden)
uW_x = wgts_and_bias(n_input, n_hidden)
w_all = list(chain.from_iterable([W_h, W_y, uW_x, rW_x]))
w_all.extend([W_x, uW_h, rW_h])
Explanation: Theano GRU
Separate weights
The theano GRU looks just like the simple theano RNN, except for the use of the reset and update gates. Each of these gates requires its own hidden and input weights, so we add those to our weight matrices.
End of explanation
def gate(x, h, W_h, W_x, b_x):
return nnet.sigmoid(T.dot(x, W_x) + b_x + T.dot(h, W_h))
Explanation: Here's the definition of a gate - it's just a sigmoid applied to the addition of the dot products of the input vectors.
End of explanation
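The same gate written in plain numpy, just for intuition (not used by the theano graph):
def np_gate(x, h, W_h, W_x, b_x):
    # sigmoid of the two dot products plus the bias
    return 1 / (1 + np.exp(-(np.dot(x, W_x) + b_x + np.dot(h, W_h))))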
def step(x, h, W_h, b_h, W_y, b_y, uW_x, ub_x, rW_x, rb_x, W_x, uW_h, rW_h):
reset = gate(x, h, rW_h, rW_x, rb_x)
update = gate(x, h, uW_h, uW_x, ub_x)
h_new = gate(x, h * reset, W_h, W_x, b_h)
h = update*h + (1-update)*h_new
y = nnet.softmax(T.dot(h, W_y) + b_y)
return h, T.flatten(y, 1)
Explanation: Our step is nearly identical to before, except that we multiply our hidden state by our reset gate, and we update our hidden state based on the update gate.
End of explanation
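The update gate interpolates between the old and the candidate hidden state; a tiny numeric example (made-up values) makes that explicit:
update_demo = np.array([0.9, 0.1])
h_old, h_new = np.array([1., 1.]), np.array([0., 0.])
update_demo * h_old + (1 - update_demo) * h_new   # -> array([ 0.9,  0.1])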
[v_h, v_y], _ = theano.scan(step, sequences=t_inp,
outputs_info=[t_h0, None], non_sequences=w_all)
error = nnet.categorical_crossentropy(v_y, t_outp).sum()
g_all = T.grad(error, w_all)
upd = upd_dict(w_all, g_all, lr)
fn = theano.function(all_args, error, updates=upd, allow_input_downcast=True)
err=0.0; l_rate=0.1
for i in range(len(X)):
err+=fn(np.zeros(n_hidden), X[i], Y[i], l_rate)
if i % 1000 == 999:
l_rate *= 0.95
print ("Error:{:.2f}".format(err/1000))
err=0.0
Explanation: Everything from here on is identical to our simple RNN in theano.
End of explanation
W = (shared(np.concatenate([np.eye(n_hidden), normal(size=(n_input, n_hidden))])
.astype(np.float32)), init_bias(n_hidden))
rW = wgts_and_bias(n_input+n_hidden, n_hidden)
uW = wgts_and_bias(n_input+n_hidden, n_hidden)
W_y = wgts_and_bias(n_hidden, n_output)
w_all = list(chain.from_iterable([W, W_y, uW, rW]))
def gate(m, W, b): return nnet.sigmoid(T.dot(m, W) + b)
def step(x, h, W, b, W_y, b_y, uW, ub, rW, rb):
m = T.concatenate([h, x])
reset = gate(m, rW, rb)
update = gate(m, uW, ub)
m = T.concatenate([h*reset, x])
h_new = gate(m, W, b)
h = update*h + (1-update)*h_new
y = nnet.softmax(T.dot(h, W_y) + b_y)
return h, T.flatten(y, 1)
[v_h, v_y], _ = theano.scan(step, sequences=t_inp,
outputs_info=[t_h0, None], non_sequences=w_all)
def upd_dict(wgts, grads, lr):
return OrderedDict({w: w-g*lr for (w,g) in zip(wgts,grads)})
error = nnet.categorical_crossentropy(v_y, t_outp).sum()
g_all = T.grad(error, w_all)
upd = upd_dict(w_all, g_all, lr)
fn = theano.function(all_args, error, updates=upd, allow_input_downcast=True)
err=0.0; l_rate=0.01
for i in range(len(X)):
err+=fn(np.zeros(n_hidden), X[i], Y[i], l_rate)
if i % 1000 == 999:
print ("Error:{:.2f}".format(err/1000))
err=0.0
Explanation: Combined weights
We can make the previous section simpler and faster by concatenating the hidden and input matrices and inputs together. We're not going to step through this cell by cell - you'll see it's identical to the previous section except for this concatenation.
End of explanation |
2,538 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Objectives
* Learn how to parse html.
* Create models that capture different aspects of the problem.
* How to learn processes in parallel?
Step1: Text Features based on the boiler plate
Text Features based on the parsed raw html
Numerical features
Train different models on different datasets and then use their predictions as inputs to a next-stage classifier for the final prediction.
Step2: Split into training and test sets.
Step3: Load Textual Features Prepared from raw content
Step5: Text features from Boilerplate
Step6: Pipeline involving Stemming
Step7: Blending
Step8: Train on full dataset.
Step9: Submissions | Python Code:
import pandas as pd
import numpy as np
import os, sys
import re, json
from urllib.parse import urlparse
from sklearn.base import BaseEstimator, TransformerMixin
from sklearn.preprocessing import Imputer, FunctionTransformer
from sklearn.pipeline import Pipeline, FeatureUnion
from sklearn.preprocessing import StandardScaler, LabelEncoder, MinMaxScaler
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.externals import joblib
from sklearn.decomposition import TruncatedSVD
from sklearn.feature_extraction.text import ENGLISH_STOP_WORDS
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.feature_selection import chi2, SelectKBest
from sklearn.linear_model import LogisticRegression
from sklearn.cross_validation import KFold
from nltk.stem.snowball import SnowballStemmer
from nltk.stem import WordNetLemmatizer
from nltk import word_tokenize
import xgboost as xgb
import warnings
warnings.filterwarnings('ignore')
basepath = os.path.expanduser('~/Desktop/src/Stumbleupon_classification_challenge/')
sys.path.append(os.path.join(basepath, 'src'))
np.random.seed(4)
from data import load_datasets
from models import train_test_split, cross_val_scheme
# Initialize Stemmer
sns = SnowballStemmer(language='english')
train, test, sample_sub = load_datasets.load_dataset()
train['is_news'] = train.is_news.fillna(-999)
test['is_news'] = test.is_news.fillna(-999)
Explanation: Objectives
* Learn how to parse html.
* Create models that capture different aspects of the problem.
* How to learn processes in parallel?
End of explanation
def extract_top_level_domain(url):
parsed_url = urlparse(url)
top_level = parsed_url[1].split('.')[-1]
return top_level
def get_tlds(urls):
return np.array([extract_top_level_domain(url) for url in urls])
train['tlds'] = get_tlds(train.url)
test['tlds'] = get_tlds(test.url)
ohe = pd.get_dummies(list(train.tlds) + list(test.tlds))
train = pd.concat((train, ohe.iloc[:len(train)]), axis=1)
test = pd.concat((test, ohe.iloc[len(train):]), axis=1)
class NumericalFeatures(BaseEstimator, TransformerMixin):
@staticmethod
def url_depth(url):
parsed_url = urlparse(url)
path = parsed_url.path
return len(list(filter(lambda x: len(x)> 0, path.split('/'))))
@staticmethod
def get_url_depths(urls):
return np.array([NumericalFeatures.url_depth(url) for url in urls])
def __init__(self, numerical_features):
self.features = numerical_features
def fit(self, X, y=None):
return self
def transform(self, df):
df['url_depth'] = self.get_url_depths(df.url)
numeric_features = self.features + ['url_depth']
df_numeric = df[numeric_features]
return df_numeric
Explanation: Text Features based on the boiler plate
Text Features based on the parsed raw html
Numerical features
Train different models on different datasets and then use their predictions as inputs to a next-stage classifier for the final prediction.
End of explanation
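The stacking idea in miniature (a hedged sketch: preds_numeric, preds_text and preds_raw are placeholder out-of-fold prediction arrays, not variables defined in this notebook):
first_stage_feats = np.column_stack([preds_numeric, preds_text, preds_raw])  # placeholder predictions
second_stage = LogisticRegression().fit(first_stage_feats, train.label)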
params = {
'test_size': 0.2,
'random_state': 2,
'stratify': train.is_news
}
itrain, itest = train_test_split.tr_ts_split(len(train), **params)
X_train = train.iloc[itrain]
X_test = train.iloc[itest]
y_train = train.iloc[itrain].label
y_test = train.iloc[itest].label
numeric_features = list(train.select_dtypes(exclude=['object']).columns[1:])
numeric_features.remove('label')
pipeline = Pipeline([
('feature_extractor', NumericalFeatures(numeric_features)),
('imputer', Imputer(strategy='mean')),
('scaler', StandardScaler()),
('model', xgb.XGBClassifier(learning_rate=.08, max_depth=6))
])
pipeline.fit(X_train, y_train)
# cross validation
params = {
'n_folds': 5,
'shuffle': True,
'random_state': 3
}
scores, mean_score, std_score = cross_val_scheme.cv_scheme(pipeline, X_train, y_train, train.iloc[itrain].is_news, **params)
print('CV Scores: %s'%(scores))
print('Mean CV Score: %f'%(mean_score))
print('Std CV Score: %f'%(std_score))
y_preds = pipeline.predict_proba(X_test)[:, 1]
print('ROC AUC score on the test set ', roc_auc_score(y_test, y_preds))
joblib.dump(pipeline, os.path.join(basepath, 'data/processed/pipeline_numeric/pipeline_numeric.pkl'))
Explanation: Split into training and test sets.
End of explanation
train = joblib.load(os.path.join(basepath, 'data/processed/train_raw_content.pkl'))
test = joblib.load(os.path.join(basepath, 'data/processed/test_raw_content.pkl'))
Explanation: Load Textual Features Prepared from raw content
End of explanation
train_json = list(map(json.loads, train.boilerplate))
test_json = list(map(json.loads, test.boilerplate))
train['boilerplate'] = train_json
test['boilerplate'] = test_json
def get_component(boilerplate, key):
    """
    Get value for a particular key in boilerplate json,
    if present return the value else return an empty string
    boilerplate: list of boilerplate text in json format
    key: key for which we want to fetch value e.g. body, title and url
    """
    return np.array([bp[key] if key in bp and bp[key] else u'' for bp in boilerplate])
train['body_bp'] = get_component(train.boilerplate, 'body')
test['body_bp'] = get_component(test.boilerplate, 'body')
train['title_bp'] = get_component(train.boilerplate, 'title')
test['title_bp'] = get_component(test.boilerplate, 'title')
train['url_component'] = get_component(train.boilerplate, 'url')
test['url_component'] = get_component(test.boilerplate, 'url')
class LemmaTokenizer(object):
def __init__(self):
self.wnl = WordNetLemmatizer()
def __call__(self, doc):
return [self.wnl.lemmatize(t) for t in word_tokenize(doc)]
class VarSelect(BaseEstimator, TransformerMixin):
def __init__(self, keys):
self.keys = keys
def fit(self, X, y=None):
return self
def transform(self, df):
return df[self.keys]
class StemTokenizer(object):
def __init__(self):
self.sns = sns
def __call__(self, doc):
return [self.sns.stem(t) for t in word_tokenize(doc)]
def remove_non_alphanumeric(df):
return df.replace(r'[^A-Za-z0-9]+', ' ', regex=True)
strip_non_words = FunctionTransformer(remove_non_alphanumeric, validate=False)
# Lemma Tokenizer
pipeline_lemma = Pipeline([
('strip', strip_non_words),
('union', FeatureUnion([
('body', Pipeline([
('var', VarSelect(keys='body_bp')),
('tfidf', TfidfVectorizer(strip_accents='unicode', tokenizer=LemmaTokenizer(),
ngram_range=(1, 2), min_df=3, sublinear_tf=True)),
('svd', TruncatedSVD(n_components=100))
])),
('title', Pipeline([
('var', VarSelect(keys='title_bp')),
('tfidf', TfidfVectorizer(strip_accents='unicode', tokenizer=LemmaTokenizer(),
ngram_range=(1, 2), min_df=3, sublinear_tf=True)),
('svd', TruncatedSVD(n_components=100))
])),
('url', Pipeline([
('var', VarSelect(keys='url_component')),
('tfidf', TfidfVectorizer(strip_accents='unicode', tokenizer=LemmaTokenizer(),
ngram_range=(1,2), min_df=3, sublinear_tf=True)),
('svd', TruncatedSVD(n_components=50))
]))
])),
('scaler', MinMaxScaler()),
('selection', SelectKBest(chi2, k=100)),
('model', LogisticRegression())
])
params = {
'test_size': 0.2,
'random_state': 2,
'stratify': train.is_news
}
itrain, itest = train_test_split.tr_ts_split(len(train), **params)
features = ['url_component', 'body_bp', 'title_bp']
X_train = train.iloc[itrain][features]
X_test = train.iloc[itest][features]
y_train = train.iloc[itrain].label
y_test = train.iloc[itest].label
pipeline_lemma.fit(X_train, y_train)
y_preds = pipeline_lemma.predict_proba(X_test)[:, 1]
print('AUC score on unseen examples is: ', roc_auc_score(y_test, y_preds))
# train on full dataset
X = train[features]
y = train.label
pipeline_lemma.fit(X, y)
# save this model to disk
joblib.dump(pipeline_lemma, os.path.join(basepath, 'data/processed/pipeline_boilerplate_lemma/model_lemma.pkl'))
Explanation: Text features from Boilerplate
End of explanation
# Stemming Tokenizer
pipeline_stemming = Pipeline([
('strip', strip_non_words),
('union', FeatureUnion([
('body', Pipeline([
('var', VarSelect(keys='body_bp')),
('tfidf', TfidfVectorizer(strip_accents='unicode', tokenizer=StemTokenizer(),
ngram_range=(1, 2), min_df=3, sublinear_tf=True)),
('svd', TruncatedSVD(n_components=100))
])),
('title', Pipeline([
('var', VarSelect(keys='title_bp')),
('tfidf', TfidfVectorizer(strip_accents='unicode', tokenizer=StemTokenizer(),
ngram_range=(1, 2), min_df=3, sublinear_tf=True)),
('svd', TruncatedSVD(n_components=100))
])),
('url', Pipeline([
('var', VarSelect(keys='url_component')),
('tfidf', TfidfVectorizer(strip_accents='unicode', tokenizer=StemTokenizer(),
ngram_range=(1,2), min_df=3, sublinear_tf=True)),
('svd', TruncatedSVD(n_components=50))
]))
])),
('scaler', MinMaxScaler()),
('selection', SelectKBest(chi2, k=100)),
('model', LogisticRegression())
])
params = {
'test_size': 0.2,
'random_state': 2,
'stratify': train.is_news
}
itrain, itest = train_test_split.tr_ts_split(len(train), **params)
features = ['url_component', 'body_bp', 'title_bp']
X_train = train.iloc[itrain][features]
X_test = train.iloc[itest][features]
y_train = train.iloc[itrain].label
y_test = train.iloc[itest].label
pipeline_stemming.fit(X_train, y_train)
y_preds = pipeline_stemming.predict_proba(X_test)[:, 1]
print('AUC score on unseen examples are: ', roc_auc_score(y_test, y_preds))
# train on full dataset
X = train[features]
y = train.label
pipeline_stemming.fit(X, y)
# save this model to disk
joblib.dump(pipeline_stemming, os.path.join(basepath, 'data/processed/pipeline_boilerplate_stem/model_stem.pkl'))
Explanation: Pipeline involving Stemming
End of explanation
class Blending(object):
def __init__(self, models):
self.models = models # dict
def predict(self, X, X_test, y=None):
cv = KFold(len(X), n_folds=3, shuffle=True, random_state=10)
dataset_blend_train = np.zeros((X.shape[0], len(self.models.keys())))
dataset_blend_test = np.zeros((X_test.shape[0], len(self.models.keys())))
for index, key in enumerate(self.models.keys()):
dataset_blend_test_index = np.zeros((X_test.shape[0], len(cv)))
model = self.models[key][1]
feature_list = self.models[key][0]
print('Training model of type: ', key)
for i , (itrain, itest) in enumerate(cv):
Xtr = X.iloc[itrain][feature_list]
ytr = y.iloc[itrain]
Xte = X.iloc[itest][feature_list]
yte = y.iloc[itest]
                model.fit(Xtr, ytr)
                y_preds = model.predict_proba(Xte)[:, 1]
dataset_blend_train[itest, index] = y_preds
dataset_blend_test_index[:, i] = model.predict_proba(X_test)[:, 1]
dataset_blend_test[:, index] = dataset_blend_test_index.mean(1)
print('\nBlending')
clf = LogisticRegression()
clf.fit(dataset_blend_train, y)
y_submission = clf.predict_proba(dataset_blend_test)[:, 1]
y_submission = (y_submission - y_submission.min()) / (y_submission.max() - y_submission.min())
return y_submission
def stem_tokens(x):
return ' '.join([sns.stem(word) for word in word_tokenize(x)])
def preprocess_string(s):
return stem_tokens(s)
class Weights(BaseEstimator, TransformerMixin):
def __init__(self, weight):
self.weight = weight
def fit(self, X, y=None):
return self
def transform(self, X):
return self.weight * X
# load all the models from the disk
# pipeline_numeric = joblib.load(os.path.join(basepath, 'data/processed/pipeline_numeric/pipeline_numeric.pkl'))
# pipeline_lemma = joblib.load(os.path.join(basepath, 'data/processed/pipeline_boilerplate_lemma/model_lemma.pkl'))
# pipeline_stemming = joblib.load(os.path.join(basepath, 'data/processed/pipeline_boilerplate_stem/model_stem.pkl'))
pipeline_raw = joblib.load(os.path.join(basepath, 'data/processed/pipeline_raw/model_raw.pkl'))
numeric_features = list(train.select_dtypes(exclude=['object']).columns[1:]) + ['url']
numeric_features.remove('label')
boilerplate_features = ['body_bp', 'title_bp', 'url_component']
raw_features = ['body', 'title', 'h1', 'h2', 'h3', 'h4', 'span', 'a', 'label_',\
'meta-title', 'meta-description', 'li']
models = {
# 'numeric': [numeric_features, pipeline_numeric],
'boilerplate_lemma': [boilerplate_features, pipeline_lemma],
'boilerplate_stem': [boilerplate_features, pipeline_stemming],
'boilerplate_raw': [raw_features, pipeline_raw]
}
params = {
'test_size': 0.2,
'random_state': 2,
'stratify': train.is_news
}
itrain, itest = train_test_split.tr_ts_split(len(train), **params)
features = list(boilerplate_features) + list(raw_features)
X_train = train.iloc[itrain][features]
X_test = train.iloc[itest][features]
y_train = train.iloc[itrain].label
y_test = train.iloc[itest].label
blend = Blending(models)
y_blend = blend.predict(X_train, X_test, y_train)
print('AUC score after blending ', roc_auc_score(y_test, y_blend))
Explanation: Blending
End of explanation
X = train[features]
X_test = test[features]
y = train.label
assert X.shape[1] == X_test.shape[1]
blend = Blending(models)
predictions = blend.predict(X, X_test, y)
Explanation: Train on full dataset.
End of explanation
sample_sub['label'] = predictions
sample_sub.to_csv(os.path.join(basepath, 'submissions/blend_3.csv'), index=False)
Explanation: Submissions
End of explanation |
2,539 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
StateFarm Distracted Driver Detection Full Dataset
Step1: Create Batches
Step2: Use Previous Conv sample model on full dataset
The previous model used in the sample data should work better with more data. Lets try it out
Step3: Improve with Data Augmentation
Step4: Deeper Conv/Pooling pair model + Dropout
If the results are still unstable (the validation accuracy jumps from epoch to epoch), creating a deeper model with dropout will help.
Create a Deeper model with dropout
Step5: The model is underfitting, lets increase the learning rate
Step6: If the model was overfitting, you would need to decrease the learning rate.
Let me decrease the learning rate and see if we get better results
Step7: The accuracy is similar and there is more stability. However, let's try the VGG16 model
Use ImageNet Conv Features
Since we have so little data, and it is similar to imagenet images (full color photos), using pre-trained VGG weights is likely to be helpful - in fact it seems likely that we won't need to fine-tune the convolutional layer weights much, if at all. So we can pre-compute the output of the last convolutional layer, as we did in lesson 3 when we experimented with dropout. (However this means that we can't use full data augmentation, since we can't pre-compute something that changes every image.)
Step8: Create Batchnorm dense layers under the Conv layers
Create a network that would sit under the prior conv layers to predict the 10 classes. This is a simplified version of VGG's dense layers
Step9: Pre-computed data augmentation + more dropout
Let's add the augmented data and larger dense layers, and therefore more dropout, to the pre-trained model
Step10: Create a dataset of convolutional features that is 5x bigger than the original training set (5 variations of data augmentation from the ImageDataGenerator)
Step11: Add the real training data in its non-augmented form
Step12: Pseudo Labeling
Try using a combination of pseudo labeling and knowledge distillation to allow us to use unlabeled data (i.e. do semi-supervised learning). For our initial experiment we'll use the validation set as the unlabeled data, so that we can see that it is working without using the test set
Step13: Generate Predictions from Test data
Step14: Submit to competition | Python Code:
%cd /home/ubuntu/kaggle/state-farm-distracted-driver-detection
# Make sure you are in the main directory (state-farm-distracted-driver-detection)
%pwd
# Create references to key directories
import os, sys
from glob import glob
from matplotlib import pyplot as plt
import numpy as np
import keras
np.set_printoptions(precision=4, linewidth=100)
current_dir = os.getcwd()
CHALLENGE_HOME_DIR = current_dir
DATA_HOME_DIR = current_dir+'/data'
#Allow relative imports to directories
sys.path.insert(1, os.path.join(sys.path[0], '..'))
#import modules
from utils import *
from utils.vgg16 import Vgg16
import utils; reload(utils)
from utils import *
from utils.utils import *
#Instantiate plotting tool
%matplotlib inline
#Need to correctly import utils.py
import bcolz
from numpy.random import random, permutation
%cd $DATA_HOME_DIR
path = DATA_HOME_DIR + '/'
test_path = path + 'test/'
results_path= path + 'results/'
train_path=path + 'train/'
valid_path=path + 'valid/'
#Set constants. You can experiment with no_of_epochs to improve the model
batch_size=64
no_of_epochs=3
Explanation: StateFarm Distracted Driver Detection Full Dataset
End of explanation
batches = get_batches(train_path, batch_size=batch_size)
val_batches = get_batches(valid_path, batch_size=batch_size*2, shuffle=False)
(val_classes, trn_classes, val_labels, trn_labels, val_filenames, filenames,
test_filename) = get_classes(path)
Explanation: Create Batches
End of explanation
def simple_conv(batches):
model = Sequential([
BatchNormalization(axis=1, input_shape=(3,224,224)),
Convolution2D(32,3,3, activation='relu'),
BatchNormalization(axis=1),
MaxPooling2D((3,3)),
Convolution2D(64,3,3, activation='relu'),
BatchNormalization(axis=1),
MaxPooling2D((3,3)),
Flatten(),
Dense(200, activation='relu'),
BatchNormalization(),
Dense(10, activation='softmax')
])
model.compile(Adam(lr=1e-4), loss='categorical_crossentropy', metrics=['accuracy'])
model.fit_generator(batches, batches.nb_sample, nb_epoch=2, validation_data=val_batches,
nb_val_samples=val_batches.nb_sample)
model.optimizer.lr = 0.001
model.fit_generator(batches, batches.nb_sample, nb_epoch=4, validation_data=val_batches,
nb_val_samples=val_batches.nb_sample)
return model
model = simple_conv(batches)
model.save_weights(path+'models/simple_conv.h5')
Explanation: Use Previous Conv sample model on full dataset
The previous model used in the sample data should work better with more data. Lets try it out
End of explanation
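Optionally, the fitted model can also be scored on the validation batches directly (Keras 1 API):
model.evaluate_generator(val_batches, val_batches.nb_sample)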
gen_t = image.ImageDataGenerator(rotation_range=15, height_shift_range=0.1,
shear_range=0.1, channel_shift_range=25, width_shift_range=0.1)
da_batches = get_batches(train_path, gen_t, batch_size=batch_size)
model = simple_conv(da_batches)
model.save_weights(path+'models/simple_conv_da_1.h5')
model.optimizer.lr = 0.0001
model.fit_generator(da_batches, da_batches.nb_sample, nb_epoch=4, validation_data=val_batches,
nb_val_samples=val_batches.nb_sample)
model.save_weights(path+'models/simple_conv_da_2.h5')
Explanation: Improve with Data Augmentation
End of explanation
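It can be worth eyeballing what the generator actually produces; a quick sketch, assuming the plots helper from the course utils module is available:
aug_imgs, aug_labels = next(da_batches)
plots(aug_imgs[:4])   # show a few augmented training images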
gen_t = image.ImageDataGenerator(rotation_range=15, height_shift_range=0.1,
shear_range=0.1, channel_shift_range=25, width_shift_range=0.1)
batches = get_batches(train_path, gen_t, batch_size=batch_size)
model = Sequential([
BatchNormalization(axis=1, input_shape=(3,224,224)),
Convolution2D(32,3,3, activation='relu'),
BatchNormalization(axis=1),
MaxPooling2D(),
Convolution2D(64,3,3, activation='relu'),
BatchNormalization(axis=1),
MaxPooling2D(),
Convolution2D(128,3,3, activation='relu'),
BatchNormalization(axis=1),
MaxPooling2D(),
Flatten(),
Dense(200, activation='relu'),
BatchNormalization(),
Dropout(0.5),
Dense(200, activation='relu'),
BatchNormalization(),
Dropout(0.5),
Dense(10, activation='softmax')
])
model.compile(Adam(lr=10e-5), loss='categorical_crossentropy', metrics=['accuracy'])
model.fit_generator(batches, batches.nb_sample, nb_epoch=2, validation_data=val_batches,
nb_val_samples=val_batches.nb_sample)
model.save_weights(path+'models/deep_conv_da_1.h5')
model.load_weights(path+'models/deep_conv_da_1.h5')
Explanation: Deeper Conv/Pooling pair model + Dropout
If the results are still unstable (the validation accuracy jumps from epoch to epoch), creating a deeper model with dropout will help.
Create a Deeper model with dropout
End of explanation
model.optimizer.lr=0.001
model.fit_generator(batches, batches.nb_sample, nb_epoch=10, validation_data=val_batches,
nb_val_samples=val_batches.nb_sample)
model.save_weights(path+'models/deep_conv_da_2.h5')
Explanation: The model is underfitting, lets increase the learning rate
End of explanation
model.optimizer.lr=0.00001
model.fit_generator(batches, batches.nb_sample, nb_epoch=5, validation_data=val_batches,
nb_val_samples=val_batches.nb_sample)
model.save_weights(path+'models/deep_conv_da_3.h5')
Explanation: If the model was overfitting, you would need to decrease the learning rate.
Let me decrease the learning rate and see if we get better results
End of explanation
vgg = Vgg16()
model=vgg.model
last_conv_idx = [i for i,l in enumerate(model.layers) if type(l) is Convolution2D][-1]
conv_layers = model.layers[:last_conv_idx+1]
conv_model = Sequential(conv_layers)
# Lets pre-compute the features. Thus, shuffle should be set to False
batches = get_batches(train_path, batch_size=batch_size, shuffle=False)
(val_classes, trn_classes, val_labels, trn_labels,
val_filenames, filenames, test_filenames) = get_classes(path)
# Compute features for the conv layers for the training, validation, and test data
test_batches = get_batches(test_path, batch_size=batch_size, shuffle=False)
conv_feat = conv_model.predict_generator(batches, batches.nb_sample)
conv_val_feat = conv_model.predict_generator(val_batches, val_batches.nb_sample)
conv_test_feat = conv_model.predict_generator(test_batches, test_batches.nb_sample)
# save the features for future use
save_array(path+'results/conv_val_feat.dat', conv_val_feat)
save_array(path+'results/conv_test_feat.dat', conv_test_feat)
save_array(path+'results/conv_feat.dat', conv_feat)
conv_val_feat.shape
Explanation: The accuracy is similar and there is more stability. However, let's try the VGG16 model
Use ImageNet Conv Features
Since we have so little data, and it is similar to imagenet images (full color photos), using pre-trained VGG weights is likely to be helpful - in fact it seems likely that we won't need to fine-tune the convolutional layer weights much, if at all. So we can pre-compute the output of the last convolutional layer, as we did in lesson 3 when we experimented with dropout. (However this means that we can't use full data augmentation, since we can't pre-compute something that changes every image.)
End of explanation
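Because the features were saved with save_array, later sessions can reload them instead of re-running VGG (assuming the matching load_array helper from the same utils module):
conv_feat = load_array(path+'results/conv_feat.dat')
conv_val_feat = load_array(path+'results/conv_val_feat.dat')
conv_test_feat = load_array(path+'results/conv_test_feat.dat')
conv_feat.shape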
def get_bn_layers(p):
return [
MaxPooling2D(input_shape=conv_layers[-1].output_shape[1:]),
Flatten(),
Dropout(p/2),
Dense(128, activation='relu'),
BatchNormalization(),
Dropout(p/2),
Dense(128, activation='relu'),
BatchNormalization(),
Dropout(p),
Dense(10, activation='softmax')
]
p=0.8
bn_model = Sequential(get_bn_layers(p))
bn_model.compile(Adam(lr=0.001), loss='categorical_crossentropy', metrics=['accuracy'])
bn_model.fit(conv_feat, trn_labels, batch_size=batch_size, nb_epoch=1,
             validation_data=(conv_val_feat, val_labels))
bn_model.optimizer.lr = 0.01
bn_model.fit(conv_feat, trn_labels, batch_size=batch_size, nb_epoch=2,
validation_data=(conv_val_feat, val_labels))
bn_model.save_weights(path+'models/bn_dense.h5')
Explanation: Create Batchnorm dense layers under the Conv layers
Create a network that would sit under the prior conv layers to predict the 10 classes. This is a simplified version of VGG's dense layers
End of explanation
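A quick check of the dense head on the precomputed validation features (returns loss and accuracy):
bn_model.evaluate(conv_val_feat, val_labels, batch_size=batch_size)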
gen_t = image.ImageDataGenerator(rotation_range=15, height_shift_range=0.1,
shear_range=0.1, channel_shift_range=25, width_shift_range=0.1)
da_batches = get_batches(train_path, gen_t, batch_size=batch_size, shuffle=False)
Explanation: Pre-computed data augmentation + more dropout
Let's add the augmented data and larger dense layers, and therefore more dropout, to the pre-trained model
End of explanation
da_conv_feat = conv_model.predict_generator(da_batches, da_batches.nb_sample*5)
save_array(path+'results/da_conv_feat.dat', da_conv_feat)
Explanation: Create a dataset of convolutional features that is 5x bigger than the original training set (5 variations of data augmentation from the ImageDataGenerator)
End of explanation
da_conv_feat = np.concatenate([da_conv_feat, conv_feat])
# Since we've now gotten a dataset 6x bigger than before, we'll need to copy our labels 6x too
da_trn_labels = np.concatenate([trn_labels]*6)
def get_bn_da_layers(p):
return [
MaxPooling2D(input_shape=conv_layers[-1].output_shape[1:]),
Flatten(),
Dropout(p),
Dense(256, activation='relu'),
BatchNormalization(),
Dropout(p),
Dense(256, activation='relu'),
BatchNormalization(),
Dropout(p),
Dense(10, activation='softmax')
]
p=0.8
bn_model = Sequential(get_bn_da_layers(p))
bn_model.compile(Adam(lr=0.001), loss='categorical_crossentropy', metrics=['accuracy'])
# Lets train the model with the larger set of pre-computed augemented data
bn_model.fit(da_conv_feat, da_trn_labels, batch_size=batch_size, nb_epoch=1,
validation_data=(conv_val_feat, val_labels))
bn_model.optimizer.lr=0.01
bn_model.fit(da_conv_feat, da_trn_labels, batch_size=batch_size, nb_epoch=4,
validation_data=(conv_val_feat, val_labels))
bn_model.optimizer.lr=0.0001
bn_model.fit(da_conv_feat, da_trn_labels, batch_size=batch_size, nb_epoch=4,
validation_data=(conv_val_feat, val_labels))
bn_model.save_weights(path+'models/bn_da_dense.h5')
Explanation: Add the real training data in its non-augmented form
End of explanation
val_pseudo = bn_model.predict(conv_val_feat, batch_size=batch_size)
# Concatenate them with the original training set
comb_pseudo = np.concatenate([da_trn_labels, val_pseudo])
comb_feat = np.concatenate([da_conv_feat, conv_val_feat])
# fine-tune the model using this combined training set
bn_model.load_weights(path+'models/bn_da_dense.h5')
bn_model.fit(comb_feat, comb_pseudo, batch_size=batch_size, nb_epoch=1,
validation_data=(conv_val_feat, val_labels))
bn_model.fit(comb_feat, comb_pseudo, batch_size=batch_size, nb_epoch=4,
validation_data=(conv_val_feat, val_labels))
bn_model.optimizer.lr=0.00001
bn_model.fit(comb_feat, comb_pseudo, batch_size=batch_size, nb_epoch=4,
validation_data=(conv_val_feat, val_labels))
# There is a distinct improvement - although the validation set isn't large.
# A significant improvement can be found when using the test data
bn_model.save_weights(path+'models/bn-ps8.h5')
Explanation: Pseudo Labeling
Try using a combination of pseudo labeling and knowledge distillation to allow us to use unlabeled data (i.e. do semi-supervised learning). For our initial experiment we'll use the validation set as the unlabeled data, so that we can see that it is working without using the test set
End of explanation
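Extending the same idea to the truly unlabeled test set would look roughly like this sketch (not run here, per the note above about starting with the validation set):
test_pseudo = bn_model.predict(conv_test_feat, batch_size=batch_size)
comb_feat_test = np.concatenate([da_conv_feat, conv_test_feat])
comb_pseudo_test = np.concatenate([da_trn_labels, test_pseudo])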
test_batches = get_batches(test_path, shuffle=False, batch_size=batch_size)
# The final classifier is the dense head trained on the precomputed conv features
preds = bn_model.predict(conv_test_feat, batch_size=batch_size)
preds[:2]
Explanation: Generate Predictions from Test data
End of explanation
def do_clip(arr, mx): return np.clip(arr, (1-mx)/9, mx)
val_preds = bn_model.predict(conv_val_feat, batch_size=batch_size)
keras.metrics.categorical_crossentropy(val_labels, do_clip(val_preds, 0.93)).eval()
subm = do_clip(preds,0.93)
subm_name = path+'results/subm.csv'
classes = sorted(batches.class_indices, key=batches.class_indices.get)
submission = pd.DataFrame(subm, columns=classes)
submission.insert(0, 'img', [a[8:] for a in test_filename])
submission.head()
submission.tail()
submission.to_csv(subm_name, index=False, encoding='utf-8')
FileLink(subm_name)
Explanation: Submit to competition
End of explanation |
2,540 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Introduction to the MNIST dataset
Most of the examples use the MNIST dataset of handwritten digits. It contains 60,000 training samples and 10,000 test samples. The digits have been size-normalized and centered, so each sample can be represented as a 28 x 28 matrix with values between 0 and 1.
Preview
Usage
In the examples, we use TF's input_data.py script to load the dataset. It is quite handy for managing the data; specifically, it can:
Download the dataset
Load the whole dataset into numpy arrays
Step1: Iterate over the whole dataset with the 'next_batch' method, which returns only the required portion of the data (to save memory and avoid loading the entire dataset at once) | Python Code:
# Import MNIST
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("/tmp/data/", one_hot=True)  # download/cache path is illustrative
# Load the data
X_train = mnist.train.images
Y_train = mnist.train.labels
X_test = mnist.test.images
Y_test = mnist.test.labels
print(X_train.shape)
print(Y_train.shape)
print(X_test.shape)
print(Y_test.shape)
Explanation: Introduction to the MNIST dataset
Most of the examples use the MNIST dataset of handwritten digits. It contains 60,000 training samples and 10,000 test samples. The digits have been size-normalized and centered, so each sample can be represented as a 28 x 28 matrix with values between 0 and 1.
Preview
Usage
In the examples, we use TF's input_data.py script to load the dataset. It is quite handy for managing the data; specifically, it can:
Download the dataset
Load the whole dataset into numpy arrays
End of explanation
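A quick way to preview one sample (assuming matplotlib is available): reshape the flat 784-vector back to 28 x 28 and plot it.
import matplotlib.pyplot as plt
plt.imshow(X_train[0].reshape(28, 28), cmap='gray')
plt.show()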
# Get the next batch of 64 images and their labels
batch_X, batch_Y = mnist.train.next_batch(64)
print(batch_X.shape)
print(batch_Y.shape)
Explanation: Iterate over the whole dataset with the 'next_batch' method, which returns only the required portion of the data (to save memory and avoid loading the entire dataset at once)
End of explanation |
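A full pass over the training set is then just a loop over next_batch (illustrative sketch):
n_batches = mnist.train.num_examples // 64
for _ in range(n_batches):
    batch_X, batch_Y = mnist.train.next_batch(64)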
2,541 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
K-nearest neighbors and scikit-learn
Review of the iris dataset
Step1: Terminology
150 observations (n=150)
Step2: K-nearest neighbors (KNN) classification
Pick a value for K.
Search for the K observations in the data that are "nearest" to the measurements of the unknown iris.
Euclidean distance is often used as the distance metric, but other metrics are allowed.
Use the most popular response value from the K "nearest neighbors" as the predicted response value for the unknown iris.
KNN classification map for iris (K=1)
KNN classification map for iris (K=5)
KNN classification map for iris (K=15)
KNN classification map for iris (K=50)
Question
Step3: scikit-learn's 4-step modeling pattern
Step 1
Step4: Step 2
Step5: Created an object that "knows" how to do K-nearest neighbors classification, and is just waiting for data
Name of the object does not matter
Can specify tuning parameters (aka "hyperparameters") during this step
All parameters not specified are set to their defaults
Step6: Step 3
Step7: Once a model has been fit with data, it's called a "fitted model"
Step 4
Step8: Returns a NumPy array, and we keep track of what the numbers "mean"
Can predict for multiple observations at once
Step9: Tuning a KNN model
Step10: Question | Python Code:
%matplotlib inline
import pandas as pd
url = 'http://archive.ics.uci.edu/ml/machine-learning-databases/iris/iris.data'
col_names = ['sepal_length', 'sepal_width', 'petal_length', 'petal_width', 'species']
iris = pd.read_csv(url, header=None, names=col_names)
iris.head()
Explanation: K-nearest neighbors and scikit-learn
Review of the iris dataset
End of explanation
import matplotlib.pyplot as plt
# increase default figure and font sizes for easier viewing
plt.rcParams['figure.figsize'] = (10, 8)
plt.rcParams['font.size'] = 14
# create a custom colormap
from matplotlib.colors import ListedColormap
cmap_bold = ListedColormap(['#FF0000', '#00FF00', '#0000FF'])
# map each iris species to a number
iris['species_num'] = iris.species.map({'Iris-setosa':0, 'Iris-versicolor':1, 'Iris-virginica':2})
# box plot of all numeric columns grouped by species
iris.drop('species_num', axis=1).boxplot(by='species', rot=45)
# create a scatter plot of PETAL LENGTH versus PETAL WIDTH and color by SPECIES
iris.plot(kind='scatter', x='petal_length', y='petal_width', c='species_num', colormap=cmap_bold)
# create a scatter plot of SEPAL LENGTH versus SEPAL WIDTH and color by SPECIES
iris.plot(kind='scatter', x='sepal_length', y='sepal_width', c='species_num', colormap=cmap_bold)
Explanation: Terminology
150 observations (n=150): each observation is one iris flower
4 features (p=4): sepal length, sepal width, petal length, and petal width
Response: iris species
Classification problem since response is categorical
Human learning on the iris dataset
How did we (as humans) predict the species of an iris?
We observed that the different species had (somewhat) dissimilar measurements.
We focused on features that seemed to correlate with the response.
We created a set of rules (using those features) to predict the species of an unknown iris.
We assumed that if an unknown iris has measurements similar to previous irises, then its species is most likely the same as those previous irises.
End of explanation
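For example, a hand-written rule of the kind a human might read off the scatter plots could look like this (thresholds are illustrative, not tuned):
def human_rule(petal_length, petal_width):
    # small petals -> setosa; otherwise split versicolor/virginica on petal width
    if petal_length < 2.5:
        return 'Iris-setosa'
    return 'Iris-virginica' if petal_width > 1.7 else 'Iris-versicolor'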
iris.head()
# store feature matrix in "X"
feature_cols = ['sepal_length', 'sepal_width', 'petal_length', 'petal_width']
X = iris[feature_cols]
# alternative ways to create "X"
X = iris.drop(['species', 'species_num'], axis=1)
X = iris.loc[:, 'sepal_length':'petal_width']
X = iris.iloc[:, 0:4]
# store response vector in "y"
y = iris.species_num
# check X's type
print(type(X))
print(type(X.values))
# check y's type
print(type(y))
print(type(y.values))
# check X's shape (n = number of observations, p = number of features)
print(X.shape)
# check y's shape (single dimension with length n)
print(y.shape)
Explanation: K-nearest neighbors (KNN) classification
Pick a value for K.
Search for the K observations in the data that are "nearest" to the measurements of the unknown iris.
Euclidean distance is often used as the distance metric, but other metrics are allowed.
Use the most popular response value from the K "nearest neighbors" as the predicted response value for the unknown iris.
KNN classification map for iris (K=1)
KNN classification map for iris (K=5)
KNN classification map for iris (K=15)
KNN classification map for iris (K=50)
Question: What's the "best" value for K in this case?
Answer: The value which produces the most accurate predictions on unseen data. We want to create a model that generalizes!
End of explanation
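To make the distance step concrete, here is a minimal sketch of Euclidean distances from one unknown iris to the training rows (numpy is assumed):
import numpy as np
unknown = np.array([3, 5, 4, 2])
dists = np.sqrt(((X.values - unknown) ** 2).sum(axis=1))
dists[:5]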
from sklearn.neighbors import KNeighborsClassifier
Explanation: scikit-learn's 4-step modeling pattern
Step 1: Import the class you plan to use
End of explanation
# make an instance of a KNeighborsClassifier object
knn = KNeighborsClassifier(n_neighbors=1)
type(knn)
Explanation: Step 2: "Instantiate" the "estimator"
"Estimator" is scikit-learn's term for "model"
"Instantiate" means "make an instance of"
End of explanation
print(knn)
Explanation: Created an object that "knows" how to do K-nearest neighbors classification, and is just waiting for data
Name of the object does not matter
Can specify tuning parameters (aka "hyperparameters") during this step
All parameters not specified are set to their defaults
End of explanation
knn.fit(X, y)
Explanation: Step 3: Fit the model with data (aka "model training")
Model is "learning" the relationship between X and y in our "training data"
Process through which learning occurs varies by model
Occurs in-place
End of explanation
knn.predict([[3, 5, 4, 2]])
Explanation: Once a model has been fit with data, it's called a "fitted model"
Step 4: Predict the response for a new observation
New observations are called "out-of-sample" data
Uses the information it learned during the model training process
End of explanation
X_new = [[3, 5, 4, 2], [5, 4, 3, 2]]
knn.predict(X_new)
Explanation: Returns a NumPy array, and we keep track of what the numbers "mean"
Can predict for multiple observations at once
End of explanation
# instantiate the model (using the value K=5)
knn = KNeighborsClassifier(n_neighbors=5)
# fit the model with data
knn.fit(X, y)
# predict the response for new observations
knn.predict(X_new)
Explanation: Tuning a KNN model
End of explanation
# calculate predicted probabilities of class membership
knn.predict_proba(X_new)
Explanation: Question: Which model produced the correct predictions for the two unknown irises?
Answer: We don't know, because these are out-of-sample observations, meaning that we don't know the true response values. Our goal with supervised learning is to build models that generalize to out-of-sample data. However, we can't truly measure how well our models will perform on out-of-sample data.
Question: Does that mean that we have to guess how well our models are likely to do?
Answer: Thankfully, no. In the next class, we'll discuss model evaluation procedures, which allow us to use our existing labeled data to estimate how well our models are likely to perform on out-of-sample data. These procedures will help us to tune our models and choose between different types of models.
End of explanation |
2,542 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Algo - playing with dictionaries, longest common suffix
Dictionaries are widely used to associate things with one another, especially when those things are not integers. This notebook shows why it is worth spending a little time transforming the data to make a computation faster.
Step2: Enoncé
Le texte suivant est un poème d'Arthur Rimbaud, Les Voyelles. On veut en extraire tous les mots.
Step3: Exercice 1
Step4: Exercice 2
Step5: Exercice 3
Step6: Exercice 4
Step7: Exercice 5
Step8: C'est illisible. On ne montre que les mots se terminant par tes.
Step9: Toujours pas très partique. On veut représenter l'arbre visuellement ou tout du moins une sous-partie. On utilise le langage DOT.
Step10: Le résultat est différent car le dictionnaire ne garantit pas que les éléments seront parcourus dans l'ordre alphabétique. | Python Code:
from jyquickhelper import add_notebook_menu
add_notebook_menu()
Explanation: Algo - playing with dictionaries, longest common suffix
Dictionaries are widely used to associate things with one another, especially when those things are not integers. This notebook shows why it is worth spending a little time transforming the data to make a computation faster.
End of explanation
poeme =
A noir, E blanc, I rouge, U vert, O bleu, voyelles,
Je dirai quelque jour vos naissances latentes.
A, noir corset velu des mouches éclatantes
Qui bombillent autour des puanteurs cruelles,
Golfe d'ombre; E, candeur des vapeurs et des tentes,
Lance des glaciers fiers, rois blancs, frissons d'ombelles;
I, pourpres, sang craché, rire des lèvres belles
Dans la colère ou les ivresses pénitentes;
U, cycles, vibrements divins des mers virides,
Paix des pâtis semés d'animaux, paix des rides
Que l'alchimie imprime aux grands fronts studieux;
O, suprême clairon plein de strideurs étranges,
Silences traversés des Mondes et des Anges:
—O l'Oméga, rayon violet de Ses Yeux!
def extract_words(text):
    # not the most efficient way to do it, but it does what we want
spl = text.lower().replace("!", "").replace(",", "").replace(
";", "").replace(".", "").replace(":", "").replace("'", " ").split()
return(spl)
print(extract_words(poeme))
Explanation: Enoncé
Le texte suivant est un poème d'Arthur Rimbaud, Les Voyelles. On veut en extraire tous les mots.
End of explanation
def plus_grand_suffix_commun(mots):
longueur_max = max([len(m) for m in mots])
meilleure_paire = None
meilleur_suffix = None
# On peut parcourir les tailles de suffixe dans un sens croissant
# mais c'est plus efficace dans un sens décroissant dans la mesure
# où le premier suffixe trouvé est alors nécessairement le plus long.
for i in range(longueur_max - 1, 0, -1):
for m1 in mots:
for m2 in mots: # ici, on pourrait ne parcourir qu'une partie des mots
# car m1,m2 ou m2,m1, c'est pareil.
if m1 == m2:
continue
if len(m1) < i or len(m2) < i:
continue
suffixe = m1[-i:]
if m2[-i:] == suffixe:
meilleur_suffix = suffixe
meilleure_paire = m1, m2
return meilleur_suffix, meilleure_paire
mots = extract_words(poeme)
plus_grand_suffix_commun(mots)
Explanation: Exercice 1 : trouver les deux mots qui partagent le plus grand suffixe en commun
Exercice 2 : constuire un dictionnaire qui associe à chaque lettre tous les mots se terminant par celle-ci
Exercice 3 : trouver les deux mots qui partagent le plus grand suffixe en commun en utilisant le dictionnaire précédent
Exercice 4 : mesurer le temps pris par chaque fonction
La fonction perf_counter est parfaite pour ça.
Exercice 5 : expliquer pourquoi telle méthode est plus rapide.
La réponse devrait guider vers une méthode encore plus rapide.
Exercice 6 : pousser l'idée plus loin et construire un trie
Indexer les mots par leur dernière lettre permet d'aller plus vite. Il faut maintenant trouver le suffixe le plus long dans chaque sous-groupe de mots. Ce problème est identique au précédent sur tous les mots précédents auxquels la dernière aurait été ôtée. Comment exploiter cette idée jusqu'au bout ?
Réponses
Exercice 1 : trouver les deux mots qui partagent le plus grand suffixe en commun
Ce n'est qu'une suggestion. La fonction repose sur trois boucles, la première parcourt différentes tailles de suffixe, les deux autres regardes toutes les paires de mots.
End of explanation
mots = extract_words(poeme)
suffix_map = {}
for mot in mots:
lettre = mot[-1]
if lettre in suffix_map:
suffix_map[lettre].append(mot)
else:
suffix_map[lettre] = [mot]
suffix_map
Explanation: Exercice 2 : constuire un dictionnaire qui associe à chaque lettre tous les mots se terminant par celle-ci
End of explanation
def plus_grand_suffix_commun_dictionnaire(mots):
suffix_map = {}
for mot in mots:
lettre = mot[-1]
if lettre in suffix_map:
suffix_map[lettre].append(mot)
else:
suffix_map[lettre] = [mot]
tout = []
for cle, valeur in suffix_map.items():
suffix = plus_grand_suffix_commun(valeur)
if suffix is None:
continue
tout.append((len(suffix[0]), suffix[0], suffix[1]))
return max(tout)
mots = extract_words(poeme)
plus_grand_suffix_commun_dictionnaire(mots)
Explanation: Exercice 3 : trouver les deux mots qui partagent le plus grand suffixe en commun en utilisant le dictionnaire précédent
On reprend les deux ingrédients.
End of explanation
from time import perf_counter
mots = extract_words(poeme)
debut = perf_counter()
for i in range(100):
plus_grand_suffix_commun(mots)
perf_counter() - debut
debut = perf_counter()
for i in range(100):
plus_grand_suffix_commun_dictionnaire(mots)
perf_counter() - debut
Explanation: Exercice 4 : mesurer le temps pris par chaque fonction
End of explanation
def build_trie(liste):
trie = {}
for mot in liste:
noeud = trie
for i in range(0, len(mot)):
lettre = mot[len(mot) - i - 1]
if lettre not in noeud:
noeud[lettre] = {}
noeud = noeud[lettre]
noeud['FIN'] = 0
return trie
liste = ['zabc', 'abc']
t = build_trie(liste)
t
mots = extract_words(poeme)
trie = build_trie(mots)
trie
Explanation: Exercice 5 : expliquer pourquoi telle méthode est plus rapide.
La seconde méthode est deux à trois fois plus rapide. Cela dépend du nombre de mots qu'on note N. Si on note L la longueur du plus grand mot, la première méthode a pour coût $O(LN^2)$. La seconde est une succession de deux étapes. La première étape construit un dictionnaire en parcourant une seule fois la liste des mots. Son coût est $O(N)$. La seconde utilise la première méthode mais sur des ensembles plus petits. Plus exactements, si $N_x$ est le nombre de mots se terminant pas $x$, alors le coût de la méthode est $O(L \sum_x N_x^2)$ avec $\sum_x N_x = N$. Il faut donc comparer $O(LN^2)$ à $O(N) + O(L \sum_x N_x^2)$. Le second coût est plus petit.
Exercice 6 : pousser l'idée plus loin et construire un trie
Un trie est une structure de données permettant de trouver rapidement tous les mots partageant le même préfixe ou suffixe.
End of explanation
trie['s']['e']['t']
Explanation: C'est illisible. On ne montre que les mots se terminant par tes.
End of explanation
def build_dot(trie, predecessor=None, root_name=None, depth=0):
rows = []
root = trie
if predecessor is None:
rows.append('digraph{')
rows.append('%s%d [label="%s"];' % (
root_name or 'ROOT', id(trie), root_name or 'ROOT'))
rows.append(build_dot(trie, root_name or 'ROOT', depth=depth))
rows.append("}")
elif isinstance(trie, dict):
for k, v in trie.items():
rows.append('%s%d [label="%s"];' % (k, id(v), k))
rows.append("%s%d -> %s%d;" % (predecessor, id(trie), k, id(v)))
rows.append(build_dot(v, k, depth=depth+1))
return "\n".join(rows)
text = build_dot(trie['s']['e']['t'], root_name='set')
print(text)
from jyquickhelper import RenderJsDot
RenderJsDot(text, width="100%")
def plus_grand_suffix_commun_dictionnaire_trie(mots):
whole_trie = build_trie(mots)
def walk(trie):
best = []
for k, v in trie.items():
if isinstance(v, int):
continue
r = walk(v)
if len(r) > 0 and len(r) + 1 > len(best):
best = [k] + r
if len(best) > 0:
return best
if len(trie) >= 2:
return ['FIN']
return []
return walk(whole_trie)
res = plus_grand_suffix_commun_dictionnaire_trie(mots)
res
res = plus_grand_suffix_commun_dictionnaire(mots)
res
Explanation: Toujours pas très partique. On veut représenter l'arbre visuellement ou tout du moins une sous-partie. On utilise le langage DOT.
End of explanation
debut = perf_counter()
for i in range(100):
plus_grand_suffix_commun_dictionnaire(mots)
perf_counter() - debut
debut = perf_counter()
for i in range(100):
plus_grand_suffix_commun_dictionnaire_trie(mots)
perf_counter() - debut
Explanation: Le résultat est différent car le dictionnaire ne garantit pas que les éléments seront parcourus dans l'ordre alphabétique.
End of explanation |
2,543 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
2D Registration Example
Most ndreg functions are convenience wrappers around the SimpleITK registration framework. Functions provided by ndreg should work reasonably well with most types of images. More complicated problems should probably be handled by writing your own SimpleITK functions. For a detailed treatment of registration within SimpleITK see its official notebooks.
The basic idea behind image registration is that given an input image I and a reference image J with domain X, we seek a set of parameters p of a given coordinate transform T<sub>p</sub> such that a matching function (or metric, as ITK calls it) M(I(T<sub>p</sub>(X)), J(X)) is minimized. A good example of M is the Mean Square Error. Mathematically it is the L<sub>2</sub> norm of the difference between the input and reference image, ||I(T<sub>p</sub>(X)) - J(X)||.
Linear Registration
Linear registration includes all registration algorithms in which T<sub>p</sub> is spatially invariant. This means that T<sub>p</sub> applies the same function to all voxels in the domain. In ndreg, linear registration is handled by imgAffine. Let's begin with a simple "Hello World" registration. First we'll download two images from ndstore
Step1: Lets's display the input image
Step2: We want to align reference image to the input image. The reference image is a scaled and translated version of the input image.
Step3: Obviously these images don't overlap.
Step4: OK now lets register these images. When no optional parameters are specified imgAffine compute affine parameters which can be used to transform the input image into the reference image under MSE matching.
Step5: We can now apply these parameters to the input image using imgApplyAffine.
Step6: Clearly deformed input image defInImg overlaps the reference image refImg
Step7: Nonlinear registation
In nonlinear registration algorithms the transform T<sub>p</sub> is not spatially invariant. A well known non-linear registration method is the Large Deformation Diffeomorphic Metric Mapping (LDDMM) algorithm. LDDMM computes a smooth invertible mapping between the input (template) and reference (target) images. In ndreg, it is implemented in the imgMetamorphosis function, which returns both the transform parameters as a vector field and the inverse parameters as invField. We run the registration using the default parameters but limit it to 100 iterations
Step8: Like in the affine example we apply the transform to the image.
Step9: We then show that the deformed input image overlaps with the reference image | Python Code:
import matplotlib.pyplot as plt
from ndreg import *
inImg = imgDownload("checkerBig")
refImg = imgDownload("checkerSmall")
Explanation: 2D Registration Example
Most ndreg functions are convinence wrappers around the SimpleITK registration framework. Functions provided by ndreg should work reasonably well with most types of images. More complicated problems should probably be handled by writing your own SimpleITK functions. For a detailed handling of registration within SimpleITK see its official notebooks.
The basic idea behind image registration is that given an input and reference images I and J with domain X, we seek a set parameters p of given coordinate transfom T<sub>p</sub> such that a matching function (or metric as ITK calls it) M(I(T<sub>p</sub>(X)),J(X)) is minimized. A good example of M is the Mean Square Error. Mathematically it is the L<sub>2</sub> norm of the difference between the input and reference image ||I(T<sub>p</sub>(X)) - J(X)||.
Linear Registration
Linear Registration includes all registration algorithms in which T<sub>p</sub> is spatialy invariant. This means that T<sub>p</sub> applies the same function to all voxels in the domain. In ndreg linear registration is handled by the imgAffine. Lets begin with a simple "Hello World" registration. First we'll download two images from ndstore
End of explanation
imgShow(inImg)
Explanation: Lets's display the input image
End of explanation
imgShow(refImg)
Explanation: We want to align reference image to the input image. The reference image is a scaled and translated version of the input image.
End of explanation
plt.imshow(sitk.GetArrayFromImage(refImg - inImg))
Explanation: Obviously these images don't overlap.
End of explanation
affine = imgAffine(inImg, refImg)
print(affine)
Explanation: OK now lets register these images. When no optional parameters are specified imgAffine compute affine parameters which can be used to transform the input image into the reference image under MSE matching.
End of explanation
defInImg = imgApplyAffine(inImg, affine, size=refImg.GetSize())
imgShow(defInImg)
Explanation: We can now apply these parameters to the input image using imgApplyAffine.
End of explanation
plt.imshow(sitk.GetArrayFromImage(refImg - defInImg))
Explanation: Clearly deformed input image defInImg overlaps the reference image refImg
End of explanation
(field, invField) = imgMetamorphosis(inImg, refImg, iterations=100, verbose=True)
Explanation: Nonlinear registation
In nonlinear registration algorithms transform T<sub>p</sub> is not spatially invariant. A well known non-linear registation method is the Large Deformation Diffeomorphic Metric Mapping (LDDMM) algorithm. LDDMM computes a smooth invertable mapping between the input (template) and reference (target) images. In ndreg, it is implemented in the imgMetamorphosis function which returns both the transform parameters as a vector field and inverse parameters as invField. We run the registation using the default parameters but limit it to 100 iterations
End of explanation
defInImg = imgApplyField(inImg, field, size=refImg.GetSize())
imgShow(defInImg)
Explanation: Like in the affine example we apply the transform to the image.
End of explanation
plt.imshow(sitk.GetArrayFromImage(refImg - defInImg))
Explanation: We then show that the deformed input image overlaps with the reference image
End of explanation |
2,544 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Aerosol
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Key Properties --> Timestep Framework
4. Key Properties --> Meteorological Forcings
5. Key Properties --> Resolution
6. Key Properties --> Tuning Applied
7. Transport
8. Emissions
9. Concentrations
10. Optical Radiative Properties
11. Optical Radiative Properties --> Absorption
12. Optical Radiative Properties --> Mixtures
13. Optical Radiative Properties --> Impact Of H2o
14. Optical Radiative Properties --> Radiative Scheme
15. Optical Radiative Properties --> Cloud Interactions
16. Model
1. Key Properties
Key properties of the aerosol model
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Scheme Scope
Is Required
Step7: 1.4. Basic Approximations
Is Required
Step8: 1.5. Prognostic Variables Form
Is Required
Step9: 1.6. Number Of Tracers
Is Required
Step10: 1.7. Family Approach
Is Required
Step11: 2. Key Properties --> Software Properties
Software properties of aerosol code
2.1. Repository
Is Required
Step12: 2.2. Code Version
Is Required
Step13: 2.3. Code Languages
Is Required
Step14: 3. Key Properties --> Timestep Framework
Timestep framework of the aerosol model
3.1. Method
Is Required
Step15: 3.2. Split Operator Advection Timestep
Is Required
Step16: 3.3. Split Operator Physical Timestep
Is Required
Step17: 3.4. Integrated Timestep
Is Required
Step18: 3.5. Integrated Scheme Type
Is Required
Step19: 4. Key Properties --> Meteorological Forcings
**
4.1. Variables 3D
Is Required
Step20: 4.2. Variables 2D
Is Required
Step21: 4.3. Frequency
Is Required
Step22: 5. Key Properties --> Resolution
Resolution in the aerosol model grid
5.1. Name
Is Required
Step23: 5.2. Canonical Horizontal Resolution
Is Required
Step24: 5.3. Number Of Horizontal Gridpoints
Is Required
Step25: 5.4. Number Of Vertical Levels
Is Required
Step26: 5.5. Is Adaptive Grid
Is Required
Step27: 6. Key Properties --> Tuning Applied
Tuning methodology for aerosol model
6.1. Description
Is Required
Step28: 6.2. Global Mean Metrics Used
Is Required
Step29: 6.3. Regional Metrics Used
Is Required
Step30: 6.4. Trend Metrics Used
Is Required
Step31: 7. Transport
Aerosol transport
7.1. Overview
Is Required
Step32: 7.2. Scheme
Is Required
Step33: 7.3. Mass Conservation Scheme
Is Required
Step34: 7.4. Convention
Is Required
Step35: 8. Emissions
Atmospheric aerosol emissions
8.1. Overview
Is Required
Step36: 8.2. Method
Is Required
Step37: 8.3. Sources
Is Required
Step38: 8.4. Prescribed Climatology
Is Required
Step39: 8.5. Prescribed Climatology Emitted Species
Is Required
Step40: 8.6. Prescribed Spatially Uniform Emitted Species
Is Required
Step41: 8.7. Interactive Emitted Species
Is Required
Step42: 8.8. Other Emitted Species
Is Required
Step43: 8.9. Other Method Characteristics
Is Required
Step44: 9. Concentrations
Atmospheric aerosol concentrations
9.1. Overview
Is Required
Step45: 9.2. Prescribed Lower Boundary
Is Required
Step46: 9.3. Prescribed Upper Boundary
Is Required
Step47: 9.4. Prescribed Fields Mmr
Is Required
Step48: 9.5. Prescribed Fields Mmr
Is Required
Step49: 10. Optical Radiative Properties
Aerosol optical and radiative properties
10.1. Overview
Is Required
Step50: 11. Optical Radiative Properties --> Absorption
Absorption properties in aerosol scheme
11.1. Black Carbon
Is Required
Step51: 11.2. Dust
Is Required
Step52: 11.3. Organics
Is Required
Step53: 12. Optical Radiative Properties --> Mixtures
**
12.1. External
Is Required
Step54: 12.2. Internal
Is Required
Step55: 12.3. Mixing Rule
Is Required
Step56: 13. Optical Radiative Properties --> Impact Of H2o
**
13.1. Size
Is Required
Step57: 13.2. Internal Mixture
Is Required
Step58: 14. Optical Radiative Properties --> Radiative Scheme
Radiative scheme for aerosol
14.1. Overview
Is Required
Step59: 14.2. Shortwave Bands
Is Required
Step60: 14.3. Longwave Bands
Is Required
Step61: 15. Optical Radiative Properties --> Cloud Interactions
Aerosol-cloud interactions
15.1. Overview
Is Required
Step62: 15.2. Twomey
Is Required
Step63: 15.3. Twomey Minimum Ccn
Is Required
Step64: 15.4. Drizzle
Is Required
Step65: 15.5. Cloud Lifetime
Is Required
Step66: 15.6. Longwave Bands
Is Required
Step67: 16. Model
Aerosol model
16.1. Overview
Is Required
Step68: 16.2. Processes
Is Required
Step69: 16.3. Coupling
Is Required
Step70: 16.4. Gas Phase Precursors
Is Required
Step71: 16.5. Scheme Type
Is Required
Step72: 16.6. Bulk Scheme Species
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'thu', 'sandbox-1', 'aerosol')
Explanation: ES-DOC CMIP6 Model Properties - Aerosol
MIP Era: CMIP6
Institute: THU
Source ID: SANDBOX-1
Topic: Aerosol
Sub-Topics: Transport, Emissions, Concentrations, Optical Radiative Properties, Model.
Properties: 69 (37 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:40
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
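For example, a filled-in call might look like the following (the name and e-mail are placeholders for illustration, not values from the source document):
# illustrative placeholder values only
# DOC.set_author("Jane Doe", "jane.doe@example.org")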
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Key Properties --> Timestep Framework
4. Key Properties --> Meteorological Forcings
5. Key Properties --> Resolution
6. Key Properties --> Tuning Applied
7. Transport
8. Emissions
9. Concentrations
10. Optical Radiative Properties
11. Optical Radiative Properties --> Absorption
12. Optical Radiative Properties --> Mixtures
13. Optical Radiative Properties --> Impact Of H2o
14. Optical Radiative Properties --> Radiative Scheme
15. Optical Radiative Properties --> Cloud Interactions
16. Model
1. Key Properties
Key properties of the aerosol model
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of aerosol model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of aerosol model code
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.scheme_scope')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "troposhere"
# "stratosphere"
# "mesosphere"
# "mesosphere"
# "whole atmosphere"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.3. Scheme Scope
Is Required: TRUE Type: ENUM Cardinality: 1.N
Atmospheric domains covered by the aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.basic_approximations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: STRING Cardinality: 1.1
Basic approximations made in the aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.prognostic_variables_form')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "3D mass/volume ratio for aerosols"
# "3D number concenttration for aerosols"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.5. Prognostic Variables Form
Is Required: TRUE Type: ENUM Cardinality: 1.N
Prognostic variables in the aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.number_of_tracers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 1.6. Number Of Tracers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of tracers in the aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.family_approach')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 1.7. Family Approach
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are aerosol calculations generalized into families of species?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Software Properties
Software properties of aerosol code
2.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses atmospheric chemistry time stepping"
# "Specific timestepping (operator splitting)"
# "Specific timestepping (integrated)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Timestep Framework
Timestep framework of the aerosol model
3.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Mathematical method deployed to solve the time evolution of the prognostic variables
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_advection_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.2. Split Operator Advection Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for aerosol advection (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_physical_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.3. Split Operator Physical Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for aerosol physics (in seconds).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.4. Integrated Timestep
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Timestep for the aerosol model (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Implicit"
# "Semi-implicit"
# "Semi-analytic"
# "Impact solver"
# "Back Euler"
# "Newton Raphson"
# "Rosenbrock"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3.5. Integrated Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the type of timestep scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_3D')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Meteorological Forcings
**
4.1. Variables 3D
Is Required: FALSE Type: STRING Cardinality: 0.1
Three dimensional forcing variables, e.g. U, V, W, T, Q, P, convective mass flux
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_2D')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. Variables 2D
Is Required: FALSE Type: STRING Cardinality: 0.1
Two dimensional forcing variables, e.g. land-sea mask definition
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.frequency')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.3. Frequency
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Frequency with which meteorological forcings are applied (in seconds).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Resolution
Resolution in the aerosol model grid
5.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.2. Canonical Horizontal Resolution
Is Required: FALSE Type: STRING Cardinality: 0.1
Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 5.3. Number Of Horizontal Gridpoints
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 5.4. Number Of Vertical Levels
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Number of vertical levels resolved on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 5.5. Is Adaptive Grid
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Default is False. Set true if grid resolution changes during execution.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Key Properties --> Tuning Applied
Tuning methodology for aerosol model
6.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics of the global mean state used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics of mean state used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Transport
Aerosol transport
7.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of transport in atmospheric aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Specific transport scheme (eulerian)"
# "Specific transport scheme (semi-lagrangian)"
# "Specific transport scheme (eulerian and semi-lagrangian)"
# "Specific transport scheme (lagrangian)"
# TODO - please enter value(s)
Explanation: 7.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method for aerosol transport modeling
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.mass_conservation_scheme')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Mass adjustment"
# "Concentrations positivity"
# "Gradients monotonicity"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 7.3. Mass Conservation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.N
Method used to ensure mass conservation.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.convention')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Convective fluxes connected to tracers"
# "Vertical velocities connected to tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 7.4. Convention
Is Required: TRUE Type: ENUM Cardinality: 1.N
Transport by convention
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Emissions
Atmospheric aerosol emissions
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of emissions in atmospheric aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Prescribed (climatology)"
# "Prescribed CMIP6"
# "Prescribed above surface"
# "Interactive"
# "Interactive above surface"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.2. Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Method used to define aerosol species (several methods allowed because the different species may not use the same method).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Vegetation"
# "Volcanos"
# "Bare ground"
# "Sea surface"
# "Lightning"
# "Fires"
# "Aircraft"
# "Anthropogenic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.3. Sources
Is Required: FALSE Type: ENUM Cardinality: 0.N
Sources of the aerosol species are taken into account in the emissions scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_climatology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Interannual"
# "Annual"
# "Monthly"
# "Daily"
# TODO - please enter value(s)
Explanation: 8.4. Prescribed Climatology
Is Required: FALSE Type: ENUM Cardinality: 0.1
Specify the climatology type for aerosol emissions
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.5. Prescribed Climatology Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and prescribed via a climatology
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.6. Prescribed Spatially Uniform Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and prescribed as spatially uniform
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.7. Interactive Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and specified via an interactive method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.8. Other Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and specified via an "other method"
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.other_method_characteristics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.9. Other Method Characteristics
Is Required: FALSE Type: STRING Cardinality: 0.1
Characteristics of the "other method" used for aerosol emissions
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Concentrations
Atmospheric aerosol concentrations
9.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of concentrations in atmospheric aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_lower_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.2. Prescribed Lower Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the lower boundary.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_upper_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.3. Prescribed Upper Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the upper boundary.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_mmr')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.4. Prescribed Fields Mmr
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed as mass mixing ratios.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_mmr')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.5. Prescribed Fields Mmr
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed as AOD plus CCNs.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10. Optical Radiative Properties
Aerosol optical and radiative properties
10.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of optical and radiative properties
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.black_carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11. Optical Radiative Properties --> Absorption
Absorption properties in aerosol scheme
11.1. Black Carbon
Is Required: FALSE Type: FLOAT Cardinality: 0.1
Absorption mass coefficient of black carbon at 550nm (if non-absorbing enter 0)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.dust')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11.2. Dust
Is Required: FALSE Type: FLOAT Cardinality: 0.1
Absorption mass coefficient of dust at 550nm (if non-absorbing enter 0)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.organics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11.3. Organics
Is Required: FALSE Type: FLOAT Cardinality: 0.1
Absorption mass coefficient of organics at 550nm (if non-absorbing enter 0)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.external')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 12. Optical Radiative Properties --> Mixtures
**
12.1. External
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there external mixing with respect to chemical composition?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.internal')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 12.2. Internal
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there internal mixing with respect to chemical composition?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.mixing_rule')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.3. Mixing Rule
Is Required: FALSE Type: STRING Cardinality: 0.1
If there is internal mixing with respect to chemical composition, then indicate the mixing rule
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.size')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 13. Optical Radiative Properties --> Impact Of H2o
**
13.1. Size
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does H2O impact size?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.internal_mixture')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 13.2. Internal Mixture
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does H2O impact internal mixture?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14. Optical Radiative Properties --> Radiative Scheme
Radiative scheme for aerosol
14.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of radiative scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.shortwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.2. Shortwave Bands
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of shortwave bands
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.longwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.3. Longwave Bands
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of longwave bands
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15. Optical Radiative Properties --> Cloud Interactions
Aerosol-cloud interactions
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of aerosol-cloud interactions
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 15.2. Twomey
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the Twomey effect included?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey_minimum_ccn')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.3. Twomey Minimum Ccn
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If the Twomey effect is included, then what is the minimum CCN number?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.drizzle')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 15.4. Drizzle
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the scheme affect drizzle?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.cloud_lifetime')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 15.5. Cloud Lifetime
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the scheme affect cloud lifetime?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.longwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.6. Longwave Bands
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of longwave bands
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 16. Model
Aerosol model
16.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of atmospheric aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Dry deposition"
# "Sedimentation"
# "Wet deposition (impaction scavenging)"
# "Wet deposition (nucleation scavenging)"
# "Coagulation"
# "Oxidation (gas phase)"
# "Oxidation (in cloud)"
# "Condensation"
# "Ageing"
# "Advection (horizontal)"
# "Advection (vertical)"
# "Heterogeneous chemistry"
# "Nucleation"
# TODO - please enter value(s)
Explanation: 16.2. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Processes included in the Aerosol model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Radiation"
# "Land surface"
# "Heterogeneous chemistry"
# "Clouds"
# "Ocean"
# "Cryosphere"
# "Gas phase chemistry"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.3. Coupling
Is Required: FALSE Type: ENUM Cardinality: 0.N
Other model components coupled to the Aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.gas_phase_precursors')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "DMS"
# "SO2"
# "Ammonia"
# "Iodine"
# "Terpene"
# "Isoprene"
# "VOC"
# "NOx"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.4. Gas Phase Precursors
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of gas phase aerosol precursors.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Bulk"
# "Modal"
# "Bin"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.5. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Type(s) of aerosol scheme used by the aerosols model (potentially multiple: some species may be covered by one type of aerosol scheme and other species covered by another type).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.bulk_scheme_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Nitrate"
# "Sea salt"
# "Dust"
# "Ice"
# "Organic"
# "Black carbon / soot"
# "SOA (secondary organic aerosols)"
# "POM (particulate organic matter)"
# "Polar stratospheric ice"
# "NAT (Nitric acid trihydrate)"
# "NAD (Nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particule)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.6. Bulk Scheme Species
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of species covered by the bulk scheme.
End of explanation |
2,545 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Title
Step1: Create dataframe
Step2: Scatterplot of preTestScore and postTestScore, with the size of each point determined by age
Step3: Scatterplot of preTestScore and postTestScore with the size = 300 and the color determined by sex | Python Code:
%matplotlib inline
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
Explanation: Title: Making A Matplotlib Scatterplot From A Pandas Dataframe
Slug: matplotlib_scatterplot_from_pandas
Summary: Making A Matplotlib Scatterplot From A Pandas Dataframe
Date: 2016-05-01 12:00
Category: Python
Tags: Data Visualization
Authors: Chris Albon
Based on: StackOverflow.
import modules
End of explanation
raw_data = {'first_name': ['Jason', 'Molly', 'Tina', 'Jake', 'Amy'],
'last_name': ['Miller', 'Jacobson', 'Ali', 'Milner', 'Cooze'],
'female': [0, 1, 1, 0, 1],
'age': [42, 52, 36, 24, 73],
'preTestScore': [4, 24, 31, 2, 3],
'postTestScore': [25, 94, 57, 62, 70]}
df = pd.DataFrame(raw_data, columns = ['first_name', 'last_name', 'age', 'female', 'preTestScore', 'postTestScore'])
df
Explanation: Create dataframe
End of explanation
plt.scatter(df.preTestScore, df.postTestScore, s=df.age)
Explanation: Scatterplot of preTestScore and postTestScore, with the size of each point determined by age
End of explanation
plt.scatter(df.preTestScore, df.postTestScore, s=300, c=df.female)
Explanation: Scatterplot of preTestScore and postTestScore with the size = 300 and the color determined by sex
End of explanation |
2,546 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
1. Import the necessary packages to read in the data, plot, and create a linear regression model
Step1: 2. Read in the hanford.csv file
Step2: <img src="images/hanford_variables.png">
3. Calculate the basic descriptive statistics on the data
Central Tendency
Step3: Median
Step4: Mode
Step5: Spread
Step6: Interquartile Range
Step7: Standard Deviation
Step8: 4. Calculate the coefficient of correlation (r) and generate the scatter plot. Does there seem to be a correlation worthy of investigation?
Step9: Yes.
5. Create a linear regression model based on the available data to predict the mortality rate given a level of exposure
Step10: 6. Plot the linear regression line on the scatter plot of values. Calculate the r^2 (coefficient of determination)
Step11: 7. Predict the mortality rate (Cancer per 100,000 man years) given an index of exposure = 10 | Python Code:
import pandas as pd
import statsmodels.formula.api as smf
import matplotlib
%matplotlib inline
import matplotlib.pyplot as plt
Explanation: 1. Import the necessary packages to read in the data, plot, and create a linear regression model
End of explanation
df = pd.read_csv('../data/hanford.csv')
df.head()
Explanation: 2. Read in the hanford.csv file
End of explanation
df.mean()
Explanation: <img src="images/hanford_variables.png">
3. Calculate the basic descriptive statistics on the data
Central Tendency:
Mean
End of explanation
df.median()
Explanation: Median
End of explanation
df.mode()
Explanation: Mode
End of explanation
max(df['Exposure']) - min(df['Exposure'])
max(df['Mortality']) - min(df['Mortality'])
Explanation: Spread:
Range
End of explanation
df['Exposure'].quantile(q=0.75) - df['Exposure'].quantile(q=0.25)
df['Mortality'].quantile(q=0.75) - df['Mortality'].quantile(q=0.25)
Explanation: Interquartile Range
End of explanation
df.std()
Explanation: Standard Deviation
End of explanation
df.corr()
df.plot(kind = 'scatter', x = 'Exposure', y = 'Mortality')
Explanation: 4. Calculate the coefficient of correlation (r) and generate the scatter plot. Does there seem to be a correlation worthy of investigation?
End of explanation
lm = smf.ols(formula = 'Mortality~Exposure', data = df).fit()
b, m = lm.params
def predicted_mortality_rate(exposure):
y = m * exposure + b
return y
Explanation: Yes.
5. Create a linear regression model based on the available data to predict the mortality rate given a level of exposure
End of explanation
df.plot(kind = 'scatter', x = 'Exposure', y = 'Mortality')
plt.plot(df['Exposure'], m * df['Exposure'] + b, '-', color = 'red')
Explanation: 6. Plot the linear regression line on the scatter plot of values. Calculate the r^2 (coefficient of determination)
End of explanation
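The r^2 mentioned in the step above is not actually computed in the cell; it can be read directly off the fitted statsmodels results object, and for a single predictor it equals the squared correlation:
# coefficient of determination for the fitted model above
print(lm.rsquared)
# cross-check for simple regression: r^2 equals r squared
print(df['Exposure'].corr(df['Mortality']) ** 2)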
predicted_mortality_rate(10)
Explanation: 7. Predict the mortality rate (Cancer per 100,000 man years) given an index of exposure = 10
End of explanation |
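Equivalently, the fitted results object can make this prediction directly, which is a useful cross-check on the hand-written helper above:
# cross-check using statsmodels' own predict method
print(lm.predict(pd.DataFrame({'Exposure': [10]})))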
2,547 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Generalized Linear Models (Formula)
This notebook illustrates how you can use R-style formulas to fit Generalized Linear Models.
To begin, we load the Star98 dataset, construct a formula, and pre-process the data
Step1: Then, we fit the GLM model
Step2: Finally, we define a function to perform a custom data transformation within the formula framework
Step3: As expected, the coefficient for double_it(LOWINC) in the second model is half the size of the LOWINC coefficient from the first model | Python Code:
import statsmodels.api as sm
import statsmodels.formula.api as smf
star98 = sm.datasets.star98.load_pandas().data
formula = "SUCCESS ~ LOWINC + PERASIAN + PERBLACK + PERHISP + PCTCHRT + \
PCTYRRND + PERMINTE*AVYRSEXP*AVSALK + PERSPENK*PTRATIO*PCTAF"
dta = star98[
[
"NABOVE",
"NBELOW",
"LOWINC",
"PERASIAN",
"PERBLACK",
"PERHISP",
"PCTCHRT",
"PCTYRRND",
"PERMINTE",
"AVYRSEXP",
"AVSALK",
"PERSPENK",
"PTRATIO",
"PCTAF",
]
].copy()
endog = dta["NABOVE"] / (dta["NABOVE"] + dta.pop("NBELOW"))
del dta["NABOVE"]
dta["SUCCESS"] = endog
Explanation: Generalized Linear Models (Formula)
This notebook illustrates how you can use R-style formulas to fit Generalized Linear Models.
To begin, we load the Star98 dataset, construct a formula, and pre-process the data:
End of explanation
mod1 = smf.glm(formula=formula, data=dta, family=sm.families.Binomial()).fit()
print(mod1.summary())
Explanation: Then, we fit the GLM model:
End of explanation
def double_it(x):
return 2 * x
formula = "SUCCESS ~ double_it(LOWINC) + PERASIAN + PERBLACK + PERHISP + PCTCHRT + \
PCTYRRND + PERMINTE*AVYRSEXP*AVSALK + PERSPENK*PTRATIO*PCTAF"
mod2 = smf.glm(formula=formula, data=dta, family=sm.families.Binomial()).fit()
print(mod2.summary())
Explanation: Finally, we define a function to perform a custom data transformation within the formula framework:
End of explanation
print(mod1.params[1])
print(mod2.params[1] * 2)
Explanation: As expected, the coefficient for double_it(LOWINC) in the second model is half the size of the LOWINC coefficient from the first model:
End of explanation |
2,548 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
We will build a logistic regression model to predict whether a student gets admitted into a university.
We want to determine each applicant’s chance of admission based on their results on two exams. We have historical data from previous applicants that we can use as a training set for logistic regression. For each training example, we have the applicant’s scores on two exams and the admissions decision.
The task is to build a classification model that estimates an applicant's probability of admission based on the scores from those two exams.
Step1: Now let's start with our Logistic/Sigmoid Function
Step2: We will move ahead with our cost function
Step3: Lets try to achieve the same thing using scipy.optimize function | Python Code:
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
df = pd.read_csv('ex2data1.txt', header=None)
df.columns = ["score1", "score2", "res"]
pos = df[(df.res == 1)]
neg = df[(df.res == 0)]
plt.scatter(pos['score1'], pos['score2'], label='admitted')
plt.scatter(neg['score1'], neg['score2'], label='not admitted')
plt.legend()
plt.show()
#Data preparation
Xin = df.drop(['res'], axis=1).values
ones = np.ones((Xin.shape[0], 1), float)
X = np.concatenate((ones,Xin), axis=1)
y = df['res'].values
Explanation: We will build a logistic regression model to predict whether a student gets admitted into a university.
We want to determine each applicant’s chance of admission based on their results on two exams. We have historical data from previous applicants that we can use as a training set for logistic regression. For each training example, we have the applicant’s scores on two exams and the admissions decision.
The task is to build a classification model that estimates an applicant's probability of admission based on the scores from those two exams.
End of explanation
def sigmoid(z):
g = 1/(1 + np.exp(-z))
return g
Explanation: Now let's start with our Logistic/Sigmoid Function
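For reference, the function defined above is the standard logistic (sigmoid) function, which maps any real-valued score to a probability in (0, 1); the model's hypothesis is the sigmoid of a linear score:
$$g(z) = \frac{1}{1 + e^{-z}}, \qquad h_\theta(x) = g(\theta^T x).$$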
End of explanation
def costFunction(theta, X, y):
m = X.shape[0]
h = sigmoid(X.dot(theta))
J = -1 * (1/m) * (np.log(h).T.dot(y) + np.log(1-h).T.dot(1-y))
return J
def gradient(theta, X, y):
m = X.shape[0]
h = sigmoid(X.dot(theta))
grad = (1/m) * (X.T.dot(h-y))
return (grad.flatten())
theta = np.zeros(X.shape[1])
cost = costFunction(theta, X, y)
grad = gradient(theta, X, y)
print('cost ', cost)
def gradientDescentMulti(X, y, theta, alpha, num_iters):
m = X.shape[0]
J_history = np.zeros(num_iters)
for iter in np.arange(num_iters):
theta = theta - alpha * gradient(theta, X, y)
J_history[iter] = costFunction(theta, X, y)
return (theta, J_history)
# now let's run the gradient descent
alpha = 0.00001;
num_iters = 200;
theta = np.zeros(X.shape[1])
theta, J_history = gradientDescentMulti(X, y, theta, alpha, num_iters)
plt.xlim(0,num_iters)
plt.plot(J_history)
plt.ylabel('Cost J')
plt.xlabel('Iterations')
plt.show()
print('theta ', theta)
testXs = np.array([[1, 56, 23], [1, 99, 89],[1, 52, 89],[1, 1, 99],[1, 82, 23],])
predictions = testXs.dot(theta)
print(predictions)
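# note: these are raw linear scores (theta^T x); pass them through sigmoid() to get admission probabilities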
Explanation: We will move ahead with our cost function
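For reference, costFunction and gradient above implement the vectorised logistic-regression cost and its gradient, with $h = g(X\theta)$:
$$J(\theta) = -\frac{1}{m}\Big[\,y^T \log(h) + (1-y)^T \log(1-h)\,\Big], \qquad \nabla_\theta J(\theta) = \frac{1}{m} X^T (h - y).$$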
End of explanation
from scipy.optimize import minimize
initial_theta = np.zeros(X.shape[1])
res = minimize(costFunction, initial_theta, args=(X,y), method=None, jac=gradient, options={'maxiter':400})
x1_min, x1_max = X[:,1].min(), X[:,1].max(),
x2_min, x2_max = X[:,2].min(), X[:,2].max(),
xx1, xx2 = np.meshgrid(np.linspace(x1_min, x1_max), np.linspace(x2_min, x2_max))
h = sigmoid(np.c_[np.ones((xx1.ravel().shape[0],1)), xx1.ravel(), xx2.ravel()].dot(res.x))
h = h.reshape(xx1.shape)
plt.contour(xx1, xx2, h, [0.5], linewidths=1, colors='b');
plt.scatter(pos['score1'], pos['score2'], label='admitted')
plt.scatter(neg['score1'], neg['score2'], label='not admitted')
plt.legend()
plt.show()
Explanation: Let's try to achieve the same thing using the scipy.optimize function
End of explanation |
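With the optimised parameters in res.x, a probability for a new applicant can be obtained by passing the score vector (including the intercept term) through the sigmoid; for example, for a hypothetical applicant scoring 45 and 85 on the two exams:
# predicted admission probability for a hypothetical applicant (scores 45 and 85)
print(sigmoid(np.array([1, 45, 85]).dot(res.x)))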
2,549 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Advanced Dictionaries
Unlike some of the other Data Structures we've worked with, most of the really useful methods available to us in Dictionaries have already been explored throughout this course. Here we will touch on just a few more for good measure
Step1: Dictionary Comprehensions
Just like List Comprehensions, Dictionary Data Types also support their own version of comprehension for quick creation. It is not as commonly used as List Comprehensions, but the syntax is
Step2: One of the reasons it is not as common is the difficulty in structuring the key names that are not based off the values.
Iteration over keys, values, and items
Dictionaries can be iterated over using the iter methods available in a dictionary. For example
Step3: View items, keys, and values
You can use the view methods to view items keys and values. For example | Python Code:
d = {'k1':1,'k2':2}
Explanation: Advanced Dictionaries
Unlike some of the other Data Structures we've worked with, most of the really useful methods available to us in Dictionaries have already been explored throughout this course. Here we will touch on just a few more for good measure:
End of explanation
{x:x**2 for x in range(10)}
Explanation: Dictionary Comprehensions
Just like List Comprehensions, Dictionary Data Types also support their own version of comprehension for quick creation. It is not as commonly used as List Comprehensions, but the syntax is:
End of explanation
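# note: iterkeys/itervalues/iteritems below are Python 2 methods; in Python 3 use d.keys(), d.values(), d.items()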
for k in d.iterkeys():
print k
for v in d.itervalues():
print v
for item in d.iteritems():
print item
Explanation: One of the reasons it is not as common is the difficulty in structuring the key names that are not based off the values.
Iteration over keys, values, and items
Dictionaries can be iterated over using the iter methods available in a dictionary. For example:
End of explanation
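To illustrate the point above about key names, one option (an illustrative example, not from the original notebook) is to zip a separate sequence of keys with the values:
{k: v**2 for k, v in zip(['a', 'b', 'c'], range(3))}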
d.viewitems()
d.viewkeys()
d.viewvalues()
Explanation: View items, keys, and values
You can use the view methods to view items, keys, and values. For example:
End of explanation |
2,550 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Different ways to load an input graph
We recommend using the GML graph format to load a graph. You can also use the DOT format, which requires additional dependencies (either pydot or pygraphviz).
DoWhy supports both loading a graph as a string, or as a file (with the extensions 'gml' or 'dot').
Below is an example showing the different ways of loading the same graph.
Step1: I. Generating dummy data
We generate some dummy data for three variables
Step3: II. Loading GML or DOT graphs
GML format
Step4: DOT format | Python Code:
import os, sys
import random
sys.path.append(os.path.abspath("../../../"))
import numpy as np
import pandas as pd
import dowhy
from dowhy import CausalModel
from IPython.display import Image, display
Explanation: Different ways to load an input graph
We recommend using the GML graph format to load a graph. You can also use the DOT format, which requires additional dependencies (either pydot or pygraphviz).
DoWhy supports both loading a graph as a string, or as a file (with the extensions 'gml' or 'dot').
Below is an example showing the different ways of loading the same graph.
End of explanation
z=[i for i in range(10)]
random.shuffle(z)
df = pd.DataFrame(data = {'Z': z, 'X': range(0,10), 'Y': range(0,100,10)})
df
Explanation: I. Generating dummy data
We generate some dummy data for three variables: X, Y and Z.
End of explanation
# With GML string
model=CausalModel(
data = df,
treatment='X',
outcome='Y',
graph="""graph[directed 1 node[id "Z" label "Z"]
node[id "X" label "X"]
node[id "Y" label "Y"]
edge[source "Z" target "X"]
edge[source "Z" target "Y"]
edge[source "X" target "Y"]]"""
)
model.view_model()
display(Image(filename="causal_model.png"))
# With GML file
model=CausalModel(
data = df,
treatment='X',
outcome='Y',
graph="../example_graphs/simple_graph_example.gml"
)
model.view_model()
display(Image(filename="causal_model.png"))
Explanation: II. Loading GML or DOT graphs
GML format
End of explanation
# With DOT string
model=CausalModel(
data = df,
treatment='X',
outcome='Y',
graph="digraph {Z -> X;Z -> Y;X -> Y;}"
)
model.view_model()
from IPython.display import Image, display
display(Image(filename="causal_model.png"))
# With DOT file
model=CausalModel(
data = df,
treatment='X',
outcome='Y',
graph="../example_graphs/simple_graph_example.dot"
)
model.view_model()
display(Image(filename="causal_model.png"))
Explanation: DOT format
End of explanation |
2,551 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
OM10 Tutorial
In this notebook we demonstrate the basic functionality of the om10 package, including how to
Step1: Selecting Mock Lens Samples
Let's look at what we might expect from DES and LSST, by making two different selections from the OM10 database.
Step2: Visualizing Lens Systems
Let's pull out some lenses and see what they look like. | Python Code:
from __future__ import division, print_function
import os, numpy as np
import matplotlib
matplotlib.use('TkAgg')
%matplotlib inline
import om10
%load_ext autoreload
%autoreload 2
Explanation: OM10 Tutorial
In this notebook we demonstrate the basic functionality of the om10 package, including how to:
Make some "standard" mock lensed quasar samples;
Visualize those samples;
Inspect individual systems.
Requirements
You will need to have followed the installation instructions in the OM10 README.
End of explanation
quads, doubles = {}, {}
DES = om10.DB()
DES.select_random(maglim=23.6, area=5000.0, IQ=0.9)
quads['DES'] = DES.sample[DES.sample['NIMG'] == 4]
doubles['DES'] = DES.sample[DES.sample['NIMG'] == 2]
print('Predicted number of DES quads, doubles: ', len(quads['DES']),',',len(doubles['DES']))
print('Predicted DES quad fraction: ', str(int(100.0*len(quads['DES'])/(1.0*len(doubles['DES']))))+'%')
LSST = om10.DB()
LSST.select_random(maglim=23.3, area=18000.0, IQ=0.7)
quads['LSST'] = LSST.sample[LSST.sample['NIMG'] == 4]
doubles['LSST'] = LSST.sample[LSST.sample['NIMG'] == 2]
print('Predicted number of LSST quads, doubles: ', len(quads['LSST']),',',len(doubles['LSST']))
print('Predicted LSST quad fraction: ', str(int(100.0*len(quads['LSST'])/(1.0*len(doubles['LSST']))))+'%')
fig = om10.plot_sample(doubles['LSST'], color='blue')
fig = om10.plot_sample(quads['LSST'], color='red', fig=fig)
Explanation: Selecting Mock Lens Samples
Let's look at what we might expect from DES and LSST, by making two different selections from the OM10 database.
End of explanation
db = om10.DB()
# Pull out a specific lens and plot it:
id = 7176527
lens = db.get_lens(id)
om10.plot_lens(lens)
# Plot 3 random lenses from a given survey and plot them:
db.select_random(maglim=21.4, area=30000.0, IQ=1.0, Nlens=3)
for id in db.sample['LENSID']:
lens = db.get_lens(id)
om10.plot_lens(lens, IQ=1.0)
Explanation: Visualizing Lens Systems
Let's pull out some lenses and see what they look like.
End of explanation |
2,552 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Part I
Step1: We're going to investigate the set of data on the passengers of the Titanic. The datasets I'm providing come from the website http
Step2: Note that the above summary gives you information about missing values indirectly
Step3: Let's head towards developing a simple predictive model for who survives the shipwreck. For now we'll explore and clean the data; later on we'll implement the model. Let's look at the distributions of some of the data.
Step4: This will be easier to analyze via proportions. Below, 'axis=1' means to apply the sum horizontally ('axis = 0' would mean to apply the sum vertically)
Step5: At this point, we're going to start addressing the missing values in Age, Embarked, Cabin, etc. It is important that any changes we make to 'train' are also made to 'test,' otherwise any predictive model we build will be flawed. Let's make a combined data frame.
Step6: Below are some examples of how we could use the multi-index on the rows.
Step7: To use the .loc method of selecting data instead, you could first select the 'outer' indices, then the 'inner' indices.
Step8: This isn't a flawless fix; the downside is that filling in missing values this way can distort the distribution of Age (it may reduce the variance, for example) and may bias the model. On the other hand, no imputation method is perfect and you may get better results than if you did not impute the missing values. Later we'll look at more advanced methods for imputing missing values.
Step9: Exercise
How would you handle the missing values for Cabin, Embarked?
Step10: Solution
Don't look until you've thought about it a bit.
There are a lot of missing Cabin values
Step11: The simplest answer is to not include this column in the model for now. Three-quarters of the information in this column is missing, which makes it difficult to say much. However, it may be possible to infer some of the missing values based on implied relationships between passengers, perhaps by looking at last names, number of siblings on board, and so on. It's worth further investigation, but for now let's leave it.
Step12: We have only two missing values, and the majority of the passengers embarked in Southampton (UK). Absent any other information, I'd guess that the two passengers embarked in Southampton as well.
Step13: Saving your work as csv
Step14: or as a pickle
Step15: Part II
Pandas plays well with most databases. Form a connection to the database, then write ordinary SQL queries to bring data into Python for analysis. Here is an example with a sqlite3 database (this database is too large to host on github as is, so just follow along for this example). | Python Code:
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
plt.rcParams['figure.figsize']=(8,5) # optional
plt.style.use('bmh') # optional
Explanation: Part I
End of explanation
#change the paths as needed
train = pd.read_csv('../data/titanic_train.csv')
test = pd.read_csv('../data/titanic_test.csv')
print(train.info()) # overview of the training data
print(test.info()) # note no 'Survived' column
train.head() # first few rows
print(train.shape) # 891 rows, 12 columns
print(test.shape) # 418 rows, 11 columns
train.shape[0]
train.describe(include='all') # summary statistics
Explanation: We're going to investigate the set of data on the passengers of the Titanic. The datasets I'm providing come from the website http://www.kaggle.com/c/titanic. You can download it there - you'll need to sign up and agree to the rules of the competition.
End of explanation
print(train.isnull().sum()) # the sum is taken columnwise
Explanation: Note that the above summary gives you information about missing values indirectly: for instance, we have 891 observations in the data, but the count for 'Age' is only 714, implying that we don't have the age for 177 passengers. A direct way to get this is by combining the command isnull() with sum():
End of explanation
train.hist(column='Age');
train.hist(column=['Age', 'Fare'], figsize=(16, 6));
pd.crosstab(train['Sex'], train['Survived'], margins=True) # 'margins' gives total counts
Explanation: Let's head towards developing a simple predictive model for who survives the shipwreck. For now we'll explore and clean the data; later on we'll implement the model. Let's look at the distributions of some of the data.
End of explanation
# saving the table for plotting
table = pd.crosstab(train.Sex, train.Survived).apply(lambda x: x/x.sum(), axis=1) # no use for margins here
table
train['Survived'].sum() / train.shape[0]
table.plot(kind='bar', stacked=True); # another plot call directly from a Pandas object
table.plot(kind='bar'); # without stacking
Explanation: This will be easier to analyze via proportions. Below, 'axis=1' means to apply the sum horizontally ('axis = 0' would mean to apply the sum vertically)
End of explanation
test['Survived'] = 0 # create a new 'Survived' column in test, set all values to 0
test.head()
alldata = pd.concat([train,test], keys=['train', 'test']) # tag the training and test data with a multiindex
alldata.tail() #note the multiindex on the rows
alldata.shape[0] == 891 + 418
Explanation: At this point, we're going to start addressing the missing values in Age, Embarked, Cabin, etc. It is important that any changes we make to 'train' are also made to 'test,' otherwise any predictive model we build will be flawed. Let's make a combined data frame.
End of explanation
print(alldata.ix['train', 'Name'])
print(alldata.ix[1, 'Name'])
print(alldata.ix[892, 'Name']) # in test set
print(alldata.ix['test', 'Name'][:5]) # note the numerical indices start at 0
Explanation: Below are some examples of how we could use the multi-index on the rows.
End of explanation
print(alldata.loc['test', 'Name'][1])
sum(alldata.Age.isnull())
alldata.Age.var() # the variance of the Age column (missing values excluded)
alldata.Age.interpolate(inplace=True) # fills in missing values by interpolating between neighboring non-missing rows
sum(alldata.Age.isnull())
Explanation: To use the .loc method of selecting data instead, you could first select the 'outer' indices, then the 'inner' indices.
End of explanation
alldata.Age.var()
Explanation: This isn't a flawless fix; interpolation fills each gap from neighboring rows, which can shift the mean, shrink the variance, and distort the model. On the other hand, no imputation method is perfect, and you may get better results than if you did not impute the missing values at all. Later we'll look at more advanced methods for imputing missing values.
End of explanation
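# A sketch of the kind of "more advanced method" mentioned above: impute Age from the
# median within a group (here Sex) rather than plain interpolation. Shown only for
# illustration; the interpolated Age column above is left as-is.
age_median_by_sex = alldata.groupby('Sex')['Age'].median()
print(age_median_by_sex)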
# your code here
print(alldata.Embarked.isnull().sum())
print(alldata.Embarked.value_counts())
Explanation: Exercise
How would you handle the missing values for Cabin, Embarked?
End of explanation
alldata.Cabin.isnull().sum()
Explanation: Solution
Don't look until you've thought about it a bit.
There are a lot of missing Cabin values:
End of explanation
alldata.drop('Cabin', inplace=True, axis=1)
alldata.Embarked.isnull().sum()
alldata.Embarked.value_counts()
Explanation: The simplest answer is to not include this column in the model for now. Three-quarters of the information in this column is missing, which makes it difficult to say much. However, it may be possible to infer some of the missing values based on implied relationships between passengers, perhaps by looking at last names, number of siblings on board, and so on. It's worth further investigation, but for now let's leave it.
End of explanation
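# A quick start on the idea above (sketch only): pull the last name out of the Name column
# and see which families appear more than once -- such groups are the natural candidates
# for trying to infer missing values like Cabin from related passengers.
last_names = alldata['Name'].str.split(',').str[0]
print(last_names.value_counts().head(10))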
# use the 'fillna' method
alldata.Embarked.fillna('S', inplace=True)
alldata.Embarked.isnull().sum()
cleaned_train = alldata.loc['train', :] #first 891 rows, use multi-index to select
cleaned_test = alldata.loc['test',:] #last 418 rows, same
print(cleaned_train.shape)
print(cleaned_test.shape)
Explanation: We have only two missing values, and the majority of the passengers embarked in Southampton (UK). Absent any other information, I'd guess that the two passengers embarked in Southampton as well.
End of explanation
cleaned_train.to_csv('../data/cleaned_train.csv', index=False)
cleaned_test.to_csv('../data/cleaned_test.csv', index=False)
Explanation: Saving your work as csv:
End of explanation
cleaned_train.to_pickle('../data/cleaned_train.p') # index is not an option for a pickle
cleaned_test.to_pickle('../data/cleaned_test.p')
del cleaned_train
whos
# verify that the pickle looks like the same object
ct = pd.read_pickle('../data/cleaned_train.p')
ct.describe(include='all')
# other options - autocomplete below:
# ct.to   # type `ct.to` and press Tab in the notebook to list the available to_* export methods
Explanation: or as a pickle:
End of explanation
import sqlite3
con = sqlite3.connect('/Volumes/data/taxis_old/taxis.sqlite3')
pd.read_sql_query('select * from sqlite_master', con)
# metadata
pd.read_sql_query("select name from sqlite_master where type = 'table'", con)
names = pd.read_sql_query('select sql from sqlite_master where tbl_name="trip_data"', con)
names.ix[0,0]
pd.read_sql_query("select count(*) from trip_data", con) # takes a while
tips = pd.read_sql_query('select tip_amount from fare_data where tip_amount > 0', con) # takes a while
tips.tip_amount.max()
tips.describe()
tips.hist(bins=500)
plt.xlim(0, 20);
whos
tips.info()
tips[tips.tip_amount > 800].shape
names = pd.read_sql_query('select sql from sqlite_master where tbl_name="fare_data"', con)
names.ix[0,0]
# how much time do you have?
# big_fares = pd.read_sql_query('select * from fare_data ' \
# 'inner join trip_data where trip_data.pickup_datetime = fare_data.pickup_datetime and ' \
# 'trip_data.medallion = fare_data.medallion and ' \
# 'trip_data.hack_license = fare_data.medallion and ' \
# 'fare_data.fare_amount > 100 limit 500', con)
con.close() # don't forget to close the connection!!!!!
pd.read_sql_query("select count(*) from trip_data", con)
whos
Explanation: Part II
Pandas plays well with most databases. Form a connection to the database, then write ordinary SQL queries to bring data into Python for analysis. Here is an example with a sqlite3 database (this database is too large to host on github as is, so just follow along for this example).
End of explanation |
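# For result sets too large to load at once, pandas can also stream a query in chunks
# (sketch; it reuses the sqlite3 connection pattern and example database path from above):
con = sqlite3.connect('/Volumes/data/taxis_old/taxis.sqlite3')
n_tipped = 0
for chunk in pd.read_sql_query('select tip_amount from fare_data', con, chunksize=100000):
    n_tipped += (chunk['tip_amount'] > 0).sum()
print(n_tipped)
con.close()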
2,553 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Below done so far
Step1: Create variables for URLs. The base_url is for the search_recipes API call. The metadata_url is for searching for valid search terms.
Step2: Extracting data
Step3: Next few blocks contain some old code keeping in case needed
Step4: Deal with ingredients - below code is messy and work in progress
Step5: Build recipe info DataFrame
Step6: Build Courses DF
Step7: Build Cuisine DF
Step8: Build Ingredients DF
Step9: Metadata Searches and Building Master Lists | Python Code:
# imports
import requests
import json
import pandas as pd
import numpy as np
# ID and Key
app_id = 'e2b9bebc'
app_key = '4193215272970d956cfd5384a08580a9'
Explanation: Below done so far:
- access Yummly API with "Search Recipes API Call"
- search for "chicken" recipes
- convert JSON into dicts and lists with .json() function
- extract data from converted JSON
- "recipe" table
- "course" table
- "ingredients" table
- "flavors" table
- search Yummly metadata dictionary for valid search terms
- look at an individual recipe in detail with "Get Recipe API Call"
To do:
- identify if any info from the recipe_detail can be used
- come up with systematic way to pull recipes from API
- one possibility:
- store data from API calls in SQL database / csvs
End of explanation
# URLs
base_url = 'http://api.yummly.com/v1/api/recipes?'
metadata_url = 'http://api.yummly.com/v1/api/metadata/'
# headers with yummly ID and Key
headers = {'X-Yummly-App-ID':'e2b9bebc', 'X-Yummly-App-Key':'4193215272970d956cfd5384a08580a9'}
# params
parameters = {'q':'fajitas', 'maxResult': 100}
# NOTE: maxResult can be 1,000, limiting to 100 for now
# Call API
response = requests.get(base_url, headers=headers, params=parameters)
# Check status code
response.status_code
# Convert JSON to python dictionaries and lists
guac = response.json()
# View type of object it is
type(guac)
# View top level keys
response_keys = guac.keys()
response_keys
guac['totalMatchCount']
# The matches key has all the data in it - view a sub dictionary
guac['matches'][11]
Explanation: Create variables for URLs. The base_url is for the search_recipes API call. The metadata_url is for searching for valid search terms.
End of explanation
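# Sketch for the "systematic way to pull recipes" item on the to-do list above. It assumes
# the search endpoint also accepts a `start` offset alongside `maxResult` (worth checking in
# the API docs); pages are accumulated into one list that could later be written to csv or SQL.
all_matches = []
for start in range(0, 500, 100):
    page = requests.get(base_url, headers=headers,
                        params={'q': 'chicken', 'maxResult': 100, 'start': start})
    if page.status_code != 200:
        break
    all_matches.extend(page.json().get('matches', []))
print(len(all_matches))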
# Create dicts to put data into
recipe_info_dict = {}
flavors_dict = {}
ingredients_dict = {}
courses_dict = {}
cuisine_dict = {}
# pull data in for loop
for item in guac['matches']:
# Get basic recipe info and put into list
recipe_info = []
recipe_info.append(item.get('recipeName'))
recipe_info.append(item.get('totalTimeInSeconds'))
recipe_info.append(item.get('sourceDisplayName'))
recipe_info.append(item.get('rating'))
# Add to recipe_info_dict
recipe_info_dict[item.get('id')] = recipe_info
# Add data to dicts for courses, flavors and cuisines
courses_dict[item.get('id')] = item['attributes'].get('course')
flavors_dict[item.get('id')] = item.get('flavors')
cuisine_dict[item.get('id')] = item['attributes'].get('cuisine')
Explanation: Extracting data
End of explanation
# for item in guac['matches']:
# recipe_info = [item.get('recipeName'), item['attributes'].get('cuisine'),
# item.get('totalTimeInSeconds'), item.get('sourceDisplayName'),
# item.get('rating')]
# recipe_info.append(item.get('recipeName'))
# recipe_info.append(item['attributes'].get('cuisine'))
# recipe_info.append(item.get('totalTimeInSeconds'))
# recipe_info.append(item.get('rating'))
# recipe_info.append(item.get('sourceDisplayName'))
# recipe_info.append(item.get('imageUrlsBySize').keys())
# rec_id.append(item.get('id'))
# rec_name.append(item.get('recipeName'))
# cuisine.append(item['attributes'].get('cuisine'))
# tot_time_sec.append(item.get('totalTimeInSeconds'))
# rec_source.append(item.get('sourceDisplayName'))
# image_size.append(item.get('imageUrlsBySize').keys())
# rating.append(item.get('rating'))
# rec_id.append(item.get('id'))
# rating.append(item.get('rating'))
# rec_source.append(item.get('sourceDisplayName'))
# recipe_info_dict[item.get('id')] =
# ingredients_dict[item.get('id')] = item.get('ingredients')
# for ingredient in item.get('ingredients'):
# ingredients_all_set.add(ingredient)
Explanation: The next few blocks contain some old code, kept in case it is needed later:
End of explanation
ingredients_dict_2 = {}
for pair in ingredients_dict.items():
# for i in range(len(pair)):
# if pair[i] in ingredients_all_set:
ing_list = []
for ing in ingredients_all_set:
ing_list.append
for i in range(len(pair[1])):
if pair[1][i] in ingredients_all:
ing_list.append(1)
else:
ing_list.append(0)
ingredients_dict_2[pair[0]] = ing_list
for pair in ingredients_dict.items():
for i in range(len(ingredients_all_set)):
for i in range(len(pair[1])):
if pair[1][i] in ingredients_all_set
ing_list = []
if
for i in range (len(ingredients_dict)):
ing_list = []
for ingredient in ingredients_all_set:
if ingredient in ingredients_dict.values()[i]:
ing_list.append(1)
else:
ing_list.append(0)
ingredients_dict_2[ingredients_dict.keys()[i]] = ing_list
ingredients_dict
ingredients_dict_2
for pair in ingredients_dict.items():
# for i in range(len(pair)):
# if pair[i] in ingredients_all_set:
ing_list = []
counter = len
for ing in ingredients_all_set:
if pair
for i in range(len(pair[1])):
if pair[1][i] in ingredients_all:
ing_list.append(1)
else:
ing_list.append(0)
ingredients_dict_2[pair[0]] = ing_list
Explanation: Deal with ingredients - the code in this section is messy and still a work in progress; a consolidated, working sketch follows below.
End of explanation
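# A consolidated, working sketch of the ingredient one-hot encoding attempted above.
# It rebuilds ingredients_dict here because the line populating it earlier is commented out.
ingredients_dict = {item.get('id'): item.get('ingredients') for item in guac['matches']}
ingredients_all = sorted({ing for ings in ingredients_dict.values() for ing in ings})
ingredients_dict_2 = {}
for rec_id, rec_ings in ingredients_dict.items():
    ingredients_dict_2[rec_id] = [1 if ing in rec_ings else 0 for ing in ingredients_all]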
recipe_info_dict[recipe_info_dict.keys()[1]]
recipe_info_dict.keys()[1]
recipe_info_df = pd.DataFrame.from_dict(recipe_info_dict, orient='index')
recipe_info_df.columns = ['rec_name', 'tot_time_seconds', 'rec_source',
'rating']
recipe_info_df.head()
# recipe_info_list = [rec_id, rec_name, cuisine, tot_time_sec, rec_source, image_size, rating]
# for i in recipe_info_list:
#     print len(i)
# create flavor_df
flavor_df = pd.DataFrame(flavors_dict).transpose()
# view flavor_df
flavor_df.head()
Explanation: Build recipe info DataFrame
End of explanation
# Code from stackoverflow
courses_df = pd.DataFrame(dict([ (k,pd.Series(v)) for k,v in courses_dict.iteritems() ])).transpose()
courses_df.columns = ['course_a', 'course_b']
courses_df.head(20)
Explanation: Build Courses DF
End of explanation
cuisine_df = pd.DataFrame(dict([ (k, pd.Series(v)) for k,v in cuisine_dict.iteritems() ])).transpose()
Explanation: Build Cuisine DF
End of explanation
# this is one way of doing it, but will need to change
ingredients_df = pd.DataFrame.from_dict(ingredients_dict, orient='index')
ingredients_df.head()
Explanation: Build Ingredients DF
End of explanation
# Search for valid course terms
meta_search_ingredient = requests.get('http://api.yummly.com/v1/api/metadata/ingredient?', headers=headers)
# Search for valid holiday terms
meta_search_course = requests.get('http://api.yummly.com/v1/api/metadata/course?', headers=headers)
# Search for valid cuisine terms
meta_search_cuisine = requests.get('http://api.yummly.com/v1/api/metadata/cuisine?', headers=headers)
meta_search_course.status_code
meta_search_course.text
# Can't do .json() for metadata response, instead do .text
# response = meta_search_ingredient.text[26:-2]
response = meta_search_course.text[23:-2]
course_list = json.loads(response)
course_list[11]
master_courses = []
for course in course_list:
master_courses.append(course['description'])
# parse the ingredient metadata the same way as the course metadata above
ingredient_list = json.loads(meta_search_ingredient.text[26:-2])
master_ingredients = []
for ingr in ingredient_list:
    master_ingredients.append(ingr['description'])
master_ingr_series = pd.Series(master_ingredients)
# master_ingr_series.to_csv('master_ingredients', encoding='utf-8')
ind_recipe = requests.get('http://api.yummly.com/v1/api/recipe/French-Onion-Soup-The-Pioneer-Woman-Cooks-_-Ree-Drummond-41364?_app_id=e2b9bebc&_app_key=4193215272970d956cfd5384a08580a9')
french_id = 'French-Onion-Soup-The-Pioneer-Woman-Cooks-_-Ree-Drummond-41364'
get_recipe_url = 'http://api.yummly.com/v1/api/recipe/'
id_and_key = '_app_id=e2b9bebc&_app_key=4193215272970d956cfd5384a08580a9'
get_recipe_url + french_id + '?' + id_and_key
id_parameters = {'recipe_id': 'French-Onion-Soup-The-Pioneer-Woman-Cooks-_-Ree-Drummond-41364'}
veal_scaloppine = requests.get(get_recipe_url, headers=headers, params=id_parameters)
veal_scaloppine.status_code
ind_recipe.status_code
another_try = requests.get(get_recipe_url + french_id + '?' + id_and_key)
another_try.status_code
guac_ids = []
for item in guac['matches']:
guac_ids.append(item.get('id'))
thingy = another_try.json()
thingy.keys()
thingy['numberOfServings']
# ids_series = pd.Series(ids)  # `ids` was never defined; the populated list is guac_ids below
guac_ids_series = pd.Series(guac_ids)
# guac_ids_series.to_csv('guac_ids')
Explanation: Metadata Searches and Building Master Lists
End of explanation |
2,554 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Getting info on Priming experiment dataset that's needed for modeling
Info
Step1: Init
Step2: Loading OTU table (filter to just bulk samples)
Step3: Which gradient(s) to simulate?
Step4: Notes
Samples to simulate
Isotope
Step5: Total richness of starting (bulk-soil) community
Method
Step6: Number of taxa in all fractions corresponding to each bulk soil sample
Trying to see the difference between richness of bulk vs gradients (veil line effect)
Step7: Distribution of total sequences per fraction
Number of sequences per sample
Using all samples to assess this one
Just fraction samples
Method
Step8: Distribution fitting
Step9: Notes
Step10: Loading metadata
Step11: Determining association
Step12: Number of taxa along the gradient
Step13: Notes
Step14: For each sample, writing a table of OTU_ID and count
Step15: Making directories for simulations
Step16: Rank-abundance distribution for each sample
Step17: Taxon abundance range for each sample-fraction
Step18: Total abundance of each target taxon
Step19: For each sample, writing a table of OTU_ID and count | Python Code:
import os
baseDir = '/home/nick/notebook/SIPSim/dev/priming_exp/'
workDir = os.path.join(baseDir, 'exp_info')
otuTableFile = '/var/seq_data/priming_exp/data/otu_table.txt'
otuTableSumFile = '/var/seq_data/priming_exp/data/otu_table_summary.txt'
metaDataFile = '/var/seq_data/priming_exp/data/allsample_metadata_nomock.txt'
#otuRepFile = '/var/seq_data/priming_exp/otusn.pick.fasta'
#otuTaxFile = '/var/seq_data/priming_exp/otusn_tax/otusn_tax_assignments.txt'
#genomeDir = '/home/nick/notebook/SIPSim/dev/bac_genome1210/genomes/'
Explanation: Getting info on Priming experiment dataset that's needed for modeling
Info:
Which gradient(s) to simulate?
For each gradient to simulate:
Infer total richness of starting community
Get distribution of total OTU abundances per fraction
Number of sequences per sample
Infer total abundance of each target taxon
User variables
End of explanation
import glob
%load_ext rpy2.ipython
%%R
library(ggplot2)
library(dplyr)
library(tidyr)
library(gridExtra)
library(fitdistrplus)
if not os.path.isdir(workDir):
os.makedirs(workDir)
Explanation: Init
End of explanation
%%R -i otuTableFile
tbl = read.delim(otuTableFile, sep='\t')
# filter
tbl = tbl %>%
select(ends_with('.NA'))
tbl %>% ncol %>% print
tbl[1:4,1:4]
%%R
tbl.h = tbl %>%
gather('sample', 'count', 1:ncol(tbl)) %>%
separate(sample, c('isotope','treatment','day','rep','fraction'), sep='\\.', remove=F)
tbl.h %>% head
Explanation: Loading OTU table (filter to just bulk samples)
End of explanation
%%R -w 900 -h 400
tbl.h.s = tbl.h %>%
group_by(sample) %>%
summarize(total_count = sum(count)) %>%
separate(sample, c('isotope','treatment','day','rep','fraction'), sep='\\.', remove=F)
ggplot(tbl.h.s, aes(day, total_count, color=rep %>% as.character)) +
geom_point() +
facet_grid(isotope ~ treatment) +
theme(
text = element_text(size=16)
)
%%R
tbl.h.s$sample[grepl('700', tbl.h.s$sample)] %>% as.vector %>% sort
Explanation: Which gradient(s) to simulate?
End of explanation
%%R
# bulk soil samples for gradients to simulate
samples.to.use = c(
"X12C.700.14.05.NA",
"X12C.700.28.03.NA",
"X12C.700.45.01.NA",
"X13C.700.14.08.NA",
"X13C.700.28.06.NA",
"X13C.700.45.01.NA"
)
Explanation: Notes
Samples to simulate
Isotope:
12C vs 13C
Treatment:
700
Days:
14
28
45
End of explanation
%%R -i otuTableFile
tbl = read.delim(otuTableFile, sep='\t')
# filter
tbl = tbl %>%
select(ends_with('.NA'))
tbl$OTUId = rownames(tbl)
tbl %>% ncol %>% print
tbl[1:4,1:4]
%%R
tbl.h = tbl %>%
gather('sample', 'count', 1:(ncol(tbl)-1)) %>%
separate(sample, c('isotope','treatment','day','rep','fraction'), sep='\\.', remove=F)
tbl.h %>% head
%%R -w 800
tbl.s = tbl.h %>%
filter(count > 0) %>%
group_by(sample, isotope, treatment, day, rep, fraction) %>%
summarize(n_taxa = n())
ggplot(tbl.s, aes(day, n_taxa, color=rep %>% as.character)) +
geom_point() +
facet_grid(isotope ~ treatment) +
theme_bw() +
theme(
text = element_text(size=16),
axis.text.x = element_blank()
)
%%R -w 800 -h 350
# filter to just target samples
tbl.s.f = tbl.s %>% filter(sample %in% samples.to.use)
ggplot(tbl.s.f, aes(day, n_taxa, fill=rep %>% as.character)) +
geom_bar(stat='identity') +
facet_grid(. ~ isotope) +
labs(y = 'Number of taxa') +
theme_bw() +
theme(
text = element_text(size=16),
axis.text.x = element_blank()
)
%%R
message('Bulk soil total observed richness: ')
tbl.s.f %>% select(-fraction) %>% as.data.frame %>% print
Explanation: Total richness of starting (bulk-soil) community
Method:
Total number of OTUs in OTU table (i.e., gamma richness)
Just looking at bulk soil samples
Loading just bulk soil
End of explanation
%%R -i otuTableFile
# loading OTU table
tbl = read.delim(otuTableFile, sep='\t') %>%
select(-ends_with('.NA'))
tbl.h = tbl %>%
gather('sample', 'count', 2:ncol(tbl)) %>%
separate(sample, c('isotope','treatment','day','rep','fraction'), sep='\\.', remove=F)
tbl.h %>% head
%%R
# basename of fractions
samples.to.use.base = gsub('\\.[0-9]+\\.NA', '', samples.to.use)
samps = tbl.h$sample %>% unique
fracs = sapply(samples.to.use.base, function(x) grep(x, samps, value=TRUE))
for (n in names(fracs)){
n.frac = length(fracs[[n]])
cat(n, '-->', 'Number of fraction samples: ', n.frac, '\n')
}
%%R
# function for getting all OTUs in a sample
n.OTUs = function(samples, otu.long){
otu.long.f = otu.long %>%
filter(sample %in% samples,
count > 0)
n.OTUs = otu.long.f$OTUId %>% unique %>% length
return(n.OTUs)
}
num.OTUs = lapply(fracs, n.OTUs, otu.long=tbl.h)
num.OTUs = do.call(rbind, num.OTUs) %>% as.data.frame
colnames(num.OTUs) = c('n_taxa')
num.OTUs$sample = rownames(num.OTUs)
num.OTUs
%%R
tbl.s.f %>% as.data.frame
%%R
# joining with bulk soil sample summary table
num.OTUs$data = 'fractions'
tbl.s.f$data = 'bulk_soil'
tbl.j = rbind(num.OTUs,
tbl.s.f %>% ungroup %>% select(sample, n_taxa, data)) %>%
mutate(isotope = gsub('X|\\..+', '', sample),
sample = gsub('\\.[0-9]+\\.NA', '', sample))
tbl.j
%%R -h 300 -w 800
ggplot(tbl.j, aes(sample, n_taxa, fill=data)) +
geom_bar(stat='identity', position='dodge') +
facet_grid(. ~ isotope, scales='free_x') +
labs(y = 'Number of OTUs') +
theme(
text = element_text(size=16)
# axis.text.x = element_text(angle=90)
)
Explanation: Number of taxa in all fractions corresponding to each bulk soil sample
Trying to see the difference between richness of bulk vs gradients (veil line effect)
End of explanation
%%R -i otuTableFile
tbl = read.delim(otuTableFile, sep='\t')
# filter
tbl = tbl %>%
select(-ends_with('.NA'))
tbl %>% ncol %>% print
tbl[1:4,1:4]
%%R
tbl.h = tbl %>%
gather('sample', 'count', 2:ncol(tbl)) %>%
separate(sample, c('isotope','treatment','day','rep','fraction'), sep='\\.', remove=F)
tbl.h %>% head
%%R -h 400
tbl.h.s = tbl.h %>%
group_by(sample) %>%
summarize(total_seqs = sum(count))
p = ggplot(tbl.h.s, aes(total_seqs)) +
theme_bw() +
theme(
text = element_text(size=16)
)
p1 = p + geom_histogram(binwidth=200)
p2 = p + geom_density()
grid.arrange(p1,p2,ncol=1)
Explanation: Distribution of total sequences per fraction
Number of sequences per sample
Using all samples to assess this one
Just fraction samples
Method:
Total number of sequences (total abundance) per sample
Loading OTU table
End of explanation
%%R -w 700 -h 350
plotdist(tbl.h.s$total_seqs)
%%R -w 450 -h 400
descdist(tbl.h.s$total_seqs, boot=1000)
%%R
f.n = fitdist(tbl.h.s$total_seqs, 'norm')
f.ln = fitdist(tbl.h.s$total_seqs, 'lnorm')
f.ll = fitdist(tbl.h.s$total_seqs, 'logis')
#f.c = fitdist(tbl.s$count, 'cauchy')
f.list = list(f.n, f.ln, f.ll)
plot.legend = c('normal', 'log-normal', 'logistic')
par(mfrow = c(2,1))
denscomp(f.list, legendtext=plot.legend)
qqcomp(f.list, legendtext=plot.legend)
%%R
gofstat(list(f.n, f.ln, f.ll), fitnames=plot.legend)
%%R
summary(f.ln)
Explanation: Distribution fitting
End of explanation
%%R -i otuTableFile
tbl = read.delim(otuTableFile, sep='\t')
# filter
tbl = tbl %>%
select(-ends_with('.NA')) %>%
select(-starts_with('X0MC'))
tbl = tbl %>%
gather('sample', 'count', 2:ncol(tbl)) %>%
mutate(sample = gsub('^X', '', sample))
tbl %>% head
%%R
# summarize
tbl.s = tbl %>%
group_by(sample) %>%
summarize(total_count = sum(count))
tbl.s %>% head(n=3)
Explanation: Notes:
best fit:
lognormal
meanlog = 10.113 (on the log scale, as reported by fitdist)
sdlog = 1.192
Does sample size correlate to buoyant density?
Loading OTU table
End of explanation
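# Python cross-check / simulation sketch: draw per-fraction library sizes from the fitted
# lognormal above (meanlog/sdlog values taken from the fitdist summary; numpy assumed).
import numpy as np
rng = np.random.RandomState(0)
sim_totals = np.round(rng.lognormal(mean=10.113, sigma=1.192, size=24)).astype(int)
print(sim_totals[:5])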
%%R -i metaDataFile
tbl.meta = read.delim(metaDataFile, sep='\t')
tbl.meta %>% head(n=3)
Explanation: Loading metadata
End of explanation
%%R -w 700
tbl.j = inner_join(tbl.s, tbl.meta, c('sample' = 'Sample'))
ggplot(tbl.j, aes(Density, total_count, color=rep)) +
geom_point() +
facet_grid(Treatment ~ Day)
%%R -w 600 -h 350
ggplot(tbl.j, aes(Density, total_count)) +
geom_point(aes(color=Treatment)) +
geom_smooth(method='lm') +
labs(x='Buoyant density', y='Total sequences') +
theme_bw() +
theme(
text = element_text(size=16)
)
Explanation: Determining association
End of explanation
%%R
tbl.s = tbl %>%
filter(count > 0) %>%
group_by(sample) %>%
summarize(n_taxa = sum(count > 0))
tbl.j = inner_join(tbl.s, tbl.meta, c('sample' = 'Sample'))
tbl.j %>% head(n=3)
%%R -w 900 -h 600
ggplot(tbl.j, aes(Density, n_taxa, fill=rep, color=rep)) +
#geom_area(stat='identity', alpha=0.5, position='dodge') +
geom_point() +
geom_line() +
labs(x='Buoyant density', y='Number of taxa') +
facet_grid(Treatment ~ Day) +
theme_bw() +
theme(
text = element_text(size=16),
legend.position = 'none'
)
Explanation: Number of taxa along the gradient
End of explanation
%%R -i otuTableFile
# loading OTU table
tbl = read.delim(otuTableFile, sep='\t')
# filter
tbl = tbl %>%
select(matches('OTUId'), ends_with('.NA'))
tbl %>% ncol %>% print
tbl[1:4,1:4]
%%R
# long table format w/ selecting samples of interest
tbl.h = tbl %>%
gather('sample', 'count', 2:ncol(tbl)) %>%
separate(sample, c('isotope','treatment','day','rep','fraction'), sep='\\.', remove=F) %>%
filter(sample %in% samples.to.use,
count > 0)
tbl.h %>% head
%%R
message('Number of samples: ', tbl.h$sample %>% unique %>% length)
message('Number of OTUs: ', tbl.h$OTUId %>% unique %>% length)
%%R
tbl.hs = tbl.h %>%
group_by(OTUId) %>%
summarize(
total_count = sum(count),
mean_count = mean(count),
median_count = median(count),
sd_count = sd(count)
) %>%
filter(total_count > 0)
tbl.hs %>% head
Explanation: Notes:
Many taxa out to the tails of the gradient.
It seems that the DNA fragments were quite diffuse in the gradients.
Total abundance of each target taxon: bulk soil approach
Getting relative abundances from bulk soil samples
This has the caveat of likely undersampling richness vs using all gradient fraction samples.
i.e., veil line effect
End of explanation
%%R -i workDir
setwd(workDir)
samps = tbl.h$sample %>% unique %>% as.vector
for(samp in samps){
outFile = paste(c(samp, 'OTU.txt'), collapse='_')
tbl.p = tbl.h %>%
filter(sample == samp, count > 0)
write.table(tbl.p, outFile, sep='\t', quote=F, row.names=F)
message('Table written: ', outFile)
message(' Number of OTUs: ', tbl.p %>% nrow)
}
Explanation: For each sample, writing a table of OTU_ID and count
End of explanation
p = os.path.join(workDir, '*_OTU.txt')
files = glob.glob(p)
baseDir = os.path.split(workDir)[0]
## note: str.rstrip() strips a set of characters, not a suffix, so use replace() to drop the suffix
newDirs = [os.path.split(x)[1].replace('.NA_OTU.txt', '') for x in files]
newDirs = [os.path.join(baseDir, x) for x in newDirs]
for newDir,f in zip(newDirs, files):
if not os.path.isdir(newDir):
print 'Making new directory: {}'.format(newDir)
os.makedirs(newDir)
else:
print 'Directory exists: {}'.format(newDir)
# symlinking file
linkPath = os.path.join(newDir, os.path.split(f)[1])
if not os.path.islink(linkPath):
os.symlink(f, linkPath)
Explanation: Making directories for simulations
End of explanation
%%R -i otuTableFile
tbl = read.delim(otuTableFile, sep='\t')
# filter
tbl = tbl %>%
select(matches('OTUId'), ends_with('.NA'))
tbl %>% ncol %>% print
tbl[1:4,1:4]
%%R
# long table format w/ selecting samples of interest
tbl.h = tbl %>%
gather('sample', 'count', 2:ncol(tbl)) %>%
separate(sample, c('isotope','treatment','day','rep','fraction'), sep='\\.', remove=F) %>%
filter(sample %in% samples.to.use,
count > 0)
tbl.h %>% head
%%R
# ranks of relative abundances
tbl.r = tbl.h %>%
group_by(sample) %>%
mutate(perc_rel_abund = count / sum(count) * 100,
rank = row_number(-perc_rel_abund)) %>%
unite(day_rep, day, rep, sep='-')
tbl.r %>% as.data.frame %>% head(n=3)
%%R -w 900 -h 350
ggplot(tbl.r, aes(rank, perc_rel_abund)) +
geom_point() +
# labs(x='Buoyant density', y='Number of taxa') +
facet_wrap(~ day_rep) +
theme_bw() +
theme(
text = element_text(size=16),
legend.position = 'none'
)
Explanation: Rank-abundance distribution for each sample
End of explanation
%%R -i otuTableFile
tbl = read.delim(otuTableFile, sep='\t')
# filter
tbl = tbl %>%
select(-ends_with('.NA')) %>%
select(-starts_with('X0MC'))
tbl = tbl %>%
gather('sample', 'count', 2:ncol(tbl)) %>%
mutate(sample = gsub('^X', '', sample))
tbl %>% head
%%R
tbl.ar = tbl %>%
#mutate(fraction = gsub('.+\\.', '', sample) %>% as.numeric) %>%
#mutate(treatment = gsub('(.+)\\..+', '\\1', sample)) %>%
group_by(sample) %>%
mutate(rel_abund = count / sum(count)) %>%
summarize(abund_range = max(rel_abund) - min(rel_abund)) %>%
ungroup() %>%
separate(sample, c('isotope','treatment','day','rep','fraction'), sep='\\.', remove=F)
tbl.ar %>% head(n=3)
%%R -w 800
tbl.ar = tbl.ar %>%
mutate(fraction = as.numeric(fraction))
ggplot(tbl.ar, aes(fraction, abund_range, fill=rep, color=rep)) +
geom_point() +
geom_line() +
labs(x='Buoyant density', y='Range of relative abundance values') +
facet_grid(treatment ~ day) +
theme_bw() +
theme(
text = element_text(size=16),
legend.position = 'none'
)
Explanation: Taxon abundance range for each sample-fraction
End of explanation
%%R -i otuTableFile
# loading OTU table
tbl = read.delim(otuTableFile, sep='\t') %>%
select(-ends_with('.NA'))
tbl.h = tbl %>%
gather('sample', 'count', 2:ncol(tbl)) %>%
separate(sample, c('isotope','treatment','day','rep','fraction'), sep='\\.', remove=F)
tbl.h %>% head
%%R
# basename of fractions
samples.to.use.base = gsub('\\.[0-9]+\\.NA', '', samples.to.use)
samps = tbl.h$sample %>% unique
fracs = sapply(samples.to.use.base, function(x) grep(x, samps, value=TRUE))
for (n in names(fracs)){
n.frac = length(fracs[[n]])
cat(n, '-->', 'Number of fraction samples: ', n.frac, '\n')
}
%%R
# function for getting mean OTU abundance from all fractions
OTU.abund = function(samples, otu.long){
otu.rel.abund = otu.long %>%
filter(sample %in% samples,
count > 0) %>%
ungroup() %>%
group_by(sample) %>%
mutate(total_count = sum(count)) %>%
ungroup() %>%
mutate(perc_abund = count / total_count * 100) %>%
group_by(OTUId) %>%
summarize(mean_perc_abund = mean(perc_abund),
median_perc_abund = median(perc_abund),
max_perc_abund = max(perc_abund))
return(otu.rel.abund)
}
## calling function
otu.rel.abund = lapply(fracs, OTU.abund, otu.long=tbl.h)
otu.rel.abund = do.call(rbind, otu.rel.abund) %>% as.data.frame
otu.rel.abund$sample = gsub('\\.[0-9]+$', '', rownames(otu.rel.abund))
otu.rel.abund %>% head
%%R -h 600 -w 900
# plotting
otu.rel.abund.l = otu.rel.abund %>%
gather('abund_stat', 'value', mean_perc_abund, median_perc_abund, max_perc_abund)
otu.rel.abund.l$OTUId = reorder(otu.rel.abund.l$OTUId, -otu.rel.abund.l$value)
ggplot(otu.rel.abund.l, aes(OTUId, value, color=abund_stat)) +
geom_point(shape='O', alpha=0.7) +
scale_y_log10() +
facet_grid(abund_stat ~ sample) +
theme_bw() +
theme(
text = element_text(size=16),
axis.text.x = element_blank(),
legend.position = 'none'
)
Explanation: Total abundance of each target taxon: all fraction samples approach
Getting relative abundances from all fraction samples for the gradient
I will need to calculate (mean|max?) relative abundances for each taxon and then re-scale so that cumsum = 1
End of explanation
%%R -i workDir
setwd(workDir)
# each sample is a file
samps = otu.rel.abund.l$sample %>% unique %>% as.vector
for(samp in samps){
outFile = paste(c(samp, 'frac_OTU.txt'), collapse='_')
tbl.p = otu.rel.abund %>%
filter(sample == samp, mean_perc_abund > 0)
write.table(tbl.p, outFile, sep='\t', quote=F, row.names=F)
cat('Table written: ', outFile, '\n')
cat(' Number of OTUs: ', tbl.p %>% nrow, '\n')
}
Explanation: For each sample, writing a table of OTU_ID and count
End of explanation |
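# Quick Python check that the per-sample tables written above can be read back
# (the file-name pattern follows the R loop above):
import glob
import pandas as pd
frac_files = glob.glob(workDir + '/*_frac_OTU.txt')
if frac_files:
    print(pd.read_csv(frac_files[0], sep='\t').head())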
2,555 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: <p style = "font-size
Step2: Preamble
Step3: EM and MNIST
The $\TeX$ markup used here uses the "align*" environment and thus should not be viewed though nbViewer.
Before proceeding, it seems pedagogically necessary (at least for myself) to revise the EM-slgorithm and show its "correctness", so to say.
A brief description of the EM algorithm
The EM algorithm seeks to maximize the likelihood by means of successive application of two steps
Step4: Classify using the maximum aposteriori rule.
Step5: A procedure to compute the log-likelihood of each observaton with respect to each mixture component. Used in the posterior computation.
Step6: The actual procedure for computing the E*-step
Step7: Analytic solution
Step8: A wrapper to match the assignment specifications.
Step9: A it has been mentioned eariler, the EM algorithm switches between E and M steps until convergence.
Step10: The procedure above actually invokes the true EM core, defined below.
Step11: Define a convenient procedure for running experiments. By setting relative error to zero the algorithm is forced to exhaust all the allocated iterations.
Step12: Miscellanea
Step13: The folowing pair of procedures are used to plot the digits in a clear manner. The first one just creates a canvas for the image
Step14: This procedure displays the images on a nice plot. Used for one-line visualization.
Step15: MIscellanea
Step17: Define a function that produces (using ffmpeg) and embeds a video in HTML into IPython
Step18: Miscellanea
Step19: Or obtain the data from the provided CSV files.
Step20: Study
First of all load and binarize the training data using the value 127 as the threshold.
Step21: Case
Step22: They do indeed look quite distinct. Now collect them into a single dataset and estimate the model.
Step23: The estimate deltas show that the EM algorithm's E-step actually transfers the unlikely observations between classes, as is expected by constructon of the algorithm.
Judging by the plot below, it turns out that 30 iterations is more than enough for the EM to get meaninful estimates the class ideals, represented by the probability porduct-measure on $\Omega^{28\times 28}$.
Step24: Now let's see how well the EM algorithm performs on a model with more classes. But before that let's have a look at a random sample of the handwritten digits.
Step25: Case
Step26: Run the procedure that perfoems EM algorithm and return the history of the parameter estimates as well as the dynamics of the log-likelihood lower bonud.
Step27: One can clearly see, that $50$ iterations were not enough for the alogirithm to converge
Step28: Let's see if changing $K$ does the trick.
Step29: For what values of $K$ was it possible to infer the templates of all digits?
Step30: Obviously, the model with more mixture components is more likely to produce "templates" for all digits. For larger $K$ this is indeed the case.
Having run this algorithm for many times we are able to say that the digits $3$ and $8$, $4$ and $9$ and sometimes $5$ tend to be poorly separated. Furthermore due to there being many different handwritten variations of the same digit one should estimate a model with more classes.
The returned templates of the mixture components are clearly suboptimal
Step31: As one can see, increasing the number of iterations does not necessarily improve the results.
Step32: Judging by the plot of the log-likelihood, the fact that the EM is guaranteed to converge to local maxima and does so extremely fast, there was no need for more than 120-130 iterations. The chages in the log-likelihood around that number of iterations are of the order $10^{-4}$. Since we are working in finite precision arithmetic (double), the smallest precision is $\approx 10^{-14}$.
Let's see the dynamics of the estimated of the EM iterations. You will have to ensure that ffMPEG is installed (Windows
Step33: The parameter estimates of the EM stabilize pretty quickly. In fact most templates stabilize by iterations 100-120.
Choosing $K$
Among many methods of model selection, let's use simple training sample fittness score, givne by the value of the log-likelihood. Becasue the models are nested with respect to the number of mixture components, one should expect the likelihood to be a non decreasing function of $K$ (on average dut to randomization of the initial parameter values).
For large enough $K$ this method may lead to overfitting.
Step34: Indeed the log-likelihood does not decrease with $K$ on average. Nevertheless the model with the highes likelihood turs out to have this many mixture components
Step35: A nice, yet expected coincidence
Step36: ... and get the posterior mixture component probabilities.
Step37: Use a simple majority rule to automatically assign lables to templates.
Step38: Assign the labels $l$ to templates $t$ according to its score, based on the average of the top-$5$ log-likelihoods of observations with label $l$ and classfified with template $t$.
Step39: Compare the label assignments. Here are the templates.
Step40: These are the templates, which were assigned different labels by the majority and "trust" methods.
Step41: Here are the pictures of templates ordered according to their label.
Step42: Classification
Step43: Let's see the best template for each test observation in some sub-sample.
Step44: The digits are shown in pairs
Step45: Not surprisingly, majority- and likelihood-based classification accuracies are close.
Let's see which test observations the model considers an artefact and for which it cannot reliably assign a template
Step46: </hr>
Let's see how more pasrimonious models fare with respect to accuracy on thte test sample.
Accuracy of $K=30$
Step47: Accuracy of $K=20$
Step48: Accuracy of $K=15$
Step49: Accuracy of $K=10$
Step50: As one can see test sample accuracy of the model falls drammatically for less number of mixture components. This was expected, since due to various reasons, one being thet the data is handwritten, it is higly unlikely, that a single digit would have only one template.
Step51: <br/><p style="font-size
Step52: <hr/>
A random variable $X\sim \text{Beta}(\alpha,\beta)$ if the law of $X$ has density
$$p(u) = \frac{\Gamma(\alpha+\beta)}{\Gamma(\alpha)\Gamma(\beta)} u^{\alpha-1}(1-u)^{\beta-1} $$
$$ \log p(X,Z|\Theta) = \sum_{s=1}^n \log \prod_{k=1}^K \Bigl[ \pi_k
\prod_{i=1}^N \prod_{j=1}^M
\frac{\Gamma(\alpha_{kij}+\beta_{kij})}{\Gamma(\alpha_{kij})\Gamma(\beta_{kij})} x_{sij}^{\alpha_{kij}-1}(1-x_{sij})^{\beta_{kij}-1} \Bigr]^{1_{z_s = k}}$$
\begin{align}
\mathbb{E}q \log p(X,Z|\Theta)
&= \sum{k=1}^K \sum_{s=1}^n q_{sk} \log \pi_k \
&+ \sum_{k=1}^K \sum_{i=1}^N \sum_{j=1}^M \sum_{s=1}^n q_{sk} \bigl(
\log \Gamma(\alpha_{kij}+\beta_{kij}) - \log \Gamma(\alpha_{kij}) - \log \Gamma(\beta_{kij}) \bigr) \
&+ \sum_{k=1}^K \sum_{i=1}^N \sum_{j=1}^M \sum_{s=1}^n q_{sk} \bigl(
(\alpha_{kij}-1) \log x_{sij} + (\beta_{kij}-1) \log(1-x_{sij}) \bigr) \
\end{align}
Derivative of a Gamma function does not seem to yeild analytically tracktable solutions.
<hr/>
<p style="font-size | Python Code:
## Add JS-based table of contents
from IPython.display import HTML as add_TOC
add_TOC( u"""<h1 id="tocheading">Table of Contents</h1></br><div id="toc"></div>
<script src="https://kmahelona.github.io/ipython_notebook_goodies/ipython_notebook_toc.js"></script></br></hr></br>""" )
Explanation: <p style = "font-size: 24pt;text-align: center;"><strong>Expectation Maximization and MNIST</strong></p>
<p style = "font-size: 16pt;text-align: center;"><strong><i>Nazarov Ivan, 101мНОД (ИССА)</i></strong></p>
End of explanation
import os, time as tm, warnings
warnings.filterwarnings( "ignore" )
# from IPython.core.display import HTML
from IPython.display import display, HTML
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
np.random.seed( 569034853 )
## This is the correct way to use the random number generator,
## since it allows finer control.
rand = np.random.RandomState( np.random.randint( 0x7FFFFFFF ) )
Explanation: Preamble
End of explanation
## A bunch of wrappers to match the task specifications
def posterior( x, clusters ) :
pi = np.ones( clusters.shape[ 0 ], dtype = np.float ) / clusters.shape[ 0 ]
q, ll = __posterior( x, theta = clusters, pi = pi )
return q
## The likelihood is a byproduct of the E-step's minimization of Kullback-Leibler
def likelihood( x, clusters ) :
pi = np.ones( clusters.shape[ 0 ], dtype = np.float ) / clusters.shape[ 0 ]
q, ll = __posterior( x, theta = clusters, pi = pi )
return np.sum( ll )
Explanation: EM and MNIST
The $\TeX$ markup used here uses the "align*" environment and thus should not be viewed though nbViewer.
Before proceeding, it seems pedagogically necessary (at least for myself) to revise the EM-slgorithm and show its "correctness", so to say.
A brief description of the EM algorithm
The EM algorithm seeks to maximize the likelihood by means of successive application of two steps: the E-step and the M-step.
For any probability measure $Q$ on the space of latent variables $Z$ with density $q$ the following holds:
\begin{align}
\log p(X|\Theta)
&= \int q(Z) \log p(X|\Theta) dZ
= \mathbb{E}q \log p(X|\Theta) \
%% &= \Bigl[p(X,Z|\Theta) = p(Z|X,\Theta) p(X|\Theta) \Bigr] \
&= \mathbb{E}{Z\sim q} \log \frac{p(X,Z|\Theta)}{p(Z|X\Theta)}
= \mathbb{E}{Z\sim q} \log \frac{q(Z)}{p(Z|X,\Theta)}
+ \mathbb{E}{Z\sim q} \log \frac{p(X,Z|\Theta)}{q(Z)} \
&= KL\bigl(q\|p(\cdot|X,\Theta)\bigr) + \mathcal{L}\bigl(q, \Theta\bigr)\,,
\end{align}
since the Bayes theorem posits that $p(X,Z|\Theta) = p(Z|X,\Theta) p(X|\Theta)$. Call this equiation the "master equation".
Now note that since the Kullback-Leibler divergence is always non-negative, one has the following inequality:
$$\log p(X|\Theta) \geq \mathcal{L}\bigl(q, \Theta\bigr) \,.$$
Let's try to make the lower bound as large as possible by changing $\Theta$ and varying $q$. But first note that the
left-hand side of the master equation is independent of $q$, whence maximization of $\mathcal{L}$ with respect to $q$ (with $\Theta$ fixed) is equivalent to minimization of $KL\bigl(q\|p(\cdot|X,\Theta)\bigr)$ with respect to $q$ taking $\Theta$ fixed. Since $q$ is arbitrary, the optimal minimizer $q^_\Theta$ is $q^(Z|\Theta) = p(Z|X,\Theta)$ for all $Z$.
Now at the optimal distributuion $q^_\Theta$ the master equation becomes
$$ \log p(X|\Theta)
= \mathcal{L}\bigl(q^\Theta, \Theta\bigr)
= \mathbb{E}{Z\sim q^_\Theta} \log \frac{p(X,Z|\Theta)}{q^(Z|\Theta)}
= \mathbb{E}{Z\sim q^\Theta} \log p(X,Z|\Theta) - \mathbb{E}{Z\sim q^\Theta} \log q^*(Z|\Theta) \,,
$$
for any $\Theta$. Thus the problem of log-likelihood maximization reduces to that of maximizing the sum of expectations on the right-hand side.
This new problem does not seem to be tractable in general since the optimization paramters $\Theta$ affect both the expected log-likelihood $\log p(X,Z|\Theta)$ under $Z\sim q^*_\Theta$ and the entropy of the optimal distribution of the latent variables $Z$.
Hopefully using an iterative procedure which switches between the computation of $q^_\Theta$ and the maximization of $\Theta$ might be effective. Consider the folowing :
* E-step: considering $\Theta_i$ as given and fixed find $q^{\Theta_i} = \mathop{\text{argmin}}_q\,\, KL\bigl(q\|p(\cdot|X,\Theta_i)\bigr)$ and set $q{i+1} = q^_{\Theta_i}$;
* M*-step: considering $q_{i+1}$ as given, solve $\mathcal{L}(q_{i+1},\Theta) \to \mathop{\text{max}}\Theta$, where
$$ \mathcal{L}(q,\Theta) = \mathbb{E}{Z\sim q} \log p(X,Z|\Theta) - \mathbb{E}_{Z\sim q} \log q(Z) \,.$$
The fact that $q_i$ is considered fixed makes the optimization of $\mathcal{L}(q_i,\Theta)$ equivalent to maximization of the expected log-likelihood, since the entropy term is fixed. Therefore the M-step becomes:
* given $q_{i+1}$ find $\Theta^{i+1} = \mathop{\text{argmax}}\Theta\,\, \mathbb{E}{Z\sim q{i+1}} \log p(X,Z|\Theta)$ and put $\Theta_{i+1} = \Theta^_{i+1}$.
Now, if the latent variables are mutually independent, then the optimal $q$ must be factorizable into marginal densities and:
\begin{align}
KL\bigl(q\|p(\cdot|X,\Theta)\bigr)
&= \mathbb{E}{Z\sim q} \log q(Z) - \sum_j \mathbb{E}{z_j\sim q_j} \log p(z_j|X,\Theta)\
&= \sum_j \mathbb{E}{z_j\sim q_j} \log q_j(z_j) - \sum_j \mathbb{E}{z_j\sim q_j} \log p(z_j|X,\Theta)
= \sum_j KL\bigl(q_j\|p_j(|X,\Theta)\bigr) \,,
\end{align}
where $q_j$ is the marginal desity of $z_j$ in $q(Z)$ (the last term in the first line comes from the Fubini theorem).
Therefore the E-step could be reduced to a set of minimization problems with respect to one-dimensional density functions:
$$ q_j^* = \mathop{\text{argmin}}_{q_j}\,\, KL\bigl(q_j\|p_j(\cdot|X,\Theta)\bigr) \,, $$
since the Kulback-Leibler divergence in this case in additively separable.
Correctness
Recall that the master equation is an identity: for all densities $q$ on $Z$ and for all admissible parameters $\Theta$
$$ \log p(X|\Theta) = KL\bigl(q\|p(\cdot|X,\Theta)\bigr) + \mathcal{L}\bigl(q, \Theta\bigr) \,.$$
Hence if after the E-step the Kulback-Leibler divergence is reduced:
$$ KL\bigl(q'\|p(\cdot|X,\Theta)\bigr) \leq KL\bigl(q\|p(\cdot|X,\Theta)\bigr) \,,$$
then for the same set of parameters $\Theta$ one has
$$ \mathcal{L}(q,\Theta) \leq \mathcal{L}(q',\Theta) \,.$$
Just after the E-step one has $q_{i+1} = p(Z|X,\Theta_i)$, whence $KL\bigl(q_{i+1}\|p(\cdot|X,\Theta_i)\bigr) = 0$. In turn, this implies via the master equation that the following equality holds:
$$ \log p(X|\Theta_i) = \mathcal{L}(q_{i+1},\Theta_i) \,.$$
After the M-step, since $\Theta_{i+1}$ is a maximizer, or at least an "improver" of $\mathcal{L}(q_{i+1},\Theta)$ compared to its value at $(q_i,\Theta_i)$, one has
$$ \mathcal{L}(q_{i+1},\Theta_i) \leq \mathcal{L}(q_{i+1},\Theta_{i+1}) \,.$$
Threfore the effect of a single complete round of EM on the log-likelihood itself is:
$$ \log p(X|\Theta_i) = \mathcal{L}(q_{i+1},\Theta_i) \leq \mathcal{L}(q_{i+1},\Theta_{i+1}) \leq \mathcal{L}(q_{i+2},\Theta_{i+1}) = \log p(X|\Theta_{i+1}) \,,$$
where the equality is achieved between the E and the M step within one round. This implies that EM indeed iteratively improves the log-likihood.
Note, that in the general case, without attaining zero Kulback-Leibler divergence at the $E$-step, one cannot be sure that the real log-likelihood is improved by each iteration and one can just say that
$$ \mathcal{L}(q_{i+1},\Theta_i) \leq \log p(X|\Theta_i) \,,$$
which does not uncover a relationship with $\log p(X|\Theta_{i+1})$. And without the guarantee that EM improves the log-likelihood to the maximum one cannot be sure about the consistency of the estimators. The key question is whether the lower bound $\mathcal{L}(q,\Theta)$ is any good.
Application of the EM to MNIST data
Each image is a random element in a discrete probability space $\Omega = {0,1}^{N\times M}$ with product-measure
$$ \mathbb{P}(\omega) = \prod_{i=1}^N\prod_{j=1}^M \theta_{ij}^{\omega_{ij}} (1-\theta_{ij})^{1-\omega_{ij}} \,,$$
for any $\omega\in \Omega$. In particular $M=N=28$. Basically each bit of the image is independent of any other bit and each one is a Bernoulli random variable with parameter $\theta_{ij}$: $\omega_{ij}\sim \text{Bern}(\theta_{ij})$.
Let's apply the EM algorithm to this dataset. The proposed model is the following.
Consider a mixture model of discrete probability spaces. Suppose there are $K$ componets in the mixture. Then each image is distributed according to the following law:
$$p(\omega|\Theta)
= \sum_{k=1}^K \pi_k p_k(\omega|\theta_k)
= \sum_{k=1}^K \pi_k \prod_{i=1}^N \prod_{j=1}^M \theta_{kij}^{\omega_{ij}} (1-\theta_{kij})^{1-\omega_{ij}}$$
where $\theta_{kij}$ is the paramter of the probability distribution of the $(i,j)$-th random variable (pixel) in the $k$-th class, and $\pi_k$ is the (prior) porbability of the $k$-th mixutre to generate a random element, $\sum_{k=1}^K \pi_k= 1$.
Suppose $X=(x_i){i=1}^n \in \Omega^n$ is the dataset. The log-likelihood is given by
$$ \log p(X|\Theta) = \sum{s=1}^n \log \sum_{k=1}^K \pi_k
\prod_{i=1}^N \prod_{j=1}^M \theta_{kij}^{x_{sij}} (1-\theta_{kij})^{1-x_{sij}} \,,$$
where $x_{sij}\in{0,1}$ -- is the value of the the $(i,j)$-th pixel at the $s$-th observation.
If the source $Z=(z_i){i=1}^n$ components of the mixture at each datapoint were known, then the log-likelihood would have been
$$ \log p(X,Z|\Theta) = \sum{s=1}^n \log \prod_{k=1}^K \Bigl[ \pi_k
\prod_{i=1}^N \prod_{j=1}^M \theta_{kij}^{x_{sij}} (1-\theta_{kij})^{1-x_{sij}} \Bigr]^{1_{z_s = k}} \,,$$
where $1_{z_s = k}$ is the indicator and take the value $1$ if ${z_s = k}$ and $0$ otherwise ($1_{{k}}(z_s)$ is another notation).
The log-likelihood simplifies to
$$ \log p(X,Z|\Theta) = \sum_{s=1}^n \sum_{k=1}^K 1_{z_s = k} \Bigl( \log \pi_k +
\sum_{i=1}^N \sum_{j=1}^M \bigl( x_{sij} \log \theta_{kij} + (1-x_{sij}) \log (1-\theta_{kij}) \bigr) \Bigr) \,,$$
and further into a more separable form
$$ \log p(X,Z|\Theta)
= \sum_{s=1}^n \sum_{k=1}^K 1_{z_s = k} \log \pi_k
+ \sum_{s=1}^n \sum_{k=1}^K 1_{z_s = k} \Bigl( \sum_{i=1}^N \sum_{j=1}^M x_{sij} \log \theta_{kij}
+ \sum_{i=1}^N \sum_{j=1}^M (1-x_{sij}) \log (1-\theta_{kij}) \Bigr) \,.$$
The expected log-likelihood under $z_s\sim q_s$ with $\mathbb{P}(z_s=k|X) = q_{sk}$, is given by
$$ \mathbb{E}\log p(X,Z|\Theta)
= \sum_{s=1}^n \sum_{k=1}^K q_{sk} \log \pi_k
+ \sum_{s=1}^n \sum_{k=1}^K q_{sk} \sum_{i=1}^N \sum_{j=1}^M \bigl( x_{sij} \log \theta_{kij} + (1-x_{sij}) \log (1-\theta_{kij}) \bigr) \,.$$
Analytic solution: E-step
At the E-step one must compute $q^(Z) = \mathbb{P}(z_s=k|X) = \hat{q}{sk}$ based on the value of $\Theta = ((\pi_k), (\theta{kij}))$.
$$\hat{q}{sk}
= \frac{p(x_s|z_s=k,\Theta) p(z_s=k)}{\sum{l=1}^K p(x_s|z_s=l,\Theta) p(z_s=l)}
\propto \pi_k \prod_{i=1}^N \prod_{j=1}^M \theta_{kij}^{x_{sij}} (1-\theta_{kij})^{1-x_{sij}}
$$
and
$$ q^(Z) = \prod_{s=1}^n q_{s z_s} $$
Note that the denominator is actually the log-likelihood of the data.
In order to improve numerical stability and avoid numerical underflow it is better to use the following procedure for computation of the conditional probability:
$$ l_{sk} = \sum_{i=1}^N \sum_{j=1}^M \log \bigl( \theta_{kij}^{x_{sij}} (1-\theta_{kij})^{1-x_{sij}} \bigr) \,,$$
set $l^s = \max_k l{sk}$ and compute the log-sum
$$ \hat{l}s = \log \sum{k=1}^K \text{exp}\Bigl{ ( l_{sk} - l^s ) + \log \pi_k \Bigr} \,,$$
and then compute the consitional distribution:
$$ \hat{q}{sk} = \text{exp}\Bigl{ l_{sk} + \log \pi_k - ( \hat{l}_s + l^*_s ) \Bigr} \,.$$
This seemingly redunant subctration and addition of $l^_s$ helps avoid underflow during the numerical exponentiation. After this sanitization the E*-step's optimal distribution would be numerically accurate.
If $l^s >> l{sk}$ for all $k$ but such that $l^s = l{sk}$ (let it be $k^$), then an underflow occurs at the sum-exp step, whence for some very small $\epsilon > 0$ one has
$$ \hat{l}_s = l^s + \log (1+\epsilon) \,,$$
whence
$$ \hat{q}{sk} = \text{exp}\bigl{l_{sk} - \hat{l}s\bigr} = (1+\epsilon)^{-1} \cdot \text{exp}\bigl{l{sk} - l^_s \bigr} \,.$$
For $k=k^$ on has $\hat{q}{sk} = \frac{1}{1+\epsilon}\approx 1$, and for $k\neq k^*$ -- $\hat{q}{sk} = \frac{\eta}{1+\epsilon} \approx 0$ for some extremely small $\eta>0$.
The variables in the code have the following dimensions:
* $\theta \in [0,1]^{K\times (N\times M)}$;
* $\pi \in [0,1]^{1\times K}$;
* $x \in {0,1}^{n\times (N\times M)}$;
* $z \in [0,1]^{n\times K}$.
Wrappers required for the assignment.
End of explanation
## Classifier
def classify( x, theta, pi = None ) :
pi = pi if pi is not None else np.ones( theta.shape[ 0 ], dtype = np.float ) / theta.shape[ 0 ]
## Compute the posterior probabilities of the data
q_sk, ll_s = __posterior( x, theta = theta, pi = pi )
## Classify according to max pasterior:
c_s = np.argmax( q_sk, axis = 1 )
return c_s, q_sk, ll_s
Explanation: Classify using the maximum a posteriori rule.
End of explanation
def __component_likelihood( x, theta ) :
## Unfortunately sometimes there can be negative machine zeros, which
## spoil the log-likelihood computation by poisoning with NANs.
## That is why the theta array is restricted to [0,1].
theta_clipped = np.clip( theta, 0.0, 1.0 )
## Iterate over classes
ll_sk = np.zeros( ( x.shape[ 0 ], theta.shape[ 0 ] ), dtype = np.float )
## Make a binary mask of the data
mask = x > 0
for k in xrange( theta.shape[ 0 ] ) :
## Note that the power formulation is just a mathematically convenient way of
## writing \theta if x=1 or (1-\theta) otherwise.
ll_sk[ :, k ] = np.sum( np.where( mask,
np.log( theta_clipped[ k ] ), np.log( 1 - theta_clipped[ k ] ) ), axis = ( 1, ) )
return ll_sk
Explanation: A procedure to compute the log-likelihood of each observation with respect to each mixture component. Used in the posterior computation.
End of explanation
## The core procedure for computing the conditional density of classes
def __posterior( x, theta, pi ) :
## Get the log-likelihoods of each observation in each mixture component.
ll_sk = __component_likelihood( x, theta )
## Find the largest unnormalized probability.
llstar_s = np.reshape( np.max( ll_sk, axis = ( 1, ) ), ( ll_sk.shape[ 0 ], 1 ) )
## Subtract the largest exponent
ll_sk -= llstar_s
## In the rare case when the largest exponent is -Inf, force the differences
## to zero. This effective treaks such observations as having unfiorm likelihood
## across classes. This way the priors don't get masked by really small numbers.
## I could've used ``np.nan_to_num( ll_sk - llstar_s )'' but it actually copies
## the ll_sk array.
ll_sk[ np.isnan( ll_sk ) ] = 0.0
## Don't forget to add the log-prior probability (Numpy broadcasting applies!).
## Adding priors before dealing with infinities would mask then and yield
## incorrect estimates of the log-likelihoods!
ll_sk += np.log( np.reshape( pi, ( 1, ll_sk.shape[ 1 ] ) ) )
## Compute the log-sum-exp of the individual log-likelihoods. Negative infinities
## resolve to 0.0 while the largest exponent resolves to a one. This step cannot
## produce NaNs
ll_s = np.reshape( np.log( np.sum( np.exp( ll_sk ), axis = ( 1, ) ) ), ( ll_sk.shape[ 0 ], 1 ) )
## The sum-exp could never be anything lower than 1, since at least one
## element of each row of ll_sk has to be lstar_s, whence the respective
## difference should be zero and the exponent -- 1. Thus even if the
## rest of the sum is close to machine zero, the logarithm would still
## return 0.
## Normalise the likelihoods to get conditional probability, and compute
## the sum of the log-denominator, which is the log-likelihood.
return np.exp( ll_sk - ll_s ), ll_s + llstar_s
Explanation: The actual procedure for computing the E*-step: the conditional distribution and the log-likelihood scores.
End of explanation
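## Sanity check of the E-step on synthetic data (illustrative only; the toy_* names are not
## part of the assignment). Shapes follow the conventions above: theta is K x (N*M),
## x is n x (N*M), and every row of the returned conditional distribution must sum to one.
toy_theta = rand.uniform( 0.25, 0.75, size = ( 3, 784 ) )
toy_pi = np.ones( 3 ) / 3.0
toy_x = ( rand.uniform( size = ( 5, 784 ) ) < 0.3 ).astype( float )
toy_q, toy_ll = __posterior( toy_x, toy_theta, toy_pi )
print( toy_q.sum( axis = 1 ) )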
## The M-step is simple: just compute the optimal parameters under
## the current conditional distribution of the latent variables.
def __learn_clusters( x, z ) :
## The prior class probabilities
pi = z.sum( axis = ( 0, ) )
## Pixel probabilities conditional on the calss
theta = np.tensordot( z, x, ( 0, 0 ) ) / pi.reshape( ( pi.shape[ 0 ], 1 ) )
## Return: regularization should be done at **E**-step!
# return np.clip( theta, 1.0/784, 1.0 - 1.0/784 ), pi / x.shape[ 0 ]
return theta, pi / x.shape[ 0 ]
Explanation: Analytic solution: M-step
At the M-step for some fixed $q(Z)$ one solves $\mathbb{E}\log p(X,Z|\Theta)\to \max_\Theta$ subject to $\sum_{k=1}^K \pi_k = 1$ which is a convex optimization problem with respect to $\Theta$, since the log-likelihood as a linear combination of convex functions is convex. The first order condition is $\sum_{s=1}^n \frac{q_{sk}}{\pi_k} - \lambda = 0$ for all $k=1,\ldots,K$, whence $ \lambda = \sum_{s=1}^n \sum_{l=1}^K q_{sl} = n $ and finally
$$ \hat{\pi}k = \frac{\sum{s=1}^n q_{sk}}{n} \,.$$
For $\theta_{kij}$, $i=1,\ldots,N$, $j=1,\ldots,M$ and $k=1,\ldots,K$ the FOC is
$$ \sum_{s=1}^n q_{sk} \frac{x_{sij}}{\theta_{kij}} - \sum_{s=1}^n q_{sk} \frac{1-x_{sij}}{1-\theta_{kij}} = 0 \,,$$
whence
$$ \hat{\theta}{kij} = \frac{\sum{s=1}^n q_{sk} x_{sij}}{ \sum_{s=1}^n q_{sk} } = \frac{\sum_{s=1}^n q_{sk} x_{sij}}{ n \hat{\pi}_k } \,.$$
This M-step procedure is implemented below.
End of explanation
## A wrapper for the above function
def learn_clusters( x, z ) :
theta, pi = __learn_clusters( x, z )
    ## Just return theta: in the conditional model the pi are fixed.
return theta
Explanation: A wrapper to match the assignment specifications.
End of explanation
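## One hand-rolled EM round on synthetic data, composing the E-step ( posterior ) and the
## M-step ( learn_clusters ) defined above. The toy_* names are illustrative; the real
## binarized MNIST data is loaded further below.
toy_x = ( rand.uniform( size = ( 20, 784 ) ) < 0.3 ).astype( float )
toy_theta = rand.uniform( 0.25, 0.75, size = ( 2, 784 ) )
toy_q = posterior( toy_x, toy_theta )        # E-step under uniform class priors
toy_theta = learn_clusters( toy_x, toy_q )   # M-step
print( likelihood( toy_x, toy_theta ) )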
## A wrapper for the core em algorithm below
def em_algorithm( x, K, maxiter, verbose = True, rel_eps = 1e-4, full = False ) :
## Initialize the model parameters with uniform [0.25,0.75] random numbers
theta_1 = rand.uniform( size = ( K, x.shape[ 1 ] ) ) * 0.5 + 0.25
pi_1 = None if not full else np.ones( K, dtype = np.float ) / K
## Run the em algorithm
tick = tm.time( )
ll, theta, pi, status = __em_algorithm( x, theta_1 = theta_1,
pi_1 = pi_1, niter = maxiter, rel_eps = rel_eps, verbose = verbose )
tock = tm.time( )
print( "total %.3f, %.3f/iter" % ( ( tock - tick ), ( tock - tick ) / len( ll ), ) )
    ## Return the history of theta and the final log likelihood
if verbose :
if status[ 'status' ] != 0 :
print "Convergence not achieved. %d" % ( status[ 'status' ], )
if full :
return ( theta, pi ), ll
return theta, ll
Explanation: As mentioned earlier, the EM algorithm switches between E and M steps until convergence.
End of explanation
## The core of the EM algorithm
def __em_algorithm( x, theta_1, pi_1 = None, niter = 1000, rel_eps = 1e-4, verbose = True ) :
## If we were supplied with an initial estimate of the prior distribution,
## then assume the full model is needed.
full_model = pi_1 is not None
## If the prior cluster probabilities are not supplied, assume uniform distribution.
pi_1 = pi_1 if full_model else np.ones( theta_1.shape[ 0 ], dtype = np.float ) / theta_1.shape[ 0 ]
## Allocate the necessary space for the history of model estimates
theta_hist, pi_hist = theta_1[ np.newaxis ].copy( ), pi_1[ np.newaxis ].copy( )
ll_hist = np.asarray( [ -np.inf ], dtype = np.float )
## Set "old" estimated to zero. At this line the current estimates are in fact
## the initially provided ones.
theta_0, pi_0 = np.zeros_like( theta_1 ), np.zeros_like( pi_1 )
## Initialize the loop
status, kiter, rel_theta, rel_pi, ll = -1, 0, np.nan, np.nan, -np.inf
while kiter < niter :
## Dump the current estimators and other information.
if verbose :
print( "Iteration %d: avg. log-lik: %.3f, $\\Theta$ div. %.3f, $\\Pi$ div. %.3f" % (
kiter, ll / x.shape[ 0 ], rel_theta, rel_pi ) )
show_data( theta_1 - theta_0 if True else theta_1, n = theta_0.shape[ 0 ],
n_col = min( 10, theta_0.shape[ 0 ] ), cmap = plt.cm.hot, interpolation = 'nearest' )
## The convergence criterion is the L^∞ norm of relative L^1 errors
if max( rel_pi, rel_theta ) < rel_eps :
status = 0
break ;
## Overwrite the initial estimates
theta_0, pi_0 = theta_1, pi_1
## E-step: call the core posterior function to get both the log-likelihood
## and the estimate of the conditional distribution.
z_1, ll_s = __posterior( x, theta_0, pi_0 )
## Sum the individual log-likelihoods of observations
ll = ll_s.sum()
## M-step: compute the optimal parameters under the current estimate of the posterior
theta_1, pi_1 = __learn_clusters( x, z_1 )
## Discard the computed estimate of pi if the model is discriminative (conditional likelihood).
if not full_model :
pi_1 = pi_0
## Record the current estimates to the history
theta_hist = np.vstack( ( theta_hist, theta_1[np.newaxis] ) )
pi_hist = np.vstack( ( pi_hist, pi_1[np.newaxis] ) )
ll_hist = np.append( ll_hist, ll )
## Check for bad float numbers
if not ( np.all( np.isfinite( theta_1 ) ) and np.all( np.isfinite( pi_1 ) ) ) :
status= -2
break ;
## Check convergence: L^1 relative error. If the relative margin is exactly
## zero, then return NaNs. This makes the loop exhaust all iterations, since
## any comparison against a NaN returns False.
rel_theta = np.sum( np.abs( theta_1 - theta_0 ) / ( np.abs( theta_0 ) + rel_eps ) ) if rel_eps > 0 else np.nan
rel_pi = np.sum( np.abs( pi_1 - pi_0 ) / ( np.abs( pi_0 ) + rel_eps ) ) if rel_eps > 0 else np.nan
## Next iteration
kiter += 1
return ll_hist, theta_hist, pi_hist, { 'status': status, 'iter': kiter }
Explanation: The procedure above actually invokes the true EM core, defined below.
End of explanation
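The E-step helper __posterior called above is defined earlier in the notebook. Purely as an illustration of what it is assumed to compute for this Bernoulli mixture (and not the version actually used), a log-domain sketch could look like this.
## Illustration only: an E-step sketch consistent with how __posterior is used above.
## Returns the posterior weights q_{sk} and the per-observation log-likelihoods.
def e_step_sketch( x, theta, pi ) :
    ## Bernoulli log-likelihood of every observation under every component
    ## (a theta entry exactly equal to 0 or 1 yields -inf, as discussed later for test artefacts)
    log_lik = x.dot( np.log( theta ).T ) + ( 1 - x ).dot( np.log( 1 - theta ).T )
    log_joint = log_lik + np.log( pi )[ np.newaxis, : ]
    ## Log-sum-exp normalization for numerical stability
    m = np.max( log_joint, axis = 1 )
    ll_s = m + np.log( np.sum( np.exp( log_joint - m[ :, np.newaxis ] ), axis = 1 ) )
    q = np.exp( log_joint - ll_s[ :, np.newaxis ] )
    return q, ll_s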
def experiment( data, K, maxiter, verbose = True, until_convergence = False, full = False ) :
## Run the EM
return em_algorithm( data, K, maxiter, rel_eps = 1.0e-4 if until_convergence else 0.0, verbose = verbose, full = full )
Explanation: Define a convenient procedure for running experiments. By setting relative error to zero the algorithm is forced to exhaust all the allocated iterations.
End of explanation
## A more flexible image arrangement
def arrange_flex( images, n_row = 10, n_col = 10, N = 28, M = 28, fill_value = 0 ) :
## Create the final grid of images row-by-row
im_grid = np.full( ( n_row * N, n_col * M ), fill_value, dtype = images.dtype )
for k in range( min( images.shape[ 0 ], n_col * n_row ) ) :
## Get the grid cell at which to place the image
i, j = ( k // n_col ) * N, ( k % n_col ) * M
## Just put the image in the cell
im_grid[ i:i+N, j:j+M ] = np.reshape( images[ k ], ( N, M, ) )
return im_grid
Explanation: Miscellanea: visualization
In order to be able to plot more flexibly, define another arranger.
End of explanation
def setup_canvas( axis, n_row, n_col, N = 28, M = 28 ) :
## Setup major tick marks to the seam between images and disable their labels
axis.set_yticks( np.arange( 1, n_row + 1 ) * N, minor = False )
axis.set_xticks( np.arange( 1, n_col + 1 ) * M, minor = False )
axis.set_yticklabels( [ ], minor = False ) ; axis.set_xticklabels( [ ], minor = False )
## Set minor ticks so that they are exactly between the major ones
axis.set_yticks( ( np.arange( n_row + 1 ) + 0.5 ) * N, minor = True )
axis.set_xticks( ( np.arange( n_col + 1 ) + 0.5 ) * M, minor = True )
## Make their labels into cell x-y coordinates
axis.set_yticklabels( [ "%d" % (i,) for i in 1+np.arange( n_row + 1 ) ], minor = True )
axis.set_xticklabels( [ "%d" % (i,) for i in 1+np.arange( n_col + 1 ) ], minor = True )
## Tick marks should be oriented outward
axis.tick_params( axis = 'both', which = 'both', direction = 'out' )
## Return nothing!
axis.grid( color = 'white', linestyle = '--' )
Explanation: The following pair of procedures are used to plot the digits in a clear manner. The first one just creates a canvas for the image: it sets up both axes properly and adds labels to them.
End of explanation
def show_data( data, n, n_col = 10, transpose = False, **kwargs ) :
## Get the number of rows necessary to plot the needed number of images
n_row = ( n + n_col - 1 ) // n_col
## Transpose if necessary
if transpose :
n_col, n_row = n_row, n_col
## Set the dimensions of the figure
fig = plt.figure( figsize = ( n_col, n_row ) )
axis = fig.add_subplot( 111 )
## Plot!
setup_canvas( axis, n_row, n_col )
axis.imshow( arrange_flex( data[:n], n_col = n_col, n_row = n_row ), **kwargs )
## Plot
plt.show( )
def visualize( data, clusters, ll, n_col = 2, plot_ll = True ) :
## Display the result
print "Final conditional log-likelihood value per observation achieved %f in %d iteration(s)" % (
ll[-1] / data.shape[ 0 ], len( ll ) )
## Plot the first difference of average log-likelihood
if plot_ll :
plt.figure( figsize = ( 12, 7 ) )
ax = plt.subplot(111)
ax.set_title( r"avg. log-likelihood change between successive iterations (log scale)" )
ax.plot( np.diff( ll / data.shape[ 0 ] ) )
# ax.set_ylabel( r"$\Delta_i \frac{1}{n} \sum_{s=1}^n \mathbb{E}_{z_s\sim q_i} \log p(x_s,z_s|\Theta_i)$" )
ax.set_ylabel( r"$\Delta_i \frac{1}{n} \sum_{s=1}^n \log p(x_s|\Theta_i)$" )
ax.set_yscale( 'log' )
## Plot the final estimates
if n_col > 0 :
show_data( clusters[-1], n = clusters.shape[1], n_col = n_col, cmap = plt.cm.spectral, interpolation = 'nearest' )
Explanation: This procedure displays the images on a nice plot. Used for one-line visualization.
End of explanation
def animate( theta, ll, pi = None, n_col = 10, n_row = 10, interval = 1, **kwargs ) :
## Create a background
bg = arrange_flex( np.zeros_like( theta[ 0 ] ), n_col = n_col, n_row = n_row )
## Compute log-likelihood differences and sanitize them.
ll_diff = np.maximum( np.diff( ll ), np.finfo(np.float).eps )
ll_diff[ ~np.isfinite( ll_diff ) ] = np.nan
## Set up the figure, the axis, and the plot elements we want to animate
fig = plt.figure( figsize = ( 12, 12 ) )
## Create the subplots and position them explicitly
if pi is None :
ax1, ax3, ax2 = fig.add_subplot( 311 ), fig.add_subplot( 312 ), fig.add_subplot( 313 )
else :
ax1, ax4 = fig.add_subplot( 411 ), fig.add_subplot( 412 )
ax3, ax2 = fig.add_subplot( 413 ), fig.add_subplot( 414 )
## Initialize different ranges for the image artists
setup_canvas( ax1, n_row = n_row, n_col = n_col )
ax1.set_title( r"Current estimate of the mixture components" )
setup_canvas( ax2, n_row = n_row, n_col = n_col )
ax2.set_title( r"Change between successive iterations" )
## Initialize geometry for the delta log-likelihood plot.
ax3.set_xlim( -0.1, ll.shape[ 0 ] + 0.1 )
ax3.set_yscale( 'log' ) #; ax3.set_yticklabels( [ ] )
ax3.set_title( r"Change between successive iterations of EM (log scale)" )
ax3.set_ylabel( r"$\Delta_i \sum_{s=1}^n \log p(x_s|\Theta_i)$" )
ax3.set_ylim( np.nanmin( ll_diff ) * 0.9, np.nanmax( ll_diff ) * 1.1 )
ax3.grid( )
## Setup a plot for the prior probabilities
if pi is not None :
classes = 1 + np.arange( len( pi[ 0 ] ) )
ax4.set_xticks( classes )
ax4.set_ylim( 0.0, 1.0 )
ax4.set_title( r"Current estimate of the mixture weights" )
ba1 = ax4.bar( classes, pi[ 0 ], align = "center" )
## Setup the artists
im1 = ax1.imshow( bg, vmin = +0.0, vmax = +1.0, **kwargs )
im2 = ax2.imshow( bg, vmin = -1.0, vmax = +1.0, **kwargs )
line1, = ax3.plot( [ ], linestyle = "-", color = 'blue' )
## Animation function. This is called sequentially
def update( i ) :
## Compute the frame
frame = theta[ i ] - theta[ i-1 ] if i > 0 else theta[ 0 ]
frame /= np.max( np.abs( frame ) )
## Draw frames on the image artists
im1.set_data( arrange_flex( theta[ i ], n_col = n_col, n_row = n_row ) )
im2.set_data( arrange_flex( frame, n_col = n_col, n_row = n_row ) )
if i > 0 :
## Show history on the line artist
line1.set_data( np.arange( i ), ll_diff[ :i ] )
if pi is not None :
[ b.set_height( h ) for b, h in zip( ba1, pi[ i ] ) ]
if i > 0 :
[ b.set_color( 'green' if h > p else 'red' ) for b, h, p in zip( ba1, pi[ i ], pi[ i-1 ] ) ]
## Return an iterator of artists in this frame
return ( im1, im2, line1, ) + ba1
return im1, im2, line1,
## Call the animator.
return animation.FuncAnimation( fig, update, frames = theta.shape[ 0 ], interval = interval, blit = True )
Explanation: Miscellanea: animating the EM
This function creates an animation of successive iterations of a run of the EM.
End of explanation
## Make simple animations of the EM estimators
## http://jakevdp.github.io/blog/2013/05/12/embedding-matplotlib-animations/
## http://jakevdp.github.io/blog/2012/08/18/matplotlib-animation-tutorial/
from matplotlib import animation
from IPython.display import HTML
from tempfile import NamedTemporaryFile
def embed_video( anim ) :
VIDEO_TAG = """<video controls autoplay muted loop><source src="data:video/x-m4v;base64,{0}"
type="video/mp4">Your browser does not support the video tag.</video>"""
plt.close( anim._fig )
if not hasattr( anim, '_encoded_video' ) :
ffmpeg_writer = animation.FFMpegWriter( )
with NamedTemporaryFile( suffix = '.mp4' ) as f:
anim.save( 'myanim.mp4', fps = 12, extra_args = [ '-vcodec', 'libx264' ] )# , writer = ffmpeg_writer )
video = open( 'myanim.mp4', "rb" ).read( )
anim._encoded_video = video.encode( "base64" )
return HTML( VIDEO_TAG.format( anim._encoded_video ) )
Explanation: Define a function that produces a video (using FFmpeg) and embeds it as HTML into IPython
End of explanation
if False :
## Fetch MNIST dataset from SciKit and create a local copy.
from sklearn.datasets import fetch_mldata
mnist = fetch_mldata( "MNIST original", data_home = './data/' )
np.savez_compressed('./data/mnist/mnist_scikit.npz', data = mnist.data, labels = mnist.target )
Explanation: Miscellanea: obtaining the data
Try to download the MNIST data from the SciKit repository.
End of explanation
if False :
## The procedure below loads the MNIST data from a comma-separated text file.
def load_mnist_from_csv( filename ) :
## Read the CSV file
data = np.loadtxt( open( filename, "rb" ), dtype = np.short, delimiter = ",", skiprows = 0 )
## Peel off the labels
return data[:,1:], data[:,0]
## Fetch the data from the provided CSV (!) files and save as a compressed data blob
data, labels = load_mnist_from_csv( "./data/mnist/mnist_train.csv" )
np.savez_compressed( './data/mnist/mnist_train.npz', labels = labels, data = data )
data, labels = load_mnist_from_csv( "./data/mnist/mnist_test.csv" )
np.savez_compressed( './data/mnist/mnist_test.npz', labels = labels, data = data )
Explanation: Or obtain the data from the provided CSV files.
End of explanation
assert( os.path.exists( './data/mnist/mnist_train.npz' ) )
with np.load( './data/mnist/mnist_train.npz', 'r' ) as npz :
mnist_labels, mnist_data = npz[ 'labels' ], np.array( npz[ 'data' ] > 127, np.int )
assert( os.path.exists( './data/mnist/mnist_test.npz' ) )
with np.load( './data/mnist/mnist_test.npz', 'r' ) as npz :
test_labels, test_data = npz[ 'labels' ], np.array( npz[ 'data' ] > 127, np.int )
Explanation: Study
First of all load and binarize the training data using the value 127 as the threshold.
End of explanation
## Mask
inx_sixes, inx_nines = np.where( mnist_labels == 6 )[ 0 ], np.where( mnist_labels == 9 )[ 0 ]
## Extract
sixes = mnist_data[ rand.choice( inx_sixes, 90, replace = False ) ]
nines = mnist_data[ rand.choice( inx_nines, 90, replace = False ) ]
## Show
show_data( sixes, n = 45, n_col = 15, cmap = plt.cm.gray, interpolation = 'nearest' )
show_data( nines, n = 45, n_col = 15, cmap = plt.cm.gray, interpolation = 'nearest' )
Explanation: Case : $K=2$
Let's have a look at some 6s and 9s.
End of explanation
data = mnist_data[ np.append( inx_sixes, inx_nines ) ]
clusters, ll = experiment( data, 2, 30 )
Explanation: They do indeed look quite distinct. Now collect them into a single dataset and estimate the model.
End of explanation
visualize( data, clusters, ll )
Explanation: The estimate deltas show that the EM algorithm's E-step actually transfers the unlikely observations between classes, as is expected by construction of the algorithm.
Judging by the plot below, it turns out that 30 iterations is more than enough for the EM to get meaningful estimates of the class ideals, represented by the probability product-measure on $\Omega^{28\times 28}$.
End of explanation
indices = np.arange( mnist_data.shape[ 0 ] )
rand.shuffle( indices )
show_data( mnist_data[ indices[:100] ] , n = 100, n_col = 10, cmap = plt.cm.gray, interpolation = 'nearest' )
Explanation: Now let's see how well the EM algorithm performs on a model with more classes. But before that let's have a look at a random sample of the handwritten digits.
End of explanation
sub_sample = np.concatenate( tuple( [ rand.choice( np.where( mnist_labels == i )[ 0 ], size = 200 ) for i in range( 10 ) ] ) )
train_data, train_labels = mnist_data[ sub_sample ], mnist_labels[ sub_sample ]
# train_data, train_labels = mnist_data, mnist_labels
Explanation: Case : $K=10, 15, 20, 30, 60$ and $90$
The original size of the training sample is too large to fit in these RAM banks :) That is why I had to limit the sample to a random subset of 2000 observations.
End of explanation
clusters_10, ll_10 = experiment( train_data, 10, 50 )
Explanation: Run the procedure that performs the EM algorithm and returns the history of the parameter estimates as well as the dynamics of the log-likelihood lower bound.
End of explanation
visualize( train_data, clusters_10, ll_10, n_col = 10, plot_ll = True )
Explanation: One can clearly see that $50$ iterations were not enough for the algorithm to converge: though the changes are tiny, even on the log-scale, they are still unstable.
End of explanation
clusters_15, ll_15 = experiment( train_data, 15, 50, verbose = False, until_convergence = False )
clusters_20, ll_20 = experiment( train_data, 20, 50, verbose = False, until_convergence = False )
clusters_30, ll_30 = experiment( train_data, 30, 50, verbose = False, until_convergence = False )
Explanation: Let's see if changing $K$ does the trick.
End of explanation
visualize( train_data, clusters_15, ll_15, n_col = 10, plot_ll = False )
visualize( train_data, clusters_20, ll_20, n_col = 10, plot_ll = False )
visualize( train_data, clusters_30, ll_30, n_col = 10, plot_ll = False )
Explanation: For what values of $K$ was it possible to infer the templates of all digits?
End of explanation
clusters_60, ll_60 = experiment( train_data, 60, 500, verbose = False, until_convergence = True )
Explanation: Obviously, the model with more mixture components is more likely to produce "templates" for all digits. For larger $K$ this is indeed the case.
Having run this algorithm many times, we can say that the digits $3$ and $8$, $4$ and $9$, and sometimes $5$ tend to be poorly separated. Furthermore, due to there being many different handwritten variations of the same digit, one should estimate a model with more classes.
The returned templates of the mixture components are clearly suboptimal: the procedure seems to get stuck at individual examples. This may happen for any $K$, and allowing for more iterations does not remedy this.
Some possibilities do exist: add regularizers to the E-step that tie neighbouring pixel distributions together.
End of explanation
visualize( train_data, clusters_60, ll_60, n_col = 15 )
Explanation: As one can see, increasing the number of iterations does not necessarily improve the results.
End of explanation
## To see this animation, make sure that ffmpeg is installed before running the following lines.
anim_60 = animate( clusters_60, ll_60, n_col = 15, n_row = 4,
interval = 1, cmap = plt.cm.hot, interpolation = 'nearest' )
embed_video( anim_60 )
Explanation: Judging by the plot of the log-likelihood, and the fact that the EM is guaranteed to converge to a local maximum and does so extremely fast, there was no need for more than 120-130 iterations. The changes in the log-likelihood around that number of iterations are of the order $10^{-4}$. Since we are working in finite precision arithmetic (double), the smallest precision is $\approx 10^{-14}$.
Let's see the dynamics of the estimates over the EM iterations. You will have to ensure that FFmpeg is installed (on Windows, that it is also on the PATH environment variable).
End of explanation
## Test model with K from 12 up to 42 with a step of 3
classes = 12 + np.arange( 11, dtype = np.int ) * 3
ll_hist = np.full( len( classes ), -np.inf, dtype = np.float )
## Store parameters
parameter_hist = list( )
for i, K in enumerate( classes ) :
## Run the experiment
c, l = experiment( train_data, K, 50, verbose = False, until_convergence = False )
ll_hist[ i ] = l[ -1 ]
parameter_hist.append( c[ -1 ] )
## Visualize the final parameters
show_data( c[-1], n = K, n_col = 13, cmap = plt.cm.hot, interpolation = 'nearest' )
Explanation: The parameter estimates of the EM stabilize pretty quickly. In fact most templates stabilize by iterations 100-120.
Choosing $K$
Among many methods of model selection, let's use the simple training-sample fitness score given by the value of the log-likelihood. Because the models are nested with respect to the number of mixture components, one should expect the likelihood to be a non-decreasing function of $K$ (on average, due to randomization of the initial parameter values).
For large enough $K$ this method may lead to overfitting.
End of explanation
print classes[ np.argmax( ll_hist ) ]
Explanation: Indeed the log-likelihood does not decrease with $K$ on average. Nevertheless the model with the highest likelihood turns out to have this many mixture components:
End of explanation
# clusters = parameter_hist[ np.argmax( ll_hist ) ] * 0.999 + 0.0005
clusters = clusters_60[-1]
Explanation: A nice, yet expected coincidence :)
Classification: label assignment
Select a model ...
End of explanation
## Compute the posterior component probabilities, and use max-aposteriori
## for the best class selection.
c_s, q_sk, ll_s = classify( train_data, clusters )
Explanation: ... and get the posterior mixture component probabilities.
End of explanation
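classify( ) itself is defined earlier in the notebook; purely for reference, a sketch consistent with the way it is used here (max-aposteriori component, posterior weights, and per-observation log-likelihoods) could look as follows.
## Illustration only: a classify-like helper applying the max-aposteriori rule.
def classify_sketch( x, theta, pi = None ) :
    ## Assume uniform mixture weights unless told otherwise
    pi = pi if pi is not None else np.ones( theta.shape[ 0 ], dtype = np.float ) / theta.shape[ 0 ]
    ## Reuse the E-step: posterior weights and log-likelihoods of the observations
    q_sk, ll_s = __posterior( x, theta, pi )
    ## The most probable component for every observation
    return np.argmax( q_sk, axis = 1 ), q_sk, ll_s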
template_x_label_maj_60 = np.full( clusters.shape[ 0 ], -1, np.int )
for t in range( clusters.shape[ 0 ] ) :
l, f = np.unique( train_labels[ c_s == t ], return_counts = True)
if len( l ) > 0 :
## This is too blunt an approach: it does not guarantee surjectivity of the mapping.
template_x_label_maj_60[ t ] = l[ np.argmax( f ) ]
Explanation: Use a simple majority rule to automatically assign labels to templates.
End of explanation
## there are 10 labels and K templates
label_cluster_score = np.full( ( clusters.shape[ 0 ], 10 ), -np.inf, np.float )
## Loop over each template
for t in range( clusters.shape[ 0 ] ) :
## The selected templates are chosen according to max-aposteriori rule.
inx = np.where( c_s == t )[ 0 ]
## Get the assigned lables and their frequencies
actual_labels = train_labels[ inx ]
l, f = np.unique( actual_labels, return_counts = True )
## For each template and each associated label in the training set the
## score is average of the top-5 highest log-likelihoods.
label_cluster_score[ t, l ] = [ np.average( sorted(
ll_s[ inx[ actual_labels == a ] ].flatten( ), reverse = True )[ : 5 ] ) for a in l ]
## For each template choose the label with the highest likelihood.
template_x_label_lik_60 = np.argmax( label_cluster_score, axis = 1 )
Explanation: Assign a label $l$ to each template $t$ according to a score based on the average of the top-$5$ log-likelihoods of observations with label $l$ that were classified with template $t$.
End of explanation
show_data( clusters, clusters.shape[ 0 ], 10,
cmap = plt.cm.spectral, interpolation = 'nearest' )
Explanation: Compare the label assignments. Here are the templates.
End of explanation
mask = np.asarray( template_x_label_maj_60 != template_x_label_lik_60, dtype = np.float ).reshape( (-1,1) )
show_data( clusters * mask, clusters.shape[ 0 ], 10,
cmap = plt.cm.spectral, interpolation = 'nearest' )
print "\nLikelihood based: ", template_x_label_lik_60[ mask[:,0] > 0 ]
print "Majority bassed: ", template_x_label_maj_60[ mask[:,0] > 0 ]
Explanation: These are the templates which were assigned different labels by the majority and likelihood-based ("trust") methods.
End of explanation
show_data( clusters[ np.argsort( template_x_label_lik_60 ) ], clusters.shape[ 0 ],
10, cmap = plt.cm.spectral, interpolation = 'nearest' )
show_data( clusters[ np.argsort( template_x_label_maj_60 ) ], clusters.shape[ 0 ],
10, cmap = plt.cm.spectral, interpolation = 'nearest' )
Explanation: Here are the pictures of templates ordered according to their label.
End of explanation
## Run the classifier on the test data
c_s_60, q_sk, ll_s = classify( test_data, clusters )
Explanation: Classification: test sample
Shall we try running the classifier on the test data?
End of explanation
## Show a sample of images and their templates
sample = np.random.permutation( test_data.shape[ 0 ] )[:64]
## Stack each image and its best template atop one another
display_stack = np.empty( ( 2 * len( sample ), test_data.shape[ 1 ] ), dtype = np.float )
display_stack[0::2] = test_data[ sample ] * q_sk[ sample, c_s_60[ sample ], np.newaxis ]
display_stack[1::2] = clusters[ c_s_60[ sample ] ]
## Display
show_data( display_stack, n = display_stack.shape[ 0 ], n_col = 16,
transpose = False, cmap = plt.cm.spectral, interpolation = 'nearest' )
Explanation: Let's see the best template for each test observation in some sub-sample.
End of explanation
print "Accuracy of likelihood based labelling: %.2f" % (
100 * np.average( template_x_label_lik_60[ c_s_60 ] == test_labels ), )
print "Accuracy of simple majority labelling: %.2f" % (
100 * np.average( template_x_label_maj_60[ c_s_60 ] == test_labels ), )
Explanation: The digits are shown in pairs: each first digit is the test observation (colour is determined by the confidence of the classifier -- the whiter the higher), and each second -- is the best template.
Let's see how accurate the classification was. Recall that the component assignment was done using
$$ \hat{t}_s = \mathop{\text{argmax}}_k p\bigl( C_s = k \, \big\vert\, X = x_s\bigr) \,, $$
i.e. the maximum a posteriori rule.
Then, with the classes assigned, labels were deduced based on either :
* simple majority;
* the observations with the largest likelihoods in each class.
By accuracy I understand the following score:
$$ \alpha = 1 - \frac{1}{|\text{TEST}|} \sum_{s\,\in\,\text{TEST}} \mathbf{1}\{ l_s \neq L(\hat{t}_s)\}\,, $$
where $l_s$ -- is the actual label of an observation $s$, $\hat{t}_s$ -- is the inferred mixture component (class) of that observation, and $k\mapsto L(k)$ is the component-to-label mapping.
End of explanation
## Now display the test observations which the model could not classify at all.
bad_tests = np.where( np.isinf( ll_s ) )[ 0 ]
show_data( test_data[ bad_tests ], n = max( len( bad_tests ), 10 ), n_col = 15, cmap = plt.cm.gray, interpolation = 'nearest' )
# print q_sk[ bad_tests ]
Explanation: Not surprisingly, the majority- and likelihood-based classification accuracies are close.
Let's see which test observations the model considers an artefact and for which it cannot reliably assign a template: i.e. the posterior class probability for these cases coincides with the prior. This happens when the likelihood of an observation is identical within each class.
End of explanation
clusters = clusters_30[-1]
c_s, q_sk, ll_s = classify( train_data, clusters )
template_x_label_maj_30 = np.full( clusters.shape[ 0 ], -1, np.int )
for t in range( clusters.shape[ 0 ] ) :
l, f = np.unique( train_labels[ c_s == t ], return_counts = True)
if len( l ) > 0 :
template_x_label_maj_30[ t ] = l[ np.argmax( f ) ]
label_cluster_score_30 = np.full( ( clusters.shape[ 0 ], 10 ), -np.inf, np.float )
for t in range( clusters.shape[ 0 ] ) :
inx = np.where( c_s == t )[ 0 ]
actual_labels = train_labels[ inx ]
l, f = np.unique( actual_labels, return_counts = True )
label_cluster_score_30[ t, l ] = [ np.average( sorted(
ll_s[ inx[ actual_labels == a ] ].flatten( ), reverse = True )[ : 5 ] ) for a in l ]
template_x_label_lik_30 = np.argmax( label_cluster_score_30, axis = 1 )
show_data( clusters[ np.argsort( template_x_label_lik_30 ) ], clusters.shape[ 0 ],
10, cmap = plt.cm.spectral, interpolation = 'nearest' )
print template_x_label_lik_30[ np.argsort( template_x_label_lik_30 ) ].reshape((3,-1))
c_s_30, q_sk, ll_s = classify( test_data, clusters )
print "Accuracy of likelihood based labelling: %.2f" % (
100 * np.average( template_x_label_lik_30[ c_s_30 ] == test_labels ), )
print "Accuracy of simple majority labelling: %.2f" % (
100 * np.average( template_x_label_maj_30[ c_s_30 ] == test_labels ), )
Explanation: <hr/>
Let's see how more parsimonious models fare with respect to accuracy on the test sample.
Accuracy of $K=30$
End of explanation
clusters = clusters_20[-1]
c_s, q_sk, ll_s = classify( train_data, clusters )
template_x_label_maj_20 = np.full( clusters.shape[ 0 ], -1, np.int )
for t in range( clusters.shape[ 0 ] ) :
l, f = np.unique( train_labels[ c_s == t ], return_counts = True)
if len( l ) > 0 :
template_x_label_maj_20[ t ] = l[ np.argmax( f ) ]
label_cluster_score_20 = np.full( ( clusters.shape[ 0 ], 10 ), -np.inf, np.float )
for t in range( clusters.shape[ 0 ] ) :
inx = np.where( c_s == t )[ 0 ]
actual_labels = train_labels[ inx ]
l, f = np.unique( actual_labels, return_counts = True )
label_cluster_score_20[ t, l ] = [ np.average( sorted(
ll_s[ inx[ actual_labels == a ] ].flatten( ), reverse = True )[ : 5 ] ) for a in l ]
template_x_label_lik_20 = np.argmax( label_cluster_score_20, axis = 1 )
show_data( clusters[ np.argsort( template_x_label_lik_20 ) ], clusters.shape[ 0 ],
10, cmap = plt.cm.spectral, interpolation = 'nearest' )
print template_x_label_lik_20[ np.argsort( template_x_label_lik_20 ) ].reshape((2,-1))
c_s_20, q_sk, ll_s = classify( test_data, clusters )
print "Accuracy of likelihood based labelling: %.2f" % (
100 * np.average( template_x_label_lik_20[ c_s_20 ] == test_labels ), )
print "Accuracy of simple majority labelling: %.2f" % (
100 * np.average( template_x_label_maj_20[ c_s_20 ] == test_labels ), )
Explanation: Accuracy of $K=20$
End of explanation
clusters = clusters_15[-1]
c_s, q_sk, ll_s = classify( train_data, clusters )
template_x_label_maj_15 = np.full( clusters.shape[ 0 ], -1, np.int )
for t in range( clusters.shape[ 0 ] ) :
l, f = np.unique( train_labels[ c_s == t ], return_counts = True)
if len( l ) > 0 :
template_x_label_maj_15[ t ] = l[ np.argmax( f ) ]
label_cluster_score_15 = np.full( ( clusters.shape[ 0 ], 10 ), -np.inf, np.float )
for t in range( clusters.shape[ 0 ] ) :
inx = np.where( c_s == t )[ 0 ]
actual_labels = train_labels[ inx ]
l, f = np.unique( actual_labels, return_counts = True )
label_cluster_score_15[ t, l ] = [ np.average( sorted(
ll_s[ inx[ actual_labels == a ] ].flatten( ), reverse = True )[ : 5 ] ) for a in l ]
template_x_label_lik_15 = np.argmax( label_cluster_score_15, axis = 1 )
show_data( clusters[ np.argsort( template_x_label_lik_15 ) ], clusters.shape[ 0 ],
15, cmap = plt.cm.spectral, interpolation = 'nearest' )
print template_x_label_lik_15[ np.argsort( template_x_label_lik_15 ) ].reshape((1,-1))
c_s_15, q_sk, ll_s = classify( test_data, clusters )
print "Accuracy of likelihood based labelling: %.2f" % (
100 * np.average( template_x_label_lik_15[ c_s_15 ] == test_labels ), )
print "Accuracy of simple majority labelling: %.2f" % (
100 * np.average( template_x_label_maj_15[ c_s_15 ] == test_labels ), )
Explanation: Accuracy of $K=15$
End of explanation
clusters = clusters_10[-1]
c_s, q_sk, ll_s = classify( train_data, clusters )
template_x_label_maj_10 = np.full( clusters.shape[ 0 ], -1, np.int )
for t in range( clusters.shape[ 0 ] ) :
l, f = np.unique( train_labels[ c_s == t ], return_counts = True)
if len( l ) > 0 :
template_x_label_maj_10[ t ] = l[ np.argmax( f ) ]
label_cluster_score_10 = np.full( ( clusters.shape[ 0 ], 10 ), -np.inf, np.float )
for t in range( clusters.shape[ 0 ] ) :
inx = np.where( c_s == t )[ 0 ]
actual_labels = train_labels[ inx ]
l, f = np.unique( actual_labels, return_counts = True )
label_cluster_score_10[ t, l ] = [ np.average( sorted(
ll_s[ inx[ actual_labels == a ] ].flatten( ), reverse = True )[ : 5 ] ) for a in l ]
template_x_label_lik_10 = np.argmax( label_cluster_score_10, axis = 1 )
show_data( clusters[ np.argsort( template_x_label_lik_10 ) ], clusters.shape[ 0 ],
10, cmap = plt.cm.spectral, interpolation = 'nearest' )
print template_x_label_lik_10[ np.argsort( template_x_label_lik_10 ) ].reshape((1,-1))
c_s_10, q_sk, ll_s = classify( test_data, clusters )
print "Accuracy of likelihood based labelling: %.2f" % (
100 * np.average( template_x_label_lik_10[ c_s_10 ] == test_labels ), )
print "Accuracy of simple majority labelling: %.2f" % (
100 * np.average( template_x_label_maj_10[ c_s_10 ] == test_labels ), )
Explanation: Accuracy of $K=10$
End of explanation
print "Model with K = 10: %.2f" % ( 100 * np.average( template_x_label_lik_10[ c_s_10 ] == test_labels ), )
print "Model with K = 15: %.2f" % ( 100 * np.average( template_x_label_lik_15[ c_s_15 ] == test_labels ), )
print "Model with K = 20: %.2f" % ( 100 * np.average( template_x_label_lik_20[ c_s_20 ] == test_labels ), )
print "Model with K = 30: %.2f" % ( 100 * np.average( template_x_label_lik_30[ c_s_30 ] == test_labels ), )
print "Model with K = 60: %.2f" % ( 100 * np.average( template_x_label_lik_60[ c_s_60 ] == test_labels ), )
Explanation: As one can see, the test-sample accuracy of the model falls dramatically as the number of mixture components decreases. This was expected since, due to various reasons, one being that the data is handwritten, it is highly unlikely that a single digit would have only one template.
End of explanation
( clusters_full, pi_full ), ll_full = experiment( data, 30, 1000, False, True, True )
anim_full = animate( clusters_full, ll_full, pi = pi_full, n_col = 15, n_row = 2, interval = 1, cmap = plt.cm.hot, interpolation = 'nearest' )
embed_video( anim_full )
Explanation: <br/><p style="font-size: 20pt;font-weight: bold; text-align: center;font-family: Courier New"> Ignore everything below </p><br/>
Let us digress for a moment, consider the full model, and create a video to see how the estimates are refined.
End of explanation
from sklearn.neighbors import KernelDensity
from sklearn.decomposition import PCA
from sklearn.grid_search import GridSearchCV
pca = PCA( n_components = 50 )
X_train_pca = pca.fit_transform( X_train )
params = { 'bandwidth' : np.logspace( -1, 1, 20 ) }
grid = GridSearchCV( KernelDensity( ), params )
grid.fit( X_train_pca )
print("best bandwidth: {0}".format( grid.best_estimator_.bandwidth ) )
params
kde = grid.best_estimator_
new_data = kde.sample( 100 )
new_data = pca.inverse_transform( new_data )
print new_data.shape
plt.figure( figsize = ( 9, 9 ) )
plt.imshow( arrange_flex( new_data ), cmap = plt.cm.gray, interpolation = 'nearest' )
plt.show( )
Explanation: <hr/>
A random variable $X\sim \text{Beta}(\alpha,\beta)$ if the law of $X$ has density
$$p(u) = \frac{\Gamma(\alpha+\beta)}{\Gamma(\alpha)\Gamma(\beta)} u^{\alpha-1}(1-u)^{\beta-1} $$
$$ \log p(X,Z|\Theta) = \sum_{s=1}^n \log \prod_{k=1}^K \Bigl[ \pi_k
\prod_{i=1}^N \prod_{j=1}^M
\frac{\Gamma(\alpha_{kij}+\beta_{kij})}{\Gamma(\alpha_{kij})\Gamma(\beta_{kij})} x_{sij}^{\alpha_{kij}-1}(1-x_{sij})^{\beta_{kij}-1} \Bigr]^{1_{z_s = k}}$$
\begin{align}
\mathbb{E}_q \log p(X,Z|\Theta)
&= \sum_{k=1}^K \sum_{s=1}^n q_{sk} \log \pi_k \\
&+ \sum_{k=1}^K \sum_{i=1}^N \sum_{j=1}^M \sum_{s=1}^n q_{sk} \bigl(
\log \Gamma(\alpha_{kij}+\beta_{kij}) - \log \Gamma(\alpha_{kij}) - \log \Gamma(\beta_{kij}) \bigr) \\
&+ \sum_{k=1}^K \sum_{i=1}^N \sum_{j=1}^M \sum_{s=1}^n q_{sk} \bigl(
(\alpha_{kij}-1) \log x_{sij} + (\beta_{kij}-1) \log(1-x_{sij}) \bigr)
\end{align}
The derivative of the Gamma function does not seem to yield analytically tractable solutions.
<hr/>
<p style="font-size: 20pt; text-align: center;font-family: Courier New">Non-parametric approach</p>
End of explanation |
2,556 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Visualize Evoked data
In this tutorial we focus on the plotting functions of
Step1: First we read the evoked object from a file. Check out
tut_epoching_and_averaging to get to this stage from raw data.
Step2: Notice that evoked is a list of
Step3: Let's start with a simple one. We plot event related potentials / fields
(ERP/ERF). The bad channels are not plotted by default. Here we explicitly
set the exclude parameter to show the bad channels in red. All plotting
functions of MNE-python return a handle to the figure instance. When we have
the handle, we can customise the plots to our liking.
Step4: All plotting functions of MNE-python return a handle to the figure instance.
When we have the handle, we can customise the plots to our liking. For
example, we can get rid of the empty space with a simple function call.
Step5: Now we will make it a bit fancier and only use MEG channels. Many of the
MNE-functions include a picks parameter to include a selection of
channels. picks is simply a list of channel indices that you can easily
construct with
Step6: Notice the legend on the left. The colors would suggest that there may be two
separate sources for the signals. This wasn't obvious from the first figure.
Try painting the slopes with left mouse button. It should open a new window
with topomaps (scalp plots) of the average over the painted area. There is
also a function for drawing topomaps separately.
Step7: By default the topomaps are drawn from evenly spread out points of time over
the evoked data. We can also define the times ourselves.
Step8: Or we can automatically select the peaks.
Step9: You can take a look at the documentation of
Step10: Notice that we created five axes, but had only four categories. The fifth
axes was used for drawing the colorbar. You must provide room for it when you
create this kind of custom plots or turn the colorbar off with
colorbar=False. That's what the warnings are trying to tell you. Also, we
used show=False for the three first function calls. This prevents the
showing of the figure prematurely. The behavior depends on the mode you are
using for your python session. See http
Step11: Sometimes, you may want to compare two or more conditions at a selection of
sensors, or e.g. for the Global Field Power. For this, you can use the
function
Step12: We can also plot the activations as images. The time runs along the x-axis
and the channels along the y-axis. The amplitudes are color coded so that
the amplitudes from negative to positive translates to shift from blue to
red. White means zero amplitude. You can use the cmap parameter to define
the color map yourself. The accepted values include all matplotlib colormaps.
Step13: Finally we plot the sensor data as a topographical view. In the simple case
we plot only left auditory responses, and then we plot them all in the same
figure for comparison. Click on the individual plots to open them bigger.
Step14: We can also plot the activations as arrow maps on top of the topoplot.
The arrows represent an estimation of the current flow underneath the MEG
sensors. Here, sample number 175 corresponds to the time of the maximum
sensor space activity.
Step15: Visualizing field lines in 3D
We now compute the field maps to project MEG and EEG data to the MEG helmet
and scalp surface.
To do this, we need coregistration information. See
tut_forward for more details. Here we just illustrate usage. | Python Code:
import os.path as op
import numpy as np
import matplotlib.pyplot as plt
import mne
# sphinx_gallery_thumbnail_number = 9
Explanation: Visualize Evoked data
In this tutorial we focus on the plotting functions of :class:mne.Evoked.
End of explanation
data_path = mne.datasets.sample.data_path()
fname = op.join(data_path, 'MEG', 'sample', 'sample_audvis-ave.fif')
evoked = mne.read_evokeds(fname, baseline=(None, 0), proj=True)
print(evoked)
Explanation: First we read the evoked object from a file. Check out
tut_epoching_and_averaging to get to this stage from raw data.
End of explanation
evoked_l_aud = evoked[0]
evoked_r_aud = evoked[1]
evoked_l_vis = evoked[2]
evoked_r_vis = evoked[3]
Explanation: Notice that evoked is a list of :class:evoked <mne.Evoked> instances.
You can read only one of the categories by passing the argument condition
to :func:mne.read_evokeds. To make things more simple for this tutorial, we
read each instance to a variable.
End of explanation
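As an aside, a single category can also be read directly by name via the condition argument (illustrative only; the condition name is one of those printed above):
# Illustrative aside: read just one condition instead of the whole list
evoked_l_aud_only = mne.read_evokeds(fname, condition='Left Auditory',
                                     baseline=(None, 0), proj=True)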
fig = evoked_l_aud.plot(exclude=(), time_unit='s')
Explanation: Let's start with a simple one. We plot event related potentials / fields
(ERP/ERF). The bad channels are not plotted by default. Here we explicitly
set the exclude parameter to show the bad channels in red. All plotting
functions of MNE-python return a handle to the figure instance. When we have
the handle, we can customise the plots to our liking.
End of explanation
fig.tight_layout()
Explanation: All plotting functions of MNE-python return a handle to the figure instance.
When we have the handle, we can customise the plots to our liking. For
example, we can get rid of the empty space with a simple function call.
End of explanation
picks = mne.pick_types(evoked_l_aud.info, meg=True, eeg=False, eog=False)
evoked_l_aud.plot(spatial_colors=True, gfp=True, picks=picks, time_unit='s')
Explanation: Now we will make it a bit fancier and only use MEG channels. Many of the
MNE-functions include a picks parameter to include a selection of
channels. picks is simply a list of channel indices that you can easily
construct with :func:mne.pick_types. See also :func:mne.pick_channels and
:func:mne.pick_channels_regexp.
Using spatial_colors=True, the individual channel lines are color coded
to show the sensor positions - specifically, the x, y, and z locations of
the sensors are transformed into R, G and B values.
End of explanation
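As an aside, the two alternative pickers mentioned above can be used like this (illustrative only; the channel names are assumed to follow the sample dataset's 'MEG xxxx' naming):
# Illustrative aside: pick channels by explicit name or by regular expression
name_picks = mne.pick_channels(evoked_l_aud.info['ch_names'],
                               include=['MEG 0111', 'MEG 0121', 'MEG 0131'])
regexp_picks = mne.pick_channels_regexp(evoked_l_aud.info['ch_names'], 'MEG 11..')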
evoked_l_aud.plot_topomap(time_unit='s')
Explanation: Notice the legend on the left. The colors would suggest that there may be two
separate sources for the signals. This wasn't obvious from the first figure.
Try painting the slopes with left mouse button. It should open a new window
with topomaps (scalp plots) of the average over the painted area. There is
also a function for drawing topomaps separately.
End of explanation
times = np.arange(0.05, 0.151, 0.05)
evoked_r_aud.plot_topomap(times=times, ch_type='mag', time_unit='s')
Explanation: By default the topomaps are drawn from evenly spread out points of time over
the evoked data. We can also define the times ourselves.
End of explanation
evoked_r_aud.plot_topomap(times='peaks', ch_type='mag', time_unit='s')
Explanation: Or we can automatically select the peaks.
End of explanation
fig, ax = plt.subplots(1, 5, figsize=(8, 2))
kwargs = dict(times=0.1, show=False, vmin=-300, vmax=300, time_unit='s')
evoked_l_aud.plot_topomap(axes=ax[0], colorbar=True, **kwargs)
evoked_r_aud.plot_topomap(axes=ax[1], colorbar=False, **kwargs)
evoked_l_vis.plot_topomap(axes=ax[2], colorbar=False, **kwargs)
evoked_r_vis.plot_topomap(axes=ax[3], colorbar=False, **kwargs)
for ax, title in zip(ax[:4], ['Aud/L', 'Aud/R', 'Vis/L', 'Vis/R']):
ax.set_title(title)
plt.show()
Explanation: You can take a look at the documentation of :func:mne.Evoked.plot_topomap
or simply write evoked_r_aud.plot_topomap? in your python console to
see the different parameters you can pass to this function. Most of the
plotting functions also accept axes parameter. With that, you can
customise your plots even further. First we create a set of matplotlib
axes in a single figure and plot all of our evoked categories next to each
other.
End of explanation
ts_args = dict(gfp=True, time_unit='s')
topomap_args = dict(sensors=False, time_unit='s')
evoked_r_aud.plot_joint(title='right auditory', times=[.09, .20],
ts_args=ts_args, topomap_args=topomap_args)
Explanation: Notice that we created five axes, but had only four categories. The fifth
axes was used for drawing the colorbar. You must provide room for it when you
create this kind of custom plots or turn the colorbar off with
colorbar=False. That's what the warnings are trying to tell you. Also, we
used show=False for the three first function calls. This prevents the
showing of the figure prematurely. The behavior depends on the mode you are
using for your python session. See http://matplotlib.org/users/shell.html for
more information.
We can combine the two kinds of plots in one figure using the
:func:mne.Evoked.plot_joint method of Evoked objects. Called as-is
(evoked.plot_joint()), this function should give an informative display
of spatio-temporal dynamics.
You can directly style the time series part and the topomap part of the plot
using the topomap_args and ts_args parameters. You can pass key-value
pairs as a python dictionary. These are then passed as parameters to the
topomaps (:func:mne.Evoked.plot_topomap) and time series
(:func:mne.Evoked.plot) of the joint plot.
For an example of specific styling using these topomap_args and
ts_args arguments, here, topomaps at specific time points
(90 and 200 ms) are shown, sensors are not plotted (via an argument
forwarded to plot_topomap), and the Global Field Power is shown:
End of explanation
conditions = ["Left Auditory", "Right Auditory", "Left visual", "Right visual"]
evoked_dict = dict()
for condition in conditions:
evoked_dict[condition.replace(" ", "/")] = mne.read_evokeds(
fname, baseline=(None, 0), proj=True, condition=condition)
print(evoked_dict)
colors = dict(Left="Crimson", Right="CornFlowerBlue")
linestyles = dict(Auditory='-', visual='--')
pick = evoked_dict["Left/Auditory"].ch_names.index('MEG 1811')
mne.viz.plot_compare_evokeds(evoked_dict, picks=pick, colors=colors,
linestyles=linestyles, split_legend=True)
Explanation: Sometimes, you may want to compare two or more conditions at a selection of
sensors, or e.g. for the Global Field Power. For this, you can use the
function :func:mne.viz.plot_compare_evokeds. The easiest way is to create
a Python dictionary, where the keys are condition names and the values are
:class:mne.Evoked objects. If you provide lists of :class:mne.Evoked
objects, such as those for multiple subjects, the grand average is plotted,
along with a confidence interval band - this can be used to contrast
conditions for a whole experiment.
First, we load in the evoked objects into a dictionary, setting the keys to
'/'-separated tags (as we can do with event_ids for epochs). Then, we plot
with :func:mne.viz.plot_compare_evokeds.
The plot is styled with dict arguments, again using "/"-separated tags.
We plot a MEG channel with a strong auditory response.
For move advanced plotting using :func:mne.viz.plot_compare_evokeds.
See also sphx_glr_auto_tutorials_plot_metadata_epochs.py.
End of explanation
evoked_r_aud.plot_image(picks=picks, time_unit='s')
Explanation: We can also plot the activations as images. The time runs along the x-axis
and the channels along the y-axis. The amplitudes are color coded so that
the amplitudes from negative to positive translate to a shift from blue to
red. White means zero amplitude. You can use the cmap parameter to define
the color map yourself. The accepted values include all matplotlib colormaps.
End of explanation
title = 'MNE sample data\n(condition : %s)'
evoked_l_aud.plot_topo(title=title % evoked_l_aud.comment,
background_color='k', color=['white'])
mne.viz.plot_evoked_topo(evoked, title=title % 'Left/Right Auditory/Visual',
background_color='w')
Explanation: Finally we plot the sensor data as a topographical view. In the simple case
we plot only left auditory responses, and then we plot them all in the same
figure for comparison. Click on the individual plots to open them bigger.
End of explanation
evoked_l_aud_mag = evoked_l_aud.copy().pick_types(meg='mag')
mne.viz.plot_arrowmap(evoked_l_aud_mag.data[:, 175], evoked_l_aud_mag.info)
Explanation: We can also plot the activations as arrow maps on top of the topoplot.
The arrows represent an estimation of the current flow underneath the MEG
sensors. Here, sample number 175 corresponds to the time of the maximum
sensor space activity.
End of explanation
subjects_dir = data_path + '/subjects'
trans_fname = data_path + '/MEG/sample/sample_audvis_raw-trans.fif'
maps = mne.make_field_map(evoked_l_aud, trans=trans_fname, subject='sample',
subjects_dir=subjects_dir, n_jobs=1)
# Finally, explore several points in time
field_map = evoked_l_aud.plot_field(maps, time=.1)
Explanation: Visualizing field lines in 3D
We now compute the field maps to project MEG and EEG data to the MEG helmet
and scalp surface.
To do this, we need coregistration information. See
tut_forward for more details. Here we just illustrate usage.
End of explanation |
2,557 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Bolometric correction grids
Bolometric correction is defined as the difference between the apparent bolometric magnitude of a star and its apparent magnitude in a particular bandpass
Step1: The bandpasses provided to initialize the grid object are parsed according to the .get_band method, which returns the photometric system and the name of the band in the system
Step2: Not all bands have cute nicknames to them, so you can also be explicit, e.g. | Python Code:
from isochrones.mist.bc import MISTBolometricCorrectionGrid
bc_grid = MISTBolometricCorrectionGrid(['J', 'H', 'K', 'G', 'BP', 'RP', 'g', 'r', 'i'])
bc_grid.df.head()
bc_grid.interp.index_names
bc_grid.interp([5770, 4.44, 0.0, 0.], ['G', 'K'])
Explanation: Bolometric correction grids
Bolometric correction is defined as the difference between the apparent bolometric magnitude of a star and its apparent magnitude in a particular bandpass:
$$BC_x = m_{bol} - m_x$$
The MIST project provide grids of bolometric corrections in many photometric systems as a function of stellar temperature, surface gravity, metallicity, and $A_V$ extinction. This allows for accurate conversion of bolometric magnitude of a star (available from the theoretical grids) to magnitude in any band, at any extinction (and distance), without the need for any "effective wavelength" approximation (used in isochrones prior to v2.0), which breaks down for broad bandpasses and large extinctions. These grids are downloaded, organized, stored, and interpolated in much the same manner as the model grids.
End of explanation
bc_grid.get_band('G'), bc_grid.get_band('g')
Explanation: The bandpasses provided to initialize the grid object are parsed according to the .get_band method, which returns the photometric system and the name of the band in the system:
End of explanation
bc_grid.get_band('DECam_g')
Explanation: Not all bands have cute nicknames to them, so you can also be explicit, e.g.:
End of explanation |
2,558 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
glstring
using the get_ functions
Each of these functions take a GL String as an argument
Step1: get_alleles() & get_loci()
Each of these functions returns a set of objects.
Step2: get_allele_lists(), get_genotypes(), get_genotype_lists(), get_locus_blocks(), get_genotype_blocks(), & get_genotype_list_blocks
Each of these functions return a list of objects found in the GL String.
Locus blocks are separated by a ^.
Step3: Genotype list blocks are separated by |
Step4: Genotype blocks are separated by +
Step5: Genotype lists are found in locus blocks. The contain | delimiters, which separate the possible genotypes. There is one genotype list in this example.
Step6: Genotypes contain a + delimiter and may contain allele lists
Step7: Allele lists contain a / delimiter
Step8: A more complex example | Python Code:
import glstring
print(glstring.__file__)
from glstring.glstring import *
a = "HLA-A*01:01/HLA-A*01:02+HLA-A*24:02|HLA-A*01:03+HLA-A*24:03^HLA-B*44:01+HLA-B*44:02"
print(a)
Explanation: glstring
using the get_ functions
Each of these functions takes a GL String as an argument
End of explanation
get_alleles(a)
get_loci(a)
Explanation: get_alleles() & get_loci()
Each of these functions returns a set of objects.
End of explanation
get_locus_blocks(a)
Explanation: get_allele_lists(), get_genotypes(), get_genotype_lists(), get_locus_blocks(), get_genotype_blocks(), & get_genotype_list_blocks
Each of these functions returns a list of objects found in the GL String.
Locus blocks are separated by a ^.
End of explanation
get_genotype_list_blocks(a)
Explanation: Genotype list blocks are separated by |
End of explanation
get_genotype_blocks(a)
Explanation: Genotype blocks are separated by +
End of explanation
get_genotype_lists(a)
Explanation: Genotype lists are found in locus blocks. They contain | delimiters, which separate the possible genotypes. There is one genotype list in this example.
End of explanation
get_genotypes(a)
Explanation: Genotypes contain a + delimiter and may contain allele lists
End of explanation
get_allele_lists(a)
Explanation: Allele lists contain a / delimiter
End of explanation
a = ("HLA-A*01:01/HLA-A*01:02+HLA-A*24:02|HLA-A*01:03+HLA-A*24:03^"
"HLA-B*08:01+HLA-B*44:01/HLA-B*44:02^"
"HLA-C*01:02+HLA-C*01:03^"
"HLA-DRB5*01:01~HLA-DRB1*03:01+HLA-DRB1*04:07:01/HLA-DRB1*04:92~HLA-DRB1*03:01")
print(a)
get_loci(a)
get_alleles(a)
get_allele_lists(a)
get_genotypes(a)
get_genotype_lists(a)
get_locus_blocks(a)
get_genotypes(get_locus_blocks(a)[0])
get_genotypes(get_locus_blocks(a)[1])
get_allele_lists(get_genotypes(get_locus_blocks(a)[0])[0])
get_alleles(get_allele_lists(get_genotypes(get_locus_blocks(a)[0])[0])[0])
get_haplotypes(a)
Explanation: A more complex example
End of explanation |
2,559 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
1A.algo - Optimisation sous contrainte
L'optimisation sous contrainte est un problème résolu. Ce notebook utilise une librairie externe et la compare avec l'algorithme Arrow-Hurwicz qu'il faudra implémenter. Plus de précision dans cet article Damped Arrow-Hurwicz algorithm for sphere packing.
Step1: Le langage Python propose des modules qui permettent de résoudre des problèmes d'optimisation sous contraintes et il n'est pas forcément nécessaire de connaître la théorie derrière les algorithmes de résolution pour s'en servir. Au cours de cette séance, on verra comment faire. Même si comprendre comment utiliser une fonction d'un module tel que cvxopt requiert parfois un peu de temps et de lecture.
On verra également un algorithme simple d'optimisation. C'est une bonne façon de comprendre que cela prend du temps si on veut implémenter soi-même ce type de solution tout en étant aussi rapide et efficace.
Exercice 1
Step2: La documentation cvxopt est parfois peu explicite. Il ne faut pas hésiter à regarder les exemples d'abord et à la lire avec attention les lignes qui décrivent les valeurs que doivent prendre chaque paramètre de la fonction. Le plus intéressant pour le cas qui nous intéresse est celui-ci (tiré de la page problems with nonlinear objectives)
Step3: Cet exemple résoud le problème de minimisation suivant | Python Code:
from jyquickhelper import add_notebook_menu
add_notebook_menu()
Explanation: 1A.algo - Optimisation sous contrainte (constrained optimization)
Constrained optimization is a solved problem. This notebook uses an external library and compares it with the Arrow-Hurwicz algorithm, which you will have to implement. More details in the article Damped Arrow-Hurwicz algorithm for sphere packing.
End of explanation
from cvxopt import solvers, matrix
m = matrix( [ [2.0, 1.1] ] ) # use floats (real numbers), not integers
# cvxopt does not perform implicit conversion
t = m.T # the transpose
t.size # shows the dimensions of the matrix
Explanation: Python offers modules that can solve constrained optimization problems, and it is not strictly necessary to know the theory behind the solution algorithms in order to use them. In this session we will see how to do so, even though understanding how to use a function from a module such as cvxopt sometimes takes a bit of time and reading.
We will also look at a simple optimization algorithm. It is a good way to understand that it takes time to implement this kind of solution yourself while staying as fast and as efficient.
Exercise 1: optimization with cvxopt
We want to solve the following optimization problem:
$\left\{ \begin{array}{l} \min_{x,y} \left\{ x^2 + y^2 - xy + y \right\} \\ \text{subject to } x + 2y = 1 \end{array}\right.$
The cvxopt module is one of the most suitable for solving this problem. Here are a few instructions that use it:
End of explanation
from cvxopt import solvers, matrix, spdiag, log
def acent(A, b):
m, n = A.size
def F(x=None, z=None):
if x is None:
# the algorithm works iteratively
# so an initial x has to be chosen, which is what happens here
return 0, matrix(1.0, (n,1))
if min(x) <= 0.0:
return None # infeasible case
# here begins the code that defines what one iteration does
f = -sum(log(x))
Df = -(x**-1).T
if z is None: return f, Df
H = spdiag(z[0] * x**(-2))
return f, Df, H
return solvers.cp(F, A=A, b=b)['x']
A = matrix ( [[1.0,2.0]] ).T
b = matrix ( [[ 1.0 ]] )
print(acent(A,b))
# there is a way to avoid printing the logs (useful when a large
# number of optimizations has to be run)
from cvxopt import solvers
solvers.options['show_progress'] = False
print(acent(A,b))
solvers.options['show_progress'] = True
Explanation: The cvxopt documentation is sometimes not very explicit. Do not hesitate to look at the examples first and to read carefully the lines that describe the values each parameter of the function must take. The most relevant example for our case is this one (taken from the page problems with nonlinear objectives):
End of explanation
import cvxopt
m = cvxopt.matrix( [[ 0, 1.5], [ 4.5, -6] ] )
print(m)
Explanation: This example solves the following minimization problem:
$\left\{ \begin{array}{l} \min_{X} \left\{ - \sum_{i=1}^n \ln x_i \right\} \\ \text{subject to } AX = b \end{array} \right.$
The numpy and cvxopt modules do not use the same matrices (the same matrix objects) even though they carry the same name in both modules. The cvxopt functions only work with the matrices of that module. Do not forget to convert a matrix when it is described by another class.
End of explanation |
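As a closing aside, one possible way to solve Exercise 1 ($\min_{x,y}\, x^2+y^2-xy+y$ subject to $x+2y=1$) is the quadratic-programming interface solvers.qp, which avoids writing the F callback by hand. This is only a sketch of one approach; the exercise can equally be solved with solvers.cp as in the template above.
# Sketch for Exercise 1 with the QP interface: minimize (1/2) x'Px + q'x  s.t.  Ax = b
from cvxopt import matrix, solvers
P = matrix([[2.0, -1.0], [-1.0, 2.0]])  # (1/2) x'Px = x^2 + y^2 - xy
q = matrix([0.0, 1.0])                  # the linear term + y
A = matrix([[1.0], [2.0]])              # the constraint x + 2y = 1 (a 1 x 2 matrix)
b = matrix([1.0])
sol = solvers.qp(P, q, A=A, b=b)
print(sol['x'])                         # expected to be close to x = 3/7, y = 2/7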
2,560 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
probability mass function - maps each value to its probability. Alows you to compare two distributions independently from sample size.
probability - frequency expressed as a fraction of the sample size, n.
normalization - dividing frequencies by n.
given a Hist, we can make a dictionary that maps each value to its probability
Step1: To plot a PMF
Step2: Good idea to zoom in on the mode, where the biggest differences occur
Step3: Class Size Paradox
Step4: For each class size, x, in the following funtion, we multiply the probability by x, the number of students who observe that class size. This gives a biased distribution
Step5: Conclusion
Step6: DataFrame indexing
Step7: Exercise 3.2
PMFs can be used to calculate probability
Step8: Exercise 3.3
Step9: Exercise 3.4
Step10: Exercise 3.4
Write a function called ObservedPmf that takes a Pmf representing the actual distribution of runners' speeds and the speed of the running observer and returns a new PMF representing the distribution of runner's speeds as seen by the observer. | Python Code:
import thinkstats2
pmf = thinkstats2.Pmf([1,2,2,3,5])
#getting pmf values
print pmf.Items()
print pmf.Values()
print pmf.Prob(2)
print pmf[2]
#modifying pmf values
pmf.Incr(2, 0.2)
print pmf.Prob(2)
pmf.Mult(2, 0.5)
print pmf.Prob(2)
#if you modify, probabilities may no longer add up to 1
#to check:
print pmf.Total()
print pmf.Normalize()
print pmf.Total()
#Copy method is also available
Explanation: probability mass function - maps each value to its probability. Allows you to compare two distributions independently of sample size.
probability - frequency expressed as a fraction of the sample size, n.
normalization - dividing frequencies by n.
given a Hist, we can make a dictionary that maps each value to its probability:
n = hist.Total()
d = {}
for x, freq in hist.Items():
d[x] = freq/n
End of explanation
from probability import *
live, firsts, others = first.MakeFrames()
first_pmf = thinkstats2.Pmf(firsts.prglngth, label="firsts")
other_pmf = thinkstats2.Pmf(others.prglngth, label="others")
width = 0.45
#cols option makes grid of figures.
thinkplot.PrePlot(2, cols=2)
thinkplot.Hist(first_pmf, align='right', width=width)
thinkplot.Hist(other_pmf, align='left', width=width)
thinkplot.Config(xlabel='weeks',
ylabel='probability',
axis=[27,46,0,0.6])
#second call to preplot resets the color generator
thinkplot.PrePlot(2)
thinkplot.SubPlot(2)
thinkplot.Pmfs([first_pmf, other_pmf])
thinkplot.Config(xlabel='weeks',
ylabel='probability',
axis=[27,46,0,0.6])
thinkplot.Show()
Explanation: To plot a PMF:
* bargraph using thinkplot.Hist
* as a step function: thinkplot.Pmf -- for use with a large number of smooth values.
End of explanation
weeks = range(35, 46)
diffs = []
for week in weeks:
p1 = first_pmf.Prob(week)
p2 = other_pmf.Prob(week)
#diff between two points in percentage points
diff = 100 * (p1 - p2)
diffs.append(diff)
thinkplot.Bar(weeks, diffs)
thinkplot.Config(title="Difference in PMFs",
xlabel="weeks",
ylabel="percentage points")
thinkplot.Show()
Explanation: Good idea to zoom in on the mode, where the biggest differences occur:
End of explanation
d = {7:8, 12:8, 17:14, 22:4, 27:6,
32:12, 37:8, 42:3, 47:2}
pmf = thinkstats2.Pmf(d, label='actual')
print ('mean', pmf.Mean())
Explanation: Class Size Paradox
End of explanation
def BiasPmf(pmf, label):
new_pmf = pmf.Copy(label=label)
for x, p in pmf.Items():
new_pmf.Mult(x, x)
new_pmf.Normalize()
return new_pmf
thinkplot.PrePlot(2)
biased_pmf = BiasPmf(pmf, label="observed")
thinkplot.Pmfs([pmf, biased_pmf])
thinkplot.Config(root='class_size1',
xlabel='class size',
ylabel='PMF',
axis=[0, 52, 0, 0.27])
# thinkplot.Show()
print "actual mean", pmf.Mean()
print "biased mean", biased_pmf.Mean()
Explanation: For each class size, x, in the following function, we multiply the probability by x, the number of students who observe that class size. This gives a biased distribution
End of explanation
def UnbiasPmf(pmf, label):
new_pmf = pmf.Copy(label=label)
for x, p in pmf.Items():
new_pmf.Mult(x, 1.0 / x)
new_pmf.Normalize()
return new_pmf
print 'unbiased mean:', UnbiasPmf(biased_pmf, "unbiased").Mean()
Explanation: Conclusion: the sample of students is biased because large classes contain more students, so a randomly surveyed student is more likely to belong to a large class, which pushes the reported average class size above the actual one.
Think of it this way: if you had one class of each size from 1 to 10, the average class size would be 5.5, but far more students would report being in a larger class than in a smaller one.
This can be corrected, however, by weighting each class size by 1/x, as UnbiasPmf does; a quick numeric check of the 1-to-10 example follows.
End of explanation
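A quick check of the 1-to-10 thought experiment above (hypothetical class sizes, not survey data), written in the same Python 2 style as the rest of this notebook:
sizes = range(1, 11)                                        # one class of each size 1..10
actual_mean = float(sum(sizes)) / len(sizes)                # 5.5
biased_mean = float(sum(x * x for x in sizes)) / sum(sizes) # 385 / 55 = 7.0
print 'actual mean', actual_mean
print 'biased mean', biased_mean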
import numpy as np
import pandas
array = np.random.randn(4,2)
df = pandas.DataFrame(array)
df
columns = ['A','B']
df = pandas.DataFrame(array, columns=columns)
df
index = ['a','b','c','d']
df = pandas.DataFrame(array, columns=columns, index=index)
df
#to select a row by label, use loc,
#which returns a series
df.loc['a']
#iloc finds a row by integer position of the row
df.iloc[0]
#loc can also take a list of labels
#in this case it returns a df
indices = ['a','c']
df.loc[indices]
#slicing
#NOTE: first slice method selects inclusively
print df['a':'c']
df[0:2]
Explanation: DataFrame indexing:
End of explanation
def PmfMean(pmf):
mean = 0
for key, prob in pmf.Items():
mean += key * prob
return mean
def PmfVar(pmf):
mean = PmfMean(pmf)
var = 0
for key, prob in pmf.Items():
var += prob * (key - mean) ** 2
return var
print "my Mean:", PmfMean(pmf)
print "answer mean:", pmf.Mean()
print "my Variance:", PmfVar(pmf)
print "answer variance:", pmf.Var()
Explanation: Exercise 3.2
PMFs can be used to calculate probability:
$$
\bar{x} = \sum_{i}p_ix_i
$$
where $x_i$ are the unique values in the PMF and $p_i = PMF(x_i)$
Variance can also be calulated:
$$
S^2 = \sum_{i}p_i(x_i -\bar{x})^2
$$
Write functions PmfMean and PmfVar that take a Pmf object and compute the mean and variance.
End of explanation
df = nsfg.ReadFemPreg()
pregMap = nsfg.MakePregMap(df[df.outcome==1])
lengthDiffs = []
for caseid, pregList in pregMap.iteritems():
first = df[df.index==pregList[0]].prglngth
first = int(first)
for idx in pregList[1:]:
other = df[df.index==idx].prglngth
other = int(other)
diff = first - other
lengthDiffs.append(diff)
diffHist = thinkstats2.Hist(lengthDiffs)
print diffHist
diffPmf = thinkstats2.Pmf(lengthDiffs)
thinkplot.PrePlot(2, cols=2)
thinkplot.SubPlot(1)
thinkplot.Hist(diffHist, label='')
thinkplot.Config(title="Differences (weeks) between first baby and other babies \n born to same mother",
xlabel = 'first_preg_lngth - other_preg_lngth (weeks)',
ylabel = 'freq')
thinkplot.SubPlot(2)
thinkplot.Hist(diffPmf, label='')
thinkplot.Config(title="Differences (weeks) between first baby and other babies \n born to same mother",
xlabel = 'first_preg_lngth - other_preg_lngth (weeks)',
ylabel = 'freq')
thinkplot.Show()
Explanation: Exercise 3.3
End of explanation
pwDiff = defaultdict(list)
for caseid, pregList in pregMap.iteritems():
first = df[df.index==pregList[0]].prglngth
first = int(first)
for i,idx in enumerate(pregList[1:]):
other = df[df.index==idx].prglngth
other = int(other)
diff = first - other
pwDiff[i + 1].append(diff)
pmf_s = []
for i in range(1,6):
diff_pmf = thinkstats2.Pmf(pwDiff[i + 1], label='diff to kid num %d' % i)
pmf_s.append(diff_pmf)
thinkplot.Pmfs(pmf_s)
thinkplot.Config(axis=[-10,10,0,1])
thinkplot.Show()
Explanation: Exercise 3.4
End of explanation
import relay
def ObservedPmf(pmf, runnerSpeed, label):
new_pmf = pmf.Copy(label=label)
for x,p in pmf.Items():
diff = abs(runnerSpeed - x)
#if runner speed is very large wrt x, likely to pass that runner
#else likely to be passed by that runnner
#not likely to see those in between.
new_pmf.Mult(x, diff)
new_pmf.Normalize()
return new_pmf
results = relay.ReadResults()
speeds = relay.GetSpeeds(results)
speeds = relay.BinData(speeds, 3, 12, 100)
pmf = thinkstats2.Pmf(speeds, 'unbiased speeds')
thinkplot.PrePlot(2)
thinkplot.Pmf(pmf)
biased_pmf = ObservedPmf(pmf, 7.5, 'biased at 7.5 mph')
thinkplot.Pmf(biased_pmf)
thinkplot.Config(title='PMF of running speed',
xlabel='speed (mph)',
ylabel='probability')
thinkplot.Show()
Explanation: Exercise 3.4
Write a function called ObservedPmf that takes a Pmf representing the actual distribution of runners' speeds and the speed of the running observer and returns a new PMF representing the distribution of runner's speeds as seen by the observer.
End of explanation |
2,561 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Decoding (MVPA)
.. include
Step1: Transformation classes
Scaler
The
Step2: PSDEstimator
The
Step3: Source power comodulation (SPoC)
Source Power Comodulation (
Step4: Decoding over time
This strategy consists in fitting a multivariate predictive model on each
time instant and evaluating its performance at the same instant on new
epochs. The
Step5: You can retrieve the spatial filters and spatial patterns if you explicitly
use a LinearModel
Step6: Temporal generalization
Temporal generalization is an extension of the decoding over time approach.
It consists in evaluating whether the model estimated at a particular
time instant accurately predicts any other time instant. It is analogous to
transferring a trained model to a distinct learning problem, where the
problems correspond to decoding the patterns of brain activity recorded at
distinct time instants.
The object for Temporal generalization is
Step7: Plot the full (generalization) matrix
Step8: Projecting sensor-space patterns to source space
If you use a linear classifier (or regressor) for your data, you can also
project these to source space. For example, using our evoked_time_gen
from before
Step9: And this can be visualized using | Python Code:
import numpy as np
import matplotlib.pyplot as plt
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
import mne
from mne.datasets import sample
from mne.decoding import (SlidingEstimator, GeneralizingEstimator, Scaler,
cross_val_multiscore, LinearModel, get_coef,
Vectorizer, CSP)
data_path = sample.data_path()
subjects_dir = data_path / 'subjects'
meg_path = data_path / 'MEG' / 'sample'
raw_fname = meg_path / 'sample_audvis_filt-0-40_raw.fif'
tmin, tmax = -0.200, 0.500
event_id = {'Auditory/Left': 1, 'Visual/Left': 3} # just use two
raw = mne.io.read_raw_fif(raw_fname)
raw.pick_types(meg='grad', stim=True, eog=True, exclude=())
# The subsequent decoding analyses only capture evoked responses, so we can
# low-pass the MEG data. Usually a value more like 40 Hz would be used,
# but here low-pass at 20 so we can more heavily decimate, and allow
# the example to run faster. The 2 Hz high-pass helps improve CSP.
raw.load_data().filter(2, 20)
events = mne.find_events(raw, 'STI 014')
# Set up bad channels (modify to your needs)
raw.info['bads'] += ['MEG 2443'] # bads + 2 more
# Read epochs
epochs = mne.Epochs(raw, events, event_id, tmin, tmax, proj=True,
picks=('grad', 'eog'), baseline=(None, 0.), preload=True,
reject=dict(grad=4000e-13, eog=150e-6), decim=3,
verbose='error')
epochs.pick_types(meg=True, exclude='bads') # remove stim and EOG
del raw
X = epochs.get_data() # MEG signals: n_epochs, n_meg_channels, n_times
y = epochs.events[:, 2] # target: auditory left vs visual left
Explanation: Decoding (MVPA)
.. include:: ../../links.inc
Design philosophy
Decoding (a.k.a. MVPA) in MNE largely follows the machine
learning API of the scikit-learn package.
Each estimator implements fit, transform, fit_transform, and
(optionally) inverse_transform methods. For more details on this design,
visit scikit-learn_. For additional theoretical insights into the decoding
framework in MNE, see :footcite:KingEtAl2018.
For ease of comprehension, we will denote instantiations of the class using
the same name as the class but in small caps instead of camel cases.
Let's start by loading data for a simple two-class problem:
sphinx_gallery_thumbnail_number = 6
End of explanation
# Uses all MEG sensors and time points as separate classification
# features, so the resulting filters used are spatio-temporal
clf = make_pipeline(
Scaler(epochs.info),
Vectorizer(),
LogisticRegression(solver='liblinear') # liblinear is faster than lbfgs
)
scores = cross_val_multiscore(clf, X, y, cv=5, n_jobs=None)
# Mean scores across cross-validation splits
score = np.mean(scores, axis=0)
print('Spatio-temporal: %0.1f%%' % (100 * score,))
Explanation: Transformation classes
Scaler
The :class:mne.decoding.Scaler will standardize the data based on channel
scales. In the simplest modes scalings=None or scalings=dict(...),
each data channel type (e.g., mag, grad, eeg) is treated separately and
scaled by a constant. This is the approach used by e.g.,
:func:mne.compute_covariance to standardize channel scales.
If scalings='mean' or scalings='median', each channel is scaled using
empirical measures. Each channel is scaled independently by the mean and
standard deviation, or median and interquartile range, respectively, across
all epochs and time points during :class:~mne.decoding.Scaler.fit
(during training). The :meth:~mne.decoding.Scaler.transform method is
called to transform data (training or test set) by scaling all time points
and epochs on a channel-by-channel basis. To perform both the fit and
transform operations in a single call, the
:meth:~mne.decoding.Scaler.fit_transform method may be used. To invert the
transform, :meth:~mne.decoding.Scaler.inverse_transform can be used. For
scalings='median', scikit-learn_ version 0.17+ is required.
<div class="alert alert-info"><h4>Note</h4><p>Using this class is different from directly applying
:class:`sklearn.preprocessing.StandardScaler` or
:class:`sklearn.preprocessing.RobustScaler` offered by
scikit-learn_. These scale each *classification feature*, e.g.
each time point for each channel, with mean and standard
deviation computed across epochs, whereas
:class:`mne.decoding.Scaler` scales each *channel* using mean and
standard deviation computed across all of its time points
and epochs.</p></div>
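As an illustrative sketch only (not part of the original tutorial), the pipeline shown earlier in this section could instead use the empirical per-channel mode described above; scalings='mean' does not need an info object:
clf_scaled = make_pipeline(
    Scaler(scalings='mean'),               # per-channel mean/std across all epochs and times
    Vectorizer(),
    LogisticRegression(solver='liblinear')
)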
Vectorizer
Scikit-learn API provides functionality to chain transformers and estimators
by using :class:sklearn.pipeline.Pipeline. We can construct decoding
pipelines and perform cross-validation and grid-search. However scikit-learn
transformers and estimators generally expect 2D data
(n_samples * n_features), whereas MNE transformers typically output data
with a higher dimensionality
(e.g. n_samples * n_channels * n_frequencies * n_times). A Vectorizer
therefore needs to be applied between the MNE and the scikit-learn steps
like:
End of explanation
csp = CSP(n_components=3, norm_trace=False)
clf_csp = make_pipeline(
csp,
LinearModel(LogisticRegression(solver='liblinear'))
)
scores = cross_val_multiscore(clf_csp, X, y, cv=5, n_jobs=None)
print('CSP: %0.1f%%' % (100 * scores.mean(),))
Explanation: PSDEstimator
The :class:mne.decoding.PSDEstimator
computes the power spectral density (PSD) using the multitaper
method. It takes a 3D array as input, converts it into 2D and computes the
PSD.
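A minimal sketch (not from this tutorial) of a PSD-based pipeline; the frequency band here is an arbitrary illustrative choice:
from mne.decoding import PSDEstimator
psd_clf = make_pipeline(
    PSDEstimator(sfreq=epochs.info['sfreq'], fmin=2., fmax=20.),  # epochs x channels x freqs
    Vectorizer(),                                                 # flatten to 2D for sklearn
    LogisticRegression(solver='liblinear')
)
psd_scores = cross_val_multiscore(psd_clf, X, y, cv=5, n_jobs=None)
print('PSD features: %0.1f%%' % (100 * psd_scores.mean(),))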
FilterEstimator
The :class:mne.decoding.FilterEstimator filters the 3D epochs data.
Spatial filters
Just like temporal filters, spatial filters provide weights to modify the
data along the sensor dimension. They are popular in the BCI community
because of their simplicity and ability to distinguish spatially-separated
neural activity.
Common spatial pattern
:class:mne.decoding.CSP is a technique to analyze multichannel data based
on recordings from two classes :footcite:Koles1991 (see also
https://en.wikipedia.org/wiki/Common_spatial_pattern).
Let $X \in R^{C\times T}$ be a segment of data with
$C$ channels and $T$ time points. The data at a single time point
is denoted by $x(t)$ such that $X=[x(t), x(t+1), ..., x(t+T-1)]$.
Common spatial pattern (CSP) finds a decomposition that projects the signal
in the original sensor space to CSP space using the following transformation:
\begin{align}x_{CSP}(t) = W^{T}x(t)
:label: csp\end{align}
where each column of $W \in R^{C\times C}$ is a spatial filter and each
row of $x_{CSP}$ is a CSP component. The matrix $W$ is also
called the de-mixing matrix in other contexts. Let
$\Sigma^{+} \in R^{C\times C}$ and $\Sigma^{-} \in R^{C\times C}$
be the estimates of the covariance matrices of the two conditions.
CSP analysis is given by the simultaneous diagonalization of the two
covariance matrices
\begin{align}W^{T}\Sigma^{+}W = \lambda^{+}
:label: diagonalize_p\end{align}
\begin{align}W^{T}\Sigma^{-}W = \lambda^{-}
:label: diagonalize_n\end{align}
where $\lambda^{C}$ is a diagonal matrix whose entries are the
eigenvalues of the following generalized eigenvalue problem
\begin{align}\Sigma^{+}w = \lambda \Sigma^{-}w
:label: eigen_problem\end{align}
Large entries in the diagonal matrix correspond to a spatial filter which
gives high variance in one class but low variance in the other. Thus, the
filter facilitates discrimination between the two classes.
.. topic:: Examples
* `ex-decoding-csp-eeg`
* `ex-decoding-csp-eeg-timefreq`
<div class="alert alert-info"><h4>Note</h4><p>The winning entry of the Grasp-and-lift EEG competition in Kaggle used
the :class:`~mne.decoding.CSP` implementation in MNE and was featured as
a [script of the week](sotw_).</p></div>
We can use CSP with these data with:
End of explanation
# Fit CSP on full data and plot
csp.fit(X, y)
csp.plot_patterns(epochs.info)
csp.plot_filters(epochs.info, scalings=1e-9)
Explanation: Source power comodulation (SPoC)
Source Power Comodulation (:class:mne.decoding.SPoC)
:footcite:DahneEtAl2014 identifies the composition of
orthogonal spatial filters that maximally correlate with a continuous target.
SPoC can be seen as an extension of the CSP where the target is driven by a
continuous variable rather than a discrete variable. Typical applications
include extraction of motor patterns using EMG power or audio patterns using
sound envelope.
.. topic:: Examples
* `ex-spoc-cmc`
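A minimal usage sketch (not part of this tutorial): SPoC needs a continuous target, which this dataset does not provide, so a random placeholder target is used purely to show the call signature:
from mne.decoding import SPoC
rng = np.random.RandomState(0)
y_continuous = rng.rand(len(X))          # placeholder continuous target, illustration only
spoc = SPoC(n_components=2)
spoc_features = spoc.fit_transform(X, y_continuous)  # (n_epochs, n_components) band-power features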
xDAWN
:class:mne.preprocessing.Xdawn is a spatial filtering method designed to
improve the signal to signal + noise ratio (SSNR) of the ERP responses
:footcite:RivetEtAl2009. Xdawn was originally
designed for P300 evoked potential by enhancing the target response with
respect to the non-target response. The implementation in MNE-Python is a
generalization to any type of ERP.
.. topic:: Examples
* `ex-xdawn-denoising`
* `ex-xdawn-decoding`
Effect-matched spatial filtering
The result of :class:mne.decoding.EMS is a spatial filter at each time
point and a corresponding time course :footcite:SchurgerEtAl2013.
Intuitively, the result gives the similarity between the filter at
each time point and the data vector (sensors) at that time point.
.. topic:: Examples
* `ex-ems-filtering`
Patterns vs. filters
When interpreting the components of the CSP (or spatial filters in general),
it is often more intuitive to think about how $x(t)$ is composed of
the different CSP components $x_{CSP}(t)$. In other words, we can
rewrite Equation :eq:csp as follows:
\begin{align}x(t) = (W^{-1})^{T}x_{CSP}(t)
:label: patterns\end{align}
The columns of the matrix $(W^{-1})^T$ are called spatial patterns.
This is also called the mixing matrix. The example ex-linear-patterns
discusses the difference between patterns and filters.
These can be plotted with:
End of explanation
# We will train the classifier on all left visual vs auditory trials on MEG
clf = make_pipeline(
StandardScaler(),
LogisticRegression(solver='liblinear')
)
time_decod = SlidingEstimator(
clf, n_jobs=None, scoring='roc_auc', verbose=True)
# here we use cv=3 just for speed
scores = cross_val_multiscore(time_decod, X, y, cv=3, n_jobs=None)
# Mean scores across cross-validation splits
scores = np.mean(scores, axis=0)
# Plot
fig, ax = plt.subplots()
ax.plot(epochs.times, scores, label='score')
ax.axhline(.5, color='k', linestyle='--', label='chance')
ax.set_xlabel('Times')
ax.set_ylabel('AUC') # Area Under the Curve
ax.legend()
ax.axvline(.0, color='k', linestyle='-')
ax.set_title('Sensor space decoding')
Explanation: Decoding over time
This strategy consists in fitting a multivariate predictive model on each
time instant and evaluating its performance at the same instant on new
epochs. The :class:mne.decoding.SlidingEstimator will take as input a
pair of features $X$ and targets $y$, where $X$ has
more than 2 dimensions. For decoding over time the data $X$
is the epochs data of shape n_epochs × n_channels × n_times. As the
last dimension of $X$ is the time, an estimator will be fit
on every time instant.
This approach is analogous to SlidingEstimator-based approaches in fMRI,
where here we are interested in when one can discriminate experimental
conditions and therefore figure out when the effect of interest happens.
When working with linear models as estimators, this approach boils
down to estimating a discriminative spatial filter for each time instant.
Temporal decoding
We'll use a Logistic Regression for a binary classification as machine
learning model.
End of explanation
clf = make_pipeline(
StandardScaler(),
LinearModel(LogisticRegression(solver='liblinear'))
)
time_decod = SlidingEstimator(
clf, n_jobs=None, scoring='roc_auc', verbose=True)
time_decod.fit(X, y)
coef = get_coef(time_decod, 'patterns_', inverse_transform=True)
evoked_time_gen = mne.EvokedArray(coef, epochs.info, tmin=epochs.times[0])
joint_kwargs = dict(ts_args=dict(time_unit='s'),
topomap_args=dict(time_unit='s'))
evoked_time_gen.plot_joint(times=np.arange(0., .500, .100), title='patterns',
**joint_kwargs)
Explanation: You can retrieve the spatial filters and spatial patterns if you explicitly
use a LinearModel
End of explanation
# define the Temporal generalization object
time_gen = GeneralizingEstimator(clf, n_jobs=None, scoring='roc_auc',
verbose=True)
# again, cv=3 just for speed
scores = cross_val_multiscore(time_gen, X, y, cv=3, n_jobs=None)
# Mean scores across cross-validation splits
scores = np.mean(scores, axis=0)
# Plot the diagonal (it's exactly the same as the time-by-time decoding above)
fig, ax = plt.subplots()
ax.plot(epochs.times, np.diag(scores), label='score')
ax.axhline(.5, color='k', linestyle='--', label='chance')
ax.set_xlabel('Times')
ax.set_ylabel('AUC')
ax.legend()
ax.axvline(.0, color='k', linestyle='-')
ax.set_title('Decoding MEG sensors over time')
Explanation: Temporal generalization
Temporal generalization is an extension of the decoding over time approach.
It consists in evaluating whether the model estimated at a particular
time instant accurately predicts any other time instant. It is analogous to
transferring a trained model to a distinct learning problem, where the
problems correspond to decoding the patterns of brain activity recorded at
distinct time instants.
The object for Temporal generalization is
:class:mne.decoding.GeneralizingEstimator. It expects as input $X$
and $y$ (similarly to :class:~mne.decoding.SlidingEstimator) but
generates predictions from each model for all time instants. The class
:class:~mne.decoding.GeneralizingEstimator is generic and will treat the
last dimension as the one to be used for generalization testing. For
convenience, here, we refer to it as different tasks. If $X$
corresponds to epochs data then the last dimension is time.
This runs the analysis used in :footcite:KingEtAl2014 and further detailed
in :footcite:KingDehaene2014:
End of explanation
fig, ax = plt.subplots(1, 1)
im = ax.imshow(scores, interpolation='lanczos', origin='lower', cmap='RdBu_r',
extent=epochs.times[[0, -1, 0, -1]], vmin=0., vmax=1.)
ax.set_xlabel('Testing Time (s)')
ax.set_ylabel('Training Time (s)')
ax.set_title('Temporal generalization')
ax.axvline(0, color='k')
ax.axhline(0, color='k')
cbar = plt.colorbar(im, ax=ax)
cbar.set_label('AUC')
Explanation: Plot the full (generalization) matrix:
End of explanation
cov = mne.compute_covariance(epochs, tmax=0.)
del epochs
fwd = mne.read_forward_solution(
meg_path / 'sample_audvis-meg-eeg-oct-6-fwd.fif')
inv = mne.minimum_norm.make_inverse_operator(
evoked_time_gen.info, fwd, cov, loose=0.)
stc = mne.minimum_norm.apply_inverse(evoked_time_gen, inv, 1. / 9., 'dSPM')
del fwd, inv
Explanation: Projecting sensor-space patterns to source space
If you use a linear classifier (or regressor) for your data, you can also
project these to source space. For example, using our evoked_time_gen
from before:
End of explanation
brain = stc.plot(hemi='split', views=('lat', 'med'), initial_time=0.1,
subjects_dir=subjects_dir)
Explanation: And this can be visualized using :meth:stc.plot <mne.SourceEstimate.plot>:
End of explanation |
2,562 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Python API to BeakerX Interactive Plotting
You can access Beaker's native interactive plotting library from Python.
Plot with simple properties
Python plots have syntax very similar to Groovy plots. Property names are the same.
Step1: Plot items
Lines, Bars, Points and Right yAxis
Step2: Lines, Points with Pandas
Step3: Areas, Stems and Crosshair
Step4: Constant Lines, Constant Bands
Step5: TimePlot
Step6: numpy datetime64
Step7: Timestamp
Step8: Datetime and date
Step9: NanoPlot
Step10: Stacking
Step11: SimpleTime Plot
Step12: Second Y Axis
The plot can have two y-axes. Just add a YAxis to the plot object, and specify its label.
Then for data that should be scaled according to this second axis,
specify the property yAxis with a value that coincides with the label given.
You can use upperMargin and lowerMargin to restrict the range of the data leaving more white, perhaps for the data on the other axis.
Step13: Combined Plot | Python Code:
from beakerx import *
import pandas as pd
tableRows = pd.read_csv('../resources/data/interest-rates.csv')
Plot(title="Title",
xLabel="Horizontal",
yLabel="Vertical",
initWidth=500,
initHeight=200)
Explanation: Python API to BeakerX Interactive Plotting
You can access Beaker's native interactive plotting library from Python.
Plot with simple properties
Python plots have syntax very similar to Groovy plots. Property names are the same.
End of explanation
x = [1, 4, 6, 8, 10]
y = [3, 6, 4, 5, 9]
pp = Plot(title='Bars, Lines, Points and 2nd yAxis',
xLabel="xLabel",
yLabel="yLabel",
legendLayout=LegendLayout.HORIZONTAL,
legendPosition=LegendPosition.RIGHT,
omitCheckboxes=True)
pp.add(YAxis(label="Right yAxis"))
pp.add(Bars(displayName="Bar",
x=[1,3,5,7,10],
y=[100, 120,90,100,80],
width=1))
pp.add(Line(displayName="Line",
x=x,
y=y,
width=6,
yAxis="Right yAxis"))
pp.add(Points(x=x,
y=y,
size=10,
shape=ShapeType.DIAMOND,
yAxis="Right yAxis"))
plot = Plot(title= "Setting line properties")
ys = [0, 1, 6, 5, 2, 8]
ys2 = [0, 2, 7, 6, 3, 8]
plot.add(Line(y= ys, width= 10, color= Color.red))
plot.add(Line(y= ys, width= 3, color= Color.yellow))
plot.add(Line(y= ys, width= 4, color= Color(33, 87, 141), style= StrokeType.DASH, interpolation= 0))
plot.add(Line(y= ys2, width= 2, color= Color(212, 57, 59), style= StrokeType.DOT))
plot.add(Line(y= [5, 0], x= [0, 5], style= StrokeType.LONGDASH))
plot.add(Line(y= [4, 0], x= [0, 5], style= StrokeType.DASHDOT))
plot = Plot(title= "Changing Point Size, Color, Shape")
y1 = [6, 7, 12, 11, 8, 14]
y2 = [4, 5, 10, 9, 6, 12]
y3 = [2, 3, 8, 7, 4, 10]
y4 = [0, 1, 6, 5, 2, 8]
plot.add(Points(y= y1))
plot.add(Points(y= y2, shape= ShapeType.CIRCLE))
plot.add(Points(y= y3, size= 8.0, shape= ShapeType.DIAMOND))
plot.add(Points(y= y4, size= 12.0, color= Color.orange, outlineColor= Color.red))
plot = Plot(title= "Changing point properties with list")
cs = [Color.black, Color.red, Color.orange, Color.green, Color.blue, Color.pink]
ss = [6.0, 9.0, 12.0, 15.0, 18.0, 21.0]
fs = [False, False, False, True, False, False]
plot.add(Points(y= [5] * 6, size= 12.0, color= cs))
plot.add(Points(y= [4] * 6, size= 12.0, color= Color.gray, outlineColor= cs))
plot.add(Points(y= [3] * 6, size= ss, color= Color.red))
plot.add(Points(y= [2] * 6, size= 12.0, color= Color.black, fill= fs, outlineColor= Color.black))
plot = Plot()
y1 = [1.5, 1, 6, 5, 2, 8]
cs = [Color.black, Color.red, Color.gray, Color.green, Color.blue, Color.pink]
ss = [StrokeType.SOLID, StrokeType.SOLID, StrokeType.DASH, StrokeType.DOT, StrokeType.DASHDOT, StrokeType.LONGDASH]
plot.add(Stems(y= y1, color= cs, style= ss, width= 5))
plot = Plot(title= "Setting the base of Stems")
ys = [3, 5, 2, 3, 7]
y2s = [2.5, -1.0, 3.5, 2.0, 3.0]
plot.add(Stems(y= ys, width= 2, base= y2s))
plot.add(Points(y= ys))
plot = Plot(title= "Bars")
cs = [Color(255, 0, 0, 128)] * 5 # transparent bars
cs[3] = Color.red # set color of a single bar, solid colored bar
plot.add(Bars(x= [1, 2, 3, 4, 5], y= [3, 5, 2, 3, 7], color= cs, outlineColor= Color.black, width= 0.3))
Explanation: Plot items
Lines, Bars, Points and Right yAxis
End of explanation
plot = Plot(title= "Pandas line")
plot.add(Line(y= tableRows.y1, width= 2, color= Color(216, 154, 54)))
plot.add(Line(y= tableRows.y10, width= 2, color= Color.lightGray))
plot
plot = Plot(title= "Pandas Series")
plot.add(Line(y= pd.Series([0, 6, 1, 5, 2, 4, 3]), width=2))
plot = Plot(title= "Bars")
cs = [Color(255, 0, 0, 128)] * 7 # transparent bars
cs[3] = Color.red # set color of a single bar, solid colored bar
plot.add(Bars(pd.Series([0, 6, 1, 5, 2, 4, 3]), color= cs, outlineColor= Color.black, width= 0.3))
Explanation: Lines, Points with Pandas
End of explanation
ch = Crosshair(color=Color.black, width=2, style=StrokeType.DOT)
plot = Plot(crosshair=ch)
y1 = [4, 8, 16, 20, 32]
base = [2, 4, 8, 10, 16]
cs = [Color.black, Color.orange, Color.gray, Color.yellow, Color.pink]
ss = [StrokeType.SOLID,
StrokeType.SOLID,
StrokeType.DASH,
StrokeType.DOT,
StrokeType.DASHDOT,
StrokeType.LONGDASH]
plot.add(Area(y=y1, base=base, color=Color(255, 0, 0, 50)))
plot.add(Stems(y=y1, base=base, color=cs, style=ss, width=5))
plot = Plot()
y = [3, 5, 2, 3]
x0 = [0, 1, 2, 3]
x1 = [3, 4, 5, 8]
plot.add(Area(x= x0, y= y))
plot.add(Area(x= x1, y= y, color= Color(128, 128, 128, 50), interpolation= 0))
p = Plot()
p.add(Line(y= [3, 6, 12, 24], displayName= "Median"))
p.add(Area(y= [4, 8, 16, 32], base= [2, 4, 8, 16],
color= Color(255, 0, 0, 50), displayName= "Q1 to Q3"))
ch = Crosshair(color= Color(255, 128, 5), width= 2, style= StrokeType.DOT)
pp = Plot(crosshair= ch, omitCheckboxes= True,
legendLayout= LegendLayout.HORIZONTAL, legendPosition= LegendPosition.TOP)
x = [1, 4, 6, 8, 10]
y = [3, 6, 4, 5, 9]
pp.add(Line(displayName= "Line", x= x, y= y, width= 3))
pp.add(Bars(displayName= "Bar", x= [1, 2, 3, 4, 5, 6, 7, 8, 9, 10], y= [2, 2, 4, 4, 2, 2, 0, 2, 2, 4], width= 0.5))
pp.add(Points(x= x, y= y, size= 10))
Explanation: Areas, Stems and Crosshair
End of explanation
p = Plot ()
p.add(Line(y=[-1, 1]))
p.add(ConstantLine(x=0.65, style=StrokeType.DOT, color=Color.blue))
p.add(ConstantLine(y=0.1, style=StrokeType.DASHDOT, color=Color.blue))
p.add(ConstantLine(x=0.3, y=0.4, color=Color.gray, width=5, showLabel=True))
Plot().add(Line(y=[-3, 1, 3, 4, 5])).add(ConstantBand(x=[1, 2], y=[1, 3]))
p = Plot()
p.add(Line(x= [-3, 1, 2, 4, 5], y= [4, 2, 6, 1, 5]))
p.add(ConstantBand(x= ['-Infinity', 1], color= Color(128, 128, 128, 50)))
p.add(ConstantBand(x= [1, 2]))
p.add(ConstantBand(x= [4, 'Infinity']))
from decimal import Decimal
pos_inf = Decimal('Infinity')
neg_inf = Decimal('-Infinity')
print (pos_inf)
print (neg_inf)
from beakerx.plot import Text as BeakerxText
plot = Plot()
xs = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
ys = [8.6, 6.1, 7.4, 2.5, 0.4, 0.0, 0.5, 1.7, 8.4, 1]
def label(i):
if ys[i] > ys[i+1] and ys[i] > ys[i-1]:
return "max"
if ys[i] < ys[i+1] and ys[i] < ys[i-1]:
return "min"
if ys[i] > ys[i-1]:
return "rising"
if ys[i] < ys[i-1]:
return "falling"
return ""
for i in xs:
i = i - 1
if i > 0 and i < len(xs)-1:
plot.add(BeakerxText(x= xs[i], y= ys[i], text= label(i), pointerAngle= -i/3.0))
plot.add(Line(x= xs, y= ys))
plot.add(Points(x= xs, y= ys))
plot = Plot(title= "Setting 2nd Axis bounds")
ys = [0, 2, 4, 6, 15, 10]
ys2 = [-40, 50, 6, 4, 2, 0]
ys3 = [3, 6, 3, 6, 70, 6]
plot.add(YAxis(label="Spread"))
plot.add(Line(y= ys))
plot.add(Line(y= ys2, yAxis="Spread"))
plot.setXBound([-2, 10])
#plot.setYBound(1, 5)
plot.getYAxes()[0].setBound(1,5)
plot.getYAxes()[1].setBound(3,6)
plot
plot = Plot(title= "Setting 2nd Axis bounds")
ys = [0, 2, 4, 6, 15, 10]
ys2 = [-40, 50, 6, 4, 2, 0]
ys3 = [3, 6, 3, 6, 70, 6]
plot.add(YAxis(label="Spread"))
plot.add(Line(y= ys))
plot.add(Line(y= ys2, yAxis="Spread"))
plot.setXBound([-2, 10])
plot.setYBound(1, 5)
plot
Explanation: Constant Lines, Constant Bands
End of explanation
import time
millis = current_milli_time()
hour = round(1000 * 60 * 60)
xs = []
ys = []
for i in range(11):
xs.append(millis + hour * i)
ys.append(i)
plot = TimePlot(timeZone="America/New_York")
# list of milliseconds
plot.add(Points(x=xs, y=ys, size=10, displayName="milliseconds"))
plot = TimePlot()
plot.add(Line(x=tableRows['time'], y=tableRows['m3']))
Explanation: TimePlot
End of explanation
y = pd.Series([7.5, 7.9, 7, 8.7, 8, 8.5])
dates = [np.datetime64('2015-02-01'),
np.datetime64('2015-02-02'),
np.datetime64('2015-02-03'),
np.datetime64('2015-02-04'),
np.datetime64('2015-02-05'),
np.datetime64('2015-02-06')]
plot = TimePlot()
plot.add(Line(x=dates, y=y))
Explanation: numpy datetime64
End of explanation
y = pd.Series([7.5, 7.9, 7, 8.7, 8, 8.5])
dates = pd.Series(['2015-02-01',
'2015-02-02',
'2015-02-03',
'2015-02-04',
'2015-02-05',
'2015-02-06']
, dtype='datetime64[ns]')
plot = TimePlot()
plot.add(Line(x=dates, y=y))
Explanation: Timestamp
End of explanation
import datetime
y = pd.Series([7.5, 7.9, 7, 8.7, 8, 8.5])
dates = [datetime.date(2015, 2, 1),
datetime.date(2015, 2, 2),
datetime.date(2015, 2, 3),
datetime.date(2015, 2, 4),
datetime.date(2015, 2, 5),
datetime.date(2015, 2, 6)]
plot = TimePlot()
plot.add(Line(x=dates, y=y))
import datetime
y = pd.Series([7.5, 7.9, 7, 8.7, 8, 8.5])
dates = [datetime.datetime(2015, 2, 1),
datetime.datetime(2015, 2, 2),
datetime.datetime(2015, 2, 3),
datetime.datetime(2015, 2, 4),
datetime.datetime(2015, 2, 5),
datetime.datetime(2015, 2, 6)]
plot = TimePlot()
plot.add(Line(x=dates, y=y))
Explanation: Datetime and date
End of explanation
millis = current_milli_time()
nanos = millis * 1000 * 1000
xs = []
ys = []
for i in range(11):
xs.append(nanos + 7 * i)
ys.append(i)
nanoplot = NanoPlot()
nanoplot.add(Points(x=xs, y=ys))
Explanation: NanoPlot
End of explanation
y1 = [1,5,3,2,3]
y2 = [7,2,4,1,3]
p = Plot(title='Plot with XYStacker', initHeight=200)
a1 = Area(y=y1, displayName='y1')
a2 = Area(y=y2, displayName='y2')
stacker = XYStacker()
p.add(stacker.stack([a1, a2]))
Explanation: Stacking
End of explanation
SimpleTimePlot(tableRows, ["y1", "y10"], # column names
timeColumn="time", # time is default value for a timeColumn
yLabel="Price",
displayNames=["1 Year", "10 Year"],
colors = [[216, 154, 54], Color.lightGray],
displayLines=True, # no lines (true by default)
displayPoints=False) # show points (false by default))
#time column base on DataFrame index
tableRows.index = tableRows['time']
SimpleTimePlot(tableRows, ['m3'])
rng = pd.date_range('1/1/2011', periods=72, freq='H')
ts = pd.Series(np.random.randn(len(rng)), index=rng)
df = pd.DataFrame(ts, columns=['y'])
SimpleTimePlot(df, ['y'])
Explanation: SimpleTime Plot
End of explanation
p = TimePlot(xLabel= "Time", yLabel= "Interest Rates")
p.add(YAxis(label= "Spread", upperMargin= 4))
p.add(Area(x= tableRows.time, y= tableRows.spread, displayName= "Spread",
yAxis= "Spread", color= Color(180, 50, 50, 128)))
p.add(Line(x= tableRows.time, y= tableRows.m3, displayName= "3 Month"))
p.add(Line(x= tableRows.time, y= tableRows.y10, displayName= "10 Year"))
Explanation: Second Y Axis
The plot can have two y-axes. Just add a YAxis to the plot object, and specify its label.
Then for data that should be scaled according to this second axis,
specify the property yAxis with a value that coincides with the label given.
You can use upperMargin and lowerMargin to restrict the range of the data leaving more white, perhaps for the data on the other axis.
End of explanation
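By analogy with the upperMargin argument used in the cell above, lowerMargin is assumed to be accepted the same way (the text mentions both); a small illustrative variant, not part of the original tutorial:
p2 = TimePlot(xLabel="Time", yLabel="Interest Rates")
p2.add(YAxis(label="Spread", upperMargin=4, lowerMargin=1))  # extra white space above and below the data
p2.add(Line(x=tableRows.time, y=tableRows.spread, displayName="Spread", yAxis="Spread"))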
import math
points = 100
logBase = 10
expys = []
xs = []
for i in range(0, points):
xs.append(i / 15.0)
expys.append(math.exp(xs[i]))
cplot = CombinedPlot(xLabel= "Linear")
logYPlot = Plot(title= "Linear x, Log y", yLabel= "Log", logY= True, yLogBase= logBase)
logYPlot.add(Line(x= xs, y= expys, displayName= "f(x) = exp(x)"))
logYPlot.add(Line(x= xs, y= xs, displayName= "g(x) = x"))
cplot.add(logYPlot, 4)
linearYPlot = Plot(title= "Linear x, Linear y", yLabel= "Linear")
linearYPlot.add(Line(x= xs, y= expys, displayName= "f(x) = exp(x)"))
linearYPlot.add(Line(x= xs, y= xs, displayName= "g(x) = x"))
cplot.add(linearYPlot,4)
cplot
plot = Plot(title= "Log x, Log y", xLabel= "Log", yLabel= "Log",
logX= True, xLogBase= logBase, logY= True, yLogBase= logBase)
plot.add(Line(x= xs, y= expys, displayName= "f(x) = exp(x)"))
plot.add(Line(x= xs, y= xs, displayName= "f(x) = x"))
plot
Explanation: Combined Plot
End of explanation |
2,563 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2019 The Sonnet Authors. All Rights Reserved.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http
Step1: Finally lets take a quick look at the GPUs we have available
Step2: Distribution strategy
We need a strategy to distribute our computation across several devices. Since Google Colab only provides a single GPU we'll split it into four virtual GPUs
Step3: When using Sonnet optimizers, we must use either Replicator or TpuReplicator from snt.distribute, or we can use tf.distribute.OneDeviceStrategy. Replicator is equivalent to MirroredStrategy and TpuReplicator is equivalent to TPUStrategy.
Step4: Dataset
Basically the same as the MNIST example, but this time we're using CIFAR-10. CIFAR-10 contains 32x32 pixel color images in 10 different classes (airplanes, cars, birds, cats, deer, dogs, frogs, horses, ships, and trucks).
Step5: Model & Optimizer
Conveniently, there is a pre-built model in snt.nets designed specifically for this dataset.
We must build our model and optimizer within the strategy scope, to ensure that any variables created are distributed correctly. Alternatively, we could enter the scope for the entire program using tf.distribute.experimental_set_strategy.
Step8: Training the model
The Sonnet optimizers are designed to be as clean and simple as possible. They do not contain any code to deal with distributed execution. It therefore requires a few additional lines of code.
We must aggregate the gradients calculated on the different devices. This can be done using ReplicaContext.all_reduce.
Note that when using Replicator / TpuReplicator it is the user's responsibility to ensure that the values remain identical in all replicas.
Step10: Evaluating the model
Note the use of the axis parameter with strategy.reduce to reduce across the batch dimension. | Python Code:
import sys
assert sys.version_info >= (3, 6), "Sonnet 2 requires Python >=3.6"
!pip install dm-sonnet tqdm
import sonnet as snt
import tensorflow as tf
import tensorflow_datasets as tfds
print("TensorFlow version: {}".format(tf.__version__))
print(" Sonnet version: {}".format(snt.__version__))
Explanation: Copyright 2019 The Sonnet Authors. All Rights Reserved.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
Introduction
This tutorial assumes you have already completed (and understood!) the Sonnet 2 "Hello, world!" example (MLP on MNIST).
In this tutorial, we're going to scale things up with a bigger model and bigger dataset, and we're going to distribute the computation across multiple devices.
Preamble
End of explanation
!grep Model: /proc/driver/nvidia/gpus/*/information | awk '{$1="";print$0}'
Explanation: Finally lets take a quick look at the GPUs we have available:
End of explanation
physical_gpus = tf.config.experimental.list_physical_devices("GPU")
physical_gpus
tf.config.experimental.set_virtual_device_configuration(
physical_gpus[0],
[tf.config.experimental.VirtualDeviceConfiguration(memory_limit=2000)] * 4
)
gpus = tf.config.experimental.list_logical_devices("GPU")
gpus
Explanation: Distribution strategy
We need a strategy to distribute our computation across several devices. Since Google Colab only provides a single GPU we'll split it into four virtual GPUs:
End of explanation
strategy = snt.distribute.Replicator(
["/device:GPU:{}".format(i) for i in range(4)],
tf.distribute.ReductionToOneDevice("GPU:0"))
Explanation: When using Sonnet optimizers, we must use either Replicator or TpuReplicator from snt.distribute, or we can use tf.distribute.OneDeviceStrategy. Replicator is equivalent to MirroredStrategy and TpuReplicator is equivalent to TPUStrategy.
End of explanation
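For completeness, the single-device alternative mentioned above would look like the commented line below; it is left commented out because the rest of this notebook relies on the Replicator strategy just created:
# strategy = tf.distribute.OneDeviceStrategy("/device:GPU:0")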
# NOTE: This is the batch size across all GPUs.
batch_size = 100 * 4
def process_batch(images, labels):
images = tf.cast(images, dtype=tf.float32)
images = ((images / 255.) - .5) * 2.
return images, labels
def cifar10(split):
dataset = tfds.load("cifar10", split=split, as_supervised=True)
dataset = dataset.map(process_batch)
dataset = dataset.batch(batch_size)
dataset = dataset.prefetch(tf.data.experimental.AUTOTUNE)
dataset = dataset.cache()
return dataset
cifar10_train = cifar10("train").shuffle(10)
cifar10_test = cifar10("test")
Explanation: Dataset
Basically the same as the MNIST example, but this time we're using CIFAR-10. CIFAR-10 contains 32x32 pixel color images in 10 different classes (airplanes, cars, birds, cats, deer, dogs, frogs, horses, ships, and trucks).
End of explanation
learning_rate = 0.1
with strategy.scope():
model = snt.nets.Cifar10ConvNet()
optimizer = snt.optimizers.Momentum(learning_rate, 0.9)
Explanation: Model & Optimizer
Conveniently, there is a pre-built model in snt.nets designed specifically for this dataset.
We must build our model and optimizer within the strategy scope, to ensure that any variables created are distributed correctly. Alternatively, we could enter the scope for the entire program using tf.distribute.experimental_set_strategy.
End of explanation
def step(images, labels):
Performs a single training step, returning the cross-entropy loss.
with tf.GradientTape() as tape:
logits = model(images, is_training=True)["logits"]
loss = tf.reduce_mean(
tf.nn.sparse_softmax_cross_entropy_with_logits(labels=labels,
logits=logits))
grads = tape.gradient(loss, model.trainable_variables)
# Aggregate the gradients from the full batch.
replica_ctx = tf.distribute.get_replica_context()
grads = replica_ctx.all_reduce("mean", grads)
optimizer.apply(grads, model.trainable_variables)
return loss
@tf.function
def train_step(images, labels):
per_replica_loss = strategy.run(step, args=(images, labels))
return strategy.reduce("sum", per_replica_loss, axis=None)
def train_epoch(dataset):
Performs one epoch of training, returning the mean cross-entropy loss.
total_loss = 0.0
num_batches = 0
# Loop over the entire training set.
for images, labels in dataset:
total_loss += train_step(images, labels).numpy()
num_batches += 1
return total_loss / num_batches
cifar10_train_dist = strategy.experimental_distribute_dataset(cifar10_train)
for epoch in range(20):
print("Training epoch", epoch, "...", end=" ")
print("loss :=", train_epoch(cifar10_train_dist))
Explanation: Training the model
The Sonnet optimizers are designed to be as clean and simple as possible. They do not contain any code to deal with distributed execution. It therefore requires a few additional lines of code.
We must aggregate the gradients calculated on the different devices. This can be done using ReplicaContext.all_reduce.
Note that when using Replicator / TpuReplicator it is the user's responsibility to ensure that the values remain identical in all replicas.
End of explanation
num_cifar10_test_examples = 10000
def is_predicted(images, labels):
logits = model(images, is_training=False)["logits"]
# The reduction over the batch happens in `strategy.reduce`, below.
return tf.cast(tf.equal(labels, tf.argmax(logits, axis=1)), dtype=tf.int32)
cifar10_test_dist = strategy.experimental_distribute_dataset(cifar10_test)
@tf.function
def evaluate():
Returns the top-1 accuracy over the entire test set.
total_correct = 0
for images, labels in cifar10_test_dist:
per_replica_correct = strategy.run(is_predicted, args=(images, labels))
total_correct += strategy.reduce("sum", per_replica_correct, axis=0)
return tf.cast(total_correct, tf.float32) / num_cifar10_test_examples
print("Testing...", end=" ")
print("top-1 accuracy =", evaluate().numpy())
Explanation: Evaluating the model
Note the use of the axis parameter with strategy.reduce to reduce across the batch dimension.
End of explanation |
2,564 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
5. How to Log and Visualize Simulations
Here we explain how to take a log of simulation results and how to visualize it.
Step1: 5.1. Logging Simulations with Observers
E-Cell4 provides special classes for logging, named Observer. Observer class is given when you call the run function of Simulator.
Step2: One of most popular Observer is FixedIntervalNumberObserver, which logs the number of molecules with the given time interval. FixedIntervalNumberObserver requires an interval and a list of serials of Species for logging.
Step3: data function of FixedIntervalNumberObserver returns the data logged.
Step4: targets() returns a list of Species, which you specified as an argument of the constructor.
Step5: NumberObserver logs the number of molecules after every steps when a reaction occurs. This observer is useful to log all reactions, but not available for ode.
Step6: TimingNumberObserver allows you to give the times for logging as an argument of its constructor.
Step7: run function accepts multile Observers at once.
Step8: FixedIntervalHDF5Observedr logs the whole data in a World to an output file with the fixed interval. Its second argument is a prefix for output filenames. filename() returns the name of a file scheduled to be saved next. At most one format string like %02d is allowed to use a step count in the file name. When you do not use the format string, it overwrites the latest data to the file.
Step9: The usage of FixedIntervalCSVObserver is almost same with that of FixedIntervalHDF5Observer. It saves positions (x, y, z) of particles with the radius (r) and serial number of Species (sid) to a CSV file.
Step10: Here is the first 10 lines in the output CSV file.
Step11: For particle simulations, E-Cell4 also provides Observer to trace a trajectory of a molecule, named FixedIntervalTrajectoryObserver. When no ParticleID is specified, it logs all the trajectories. Once some ParticleID is lost for the reaction during a simulation, it just stop to trace the particle any more.
Step12: Generally, World assumes a periodic boundary for each plane. To avoid the big jump of a particle at the edge due to the boundary condition, FixedIntervalTrajectoryObserver tries to keep the shift of positions. Thus, the positions stored in the Observer are not necessarily limited in the cuboid given for the World. To track the diffusion over the boundary condition accurately, the step interval for logging must be small enough. Of course, you can disable this option. See help(FixedIntervalTrajectoryObserver).
5.2. Visualization of Data Logged
In this section, we explain the visualization tools for data logged by Observer.
Firstly, for time course data, plotting.plot_number_observer plots the data provided by NumberObserver, FixedIntervalNumberObserver and TimingNumberObserver. For the detailed usage of plotting.plot_number_observer, see help(plotting.plot_number_observer).
Step13: You can set the style for plotting, and even add an arbitrary function to plot.
Step14: Plotting in the phase plane is also available by specifing the x-axis and y-axis.
Step15: For spatial simulations, to visualize the state of World, plotting.plot_world is available. This function plots the points of particles in three-dimensional volume in the interactive way. You can save the image by clicking a right button on the drawing region.
Step16: You can also make a movie from a series of HDF5 files, given as a FixedIntervalHDF5Observer. plotting.plot_movie requires an extra library, ffmpeg.
Step17: Finally, corresponding to FixedIntervalTrajectoryObserver, plotting.plot_trajectory provides a visualization of particle trajectories.
Step18: show internally calls these plotting functions corresponding to the given observer. Thus, you can do simply as follows | Python Code:
%matplotlib inline
import math
from ecell4.prelude import *
Explanation: 5. How to Log and Visualize Simulations
Here we explain how to take a log of simulation results and how to visualize it.
End of explanation
def create_simulator(f=gillespie.Factory()):
m = NetworkModel()
A, B, C = Species('A', 0.005, 1), Species('B', 0.005, 1), Species('C', 0.005, 1)
m.add_species_attribute(A)
m.add_species_attribute(B)
m.add_species_attribute(C)
m.add_reaction_rule(create_binding_reaction_rule(A, B, C, 0.01))
m.add_reaction_rule(create_unbinding_reaction_rule(C, A, B, 0.3))
w = f.world()
w.bind_to(m)
w.add_molecules(C, 60)
sim = f.simulator(w)
sim.initialize()
return sim
Explanation: 5.1. Logging Simulations with Observers
E-Cell4 provides special classes for logging, named Observer. Observer class is given when you call the run function of Simulator.
End of explanation
obs1 = FixedIntervalNumberObserver(0.1, ['A', 'B', 'C'])
sim = create_simulator()
sim.run(1.0, obs1)
Explanation: One of most popular Observer is FixedIntervalNumberObserver, which logs the number of molecules with the given time interval. FixedIntervalNumberObserver requires an interval and a list of serials of Species for logging.
End of explanation
print(obs1.data())
Explanation: data function of FixedIntervalNumberObserver returns the data logged.
End of explanation
print([sp.serial() for sp in obs1.targets()])
Explanation: targets() returns a list of Species, which you specified as an argument of the constructor.
End of explanation
obs1 = NumberObserver(['A', 'B', 'C'])
sim = create_simulator()
sim.run(1.0, obs1)
print(obs1.data())
Explanation: NumberObserver logs the number of molecules after every step at which a reaction occurs. This observer is useful to log all reactions, but not available for ode.
End of explanation
obs1 = TimingNumberObserver([0.0, 0.1, 0.2, 0.5, 1.0], ['A', 'B', 'C'])
sim = create_simulator()
sim.run(1.0, obs1)
print(obs1.data())
Explanation: TimingNumberObserver allows you to give the times for logging as an argument of its constructor.
End of explanation
obs1 = NumberObserver(['C'])
obs2 = FixedIntervalNumberObserver(0.1, ['A', 'B'])
sim = create_simulator()
sim.run(1.0, [obs1, obs2])
print(obs1.data())
print(obs2.data())
Explanation: The run function accepts multiple Observers at once.
End of explanation
obs1 = FixedIntervalHDF5Observer(0.2, 'test%02d.h5')
print(obs1.filename())
sim = create_simulator()
sim.run(1.0, obs1) # Now you have stepped 5 (1.0/0.2) times
print(obs1.filename())
w = load_world('test05.h5')
print(w.t(), w.num_molecules(Species('C')))
Explanation: FixedIntervalHDF5Observer logs the whole state of a World to an output file at the fixed interval. Its second argument is a prefix for output filenames. filename() returns the name of the file scheduled to be saved next. At most one format string like %02d is allowed, to embed a step count in the file name. When you do not use the format string, the latest data overwrites the same file.
End of explanation
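A variant without a format string, based on the note above (by that description every interval overwrites the same file); illustrative only:
obs_latest = FixedIntervalHDF5Observer(0.2, 'latest.h5')
print(obs_latest.filename())  # always 'latest.h5'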
obs1 = FixedIntervalCSVObserver(0.2, "test%02d.csv")
print(obs1.filename())
sim = create_simulator()
sim.run(1.0, obs1)
print(obs1.filename())
Explanation: The usage of FixedIntervalCSVObserver is almost same with that of FixedIntervalHDF5Observer. It saves positions (x, y, z) of particles with the radius (r) and serial number of Species (sid) to a CSV file.
End of explanation
print(''.join(open("test05.csv").readlines()[: 10]))
Explanation: Here is the first 10 lines in the output CSV file.
End of explanation
sim = create_simulator(spatiocyte.Factory(0.005))
obs1 = FixedIntervalTrajectoryObserver(0.01)
sim.run(0.1, obs1)
print([tuple(pos) for pos in obs1.data()[0]])
Explanation: For particle simulations, E-Cell4 also provides an Observer to trace the trajectory of a molecule, named FixedIntervalTrajectoryObserver. When no ParticleID is specified, it logs all the trajectories. Once some ParticleID is lost in a reaction during a simulation, it simply stops tracing that particle.
End of explanation
obs1 = NumberObserver(['C'])
obs2 = FixedIntervalNumberObserver(0.1, ['A', 'B'])
sim = create_simulator()
sim.run(10.0, [obs1, obs2])
plotting.plot_number_observer(obs1, obs2, step=True)
Explanation: Generally, World assumes a periodic boundary for each plane. To avoid the big jump of a particle at the edge due to the boundary condition, FixedIntervalTrajectoryObserver tries to keep the shift of positions. Thus, the positions stored in the Observer are not necessarily limited in the cuboid given for the World. To track the diffusion over the boundary condition accurately, the step interval for logging must be small enough. Of course, you can disable this option. See help(FixedIntervalTrajectoryObserver).
5.2. Visualization of Data Logged
In this section, we explain the visualization tools for data logged by Observer.
Firstly, for time course data, plotting.plot_number_observer plots the data provided by NumberObserver, FixedIntervalNumberObserver and TimingNumberObserver. For the detailed usage of plotting.plot_number_observer, see help(plotting.plot_number_observer).
End of explanation
plotting.plot_number_observer(obs1, '-', obs2, ':', lambda t: 60 * (1 + 2 * math.exp(-0.9 * t)) / (2 + math.exp(-0.9 * t)), '--', step=True)
Explanation: You can set the style for plotting, and even add an arbitrary function to plot.
End of explanation
plotting.plot_number_observer(obs2, 'o', x='A', y='B')
Explanation: Plotting in the phase plane is also available by specifying the x-axis and y-axis.
End of explanation
sim = create_simulator(spatiocyte.Factory(0.005))
plotting.plot_world(sim.world())
Explanation: For spatial simulations, to visualize the state of World, plotting.plot_world is available. This function plots the points of particles in three-dimensional volume in the interactive way. You can save the image by clicking a right button on the drawing region.
End of explanation
sim = create_simulator(spatiocyte.Factory(0.005))
obs1 = FixedIntervalHDF5Observer(0.02, 'test%02d.h5')
sim.run(1.0, obs1)
plotting.plot_movie(obs1)
Explanation: You can also make a movie from a series of HDF5 files, given as a FixedIntervalHDF5Observer. plotting.plot_movie requires an extra library, ffmpeg.
End of explanation
sim = create_simulator(spatiocyte.Factory(0.005))
obs1 = FixedIntervalTrajectoryObserver(1e-3)
sim.run(1, obs1)
plotting.plot_trajectory(obs1)
Explanation: Finally, corresponding to FixedIntervalTrajectoryObserver, plotting.plot_trajectory provides a visualization of particle trajectories.
End of explanation
show(obs1)
Explanation: show internally calls these plotting functions corresponding to the given observer. Thus, you can do simply as follows:
End of explanation |
2,565 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
What books topped the Hardcover Fiction NYT best-sellers list on Mother's Day in 2009 and 2010? How about Father's Day?
Step1: 2) What are all the different book categories the NYT ranked in June 6, 2009? How about June 6, 2015?
Step2: 3) Muammar Gaddafi's name can be transliterated many many ways. His last name is often a source of a million and one versions - Gadafi, Gaddafi, Kadafi, and Qaddafi to name a few. How many times has the New York Times referred to him by each of those names?
Tip
Step3: 4) What's the title of the first story to mention the word 'hipster' in 1995? What's the first paragraph?
Step4: 5) How many times was gay marriage mentioned in the NYT between 1950-1959, 1960-1969, 1970-1978, 1980-1989, 1990-2099, 2000-2009, and 2010-present?
Step5: 6) What section talks about motorcycles the most?
Tip
Step6: 7) How many of the last 20 movies reviewed by the NYT were Critics' Picks? How about the last 40? The last 60?
Tip
Step7: 8) Out of the last 40 movie reviews from the NYT, which critic has written the most reviews? | Python Code:
#my API key b577eb5b46ad4bec8ee159c89208e220
#base url http://api.nytimes.com/svc/books/{version}/lists
import requests
response = requests.get("http://api.nytimes.com/svc/books/v2/lists.json?list=hardcover-fiction&published-date=2009-05-10&api-key=b577eb5b46ad4bec8ee159c89208e220")
best_seller = response.json()
print(best_seller.keys())
print(type(best_seller))
print(type(best_seller['results']))
print(len(best_seller['results']))
print(best_seller['results'][0])
mother_best_seller_results_2009 = best_seller['results']
for item in mother_best_seller_results_2009:
print("This books ranks #", item['rank'], "on the list") #just to make sure they are in order
for book in item['book_details']:
print(book['title'])
print("The top 3 books in the Hardcover fiction NYT best-sellers on Mother's day 2009 were:")
for item in mother_best_seller_results_2009:
if item['rank']< 4: #to get top 3 books on the list
for book in item['book_details']:
print(book['title'])
import requests
response = requests.get("http://api.nytimes.com/svc/books/v2/lists.json?list=hardcover-fiction&published-date=2010-05-09&api-key=b577eb5b46ad4bec8ee159c89208e220")
best_seller_2010 = response.json()
print(best_seller.keys())
print(best_seller_2010['results'][0])
mother_best_seller_2010_results = best_seller_2010['results']
print("The top 3 books in the Hardcover fiction NYT best-sellers on Mother's day 2010 were:")
for item in mother_best_seller_2010_results:
if item['rank']< 4: #to get top 3 books on the list
for book in item['book_details']:
print(book['title'])
import requests
response = requests.get("http://api.nytimes.com/svc/books/v2/lists.json?list=hardcover-fiction&published-date=2009-06-21&api-key=b577eb5b46ad4bec8ee159c89208e220")
best_seller = response.json()
father_best_seller_results_2009 = best_seller['results']
print("The top 3 books in the Hardcover fiction NYT best-sellers on Father's day 2009 were:")
for item in father_best_seller_results_2009:
if item['rank']< 4: #to get top 3 books on the list
for book in item['book_details']:
print(book['title'])
import requests
response = requests.get("http://api.nytimes.com/svc/books/v2/lists.json?list=hardcover-fiction&published-date=2010-06-20&api-key=b577eb5b46ad4bec8ee159c89208e220")
best_seller = response.json()
father_best_seller_results_2010 = best_seller['results']
print("The top 3 books in the Hardcover fiction NYT best-sellers on Father's day 2010 were:")
for item in father_best_seller_results_2010:
if item['rank']< 4: #to get top 3 books on the list
for book in item['book_details']:
print(book['title'])
Explanation: What books topped the Hardcover Fiction NYT best-sellers list on Mother's Day in 2009 and 2010? How about Father's Day?
End of explanation
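# Hypothetical helper sketch (names are my own): wrap the best-seller query in one
# function so each holiday date above can reuse the same request/parsing code.
def top_hardcover_fiction(date, api_key, top_n=3):
    url = ("http://api.nytimes.com/svc/books/v2/lists.json"
           "?list=hardcover-fiction&published-date=" + date + "&api-key=" + api_key)
    results = requests.get(url).json()['results']
    titles = []
    for item in results:
        if item['rank'] <= top_n:
            for book in item['book_details']:
                titles.append(book['title'])
    return titles
# e.g. print(top_hardcover_fiction("2010-06-20", "b577eb5b46ad4bec8ee159c89208e220"))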
import requests
response = requests.get("http://api.nytimes.com/svc/books/v2/lists/names.json?published-date=2009-06-06&api-key=b577eb5b46ad4bec8ee159c89208e220")
best_seller = response.json()
print(best_seller.keys())
print(len(best_seller['results']))
book_categories_2009 = best_seller['results']
for item in book_categories_2009:
print(item['display_name'])
import requests
response = requests.get("http://api.nytimes.com/svc/books/v2/lists/names.json?published-date=2015-06-06&api-key=b577eb5b46ad4bec8ee159c89208e220")
best_seller = response.json()
print(len(best_seller['results']))
book_categories_2015 = best_seller['results']
for item in book_categories_2015:
print(item['display_name'])
Explanation: 2) What are all the different book categories the NYT ranked in June 6, 2009? How about June 6, 2015?
End of explanation
import requests
response = requests.get("http://api.nytimes.com/svc/search/v2/articlesearch.json?q=Gadafi&fq=Libya&api-key=b577eb5b46ad4bec8ee159c89208e220")
gadafi = response.json()
print(gadafi.keys())
print(gadafi['response'])
print(gadafi['response'].keys())
print(gadafi['response']['docs']) #so no results for GADAFI.
print('The New York times has not used the name Gadafi to refer to Muammar Gaddafi')
import requests
response = requests.get("http://api.nytimes.com/svc/search/v2/articlesearch.json?q=Gaddafi&fq=Libya&api-key=b577eb5b46ad4bec8ee159c89208e220")
gaddafi = response.json()
print(gaddafi.keys())
print(gaddafi['response'].keys())
print(type(gaddafi['response']['meta']))
print(gaddafi['response']['meta'])
print("'The New York times used the name Gaddafi to refer to Muammar Gaddafi", gaddafi['response']['meta']['hits'], "times")
import requests
response = requests.get("http://api.nytimes.com/svc/search/v2/articlesearch.json?q=Kadafi&fq=Libya&api-key=b577eb5b46ad4bec8ee159c89208e220")
kadafi = response.json()
print(kadafi.keys())
print(kadafi['response'].keys())
print(type(kadafi['response']['meta']))
print(kadafi['response']['meta'])
print("'The New York times used the name Kadafi to refer to Muammar Gaddafi", kadafi['response']['meta']['hits'], "times")
import requests
response = requests.get("http://api.nytimes.com/svc/search/v2/articlesearch.json?q=Qaddafi&fq=Libya&api-key=b577eb5b46ad4bec8ee159c89208e220")
qaddafi = response.json()
print(qaddafi.keys())
print(qaddafi['response'].keys())
print(type(qaddafi['response']['meta']))
print(qaddafi['response']['meta'])
print("'The New York times used the name Qaddafi to refer to Muammar Gaddafi", qaddafi['response']['meta']['hits'], "times")
Explanation: 3) Muammar Gaddafi's name can be transliterated many many ways. His last name is often a source of a million and one versions - Gadafi, Gaddafi, Kadafi, and Qaddafi to name a few. How many times has the New York Times referred to him by each of those names?
Tip: Add "Libya" to your search to make sure (-ish) you're talking about the right guy.
End of explanation
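# Hypothetical helper sketch: the four spellings differ only in the q= parameter, so a
# small function plus a loop avoids repeating the request block used below.
def count_hits(name, api_key):
    url = ("http://api.nytimes.com/svc/search/v2/articlesearch.json"
           "?q=" + name + "&fq=Libya&api-key=" + api_key)
    return requests.get(url).json()['response']['meta']['hits']
# for spelling in ("Gadafi", "Gaddafi", "Kadafi", "Qaddafi"):
#     print(spelling, count_hits(spelling, "b577eb5b46ad4bec8ee159c89208e220"))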
import requests
response = requests.get("https://api.nytimes.com/svc/search/v2/articlesearch.json?q=hipster&begin_date=19950101&end_date=19951231&sort=oldest&api-key=b577eb5b46ad4bec8ee159c89208e220")
hipster = response.json()
print(hipster.keys())
print(hipster['response'].keys())
print(hipster['response']['docs'][0])
hipster_info= hipster['response']['docs']
print('These articles all had the word hipster in them and were published in 1995') #ordered from oldest to newest
for item in hipster_info:
print(item['headline']['main'], item['pub_date'])
for item in hipster_info:
if item['headline']['main'] == "SOUND":
print("This is the first article to mention the word hispter in 1995 and was titled:", item['headline']['main'],"and it was publised on:", item['pub_date'])
print("This is the lead paragraph of", item['headline']['main'],item['lead_paragraph'])
Explanation: 4) What's the title of the first story to mention the word 'hipster' in 1995? What's the first paragraph?
End of explanation
import requests
response = requests.get('https://api.nytimes.com/svc/search/v2/articlesearch.json?q="gay marriage"&begin_date=19500101&end_date=19591231&api-key=b577eb5b46ad4bec8ee159c89208e220')
marriage_1959 = response.json()
print(marriage_1959.keys())
print(marriage_1959['response'].keys())
print(marriage_1959['response']['meta'])
print("___________")
print("Gay marriage was mentioned", marriage_1959['response']['meta']['hits'], "between 1950-1959")
import requests
response = requests.get("https://api.nytimes.com/svc/search/v2/articlesearch.json?q='gay marriage'&begin_date=19600101&end_date=19693112&api-key=b577eb5b46ad4bec8ee159c89208e220")
marriage_1969 = response.json()
print(marriage_1969.keys())
print(marriage_1969['response'].keys())
print(marriage_1969['response']['meta'])
print("___________")
print("Gay marriage was mentioned", marriage_1969['response']['meta']['hits'], "between 1960-1969")
import requests
response = requests.get("https://api.nytimes.com/svc/search/v2/articlesearch.json?q='gay marriage'&begin_date=19700101&end_date=19783112&api-key=b577eb5b46ad4bec8ee159c89208e220")
marriage_1978 = response.json()
print(marriage_1978.keys())
print(marriage_1978['response'].keys())
print(marriage_1978['response']['meta'])
print("___________")
print("Gay marriage was mentioned", marriage_1978['response']['meta']['hits'], "between 1970-1978")
import requests
response = requests.get("https://api.nytimes.com/svc/search/v2/articlesearch.json?q='gay marriage'&begin_date=19800101&end_date=19893112&api-key=b577eb5b46ad4bec8ee159c89208e220")
marriage_1989 = response.json()
print(marriage_1989.keys())
print(marriage_1989['response'].keys())
print(marriage_1989['response']['meta'])
print("___________")
print("Gay marriage was mentioned", marriage_1989['response']['meta']['hits'], "between 1980-1989")
import requests
response = requests.get("https://api.nytimes.com/svc/search/v2/articlesearch.json?q='gay marriage'&begin_date=19900101&end_date=20003112&api-key=b577eb5b46ad4bec8ee159c89208e220")
marriage_2000 = response.json()
print(marriage_2000.keys())
print(marriage_2000['response'].keys())
print(marriage_2000['response']['meta'])
print("___________")
print("Gay marriage was mentioned", marriage_2000['response']['meta']['hits'], "between 1990-2000")
import requests
response = requests.get("https://api.nytimes.com/svc/search/v2/articlesearch.json?q='gay marriage'&begin_date=20000101&end_date=20093112&api-key=b577eb5b46ad4bec8ee159c89208e220")
marriage_2009 = response.json()
print(marriage_2009.keys())
print(marriage_2009['response'].keys())
print(marriage_2009['response']['meta'])
print("___________")
print("Gay marriage was mentioned", marriage_2009['response']['meta']['hits'], "between 2000-2009")
import requests
response = requests.get("https://api.nytimes.com/svc/search/v2/articlesearch.json?q='gay marriage'&begin_date=20100101&end_date=20160609&api-key=b577eb5b46ad4bec8ee159c89208e220")
marriage_2016 = response.json()
print(marriage_2016.keys())
print(marriage_2016['response'].keys())
print(marriage_2016['response']['meta'])
print("___________")
print("Gay marriage was mentioned", marriage_2016['response']['meta']['hits'], "between 2010-present")
Explanation: 5) How many times was gay marriage mentioned in the NYT between 1950-1959, 1960-1969, 1970-1978, 1980-1989, 1990-1999, 2000-2009, and 2010-present?
End of explanation
import requests
response = requests.get("http://api.nytimes.com/svc/search/v2/articlesearch.json?q=motorcycles&facet_field=section_name&api-key=b577eb5b46ad4bec8ee159c89208e220")
motorcycles = response.json()
print(motorcycles.keys())
print(motorcycles['response'].keys())
print(motorcycles['response']['facets']['section_name']['terms'])
motorcycles_info= motorcycles['response']['facets']['section_name']['terms']
print(motorcycles_info)
print("These are the sections that talk the most about motorcycles:")
print("_________________")
for item in motorcycles_info:
print("The",item['term'],"section mentioned motorcycle", item['count'], "times")
motorcycle_info= motorcycles['response']['facets']['section_name']['terms']
most_motorcycle_section = 0
section_name = ""
for item in motorcycle_info:
if item['count']>most_motorcycle_section:
most_motorcycle_section = item['count']
section_name = item['term']
print(section_name, "is the sections that talks the most about motorcycles, with", most_motorcycle_section, "mentions of the word")
Explanation: 6) What section talks about motorcycles the most?
Tip: You'll be using facets
End of explanation
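# Alternative to the manual max-tracking loop above: max() with a key function gives
# the same answer in one line.
top_section = max(motorcycles_info, key=lambda entry: entry['count'])
print("Top section via max():", top_section['term'], "with", top_section['count'], "mentions")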
import requests
response = requests.get('http://api.nytimes.com/svc/movies/v2/reviews/search.json?api-key=b577eb5b46ad4bec8ee159c89208e220')
movies_reviews_20 = response.json()
print(movies_reviews_20.keys())
print(movies_reviews_20['results'][0])
critics_pick = 0
not_a_critics_pick = 0
for item in movies_reviews_20['results']:
print(item['display_title'], item['critics_pick'])
if item['critics_pick'] == 1:
print("-------------CRITICS PICK!")
critics_pick = critics_pick + 1
else:
print("-------------NOT CRITICS PICK!")
not_a_critics_pick = not_a_critics_pick + 1
print("______________________")
print("There were", critics_pick, "critics picks in the last 20 revies by the NYT")
import requests
response = requests.get('http://api.nytimes.com/svc/movies/v2/reviews/search.json?offset=20&api-key=b577eb5b46ad4bec8ee159c89208e220')
movies_reviews_40 = response.json()
print(movies_reviews_40.keys())
import requests
response = requests.get('http://api.nytimes.com/svc/movies/v2/reviews/search.json?offset=40&api-key=b577eb5b46ad4bec8ee159c89208e220')
movies_reviews_60 = response.json()
print(movies_reviews_60.keys())
new_medium_list = movies_reviews_20['results'] + movies_reviews_40['results']
print(len(new_medium_list))
critics_pick = 0
not_a_critics_pick = 0
for item in new_medium_list:
print(item['display_title'], item['critics_pick'])
if item['critics_pick'] == 1:
print("-------------CRITICS PICK!")
critics_pick = critics_pick + 1
else:
print("-------------NOT CRITICS PICK!")
not_a_critics_pick = not_a_critics_pick + 1
print("______________________")
print("There were", critics_pick, "critics picks in the last 40 revies by the NYT")
new_big_list = movies_reviews_20['results'] + movies_reviews_40['results'] + movies_reviews_60['results']
print(new_big_list[0])
print(len(new_big_list))
critics_pick = 0
not_a_critics_pick = 0
for item in new_big_list:
print(item['display_title'], item['critics_pick'])
if item['critics_pick'] == 1:
print("-------------CRITICS PICK!")
critics_pick = critics_pick + 1
else:
print("-------------NOT CRITICS PICK!")
not_a_critics_pick = not_a_critics_pick + 1
print("______________________")
print("There were", critics_pick, "critics picks in the last 60 revies by the NYT")
Explanation: 7) How many of the last 20 movies reviewed by the NYT were Critics' Picks? How about the last 40? The last 60?
Tip: You really don't want to do this 3 separate times (1-20, 21-40 and 41-60) and add them together. What if, perhaps, you were able to figure out how to combine two lists? Then you could have a 1-20 list, a 1-40 list, and a 1-60 list, and then just run similar code for each of them.
End of explanation
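# Sketch of the same idea using the API's offset parameter in a loop (hypothetical helper
# name), so the 20/40/60 cases don't each need a hand-written request.
def last_n_reviews(n, api_key):
    reviews = []
    for offset in range(0, n, 20):   # the reviews endpoint returns 20 results per page
        url = ("http://api.nytimes.com/svc/movies/v2/reviews/search.json"
               "?offset=" + str(offset) + "&api-key=" + api_key)
        reviews = reviews + requests.get(url).json()['results']
    return reviews
# n_picks = sum(item['critics_pick'] for item in last_n_reviews(60, "b577eb5b46ad4bec8ee159c89208e220"))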
medium_list = movies_reviews_20['results'] + movies_reviews_40['results']
print(type(medium_list))
print(medium_list[0])
for item in medium_list:
print(item['byline'])
all_critics = []
for item in medium_list:
all_critics.append(item['byline'])
print(all_critics)
unique_medium_list = set(all_critics)
print(unique_medium_list)
print("___________________________________________________")
print("This is a list of the authors who have written the NYT last 40 movie reviews, in descending order:")
from collections import Counter
count = Counter(all_critics)
print(count)
print("___________________________________________________")
print("This is a list of the top 3 authors who have written the NYT last 40 movie reviews:")
count.most_common(3)
Explanation: 8) Out of the last 40 movie reviews from the NYT, which critic has written the most reviews?
End of explanation |
2,566 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Water-filling Visualized
This code is provided as supplementary material of the lecture Machine Learning and Optimization in Communications (MLOC).<br>
This code illustrates
Step1: Specify total power p_tot as well as the noise levels of each channel
Step2: Illustration of the water-filling algorithm for 3 channels with configurable noise powers.
Step3: Interactive version with more channels and adjustable water level | Python Code:
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.cm as cm
from ipywidgets import interactive
import ipywidgets as widgets
%matplotlib inline
Explanation: Water-filling Visualized
This code is provided as supplementary material of the lecture Machine Learning and Optimization in Communications (MLOC).<br>
This code illustrates:
* Implementation of the water-filling algorithm
* Interactive illustration of the water-filling principle
End of explanation
# Function returns the water-level 1/v
def get_waterlevel(sigma_nq, p_tot):
# Sort noise values from lowest to largest
sigma_nq_sort = np.append(np.sort(sigma_nq), 9e99)
index = 0
# start filling from bottom until we reach the next channel
while index < len(sigma_nq):
waterlevel = (p_tot + np.sum(sigma_nq_sort[0:(index+1)]))/(index+1)
if waterlevel < sigma_nq_sort[index+1]:
return waterlevel
else:
index = index + 1
Explanation: Specify total power p_tot as well as the noise levels of each channel
End of explanation
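# Quick sanity check of get_waterlevel: the per-channel powers max(1/v - sigma_n^2, 0)
# should add up to the total power budget p_tot.
test_sigma = np.array([0.1, 3, 0.8])
test_level = get_waterlevel(test_sigma, 2.0)
print("allocated power sums to:", np.sum(np.maximum(test_level - test_sigma, 0)))  # ~2.0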
font = {'size' : 20}
plt.rc('font', **font)
plt.rc('text', usetex=True)
p_tot = 2
sigma_nq = np.array([0.1,3,0.8])
waterlevel = get_waterlevel(sigma_nq, p_tot)
water = np.maximum(waterlevel - sigma_nq,0)
print("Water level 1/v: ", waterlevel)
print("Powers per channel: ", water)
plt.figure(1,figsize=(9,6))
plt.rcParams.update({'font.size': 14})
cmap = cm.get_cmap('coolwarm')
x = np.arange(0.5,len(sigma_nq)+0.5, 1/100)
y1 = np.repeat(sigma_nq,100)
y2 = np.repeat(water,100)
plt.stackplot(x,y1,y2,colors=(cmap(0.9),cmap(0.1)), edgecolor='black')
plt.xlim(0.5,len(sigma_nq)+0.5)
plt.ylim(0,max(sigma_nq+water)*1.1)
nzindex = (water != 0).argmax(axis=0)
plt.text(nzindex+1,sigma_nq[nzindex]+water[nzindex],r'$1/{\nu^\star} = %1.2f$' % waterlevel, horizontalalignment='center', verticalalignment='bottom')
plt.xticks(np.arange(1,len(sigma_nq)+1))
plt.xlabel("Channel index $i$")
plt.ylabel("")
plt.show()
#plt.xlabel("x")
#plt.ylabel("y=f(x)")
Explanation: Illustration of the water-filling algorithm for 3 channels with configurable noise powers.
End of explanation
sigma_nq = np.array([0.2, 0.3, 1.6, 0.6, 0.17, 0.25, 0.93, 0.78, 1.3, 1.2, 0.66, 0.1, 0.25, 0.29, 0.19, 0.73])
def interactive_waterfilling_stack(p_tot):
waterlevel = get_waterlevel(sigma_nq, p_tot)
water = np.maximum(waterlevel - sigma_nq,0)
plt.figure(1,figsize=(15,6))
plt.rcParams.update({'font.size': 14})
x = np.arange(0.5,len(sigma_nq)+0.5, 1/100)
y1 = np.repeat(sigma_nq,100)
y2 = np.repeat(water,100)
plt.stackplot(x,y1,y2,colors=(cmap(0.9),cmap(0.1)), edgecolor='black')
plt.xlim(0.5,len(sigma_nq)+0.5)
plt.ylim(0,max(sigma_nq+water)*1.1)
nzindex = (water != 0).argmax(axis=0)
plt.text(nzindex+0.8,sigma_nq[nzindex]+water[nzindex],r'$1/{\nu^\star} = %1.2f$' % waterlevel, horizontalalignment='left', verticalalignment='bottom')
plt.xticks(np.arange(1,len(sigma_nq)+1))
plt.xlabel("Channel index $i$")
plt.ylabel("")
plt.legend([r'$\sigma_{\textrm{n},i}^2$','$p_i$'])
plt.show()
interactive_update = interactive(interactive_waterfilling_stack, \
p_tot = widgets.FloatSlider(min=0.1,max=15.0,step=0.1,value=3, continuous_update=False, description='p_tot',layout=widgets.Layout(width='70%')))
output = interactive_update.children[-1]
output.layout.height = '400px'
interactive_update
Explanation: Interactive version with more channels and adjustable water level
End of explanation |
2,567 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
A First Look at an X-ray Image Dataset
Images are data. They can be 2D, from cameras, or 1D, from spectrographs, or 3D, from IFUs (integral field units). In each case, the data come packaged as an array of numbers, which we can visualize, and do calculations with.
Let's suppose we are interested in clusters of galaxies. We choose one, Abell 1835, and propose to observe it with the XMM-Newton space telescope. We are successful, we design the observations, and they are taken for us. Next
Step1: Download the example data files if we don't already have them.
Step2: The XMM MOS2 image
Let's find the "science" image taken with the MOS2 camera, and display it.
Step3: imfits is a FITS object, containing multiple data structures. The image itself is an array of integer type, and size 648x648 pixels, stored in the primary "header data unit" or HDU.
If we need it to be floating point for some reason, we need to cast it
Step4: Let's look at this with ds9.
Step5: If you don't have the image viewing tool ds9, you should install it - it's very useful astronomical software. You can download it (later!) from this webpage.
We can also display the image in the notebook
Step6: Exercise
What is going on in this image?
Make a list of everything that is interesting about this image with your neighbor, and we'll discuss the features you identify in about 5 minutes time. | Python Code:
from __future__ import print_function
import astropy.io.fits as pyfits
import numpy as np
import os
import urllib
import astropy.visualization as viz
import matplotlib.pyplot as plt
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 10.0)
Explanation: A First Look at an X-ray Image Dataset
Images are data. They can be 2D, from cameras, or 1D, from spectrographs, or 3D, from IFUs (integral field units). In each case, the data come packaged as an array of numbers, which we can visualize, and do calculations with.
Let's suppose we are interested in clusters of galaxies. We choose one, Abell 1835, and propose to observe it with the XMM-Newton space telescope. We are successful, we design the observations, and they are taken for us. Next: we download the data, and take a look at it.
Getting the Data
We will download our images from HEASARC, the online archive where XMM data are stored.
End of explanation
targdir = 'a1835_xmm'
if not os.path.isdir(targdir):
    os.mkdir(targdir)
filenames = ('P0098010101M2U009IMAGE_3000.FTZ',
'P0098010101M2U009EXPMAP3000.FTZ',
'P0098010101M2X000BKGMAP3000.FTZ')
remotedir = 'http://heasarc.gsfc.nasa.gov/FTP/xmm/data/rev0/0098010101/PPS/'
for filename in filenames:
path = os.path.join(targdir, filename)
url = os.path.join(remotedir, filename)
if not os.path.isfile(path):
urllib.urlretrieve(url, path)
imagefile, expmapfile, bkgmapfile = [os.path.join(targdir, filename) for filename in filenames]
for filename in os.listdir(targdir):
print('{0:>10.2f} KB {1}'.format(os.path.getsize(os.path.join(targdir, filename))/1024.0, filename))
Explanation: Download the example data files if we don't already have them.
End of explanation
imfits = pyfits.open(imagefile)
imfits.info()
Explanation: The XMM MOS2 image
Let's find the "science" image taken with the MOS2 camera, and display it.
End of explanation
im = imfits[0].data
Explanation: imfits is a FITS object, containing multiple data structures. The image itself is an array of integer type, and size 648x648 pixels, stored in the primary "header data unit" or HDU.
If we need it to be floating point for some reason, we need to cast it:
im = imfits[0].data.astype(np.float32)
Note that this (probably?) prevents us from using the pyfits "writeto" method to save any changes. Assuming the integer type is ok, just get a pointer to the image data.
Accessing the .data member of the FITS object returns the image data as a numpy ndarray.
End of explanation
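# The image is an ordinary numpy array, so quick summary statistics are one call away.
print("min/max pixel value:", im.min(), im.max())
print("mean pixel value:", im.mean())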
!ds9 -log "$imagefile"
Explanation: Let's look at this with ds9.
End of explanation
plt.imshow(viz.scale_image(im, scale='log', max_cut=40), cmap='gray', origin='lower');
Explanation: If you don't have the image viewing tool ds9, you should install it - it's very useful astronomical software. You can download it (later!) from this webpage.
We can also display the image in the notebook:
End of explanation
index = np.unravel_index(im.argmax(), im.shape)
print("image dimensions:",im.shape)
print("location of maximum pixel value:",index)
print("maximum pixel value: ",im[index])
Explanation: Exercise
What is going on in this image?
Make a list of everything that is interesting about this image with your neighbor, and we'll discuss the features you identify in about 5 minutes time.
End of explanation |
2,568 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Level 2
Getting Started
In this level we will learn how to tie the execution of certain code to conditions. To do this, we will first introduce the boolean type and then our first control structures, the if statement and the while loop. Along the way we will get to know the keywords True, False, if, elif, else, is, while, break and continue.
The if statement will allow us to execute code when a condition is met.
The while loop will allow us to execute code for as long as a condition is met.
Step1: Let's look at the code above. We expect an input and try to extract an integer from it. This works as long as the user enters a whole number - but if a character string is entered instead, for example, a ValueError is raised. An error is also raised if the user enters nothing.
Over the course of this level we will learn to check user input and react accordingly.
The boolean type
The boolean is a type that has exactly two values
Step2: The type of an object determines how it is converted to a boolean. For the types we know so far - integer, float and string - the following holds
Step3: Just like integers and floats, booleans also have operators
Step4: We often want to compare values with each other; for this there are the comparison operators, which are defined for many types
Step5: "==" and "is"
== checks whether the objects the variables point to are equivalent. <br>
is checks whether the variables point to the same object.
Step6: The if statement
Now that we have learned what boolean values are, we can use them in an if statement
Step7: In the code example above, the if statement checks whether the string eingabe is empty. This happens implicitly, i.e. it relies on the interpreter converting the condition to a boolean. The comment describes alternative conditions that achieve the same thing but are more cumbersome. <br>
But if we recall our problem from the introduction, our goal was to read a number from the input and to catch errors caused by invalid user input. So we want to react when nothing was entered, when a number was entered, and when a character string was entered that cannot be interpreted as an integer.
Step8: As we can see, a lot happens at once in the code example above, so let's go through it step by step.
The if statement uses the str.isdigit() method to check that the string eingabe is not empty and consists only of digits; if so, we create an integer zahl from the input and print it. In an else branch, which runs when the condition of the if statement did not hold, we give the user feedback about the invalid input.
Step9: if statements can also be nested, i.e. we can define another if statement inside an if statement. However, we can simplify this by using the keyword elif
Step10: An if statement contains one if branch, any number of optional elif branches and optionally one else branch. The first branch whose condition holds is executed, and no further ones after it.
It is therefore important to pay attention to the order of the branches.
The while loop
So far it is very hard to run code more than once; if we have to repeat our programs, we restart them. On top of that, we have no way to execute statements an arbitrary number of times. The ability to run code repeatedly, however, is an elementary building block of many programs. That is why the third part of this level looks at the while loop, which solves these problems.
The while loop is similar in structure to the if statement
Step11: In the code block above we see a simple application of the while loop. If we comment out the last line, however, we get an infinite loop. In this context, note that a running Python program can be aborted in the interpreter by pressing Ctrl + C.
Inside a while loop it is possible to abort the loop with the keyword break or to skip the rest of the current iteration with the keyword continue. When using continue we again have to take care not to create infinite loops.
Step12: The keyword else can be used not only in an if statement but also together with a while loop. The else branch is appended to the end of the corresponding while loop. The code of the else branch is then executed only if the loop was not aborted by a break. | Python Code:
eingabe = input("Bitte etwas eingeben: ")
zahl = int(eingabe)
print(zahl)
Explanation: Level 2
Getting Started
In this level we will learn how to tie the execution of certain code to conditions. To do this, we will first introduce the boolean type and then our first control structures, the if statement and the while loop. Along the way we will get to know the keywords True, False, if, elif, else, is, while, break and continue.
The if statement will allow us to execute code when a condition is met.
The while loop will allow us to execute code for as long as a condition is met.
End of explanation
b1 = True
b2 = False
print(type(b1))
print(type(b2))
Explanation: Let's look at the code above. We expect an input and try to extract an integer from it. This works as long as the user enters a whole number - but if a character string is entered instead, for example, a ValueError is raised. An error is also raised if the user enters nothing.
Over the course of this level we will learn to check user input and react accordingly.
The boolean type
The boolean is a type that has exactly two values: True and False. In Python 3 these two literals are keywords and therefore cannot be used as variable names. A value can be converted to a boolean with the bool() function.
End of explanation
print(bool(""))
print(bool(0))
print(bool(0.0))
Explanation: The type of an object determines how it is converted to a boolean. For the types we know so far - integer, float and string - the following holds:
* an integer is True as long as it is not 0
* a float is True as long as it is not 0.0
* a string is True as long as it is not empty, i.e. ''
End of explanation
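# The same conversion rules seen from the other side: non-zero numbers and
# non-empty strings all convert to True.
print(bool(42), bool(-1.5), bool("hello"))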
print("not True:", not True)
print("True or False:", True or False)
print("True and False:", True and False)
print("True ^ False:", True ^ False)
Explanation: Just like integers and floats, booleans also have operators:
* and, the logical "and"
* or, the logical "or"
* not, the logical negation
In addition, xor (^) can also be applied to booleans.
End of explanation
print(5 < 3)
Explanation: We often want to compare values with each other; for this there are the comparison operators, which are defined for many types:
==: checks for equivalence
!=: checks for non-equivalence
>: strictly greater than
<: strictly less than
>=: greater than or equal
<=: less than or equal
is: checks for identity (the same object)
All of these operators return a boolean value, i.e. a value of type boolean.
End of explanation
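# A few more comparisons; every one of these expressions evaluates to a boolean.
print("3 == 3.0:", 3 == 3.0)
print("'a' != 'b':", 'a' != 'b')
print("2 <= 2:", 2 <= 2)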
print("==:", 10**3 == 1000)
print("is:", 10**3 is 1000)
Explanation: "==" und "is"
== prüft, ob die Objekte, auf die die Variablen zeigen, äquivalent sind. <br>
is prüft, ob die Variablen auf dasselbe Objekt zeigen.
End of explanation
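# The difference is easiest to see with two separate but equal objects
# (a list is used here purely as an illustration).
a = [1, 2, 3]
b = [1, 2, 3]
print("a == b:", a == b)   # True: same contents
print("a is b:", a is b)   # False: two distinct objects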
eingabe = input("Bitte etwas eingeben: ")
if eingabe: # alternativ: bool(eingabe) oder eingabe != ""
print(eingabe)
Explanation: The if statement
Now that we have learned what boolean values are, we can use them in an if statement: executing pieces of code only when a condition is met. Let's first look at the syntax of an if statement in Python:
python
if condition:
    statements
We start with the keyword if, followed by a condition; this should yield a boolean expression. We can make it an explicit boolean, but the interpreter calls bool() on our condition in any case and executes our statements if this returns True. The condition is followed by a colon :. The next line is then indented; the agreed convention is four spaces.
End of explanation
eingabe = input("Bitte etwas eingeben: ")
if eingabe.isdigit():
zahl = int(eingabe)
print(zahl)
else:
print("Ungültige Eingabe:", eingabe)
Explanation: In the code example above, the if statement checks whether the string eingabe is empty. This happens implicitly, i.e. it relies on the interpreter converting the condition to a boolean. The comment describes alternative conditions that achieve the same thing but are more cumbersome. <br>
But if we recall our problem from the introduction, our goal was to read a number from the input and to catch errors caused by invalid user input. So we want to react when nothing was entered, when a number was entered, and when a character string was entered that cannot be interpreted as an integer.
End of explanation
eingabe = input("Bitte eine Zahl eingeben: ")
if eingabe:
# die Eingabe ist nicht leer.
if eingabe.isdigit():
zahl = int(eingabe)
print(zahl, "ist eine gültige Zahl.")
else:
print("Die Eingabe ''" + eingabe + "' ist keine gültige Zahl")
else:
print("Die Eingabe ist leer.")
Explanation: As we can see, a lot happens at once in the code example above, so let's go through it step by step.
The if statement uses the str.isdigit() method to check that the string eingabe is not empty and consists only of digits; if so, we create an integer zahl from the input and print it. In an else branch, which runs when the condition of the if statement did not hold, we give the user feedback about the invalid input.
End of explanation
eingabe = input("Bitte eine Zahl eingeben: ")
if eingabe.isdigit():
# die Eingabe ist eine gültige Zahl
zahl = int(eingabe)
print(zahl, "ist eine gültige Zahl.")
elif not eingabe:
# die Eingabe ist leer
print("Die Eingabe ist leer")
else:
# die Eingabe ist nicht leer, aber auch keine
# gültige Zahl
print("Die Eingabe '" + eingabe + "' ist keine gültige Zahl")
Explanation: if statements can also be nested, i.e. we can define another if statement inside an if statement. However, we can simplify this by using the keyword elif:
End of explanation
counter = 0
while counter < 10:
print(counter)
counter += 1
Explanation: An if statement contains one if branch, any number of optional elif branches and optionally one else branch. The first branch whose condition holds is executed, and no further ones after it.
It is therefore important to pay attention to the order of the branches.
The while loop
So far it is very hard to run code more than once; if we have to repeat our programs, we restart them. On top of that, we have no way to execute statements an arbitrary number of times. The ability to run code repeatedly, however, is an elementary building block of many programs. That is why the third part of this level looks at the while loop, which solves these problems.
The while loop is similar in structure to the if statement:
python
while condition:
    statements
The difference is that our statements are repeated for as long as the condition holds (i.e. == True). Some care is needed here, because this can lead to infinite loops.
End of explanation
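# Small illustration of the branch-order rule above: the first matching branch wins,
# so the more specific test has to come first.
x = 42
if x > 10:
    print("x is greater than 10")
elif x > 0:
    print("x is positive, but at most 10")
else:
    print("x is zero or negative")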
print("Start.")
while True:
eingabe = input("Bitte etwas eingeben: ")
if not eingabe:
break
elif eingabe == "Q":
break
elif eingabe == "C":
continue
else:
print(eingabe)
print(len(eingabe)*"-")
print("Fertig.")
Explanation: In the code block above we see a simple application of the while loop. If we comment out the last line, however, we get an infinite loop. In this context, note that a running Python program can be aborted in the interpreter by pressing Ctrl + C.
Inside a while loop it is possible to abort the loop with the keyword break or to skip the rest of the current iteration with the keyword continue. When using continue we again have to take care not to create infinite loops.
End of explanation
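# A compact numeric example of continue: skip the even numbers.
n = 0
while n < 10:
    n += 1
    if n % 2 == 0:
        continue
    print(n)   # prints only the odd numbers 1, 3, 5, 7, 9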
# Zahlen raten
gesucht = 56
versuche = 10
zähler = 0
print("Die gesuchte Zahl x ist 0 < x < 100.")
while zähler < versuche:
eingabe = input(": ")
zähler += 1
# eingabe überprüfen
if eingabe.isdigit():
zahl = int(eingabe)
else:
print("Ungültige Eingabe.")
continue
# Benutzer Feedback
if zahl == gesucht:
print("Richtig!")
break
else:
if zahl > gesucht:
print("Kleiner.")
else:
print("Größer")
print("Noch", versuche-zähler, "Versuche.")
else:
# kein break <=> zahl wurde nicht erraten
print("Die Zahl wurde nicht erraten.")
print("Die richtige Zahl war:", gesucht)
Explanation: The keyword else can be used not only in an if statement but also together with a while loop. The else branch is appended to the end of the corresponding while loop. The code of the else branch is then executed only if the loop was not aborted by a break.
End of explanation |
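# Minimal while/else sketch: the else block runs only because no break fires.
m = 0
while m < 3:
    m += 1
else:
    print("loop finished without break, m =", m)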
2,569 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Intro to Matplotlib
Matplotlib = a library for plotting mathematical things
What is Matplotlib?
Matplotlib is a library for creating 2D images easily.
Find out more at
Step1: Creating plots (plot)
Creating plots is very easy in matplotlib; if you have a list of X values and another of y values, you just use
Step2: Wrangling the data
Head lets us take a quick look at the data... just by eye we see that the columns are years and the rows are countries... we can reverse this with transpose, but we also see that it uses numeric indices; we would prefer the indices to be the countries, so we change them and drop the column that is no longer needed... at the end a head to check that everything is fine... this game of cleaning and arranging data is called "Data Wrangling"
Step3: So now we can look at the quality of life in Mexico over time
Step4: from this visualization we see that quality of life has been rising since 1900; we also see a lot of movement between 1890 and 1950, exactly when there were many wars in Mexico.
We can also select a specific range of years; we see that this range is interesting, so
Step5: or, without so much fuss, we can restrict the range of our plot with xlim (the limits of the X axis)
Step6: It is also important to see how this compares with other countries; we can compare with all of North America | Python Code:
import numpy as np # modulo de computo numerico
import matplotlib.pyplot as plt # modulo de graficas
import pandas as pd # modulo de datos
# esta linea hace que las graficas salgan en el notebook
%matplotlib inline
Explanation: Intro to Matplotlib
Matplotlib = a library for plotting mathematical things
What is Matplotlib?
Matplotlib is a library for creating 2D images easily.
Find out more at :
Official page : http://matplotlib.org/
Example gallery: http://matplotlib.org/gallery.html
A more advanced library built on matplotlib, Seaborn: http://stanford.edu/~mwaskom/software/seaborn/
Interactive visualization library: http://bokeh.pydata.org/
Very good tutorial: http://www.labri.fr/perso/nrougier/teaching/matplotlib/
To use matplotlib you only have to import the module... it is also worth importing numpy, since it is very useful
End of explanation
xurl="http://spreadsheets.google.com/pub?key=phAwcNAVuyj2tPLxKvvnNPA&output=xls"
df=pd.read_excel(xurl)
print("Tamano completo es %s"%str(df.shape))
df.head()
Explanation: Creating plots (plot)
Creating plots is very easy in matplotlib; if you have a list of X values and another of y values, you just use :
We can use the np.linspace function to create values in a range; for example, if we want 100 numbers between 0 and 10 we use:
And we can plot two things at the same time:
What if we want to tell each line apart? We use legend(); we also have to give each plot a name
We can also do more things, such as drawing only the points, or the lines together with the points, using linestyle:
Drawing points (scatter)
Sometimes we do not want to draw lines but points; this tells us where the data sit spatially. For this we can use it as follows:
But we can also add more information, for example giving each point a color, or different sizes:
Histograms (hist)
Histograms show us distributions of data, the shape of the data; they show us the number of data points of different kinds:
another kind of data, drawn from a Gaussian bell curve, that is, a normal distribution:
Databases on the internet
Sometimes the data we want live on the internet. Assuming they are organized and in a friendly format, we can always download them and save them as a DataFrame.
For example:
Gapminder is a page with more than 500 datasets related to global indicators such as income, gross domestic product (GDP) and life expectancy.
Here we download the life-expectancy dataset, keep it in memory and load it as an Excel file:
Note! Here we use .head() to print the first 5 rows of the dataframe, because the data are huge.
End of explanation
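# A minimal version of the plotting steps described above: linspace for the x values,
# two labelled curves, and legend() to tell them apart.
x = np.linspace(0, 10, 100)
plt.plot(x, np.sin(x), label="sin(x)")
plt.plot(x, np.cos(x), linestyle="--", label="cos(x)")
plt.legend()
plt.show()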
df = df.rename(columns={'Life expectancy with projections. Yellow is IHME': 'Life expectancy'})
df.index=df['Life expectancy']
df=df.drop('Life expectancy',axis=1)
df=df.transpose()
df.head()
Explanation: Wrangling the data
Head lets us take a quick look at the data... just by eye we see that the columns are years and the rows are countries... we can reverse this with transpose, but we also see that it uses numeric indices; we would prefer the indices to be the countries, so we change them and drop the column that is no longer needed... at the end a head to check that everything is fine... this game of cleaning and arranging data is called "Data Wrangling"
End of explanation
df['Mexico'].plot()
print("== Esperanza de Vida en Mexico ==")
Explanation: So now we can look at the quality of life in Mexico over time:
End of explanation
subdf=df[ df.index >= 1890 ]
subdf=subdf[ subdf.index <= 1955 ]
subdf['Mexico'].plot()
plt.title("Esperanza de Vida en Mexico entre 1890 y 1955")
plt.show()
Explanation: from this visualization we see that quality of life has been rising since 1900; we also see a lot of movement between 1890 and 1950, exactly when there were many wars in Mexico.
We can also select a specific range of years; we see that this range is interesting, so
End of explanation
df['Mexico'].plot()
plt.xlim(1890,1955)
plt.title("Esperanza de Vida en Mexico entre 1890 y 1955")
plt.show()
Explanation: or, without so much fuss, we can restrict the range of our plot with xlim (the limits of the X axis)
End of explanation
df[['Mexico','United States','Canada']].plot()
plt.title("Esperanza de Vida en Norte-America")
plt.show()
Explanation: It is also important to see how this compares with other countries; we can compare with all of North America:
End of explanation |
2,570 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Chapter 5 – Support Vector Machines
This notebook contains all the sample code and solutions to the exercises in chapter 5.
Setup
First, let's make sure this notebook works well in both python 2 and 3, import a few common modules, ensure MatplotLib plots figures inline and prepare a function to save the figures
Step1: Large margin classification
The next few code cells generate the first figures in chapter 5. The first actual code sample comes after
Step2: Sensitivity to feature scales
Step3: Sensitivity to outliers
Step4: Large margin vs margin violations
This is the first code example in chapter 5
Step5: Now let's generate the graph comparing different regularization settings
Step6: Non-linear classification
Step7: Regression
Step8: Under the hood
Step9: Small weight vector results in a large margin
Step10: Hinge loss
Step11: Extra material
Training time
Step12: Linear SVM classifier implementation using Batch Gradient Descent
Step13: Exercise solutions
1. to 7.
See appendix A.
8.
Exercise
Step14: Let's plot the decision boundaries of these three models
Step15: Close enough!
9.
Exercise
Step16: Many training algorithms are sensitive to the order of the training instances, so it's generally good practice to shuffle them first
Step17: Let's start simple, with a linear SVM classifier. It will automatically use the One-vs-All (also called One-vs-the-Rest, OvR) strategy, so there's nothing special we need to do. Easy!
Step18: Let's make predictions on the training set and measure the accuracy (we don't want to measure it on the test set yet, since we have not selected and trained the final model yet)
Step19: Wow, 82% accuracy on MNIST is a really bad performance. This linear model is certainly too simple for MNIST, but perhaps we just needed to scale the data first
Step20: That's much better (we cut the error rate in two), but still not great at all for MNIST. If we want to use an SVM, we will have to use a kernel. Let's try an SVC with an RBF kernel (the default).
Warning
Step21: That's promising, we get better performance even though we trained the model on 6 times less data. Let's tune the hyperparameters by doing a randomized search with cross validation. We will do this on a small dataset just to speed up the process
Step22: This looks pretty low but remember we only trained the model on 1,000 instances. Let's retrain the best estimator on the whole training set (run this at night, it will take hours)
Step23: Ah, this looks good! Let's select this model. Now we can test it on the test set
Step24: Not too bad, but apparently the model is overfitting slightly. It's tempting to tweak the hyperparameters a bit more (e.g. decreasing C and/or gamma), but we would run the risk of overfitting the test set. Other people have found that the hyperparameters C=5 and gamma=0.005 yield even better performance (over 98% accuracy). By running the randomized search for longer and on a larger part of the training set, you may be able to find this as well.
10.
Exercise
Step25: Split it into a training set and a test set
Step26: Don't forget to scale the data
Step27: Let's train a simple LinearSVR first
Step28: Let's see how it performs on the training set
Step29: Let's look at the RMSE
Step30: In this training set, the targets are tens of thousands of dollars. The RMSE gives a rough idea of the kind of error you should expect (with a higher weight for large errors)
Step31: Now let's measure the RMSE on the training set
Step32: Looks much better than the linear model. Let's select this model and evaluate it on the test set | Python Code:
# To support both python 2 and python 3
from __future__ import division, print_function, unicode_literals
# Common imports
import numpy as np
import os
# to make this notebook's output stable across runs
np.random.seed(42)
# To plot pretty figures
%matplotlib inline
import matplotlib
import matplotlib.pyplot as plt
plt.rcParams['axes.labelsize'] = 14
plt.rcParams['xtick.labelsize'] = 12
plt.rcParams['ytick.labelsize'] = 12
# Where to save the figures
PROJECT_ROOT_DIR = "."
CHAPTER_ID = "svm"
def save_fig(fig_id, tight_layout=True):
path = os.path.join(PROJECT_ROOT_DIR, "images", CHAPTER_ID, fig_id + ".png")
print("Saving figure", fig_id)
if tight_layout:
plt.tight_layout()
plt.savefig(path, format='png', dpi=300)
Explanation: Chapter 5 – Support Vector Machines
This notebook contains all the sample code and solutions to the exercises in chapter 5.
Setup
First, let's make sure this notebook works well in both python 2 and 3, import a few common modules, ensure MatplotLib plots figures inline and prepare a function to save the figures:
End of explanation
from sklearn.svm import SVC
from sklearn import datasets
iris = datasets.load_iris()
X = iris["data"][:, (2, 3)] # petal length, petal width
y = iris["target"]
setosa_or_versicolor = (y == 0) | (y == 1)
X = X[setosa_or_versicolor]
y = y[setosa_or_versicolor]
# SVM Classifier model
svm_clf = SVC(kernel="linear", C=float("inf"))
svm_clf.fit(X, y)
# Bad models
x0 = np.linspace(0, 5.5, 200)
pred_1 = 5*x0 - 20
pred_2 = x0 - 1.8
pred_3 = 0.1 * x0 + 0.5
def plot_svc_decision_boundary(svm_clf, xmin, xmax):
w = svm_clf.coef_[0]
b = svm_clf.intercept_[0]
# At the decision boundary, w0*x0 + w1*x1 + b = 0
# => x1 = -w0/w1 * x0 - b/w1
x0 = np.linspace(xmin, xmax, 200)
decision_boundary = -w[0]/w[1] * x0 - b/w[1]
margin = 1/w[1]
gutter_up = decision_boundary + margin
gutter_down = decision_boundary - margin
svs = svm_clf.support_vectors_
plt.scatter(svs[:, 0], svs[:, 1], s=180, facecolors='#FFAAAA')
plt.plot(x0, decision_boundary, "k-", linewidth=2)
plt.plot(x0, gutter_up, "k--", linewidth=2)
plt.plot(x0, gutter_down, "k--", linewidth=2)
plt.figure(figsize=(12,2.7))
plt.subplot(121)
plt.plot(x0, pred_1, "g--", linewidth=2)
plt.plot(x0, pred_2, "m-", linewidth=2)
plt.plot(x0, pred_3, "r-", linewidth=2)
plt.plot(X[:, 0][y==1], X[:, 1][y==1], "bs", label="Iris-Versicolor")
plt.plot(X[:, 0][y==0], X[:, 1][y==0], "yo", label="Iris-Setosa")
plt.xlabel("Petal length", fontsize=14)
plt.ylabel("Petal width", fontsize=14)
plt.legend(loc="upper left", fontsize=14)
plt.axis([0, 5.5, 0, 2])
plt.subplot(122)
plot_svc_decision_boundary(svm_clf, 0, 5.5)
plt.plot(X[:, 0][y==1], X[:, 1][y==1], "bs")
plt.plot(X[:, 0][y==0], X[:, 1][y==0], "yo")
plt.xlabel("Petal length", fontsize=14)
plt.axis([0, 5.5, 0, 2])
save_fig("large_margin_classification_plot")
plt.show()
Explanation: Large margin classification
The next few code cells generate the first figures in chapter 5. The first actual code sample comes after:
End of explanation
Xs = np.array([[1, 50], [5, 20], [3, 80], [5, 60]]).astype(np.float64)
ys = np.array([0, 0, 1, 1])
svm_clf = SVC(kernel="linear", C=100)
svm_clf.fit(Xs, ys)
plt.figure(figsize=(12,3.2))
plt.subplot(121)
plt.plot(Xs[:, 0][ys==1], Xs[:, 1][ys==1], "bo")
plt.plot(Xs[:, 0][ys==0], Xs[:, 1][ys==0], "ms")
plot_svc_decision_boundary(svm_clf, 0, 6)
plt.xlabel("$x_0$", fontsize=20)
plt.ylabel("$x_1$ ", fontsize=20, rotation=0)
plt.title("Unscaled", fontsize=16)
plt.axis([0, 6, 0, 90])
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
X_scaled = scaler.fit_transform(Xs)
svm_clf.fit(X_scaled, ys)
plt.subplot(122)
plt.plot(X_scaled[:, 0][ys==1], X_scaled[:, 1][ys==1], "bo")
plt.plot(X_scaled[:, 0][ys==0], X_scaled[:, 1][ys==0], "ms")
plot_svc_decision_boundary(svm_clf, -2, 2)
plt.xlabel("$x_0$", fontsize=20)
plt.title("Scaled", fontsize=16)
plt.axis([-2, 2, -2, 2])
save_fig("sensitivity_to_feature_scales_plot")
Explanation: Sensitivity to feature scales
End of explanation
X_outliers = np.array([[3.4, 1.3], [3.2, 0.8]])
y_outliers = np.array([0, 0])
Xo1 = np.concatenate([X, X_outliers[:1]], axis=0)
yo1 = np.concatenate([y, y_outliers[:1]], axis=0)
Xo2 = np.concatenate([X, X_outliers[1:]], axis=0)
yo2 = np.concatenate([y, y_outliers[1:]], axis=0)
svm_clf2 = SVC(kernel="linear", C=10**9)
svm_clf2.fit(Xo2, yo2)
plt.figure(figsize=(12,2.7))
plt.subplot(121)
plt.plot(Xo1[:, 0][yo1==1], Xo1[:, 1][yo1==1], "bs")
plt.plot(Xo1[:, 0][yo1==0], Xo1[:, 1][yo1==0], "yo")
plt.text(0.3, 1.0, "Impossible!", fontsize=24, color="red")
plt.xlabel("Petal length", fontsize=14)
plt.ylabel("Petal width", fontsize=14)
plt.annotate("Outlier",
xy=(X_outliers[0][0], X_outliers[0][1]),
xytext=(2.5, 1.7),
ha="center",
arrowprops=dict(facecolor='black', shrink=0.1),
fontsize=16,
)
plt.axis([0, 5.5, 0, 2])
plt.subplot(122)
plt.plot(Xo2[:, 0][yo2==1], Xo2[:, 1][yo2==1], "bs")
plt.plot(Xo2[:, 0][yo2==0], Xo2[:, 1][yo2==0], "yo")
plot_svc_decision_boundary(svm_clf2, 0, 5.5)
plt.xlabel("Petal length", fontsize=14)
plt.annotate("Outlier",
xy=(X_outliers[1][0], X_outliers[1][1]),
xytext=(3.2, 0.08),
ha="center",
arrowprops=dict(facecolor='black', shrink=0.1),
fontsize=16,
)
plt.axis([0, 5.5, 0, 2])
save_fig("sensitivity_to_outliers_plot")
plt.show()
Explanation: Sensitivity to outliers
End of explanation
import numpy as np
from sklearn import datasets
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC
iris = datasets.load_iris()
X = iris["data"][:, (2, 3)] # petal length, petal width
y = (iris["target"] == 2).astype(np.float64) # Iris-Virginica
svm_clf = Pipeline([
("scaler", StandardScaler()),
("linear_svc", LinearSVC(C=1, loss="hinge", random_state=42)),
])
svm_clf.fit(X, y)
svm_clf.predict([[5.5, 1.7]])
Explanation: Large margin vs margin violations
This is the first code example in chapter 5:
End of explanation
scaler = StandardScaler()
svm_clf1 = LinearSVC(C=1, loss="hinge", random_state=42)
svm_clf2 = LinearSVC(C=100, loss="hinge", random_state=42)
scaled_svm_clf1 = Pipeline([
("scaler", scaler),
("linear_svc", svm_clf1),
])
scaled_svm_clf2 = Pipeline([
("scaler", scaler),
("linear_svc", svm_clf2),
])
scaled_svm_clf1.fit(X, y)
scaled_svm_clf2.fit(X, y)
# Convert to unscaled parameters
b1 = svm_clf1.decision_function([-scaler.mean_ / scaler.scale_])
b2 = svm_clf2.decision_function([-scaler.mean_ / scaler.scale_])
w1 = svm_clf1.coef_[0] / scaler.scale_
w2 = svm_clf2.coef_[0] / scaler.scale_
svm_clf1.intercept_ = np.array([b1])
svm_clf2.intercept_ = np.array([b2])
svm_clf1.coef_ = np.array([w1])
svm_clf2.coef_ = np.array([w2])
# Find support vectors (LinearSVC does not do this automatically)
t = y * 2 - 1
support_vectors_idx1 = (t * (X.dot(w1) + b1) < 1).ravel()
support_vectors_idx2 = (t * (X.dot(w2) + b2) < 1).ravel()
svm_clf1.support_vectors_ = X[support_vectors_idx1]
svm_clf2.support_vectors_ = X[support_vectors_idx2]
plt.figure(figsize=(12,3.2))
plt.subplot(121)
plt.plot(X[:, 0][y==1], X[:, 1][y==1], "g^", label="Iris-Virginica")
plt.plot(X[:, 0][y==0], X[:, 1][y==0], "bs", label="Iris-Versicolor")
plot_svc_decision_boundary(svm_clf1, 4, 6)
plt.xlabel("Petal length", fontsize=14)
plt.ylabel("Petal width", fontsize=14)
plt.legend(loc="upper left", fontsize=14)
plt.title("$C = {}$".format(svm_clf1.C), fontsize=16)
plt.axis([4, 6, 0.8, 2.8])
plt.subplot(122)
plt.plot(X[:, 0][y==1], X[:, 1][y==1], "g^")
plt.plot(X[:, 0][y==0], X[:, 1][y==0], "bs")
plot_svc_decision_boundary(svm_clf2, 4, 6)
plt.xlabel("Petal length", fontsize=14)
plt.title("$C = {}$".format(svm_clf2.C), fontsize=16)
plt.axis([4, 6, 0.8, 2.8])
save_fig("regularization_plot")
Explanation: Now let's generate the graph comparing different regularization settings:
End of explanation
X1D = np.linspace(-4, 4, 9).reshape(-1, 1)
X2D = np.c_[X1D, X1D**2]
y = np.array([0, 0, 1, 1, 1, 1, 1, 0, 0])
plt.figure(figsize=(11, 4))
plt.subplot(121)
plt.grid(True, which='both')
plt.axhline(y=0, color='k')
plt.plot(X1D[:, 0][y==0], np.zeros(4), "bs")
plt.plot(X1D[:, 0][y==1], np.zeros(5), "g^")
plt.gca().get_yaxis().set_ticks([])
plt.xlabel(r"$x_1$", fontsize=20)
plt.axis([-4.5, 4.5, -0.2, 0.2])
plt.subplot(122)
plt.grid(True, which='both')
plt.axhline(y=0, color='k')
plt.axvline(x=0, color='k')
plt.plot(X2D[:, 0][y==0], X2D[:, 1][y==0], "bs")
plt.plot(X2D[:, 0][y==1], X2D[:, 1][y==1], "g^")
plt.xlabel(r"$x_1$", fontsize=20)
plt.ylabel(r"$x_2$", fontsize=20, rotation=0)
plt.gca().get_yaxis().set_ticks([0, 4, 8, 12, 16])
plt.plot([-4.5, 4.5], [6.5, 6.5], "r--", linewidth=3)
plt.axis([-4.5, 4.5, -1, 17])
plt.subplots_adjust(right=1)
save_fig("higher_dimensions_plot", tight_layout=False)
plt.show()
from sklearn.datasets import make_moons
X, y = make_moons(n_samples=100, noise=0.15, random_state=42)
def plot_dataset(X, y, axes):
plt.plot(X[:, 0][y==0], X[:, 1][y==0], "bs")
plt.plot(X[:, 0][y==1], X[:, 1][y==1], "g^")
plt.axis(axes)
plt.grid(True, which='both')
plt.xlabel(r"$x_1$", fontsize=20)
plt.ylabel(r"$x_2$", fontsize=20, rotation=0)
plot_dataset(X, y, [-1.5, 2.5, -1, 1.5])
plt.show()
from sklearn.datasets import make_moons
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import PolynomialFeatures
polynomial_svm_clf = Pipeline([
("poly_features", PolynomialFeatures(degree=3)),
("scaler", StandardScaler()),
("svm_clf", LinearSVC(C=10, loss="hinge", random_state=42))
])
polynomial_svm_clf.fit(X, y)
def plot_predictions(clf, axes):
x0s = np.linspace(axes[0], axes[1], 100)
x1s = np.linspace(axes[2], axes[3], 100)
x0, x1 = np.meshgrid(x0s, x1s)
X = np.c_[x0.ravel(), x1.ravel()]
y_pred = clf.predict(X).reshape(x0.shape)
y_decision = clf.decision_function(X).reshape(x0.shape)
plt.contourf(x0, x1, y_pred, cmap=plt.cm.brg, alpha=0.2)
plt.contourf(x0, x1, y_decision, cmap=plt.cm.brg, alpha=0.1)
plot_predictions(polynomial_svm_clf, [-1.5, 2.5, -1, 1.5])
plot_dataset(X, y, [-1.5, 2.5, -1, 1.5])
save_fig("moons_polynomial_svc_plot")
plt.show()
from sklearn.svm import SVC
poly_kernel_svm_clf = Pipeline([
("scaler", StandardScaler()),
("svm_clf", SVC(kernel="poly", degree=3, coef0=1, C=5))
])
poly_kernel_svm_clf.fit(X, y)
poly100_kernel_svm_clf = Pipeline([
("scaler", StandardScaler()),
("svm_clf", SVC(kernel="poly", degree=10, coef0=100, C=5))
])
poly100_kernel_svm_clf.fit(X, y)
plt.figure(figsize=(11, 4))
plt.subplot(121)
plot_predictions(poly_kernel_svm_clf, [-1.5, 2.5, -1, 1.5])
plot_dataset(X, y, [-1.5, 2.5, -1, 1.5])
plt.title(r"$d=3, r=1, C=5$", fontsize=18)
plt.subplot(122)
plot_predictions(poly100_kernel_svm_clf, [-1.5, 2.5, -1, 1.5])
plot_dataset(X, y, [-1.5, 2.5, -1, 1.5])
plt.title(r"$d=10, r=100, C=5$", fontsize=18)
save_fig("moons_kernelized_polynomial_svc_plot")
plt.show()
def gaussian_rbf(x, landmark, gamma):
return np.exp(-gamma * np.linalg.norm(x - landmark, axis=1)**2)
gamma = 0.3
x1s = np.linspace(-4.5, 4.5, 200).reshape(-1, 1)
x2s = gaussian_rbf(x1s, -2, gamma)
x3s = gaussian_rbf(x1s, 1, gamma)
XK = np.c_[gaussian_rbf(X1D, -2, gamma), gaussian_rbf(X1D, 1, gamma)]
yk = np.array([0, 0, 1, 1, 1, 1, 1, 0, 0])
plt.figure(figsize=(11, 4))
plt.subplot(121)
plt.grid(True, which='both')
plt.axhline(y=0, color='k')
plt.scatter(x=[-2, 1], y=[0, 0], s=150, alpha=0.5, c="red")
plt.plot(X1D[:, 0][yk==0], np.zeros(4), "bs")
plt.plot(X1D[:, 0][yk==1], np.zeros(5), "g^")
plt.plot(x1s, x2s, "g--")
plt.plot(x1s, x3s, "b:")
plt.gca().get_yaxis().set_ticks([0, 0.25, 0.5, 0.75, 1])
plt.xlabel(r"$x_1$", fontsize=20)
plt.ylabel(r"Similarity", fontsize=14)
plt.annotate(r'$\mathbf{x}$',
xy=(X1D[3, 0], 0),
xytext=(-0.5, 0.20),
ha="center",
arrowprops=dict(facecolor='black', shrink=0.1),
fontsize=18,
)
plt.text(-2, 0.9, "$x_2$", ha="center", fontsize=20)
plt.text(1, 0.9, "$x_3$", ha="center", fontsize=20)
plt.axis([-4.5, 4.5, -0.1, 1.1])
plt.subplot(122)
plt.grid(True, which='both')
plt.axhline(y=0, color='k')
plt.axvline(x=0, color='k')
plt.plot(XK[:, 0][yk==0], XK[:, 1][yk==0], "bs")
plt.plot(XK[:, 0][yk==1], XK[:, 1][yk==1], "g^")
plt.xlabel(r"$x_2$", fontsize=20)
plt.ylabel(r"$x_3$ ", fontsize=20, rotation=0)
plt.annotate(r'$\phi\left(\mathbf{x}\right)$',
xy=(XK[3, 0], XK[3, 1]),
xytext=(0.65, 0.50),
ha="center",
arrowprops=dict(facecolor='black', shrink=0.1),
fontsize=18,
)
plt.plot([-0.1, 1.1], [0.57, -0.1], "r--", linewidth=3)
plt.axis([-0.1, 1.1, -0.1, 1.1])
plt.subplots_adjust(right=1)
save_fig("kernel_method_plot")
plt.show()
x1_example = X1D[3, 0]
for landmark in (-2, 1):
k = gaussian_rbf(np.array([[x1_example]]), np.array([[landmark]]), gamma)
print("Phi({}, {}) = {}".format(x1_example, landmark, k))
rbf_kernel_svm_clf = Pipeline([
("scaler", StandardScaler()),
("svm_clf", SVC(kernel="rbf", gamma=5, C=0.001))
])
rbf_kernel_svm_clf.fit(X, y)
from sklearn.svm import SVC
gamma1, gamma2 = 0.1, 5
C1, C2 = 0.001, 1000
hyperparams = (gamma1, C1), (gamma1, C2), (gamma2, C1), (gamma2, C2)
svm_clfs = []
for gamma, C in hyperparams:
rbf_kernel_svm_clf = Pipeline([
("scaler", StandardScaler()),
("svm_clf", SVC(kernel="rbf", gamma=gamma, C=C))
])
rbf_kernel_svm_clf.fit(X, y)
svm_clfs.append(rbf_kernel_svm_clf)
plt.figure(figsize=(11, 7))
for i, svm_clf in enumerate(svm_clfs):
plt.subplot(221 + i)
plot_predictions(svm_clf, [-1.5, 2.5, -1, 1.5])
plot_dataset(X, y, [-1.5, 2.5, -1, 1.5])
gamma, C = hyperparams[i]
plt.title(r"$\gamma = {}, C = {}$".format(gamma, C), fontsize=16)
save_fig("moons_rbf_svc_plot")
plt.show()
Explanation: Non-linear classification
End of explanation
np.random.seed(42)
m = 50
X = 2 * np.random.rand(m, 1)
y = (4 + 3 * X + np.random.randn(m, 1)).ravel()
from sklearn.svm import LinearSVR
svm_reg = LinearSVR(epsilon=1.5, random_state=42)
svm_reg.fit(X, y)
svm_reg1 = LinearSVR(epsilon=1.5, random_state=42)
svm_reg2 = LinearSVR(epsilon=0.5, random_state=42)
svm_reg1.fit(X, y)
svm_reg2.fit(X, y)
def find_support_vectors(svm_reg, X, y):
y_pred = svm_reg.predict(X)
off_margin = (np.abs(y - y_pred) >= svm_reg.epsilon)
return np.argwhere(off_margin)
svm_reg1.support_ = find_support_vectors(svm_reg1, X, y)
svm_reg2.support_ = find_support_vectors(svm_reg2, X, y)
eps_x1 = 1
eps_y_pred = svm_reg1.predict([[eps_x1]])
def plot_svm_regression(svm_reg, X, y, axes):
x1s = np.linspace(axes[0], axes[1], 100).reshape(100, 1)
y_pred = svm_reg.predict(x1s)
plt.plot(x1s, y_pred, "k-", linewidth=2, label=r"$\hat{y}$")
plt.plot(x1s, y_pred + svm_reg.epsilon, "k--")
plt.plot(x1s, y_pred - svm_reg.epsilon, "k--")
plt.scatter(X[svm_reg.support_], y[svm_reg.support_], s=180, facecolors='#FFAAAA')
plt.plot(X, y, "bo")
plt.xlabel(r"$x_1$", fontsize=18)
plt.legend(loc="upper left", fontsize=18)
plt.axis(axes)
plt.figure(figsize=(9, 4))
plt.subplot(121)
plot_svm_regression(svm_reg1, X, y, [0, 2, 3, 11])
plt.title(r"$\epsilon = {}$".format(svm_reg1.epsilon), fontsize=18)
plt.ylabel(r"$y$", fontsize=18, rotation=0)
#plt.plot([eps_x1, eps_x1], [eps_y_pred, eps_y_pred - svm_reg1.epsilon], "k-", linewidth=2)
plt.annotate(
'', xy=(eps_x1, eps_y_pred), xycoords='data',
xytext=(eps_x1, eps_y_pred - svm_reg1.epsilon),
textcoords='data', arrowprops={'arrowstyle': '<->', 'linewidth': 1.5}
)
plt.text(0.91, 5.6, r"$\epsilon$", fontsize=20)
plt.subplot(122)
plot_svm_regression(svm_reg2, X, y, [0, 2, 3, 11])
plt.title(r"$\epsilon = {}$".format(svm_reg2.epsilon), fontsize=18)
save_fig("svm_regression_plot")
plt.show()
np.random.seed(42)
m = 100
X = 2 * np.random.rand(m, 1) - 1
y = (0.2 + 0.1 * X + 0.5 * X**2 + np.random.randn(m, 1)/10).ravel()
from sklearn.svm import SVR
svm_poly_reg = SVR(kernel="poly", degree=2, C=100, epsilon=0.1)
svm_poly_reg.fit(X, y)
from sklearn.svm import SVR
svm_poly_reg1 = SVR(kernel="poly", degree=2, C=100, epsilon=0.1)
svm_poly_reg2 = SVR(kernel="poly", degree=2, C=0.01, epsilon=0.1)
svm_poly_reg1.fit(X, y)
svm_poly_reg2.fit(X, y)
plt.figure(figsize=(9, 4))
plt.subplot(121)
plot_svm_regression(svm_poly_reg1, X, y, [-1, 1, 0, 1])
plt.title(r"$degree={}, C={}, \epsilon = {}$".format(svm_poly_reg1.degree, svm_poly_reg1.C, svm_poly_reg1.epsilon), fontsize=18)
plt.ylabel(r"$y$", fontsize=18, rotation=0)
plt.subplot(122)
plot_svm_regression(svm_poly_reg2, X, y, [-1, 1, 0, 1])
plt.title(r"$degree={}, C={}, \epsilon = {}$".format(svm_poly_reg2.degree, svm_poly_reg2.C, svm_poly_reg2.epsilon), fontsize=18)
save_fig("svm_with_polynomial_kernel_plot")
plt.show()
Explanation: Regression
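For reference, the epsilon-insensitive loss used by SVM regression is
$$ L_\epsilon\left(y, \hat{y}\right) = \max\left(0, \left|y - \hat{y}\right| - \epsilon\right), $$
so errors smaller than $\epsilon$ are ignored entirely; widening $\epsilon$ widens the tube around the predictions and reduces the number of support vectors, which is what the plots above illustrate.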
End of explanation
iris = datasets.load_iris()
X = iris["data"][:, (2, 3)] # petal length, petal width
y = (iris["target"] == 2).astype(np.float64) # Iris-Virginica
from mpl_toolkits.mplot3d import Axes3D
def plot_3D_decision_function(ax, w, b, x1_lim=[4, 6], x2_lim=[0.8, 2.8]):
x1_in_bounds = (X[:, 0] > x1_lim[0]) & (X[:, 0] < x1_lim[1])
X_crop = X[x1_in_bounds]
y_crop = y[x1_in_bounds]
x1s = np.linspace(x1_lim[0], x1_lim[1], 20)
x2s = np.linspace(x2_lim[0], x2_lim[1], 20)
x1, x2 = np.meshgrid(x1s, x2s)
xs = np.c_[x1.ravel(), x2.ravel()]
df = (xs.dot(w) + b).reshape(x1.shape)
m = 1 / np.linalg.norm(w)
boundary_x2s = -x1s*(w[0]/w[1])-b/w[1]
margin_x2s_1 = -x1s*(w[0]/w[1])-(b-1)/w[1]
margin_x2s_2 = -x1s*(w[0]/w[1])-(b+1)/w[1]
ax.plot_surface(x1s, x2, np.zeros_like(x1), color="b", alpha=0.2, cstride=100, rstride=100)  # Z must be a 2D array, not a scalar
ax.plot(x1s, boundary_x2s, 0, "k-", linewidth=2, label=r"$h=0$")
ax.plot(x1s, margin_x2s_1, 0, "k--", linewidth=2, label=r"$h=\pm 1$")
ax.plot(x1s, margin_x2s_2, 0, "k--", linewidth=2)
ax.plot(X_crop[:, 0][y_crop==1], X_crop[:, 1][y_crop==1], 0, "g^")
ax.plot_wireframe(x1, x2, df, alpha=0.3, color="k")
ax.plot(X_crop[:, 0][y_crop==0], X_crop[:, 1][y_crop==0], 0, "bs")
ax.axis(x1_lim + x2_lim)
ax.text(4.5, 2.5, 3.8, "Decision function $h$", fontsize=15)
ax.set_xlabel(r"Petal length", fontsize=15)
ax.set_ylabel(r"Petal width", fontsize=15)
ax.set_zlabel(r"$h = \mathbf{w}^T \cdot \mathbf{x} + b$", fontsize=18)
ax.legend(loc="upper left", fontsize=16)
fig = plt.figure(figsize=(11, 6))
ax1 = fig.add_subplot(111, projection='3d')
plot_3D_decision_function(ax1, w=svm_clf2.coef_[0], b=svm_clf2.intercept_[0])
save_fig("iris_3D_plot")
plt.show()
Explanation: Under the hood
End of explanation
def plot_2D_decision_function(w, b, ylabel=True, x1_lim=[-3, 3]):
x1 = np.linspace(x1_lim[0], x1_lim[1], 200)
y = w * x1 + b
m = 1 / w
plt.plot(x1, y)
plt.plot(x1_lim, [1, 1], "k:")
plt.plot(x1_lim, [-1, -1], "k:")
plt.axhline(y=0, color='k')
plt.axvline(x=0, color='k')
plt.plot([m, m], [0, 1], "k--")
plt.plot([-m, -m], [0, -1], "k--")
plt.plot([-m, m], [0, 0], "k-o", linewidth=3)
plt.axis(x1_lim + [-2, 2])
plt.xlabel(r"$x_1$", fontsize=16)
if ylabel:
plt.ylabel(r"$w_1 x_1$ ", rotation=0, fontsize=16)
plt.title(r"$w_1 = {}$".format(w), fontsize=16)
plt.figure(figsize=(12, 3.2))
plt.subplot(121)
plot_2D_decision_function(1, 0)
plt.subplot(122)
plot_2D_decision_function(0.5, 0, ylabel=False)
save_fig("small_w_large_margin_plot")
plt.show()
from sklearn.svm import SVC
from sklearn import datasets
iris = datasets.load_iris()
X = iris["data"][:, (2, 3)] # petal length, petal width
y = (iris["target"] == 2).astype(np.float64) # Iris-Virginica
svm_clf = SVC(kernel="linear", C=1)
svm_clf.fit(X, y)
svm_clf.predict([[5.3, 1.3]])
Explanation: Small weight vector results in a large margin
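To make the connection explicit: for a linear decision function $h(\mathbf{x}) = \mathbf{w}^T \mathbf{x} + b$, the margin is bounded by the points where $h = \pm 1$, so its width is
$$ \frac{2}{\lVert \mathbf{w} \rVert}, $$
and halving $w_1$ (right-hand plot) doubles the distance between the points where $w_1 x_1 = \pm 1$.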
End of explanation
t = np.linspace(-2, 4, 200)
h = np.where(1 - t < 0, 0, 1 - t) # max(0, 1-t)
plt.figure(figsize=(5,2.8))
plt.plot(t, h, "b-", linewidth=2, label="$max(0, 1 - t)$")
plt.grid(True, which='both')
plt.axhline(y=0, color='k')
plt.axvline(x=0, color='k')
plt.yticks(np.arange(-1, 2.5, 1))
plt.xlabel("$t$", fontsize=16)
plt.axis([-2, 4, -1, 2.5])
plt.legend(loc="upper right", fontsize=16)
save_fig("hinge_plot")
plt.show()
Explanation: Hinge loss
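The curve above is the hinge loss used by soft-margin SVMs,
$$ \ell(t) = \max\left(0, 1 - t\right), \qquad t = y\left(\mathbf{w}^T \mathbf{x} + b\right) \text{ with } y \in \{-1, +1\}, $$
which is zero once an instance is on the correct side of the margin ($t \ge 1$) and grows linearly otherwise, so its subgradient is $0$ for $t > 1$ and $-1$ for $t < 1$.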
End of explanation
X, y = make_moons(n_samples=1000, noise=0.4, random_state=42)
plt.plot(X[:, 0][y==0], X[:, 1][y==0], "bs")
plt.plot(X[:, 0][y==1], X[:, 1][y==1], "g^")
import time
tol = 0.1
tols = []
times = []
for i in range(10):
svm_clf = SVC(kernel="poly", gamma=3, C=10, tol=tol, verbose=1)
t1 = time.time()
svm_clf.fit(X, y)
t2 = time.time()
times.append(t2-t1)
tols.append(tol)
print(i, tol, t2-t1)
tol /= 10
plt.semilogx(tols, times)
Explanation: Extra material
Training time
End of explanation
# Training set
X = iris["data"][:, (2, 3)] # petal length, petal width
y = (iris["target"] == 2).astype(np.float64).reshape(-1, 1) # Iris-Virginica
from sklearn.base import BaseEstimator
class MyLinearSVC(BaseEstimator):
def __init__(self, C=1, eta0=1, eta_d=10000, n_epochs=1000, random_state=None):
self.C = C
self.eta0 = eta0
self.n_epochs = n_epochs
self.random_state = random_state
self.eta_d = eta_d
def eta(self, epoch):
return self.eta0 / (epoch + self.eta_d)
def fit(self, X, y):
# Random initialization
if self.random_state:
np.random.seed(self.random_state)
w = np.random.randn(X.shape[1], 1) # n feature weights
b = 0
m = len(X)
t = y * 2 - 1 # -1 if t==0, +1 if t==1
X_t = X * t
self.Js=[]
# Training
for epoch in range(self.n_epochs):
support_vectors_idx = (X_t.dot(w) + t * b < 1).ravel()
X_t_sv = X_t[support_vectors_idx]
t_sv = t[support_vectors_idx]
J = 1/2 * np.sum(w * w) + self.C * (np.sum(1 - X_t_sv.dot(w)) - b * np.sum(t_sv))
self.Js.append(J)
w_gradient_vector = w - self.C * np.sum(X_t_sv, axis=0).reshape(-1, 1)
b_derivative = -self.C * np.sum(t_sv)
w = w - self.eta(epoch) * w_gradient_vector
b = b - self.eta(epoch) * b_derivative
self.intercept_ = np.array([b])
self.coef_ = np.array([w])
support_vectors_idx = (X_t.dot(w) + t * b < 1).ravel()
self.support_vectors_ = X[support_vectors_idx]
return self
def decision_function(self, X):
return X.dot(self.coef_[0]) + self.intercept_[0]
def predict(self, X):
return (self.decision_function(X) >= 0).astype(np.float64)
C=2
svm_clf = MyLinearSVC(C=C, eta0 = 10, eta_d = 1000, n_epochs=60000, random_state=2)
svm_clf.fit(X, y)
svm_clf.predict(np.array([[5, 2], [4, 1]]))
plt.plot(range(svm_clf.n_epochs), svm_clf.Js)
plt.axis([0, svm_clf.n_epochs, 0, 100])
print(svm_clf.intercept_, svm_clf.coef_)
svm_clf2 = SVC(kernel="linear", C=C)
svm_clf2.fit(X, y.ravel())
print(svm_clf2.intercept_, svm_clf2.coef_)
yr = y.ravel()
plt.figure(figsize=(12,3.2))
plt.subplot(121)
plt.plot(X[:, 0][yr==1], X[:, 1][yr==1], "g^", label="Iris-Virginica")
plt.plot(X[:, 0][yr==0], X[:, 1][yr==0], "bs", label="Not Iris-Virginica")
plot_svc_decision_boundary(svm_clf, 4, 6)
plt.xlabel("Petal length", fontsize=14)
plt.ylabel("Petal width", fontsize=14)
plt.title("MyLinearSVC", fontsize=14)
plt.axis([4, 6, 0.8, 2.8])
plt.subplot(122)
plt.plot(X[:, 0][yr==1], X[:, 1][yr==1], "g^")
plt.plot(X[:, 0][yr==0], X[:, 1][yr==0], "bs")
plot_svc_decision_boundary(svm_clf2, 4, 6)
plt.xlabel("Petal length", fontsize=14)
plt.title("SVC", fontsize=14)
plt.axis([4, 6, 0.8, 2.8])
from sklearn.linear_model import SGDClassifier
sgd_clf = SGDClassifier(loss="hinge", alpha = 0.017, n_iter = 50, random_state=42)
sgd_clf.fit(X, y.ravel())
m = len(X)
t = y * 2 - 1 # -1 if t==0, +1 if t==1
X_b = np.c_[np.ones((m, 1)), X] # Add bias input x0=1
X_b_t = X_b * t
sgd_theta = np.r_[sgd_clf.intercept_[0], sgd_clf.coef_[0]]
print(sgd_theta)
support_vectors_idx = (X_b_t.dot(sgd_theta) < 1).ravel()
sgd_clf.support_vectors_ = X[support_vectors_idx]
sgd_clf.C = C
plt.figure(figsize=(5.5,3.2))
plt.plot(X[:, 0][yr==1], X[:, 1][yr==1], "g^")
plt.plot(X[:, 0][yr==0], X[:, 1][yr==0], "bs")
plot_svc_decision_boundary(sgd_clf, 4, 6)
plt.xlabel("Petal length", fontsize=14)
plt.ylabel("Petal width", fontsize=14)
plt.title("SGDClassifier", fontsize=14)
plt.axis([4, 6, 0.8, 2.8])
Explanation: Linear SVM classifier implementation using Batch Gradient Descent
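For reference, MyLinearSVC above runs batch gradient descent on the primal soft-margin objective
$$ J(\mathbf{w}, b) = \frac{1}{2}\,\mathbf{w}^T \mathbf{w} + C \sum_{i \in \mathcal{S}} \left(1 - t^{(i)}\left(\mathbf{w}^T \mathbf{x}^{(i)} + b\right)\right), $$
where $\mathcal{S}$ is the set of margin violations found in each epoch, with subgradients $\nabla_{\mathbf{w}} J = \mathbf{w} - C \sum_{i \in \mathcal{S}} t^{(i)} \mathbf{x}^{(i)}$ and $\partial J / \partial b = -C \sum_{i \in \mathcal{S}} t^{(i)}$, which is exactly what the w_gradient_vector and b_derivative lines compute.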
End of explanation
from sklearn import datasets
iris = datasets.load_iris()
X = iris["data"][:, (2, 3)] # petal length, petal width
y = iris["target"]
setosa_or_versicolor = (y == 0) | (y == 1)
X = X[setosa_or_versicolor]
y = y[setosa_or_versicolor]
from sklearn.svm import SVC, LinearSVC
from sklearn.linear_model import SGDClassifier
from sklearn.preprocessing import StandardScaler
C = 5
alpha = 1 / (C * len(X))
lin_clf = LinearSVC(loss="hinge", C=C, random_state=42)
svm_clf = SVC(kernel="linear", C=C)
sgd_clf = SGDClassifier(loss="hinge", learning_rate="constant", eta0=0.001, alpha=alpha,
n_iter=100000, random_state=42)
scaler = StandardScaler()
X_scaled = scaler.fit_transform(X)
lin_clf.fit(X_scaled, y)
svm_clf.fit(X_scaled, y)
sgd_clf.fit(X_scaled, y)
print("LinearSVC: ", lin_clf.intercept_, lin_clf.coef_)
print("SVC: ", svm_clf.intercept_, svm_clf.coef_)
print("SGDClassifier(alpha={:.5f}):".format(sgd_clf.alpha), sgd_clf.intercept_, sgd_clf.coef_)
Explanation: Exercise solutions
1. to 7.
See appendix A.
8.
Exercise: train a LinearSVC on a linearly separable dataset. Then train an SVC and a SGDClassifier on the same dataset. See if you can get them to produce roughly the same model.
Let's use the Iris dataset: the Iris Setosa and Iris Versicolor classes are linearly separable.
End of explanation
# Compute the slope and bias of each decision boundary
w1 = -lin_clf.coef_[0, 0]/lin_clf.coef_[0, 1]
b1 = -lin_clf.intercept_[0]/lin_clf.coef_[0, 1]
w2 = -svm_clf.coef_[0, 0]/svm_clf.coef_[0, 1]
b2 = -svm_clf.intercept_[0]/svm_clf.coef_[0, 1]
w3 = -sgd_clf.coef_[0, 0]/sgd_clf.coef_[0, 1]
b3 = -sgd_clf.intercept_[0]/sgd_clf.coef_[0, 1]
# Transform the decision boundary lines back to the original scale
line1 = scaler.inverse_transform([[-10, -10 * w1 + b1], [10, 10 * w1 + b1]])
line2 = scaler.inverse_transform([[-10, -10 * w2 + b2], [10, 10 * w2 + b2]])
line3 = scaler.inverse_transform([[-10, -10 * w3 + b3], [10, 10 * w3 + b3]])
# Plot all three decision boundaries
plt.figure(figsize=(11, 4))
plt.plot(line1[:, 0], line1[:, 1], "k:", label="LinearSVC")
plt.plot(line2[:, 0], line2[:, 1], "b--", linewidth=2, label="SVC")
plt.plot(line3[:, 0], line3[:, 1], "r-", label="SGDClassifier")
plt.plot(X[:, 0][y==1], X[:, 1][y==1], "bs") # label="Iris-Versicolor"
plt.plot(X[:, 0][y==0], X[:, 1][y==0], "yo") # label="Iris-Setosa"
plt.xlabel("Petal length", fontsize=14)
plt.ylabel("Petal width", fontsize=14)
plt.legend(loc="upper center", fontsize=14)
plt.axis([0, 5.5, 0, 2])
plt.show()
Explanation: Let's plot the decision boundaries of these three models:
End of explanation
from sklearn.datasets import fetch_mldata
mnist = fetch_mldata("MNIST original")
X = mnist["data"]
y = mnist["target"]
X_train = X[:60000]
y_train = y[:60000]
X_test = X[60000:]
y_test = y[60000:]
Explanation: Close enough!
9.
Exercise: train an SVM classifier on the MNIST dataset. Since SVM classifiers are binary classifiers, you will need to use one-versus-all to classify all 10 digits. You may want to tune the hyperparameters using small validation sets to speed up the process. What accuracy can you reach?
First, let's load the dataset and split it into a training set and a test set. We could use train_test_split() but people usually just take the first 60,000 instances for the training set, and the last 10,000 instances for the test set (this makes it possible to compare your model's performance with others):
End of explanation
np.random.seed(42)
rnd_idx = np.random.permutation(60000)
X_train = X_train[rnd_idx]
y_train = y_train[rnd_idx]
Explanation: Many training algorithms are sensitive to the order of the training instances, so it's generally good practice to shuffle them first:
End of explanation
lin_clf = LinearSVC(random_state=42)
lin_clf.fit(X_train, y_train)
Explanation: Let's start simple, with a linear SVM classifier. It will automatically use the One-vs-All (also called One-vs-the-Rest, OvR) strategy, so there's nothing special we need to do. Easy!
End of explanation
from sklearn.metrics import accuracy_score
y_pred = lin_clf.predict(X_train)
accuracy_score(y_train, y_pred)
Explanation: Let's make predictions on the training set and measure the accuracy (we don't want to measure it on the test set yet, since we have not selected and trained the final model yet):
End of explanation
scaler = StandardScaler()
X_train_scaled = scaler.fit_transform(X_train.astype(np.float32))
X_test_scaled = scaler.transform(X_test.astype(np.float32))
lin_clf = LinearSVC(random_state=42)
lin_clf.fit(X_train_scaled, y_train)
y_pred = lin_clf.predict(X_train_scaled)
accuracy_score(y_train, y_pred)
Explanation: Wow, 82% accuracy on MNIST is a really bad performance. This linear model is certainly too simple for MNIST, but perhaps we just needed to scale the data first:
End of explanation
svm_clf = SVC(decision_function_shape="ovr")
svm_clf.fit(X_train_scaled[:10000], y_train[:10000])
y_pred = svm_clf.predict(X_train_scaled)
accuracy_score(y_train, y_pred)
Explanation: That's much better (we cut the error rate in two), but still not great at all for MNIST. If we want to use an SVM, we will have to use a kernel. Let's try an SVC with an RBF kernel (the default).
Warning: if you are using Scikit-Learn ≤ 0.19, the SVC class will use the One-vs-One (OvO) strategy by default, so you must explicitly set decision_function_shape="ovr" if you want to use the OvR strategy instead (OvR is the default since 0.19).
End of explanation
from sklearn.model_selection import RandomizedSearchCV
from scipy.stats import reciprocal, uniform
param_distributions = {"gamma": reciprocal(0.001, 0.1), "C": uniform(1, 10)}
rnd_search_cv = RandomizedSearchCV(svm_clf, param_distributions, n_iter=10, verbose=2)
rnd_search_cv.fit(X_train_scaled[:1000], y_train[:1000])
rnd_search_cv.best_estimator_
rnd_search_cv.best_score_
Explanation: That's promising, we get better performance even though we trained the model on 6 times less data. Let's tune the hyperparameters by doing a randomized search with cross validation. We will do this on a small dataset just to speed up the process:
End of explanation
rnd_search_cv.best_estimator_.fit(X_train_scaled, y_train)
y_pred = rnd_search_cv.best_estimator_.predict(X_train_scaled)
accuracy_score(y_train, y_pred)
Explanation: This looks pretty low but remember we only trained the model on 1,000 instances. Let's retrain the best estimator on the whole training set (run this at night, it will take hours):
End of explanation
y_pred = rnd_search_cv.best_estimator_.predict(X_test_scaled)
accuracy_score(y_test, y_pred)
Explanation: Ah, this looks good! Let's select this model. Now we can test it on the test set:
End of explanation
from sklearn.datasets import fetch_california_housing
housing = fetch_california_housing()
X = housing["data"]
y = housing["target"]
Explanation: Not too bad, but apparently the model is overfitting slightly. It's tempting to tweak the hyperparameters a bit more (e.g. decreasing C and/or gamma), but we would run the risk of overfitting the test set. Other people have found that the hyperparameters C=5 and gamma=0.005 yield even better performance (over 98% accuracy). By running the randomized search for longer and on a larger part of the training set, you may be able to find this as well.
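If you want to try those reported values directly, a minimal sketch (using the variables already defined above; C=5 and gamma=0.005 are simply the values quoted in the previous paragraph, not tuned here) would be:
svm_clf_c5 = SVC(kernel="rbf", C=5, gamma=0.005)  # hyperparameters reported by others
svm_clf_c5.fit(X_train_scaled, y_train)  # slow: trains on the full 60,000-instance set
accuracy_score(y_test, svm_clf_c5.predict(X_test_scaled))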
10.
Exercise: train an SVM regressor on the California housing dataset.
Let's load the dataset using Scikit-Learn's fetch_california_housing() function:
End of explanation
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
Explanation: Split it into a training set and a test set:
End of explanation
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
X_train_scaled = scaler.fit_transform(X_train)
X_test_scaled = scaler.transform(X_test)
Explanation: Don't forget to scale the data:
End of explanation
from sklearn.svm import LinearSVR
lin_svr = LinearSVR(random_state=42)
lin_svr.fit(X_train_scaled, y_train)
Explanation: Let's train a simple LinearSVR first:
End of explanation
from sklearn.metrics import mean_squared_error
y_pred = lin_svr.predict(X_train_scaled)
mse = mean_squared_error(y_train, y_pred)
mse
Explanation: Let's see how it performs on the training set:
End of explanation
np.sqrt(mse)
Explanation: Let's look at the RMSE:
End of explanation
from sklearn.svm import SVR
from sklearn.model_selection import RandomizedSearchCV
from scipy.stats import reciprocal, uniform
param_distributions = {"gamma": reciprocal(0.001, 0.1), "C": uniform(1, 10)}
rnd_search_cv = RandomizedSearchCV(SVR(), param_distributions, n_iter=10, verbose=2, random_state=42)
rnd_search_cv.fit(X_train_scaled, y_train)
rnd_search_cv.best_estimator_
Explanation: In this training set, the targets represent hundreds of thousands of dollars. The RMSE gives a rough idea of the kind of error you should expect (with a higher weight for large errors): so with this model we can expect errors somewhere around $100,000. Not great. Let's see if we can do better with an RBF Kernel. We will use randomized search with cross validation to find the appropriate hyperparameter values for C and gamma:
End of explanation
y_pred = rnd_search_cv.best_estimator_.predict(X_train_scaled)
mse = mean_squared_error(y_train, y_pred)
np.sqrt(mse)
Explanation: Now let's measure the RMSE on the training set:
End of explanation
y_pred = rnd_search_cv.best_estimator_.predict(X_test_scaled)
mse = mean_squared_error(y_test, y_pred)
np.sqrt(mse)
Explanation: Looks much better than the linear model. Let's select this model and evaluate it on the test set:
End of explanation |
2,571 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Autoregressions
This notebook introduces autoregression modeling using the AutoReg model. It also shows how ar_select_order assists in selecting models that minimize an information criterion such as the AIC.
An autoregressive model has dynamics given by
$$ y_t = \delta + \phi_1 y_{t-1} + \ldots + \phi_p y_{t-p} + \epsilon_t. $$
AutoReg also permits models with
Step1: This cell sets the plotting style, registers pandas date converters for matplotlib, and sets the default figure size.
Step2: The first set of examples uses the month-over-month growth rate in U.S. Housing starts that has not been seasonally adjusted. The seasonality is evident by the regular pattern of peaks and troughs. We set the frequency for the time series to "MS" (month-start) to avoid warnings when using AutoReg.
Step3: We can start with an AR(3). While this is not a good model for this data, it demonstrates the basic use of the API.
Step4: AutoReg supports the same covariance estimators as OLS. Below, we use cov_type="HC0", which is White's covariance estimator. While the parameter estimates are the same, all of the quantities that depend on the standard error change.
Step5: plot_predict visualizes forecasts. Here we produce a large number of forecasts which show the strong seasonality captured by the model.
Step6: plot_diagnostics indicates that the model captures the key features in the data.
Step7: Seasonal Dummies
AutoReg supports seasonal dummies which are an alternative way to model seasonality. Including the dummies shortens the dynamics to only an AR(2).
Step8: The seasonal dummies are obvious in the forecasts which has a non-trivial seasonal component in all periods 10 years in to the future.
Step9: Seasonal Dynamics
While AutoReg does not directly support Seasonal components since it uses OLS to estimate parameters, it is possible to capture seasonal dynamics using an over-parametrized Seasonal AR that does not impose the restrictions in the Seasonal AR.
Step10: We start by selecting a model using the simple method that only chooses the maximum lag. All lower lags are automatically included. The maximum lag to check is set to 13 since this allows the model to nest a Seasonal AR that has both a short-run AR(1) component and a Seasonal AR(1) component, so that
$$ (1-\phi_s L^{12})(1-\phi_1 L)y_t = \epsilon_t $$
which becomes
$$ y_t = \phi_1 y_{t-1} +\phi_s Y_{t-12} - \phi_1\phi_s Y_{t-13} + \epsilon_t $$
when expanded. AutoReg does not enforce the structure, but can estimate the nesting model
$$ y_t = \phi_1 y_{t-1} +\phi_{12} Y_{t-12} - \phi_{13} Y_{t-13} + \epsilon_t. $$
We see that all 13 lags are selected.
Step11: It seems unlikely that all 13 lags are required. We can set glob=True to search all $2^{13}$ models that include up to 13 lags.
Here we see that the first three are selected, as is the 7th, and finally the 12th and 13th are selected. This is superficially similar to the structure described above.
After fitting the model, we take a look at the diagnostic plots that indicate that this specification appears to be adequate to capture the dynamics in the data.
Step12: We can also include seasonal dummies. These are all insignificant since the model is using year-over-year changes.
Step13: Industrial Production
We will use the industrial production index data to examine forecasting.
Step14: We will start by selecting a model using up to 13 lags. An AR(13) minimizes the BIC criterion even though many coefficients are insignificant.
Step15: We can also use a global search which allows longer lags to enter if needed without requiring the shorter lags. Here we see many lags dropped. The model indicates there may be some seasonality in the data.
Step16: plot_predict can be used to produce forecast plots along with confidence intervals. Here we produce forecasts starting at the last observation and continuing for 18 months.
Step17: The forecasts from the full model and the restricted model are very similar. I also include an AR(5) which has very different dynamics
Step18: The diagnostics indicate the model captures most of the dynamics in the data. The ACF shows a pattern at the seasonal frequency and so a more complete seasonal model (SARIMAX) may be needed.
Step19: Forecasting
Forecasts are produced using the predict method from a results instance. The default produces static forecasts which are one-step forecasts. Producing multi-step forecasts requires using dynamic=True.
In this next cell, we produce 12-step-ahead forecasts for the final 24 periods in the sample. This requires a loop.
Note
Step20: Comparing to SARIMAX
SARIMAX is an implementation of a Seasonal Autoregressive Integrated Moving Average with eXogenous regressors model. It supports
Step21: Custom Deterministic Processes
The deterministic parameter allows a custom DeterministicProcess to be used. This allows for more complex deterministic terms to be constructed, for example one that includes seasonal components with two periods, or, as the next example shows, one that uses a Fourier series rather than seasonal dummies. | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import pandas as pd
import pandas_datareader as pdr
import seaborn as sns
from statsmodels.tsa.ar_model import AutoReg, ar_select_order
from statsmodels.tsa.api import acf, pacf, graphics
Explanation: Autoregressions
This notebook introduces autoregression modeling using the AutoReg model. It also shows how ar_select_order assists in selecting models that minimize an information criterion such as the AIC.
An autoregressive model has dynamics given by
$$ y_t = \delta + \phi_1 y_{t-1} + \ldots + \phi_p y_{t-p} + \epsilon_t. $$
AutoReg also permits models with:
Deterministic terms (trend)
n: No deterministic term
c: Constant (default)
ct: Constant and time trend
t: Time trend only
Seasonal dummies (seasonal)
True includes $s-1$ dummies where $s$ is the period of the time series (e.g., 12 for monthly)
Custom deterministic terms (deterministic)
Accepts a DeterministicProcess
Exogenous variables (exog)
A DataFrame or array of exogenous variables to include in the model
Omission of selected lags (lags)
If lags is an iterable of integers, then only these are included in the model.
The complete specification is
$$ y_t = \delta_0 + \delta_1 t + \phi_1 y_{t-1} + \ldots + \phi_p y_{t-p} + \sum_{i=1}^{s-1} \gamma_i d_i + \sum_{j=1}^{m} \kappa_j x_{t,j} + \epsilon_t. $$
where:
$d_i$ is a seasonal dummy that is 1 if $mod(t, period) = i$. Period 0 is excluded if the model contains a constant (c is in trend).
$t$ is a time trend ($1,2,\ldots$) that starts with 1 in the first observation.
$x_{t,j}$ are exogenous regressors. Note these are time-aligned to the left-hand-side variable when defining a model.
$\epsilon_t$ is assumed to be a white noise process.
This first cell imports standard packages and sets plots to appear inline.
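As a small illustrative sketch (not used in this notebook; y and X are placeholders for an endogenous series and optional exogenous regressors), the options above combine in the constructor like
AutoReg(y, lags=[1, 2, 12], trend="ct", seasonal=True, exog=X, old_names=False)
which would fit an AR with lags 1, 2 and 12, a constant plus time trend, seasonal dummies, and exogenous regressors.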
End of explanation
sns.set_style('darkgrid')
pd.plotting.register_matplotlib_converters()
# Default figure size
sns.mpl.rc('figure',figsize=(16, 6))
Explanation: This cell sets the plotting style, registers pandas date converters for matplotlib, and sets the default figure size.
End of explanation
data = pdr.get_data_fred('HOUSTNSA', '1959-01-01', '2019-06-01')
housing = data.HOUSTNSA.pct_change().dropna()
# Scale by 100 to get percentages
housing = 100 * housing.asfreq('MS')
fig, ax = plt.subplots()
ax = housing.plot(ax=ax)
Explanation: The first set of examples uses the month-over-month growth rate in U.S. Housing starts that has not been seasonally adjusted. The seasonality is evident by the regular pattern of peaks and troughs. We set the frequency for the time series to "MS" (month-start) to avoid warnings when using AutoReg.
End of explanation
mod = AutoReg(housing, 3, old_names=False)
res = mod.fit()
print(res.summary())
Explanation: We can start with an AR(3). While this is not a good model for this data, it demonstrates the basic use of the API.
End of explanation
res = mod.fit(cov_type="HC0")
print(res.summary())
sel = ar_select_order(housing, 13, old_names=False)
sel.ar_lags
res = sel.model.fit()
print(res.summary())
Explanation: AutoReg supports the same covariance estimators as OLS. Below, we use cov_type="HC0", which is White's covariance estimator. While the parameter estimates are the same, all of the quantities that depend on the standard error change.
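Because the estimators match OLS, the other robust options should work the same way; for example, a one-line sketch (not run here) using the HC3 small-sample adjustment:
res_hc3 = mod.fit(cov_type="HC3")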
End of explanation
fig = res.plot_predict(720, 840)
Explanation: plot_predict visualizes forecasts. Here we produce a large number of forecasts which show the strong seasonality captured by the model.
End of explanation
fig = plt.figure(figsize=(16,9))
fig = res.plot_diagnostics(fig=fig, lags=30)
Explanation: plot_diagnostics indicates that the model captures the key features in the data.
End of explanation
sel = ar_select_order(housing, 13, seasonal=True, old_names=False)
sel.ar_lags
res = sel.model.fit()
print(res.summary())
Explanation: Seasonal Dummies
AutoReg supports seasonal dummies which are an alternative way to model seasonality. Including the dummies shortens the dynamics to only an AR(2).
End of explanation
fig = res.plot_predict(720, 840)
fig = plt.figure(figsize=(16,9))
fig = res.plot_diagnostics(lags=30, fig=fig)
Explanation: The seasonal dummies are obvious in the forecasts, which have a non-trivial seasonal component in all periods 10 years into the future.
End of explanation
yoy_housing = data.HOUSTNSA.pct_change(12).resample("MS").last().dropna()
_, ax = plt.subplots()
ax = yoy_housing.plot(ax=ax)
Explanation: Seasonal Dynamics
While AutoReg does not directly support Seasonal components since it uses OLS to estimate parameters, it is possible to capture seasonal dynamics using an over-parametrized Seasonal AR that does not impose the restrictions in the Seasonal AR.
End of explanation
sel = ar_select_order(yoy_housing, 13, old_names=False)
sel.ar_lags
Explanation: We start by selecting a model using the simple method that only chooses the maximum lag. All lower lags are automatically included. The maximum lag to check is set to 13 since this allows the model to nest a Seasonal AR that has both a short-run AR(1) component and a Seasonal AR(1) component, so that
$$ (1-\phi_s L^{12})(1-\phi_1 L)y_t = \epsilon_t $$
which becomes
$$ y_t = \phi_1 y_{t-1} +\phi_s Y_{t-12} - \phi_1\phi_s Y_{t-13} + \epsilon_t $$
when expanded. AutoReg does not enforce the structure, but can estimate the nesting model
$$ y_t = \phi_1 y_{t-1} +\phi_{12} Y_{t-12} - \phi_{13} Y_{t-13} + \epsilon_t. $$
We see that all 13 lags are selected.
End of explanation
sel = ar_select_order(yoy_housing, 13, glob=True, old_names=False)
sel.ar_lags
res = sel.model.fit()
print(res.summary())
fig = plt.figure(figsize=(16,9))
fig = res.plot_diagnostics(fig=fig, lags=30)
Explanation: It seems unlikely that all 13 lags are required. We can set glob=True to search all $2^{13}$ models that include up to 13 lags.
Here we see that the first three are selected, as is the 7th, and finally the 12th and 13th are selected. This is superficially similar to the structure described above.
After fitting the model, we take a look at the diagnostic plots that indicate that this specification appears to be adequate to capture the dynamics in the data.
End of explanation
sel = ar_select_order(yoy_housing, 13, glob=True, seasonal=True, old_names=False)
sel.ar_lags
res = sel.model.fit()
print(res.summary())
Explanation: We can also include seasonal dummies. These are all insignificant since the model is using year-over-year changes.
End of explanation
data = pdr.get_data_fred('INDPRO', '1959-01-01', '2019-06-01')
ind_prod = data.INDPRO.pct_change(12).dropna().asfreq('MS')
_, ax = plt.subplots(figsize=(16,9))
ind_prod.plot(ax=ax)
Explanation: Industrial Production
We will use the industrial production index data to examine forecasting.
End of explanation
sel = ar_select_order(ind_prod, 13, 'bic', old_names=False)
res = sel.model.fit()
print(res.summary())
Explanation: We will start by selecting a model using up to 13 lags. An AR(13) minimizes the BIC criterion even though many coefficients are insignificant.
End of explanation
sel = ar_select_order(ind_prod, 13, 'bic', glob=True, old_names=False)
sel.ar_lags
res_glob = sel.model.fit()
print(res_glob.summary())
Explanation: We can also use a global search which allows longer lags to enter if needed without requiring the shorter lags. Here we see many lags dropped. The model indicates there may be some seasonality in the data.
End of explanation
ind_prod.shape
fig = res_glob.plot_predict(start=714, end=732)
Explanation: plot_predict can be used to produce forecast plots along with confidence intervals. Here we produce forecasts starting at the last observation and continuing for 18 months.
End of explanation
res_ar5 = AutoReg(ind_prod, 5, old_names=False).fit()
predictions = pd.DataFrame({"AR(5)": res_ar5.predict(start=714, end=726),
"AR(13)": res.predict(start=714, end=726),
"Restr. AR(13)": res_glob.predict(start=714, end=726)})
_, ax = plt.subplots()
ax = predictions.plot(ax=ax)
Explanation: The forecasts from the full model and the restricted model are very similar. I also include an AR(5) which has very different dynamics
End of explanation
fig = plt.figure(figsize=(16,9))
fig = res_glob.plot_diagnostics(fig=fig, lags=30)
Explanation: The diagnostics indicate the model captures most of the dynamics in the data. The ACF shows a pattern at the seasonal frequency and so a more complete seasonal model (SARIMAX) may be needed.
End of explanation
import numpy as np
start = ind_prod.index[-24]
forecast_index = pd.date_range(start, freq=ind_prod.index.freq, periods=36)
cols = ['-'.join(str(val) for val in (idx.year, idx.month)) for idx in forecast_index]
forecasts = pd.DataFrame(index=forecast_index,columns=cols)
for i in range(1, 24):
fcast = res_glob.predict(start=forecast_index[i], end=forecast_index[i+12], dynamic=True)
forecasts.loc[fcast.index, cols[i]] = fcast
_, ax = plt.subplots(figsize=(16, 10))
ind_prod.iloc[-24:].plot(ax=ax, color="black", linestyle="--")
ax = forecasts.plot(ax=ax)
Explanation: Forecasting
Forecasts are produced using the predict method from a results instance. The default produces static forecasts which are one-step forecasts. Producing multi-step forecasts requires using dynamic=True.
In this next cell, we produce 12-step-ahead forecasts for the final 24 periods in the sample. This requires a loop.
Note: These are technically in-sample since the data we are forecasting was used to estimate parameters. Producing OOS forecasts requires two models. The first must exclude the OOS period. The second uses the predict method from the full-sample model with the parameters from the shorter sample model that excluded the OOS period.
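A minimal sketch of that two-model recipe (illustrative only; h is a hypothetical holdout length, and the lag specification must match between the two models):
h = 12
res_short = AutoReg(ind_prod.iloc[:-h], 13, old_names=False).fit()  # excludes the OOS period
mod_full = AutoReg(ind_prod, 13, old_names=False)  # full-sample model, never fit on its own
oos_fcast = mod_full.predict(res_short.params, start=ind_prod.index[-h], end=ind_prod.index[-1])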
End of explanation
from statsmodels.tsa.api import SARIMAX
sarimax_mod = SARIMAX(ind_prod, order=((1,5,12,13),0, 0), trend='c')
sarimax_res = sarimax_mod.fit()
print(sarimax_res.summary())
sarimax_params = sarimax_res.params.iloc[:-1].copy()
sarimax_params.index = res_glob.params.index
params = pd.concat([res_glob.params, sarimax_params], axis=1, sort=False)
params.columns = ["AutoReg", "SARIMAX"]
params
Explanation: Comparing to SARIMAX
SARIMAX is an implementation of a Seasonal Autoregressive Integrated Moving Average with eXogenous regressors model. It supports:
Specification of seasonal and nonseasonal AR and MA components
Inclusion of Exogenous variables
Full maximum-likelihood estimation using the Kalman Filter
This model is more feature-rich than AutoReg. Unlike SARIMAX, AutoReg estimates parameters using OLS. This is faster and the problem is globally convex, and so there are no issues with local minima. The closed-form estimator and its performance are the key advantages of AutoReg over SARIMAX when comparing AR(P) models. AutoReg also supports seasonal dummies, which can be used with SARIMAX if the user includes them as exogenous regressors.
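As an illustrative sketch of that last point (hypothetical code, not executed here), monthly dummies can be built from the index and handed to SARIMAX through exog:
month_dummies = pd.get_dummies(ind_prod.index.month, drop_first=True).set_index(ind_prod.index)
SARIMAX(ind_prod, exog=month_dummies, order=(1, 0, 0), trend="c")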
End of explanation
from statsmodels.tsa.deterministic import DeterministicProcess
dp = DeterministicProcess(housing.index, constant=True, period=12, fourier=2)
mod = AutoReg(housing,2, trend="n",seasonal=False, deterministic=dp)
res = mod.fit()
print(res.summary())
fig = res.plot_predict(720, 840)
Explanation: Custom Deterministic Processes
The deterministic parameter allows a custom DeterministicProcess to be used. This allows for more complex deterministic terms to be constructed, for example one that includes seasonal components with two periods, or, as the next example shows, one that uses a Fourier series rather than seasonal dummies.
End of explanation |
2,572 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<a href="https
Step1: Train a DDSP Autoencoder on GPU
This notebook demonstrates how to install the DDSP library and train it for synthesis based on your own data using our command-line scripts. If run inside of Colab, it will automatically use a free Google Cloud GPU.
At the end, you'll have a custom-trained checkpoint that you can download to use with the DDSP Timbre Transfer Colab.
<img src="https
Step2: Setup Google Drive (Optional, Recommeded)
This notebook requires uploading audio and saving checkpoints. While you can do this with direct uploads / downloads, it is recommended to connect to your google drive account. This will enable faster file transfer, and regular saving of checkpoints so that you do not lose your work if the colab kernel restarts (common for training more than 12 hours).
Login and mount your drive
This will require an authentication code. You should then be able to see your drive in the file browser on the left panel.
Step3: Set your base directory
In drive, put all of the audio (.wav, .mp3) files with which you would like to train in a single folder.
Typically works well with 10-20 minutes of audio from a single monophonic source (also, one acoustic environment).
Use the file browser in the left panel to find a folder with your audio, right-click "Copy Path", paste below, and run the cell.
Step4: Make directories to save model and data
Step5: Prepare Dataset
Upload training audio
Upload audio files to use for training your model. Uses DRIVE_DIR if connected to drive, otherwise prompts local upload.
Step6: Preprocess raw audio into TFRecord dataset
We need to do some preprocessing on the raw audio you uploaded to get it into the correct format for training. This involves turning the full audio into short (4-second) examples, inferring the fundamental frequency (or "pitch") with CREPE, and computing the loudness. These features will then be stored in a sharded TFRecord file for easier loading. Depending on the amount of input audio, this process usually takes a few minutes.
(Optional) Transfer dataset from drive. If you've already created a dataset, from a previous run, this cell will skip the dataset creation step and copy the dataset from $DRIVE_DIR/data
Step7: Save dataset statistics for timbre transfer
Quantile normalization helps match loudness of timbre transfer inputs to the
loudness of the dataset, so let's calculate it here and save in a pickle file.
Step8: Let's load the dataset in the ddsp library and have a look at one of the examples.
Step9: Train Model
We will now train a "solo instrument" model. This means the model is conditioned only on the fundamental frequency (f0) and loudness with no instrument ID or latent timbre feature. If you uploaded audio of multiple instruemnts, the neural network you train will attempt to model all timbres, but will likely associate certain timbres with different f0 and loudness conditions.
First, let's start up a TensorBoard to monitor our loss as training proceeds.
Initially, TensorBoard will report No dashboards are active for the current data set., but once training begins, the dashboards should appear.
Step10: We will now begin training.
Note that we specify gin configuration files for the both the model architecture (solo_instrument.gin) and the dataset (tfrecord.gin), which are both predefined in the library. You could also create your own. We then override some of the spefic params for batch_size (which is defined in in the model gin file) and the tfrecord path (which is defined in the dataset file).
Training Notes
Step11: Resynthesis
Check how well the model reconstructs the training data
Step12: Download Checkpoint
Below you can download the final checkpoint. You are now ready to use it in the DDSP Timbre Tranfer Colab. | Python Code:
# Copyright 2020 Google LLC. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
Explanation: <a href="https://colab.research.google.com/github/magenta/ddsp/blob/main/ddsp/colab/demos/train_autoencoder.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
Copyright 2020 Google LLC.
Licensed under the Apache License, Version 2.0 (the "License");
End of explanation
!pip install -qU ddsp[data_preparation]==1.6.3
# Initialize global path for using google drive.
DRIVE_DIR = ''
Explanation: Train a DDSP Autoencoder on GPU
This notebook demonstrates how to install the DDSP library and train it for synthesis based on your own data using our command-line scripts. If run inside of Colab, it will automatically use a free Google Cloud GPU.
At the end, you'll have a custom-trained checkpoint that you can download to use with the DDSP Timbre Transfer Colab.
<img src="https://storage.googleapis.com/ddsp/additive_diagram/ddsp_autoencoder.png" alt="DDSP Autoencoder figure" width="700">
Note that we prefix bash commands with a ! inside of Colab, but you would leave them out if running directly in a terminal.
Install Dependencies
First we install the required dependencies with pip.
End of explanation
from google.colab import drive
drive.mount('/content/drive')
Explanation: Setup Google Drive (Optional, Recommended)
This notebook requires uploading audio and saving checkpoints. While you can do this with direct uploads / downloads, it is recommended to connect to your google drive account. This will enable faster file transfer, and regular saving of checkpoints so that you do not lose your work if the colab kernel restarts (common for training more than 12 hours).
Login and mount your drive
This will require an authentication code. You should then be able to see your drive in the file browser on the left panel.
End of explanation
#@markdown (ex. `/content/drive/My Drive/...`) Leave blank to skip loading from Drive.
DRIVE_DIR = '' #@param {type: "string"}
import os
assert os.path.exists(DRIVE_DIR)
print('Drive Folder Exists:', DRIVE_DIR)
Explanation: Set your base directory
In drive, put all of the audio (.wav, .mp3) files with which you would like to train in a single folder.
Typically works well with 10-20 minutes of audio from a single monophonic source (also, one acoustic environment).
Use the file browser in the left panel to find a folder with your audio, right-click "Copy Path", paste below, and run the cell.
End of explanation
AUDIO_DIR = 'data/audio'
AUDIO_FILEPATTERN = AUDIO_DIR + '/*'
!mkdir -p $AUDIO_DIR
if DRIVE_DIR:
SAVE_DIR = os.path.join(DRIVE_DIR, 'ddsp-solo-instrument')
else:
SAVE_DIR = '/content/models/ddsp-solo-instrument'
!mkdir -p "$SAVE_DIR"
Explanation: Make directories to save model and data
End of explanation
import glob
import os
from ddsp.colab import colab_utils
if DRIVE_DIR:
mp3_files = glob.glob(os.path.join(DRIVE_DIR, '*.mp3'))
wav_files = glob.glob(os.path.join(DRIVE_DIR, '*.wav'))
audio_files = mp3_files + wav_files
else:
audio_files, _ = colab_utils.upload()
for fname in audio_files:
target_name = os.path.join(AUDIO_DIR,
os.path.basename(fname).replace(' ', '_'))
print('Copying {} to {}'.format(fname, target_name))
!cp "$fname" $target_name
Explanation: Prepare Dataset
Upload training audio
Upload audio files to use for training your model. Uses DRIVE_DIR if connected to drive, otherwise prompts local upload.
End of explanation
import glob
import os
TRAIN_TFRECORD = 'data/train.tfrecord'
TRAIN_TFRECORD_FILEPATTERN = TRAIN_TFRECORD + '*'
# Copy dataset from drive if dataset has already been created.
drive_data_dir = os.path.join(DRIVE_DIR, 'data')
drive_dataset_files = glob.glob(drive_data_dir + '/*')
if DRIVE_DIR and len(drive_dataset_files) > 0:
!cp "$drive_data_dir"/* data/
else:
# Make a new dataset.
if not glob.glob(AUDIO_FILEPATTERN):
raise ValueError('No audio files found. Please use the previous cell to '
'upload.')
!ddsp_prepare_tfrecord \
--input_audio_filepatterns=$AUDIO_FILEPATTERN \
--output_tfrecord_path=$TRAIN_TFRECORD \
--num_shards=10 \
--alsologtostderr
# Copy dataset to drive for safe-keeping.
if DRIVE_DIR:
!mkdir "$drive_data_dir"/
print('Saving to {}'.format(drive_data_dir))
!cp $TRAIN_TFRECORD_FILEPATTERN "$drive_data_dir"/
Explanation: Preprocess raw audio into TFRecord dataset
We need to do some preprocessing on the raw audio you uploaded to get it into the correct format for training. This involves turning the full audio into short (4-second) examples, inferring the fundamental frequency (or "pitch") with CREPE, and computing the loudness. These features will then be stored in a sharded TFRecord file for easier loading. Depending on the amount of input audio, this process usually takes a few minutes.
(Optional) Transfer dataset from drive. If you've already created a dataset, from a previous run, this cell will skip the dataset creation step and copy the dataset from $DRIVE_DIR/data
End of explanation
from ddsp.colab import colab_utils
import ddsp.training
data_provider = ddsp.training.data.TFRecordProvider(TRAIN_TFRECORD_FILEPATTERN)
dataset = data_provider.get_dataset(shuffle=False)
PICKLE_FILE_PATH = os.path.join(SAVE_DIR, 'dataset_statistics.pkl')
_ = colab_utils.save_dataset_statistics(data_provider, PICKLE_FILE_PATH, batch_size=1)
Explanation: Save dataset statistics for timbre transfer
Quantile normalization helps match loudness of timbre transfer inputs to the
loudness of the dataset, so let's calculate it here and save in a pickle file.
End of explanation
from ddsp.colab import colab_utils
import ddsp.training
from matplotlib import pyplot as plt
import numpy as np
data_provider = ddsp.training.data.TFRecordProvider(TRAIN_TFRECORD_FILEPATTERN)
dataset = data_provider.get_dataset(shuffle=False)
try:
ex = next(iter(dataset))
except StopIteration:
raise ValueError(
'TFRecord contains no examples. Please try re-running the pipeline with '
'different audio file(s).')
colab_utils.specplot(ex['audio'])
colab_utils.play(ex['audio'])
f, ax = plt.subplots(3, 1, figsize=(14, 4))
x = np.linspace(0, 4.0, 1000)
ax[0].set_ylabel('loudness_db')
ax[0].plot(x, ex['loudness_db'])
ax[1].set_ylabel('F0_Hz')
ax[1].set_xlabel('seconds')
ax[1].plot(x, ex['f0_hz'])
ax[2].set_ylabel('F0_confidence')
ax[2].set_xlabel('seconds')
ax[2].plot(x, ex['f0_confidence'])
Explanation: Let's load the dataset in the ddsp library and have a look at one of the examples.
End of explanation
%reload_ext tensorboard
import tensorboard as tb
tb.notebook.start('--logdir "{}"'.format(SAVE_DIR))
Explanation: Train Model
We will now train a "solo instrument" model. This means the model is conditioned only on the fundamental frequency (f0) and loudness with no instrument ID or latent timbre feature. If you uploaded audio of multiple instruments, the neural network you train will attempt to model all timbres, but will likely associate certain timbres with different f0 and loudness conditions.
First, let's start up a TensorBoard to monitor our loss as training proceeds.
Initially, TensorBoard will report No dashboards are active for the current data set., but once training begins, the dashboards should appear.
End of explanation
!ddsp_run \
--mode=train \
--alsologtostderr \
--save_dir="$SAVE_DIR" \
--gin_file=models/solo_instrument.gin \
--gin_file=datasets/tfrecord.gin \
--gin_param="TFRecordProvider.file_pattern='$TRAIN_TFRECORD_FILEPATTERN'" \
--gin_param="batch_size=16" \
--gin_param="train_util.train.num_steps=30000" \
--gin_param="train_util.train.steps_per_save=300" \
--gin_param="trainers.Trainer.checkpoints_to_keep=10"
Explanation: We will now begin training.
Note that we specify gin configuration files for both the model architecture (solo_instrument.gin) and the dataset (tfrecord.gin), which are both predefined in the library. You could also create your own. We then override some of the specific params for batch_size (which is defined in the model gin file) and the tfrecord path (which is defined in the dataset file).
Training Notes:
Models typically perform well when the loss drops to the range of ~4.5-5.0.
Depending on the dataset this can take anywhere from 5k-30k training steps usually.
The default is set to 30k, but you can stop training at any time, and for timbre transfer, it's best to stop before the loss drops too far below ~5.0 to avoid overfitting.
On the colab GPU, this can take from around 3-20 hours.
We highly recommend saving checkpoints directly to your drive account as colab will restart naturally after about 12 hours and you may lose all of your checkpoints.
By default, checkpoints will be saved every 300 steps with a maximum of 10 checkpoints (at ~60MB/checkpoint this is ~600MB). Feel free to adjust these numbers depending on the frequency of saves you would like and space on your drive.
If you're restarting a session and DRIVE_DIR points to a directory that was previously used for training, training should resume at the last checkpoint.
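To confirm which checkpoint a restarted session would resume from, a quick check (assuming TensorFlow is available in the runtime, which the DDSP install above provides) is:
import tensorflow as tf
print(tf.train.latest_checkpoint(SAVE_DIR))  # newest checkpoint path, or None if starting fresh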
End of explanation
from ddsp.colab.colab_utils import play, specplot
import ddsp.training
import gin
from matplotlib import pyplot as plt
import numpy as np
data_provider = ddsp.training.data.TFRecordProvider(TRAIN_TFRECORD_FILEPATTERN)
dataset = data_provider.get_batch(batch_size=1, shuffle=False)
try:
batch = next(iter(dataset))
except StopIteration:  # raised by next() when the dataset contains no examples
raise ValueError(
'TFRecord contains no examples. Please try re-running the pipeline with '
'different audio file(s).')
# Parse the gin config.
gin_file = os.path.join(SAVE_DIR, 'operative_config-0.gin')
gin.parse_config_file(gin_file)
# Load model
model = ddsp.training.models.Autoencoder()
model.restore(SAVE_DIR)
# Resynthesize audio.
outputs = model(batch, training=False)
audio_gen = model.get_audio_from_outputs(outputs)
audio = batch['audio']
print('Original Audio')
specplot(audio)
play(audio)
print('Resynthesis')
specplot(audio_gen)
play(audio_gen)
Explanation: Resynthesis
Check how well the model reconstructs the training data
End of explanation
from ddsp.colab import colab_utils
import tensorflow as tf
import os
CHECKPOINT_ZIP = 'my_solo_instrument.zip'
latest_checkpoint_fname = os.path.basename(tf.train.latest_checkpoint(SAVE_DIR))
!cd "$SAVE_DIR" && zip $CHECKPOINT_ZIP $latest_checkpoint_fname* operative_config-0.gin dataset_statistics.pkl
!cp "$SAVE_DIR/$CHECKPOINT_ZIP" ./
colab_utils.download(CHECKPOINT_ZIP)
Explanation: Download Checkpoint
Below you can download the final checkpoint. You are now ready to use it in the DDSP Timbre Transfer Colab.
End of explanation |
2,573 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
SMC2017
Step1: IV.1 Particle Metropolis-Hastings
Consider the state-space model
$$
\begin{array}{rcll}
x_t & = & \cos\left(\theta x_{t - 1}\right) + v_t, &\qquad v_t \sim \mathcal{N}(0, 1)\
y_t & = & x_t + e_t, &\qquad e_t \sim \mathcal{N}(0, 1) \
x_0 & \sim & \mathcal{N}(0, 1) &
\end{array}
$$
which admits the probabilistic model
$$
\begin{array}{lcl}
p(x_0) & = & \mathcal{N}\left(x_0;\,0,\,1\right) \
p(x_t\,\big|\,x_{t - 1}) & = & \mathcal{N}\left(x_t;\,\cos\left(\theta x_{t - 1}\right),\,1\right) \
p(y_t\,\big|\,x_t) & = & \mathcal{N}\left(y_t;\,x_t,\,1\right)
\end{array}
$$
For now, I will use the bootstrap particle filter (for simplicity).
Simulate data
During the simulation $\theta = 1$ will be assumed. During the inference it will be assumed that $\theta \sim \mathcal{N}(0, 1)$.
Step2: Bootstrap particle filter giving an estimate $\widehat{z}\theta$ of the joint likelihood $p(y{1
Step3: As a proposal we can use $q(\theta'\,\big|\,\theta[k - 1]) = \mathcal{N}\left(\theta';\,\theta[k - 1], \tau\right)$ with an appropriately chosen $\tau$.
Implement a Metropolis-Hastings sampler with the above.
Step4: IV.2 Conditional Particle Filter
I will turn the fully adapted particle filter from exercise II.2 into a conditional particle filter by including a reference state trajectory and in each propagation step the refernence state trajectory delivers one of the particles. States and their ancestors will be saved and the algorithm returns a new state trajectory conditional on the old one.
The state-space model under consideration is (normal distribution parametrized with $\sigma$)
$$
\begin{array}{rll}
x_{t + 1} &= \cos(x_t)^2 + v_t, & v_t \sim N(0, 1) \
y_t &= 2 x_t + e_t, & e_t \sim N(0, 0.1)
\end{array}
$$
which leads to the probabilistic model
$$
\begin{align}
p(x_t\,|\,x_{t - 1}) &= N\left(x_t;\,\cos(x_t)^2,\,1\right) \
p(y_t\,|\,x_t) &= N\left(y_t;\,2 x_t,\,0.1\right)
\end{align}
$$
This admits the necessary pdfs
$$
\begin{align}
p(y_t\,|\,x_{t - 1}) &= N(y_t;\,2 \cos(x_{t - 1})^2,\,\sqrt{4.01}) \
p(x_t\,|\,x_{t - 1},\,y_t) &= N\left(x_t;\,\frac{2 y_t + 0.01 \cos(x_{t - 1})^2}{4.01}, \frac{0.1}{\sqrt{4.01}}\right)
\end{align}
$$
Step5: Simulate from the model given above.
Step6: This is a Markov kernel which can be used in Gibbs sampling where the parameters and the hidden state are sampled repeatedly consecutively.
Step7: IV.3 Conditional importance sampling
a) Conditional importance sampling with few particles
Sample from $\pi(x) = \mathcal{N}\left(x\,\big|\,1,\,1\right)$ by using conditional importance sampling with the proposal $q(x) = \mathcal{N}\left(x\,\big|\,0,\,1\right)$.
Step8: Use that kernel to sample from the target distribution.
Step9: Run the sampler
Step10: Plot the result
Step11: b) Lower bound for probability that draw from cond. imp. sampling kernel falls in a set $A$
Theoretical exercise. Solution will be in exercises_on_paper.
IV.4 An SMC sampler for localization
A point $x_0$ is supposed to be localized in the plane $[-12,\,12]^2$.
There are some measurements $y_{1
Step12: Visualize simulated observations and true $x_0$
Step13: b) Likelihood
As derived on paper, it holds that
$$
p\left(y_m^j\,\big|\,x_0^j\right) =
\begin{cases}
\frac{1}{4} \exp\left(-\frac{y_m^j - x_0^j}{2}\right) & y_m^j > x_0 \
\frac{1}{4} \exp\left(\frac{y_m^j - x_0^j}{2}\right) & y_m^j < x_0
\end{cases}
$$
and since the components of $y_m$ are independent we get
$$
p\left(y_m\,\big|\,x_0\right) = p\left(y_m^1\,\big|\,x_0^1\right) \cdot p\left(y_m^2\,\big|\,x_0^2\right)
$$
Step14: c) Metropolis-Hastings kernel for $\pi_k$
This function evaluates $\log\left(\pi_k\right)$
Step15: The Metropolis-Hastings kernel produces one new sample of the Markov chain, conditional on the last sample.
Step16: e) Putting together the actual SMC sampler
Step17: f) Visualisation and testing of the SMC sampling
Sample the probability distributions of interest to be able to draw contour lines.
Step18: g) Comparison to standard Metropolis Hastings sampler
This is the Metropolis Hastings sampler for the distribution $\pi_k$
Step19: Some visualisations of the marginal distributions for the two coordinates determined by the Metropolis-Hastings run. | Python Code:
import numpy as np
from scipy import stats
from tqdm import tqdm_notebook
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sns
sns.set_style()
Explanation: SMC2017: Exercise sheet IV
Setup
End of explanation
T = 50
xs_sim = np.zeros((T + 1,))
ys_sim = np.zeros((T,))
# Initial state
xs_sim[0] = 0.
for t in range(T):
xs_sim[t + 1] = np.cos(xs_sim[t]) + stats.norm.rvs()
ys_sim = xs_sim[1:] + stats.norm.rvs(0, 1, T)
fig, axs = plt.subplots(2, 1, figsize=(10, 10))
axs[0].plot(xs_sim, 'o-')
axs[1].plot(range(1, T + 1), ys_sim, 'o-r')
Explanation: IV.1 Particle Metropolis-Hastings
Consider the state-space model
$$
\begin{array}{rcll}
x_t & = & \cos\left(\theta x_{t - 1}\right) + v_t, &\qquad v_t \sim \mathcal{N}(0, 1)\
y_t & = & x_t + e_t, &\qquad e_t \sim \mathcal{N}(0, 1) \
x_0 & \sim & \mathcal{N}(0, 1) &
\end{array}
$$
which admits the probabilistic model
$$
\begin{array}{lcl}
p(x_0) & = & \mathcal{N}\left(x_0;\,0,\,1\right) \
p(x_t\,\big|\,x_{t - 1}) & = & \mathcal{N}\left(x_t;\,\cos\left(\theta x_{t - 1}\right),\,1\right) \
p(y_t\,\big|\,x_t) & = & \mathcal{N}\left(y_t;\,x_t,\,1\right)
\end{array}
$$
For now, I will use the bootstrap particle filter (for simplicity).
Simulate data
During the simulation $\theta = 1$ will be assumed. During the inference it will be assumed that $\theta \sim \mathcal{N}(0, 1)$.
End of explanation
def log_likelihood_bootstrap_pf(y, N=20, theta=1):
# Cumulatively build up log-likelihood
ll = 0.0
# Initialisation
samples = stats.norm.rvs(0, 1, N)
weights = 1 / N * np.ones((N,))
# Determine the number of time steps
T = len(y)
# Loop through all time steps
for t in range(T):
# Resample
ancestors = np.random.choice(samples, size=N,
replace=True, p=weights)
# Propagate
samples = stats.norm.rvs(0, 1, N) + np.cos(theta * ancestors)
# Weight
weights = stats.norm.logpdf(y[t], loc=samples, scale=1)
# Calculate the max of the weights
max_weights = np.max(weights)
# Subtract the max
weights = weights - max_weights
# Update log-likelihood
ll += max_weights + np.log(np.sum(np.exp(weights))) - np.log(N)
# Normalize weights to be probabilities
weights = np.exp(weights) / np.sum(np.exp(weights))
return ll
log_likelihood_bootstrap_pf(ys_sim, N=50, theta=3)
Explanation: Bootstrap particle filter giving an estimate $\widehat{z}_\theta$ of the joint likelihood $p(y_{1:T}\,\big|\,\theta)$.
End of explanation
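# Side note (illustrative sketch, not part of the original exercise): the
# log-likelihood update above uses the log-sum-exp trick. Subtracting the max
# log-weight before exponentiating avoids underflow while still giving the log
# of the mean unnormalised weight. A standalone check with made-up log-weights:
log_w = np.array([-1000.2, -1001.7, -999.9])   # would underflow if exponentiated directly
m = np.max(log_w)
log_mean_w = m + np.log(np.sum(np.exp(log_w - m))) - np.log(len(log_w))
print(log_mean_w)   # finite, whereas np.log(np.mean(np.exp(log_w))) would give -inf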
def particle_metropolis_hastings(y, M=10000, N=20, tau=1):
theta = np.zeros((M + 1,))
alpha = np.zeros((M,))
z = np.zeros((M + 1,))
# Initial state
theta[0] = 0
z[0] = log_likelihood_bootstrap_pf(y, N=N, theta=theta[0])
# Iterate the chain
t = tqdm_notebook(range(M))
for i in t:
# Sample a new value
theta_prop = stats.norm.rvs(theta[i], tau, 1)
# Sample to be compared to the acceptance probability
u = stats.uniform.rvs()
# Terms in the second part of the acceptance probability -
# Proposal is symmetric, so terms containing the proposal will
# cancel each other out
z_prop = log_likelihood_bootstrap_pf(y, N=N, theta=theta_prop)
num = z_prop + stats.norm.logpdf(theta_prop)
denom = z[i] + stats.norm.logpdf(theta[i])
# Acceptance probability
alpha[i] = min(1, np.exp(num - denom))
t.set_postfix({'a_mean': np.mean(alpha[:(i + 1)])})
# Set next state depending on acceptance probability
if u <= alpha[i]:
z[i + 1] = z_prop
theta[i + 1] = theta_prop
else:
z[i + 1] = z[i]
theta[i + 1] = theta[i]
return theta, alpha
theta, alpha = particle_metropolis_hastings(ys_sim, M=10000, N=50, tau=0.7)
np.mean(alpha)
fig, ax = plt.subplots()
ax.plot(theta, '.-')
fig, ax = plt.subplots()
ax.hist(theta[2000:], normed=True, bins=60);
Explanation: As a proposal we can use $q(\theta'\,\big|\,\theta[k - 1]) = \mathcal{N}\left(\theta';\,\theta[k - 1], \tau\right)$ with an appropriately chosen $\tau$.
Implement a Metropolis-Hastings sampler with the above.
End of explanation
def conditional_FAPF(x_ref, y, N=200):
# Determine length of data
T = len(y)
# Save the paths of all final particles
xs = np.zeros((N, T + 1))
# Initialisation
xs[:, 0] = stats.norm.rvs(0, 1, N)
# Replace last state with state from reference trajectory
xs[N - 1, 0] = x_ref[0]
for t in range(T):
# Calculate resampling weights in case of FAPF
ws = stats.norm.logpdf(y[t], loc=2*np.power(np.cos(xs[:, t]), 2),
scale=np.sqrt(4.01))
# Subtract maximum weight
ws -= np.max(ws)
# Normalize the resampling weights
ws = np.exp(ws) / np.sum(np.exp(ws))
# Resample
ancestors = np.random.choice(range(N), size=N, replace=True, p=ws)
# Propagate
xs[:, t + 1] = stats.norm.rvs(0, 1, N) * 0.1 / np.sqrt(4.01) + \
(2 / 4.01) * y[t] + (0.01 / 4.01) * \
np.power(np.cos(xs[ancestors, t]), 2)
# Replace last sample with reference trajectory
ancestors[N - 1] = N - 1
xs[N - 1, t + 1] = x_ref[t + 1]
# Update the ancestor lines
        xs[:, 0:t + 1] = xs[ancestors, 0:t + 1]
# Randomly choose trajectory which will be returned
# All normalized weights are 1 / N, so that no draw from
# a categorical distribution is necessary. A uniform draw
# is satisfactory.
b = np.random.randint(N)
return xs[b, :]
Explanation: IV.2 Conditional Particle Filter
I will turn the fully adapted particle filter from exercise II.2 into a conditional particle filter by including a reference state trajectory; in each propagation step the reference trajectory supplies one of the particles. States and their ancestors are saved, and the algorithm returns a new state trajectory conditional on the old one.
The state-space model under consideration is (normal distribution parametrized with $\sigma$)
$$
\begin{array}{rll}
x_{t + 1} &= \cos(x_t)^2 + v_t, & v_t \sim N(0, 1) \
y_t &= 2 x_t + e_t, & e_t \sim N(0, 0.1)
\end{array}
$$
which leads to the probabilistic model
$$
\begin{align}
p(x_t\,|\,x_{t - 1}) &= N\left(x_t;\,\cos(x_{t - 1})^2,\,1\right) \
p(y_t\,|\,x_t) &= N\left(y_t;\,2 x_t,\,0.1\right)
\end{align}
$$
This admits the necessary pdfs
$$
\begin{align}
p(y_t\,|\,x_{t - 1}) &= N(y_t;\,2 \cos(x_{t - 1})^2,\,\sqrt{4.01}) \
p(x_t\,|\,x_{t - 1},\,y_t) &= N\left(x_t;\,\frac{2 y_t + 0.01 \cos(x_{t - 1})^2}{4.01}, \frac{0.1}{\sqrt{4.01}}\right)
\end{align}
$$
End of explanation
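# Side note (illustrative sketch, not part of the original exercise): a quick
# Monte Carlo sanity check of the derived pdf p(y_t | x_{t-1}) = N(2 cos^2(x_{t-1}), sqrt(4.01)).
# The value of x_prev below is arbitrary.
x_prev = 0.3
v = stats.norm.rvs(0, 1, 100000)      # process noise
e = stats.norm.rvs(0, 0.1, 100000)    # observation noise
y_sim = 2 * (np.cos(x_prev)**2 + v) + e
print(np.mean(y_sim), 2 * np.cos(x_prev)**2)   # empirical vs analytical mean
print(np.std(y_sim), np.sqrt(4.01))            # empirical vs analytical std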
T = 100
# Allocate arrays for results
ys_sim = np.zeros((T,))
xs_sim = np.zeros((T + 1,))
# Initial value for state
xs_sim[0] = 0.1
# Walk through all time steps
for t in range(T):
xs_sim[t + 1] = np.power(np.cos(xs_sim[t]), 2) + stats.norm.rvs(0, 1, 1)
ys_sim[t] = 2 * xs_sim[t + 1] + stats.norm.rvs(0, 0.1, 1)
fig, axs = plt.subplots(2, 1, figsize=(10, 10))
axs[0].plot(range(T + 1), xs_sim, 'o-');
axs[1].plot(range(1, T + 1), ys_sim, 'o-r');
Explanation: Simulate from the model given above.
End of explanation
xs = conditional_FAPF(xs_sim, ys_sim, N=1000)
fig, ax = plt.subplots()
ax.plot(xs_sim, 'o-')
ax.plot(xs, 'x-');
Explanation: This is a Markov kernel which can be used in Gibbs sampling, where the parameters and the hidden states are sampled alternately.
End of explanation
def cond_imp_sampling_kernel(x, N=2):
# Sample new proposals
xs = stats.norm.rvs(0, 1, N)
# Set the last sample to the reference
xs[N - 1] = x
# Calculate weights
ws = stats.norm.logpdf(xs, loc=1, scale=1) - \
stats.norm.logpdf(xs, loc=0, scale=1)
ws -= np.max(ws)
ws = np.exp(ws) / np.sum(np.exp(ws))
return xs[np.random.choice(range(N), size=1, p=ws)[0]]
Explanation: IV.3 Conditional importance sampling
a) Conditional importance sampling with few particles
Sample from $\pi(x) = \mathcal{N}\left(x\,\big|\,1,\,1\right)$ by using conditional importance sampling with the proposal $q(x) = \mathcal{N}\left(x\,\big|\,0,\,1\right)$.
End of explanation
def cond_imp_sampling_mcmc(M=1000, N=2):
# Initialisation
xs = np.zeros((M + 1,))
for m in tqdm_notebook(range(M)):
xs[m + 1] = cond_imp_sampling_kernel(xs[m], N=N)
return xs
Explanation: Use that kernel to sample from the target distribution.
End of explanation
xs = cond_imp_sampling_mcmc(M=70000)
Explanation: Run the sampler
End of explanation
fig, ax = plt.subplots()
ax.hist(xs, normed=True, bins=40);
Explanation: Plot the result
End of explanation
M = 50
x0 = np.array([6.0, -5.5])
ns = np.reshape(stats.expon.rvs(scale=2, size=2 * M), (2, M))
bs = np.reshape(np.random.choice([-1, 1], size=2 * M,
replace=True, p=[0.5, 0.5]),
(2, M))
ys = np.reshape(np.repeat(x0, M), (2, M)) + ns * bs
ys = ys.T
Explanation: b) Lower bound for probability that draw from cond. imp. sampling kernel falls in a set $A$
Theoretical exercise. Solution will be in exercises_on_paper.
IV.4 An SMC sampler for localization
A point $x_0$ is supposed to be localized in the plane $[-12,\,12]^2$.
There are some measurements $y_{1:M}$ which are corrupted by heavy-tailed noise from an exponential distribution.
We want to find the distribution $p\left(x_0\,\big|\,y_{1:M}\right)$.
a) Simulate data
$M$ simulated measurements from the model
$$
\begin{align}
y_t^1 &= x_0^1 + n_m^1 b_m^1 \
y_t^2 &= x_0^2 + n_m^2 b_m^2
\end{align}
$$
where
$$
\begin{align}
m &= 1, 2, \dots, M \
x_0 &= \left(x_0^1, x_0^2\right) \
n_m^1, n_m^2 &\sim \mathrm{Exp}\left(2\right) \
\mathbb{P}\left(b_m^1 = 1\right) &= \mathbb{P}\left(b_m^1 = -1\right) = \frac{1}{2}
\end{align}
$$
and analogously for $b_m^2$.
End of explanation
fig, ax = plt.subplots(figsize=(8, 8))
ax.scatter(ys[:, 0], ys[:, 1])
ax.set_xlim([-12, 12])
ax.set_ylim([-12, 12])
ax.scatter(x0[0], x0[1], facecolors='none', edgecolors='r', s=100)
Explanation: Visualize simulated observations and true $x_0$
End of explanation
def log_likelihood(x, ys):
return np.sum(np.log(0.25) + 0.5 *
np.power(-1, ((ys - x) > 0).astype('int')) * (ys - x))
Explanation: b) Likelihood
As derived on paper, it holds that
$$
p\left(y_m^j\,\big|\,x_0^j\right) =
\begin{cases}
\frac{1}{4} \exp\left(-\frac{y_m^j - x_0^j}{2}\right) & y_m^j > x_0^j \
\frac{1}{4} \exp\left(\frac{y_m^j - x_0^j}{2}\right) & y_m^j < x_0^j
\end{cases}
$$
and since the components of $y_m$ are independent we get
$$
p\left(y_m\,\big|\,x_0\right) = p\left(y_m^1\,\big|\,x_0^1\right) \cdot p\left(y_m^2\,\big|\,x_0^2\right)
$$
End of explanation
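# Side note (illustrative sketch, not part of the original exercise): since both
# cases only differ in the sign of the exponent, the log-likelihood collapses to
# log(1/4) - |y - x0|/2 per component. A quick check of the vectorised
# implementation above against that simpler form, on made-up values:
x_test = np.array([1.0, -2.0])
y_test = stats.norm.rvs(0, 3, size=(5, 2))
alt = np.sum(np.log(0.25) - 0.5 * np.abs(y_test - x_test))
print(np.isclose(log_likelihood(x_test, y_test), alt))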
def tempered_logpdf(x, ys, k, K=10):
# k / K comes from likelihood tempering
return k / K * log_likelihood(x, ys) + \
stats.multivariate_normal.logpdf(x, mean=[0, 0],
cov=7 * np.eye(2))
Explanation: c) Metropolis-Hastings kernel for $\pi_k$
This function evaluates $\log\left(\pi_k\right)$
End of explanation
def mh_kernel(x, ys, k, K=10, tau=0.5):
# Propose a new value
x_prop = stats.multivariate_normal.rvs(mean=x,
cov=tau**2 * np.eye(2),
size=1)
# Terms in the second part of the acceptance probability
# Proposal is symmetric, so terms containing the proposal will
# cancel each other out
# Acceptance probability
alpha = min(0, tempered_logpdf(x_prop, ys, k, K=K) -
tempered_logpdf(x, ys, k, K=K))
# Sample to be compared to the acceptance probability
u = stats.uniform.rvs()
# Set next state depending on acceptance probability
if np.log(u) <= alpha:
return x_prop, np.exp(alpha)
else:
return x, np.exp(alpha)
mh_kernel(x0, ys, 2)
Explanation: The Metropolis-Hastings kernel produces one new sample of the Markov chain, conditional on the last sample.
End of explanation
def smc_sampler(ys, K=10, N=100, ess_min=50, tau=0.5, progressbar=True):
# Vectors for saving
xs = np.zeros((K + 1, N, 2))
ancs = np.zeros((K, N), dtype='int64')
ws = np.zeros((K + 1, N))
# Initialisation
xs[0, :, :] = stats.multivariate_normal.rvs(mean=[0, 0],
cov=7 * np.eye(2),
size=N)
ws[0, :] = 1 / N * np.ones((N,))
if progressbar:
t = tqdm_notebook(range(K))
else:
t = range(K)
for k in t:
# Update weights
for i in range(N):
ws[k + 1, i] = np.log(ws[k, i]) + \
tempered_logpdf(xs[k, i, :], ys, k=k + 1, K=K) - \
tempered_logpdf(xs[k, i, :], ys, k=k, K=K)
# and normalize them
ws[k + 1, :] -= np.max(ws[k + 1, :])
ws[k + 1, :] = np.exp(ws[k + 1, :]) / np.sum(np.exp(ws[k + 1, :]))
# Resample depending on ESS
if 1 / np.sum(np.power(ws[k + 1, :], 2)) < ess_min:
ancs[k, :] = np.random.choice(range(N), size=N,
replace=True, p=ws[k + 1, :])
ws[k + 1, :] = 1 / N * np.ones((N,))
else:
ancs[k, :] = range(N)
# Propagate / Sample from next element in the sequence
# Here, via a Metropolis-Hastings kernel
for i in range(N):
xs[k + 1, i, :] = mh_kernel(xs[k, ancs[k, i], :], ys,
k=k + 1, K=K, tau=tau)[0]
return xs, ancs, ws
xs, ancs, ws = smc_sampler(ys, N=1000, ess_min=750)
np.sum(xs[10, :, 0] * ws[10])
np.sum(xs[10, :, 1] * ws[10])
Explanation: e) Putting together the actual SMC sampler
End of explanation
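# Side note (illustrative sketch, not part of the original exercise): the
# resampling decision inside the sampler is driven by the effective sample size
# ESS = 1 / sum(w_i^2). With toy weights (not sampler output) one can see how it
# reflects weight degeneracy:
def ess(w):
    # effective sample size of a normalised weight vector
    return 1.0 / np.sum(np.power(w, 2))
print(ess(np.ones(100) / 100))            # uniform weights -> ESS = 100
w_toy = np.array([0.9] + [0.1 / 99] * 99)
print(ess(w_toy / np.sum(w_toy)))         # one dominant particle -> ESS collapses towards 1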
x = np.arange(-12, 12, 0.25)
y = np.arange(-12, 12, 0.25)
X, Y = np.meshgrid(x, y)
Z = np.zeros((len(x), len(y), 10))
for k in tqdm_notebook(range(10)):
for i in range(len(x)):
for j in range(len(y)):
Z[i, j, k] = tempered_logpdf(np.array([X[i, j], Y[i, j]]),
ys, k, K=10)
Z[:, :, k] -= np.max(Z[:, :, k])
Z[:, :, k] = np.exp(Z[:, :, k])
fig, axs = plt.subplots(5, 2, figsize=(8.5, 20))
for k in range(10):
levels=np.linspace(np.min(Z[:, :, k]),
np.max(Z[:, :, k]), 8)
axs[k // 2, k % 2].contour(X, Y, Z[:, :, k])
axs[k // 2, k % 2].scatter(x0[0], x0[1],
facecolors='none', edgecolors='r', s=100)
axs[k // 2, k % 2].scatter(xs[k, :, 0], xs[k, :, 1], color='k')
fig.tight_layout()
Explanation: f) Visualisation and testing of the SMC sampling
Evaluate the probability distributions of interest on a grid to be able to draw contour lines.
End of explanation
def mh_sampler(ys, k=10, K=10, M=1000, tau=0.5, progressbar=True):
# Prepare vectors for saving
xs = np.zeros((M + 1, 2))
alpha = np.zeros((M,))
# Initial state
# Choose zero as the initial state
# Iterate the chain
if progressbar:
t = tqdm_notebook(range(M))
else:
t = range(M)
for i in t:
xs[i + 1], alpha[i] = mh_kernel(xs[i], ys, k, K=K, tau=tau)
if progressbar:
t.set_postfix({'mean acc': np.mean(alpha[:(i + 1)])})
return xs, alpha
xs, _ = mh_sampler(ys, M=30000, tau=0.7, progressbar=True)
Explanation: g) Comparison to standard Metropolis-Hastings sampler
This is the Metropolis-Hastings sampler for the distribution $\pi_k$
End of explanation
fig, axs = plt.subplots(2, 1, figsize=(8, 6))
burnin = 500
axs[0].hist(xs[burnin:, 0], normed=True, bins=50);
axs[0].axvline(np.mean(xs[burnin:, 0]), color='r', linestyle='--')
axs[0].axvline(np.median(xs[burnin:, 0]), color='k', linestyle='--')
axs[1].hist(xs[burnin:, 1], normed=True, bins=50);
axs[1].axvline(np.mean(xs[burnin:, 1]), color='r', linestyle='--')
axs[1].axvline(np.median(xs[burnin:, 1]), color='k', linestyle='--')
means_mh = np.zeros((10, 2))
means_smc = np.zeros((10, 2))
for m in tqdm_notebook(range(10)):
xs, _ = mh_sampler(ys, M=25000, tau=0.7, progressbar=True)
means_mh[m, :] = np.mean(xs[500:], axis=0)
xs, _, ws = smc_sampler(ys, N=2000, ess_min=1500, progressbar=True)
means_smc[m, :] = [np.sum(xs[10, :, 0] * ws[10]),
np.sum(xs[10, :, 1] * ws[10])]
np.mean(np.linalg.norm(means_smc - x0, axis=1, ord=1))
np.mean(np.linalg.norm(means_mh - x0, axis=1, ord=1))
Explanation: Some visualisations of the marginal distributions for the two coordinates determined by the Metropolis-Hastings run.
End of explanation |
2,574 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
用pandas做数据分析
关于数据分析
根据jetbrains公司2018年对python开发人员的调查, 从事数据分析的python使用者超过了
web开发和自动化测试.
在诸多数据科学的框架和库中,numpy pandas是最流行的
而numpy为pandas提供了基础的底层数据结构和处理函数, 用ndarray和ufunc解决了性能问题.
## pandas的核心数据结构 Series 和 DataFrame
Series 是个定长的字典序列, 可以看成是只有一列的Excel, 或者数据库表里面的一行记录
Series有两个基本属性:index 和 values
index如果不指定默认是<code>[0,1,2,3...]</code> 也可以自己指定索引 <code>index=['a', 'b', 'c', 'd']</code>
Step1: Dataframe 则类似于excel里面的一张表,或者数据库的一张表. 可以看出是一组相同的index组成的Series组成的一个dict. 或者说一个多列的excel表
Step2: 数据的导入和输出
pandas提供了非常简单的方式来读取excel csv 数据库 html pickle 甚至是剪贴板中的的数据成为pandas中的DataFrame类型, 也可以很方便的将DataFrame转换成dict list json 数据库 甚至是html里面
Step3: 数据清洗
比方说有以下场景
删除不必要的行 pandas提供了一个drop方法
Step4: 对列名或者行名进行重命名操作, pandas提供了rename方法
Step5: 有时候数据可能有重复的值, 可以使用drop_duplicates方法来去除
Step6: 排序可以用sort_values
Step7: 做数据清洗的时候,可能由于是爬回来的数据, 数据不完整,有空的情况
Step8: 做数据清洗的时候, 有时候可能想根据原有的列,做计算, 然后增加新列. 我们模拟一下场景
Step9: 我们希望计算出一列总热量来
Step10: 数据统计
pandas 带了好多数据统计函数, 如果是不能执行的,比如算平均数不是数字的行会自动忽略
Step11: 数据表合并
DataFrame就类似于数据库的表, 有时候希望做一些join操作
Step12: 针对指定列进行连接
Step13: 内连接, 左连接, 右连接 , 内连接
Step14: 用sql操作pandas
Step15: 将json导入到mysql
Step16: 练习
现在有两个csv, 一个是从s查询的结果, 有两列一个是url , 另一个是黑白 . 另一个csv是从url_detect接口查出来的. 一列是url 另一列是检出威胁的引擎的列表用逗号隔开的字符串, 有可能是空字符串或者Nan. 现在要求汇总这两个csv. 如果url_detect接口里面的结果不是Nan或者是空字符串或者是字符串safe, 不是这三种情况结果就按黑, 否则就按s的结果.
Step17: 方法二
Step18: 将NaN填充为safe就好解决了
Step19: 再看一下还有没有空白
Step20: 甚至可以看一下个数有多少
Step21: 实际上我们如果不知道哪个是最多的, 我们填充NAN值也经常用平均值或者出现个数最多的值来填充.怎样用出现次数最多的值填充呢
Step22: 发现黑白这一列里面有未知, 应该改成白
Step23: 秒出 | Python Code:
import pandas as pd
x1 = pd.Series([1,2,3,4])
x2 = pd.Series(data=[1,2,3,4], index=['a', 'b', 'c', 'd'])
print("x1".center(100,"*"))
print(x1)
print("x2".center(100,"*"))
print(x2)
d = {'a':1, 'b':2, 'c':3, 'd':4}
x3 = pd.Series(d)
print(x3)
Explanation: Data analysis with pandas
About data analysis
According to JetBrains' 2018 survey of Python developers, more Python users now work on data analysis than on
web development and automated testing.
Among the many data science frameworks and libraries, numpy and pandas are the most popular.
numpy provides pandas with its basic underlying data structures and processing functions, solving the performance problem with ndarray and ufunc.
## pandas' core data structures: Series and DataFrame
A Series is a fixed-length, dict-like sequence; it can be seen as an Excel sheet with a single column, or as one row of a database table
A Series has two basic attributes: index and values
If not specified, the index defaults to <code>[0,1,2,3...]</code>; you can also provide your own index, e.g. <code>index=['a', 'b', 'c', 'd']</code>
End of explanation
data = {'Chinese': [66, 95, 93, 90,80],'English': [65, 85, 92, 88, 90],'Math': [30, 98, 96, 77, 90]}
df1 = pd.DataFrame(data)
df2 = pd.DataFrame(data,
index=['ZhangFei', 'GuanYu', 'ZhaoYun', 'HuangZhong', 'DianWei'],
columns=['English', 'Math', "Chinese"])
print("df1".center(100,"*"))
print(df1)
print("df2".center(100,"*"))
print(df2)
Explanation: A DataFrame is similar to a sheet in Excel or a table in a database. It can be seen as a dict of Series that share the same index, in other words a multi-column Excel table
End of explanation
print("列出当前路径".center(100,"*"))
!ls
print("用pandas读取csv".center(100,"*"))
df = pd.read_csv("肉类热量表.csv")
print(df)
df.to_excel("pandas导出的肉类热量表.xlsx")
!ls
# To make sure the notebook can be re-run as expected, delete the generated Excel file
!rm pandas导出的肉类热量表.xlsx
!ls
df
Explanation: Importing and exporting data
pandas provides very simple ways to read data from Excel, CSV, databases, HTML, pickle and even the clipboard into a DataFrame, and it is just as easy to convert a DataFrame into a dict, a list, JSON, a database table or even HTML
End of explanation
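# Side note (illustrative sketch, not part of the original notebook): the same
# DataFrame can also be round-tripped through a JSON string.
json_str = df.to_json(force_ascii=False)
df_roundtrip = pd.read_json(json_str)
df_roundtrip.head()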
df["测试"] = "啦啦啦"
df.loc["冰淇淋"] = "乱入"
df
df.drop(index=["冰淇淋"], inplace=True)
print("删除index".center(100,"*"))
print(df)
df.drop(columns=["测试"], inplace=True)
print("删除columns".center(100,"*"))
print(df)
Explanation: Data cleaning
For example, consider the following scenarios
Removing unnecessary rows: pandas provides the drop method
End of explanation
df.rename(columns={"食品":"食品名称","数量":"计量单位"},inplace=True)
df
Explanation: To rename columns or rows, pandas provides the rename method
End of explanation
df.loc[17] = ["烧鸭","1 份 (120 克)",356]
df
df.drop_duplicates(subset="食品名称",inplace=True)
df
Explanation: Sometimes the data contains duplicate values, which can be removed with the drop_duplicates method
End of explanation
df.sort_values("热量(大卡)", inplace=True, ascending=False)
df
Explanation: Sorting can be done with sort_values
End of explanation
import numpy as np
df.loc[15,"计量单位"] = np.nan
df.isnull()
df
df = df.reset_index()
df
Explanation: When cleaning data, for example scraped data, the data may be incomplete and contain empty values
End of explanation
size = np.random.randint(1,20,size=17)
df["份数"] = size
df
Explanation: When cleaning data, we sometimes want to compute on existing columns and then add a new column. Let's simulate such a scenario
End of explanation
df["总热量"] = df["热量(大卡)"] * df["份数"]
df
Explanation: We want to compute a total-calories column
End of explanation
print("count".center(100, "*"))
print(df.count())
print("min".center(100, "*"))
print(df.min())
print("sum".center(100, "*"))
print(df.sum())
print("describe".center(100, "*"))
print(df.describe())
print(df["热量(大卡)"].min())
Explanation: Descriptive statistics
pandas ships with many statistics functions; rows or columns on which an operation cannot be performed (for example computing a mean over non-numeric values) are ignored automatically
End of explanation
df1 = pd.DataFrame({'name':['ZhangFei', 'GuanYu', 'a', 'b', 'c'], 'data1':range(5)})
df2 = pd.DataFrame({'name':['ZhangFei', 'GuanYu', 'A', 'B', 'C'], 'data2':range(5)})
print("df1".center(100, "*"))
print(df1)
print("df2".center(100, "*"))
print(df2)
Explanation: Merging tables
A DataFrame is similar to a database table, and sometimes we want to perform join operations
End of explanation
df3 = pd.merge(df1, df2, on='name')
df3
Explanation: Joining on a specified column
End of explanation
print("inner".center(100,"*"))
df3 = pd.merge(df1, df2, how='inner')
print(df3)
print("left".center(100,"*"))
df3 = pd.merge(df1, df2, how='left')
print(df3)
print("right".center(100,"*"))
df3 = pd.merge(df1, df2, how='right')
print(df3)
print("outer".center(100,"*"))
df3 = pd.merge(df1, df2, how='outer')
print(df3)
Explanation: Inner join, left join, right join, outer join
End of explanation
import pandas as pd
from pandas import DataFrame
from pandasql import sqldf
df1 = DataFrame({'name':['ZhangFei', 'GuanYu', 'a', 'b', 'c'], 'data1':range(5)})
print("df1".center(100, "*"))
print(df1)
sql = "select * from df1 where name ='ZhangFei'"
print("执行sql".center(100, "*"))
print(sqldf(sql, globals()))
Explanation: Using SQL to operate on pandas
End of explanation
df = pd.read_json("menzhen_jk.json")
df
from sqlalchemy import create_engine
# Installing mysqlclient on macOS failed and is still not fixed, but pymysql can be used instead
SQLALCHEMY_DATABASE_URI = 'mysql+pymysql://root:123456@localhost:3306/data_analyze?charset=utf8mb4'
conn= create_engine(SQLALCHEMY_DATABASE_URI)
df.to_sql("menzhen_jk", con=conn,if_exists='replace',index=False, chunksize=100)
Explanation: Importing JSON into MySQL
End of explanation
!ls
!head url黑白.csv
df_url_detect = pd.read_csv("url黑白.csv")
df_url_detect
df_s = pd.read_csv("url黑白_from_s.csv")
df_s
import pandas as pd
import numpy as np
new_df = pd.merge(df_url_detect, df_s, on="url")
# new_df
new_df.rename(columns={"黑白_x": "url_detect", "黑白_y": "s"}, inplace=True)
def new_bw(df):
df["黑白"] = df["s"]
if df["黑白"] != "黑":
if not (df["url_detect"] is np.nan or df["url_detect"] == "" or df["url_detect"] == "safe"):
df["黑白"] = "黑"
if df["黑白"] == "未知":
df["黑白"] = "白"
return df
new_df = new_df.apply(new_bw, axis=1)
new_df
# new_df.drop(columns=["url_detect", "s"], inplace=True)
new_df.count()
new_df.to_csv("汇总黑白.csv", index=False)
new_df.to_excel("汇总黑白.xlsx", index=False)
!ls
!rm 汇总黑白.csv 汇总黑白.xlsx
!ls
Explanation: Exercise
There are two CSV files. One comes from querying s and has two columns, url and 黑白 (the black/white verdict). The other CSV comes from the url_detect API: one column is url, the other is a comma-separated string of the engines that detected a threat, which may be an empty string or NaN. The task is to combine the two CSVs: if the url_detect result is not NaN, not an empty string and not the string safe, the verdict is 黑 (black); otherwise use the verdict from s.
End of explanation
df_url_detect
Explanation: Approach two
End of explanation
df_url_detect = df_url_detect.fillna("safe")
Explanation: Filling the NaN values with safe makes this easy to solve
End of explanation
df_url_detect["黑白"].unique()
Explanation: Check again whether any blanks remain
End of explanation
df_url_detect["黑白"].value_counts()
Explanation: We can even look at how many there are of each value
End of explanation
max_bk = df_url_detect["黑白"].value_counts().index[0]
print(max_bk)
df_url_detect["黑白"].fillna(max_bk , inplace= True)
df_url_detect
new_df = pd.merge(df_url_detect, df_s, on="url")
new_df.rename(columns={"黑白_x": "url_detect", "黑白_y": "s"}, inplace=True)
new_df
new_df["黑白"] = np.where(new_df["url_detect"] != "safe", "黑", new_df["s"])
new_df
Explanation: In practice, if we do not know which value occurs most often, we also commonly fill NaN values with the mean or with the most frequent value. How do we fill with the most frequent value?
End of explanation
new_df[new_df["黑白"] == "未知"]
new_df.loc[new_df["黑白"] == "未知", "黑白"] = "白"
new_df.drop(columns=["url_detect", "s"], inplace=True)
new_df
Explanation: We find that the 黑白 column contains 未知 (unknown), which should be changed to 白 (white)
End of explanation
new_df["黑白"].value_counts()
new_df.to_csv
Explanation: The result comes back instantly
End of explanation |
2,575 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Workshop 4 - Performance Metrics
In this workshop we study 2 performance metrics(Spread and Inter-Generational Distance) on GA optimizing the POM3 model.
Step2: To compute most measures, data(i.e objectives) is normalized. Normalization is scaling the data between 0 and 1. Why do we normalize?
TODO2
Step10: Data Format
For our experiments we store the data in the following format.
data = {
"expt1"
Step13: Reference Set
Almost all the traditional measures you consider need a reference set for its computation. A theoritical reference set would be the ideal pareto frontier. This is fine for
a) Mathematical Models
Step17: Spread
Calculating spread
Step20: IGD = inter-generational distance; i.e. how good are you compared to the best known?
Find a reference set (the best possible solutions)
For each optimizer
For each item in its final Pareto frontier
Find the nearest item in the reference set and compute the distance to it.
Take the mean of all the distances. This is IGD for the optimizer
Note that the less the mean IGD, the better the optimizer since
this means its solutions are closest to the best of the best. | Python Code:
%matplotlib inline
# All the imports
from __future__ import print_function, division
import pom3_ga, sys
import pickle
# TODO 1: Enter your unity ID here
__author__ = "tchhabr"
Explanation: Workshop 4 - Performance Metrics
In this workshop we study 2 performance metrics (Spread and Inter-Generational Distance) on a GA optimizing the POM3 model.
End of explanation
def normalize(problem, points):
Normalize all the objectives
in each point and return them
meta = problem.objectives
all_objs = []
for point in points:
objs = []
for i, o in enumerate(problem.evaluate(point)):
low, high = meta[i].low, meta[i].high
# TODO 3: Normalize 'o' between 'low' and 'high'; Then add the normalized value to 'objs'
if high == low: objs.append(0); continue;
objs.append((o-low)/(high-low))
all_objs.append(objs)
return all_objs
Explanation: To compute most measures, the data (i.e. the objectives) is normalized. Normalization is scaling the data between 0 and 1. Why do we normalize?
TODO2 : To put all objectives on the same playing field, which makes it easier to compare data points from different sets.
End of explanation
Performing experiments for [5, 10, 50] generations.
problem = pom3_ga.POM3()
pop_size = 10
repeats = 10
test_gens = [5, 10, 50]
def save_data(file_name, data):
Save 'data' to 'file_name.pkl'
with open(file_name + ".pkl", 'wb') as f:
pickle.dump(data, f, pickle.HIGHEST_PROTOCOL)
def load_data(file_name):
Retrieve data from 'file_name.pkl'
with open(file_name + ".pkl", 'rb') as f:
return pickle.load(f)
def build(problem, pop_size, repeats, test_gens):
Repeat the experiment for 'repeats' number of repeats for each value in 'test_gens'
tests = {t: [] for t in test_gens}
tests[0] = [] # For Initial Population
for _ in range(repeats):
init_population = pom3_ga.populate(problem, pop_size)
pom3_ga.say(".")
for gens in test_gens:
tests[gens].append(normalize(problem, pom3_ga.ga(problem, init_population, retain_size=pop_size, gens=gens)[1]))
tests[0].append(normalize(problem, init_population))
print("\nCompleted")
return tests
Repeat Experiments
# tests = build(problem, pop_size, repeats, test_gens)
Save Experiment Data into a file
# save_data("dump", tests)
Load the experimented data from dump.
tests = load_data("dump")
print (tests.keys())
Explanation: Data Format
For our experiments we store the data in the following format.
data = {
"expt1":[repeat1, repeat2, ...],
"expt2":[repeat1, repeat2, ...],
.
.
.
}
repeatx = [objs1, objs2, ....] // All of the final population
objs1 = [norm_obj1, norm_obj2, ...] // Normalized objectives of each member of the final population.
End of explanation
def make_reference(problem, *fronts):
Make a reference set comparing all the fronts.
Here the comparison we use is bdom. It can
be altered to use cdom as well
retain_size = len(fronts[0])
reference = []
for front in fronts:
reference+=front
def bdom(one, two):
Return True if 'one' dominates 'two'
else return False
:param one - [pt1_obj1, pt1_obj2, pt1_obj3, pt1_obj4]
:param two - [pt2_obj1, pt2_obj2, pt2_obj3, pt2_obj4]
dominates = False
for i, obj in enumerate(problem.objectives):
gt, lt = pom3_ga.gt, pom3_ga.lt
better = lt if obj.do_minimize else gt
            # TODO 3: Use the variables declared above to check if one dominates two
if better(one[i], two[i]):
dominates = True
elif one[i] != two[i]:
return False
return dominates
def fitness(one, dom):
return len([1 for another in reference if dom(one, another)])
fitnesses = []
for point in reference:
fitnesses.append((fitness(point, bdom), point))
reference = [tup[1] for tup in sorted(fitnesses, reverse=True)]
return reference[:retain_size]
assert len(make_reference(problem, tests[5][0], tests[10][0], tests[50][0])) == len(tests[5][0])
Explanation: Reference Set
Almost all the traditional measures considered here need a reference set for their computation. A theoretical reference set would be the ideal Pareto frontier. This is fine for
a) Mathematical Models: Where we can solve the problem to obtain the set.
b) Low Runtime Models: Where we can do a one time exhaustive run to obtain the set.
But most real world problems are neither mathematical nor have a low runtime. So what do we do? We compute an approximate reference set.
One possible way of constructing it is:
1. Take the final generation of all the treatments.
2. Select the best set of solutions from all the final generations
End of explanation
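# Side note (illustrative sketch, not part of the original workshop): a tiny
# standalone version of binary domination for the all-minimise case, checked on
# hand-made objective vectors.
def bdom_min(one, two):
    # 'one' dominates 'two' if it is no worse on every objective and better on at least one
    better_somewhere = False
    for a, b in zip(one, two):
        if a > b:
            return False
        if a < b:
            better_somewhere = True
    return better_somewhere
print(bdom_min([0.1, 0.2], [0.3, 0.2]))   # True: better on one objective, equal on the other
print(bdom_min([0.1, 0.5], [0.3, 0.2]))   # False: a trade-off, neither dominates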
def eucledian(one, two):
    Compute the Euclidean distance between
2 vectors. We assume the input vectors
are normalized.
:param one: Vector 1
:param two: Vector 2
:return:
# TODO 4: Code up the eucledian distance. https://en.wikipedia.org/wiki/Euclidean_distance
return sum([(o-t)**2 for o,t in zip(one, two)])**0.5
def sort_solutions(solutions):
Sort a list of list before computing spread
def sorter(lst):
m = len(lst)
weights = reversed([10 ** i for i in xrange(m)])
return sum([element * weight for element, weight in zip(lst, weights)])
return sorted(solutions, key=sorter)
def closest(one, many):
min_dist = sys.maxint
closest_point = None
for this in many:
dist = eucledian(this, one)
if dist < min_dist:
min_dist = dist
closest_point = this
return min_dist, closest_point
def spread(obtained, ideals):
Calculate the spread (a.k.a diversity)
for a set of solutions
s_obtained = sort_solutions(obtained)
s_ideals = sort_solutions(ideals)
d_f = closest(s_ideals[0], s_obtained)[0]
d_l = closest(s_ideals[-1], s_obtained)[0]
n = len(s_ideals)
distances = []
for i in range(len(s_obtained)-1):
distances.append(eucledian(s_obtained[i], s_obtained[i+1]))
d_bar = sum(distances)/len(distances)
# TODO 5: Compute the value of spread using the definition defined in the previous cell.
d_sum = sum([abs(d_i - d_bar) for d_i in distances])
delta = (d_f + d_l + d_sum)/ (d_f + d_l + (n-1)*d_bar)
return delta
ref = make_reference(problem, tests[5][0], tests[10][0], tests[50][0])
print(spread(tests[5][0], ref))
print(spread(tests[10][0], ref))
print(spread(tests[50][0], ref))
Explanation: Spread
Calculating spread:
<img width=300 src="http://mechanicaldesign.asmedigitalcollection.asme.org/data/Journals/JMDEDB/27927/022006jmd3.jpeg">
Consider the population of final gen(P) and the Pareto Frontier(R).
Find the distances between the first point of P and first point of R(d<sub>f</sub>) and last point of P and last point of R(d<sub>l</sub>)
Find the distance d<sub>i</sub> between each point and its nearest neighbor
Then:
<img width=300 src="https://raw.githubusercontent.com/txt/ase16/master/img/spreadcalc.png">
If all data is maximally spread, then all distances d<sub>i</sub> are near mean d
which would make Δ=0 ish.
Note that less the spread of each point to its neighbor, the better
since this means the optimiser is offering options across more of the frontier.
End of explanation
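# Side note (illustrative sketch, not part of the original workshop): on toy
# 2-objective fronts, evenly spaced solutions give a lower (better) spread than
# clumped solutions. The fronts below are made up.
toy_ideal = [[i / 10.0, 1 - i / 10.0] for i in range(11)]
toy_even = [[i / 5.0, 1 - i / 5.0] for i in range(6)]
toy_clumped = [[0.0, 1.0], [0.05, 0.95], [0.1, 0.9], [0.15, 0.85], [0.2, 0.8], [1.0, 0.0]]
print(spread(toy_even, toy_ideal))      # close to 0
print(spread(toy_clumped, toy_ideal))   # noticeably larger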
def igd(obtained, ideals):
Compute the IGD for a
set of solutions
:param obtained: Obtained pareto front
:param ideals: Ideal pareto front
:return:
# TODO 6: Compute the value of IGD using the definition defined in the previous cell.
igd_val = sum([closest (ideal,obtained)[0] for ideal in ideals])/ len(ideals)
return igd_val
ref = make_reference(problem, tests[5][0], tests[10][0], tests[50][0])
print(igd(tests[5][0], ref))
print(igd(tests[10][0], ref))
print(igd(tests[50][0], ref))
import sk
sk = reload(sk)
def format_for_sk(problem, data, measure):
Convert the experiment data into the format
required for sk.py and computet the desired
'measure' for all the data.
gens = data.keys()
reps = len(data[gens[0]])
measured = {gen:["gens_%d"%gen] for gen in gens}
for i in range(reps):
ref_args = [data[gen][i] for gen in gens]
ref = make_reference(problem, *ref_args)
for gen in gens:
measured[gen].append(measure(data[gen][i], ref))
return measured
def report(problem, tests, measure):
measured = format_for_sk(problem, tests, measure).values()
sk.rdivDemo(measured)
print("*** IGD ***")
report(problem, tests, igd)
print("\n*** Spread ***")
report(problem, tests, spread)
Explanation: IGD = inter-generational distance; i.e. how good are you compared to the best known?
Find a reference set (the best possible solutions)
For each optimizer
For each item in its final Pareto frontier
Find the nearest item in the reference set and compute the distance to it.
Take the mean of all the distances. This is IGD for the optimizer
Note that the less the mean IGD, the better the optimizer since
this means its solutions are closest to the best of the best.
End of explanation |
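# Side note (illustrative sketch, not part of the original workshop): with toy
# fronts, a front lying exactly on the reference set has IGD 0, while a shifted
# front has a strictly positive IGD.
toy_ref = [[0.0, 1.0], [0.5, 0.5], [1.0, 0.0]]
toy_shifted = [[0.2, 1.2], [0.7, 0.7], [1.2, 0.2]]
print(igd(toy_ref, toy_ref))        # 0.0
print(igd(toy_shifted, toy_ref))    # > 0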
2,576 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Weibel instability
This notebook shows a demonstration of the Weibel (electromagnetic filamentation) instability in the collision of neutral electron/positron plasma clouds. In this example we collide two neutral electron / positron plasma clouds, moving out of the simulation plane with opposing generalized velocities of $u_3 = \pm 0.6 \mathrm{c}$. All simulation species have the same thermal velocity $u_{th} = 0.1 \mathrm{c}$ in all directions.
Step1: We want to look at the energy evolution, so we run a customized loop to store field energy at every iteration
Step2: Current filamentation
The Weibel instability will lead to current filamentation, as shown below
Step3: Evolution of magnetic field energy
The instability will cause a transfer of energy between the kinetic energy of the particles and the transverse magnetic field. For this simulation, the instability saturates around $t \simeq 8 \,\, \omega_n^{-1}$
Step4: Magnetic Field structure
The plots below show the structure of the magnetic field at the end of the simulation. The magnetic field structures act as a buffer separating regions of opposite electric current
Step5: Charge Density
We present the charge density for both the up moving (defined as positive / red) and down moving (defined as negative / blue) positron species. This is made possible by the fact that these do not overlap. | Python Code:
import em2d as zpic
eup = zpic.Species( "electrons up", -1.0, ppc = [2,2],
ufl = [0.0,0.0,0.6], uth = [0.1,0.1,0.1] )
pup = zpic.Species( "positrons up", +1.0, ppc = [2,2],
ufl = [0.0,0.0,0.6], uth = [0.1,0.1,0.1] )
edown = zpic.Species( "electrons down", -1.0, ppc = [2,2],
ufl = [0.0,0.0,-0.6], uth = [0.1,0.1,0.1] )
pdown = zpic.Species( "positrons down", +1.0, ppc = [2,2],
ufl = [0.0,0.0,-0.6], uth = [0.1,0.1,0.1] )
dt = 0.07
sim = zpic.Simulation( nx = [128,128], box = [12.8,12.8], dt = dt,
species = [eup,pup,edown,pdown] )
Explanation: Weibel instability
This notebook shows a demonstration of the Weibel (electromagnetic filamentation) instability in the collision of neutral electron/positron plasma clouds. In this example we collide two neutral electron / positron plasma clouds, moving out of the simulation plane with opposing generalized velocities of $u_3 = \pm 0.6 \mathrm{c}$. All simulation species have the same thermal velocity $u_{th} = 0.1 \mathrm{c}$ in all directions.
End of explanation
import numpy as np
tmax = 15
niter = int(tmax / dt) + 1
Bperp = np.zeros(niter)
norm = 0.5 * sim.emf.nx[0] * sim.emf.nx[1]
print("\nRunning simulation up to t = {:g} ...".format(tmax))
while sim.t < tmax:
print('n = {:d}, t = {:g}'.format(sim.n,sim.t), end = '\r')
# Get energy in perpendicular B field components
Bperp[sim.n] = np.sum(sim.emf.Bx**2+sim.emf.By**2) * norm
sim.iter()
print("\nDone.")
Explanation: We want to look at the energy evolution, so we run a customized loop to store field energy at every iteration:
End of explanation
import matplotlib.pyplot as plt
J3 = sim.current.Jz
range = [[0,sim.box[0]],[0,sim.box[1]]]
plt.imshow( J3, interpolation = 'bilinear', origin = 'lower',
extent = ( range[0][0], range[0][1], range[1][0], range[1][1] ),
aspect = 'auto', cmap = 'Spectral',clim = (-1.6,1.6))
plt.colorbar().set_label('Electric Current')
plt.xlabel("$x_1$")
plt.ylabel("$x_2$")
plt.title("Electric Current\nt = {:g}".format(sim.t))
plt.show()
Explanation: Current filamentation
The Weibel instability will lead to current filamentation, as shown below:
End of explanation
import matplotlib.pyplot as plt
plt.plot(np.linspace(0, sim.t, num = sim.n),Bperp)
plt.yscale('log')
plt.ylim(ymin=1e4)
plt.grid(True)
plt.xlabel("$t$ [$1/\omega_n$]")
plt.ylabel("$B_{\perp}$ energy [$m_e c^2$]")
plt.title("Magnetic field energy")
plt.show()
Explanation: Evolution of magnetic field energy
The instability will cause a transfer of energy between the kinetic energy of the particles and the transverse magnetic field. For this simulation, the instability saturates around $t \simeq 8 \,\, \omega_n^{-1}$:
End of explanation
import matplotlib.pyplot as plt
import numpy as np
Bperp = np.sqrt( sim.emf.Bx**2 + sim.emf.By**2 )
range = [[0,sim.box[0]],[0,sim.box[1]]]
plt.imshow( Bperp, interpolation = 'bilinear', origin = 'lower',
extent = ( range[0][0], range[0][1], range[1][0], range[1][1] ),
aspect = 'auto', cmap = 'jet')
plt.colorbar().set_label('Magnetic Field')
plt.xlabel("$x_1$")
plt.ylabel("$x_2$")
plt.title("Magnetic Field\nt = {:g}".format(sim.t))
plt.show()
y,x = np.mgrid[ 0:sim.nx[1],0:sim.nx[0] ]
y = y*(sim.box[1]/sim.nx[1])
x = x*(sim.box[0]/sim.nx[0])
plt.streamplot( x,y,sim.emf.Bx, sim.emf.By, linewidth = 1.0, density = 1.5,
color = Bperp, cmap = 'viridis' )
plt.colorbar().set_label('Magnetic Field')
plt.xlabel("$x_1$")
plt.ylabel("$x_2$")
plt.title("Magnetic Field\nt = {:g}".format(sim.t))
plt.show()
Explanation: Magnetic Field structure
The plots below show the structure of the magnetic field at the end of the simulation. The magnetic field structures act as a buffer separating regions of opposite electric current:
End of explanation
import matplotlib.pyplot as plt
range = [[0,sim.box[0]],[0,sim.box[1]]]
plt.imshow( pup.charge() - pdown.charge(), interpolation = 'bilinear', origin = 'lower',
extent = ( range[0][0], range[0][1], range[1][0], range[1][1] ),
aspect = 'auto', cmap = 'seismic',clim = (-8.6,8.6))
plt.colorbar().set_label('Density')
plt.xlabel("$x_1$")
plt.ylabel("$x_2$")
plt.title("Positron density\nt = {:g}".format(sim.t))
plt.show()
Explanation: Charge Density
We present the charge density for both the up moving (defined as positive / red) and down moving (defined as negative / blue) positron species. This is made possible by the fact that these do not overlap.
End of explanation |
2,577 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Optimization of differentiable scalar functions with Sympy
Optimization yields elegant solutions both in theory and in certain applications.
Optimization theory uses elements that start with elementary calculus and basic linear algebra, and then extend to functional and convex analysis.
Applications of optimization involve science, engineering, economics, finance and industry.
The broad and growing use of optimization makes it essential for students and professionals in any branch of science and technology.
Reference
- http://www.math.uwaterloo.ca/~hwolkowi//henry/reports/talks.d/t06talks.d/06msribirs.d/optimportance.shtml
Step1: Let's look at the graph...
Step2: Another way to do the above
Step3: The converse of the previous theorem is not true.
Activity
Consider $g(x)=x^3$.
- Using sympy, show that $g'(0)=0$.
- However, rule out that $x=0$ is an extremum of $g(x)$ by looking at its graph.
2. Second derivative test
Let $f(x)$ be a function such that $f'(c)=0$ and whose second derivative exists on an open interval containing $c$.
- If $f''(c)>0$, then $f(c)$ is a relative minimum.
- If $f''(c)<0$, then $f(c)$ is a relative maximum.
- If $f''(c)=0$, then the test is inconclusive.
Example
Show, using sympy, that the function $f(x)=x^2$ has a relative minimum at $x=0$.
We already saw that $f'(0)=0$. Note that
Step4: Therefore, by the second derivative test, $f(0)=0$ is a relative minimum (in fact, the global minimum).
Activity
What happens with $g(x)=x^3$ when we try to use the second derivative test? (use sympy).
3. Method for determining the absolute extrema of a continuous function y=f(x) on [a,b]
Evaluate $f$ at the endpoints $x=a$ and $x=b$.
Determine all the critical values $c_1, c_2, c_3, \dots, c_n$ in $(a,b)$.
Evaluate $f$ at all the critical values.
The largest and the smallest of the values in the list $f(a), f(b), f(c_1), f(c_2), \dots, f(c_n)$ are the absolute maximum and the absolute minimum, respectively, of f on the interval [a,b].
Example
Determine the absolute extrema of $f(x)=x^2-6x$ on $\left[0,5\right]$.
We obtain the critical points of $f$ on $\left[0,5\right]$
Step5: We evaluate $f$ at the endpoints and at the critical points
Step6: We conclude that the absolute maximum of $f$ on $\left[0,5\right]$ is $0$, attained at $x=0$, and that the absolute minimum is $-9$, attained at $x=3$.
Step7: Activity
Determine the absolute extreme values of $h(x)=x^3-3x$ on $\left[-2.2,1.8\right]$ using sympy. Show them on a graph.
In several variables...
The procedure is analogous.
If a function $f:\mathbb{R}^n\to\mathbb{R}$ attains a local maximum or minimum at $\boldsymbol{x}=\boldsymbol{c}\in\mathbb{R}^n$, and $f$ is differentiable at $\boldsymbol{x}=\boldsymbol{c}$, then $\left.\frac{\partial f}{\partial \boldsymbol{x}}\right|_{\boldsymbol{x}=\boldsymbol{c}}=\boldsymbol{0}$ (all partial derivatives at $\boldsymbol{x}=\boldsymbol{c}$ are zero).
# Symbolic computation library
import sympy as sym
# To print in TeX format
from sympy import init_printing; init_printing(use_latex='mathjax')
sym.var('x', real = True)
f = x**2
f
df = sym.diff(f, x)
df
x_c = sym.solve(df, x)
x_c[0]
Explanation: Optimization of differentiable scalar functions with Sympy
Optimization yields elegant solutions both in theory and in certain applications.
Optimization theory uses elements that start with elementary calculus and basic linear algebra, and then extend to functional and convex analysis.
Applications of optimization involve science, engineering, economics, finance and industry.
The broad and growing use of optimization makes it essential for students and professionals in any branch of science and technology.
Reference
- http://www.math.uwaterloo.ca/~hwolkowi//henry/reports/talks.d/t06talks.d/06msribirs.d/optimportance.shtml
Some applications are:
Engineering
Finding the equilibrium composition of a mixture of different atoms.
Route planning for a robot (or an unmanned aerial vehicle).
Optimal allocation of resources.
Allocation of flight routes.
Finding an optimal diet.
Financial optimization
Risk management.
In this lecture we will look at basic aspects of optimization. Specifically, we will see how to obtain maxima and minima of a scalar function of one variable (as in differential calculus).
We base all the results on the following theorems:
1. Fermat's theorem (analysis)
If a function $f(x)$ attains a local maximum or minimum at $x=c$, and if the derivative $f'(c)$ exists at the point $c$, then $f'(c) = 0$.
Example
We know that the function $f(x)=x^2$ has a global minimum at $x=0$, since
$$f(x)=x^2\geq0,\qquad\text{and}\qquad f(x)=x^2=0 \qquad\text{if and only if}\qquad x=0.$$
End of explanation
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
f_num = sym.lambdify([x], f, 'numpy')
x_vec = np.linspace(-5, 5, 100)
plt.plot(x_vec, f_num(x_vec))
plt.xlabel('$x$')
plt.ylabel('$x^2$')
plt.show()
Explanation: Let's look at the graph...
End of explanation
def f(x):
return x**2
f_sym = f(x)
f_sym
df = sym.diff(f(x), x)
df
x_c = sym.solve(df, x)
x_c[0]
Explanation: Another way to do the above
End of explanation
f = x**2
#d2f = sym.diff(f, x, x)
d2f = sym.diff(f, x, 2)
d2f
d2f>0
Explanation: The converse of the previous theorem is not true.
Activity
Consider $g(x)=x^3$.
- Using sympy, show that $g'(0)=0$.
- However, rule out that $x=0$ is an extremum of $g(x)$ by looking at its graph.
2. Second derivative test
Let $f(x)$ be a function such that $f'(c)=0$ and whose second derivative exists on an open interval containing $c$.
- If $f''(c)>0$, then $f(c)$ is a relative minimum.
- If $f''(c)<0$, then $f(c)$ is a relative maximum.
- If $f''(c)=0$, then the test is inconclusive.
Example
Show, using sympy, that the function $f(x)=x^2$ has a relative minimum at $x=0$.
We already saw that $f'(0)=0$. Note that:
End of explanation
f = x**2-6*x
f
df = sym.diff(f, x)
df
x_c = sym.solve(df, x)
x_c
Explanation: Therefore, by the second derivative test, $f(0)=0$ is a relative minimum (in fact, the global minimum).
Activity
What happens with $g(x)=x^3$ when we try to use the second derivative test? (use sympy).
3. Method for determining the absolute extrema of a continuous function y=f(x) on [a,b]
Evaluate $f$ at the endpoints $x=a$ and $x=b$.
Determine all the critical values $c_1, c_2, c_3, \dots, c_n$ in $(a,b)$.
Evaluate $f$ at all the critical values.
The largest and the smallest of the values in the list $f(a), f(b), f(c_1), f(c_2), \dots, f(c_n)$ are the absolute maximum and the absolute minimum, respectively, of f on the interval [a,b].
Example
Determine the absolute extrema of $f(x)=x^2-6x$ on $\left[0,5\right]$.
We obtain the critical points of $f$ on $\left[0,5\right]$:
End of explanation
f.subs(x, 0), f.subs(x, 5), f.subs(x, x_c[0])
Explanation: Evaluamos $f$ en los extremos y en los puntos críticos:
End of explanation
f_num = sym.lambdify([x], f, 'numpy')
x_vec = np.linspace(0, 5, 100)
plt.figure(figsize=(8,6))
plt.plot(x_vec, f_num(x_vec), 'k', label = '$y=f(x)$')
plt.plot([0], [0], '*r', label = '$(0,0=\max_{0\leq x\leq 5} f(x))$')
plt.plot([3], [-9], '*b', label = '$(3,-9=\min_{0\leq x\leq 5} f(x))$')
plt.legend(loc='best')
plt.xlabel('x')
plt.show()
Explanation: We conclude that the absolute maximum of $f$ on $\left[0,5\right]$ is $0$, attained at $x=0$, and that the absolute minimum is $-9$, attained at $x=3$.
End of explanation
sym.var('x y')
x, y
def f(x, y):
return x**2 + y**2
dfx = sym.diff(f(x,y), x)
dfy = sym.diff(f(x,y), y)
dfx, dfy
xy_c = sym.solve([dfx, dfy], [x, y])
xy_c
x_c, y_c = xy_c[x], xy_c[y]
d2fx = sym.diff(f(x,y), x, 2)
d2fy = sym.diff(f(x,y), y, 2)
dfxy = sym.diff(f(x,y), x, y)
Jf = sym.Matrix([[d2fx, dfxy], [dfxy, d2fy]])
Jf.eigenvals()
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
x = np.linspace(-2, 2, 100)
y = x
X, Y = np.meshgrid(x, y)
ax.plot_surface(X, Y, f(X, Y))
ax.plot([0], [0], [0], '*r')
Explanation: Activity
Determine the absolute extreme values of $h(x)=x^3-3x$ on $\left[-2.2,1.8\right]$ using sympy. Show them on a graph.
In several variables...
The procedure is analogous.
If a function $f:\mathbb{R}^n\to\mathbb{R}$ attains a local maximum or minimum at $\boldsymbol{x}=\boldsymbol{c}\in\mathbb{R}^n$, and $f$ is differentiable at the point $\boldsymbol{x}=\boldsymbol{c}$, then $\left.\frac{\partial f}{\partial \boldsymbol{x}}\right|_{\boldsymbol{x}=\boldsymbol{c}}=\boldsymbol{0}$ (all partial derivatives at the point $\boldsymbol{x}=\boldsymbol{c}$ are zero).
Second derivative test: to see whether it is a maximum or a minimum, take the second derivative (the Hessian matrix) and check for negative or positive definiteness, respectively.
If the problem is restricted to a certain region there are specific techniques; the most general, but also the most complex, is the method of Lagrange multipliers.
End of explanation |
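# Side note (illustrative sketch, not part of the original notebook): sympy can
# build the Hessian directly. The symbols are re-declared here because x and y
# were reused as numpy arrays for the plot above.
u, v = sym.symbols('u v', real=True)
H = sym.hessian(f(u, v), (u, v))
# Both eigenvalues are 2 > 0, so the critical point is a minimum
H, H.eigenvals()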
2,578 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
1. Frame the Problem
"I think, therefore I am"
What type of questions can be answered?
Developing a hypothesis drive approach.
Making the case.
Questions we will answer on alcohol topic across countries
Descriptive
Which are the leading alcohol drinking countries by beer, wine and spirit?
What is the trend in alcohol consumption in Singapore across the years?
Exploratory / Inferential
Has alcohol consumption declined in Singapore in the recent years?
2. Acquire the Data
"Data is the new oil"
Download from an internal system
Obtained from client, or other 3rd party
Extracted from a web-based API
Scraped from a website
Extracted from a PDF file
Gathered manually and recorded
We will using the Global Information System on Alcohol and Health (GISAH) maintained by WHO to answer the questions.
The WHO Global Information System on Alcohol and Health (GISAH) provides easy and rapid access to a wide range of alcohol-related health indicators. It is an essential tool for assessing and monitoring the health situation and trends related to alcohol consumption, alcohol-related harm, and policy responses in countries.
You can see an overview at http
Step1: Principle
Step2: Exercise
Load the drinks1960 and drinks1980 csv files and fix the column header
Step3: 3. Refine the Data
"Data is messy"
We will be performing the following operations on our alcohol data to refine it
- Remove e.g. remove redundant data from the data frame
- Derive e.g. Country and Beverage from the description field
- Missing e.g. Check for missing or incomplete data
- Merge e.g. Take the three dataframes and make them one
- Filter e.g. exclude based on location
Other stuff you may need to do to refine are...
- Parse e.g. extract date from year and month column
- Quality e.g. Check for duplicates, accuracy, unusual data
- Convert e.g. free text to coded value
- Calculate e.g. percentages, proportion
- Aggregate e.g. rollup by year, cluster by area
- Sample e.g. extract a representative data
- Summary e.g. show summary stats like mean
Principle
Step4: Principle
Step5: Principle
Step6: We can now drop the description column from our dataframe
Step7: Principle
Step8: Lets check in the value whether we have numeric or not
Step9: We will use pd.to_numeric which will coerce to NaN everything that cannot be converted to a numeric value, so strings that represent numeric values will not be removed. For example '1.25' will be recognized as the numeric value 1.25
Step10: Principle
Step11: Principle
Step12: PRINCIPLE
Step13: 4. Explore the Data
"I don't know, what I don't know"
Understand Data Structure & Types
Explore single variable graphs - (Quantitative, Categorical)
Explore dual variable graphs - (Q & Q, Q & C, C & C)
Explore multi variable graphs
We want to first visually explore the data to see if we can confirm some of our initial hypotheses as well as make new hypothesis about the problem we are trying to solve.
Step14: Model the Data
"All models are wrong, Some of them are useful"
Statistical testing
The power and limits of models
Tradeoff between Prediction Accuracy and Model Interpretability
Assessing Model Accuracy
Regression models (Simple, Multiple)
Classification model
We want to test whether the alcohol consumption has really declined in recent times. To do that we will do a t-test, in which we will check whether the consumption before and after 1990 are really different.
https
Step15: Share the Insight
"The goal is to turn data into insight"
Why do we need to communicate insight?
Types of communication - Exploration vs. Explanation
Explanation | Python Code:
# Import the libraries we need, which is Pandas and Numpy
import pandas as pd
import numpy as np
df1 = pd.read_csv('data/drinks2000.csv')
df1.head()
df1.shape
Explanation: 1. Frame the Problem
"I think, therefore I am"
What type of questions can be answered?
Developing a hypothesis drive approach.
Making the case.
Questions we will answer on alcohol topic across countries
Descriptive
Which are the leading alcohol drinking countries by beer, wine and spirit?
What is the trend in alcohol consumption in Singapore across the years?
Exploratory / Inferential
Has alcohol consumption declined in Singapore in the recent years?
2. Acquire the Data
"Data is the new oil"
Download from an internal system
Obtained from client, or other 3rd party
Extracted from a web-based API
Scraped from a website
Extracted from a PDF file
Gathered manually and recorded
We will using the Global Information System on Alcohol and Health (GISAH) maintained by WHO to answer the questions.
The WHO Global Information System on Alcohol and Health (GISAH) provides easy and rapid access to a wide range of alcohol-related health indicators. It is an essential tool for assessing and monitoring the health situation and trends related to alcohol consumption, alcohol-related harm, and policy responses in countries.
You can see an overview at http://www.who.int/gho/alcohol/en/.
Principle: Load the Data
The datasets from GISAH are available at http://apps.who.int/gho/data/node.main.GISAH?lang=en&showonly=GISAH
We want the alcohol consumption by country
Recorded alcohol per capita consumption, 1960-1979 by country - http://apps.who.int/gho/data/node.main.A1025?lang=en&showonly=GISAH
Recorded alcohol per capita consumption, 1980-1999 by country - http://apps.who.int/gho/data/node.main.A1024?lang=en&showonly=GISAH
Recorded alcohol per capita consumption, 2000 onwards by country http://apps.who.int/gho/data/node.main.A1026?lang=en&showonly=GISAH
End of explanation
years1 = list(range(2015, 1999, -1))
years1
header1 = ['description']
header1.extend(years1)
header1
df1.columns = header1
df1.head()
Explanation: Principle: Fix the Column Header
End of explanation
df2 = pd.read_csv('data/drinks1980.csv')
years2 = list(range(1999, 1979, -1))
header2 = ['description']
header2.extend(years2)
df2.columns = header2
df2.head()
df3 = pd.read_csv('data/drinks1960.csv')
years3 = list(range(1979, 1959, -1))
header3 = ['description']
header3.extend(years3)
df3.columns = header3
df3.head()
Explanation: Exercise
Load the drinks1960 and drinks1980 csv files and fix the column header
End of explanation
df1.head()
df1 = pd.melt(df1, id_vars=['description'], var_name='year')
df1.head()
df2 = pd.melt(df2, id_vars=['description'], var_name='year')
df3 = pd.melt(df3, id_vars=['description'], var_name='year')
Explanation: 3. Refine the Data
"Data is messy"
We will be performing the following operations on our alcohol data to refine it
- Remove e.g. remove redundant data from the data frame
- Derive e.g. Country and Beverage from the description field
- Missing e.g. Check for missing or incomplete data
- Merge e.g. Take the three dataframes and make them one
- Filter e.g. exclude based on location
Other stuff you may need to do to refine are...
- Parse e.g. extract date from year and month column
- Quality e.g. Check for duplicates, accuracy, unusual data
- Convert e.g. free text to coded value
- Calculate e.g. percentages, proportion
- Aggregate e.g. rollup by year, cluster by area
- Sample e.g. extract a representative data
- Summary e.g. show summary stats like mean
Principle: melt to convert from Wide format to Tall format
We will need to convert the data frame from wide format to tall format (and vice versa). This is needed as we want to combine the three data frame and we can only do that once we have the data in a tall format
End of explanation
df1.shape
df2.shape
df = df1.append(df2)
df.shape
df = df.append(df3)
df.shape
Explanation: Principle: append one dataframe to another
End of explanation
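# Side note (illustrative sketch, not part of the original notebook): pd.concat
# stacks the same three frames in a single call and is the more general tool.
df_all = pd.concat([df1, df2, df3], ignore_index=True)
df_all.shape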
df.head()
df['country'] = df.description.str.split(';').str[0]
df.head()
df['beverage'] = df.description.str.split(";").str[-1]
df.tail()
Explanation: Principle: str to extract text from a strings
String manipulation is very common and we often need to extract a substring from a long string. In this case, we want to get the country and type of beverage from the description
End of explanation
df.drop('description', axis = 1, inplace= True)
df.head()
Explanation: We can now drop the description column from our dataframe
End of explanation
df.dtypes
df.year.unique()
df.year = pd.to_numeric(df.year)
df.dtypes
df.head()
Explanation: Principle: Dealing with Missing Values
By “missing” it simply mean null or “not present for whatever reason”. Many data sets have missing data, either because it exists and was not collected or it never existed. Pandas default way for treating missing value is to mark it as NaN
End of explanation
df.value.unique()
df[df.value.str.isnumeric() == False].shape
Explanation: Lets check in the value whether we have numeric or not
End of explanation
df.value = pd.to_numeric(df.value, errors='coerce')
df.value.unique()
df.dtypes
df.country.unique()
df.beverage.unique()
# Convert from an np array to a list
beverage_old = df.beverage.unique().tolist()
beverage_old
# Create a new list with white space removed and shorter names
beverage_new = ['all', 'beer', 'wine', 'spirits', 'others']
beverage_new
df.beverage = df.beverage.replace(beverage_old, beverage_new)
Explanation: We will use pd.to_numeric which will coerce to NaN everything that cannot be converted to a numeric value, so strings that represent numeric values will not be removed. For example '1.25' will be recognized as the numeric value 1.25
End of explanation
df.dtypes
df['serving'] = round(df['value']/0.018, 0)
df.head()
Explanation: Principle: mutate to create new variables
It is hard to think of alcohol in terms of 'litres of pure alcohol content'. It is easy to understand in terms of number of typical serving of drinks.
http://rethinkingdrinking.niaaa.nih.gov/How-much-is-too-much/what-counts-as-a-drink/whats-A-Standard-drink.aspx
For one standard serving:
- One glass of wine: 12% alcohol in a 5 fl oz standard wine-glass serving (0.6 fl oz of pure alcohol)
- One can of beer: 5% alcohol in a 12 fl oz standard beer-can serving (0.6 fl oz of pure alcohol)
- One shot of spirits: 40% alcohol in a 1.5 fl oz standard shot of spirit (0.6 fl oz of pure alcohol)
1 US fluid ounce (fl oz) = 0.0295735 litres (l) ~ 30ml
So for:
- 1 Standard serving of Wine = 12% * 5 * 0.03= 0.018 litres of pure alcohol
- 1 Standard serving of Beer = 5% * 12 * 0.03= 0.018 litres of pure alcohol
- 1 Standard serving of Spirit = 40% * 1.5 * 0.03 = 0.018 litres of pure alcohol
End of explanation
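# A quick hedged check of the 0.018 litre conversion factor used above,
# derived purely from the percentages and serving sizes listed in the text.
wine = 0.12 * 5 * 0.03
beer = 0.05 * 12 * 0.03
spirits = 0.40 * 1.5 * 0.03
wine, beer, spirits  # each ~0.018 litres of pure alcohol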
df2010 = df[df.year == 2010]
df2010.head()
dfSing = df[df.country == 'Singapore']
df2010Sing = df[(df.year == 2010) & (df.country == 'Singapore')]
df2010Sing.head()
Explanation: Principle: filter for rows in a dataframe
To select the rows from the dataframe
End of explanation
# Let us create a pivot for just serving in 2010
df2010Serving = pd.pivot_table(df2010, values = "serving", columns = "beverage", index = "country")
df2010Serving = df2010Serving.reset_index()
df2010Serving.head()
dfSing.head()
# Let us create a pivot for just serving in 2010
dfSingServing = pd.pivot_table(dfSing, values = "serving", columns = "beverage", index = "year")
dfSingServing = dfSingServing.reset_index()
dfSingServing.head()
Explanation: Principle: Pivot Table
A pivot table is a way to summarize dataframe data into rows, columns and values
End of explanation
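# A hedged sketch of looking up a single country's row in the 2010 pivot
# (df2010Serving already has country as an ordinary column after reset_index).
df2010Serving[df2010Serving.country == 'Singapore']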
# Load the visualisation libraries - Matplotlib
import matplotlib.pyplot as plt
# Let us see the output plots in the notebook itself
%matplotlib inline
# Set some parameters to get good visuals - style to ggplot and size to 15,10
plt.style.use('ggplot')
plt.rcParams['figure.figsize'] = (15, 10)
# Sort them on Beer values
df2010Serving.sort_values(by = "spirits", ascending = False, inplace=True)
df2010Serving.head(10)
# Plot the Data
df2010Serving.head(40).plot(kind ="barh", x = 'country', y = 'beer')
# Plot the histogram
df2010Serving.beer.plot(kind ="hist",bins = 30, alpha=0.5)
# Plots the Singapore Data
dfSingServing.plot(kind = "line", x = "year", y = ['all', 'beer', 'wine', 'spirits', 'others'])
Explanation: 4. Explore the Data
"I don't know, what I don't know"
Understand Data Structure & Types
Explore single variable graphs - (Quantitative, Categorical)
Explore dual variable graphs - (Q & Q, Q & C, C & C)
Explore multi variable graphs
We want to first visually explore the data to see if we can confirm some of our initial hypotheses, as well as make new hypotheses about the problem we are trying to solve.
End of explanation
dfSing.head()
dfSingAll = dfSing[dfSing.beverage == 'all'].copy()
dfSingAll.head()
# Create a new column
dfSingAll['split'] = dfSingAll.year < 1990
dfSingAll.head()
# Let us plot the two samples
dfSingAll.hist(column = "serving", by = "split", sharex = True, sharey= True)
from scipy import stats
np.random.seed(12345678)
sampleA = dfSingAll[dfSingAll.split == True].serving
sampleB = dfSingAll[dfSingAll.split == False].serving
sampleA.shape
sampleB.shape
stats.ttest_ind(sampleA, sampleB, equal_var = False, nan_policy = 'omit')
Explanation: Model the Data
"All models are wrong, Some of them are useful"
Statistical testing
The power and limits of models
Tradeoff between Prediction Accuracy and Model Interpretability
Assessing Model Accuracy
Regression models (Simple, Multiple)
Classification model
We want to test whether alcohol consumption has really declined in recent times. To do that we will do a t-test, in which we will check whether the consumption before and after 1990 is really different.
https://en.wikipedia.org/wiki/Student%27s_t-test#Independent_two-sample_t-test
https://en.wikipedia.org/wiki/Welch%27s_t-test
End of explanation
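# A hedged sketch of reading the t-test result programmatically; the 0.05 threshold
# is a conventional choice and is an assumption, not something fixed by the notebook.
t_stat, p_value = stats.ttest_ind(sampleA, sampleB, equal_var=False, nan_policy='omit')
print('t = %.3f, p = %.4f' % (t_stat, p_value))
if p_value < 0.05:
    print('Reject the null: mean servings before and after 1990 differ')
else:
    print('No significant difference detected')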
df2010Serving.head()
beerMean = df2010Serving.beer.mean()
beerMean
wineMean = df2010Serving.wine.mean()
wineMean
df2010Serving.plot(kind = "scatter", x ="beer", y= "wine", s = df2010Serving['all'], alpha = 0.7)
plt.axvline(beerMean, color='r')
plt.axhline(wineMean, color='r')
Explanation: Share the Insight
"The goal is to turn data into insight"
Why do we need to communicate insight?
Types of communication - Exploration vs. Explanation
Explanation: Telling a story with data
Exploration: Building an interface for people to find stories
End of explanation |
2,579 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2020 The TensorFlow Authors.
Step1: Recurrent Neural Networks (RNN) with Keras
<table class="tfo-notebook-buttons" align="left">
<td><a target="_blank" href="https
Step2: Built-in RNN layers
Step3: Built-in RNNs support a number of useful features:
Recurrent dropout, via the dropout and recurrent_dropout arguments
Ability to process an input sequence in reverse, via the go_backwards argument
Loop unrolling (which can lead to a big speedup when processing short sequences on CPU), via the unroll argument
...and more.
For more information, see the RNN API documentation.
Outputs and states
By default, the output of an RNN layer contains a single vector per sample. This vector is the RNN cell output corresponding to the last timestep, containing information about the entire input sequence. The shape of this output is (batch_size, units), where units corresponds to the units argument passed to the layer's constructor.
An RNN layer can also return the entire sequence of outputs for each sample (one vector per timestep per sample) if you set return_sequences=True. The shape of this output is (batch_size, timesteps, units).
Step4: In addition, an RNN layer can return its final internal state(s). The returned states can be used to resume the RNN execution later, or to initialize another RNN. This setting is commonly used in encoder-decoder sequence-to-sequence models, where the encoder's final state is used as the initial state of the decoder.
To configure an RNN layer to return its internal state, set the return_state parameter to True when creating the layer. Note that LSTM has 2 state tensors, but GRU only has one.
To configure the initial state of the layer, just call the layer with the additional keyword argument initial_state. Note that the shape of the state needs to match the unit size of the layer, as in the example below.
Step5: RNN layers and RNN cells
In addition to the built-in RNN layers, the RNN API also provides cell-level APIs. Unlike RNN layers, which process whole batches of input sequences, the RNN cell only processes a single timestep.
The cell is the inside of the for loop of an RNN layer. Wrapping a cell inside a keras.layers.RNN layer gives you a layer capable of processing batches of sequences, e.g. RNN(LSTMCell(10)).
Mathematically, RNN(LSTMCell(10)) produces the same result as LSTM(10). In fact, the implementation of this layer in TF v1.x was just creating the corresponding RNN cell and wrapping it in an RNN layer. However, using the built-in GRU and LSTM layers enables the use of CuDNN, so you may see better performance.
There are three built-in RNN cells, each of them corresponding to the matching RNN layer.
keras.layers.SimpleRNNCell corresponds to the SimpleRNN layer.
keras.layers.GRUCell corresponds to the GRU layer.
keras.layers.LSTMCell corresponds to the LSTM layer.
The cell abstraction, together with the generic keras.layers.RNN class, makes it very easy to implement custom RNN architectures for your research.
Cross-batch statefulness
When processing very long sequences (possibly infinite), you may want to use the pattern of cross-batch statefulness.
Normally, the internal state of an RNN layer is reset every time it sees a new batch (i.e. every sample seen by the layer is assumed to be independent of the past). The layer will only maintain a state while processing a given sample.
If you have very long sequences though, it is useful to break them into shorter sequences and feed these shorter sequences sequentially into the RNN layer without resetting the layer's state. That way, the layer can retain information about the entirety of the sequence, even though it is only seeing one sub-sequence at a time.
You can do this by setting stateful=True in the constructor.
If you have a sequence s = [t0, t1, ... t1546, t1547], you would split it into e.g.
s1 = [t0, t1, ... t100]
s2 = [t101, ... t201]
...
s16 = [t1501, ... t1547]
and then process it via
python
lstm_layer = layers.LSTM(64, stateful=True)
for s in sub_sequences
Step6: RNN state reuse
<a id="rnn_state_reuse"></a>
The recorded states of the RNN layer are not included in layer.weights(). If you would like to reuse the state from an RNN layer, you can retrieve the states value via layer.states and use it as the initial state for a new layer through the Keras functional API, like new_layer(inputs, initial_state=layer.states), or via model subclassing.
Also note that a Sequential model cannot be used in this case, since it only supports layers with a single input and output; the extra input of the initial state makes it impossible to use here.
Step7: Bidirectional RNNs
For sequences other than time series (e.g. text), it is often the case that an RNN model can perform better if it not only processes the sequence from start to end, but also backwards.
Step8: Under the hood, Bidirectional will copy the RNN layer passed in and flip the go_backwards field of the newly copied layer, so that it processes the inputs in reverse order.
The output of the Bidirectional RNN is, by default, the concatenation of the forward layer output and the backward layer output. If you need a different merging behavior, e.g. summation, change the merge_mode parameter in the Bidirectional wrapper constructor. For more details about Bidirectional, check the API docs.
Performance optimization and CuDNN kernels
In TensorFlow 2.0, the built-in LSTM and GRU layers have been updated to leverage CuDNN kernels by default when a GPU is available. With this change, the prior keras.layers.CuDNNLSTM/CuDNNGRU layers have been deprecated, and you can build your model without worrying about the hardware it will run on.
Since the CuDNN kernel is built with certain assumptions, the layer will not be able to use the CuDNN kernel if you change the defaults of the built-in LSTM or GRU layers, for example:
Changing the activation function from tanh to something else
Changing the recurrent_activation function from sigmoid to something else
Using recurrent_dropout > 0
Setting unroll to True (which forces LSTM/GRU to decompose the inner tf.while_loop into an unrolled for loop)
Setting use_bias to False
Using masking when the input data is not strictly right-padded (if the mask corresponds to strictly right-padded data, CuDNN can still be used; this is the most common case)
For the detailed list of constraints, please see the documentation for the LSTM and GRU layers.
Using CuDNN kernels when available
Let's build a simple LSTM model to demonstrate the performance difference.
We'll use as input sequences the sequence of rows of MNIST digits (treating each row of pixels as a timestep), and we'll predict the digit's label.
Step9: Let's load the MNIST dataset.
Step10: Let's create a model instance and train it.
We choose sparse_categorical_crossentropy as the loss function for the model. The output of the model has a shape of [batch_size, 10]. The target for the model is an integer vector, where each integer is in the range of 0 to 9.
Step11: Now, let's compare to a model that does not use the CuDNN kernel.
Step12: When running on a machine with an NVIDIA GPU and CuDNN installed, the model built with CuDNN is much faster to train compared to the model that uses the regular TensorFlow kernel.
The same CuDNN-enabled model can also be used to run inference in a CPU-only environment. The tf.device annotation below just forces the device placement. The model will run on CPU by default if no GPU is available.
You simply don't have to worry about the hardware you're running on anymore. Isn't that pretty cool?
Step13: RNNs with list/dict inputs, or nested inputs
Nested structures allow implementers to include more information within a single timestep. For example, a video frame could have audio and video input at the same time. The data shape in this case could be
[batch, timestep, {"video"
Step14: Build an RNN model with nested input/output
Let's build a Keras model that uses a keras.layers.RNN layer and the custom cell we just defined.
Step15: Train the model with randomly generated data
Since there isn't a good candidate dataset for this model, we use random Numpy data for demonstration. | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2020 The TensorFlow Authors.
End of explanation
import numpy as np
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
Explanation: Recurrent Neural Networks (RNN) with Keras
<table class="tfo-notebook-buttons" align="left">
<td><a target="_blank" href="https://www.tensorflow.org/guide/keras/rnn"><img src="https://www.tensorflow.org/images/tf_logo_32px.png">View on TensorFlow.org</a></td>
<td><a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ko/guide/keras/rnn.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png">Run in Google Colab</a></td>
<td><a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/ko/guide/keras/rnn.ipynb"> <img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png"> View source on GitHub</a></td>
<td><a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/ko/guide/keras/rnn.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png">Download notebook</a></td>
</table>
Introduction
Recurrent neural networks (RNN) are a class of neural networks that is powerful for modeling sequence data such as time series or natural language.
Schematically, an RNN layer uses a for loop to iterate over the timesteps of a sequence, while maintaining an internal state that encodes information about the timesteps it has seen so far.
The Keras RNN API is designed with a focus on:
Ease of use: the built-in keras.layers.RNN, keras.layers.LSTM and keras.layers.GRU layers let you quickly build recurrent models without having to make difficult configuration choices.
Ease of customization: you can define your own RNN cell layer (the inner part of the for loop) with custom behavior and use it with the generic keras.layers.RNN layer (the for loop itself). This allows you to quickly prototype different research ideas in a flexible way with minimal code.
Setup
End of explanation
model = keras.Sequential()
# Add an Embedding layer expecting input vocab of size 1000, and
# output embedding dimension of size 64.
model.add(layers.Embedding(input_dim=1000, output_dim=64))
# Add a LSTM layer with 128 internal units.
model.add(layers.LSTM(128))
# Add a Dense layer with 10 units.
model.add(layers.Dense(10))
model.summary()
Explanation: Built-in RNN layers: a simple example
There are three built-in RNN layers in Keras:
keras.layers.SimpleRNN: a fully-connected RNN where the output of the previous timestep is fed to the next timestep.
keras.layers.GRU: first proposed in Cho et al., 2014.
keras.layers.LSTM: first proposed in Hochreiter & Schmidhuber, 1997.
In early 2015, Keras had the first reusable open-source Python implementations of LSTM and GRU.
Here is a simple example of a Sequential model that processes sequences of integers, embeds each integer into a 64-dimensional vector, then processes the sequence of vectors using an LSTM layer.
End of explanation
model = keras.Sequential()
model.add(layers.Embedding(input_dim=1000, output_dim=64))
# The output of GRU will be a 3D tensor of shape (batch_size, timesteps, 256)
model.add(layers.GRU(256, return_sequences=True))
# The output of SimpleRNN will be a 2D tensor of shape (batch_size, 128)
model.add(layers.SimpleRNN(128))
model.add(layers.Dense(10))
model.summary()
Explanation: Built-in RNNs support a number of useful features:
Recurrent dropout, via the dropout and recurrent_dropout arguments
Ability to process an input sequence in reverse, via the go_backwards argument
Loop unrolling (which can lead to a big speedup when processing short sequences on CPU), via the unroll argument
...and more.
For more information, see the RNN API documentation.
Outputs and states
By default, the output of an RNN layer contains a single vector per sample. This vector is the RNN cell output corresponding to the last timestep, containing information about the entire input sequence. The shape of this output is (batch_size, units), where units corresponds to the units argument passed to the layer's constructor.
An RNN layer can also return the entire sequence of outputs for each sample (one vector per timestep per sample) if you set return_sequences=True. The shape of this output is (batch_size, timesteps, units).
End of explanation
encoder_vocab = 1000
decoder_vocab = 2000
encoder_input = layers.Input(shape=(None,))
encoder_embedded = layers.Embedding(input_dim=encoder_vocab, output_dim=64)(
encoder_input
)
# Return states in addition to output
output, state_h, state_c = layers.LSTM(64, return_state=True, name="encoder")(
encoder_embedded
)
encoder_state = [state_h, state_c]
decoder_input = layers.Input(shape=(None,))
decoder_embedded = layers.Embedding(input_dim=decoder_vocab, output_dim=64)(
decoder_input
)
# Pass the 2 states to a new LSTM layer, as initial state
decoder_output = layers.LSTM(64, name="decoder")(
decoder_embedded, initial_state=encoder_state
)
output = layers.Dense(10)(decoder_output)
model = keras.Model([encoder_input, decoder_input], output)
model.summary()
Explanation: In addition, an RNN layer can return its final internal state(s). The returned states can be used to resume the RNN execution later, or to initialize another RNN. This setting is commonly used in encoder-decoder sequence-to-sequence models, where the encoder's final state is used as the initial state of the decoder.
To configure an RNN layer to return its internal state, set the return_state parameter to True when creating the layer. Note that LSTM has 2 state tensors, but GRU only has one.
To configure the initial state of the layer, just call the layer with the additional keyword argument initial_state. Note that the shape of the state needs to match the unit size of the layer, as in the example below.
End of explanation
paragraph1 = np.random.random((20, 10, 50)).astype(np.float32)
paragraph2 = np.random.random((20, 10, 50)).astype(np.float32)
paragraph3 = np.random.random((20, 10, 50)).astype(np.float32)
lstm_layer = layers.LSTM(64, stateful=True)
output = lstm_layer(paragraph1)
output = lstm_layer(paragraph2)
output = lstm_layer(paragraph3)
# reset_states() will reset the cached state to the original initial_state.
# If no initial_state was provided, zero-states will be used by default.
lstm_layer.reset_states()
Explanation: RNN layers and RNN cells
In addition to the built-in RNN layers, the RNN API also provides cell-level APIs. Unlike RNN layers, which process whole batches of input sequences, the RNN cell only processes a single timestep.
The cell is the inside of the for loop of an RNN layer. Wrapping a cell inside a keras.layers.RNN layer gives you a layer capable of processing batches of sequences, e.g. RNN(LSTMCell(10)).
Mathematically, RNN(LSTMCell(10)) produces the same result as LSTM(10). In fact, the implementation of this layer in TF v1.x was just creating the corresponding RNN cell and wrapping it in an RNN layer. However, using the built-in GRU and LSTM layers enables the use of CuDNN, so you may see better performance.
There are three built-in RNN cells, each of them corresponding to the matching RNN layer.
keras.layers.SimpleRNNCell corresponds to the SimpleRNN layer.
keras.layers.GRUCell corresponds to the GRU layer.
keras.layers.LSTMCell corresponds to the LSTM layer.
The cell abstraction, together with the generic keras.layers.RNN class, makes it very easy to implement custom RNN architectures for your research.
Cross-batch statefulness
When processing very long sequences (possibly infinite), you may want to use the pattern of cross-batch statefulness.
Normally, the internal state of an RNN layer is reset every time it sees a new batch (i.e. every sample seen by the layer is assumed to be independent of the past). The layer will only maintain a state while processing a given sample.
If you have very long sequences though, it is useful to break them into shorter sequences and feed these shorter sequences sequentially into the RNN layer without resetting the layer's state. That way, the layer can retain information about the entirety of the sequence, even though it is only seeing one sub-sequence at a time.
You can do this by setting stateful=True in the constructor.
If you have a sequence s = [t0, t1, ... t1546, t1547], you would split it into e.g.
s1 = [t0, t1, ... t100]
s2 = [t101, ... t201]
...
s16 = [t1501, ... t1547]
and then process it via:
python
lstm_layer = layers.LSTM(64, stateful=True)
for s in sub_sequences:
output = lstm_layer(s)
When you want to clear the state, use layer.reset_states().
Note: In this setup, sample i in a given batch is assumed to be the continuation of sample i in the previous batch. This means that all batches should contain the same number of samples (the batch size). E.g. if a batch contains [sequence_A_from_t0_to_t100, sequence_B_from_t0_to_t100], the next batch should contain [sequence_A_from_t101_to_t200, sequence_B_from_t101_to_t200].
Here is a complete example.
End of explanation
paragraph1 = np.random.random((20, 10, 50)).astype(np.float32)
paragraph2 = np.random.random((20, 10, 50)).astype(np.float32)
paragraph3 = np.random.random((20, 10, 50)).astype(np.float32)
lstm_layer = layers.LSTM(64, stateful=True)
output = lstm_layer(paragraph1)
output = lstm_layer(paragraph2)
existing_state = lstm_layer.states
new_lstm_layer = layers.LSTM(64)
new_output = new_lstm_layer(paragraph3, initial_state=existing_state)
Explanation: RNN state reuse
<a id="rnn_state_reuse"></a>
The recorded states of the RNN layer are not included in layer.weights(). If you would like to reuse the state from an RNN layer, you can retrieve the states value via layer.states and use it as the initial state for a new layer through the Keras functional API, like new_layer(inputs, initial_state=layer.states), or via model subclassing.
Also note that a Sequential model cannot be used in this case, since it only supports layers with a single input and output; the extra input of the initial state makes it impossible to use here.
End of explanation
model = keras.Sequential()
model.add(
layers.Bidirectional(layers.LSTM(64, return_sequences=True), input_shape=(5, 10))
)
model.add(layers.Bidirectional(layers.LSTM(32)))
model.add(layers.Dense(10))
model.summary()
Explanation: Bidirectional RNNs
For sequences other than time series (e.g. text), it is often the case that an RNN model can perform better if it not only processes the sequence from start to end, but also backwards. For example, to predict the next word in a sentence, it is often useful to have the context around the word, not only the words that come before it.
Keras provides an easy API to build such bidirectional RNNs: the keras.layers.Bidirectional wrapper.
End of explanation
batch_size = 64
# Each MNIST image batch is a tensor of shape (batch_size, 28, 28).
# Each input sequence will be of size (28, 28) (height is treated like time).
input_dim = 28
units = 64
output_size = 10 # labels are from 0 to 9
# Build the RNN model
def build_model(allow_cudnn_kernel=True):
# CuDNN is only available at the layer level, and not at the cell level.
# This means `LSTM(units)` will use the CuDNN kernel,
# while RNN(LSTMCell(units)) will run on non-CuDNN kernel.
if allow_cudnn_kernel:
# The LSTM layer with default options uses CuDNN.
lstm_layer = keras.layers.LSTM(units, input_shape=(None, input_dim))
else:
# Wrapping a LSTMCell in a RNN layer will not use CuDNN.
lstm_layer = keras.layers.RNN(
keras.layers.LSTMCell(units), input_shape=(None, input_dim)
)
model = keras.models.Sequential(
[
lstm_layer,
keras.layers.BatchNormalization(),
keras.layers.Dense(output_size),
]
)
return model
Explanation: Under the hood, Bidirectional will copy the RNN layer passed in and flip the go_backwards field of the newly copied layer, so that it processes the inputs in reverse order.
The output of the Bidirectional RNN is, by default, the concatenation of the forward layer output and the backward layer output. If you need a different merging behavior, e.g. summation, change the merge_mode parameter in the Bidirectional wrapper constructor. For more details about Bidirectional, check the API docs.
Performance optimization and CuDNN kernels
In TensorFlow 2.0, the built-in LSTM and GRU layers have been updated to leverage CuDNN kernels by default when a GPU is available. With this change, the prior keras.layers.CuDNNLSTM/CuDNNGRU layers have been deprecated, and you can build your model without worrying about the hardware it will run on.
Since the CuDNN kernel is built with certain assumptions, the layer will not be able to use the CuDNN kernel if you change the defaults of the built-in LSTM or GRU layers, for example:
Changing the activation function from tanh to something else
Changing the recurrent_activation function from sigmoid to something else
Using recurrent_dropout > 0
Setting unroll to True (which forces LSTM/GRU to decompose the inner tf.while_loop into an unrolled for loop)
Setting use_bias to False
Using masking when the input data is not strictly right-padded (if the mask corresponds to strictly right-padded data, CuDNN can still be used; this is the most common case)
For the detailed list of constraints, please see the documentation for the LSTM and GRU layers.
Using CuDNN kernels when available
Let's build a simple LSTM model to demonstrate the performance difference.
We'll use as input sequences the sequence of rows of MNIST digits (treating each row of pixels as a timestep), and we'll predict the digit's label.
End of explanation
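# A hedged sketch of the merge_mode option mentioned above; this layer is purely
# illustrative and is not used anywhere else in the guide.
summed_bidir = keras.layers.Bidirectional(keras.layers.LSTM(32), merge_mode="sum")
# With merge_mode="sum" the forward and backward outputs are added, so the layer
# keeps 32 output units instead of the 64 produced by the default concatenation.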
mnist = keras.datasets.mnist
(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0
sample, sample_label = x_train[0], y_train[0]
Explanation: Let's load the MNIST dataset.
End of explanation
model = build_model(allow_cudnn_kernel=True)
model.compile(
loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),
optimizer="sgd",
metrics=["accuracy"],
)
model.fit(
x_train, y_train, validation_data=(x_test, y_test), batch_size=batch_size, epochs=1
)
Explanation: Let's create a model instance and train it.
We choose sparse_categorical_crossentropy as the loss function for the model. The output of the model has a shape of [batch_size, 10]. The target for the model is an integer vector, where each integer is in the range of 0 to 9.
End of explanation
noncudnn_model = build_model(allow_cudnn_kernel=False)
noncudnn_model.set_weights(model.get_weights())
noncudnn_model.compile(
loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),
optimizer="sgd",
metrics=["accuracy"],
)
noncudnn_model.fit(
x_train, y_train, validation_data=(x_test, y_test), batch_size=batch_size, epochs=1
)
Explanation: Now, let's compare to a model that does not use the CuDNN kernel.
End of explanation
import matplotlib.pyplot as plt
with tf.device("CPU:0"):
cpu_model = build_model(allow_cudnn_kernel=True)
cpu_model.set_weights(model.get_weights())
result = tf.argmax(cpu_model.predict_on_batch(tf.expand_dims(sample, 0)), axis=1)
print(
"Predicted result is: %s, target result is: %s" % (result.numpy(), sample_label)
)
plt.imshow(sample, cmap=plt.get_cmap("gray"))
Explanation: When running on a machine with an NVIDIA GPU and CuDNN installed, the model built with CuDNN is much faster to train compared to the model that uses the regular TensorFlow kernel.
The same CuDNN-enabled model can also be used to run inference in a CPU-only environment. The tf.device annotation below just forces the device placement. The model will run on CPU by default if no GPU is available.
You simply don't have to worry about the hardware you're running on anymore. Isn't that pretty cool?
End of explanation
class NestedCell(keras.layers.Layer):
def __init__(self, unit_1, unit_2, unit_3, **kwargs):
self.unit_1 = unit_1
self.unit_2 = unit_2
self.unit_3 = unit_3
self.state_size = [tf.TensorShape([unit_1]), tf.TensorShape([unit_2, unit_3])]
self.output_size = [tf.TensorShape([unit_1]), tf.TensorShape([unit_2, unit_3])]
super(NestedCell, self).__init__(**kwargs)
def build(self, input_shapes):
# expect input_shape to contain 2 items, [(batch, i1), (batch, i2, i3)]
i1 = input_shapes[0][1]
i2 = input_shapes[1][1]
i3 = input_shapes[1][2]
self.kernel_1 = self.add_weight(
shape=(i1, self.unit_1), initializer="uniform", name="kernel_1"
)
self.kernel_2_3 = self.add_weight(
shape=(i2, i3, self.unit_2, self.unit_3),
initializer="uniform",
name="kernel_2_3",
)
def call(self, inputs, states):
# inputs should be in [(batch, input_1), (batch, input_2, input_3)]
# state should be in shape [(batch, unit_1), (batch, unit_2, unit_3)]
input_1, input_2 = tf.nest.flatten(inputs)
s1, s2 = states
output_1 = tf.matmul(input_1, self.kernel_1)
output_2_3 = tf.einsum("bij,ijkl->bkl", input_2, self.kernel_2_3)
state_1 = s1 + output_1
state_2_3 = s2 + output_2_3
output = (output_1, output_2_3)
new_states = (state_1, state_2_3)
return output, new_states
def get_config(self):
return {"unit_1": self.unit_1, "unit_2": unit_2, "unit_3": self.unit_3}
Explanation: RNNs with list/dict inputs, or nested inputs
Nested structures allow implementers to include more information within a single timestep. For example, a video frame could have audio and video input at the same time. The data shape in this case could be:
[batch, timestep, {"video": [height, width, channel], "audio": [frequency]}]
In another example, handwriting data could have both coordinates x and y for the current position of the pen, as well as pressure information. So the data representation could be:
[batch, timestep, {"location": [x, y], "pressure": [force]}]
The following code provides an example of how to build a custom RNN cell that accepts such structured inputs.
Define a custom cell that supports nested input/output
See Making new Layers & Models via subclassing for details on writing your own layers.
End of explanation
unit_1 = 10
unit_2 = 20
unit_3 = 30
i1 = 32
i2 = 64
i3 = 32
batch_size = 64
num_batches = 10
timestep = 50
cell = NestedCell(unit_1, unit_2, unit_3)
rnn = keras.layers.RNN(cell)
input_1 = keras.Input((None, i1))
input_2 = keras.Input((None, i2, i3))
outputs = rnn((input_1, input_2))
model = keras.models.Model([input_1, input_2], outputs)
model.compile(optimizer="adam", loss="mse", metrics=["accuracy"])
Explanation: Build an RNN model with nested input/output
Let's build a Keras model that uses a keras.layers.RNN layer and the custom cell we just defined.
End of explanation
input_1_data = np.random.random((batch_size * num_batches, timestep, i1))
input_2_data = np.random.random((batch_size * num_batches, timestep, i2, i3))
target_1_data = np.random.random((batch_size * num_batches, unit_1))
target_2_data = np.random.random((batch_size * num_batches, unit_2, unit_3))
input_data = [input_1_data, input_2_data]
target_data = [target_1_data, target_2_data]
model.fit(input_data, target_data, batch_size=batch_size)
Explanation: Train the model with randomly generated data
Since there isn't a good candidate dataset for this model, we use random Numpy data for demonstration.
End of explanation |
2,580 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Average Speed of Answer, Average Handling Time, and After Call Work for Field Support Center
<em>Chris Rucker, Associate Data Scientist</em>
<em>14 Nov 2016</em>
Observation
Three discrete quantitative variables were chosen for exploration from a data set of 42 observations from the Field Support Center's (FSC) CS Monthly Flash reports.
<ul><li>Average Handling Time (AHT) is the amount of time it takes an Agent to deal with all aspects of a call, including talk time plus After Call Work.</li></ul>
<ul><li>After Call Work (ACW) is the period of time immediately after contact with the customer is completed and any supplementary work is undertaken by the Agent.</li></ul>
<ul><li>Average Speed of Answer (ASA) is the amount of time it takes to answer a typical call once it has been routed to the FSC.</li></ul>
Step1: <p><b>ASA ↑ AHT ↑</b></p>
<em><b>As the amount of time it takes to answer a typical call once it has been routed to the FSC increases, the amount of time it takes an Agent to deal with all aspects of a call including talk time plus ACW increases.</b></em>
A bivariate distribution of ASA/AHT variables along with the univariate (or marginal) distribution of each on separate axes shows a Pearson correlation coefficient of 0.77 and a p-value of 1.8e-09. The p-value roughly indicates the probability of an uncorrelated system producing datasets that have a Pearson correlation at least as extreme as the one computed from these datasets. The p-values are not entirely reliable but are probably reasonable for datasets larger than 500 or so. Positive correlations imply that as AHT increases, so does ASA.
An altogether different approach is to fit a nonparametric regression using a lowess smoother. This approach has the fewest assumptions, although it is computationally intensive and so currently confidence intervals are not computed at all.
Step2: <p><b>ACW ↓ ASA ↑</b></p>
<em><b>As the period of time immediately after contact with the customer is completed and any supplementary work is undertaken by the Agent decreases, the amount of time it takes to answer a typical call once it has been routed to the FSC increases.</b></em>
A bivariate distribution of ACW/ASA variables along with the univariate (or marginal) distribution of each on separate axes shows a Pearson correlation coefficient of -0.48 and a p-value of 0.0013. The p-value roughly indicates the probability of an uncorrelated system producing datasets that have a Pearson correlation at least as extreme as the one computed from these datasets. The p-values are not entirely reliable but are probably reasonable for datasets larger than 500 or so. Negative correlations imply that as ASA increases, ACW decreases.
An altogether different approach is to fit a nonparametric regression using a lowess smoother. This approach has the fewest assumptions, although it is computationally intensive and so currently confidence intervals are not computed at all.
Step3: <p><b>ACW ↓ AHT ↑</b></p>
<em><b>As the period of time immediately after contact with the customer is completed and any supplementary work is
undertaken by the Agent decreases, the amount of time it takes an Agent to deal with all aspects of a call increases.</b></em>
A bivariate distribution of AHT/ACW variables along with the univariate (or marginal) distribution of each on separate axes shows a Pearson correlation coefficient of -0.096 and a p-value of 0.55. The p-value roughly indicates the probability of an uncorrelated system producing datasets that have a Pearson correlation at least as extreme as the one computed from these datasets. The p-values are not entirely reliable but are probably reasonable for datasets larger than 500 or so. Negative correlations imply that as AHT increases, ACW decreases; with a coefficient this close to zero and a p-value of 0.55, however, the apparent relationship is weak and not statistically significant.
An altogether different approach is to fit a nonparametric regression using a lowess smoother. This approach has the fewest assumptions, although it is computationally intensive and so currently confidence intervals are not computed at all. | Python Code:
import pandas as pd
import seaborn as sns
data = pd.read_csv(r'C:\Users\crucker\calls.csv')  # raw string avoids backslash-escape issues in the Windows path
data.head()
Explanation: Average Speed of Answer, Average Handling Time, and After Call Work for Field Support Center
<em>Chris Rucker, Associate Data Scientist</em>
<em>14 Nov 2016</em>
Observation
Three discrete quantitative variables were chosen for exploration from a data set of 42 observations from the Field Support Center's (FSC) CS Monthly Flash reports.
<ul><li>Average Handling Time (AHT) is the amount of time it takes an Agent to deal with all aspects of a call, including talk time plus After Call Work.</li></ul>
<ul><li>After Call Work (ACW) is the period of time immediately after contact with the customer is completed and any supplementary work is undertaken by the Agent.</li></ul>
<ul><li>Average Speed of Answer (ASA) is the amount of time it takes to answer a typical call once it has been routed to the FSC.</li></ul>
End of explanation
sns.lmplot(x='AHT', y='ASA', data=data, robust=True)
sns.lmplot(x='AHT', y='ASA', data=data, lowess=True)
sns.jointplot(x='AHT', y='ASA', data=data, kind="reg", robust=True);
Explanation: <p><b>ASA ↑ AHT ↑</b></p>
<em><b>As the amount of time it takes to answer a typical call once it has been routed to the FSC increases, the amount of time it takes an Agent to deal with all aspects of a call including talk time plus ACW increases.</b></em>
A bivariate distribution of ASA/AHT variables along with the univariate (or marginal) distribution of each on separate axes shows a Pearson correlation coefficient of 0.77 and a p-value of 1.8e-09. The p-value roughly indicates the probability of an uncorrelated system producing datasets that have a Pearson correlation at least as extreme as the one computed from these datasets. The p-values are not entirely reliable but are probably reasonable for datasets larger than 500 or so. Positive correlations imply that as AHT increases, so does ASA.
An altogether different approach is to fit a nonparametric regression using a lowess smoother. This approach has the fewest assumptions, although it is computationally intensive and so currently confidence intervals are not computed at all.
End of explanation
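# A hedged way to reproduce the reported Pearson coefficient and p-value directly
# (assumes scipy is available and that the AHT/ASA columns have no missing values).
from scipy.stats import pearsonr
r, p = pearsonr(data['AHT'], data['ASA'])
print(r, p)  # should match the r = 0.77, p = 1.8e-09 annotation on the joint plot above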
sns.lmplot(x='ASA', y='ACW', data=data, robust=True)
sns.lmplot(x='ASA', y='ACW', data=data, lowess=True)
sns.jointplot(x='ASA', y='ACW', data=data, kind="reg", robust=True);
Explanation: <p><b>ACW ↓ ASA ↑</b></p>
<em><b>As the period of time immediately after contact with the customer is completed and any supplementary work is undertaken by the Agent decreases, the amount of time it takes to answer a typical call once it has been routed to the FSC increases.</b></em>
A bivariate distribution of ACW/ASA variables along with the univariate (or marginal) distribution of each on separate axes shows a Pearson correlation coefficient of -0.48 and a p-value of 0.0013. The p-value roughly indicates the probability of an uncorrelated system producing datasets that have a Pearson correlation at least as extreme as the one computed from these datasets. The p-values are not entirely reliable but are probably reasonable for datasets larger than 500 or so. Negative correlations imply that as ASA increases, ACW decreases.
An altogether different approach is to fit a nonparametric regression using a lowess smoother. This approach has the fewest assumptions, although it is computationally intensive and so currently confidence intervals are not computed at all.
End of explanation
sns.lmplot(x='AHT', y='ACW', data=data, robust=True)
sns.lmplot(x='AHT', y='ACW', data=data, lowess=True)
sns.jointplot(x='AHT', y='ACW', data=data, kind="reg", robust=True);
Explanation: <p><b>ACW ↓ AHT ↑</b></p>
<em><b>As the period of time immediately after contact with the customer is completed and any supplementary work is
undertaken by the Agent decreases, the amount of time it takes an Agent to deal with all aspects of a call increases.</b></em>
A bivariate distribution of AHT/ACW variables along with the univariate (or marginal) distribution of each on separate axes shows a Pearson correlation coefficient of -0.096 and a p-value of 0.55. The p-value roughly indicates the probability of an uncorrelated system producing datasets that have a Pearson correlation at least as extreme as the one computed from these datasets. The p-values are not entirely reliable but are probably reasonable for datasets larger than 500 or so. Negative correlations imply that as AHT increases, ACW decreases; with a coefficient this close to zero and a p-value of 0.55, however, the apparent relationship is weak and not statistically significant.
An altogether different approach is to fit a nonparametric regression using a lowess smoother. This approach has the fewest assumptions, although it is computationally intensive and so currently confidence intervals are not computed at all.
End of explanation |
2,581 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Testing Neural Network based Anomaly Detection on actual data
This code reads PerfSONAR measured packet loss rates between a specified endpoint and all other endpoints in a selected time range. It tries to train a neural network to distinguish measurements belonging to the timebin under investigation from measurements in a reference time period.
Step1: parameters to set
Step2: get data from ES
we connect to Elasticsearch, create the query and execute a scan. The query requires three things
Step3: Loading the data
This is the slowest part. It reads ~5k documents per second and will load 1M documents. Expect a wait time of a few minutes (1M documents at ~5k documents per second). Actual time might vary depending on your connection and how busy the Elasticsearch cluster is.
Step4: Puts together data from different links.
Step5: preselecting X worst links but not really the worst
Step6: plot timeseries
only a subset of all the links will be shown
Step7: fix NANs and add accuracy column
Step8: create Network Model
only class is defined, no output is expected.
Step9: functions
Step10: Actually create the object, give it the data, and run anomaly detection.
This part can take significant time. It takes 10-30 seconds per hour of data analyzed. Total number of steps will be equal to number of subject intervals in the period tested. For every 5th step and intervals where anomaly has been detected ROC curve will be shown.
Step11: plot again full timeseries
Step12: shade regions where an anomaly has been dected | Python Code:
%matplotlib inline
from elasticsearch import Elasticsearch
from elasticsearch.helpers import scan
from time import time
import numpy as np
import pandas as pd
import random
import matplotlib
matplotlib.rc('xtick', labelsize=14)
matplotlib.rc('ytick', labelsize=14)
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.utils import shuffle
from keras.models import Sequential
from keras.layers.core import Dense, Activation, Dropout
from pandas.tseries.offsets import *
Explanation: Testing Neural Network based Anomaly Detection on actual data
This code reads PerfSONAR measured packet loss rates between a specified endpoint and all other endpoints in a selected time range. It tries to train a neural network to distinguish measurements belonging to the timebin under investigation from measurements in a reference time period.
End of explanation
n_series = 20
start_date = '2017-05-13 00:00:00'
end_date = '2017-05-16 23:59:59'
# tuning parameters
ref = 24
sub = 1
chance = ref/(sub+ref)
cut = chance + (1-chance) * 0.05
print('chance:',chance, '\tcut:', cut)
ref = ref * Hour()
sub = sub * Hour()
srcSiteOWDServer = "128.142.223.247" # CERN site
# destSiteOWDServer = "193.109.172.188" # pic site
Explanation: parameters to set
End of explanation
es = Elasticsearch(['atlas-kibana.mwt2.org:9200'],timeout=60)
indices = "network_weather-2017.*"
start = pd.Timestamp(start_date)
end = pd.Timestamp(end_date)
my_query = {
'query': {
'bool':{
'must':[
{'range': {'timestamp': {'gte': start.strftime('%Y%m%dT%H%M00Z'), 'lt': end.strftime('%Y%m%dT%H%M00Z')}}},
{'term': {'src': srcSiteOWDServer}},
# {'term': {'dest': destSiteOWDServer}},
{'term': {'_type': 'packet_loss_rate'}}
]
}
}
}
scroll = scan(client=es, index=indices, query=my_query)
Explanation: get data from ES
we connect to Elasticsearch, create the query and execute a scan. The query requires three things: the data must be in the given time range, must be measured by the selected endpoint, and must be packet loss data. The actual data access does not happen here but in the next cell.
End of explanation
count = 0
allData={} # will be like this: {'dest_host':[[timestamp],[value]], ...}
for res in scroll:
# if count<2: print(res)
if not count%100000: print(count)
# if count>1000000: break
dst = res['_source']['dest'] # old data - dest, new data - dest_host
if dst not in allData: allData[dst]=[[],[]]
allData[dst][0].append(res['_source']['timestamp'] )
allData[dst][1].append(res['_source']['packet_loss'])
count=count+1
dfs=[]
for dest,data in allData.items():
ts=pd.to_datetime(data[0],unit='ms')
df=pd.DataFrame({dest:data[1]}, index=ts )
df.sort_index(inplace=True)
df.index = df.index.map(lambda t: t.replace(second=0))
df = df[~df.index.duplicated(keep='last')]
dfs.append(df)
#print(df.head(2))
print(count, "\nData loaded.")
full_df = pd.concat(dfs, axis=1)
Explanation: Loading the data
This is the slowest part. It reads ~5k documents per second and will load 1M documents. Expect wait time of ~1 minutes. Actual time might vary depending on your connection and how busy is the Elasticsearch cluster.
End of explanation
print(full_df.shape)
full_df.head()
#print(full_df.columns )
Explanation: Puts together data from different links.
End of explanation
del full_df['134.158.73.243']
means=full_df.mean()
means.sort_values(ascending=False, inplace=True)
means=means[:n_series]
print(means)
df = full_df[means.index.tolist()]
df.shape
Explanation: preselecting X worst links but not really the worst
End of explanation
df.plot(figsize=(20,7))
Explanation: plot timeseries
only a subset of all the links will be shown
End of explanation
# full_df.interpolate(method='nearest', axis=0, inplace=True)
df=df.fillna(0)
auc_df = pd.DataFrame(np.nan, index=df.index, columns=['accuracy'])
Explanation: fix NANs and add accuracy column
End of explanation
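# A hedged sanity check (not in the original notebook) that the fill worked
# before handing the frame to the network: the total NaN count should be zero.
print(df.isnull().sum().sum())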
class ANN(object):
def __init__(self, n_series):
self.n_series = n_series
self.df = None
self.auc_df = None
self.nn = Sequential()
self.nn.add(Dense(units=n_series*2, input_shape=(n_series,), activation='relu' ))
# self.nn.add(Dropout(0.5))
self.nn.add(Dense(units=n_series*2, activation='relu'))
# self.nn.add(Dropout(0.5))
self.nn.add(Dense(units=1, activation='sigmoid'))
# self.nn.compile(loss='hinge', optimizer='sgd', metrics=['binary_accuracy'])
# self.nn.compile(loss='mse',optimizer='rmsprop', metrics=['accuracy'])
self.nn.compile(loss='binary_crossentropy', optimizer='rmsprop', metrics=['accuracy','binary_accuracy' ])
# self.nn.compile(loss='mse', optimizer='rmsprop', metrics=['accuracy','binary_accuracy' ])
# self.nn.compile(loss='categorical_crossentropy', optimizer='sgd', metrics=['binary_accuracy'])
self.nn.summary()
self.initial_weights = self.nn.get_weights()
def set_data(self, df, auc_df):
self.df = df
self.auc_df = auc_df
def plot_hist(self, hist):
es=len(hist.history['loss'])
x = np.linspace(0,es-1,es)
plt.plot(x, hist.history['loss'], '--', linewidth=2, label='loss')
plt.plot(x, hist.history['acc'], '-', linewidth=2, label='acc')
plt.legend()
plt.show()
def check_for_anomaly(self,ref, sub, count):
y_ref = pd.Series([0] * ref.shape[0])
X_ref = ref
y_sub = pd.Series([1] * sub.shape[0])
X_sub = sub
# separate Reference and Subject into Train and Test
X_ref_train, X_ref_test, y_ref_train, y_ref_test = train_test_split(X_ref, y_ref, test_size=0.3)#, random_state=42)
X_sub_train, X_sub_test, y_sub_train, y_sub_test = train_test_split(X_sub, y_sub, test_size=0.3)#, random_state=42)
# combine training ref and sub samples
X_train = pd.concat([X_ref_train, X_sub_train])
y_train = pd.concat([y_ref_train, y_sub_train])
# combine testing ref and sub samples
X_test = pd.concat([X_ref_test, X_sub_test])
y_test = pd.concat([y_ref_test, y_sub_test])
X_train = X_train.reset_index(drop=True)
y_train = y_train.reset_index(drop=True)
X_train_s, y_train_s = shuffle(X_train, y_train)
self.nn.set_weights(self.initial_weights)
hist = self.nn.fit(X_train_s.values, y_train_s.values, epochs=500, verbose=0, shuffle=True)#, batch_size=10)
loss_and_metrics = self.nn.evaluate(X_test.values, y_test.values)#, batch_size=256)
print(loss_and_metrics)
if loss_and_metrics[1] > cut or not count%5:
self.plot_hist(hist)
return scaled_accuracy(loss_and_metrics[1], ref.shape[0], sub.shape[0])
def loop_over_intervals(self):
lstart = self.df.index.min()
lend = self.df.index.max()
#round start
lstart.seconds=0
lstart.minutes=0
# loop over them
ti = lstart + ref + sub
count = 0
while ti < lend + 1 * Minute():
print(count)
startt = time()
ref_start = ti-ref-sub
ref_end = ti-sub
ref_df = self.df[(self.df.index >= ref_start) & (self.df.index < ref_end)]
sub_df = self.df[(self.df.index >= ref_end) & (self.df.index < ti)]
# print('ref:',ref_df.head())
# print("sub:",sub_df.head())
accuracy = self.check_for_anomaly(ref_df, sub_df, count)
self.auc_df.loc[(self.auc_df.index >= ref_end) & (self.auc_df.index < ti), ['accuracy']] = accuracy
print('\n',ti,"\trefes:" , ref_df.shape, "\tsubjects:", sub_df.shape, '\tacc:', accuracy)
ti = ti + sub
print("took:", time()-startt)
count = count + 1
#if count>2: break
Explanation: create Network Model
only class is defined, no output is expected.
End of explanation
def scaled_accuracy(accuracy, ref_samples, sub_samples):
print(accuracy)
chance = float(ref_samples)/(ref_samples+sub_samples)
return (accuracy-chance)/(1-chance)
Explanation: functions
End of explanation
ann = ANN(n_series)
ann.set_data(df, auc_df)
ann.loop_over_intervals()
Explanation: Actually create the object, give it the data, and run anomaly detection.
This part can take significant time. It takes 10-30 seconds per hour of data analyzed. The total number of steps will be equal to the number of subject intervals in the period tested. For every 5th step, and for intervals where an anomaly has been detected, the ROC curve will be shown.
End of explanation
ndf=df.applymap(np.sqrt)
ax = ndf.plot(figsize=(20,7))
ax.set_xlim([pd.to_datetime('2017-05-13'),pd.to_datetime('2017-05-17')])
auc_df['Detected'] = 0
auc_df.loc[auc_df.accuracy>0.05, ['Detected']]=1
auc_df.accuracy.plot( ax=ax,color='b')
auc_df.Detected.plot( ax=ax, color='b', alpha=0.3)
ax.legend(loc='upper left')
ax.set_ylabel("sqrt(packet loss [%])", fontsize=14)
plt.show()
ax.get_figure().savefig('ANN_actual_data.png')
Explanation: plot again full timeseries
End of explanation
fig, ax = plt.subplots(figsize=(20,7))
auc_df['Detected'] = 0
auc_df.loc[auc_df.accuracy>0.05, ['Detected']]=1
ax.plot( auc_df.accuracy,'black')
ax.fill( auc_df.Detected, 'b', alpha=0.3)
ax.legend(loc='upper left')
ax.set_xlim([pd.to_datetime('2017-05-13'),pd.to_datetime('2017-05-17')])
plt.show()
fig.savefig('ANN_shaded_actual_data.png')
Explanation: shade regions where an anomaly has been detected
End of explanation |
2,582 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Functions
Introduction to Functions
This lecture will consist of explaining what a function is in Python and how to create one. Functions will be one of our main building blocks when we construct larger and larger amounts of code to solve problems.
So what is a function?
Formally, a function is a useful device that groups together a set of statements so they can be run more than once. They can also let us specify parameters that can serve as inputs to the functions.
On a more fundamental level, functions allow us to not have to repeatedly write the same code again and again. If you remember back to the lessons on strings and lists, remember that we used a function len() to get the length of a string. Since checking the length of a sequence is a common task you would want to write a function that can do this repeatedly at command.
Functions will be one of the most basic levels of reusing code in Python, and they will also allow us to start thinking of program design (we will dive much deeper into the ideas of design when we learn about Object Oriented Programming).
def Statements
Let's see how to build out a function's syntax in Python. It has the following form
Step1: We begin with def then a space followed by the name of the function. Try to keep names relevant, for example len() is a good name for a length() function. Also be careful with names, you wouldn't want to call a function the same name as a built-in function in Python (such as len).
Next come a pair of parentheses with a number of arguments separated by a comma. These arguments are the inputs for your function. You'll be able to use these inputs in your function and reference them. After this you put a colon.
Now here is the important step: you must indent to begin the code inside your function correctly. Python makes use of whitespace to organize code. Lots of other programming languages do not do this, so keep that in mind.
Next you'll see the doc-string; this is where you write a basic description of the function. Using iPython and iPython Notebooks, you'll be able to read these doc-strings by pressing Shift+Tab after a function name. Doc-strings are not necessary for simple functions, but it's good practice to put them in so you or other people can easily understand the code you write.
After all this you begin writing the code you wish to execute.
The best way to learn functions is by going through examples. So let's try to go through examples that relate back to the various objects and data structures we learned about before.
Example 1
Step2: Call the function
Step3: Example 2
Step4: Using return
Let's see some example that use a return statement. return allows a function to return a result that can then be stored as a variable, or used in whatever manner a user wants.
Example 3
Step5: What happens if we input two strings?
Step6: Note that because we don't declare variable types in Python, this function could be used to add numbers or sequences together! We'll later learn about adding in checks to make sure a user puts in the correct arguments into a function.
Let's also start using break, continue, and pass statements in our code. We introduced these during the while lecture.
Finally, let's go over a full example of creating a function to check if a number is prime (a common interview exercise).
We know a number is prime if that number is only evenly divisible by 1 and itself. Let's write our first version of the function to check all the numbers from 1 to N and perform modulo checks.
Step7: Note how we break the code after the print statement! We can actually improve this by only checking to the square root of the target number, also we can disregard all even numbers after checking for 2. We'll also switch to returning a boolean value to get an example of using return statements | Python Code:
def name_of_function(arg1,arg2):
'''
This is where the function's Document String (doc-string) goes
'''
# Do stuff here
#return desired result
Explanation: Functions
Introduction to Functions
This lecture will consist of explaining what a function is in Python and how to create one. Functions will be one of our main building blocks when we construct larger and larger amounts of code to solve problems.
So what is a function?
Formally, a function is a useful device that groups together a set of statements so they can be run more than once. They can also let us specify parameters that can serve as inputs to the functions.
On a more fundamental level, functions allow us to not have to repeatedly write the same code again and again. If you remember back to the lessons on strings and lists, remember that we used a function len() to get the length of a string. Since checking the length of a sequence is a common task you would want to write a function that can do this repeatedly at command.
Functions will be one of the most basic levels of reusing code in Python, and they will also allow us to start thinking of program design (we will dive much deeper into the ideas of design when we learn about Object Oriented Programming).
def Statements
Let's see how to build out a function's syntax in Python. It has the following form:
End of explanation
def say_hello():
print 'hello'
Explanation: We begin with def then a space followed by the name of the function. Try to keep names relevant, for example len() is a good name for a length() function. Also be careful with names, you wouldn't want to call a function the same name as a built-in function in Python (such as len).
Next come a pair of parentheses with a number of arguments separated by a comma. These arguments are the inputs for your function. You'll be able to use these inputs in your function and reference them. After this you put a colon.
Now here is the important step: you must indent to begin the code inside your function correctly. Python makes use of whitespace to organize code. Lots of other programming languages do not do this, so keep that in mind.
Next you'll see the doc-string; this is where you write a basic description of the function. Using iPython and iPython Notebooks, you'll be able to read these doc-strings by pressing Shift+Tab after a function name. Doc-strings are not necessary for simple functions, but it's good practice to put them in so you or other people can easily understand the code you write.
After all this you begin writing the code you wish to execute.
The best way to learn functions is by going through examples. So let's try to go through examples that relate back to the various objects and data structures we learned about before.
Example 1: A simple print 'hello' function
End of explanation
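# A hedged illustration of a doc-string being picked up by Python's built-in help();
# the function below is hypothetical and not one of the lecture's examples.
def say_hello_documented():
    '''Print a simple greeting. This text is the doc-string.'''
    print 'hello'

help(say_hello_documented)   # displays the doc-string defined above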
say_hello()
Explanation: Call the function
End of explanation
def greeting(name):
print 'Hello %s' %name
greeting('Jose')
Explanation: Example 2: A simple greeting function
Let's write a function that greets people with their name.
End of explanation
def add_num(num1,num2):
return num1+num2
add_num(4,5)
# Can also save as variable due to return
result = add_num(4,5)
print result
Explanation: Using return
Let's see some example that use a return statement. return allows a function to return a result that can then be stored as a variable, or used in whatever manner a user wants.
Example 3: Addition function
End of explanation
print add_num('one','two')
Explanation: What happens if we input two strings?
End of explanation
def is_prime(num):
'''
Naive method of checking for primes.
'''
for n in range(2,num):
if num % n == 0:
print 'not prime'
break
else: # If never mod zero, then prime
print 'prime'
is_prime(16)
Explanation: Note that because we don't declare variable types in Python, this function could be used to add numbers or sequences together! We'll later learn about adding in checks to make sure a user puts in the correct arguments into a function.
Let's also start using break, continue, and pass statements in our code. We introduced these during the while lecture.
Finally, let's go over a full example of creating a function to check if a number is prime (a common interview exercise).
We know a number is prime if that number is only evenly divisible by 1 and itself. Let's write our first version of the function to check all the numbers from 1 to N and perform modulo checks.
End of explanation
import math
def is_prime(num):
'''
Better method of checking for primes.
'''
if num % 2 == 0 and num > 2:
return False
for i in range(3, int(math.sqrt(num)) + 1, 2):
if num % i == 0:
return False
return True
is_prime(14)
Explanation: Note how we break the code after the print statement! We can actually improve this by only checking to the square root of the target number, also we can disregard all even numbers after checking for 2. We'll also switch to returning a boolean value to get an example of using return statements:
End of explanation |
2,583 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Table of Contents
<p><div class="lev1 toc-item"><a href="#Short-study-of-the-Lempel-Ziv-complexity" data-toc-modified-id="Short-study-of-the-Lempel-Ziv-complexity-1"><span class="toc-item-num">1 </span>Short study of the Lempel-Ziv complexity</a></div><div class="lev2 toc-item"><a href="#Short-definition" data-toc-modified-id="Short-definition-11"><span class="toc-item-num">1.1 </span>Short definition</a></div><div class="lev2 toc-item"><a href="#Python-implementation" data-toc-modified-id="Python-implementation-12"><span class="toc-item-num">1.2 </span>Python implementation</a></div><div class="lev2 toc-item"><a href="#Tests-(1/2)" data-toc-modified-id="Tests-(1/2)-13"><span class="toc-item-num">1.3 </span>Tests (1/2)</a></div><div class="lev2 toc-item"><a href="#Cython-implementation" data-toc-modified-id="Cython-implementation-14"><span class="toc-item-num">1.4 </span>Cython implementation</a></div><div class="lev2 toc-item"><a href="#Numba-implementation" data-toc-modified-id="Numba-implementation-15"><span class="toc-item-num">1.5 </span>Numba implementation</a></div><div class="lev2 toc-item"><a href="#Tests-(2/2)" data-toc-modified-id="Tests-(2/2)-16"><span class="toc-item-num">1.6 </span>Tests (2/2)</a></div><div class="lev2 toc-item"><a href="#Benchmarks" data-toc-modified-id="Benchmarks-17"><span class="toc-item-num">1.7 </span>Benchmarks</a></div><div class="lev2 toc-item"><a href="#Complexity-?" data-toc-modified-id="Complexity-?-18"><span class="toc-item-num">1.8 </span>Complexity ?</a></div><div class="lev2 toc-item"><a href="#Conclusion" data-toc-modified-id="Conclusion-19"><span class="toc-item-num">1.9 </span>Conclusion</a></div><div class="lev2 toc-item"><a href="#(Experimental)-Julia-implementation" data-toc-modified-id="(Experimental)-Julia-implementation-110"><span class="toc-item-num">1.10 </span>(Experimental) <a href="http
Step2: Tests (1/2)
Step4: We can start to see that the time complexity of this function seems to grow exponentially as the complexity grows.
Cython implementation
As this blog post explains it, we can easily try to use Cython in a notebook cell.
See the Cython documentation for more information.
Step5: Let try it!
Step8: $\implies$ Yay! It seems faster indeed!
Numba implementation
As this blog post explains it, we can also try to use Numba in a notebook cell.
Step9: Let try it!
Step11: $\implies$ Well... It doesn't seem that much faster from the naive Python code.
We specified the signature when calling @numba.jit, and used the more appropriate data structure (string is probably the smaller, numpy array are probably faster).
But even these tricks didn't help that much.
I tested, and without specifying the signature, the fastest approach is using string, compared to using lists or numpy arrays.
Note that the @jit-powered function is compiled at runtime when first being called, so the signature used for the first call is determining the signature used by the compile function
Tests (2/2)
To test more robustly, let us generate some (uniformly) random binary sequences.
Step13: That's probably not optimal, but we can generate a string with
Step14: And so, this function can test to check that the three implementations (naive, Cython-powered, Numba-powered) always give the same result.
Step15: Benchmarks
On two example of strings (binary sequences), we can compare our three implementation.
Step16: Let check the time used by all the three functions, for longer and longer sequences
Step17: Complexity ?
$\implies$ The function lempel_ziv_complexity_cython seems to be indeed (almost) linear in $n$, the length of the binary sequence $S$.
But let check more precisely, as it could also have a complexity of $\mathcal{O}(n \log n)$.
Step18: It's durty, but let us capture manually the times given by the experiments above.
Step20: It is linear in $\log\log$ scale, so indeed the algorithm seems to have a linear complexity.
To sum-up, for a sequence $S$ of length $n$, it takes $\mathcal{O}(n)$ basic operations to compute its Lempel-Ziv complexity $\mathrm{Lempel}-\mathrm{Ziv}(S)$.
Conclusion
The Lempel-Ziv complexity is not too hard to implement, and it indeed represents a certain complexity of a binary sequence, capturing the regularity and reproducibility of the sequence.
Using Cython was quite useful to get a $\simeq \times 100$ speed-up over our naive manual implementation!
The algorithm is not easy to analyze: we have a trivial $\mathcal{O}(n^2)$ bound, but experiments showed it is more likely to be $\mathcal{O}(n \log n)$ in the worst case, and $\mathcal{O}(n)$ in practice for "not too complicated sequences" (or on average, for random sequences).
(Experimental) Julia implementation
I want to (quickly) try to see if I can use Julia to write a faster version of this function.
See issue #1.
Disclaimer
Step22: And to compare it fairly, let us use PyPy for comparison.
def lempel_ziv_complexity(binary_sequence):
    """Lempel-Ziv complexity for a binary sequence, in simple Python code."""
u, v, w = 0, 1, 1
v_max = 1
length = len(binary_sequence)
complexity = 1
while True:
if binary_sequence[u + v - 1] == binary_sequence[w + v - 1]:
v += 1
if w + v >= length:
complexity += 1
break
else:
if v > v_max:
v_max = v
u += 1
if u == w:
complexity += 1
w += v_max
if w > length:
break
else:
u = 0
v = 1
v_max = 1
else:
v = 1
return complexity
Explanation: Table of Contents
<p><div class="lev1 toc-item"><a href="#Short-study-of-the-Lempel-Ziv-complexity" data-toc-modified-id="Short-study-of-the-Lempel-Ziv-complexity-1"><span class="toc-item-num">1 </span>Short study of the Lempel-Ziv complexity</a></div><div class="lev2 toc-item"><a href="#Short-definition" data-toc-modified-id="Short-definition-11"><span class="toc-item-num">1.1 </span>Short definition</a></div><div class="lev2 toc-item"><a href="#Python-implementation" data-toc-modified-id="Python-implementation-12"><span class="toc-item-num">1.2 </span>Python implementation</a></div><div class="lev2 toc-item"><a href="#Tests-(1/2)" data-toc-modified-id="Tests-(1/2)-13"><span class="toc-item-num">1.3 </span>Tests (1/2)</a></div><div class="lev2 toc-item"><a href="#Cython-implementation" data-toc-modified-id="Cython-implementation-14"><span class="toc-item-num">1.4 </span>Cython implementation</a></div><div class="lev2 toc-item"><a href="#Numba-implementation" data-toc-modified-id="Numba-implementation-15"><span class="toc-item-num">1.5 </span>Numba implementation</a></div><div class="lev2 toc-item"><a href="#Tests-(2/2)" data-toc-modified-id="Tests-(2/2)-16"><span class="toc-item-num">1.6 </span>Tests (2/2)</a></div><div class="lev2 toc-item"><a href="#Benchmarks" data-toc-modified-id="Benchmarks-17"><span class="toc-item-num">1.7 </span>Benchmarks</a></div><div class="lev2 toc-item"><a href="#Complexity-?" data-toc-modified-id="Complexity-?-18"><span class="toc-item-num">1.8 </span>Complexity ?</a></div><div class="lev2 toc-item"><a href="#Conclusion" data-toc-modified-id="Conclusion-19"><span class="toc-item-num">1.9 </span>Conclusion</a></div><div class="lev2 toc-item"><a href="#(Experimental)-Julia-implementation" data-toc-modified-id="(Experimental)-Julia-implementation-110"><span class="toc-item-num">1.10 </span>(Experimental) <a href="http://julialang.org" target="_blank">Julia</a> implementation</a></div><div class="lev2 toc-item"><a href="#Ending-notes" data-toc-modified-id="Ending-notes-111"><span class="toc-item-num">1.11 </span>Ending notes</a></div>
# Short study of the Lempel-Ziv complexity
This short [Jupyter notebook](https://www.Jupyter.org/) aims at defining and explaining the [Lempel-Ziv complexity](https://en.wikipedia.org/wiki/Lempel-Ziv_complexity).
[I](http://perso.crans.org/besson/) will give examples, and benchmarks of different implementations.
- **Reference:** Abraham Lempel and Jacob Ziv, *« On the Complexity of Finite Sequences »*, IEEE Trans. on Information Theory, January 1976, p. 75–81, vol. 22, n°1.
----
## Short definition
The Lempel-Ziv complexity is defined as the number of different substrings encountered as the stream is viewed from beginning to end.
As an example:
```python
>>> s = '1001111011000010'
>>> lempel_ziv_complexity(s) # 1 / 0 / 01 / 1110 / 1100 / 0010
6
```
Marking the different substrings, this sequence $s$ has complexity $\mathrm{Lempel}$-$\mathrm{Ziv}(s) = 6$ because $s = 1001111011000010 = 1 / 0 / 01 / 1110 / 1100 / 0010$.
- See the page https://en.wikipedia.org/wiki/Lempel-Ziv_complexity for more details.
Other examples:
```python
>>> lempel_ziv_complexity('1010101010101010') # 1 / 0 / 10
3
>>> lempel_ziv_complexity('1001111011000010000010') # 1 / 0 / 01 / 1110 / 1100 / 0010 / 000 / 010
7
>>> lempel_ziv_complexity('100111101100001000001010') # 1 / 0 / 01 / 1110 / 1100 / 0010 / 000 / 010 / 10
8
```
----
## Python implementation
End of explanation
s = '1001111011000010'
lempel_ziv_complexity(s) # 1 / 0 / 01 / 1110 / 1100 / 0010
%timeit lempel_ziv_complexity(s)
lempel_ziv_complexity('1010101010101010') # 1 / 0 / 10
lempel_ziv_complexity('1001111011000010000010')  # 1 / 0 / 01 / 1110 / 1100 / 0010 / 000 / 010
lempel_ziv_complexity('100111101100001000001010') # 1 / 0 / 01 / 1110 / 1100 / 0010 / 000 / 010 / 10
%timeit lempel_ziv_complexity('100111101100001000001010')
Explanation: Tests (1/2)
End of explanation
%load_ext cython
%%cython
from __future__ import division
import cython
ctypedef unsigned int DTYPE_t
@cython.boundscheck(False) # turn off bounds-checking for entire function, quicker but less safe
def lempel_ziv_complexity_cython(str binary_sequence not None):
    """Lempel-Ziv complexity for a binary sequence, in simple Cython code (C extension)."""
cdef DTYPE_t u = 0
cdef DTYPE_t v = 1
cdef DTYPE_t w = 1
cdef DTYPE_t v_max = 1
cdef DTYPE_t length = len(binary_sequence)
cdef DTYPE_t complexity = 1
# that was the only needed part, typing statically all the variables
while True:
if binary_sequence[u + v - 1] == binary_sequence[w + v - 1]:
v += 1
if w + v >= length:
complexity += 1
break
else:
if v > v_max:
v_max = v
u += 1
if u == w:
complexity += 1
w += v_max
if w > length:
break
else:
u = 0
v = 1
v_max = 1
else:
v = 1
return complexity
Explanation: We can start to see that the computation time of this function seems to grow quickly as the complexity of the sequence grows.
Cython implementation
As this blog post explains, we can easily use Cython directly in a notebook cell.
See the Cython documentation for more information.
End of explanation
s = '1001111011000010'
lempel_ziv_complexity_cython(s) # 1 / 0 / 01 / 1110 / 1100 / 0010
%timeit lempel_ziv_complexity_cython(s)
lempel_ziv_complexity_cython('1010101010101010') # 1 / 0 / 10
lempel_ziv_complexity_cython('1001111011000010000010')  # 1 / 0 / 01 / 1110 / 1100 / 0010 / 000 / 010
lempel_ziv_complexity_cython('100111101100001000001010') # 1 / 0 / 01 / 1110 / 1100 / 0010 / 000 / 010 / 10
%timeit lempel_ziv_complexity_cython('100111101100001000001010')
Explanation: Let's try it!
End of explanation
import numpy as np
from numba import jit
@jit("int32(boolean[:])")
def lempel_ziv_complexity_numba_x(binary_sequence):
    """Lempel-Ziv complexity for a binary sequence, in Python code using numba.jit() for automatic speedup (hopefully)."""
u, v, w = 0, 1, 1
v_max = 1
length = len(binary_sequence)
complexity = 1
while True:
if binary_sequence[u + v - 1] == binary_sequence[w + v - 1]:
v += 1
if w + v >= length:
complexity += 1
break
else:
if v > v_max:
v_max = v
u += 1
if u == w:
complexity += 1
w += v_max
if w > length:
break
else:
u = 0
v = 1
v_max = 1
else:
v = 1
return complexity
def str_to_numpy(s):
    """str to np.array of bool"""
return np.array([int(i) for i in s], dtype=np.bool)
def lempel_ziv_complexity_numba(s):
return lempel_ziv_complexity_numba_x(str_to_numpy(s))
Explanation: $\implies$ Yay! It seems faster indeed!
Numba implementation
As this blog post explains, we can also use Numba directly in a notebook cell.
End of explanation
str_to_numpy(s)
s = '1001111011000010'
lempel_ziv_complexity_numba(s) # 1 / 0 / 01 / 1110 / 1100 / 0010
%timeit lempel_ziv_complexity_numba(s)
lempel_ziv_complexity_numba('1010101010101010') # 1 / 0 / 10
lempel_ziv_complexity_numba('1001111011000010000010')  # 1 / 0 / 01 / 1110 / 1100 / 0010 / 000 / 010
lempel_ziv_complexity_numba('100111101100001000001010') # 1 / 0 / 01 / 1110 / 1100 / 0010 / 000 / 010 / 10
%timeit lempel_ziv_complexity_numba('100111101100001000001010')
Explanation: Let's try it!
End of explanation
from numpy.random import binomial
def bernoulli(p, size=1):
    """One or more samples from a Bernoulli of probability p."""
return binomial(1, p, size)
bernoulli(0.5, 20)
Explanation: $\implies$ Well... It doesn't seem that much faster than the naive Python code.
We specified the signature when calling @numba.jit, and used what seemed the more appropriate data structures (strings are probably the smallest, numpy arrays are probably the fastest).
But even these tricks didn't help that much.
I tested, and without specifying the signature, the fastest approach is to use strings, compared to using lists or numpy arrays.
Note that the @jit-powered function is compiled at runtime when it is first called, so the signature used for the first call determines the signature used by the compiled function.
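As a small illustration of that lazy compilation (a side sketch, separate from the functions above; the exact signature printed depends on your Numba version):
```python
import numba
import numpy as np

@numba.jit(nopython=True)   # no explicit signature: compiled lazily
def first_one(arr):
    # index of the first True entry (toy example)
    for i in range(len(arr)):
        if arr[i]:
            return i
    return -1

x = np.array([0, 0, 1, 0], dtype=np.bool_)
first_one(x)                 # the first call triggers compilation for this dtype
print(first_one.signatures)  # the signature(s) Numba has compiled so far
```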
Tests (2/2)
To test more robustly, let us generate some (uniformly) random binary sequences.
End of explanation
''.join(str(i) for i in bernoulli(0.5, 20))
def random_binary_sequence(n, p=0.5):
    """Uniform random binary sequence of size n, with rate of 0/1 being p."""
return ''.join(str(i) for i in bernoulli(p, n))
random_binary_sequence(50)
random_binary_sequence(50, p=0.1)
random_binary_sequence(50, p=0.25)
random_binary_sequence(50, p=0.5)
random_binary_sequence(50, p=0.75)
random_binary_sequence(50, p=0.9)
Explanation: That's probably not optimal, but we can generate a string with:
End of explanation
def tests_3_functions(n, p=0.5, debug=True):
s = random_binary_sequence(n, p=p)
c1 = lempel_ziv_complexity(s)
if debug:
print("Sequence s = {} ==> complexity C = {}".format(s, c1))
c2 = lempel_ziv_complexity_cython(s)
c3 = lempel_ziv_complexity_numba(s)
assert c1 == c2 == c3, "Error: the sequence {} gave different values of the Lempel-Ziv complexity from 3 functions ({}, {}, {})...".format(s, c1, c2, c3)
return c1
tests_3_functions(5)
tests_3_functions(20)
tests_3_functions(50)
tests_3_functions(500)
tests_3_functions(5000)
Explanation: And so, this function can be used to check that the three implementations (naive, Cython-powered, Numba-powered) always give the same result.
End of explanation
%timeit lempel_ziv_complexity('100111101100001000001010')
%timeit lempel_ziv_complexity_cython('100111101100001000001010')
%timeit lempel_ziv_complexity_numba('100111101100001000001010')
%timeit lempel_ziv_complexity('10011110110000100000101000100100101010010111111011001111111110101001010110101010')
%timeit lempel_ziv_complexity_cython('10011110110000100000101000100100101010010111111011001111111110101001010110101010')
%timeit lempel_ziv_complexity_numba('10011110110000100000101000100100101010010111111011001111111110101001010110101010')
Explanation: Benchmarks
On two example strings (binary sequences), we can compare our three implementations.
End of explanation
%timeit tests_3_functions(10, debug=False)
%timeit tests_3_functions(20, debug=False)
%timeit tests_3_functions(40, debug=False)
%timeit tests_3_functions(80, debug=False)
%timeit tests_3_functions(160, debug=False)
%timeit tests_3_functions(320, debug=False)
def test_cython(n):
s = random_binary_sequence(n)
c = lempel_ziv_complexity_cython(s)
return c
%timeit test_cython(10)
%timeit test_cython(20)
%timeit test_cython(40)
%timeit test_cython(80)
%timeit test_cython(160)
%timeit test_cython(320)
%timeit test_cython(640)
%timeit test_cython(1280)
%timeit test_cython(2560)
%timeit test_cython(5120)
%timeit test_cython(10240)
%timeit test_cython(20480)
Explanation: Let's check the time used by all three functions, for longer and longer sequences:
End of explanation
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
sns.set(context="notebook", style="darkgrid", palette="hls", font="sans-serif", font_scale=1.4)
Explanation: Complexity ?
$\implies$ The function lempel_ziv_complexity_cython indeed seems to be (almost) linear in $n$, the length of the binary sequence $S$.
But let's check more precisely, as it could also have a complexity of $\mathcal{O}(n \log n)$.
End of explanation
x = [10, 20, 40, 80, 160, 320, 640, 1280, 2560, 5120, 10240, 20480]
y = [18, 30, 55, 107, 205, 471, 977, 2270, 5970, 17300, 56600, 185000]
plt.figure()
plt.plot(x, y, 'o-')
plt.xlabel("Length $n$ of the binary sequence $S$")
plt.ylabel(r"Time in $\mu\;\mathrm{s}$")
plt.title("Time complexity of Lempel-Ziv complexity")
plt.show()
plt.figure()
plt.semilogx(x, y, 'o-')
plt.xlabel("Length $n$ of the binary sequence $S$")
plt.ylabel(r"Time in $\mu\;\mathrm{s}$")
plt.title("Time complexity of Lempel-Ziv complexity, semilogx scale")
plt.show()
plt.figure()
plt.semilogy(x, y, 'o-')
plt.xlabel("Length $n$ of the binary sequence $S$")
plt.ylabel(r"Time in $\mu\;\mathrm{s}$")
plt.title("Time complexity of Lempel-Ziv complexity, semilogy scale")
plt.show()
plt.figure()
plt.loglog(x, y, 'o-')
plt.xlabel("Length $n$ of the binary sequence $S$")
plt.ylabel(r"Time in $\mu\;\mathrm{s}$")
plt.title("Time complexity of Lempel-Ziv complexity, loglog scale")
plt.show()
Explanation: It's dirty, but let us manually capture the times given by the experiments above.
End of explanation
%%time
%%script julia
"""Lempel-Ziv complexity for a binary sequence, in simple Julia code."""
function lempel_ziv_complexity(binary_sequence)
u, v, w = 0, 1, 1
v_max = 1
size = length(binary_sequence)
complexity = 1
while true
if binary_sequence[u + v] == binary_sequence[w + v]
v += 1
if w + v >= size
complexity += 1
break
end
else
if v > v_max
v_max = v
end
u += 1
if u == w
complexity += 1
w += v_max
if w > size
break
else
u = 0
v = 1
v_max = 1
end
else
v = 1
end
end
end
return complexity
end
s = "1001111011000010"
lempel_ziv_complexity(s) # 1 / 0 / 01 / 1110 / 1100 / 0010
M = 100;
N = 10000;
for _ in 1:M
s = join(rand(0:1, N));
lempel_ziv_complexity(s);
end
lempel_ziv_complexity(s) # 1 / 0 / 01 / 1110 / 1100 / 0010
Explanation: The curve is (almost) a straight line in log-log scale, so the algorithm indeed seems to have an (almost) linear complexity.
To sum up, for a sequence $S$ of length $n$, it takes $\mathcal{O}(n)$ basic operations to compute its Lempel-Ziv complexity $\mathrm{Lempel}$-$\mathrm{Ziv}(S)$.
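As a quick numerical check of that claim (a sketch reusing the manually captured timings from above, in microseconds):
```python
import numpy as np

x = np.array([10, 20, 40, 80, 160, 320, 640, 1280, 2560, 5120, 10240, 20480])
y = np.array([18, 30, 55, 107, 205, 471, 977, 2270, 5970, 17300, 56600, 185000])

slope, intercept = np.polyfit(np.log(x), np.log(y), 1)
print("empirical exponent =", round(slope, 2))  # an exponent close to 1 means (almost) linear growth
```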
Conclusion
The Lempel-Ziv complexity is not too hard to implement, and it indeed represents a certain complexity of a binary sequence, capturing the regularity and reproducibility of the sequence.
Using Cython was quite useful to get a $\simeq \times 100$ speed-up over our naive manual implementation!
The algorithm is not easy to analyze: we have a trivial $\mathcal{O}(n^2)$ bound, but experiments showed it is more likely to be $\mathcal{O}(n \log n)$ in the worst case, and $\mathcal{O}(n)$ in practice for "not too complicated sequences" (or on average, for random sequences).
(Experimental) Julia implementation
I want to (quickly) try to see if I can use Julia to write a faster version of this function.
See issue #1.
Disclaimer: I am still learning Julia!
End of explanation
%%time
%%pypy
def lempel_ziv_complexity(binary_sequence):
    """Lempel-Ziv complexity for a binary sequence, in simple Python code."""
u, v, w = 0, 1, 1
v_max = 1
length = len(binary_sequence)
complexity = 1
while True:
if binary_sequence[u + v - 1] == binary_sequence[w + v - 1]:
v += 1
if w + v >= length:
complexity += 1
break
else:
if v > v_max:
v_max = v
u += 1
if u == w:
complexity += 1
w += v_max
if w > length:
break
else:
u = 0
v = 1
v_max = 1
else:
v = 1
return complexity
s = "1001111011000010"
lempel_ziv_complexity(s) # 1 / 0 / 01 / 1110 / 1100 / 0010
from random import random
M = 100
N = 10000
for _ in range(M):
s = ''.join(str(int(random() < 0.5)) for _ in range(N))
lempel_ziv_complexity(s)
Explanation: And to compare it fairly, let us use PyPy for comparison.
End of explanation |
2,584 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Directly compare Monaco to Elekta Linac iCOM
This notebook uses the PyMedPhys library to compare the collected iCOM delivery data directly to the recorded plan within Monaco's tel files.
Description of method
There are a few key stages required to make this notebook work. Firstly the iCOM stream from the Linac needs to be recorded to disk, this iCOM stream then needs to be archived and grouped by patient.
Once these iCOM streams are running the plan needs to be delivered on the machine being listened to. From this point the iCOM stream is then able to be compared directly to the Monaco tel file.
This comparison is done by calculating an MU Density upon which the difference is reported on using the Gamma comparison tool.
iCOM stream service
A Windows service is created to listen to the iCOM stream. This service is made from a .bat file, an example of which can be seen at
Step1: Importing PyMedPhys
PyMedPhys contains all the tooling to log the iCOM stream, read the iCOM stream, read Monaco tel files, create an MU Density, and compare the results using a Gamma implementation
Step2: Patient ID Configuration
Provide here the Patient ID, this will be used to look up the iCOM file record as well as the Monaco tel file record.
Step3: File Path Configurations
Here is where the various root paths of the data has been stored.
Input directories
Step4: Output directories
The output_directory below presents the location where raw results are saved for the permanent record.
The pdf_directory is designed to be a sub-directory of the Mosaiq eScan directory so that created PDFs can be easily imported into Mosaiq for reporting purposes.
Step5: MU Density and Gamma configuration
MU Density and Gamma have a range of options available to them. Here are is where those options are configured.
Step6: Choosing from the available Monaco Plans and iCOM Deliveries
For the Patient ID provided the following two cells list out all the tel files and iCOM delivery files that were found. Run both of these cells below and then choose the appropriate plan and delivery in the third cell.
Monaco Plans
Step7: iCOM Deliveries
Step8: Plan and iCOM choice
Provide the directory name for the monaco plan and the timestamp for the iCOM delivery which you wish to compare.
Step9: Resulting paths found matching provided search query
Step10: Loading the iCOM file
In order to save space on disk the patient iCOM streams are compressed using lzma. The following code opens these compressed files and gets the raw iCOM stream.
Step11: Delivery Objects
Within PyMedPhys there is a Delivery object. This object can be created from a range of sources such as RT Plan DICOM files, Mosaiq SQL queries, iCOM data, trf log files, as well as Monaco tel files.
From this Delivery object many tasks can be undergone. The available methods and attributes on the Delivery object are given below
Step12: Creating the Delivery Objects
We can create two of these Delivery objects, one from the iCOM stream, and the other from the Monaco tel file.
Step13: Using the Delivery Objects
Once we have two Delivery objects we can calculate the MU Density of these. Of note, this same method of using the Delivery object can also be employed to compare to an RT Plan file, Elekta Linac TRF log file, or Mosaiq SQL.
Step14: Calculating Gamma
PyMedPhys also has within it tooling to calculate Gamma. This is done below.
Step15: Create Plotting and Reporting Functions
So that we can view the result as well as create a PDF that can be stored within Mosaiq the following functions create these plots using the matplotlib library.
Step16: Plotting and saving the report
Now that have our data, and have formatted our report as we wish let's create and save this report as a png.
Step17: Converting PNG to PDF for importing into Mosaiq
To create a pdf, the just created png file can be converted to pdf. To do this the tool imagemagick needs to be installed on your system. If you install this now you will need to reset your Jupyter server in a new command prompt so that the magick command is available within your path. | Python Code:
import pathlib # for filepath path tooling
import lzma # to decompress the iCOM file
import numpy as np # for array tooling
import matplotlib.pyplot as plt # for plotting
Explanation: Directly compare Monaco to Elekta Linac iCOM
This notebook uses the PyMedPhys library to compare the collected iCOM delivery data directly to the recorded plan within Monaco's tel files.
Description of method
There are a few key stages required to make this notebook work. Firstly, the iCOM stream from the Linac needs to be recorded to disk; this iCOM stream then needs to be archived and grouped by patient.
Once these iCOM streams are running the plan needs to be delivered on the machine being listened to. From this point the iCOM stream is then able to be compared directly to the Monaco tel file.
This comparison is done by calculating an MU Density, upon which the difference is reported using the Gamma comparison tool.
iCOM stream service
A Windows service is created to listen to the iCOM stream. This service is made from a .bat file, an example of which can be seen at:
https://github.com/CCA-Physics/physics-server/blob/8a0954e5/RCCC/icom/harry_listening.bat
To create this service the nssm tool is used, with an example of its usage available at:
https://github.com/CCA-Physics/physics-server/blob/8a0954e5/RCCC/icom/services-install.bat
Warning: Take Note
Force-closing the service that is listening to the iCOM stream, in such a way that it is not able to properly close down the listening socket, will cause the Linac being listened to to raise an interlock. This won't interrupt the beam, but it will not let a new beam be delivered until the machine is logged out of and logged back in.
This can happen when the service is force killed, the host machine is force shutdown, or the network connection is abruptly disconnected. Normal shutdown of the service or machine should not have this effect.
Grouping iCOM stream by patient
For the iCOM stream to be easily indexed in the future deliveries are stored by patient id and name. This is done by setting up a .bat file to run on machine boot. An example .bat file that achieves this can be seen at:
https://github.com/CCA-Physics/physics-server/blob/8a0954e5/RCCC/icom/patient_archiving.bat
Reading the iCOM file, Monaco file, and comparing them
The resulting files are then loaded into a PyMedPhys Delivery object from which an MU Density can be calculated and used as a comparison and reporting tool.
These steps will be further expanded on below, prior to the lines of code that implement them.
Importing the required libraries
Third party libraries
End of explanation
# Makes it so that any changes in pymedphys is automatically
# propagated into the notebook without needing a kernel reset.
from IPython.lib.deepreload import reload
%load_ext autoreload
%autoreload 2
import pymedphys
Explanation: Importing PyMedPhys
PyMedPhys contains all the tooling to log the iCOM stream, read the iCOM stream, read Monaco tel files, create an MU Density, and compare the results using a Gamma implementation
End of explanation
patient_id = '015112'
Explanation: Patient ID Configuration
Provide here the Patient ID, this will be used to look up the iCOM file record as well as the Monaco tel file record.
End of explanation
icom_directory = pathlib.Path(r'\\physics-server\iComLogFiles\patients')
monaco_directory = pathlib.Path(r'\\monacoda\FocalData\RCCC\1~Clinical')
Explanation: File Path Configurations
Here is where the various root paths of the data has been stored.
Input directories
End of explanation
output_directory = pathlib.Path(r'S:\Physics\Patient Specific Logfile Fluence')
pdf_directory = pathlib.Path(r'P:\Scanned Documents\RT\PhysChecks\Logfile PDFs')
Explanation: Output directories
The output_directory below presents the location where raw results are saved for the permanent record.
The pdf_directory is designed to be a sub-directory of the Mosaiq eScan directory so that created PDFs can be easily imported into Mosaiq for reporting purposes.
End of explanation
GRID = pymedphys.mudensity.grid()
COORDS = (GRID["jaw"], GRID["mlc"])
GAMMA_OPTIONS = {
'dose_percent_threshold': 2, # Not actually comparing dose though
'distance_mm_threshold': 0.5,
'local_gamma': True,
'quiet': True,
'max_gamma': 2,
}
Explanation: MU Density and Gamma configuration
MU Density and Gamma have a range of options available to them. Here is where those options are configured.
End of explanation
all_tel_paths = list(monaco_directory.glob(f'*~{patient_id}/plan/*/tel.1'))
all_tel_paths
plan_names_to_choose_from = [
path.parent.name for path in all_tel_paths
]
plan_names_to_choose_from
Explanation: Choosing from the available Monaco Plans and iCOM Deliveries
For the Patient ID provided the following two cells list out all the tel files and iCOM delivery files that were found. Run both of these cells below and then choose the appropriate plan and delivery in the third cell.
Monaco Plans
End of explanation
icom_deliveries = list(icom_directory.glob(f'{patient_id}_*/*.xz'))
icom_deliveries
icom_files_to_choose_from = [
path.stem for path in icom_deliveries
]
icom_files_to_choose_from
Explanation: iCOM Deliveries
End of explanation
monaco_plan_name = 'LeftIlium1' # plan directory name
icom_delivery = '20200213_133208' # iCOM timestamp
Explanation: Plan and iCOM choice
Provide the directory name for the monaco plan and the timestamp for the iCOM delivery which you wish to compare.
End of explanation
tel_path = list(monaco_directory.glob(f'*~{patient_id}/plan/{monaco_plan_name}/tel.1'))[-1]
tel_path
icom_path = list(icom_directory.glob(f'{patient_id}_*/{icom_delivery}.xz'))[-1]
icom_path
Explanation: Resulting paths found matching provided search query
End of explanation
with lzma.open(icom_path, 'r') as f:
icom_stream = f.read()
Explanation: Loading the iCOM file
In order to save space on disk the patient iCOM streams are compressed using lzma. The following code opens these compressed files and gets the raw iCOM stream.
End of explanation
# Print out available methods and attributes on the Delivery object
[command for command in dir(pymedphys.Delivery) if not command.startswith('_')]
Explanation: Delivery Objects
Within PyMedPhys there is a Delivery object. This object can be created from a range of sources such as RT Plan DICOM files, Mosaiq SQL queries, iCOM data, trf log files, as well as Monaco tel files.
From this Delivery object many tasks can be undertaken. The available methods and attributes on the Delivery object are given below:
End of explanation
delivery_icom = pymedphys.Delivery.from_icom(icom_stream)
delivery_tel = pymedphys.Delivery.from_monaco(tel_path)
Explanation: Creating the Delivery Objects
We can create two of these Delivery objects, one from the iCOM stream, and the other from the Monaco tel file.
End of explanation
mudensity_icom = delivery_icom.mudensity()
mudensity_tel = delivery_tel.mudensity()
Explanation: Using the Delivery Objects
Once we have two Delivery objects we can calculate the MU Density of these. Of note, this same method of using the Delivery object can also be employed to compare to an RT Plan file, Elekta Linac TRF log file, or Mosaiq SQL.
End of explanation
def to_tuple(array):
return tuple(map(tuple, array))
gamma = pymedphys.gamma(
COORDS,
to_tuple(mudensity_tel),
COORDS,
to_tuple(mudensity_icom),
**GAMMA_OPTIONS
)
Explanation: Calculating Gamma
PyMedPhys also has within it tooling to calculate Gamma. This is done below.
End of explanation
def plot_gamma_hist(gamma, percent, dist):
valid_gamma = gamma[~np.isnan(gamma)]
plt.hist(valid_gamma, 50, density=True)
pass_ratio = np.sum(valid_gamma <= 1) / len(valid_gamma)
plt.title(
"Local Gamma ({0}%/{1}mm) | Percent Pass: {2:.2f} % | Max Gamma: {3:.2f}".format(
percent, dist, pass_ratio * 100, np.max(valid_gamma)
)
)
def plot_and_save_results(
mudensity_tel,
mudensity_icom,
gamma,
png_filepath,
pdf_filepath,
header_text="",
footer_text="",
):
diff = mudensity_icom - mudensity_tel
largest_item = np.max(np.abs(diff))
widths = [1, 1]
heights = [0.3, 1, 1, 1, 0.1]
gs_kw = dict(width_ratios=widths, height_ratios=heights)
fig, axs = plt.subplots(5, 2, figsize=(10, 16), gridspec_kw=gs_kw)
gs = axs[0, 0].get_gridspec()
for ax in axs[0, 0:]:
ax.remove()
for ax in axs[1, 0:]:
ax.remove()
for ax in axs[4, 0:]:
ax.remove()
axheader = fig.add_subplot(gs[0, :])
axhist = fig.add_subplot(gs[1, :])
axfooter = fig.add_subplot(gs[4, :])
axheader.axis("off")
axfooter.axis("off")
axheader.text(0, 0, header_text, ha="left", wrap=True, fontsize=30)
axfooter.text(0, 1, footer_text, ha="left", va="top", wrap=True, fontsize=6)
plt.sca(axs[2, 0])
pymedphys.mudensity.display(GRID, mudensity_tel)
axs[2, 0].set_title("Monaco Plan MU Density")
plt.sca(axs[2, 1])
pymedphys.mudensity.display(GRID, mudensity_icom)
axs[2, 1].set_title("Recorded iCOM MU Density")
plt.sca(axs[3, 0])
pymedphys.mudensity.display(
GRID, diff, cmap="seismic", vmin=-largest_item, vmax=largest_item
)
plt.title("iCOM - Monaco")
plt.sca(axs[3, 1])
pymedphys.mudensity.display(GRID, gamma, cmap="coolwarm", vmin=0, vmax=2)
plt.title(
"Local Gamma | "
f"{GAMMA_OPTIONS['dose_percent_threshold']}%/"
f"{GAMMA_OPTIONS['distance_mm_threshold']}mm")
plt.sca(axhist)
plot_gamma_hist(
gamma,
GAMMA_OPTIONS['dose_percent_threshold'],
GAMMA_OPTIONS['distance_mm_threshold'])
return fig
Explanation: Create Plotting and Reporting Functions
So that we can view the result, as well as create a PDF that can be stored within Mosaiq, the following functions create these plots using the matplotlib library.
End of explanation
results_dir = output_directory.joinpath(patient_id, tel_path.parent.name, icom_path.stem)
results_dir.mkdir(exist_ok=True, parents=True)
header_text = (
f"Patient ID: {patient_id}\n"
f"Plan Name: {tel_path.parent.name}\n"
)
footer_text = (
f"tel.1 file path: {str(tel_path)}\n"
f"icom file path: {str(icom_path)}\n"
f"results path: {str(results_dir)}"
)
png_filepath = str(results_dir.joinpath("result.png").resolve())
pdf_filepath = str(pdf_directory.joinpath(f"{patient_id}.pdf").resolve())
fig = plot_and_save_results(
mudensity_tel, mudensity_icom,
gamma, png_filepath, pdf_filepath,
header_text=header_text, footer_text=footer_text
)
fig.tight_layout()
plt.savefig(png_filepath, dpi=300)
plt.show()
Explanation: Plotting and saving the report
Now that we have our data and have formatted our report as we wish, let's create and save this report as a PNG.
End of explanation
!magick convert "{png_filepath}" "{pdf_filepath}"
Explanation: Converting PNG to PDF for importing into Mosaiq
To create a PDF, the PNG file just created can be converted to PDF. To do this, the imagemagick tool needs to be installed on your system. If you install it now, you will need to restart your Jupyter server from a new command prompt so that the magick command is available on your path.
End of explanation |
2,585 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
DFFs and Registers
This example demonstrates the use of d-flip-flops and registers.
Step1: DFF
To use a DFF we import the mantle circuit DFF.
Calling DFF() creates an instance of a DFF.
Although a sequential logic element like a DFF has internal state,
in Magma it is treated very similar
to a combinational logic element like a full adder.
Both combinational and sequential circuits have inputs and outputs.
The inputs and outputs are wired up in the same way in both cases.
Step2: Since a flip-flop is a sequential logic element,
it has a clock.
The clock generator is a peripheral on the FPGA.
We need to turn it on if we want to use the clock.
Turning it on creates a global clock signal on the FPGA.
Note that we did not need to wire the clock to the DFF;
magma automatically wires the global clock to the flip-flop's clock input.
Let's compile and build.
Step3: If we inspect the compiled verilog, we see that our mantle DFF uses the SB_DFF ice40 primitive. Notice also that the top-level main module has a CLKIN signal,
and that that signal has been wired to the clock of the SB_DFF.
Step4: Register
A register is simply an array of flip-flops.
To create an instance of a register, call Register
with the number of bits n in the register.
Step5: Registers and DFFs are very similar to each other.
The only difference is that the input and output to a DFF
are Bit values,
whereas the inputs and the outputs to registers are Bits(n).
Step6: If we inspect the compiled verilog, we see that our register is a module that instances a set of SB_DFFs.
Step7: Enables and Resets
Flip-flops and registers can have with clock enables and resets.
The flip-flop has a clock enable, its state will only be updated
if the clock enable is true.
Similarly, if a flip-flop has a reset signal,
it will be reset to its initial value if reset is true.
To create registers with these additional inputs,
set the optional arguments has_ce and/or has_reset
when instancing the register.
Step8: To wire the optional clock inputs, clock enable and reset,
use named arguments (ce and reset) when you call the register with its inputs.
In Magma, clock signals are handled differently than signals.
Compile, build, and upload.
Step9: Notice in the generated verilog the code uses the SB_DFFESR primitive and that the CE port is wired up to the E (enable) input of the flip flop. | Python Code:
import magma as m
m.set_mantle_target("ice40")
Explanation: DFFs and Registers
This example demonstrates the use of d-flip-flops and registers.
End of explanation
from loam.boards.icestick import IceStick
from mantle import DFF
icestick = IceStick()
icestick.Clock.on() # Need to turn on the clock for sequential logic
icestick.J1[0].input().on()
icestick.J3[0].output().on()
main = icestick.DefineMain()
dff = DFF()
main.J3 <= dff(main.J1)
m.EndDefine()
Explanation: DFF
To use a DFF we import the mantle circuit DFF.
Calling DFF() creates an instance of a DFF.
Although a sequential logic element like a DFF has internal state,
in Magma it is treated very similarly
to a combinational logic element like a full adder.
Both combinational and sequential circuits have inputs and outputs.
The inputs and outputs are wired up in the same way in both cases.
End of explanation
m.compile("build/dff", main)
%%bash
cd build
yosys -q -p 'synth_ice40 -top main -blif dff.blif' dff.v
arachne-pnr -q -d 1k -o dff.txt -p dff.pcf dff.blif
icepack dff.txt dff.bin
#iceprog dff.bin
Explanation: Since a flip-flop is a sequential logic element,
it has a clock.
The clock generator is a peripheral on the FPGA.
We need to turn it on if we want to use the clock.
Turning it on creates a global clock signal on the FPGA.
Note that we did not need to wire the clock to the DFF;
magma automatically wires the global clock to the flip-flop's clock input.
Let's compile and build.
End of explanation
%cat build/dff.v
Explanation: If we inspect the compiled verilog, we see that our mantle DFF uses the SB_DFF ice40 primitive. Notice also that the top-level main module has a CLKIN signal,
and that that signal has been wired to the clock of the SB_DFF.
End of explanation
import magma as m
m.set_mantle_target("ice40")
from loam.boards.icestick import IceStick
from mantle import Register
icestick = IceStick()
icestick.Clock.on() # Need to turn on the clock for sequential logic
for i in range(4):
icestick.J1[i].input().on()
icestick.J3[i].output().on()
main = icestick.DefineMain()
register4 = Register(4)
main.J3 <= register4(main.J1)
m.EndDefine()
Explanation: Register
A register is simply an array of flip-flops.
To create an instance of a register, call Register
with the number of bits n in the register.
End of explanation
m.compile("build/register4", main)
%%bash
cd build
yosys -q -p 'synth_ice40 -top main -blif register4.blif' register4.v
arachne-pnr -q -d 1k -o register4.txt -p register4.pcf register4.blif
icepack register4.txt register4.bin
#iceprog register4.bin
Explanation: Registers and DFFs are very similar to each other.
The only difference is that the input and output to a DFF
are Bit values,
whereas the inputs and the outputs to registers are Bits(n).
End of explanation
%cat build/register4.v
Explanation: If we inspect the compiled verilog, we see that our register is a module that instances a set of SB_DFFs.
End of explanation
import magma as m
m.set_mantle_target("ice40")
from loam.boards.icestick import IceStick
from mantle import Register
icestick = IceStick()
icestick.Clock.on()
for i in range(4):
icestick.J1[i].input().on()
icestick.J3[i].output().on()
icestick.J1[4].input().on() # ce signal
icestick.J1[5].input().on() # reset signal
main = icestick.DefineMain()
register4 = Register(4, init=5, has_ce=True, has_reset=True )
main.J3 <= register4(main.J1[0:4], ce=main.J1[4], reset=main.J1[5])
m.EndDefine()
Explanation: Enables and Resets
Flip-flops and registers can have with clock enables and resets.
The flip-flop has a clock enable, its state will only be updated
if the clock enable is true.
Similarly, if a flip-flop has a reset signal,
it will be reset to its initial value if reset is true.
To create registers with these additional inputs,
set the optional arguments has_ce and/or has_reset
when instancing the register.
End of explanation
m.compile("build/register4ce", main)
%%bash
cd build
yosys -q -p 'synth_ice40 -top main -blif register4ce.blif' register4ce.v
arachne-pnr -q -d 1k -o register4ce.txt -p register4ce.pcf register4ce.blif
icepack register4ce.txt register4ce.bin
#iceprog register4ce.bin
Explanation: To wire the optional clock inputs, clock enable and reset,
use named arguments (ce and reset) when you call the register with its inputs.
In Magma, clock signals are handled differently from ordinary signals.
Compile, build, and upload.
End of explanation
%cat build/register4ce.v
Explanation: Notice in the generated verilog the code uses the SB_DFFESR primitive and that the CE port is wired up to the E (enable) input of the flip flop.
End of explanation |
2,586 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
On this notebook the best models and input parameters will be searched for. The problem at hand is predicting the price of any stock symbol 14 days ahead, assuming one model for all the symbols. The best training period length, base period length, and base period step will be determined, using the MRE metrics (and/or the R^2 metrics). The step for the rolling validation will be determined taking into consideration a compromise between having enough points (I consider about 1000 different target days may be good enough), and the time needed to compute the validation.
Step1: Let's get the data.
Step2: Let's find the best params set for some different models
- Dummy Predictor (mean)
Step3: - Linear Predictor
Step4: - Random Forest model | Python Code:
# Basic imports
import os
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import datetime as dt
import scipy.optimize as spo
import sys
from time import time
from sklearn.metrics import r2_score, median_absolute_error
%matplotlib inline
%pylab inline
pylab.rcParams['figure.figsize'] = (20.0, 10.0)
%load_ext autoreload
%autoreload 2
sys.path.append('../../')
import predictor.feature_extraction as fe
import utils.preprocessing as pp
import utils.misc as misc
AHEAD_DAYS = 14
Explanation: In this notebook the best models and input parameters will be searched for. The problem at hand is predicting the price of any stock symbol 14 days ahead, assuming one model for all the symbols. The best training period length, base period length, and base period step will be determined, using the MRE metric (and/or the R^2 metric). The step for the rolling validation will be determined by taking into consideration a compromise between having enough points (I consider about 1000 different target days to be good enough) and the time needed to compute the validation.
End of explanation
datasets_params_list_df = pd.read_pickle('../../data/datasets_params_list_df.pkl')
print(datasets_params_list_df.shape)
datasets_params_list_df.head()
train_days_arr = 252 * np.array([1, 2, 3])
params_list_df = pd.DataFrame()
for train_days in train_days_arr:
temp_df = datasets_params_list_df[datasets_params_list_df['ahead_days'] == AHEAD_DAYS].copy()
temp_df['train_days'] = train_days
params_list_df = params_list_df.append(temp_df, ignore_index=True)
print(params_list_df.shape)
params_list_df.head()
Explanation: Let's get the data.
End of explanation
from predictor.dummy_mean_predictor import DummyPredictor
PREDICTOR_NAME = 'dummy'
# Global variables
eval_predictor = DummyPredictor()
step_eval_days = 60 # The step to move between training/validation pairs
params = {'eval_predictor': eval_predictor, 'step_eval_days': step_eval_days}
results_df = misc.parallelize_dataframe(params_list_df, misc.apply_mean_score_eval, params)
results_df['r2'] = results_df.apply(lambda x: x['scores'][0], axis=1)
results_df['mre'] = results_df.apply(lambda x: x['scores'][1], axis=1)
# Pickle that!
results_df.to_pickle('../../data/results_ahead{}_{}_df.pkl'.format(AHEAD_DAYS, PREDICTOR_NAME))
results_df['mre'].plot()
print('Minimum MRE param set: \n {}'.format(results_df.iloc[np.argmin(results_df['mre'])]))
print('Maximum R^2 param set: \n {}'.format(results_df.iloc[np.argmax(results_df['r2'])]))
Explanation: Let's find the best params set for some different models
- Dummy Predictor (mean)
End of explanation
from predictor.linear_predictor import LinearPredictor
PREDICTOR_NAME = 'linear'
# Global variables
eval_predictor = LinearPredictor()
step_eval_days = 60 # The step to move between training/validation pairs
params = {'eval_predictor': eval_predictor, 'step_eval_days': step_eval_days}
results_df = misc.parallelize_dataframe(params_list_df, misc.apply_mean_score_eval, params)
results_df['r2'] = results_df.apply(lambda x: x['scores'][0], axis=1)
results_df['mre'] = results_df.apply(lambda x: x['scores'][1], axis=1)
# Pickle that!
results_df.to_pickle('../../data/results_ahead{}_{}_df.pkl'.format(AHEAD_DAYS, PREDICTOR_NAME))
results_df['mre'].plot()
print('Minimum MRE param set: \n {}'.format(results_df.iloc[np.argmin(results_df['mre'])]))
print('Maximum R^2 param set: \n {}'.format(results_df.iloc[np.argmax(results_df['r2'])]))
Explanation: - Linear Predictor
End of explanation
from predictor.random_forest_predictor import RandomForestPredictor
PREDICTOR_NAME = 'random_forest'
# Global variables
eval_predictor = RandomForestPredictor()
step_eval_days = 60 # The step to move between training/validation pairs
params = {'eval_predictor': eval_predictor, 'step_eval_days': step_eval_days}
results_df = misc.parallelize_dataframe(params_list_df, misc.apply_mean_score_eval, params)
results_df['r2'] = results_df.apply(lambda x: x['scores'][0], axis=1)
results_df['mre'] = results_df.apply(lambda x: x['scores'][1], axis=1)
# Pickle that!
results_df.to_pickle('../../data/results_ahead{}_{}_df.pkl'.format(AHEAD_DAYS, PREDICTOR_NAME))
results_df['mre'].plot()
print('Minimum MRE param set: \n {}'.format(results_df.iloc[np.argmin(results_df['mre'])]))
print('Maximum R^2 param set: \n {}'.format(results_df.iloc[np.argmax(results_df['r2'])]))
Explanation: - Random Forest model
End of explanation |
2,587 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Possible data inputs to DataFrame constructor
2D ndarray A matrix of data, passing optional row and column labels
dict of arrays, lists, or tuples Each sequence becomes a column in the DataFrame. All sequences must be the same length.
NumPy structured/record array Treated as the “dict of arrays” case
dict of Series Each value becomes a column. Indexes from each Series are unioned together to form the result’s row index if no explicit index is passed.
dict of dicts Each inner dict becomes a column. Keys are unioned to form the row index as in the “dict of Series” case.
list of dicts or Series Each item becomes a row in the DataFrame. Union of dict keys or Series indexes become the DataFrame’s column labels
List of lists or tuples Treated as the “2D ndarray” case
Another DataFrame The DataFrame’s indexes are used unless different ones are passed
NumPy MaskedArray Like the “2D ndarray” case except masked values become NA/missing in the DataFrame result
Step1: Index methods and properties
append Concatenate with additional Index objects, producing a new Index
diff Compute set difference as an Index
intersection Compute set intersection
union Compute set union
isin Compute boolean array indicating whether each value is contained in the passed collection
delete Compute new Index with element at index i deleted
drop Compute new index by deleting passed values
insert Compute new Index by inserting element at index i
is_monotonic Returns True if each element is greater than or equal to the previous element
is_unique Returns True if the Index has no duplicate values
unique Compute the array of unique values in the Index
Step2: Reindex Series or DataFrme
index New sequence to use as index. Can be Index instance or any other sequence-like Python data structure. An Index will be used exactly as is without any copying
method Interpolation (fill) method, see Table 5-4 for options.
fill_value Substitute value to use when introducing missing data by reindexing
limit When forward- or backfilling, maximum size gap to fill
level Match simple Index on level of MultiIndex, otherwise select subset of
copy Do not copy underlying data if new index is equivalent to old index. True by default (i.e. always copy data).
Step3: Indexing, selection, and filtering
Step4: Indexing options with DataFrame
obj[val] Select single column or sequence of columns from the DataFrame. Special case con-veniences
Step5: Arithmetic methods with fill values
Step6: Flexible arithmetic methods
add Method for addition (+)
sub Method for subtraction (-)
div Method for division (/)
mul Method for multiplication (*)
Operations between DataFrame and Series
Step7: Function application and mapping
Step8: Sorting and ranking
Step9: Axis indexes with duplicate values | Python Code:
import numpy as np
import pandas as pd

state = ['Ohio', 'Ohio', 'Ohio', 'Nevada', 'Nevada']
year = [2000, 2001, 2002, 2001, 2002]
pop = [1.5, 1.7, 3.6, 2.4, 2.9]
print(type(state), type(year), type(pop))
# creating dataframe
df = pd.DataFrame({'state':state, 'year':year, 'pop':pop})
print(df.info())
print(df)
sdata = {'state':state, 'year':year, 'pop':pop}
print(sdata,"\n",type(sdata))
df = pd.DataFrame(sdata, columns=['pop1', 'state1', 'year1']) # we cannot rename columns like this; column names that
# don't exist in the data are simply created (filled with NaN)
print(df)
df = pd.DataFrame(sdata, columns=['pop1', 'state', 'year']) # this will pick those columns from sdata which matched
print(df)
df = pd.DataFrame(sdata)
print(df.columns)
# renaming columns and index
df.columns = ['pop1', 'state1', 'year1']
df.index = ['one', 'two', 'three', 'four', 'five']
print(df)
# stats about dataframe
print(df.index, "\n", df.shape, "\n", df.columns)
df['pop1'] = 1.5
print(df)
df['pop1'] = range(5)
print(df)
# can access the data as
print(df['state1'])
print(df.state1)
# for deleting any columns
del df['pop1']
print(df)
# transpose the dataframe
dft = df.T
print(dft)
# using columns as an index
df.index = df['year1']
del df['year1']
print(df)
df.columns.name, df.index.name
df.columns
# printing values
df.values
Explanation: Possible data inputs to DataFrame constructor
2D ndarray A matrix of data, passing optional row and column labels
dict of arrays, lists, or tuples Each sequence becomes a column in the DataFrame. All sequences must be the same length.
NumPy structured/record array Treated as the “dict of arrays” case
dict of Series Each value becomes a column. Indexes from each Series are unioned together to form the result’s row index if no explicit index is passed.
dict of dicts Each inner dict becomes a column. Keys are unioned to form the row index as in the “dict of Series” case.
list of dicts or Series Each item becomes a row in the DataFrame. Union of dict keys or Series indexes become the DataFrame’s column labels
List of lists or tuples Treated as the “2D ndarray” case
Another DataFrame The DataFrame’s indexes are used unless different ones are passed
NumPy MaskedArray Like the “2D ndarray” case except masked values become NA/missing in the DataFrame result
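Two of the other input types from the table above, as a quick sketch:
```python
import numpy as np
import pandas as pd

# 2D ndarray with optional row and column labels
pd.DataFrame(np.arange(6).reshape(2, 3), index=['r1', 'r2'], columns=['a', 'b', 'c'])

# list of dicts: the union of the keys becomes the column labels
pd.DataFrame([{'a': 1, 'b': 2}, {'b': 3, 'c': 4}])
```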
End of explanation
# Series and DataFrame Index objects are immutable
df.index
#df.index[2]=2009 # this will throw an error, since an Index cannot be modified in place
Explanation: Index methods and properties
append Concatenate with additional Index objects, producing a new Index
diff Compute set difference as an Index
intersection Compute set intersection
union Compute set union
isin Compute boolean array indicating whether each value is contained in the passed collection
delete Compute new Index with element at index i deleted
drop Compute new index by deleting passed values
insert Compute new Index by inserting element at index i
is_monotonic Returns True if each element is greater than or equal to the previous element
is_unique Returns True if the Index has no duplicate values
unique Compute the array of unique values in the Index
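A few of these methods in action (a quick sketch):
```python
import pandas as pd

idx1 = pd.Index(['a', 'b', 'c'])
idx2 = pd.Index(['b', 'c', 'd'])

print(idx1.union(idx2))         # union of the two indexes
print(idx1.intersection(idx2))  # common labels only
print(idx1.isin(['a', 'd']))    # boolean array of membership
print(idx1.is_unique)           # True
```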
End of explanation
print(df)
df.index
# df2 = df.reindex([2000, 2001, 2002, 2001, 2002, 2009])
# this will throw a ValueError, as the index should be unique
frame = pd.DataFrame(np.arange(9).reshape((3, 3)), index=['a', 'c', 'd'],columns=['Ohio', 'Texas', 'California'])
print(frame)
frame2 = frame.reindex(['a', 'b', 'c', 'd'])
print(frame2)
# likewise let's revert the df
df['year'] = df.index
df.index = [0,1,2,3,4]
print(df)
# now we can reindex this df
df2 = df.reindex([1,2,3,4,5,6,7]) # again, reindex keeps the labels already in df and introduces the new ones
print(df2) # here it keeps 1, 2, 3, 4, drops 0, and adds NaN rows for the new labels 5, 6, 7
# a better and faster way to do that is:
df3=df2.ix[[1,2,3,4,6]]
print(df3)
# Can alter the columns as well
new_columns = ['state1', 'year', 'population']
df4 = df3.ix[[1,2,3,4,6], new_columns]
print(df4)
df4.columns
# renaming columns
df4.columns = ['state', 'year', 'pop']
print(df4)
# dropping index or columns
df5=df4.drop([3])
print(df5)
df5 = df5.drop(['pop'], axis=1)
print(df5)
Explanation: Reindex Series or DataFrame
index New sequence to use as index. Can be Index instance or any other sequence-like Python data structure. An Index will be used exactly as is without any copying
method Interpolation (fill) method, see Table 5-4 for options.
fill_value Substitute value to use when introducing missing data by reindexing
limit When forward- or backfilling, maximum size gap to fill
level Match simple Index on level of MultiIndex, otherwise select subset of
copy Do not copy underlying data if new index is equivalent to old index. True by default (i.e. always copy data).
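For example, the method option fills holes when reindexing an ordered index (a quick sketch):
```python
import pandas as pd

obj = pd.Series(['blue', 'purple', 'yellow'], index=[0, 2, 4])
obj.reindex(range(6), method='ffill')  # forward-fills the missing labels 1, 3 and 5
```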
End of explanation
df4
df4[df4['state']=='Ohio']
df4[['state', 'year']]
df4['year'][df4['state']=='Ohio']=2004
df4
# ix enables you to select a subset of the rows and columns from a DataFrame with NumPy like notation plus axis labels
df4.ix[[1,2],['state']]
df4.ix[[3,6],[0,2]]
df4.ix[df4['year']<2003,[0,2]]
Explanation: Indexing, selection, and filtering
End of explanation
s1 = pd.Series([7.3, -2.5, 3.4, 1.5], index=['a', 'c', 'd', 'e'])
s2 = pd.Series([-2.1, 3.6, -1.5, 4, 3.1], index=['a', 'c', 'e', 'f', 'g'])
s1 + s2 #assigned NaN for those index which is not found in another series
df1 = pd.DataFrame(np.arange(9.).reshape((3, 3)), columns=list('bcd'), index=['Ohio', 'Texas', 'Colorado'])
df2 = pd.DataFrame(np.arange(12.).reshape((4, 3)), columns=list('bde'), index=['Utah', 'Ohio', 'Texas', 'Oregon'])
df1 + df2
Explanation: Indexing options with DataFrame
obj[val] Select single column or sequence of columns from the DataFrame. Special case conveniences: boolean array (filter rows), slice (slice rows), or boolean DataFrame (set values based on some criterion).
obj.ix[val] Selects single row of subset of rows from the DataFrame.
obj.ix[:, val] Selects single column of subset of columns.
obj.ix[val1, val2] Select both rows and columns.
reindex method Conform one or more axes to new indexes.
xs method Select single row or column as a Series by label.
icol, irow methods Select single column or row, respectively, as a Series by integer location.
get_value, set_value methods Select single value by row and column label.
Arithmetic and data alignment
End of explanation
df1.add(df2, fill_value=0)
# when reindexing a Series or DataFrame, you can also specify a different fill value
df1.reindex(columns=df2.columns, fill_value=0)
Explanation: Arithmetic methods with fill values
End of explanation
frame = pd.DataFrame(np.arange(12.).reshape((4, 3)), columns=list('bde'), index=['Utah', 'Ohio', 'Texas', 'Oregon'])
frame
series = frame.ix[0] # picking the first row
series
frame * series
# By default, arithmetic between DataFrame and Series matches the index of the Series on the DataFrame's columns,
# broadcasting down the rows:
frame - series
series2 = pd.Series(range(3), index=['b', 'e', 'f'])
frame * series2
Explanation: Flexible arithmetic methods
add Method for addition (+)
sub Method for subtraction (-)
div Method for division (/)
mul Method for multiplication (*)
Operations between DataFrame and Series
End of explanation
f = lambda x : x.max() - x.min()
frame = pd.DataFrame(np.random.randn(4, 3), columns=list('bde'), index=['Utah', 'Ohio', 'Texas', 'Oregon'])
print(frame)
frame.apply(f)
frame.apply(f, axis=1)
# defining a func
def f(x):
return pd.Series([x.max(), x.min()], index=['max', 'min'])
frame.apply(f)
frame.apply(f, axis=1)
format = lambda x: '%.2f' % x
frame.applymap(format)
Explanation: Function application and mapping
End of explanation
obj = pd.Series(range(4), index=['d', 'a', 'b', 'c'])
obj
# sorting on index
obj.sort_index()
frame = pd.DataFrame(np.arange(8).reshape((2, 4)), index=['three', 'one'], columns=['d', 'a', 'b', 'c'])
frame
frame.sort_index()
frame.sort_index(axis=1)
frame.sort_index(axis=1).sort_index()
frame.sort_index(axis=1, ascending=False)
# To sort a Series by its values, use its sort_values method
sr = pd.Series(['2', np.nan, '-3', '5'])
sr
# sorting by value
sr.sort_values()
frame = pd.DataFrame({'b': [4, 7, -3, 2], 'a': [0, 1, 0, 1]})
frame
frame.sort_values(by='b')
frame.sort_values(by=['a', 'b'])
# ranking # Explore more
obj = pd.Series([7, -5, 7, 4, 2, 0, 4])
obj
obj.rank()
Explanation: Sorting and ranking
End of explanation
obj = pd.Series(range(5), index=['a', 'a', 'b', 'b', 'c'])
obj
obj.index.unique() # get unique index
obj.index.is_unique # check if index are unique
df = pd.DataFrame(np.random.randn(4, 3), index=['a', 'a', 'b', 'b'])
df
df.index.is_unique
df.ix['a'] # ix is used to select rows by index
df.ix[0]
Explanation: Axis indexes with duplicate values
End of explanation |
2,588 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<h1> Text Classification using TensorFlow/Keras on AI Platform </h1>
This notebook illustrates
Step1: We will look at the titles of articles and figure out whether the article came from the New York Times, TechCrunch or GitHub.
We will use hacker news as our data source. It is an aggregator that displays tech related headlines from various sources.
Creating Dataset from BigQuery
Hacker news headlines are available as a BigQuery public dataset. The dataset contains all headlines from the sites inception in October 2006 until October 2015.
Here is a sample of the dataset
Step2: Let's do some regular expression parsing in BigQuery to get the source of the newspaper article from the URL. For example, if the url is http
Step4: Now that we have good parsing of the URL to get the source, let's put together a dataset of source and titles. This will be our labeled dataset for AI Platform.
Step5: For ML training, we will need to split our dataset into training and evaluation datasets (and perhaps an independent test dataset if we are going to do model or feature selection based on the evaluation dataset).
A simple, repeatable way to do this is to use the hash of a well-distributed column in our data (See https
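A minimal illustration of hash-based splitting (a sketch only, not necessarily the exact scheme used later in this notebook):
```python
import hashlib

def in_training_set(title, train_frac=0.75):
    # Deterministic: the same title always lands in the same split.
    h = int(hashlib.md5(title.encode('utf-8')).hexdigest(), 16)
    return (h % 100) < train_frac * 100

print(in_training_set('Interactive map of the 2016 election'))
```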
Step6: Below we can see that roughly 75% of the data is used for training, and 25% for evaluation.
We can also see that within each dataset, the classes are roughly balanced.
Step7: Finally we will save our data, which is currently in-memory, to disk.
Step8: TensorFlow/Keras Code
Please explore the code in this <a href="txtclsmodel/trainer">directory</a>
Step9: Train on the Cloud
Let's first copy our training data to the cloud
Step10: Change the job name appropriately. View the job in the console, and wait until the job is complete.
Step11: Results
What accuracy did you get? You should see around 80%.
Deploy trained model
Once your training completes you will see your exported models in the output directory specified in Google Cloud Storage.
You should see one model for each training checkpoint (default is every 1000 steps).
Step12: We will take the last export and deploy it as a REST API using Google AI Platform
Step13: Get Predictions
Here are some actual hacker news headlines gathered from July 2018. These titles were not part of the training or evaluation datasets.
Step14: Our serving input function expects the already tokenized representations of the headlines, so we do that pre-processing in the code before calling the REST API.
Note: ideally this pre-processing would live in the TensorFlow graph itself to avoid training-serving skew (see the discussion in the corresponding code cell below).
Step15: How many of your predictions were correct?
Rerun with Pre-trained Embedding
In the previous model we trained our word embedding from scratch. Often times we get better performance and/or converge faster by leveraging a pre-trained embedding. This is a similar concept to transfer learning during image classification.
We will use the popular GloVe embedding which is trained on Wikipedia as well as various news sources like the New York Times.
You can read more about GloVe at the project homepage: https://nlp.stanford.edu/projects/glove/ | Python Code:
# change these to try this notebook out
BUCKET = 'cloud-training-demos-ml'
PROJECT = 'cloud-training-demos'
REGION = 'us-central1'
import os
os.environ['BUCKET'] = BUCKET
os.environ['PROJECT'] = PROJECT
os.environ['REGION'] = REGION
os.environ['TFVERSION'] = '1.14'
if 'COLAB_GPU' in os.environ: # this is always set on Colab, the value is 0 or 1 depending on whether a GPU is attached
from google.colab import auth
auth.authenticate_user()
# download "sidecar files" since on Colab, this notebook will be on Drive
!rm -rf txtclsmodel
!git clone --depth 1 https://github.com/GoogleCloudPlatform/training-data-analyst
!mv training-data-analyst/courses/machine_learning/deepdive/09_sequence/txtclsmodel/ .
!rm -rf training-data-analyst
# downgrade TensorFlow to the version this notebook has been tested with
!pip install --upgrade tensorflow==$TFVERSION
import tensorflow as tf
print(tf.__version__)
Explanation: <h1> Text Classification using TensorFlow/Keras on AI Platform </h1>
This notebook illustrates:
<ol>
<li> Creating datasets for AI Platform using BigQuery
<li> Creating a text classification model using the Estimator API with a Keras model
<li> Training on Cloud ML Engine
<li> Deploying the model
<li> Predicting with model
<li> Rerun with pre-trained embedding
</ol>
End of explanation
%load_ext google.cloud.bigquery
%%bigquery --project $PROJECT
SELECT
url, title, score
FROM
`bigquery-public-data.hacker_news.stories`
WHERE
LENGTH(title) > 10
AND score > 10
AND LENGTH(url) > 0
LIMIT 10
Explanation: We will look at the titles of articles and figure out whether the article came from the New York Times, TechCrunch or GitHub.
We will use hacker news as our data source. It is an aggregator that displays tech related headlines from various sources.
Creating Dataset from BigQuery
Hacker news headlines are available as a BigQuery public dataset. The dataset contains all headlines from the site's inception in October 2006 until October 2015.
Here is a sample of the dataset:
End of explanation
%%bigquery --project $PROJECT
SELECT
ARRAY_REVERSE(SPLIT(REGEXP_EXTRACT(url, '.*://(.[^/]+)/'), '.'))[OFFSET(1)] AS source,
COUNT(title) AS num_articles
FROM
`bigquery-public-data.hacker_news.stories`
WHERE
REGEXP_CONTAINS(REGEXP_EXTRACT(url, '.*://(.[^/]+)/'), '.com$')
AND LENGTH(title) > 10
GROUP BY
source
ORDER BY num_articles DESC
LIMIT 10
Explanation: Let's do some regular expression parsing in BigQuery to get the source of the newspaper article from the URL. For example, if the url is http://mobile.nytimes.com/...., I want to be left with <i>nytimes</i>
End of explanation
from google.cloud import bigquery
bq = bigquery.Client(project=PROJECT)
query=
SELECT source, LOWER(REGEXP_REPLACE(title, '[^a-zA-Z0-9 $.-]', ' ')) AS title FROM
(SELECT
ARRAY_REVERSE(SPLIT(REGEXP_EXTRACT(url, '.*://(.[^/]+)/'), '.'))[OFFSET(1)] AS source,
title
FROM
`bigquery-public-data.hacker_news.stories`
WHERE
REGEXP_CONTAINS(REGEXP_EXTRACT(url, '.*://(.[^/]+)/'), '.com$')
AND LENGTH(title) > 10
)
WHERE (source = 'github' OR source = 'nytimes' OR source = 'techcrunch')
df = bq.query(query + " LIMIT 5").to_dataframe()
df.head()
Explanation: Now that we have good parsing of the URL to get the source, let's put together a dataset of source and titles. This will be our labeled dataset for AI Platform.
End of explanation
traindf = bq.query(query + " AND ABS(MOD(FARM_FINGERPRINT(title), 4)) > 0").to_dataframe()
evaldf = bq.query(query + " AND ABS(MOD(FARM_FINGERPRINT(title), 4)) = 0").to_dataframe()
Explanation: For ML training, we will need to split our dataset into training and evaluation datasets (and perhaps an independent test dataset if we are going to do model or feature selection based on the evaluation dataset).
A simple, repeatable way to do this is to use the hash of a well-distributed column in our data (See https://www.oreilly.com/learning/repeatable-sampling-of-data-sets-in-bigquery-for-machine-learning).
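The same splitting logic can be sanity-checked locally in pandas (a rough sketch; FARM_FINGERPRINT exists only in BigQuery, so an ordinary stable hash stands in for it here and the 75/25 proportions are approximate):
import hashlib
def in_train(title):
    # stable hash of the title; roughly 3 out of 4 values of hash % 4 are non-zero
    return int(hashlib.md5(title.encode('utf-8')).hexdigest(), 16) % 4 > 0
train_mask = df['title'].apply(in_train)
df[train_mask].shape, df[~train_mask].shape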
End of explanation
traindf['source'].value_counts()
evaldf['source'].value_counts()
Explanation: Below we can see that roughly 75% of the data is used for training, and 25% for evaluation.
We can also see that within each dataset, the classes are roughly balanced.
End of explanation
import os, shutil
DATADIR='data/txtcls'
shutil.rmtree(DATADIR, ignore_errors=True)
os.makedirs(DATADIR)
traindf.to_csv( os.path.join(DATADIR,'train.tsv'), header=False, index=False, encoding='utf-8', sep='\t')
evaldf.to_csv( os.path.join(DATADIR,'eval.tsv'), header=False, index=False, encoding='utf-8', sep='\t')
!head -3 data/txtcls/train.tsv
!wc -l data/txtcls/*.tsv
Explanation: Finally we will save our data, which is currently in-memory, to disk.
End of explanation
%%bash
pip install google-cloud-storage
rm -rf txtcls_trained
gcloud ai-platform local train \
--module-name=trainer.task \
--package-path=${PWD}/txtclsmodel/trainer \
-- \
--output_dir=${PWD}/txtcls_trained \
--train_data_path=${PWD}/data/txtcls/train.tsv \
--eval_data_path=${PWD}/data/txtcls/eval.tsv \
--num_epochs=0.1
Explanation: TensorFlow/Keras Code
Please explore the code in this <a href="txtclsmodel/trainer">directory</a>: model.py contains the TensorFlow model and task.py parses command line arguments and launches off the training job.
In particular look for the following:
tf.keras.preprocessing.text.Tokenizer.fit_on_texts() to generate a mapping from our word vocabulary to integers
tf.keras.preprocessing.text.Tokenizer.texts_to_sequences() to encode our sentences into a sequence of their respective word-integers
tf.keras.preprocessing.sequence.pad_sequences() to pad all sequences to be the same length
The embedding layer in the keras model takes care of one-hot encoding these integers and learning a dense embedding representation from them.
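A minimal sketch of that preprocessing pipeline on toy sentences (the vocabulary size here is an arbitrary illustration, not the value used in model.py; the sequence length of 50 matches the prediction code later in this notebook):
from tensorflow.keras.preprocessing import text, sequence
tok = text.Tokenizer(num_words=20000)                    # word -> integer mapping
tok.fit_on_texts(['show hn a cli tool', 'a plan for hoover dam'])
seqs = tok.texts_to_sequences(['show hn a cli tool'])    # -> [[2, 3, 1, 4, 5]]
padded = sequence.pad_sequences(seqs, maxlen=50)         # pad/truncate to a fixed length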
Finally we pass the embedded text representation through a CNN model pictured below
<img src=images/txtcls_model.png width=25%>
Run Locally (optional step)
Let's make sure the code compiles by running locally for a fraction of an epoch.
This may not work if you don't have all the packages installed locally for gcloud (such as in Colab).
This is an optional step; move on to training on the cloud.
End of explanation
%%bash
gsutil cp data/txtcls/*.tsv gs://${BUCKET}/txtcls/
%%bash
OUTDIR=gs://${BUCKET}/txtcls/trained_fromscratch
JOBNAME=txtcls_$(date -u +%y%m%d_%H%M%S)
gsutil -m rm -rf $OUTDIR
gcloud ai-platform jobs submit training $JOBNAME \
--region=$REGION \
--module-name=trainer.task \
--package-path=${PWD}/txtclsmodel/trainer \
--job-dir=$OUTDIR \
--scale-tier=BASIC_GPU \
--runtime-version=$TFVERSION \
-- \
--output_dir=$OUTDIR \
--train_data_path=gs://${BUCKET}/txtcls/train.tsv \
--eval_data_path=gs://${BUCKET}/txtcls/eval.tsv \
--num_epochs=5
Explanation: Train on the Cloud
Let's first copy our training data to the cloud:
End of explanation
!gcloud ai-platform jobs describe txtcls_190209_224828
Explanation: Change the job name appropriately. View the job in the console, and wait until the job is complete.
End of explanation
%%bash
gsutil ls gs://${BUCKET}/txtcls/trained_fromscratch/export/exporter/
Explanation: Results
What accuracy did you get? You should see around 80%.
Deploy trained model
Once your training completes you will see your exported models in the output directory specified in Google Cloud Storage.
You should see one model for each training checkpoint (default is every 1000 steps).
End of explanation
%%bash
MODEL_NAME="txtcls"
MODEL_VERSION="v1_fromscratch"
MODEL_LOCATION=$(gsutil ls gs://${BUCKET}/txtcls/trained_fromscratch/export/exporter/ | tail -1)
#gcloud ai-platform versions delete ${MODEL_VERSION} --model ${MODEL_NAME} --quiet
#gcloud ai-platform models delete ${MODEL_NAME}
gcloud ai-platform models create ${MODEL_NAME} --regions $REGION
gcloud ai-platform versions create ${MODEL_VERSION} --model ${MODEL_NAME} --origin ${MODEL_LOCATION} --runtime-version=$TFVERSION
Explanation: We will take the last export and deploy it as a REST API using Google AI Platform
End of explanation
techcrunch=[
'Uber shuts down self-driving trucks unit',
'Grover raises €37M Series A to offer latest tech products as a subscription',
'Tech companies can now bid on the Pentagon’s $10B cloud contract'
]
nytimes=[
'‘Lopping,’ ‘Tips’ and the ‘Z-List’: Bias Lawsuit Explores Harvard’s Admissions',
'A $3B Plan to Turn Hoover Dam into a Giant Battery',
'A MeToo Reckoning in China’s Workplace Amid Wave of Accusations'
]
github=[
'Show HN: Moon – 3kb JavaScript UI compiler',
'Show HN: Hello, a CLI tool for managing social media',
'Firefox Nightly added support for time-travel debugging'
]
Explanation: Get Predictions
Here are some actual hacker news headlines gathered from July 2018. These titles were not part of the training or evaluation datasets.
End of explanation
import pickle
from tensorflow.python.keras.preprocessing import sequence
from googleapiclient import discovery
from oauth2client.client import GoogleCredentials
import json
requests = techcrunch+nytimes+github
# Tokenize and pad sentences using same mapping used in the deployed model
tokenizer = pickle.load( open( "txtclsmodel/tokenizer.pickled", "rb" ) )
requests_tokenized = tokenizer.texts_to_sequences(requests)
requests_tokenized = sequence.pad_sequences(requests_tokenized,maxlen=50)
# JSON format the requests
request_data = {'instances':requests_tokenized.tolist()}
# Authenticate and call CMLE prediction API
credentials = GoogleCredentials.get_application_default()
api = discovery.build('ml', 'v1', credentials=credentials,
discoveryServiceUrl='https://storage.googleapis.com/cloud-ml/discovery/ml_v1_discovery.json')
parent = 'projects/%s/models/%s' % (PROJECT, 'txtcls') #version is not specified so uses default
response = api.projects().predict(body=request_data, name=parent).execute()
# Format and print response
for i in range(len(requests)):
print('\n{}'.format(requests[i]))
print(' github : {}'.format(response['predictions'][i]['dense'][0]))
print(' nytimes : {}'.format(response['predictions'][i]['dense'][1]))
print(' techcrunch: {}'.format(response['predictions'][i]['dense'][2]))
Explanation: Our serving input function expects the already tokenized representations of the headlines, so we do that pre-processing in the code before calling the REST API.
Note: Ideally we would do these transformations in the tensorflow graph directly instead of relying on separate client pre-processing code (see: training-serving skew), however the pre-processing functions we're using are python functions so cannot be embedded in a tensorflow graph.
See the <a href="text_classification_native.ipynb">text_classification_native</a> notebook for a solution to this.
End of explanation
!gsutil cp gs://cloud-training-demos/courses/machine_learning/deepdive/09_sequence/text_classification/glove.6B.200d.txt gs://$BUCKET/txtcls/
Explanation: How many of your predictions were correct?
Rerun with Pre-trained Embedding
In the previous model we trained our word embedding from scratch. Often times we get better performance and/or converge faster by leveraging a pre-trained embedding. This is a similar concept to transfer learning during image classification.
We will use the popular GloVe embedding which is trained on Wikipedia as well as various news sources like the New York Times.
You can read more about Glove at the project homepage: https://nlp.stanford.edu/projects/glove/
You can download the embedding files directly from the stanford.edu site, but we've rehosted it in a GCS bucket for faster download speed.
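A rough sketch of how such a file is typically wired into a Keras Embedding layer (the real wiring lives in the trainer package; the local file path, vocabulary size and variable names below are illustrative assumptions):
import numpy as np
embeddings = {}
with open('glove.6B.200d.txt') as f:              # one word plus 200 floats per line
    for line in f:
        parts = line.split()
        embeddings[parts[0]] = np.asarray(parts[1:], dtype='float32')
vocab_size = 20000                                 # assumed; must match the tokenizer
matrix = np.zeros((vocab_size, 200))
for word, i in tokenizer.word_index.items():
    if i < vocab_size and word in embeddings:
        matrix[i] = embeddings[word]
# layer = tf.keras.layers.Embedding(vocab_size, 200, weights=[matrix], trainable=False)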
End of explanation |
2,589 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Load and preprocess the data.
Step1: Create train & test sets.
Step2: Define the cost function and how to compute the gradient.<br>
Both are needed for the subsequent optimization procedure.
Step3: Run a timed optimization and store the iteration values of the cost function (for later investigation).
Step4: It's always interesting to take a more detailed look at the optimization results.
Step5: Now compute the Root Mean Square Error on both the train and the test set and hopefully they are similar to each other.
Step6: Finally, let's have a more intuitive look at the predictions. | Python Code:
data_original = np.loadtxt('stanford_dl_ex/ex1/housing.data')
data = np.insert(data_original, 0, 1, axis=1)
np.random.shuffle(data)
Explanation: Load and preprocess the data.
End of explanation
train_X = data[:400, :-1]
train_y = data[:400, -1]
test_X = data[400:, :-1]
test_y = data[400:, -1]
m, n = train_X.shape
Explanation: Create train & test sets.
End of explanation
def cost_function(theta, X, y):
squared_errors = (X.dot(theta) - y) ** 2
J = 0.5 * squared_errors.sum()
return J
def gradient(theta, X, y):
errors = X.dot(theta) - y
return errors.dot(X)
Explanation: Define the cost function and how to compute the gradient.<br>
Both are needed for the subsequent optimization procedure.
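A quick finite-difference check is a handy way to confirm the analytic gradient before handing it to the optimizer (illustrative; it reuses the definitions above and an arbitrary random theta):
eps = 1e-6
theta0 = np.random.rand(n)
numeric = np.array([(cost_function(theta0 + eps * e, train_X, train_y) -
                     cost_function(theta0 - eps * e, train_X, train_y)) / (2 * eps)
                    for e in np.eye(n)])
# the difference should be tiny relative to the gradient's magnitude
print(np.max(np.abs(numeric - gradient(theta0, train_X, train_y))))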
End of explanation
J_history = []
t0 = time.time()
res = scipy.optimize.minimize(
fun=cost_function,
x0=np.random.rand(n),
args=(train_X, train_y),
method='bfgs',
jac=gradient,
options={'maxiter': 200, 'disp': True},
callback=lambda x: J_history.append(cost_function(x, train_X, train_y)),
)
t1 = time.time()
print('Optimization took {s} seconds'.format(s=t1 - t0))
optimal_theta = res.x
Explanation: Run a timed optimization and store the iteration values of the cost function (for later investigation).
End of explanation
plt.plot(J_history, marker='o')
plt.xlabel('Iterations')
plt.ylabel('J(theta)')
Explanation: It's always interesting to take a more detailed look at the optimization results.
End of explanation
for dataset, (X, y) in (
('train', (train_X, train_y)),
('test', (test_X, test_y)),
):
actual_prices = y
predicted_prices = X.dot(optimal_theta)
print(
'RMS {dataset} error: {error}'.format(
dataset=dataset,
error=np.sqrt(np.mean((predicted_prices - actual_prices) ** 2))
)
)
Explanation: Now compute the Root Mean Square Error on both the train and the test set and hopefully they are similar to each other.
End of explanation
plt.figure(figsize=(10, 8))
plt.scatter(np.arange(test_y.size), sorted(test_y), c='b', edgecolor='None', alpha=0.5, label='actual')
plt.scatter(np.arange(test_y.size), sorted(test_X.dot(optimal_theta)), c='g', edgecolor='None', alpha=0.5, label='predicted')
plt.legend(loc='upper left')
plt.ylabel('House price ($1000s)')
plt.xlabel('House #')
Explanation: Finally, let's have a more intuitive look at the predictions.
End of explanation |
2,590 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Preliminaries
In order to draw the network graphs in these examples (i.e. using r.draw()), you will need graphviz and pygraphviz installed. Please consult the Graphviz documentation for instructions on installing it on your platform. If you cannot install Graphviz and pygraphviz, you can still run the following examples, but the network diagrams will not be generated.
Also, due to limitations in pygraphviz, these examples can only be run in the Jupyter notebook, not the Tellurium notebook app.
Install pygraphviz (requires compilation)
Please run
<your-local-python-executable> -m pip install pygraphviz
from a terminal or command prompt to install pygraphviz. Then restart your kernel in this notebook (Language->Restart Running Kernel).
Troubleshooting Graphviz Installation
pygraphviz has known problems during installation on some platforms. On 64-bit Fedora Linux, we have been able to use the following command to install pygraphviz
Step1: Feedback oscillations
Step2: Bistable System
Example showing how to do multiple time course simulations, merging the data and plotting it onto one plotting surface. An alternative is to use setHold()
Model is a bistable system, simulations start with different initial conditions resulting in different steady states reached.
Step3: Events
Step4: Gene network
Step5: Stoichiometric matrix
Step6: Lorenz attractor
Example showing how to describe a model using ODES. Example implements the Lorenz attractor.
Step7: Time Course Parameter Scan
Do 5 simulations on a simple model; for each simulation the parameter k1 is changed. The script merges the data together and plots the merged array onto one plot.
Step8: Merge multiple simulations
Example of merging multiple simulations. In between simulations a parameter is changed.
Step9: Relaxation oscillator
Oscillator that uses positive and negative feedback. An example of a relaxation oscillator.
Step10: Scan hill coefficient
Negative Feedback model where we scan over the value of the Hill coefficient.
Step11: Compare simulations
Step12: Sinus injection
Example that shows how to inject a sine wave into the model and use events to switch it off and on.
Step13: Protein phosphorylation cycle
Simple protein phosphorylation cycle. Steady state concentration of the phosphorylated protein is plotted as a function of the cycle kinase. In addition, the plot is repeated for various values of Km. | Python Code:
import warnings
warnings.filterwarnings("ignore")
import tellurium as te
te.setDefaultPlottingEngine('matplotlib')
%matplotlib inline
# model Definition
r = te.loada ('''
#J1: S1 -> S2; Activator*kcat1*S1/(Km1+S1);
J1: S1 -> S2; SE2*kcat1*S1/(Km1+S1);
J2: S2 -> S1; Vm2*S2/(Km2+S2);
J3: T1 -> T2; S2*kcat3*T1/(Km3+T1);
J4: T2 -> T1; Vm4*T2/(Km4+T2);
J5: -> E2; Vf5/(Ks5+T2^h5);
J6: -> E3; Vf6*T2^h6/(Ks6+T2^h6);
#J7: -> E1;
J8: -> S; kcat8*E1
J9: E2 -> ; k9*E2;
J10:E3 -> ; k10*E3;
J11: S -> SE2; E2*kcat11*S/(Km11+S);
J12: S -> SE3; E3*kcat12*S/(Km12+S);
J13: SE2 -> ; SE2*kcat13;
J14: SE3 -> ; SE3*kcat14;
Km1 = 0.01; Km2 = 0.01; Km3 = 0.01; Km4 = 0.01; Km11 = 1; Km12 = 0.1;
S1 = 6; S2 =0.1; T1=6; T2 = 0.1;
SE2 = 0; SE3=0;
S=0;
E2 = 0; E3 = 0;
kcat1 = 0.1; kcat3 = 3; kcat8 =1; kcat11 = 1; kcat12 = 1; kcat13 = 0.1; kcat14=0.1;
E1 = 1;
k9 = 0.1; k10=0.1;
Vf6 = 1;
Vf5 = 3;
Vm2 = 0.1;
Vm4 = 2;
h6 = 2; h5=2;
Ks6 = 1; Ks5 = 1;
Activator = 0;
at (time > 100): Activator = 5;
''')
r.draw(width=300)
result = r.simulate (0, 300, 2000, ['time', 'J11', 'J12']);
r.plot(result);
Explanation: Preliminaries
In order to draw the network graphs in these examples (i.e. using r.draw()), you will need graphviz and pygraphviz installed. Please consult the Graphviz documentation for instructions on installing it on your platform. If you cannot install Graphviz and pygraphviz, you can still run the following examples, but the network diagrams will not be generated.
Also, due to limitations in pygraphviz, these examples can only be run in the Jupyter notebook, not the Tellurium notebook app.
Install pygraphviz (requires compilation)
Please run
<your-local-python-executable> -m pip install pygraphviz
from a terminal or command prompt to install pygraphviz. Then restart your kernel in this notebook (Language->Restart Running Kernel).
Troubleshooting Graphviz Installation
pygraphviz has known problems during installation on some platforms. On 64-bit Fedora Linux, we have been able to use the following command to install pygraphviz:
bash
/path/to/python3 -m pip install pygraphviz --install-option="--include-path=/usr/include/graphviz" --install-option="--library-path=/usr/lib64/graphviz/"
You may need to modify the library/include paths in the above command. Some Linux distributions put 64-bit libraries in /usr/lib instead of /usr/lib64, in which case the command becomes:
bash
/path/to/python3 -m pip install pygraphviz --install-option="--include-path=/usr/include/graphviz" --install-option="--library-path=/usr/lib/graphviz/"
Case Studies
Activator system
End of explanation
# http://tellurium.analogmachine.org/testing/
import tellurium as te
r = te.loada ('''
model feedback()
// Reactions:
J0: $X0 -> S1; (VM1 * (X0 - S1/Keq1))/(1 + X0 + S1 + S4^h);
J1: S1 -> S2; (10 * S1 - 2 * S2) / (1 + S1 + S2);
J2: S2 -> S3; (10 * S2 - 2 * S3) / (1 + S2 + S3);
J3: S3 -> S4; (10 * S3 - 2 * S4) / (1 + S3 + S4);
J4: S4 -> $X1; (V4 * S4) / (KS4 + S4);
// Species initializations:
S1 = 0; S2 = 0; S3 = 0;
S4 = 0; X0 = 10; X1 = 0;
// Variable initialization:
VM1 = 10; Keq1 = 10; h = 10; V4 = 2.5; KS4 = 0.5;
end''')
r.integrator.setValue('variable_step_size', True)
res = r.simulate(0, 40)
r.plot()
Explanation: Feedback oscillations
End of explanation
import tellurium as te
import numpy as np
r = te.loada ('''
$Xo -> S1; 1 + Xo*(32+(S1/0.75)^3.2)/(1 +(S1/4.3)^3.2);
S1 -> $X1; k1*S1;
Xo = 0.09; X1 = 0.0;
S1 = 0.5; k1 = 3.2;
''')
print(r.selections)
initValue = 0.05
m = r.simulate (0, 4, 100, selections=["time", "S1"])
for i in range (0,12):
r.reset()
r['[S1]'] = initValue
res = r.simulate (0, 4, 100, selections=["S1"])
m = np.concatenate([m, res], axis=1)
initValue += 1
te.plotArray(m, color="black", alpha=0.7, loc=None,
xlabel="time", ylabel="[S1]", title="Bistable system");
Explanation: Bistable System
Example showing how to do multiple time course simulations, merging the data and plotting it onto one plotting surface. An alternative is to use setHold()
Model is a bistable system, simulations start with different initial conditions resulting in different steady states reached.
End of explanation
import tellurium as te
import matplotlib.pyplot as plt
# Example showing use of events and how to set the y axis limits
r = te.loada ('''
$Xo -> S; Xo/(km + S^h);
S -> $w; k1*S;
# initialize
h = 1; # Hill coefficient
k1 = 1; km = 0.1;
S = 1.5; Xo = 2
at (time > 10): Xo = 5;
at (time > 20): Xo = 2;
''')
m1 = r.simulate (0, 30, 200, ['time', 'Xo', 'S'])
r.plot(ylim=(0,10))
Explanation: Events
End of explanation
import tellurium as te
import numpy
# Model describes a cascade of two genes. First gene is activated
# second gene is repressed. Uses events to change the input
# to the gene regulatory network
r = te.loada ('''
v1: -> P1; Vm1*I^4/(Km1 + I^4);
v2: P1 -> ; k1*P1;
v3: -> P2; Vm2/(Km2 + P1^4);
v4: P2 -> ; k2*P2;
at (time > 60): I = 10;
at (time > 100): I = 0.01;
Vm1 = 5; Vm2 = 6; Km1 = 0.5; Km2 = 0.4;
k1 = 0.1; k2 = 0.1;
I = 0.01;
''')
result = r.simulate (0, 200, 100)
r.plot()
Explanation: Gene network
End of explanation
import tellurium as te
# Example of using antimony to create a stoichiometry matrix
r = te.loada('''
J1: -> S1; v1;
J2: S1 -> S2; v2;
J3: S2 -> ; v3;
J4: S3 -> S1; v4;
J5: S3 -> S2; v5;
J6: -> S3; v6;
v1=1; v2=1; v3=1; v4=1; v5=1; v6=1;
''')
print(r.getFullStoichiometryMatrix())
r.draw()
Explanation: Stoichiometric matrix
End of explanation
import tellurium as te
r = te.loada ('''
x' = sigma*(y - x);
y' = x*(rho - z) - y;
z' = x*y - beta*z;
x = 0.96259; y = 2.07272; z = 18.65888;
sigma = 10; rho = 28; beta = 2.67;
''')
result = r.simulate (0, 20, 1000, ['time', 'x', 'y', 'z'])
r.plot()
Explanation: Lorenz attractor
Example showing how to describe a model using ODES. Example implements the Lorenz attractor.
End of explanation
import tellurium as te
import numpy as np
r = te.loada ('''
J1: $X0 -> S1; k1*X0;
J2: S1 -> $X1; k2*S1;
X0 = 1.0; S1 = 0.0; X1 = 0.0;
k1 = 0.4; k2 = 2.3;
''')
m = r.simulate (0, 4, 100, ["Time", "S1"])
for i in range (0,4):
r.k1 = r.k1 + 0.1
r.reset()
m = np.hstack([m, r.simulate(0, 4, 100, ['S1'])])
# use plotArray to plot merged data
te.plotArray(m)
pass
Explanation: Time Course Parameter Scan
Do 5 simulations on a simple model; for each simulation the parameter k1 is changed. The script merges the data together and plots the merged array onto one plot.
End of explanation
import tellurium as te
import numpy
r = te.loada ('''
# Model Definition
v1: $Xo -> S1; k1*Xo;
v2: S1 -> $w; k2*S1;
# Initialize constants
k1 = 1; k2 = 1; S1 = 15; Xo = 1;
''')
# Time course simulation
m1 = r.simulate (0, 15, 100, ["Time","S1"]);
r.k1 = r.k1 * 6;
m2 = r.simulate (15, 40, 100, ["Time","S1"]);
r.k1 = r.k1 / 6;
m3 = r.simulate (40, 60, 100, ["Time","S1"]);
m = numpy.vstack([m1, m2, m3])
p = te.plot(m[:,0], m[:,1], name='trace1')
Explanation: Merge multiple simulations
Example of merging multiple simulations. In between simulations a parameter is changed.
End of explanation
import tellurium as te
r = te.loada ('''
v1: $Xo -> S1; k1*Xo;
v2: S1 -> S2; k2*S1*S2^h/(10 + S2^h) + k3*S1;
v3: S2 -> $w; k4*S2;
# Initialize
h = 2; # Hill coefficient
k1 = 1; k2 = 2; Xo = 1;
k3 = 0.02; k4 = 1;
''')
result = r.simulate(0, 100, 100)
r.plot(result);
Explanation: Relaxation oscillator
Oscillator that uses positive and negative feedback. An example of a relaxation oscillator.
End of explanation
import tellurium as te
import numpy as np
r = te.loada ('''
// Reactions:
J0: $X0 => S1; (J0_VM1*(X0 - S1/J0_Keq1))/(1 + X0 + S1 + S4^J0_h);
J1: S1 => S2; (10*S1 - 2*S2)/(1 + S1 + S2);
J2: S2 => S3; (10*S2 - 2*S3)/(1 + S2 + S3);
J3: S3 => S4; (10*S3 - 2*S4)/(1 + S3 + S4);
J4: S4 => $X1; (J4_V4*S4)/(J4_KS4 + S4);
// Species initializations:
S1 = 0;
S2 = 0;
S3 = 0;
S4 = 0;
X0 = 10;
X1 = 0;
// Variable initializations:
J0_VM1 = 10;
J0_Keq1 = 10;
J0_h = 2;
J4_V4 = 2.5;
J4_KS4 = 0.5;
// Other declarations:
const J0_VM1, J0_Keq1, J0_h, J4_V4, J4_KS4;
''')
# time vector
result = r.simulate (0, 20, 201, ['time'])
h_values = [r.J0_h + k for k in range(0,8)]
for h in h_values:
r.reset()
r.J0_h = h
m = r.simulate(0, 20, 201, ['S1'])
result = numpy.hstack([result, m])
te.plotArray(result, labels=['h={}'.format(int(h)) for h in h_values])
pass
Explanation: Scan hill coefficient
Negative Feedback model where we scan over the value of the Hill coefficient.
End of explanation
import tellurium as te
r = te.loada ('''
v1: $Xo -> S1; k1*Xo;
v2: S1 -> $w; k2*S1;
//initialize. Deterministic process.
k1 = 1; k2 = 1; S1 = 20; Xo = 1;
''')
m1 = r.simulate (0,20,100);
# Stochastic process
r.resetToOrigin()
r.setSeed(1234)
m2 = r.gillespie(0, 20, 100, ['time', 'S1'])
# plot all the results together
te.plotArray(m1, color="black", show=False)
te.plotArray(m2, color="blue");
Explanation: Compare simulations
End of explanation
import tellurium as te
import numpy
r = te.loada ('''
# Inject sin wave into model
Xo := sin (time*0.5)*switch + 2;
# Model Definition
v1: $Xo -> S1; k1*Xo;
v2: S1 -> S2; k2*S1;
v3: S2 -> $X1; k3*S2;
at (time > 40): switch = 1;
at (time > 80): switch = 0.5;
# Initialize constants
k1 = 1; k2 = 1; k3 = 3; S1 = 3;
S2 = 0;
switch = 0;
''')
result = r.simulate (0, 100, 200, ['time', 'S1', 'S2'])
r.plot(result);
Explanation: Sinus injection
Example that shows how to inject a sine wave into the model and use events to switch it off and on.
End of explanation
import tellurium as te
import numpy as np
r = te.loada ('''
S1 -> S2; k1*S1/(Km1 + S1);
S2 -> S1; k2*S2/(Km2 + S2);
k1 = 0.1; k2 = 0.4; S1 = 10; S2 = 0;
Km1 = 0.1; Km2 = 0.1;
''')
for i in range (1,8):
numbers = np.linspace (0, 1.2, 200)
result = np.empty ([0,2])
for value in numbers:
r.k1 = value
r.steadyState()
row = np.array ([value, r.S2])
result = np.vstack ((result, row))
te.plotArray(result, show=False, labels=['Km1={}'.format(r.Km1)],
resetColorCycle=False,
xlabel='k1', ylabel="S2",
title="Steady State S2 for different Km1 & Km2",
ylim=[-0.1, 11], grid=True)
r.k1 = 0.1
r.Km1 = r.Km1 + 0.5;
r.Km2 = r.Km2 + 0.5;
Explanation: Protein phosphorylation cycle
Simple protein phosphorylation cycle. Steady state concentration of the phosphorylated protein is plotted as a function of the cycle kinase. In addition, the plot is repeated for various values of Km.
End of explanation |
2,591 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Practical Bin Packing
This was motivated by a desire to buy just enough materials to get the job done. In this case the job was a chicken coop I was building. I can buy lumber in standard lengths of 12, 10, 8 or 6 feet at my local building supply store. So what is the lowest cost combination of stock boards that fills the need?
In my research I found lots of examples of bin packing with a single size of bin but nothing that fit my situation and limited appetite for in depth study.
This code uses a brute force approach to the problem. It enumerates all permutations, discards any that don't meet the bare minimum length, then checks each remaining permutation for feasibility. The feasible options are sorted to find the minimum cost option.
In the example below, I first define the stock lengths and their rates. Then I list the parts needed for the project. The part lengths are listed as integers but could just as well have been floats.
Step1: Then I use a method from Python's itertools module to generate the cartesian product (permutations with repetition). The input to the itertools.product function includes a list of choices for each item. Depending on the size of your problem you might need to extend the list to find the optimal solution.
Step2: I've printed a few samples of candidates that meet the minimum length criteria. I could also have thrown out candidates that have way too much length since they aren't likely to be cost effective. Each candidate is a list of quantities corresponding to stock sizes. For the example, if a candidate equals [0, 0, 4], it has no 12' lengths, no 10' lengths and four 8' lengths.
The code uses a method called bestFit that tries to fit the parts into a set of bins with sizes c. For each piece, it tries to find the first bin with enough room to accommodate the piece. This is called a "first fit" algorithm. If room for any piece in the set of parts (weight) cannot be found it returns valid = False.
Step3: Then I iterate through each of the remaining candidates using the bestFit method. I merge all the lists into a pandas DataFrame and use a pandas function to find the lowest cost valid option.
Step4: So the lowest cost option is two 12' pieces and one 8' piece. How should I cut the pieces from the stock?
Step5: It may be useful to compare the costs of the top options. In this case for just one more dollar, I can buy three 12' pieces of stock and have some left over for the next project. | Python Code:
import itertools as it
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
stock = np.array([144, 120, 96]) # 12', 10' and 8' lengths
rates = np.array([9.17, 8.51, 7.52 ]) # costs for each length (1x4)
parts = [84, 72, 54, 36, 30, 30, 24, 24] # list of pieces needed (1x4)
minlength = sum(parts)
Explanation: Practical Bin Packing
This was motivated by a desire to buy just enough materials to get the job done. In this case the job was a chicken coop I was building. I can buy lumber in standard lengths of 12, 10, 8 or 6 feet at my local building supply store. So what is the lowest cost combination of stock boards that fills the need?
In my research I found lots of examples of bin packing with a single size of bin but nothing that fit my situation and limited appetite for in depth study.
This code uses a brute force approach to the problem. It enumerates all permutations, discards any that don't meet the bare minimum length, then checks each remaining permutation for feasibility. The feasible options are sorted to find the minimum cost option.
In the example below, I first define the stock lengths and their rates. Then I list the parts needed for the project. The part lengths are listed as integers but could just as well have been floats.
End of explanation
combos = it.product([0,1,2,3,4,5,6], repeat=len(stock))
candidates = []
cost = []
valid = []
# Discard combos that don't have the minimum length required
for item in combos:
x = list(item)
length = np.dot(x,stock)
if length >= minlength:
candidates.append(x)
cost.append(np.dot(x,rates))
valid.append(False)
print [candidates[i] for i in [0, 20, 40, 60]]
Explanation: Then I use a method from Python's itertools module to generate the cartesian product (permutations with repetition). The input to the itertools.product function includes a list of choices for each item. Depending on the size of your problem you might need to extend the list to find the optimal solution.
End of explanation
def bestFit(weight, combo, c):
'''
combo = combination of stock sizes to try
weight: items to be placed into the bins (or cut from stock)
c: bin (stock) sizes, list
returns
placed: boolean indicating sucessful placement
bin_rem: a list of unused space in each bin
bin_Usage: a list of lists that shows how the items were allocated to bins
'''
bins = []
for i in range(len(combo)):
for k in range(combo[i]):
bins.append(c[i])
n = len(bins) # number of bins
m = len(weight)
binUsage = [[]*i for i in range(n)] # to record how items are allocated to bins
for b in range(n):
binUsage[b] = [bins[b]]
bin_rem = bins[:] # list to store remaining space in bins
# Place items one by one
for ii in range(m): # for each piece/item/weight
placed = False
# Find the first bin that can accommodate weight[ii]
for j in range(n): # for each bin
if bin_rem[j] >= weight[ii]:
bin_rem[j] -= weight[ii]
binUsage[j].append(weight[ii])
placed = True
break
if not placed:
return False, bin_rem, []
return True, bin_rem, binUsage
Explanation: I've printed a few samples of candidates that meet the minimum length criteria. I could also have thrown out candidates that have way too much length since they aren't likely to be cost effective. Each candidate is a list of quantities corresponding to stock sizes. For the example, if a candidate equals [0, 0, 4], it has no 12' lengths, no 10' lengths and four 8' lengths.
The code uses a method called bestFit that tries to fit the parts into a set of bins with sizes c. For each piece, it tries to find the first bin with enough room to accommodate the piece. This is called a "first fit" algorithm. If room for any piece in the set of parts (weight) cannot be found it returns valid = False.
End of explanation
usage = []
for i in range(len(candidates)):
#try to fit parts into each set of bins
usage.append([])
valid[i], bin_rem, usage[i] = bestFit(parts, candidates[i], stock)
results = pd.DataFrame({'candidate':candidates, 'cost':cost, 'valid':valid, 'usage':usage})
lowest_cost_idx = results[results.valid == True].cost.idxmin()
lowest_cost = results.iloc[lowest_cost_idx]
c = lowest_cost.candidate
print 'Lowest Cost Option\nSize Qty'
for i in range(len(c)):
if c[i]:
print('{:4d} {}'.format(stock[i], c[i]))
print('Cost: ${}'.format(lowest_cost.cost))
Explanation: Then I iterate through each of the remaining candidates using the bestFit method. I merge all the lists into a pandas DataFrame and use a pandas function to find the lowest cost valid option.
End of explanation
print('Stock Size Allocation')
for i in range(len(lowest_cost.usage[:])):
print('{:10d} {}'.format(lowest_cost.usage[i][0], lowest_cost.usage[i][1:]))
Explanation: So the lowest cost option is two 12' pieces and one 8' piece. How should I cut the pieces from the stock?
End of explanation
results[results.valid != False].sort_values('cost').head(10).plot(x='candidate', y='cost', kind='bar')
plt.ylabel('Cost $')
plt.tight_layout()
plt.show()
Explanation: It may be useful to compare the costs of the top options. In this case for just one more dollar, I can buy three 12' pieces of stock and have some left over for the next project.
End of explanation |
2,592 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Automating Multiple Single-Objective Spatial Optimization Models for Efficiency and Reproducibility
James D. Gaboardi | Association of American Geographers 2016
Florida State University | Department of Geography
Outline
Background & Information
Models
PMP
PCP
CentDian
PMCP Method
Data & Processing
Solutions
Visualizations
Future Work
COIN-OR
Initial Imports
Step1: Background & Information
$\Rightarrow$ Automating solutions for p-median and p-center problems with p={n(p)} facilities
$\Rightarrow$ Compare coverage and costs numerically and visually
PySAL 1.11.0
Python Spatial Analysis Library
[https://www.pysal.readthedocs.org]
Step2: Data & Processing
Process Imports
Step3: Define the function to calculate the cost matrix and convert to miles
Step4: Define the function to solve the p-Median + p-Center Problems concurrently
Step5: Reproject the street network with GeoPandas
Step6: Instantiate Network and read in WAVERLY.shp
Step7: Create Buffer of 200 meters
Step8: Plot Buffers of Individual Streets
Step9: Create a Unary Union of the individual street buffers
Step10: Plot the unary union buffer
Step11: Create 1000 random points within the bounds of WAVERLY.shp
Step12: Plot the 1000 random
Step13: Create GeoPandas DF of the random points within the Unary Buffer
Step14: Plot the points within the Unary Buffer
Step15: Add only intersecting records to a list
Step16: Keep the first 100 for clients and the last 15 for service facilities
Step17: Plot the Unary Union, Simulated Clients, Simulated Service, and Streets
Step18: Instantiate non-solution graphs to be drawn
Step19: Instantiate and fill Client and Service point dictionaries
Step20: Simulate weights for Client Demand
Step21: Instantiate Client .shp
Step22: Instantiate Service .shp
Step23: Snap Client and Service points to the network
Step24: Create lat/lon lists of snapped service coords
Step25: Instantiate snapped Service .shp
Step26: Call Client to Service Matrix Function
Step27: Create Lists to fill index and columns of GeoPandas Data Frames
Step28: Instantiate GeoPandas Dataframes
Step29: Create PMP, PCP, and CentDian solution graphs
Step30: Instantiate lists for objective values and average values of all models
Step31: Solutions
Solve all
Step32: Calculate and record percentage decrease
Step33: Data Frames adjust
Step34: Create Graphs of the PMCP results
Step35: Visualizations
Draw PMP figure [p=1] large $ \rightarrow $ small [p=15]
Step36: Pandas PMP Data Frame
Step38: Bokeh PMP [p vs. cost] trade off
Step39: Draw PCP figure [p=1] large $ \rightarrow $ small [p=15]
Step40: Pandas PCP Data Frame
Step42: Bokeh PCP [p vs. cost] trade off
Step43: Draw CentDian figure [p=1] large $ \rightarrow $ small [p=15]
Step44: Pandas CentDian Data Frame
Step46: Bokeh CentDian [p vs. cost] trade off
Step47: Draw PMCP figure
Step48: Pandas PMCP Data Frame
Step49: Bokeh PMP & PCP [p vs. cost] comparison
Step50: Convert Service Facilities Back to Longitude/Latitude for Google Maps Plots
Step51: Create Lists of Selected Locations for Google Maps Plot
Step53: Google Maps Plot
Step54: Future Work & Vision
$\Longrightarrow$ Develop a python library for bringing together in one package spatial analysis & spatial optimization [spanoptpy] potentially incorporating
Step55: email $\Longrightarrow$ jgaboardi@fsu.edu
GitHub $\Longrightarrow$ https
Step56: System Specs | Python Code:
import IPython.display as IPd
# Local path on user's machine
path = '/Users/jgaboardi/AAG_16/Data/'
Explanation: Automating Multiple Single-Objective Spatial Optimization Models for Efficiency and Reproducibility
James D. Gaboardi | Association of American Geographers 2016
Florida State University | Department of Geography
Outline
Background & Information
Models
PMP
PCP
CentDian
PMCP Method
Data & Processing
Solutions
Visualizations
Future Work
COIN-OR
Initial Imports
End of explanation
# Conceptual Model Workflow
workflow = IPd.Image(path+'/AAG_16.png')
workflow
Explanation: Background & Information
$\Rightarrow$ Automating solutions for p-median and p-center problems with p={n(p)} facilities
$\Rightarrow$ Compare coverage and costs numerically and visually
PySAL 1.11.0
Python Spatial Analysis Library
[https://www.pysal.readthedocs.org]
Sergio Rey at Arizona State University leads the PySAL project. [https://geoplan.asu.edu/people/sergio-j-rey]
"PySAL is an open source library of spatial analysis functions written in Python intended to support the development of high level applications. PySAL is open source under the BSD License." [https://pysal.readthedocs.org/en/latest/]
I will only be demonstrating a portion of the functionality in PySAL.Network, but there are many other classes and functions for statistical spatial analysis within PySAL.
PySAL.Network
PySAL.Network was principally developed by Jay Laura at Arizona State University and the United States Geological Survey. [https://geoplan.asu.edu/people/jay-laura]
Gurobi 6.5.0
Relatively new company founded by optimization experts formerly at key positions with CPLEX.
[http://www.gurobi.com] [http://www.gurobi.com/company/about-gurobi]
gurobipy
Python wrapper for Gurobi
NumPy 1.10.4
"NumPy is the fundamental package for scientific computing with Python." [http://www.numpy.org]
Shapely 1.5.13
"Python package for manipulation and analysis of geometric objects in the Cartesian plane." [https://github.com/Toblerity/Shapely]
GeoPandas 0.1.1
"GeoPandas is an open source project to make working with geospatial data in python easier." [http://geopandas.org]
Pandas 0.17.1
"pandas is an open source, BSD-licensed library providing high-performance, easy-to-use data structures and data analysis tools..." [http://pandas.pydata.org]
Bokeh 0.11.1
"Bokeh is a Python interactive visualization library that targets modern web browsers for presentation." [http://bokeh.pydata.org/en/latest/]
Models
The p-Median Problem
The objective of the p-median problem, also known as the minisum problem or simply the PMP, is to minimize the total weighted cost while siting [p] facilities to serve all demand/client nodes. It was originally proposed by Hakimi (1964) and is well-studied in Geography, Operations Research, Mathematics, etc. In this particular project the network-based vertex PMP is used, meaning the cost is calculated on a road network and solutions are determined from discrete candidate locations. Cost is generally defined as either travel time or distance, and it is the latter in this project. Population (demand) is utilized as a weight at each client node. The average cost can be calculated by dividing the minimized total cost by the total demand.
For more information refer to references section.
Minimize
$\displaystyle {Z} = \sum_{i \in n}\sum_{j \in m} a_i c_{ij} x_{ij}$
Subject to
$\displaystyle\sum_{j\in m} x_{ij} = 1 ,$ $\forall i \in n$
$\displaystyle\sum_{j \in m} y_j = p$
$x_{ij} - y_j \geq 0,$ $\forall i \in n, j \in m$
$x_{ij}, y_j \in \{0,1\}$ $\forall i \in n, j \in m$
where
− $i$ = a specific origin
− $j$ = a specific destination
− $n$ = the set of origins
− $m$ = the set of destinations
− $a_i$ = weight at each node
− $c_{ij}$ = travel costs between nodes
− $x_{ij}$ = the decision variable at each node in the matrix
− $y_j$ = nodes chosen as service facilities
− $p$ = the number of facilities to be sited
Adapted from:
- Daskin, M. S. 1995. Network and Discrete Location: Models, Algorithms, and Applications. Hoboken, NJ, USA: John Wiley & Sons, Inc.
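The weighted-cost term $a_i c_{ij}$ that the objective sums over is easy to see with toy numbers (illustrative only; the real cost matrix comes from the network distance calculation built later in this notebook):
import numpy as np
Ai  = np.array([[10.], [25.], [5.]])        # demand weight a_i for three clients (assumed values)
Cij = np.array([[1.2, 0.4],                 # network miles from client i to candidate facility j
                [0.7, 1.5],
                [2.0, 0.9]])
Sij = Ai * Cij                              # a_i * c_ij -- the PMP objective coefficients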
The p-Center Problem
The objective of the p-center problem, also known as the minimax problem or simply the PCP, is to minimize the worst-case cost (W) while siting [p] facilities to serve all demand/client nodes. It was originally proposed by Minieka (1970) and, like the PMP, is well-studied in Geography, Operations Research, Mathematics, etc. In this particular project the network-based vertex PCP is used, meaning the cost is calculated on a road network and solutions are determined from discrete candidate locations. Cost is generally defined as either travel time or distance, and it is the latter in this project.
For more information refer to references section.
Minimize
$W$
Subject to
$\displaystyle\sum_{j\in m} x_{ij} = 1,$ $\forall i \in n$
$\displaystyle\sum_{j \in m} y_j = p$
$x_{ij} - y_j \geq 0,$ $\forall i\in n, j \in m$
$\displaystyle W \geq \sum_{j \in m} c_{ij} x_{ij}$ $\forall i \in n$
$x_{ij}, y_j \in \{0,1\}$ $\forall i \in n, j \in m$
where
− $W$ = the worst case cost between a client and a service node
− $i$ = a specific origin
− $j$ = a specific destination
− $n$ = the set of origins
− $m$ = the set of destinations
− $a_i$ = weight at each node
− $c_{ij}$ = travel costs between nodes
− $x_{ij}$ = the decision variable at each node in the matrix
− $y_j$ = nodes chosen as service facilities
− $p$ = the number of facilities to be sited
Adapted from:
- Daskin, M. S. 1995. Network and Discrete Location: Models, Algorithms, and Applications. Hoboken, NJ, USA: John Wiley & Sons, Inc.
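For intuition, with any fixed assignment the smallest feasible $W$ is simply the largest assigned distance, which is the quantity the model pushes down by re-choosing the assignment (toy numbers, illustrative only):
assigned_miles = [0.4, 0.7, 1.9]            # c_ij picked out by x_ij = 1 for each client i
W = max(assigned_miles)                     # 1.9 -- the worst case the PCP minimizes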
The p-CentDian Problem
The p-CentDian Problem was first described by Halpern (1976). It is a combination of the p-median problem and the p-center problem with a dual objective of minimizing both the worst-case cost and the total travel distance. The objective used for the model in this demonstration is the average of (1) the p-center objective function and (2) the p-median objective function divided by the total demand. An alternative formulation is the p-$\lambda$-CentDian Problem, where ( $\lambda$ ) represents the weight attributed to the p-center objective function and (1 - $\lambda$) represents the weight attributed to the p-median objective function, which was proposed by Pérez-Brito, et al (1997).
For more information refer to references section.
Minimize
$\displaystyle {W + {Z \over \sum_{i \in n} a_i} \over 2}$
Subject to
$\displaystyle\sum_{j\in m} x_{ij} = 1,$ $\forall i \in n$
$\displaystyle\sum_{j \in m} y_j = p$
$x_{ij} - y_j \geq 0,$ $\forall i\in n, j \in m$
$\displaystyle W \geq \sum_{j \in m} c_{ij} x_{ij}$ $\forall i \in n$
$x_{ij}, y_j \in \{0,1\}$ $\forall i \in n, j \in m$
where
− $W$ = the maximum travel cost between client and service nodes
− $Z$ = the minimized total travel cost $\big(\sum_{i \in n}\sum_{j \in m} a_i c_{ij} x_{ij}\big)$
− $i$ = a specific origin
− $j$ = a specific destination
− $n$ = the set of origins
− $m$ = the set of destinations
− $a_i$ = weight at each node
− $c_{ij}$ = travel costs between nodes
− $x_{ij}$ = the decision variable at each node in the matrix
− $y_j$ = nodes chosen as service facilities
− $p$ = the number of facilities to be sited
Adapted from:
Halpern, J. 1976. The Location of a Center-Median Convex Combination on an Undirected Tree*. Journal of Regional Science 16 (2):237–245
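Reusing the toy numbers from the PMP sketch above (demand weights are assumed), the CentDian objective is just the average of the worst-case cost and the demand-weighted mean cost:
Ai  = [10., 25., 5.]                        # assumed demand weights
Z   = 10*0.4 + 25*0.7 + 5*1.9               # weighted total miles for the toy assignment
W   = 1.9
obj = (W + Z / sum(Ai)) / 2                 # (worst case + average cost per unit demand) / 2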
The PMCP Method
$\Rightarrow$ solve the p-median problem and the p-center problem concurrently to determine whether optimal locations can be sited with equivalent [p]
$\Rightarrow$ "poor man's" p-CentDian Problem?
automated & efficient decision making for those who don't have access to multiple-objective capable solvers
what it is:
a comparison to determine equivalent site selection across the single-objective solutions (sketched in code below)
probably best used with low cost sites
an opportunity for finding optimal solutions without sacrificing either efficiency or equity
what it is not:
an optimization solution with multiple objective functions
capable of a true 'best solution' trade-off between efficiency and equity
guaranteed to find identical solutions
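The comparison at the heart of the method is nothing more than checking whether the single-objective models picked the same facilities for a given [p] (a minimal sketch with made-up facility labels; the full bookkeeping lives in the Gurobi_PMCP function below):
selected_M = {'y3', 'y11'}                  # facilities chosen by the p-median model
selected_C = {'y3', 'y11'}                  # facilities chosen by the p-center model
if selected_M == selected_C:
    pass                                    # record this [p] as a PMCP solution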
Workflow
End of explanation
import pysal as ps
import geopandas as gpd
import numpy as np
import networkx as nx
import shapefile as shp
from shapely.geometry import Point
import shapely
from collections import OrderedDict
import pandas as pd
import qgrid
qgrid.nbinstall(overwrite=True) # copies javascript dependencies to your /nbextensions folder
qgrid.set_defaults(remote_js=True)
import gurobipy as gbp
import time
from bokeh.plotting import figure, show, ColumnDataSource
from bokeh.io import output_notebook
from bokeh.models import (HoverTool, BoxAnnotation, GeoJSONDataSource,
GMapPlot, GMapOptions, ColumnDataSource, Circle,
DataRange1d, PanTool, WheelZoomTool, BoxSelectTool)
import utm
from cylp.cy import CyCbcModel, CyClpSimplex
%pylab inline
figsize(15,15)
Explanation: Data & Processing
Process Imports
End of explanation
def c_s_matrix(): # Define Client to Service Matrix Function
global All_Dist_MILES # in meters
All_Neigh_Dist = ntw.allneighbordistances(
sourcepattern=ntw.pointpatterns['Rand_Points_CLIENT'],
destpattern=ntw.pointpatterns['Rand_Points_SERVICE'])
All_Dist_MILES = All_Neigh_Dist * 0.000621371 # to miles
Explanation: Define the function to calculate the cost matrix and convert to miles
End of explanation
def Gurobi_PMCP(sites, Ai, AiSum, All_Dist_Miles):
# Define Global Variables
global pydf_M
global selected_M
global NEW_Records_PMP
global VAL_PMP
global AVG_PMP
global pydf_C
global selected_C
global NEW_Records_PCP
global VAL_PCP
global pydf_CentDian
global selected_CentDian
global NEW_Records_Pcentdian
global VAL_CentDian
global pydf_MC
global VAL_PMCP
global p_dens
for p in range(1, sites+1):
# DATA
# [p] --> sites
# Demand --> Ai
# Demand Sum --> AiSum
# Travel Costs
Cij = All_Dist_MILES
# Weighted Costs
Sij = Ai * Cij
# Total Client and Service nodes
client_nodes = range(len(Sij))
service_nodes = range(len(Sij[0]))
##################################################################
# PMP
t1_PMP = time.time()
# Create Model, Add Variables, & Update Model
# Instantiate Model
mPMP = gbp.Model(' -- p-Median -- ')
# Turn off Gurobi's output
mPMP.setParam('OutputFlag',False)
# Add Client Decision Variables (iXj)
client_var = []
for orig in client_nodes:
client_var.append([])
for dest in service_nodes:
client_var[orig].append(mPMP.addVar(vtype=gbp.GRB.BINARY,
lb=0,
ub=1,
obj=Sij[orig][dest],
name='x'+str(orig+1)+'_'+str(dest+1)))
# Add Service Decision Variables (j)
serv_var = []
for dest in service_nodes:
serv_var.append([])
serv_var[dest].append(mPMP.addVar(vtype=gbp.GRB.BINARY,
lb=0,
ub=1,
name='y'+str(dest+1)))
# Update the model
mPMP.update()
# 3. Set Objective Function
mPMP.setObjective(gbp.quicksum(Sij[orig][dest]*client_var[orig][dest]
for orig in client_nodes for dest in service_nodes),
gbp.GRB.MINIMIZE)
# 4. Add Constraints
# Assignment Constraints
for orig in client_nodes:
mPMP.addConstr(gbp.quicksum(client_var[orig][dest]
for dest in service_nodes) == 1)
# Opening Constraints
for orig in service_nodes:
for dest in client_nodes:
mPMP.addConstr((serv_var[orig][0] - client_var[dest][orig] >= 0))
# Facility Constraint
mPMP.addConstr(gbp.quicksum(serv_var[dest][0] for dest in service_nodes) == p)
# 5. Optimize and Print Results
# Solve
mPMP.optimize()
# Write LP
mPMP.write(path+'LP_Files/PMP'+str(p)+'.lp')
t2_PMP = time.time()-t1_PMP
# Record and Display Results
print '\n*************************************************************************'
selected_M = OrderedDict()
dbf1 = ps.open(path+'Snapped/SERVICE_Snapped.dbf')
NEW_Records_PMP = []
for v in mPMP.getVars():
if 'x' in v.VarName:
pass
elif v.x > 0:
var = '%s' % v.VarName
selected_M[var]=(u"\u2588")
for i in range(dbf1.n_records):
if var in dbf1.read_record(i):
x = dbf1.read_record(i)
NEW_Records_PMP.append(x)
else:
pass
print ' | ', var
pydf_M = pydf_M.append(selected_M, ignore_index=True)
# Instantiate Shapefile
SHP_Median = shp.Writer(shp.POINT)
# Add Points
for idy,idx,x,y in NEW_Records_PMP:
SHP_Median.point(float(x), float(y))
# Add Fields
SHP_Median.field('y_ID')
SHP_Median.field('x_ID')
SHP_Median.field('LAT')
SHP_Median.field('LON')
# Add Records
for idy,idx,x,y in NEW_Records_PMP:
SHP_Median.record(idy,idx,x,y)
# Save Shapefile
SHP_Median.save(path+'Results/Selected_Locations_Pmedian'+str(p)+'.shp')
print ' | Selected Facility Locations -------------- ^^^^ '
print ' | Candidate Facilities [p] ----------------- ', len(selected_M)
val_m = mPMP.objVal
VAL_PMP.append(round(val_m, 3))
print ' | Objective Value (miles) ------------------ ', val_m
avg_m = float(mPMP.objVal)/float(AiSum)
AVG_PMP.append(round(avg_m, 3))
print ' | Avg. Value / Client (miles) -------------- ', avg_m
print ' | Real Time to Optimize (sec.) ------------- ', t2_PMP
print '*************************************************************************'
print ' -- The p-Median Problem -- '
print ' [p] = ', str(p), '\n\n'
##################################################################
# PCP
t1_PCP = time.time()
# Instantiate P-Center Model
mPCP = gbp.Model(' -- p-Center -- ')
# Add Client Decision Variables (iXj)
client_var_PCP = []
for orig in client_nodes:
client_var_PCP.append([])
for dest in service_nodes:
client_var_PCP[orig].append(mPCP.addVar(vtype=gbp.GRB.BINARY,
lb=0,
ub=1,
obj=Cij[orig][dest],
name='x'+str(orig+1)+'_'+str(dest+1)))
# Add Service Decision Variables (j)
serv_var_PCP = []
for dest in service_nodes:
serv_var_PCP.append([])
serv_var_PCP[dest].append(mPCP.addVar(vtype=gbp.GRB.BINARY,
lb=0,
ub=1,
name='y'+str(dest+1)))
# Add the Maximum travel cost variable
W = mPCP.addVar(vtype=gbp.GRB.CONTINUOUS,
lb=0.,
name='W')
# Update the model
mPCP.update()
# 3. Set Objective Function
mPCP.setObjective(W, gbp.GRB.MINIMIZE)
# 4. Add Constraints
# Assignment Constraints
for orig in client_nodes:
mPCP.addConstr(gbp.quicksum(client_var_PCP[orig][dest]
for dest in service_nodes) == 1)
# Opening Constraints
for orig in service_nodes:
for dest in client_nodes:
mPCP.addConstr((serv_var_PCP[orig][0] - client_var_PCP[dest][orig] >= 0))
# Add Maximum travel cost constraints
for orig in client_nodes:
mPCP.addConstr(gbp.quicksum(Cij[orig][dest]*client_var_PCP[orig][dest]
for dest in service_nodes) - W <= 0)
# Facility Constraint
mPCP.addConstr(gbp.quicksum(serv_var_PCP[dest][0] for dest in service_nodes) == p)
# 5. Optimize and Print Results
# Solve
mPCP.optimize()
# Write LP
mPCP.write(path+'LP_Files/PCP'+str(p)+'.lp')
t2_PCP = time.time()-t1_PCP
# Record and Display Results
print '\n*************************************************************************'
selected_C = OrderedDict()
dbf1 = ps.open(path+'Snapped/SERVICE_Snapped.dbf')
NEW_Records_PCP = []
for v in mPCP.getVars():
if 'x' in v.VarName:
pass
elif 'W' in v.VarName:
pass
elif v.x > 0:
var = '%s' % v.VarName
selected_C[var]=(u"\u2588")
for i in range(dbf1.n_records):
if var in dbf1.read_record(i):
x = dbf1.read_record(i)
NEW_Records_PCP.append(x)
else:
pass
print ' | ', var, ' '
pydf_C = pydf_C.append(selected_C, ignore_index=True)
# Instantiate Shapefile
SHP_Center = shp.Writer(shp.POINT)
# Add Points
for idy,idx,x,y in NEW_Records_PCP:
SHP_Center.point(float(x), float(y))
# Add Fields
SHP_Center.field('y_ID')
SHP_Center.field('x_ID')
SHP_Center.field('LAT')
SHP_Center.field('LON')
# Add Records
for idy,idx,x,y in NEW_Records_PCP:
SHP_Center.record(idy,idx,x,y)
# Save Shapefile
SHP_Center.save(path+'Results/Selected_Locations_Pcenter'+str(p)+'.shp')
print ' | Selected Facility Locations -------------- ^^^^ '
print ' | Candidate Facilities [p] ----------------- ', len(selected_C)
val_c = mPCP.objVal
VAL_PCP.append(round(val_c, 3))
print ' | Objective Value (miles) ------------------ ', val_c
print ' | Real Time to Optimize (sec.) ------------- ', t2_PCP
print '*************************************************************************'
print ' -- The p-Center Problem -- '
print ' [p] = ', str(p), '\n\n'
###########################################################################
# p-CentDian
t1_centdian = time.time()
# Instantiate P-Center Model
mPcentdian = gbp.Model(' -- p-CentDian -- ')
# Add Client Decision Variables (iXj)
client_var_CentDian = []
for orig in client_nodes:
client_var_CentDian.append([])
for dest in service_nodes:
client_var_CentDian[orig].append(mPcentdian.addVar(vtype=gbp.GRB.BINARY,
lb=0,
ub=1,
obj=Cij[orig][dest],
name='x'+str(orig+1)+'_'+str(dest+1)))
# Add Service Decision Variables (j)
serv_var_CentDian = []
for dest in service_nodes:
serv_var_CentDian.append([])
serv_var_CentDian[dest].append(mPcentdian.addVar(vtype=gbp.GRB.BINARY,
lb=0,
ub=1,
name='y'+str(dest+1)))
# Add the Maximum travel cost variable
W_CD = mPcentdian.addVar(vtype=gbp.GRB.CONTINUOUS,
lb=0.,
name='W')
# Update the model
mPcentdian.update()
# 3. Set Objective Function
M = gbp.quicksum(Sij[orig][dest]*client_var_CentDian[orig][dest]
for orig in client_nodes for dest in service_nodes)
Zt = M/AiSum
mPcentdian.setObjective((W_CD + Zt) / 2, gbp.GRB.MINIMIZE)
# 4. Add Constraints
# Assignment Constraints
for orig in client_nodes:
mPcentdian.addConstr(gbp.quicksum(client_var_CentDian[orig][dest]
for dest in service_nodes) == 1)
# Opening Constraints
for orig in service_nodes:
for dest in client_nodes:
mPcentdian.addConstr((serv_var_CentDian[orig][0] - client_var_CentDian[dest][orig]
>= 0))
# Add Maximum travel cost constraints
for orig in client_nodes:
mPcentdian.addConstr(gbp.quicksum(Cij[orig][dest]*client_var_CentDian[orig][dest]
for dest in service_nodes) - W_CD <= 0)
# Facility Constraint
mPcentdian.addConstr(gbp.quicksum(serv_var_CentDian[dest][0] for dest in service_nodes)
== p)
# 5. Optimize and Print Results
# Solve
mPcentdian.optimize()
# Write LP
mPcentdian.write(path+'LP_Files/CentDian'+str(p)+'.lp')
t2_centdian = time.time()-t1_centdian
# Record and Display Results
print '\n*************************************************************************'
selected_CentDian = OrderedDict()
dbf1 = ps.open(path+'Snapped/SERVICE_Snapped.dbf')
NEW_Records_Pcentdian = []
for v in mPcentdian.getVars():
if 'x' in v.VarName:
pass
elif 'W' in v.VarName:
pass
elif v.x > 0:
var = '%s' % v.VarName
selected_CentDian[var]=(u"\u2588")
for i in range(dbf1.n_records):
if var in dbf1.read_record(i):
x = dbf1.read_record(i)
NEW_Records_Pcentdian.append(x)
else:
pass
print ' | ', var, ' '
pydf_CentDian = pydf_CentDian.append(selected_CentDian, ignore_index=True)
# Instantiate Shapefile
SHP_CentDian = shp.Writer(shp.POINT)
# Add Points
for idy,idx,x,y in NEW_Records_Pcentdian:
SHP_CentDian.point(float(x), float(y))
# Add Fields
SHP_CentDian.field('y_ID')
SHP_CentDian.field('x_ID')
SHP_CentDian.field('LAT')
SHP_CentDian.field('LON')
# Add Records
for idy,idx,x,y in NEW_Records_Pcentdian:
SHP_CentDian.record(idy,idx,x,y)
# Save Shapefile
SHP_CentDian.save(path+'Results/Selected_Locations_CentDian'+str(p)+'.shp')
print ' | Selected Facility Locations -------------- ^^^^ '
print ' | Candidate Facilities [p] ----------------- ', len(selected_CentDian)
val_cd = mPcentdian.objVal
VAL_CentDian.append(round(val_cd, 3))
print ' | Objective Value (miles) ------------------ ', val_cd
print ' | Real Time to Optimize (sec.) ------------- ', t2_centdian
print '*************************************************************************'
print ' -- The p-CentDian Problem -- '
print ' [p] = ', str(p), '\n\n'
###########################################################################
# p-Median + p-Center Method
# Record solutions that record identical facility selection
if selected_M.keys() == selected_C.keys() == selected_CentDian.keys():
pydf_MC = pydf_MC.append(selected_C, ignore_index=True) # append PMCP dataframe
p_dens.append('p='+str(p)) # density of [p]
VAL_PMCP.append([round(val_m,3), round(avg_m,3),
round(val_c,3), round(val_cd,3)]) # append PMCP list
# Instantiate Shapefile
SHP_PMCP = shp.Writer(shp.POINT)
# Add Points
for idy,idx,x,y in NEW_Records_PCP:
SHP_PMCP.point(float(x), float(y))
# Add Fields
SHP_PMCP.field('y_ID')
SHP_PMCP.field('x_ID')
SHP_PMCP.field('LAT')
SHP_PMCP.field('LON')
# Add Records
for idy,idx,x,y in NEW_Records_PCP:
SHP_PMCP.record(idy,idx,x,y)
# Save Shapefile
SHP_PMCP.save(path+'Results/Selected_Locations_PMCP'+str(p)+'.shp')
else:
pass
Explanation: Define the function to solve the p-Median + p-Center Problems concurrently
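For reference, the p-Center block above encodes the standard integer-programming formulation over the same index sets as the code; a sketch (with $c_{ij}$ the travel costs in Cij and $W$ the worst-case cost variable):
$$\min \; W \quad \text{s.t.} \quad \sum_{j} x_{ij} = 1 \;\; \forall i, \qquad x_{ij} \le y_j \;\; \forall i,j, \qquad \sum_{j} c_{ij} x_{ij} \le W \;\; \forall i, \qquad \sum_{j} y_j = p, \qquad x_{ij}, y_j \in \{0,1\}$$
The p-CentDian model that follows keeps these constraints and instead minimizes the average of $W$ and the demand-weighted mean assignment cost.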
End of explanation
STREETS_Orig = gpd.read_file(path+'Waverly_Trim/Waverly.shp')
STREETS = gpd.read_file(path+'Waverly_Trim/Waverly.shp')
STREETS.to_crs(epsg=2779, inplace=True) # NAD83(HARN) / Florida North
STREETS.to_file(path+'WAVERLY/WAVERLY.shp')
STREETS[:5]
Explanation: Reproject the street network with GeoPandas
End of explanation
ntw = ps.Network(path+'WAVERLY/WAVERLY.shp')
shp_W = ps.open(path+'WAVERLY/WAVERLY.shp')
Explanation: Instantiate Network and read in WAVERLY.shp
End of explanation
buff = STREETS.buffer(200) #Buffer
buff[:5]
Explanation: Create Buffer of 200 meters
End of explanation
buff.plot()
Explanation: Plot Buffers of Individual Streets
End of explanation
buffU = buff.unary_union #Buffer Union
buff1 = gpd.GeoSeries(buffU)
buff1.crs = STREETS.crs
Buff = gpd.GeoDataFrame(buff1, crs=STREETS.crs)
Buff.columns = ['geometry']
Buff
Explanation: Create a Unary Union of the individual street buffers
End of explanation
Buff.plot()
Explanation: Plot the unary union buffer
End of explanation
np.random.seed(352)
x = np.random.uniform(shp_W.bbox[0], shp_W.bbox[2], 1000)
np.random.seed(850)
y = np.random.uniform(shp_W.bbox[1], shp_W.bbox[3], 1000)
coords0= zip(x,y)
coords = [shapely.geometry.Point(i) for i in coords0]
Rand = gpd.GeoDataFrame(coords)
Rand.crs = STREETS.crs
Rand.columns = ['geometry']
Rand[:5]
Explanation: Create 1000 random points within the bounds of WAVERLY.shp
End of explanation
Rand.plot()
Explanation: Plot the 1000 random points
End of explanation
Inter = [Buff['geometry'].intersection(p) for p in Rand['geometry']]
INTER = gpd.GeoDataFrame(Inter, crs=STREETS.crs)
INTER.columns = ['geometry']
INTER[:5]
Explanation: Create GeoPandas DF of the random points within the Unary Buffer
End of explanation
INTER.plot()
Explanation: Plot the points within the Unary Buffer
End of explanation
# Add records that are points within the buffer
point_in = []
for p in INTER['geometry']:
if type(p) == shapely.geometry.point.Point:
point_in.append(p)
point_in[:5]
Explanation: Add only intersecting records to a list
End of explanation
CLIENT = gpd.GeoDataFrame(point_in[:100], crs=STREETS.crs)
CLIENT.columns = ['geometry']
SERVICE = gpd.GeoDataFrame(point_in[-15:], crs=STREETS.crs)
SERVICE.columns = ['geometry']
CLIENT.to_file(path+'CLIENT')
SERVICE.to_file(path+'SERVICE')
CLIENT[:5]
SERVICE[:5]
Explanation: Keep the first 100 for clients and the last 15 for service facilities
End of explanation
Buff.plot()
STREETS.plot()
CLIENT.plot()
SERVICE.plot(colormap=True)
Explanation: Plot the Unary Union, Simulated Clients, Simulated Service, and Streets
End of explanation
g = nx.Graph() # Roads & Nodes
g1 = nx.MultiGraph() # Edges and Vertices
GRAPH_client = nx.Graph() # Clients
g_client = nx.Graph() # Snapped Clients
GRAPH_service = nx.Graph() # Service
g_service = nx.Graph() # Snapped Service
Explanation: Instaniate non-solution graphs to be drawn
End of explanation
points_client = {}
points_service = {}
CLI = ps.open(path+'CLIENT/CLIENT.shp')
for idx, coords in enumerate(CLI):
GRAPH_client.add_node(idx)
points_client[idx] = coords
GRAPH_client.node[idx] = coords
SER = ps.open(path+'SERVICE/SERVICE.shp')
for idx, coords in enumerate(SER):
GRAPH_service.add_node(idx)
points_service[idx] = coords
GRAPH_service.node[idx] = coords
Explanation: Instantiate and fill Client and Service point dictionaries
End of explanation
# Client Weights for demand
np.random.seed(850)
Ai = np.random.randint(1, 5, len(CLI))
Ai = Ai.reshape(len(Ai),1)
AiSum = np.sum(Ai) # Sum of Weights (Total Demand)
Explanation: Simulate weights for Client Demand
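Note that np.random.randint(1, 5, ...) draws integer weights from 1 through 4, since the upper bound is exclusive.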
End of explanation
client = shp.Writer(shp.POINT) # Client Shapefile
# Add Random Points
for i,j in CLI:
client.point(i,j)
# Add Fields
client.field('client_ID')
client.field('Weight')
counter = 0
for i in range(len(CLI)):
counter = counter + 1
client.record('client_' + str(counter), Ai[i])
client.save(path+'Simulated/RandomPoints_CLIENT') # Save Shapefile
Explanation: Instantiate Client .shp
End of explanation
service = shp.Writer(shp.POINT) #Service Shapefile
# Add Random Points
for i,j in SER:
service.point(i,j)
# Add Fields
service.field('y_ID')
service.field('x_ID')
counter = 0
for i in range(len(SER)):
counter = counter + 1
service.record('y' + str(counter), 'x' + str(counter))
service.save(path+'Simulated/RandomPoints_SERVICE') # Save Shapefile
Explanation: Instantiate Service .shp
End of explanation
# Snap
Snap_C = ntw.snapobservations(path+'Simulated/RandomPoints_CLIENT.shp',
'Rand_Points_CLIENT', attribute=True)
Snap_S = ntw.snapobservations(path+'Simulated/RandomPoints_SERVICE.shp',
'Rand_Points_SERVICE', attribute=True)
Explanation: Snap Client and Service points to the network
End of explanation
# Create Lat & Lon lists of the snapped service locations
y_snapped = []
x_snapped = []
for i,j in ntw.pointpatterns['Rand_Points_SERVICE'].snapped_coordinates.iteritems():
y_snapped.append(j[0])
x_snapped.append(j[1])
Explanation: Create lat/lon lists of snapped service coords
End of explanation
service_SNAP = shp.Writer(shp.POINT) # Snapped Service Shapefile
# Add Points
for i,j in ntw.pointpatterns['Rand_Points_SERVICE'].snapped_coordinates.iteritems():
service_SNAP.point(j[0],j[1])
# Add Fields
service_SNAP.field('y_ID')
service_SNAP.field('x_ID')
service_SNAP.field('LAT')
service_SNAP.field('LON')
counter = 0
for i in range(len(ntw.pointpatterns['Rand_Points_SERVICE'].snapped_coordinates)):
counter = counter + 1
service_SNAP.record('y' + str(counter), 'x' + str(counter), y_snapped[i], x_snapped[i])
service_SNAP.save(path+'Snapped/SERVICE_Snapped') # Save Shapefile
Explanation: Instantiate snapped Service .shp
End of explanation
# Call Client to Service Matrix Function
c_s_matrix()
Explanation: Call Client to Service Matrix Function
End of explanation
# PANDAS DATAFRAME OF p/y results
p_list = []
for i in range(1, len(SER)+1):
p = 'p='+str(i)
p_list.append(p)
y_list = []
for i in range(1, len(SER)+1):
y = 'y'+str(i)
y_list.append(y)
Explanation: Create Lists to fill index and columns of GeoPandas Data Frames
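As an aside, each of these lists can also be built in a single line, e.g. p_list = ['p=' + str(i) for i in range(1, len(SER)+1)] and y_list = ['y' + str(i) for i in range(1, len(SER)+1)].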
End of explanation
pydf_M = pd.DataFrame(index=p_list,columns=y_list)
pydf_C = pd.DataFrame(index=p_list,columns=y_list)
pydf_CentDian = pd.DataFrame(index=p_list,columns=y_list)
pydf_MC = pd.DataFrame(index=p_list,columns=y_list)
qgrid.show_grid(pydf_M)
Explanation: Instantiate GeoPandas Dataframes
End of explanation
# p-Median
P_Med_Graphs = OrderedDict()
for x in range(1, len(SER)+1):
P_Med_Graphs["{0}".format(x)] = nx.Graph()
# p-Center
P_Cent_Graphs = OrderedDict()
for x in range(1, len(SER)+1):
P_Cent_Graphs["{0}".format(x)] = nx.Graph()
# p-CentDian
P_CentDian_Graphs = OrderedDict()
for x in range(1, len(SER)+1):
P_CentDian_Graphs["{0}".format(x)] = nx.Graph()
Explanation: Create PMP, PCP, and CentDian solution graphs
End of explanation
# PMP
VAL_PMP = []
AVG_PMP = []
# PCP
VAL_PCP = []
# CentDian
VAL_CentDian = []
# PMCP
VAL_PMCP = []
p_dens = [] # when the facilities for the p-median & p-center are the same
Explanation: Instantiate lists for objective values and average values of all models
End of explanation
Gurobi_PMCP(len(SER), Ai, AiSum, All_Dist_MILES)
Explanation: Solutions
Solve all
End of explanation
# PMP Total
PMP_Tot_Diff = []
for i in range(len(VAL_PMP)):
if i == 0:
PMP_Tot_Diff.append('0%')
elif i <= len(VAL_PMP):
n1 = VAL_PMP[i-1]
n2 = VAL_PMP[i]
diff = n2 - n1
perc_change = (diff/n1)*100.
PMP_Tot_Diff.append(str(round(perc_change, 2))+'%')
# PMP Average
PMP_Avg_Diff = []
for i in range(len(AVG_PMP)):
if i == 0:
PMP_Avg_Diff.append('0%')
elif i <= len(AVG_PMP):
n1 = AVG_PMP[i-1]
n2 = AVG_PMP[i]
diff = n2 - n1
perc_change = (diff/n1)*100.
PMP_Avg_Diff.append(str(round(perc_change, 2))+'%')
# PCP
PCP_Diff = []
for i in range(len(VAL_PCP)):
if i == 0:
PCP_Diff.append('0%')
elif i <= len(VAL_PCP):
n1 = VAL_PCP[i-1]
n2 = VAL_PCP[i]
diff = n2 - n1
perc_change = (diff/n1)*100.
PCP_Diff.append(str(round(perc_change, 2))+'%')
# p-CentDian
CentDian_Diff = []
for i in range(len(VAL_CentDian)):
if i == 0:
CentDian_Diff.append('0%')
elif i <= len(VAL_CentDian):
n1 = VAL_CentDian[i-1]
n2 = VAL_CentDian[i]
diff = n2 - n1
perc_change = (diff/n1)*100.
CentDian_Diff.append(str(round(perc_change, 2))+'%')
# PMCP
PMCP_Diff = []
counter = 0
for i in range(len(VAL_PMCP)):
PMCP_Diff.append([])
for j in range(len(VAL_PMCP[0])):
if i == 0:
PMCP_Diff[i].append('0%')
elif i <= len(VAL_PMCP):
n1 = VAL_PMCP[i-1][j]
n2 = VAL_PMCP[i][j]
diff = n2 - n1
perc_change = (diff/n1*100.)
PMCP_Diff[i].append(str(round(perc_change, 2))+'%')
Explanation: Calculate and record percentage decrease
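As an aside, pandas can produce the same percentage-change columns more compactly; a minimal sketch for the PMP totals (assuming the VAL_PMP list built above):
PMP_Tot_Diff = ((pd.Series(VAL_PMP).pct_change().fillna(0) * 100).round(2).astype(str) + '%').tolist()
The explicit loops are kept here because they read more like the formula, but pct_change is handy when many of these columns are needed.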
End of explanation
# PMP
pydf_M = pydf_M[len(SER):]
pydf_M.reset_index()
pydf_M.index = p_list
pydf_M.columns.name = 'Decision\nVariables'
pydf_M.index.name = 'Facility\nDensity'
pydf_M['Tot. Obj. Value'] = VAL_PMP
pydf_M['Tot. % Change'] = PMP_Tot_Diff
pydf_M['Avg. Obj. Value'] = AVG_PMP
pydf_M['Avg. % Change'] = PMP_Avg_Diff
pydf_M = pydf_M.fillna('')
#pydf_M.to_csv(path+'CSV') <-- need to change squares to alphanumeric to use
# PCP
pydf_C = pydf_C[len(SER):]
pydf_C.reset_index()
pydf_C.index = p_list
pydf_C.columns.name = 'Decision\nVariables'
pydf_C.index.name = 'Facility\nDensity'
pydf_C['Worst Case Obj. Value'] = VAL_PCP
pydf_C['Worst Case % Change'] = PCP_Diff
pydf_C = pydf_C.fillna('')
#pydf_C.to_csv(path+'CSV') <-- need to change squares to alphanumeric to use
pydf_CentDian = pydf_CentDian[len(SER):]
pydf_CentDian.reset_index()
pydf_CentDian.index = p_list
pydf_CentDian.columns.name = 'Decision\nVariables'
pydf_CentDian.index.name = 'Facility\nDensity'
pydf_CentDian['CentDian Obj. Value'] = VAL_CentDian
pydf_CentDian['CentDian % Change'] = CentDian_Diff
pydf_CentDian = pydf_CentDian.fillna('')
#pydf_CentDian.to_csv(path+'CSV') <-- need to change squares to alphanumeric to use
# PMCP
pydf_MC = pydf_MC[len(SER):]
pydf_MC.reset_index()
pydf_MC.index = p_dens
pydf_MC.columns.name = 'D.V.'
pydf_MC.index.name = 'F.D.'
pydf_MC['Min.\nTotal'] = [VAL_PMCP[x][0] for x in range(len(VAL_PMCP))]
pydf_MC['Min.\nTotal\n%\nChange'] = [PMCP_Diff[x][0] for x in range(len(PMCP_Diff))]
pydf_MC['Avg.\nTotal'] = [VAL_PMCP[x][1] for x in range(len(VAL_PMCP))]
pydf_MC['Avg.\nTotal\n%\nChange'] = [PMCP_Diff[x][1] for x in range(len(PMCP_Diff))]
pydf_MC['Worst\nCase'] = [VAL_PMCP[x][2] for x in range(len(VAL_PMCP))]
pydf_MC['Worst\nCase\n%\nChange'] = [PMCP_Diff[x][2] for x in range(len(PMCP_Diff))]
pydf_MC['Center\nMedian'] = [VAL_PMCP[x][3] for x in range(len(VAL_PMCP))]
pydf_MC['Center\nMedian\n%\nChange'] = [PMCP_Diff[x][3] for x in range(len(PMCP_Diff))]
pydf_MC = pydf_MC.fillna('')
#pydf_MC.to_csv(path+'CSV') <-- need to change squares to alphanumeric to use
Explanation: Data Frames adjust
End of explanation
# Create Graphs of the PMCP results
PMCP_Graphs = OrderedDict()
for x in pydf_MC.index:
PMCP_Graphs[x[2:]] = nx.Graph()
Explanation: Create Graphs of the PMCP results
End of explanation
figsize(10,10)
# Draw Network Actual Roads and Nodes
for e in ntw.edges:
g.add_edge(*e)
nx.draw(g, ntw.node_coords, node_size=5, alpha=0.25, edge_color='r', width=2)
#PMP
size = 3000
for i,j in P_Med_Graphs.iteritems():
size=size-120
# p-Median
P_Med = ps.open(path+'Results/Selected_Locations_Pmedian'+str(i)+'.shp')
points_median = {}
for idx, coords in enumerate(P_Med):
P_Med_Graphs[i].add_node(idx)
points_median[idx] = coords
P_Med_Graphs[i].node[idx] = coords
nx.draw(P_Med_Graphs[i],
points_median,
node_size=size,
alpha=.1,
node_color='k')
# Legend (Ordered Dictionary)
LEGEND = OrderedDict()
LEGEND['Network Nodes']=g
LEGEND['Roads']=g
for i in P_Med_Graphs:
LEGEND['Optimal PMP '+str(i)]=P_Med_Graphs[i]
legend(LEGEND,
loc='upper left',
fancybox=True,
framealpha=0.5,
scatterpoints=1)
# Title
title('Waverly Hills\n Tallahassee, Florida', family='Times New Roman',
size=40, color='k', backgroundcolor='w', weight='bold')
# North Arrow and 'N' --> Must be changed for different spatial resolutions, etc.
arrow(624000, 164050, 0.0, 500, width=50, head_width=125,
head_length=75, fc='k', ec='k',alpha=0.75,)
annotate('N', xy=(623900, 164700), fontstyle='italic', fontsize='xx-large',
fontweight='heavy', alpha=0.75)
Explanation: Visualizations
Draw PMP figure [p=1] large $ \rightarrow $ small [p=15]
End of explanation
qgrid.show_grid(pydf_M)
#pydf_M
Explanation: Pandas PMP Data Frame
End of explanation
#import bokeh
#from bokeh.charts import Scatter, show
from bokeh.plotting import figure, show, ColumnDataSource
from bokeh.io import output_notebook
from bokeh.models import HoverTool, BoxAnnotation
output_notebook()
source_m = ColumnDataSource(
data=dict(
x=range(1, len(SER)+1),
y=AVG_PMP,
avg=AVG_PMP,
desc=p_list,
change=PMP_Avg_Diff))
TOOLS = 'wheel_zoom, pan, reset, crosshair, save'
hover = HoverTool(line_policy="nearest", mode="hline", tooltips="""
<div>
<div>
</div>
<div>
<span style="font-size: 17px; font-weight: bold;">@desc</span>
</div>
<div>
<span style="font-size: 15px;">Average Minimized Cost</span>
<span style="font-size: 15px; font-weight: bold; color: #ff4d4d;">[@avg]</span>
</div>
<div>
<span style="font-size: 15px;">Variation</span>
<span style="font-size: 15px; font-weight: bold; color: #ff4d4d;">[@change]</span>
</div>
</div>""")
# Instantiate Plot
pmp_plot = figure(plot_width=600, plot_height=600, tools=[TOOLS,hover],
title="Average Distance vs. p-Facilities", y_range=(0,2))
# Create plot points and set source
pmp_plot.circle('x', 'y', size=15, color='red',source=source_m,
legend='Total Minimized Cost / Total Demand')
pmp_plot.line('x', 'y', line_width=2, color='red', alpha=.5, source=source_m,
legend='Total Minimized Cost / Total Demand')
pmp_plot.xaxis.axis_label = '[p = n]'
pmp_plot.yaxis.axis_label = 'Miles'
one_quarter = BoxAnnotation(plot=pmp_plot, top=.35,
fill_alpha=0.1, fill_color='green')
half = BoxAnnotation(plot=pmp_plot, bottom=.35, top=.7,
fill_alpha=0.1, fill_color='blue')
three_quarter = BoxAnnotation(plot=pmp_plot, bottom=.7, top=1.05,
fill_alpha=0.1, fill_color='gray')
pmp_plot.renderers.extend([one_quarter, half, three_quarter])
# Display the figure
show(pmp_plot)
Explanation: Bokeh PMP [p vs. cost] trade off
End of explanation
figsize(10,10)
# Draw Network Actual Roads and Nodes
for e in ntw.edges:
g.add_edge(*e)
nx.draw(g, ntw.node_coords, node_size=5, alpha=0.25, edge_color='r', width=2)
#PCP
size = 3000
for i,j in P_Cent_Graphs.iteritems():
size=size-150
# p-Center
P_Cent = ps.open(path+'Results/Selected_Locations_Pcenter'+str(i)+'.shp')
points_center = {}
for idx, coords in enumerate(P_Cent):
P_Cent_Graphs[i].add_node(idx)
points_center[idx] = coords
P_Cent_Graphs[i].node[idx] = coords
nx.draw(P_Cent_Graphs[i],
points_center,
node_size=size,
alpha=.1,
node_color='k')
# Legend (Ordered Dictionary)
LEGEND = OrderedDict()
LEGEND['Network Nodes']=g
LEGEND['Roads']=g
for i in P_Cent_Graphs:
LEGEND['Optimal PCP '+str(i)]=P_Cent_Graphs[i]
legend(LEGEND,
loc='upper left',
fancybox=True,
framealpha=0.5,
scatterpoints=1)
# Title
title('Waverly Hills\n Tallahassee, Florida', family='Times New Roman',
size=40, color='k', backgroundcolor='w', weight='bold')
# North Arrow and 'N' --> Must be changed for different spatial resolutions, etc.
arrow(624000, 164050, 0.0, 500, width=50, head_width=125,
head_length=75, fc='k', ec='k',alpha=0.75,)
annotate('N', xy=(623900, 164700), fontstyle='italic', fontsize='xx-large',
fontweight='heavy', alpha=0.75)
Explanation: Draw PCP figure [p=1] large $ \rightarrow $ small [p=15]
End of explanation
pydf_C
Explanation: Pandas PCP Data Frame
End of explanation
#output_notebook()
source_c = ColumnDataSource(
data=dict(
x=range(1, len(SER)+1),
y=VAL_PCP,
obj=VAL_PCP,
desc=p_list,
change=PCP_Diff))
TOOLS = 'wheel_zoom, pan, reset, crosshair, save'
hover = HoverTool(line_policy="nearest", mode="vline", tooltips="""
<div>
<div>
</div>
<div>
<span style="font-size: 17px; font-weight: bold;">@desc</span>
</div>
<div>
<span style="font-size: 15px;">Worst Case Cost</span>
<span style="font-size: 15px; font-weight: bold; color: #00b300;">[@obj]</span>
</div>
<div>
<span style="font-size: 15px;">Variation</span>
<span style="font-size: 15px; font-weight: bold; color: #00b300;">[@change]</span>
</div>
</div>""")
# Instantiate Plot
pcp_plot = figure(plot_width=600, plot_height=600, tools=[TOOLS,hover],
title="Worst Case Distance vs. p-Facilities", y_range=(0,2))
# Create plot points and set source
pcp_plot.circle('x', 'y', size=15, color='green', source=source_c,
legend='Minimized Worst Case')
pcp_plot.line('x', 'y', line_width=2, color='green', alpha=.5, source=source_c,
legend='Minimized Worst Case')
pcp_plot.xaxis.axis_label = '[p = n]'
pcp_plot.yaxis.axis_label = 'Miles'
one_quarter = BoxAnnotation(plot=pcp_plot, top=.35,
fill_alpha=0.1, fill_color='green')
half = BoxAnnotation(plot=pcp_plot, bottom=.35, top=.7,
fill_alpha=0.1, fill_color='blue')
three_quarter = BoxAnnotation(plot=pcp_plot, bottom=.7, top=1.05,
fill_alpha=0.1, fill_color='gray')
pcp_plot.renderers.extend([one_quarter, half, three_quarter])
# Display the figure
show(pcp_plot)
Explanation: Bokeh PCP [p vs. cost] trade off
End of explanation
pydf_CentDian
Explanation: Pandas CentDian Data Frame
End of explanation
figsize(10,10)
# Draw Network Actual Roads and Nodes
for e in ntw.edges:
g.add_edge(*e)
nx.draw(g, ntw.node_coords, node_size=5, alpha=0.25, edge_color='r', width=2)
#CentDian
size = 3000
for i,j in P_CentDian_Graphs.iteritems():
size=size-150
P_CentDian = ps.open(path+'Results/Selected_Locations_CentDian'+str(i)+'.shp')
points_centdian = {}
for idx, coords in enumerate(P_CentDian):
P_CentDian_Graphs[i].add_node(idx)
points_centdian[idx] = coords
P_CentDian_Graphs[i].node[idx] = coords
nx.draw(P_CentDian_Graphs[i],
points_centdian,
node_size=size,
alpha=.1,
node_color='k')
# Legend (Ordered Dictionary)
LEGEND = OrderedDict()
LEGEND['Network Nodes']=g
LEGEND['Roads']=g
for i in P_CentDian_Graphs:
LEGEND['Optimal CentDian '+str(i)]=P_CentDian_Graphs[i]
legend(LEGEND,
loc='upper left',
fancybox=True,
framealpha=0.5,
scatterpoints=1)
# Title
title('Waverly Hills\n Tallahassee, Florida', family='Times New Roman',
size=40, color='k', backgroundcolor='w', weight='bold')
# North Arrow and 'N' --> Must be changed for different spatial resolutions, etc.
arrow(624000, 164050, 0.0, 500, width=50, head_width=125,
head_length=75, fc='k', ec='k',alpha=0.75,)
annotate('N', xy=(623900, 164700), fontstyle='italic', fontsize='xx-large',
fontweight='heavy', alpha=0.75)
Explanation: Draw CentDian figure [p=1] large $ \rightarrow $ small [p=15]
End of explanation
#output_notebook()
source_centdian = ColumnDataSource(
data=dict(
x=range(1, len(SER)+1),
y=VAL_CentDian,
obj=VAL_CentDian,
desc=p_list,
change=CentDian_Diff))
TOOLS = 'wheel_zoom, pan, reset, crosshair, save'
hover = HoverTool(line_policy="nearest", mode="vline", tooltips="""
<div>
<div>
</div>
<div>
<span style="font-size: 17px; font-weight: bold;">@desc</span>
</div>
<div>
<span style="font-size: 15px;">Center Median Cost</span>
<span style="font-size: 15px; font-weight: bold; color: #3385ff;">[@obj]</span>
</div>
<div>
<span style="font-size: 15px;">Variation</span>
<span style="font-size: 15px; font-weight: bold; color: #3385ff;">[@change]</span>
</div>
</div>""")
# Instantiate Plot
centdian_plot = figure(plot_width=600, plot_height=600, tools=[TOOLS,hover],
title="Center Median Distance vs. p-Facilities", y_range=(0,2))
# Create plot points and set source
centdian_plot.circle('x', 'y', size=15, color='blue', source=source_centdian,
legend='Center Median')
centdian_plot.line('x', 'y', line_width=2, color='blue', alpha=.5, source=source_centdian,
legend='Center Median')
centdian_plot.xaxis.axis_label = '[p = n]'
centdian_plot.yaxis.axis_label = 'Miles'
one_quarter = BoxAnnotation(plot=centdian_plot, top=.35,
fill_alpha=0.1, fill_color='green')
half = BoxAnnotation(plot=centdian_plot, bottom=.35, top=.7,
fill_alpha=0.1, fill_color='blue')
three_quarter = BoxAnnotation(plot=centdian_plot, bottom=.7, top=1.05,
fill_alpha=0.1, fill_color='gray')
centdian_plot.renderers.extend([one_quarter, half, three_quarter])
# Display the figure
show(centdian_plot)
Explanation: Bokeh CentDian [p vs. cost] trade off
End of explanation
figsize(10,10)
# Draw Network Actual Roads and Nodes
for e in ntw.edges:
g.add_edge(*e)
nx.draw(g, ntw.node_coords, node_size=5, alpha=0.25, edge_color='r', width=2)
size = 500
shape = 'sdh^vp<8>'  # matplotlib marker symbols, one per matched facility density
counter = -1
for i,j in PMCP_Graphs.iteritems():
if int(i) <= len(SER)-1:
counter = counter+1
pmcp = ps.open(path+'Results/Selected_Locations_PMCP'+str(i)+'.shp')
points_pmcp = {}
for idx, coords in enumerate(pmcp):
PMCP_Graphs[i].add_node(idx)
points_pmcp[idx] = coords
PMCP_Graphs[i].node[idx] = coords
nx.draw(PMCP_Graphs[i],
points_pmcp,
node_size=size,
node_shape=shape[counter],
alpha=.5,
node_color='k')
else:
pass
# Legend (Ordered Dictionary)
LEGEND = OrderedDict()
LEGEND['Network Nodes']=g
LEGEND['Roads']=g
for i in PMCP_Graphs:
if int(i) <= len(SER)-1:
LEGEND['PMP/PCP == '+str(i)]=PMCP_Graphs[i]
legend(LEGEND,
loc='upper left',
fancybox=True,
framealpha=0.5,
scatterpoints=1)
# Title
title('Waverly Hills\n Tallahassee, Florida', family='Times New Roman',
size=40, color='k', backgroundcolor='w', weight='bold')
# North Arrow and 'N' --> Must be changed for different spatial resolutions, etc.
arrow(624000, 164050, 0.0, 500, width=50, head_width=125,
head_length=75, fc='k', ec='k',alpha=0.75,)
annotate('N', xy=(623900, 164700), fontstyle='italic', fontsize='xx-large',
fontweight='heavy', alpha=0.75)
Explanation: Draw PMCP figure
End of explanation
pydf_MC
Explanation: Pandas PMCP Data Frame
End of explanation
#output_notebook()
TOOLS = 'wheel_zoom, pan, reset, crosshair, save, hover'
bokeh_df_PMCP = pd.DataFrame()
bokeh_df_PMCP['p'] = [int(i[2:]) for i in p_dens]
bokeh_df_PMCP['Total Obj. Value'] = [VAL_PMCP[x][0] for x in range(len(VAL_PMCP))]
bokeh_df_PMCP['Avg. Obj. Value'] = [VAL_PMCP[x][1] for x in range(len(VAL_PMCP))]
bokeh_df_PMCP['Worst Case Obj. Value'] = [VAL_PMCP[x][2] for x in range(len(VAL_PMCP))]
bokeh_df_PMCP['CentDian Obj. Value'] = [VAL_PMCP[x][3] for x in range(len(VAL_PMCP))]
plot_PMCP = figure(title="Optimal PMP & PCP Selections without Sacrifice",
plot_width=800, plot_height=600, tools=[TOOLS], y_range=(0,2))
plot_PMCP.circle('x', 'y', size=5, color='red', source=source_m, legend='PMP')
plot_PMCP.line('x', 'y',
color="#ff4d4d", alpha=0.2, line_width=2, source=source_m, legend='PMP')
plot_PMCP.circle('x', 'y', size=5, color='green', source=source_c, legend='PCP')
plot_PMCP.line('x', 'y',
color='#00b300', alpha=0.2, line_width=2, source=source_c, legend='PCP')
plot_PMCP.circle('x', 'y', size=5, color='blue', source=source_centdian, legend='CentDian')
plot_PMCP.line('x', 'y',
color='#3385ff', alpha=0.2, line_width=2, source=source_centdian, legend='CentDian')
plot_PMCP.circle_x(bokeh_df_PMCP['p'],
bokeh_df_PMCP['Avg. Obj. Value'],
legend="Location PMP=PCP for PM+CP",
color="#ff4d4d",
fill_alpha=0.2,
size=15)
plot_PMCP.circle_x(bokeh_df_PMCP['p'],
bokeh_df_PMCP['Worst Case Obj. Value'],
legend="Location PCP=PMP for PM+CP",
color='#00b300',
fill_alpha=0.2,
size=15)
plot_PMCP.circle_x(bokeh_df_PMCP['p'],
bokeh_df_PMCP['CentDian Obj. Value'],
legend="Location CentDian = PMCP",
color='#3385ff',
fill_alpha=0.2,
size=15)
plot_PMCP.xaxis.axis_label = '[p = n]'
plot_PMCP.yaxis.axis_label = 'Miles'
one_quarter = BoxAnnotation(plot=plot_PMCP, top=.35,
fill_alpha=0.1, fill_color='green')
half = BoxAnnotation(plot=plot_PMCP, bottom=.35, top=.7,
fill_alpha=0.1, fill_color='blue')
three_quarter = BoxAnnotation(plot=plot_PMCP, bottom=.7, top=1.05,
fill_alpha=0.1, fill_color='gray')
plot_PMCP.renderers.extend([one_quarter, half, three_quarter])
show(plot_PMCP)
Explanation: Bokeh PMP & PCP [p vs. cost] comparision
End of explanation
points = SERVICE
points.to_crs(epsg=32616, inplace=True) # UTM 16N
LonLat_Dict = OrderedDict()
LonLat_List = []
for i,j in points['geometry'].iteritems():
LonLat_Dict[y_list[i]] = utm.to_latlon(j.xy[0][-1], j.xy[1][-1], 16, 'N')
LonLat_List.append((utm.to_latlon(j.xy[0][-1], j.xy[1][-1], 16, 'N')))
Service_Lat_List = []
Service_Lon_List = []
for i in LonLat_List:
Service_Lat_List.append(i[0])
for i in LonLat_List:
Service_Lon_List.append(i[1])
Explanation: Convert Service Facilities Back to Longitude/Latitude for Google Maps Plots
End of explanation
# p-Median Selected Sites
list_of_p_MEDIAN = []
for y in range(len(y_list)):
list_of_p_MEDIAN.append([])
for p in range(len(p_list)):
if pydf_M[y_list[y]][p_list[p]] == u'\u2588':
list_of_p_MEDIAN[y].append([p_list[p]])
# p-Center Selected Sites
list_of_p_CENTER = []
for y in range(len(y_list)):
list_of_p_CENTER.append([])
for p in range(len(p_list)):
if pydf_C[y_list[y]][p_list[p]] == u'\u2588':
list_of_p_CENTER[y].append([p_list[p]])
# p-CentDian Selected Sites
list_of_p_CentDian = []
for y in range(len(y_list)):
list_of_p_CentDian.append([])
for p in range(len(p_list)):
if pydf_CentDian[y_list[y]][p_list[p]] == u'\u2588':
list_of_p_CentDian[y].append([p_list[p]])
# PMCP Selected Sites
list_of_PMCP = []
for y in range(len(y_list)):
list_of_PMCP.append([])
for p in p_dens:
if pydf_MC[y_list[y]][p] == u'\u2588':
list_of_PMCP[y].append(p)
Explanation: Create Lists of Selected Locations for Google Maps Plot
End of explanation
from bokeh.io import output_notebook, output_file, show
from bokeh.models import (GMapPlot, GMapOptions, ColumnDataSource, Circle, MultiLine,
DataRange1d, PanTool, WheelZoomTool, BoxSelectTool, ResetTool)
map_options = GMapOptions(lat=30.4855, lng=-84.265, map_type="hybrid", zoom=14)
plot = GMapPlot(
x_range=DataRange1d(), y_range=DataRange1d(), map_options=map_options, title="Waverly Hills")
hover = HoverTool(tooltips="""
<div>
<div>
</div>
<div>
<span style="font-size: 30px; font-weight: bold;">Site @desc</span>
</div>
<div>
<span> \b </span>
</div>
<div>
<span style="font-size: 17px; font-weight: bold;">p-Median: </span>
</div>
<div>
<span style="font-size: 15px; font-weight: bold; color: #ff4d4d;">@p_select_median</span>
</div>
<div>
<span> \b </span>
</div>
<div>
<span style="font-size: 17px; font-weight: bold;">p-Center</span>
<div>
<span style="font-size: 14px; font-weight: bold; color: #00b300;">@p_select_center</span>
</div>
<div>
<span> \b </span>
</div>
<div>
<span style="font-size: 17px; font-weight: bold;">p-CentDian</span>
</div>
<div>
<span style="font-size: 14px; font-weight: bold; color: #3385ff;">@p_select_centdian</span>
</div>
<div>
<span> \b </span>
</div>
<span style="font-size: 17px; font-weight: bold;">PMCP Method</span>
</div>
<div>
<span style="font-size: 14px; font-weight: bold; color: 'gray';">@p_select_pmcp</span>
</div>
</div>""")
source_1 = ColumnDataSource(
data=dict(
lat=Service_Lat_List,
lon=Service_Lon_List,
desc=y_list,
p_select_center=list_of_p_CENTER,
p_select_median=list_of_p_MEDIAN,
p_select_centdian= list_of_p_CentDian,
p_select_pmcp=list_of_PMCP))
#source_2 = ColumnDataSource(
# data=dict(
# xs=line1x,
# ys=line1y))
facilties = Circle(x="lon", y="lat", size=10, fill_color="yellow", fill_alpha=0.6, line_color=None)
#streets = MultiLine(xs="xs", ys="ys", line_width=20, line_color='red')
#plot.title = "Waverly"
plot.add_glyph(source_1, facilties)
#plot.add_glyph(source_2, streets)
plot.add_tools(PanTool(), WheelZoomTool(), BoxSelectTool(), ResetTool(), hover)
output_file("gmap_plot.html")
show(plot)
Explanation: Google Maps Plot
End of explanation
# Gurobi
m = gbp.Model()
m.setParam( 'OutputFlag', False )
x = m.addVar(vtype=gbp.GRB.CONTINUOUS, name='x')
y = m.addVar(vtype=gbp.GRB.CONTINUOUS, name='y')
m.update()
m.setObjective(3*x + 2*y, gbp.GRB.MINIMIZE)
m.addConstr(x >= 3)
m.addConstr(y >= 5)
m.addConstr(x - y <= 20)
m.optimize()
#m.write('path_m.lp')
print m.objVal
print m.getVars()
# CyLP
s = CyClpSimplex()
x = s.addVariable('x', 1)
y = s.addVariable('y', 1)
s += x >= 3
s += y >= 5
s += x - y <= 20
s.objective = 3 * x + 2 * y
s.primal()
#s.writeLp('path_s')
print s.objectiveValue
print s.primalVariableSolution
print 'Gurobi & CLP Objective Values match? --> ', m.objVal == s.objectiveValue
Explanation: Future Work & Vision
$\Longrightarrow$ Develop a Python library for bringing together in one package spatial analysis & spatial optimization [spanoptpy], potentially incorporating:
|QGIS|PySAL|NetworkX|Pandas|GeoPandas|NumPy|Shapely|Bokeh|CyLP|
|----|-----|--------|------|---------|-----|-------|-----|----|
|GIS|network analysis|network analysis|data frames|geo dataframes|scientific computing|geometric objects|visualizations|optimization|
$\Longrightarrow$ Need PySAL.Network to be able to handle larger networks
$\Longrightarrow$ Develop functionality within a Linux environment
$\Longrightarrow$ scipy.spatial.cKDTree(dist_matrix)
$\Longrightarrow$ query_ball_point() for close neighbors of the selected sites
$\Longrightarrow$ Master CyLP from COIN-OR or develop an open-source optimization suite
interface with CLP, CBC, CGL
[ http://mpy.github.io/CyLPdoc/ ]
relatively steep learning curve
COIN-OR: Computational Infrastructure for Operations Research
[ http://www.coin-or.org ]
$\ast$ CyLP example
Minimize
$ 3x + 2y $
Subject To
$ x \geq 3$
$ y \geq 5$
$ x - y \leq 20$
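(A quick hand check: with $x \geq 3$, $y \geq 5$ and a minimized objective of $3x + 2y$, the optimum is $x=3$, $y=5$ with objective value $19$, which is what both solvers above should report.)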
End of explanation
IPd.HTML('https://github.com/jGaboardi')
Explanation: email $\Longrightarrow$ jgaboardi@fsu.edu
GitHub $\Longrightarrow$ https://github.com/jGaboardi/AAG_16
End of explanation
import datetime as dt
import os
import platform
import sys
import bokeh
import cylp
names = ['OSX', 'Processor ', 'Machine ', 'Python ','PySAL ','Gurobi ','Pandas ','GeoPandas ',
'Shapely ', 'NumPy ', 'Bokeh ', 'CyLP', 'Date & Time']
versions = [platform.mac_ver()[0], platform.processor(), platform.machine(), platform.python_version(),
ps.version, gbp.gurobi.version(), pd.__version__, gpd.__version__,
str(shapely.__version__), np.__version__,
bokeh.__version__, '0.7.1', dt.datetime.now()]
specs = pd.DataFrame(index=names, columns=['Version'])
specs.columns.name = 'Platform & Software Specs'
specs['Version'] = versions
specs # Pandas DF of specifications
Explanation: System Specs
End of explanation |
2,593 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Skip-gram word2vec
In this notebook, I'll lead you through using TensorFlow to implement the word2vec algorithm using the skip-gram architecture. By implementing this, you'll learn about embedding words for use in natural language processing. This will come in handy when dealing with things like machine translation.
Readings
Here are the resources I used to build this notebook. I suggest reading these either beforehand or while you're working on this material.
A really good conceptual overview of word2vec from Chris McCormick
First word2vec paper from Mikolov et al.
NIPS paper with improvements for word2vec also from Mikolov et al.
An implementation of word2vec from Thushan Ganegedara
TensorFlow word2vec tutorial
Word embeddings
When you're dealing with words in text, you end up with tens of thousands of classes to predict, one for each word. Trying to one-hot encode these words is massively inefficient: you'll have one element set to 1 and the other 50,000 set to 0. The matrix multiplication going into the first hidden layer will have almost all of the resulting values be zero. This is a huge waste of computation.
To solve this problem and greatly increase the efficiency of our networks, we use what are called embeddings. Embeddings are just a fully connected layer like you've seen before. We call this layer the embedding layer and the weights are embedding weights. We skip the multiplication into the embedding layer by instead directly grabbing the hidden layer values from the weight matrix. We can do this because the multiplication of a one-hot encoded vector with a matrix returns the row of the matrix corresponding the index of the "on" input unit.
Instead of doing the matrix multiplication, we use the weight matrix as a lookup table. We encode the words as integers, for example "heart" is encoded as 958, "mind" as 18094. Then to get hidden layer values for "heart", you just take the 958th row of the embedding matrix. This process is called an embedding lookup and the number of hidden units is the embedding dimension.
<img src='assets/tokenize_lookup.png' width=500>
There is nothing magical going on here. The embedding lookup table is just a weight matrix. The embedding layer is just a hidden layer. The lookup is just a shortcut for the matrix multiplication. The lookup table is trained just like any weight matrix as well.
Embeddings aren't only used for words of course. You can use them for any model where you have a massive number of classes. A particular type of model called Word2Vec uses the embedding layer to find vector representations of words that contain semantic meaning.
Word2Vec
The word2vec algorithm finds much more efficient representations by finding vectors that represent the words. These vectors also contain semantic information about the words. Words that show up in similar contexts, such as "black", "white", and "red" will have vectors near each other. There are two architectures for implementing word2vec, CBOW (Continuous Bag-Of-Words) and Skip-gram.
<img src="assets/word2vec_architectures.png" width="500">
In this implementation, we'll be using the skip-gram architecture because it performs better than CBOW. Here, we pass in a word and try to predict the words surrounding it in the text. In this way, we can train the network to learn representations for words that show up in similar contexts.
First up, importing packages.
Step1: Load the text8 dataset, a file of cleaned up Wikipedia articles from Matt Mahoney. The next cell will download the data set to the data folder. Then you can extract it and delete the archive file to save storage space.
Step2: Preprocessing
Here I'm fixing up the text to make training easier. This comes from the utils module I wrote. The preprocess function converts any punctuation into tokens, so a period is changed to <PERIOD>. In this data set, there aren't any periods, but it will help in other NLP problems. I'm also removing all words that show up five or fewer times in the dataset. This will greatly reduce issues due to noise in the data and improve the quality of the vector representations. If you want to write your own functions for this stuff, go for it.
Step3: And here I'm creating dictionaries to convert words to integers and backwards, integers to words. The integers are assigned in descending frequency order, so the most frequent word ("the") is given the integer 0 and the next most frequent is 1 and so on. The words are converted to integers and stored in the list int_words.
Step4: Subsampling
Words that show up often such as "the", "of", and "for" don't provide much context to the nearby words. If we discard some of them, we can remove some of the noise from our data and in return get faster training and better representations. This process is called subsampling by Mikolov. For each word $w_i$ in the training set, we'll discard it with probability given by
$$ P(w_i) = 1 - \sqrt{\frac{t}{f(w_i)}} $$
where $t$ is a threshold parameter and $f(w_i)$ is the frequency of word $w_i$ in the total dataset.
I'm going to leave this up to you as an exercise. This is more of a programming challenge, than about deep learning specifically. But, being able to prepare your data for your network is an important skill to have. Check out my solution to see how I did it.
Exercise
Step5: Making batches
Now that our data is in good shape, we need to get it into the proper form to pass it into our network. With the skip-gram architecture, for each word in the text, we want to grab all the words in a window around that word, with size $C$.
From Mikolov et al.
Step6: Here's a function that returns batches for our network. The idea is that it grabs batch_size words from a words list. Then for each of those words, it gets the target words in the window. I haven't found a way to pass in a random number of target words and get it to work with the architecture, so I make one row per input-target pair. This is a generator function by the way, helps save memory.
Step7: Building the graph
From Chris McCormick's blog, we can see the general structure of our network.
The input words are passed in as integers. This will go into a hidden layer of linear units, then into a softmax layer. We'll use the softmax layer to make a prediction like normal.
The idea here is to train the hidden layer weight matrix to find efficient representations for our words. We can discard the softmax layer because we don't really care about making predictions with this network. We just want the embedding matrix so we can use it in other networks we build from the dataset.
I'm going to have you build the graph in stages now. First off, creating the inputs and labels placeholders like normal.
Exercise
Step8: Embedding
The embedding matrix has a size of the number of words by the number of units in the hidden layer. So, if you have 10,000 words and 300 hidden units, the matrix will have size $10,000 \times 300$. Remember that we're using tokenized data for our inputs, usually as integers, where the number of tokens is the number of words in our vocabulary.
Exercise
Step9: Negative sampling
For every example we give the network, we train it using the output from the softmax layer. That means for each input, we're making very small changes to millions of weights even though we only have one true example. This makes training the network very inefficient. We can approximate the loss from the softmax layer by only updating a small subset of all the weights at once. We'll update the weights for the correct label, but only a small number of incorrect labels. This is called "negative sampling". Tensorflow has a convenient function to do this, tf.nn.sampled_softmax_loss.
Exercise
Step10: Validation
This code is from Thushan Ganegedara's implementation. Here we're going to choose a few common words and few uncommon words. Then, we'll print out the closest words to them. It's a nice way to check that our embedding table is grouping together words with similar semantic meanings.
Step11: Training
Below is the code to train the network. Every 100 batches it reports the training loss. Every 1000 batches, it'll print out the validation words.
Step12: Restore the trained network if you need to
Step13: Visualizing the word vectors
Below we'll use T-SNE to visualize how our high-dimensional word vectors cluster together. T-SNE is used to project these vectors into two dimensions while preserving local stucture. Check out this post from Christopher Olah to learn more about T-SNE and other ways to visualize high-dimensional data. | Python Code:
import time
import numpy as np
import tensorflow as tf
import utils
Explanation: Skip-gram word2vec
In this notebook, I'll lead you through using TensorFlow to implement the word2vec algorithm using the skip-gram architecture. By implementing this, you'll learn about embedding words for use in natural language processing. This will come in handy when dealing with things like machine translation.
Readings
Here are the resources I used to build this notebook. I suggest reading these either beforehand or while you're working on this material.
A really good conceptual overview of word2vec from Chris McCormick
First word2vec paper from Mikolov et al.
NIPS paper with improvements for word2vec also from Mikolov et al.
An implementation of word2vec from Thushan Ganegedara
TensorFlow word2vec tutorial
Word embeddings
When you're dealing with words in text, you end up with tens of thousands of classes to predict, one for each word. Trying to one-hot encode these words is massively inefficient: you'll have one element set to 1 and the other 50,000 set to 0. The matrix multiplication going into the first hidden layer will have almost all of the resulting values be zero. This is a huge waste of computation.
To solve this problem and greatly increase the efficiency of our networks, we use what are called embeddings. Embeddings are just a fully connected layer like you've seen before. We call this layer the embedding layer and the weights are embedding weights. We skip the multiplication into the embedding layer by instead directly grabbing the hidden layer values from the weight matrix. We can do this because the multiplication of a one-hot encoded vector with a matrix returns the row of the matrix corresponding to the index of the "on" input unit.
Instead of doing the matrix multiplication, we use the weight matrix as a lookup table. We encode the words as integers, for example "heart" is encoded as 958, "mind" as 18094. Then to get hidden layer values for "heart", you just take the 958th row of the embedding matrix. This process is called an embedding lookup and the number of hidden units is the embedding dimension.
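As a toy illustration of the lookup idea (illustrative numbers only, not part of the notebook's data):
# a made-up embedding matrix: 10 words, 3 hidden units
W = np.random.rand(10, 3)
# "looking up" word ids 2 and 7 is just row indexing, equivalent to one_hot(ids) @ W without the matmul
hidden = W[[2, 7]]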
<img src='assets/tokenize_lookup.png' width=500>
There is nothing magical going on here. The embedding lookup table is just a weight matrix. The embedding layer is just a hidden layer. The lookup is just a shortcut for the matrix multiplication. The lookup table is trained just like any weight matrix as well.
Embeddings aren't only used for words of course. You can use them for any model where you have a massive number of classes. A particular type of model called Word2Vec uses the embedding layer to find vector representations of words that contain semantic meaning.
Word2Vec
The word2vec algorithm finds much more efficient representations by finding vectors that represent the words. These vectors also contain semantic information about the words. Words that show up in similar contexts, such as "black", "white", and "red" will have vectors near each other. There are two architectures for implementing word2vec, CBOW (Continuous Bag-Of-Words) and Skip-gram.
<img src="assets/word2vec_architectures.png" width="500">
In this implementation, we'll be using the skip-gram architecture because it performs better than CBOW. Here, we pass in a word and try to predict the words surrounding it in the text. In this way, we can train the network to learn representations for words that show up in similar contexts.
First up, importing packages.
End of explanation
from urllib.request import urlretrieve
from os.path import isfile, isdir
from tqdm import tqdm
import zipfile
dataset_folder_path = 'data'
dataset_filename = 'text8.zip'
dataset_name = 'Text8 Dataset'
class DLProgress(tqdm):
last_block = 0
def hook(self, block_num=1, block_size=1, total_size=None):
self.total = total_size
self.update((block_num - self.last_block) * block_size)
self.last_block = block_num
if not isfile(dataset_filename):
with DLProgress(unit='B', unit_scale=True, miniters=1, desc=dataset_name) as pbar:
urlretrieve(
'http://mattmahoney.net/dc/text8.zip',
dataset_filename,
pbar.hook)
if not isdir(dataset_folder_path):
with zipfile.ZipFile(dataset_filename) as zip_ref:
zip_ref.extractall(dataset_folder_path)
with open('data/text8') as f:
text = f.read()
Explanation: Load the text8 dataset, a file of cleaned up Wikipedia articles from Matt Mahoney. The next cell will download the data set to the data folder. Then you can extract it and delete the archive file to save storage space.
End of explanation
words = utils.preprocess(text)
print(words[:30])
print("Total words: {}".format(len(words)))
print("Unique words: {}".format(len(set(words))))
Explanation: Preprocessing
Here I'm fixing up the text to make training easier. This comes from the utils module I wrote. The preprocess function converts any punctuation into tokens, so a period is changed to <PERIOD>. In this data set, there aren't any periods, but it will help in other NLP problems. I'm also removing all words that show up five or fewer times in the dataset. This will greatly reduce issues due to noise in the data and improve the quality of the vector representations. If you want to write your own functions for this stuff, go for it.
End of explanation
vocab_to_int, int_to_vocab = utils.create_lookup_tables(words)
int_words = [vocab_to_int[word] for word in words]
Explanation: And here I'm creating dictionaries to convert words to integers and backwards, integers to words. The integers are assigned in descending frequency order, so the most frequent word ("the") is given the integer 0 and the next most frequent is 1 and so on. The words are converted to integers and stored in the list int_words.
End of explanation
# create a table to store word frequencies
from collections import Counter
word_num = len(int_words)
word_counter = Counter()
for int_word in int_words:
word_counter[int_word] += 1
word_freq = {int_word: word_counter[int_word] / word_num for int_word in word_counter.keys()}
word_freq[0]
## Your code here
t = 1e-5
train_words = [] # The final subsampled word list
for int_word in int_words:
p_discard = 1 - np.sqrt(t / word_freq[int_word])
if np.random.rand((1)) > p_discard:
train_words.append(int_word)
p_discard
word_freq[int_word]
len(int_words)
len(train_words)
Explanation: Subsampling
Words that show up often such as "the", "of", and "for" don't provide much context to the nearby words. If we discard some of them, we can remove some of the noise from our data and in return get faster training and better representations. This process is called subsampling by Mikolov. For each word $w_i$ in the training set, we'll discard it with probability given by
$$ P(w_i) = 1 - \sqrt{\frac{t}{f(w_i)}} $$
where $t$ is a threshold parameter and $f(w_i)$ is the frequency of word $w_i$ in the total dataset.
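For a quick sense of scale with the notebook's threshold $t = 10^{-5}$: a word making up 1% of the corpus gets $P = 1 - \sqrt{10^{-5}/0.01} \approx 0.97$, so it is dropped almost every time it appears, while any word with frequency at or below $10^{-5}$ is never dropped.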
I'm going to leave this up to you as an exercise. This is more of a programming challenge, than about deep learning specifically. But, being able to prepare your data for your network is an important skill to have. Check out my solution to see how I did it.
Exercise: Implement subsampling for the words in int_words. That is, go through int_words and discard each word given the probablility $P(w_i)$ shown above. Note that $P(w_i)$ is the probability that a word is discarded. Assign the subsampled data to train_words.
End of explanation
def get_target(words, idx, window_size=5, random=True):
''' Get a list of words in a window around an index. '''
# Your code here
if random:
R = np.random.randint(1, window_size+1)
else:
R = window_size
start = idx - R if idx - R >0 else 0
stop = idx + R
out = set(words[start:idx] + words[idx+1:stop+1])  # words around idx, excluding the target word itself
return list(out)
Explanation: Making batches
Now that our data is in good shape, we need to get it into the proper form to pass it into our network. With the skip-gram architecture, for each word in the text, we want to grab all the words in a window around that word, with size $C$.
From Mikolov et al.:
"Since the more distant words are usually less related to the current word than those close to it, we give less weight to the distant words by sampling less from those words in our training examples... If we choose $C = 5$, for each training word we will select randomly a number $R$ in range $< 1; C >$, and then use $R$ words from history and $R$ words from the future of the current word as correct labels."
Exercise: Implement a function get_target that receives a list of words, an index, and a window size, then returns a list of words in the window around the index. Make sure to use the algorithm described above, where you choose a random number of words from the window.
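A quick sanity check of the solution above (an illustrative call; random=False fixes the window size):
get_target(list(range(10)), 5, window_size=2, random=False)   # -> [3, 4, 6, 7] in some order, since a set is used internally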
End of explanation
def get_batches(words, batch_size, window_size=5):
''' Create a generator of word batches as a tuple (inputs, targets) '''
n_batches = len(words)//batch_size
# only full batches
words = words[:n_batches*batch_size]
for idx in range(0, len(words), batch_size):
x, y = [], []
batch = words[idx:idx+batch_size]
for ii in range(len(batch)):
batch_x = batch[ii]
batch_y = get_target(batch, ii, window_size)
y.extend(batch_y)
x.extend([batch_x]*len(batch_y))
yield x, y
Explanation: Here's a function that returns batches for our network. The idea is that it grabs batch_size words from a words list. Then for each of those words, it gets the target words in the window. I haven't found a way to pass in a random number of target words and get it to work with the architecture, so I make one row per input-target pair. This is a generator function by the way, helps save memory.
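For example, a single batch can be pulled out for inspection (an illustrative call, reusing the subsampled train_words from earlier):
x, y = next(get_batches(train_words, batch_size=4, window_size=5))
# each input word in x is repeated once per target drawn from its window, so len(x) == len(y)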
End of explanation
train_graph = tf.Graph()
with train_graph.as_default():
inputs = tf.placeholder(tf.int32, (None,))
labels = tf.placeholder(tf.int32, (None, None))
Explanation: Building the graph
From Chris McCormick's blog, we can see the general structure of our network.
The input words are passed in as integers. This will go into a hidden layer of linear units, then into a softmax layer. We'll use the softmax layer to make a prediction like normal.
The idea here is to train the hidden layer weight matrix to find efficient representations for our words. We can discard the softmax layer because we don't really care about making predictions with this network. We just want the embedding matrix so we can use it in other networks we build from the dataset.
I'm going to have you build the graph in stages now. First off, creating the inputs and labels placeholders like normal.
Exercise: Assign inputs and labels using tf.placeholder. We're going to be passing in integers, so set the data types to tf.int32. The batches we're passing in will have varying sizes, so set the batch sizes to [None]. To make things work later, you'll need to set the second dimension of labels to None or 1.
End of explanation
n_vocab = len(int_to_vocab)
n_embedding = 256 # Number of embedding features
with train_graph.as_default():
embedding = tf.Variable(tf.random_uniform([n_vocab, n_embedding], -1, 1)) # embedding weight matrix, uniform in [-1, 1]
embed = tf.nn.embedding_lookup(embedding, inputs) # use tf.nn.embedding_lookup to get the hidden layer output
Explanation: Embedding
The embedding matrix has a size of the number of words by the number of units in the hidden layer. So, if you have 10,000 words and 300 hidden units, the matrix will have size $10,000 \times 300$. Remember that we're using tokenized data for our inputs, usually as integers, where the number of tokens is the number of words in our vocabulary.
Exercise: Tensorflow provides a convenient function tf.nn.embedding_lookup that does this lookup for us. You pass in the embedding matrix and a tensor of integers, then it returns rows in the matrix corresponding to those integers. Below, set the number of embedding features you'll use (200 is a good start), create the embedding matrix variable, and use tf.nn.embedding_lookup to get the embedding tensors. For the embedding matrix, I suggest you initialize it with a uniform random numbers between -1 and 1 using tf.random_uniform.
End of explanation
inputs.get_shape().as_list()
embedding.get_shape().as_list()
embed.get_shape().as_list()
# Number of negative labels to sample
n_sampled = 100
with train_graph.as_default():
softmax_w = tf.Variable(tf.truncated_normal([n_vocab, n_embedding], stddev=0.1)) # create softmax weight matrix here
softmax_b = tf.Variable(tf.zeros(n_vocab))# create softmax biases here
# Calculate the loss using negative sampling
loss = tf.nn.sampled_softmax_loss(softmax_w, softmax_b, labels, embed, n_sampled, n_vocab )
cost = tf.reduce_mean(loss)
optimizer = tf.train.AdamOptimizer().minimize(cost)
Explanation: Negative sampling
For every example we give the network, we train it using the output from the softmax layer. That means for each input, we're making very small changes to millions of weights even though we only have one true example. This makes training the network very inefficient. We can approximate the loss from the softmax layer by only updating a small subset of all the weights at once. We'll update the weights for the correct label, but only a small number of incorrect labels. This is called "negative sampling". Tensorflow has a convenient function to do this, tf.nn.sampled_softmax_loss.
Exercise: Below, create weights and biases for the softmax layer. Then, use tf.nn.sampled_softmax_loss to calculate the loss. Be sure to read the documentation to figure out how it works.
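One detail worth noting: tf.nn.sampled_softmax_loss expects labels with shape [batch_size, num_true], which is why labels was declared with a second dimension and why the training loop later feeds np.array(y)[:, None].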
End of explanation
import random
with train_graph.as_default():
## From Thushan Ganegedara's implementation
valid_size = 16 # Random set of words to evaluate similarity on.
valid_window = 100
# pick 8 samples from (0,100) and (1000,1100) each ranges. lower id implies more frequent
valid_examples = np.array(random.sample(range(valid_window), valid_size//2))
valid_examples = np.append(valid_examples,
random.sample(range(1000,1000+valid_window), valid_size//2))
valid_dataset = tf.constant(valid_examples, dtype=tf.int32)
# We use the cosine distance:
norm = tf.sqrt(tf.reduce_sum(tf.square(embedding), 1, keep_dims=True))
normalized_embedding = embedding / norm
valid_embedding = tf.nn.embedding_lookup(normalized_embedding, valid_dataset)
similarity = tf.matmul(valid_embedding, tf.transpose(normalized_embedding))
# If the checkpoints directory doesn't exist:
!mkdir checkpoints
Explanation: Validation
This code is from Thushan Ganegedara's implementation. Here we're going to choose a few common words and a few uncommon words. Then, we'll print out the closest words to them. It's a nice way to check that our embedding table is grouping together words with similar semantic meanings.
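Because the embedding rows are L2-normalized first, the similarity matrix computed in this cell is exactly the cosine similarity between each validation word and every word in the vocabulary.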
End of explanation
epochs = 10
batch_size = 1000
window_size = 10
with train_graph.as_default():
saver = tf.train.Saver()
with tf.Session(graph=train_graph) as sess:
iteration = 1
loss = 0
sess.run(tf.global_variables_initializer())
for e in range(1, epochs+1):
batches = get_batches(train_words, batch_size, window_size)
start = time.time()
for x, y in batches:
feed = {inputs: x,
labels: np.array(y)[:, None]}
train_loss, _ = sess.run([cost, optimizer], feed_dict=feed)
loss += train_loss
if iteration % 100 == 0:
end = time.time()
print("Epoch {}/{}".format(e, epochs),
"Iteration: {}".format(iteration),
"Avg. Training loss: {:.4f}".format(loss/100),
"{:.4f} sec/batch".format((end-start)/100))
loss = 0
start = time.time()
if iteration % 1000 == 0:
## From Thushan Ganegedara's implementation
# note that this is expensive (~20% slowdown if computed every 500 steps)
sim = similarity.eval()
for i in range(valid_size):
valid_word = int_to_vocab[valid_examples[i]]
top_k = 8 # number of nearest neighbors
nearest = (-sim[i, :]).argsort()[1:top_k+1]
log = 'Nearest to %s:' % valid_word
for k in range(top_k):
close_word = int_to_vocab[nearest[k]]
log = '%s %s,' % (log, close_word)
print(log)
iteration += 1
save_path = saver.save(sess, "checkpoints/text8.ckpt")
embed_mat = sess.run(normalized_embedding)
Explanation: Training
Below is the code to train the network. Every 100 batches it reports the training loss. Every 1000 batches, it'll print out the validation words.
End of explanation
with train_graph.as_default():
saver = tf.train.Saver()
with tf.Session(graph=train_graph) as sess:
saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))
embed_mat = sess.run(embedding)
Explanation: Restore the trained network if you need to:
End of explanation
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE
viz_words = 500
tsne = TSNE()
embed_tsne = tsne.fit_transform(embed_mat[:viz_words, :])
fig, ax = plt.subplots(figsize=(14, 14))
for idx in range(viz_words):
plt.scatter(*embed_tsne[idx, :], color='steelblue')
plt.annotate(int_to_vocab[idx], (embed_tsne[idx, 0], embed_tsne[idx, 1]), alpha=0.7)
Explanation: Visualizing the word vectors
Below we'll use T-SNE to visualize how our high-dimensional word vectors cluster together. T-SNE is used to project these vectors into two dimensions while preserving local structure. Check out this post from Christopher Olah to learn more about T-SNE and other ways to visualize high-dimensional data.
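If the default projection looks too clumped or too scattered, scikit-learn's TSNE exposes a few parameters worth experimenting with; the values below are illustrative starting points, not settings from the original notebook.
tsne = TSNE(n_components=2, perplexity=30, init='pca', random_state=0)
embed_tsne = tsne.fit_transform(embed_mat[:viz_words, :])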
End of explanation |
2,594 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2018 The TensorFlow Authors.
Step1: 回帰:燃費を予測する
<table class="tfo-notebook-buttons" align="left">
<td><a target="_blank" href="https
Step2: Auto MPG データセット
このデータセットはUCI Machine Learning Repositoryから入手可能です。
データの取得
まず、データセットをダウンロードします。
Step3: データのクレンジング
このデータセットには、いくつか欠損値があります。
Step4: この最初のチュートリアルでは簡単化のためこれらの行を削除します。
Step5: "Origin" 列はカテゴリであり、数値ではないので、pd.get_dummies でワンホットに変換します。
注意
Step6: データをトレーニング用セットとテスト用セットに分割
次に、データセットをトレーニングセットとテストセットに分割します。モデルの最終評価ではテストセットを使用します。
Step7: データの観察
トレーニング用セットの列のいくつかのペアの同時分布を見てみます。
一番上の行を見ると、燃費 (MPG) が他のすべてのパラメータの関数であることは明らかです。他の行を見ると、それらが互いの関数であることが明らかです。
Step8: 全体の統計値も見てみましょう。
Step9: ラベルと特徴量の分離
ラベル、すなわち目的変数を特徴量から分離します。このラベルは、モデルに予測させたい数量です。
Step10: 正規化
統計の表を見て、それぞれの特徴量の範囲がどれほど違っているかに注目してください。
Step11: スケールや値の範囲が異なる特徴量を正規化するのはよい習慣です。
これが重要な理由の 1 つは、特徴にモデルの重みが掛けられるためです。したがって、出力のスケールと勾配のスケールは、入力のスケールの影響を受けます。
モデルは特徴量の正規化なしで収束する可能性がありますが、正規化によりトレーニングがはるかに安定します。
注意
Step12: 次にデータに .adapt() します。
Step13: これにより、平均と分散が計算され、レイヤーに保存されます。
Step14: レイヤーが呼び出されると、入力データが返され、各特徴は個別に正規化されます。
Step15: 線形回帰
DNN モデルを構築する前に、単一変数および複数変数を使用した線形回帰から始めます。
1 つの変数
単一変数の線形回帰から始めて、Horsepower から MPG を予測します。
tf.keras を使用したモデルのトレーニングは、通常、モデルアーキテクチャを定義することから始まります。ここでは、tf.keras.Sequential モデルを使用します。このモデルは、一連のステップを表します。
単一変数の線形回帰モデルには、次の 2 つのステップがあります。
入力 horsepower を正規化します。
線形変換 ($y = mx+b$) を適用して、layers.Dense を使用して 1 つの出力を生成します。
入力の数は、input_shape 引数により設定できます。また、モデルを初めて実行するときに自動的に設定することもできます。
まず、馬力 Normalization レイヤーを作成します。
Step16: Sequential モデルを作成します。
Step17: このモデルは、Horsepower から MPG を予測します。
トレーニングされていないモデルを最初の 10 の馬力の値で実行します。出力は良くありませんが、期待される形状が (10,1) であることがわかります。
Step18: モデルが構築されたら、Model.compile() メソッドを使用してトレーニング手順を構成します。コンパイルするための最も重要な引数は、loss と optimizer です。これらは、最適化されるもの (mean_absolute_error) とその方法 (optimizers.Adam を使用)を定義するためです。
Step19: トレーニングを構成したら、Model.fit() を使用してトレーニングを実行します。
Step20: history オブジェクトに保存された数値を使ってモデルのトレーニングの様子を可視化します。
Step21: 後で使用するために、テスト用セットの結果を収集します。
Step22: これは単一変数の回帰であるため、入力の関数としてモデルの予測を簡単に確認できます。
Step23: 複数の入力
ほぼ同じ設定を使用して、複数の入力に基づく予測を実行することができます。このモデルでは、$m$ が行列で、$b$ がベクトルですが、同じ $y = mx+b$ を実行します。
ここでは、データセット全体に適合した Normalization レイヤーを使用します。
Step24: 入力のバッチでこのモデルを呼び出すと、各例に対して units=1 出力が生成されます。
Step25: モデルを呼び出すと、その重み行列が作成されます。これで、kernel ($y=mx+b$ の $m$) の形状が (9,1) であることがわかります。
Step26: Keras Model.compile でモデルを構成し、Model.fit で 100 エポックトレーニングします。
Step27: この回帰モデルですべての入力を使用すると、入力が 1 つだけの horsepower_model よりもトレーニングエラーや検証エラーが大幅に低くなります。
Step28: 後で使用するために、テスト用セットの結果を収集します。
Step29: DNN 回帰
前のセクションでは、単一および複数の入力の線形モデルを実装しました。
このセクションでは、単一入力および複数入力の DNN モデルを実装します。コードは基本的に同じですが、モデルが拡張されていくつかの「非表示」の非線形レイヤーが含まれる点が異なります。「非表示」とは、入力または出力に直接接続されていないことを意味します。
これらのモデルには、線形モデルよりも多少多くのレイヤーが含まれます。
前と同じく正規化レイヤー。(単一入力モデルの場合は horsepower_normalizer、複数入力モデルの場合は normalizer を使用)。
relu 非線形性を使用する 2 つの非表示の非線形Dense レイヤー。
線形単一出力レイヤー
どちらも同じトレーニング手順を使用するため、compile メソッドは以下の build_and_compile_model 関数に含まれています。
Step30: DNN と単一入力を使用した回帰
入力 'Horsepower'、正規化レイヤー horsepower_normalizer(前に定義)のみを使用して DNN モデルを作成します。
Step31: このモデルには、線形モデルよりも多少多くのトレーニング可能なレイヤーが含まれます。
Step32: Keras Model.fit を使用してモデルをトレーニングします。
Step33: このモデルは、単一入力の線形 horsepower_model よりもわずかに優れています。
Step34: Horsepower の関数として予測をプロットすると、このモデルが非表示のレイヤーにより提供される非線形性をどのように利用するかがわかります。
Step35: 後で使用するために、テスト用セットの結果を収集します。
Step36: 完全モデル
すべての入力を使用してこのプロセスを繰り返すと、検証データセットの性能がわずかに向上します。
Step37: 後で使用するために、テスト用セットの結果を収集します。
Step38: 性能
すべてのモデルがトレーニングされたので、テスト用セットの性能を確認します。
Step39: これらの結果は、トレーニング中に見られる検証エラーと一致します。
モデルを使った予測
Keras Model.predict を使用して、テストセットの dnn_model で予測を行い、損失を確認します。
Step40: モデルの予測精度は妥当です。
次に、エラー分布を見てみましょう。
Step41: モデルに満足している場合は、後で使用できるように保存します。
Step42: モデルを再度読み込むと、同じ出力が得られます。 | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#@title MIT License
#
# Copyright (c) 2017 François Chollet
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
# DEALINGS IN THE SOFTWARE.
Explanation: Copyright 2018 The TensorFlow Authors.
End of explanation
# Use seaborn for pairplot.
!pip install -q seaborn
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
# Make NumPy printouts easier to read.
np.set_printoptions(precision=3, suppress=True)
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
print(tf.__version__)
Explanation: 回帰:燃費を予測する
<table class="tfo-notebook-buttons" align="left">
<td><a target="_blank" href="https://www.tensorflow.org/tutorials/keras/regression"><img src="https://www.tensorflow.org/images/tf_logo_32px.png">TensorFlow.org で表示</a></td>
<td> <a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ja/tutorials/keras/regression.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png">Google Colab で実行</a> </td>
<td><a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/ja/tutorials/keras/regression.ipynb"> <img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png"> GitHubでソースを表示</a></td>
<td> <a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/ja/tutorials/keras/regression.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png">ノートブックをダウンロード</a> </td>
</table>
回帰問題では、価格や確率といった連続的な値の出力を予測することが目的となります。これは、分類問題の目的が、(たとえば、写真にリンゴが写っているかオレンジが写っているかといった)離散的なラベルを予測することであるのとは対照的です。
このノートブックでは、古典的な Auto MPG データセットを使用し、1970 年代後半から 1980 年台初めの自動車の燃費を予測するモデルを構築します。この目的のため、モデルにはこの時期の多数の自動車の仕様を読み込ませます。仕様には、気筒数、排気量、馬力、重量などが含まれています。
このサンプルではtf.keras APIを使用しています。詳細はこのガイドを参照してください。
End of explanation
url = 'http://archive.ics.uci.edu/ml/machine-learning-databases/auto-mpg/auto-mpg.data'
column_names = ['MPG', 'Cylinders', 'Displacement', 'Horsepower', 'Weight',
'Acceleration', 'Model Year', 'Origin']
raw_dataset = pd.read_csv(url, names=column_names,
na_values='?', comment='\t',
sep=' ', skipinitialspace=True)
dataset = raw_dataset.copy()
dataset.tail()
Explanation: Auto MPG データセット
このデータセットはUCI Machine Learning Repositoryから入手可能です。
データの取得
まず、データセットをダウンロードします。
End of explanation
dataset.isna().sum()
Explanation: データのクレンジング
このデータセットには、いくつか欠損値があります。
End of explanation
dataset = dataset.dropna()
Explanation: この最初のチュートリアルでは簡単化のためこれらの行を削除します。
End of explanation
dataset['Origin'] = dataset['Origin'].map({1: 'USA', 2: 'Europe', 3: 'Japan'})
dataset = pd.get_dummies(dataset, columns=['Origin'], prefix='', prefix_sep='')
dataset.tail()
Explanation: "Origin" 列はカテゴリであり、数値ではないので、pd.get_dummies でワンホットに変換します。
注意: keras.Model を設定して、このような変換を行うことができます。これについては、このチュートリアルでは取り上げません。例については、前処理レイヤーまたは CSV データの読み込みのチュートリアルをご覧ください。
End of explanation
train_dataset = dataset.sample(frac=0.8, random_state=0)
test_dataset = dataset.drop(train_dataset.index)
Explanation: データをトレーニング用セットとテスト用セットに分割
次に、データセットをトレーニングセットとテストセットに分割します。モデルの最終評価ではテストセットを使用します。
End of explanation
sns.pairplot(train_dataset[['MPG', 'Cylinders', 'Displacement', 'Weight']], diag_kind='kde')
Explanation: データの観察
トレーニング用セットの列のいくつかのペアの同時分布を見てみます。
一番上の行を見ると、燃費 (MPG) が他のすべてのパラメータの関数であることは明らかです。他の行を見ると、それらが互いの関数であることが明らかです。
End of explanation
train_dataset.describe().transpose()
Explanation: 全体の統計値も見てみましょう。
End of explanation
train_features = train_dataset.copy()
test_features = test_dataset.copy()
train_labels = train_features.pop('MPG')
test_labels = test_features.pop('MPG')
Explanation: ラベルと特徴量の分離
ラベル、すなわち目的変数を特徴量から分離します。このラベルは、モデルに予測させたい数量です。
End of explanation
train_dataset.describe().transpose()[['mean', 'std']]
Explanation: 正規化
統計の表を見て、それぞれの特徴量の範囲がどれほど違っているかに注目してください。
End of explanation
normalizer = tf.keras.layers.Normalization(axis=-1)
Explanation: スケールや値の範囲が異なる特徴量を正規化するのはよい習慣です。
これが重要な理由の 1 つは、特徴にモデルの重みが掛けられるためです。したがって、出力のスケールと勾配のスケールは、入力のスケールの影響を受けます。
モデルは特徴量の正規化なしで収束する可能性がありますが、正規化によりトレーニングがはるかに安定します。
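For reference, the Normalization layer used below essentially applies a per-feature z-score; the following one-line pandas sketch of the same idea is illustrative only (it assumes the train_features DataFrame defined earlier and is not part of the tutorial).
normalized_features = (train_features - train_features.mean()) / train_features.std()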
注意: ここでは、簡単にするため実行しますが、ワンホット特徴を正規化する利点はありません。前処理レイヤーの使用方法の詳細については、前処理レイヤーの使用ガイドと Keras 前処理レイヤーを使用した構造化データの分類チュートリアルを参照してください。
正規化レイヤー
preprocessing.Normalization レイヤーは、その前処理をモデルに組み込むためのクリーンでシンプルな方法です。
まず、レイヤーを作成します。
End of explanation
normalizer.adapt(np.array(train_features))
Explanation: 次にデータに .adapt() します。
End of explanation
print(normalizer.mean.numpy())
Explanation: これにより、平均と分散が計算され、レイヤーに保存されます。
End of explanation
first = np.array(train_features[:1])
with np.printoptions(precision=2, suppress=True):
print('First example:', first)
print()
print('Normalized:', normalizer(first).numpy())
Explanation: レイヤーが呼び出されると、入力データが返され、各特徴は個別に正規化されます。
End of explanation
horsepower = np.array(train_features['Horsepower'])
horsepower_normalizer = layers.Normalization(input_shape=[1,], axis=None)
horsepower_normalizer.adapt(horsepower)
Explanation: 線形回帰
DNN モデルを構築する前に、単一変数および複数変数を使用した線形回帰から始めます。
1 つの変数
単一変数の線形回帰から始めて、Horsepower から MPG を予測します。
tf.keras を使用したモデルのトレーニングは、通常、モデルアーキテクチャを定義することから始まります。ここでは、tf.keras.Sequential モデルを使用します。このモデルは、一連のステップを表します。
単一変数の線形回帰モデルには、次の 2 つのステップがあります。
入力 horsepower を正規化します。
線形変換 ($y = mx+b$) を適用して、layers.Dense を使用して 1 つの出力を生成します。
入力の数は、input_shape 引数により設定できます。また、モデルを初めて実行するときに自動的に設定することもできます。
まず、馬力 Normalization レイヤーを作成します。
End of explanation
horsepower_model = tf.keras.Sequential([
horsepower_normalizer,
layers.Dense(units=1)
])
horsepower_model.summary()
Explanation: Sequential モデルを作成します。
End of explanation
horsepower_model.predict(horsepower[:10])
Explanation: このモデルは、Horsepower から MPG を予測します。
トレーニングされていないモデルを最初の 10 の馬力の値で実行します。出力は良くありませんが、期待される形状が (10,1) であることがわかります。
End of explanation
horsepower_model.compile(
optimizer=tf.optimizers.Adam(learning_rate=0.1),
loss='mean_absolute_error')
Explanation: モデルが構築されたら、Model.compile() メソッドを使用してトレーニング手順を構成します。コンパイルするための最も重要な引数は、loss と optimizer です。これらは、最適化されるもの (mean_absolute_error) とその方法 (optimizers.Adam を使用)を定義するためです。
End of explanation
%%time
history = horsepower_model.fit(
train_features['Horsepower'],
train_labels,
epochs=100,
# Suppress logging.
verbose=0,
# Calculate validation results on 20% of the training data.
validation_split = 0.2)
Explanation: トレーニングを構成したら、Model.fit() を使用してトレーニングを実行します。
End of explanation
hist = pd.DataFrame(history.history)
hist['epoch'] = history.epoch
hist.tail()
def plot_loss(history):
plt.plot(history.history['loss'], label='loss')
plt.plot(history.history['val_loss'], label='val_loss')
plt.ylim([0, 10])
plt.xlabel('Epoch')
plt.ylabel('Error [MPG]')
plt.legend()
plt.grid(True)
plot_loss(history)
Explanation: history オブジェクトに保存された数値を使ってモデルのトレーニングの様子を可視化します。
End of explanation
test_results = {}
test_results['horsepower_model'] = horsepower_model.evaluate(
test_features['Horsepower'],
test_labels, verbose=0)
Explanation: 後で使用するために、テスト用セットの結果を収集します。
End of explanation
x = tf.linspace(0.0, 250, 251)
y = horsepower_model.predict(x)
def plot_horsepower(x, y):
plt.scatter(train_features['Horsepower'], train_labels, label='Data')
plt.plot(x, y, color='k', label='Predictions')
plt.xlabel('Horsepower')
plt.ylabel('MPG')
plt.legend()
plot_horsepower(x, y)
Explanation: これは単一変数の回帰であるため、入力の関数としてモデルの予測を簡単に確認できます。
End of explanation
linear_model = tf.keras.Sequential([
normalizer,
layers.Dense(units=1)
])
Explanation: 複数の入力
ほぼ同じ設定を使用して、複数の入力に基づく予測を実行することができます。このモデルでは、$m$ が行列で、$b$ がベクトルですが、同じ $y = mx+b$ を実行します。
ここでは、データセット全体に適合した Normalization レイヤーを使用します。
End of explanation
linear_model.predict(train_features[:10])
Explanation: 入力のバッチでこのモデルを呼び出すと、各例に対して units=1 出力が生成されます。
End of explanation
linear_model.layers[1].kernel
Explanation: モデルを呼び出すと、その重み行列が作成されます。これで、kernel ($y=mx+b$ の $m$) の形状が (9,1) であることがわかります。
End of explanation
linear_model.compile(
optimizer=tf.optimizers.Adam(learning_rate=0.1),
loss='mean_absolute_error')
%%time
history = linear_model.fit(
train_features,
train_labels,
epochs=100,
# Suppress logging.
verbose=0,
# Calculate validation results on 20% of the training data.
validation_split = 0.2)
Explanation: Keras Model.compile でモデルを構成し、Model.fit で 100 エポックトレーニングします。
End of explanation
plot_loss(history)
Explanation: この回帰モデルですべての入力を使用すると、入力が 1 つだけの horsepower_model よりもトレーニングエラーや検証エラーが大幅に低くなります。
End of explanation
test_results['linear_model'] = linear_model.evaluate(
test_features, test_labels, verbose=0)
Explanation: 後で使用するために、テスト用セットの結果を収集します。
End of explanation
def build_and_compile_model(norm):
model = keras.Sequential([
norm,
layers.Dense(64, activation='relu'),
layers.Dense(64, activation='relu'),
layers.Dense(1)
])
model.compile(loss='mean_absolute_error',
optimizer=tf.keras.optimizers.Adam(0.001))
return model
Explanation: DNN 回帰
前のセクションでは、単一および複数の入力の線形モデルを実装しました。
このセクションでは、単一入力および複数入力の DNN モデルを実装します。コードは基本的に同じですが、モデルが拡張されていくつかの「非表示」の非線形レイヤーが含まれる点が異なります。「非表示」とは、入力または出力に直接接続されていないことを意味します。
これらのモデルには、線形モデルよりも多少多くのレイヤーが含まれます。
前と同じく正規化レイヤー。(単一入力モデルの場合は horsepower_normalizer、複数入力モデルの場合は normalizer を使用)。
relu 非線形性を使用する 2 つの非表示の非線形Dense レイヤー。
線形単一出力レイヤー
どちらも同じトレーニング手順を使用するため、compile メソッドは以下の build_and_compile_model 関数に含まれています。
End of explanation
dnn_horsepower_model = build_and_compile_model(horsepower_normalizer)
Explanation: DNN と単一入力を使用した回帰
入力 'Horsepower'、正規化レイヤー horsepower_normalizer(前に定義)のみを使用して DNN モデルを作成します。
End of explanation
dnn_horsepower_model.summary()
Explanation: このモデルには、線形モデルよりも多少多くのトレーニング可能なレイヤーが含まれます。
End of explanation
%%time
history = dnn_horsepower_model.fit(
train_features['Horsepower'],
train_labels,
validation_split=0.2,
verbose=0, epochs=100)
Explanation: Keras Model.fit を使用してモデルをトレーニングします。
End of explanation
plot_loss(history)
Explanation: このモデルは、単一入力の線形 horsepower_model よりもわずかに優れています。
End of explanation
x = tf.linspace(0.0, 250, 251)
y = dnn_horsepower_model.predict(x)
plot_horsepower(x, y)
Explanation: Horsepower の関数として予測をプロットすると、このモデルが非表示のレイヤーにより提供される非線形性をどのように利用するかがわかります。
End of explanation
test_results['dnn_horsepower_model'] = dnn_horsepower_model.evaluate(
test_features['Horsepower'], test_labels,
verbose=0)
Explanation: 後で使用するために、テスト用セットの結果を収集します。
End of explanation
dnn_model = build_and_compile_model(normalizer)
dnn_model.summary()
%%time
history = dnn_model.fit(
train_features,
train_labels,
validation_split=0.2,
verbose=0, epochs=100)
plot_loss(history)
Explanation: 完全モデル
すべての入力を使用してこのプロセスを繰り返すと、検証データセットの性能がわずかに向上します。
End of explanation
test_results['dnn_model'] = dnn_model.evaluate(test_features, test_labels, verbose=0)
Explanation: 後で使用するために、テスト用セットの結果を収集します。
End of explanation
pd.DataFrame(test_results, index=['Mean absolute error [MPG]']).T
Explanation: 性能
すべてのモデルがトレーニングされたので、テスト用セットの性能を確認します。
End of explanation
test_predictions = dnn_model.predict(test_features).flatten()
a = plt.axes(aspect='equal')
plt.scatter(test_labels, test_predictions)
plt.xlabel('True Values [MPG]')
plt.ylabel('Predictions [MPG]')
lims = [0, 50]
plt.xlim(lims)
plt.ylim(lims)
_ = plt.plot(lims, lims)
Explanation: これらの結果は、トレーニング中に見られる検証エラーと一致します。
モデルを使った予測
Keras Model.predict を使用して、テストセットの dnn_model で予測を行い、損失を確認します。
End of explanation
error = test_predictions - test_labels
plt.hist(error, bins=25)
plt.xlabel('Prediction Error [MPG]')
_ = plt.ylabel('Count')
Explanation: モデルの予測精度は妥当です。
次に、エラー分布を見てみましょう。
End of explanation
dnn_model.save('dnn_model')
Explanation: モデルに満足している場合は、後で使用できるように保存します。
End of explanation
reloaded = tf.keras.models.load_model('dnn_model')
test_results['reloaded'] = reloaded.evaluate(
test_features, test_labels, verbose=0)
pd.DataFrame(test_results, index=['Mean absolute error [MPG]']).T
Explanation: モデルを再度読み込むと、同じ出力が得られます。
End of explanation |
2,595 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step4: Copyright 2019 Google LLC.
Licensed under the Apache License, Version 2.0 (the "License");
Step5: Fashion MNIST Classifier Using Keras
The code below was presented during the practical part of the Google Elevate class about Machine Learning.
If your notebook is marked as "Read only" or is opened in Playground mode, please make a copy in Drive (see the "File" menu).
If you opened the notebook from Google Classroom, it should be editable already.
Run the cells one by one to download the dataset, train the model and inspect some predictions.
Initialize
Step6: Download and Inspect the Data
Step7: Define the Model
Step8: Train the Model
Step9: Inspect some Predictions
Step12: Check your understanding
Step14: Answer
Step16: Quiz ML.2
Visualize the training example #5 that you have extracted above
Step18: Answer
Step20: Answer
Step21: Now compute the accuracy on the first 10 testing examples
Step22: Answer
Step24: Quiz ML.5
Create a constant predictor model that always predicts the same class no matter what input data is provided, and make it compatible with model.predict().
Hint: Try looking at the shape of the tensors returned by model.predict() using the .shape attribute to better understand the shape of the data its predict() function receives and returns. The constant predictor should return data of the same shape, but constructed from constants. You may need to check the NumPy or TensorFlow documentation to find out how to create a constant of a given shape.
Step25: Now compute the accuracy of the constant model on the first 10 test examples
Step29: Answer | Python Code:
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# This is helper cell that defines code checking (testing) function Check()
# Please run it once, but you do not need to understand it.
!pip install prog_edu_assistant_tools
import re
import sys
import jinja2
from IPython.core import display
from google.colab import _message as google_message
from prog_edu_assistant_tools.magics import report, autotest, CaptureOutput
def GetNotebook():
Downloads the ipynb source of Colab notebook
notebook = google_message.blocking_request(
"get_ipynb", request="", timeout_sec=120)["ipynb"]
return notebook
def RunInlineTests(submission_source, inlinetests):
Runs an inline test.
errors = []
for test_name, test_source in inlinetests.items():
#print(f'Running inline test {test_name}:\n{test_source}', file=sys.stderr)
with CaptureOutput() as (stdout, stderr):
try:
env = {}
exec(submission_source, globals(), env)
exec(test_source, globals(), env)
except AssertionError as e:
errors.append(str(e))
if len(stderr.getvalue()) > 0:
errors.append('STDERR:' + stderr.getvalue())
if len(errors) > 0:
results = {'passed': False, 'error': '\n'.join(errors)}
else:
results = {'passed': True}
template_source =
<h4 style='color: #387;'>Your submission</h4>
<pre style='background: #F0F0F0; padding: 3pt; margin: 4pt; border: 1pt solid #DDD; border-radius: 3pt;'>{{ formatted_source }}</pre>
<h4 style='color: #387;'>Results</h4>
{% if 'passed' in results and results['passed'] %}
✅
Looks OK.
{% elif 'error' in results %}
❌
{{results['error'] | e}}
{% else %}
❌ Something is wrong.
{% endif %}
template = jinja2.Template(template_source)
html = template.render(formatted_source=submission_source, results=results)
return html
def Check(exercise_id):
Checks one exercise against embedded inline tests.
def _get_exercise_id(cell):
if 'metadata' in cell and 'exercise_id' in cell['metadata']:
return cell['metadata']['exercise_id']
if 'source' not in cell or 'cell_type' not in cell or cell['cell_type'] != 'code':
return None
source = ''.join(cell['source'])
m = re.search('(?m)^# *EXERCISE_ID: [\'"]?([a-zA-Z0-9_.-]*)[\'"]? *\n', source)
if m:
return m.group(1)
return None
notebook = GetNotebook()
# 1. Find the first cell with specified exercise ID.
found = False
for (i, cell) in enumerate(notebook['cells']):
if _get_exercise_id(cell) == exercise_id:
found = True
break
if not found:
raise Exception(f'exercise {exercise_id} not found')
submission_source = ''.join(cell['source']) # extract the submission cell
submission_source = re.sub(r'^%%(solution|submission)[ \t]*\n', '', submission_source) # cut %%solution magic
inlinetests = {}
if 'metadata' in cell and 'inlinetests' in cell['metadata']:
inlinetests = cell['metadata']['inlinetests']
if len(inlinetests) == 0:
j = i+1
# 2. If inline tests were not present in metadata, find the inline tests
# that follow this exercise ID.
while j < len(notebook['cells']):
cell = notebook['cells'][j]
if 'source' not in cell or 'cell_type' not in cell or cell['cell_type'] != 'code':
j += 1
continue
id = _get_exercise_id(cell)
source = ''.join(cell['source'])
if id == exercise_id:
# 3. Pick the last marked cell as submission cell.
submission_source = source # extract the submission cell
submission_source = re.sub(r'^%%(solution|submission)[ \t]*\n', '', submission_source) # cut %%solution magic
j += 1
continue
m = re.match(r'^%%inlinetest[ \t]*([a-zA-Z0-9_]*)[ \t]*\n', source)
if m:
test_name = m.group(1)
test_source = source[m.end(0):] # cut %%inlinetest magic
# 2a. Store the inline test.
inlinetests[test_name] = test_source
if id is not None and id != exercise_id:
# 4. Stop at the next exercise_id.
break
j += 1
html = RunInlineTests(submission_source, inlinetests)
return display.HTML(html)
# MASTER ONLY
%load_ext prog_edu_assistant_tools.magics
from prog_edu_assistant_tools.magics import report, autotest, CaptureOutput
Explanation: Copyright 2019 Google LLC.
Licensed under the Apache License, Version 2.0 (the "License");
End of explanation
# Some useful libraries.
import itertools
from matplotlib import pyplot as plt
import numpy as np
# Make an explicit choice of Tensorflow 2.0. Note that many of the tutorials
# on the interner may use 1.0 that has different API.
%tensorflow_version 2.x
import tensorflow as tf
print(f'TF version={tf.__version__}')
# Do you want to train with a GPU? Simply "Change runtime type" in the "Runtime"
# menu and re-execte the cells.
print(f'GPUS={tf.config.list_physical_devices("GPU")}')
Explanation: Fashion MNIST Classifier Using Keras
The code below was presented during the practical part of the Google Elevate class about Machine Learning.
If your notebook is marked as "Read only" or is opened in Playground mode, please make a copy in Drive (see the "File" menu).
If you opened the notebook from Google Classroom, it should be editable already.
Run the cells one by one to download the dataset, train the model and inspect some predictions.
Initialize
End of explanation
# Download MNIST dataset from the web.
# See more datasets on https://www.tensorflow.org/datasets/datasets
import tensorflow_datasets as tfds
ds, info = tfds.load('fashion_mnist', with_info=True)
info
# Datasets are iterables. Let's fetch the first example:
for example in ds['train']:
break
example.keys()
print(f'Image shape={example["image"].shape}')
# The image has a last dimension of `1` because it contains a single grayscale
# channel. Make the shape (28, 28) for drawing with Matplotlib.
plt.matshow(example['image'][:,:,0], cmap='Greys')
# The label is specified as a number.
print(example['label'])
# Use .numpy() to extract the number.
print(example['label'].numpy())
# The additional data in `info` lets us convert that number to a string label.
info.features['label'].names
# Let's plot some more examples...
rows, cols = 2, 8
plt.figure(figsize=(1.5*cols, 1.5*rows))
for i, example in enumerate(itertools.islice(ds['train'], rows*cols)):
plt.subplot(rows, cols, i + 1)
plt.xticks([])
plt.yticks([])
plt.grid(False)
plt.imshow(example['image'].numpy().reshape((28, 28)), cmap=plt.cm.binary)
label_index = example['label'].numpy()
plt.xlabel(info.features['label'].names[label_index])
Explanation: Download and Inspect the Data
End of explanation
# Define the model; see presentation for additional explanations.
model = tf.keras.Sequential([
# We need to specify the input shape of the first layer only.
tf.keras.layers.Flatten(input_shape=(28, 28, 1)),
tf.keras.layers.Dense(128, activation='relu'),
tf.keras.layers.Dense(10, activation='softmax')
])
model.compile(optimizer='adam',
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
model.summary()
Explanation: Define the Model
End of explanation
# Keras .fit() function expects dataset containing (data, label).
# This function maps the dictionary from tfds to this expected tuple.
def map_features(example):
return (
example['image'],
example['label'],
)
# Train the model with batches of size 128.
train_ds = ds['train'].map(map_features).batch(128)
model.fit(train_ds, epochs=2)
# Note: you can repeat this step with more epochs to further train the model.
Explanation: Train the Model
End of explanation
# Show predictions with their "confidence" (note that this confidence is simply
# the value of the largest activation in the output layer and this is by no
# means calibrated, sometimes you will even see predictions with ~100%
# "confidence" that are still wrong...)
# Incorrect predictions are shown with red text.
rows, cols = 2, 8
plt.figure(figsize=(1.5*cols, 1.5*rows))
examples = []
for i, example in enumerate(itertools.islice(ds['test'], rows*cols)):
examples.append(example)
plt.subplot(rows, cols, i + 1)
plt.xticks([])
plt.yticks([])
plt.grid(False)
plt.imshow(example['image'].numpy().reshape((28, 28)), cmap=plt.cm.binary)
# tf.expand_dims() changes the shape (28, 28, 1) -> (1, 28, 28, 1)
# This is needed because the model expects a batch of images (batch
# dimension is the first dimension).
prediction = model.predict(tf.expand_dims(example['image'], 0))[0]
predicted_index = prediction.argmax()
predicted_name = info.features['label'].names[predicted_index]
label_index = example['label']
predicted_pct = int(100 * prediction.max())
fmt_kwargs = {}
if example['label'].numpy() != predicted_index:
fmt_kwargs['color'] = 'red'
plt.xlabel(f'{predicted_name} ({predicted_pct}%)', **fmt_kwargs)
# Inspect an individual example from above.
example = examples[2]
pred = model.predict(tf.expand_dims(example['image'], 0))[0]
plt.bar(range(len(pred)), pred)
plt.xticks(range(len(pred)), info.features['label'].names)
plt.setp(plt.gca().get_xticklabels(), rotation=45, horizontalalignment='right')
print('Prediction={} Label={}'.format(
info.features['label'].names[pred.argmax()],
info.features['label'].names[example['label']],
))
Explanation: Inspect some Predictions
End of explanation
%%solution
# EXERCISE_ID: exercise_ml_0
def num_training_examples(dataset):
# BEGIN SOLUTION
return len(dataset['train'])
# END SOLUTION
# BEGIN PROMPT
return ...
# END PROMPT
def num_testing_examples(dataset):
# BEGIN SOLUTION
return len(dataset['test'])
# END SOLUTION
# BEGIN PROMPT
return ...
# END PROMPT
%%inlinetest InlineTest_ml0
assert 'num_training_examples' in globals(), f"Have you defined the function 'num_training_examples'?"
assert str(num_training_examples.__class__) == "<class 'function'>", f"Have you defined a function 'num_training_examples'? Found a {num_training_examples.__class__} instead"
import tensorflow_datasets as tfds
_ds, info = tfds.load('fashion_mnist', with_info=True)
try:
ans = num_training_examples(_ds)
assert ans == 60000, f"Your function 'num_training_examples' returns {ans}, while the expected answer is 60000"
except AssertionError as e:
raise e
except Exception as e:
assert False, f"Your function 'num_training_examples' does not accept the dataset dictionary `ds` and raises an exception: {e}. Please try to pass `ds` to your function."
try:
ans = num_testing_examples(_ds)
assert ans == 10000, f"Your function 'num_testing_examples' returns {ans}, while the expected answer is 10000"
except AssertionError as e:
raise e
except Exception as e:
assert False, f"Your function 'num_testing_examples' does not accept the dataset dictionary `ds` and raises an exception: {e}. Please try to pass `ds` to your function."
# A dummy dataset for testing.
# TODO(salikh): Maybe create an actual dataset for more realistic testing? https://www.tensorflow.org/datasets/add_dataset
_small_ds = {'train': [1,2,3], 'test': [1,2]}
try:
ans = num_training_examples(_small_ds)
assert ans != 60000, "Are you counting the examples in global dataset instead of the passed one?"
assert ans != 2, "Are you counting training and testing examples correctly? It seems they are mixed up"
assert ans == len(_small_ds['train']), "Are you counting the training examples correctly?"
except AssertionError as e:
raise e
except Exception as e:
assert False, f"Your function 'num_training_examples' does not accept an arbitrary dataset and raises an exception: {e}."
try:
ans = num_testing_examples(_small_ds)
assert ans != 10000, "Are you counting the examples in global dataset instead of the passed one?"
assert ans != 3, "Are you counting training and testing examples correctly? It seems they are mixed up"
assert ans == len(_small_ds['test']), "Are you counting the testing examples correctly?"
except AssertionError as e:
raise e
except Exception as e:
assert False, f"Your function 'num_testing_examples' does not accept an arbitrary dataset and raises an exception: {e}."
result, log = %autotest InlineTest_ml0
print(result.results)
report(InlineTest_ml0, results=result.results, source=submission_source.source)
# Run this cell to check your solution.
# If you get an error 'Check not defined', make sure you have run all preceding
# cells once (Runtime -> Run before)
Check('exercise_ml_0')
%%submission
def num_training_examples(dataset):
return 60000
result, log = %autotest InlineTest_ml0
print(result.results)
report(InlineTest_ml0, results=result.results, source=submission_source.source)
num_training = num_training_examples(ds)
num_testing = num_testing_examples(ds)
print(f"{num_training} training examples, {num_testing} testing examples")
Explanation: Check your understanding: simple quizzes
The quizzes were designed not to require looking up any external API documentation, so understanding the material above and the basics of Python should be sufficient. Please review the material above to look for hints.
Quiz ML.0
How many training and testing examples does the dataset have? Complete the function definitions below to return the number. Make sure the functions accept
the dataset variable ds that you have defined above in the notebook.
Hint: The dataset is a dictionary in which each value is an iterable of examples. Python has a standard function len for computing the length of lists and other sized collections.
End of explanation
%%solution
# EXERCISE_ID: exercise_ml_1
# BEGIN SOLUTION
example5 = list(itertools.islice(ds['train'], 6))[5]
# END SOLUTION
# BEGIN PROMPT
example5 = ...
# END PROMPT
%%inlinetest InlineTest_ml1
assert 'example5' in globals(), f"Have you defined the variable 'example5'?"
assert str(example5.__class__) == "<class 'dict'>", f"Have you defined a variable 'example5' to contain an example (dictionary)? Found a {example5.__class__} instead"
assert ('image' in example5) and ('label' in example5), f"Have you assigned an example to the variable 'example5'? Example is a dictionary with keys 'image' and 'label'."
assert example5['label'] != 1, f"Have you extracted a #5 example from testing dataset? Quiz asked for training example. Or you may have off-by-one error."
assert example5['label'] != 2, f"Have you extracted an example #5 counting from 0? You may have off-by-one error."
assert example5['label'] == 9, f"Have you extracted an example #5 from training data set counting from 0? Expected to see an Ankle boot, but got {info.features['label'].names[example5['label']]}"
result, log = %autotest InlineTest_ml1
print(result.results)
report(InlineTest_ml1, results=result.results, source=submission_source.source)
# Run this cell to check your solution.
# If you get an error 'Check not defined', make sure you have run all preceding
# cells once (Runtime -> Run before)
Check('exercise_ml_1')
%%submission
# Incorrect submission
example5 = 1.618
# Test the incorrect submission against an autograder test
result, log = %autotest InlineTest_ml1
print(result.results)
report(InlineTest_ml1, results=result.results, source=submission_source.source)
Explanation: Answer: Please check that the above cell prints 60000 training examples, 10000 testing examples.
Quiz ML.1
Extract the training example number 5 (counting from 0) and store it into a variable example5.
Hint: Remember that you have loaded the dataset into the variables ds and info. Scroll back and re-read the cells above to see some fragments of code dealing with examples. You may also find the function itertools.islice useful,
but it is not required.
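If itertools.islice is new to you, here is a one-line illustration (not part of the quiz):
list(itertools.islice(range(100), 6))[5]   # -> 5: islice lazily yields the first 6 items, [5] picks the last of them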
End of explanation
%%solution
# EXERCISE_ID: exercise_ml_2
def VisualizeExample(example):
# BEGIN SOLUTION
plt.matshow(np.squeeze(example['image']), cmap=plt.cm.binary)
plt.xlabel(info.features['label'].names[example5['label']])
plt.show()
# END SOLUTION
# BEGIN PROMPT
label_index = ...
label = info.features['label'].names[label_index]
plt.matshow(...)
plt.xlabel(label)
plt.show()
# END PROMPT
VisualizeExample(example5)
Explanation: Quiz ML.2
Visualize the training example #5 that you have extracted above:
Plot the image of the example
Print the label of the example (or include the label into the plot)
Complete the definition of the function VisualizeExample below and confirm that the function works with the example5 that you have extracted above.
Hint: The mapping of label class indices and human-readable class names is in info.features['label'].names. Visualization of image data can be done with plt.matshow or plt.imshow. You may want to use a grayscape colormap in visualization.
End of explanation
%%solution
# EXERCISE_ID: exercise_ml_3
def PredictExample(model, example):
# BEGIN SOLUTION
prediction = model.predict(tf.expand_dims(example5['image'], 0))[0]
predicted_index = prediction.argmax()
return info.features['label'].names[predicted_index]
# END SOLUTION
# BEGIN PROMPT
prediction = model.predict(...)
predicted_index = ...
return info.features['label'].names[predicted_index]
# END PROMPT
%%inlinetest InlineTest_ml3
assert 'PredictExample' in globals(), f"Have you defined a function 'PredictExample'?"
assert str(PredictExample.__class__) == "<class 'function'>", f"Have you defined a function 'PredictExample'? Found a {PredictExample.__class__} instead"
if 'ds' not in globals():
import tensorflow_datasets as tfds
import itertools
ds, info = tfds.load('fashion_mnist', with_info=True)
if 'model' not in globals():
model = tf.keras.Sequential([
# We need to specify the input shape of the first layer only.
tf.keras.layers.Flatten(input_shape=(28, 28, 1)),
tf.keras.layers.Dense(128, activation='relu'),
tf.keras.layers.Dense(10, activation='softmax')
])
model.compile(optimizer='adam',
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
def map_features(example):
return (
example['image'],
example['label'],
)
# Train the model with batches of size 128.
train_ds = ds['train'].map(map_features).batch(128)
model.fit(train_ds, epochs=1)
assert PredictExample(model, list(itertools.islice(ds['train'], 6))[5]) in info.features['label'].names, "Did you return a human-readable label?"
# Run this cell to check your solution.
# If you get an error 'Check not defined', make sure you have run all preceding
# cells once (Runtime -> Run before)
Check('exercise_ml_3')
print(PredictExample(model, example5))
Explanation: Answer: Please check that you see a picture of a boot, and see a label 'Ankle boot'.
Quiz ML.3
Run the model that was trained above (model) giving it the example number 5 (example5) as an input and print the predicted label. Most of the time, you should see a correct prediction.
Complete the definition of the function PredictExample() below and verify that it works with example5.
End of explanation
%%solution
# EXERCISE_ID: exercise_ml_4
def Accuracy(model, examples):
total = 0.0
correct = 0.0
for example in examples:
# BEGIN SOLUTION
prediction = model.predict(tf.expand_dims(example['image'], 0))[0]
predicted_index = prediction.argmax()
label = example['label'].numpy()
total += 1
correct += 1 if predicted_index == label else 0
# END SOLUTION
# BEGIN PROMPT
prediction = model.predict(tf.expand_dims(example['image'], 0))[0]
predicted_index = ...
label_index = example['label'].numpy()
total += 1
correct += 1 if ... else 0
# END PROMPT
return correct/total
Explanation: Answer: You should see the label of one of the classes, and most of the time it should be the correct class Ankle boot.
Quiz ML.4
Create a function that takes a test dataset (an iterable of examples) and a model as input and returns the accuracy of the model on that dataset.
End of explanation
print(Accuracy(model, itertools.islice(ds['test'], 10)))
Explanation: Now compute the accuracy on the first 10 testing examples:
End of explanation
rows, cols = 1, 10
plt.figure(figsize=(1.5*cols, 1.5*rows))
examples = []
for i, example in enumerate(itertools.islice(ds['test'], rows*cols)):
examples.append(example)
plt.subplot(rows, cols, i + 1)
plt.xticks([])
plt.yticks([])
plt.grid(False)
plt.imshow(example['image'].numpy().reshape((28, 28)), cmap=plt.cm.binary)
# tf.expand_dims() changes the shape (28, 28, 1) -> (1, 28, 28, 1)
# This is needed because the model expects a batch of images (batch
# dimension is the first dimension).
prediction = model.predict(tf.expand_dims(example['image'], 0))[0]
predicted_index = prediction.argmax()
predicted_name = info.features['label'].names[predicted_index]
label_index = example['label']
predicted_pct = int(100 * prediction.max())
fmt_kwargs = {}
true_index = example['label'].numpy()
true_name = info.features['label'].names[true_index]
if true_index != predicted_index:
fmt_kwargs['color'] = 'red'
plt.xlabel(f'{true_name} ->\n{predicted_name} ({predicted_pct}%)', **fmt_kwargs)
else:
plt.xlabel(f'{predicted_name} ({predicted_pct}%)', **fmt_kwargs)
Explanation: Answer: Visualize the first 10 examples from the test set with their true labels and predictions, and confirm that your accuracy computation above is correct. The number above should match the fraction of correct examples (incorrect examples are marked in red), so if the accuracy was 0.8, then you should see 8 correctly predicted examples and 2 incorrect (red) ones.
End of explanation
%%solution
# EXERCISE_ID: exercise_ml5
class ConstantPredictor:
def __init__(self, num_classes, output):
# Total number of classes. This is necessary to define the output shape.
self.num_classes = num_classes
# The index of the class that this predictor will always return
self.output = output
def predict(self, input_tensor):
# BEGIN SOLUTION
batch_size = input_tensor.shape[0]
ans = np.zeros((batch_size, self.num_classes))
ans[:, self.output] = 1.0  # mark the constant class for every example in the batch, not just the first one
return ans
# END SOLUTION
# BEGIN PROMPT
return ...
# END PROMPT
%%inlinetest InlineTest_ml5
assert 'ConstantPredictor' in globals(), f"Did you define the class 'ConstantPredictor'?"
assert str(ConstantPredictor.__class__) == "<class 'type'>", f"Did you define 'ConstantPredictor' as class? Got {str(ConstantPredictor.__class__)} instead."
#assert str(ConstantPredictor.predict.__class__) == 'function', f"Did you implement 'predict' method in ConstantPredictor class?"
_predictor = ConstantPredictor(5, 3)
_input = tf.constant(0.0, shape=(1,28,28,1), dtype=float)
ans = _predictor.predict(_input)
try:
ans.shape
except AttributeError:
assert False, f"Did you return a tensor (or numpy array)? Got {str(ans.__class__)} instead"
assert len(ans.shape) == 2, f"Did you return a constant tensor of a right shape? Expected shape is (batch_size, num_classes), but got {ans.shape} when giving input of shape {_input.shape}"
assert ans.shape[0] == 1, f"Did you return a constant tensor of a right shape? Expected shape is (batch_size, num_classes), but got {ans.shape} when giving input of shape {_input.shape}"
assert ans.shape[1] == 5, f"Did you return a constant tensor of a right shape? Expected shape is (batch_size, num_classes), but got {ans.shape} when giving input of shape {_input.shape}"
assert ans.argmax() == 3, f"Did you set the prediction of the output class to 1? Got prediction of the wrong class"
# Run this to check your answer.
# If you see an error about 'Check' not defined, pelase make sure you have run
# cells at the top of the notebook, e.g. by Runtime -> Run above.
Check('exercise_ml5')
%%submission
class ConstantPredictor:
def __init__(self, num_classes, output):
# Total number of classes. This is necessary to define the output shape.
self.num_classes = num_classes
# The index of the class that this predictor will always return
self.output = output
def predict(self, input_tensor):
return self.output
result, log = %autotest InlineTest_ml5
print(result.results)
report(InlineTest_ml5, results=result.results, source=submission_source.source)
# Find out the index of 'Coat' class.
label_indices = {v: k for k, v in enumerate(info.features['label'].names)}
coat_index = label_indices['Coat']
# Create a constant predictor that always returns 'Coat' answer.
const_model = ConstantPredictor(info.features['label'].num_classes, coat_index)
Explanation: Quiz ML.5
Create a constant predictor model that always predicts the same class no matter what input data is provided, and make it compatible with model.predict().
Hint: Try looking at the shape of the tensors returned by model.predict() using the .shape attribute to better understand the shape of the data its predict() function receives and returns. The constant predictor should return data of the same shape, but constructed from constants. You may need to check the NumPy or TensorFlow documentation to find out how to create a constant of a given shape.
End of explanation
Accuracy(const_model, itertools.islice(ds['test'], 10))
Explanation: Now compute the accuracy of the constant model on the first 10 test examples:
End of explanation
%%solution
# EXERCISE_ID: exercise_ml6
def PrecisionAndRecall(model, examples, target_index):
# BEGIN SOLUTION
positive_count = 0
predicted_count = 0
correct_count = 0
for example in examples:
prediction = model.predict(tf.expand_dims(example['image'], 0))[0]
predicted_index = prediction.argmax()
#print('predicted', predicted_index)
true_index = example['label'].numpy()
#print('true', true_index)
if predicted_index == target_index:
predicted_count += 1
if predicted_index == true_index:
correct_count += 1
if true_index == target_index:
positive_count += 1
precision = correct_count/predicted_count if predicted_count > 0 else 0.0
recall = correct_count/positive_count if positive_count > 0 else 0.0
return precision, recall
# END SOLUTION
# BEGIN PROMPT
precision = ...
recall = ...
return precision, recall
# END PROMPT
# MASTER ONLY
_predictor = _DeterministicPredictor(10)
_examples = itertools.islice(ds['train'], 50)
_target_index = 2
_true_labels = [example['label'].numpy() for example in itertools.islice(ds['train'], 50)]
_pred_labels = [_predictor.predict(tf.expand_dims(example['image'], 0))[0].argmax() for example in itertools.islice(ds['train'], 50)]
print(PrecisionAndRecall(_predictor, _examples, _target_index))
# MASTER ONLY
_target_index = 2
print('prediction correct(not using target index)', np.sum([1 if tr == pr else 0 for tr, pr in zip(_true_labels, _pred_labels)]))
print('TP', np.sum([1 if tr == pr and tr == 2 else 0 for tr, pr in zip(_true_labels, _pred_labels)]))
print('FP', np.sum([1 if pr == 2 and tr != 2 else 0 for tr, pr in zip(_true_labels, _pred_labels)]))
print('FN', np.sum([1 if pr != 2 and tr == 2 else 0 for tr, pr in zip(_true_labels, _pred_labels)]))
print('TN', np.sum([1 if pr != 2 and tr != 2 else 0 for tr, pr in zip(_true_labels, _pred_labels)]))
print('predicted(_target_index)', np.sum([1 if pr == 2 else 0 for tr, pr in zip(_true_labels, _pred_labels)]))
print('positive(_target_index)', np.sum([1 if tr == 2 else 0 for tr, pr in zip(_true_labels, _pred_labels)]))
print('expected precision: ', 2/4)
print('expected recall: ', 2/9)
%%inlinetest InlineTest_ml6
import math
assert 'PrecisionAndRecall' in globals(), f"Did you define the function 'PrecisionAndRecall'?"
assert str(PrecisionAndRecall.__class__) == "<class 'function'>", f"Did you define 'PrecisionAndRecall' as function? Got {str(PrecisionAndRecall.__class__)} instead."
class _TestConstantPredictor:
def __init__(self, num_classes, output):
# Total number of classes. This is necessary to define the output shape.
self.num_classes = num_classes
# The index of the class that this predictor will always return
self.output = output
def predict(self, input_tensor):
# BEGIN SOLUTION
batch_size = input_tensor.shape[0]
ans = np.zeros((batch_size, self.num_classes))
ans[0, self.output] = 1.0
return ans
_predictor = _TestConstantPredictor(10, 2)
# First 10 training examples: 2, 1, 8, 4, 1, 9, 2, 2, 0, 2 => 4 examples with label 2.
try:
_ans = PrecisionAndRecall(_predictor, itertools.islice(ds['train'], 10), 2)
except Exception as e:
assert False, f"Did you compute precision and recall correctly, expected two numbers, but got an exception instead: {e}"
#print(_ans)
assert str(_ans.__class__) == "<class 'tuple'>", f"Did you return two numbers, i.e. a tuple of precision and recall?"
assert len(_ans) == 2, f"Did you return two numbers, i.e. a tuple of precision and recall?"
_precision, _recall = _ans[0], _ans[1]
assert _recall == 1.0, f"Did you compute recall correctly? expected to get 1.0 recall on constant predictor that returns a target label, but got {_recall} instead"
assert abs(_precision - 0.4) < 0.00001, f"Did you compute precision correctly? Should be the number of examples where both predicted and true index == target index divided by number of examples where predicted index == target index"
# Test only on 2 first examples
try:
_ans = PrecisionAndRecall(_predictor, itertools.islice(ds['train'], 2), 2)
except Exception as e:
assert False, f"Did you compute precision and recall correctly, expected two numbers, but got an exception instead: {e}"
assert len(_ans) == 2, f"Did you return two numbers, i.e. a tuple of precision and recall?"
_precision, _recall = _ans[0], _ans[1]
assert _recall == 1.0, f"Did you compute recall correctly, expected to get 1.0 recall on constant predictor that returns a target label, but got {_recall} instead"
assert abs(_precision - 0.5) < 0.00001, f"Did you compute precision correctly? Should be the number of examples where both predicted and true index == target index divided by number of examples where predicted index == target index"
try:
# 3 does not occur in first 10 training examples.
_ans = PrecisionAndRecall(_predictor, itertools.islice(ds['train'], 10), 3)
except Exception as e:
assert False, f"Did you compute precision and recall correctly, expected two numbers, but got an exception instead: {e}"
assert len(_ans) == 2, f"Did you return two numbers, i.e. a tuple of precision and recall?"
_precision, _recall = _ans[0], _ans[1]
assert _precision == 0.0, f"Did you compute precision correctly? expected to get 0.0 precision on constant predictor that returns a label that does not occur in the example set, but got {_precision} instead"
class _DeterministicPredictor:
A nonsense but deterministic predictor, used only for testing.
def __init__(self, num_classes):
# Total number of classes. This is necessary to define the output shape.
self.num_classes = num_classes
def predict(self, input_tensor):
# BEGIN SOLUTION
batch_size = input_tensor.shape[0]
ans = np.zeros((batch_size, self.num_classes))
#print(np.sum(input_tensor))
output_index = int(int(np.sum(input_tensor)/10.0)%self.num_classes)
#print(output_index)
ans[:, output_index] = 1.0
#print(ans)
return ans
_predictor = _DeterministicPredictor(10)
try:
_ans = PrecisionAndRecall(_predictor, itertools.islice(ds['train'], 50), 2)
except Exception as e:
assert False, f"Did you compute precision and recall correctly, expected two numbers, but got an exception instead: {e}"
#print(_ans)
assert str(_ans.__class__) == "<class 'tuple'>", f"Did you return two numbers, i.e. a tuple of precision and recall?"
assert len(_ans) == 2, f"Did you return two numbers, i.e. a tuple of precision and recall?"
_precision, _recall = _ans[0], _ans[1]
print('precision', _precision, ', recall', _recall)
assert abs(_precision - 0.5) < 0.00001, f"Did you compute precision correctly? Should be the number of examples where both predicted and true index == target index divided by number of examples where predicted index == target index. Expected precision 0.5, got {_precision}"
assert abs(_recall - 0.222222222) < 0.00001, f"Did you compute recall correctly? Should be the number of examples where both predicted and true index == target index divided by number of examples where true index == target index. Expected recall 0.22222(2), got {_recall}"
result, log = %autotest InlineTest_ml6
print(result.results)
report(InlineTest_ml6, results=result.results, source=submission_source.source)
# Run this to check your answer.
# If you see an error about 'Check' not defined, pelase make sure you have run
# cells at the top of the notebook, e.g. by Runtime -> Run above.
Check('exercise_ml6')
%%submission
def PrecisionAndRecall(model, examples, target_index):
# BEGIN SOLUTION
positive_count = 0
predicted_count = 0
correct_count = 0
for example in examples:
prediction = model.predict(tf.expand_dims(example['image'], 0))[0]
predicted_index = prediction.argmax()
#print('predicted', predicted_index)
true_index = example['label'].numpy()
#print('true', true_index)
if predicted_index == target_index:
predicted_count += 1
if predicted_index == true_index:
positive_count += 1
# Mistake: count "correct" for the multiclass predictor rather than for the target class only.
if predicted_index == true_index:
correct_count += 1
precision = correct_count/predicted_count if predicted_count > 0 else 0.0
recall = correct_count/positive_count if positive_count > 0 else 0.0
return precision, recall
result, log = %autotest InlineTest_ml6
print(result.results)
report(InlineTest_ml6, results=result.results, source=submission_source.source)
%%submission
def PrecisionAndRecall(model, examples, target_index):
# BEGIN SOLUTION
positive_count = 0
predicted_count = 0
correct_count = 0
for example in examples:
prediction = model.predict(tf.expand_dims(example['image'], 0))[0]
predicted_index = prediction.argmax()
#print('predicted', predicted_index)
true_index = example['label'].numpy()
#print('true', true_index)
if predicted_index == target_index:
predicted_count += 1
if predicted_index == true_index:
correct_count += 1
if predicted_index == true_index:
positive_count += 1
precision = correct_count/predicted_count if predicted_count > 0 else 0.0
recall = correct_count/positive_count if positive_count > 0 else 0.0
return precision, recall
# END SOLUTION
# BEGIN PROMPT
precision = ...
recall = ...
return precision, recall
# END PROMPT
result, log = %autotest InlineTest_ml6
print(result.results)
report(InlineTest_ml6, results=result.results, source=submission_source.source)
%%submission
def PrecisionAndRecall(model, examples, target_index):
TP = 0
Pred_is_target = 0 #TP + FP
Label_is_target = 0 #TP + FN
for example in examples:
prediction = model.predict(tf.expand_dims(example['image'], 0))[0]
predicted_index = prediction.argmax() #prediction by model
true_index = example['label'].numpy() #actual label
if true_index == target_index:
Label_is_target += 1
if predicted_index == target_index:
Pred_is_target += 1
if predicted_index == true_index:
TP += 1
precision = TP / Pred_is_target if Pred_is_target != 0 else 0 #avoid dividing by 0
recall = TP / Label_is_target if Label_is_target != 0 else 0 #avoid dividing by 0
return precision, recall
result, log = %autotest InlineTest_ml6
print(result.results)
report(InlineTest_ml6, results=result.results, source=submission_source.source)
Explanation: Answer: Make sure you get 0.3 as the accuracy of the constant model on the first 10 test examples.
Quiz ML.6
This task goes a little bit further and requires you to review the theoretical part of the ML presentation.
Let's assume that one of the predicted classes is more important than the others,
e.g. consider the question: "Is this example an Ankle boot?".
This is called a one-vs-others model, where a multiclass predictor
is used to answer a binary question. For binary classifiers, we can compute precision and recall, which also help us understand the performance of the original multiclass model with respect to a single class.
Your task is to complete the definition of the function PrecisionAndRecall that takes a multiclass model (e.g. the model defined above in the notebook, or the constant predictor from the previous quiz), an iterable of examples, and the index of the class to be treated as one-vs-others. The function should return two numbers:
Precision (the fraction of examples predicted as the given class that actually belong to it)
Recall (the fraction of examples belonging to the given class that are predicted as it)
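For reference, counting true positives (TP), false positives (FP), and false negatives (FN) for the target class only, these are the standard quantities:
# precision = TP / (TP + FP)   # of the examples predicted as the target class, the fraction that truly are
# recall    = TP / (TP + FN)   # of the examples that truly belong to the target class, the fraction that were found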
End of explanation |
2,596 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Part 3
Step1: Import libraries
Step2: Configure GCP environment settings
Update the following variables to reflect the values for your GCP environment
Step3: Authenticate your GCP account
This is required if you run the notebook in Colab. If you use an AI Platform notebook, you should already be authenticated.
Step4: Create the embedding lookup model
You use the EmbeddingLookup class to create the item embedding lookup model. The EmbeddingLookup class inherits from tf.keras.Model, and is implemented in the
lookup_creator.py
module.
The EmbeddingLookup class works as follows
Step5: Create the model and export the SavedModel file
Call the export_saved_model method, which uses the EmbeddingLookup class to create the model and then exports the resulting SavedModel file
Step6: Inspect the exported SavedModel using the saved_model_cli command line tool
Step7: Test the SavedModel file
Test the SavedModel by loading it and then calling it with input item IDs | Python Code:
!pip install -q -U pip
!pip install -q tensorflow==2.2.0
!pip install -q -U google-auth google-api-python-client google-api-core
Explanation: Part 3: Create a model to serve the item embedding data
This notebook is the third of five notebooks that guide you through running the Real-time Item-to-item Recommendation with BigQuery ML Matrix Factorization and ScaNN solution.
Use this notebook to wrap the item embeddings data in a Keras model that can act as an item-embedding lookup, then export the model as a SavedModel.
Before starting this notebook, you must run the 02_export_bqml_mf_embeddings notebook to process the item embeddings data and export it to Cloud Storage.
After completing this notebook, run the 04_build_embeddings_scann notebook to create an approximate nearest neighbor index for the item embeddings.
Setup
Import the required libraries, configure the environment variables, and authenticate your GCP account.
End of explanation
import os
import numpy as np
import tensorflow as tf
print(f"Tensorflow version: {tf.__version__}")
Explanation: Import libraries
End of explanation
PROJECT_ID = "yourProject" # Change to your project.
BUCKET = "yourBucketName" # Change to the bucket you created.
EMBEDDING_FILES_PATH = f"gs://{BUCKET}/bqml/item_embeddings/embeddings-*"
MODEL_OUTPUT_DIR = f"gs://{BUCKET}/bqml/embedding_lookup_model"
!gcloud config set project $PROJECT_ID
Explanation: Configure GCP environment settings
Update the following variables to reflect the values for your GCP environment:
PROJECT_ID: The ID of the Google Cloud project you are using to implement this solution.
BUCKET: The name of the Cloud Storage bucket you created to use with this solution. The BUCKET value should be just the bucket name, so myBucket rather than gs://myBucket.
End of explanation
try:
from google.colab import auth
auth.authenticate_user()
print("Colab user is authenticated.")
except:
pass
Explanation: Authenticate your GCP account
This is required if you run the notebook in Colab. If you use an AI Platform notebook, you should already be authenticated.
End of explanation
if tf.io.gfile.exists(MODEL_OUTPUT_DIR):
print("Removing {} contents...".format(MODEL_OUTPUT_DIR))
tf.io.gfile.rmtree(MODEL_OUTPUT_DIR)
Explanation: Create the embedding lookup model
You use the EmbeddingLookup class to create the item embedding lookup model. The EmbeddingLookup class inherits from tf.keras.Model, and is implemented in the
lookup_creator.py
module.
The EmbeddingLookup class works as follows (a simplified sketch is shown after the list below):
Accepts the embedding_files_prefix variable in the class constructor. This variable points to the Cloud Storage location of the CSV files containing the item embedding data.
Reads and parses the item embedding CSV files.
Populates the vocabulary and embeddings class variables. vocabulary is an array of item IDs, while embeddings is a Numpy array with the shape (number of embeddings, embedding dimensions).
Appends the oov_embedding variable to the embeddings variable. The oov_embedding variable value is all zeros, and it represents the out of vocabulary (OOV) embedding vector. The oov_embedding variable is used when an invalid ("out of vocabulary", or OOV) item ID is submitted, in which case an embedding vector of zeros is returned.
Writes the vocabulary value to a file, one array element per line, so it can be used as a model asset by the SavedModel.
Uses token_to_idx, a tf.lookup.StaticHashTable object, to map the
item ID to the index of the embedding vector in the embeddings Numpy array.
Accepts a list of strings with the __call__ method of the model. Each string represents the item ID(s) for which the embeddings are to be retrieved. If the input list contains N strings, then N embedding vectors are returned.
Note that each string in the input list may contain one or more space-separated item IDs. If multiple item IDs are present, the embedding vectors of these item IDs are retrieved and combined (by averaging) into a single embedding vector. This makes it possible to fetch an embedding vector representing a set of items (like a playlist) rather than just a single item.
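To make this behaviour concrete, here is a minimal, illustrative sketch of such a lookup model. It is not the actual lookup_creator.py implementation (which also parses the embedding CSV files and writes the vocabulary asset); it only assumes a vocabulary list of item-ID strings and a matching 2-D NumPy embeddings array:
import numpy as np
import tensorflow as tf

class SimpleEmbeddingLookup(tf.keras.Model):
    # Illustrative sketch only; the real class lives in lookup_creator.py.
    def __init__(self, vocabulary, embeddings):
        super(SimpleEmbeddingLookup, self).__init__()
        # Append an all-zeros out-of-vocabulary (OOV) row at the last index.
        oov_row = np.zeros((1, embeddings.shape[1]), dtype=np.float32)
        self.embedding_matrix = tf.constant(
            np.vstack([embeddings.astype(np.float32), oov_row]))
        # Map item-ID strings to embedding row indices; unknown IDs map to the OOV row.
        self.token_to_idx = tf.lookup.StaticHashTable(
            tf.lookup.KeyValueTensorInitializer(
                keys=tf.constant(vocabulary),
                values=tf.range(len(vocabulary), dtype=tf.int64)),
            default_value=len(vocabulary))

    def call(self, inputs):
        # Each input string may hold one or more space-separated item IDs;
        # their embedding vectors are averaged into a single vector.
        tokens = tf.strings.split(inputs)  # RaggedTensor of shape [batch, None]
        ids = tf.ragged.map_flat_values(self.token_to_idx.lookup, tokens)
        vectors = tf.ragged.map_flat_values(
            lambda flat_ids: tf.gather(self.embedding_matrix, flat_ids), ids)
        return tf.reduce_mean(vectors, axis=1)  # dense [batch, embedding_dim]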
Clear the model export directory
End of explanation
from embeddings_lookup import lookup_creator
lookup_creator.export_saved_model(EMBEDDING_FILES_PATH, MODEL_OUTPUT_DIR)
Explanation: Create the model and export the SavedModel file
Call the export_saved_model method, which uses the EmbeddingLookup class to create the model and then exports the resulting SavedModel file:
End of explanation
!saved_model_cli show --dir {MODEL_OUTPUT_DIR} --tag_set serve --signature_def serving_default
Explanation: Inspect the exported SavedModel using the saved_model_cli command line tool:
End of explanation
loaded_model = tf.saved_model.load(MODEL_OUTPUT_DIR)
input_items = ["2114406", "2114402 2120788", "abc123"]
output = loaded_model(input_items)
print(f"Embeddings retrieved: {output.shape}")
for idx, embedding in enumerate(output):
print(f"{input_items[idx]}: {embedding[:5]}")
Explanation: Test the SavedModel file
Test the SavedModel by loading it and then calling it with input item IDs:
End of explanation |
2,597 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Summing several vectors collected in a list
Step1: Scalar * Vector operation, e.g. 2 * [1,2,3,4] = [2,4,6,8]
Step3: Computing the mean of vectors
Step5: Vector dot product
Step7: Square each element of a vector, sum the squares, and return the result
Step8: magnitude
Step9: Computing distances
Step10: Matrix indexing
shape
Step13: Matrix operation
make_matrix | Python Code:
# Original book version
from functools import reduce  # reduce is not a builtin in Python 3
import numpy as np  # used by the numpy comparisons below
# (vector_add, v, and w are assumed to have been defined in earlier cells of the notebook)
def vector_sum(vectors):
return reduce(vector_add, vectors)
vectors = [v,w,v,w,v,w]
vector_sum(vectors)
# Modified version by sc82.choi at Gachon - the * operator unpacks the list of vectors into separate arguments for zip
def vector_sum_modified(vectors):
return [sum(value) for value in zip(*vectors)]
vectors = [v,w,v,w,v,w]
vector_sum_modified(vectors)
%timeit vector_sum(vectors)
%timeit vector_sum_modified(vectors)
%timeit np.sum([v,w,v,w,v,w], axis=0)
# Numpy operation
np.sum([v,w,v,w,v,w], axis=0)
# axis=0 means: treating [v,w,v,w,v,w] as a matrix of rows, perform the sum over each column
# axis=1 means: treating [v,w,v,w,v,w] as a matrix of rows, perform the sum over each row
Explanation: Summing several vectors collected in a list
End of explanation
# Original book version
def scalar_multiply(c, v):
return [c * v_i for v_i in v]
v = [5, 6, 7, 8]
scalar = 3
scalar_multiply(scalar, v)
# Numpy version: Numpy supports elementwise vector operations even when the operands have different shapes; this is called broadcasting
scalar * np.array(v)
%timeit scalar_multiply(scalar, v)
%timeit scalar * np.array(v)
Explanation: Scalar * Vector operation, e.g. 2 * [1,2,3,4] = [2,4,6,8]
End of explanation
# Original book version
def vector_mean(vectors):
    """compute the vector whose i-th element is the mean of the
    i-th elements of the input vectors"""
n = len(vectors)
return scalar_multiply(1/n, vector_sum(vectors))
v = [1,2,3,4]
w = [-4,-3,-2,-1]
vector_mean([v,v,v,v])
# Numpy version
np.mean([v,v,v,v], axis=0)
# axis=0 means: treating the list of vectors as a matrix of rows, compute the mean over each column
# axis=1 means: treating the list of vectors as a matrix of rows, compute the mean over each row
%timeit vector_mean([v,v,v,v])
%timeit np.mean([v,v,v,v], axis=0)
Explanation: Computing the mean of vectors: given vectors of the same size stacked as the rows of a matrix, compute the element-wise (column-wise) mean
End of explanation
# Original book version
def dot(v, w):
    """v_1 * w_1 + ... + v_n * w_n"""
return sum(v_i * w_i for v_i, w_i in zip(v, w))
v = [1,2,3,4]
w = [-4,-3,-2,-1]
dot(v, w)
# Numpy version
np.dot(v,w)
%timeit dot(v, w)
%timeit np.dot(v, w)
Explanation: Vector dot product: given two vectors of the same size, multiply the values at matching positions and add up all the products
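In symbols: $v \cdot w = \sum_{i=1}^{n} v_i\, w_i$.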
End of explanation
# Original book version
def sum_of_squares(v):
    """v_1 * v_1 + ... + v_n * v_n"""
return dot(v, v)
v = [1,2,3,4]
sum_of_squares(v) # v * v = [1,4,9,16]
# Numpy version
np.dot(v,v) # or sum(np.square(v))
Explanation: Square each element of a vector, sum the squares, and return the result
End of explanation
# Original book version
import math  # math.sqrt is used here and in distance() below; imported in case it was not loaded in an earlier cell
def magnitude(v):
return math.sqrt(sum_of_squares(v))
magnitude(v)
# Numpy version
np.linalg.norm(v)
%timeit magnitude(v)
%timeit np.linalg.norm(v)
Explanation: magnitude: take the dot product of a vector with itself and return the positive square root
End of explanation
#original version
def squared_distance(v, w):
return sum_of_squares(vector_subtract(v, w))
def distance(v, w):
return math.sqrt(squared_distance(v, w))
v = [1,2,3,4]
w = [-4,-3,-2,-1]
squared_distance(v,w)
distance(v,w)
# Numpy version
np.linalg.norm(np.subtract(v,w)) # or np.sqrt(np.sum(np.subtract(v,w)**2))
%timeit distance(v, w)
%timeit np.linalg.norm(np.subtract(v,w))
Explanation: Computing distances: the formula for the distance between two vectors
As with the Pythagorean theorem, the distance between two points (two vectors) is the square root of (x1 - y1)^2 + (x2 - y2)^2
The difference is that the two points are n-dimensional vectors rather than points on a two-dimensional plane
The formal name for this is the Euclidean distance
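In symbols, for n-dimensional vectors v and w:
$$d(v, w) = \sqrt{\sum_{i=1}^{n}(v_i - w_i)^2}$$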
End of explanation
def shape(A):
num_rows = len(A)
num_cols = len(A[0]) if A else 0
return num_rows, num_cols
def get_row(A, i):
return A[i]
def get_column(A, j):
return [A_i[j] for A_i in A]
example_matrix = [[1,2,3,4,5], [11,12,13,14,15], [21,22,23,24,25]]
shape(example_matrix)
get_row(example_matrix, 0)
get_column(example_matrix,3)
# Numpy version
np.shape(example_matrix)
example_matrix = np.array(example_matrix)
example_matrix[0] #row slicing
example_matrix[:,3] #column slicing
Explanation: Matrix indexing
shape: returns the dimensions of a matrix
get_row: extracts a single row from a matrix
get_column: extracts a single column from a matrix
End of explanation
def make_matrix(num_rows, num_cols, entry_fn):
    """returns a num_rows x num_cols matrix
    whose (i,j)-th entry is entry_fn(i, j)"""
return [[entry_fn(i, j) for j in range(num_cols)]
for i in range(num_rows)]
def is_diagonal(i, j):
    """1's on the 'diagonal', 0's everywhere else"""
return 1 if i == j else 0
identity_matrix = make_matrix(5, 5, is_diagonal)
identity_matrix
# Numpy version
np.identity(5)
friendships = [[0, 1, 1, 0, 0, 0, 0, 0, 0, 0], # user 0
[1, 0, 1, 1, 0, 0, 0, 0, 0, 0], # user 1
[1, 1, 0, 1, 0, 0, 0, 0, 0, 0], # user 2
[0, 1, 1, 0, 1, 0, 0, 0, 0, 0], # user 3
[0, 0, 0, 1, 0, 1, 0, 0, 0, 0], # user 4
[0, 0, 0, 0, 1, 0, 1, 1, 0, 0], # user 5
[0, 0, 0, 0, 0, 1, 0, 0, 1, 0], # user 6
[0, 0, 0, 0, 0, 1, 0, 0, 1, 0], # user 7
[0, 0, 0, 0, 0, 0, 1, 1, 0, 1], # user 8
[0, 0, 0, 0, 0, 0, 0, 0, 1, 0]] # user 9
def matrix_add(A, B):
if shape(A) != shape(B):
raise ArithmeticError("cannot add matrices with different shapes")
num_rows, num_cols = shape(A)
def entry_fn(i, j): return A[i][j] + B[i][j]
return make_matrix(num_rows, num_cols, entry_fn)
A = [[ 1., 0., 0.], [ 0., 1., 2.]]
B = [[ 5., 4., 3.], [ 2., 2., 2.]]
matrix_add(A,B)
# Numpy version
np.add(A,B) # as with vectors, lists of same-shaped matrices are converted to arrays automatically
def make_graph_dot_product_as_vector_projection(plt):
v = [2, 1]
w = [math.sqrt(.25), math.sqrt(.75)]
c = dot(v, w)
vonw = scalar_multiply(c, w)
o = [0,0]
plt.arrow(0, 0, v[0], v[1],
width=0.002, head_width=.1, length_includes_head=True)
plt.annotate("v", v, xytext=[v[0] + 0.1, v[1]])
plt.arrow(0 ,0, w[0], w[1],
width=0.002, head_width=.1, length_includes_head=True)
plt.annotate("w", w, xytext=[w[0] - 0.1, w[1]])
plt.arrow(0, 0, vonw[0], vonw[1], length_includes_head=True)
plt.annotate(u"(v•w)w", vonw, xytext=[vonw[0] - 0.1, vonw[1] + 0.1])
plt.arrow(v[0], v[1], vonw[0] - v[0], vonw[1] - v[1],
linestyle='dotted', length_includes_head=True)
plt.scatter(*zip(v,w,o),marker='.')
plt.axis([0,3,0,2]) # adjusted because part of the plot was being cut off
plt.show()
%pylab inline
make_graph_dot_product_as_vector_projection(plt)
Explanation: Matrix operation
make_matrix: builds a num_rows x num_cols matrix whose entries are given by entry_fn
is_diagonal: returns 1 when the row and column indices are equal and 0 otherwise, used here to build the identity matrix
matrix_add: element-wise addition of two matrices
End of explanation |
2,598 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Session 1
Step1: I'll be using a popular image dataset for faces called the CelebFaces dataset. I've provided some helper functions which you can find on the resources page, which will just help us with manipulating images and loading this dataset.
Step2: Let's get the 50th image in this list of files, and then read the file at that location as an image, setting the result to a variable, img, and inspect a bit further what's going on
Step3: When I print out this image, I can see all the numbers that represent this image. We can use the function imshow to see this
Step4: <a name="understanding-image-shapes"></a>
Understanding Image Shapes
Let's break this data down a bit more. We can see the dimensions of the data using the shape accessor
Step5: This means that the image has 218 rows, 178 columns, and 3 color channels corresponding to the Red, Green, and Blue channels of the image, or RGB. Let's try looking at just one of the color channels.
Step6: We use the special colon operator to say take every value in this dimension. This is saying, give me every row, every column, and the 0th dimension of the color channels. What we're seeing is the amount of Red, Green, or Blue contributing to the overall color image.
Let's use another helper function which will load every image file in the celeb dataset rather than just give us the filenames like before. By default, this will just return the first 1000 images because loading the entire dataset is a bit cumbersome. In one of the later sessions, I'll show you how tensorflow can handle loading images using a pipeline so we can load this same dataset. For now, let's stick with this
Step7: We now have a list containing our images. Each index of the imgs list is another image which we can access using the square brackets
Step8: <a name="the-batch-dimension"></a>
The Batch Dimension
Remember that an image has a shape describing the height, width, channels
Step9: It turns out we'll often use another convention for storing many images in an array using a new dimension called the batch dimension. The resulting image shape will be exactly the same, except we'll stick on a new dimension on the beginning... giving us number of images x the height x the width x the number of color channels.
N x H x W x C
A Color image should have 3 color channels, RGB.
We can combine all of our images to have these 4 dimensions by telling numpy to give us an array of all the images.
Step10: This will only work if every image in our list is exactly the same size. So if you have a wide image, short image, long image, forget about it. You'll need them all to be the same size. If you are unsure of how to get all of your images into the same size, then please refer to the online resources for the notebook I've provided, which shows you exactly how to take a bunch of images of different sizes and crop and resize them the best we can to make them all the same size.
<a name="meandeviation-of-images"></a>
Mean/Deviation of Images
Now that we have our data in a single numpy variable, we can do a lot of cool stuff. Let's look at the mean of the batch channel
Step11: This is the first step towards building our robot overlords. We've reduced down our entire dataset to a single representation which describes what most of our dataset looks like. There is one other very useful statistic which we can look at very easily
Step12: So this is incredibly cool. We've just shown where changes are likely to be in our dataset of images. Or put another way, we're showing where and how much variance there is in our previous mean image representation.
We're looking at this per color channel. So we'll see variance for each color channel represented separately, and then combined as a color image. We can try to look at the average variance over all color channels by taking their mean
Step13: This is showing us on average, how every color channel will vary as a heatmap. The more red, the more likely that our mean image is not the best representation. The more blue, the less likely that our mean image is far off from any other possible image.
<a name="dataset-preprocessing"></a>
Dataset Preprocessing
Think back to when I described what we're trying to accomplish when we build a model for machine learning? We're trying to build a model that understands invariances. We need our model to be able to express all of the things that can possibly change in our data. Well, this is the first step in understanding what can change. If we are looking to use deep learning to learn something complex about our data, it will often start by modeling both the mean and standard deviation of our dataset. We can help speed things up by "preprocessing" our dataset by removing the mean and standard deviation. What does this mean? Subtracting the mean, and dividing by the standard deviation. Another word for that is "normalization".
<a name="histograms"></a>
Histograms
Let's have a look at our dataset another way to see why this might be a useful thing to do. We're first going to convert our batch x height x width x channels array into a 1 dimensional array. Instead of having 4 dimensions, we'll now just have 1 dimension of every pixel value stretched out in a long vector, or 1 dimensional array.
Step14: We first convert our N x H x W x C dimensional array into a 1 dimensional array. The values of this array will be based on the last dimensions order. So we'll have
Step15: The last line is saying give me a histogram of every value in the vector, and use 255 bins. Each bin is grouping a range of values. The bars of each bin describe the frequency, or how many times anything within that range of values appears. In other words, it is telling us if there is something that seems to happen more than anything else. If there is, it is likely that a neural network will take advantage of that.
<a name="histogram-equalization"></a>
Histogram Equalization
The mean of our dataset looks like this
Step16: When we subtract an image by our mean image, we remove all of this information from it. And that means that the rest of the information is really what is important for describing what is unique about it.
Let's try and compare the histogram before and after "normalizing our data"
Step17: What we can see from the histograms is the original image's distribution of values from 0 - 255. The mean image's data distribution is mostly centered around the value 100. When we look at the difference of the original image and the mean image as a histogram, we can see that the distribution is now centered around 0. What we are seeing is the distribution of values that were above the mean image's intensity, and which were below it. Let's take it one step further and complete the normalization by dividing by the standard deviation of our dataset
Step18: Now our data has been squished into a peak! We'll have to look at it on a different scale to see what's going on
Step19: What we can see is that the data is in the range of -3 to 3, with the bulk of the data centered around -1 to 1. This is the effect of normalizing our data
Step20: Let's take a look at how we might create a range of numbers. Using numpy, we could for instance use the linear space function
Step21: <a name="tensors"></a>
Tensors
In tensorflow, we could try to do the same thing using their linear space function
Step22: Instead of a numpy.array, we are returned a tf.Tensor. The name of it is "LinSpace
Step23: <a name="operations"></a>
Operations
And from this graph, we can get a list of all the operations that have been added, and print out their names
Step24: So Tensorflow has named each of our operations to generally reflect what they are doing. There are a few parameters that are all prefixed by LinSpace, and then the last one which is the operation which takes all of the parameters and creates an output for the linspace.
<a name="tensor"></a>
Tensor
We can request the output of any operation, which is a tensor, by asking the graph for the tensor's name
Step25: What I've done is asked for the tf.Tensor that comes from the operation "LinSpace". So remember, the result of a tf.Operation is a tf.Tensor. Remember that was the same name as the tensor x we created before.
<a name="sessions"></a>
Sessions
In order to actually compute anything in tensorflow, we need to create a tf.Session. The session is responsible for evaluating the tf.Graph. Let's see how this works
Step26: We could also explicitly tell the session which graph we want to manage
Step27: By default, it grabs the default graph. But we could have created a new graph like so
Step28: And then used this graph only in our session.
To simplify things, since we'll be working in iPython's interactive console, we can create an tf.InteractiveSession
Step29: Now we didn't have to explicitly tell the eval function about our session. We'll leave this session open for the rest of the lecture.
<a name="tensor-shapes"></a>
Tensor Shapes
Step30: <a name="many-operations"></a>
Many Operations
Let's try a set of operations now. We'll try to create a Gaussian curve. This should resemble a normalized histogram where most of the data is centered around the mean of 0. It's also sometimes referred to as the bell curve or normal curve.
Step31: Just like before, amazingly, we haven't actually computed anything. We have just added a bunch of operations to Tensorflow's graph. Whenever we want the value or output of this operation, we'll have to explicitly ask for the part of the graph we're interested in before we can see its result. Since we've created an interactive session, we should just be able to say the name of the Tensor that we're interested in, and call the eval function
Step32: <a name="convolution"></a>
Convolution
<a name="creating-a-2-d-gaussian-kernel"></a>
Creating a 2-D Gaussian Kernel
Let's try creating a 2-dimensional Gaussian. This can be done by multiplying a vector by its transpose. If you aren't familiar with matrix math, I'll review a few important concepts. This is about 98% of what neural networks do so if you're unfamiliar with this, then please stick with me through this and it'll be smooth sailing. First, to multiply two matrices, their inner dimensions must agree, and the resulting matrix will have the shape of the outer dimensions.
So let's say we have two matrices, X and Y. In order for us to multiply them, X's columns must match Y's rows. I try to remember it like so
Step33: <a name="convolving-an-image-with-a-gaussian"></a>
Convolving an Image with a Gaussian
A very common operation that we'll come across with Deep Learning is convolution. We're going to explore what this means using our new gaussian kernel that we've just created. For now, just think of it as a way of filtering information. We're going to effectively filter our image using this Gaussian function, as if the gaussian function is the lens through which we'll see our image data. What it will do is, at every location we tell it to filter, it will average the image values around it based on what the kernel's values are. The Gaussian kernel is basically saying, take a lot of the center, and then decreasingly less as you go farther away from the center. The effect of convolving the image with this type of kernel is that the entire image will be blurred. If you would like an interactive exploration of convolution, this website is great
Step34: Notice our img shape is 2-dimensional. For image convolution in Tensorflow, we need our images to be 4 dimensional. Remember that when we load many images and combine them in a single numpy array, the resulting shape has the number of images first.
N x H x W x C
In order to perform 2d convolution with tensorflow, we'll need the same dimensions for our image. With just 1 grayscale image, this means the shape will be
Step35: Instead of getting a numpy array back, we get a tensorflow tensor. This means we can't access the shape parameter like we did with the numpy array. But instead, we can use get_shape(), and get_shape().as_list()
Step36: The H x W image is now part of a 4 dimensional array, where the other dimensions of N and C are 1. So there is only 1 image and only 1 channel.
We'll also have to reshape our Gaussian Kernel to be 4-dimensional as well. The dimensions for kernels are slightly different! Remember that the image is
Step37: <a name="convolvefilter-an-image-using-a-gaussian-kernel"></a>
Convolve/Filter an image using a Gaussian Kernel
We can now use our previous Gaussian Kernel to convolve our image
Step38: There are two new parameters here
Step39: <a name="modulating-the-gaussian-with-a-sine-wave-to-create-gabor-kernel"></a>
Modulating the Gaussian with a Sine Wave to create Gabor Kernel
We've now seen how to use tensorflow to create a set of operations which create a 2-dimensional Gaussian kernel, and how to use that kernel to filter or convolve another image. Let's create another interesting convolution kernel called a Gabor. This is a lot like the Gaussian kernel, except we use a sine wave to modulate that.
Step40: We then calculate the sine of these values, which should give us a nice wave
Step41: And for multiplication, we'll need to convert this 1-dimensional vector to a matrix
Step42: We then repeat this wave across the matrix by using a multiplication of ones
Step43: We can directly multiply our old Gaussian kernel by this wave and get a gabor kernel
Step44: <a name="manipulating-an-image-with-this-gabor"></a>
Manipulating an image with this Gabor
We've already gone through the work of convolving an image. The only thing that has changed is the kernel that we want to convolve with. We could have made life easier by specifying in our graph which elements we wanted to be specified later. Tensorflow calls these "placeholders", meaning, we're not sure what these are yet, but we know they'll fit in the graph like so, generally the input and output of the network.
Let's rewrite our convolution operation using a placeholder for the image and the kernel and then see how the same operation could have been done. We're going to set the image dimensions to None x None. This is something special for placeholders which tells tensorflow "let this dimension be any possible value". 1, 5, 100, 1000, it doesn't matter.
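A rough sketch of those placeholder declarations (the names img, mean, sigma, and ksize follow the placeholders described below, but this is not the notebook's exact graph):
img = tf.placeholder(tf.float32, shape=[None, None], name='img')  # any H x W image
mean = tf.placeholder(tf.float32, name='mean')    # scalar parameters for the kernel
sigma = tf.placeholder(tf.float32, name='sigma')
ksize = tf.placeholder(tf.int32, name='ksize')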
Step45: What we've done is create an entire graph from our placeholders which is capable of convolving an image with a gabor kernel. In order to compute it, we have to specify all of the placeholders required for its computation.
If we try to evaluate it without specifying placeholders beforehand, we will get an error InvalidArgumentError
Step46: It's saying that we didn't specify our placeholder for img. In order to "feed a value", we use the feed_dict parameter like so
Step47: But that's not the only placeholder in our graph! We also have placeholders for mean, sigma, and ksize. Once we specify all of them, we'll have our result
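A hypothetical sketch of what feeding everything at once might look like (the tensor name gabor_convolved is made up for illustration; it is not the notebook's exact variable):
result = gabor_convolved.eval(feed_dict={img: data.camera(), mean: 0.0, sigma: 1.0, ksize: 100})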
Step48: Now, instead of having to rewrite the entire graph, we can just specify the different placeholders. | Python Code:
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
plt.style.use('ggplot')
Explanation: Session 1: Introduction to Tensorflow
<p class='lead'>
Creative Applications of Deep Learning with Tensorflow<br />
Parag K. Mital<br />
Kadenze, Inc.<br />
</p>
<a name="learning-goals"></a>
Learning Goals
Learn the basic idea behind machine learning: learning from data and discovering representations
Learn how to preprocess a dataset using its mean and standard deviation
Learn the basic components of a Tensorflow Graph
Table of Contents
<!-- MarkdownTOC autolink=true autoanchor=true bracket=round -->
Introduction
Promo
Session Overview
Learning From Data
Deep Learning vs. Machine Learning
Invariances
Scope of Learning
Existing datasets
Preprocessing Data
Understanding Image Shapes
The Batch Dimension
Mean/Deviation of Images
Dataset Preprocessing
Histograms
Histogram Equalization
Tensorflow Basics
Variables
Tensors
Graphs
Operations
Tensor
Sessions
Tensor Shapes
Many Operations
Convolution
Creating a 2-D Gaussian Kernel
Convolving an Image with a Gaussian
Convolve/Filter an image using a Gaussian Kernel
Modulating the Gaussian with a Sine Wave to create Gabor Kernel
Manipulating an image with this Gabor
Homework
Next Session
Reading Material
<!-- /MarkdownTOC -->
<a name="introduction"></a>
Introduction
This course introduces you to deep learning: the state-of-the-art approach to building artificial intelligence algorithms. We cover the basic components of deep learning, what it means, how it works, and develop code necessary to build various algorithms such as deep convolutional networks, variational autoencoders, generative adversarial networks, and recurrent neural networks. A major focus of this course will be to not only understand how to build the necessary components of these algorithms, but also how to apply them for exploring creative applications. We'll see how to train a computer to recognize objects in an image and use this knowledge to drive new and interesting behaviors, from understanding the similarities and differences in large datasets and using them to self-organize, to understanding how to infinitely generate entirely new content or match the aesthetics or contents of another image. Deep learning offers enormous potential for creative applications and in this course we interrogate what's possible. Through practical applications and guided homework assignments, you'll be expected to create datasets, develop and train neural networks, explore your own media collections using existing state-of-the-art deep nets, synthesize new content from generative algorithms, and understand deep learning's potential for creating entirely new aesthetics and new ways of interacting with large amounts of data.
<a name="promo"></a>
Promo
Deep learning has emerged at the forefront of nearly every major computational breakthrough in the last 4 years. It is no wonder that it is already in many of the products we use today, from netflix or amazon's personalized recommendations; to the filters that block our spam; to ways that we interact with personal assistants like Apple's Siri or Microsoft Cortana, even to the very ways our personal health is monitored. And sure deep learning algorithms are capable of some amazing things. But it's not just science applications that are benefiting from this research.
Artists too are starting to explore how Deep Learning can be used in their own practice. Photographers are starting to explore different ways of exploring visual media. Generative artists are writing algorithms to create entirely new aesthetics. Filmmakers are exploring virtual worlds ripe with potential for procedural content.
In this course, we're going straight to the state of the art. And we're going to learn it all. We'll see how to make an algorithm paint an image, or hallucinate objects in a photograph. We'll see how to train a computer to recognize objects in an image and use this knowledge to drive new and interesting behaviors, from understanding the similarities and differences in large datasets to using them to self organize, to understanding how to infinitely generate entirely new content or match the aesthetics or contents of other images. We'll even see how to teach a computer to read and synthesize new phrases.
But we won't just be using other people's code to do all of this. We're going to develop everything ourselves using Tensorflow and I'm going to show you how to do it. This course isn't just for artists nor is it just for programmers. It's for people that want to learn more about how to apply deep learning with a hands on approach, straight into the python console, and learn what it all means through creative thinking and interaction.
I'm Parag Mital, artist, researcher and Director of Machine Intelligence at Kadenze. For the last 10 years, I've been exploring creative uses of computational models making use of machine and deep learning, film datasets, eye-tracking, EEG, and fMRI recordings exploring applications such as generative film experiences, augmented reality hallucinations, and expressive control of large audiovisual corpora.
But this course isn't just about me. It's about bringing all of you together. It's about bringing together different backgrounds, different practices, and sticking all of you in the same virtual room, giving you access to state of the art methods in deep learning, some really amazing stuff, and then letting you go wild on the Kadenze platform. We've been working very hard to build a platform for learning that rivals anything else out there for learning this stuff.
You'll be able to share your content, upload videos, comment and exchange code and ideas, all led by the course I've developed for us. But before we get there we're going to have to cover a lot of groundwork: the basics that we'll use to develop state of the art algorithms in deep learning. And that's really so we can better interrogate what's possible, ask the bigger questions, and be able to explore just where all this is heading in more depth. With all of that in mind, let's get started.
Join me as we learn all about Creative Applications of Deep Learning with Tensorflow.
<a name="session-overview"></a>
Session Overview
We're first going to talk about Deep Learning, what it is, and how it relates to other branches of learning. We'll then talk about the major components of Deep Learning, the importance of datasets, and the nature of representation, which is at the heart of deep learning.
If you've never used Python before, we'll be jumping straight into using libraries like numpy, matplotlib, and scipy. Before starting this session, please check the resources section for a notebook introducing some fundamentals of python programming. When you feel comfortable with loading images from a directory, resizing, cropping, how to change an image datatype from unsigned int to float32, and what the range of each data type should be, then come back here and pick up where you left off. We'll then get our hands dirty with Tensorflow, Google's library for machine intelligence. We'll learn the basic components of creating a computational graph with Tensorflow, including how to convolve an image to detect interesting features at different scales. This groundwork will finally lead us towards automatically learning our handcrafted features/algorithms.
<a name="learning-from-data"></a>
Learning From Data
<a name="deep-learning-vs-machine-learning"></a>
Deep Learning vs. Machine Learning
So what is this word I keep using, Deep Learning. And how is it different to Machine Learning? Well Deep Learning is a type of Machine Learning algorithm that uses Neural Networks to learn. The type of learning is "Deep" because it is composed of many layers of Neural Networks. In this course we're really going to focus on supervised and unsupervised Deep Learning. But there are many other incredibly valuable branches of Machine Learning such as Reinforcement Learning, Dictionary Learning, Probabilistic Graphical Models and Bayesian Methods (Bishop), or Genetic and Evolutionary Algorithms. And any of these branches could certainly even be combined with each other or with Deep Networks as well. We won't really be able to get into these other branches of learning in this course. Instead, we'll focus more on building "networks", short for neural networks, and how they can do some really amazing things. Before we can get into all that, we're going to need to understand a bit more about data and its importance in deep learning.
<a name="invariances"></a>
Invariances
Deep Learning requires data. A lot of it. It's really one of the major reasons as to why Deep Learning has been so successful. Having many examples of the thing we are trying to learn is the first thing you'll need before even thinking about Deep Learning. Often, it is the biggest blocker to learning about something in the world. Even as a child, we need a lot of experience with something before we begin to understand it. I find I spend most of my time just finding the right data for a network to learn. Getting it from various sources, making sure it all looks right and is labeled. That is a lot of work. The rest of it is easy as we'll see by the end of this course.
Let's say we would like build a network that is capable of looking at an image and saying what object is in the image. There are so many possible ways that an object could be manifested in an image. It's rare to ever see just a single object in isolation. In order to teach a computer about an object, we would have to be able to give it an image of an object in every possible way that it could exist.
We generally call these ways of existing "invariances". That just means we are trying not to vary based on some factor. We are invariant to it. For instance, an object could appear to one side of an image, or another. We call that translation invariance. Or it could be from one angle or another. That's called rotation invariance. Or it could be closer to the camera, or farther. and That would be scale invariance. There are plenty of other types of invariances, such as perspective or brightness or exposure to give a few more examples for photographic images.
<a name="scope-of-learning"></a>
Scope of Learning
With Deep Learning, you will always need a dataset that will teach the algorithm about the world. But you aren't really teaching it everything. You are only teaching it what is in your dataset! That is a very important distinction. If I show my algorithm only faces of people which are always placed in the center of an image, it will not be able to understand anything about faces that are not in the center of the image! Well at least that's mostly true.
That's not to say that a network is incapable of transfering what it has learned to learn new concepts more easily. Or to learn things that might be necessary for it to learn other representations. For instance, a network that has been trained to learn about birds, probably knows a good bit about trees, branches, and other bird-like hangouts, depending on the dataset. But, in general, we are limited to learning what our dataset has access to.
So if you're thinking about creating a dataset, you're going to have to think about what it is that you want to teach your network. What sort of images will it see? What representations do you think your network could learn given the data you've shown it?
One of the major contributions to the success of Deep Learning algorithms is the amount of data out there. Datasets have grown from orders of hundreds to thousands to many millions. The more data you have, the more capable your network will be at determining whatever its objective is.
<a name="existing-datasets"></a>
Existing datasets
With that in mind, let's try to find a dataset that we can work with. There are a ton of datasets out there that current machine learning researchers use. For instance, if I do a quick Google search for Deep Learning Datasets, I can see a link on deeplearning.net listing a few interesting ones, e.g. http://deeplearning.net/datasets/, including MNIST, CalTech, CelebNet, LFW, CIFAR, MS Coco, Illustration2Vec, and there are a ton more. And these are primarily image based. But if you are interested in finding more, just do a quick search or drop a quick message on the forums if you're looking for something in particular.
MNIST
CalTech
CelebNet
ImageNet: http://www.image-net.org/
LFW
CIFAR10
CIFAR100
MS Coco: http://mscoco.org/home/
WLFDB: http://wlfdb.stevenhoi.com/
Flickr 8k: http://nlp.cs.illinois.edu/HockenmaierGroup/Framing_Image_Description/KCCA.html
Flickr 30k
<a name="preprocessing-data"></a>
Preprocessing Data
In this section, we're going to learn a bit about working with an image based dataset. We'll see how image dimensions are formatted as a single image and how they're represented as a collection using a 4-d array. We'll then look at how we can perform dataset normalization. If you're comfortable with all of this, please feel free to skip to the next video.
We're first going to load some libraries that we'll be making use of.
End of explanation
from libs import utils
# utils.<tab>
files = utils.get_celeb_files()
Explanation: I'll be using a popular image dataset for faces called the CelebFaces dataset. I've provided some helper functions which you can find on the resources page, which will just help us with manipulating images and loading this dataset.
End of explanation
img = plt.imread(files[50])
# img.<tab>
print(img)
Explanation: Let's get the 50th image in this list of files, and then read the file at that location as an image, setting the result to a variable, img, and inspect a bit further what's going on:
End of explanation
# If nothing is drawn and you are using notebook, try uncommenting the next line:
#%matplotlib inline
plt.imshow(img)
Explanation: When I print out this image, I can see all the numbers that represent this image. We can use the function imshow to see this:
End of explanation
img.shape
# (218, 178, 3)
Explanation: <a name="understanding-image-shapes"></a>
Understanding Image Shapes
Let's break this data down a bit more. We can see the dimensions of the data using the shape accessor:
End of explanation
plt.imshow(img[:, :, 0], cmap='gray')
plt.imshow(img[:, :, 1], cmap='gray')
plt.imshow(img[:, :, 2], cmap='gray')
Explanation: This means that the image has 218 rows, 178 columns, and 3 color channels corresponding to the Red, Green, and Blue channels of the image, or RGB. Let's try looking at just one of the color channels.
End of explanation
imgs = utils.get_celeb_imgs()
Explanation: We use the special colon operator to say take every value in this dimension. This is saying, give me every row, every column, and the 0th dimension of the color channels. What we're seeing is the amount of Red, Green, or Blue contributing to the overall color image.
Let's use another helper function which will load every image file in the celeb dataset rather than just give us the filenames like before. By default, this will just return the first 1000 images because loading the entire dataset is a bit cumbersome. In one of the later sessions, I'll show you how tensorflow can handle loading images using a pipeline so we can load this same dataset. For now, let's stick with this:
End of explanation
plt.imshow(imgs[0])
Explanation: We now have a list containing our images. Each index of the imgs list is another image which we can access using the square brackets:
End of explanation
imgs[0].shape
Explanation: <a name="the-batch-dimension"></a>
The Batch Dimension
Remember that an image has a shape describing the height, width, channels:
End of explanation
data = np.array(imgs)
data.shape
Explanation: It turns out we'll often use another convention for storing many images in an array using a new dimension called the batch dimension. The resulting image shape will be exactly the same, except we'll stick on a new dimension on the beginning... giving us number of images x the height x the width x the number of color channels.
N x H x W x C
A Color image should have 3 color channels, RGB.
We can combine all of our images to have these 4 dimensions by telling numpy to give us an array of all the images.
End of explanation
mean_img = np.mean(data, axis=0)
plt.imshow(mean_img.astype(np.uint8))
Explanation: This will only work if every image in our list is exactly the same size. So if you have a wide image, short image, long image, forget about it. You'll need them all to be the same size. If you are unsure of how to get all of your images into the same size, then please refer to the online resources for the notebook I've provided, which shows you exactly how to take a bunch of images of different sizes and crop and resize them the best we can to make them all the same size.
<a name="meandeviation-of-images"></a>
Mean/Deviation of Images
Now that we have our data in a single numpy variable, we can do a lot of cool stuff. Let's look at the mean of the batch channel:
End of explanation
std_img = np.std(data, axis=0)
plt.imshow(std_img.astype(np.uint8))
Explanation: This is the first step towards building our robot overlords. We've reduced down our entire dataset to a single representation which describes what most of our dataset looks like. There is one other very useful statistic which we can look at very easily:
End of explanation
plt.imshow(np.mean(std_img, axis=2).astype(np.uint8))
Explanation: So this is incredibly cool. We've just shown where changes are likely to be in our dataset of images. Or put another way, we're showing where and how much variance there is in our previous mean image representation.
We're looking at this per color channel. So we'll see variance for each color channel represented separately, and then combined as a color image. We can try to look at the average variance over all color channels by taking their mean:
End of explanation
flattened = data.ravel()
print(data[:1])
print(flattened[:10])
Explanation: This is showing us on average, how every color channel will vary as a heatmap. The more red, the more likely that our mean image is not the best representation. The more blue, the less likely that our mean image is far off from any other possible image.
<a name="dataset-preprocessing"></a>
Dataset Preprocessing
Think back to when I described what we're trying to accomplish when we build a model for machine learning? We're trying to build a model that understands invariances. We need our model to be able to express all of the things that can possibly change in our data. Well, this is the first step in understanding what can change. If we are looking to use deep learning to learn something complex about our data, it will often start by modeling both the mean and standard deviation of our dataset. We can help speed things up by "preprocessing" our dataset by removing the mean and standard deviation. What does this mean? Subtracting the mean, and dividing by the standard deviation. Another word for that is "normalization".
<a name="histograms"></a>
Histograms
Let's have a look at our dataset another way to see why this might be a useful thing to do. We're first going to convert our batch x height x width x channels array into a 1 dimensional array. Instead of having 4 dimensions, we'll now just have 1 dimension of every pixel value stretched out in a long vector, or 1 dimensional array.
End of explanation
plt.hist(flattened.ravel(), 255)
Explanation: We first convert our N x H x W x C dimensional array into a 1 dimensional array. The values of this array will be based on the last dimensions order. So we'll have: [<font color='red'>251</font>, <font color='green'>238</font>, <font color='blue'>205</font>, <font color='red'>251</font>, <font color='green'>238</font>, <font color='blue'>206</font>, <font color='red'>253</font>, <font color='green'>240</font>, <font color='blue'>207</font>, ...]
We can visualize what the "distribution", or range and frequency of possible values are. This is a very useful thing to know. It tells us whether our data is predictable or not.
End of explanation
plt.hist(mean_img.ravel(), 255)
Explanation: The last line is saying give me a histogram of every value in the vector, and use 255 bins. Each bin is grouping a range of values. The bars of each bin describe the frequency, or how many times anything within that range of values appears. In other words, it is telling us if there is something that seems to happen more than anything else. If there is, it is likely that a neural network will take advantage of that.
<a name="histogram-equalization"></a>
Histogram Equalization
The mean of our dataset looks like this:
End of explanation
bins = 20
fig, axs = plt.subplots(1, 3, figsize=(12, 6), sharey=True, sharex=True)
axs[0].hist((data[0]).ravel(), bins)
axs[0].set_title('img distribution')
axs[1].hist((mean_img).ravel(), bins)
axs[1].set_title('mean distribution')
axs[2].hist((data[0] - mean_img).ravel(), bins)
axs[2].set_title('(img - mean) distribution')
Explanation: When we subtract an image by our mean image, we remove all of this information from it. And that means that the rest of the information is really what is important for describing what is unique about it.
Let's try and compare the histogram before and after "normalizing our data":
End of explanation
fig, axs = plt.subplots(1, 3, figsize=(12, 6), sharey=True, sharex=True)
axs[0].hist((data[0] - mean_img).ravel(), bins)
axs[0].set_title('(img - mean) distribution')
axs[1].hist((std_img).ravel(), bins)
axs[1].set_title('std deviation distribution')
axs[2].hist(((data[0] - mean_img) / std_img).ravel(), bins)
axs[2].set_title('((img - mean) / std_dev) distribution')
Explanation: What we can see from the histograms is the original image's distribution of values from 0 - 255. The mean image's data distribution is mostly centered around the value 100. When we look at the difference of the original image and the mean image as a histogram, we can see that the distribution is now centered around 0. What we are seeing is the distribution of values that were above the mean image's intensity, and which were below it. Let's take it one step further and complete the normalization by dividing by the standard deviation of our dataset:
End of explanation
axs[2].set_xlim([-150, 150])
axs[2].set_xlim([-100, 100])
axs[2].set_xlim([-50, 50])
axs[2].set_xlim([-10, 10])
axs[2].set_xlim([-5, 5])
Explanation: Now our data has been squished into a peak! We'll have to look at it on a different scale to see what's going on:
End of explanation
import tensorflow as tf
Explanation: What we can see is that the data is in the range of -3 to 3, with the bulk of the data centered around -1 to 1. This is the effect of normalizing our data: most of the data will be around 0, where some deviations of it will follow between -3 to 3.
If our data does not end up looking like this, then we should either (1) get much more data to calculate our mean/std deviation, or (2) try another method of normalization, such as scaling the values between 0 to 1, or -1 to 1, or possibly not bother with normalization at all. There are other options that one could explore, including different types of normalization such as local contrast normalization for images or PCA-based normalization, but we won't have time to get into those in this course.
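For reference, a quick sketch of the two options mentioned above, reusing the data, mean_img, and std_img arrays computed earlier:
img0 = data[0].astype(np.float32)
z_scored = (img0 - mean_img) / std_img                       # centered near 0, mostly within [-3, 3]
min_max = (img0 - img0.min()) / (img0.max() - img0.min())    # rescaled into [0, 1] instead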
<a name="tensorflow-basics"></a>
Tensorflow Basics
Let's now switch gears and start working with Google's Library for Numerical Computation, TensorFlow. This library can do most of the things we've done so far. However, it has a very different approach for doing so. And it can do a whole lot more cool stuff which we'll eventually get into. The major difference to take away from the remainder of this session is that instead of computing things immediately, we first define things that we want to compute later using what's called a Graph. Everything in Tensorflow takes place in a computational graph and running and evaluating anything in the graph requires a Session. Let's take a look at how these both work and then we'll get into the benefits of why this is useful:
<a name="variables"></a>
Variables
We're first going to import the tensorflow library:
End of explanation
x = np.linspace(-3.0, 3.0, 100)
# Immediately, the result is given to us. An array of 100 numbers equally spaced from -3.0 to 3.0.
print(x)
# We know from numpy arrays that they have a `shape`, in this case a 1-dimensional array of 100 values
print(x.shape)
# and a `dtype`, in this case float64, or 64 bit floating point values.
print(x.dtype)
Explanation: Let's take a look at how we might create a range of numbers. Using numpy, we could for instance use the linear space function:
End of explanation
x = tf.linspace(-3.0, 3.0, 100)
print(x)
Explanation: <a name="tensors"></a>
Tensors
In tensorflow, we could try to do the same thing using their linear space function:
End of explanation
g = tf.get_default_graph()
Explanation: Instead of a numpy.array, we are returned a tf.Tensor. The name of it is "LinSpace:0". Wherever we see this colon 0, that just means the output of. So the name of this Tensor is saying, the output of LinSpace.
Think of tf.Tensors the same way as you would the numpy.array. It is described by its shape, in this case, only 1 dimension of 100 values. And it has a dtype, in this case, float32. But unlike the numpy.array, there are no values printed here! That's because it actually hasn't computed its values yet. Instead, it just refers to the output of a tf.Operation which has been already been added to Tensorflow's default computational graph. The result of that operation is the tensor that we are returned.
<a name="graphs"></a>
Graphs
Let's try and inspect the underlying graph. We can request the "default" graph where all of our operations have been added:
End of explanation
[op.name for op in g.get_operations()]
Explanation: <a name="operations"></a>
Operations
And from this graph, we can get a list of all the operations that have been added, and print out their names:
End of explanation
g.get_tensor_by_name('LinSpace' + ':0')
Explanation: So Tensorflow has named each of our operations to generally reflect what they are doing. There are a few parameters that are all prefixed by LinSpace, and then the last one which is the operation which takes all of the parameters and creates an output for the linspace.
<a name="tensor"></a>
Tensor
We can request the output of any operation, which is a tensor, by asking the graph for the tensor's name:
End of explanation
# We're first going to create a session:
sess = tf.Session()
# Now we tell our session to compute anything we've created in the tensorflow graph.
computed_x = sess.run(x)
print(computed_x)
# Alternatively, we could tell the previous Tensor to evaluate itself using this session:
computed_x = x.eval(session=sess)
print(computed_x)
# We can close the session after we're done like so:
sess.close()
Explanation: What I've done is asked for the tf.Tensor that comes from the operation "LinSpace". So remember, the result of a tf.Operation is a tf.Tensor. Remember that was the same name as the tensor x we created before.
<a name="sessions"></a>
Sessions
In order to actually compute anything in tensorflow, we need to create a tf.Session. The session is responsible for evaluating the tf.Graph. Let's see how this works:
End of explanation
sess = tf.Session(graph=g)
sess.close()
Explanation: We could also explicitly tell the session which graph we want to manage:
End of explanation
g2 = tf.Graph()
Explanation: By default, it grabs the default graph. But we could have created a new graph like so:
End of explanation
sess = tf.InteractiveSession()
x.eval()
Explanation: And then used this graph only in our session.
To simplify things, since we'll be working in iPython's interactive console, we can create an tf.InteractiveSession:
End of explanation
# We can find out the shape of a tensor like so:
print(x.get_shape())
# %% Or in a more friendly format
print(x.get_shape().as_list())
Explanation: Now we didn't have to explicitly tell the eval function about our session. We'll leave this session open for the rest of the lecture.
<a name="tensor-shapes"></a>
Tensor Shapes
End of explanation
# The 1 dimensional gaussian takes two parameters, the mean value, and the standard deviation, which is commonly denoted by the name sigma.
mean = 0.0
sigma = 1.0
# Don't worry about trying to learn or remember this formula. I always have to refer to textbooks or check online for the exact formula.
z = (tf.exp(tf.neg(tf.pow(x - mean, 2.0) /
(2.0 * tf.pow(sigma, 2.0)))) *
(1.0 / (sigma * tf.sqrt(2.0 * 3.1415))))
Explanation: <a name="many-operations"></a>
Many Operations
Let's try a set of operations now. We'll try to create a Gaussian curve. This should resemble a normalized histogram where most of the data is centered around the mean of 0. It's also sometimes referred to as the bell curve or normal curve.
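Written out, the curve computed above is the normal probability density with mean $\mu$ and standard deviation $\sigma$ (the code approximates $\pi$ as 3.1415):
$$z(x) = \frac{1}{\sigma\sqrt{2\pi}}\,\exp\!\left(-\frac{(x-\mu)^2}{2\sigma^2}\right)$$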
End of explanation
res = z.eval()
plt.plot(res)
# if nothing is drawn, and you are using ipython notebook, uncomment the next two lines:
#%matplotlib inline
#plt.plot(res)
Explanation: Just like before, amazingly, we haven't actually computed anything. We have just added a bunch of operations to Tensorflow's graph. Whenever we want the value or output of this operation, we'll have to explicitly ask for the part of the graph we're interested in before we can see its result. Since we've created an interactive session, we should just be able to say the name of the Tensor that we're interested in, and call the eval function:
End of explanation
# Let's store the number of values in our Gaussian curve.
ksize = z.get_shape().as_list()[0]
# Let's multiply the two to get a 2d gaussian
z_2d = tf.matmul(tf.reshape(z, [ksize, 1]), tf.reshape(z, [1, ksize]))
# Execute the graph
plt.imshow(z_2d.eval())
Explanation: <a name="convolution"></a>
Convolution
<a name="creating-a-2-d-gaussian-kernel"></a>
Creating a 2-D Gaussian Kernel
Let's try creating a 2-dimensional Gaussian. This can be done by multiplying a vector by its transpose. If you aren't familiar with matrix math, I'll review a few important concepts. This is about 98% of what neural networks do so if you're unfamiliar with this, then please stick with me through this and it'll be smooth sailing. First, to multiply two matrices, their inner dimensions must agree, and the resulting matrix will have the shape of the outer dimensions.
So let's say we have two matrices, X and Y. In order for us to multiply them, X's columns must match Y's rows. I try to remember it like so:
<pre>
(X_rows, X_cols) x (Y_rows, Y_cols)
| | | |
| |___________| |
| ^ |
| inner dimensions |
| must match |
| |
|__________________________|
^
resulting dimensions
of matrix multiplication
</pre>
But our matrix is actually a vector, or a 1 dimensional matrix. That means its dimensions are N x 1. So to multiply them, we'd have:
<pre>
(N, 1) x (1, N)
| | | |
| |___________| |
| ^ |
| inner dimensions |
| must match |
| |
|__________________________|
^
resulting dimensions
of matrix multiplication
</pre>
End of explanation
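As a tiny numeric sanity check of that (N, 1) x (1, N) idea, here is a sketch in plain numpy:
# (3, 1) x (1, 3) -> (3, 3): every pairwise product of the two vectors.
a_col = np.array([[1.0], [2.0], [3.0]])     # shape (3, 1)
b_row = np.array([[10.0, 20.0, 30.0]])      # shape (1, 3)
print(np.dot(a_col, b_row).shape)           # (3, 3)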
# Let's first load an image. We're going to need a grayscale image to begin with. skimage has some images we can play with. If you do not have the skimage module, you can load your own image, or get skimage by pip installing "scikit-image".
from skimage import data
img = data.camera().astype(np.float32)
plt.imshow(img, cmap='gray')
print(img.shape)
Explanation: <a name="convolving-an-image-with-a-gaussian"></a>
Convolving an Image with a Gaussian
A very common operation that we'll come across with Deep Learning is convolution. We're going to explore what this means using our new gaussian kernel that we've just created. For now, just think of it as a way of filtering information. We're going to effectively filter our image using this Gaussian function, as if the gaussian function is the lens through which we'll see our image data. What it will do is, at every location we tell it to filter, average the image values around it based on what the kernel's values are. The Gaussian kernel is basically saying: take a lot of the center, and then decreasingly less as you go farther away from the center. The effect of convolving the image with this type of kernel is that the entire image will be blurred. If you would like an interactive exploration of convolution, this website is great:
http://setosa.io/ev/image-kernels/
End of explanation
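To make the "weighted average" idea concrete before we convolve a real image, here is a toy, self-contained sketch of what happens at a single output location (the numbers are made up purely for illustration):
# Toy sketch: one output pixel is the image neighborhood weighted by the kernel and summed.
toy_patch = np.ones((3, 3), dtype=np.float32)            # a 3x3 neighborhood of an image
toy_kernel = np.array([[0.0, 0.1, 0.0],
                       [0.1, 0.6, 0.1],
                       [0.0, 0.1, 0.0]], dtype=np.float32)
print(np.sum(toy_patch * toy_kernel))                    # the filtered value at that location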
# We could use the numpy reshape function to reshape our numpy array
img_4d = img.reshape([1, img.shape[0], img.shape[1], 1])
print(img_4d.shape)
# but since we'll be using tensorflow, we can use the tensorflow reshape function:
img_4d = tf.reshape(img, [1, img.shape[0], img.shape[1], 1])
print(img_4d)
Explanation: Notice our img shape is 2-dimensional. For image convolution in Tensorflow, we need our images to be 4 dimensional. Remember that when we load many images and combine them into a single numpy array, the resulting shape has the number of images first.
N x H x W x C
In order to perform 2d convolution with tensorflow, we'll need the same dimensions for our image. With just 1 grayscale image, this means the shape will be:
1 x H x W x 1
End of explanation
print(img_4d.get_shape())
print(img_4d.get_shape().as_list())
Explanation: Instead of getting a numpy array back, we get a tensorflow tensor. This means we can't access the shape parameter like we did with the numpy array. But instead, we can use get_shape(), and get_shape().as_list():
End of explanation
# Reshape the 2d kernel to tensorflow's required 4d format: H x W x I x O
z_4d = tf.reshape(z_2d, [ksize, ksize, 1, 1])
print(z_4d.get_shape().as_list())
Explanation: The H x W image is now part of a 4 dimensional array, where the other dimensions of N and C are 1. So there is only 1 image and only 1 channel.
We'll also have to reshape our Gaussian Kernel to be 4-dimensional as well. The dimensions for kernels are slightly different! Remember that the image is:
Number of Images x Image Height x Image Width x Number of Channels
we have:
Kernel Height x Kernel Width x Number of Input Channels x Number of Output Channels
Our Kernel already has a height and width of ksize so we'll stick with that for now. The number of input channels should match the number of channels on the image we want to convolve. And for now, we just keep the same number of output channels as the input channels, but we'll later see how this comes into play.
End of explanation
convolved = tf.nn.conv2d(img_4d, z_4d, strides=[1, 1, 1, 1], padding='SAME')
res = convolved.eval()
print(res.shape)
Explanation: <a name="convolvefilter-an-image-using-a-gaussian-kernel"></a>
Convolve/Filter an image using a Gaussian Kernel
We can now use our previous Gaussian Kernel to convolve our image:
End of explanation
# Matplotlib cannot handle plotting 4D images! We'll have to convert this back to the original shape. There are a few ways we could do this. We could plot by "squeezing" the singleton dimensions.
plt.imshow(np.squeeze(res), cmap='gray')
# Or we could specify the exact dimensions we want to visualize:
plt.imshow(res[0, :, :, 0], cmap='gray')
Explanation: There are two new parameters here: strides, and padding. Strides says how to move our kernel across the image. Basically, we'll only ever use it for one of two sets of parameters:
[1, 1, 1, 1], which means, we are going to convolve every single image, every pixel, and every color channel by whatever the kernel is.
and the second option:
[1, 2, 2, 1], which means, we are going to convolve every single image, but every other pixel, in every single color channel.
Padding says what to do at the borders. If we say "SAME", that means we want the same dimensions going in as we do going out. In order to do this, zeros must be padded around the image. If we say "VALID", that means no padding is used, and the image dimensions will actually change.
End of explanation
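Here is a rough sketch of how those two choices change the output shape, reusing the img_4d and z_4d tensors from above (nothing is computed here; we only ask for the static shapes):
# Sketch: 'VALID' shrinks the spatial dimensions by (ksize - 1), and a stride of 2 roughly halves them.
same_out = tf.nn.conv2d(img_4d, z_4d, strides=[1, 1, 1, 1], padding='SAME')
valid_out = tf.nn.conv2d(img_4d, z_4d, strides=[1, 1, 1, 1], padding='VALID')
strided_out = tf.nn.conv2d(img_4d, z_4d, strides=[1, 2, 2, 1], padding='SAME')
print(same_out.get_shape().as_list())
print(valid_out.get_shape().as_list())
print(strided_out.get_shape().as_list())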
xs = tf.linspace(-3.0, 3.0, ksize)
Explanation: <a name="modulating-the-gaussian-with-a-sine-wave-to-create-gabor-kernel"></a>
Modulating the Gaussian with a Sine Wave to create Gabor Kernel
We've now seen how to use tensorflow to create a set of operations which create a 2-dimensional Gaussian kernel, and how to use that kernel to filter or convolve another image. Let's create another interesting convolution kernel called a Gabor. This is a lot like the Gaussian kernel, except we use a sine wave to modulate it.
<graphic: draw 1d gaussian wave, 1d sine, show modulation as multiplication and resulting gabor.>
We first use linspace to get a set of values in the same range as our gaussian, which should be from -3 standard deviations to +3 standard deviations.
End of explanation
ys = tf.sin(xs)
plt.figure()
plt.plot(ys.eval())
Explanation: We then calculate the sine of these values, which should give us a nice wave
End of explanation
ys = tf.reshape(ys, [ksize, 1])
Explanation: And for multiplication, we'll need to convert this 1-dimensional vector to a matrix: N x 1
End of explanation
ones = tf.ones((1, ksize))
wave = tf.matmul(ys, ones)
plt.imshow(wave.eval(), cmap='gray')
Explanation: We then repeat this wave across the matrix by using a multiplication of ones:
End of explanation
gabor = tf.mul(wave, z_2d)
plt.imshow(gabor.eval(), cmap='gray')
Explanation: We can directly multiply our old Gaussian kernel by this wave and get a gabor kernel:
End of explanation
# This is a placeholder which will become part of the tensorflow graph, but
# which we have to later explicitly define whenever we run/evaluate the graph.
# Pretty much everything you do in tensorflow can have a name. If we don't
# specify the name, tensorflow will give a default one, like "Placeholder_0".
# Let's use a more useful name to help us understand what's happening.
img = tf.placeholder(tf.float32, shape=[None, None], name='img')
# We'll reshape the 2d image to a 3-d tensor just like before:
# Except now we'll make use of another tensorflow function, expand dims, which adds a singleton dimension at the axis we specify.
# We use it to reshape our H x W image to include a channel dimension of 1
# our new dimensions will end up being: H x W x 1
img_3d = tf.expand_dims(img, 2)
dims = img_3d.get_shape()
print(dims)
# And again to get: 1 x H x W x 1
img_4d = tf.expand_dims(img_3d, 0)
print(img_4d.get_shape().as_list())
# Let's create another set of placeholders for our Gabor's parameters:
mean = tf.placeholder(tf.float32, name='mean')
sigma = tf.placeholder(tf.float32, name='sigma')
ksize = tf.placeholder(tf.int32, name='ksize')
# Then finally redo the entire set of operations we've done to convolve our
# image, except with our placeholders
x = tf.linspace(-3.0, 3.0, ksize)
z = (tf.exp(tf.neg(tf.pow(x - mean, 2.0) /
(2.0 * tf.pow(sigma, 2.0)))) *
(1.0 / (sigma * tf.sqrt(2.0 * 3.1415))))
z_2d = tf.matmul(
tf.reshape(z, tf.pack([ksize, 1])),
tf.reshape(z, tf.pack([1, ksize])))
ys = tf.sin(x)
ys = tf.reshape(ys, tf.pack([ksize, 1]))
ones = tf.ones(tf.pack([1, ksize]))
wave = tf.matmul(ys, ones)
gabor = tf.mul(wave, z_2d)
gabor_4d = tf.reshape(gabor, tf.pack([ksize, ksize, 1, 1]))
# And finally, convolve the two:
convolved = tf.nn.conv2d(img_4d, gabor_4d, strides=[1, 1, 1, 1], padding='SAME', name='convolved')
convolved_img = convolved[0, :, :, 0]
Explanation: <a name="manipulating-an-image-with-this-gabor"></a>
Manipulating an image with this Gabor
We've already gone through the work of convolving an image. The only thing that has changed is the kernel that we want to convolve with. We could have made life easier by specifying in our graph which elements we wanted to be specified later. Tensorflow calls these "placeholders", meaning we're not sure what these are yet, but we know they'll fit in the graph like so; typically they are the input and output of the network.
Let's rewrite our convolution operation using a placeholder for the image and the kernel and then see how the same operation could have been done. We're going to set the image dimensions to None x None. This is something special for placeholders which tells tensorflow "let this dimension be any possible value". 1, 5, 100, 1000, it doesn't matter.
End of explanation
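As a small, self-contained aside, here is a minimal sketch of the placeholder idea on its own, separate from the convolution graph (the names a and b are made up for this example):
# Minimal placeholder sketch: nothing is computed until values are fed in.
a = tf.placeholder(tf.float32, name='a')
b = tf.placeholder(tf.float32, name='b')
added = tf.add(a, b)
print(added.eval(feed_dict={a: 2.0, b: 3.0}))   # 5.0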
convolved_img.eval()
Explanation: What we've done is create an entire graph from our placeholders which is capable of convolving an image with a gabor kernel. In order to compute it, we have to specify all of the placeholders required for its computation.
If we try to evaluate it without specifying placeholders beforehand, we will get an error InvalidArgumentError: You must feed a value for placeholder tensor 'img' with dtype float and shape [512,512]:
End of explanation
convolved_img.eval(feed_dict={img: data.camera()})
Explanation: It's saying that we didn't specify our placeholder for img. In order to "feed a value", we use the feed_dict parameter like so:
End of explanation
res = convolved_img.eval(feed_dict={
img: data.camera(), mean:0.0, sigma:1.0, ksize:100})
plt.imshow(res, cmap='gray')
Explanation: But that's not the only placeholder in our graph! We also have placeholders for mean, sigma, and ksize. Once we specify all of them, we'll have our result:
End of explanation
res = convolved_img.eval(feed_dict={
img: data.camera(),
mean: 0.0,
sigma: 0.5,
ksize: 32
})
plt.imshow(res, cmap='gray')
Explanation: Now, instead of having to rewrite the entire graph, we can just specify the different placeholders.
End of explanation |
2,599 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Image Classification
In this project, you'll classify images from the CIFAR-10 dataset. The dataset consists of airplanes, dogs, cats, and other objects. You'll preprocess the images, then train a convolutional neural network on all the samples. The images need to be normalized and the labels need to be one-hot encoded. You'll get to apply what you learned and build convolutional, max pooling, dropout, and fully connected layers. At the end, you'll get to see your neural network's predictions on the sample images.
Get the Data
Run the following cell to download the CIFAR-10 dataset for python.
Step2: Explore the Data
The dataset is broken into batches to prevent your machine from running out of memory. The CIFAR-10 dataset consists of 5 batches, named data_batch_1, data_batch_2, etc.. Each batch contains the labels and images that are one of the following
Step4: Implement Preprocess Functions
Normalize
In the cell below, implement the normalize function to take in image data, x, and return it as a normalized Numpy array. The values should be in the range of 0 to 1, inclusive. The return object should be the same shape as x.
Step6: One-hot encode
Just like the previous code cell, you'll be implementing a function for preprocessing. This time, you'll implement the one_hot_encode function. The input, x, is a list of labels. Implement the function to return the list of labels as a One-Hot encoded Numpy array. The possible values for labels are 0 to 9. The one-hot encoding function should return the same encoding for each value between each call to one_hot_encode. Make sure to save the map of encodings outside the function.
Hint
Step7: Randomize Data
As you saw from exploring the data above, the order of the samples is randomized. It doesn't hurt to randomize it again, but you don't need to for this dataset.
Preprocess all the data and save it
Running the code cell below will preprocess all the CIFAR-10 data and save it to file. The code below also uses 10% of the training data for validation.
Step8: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
Step12: Build the network
For the neural network, you'll build each layer into a function. Most of the code you've seen has been outside of functions. To test your code more thoroughly, we require that you put each layer in a function. This allows us to give you better feedback and test for simple mistakes using our unittests before you submit your project.
Note
Step15: Convolution and Max Pooling Layer
Convolution layers have a lot of success with images. For this code cell, you should implement the function conv2d_maxpool to apply convolution then max pooling
Step17: Flatten Layer
Implement the flatten function to change the dimension of x_tensor from a 4-D tensor to a 2-D tensor. The output should be the shape (Batch Size, Flattened Image Size). Shortcut option
Step19: Fully-Connected Layer
Implement the fully_conn function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option
Step22: Output Layer
Implement the output function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option
Step24: Create Convolutional Model
Implement the function conv_net to create a convolutional neural network model. The function takes in a batch of images, x, and outputs logits. Use the layers you created above to create this model
Step26: Show Stats
Implement the function print_stats to print loss and validation accuracy. Use the global variables valid_features and valid_labels to calculate validation accuracy. Use a keep probability of 1.0 to calculate the loss and validation accuracy.
Step27: Hyperparameters
Tune the following parameters
Step28: Train on a Single CIFAR-10 Batch
Instead of training the neural network on all the CIFAR-10 batches of data, let's use a single batch. This should save time while you iterate on the model to get a better accuracy. Once the final validation accuracy is 50% or greater, run the model on all the data in the next section.
Step29: Fully Train the Model
Now that you got a good accuracy with a single CIFAR-10 batch, try it with all five batches.
Step31: Checkpoint
The model has been saved to disk.
Test Model
Test your model against the test dataset. This will be your final accuracy. You should have an accuracy greater than 50%. If you don't, keep tweaking the model architecture and parameters. | Python Code:
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
from urllib.request import urlretrieve
from os.path import isfile, isdir
from tqdm import tqdm
import problem_unittests as tests
import tarfile
cifar10_dataset_folder_path = 'cifar-10-batches-py'
tar_gz_path = 'cifar-10-python.tar.gz'
class DLProgress(tqdm):
last_block = 0
def hook(self, block_num=1, block_size=1, total_size=None):
self.total = total_size
self.update((block_num - self.last_block) * block_size)
self.last_block = block_num
if not isfile(tar_gz_path):
with DLProgress(unit='B', unit_scale=True, miniters=1, desc='CIFAR-10 Dataset') as pbar:
urlretrieve(
'https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz',
tar_gz_path,
pbar.hook)
if not isdir(cifar10_dataset_folder_path):
with tarfile.open(tar_gz_path) as tar:
tar.extractall()
tar.close()
tests.test_folder_path(cifar10_dataset_folder_path)
Explanation: Image Classification
In this project, you'll classify images from the CIFAR-10 dataset. The dataset consists of airplanes, dogs, cats, and other objects. You'll preprocess the images, then train a convolutional neural network on all the samples. The images need to be normalized and the labels need to be one-hot encoded. You'll get to apply what you learned and build convolutional, max pooling, dropout, and fully connected layers. At the end, you'll get to see your neural network's predictions on the sample images.
Get the Data
Run the following cell to download the CIFAR-10 dataset for python.
End of explanation
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import helper
import numpy as np
# Explore the dataset
batch_id = 1
sample_id = 5
helper.display_stats(cifar10_dataset_folder_path, batch_id, sample_id)
Explanation: Explore the Data
The dataset is broken into batches to prevent your machine from running out of memory. The CIFAR-10 dataset consists of 5 batches, named data_batch_1, data_batch_2, etc.. Each batch contains the labels and images that are one of the following:
* airplane
* automobile
* bird
* cat
* deer
* dog
* frog
* horse
* ship
* truck
Understanding a dataset is part of making predictions on the data. Play around with the code cell below by changing the batch_id and sample_id. The batch_id is the id for a batch (1-5). The sample_id is the id for a image and label pair in the batch.
Ask yourself "What are all possible labels?", "What is the range of values for the image data?", "Are the labels in order or random?". Answers to questions like these will help you preprocess the data and end up with better predictions.
End of explanation
def normalize(x):
Normalize a list of sample image data in the range of 0 to 1
: x: List of image data. The image shape is (32, 32, 3)
: return: Numpy array of normalize data
return (x - x.min())/(x.max() - x.min())
tests.test_normalize(normalize)
Explanation: Implement Preprocess Functions
Normalize
In the cell below, implement the normalize function to take in image data, x, and return it as a normalized Numpy array. The values should be in the range of 0 to 1, inclusive. The return object should be the same shape as x.
End of explanation
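For reference, a common simpler alternative is to divide by the maximum possible pixel value (a sketch that assumes the raw CIFAR-10 pixel values are 8-bit, i.e. in the range 0 to 255; the function name is made up here):
# Sketch of an alternative normalize for 8-bit image data.
def normalize_255(x):
    return x / 255.0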
import sklearn.preprocessing
label_binarizer = sklearn.preprocessing.LabelBinarizer()
label_binarizer.fit(range(10))
def one_hot_encode(x):
One hot encode a list of sample labels. Return a one-hot encoded vector for each label.
: x: List of sample Labels
: return: Numpy array of one-hot encoded labels
return label_binarizer.transform(x)
tests.test_one_hot_encode(one_hot_encode)
Explanation: One-hot encode
Just like the previous code cell, you'll be implementing a function for preprocessing. This time, you'll implement the one_hot_encode function. The input, x, is a list of labels. Implement the function to return the list of labels as a One-Hot encoded Numpy array. The possible values for labels are 0 to 9. The one-hot encoding function should return the same encoding for each value between each call to one_hot_encode. Make sure to save the map of encodings outside the function.
Hint: Don't reinvent the wheel.
End of explanation
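For reference, here is a rough numpy-only equivalent (the function name is made up; it assumes the labels are integers 0 to 9):
# Sketch: each label indexes a row of the 10x10 identity matrix.
def one_hot_encode_np(x):
    return np.eye(10)[np.array(x)]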
# Preprocess Training, Validation, and Testing Data
helper.preprocess_and_save_data(cifar10_dataset_folder_path, normalize, one_hot_encode)
Explanation: Randomize Data
As you saw from exploring the data above, the order of the samples is randomized. It doesn't hurt to randomize it again, but you don't need to for this dataset.
Preprocess all the data and save it
Running the code cell below will preprocess all the CIFAR-10 data and save it to file. The code below also uses 10% of the training data for validation.
End of explanation
import pickle
import problem_unittests as tests
import helper
# Load the Preprocessed Validation data
valid_features, valid_labels = pickle.load(open('preprocess_validation.p', mode='rb'))
Explanation: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
End of explanation
import tensorflow as tf
def neural_net_image_input(image_shape):
Return a Tensor for a batch of image input
: image_shape: Shape of the images
: return: Tensor for image input.
return tf.placeholder(tf.float32, [None, image_shape[0], image_shape[1], image_shape[2]], 'x')
def neural_net_label_input(n_classes):
Return a Tensor for a batch of label input
: n_classes: Number of classes
: return: Tensor for label input.
return tf.placeholder(tf.float32, [None, n_classes], 'y')
def neural_net_keep_prob_input():
Return a Tensor for keep probability
: return: Tensor for keep probability.
return tf.placeholder(tf.float32, name='keep_prob')
tf.reset_default_graph()
tests.test_nn_image_inputs(neural_net_image_input)
tests.test_nn_label_inputs(neural_net_label_input)
tests.test_nn_keep_prob_inputs(neural_net_keep_prob_input)
Explanation: Build the network
For the neural network, you'll build each layer into a function. Most of the code you've seen has been outside of functions. To test your code more thoroughly, we require that you put each layer in a function. This allows us to give you better feedback and test for simple mistakes using our unittests before you submit your project.
Note: If you're finding it hard to dedicate enough time for this course each week, we've provided a small shortcut to this part of the project. In the next couple of problems, you'll have the option to use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages to build each layer, except the layers you build in the "Convolutional and Max Pooling Layer" section. TF Layers is similar to Keras's and TFLearn's abstraction of layers, so it's easy to pick up.
However, if you would like to get the most out of this course, try to solve all the problems without using anything from the TF Layers packages. You can still use classes from other packages that happen to have the same name as ones you find in TF Layers! For example, instead of using the TF Layers version of the conv2d class, tf.layers.conv2d, you would want to use the TF Neural Network version of conv2d, tf.nn.conv2d.
Let's begin!
Input
The neural network needs to read the image data, one-hot encoded labels, and dropout keep probability. Implement the following functions
* Implement neural_net_image_input
* Return a TF Placeholder
* Set the shape using image_shape with batch size set to None.
* Name the TensorFlow placeholder "x" using the TensorFlow name parameter in the TF Placeholder.
* Implement neural_net_label_input
* Return a TF Placeholder
* Set the shape using n_classes with batch size set to None.
* Name the TensorFlow placeholder "y" using the TensorFlow name parameter in the TF Placeholder.
* Implement neural_net_keep_prob_input
* Return a TF Placeholder for dropout keep probability.
* Name the TensorFlow placeholder "keep_prob" using the TensorFlow name parameter in the TF Placeholder.
These names will be used at the end of the project to load your saved model.
Note: None for shapes in TensorFlow allow for a dynamic size.
End of explanation
def conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides, ):
Apply convolution then max pooling to x_tensor
:param x_tensor: TensorFlow Tensor
:param conv_num_outputs: Number of outputs for the convolutional layer
:param conv_ksize: kernel size 2-D Tuple for the convolutional layer
:param conv_strides: Stride 2-D Tuple for convolution
:param pool_ksize: kernel size 2-D Tuple for pool
:param pool_strides: Stride 2-D Tuple for pool
: return: A tensor that represents convolution and max pooling of x_tensor
weights = tf.Variable(tf.truncated_normal([conv_ksize[0], conv_ksize[1], x_tensor.get_shape().as_list()[-1], conv_num_outputs], stddev=0.1))
bias = tf.Variable(tf.zeros([conv_num_outputs]))
conv1 = tf.nn.relu(tf.nn.bias_add(tf.nn.conv2d(x_tensor, weights, [1, conv_strides[0], conv_strides[1], 1], 'SAME'), bias))
return tf.nn.max_pool(conv1, [1, pool_ksize[0], pool_ksize[1], 1], [1, pool_strides[0], pool_strides[1], 1], 'SAME')
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_con_pool(conv2d_maxpool)
Explanation: Convolution and Max Pooling Layer
Convolution layers have a lot of success with images. For this code cell, you should implement the function conv2d_maxpool to apply convolution then max pooling:
* Create the weight and bias using conv_ksize, conv_num_outputs and the shape of x_tensor.
* Apply a convolution to x_tensor using weight and conv_strides.
* We recommend you use same padding, but you're welcome to use any padding.
* Add bias
* Add a nonlinear activation to the convolution.
* Apply Max Pooling using pool_ksize and pool_strides.
* We recommend you use same padding, but you're welcome to use any padding.
Note: You can't use TensorFlow Layers or TensorFlow Layers (contrib) for this layer, but you can still use TensorFlow's Neural Network package. You may still use the shortcut option for all the other layers.
End of explanation
def flatten(x_tensor):
Flatten x_tensor to (Batch Size, Flattened Image Size)
: x_tensor: A tensor of size (Batch Size, ...), where ... are the image dimensions.
: return: A tensor of size (Batch Size, Flattened Image Size).
size = x_tensor.get_shape().as_list()
return tf.reshape(x_tensor, shape=[-1, size[1] * size[2] * size[3]])
tests.test_flatten(flatten)
Explanation: Flatten Layer
Implement the flatten function to change the dimension of x_tensor from a 4-D tensor to a 2-D tensor. The output should be the shape (Batch Size, Flattened Image Size). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.
End of explanation
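For reference, the shortcut option mentioned above would look roughly like this (this assumes a TF 1.x install where the contrib layers package is available; the function name is made up):
# Sketch of the shortcut option using the TF Layers (contrib) package.
def flatten_shortcut(x_tensor):
    return tf.contrib.layers.flatten(x_tensor)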
def fully_conn(x_tensor, num_outputs):
Apply a fully connected layer to x_tensor using weight and bias
: x_tensor: A 2-D tensor where the first dimension is batch size.
: num_outputs: The number of output that the new tensor should be.
: return: A 2-D tensor where the second dimension is num_outputs.
weights = tf.Variable(tf.truncated_normal([x_tensor.get_shape().as_list()[1], num_outputs], stddev=0.1))
bias = tf.Variable(tf.zeros([num_outputs]))
return tf.nn.relu(tf.nn.bias_add(tf.matmul(x_tensor, weights), bias))
tests.test_fully_conn(fully_conn)
Explanation: Fully-Connected Layer
Implement the fully_conn function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.
End of explanation
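Likewise, a rough sketch of the shortcut option for this layer (again assuming the TF 1.x contrib layers package; fully_connected applies a ReLU activation by default, and the function name is made up):
# Sketch of the shortcut option using the TF Layers (contrib) package.
def fully_conn_shortcut(x_tensor, num_outputs):
    return tf.contrib.layers.fully_connected(x_tensor, num_outputs)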
def output(x_tensor, num_outputs):
Apply a output layer to x_tensor using weight and bias
: x_tensor: A 2-D tensor where the first dimension is batch size.
: num_outputs: The number of output that the new tensor should be.
: return: A 2-D tensor where the second dimension is num_outputs.
weights = tf.Variable(tf.truncated_normal([x_tensor.get_shape().as_list()[1], num_outputs], stddev=0.1))
bias = tf.Variable(tf.zeros([num_outputs]))
return tf.nn.bias_add(tf.matmul(x_tensor, weights), bias)
tests.test_output(output)
def conv_net(x, keep_prob):
Create a convolutional neural network model
: x: Placeholder tensor that holds image data.
: keep_prob: Placeholder tensor that hold dropout keep probability.
: return: Tensor that represents logits
conv1 = conv2d_maxpool(x, 64, [2, 2], [1, 1], [2, 2], [2, 2])
conv1 = conv2d_maxpool(conv1, 128, [3, 3], [1, 1], [2, 2], [2, 2])
conv1 = conv2d_maxpool(conv1, 256, [3, 3], [1, 1], [2, 2], [2, 2])
conv1 = flatten(conv1)
conv1 = fully_conn(conv1, 2500)
conv1 = tf.nn.dropout(conv1, keep_prob)
return output(conv1, 10)
##############################
## Build the Neural Network ##
##############################
# Remove previous weights, bias, inputs, etc..
tf.reset_default_graph()
# Inputs
x = neural_net_image_input((32, 32, 3))
y = neural_net_label_input(10)
keep_prob = neural_net_keep_prob_input()
# Model
logits = conv_net(x, keep_prob)
# Name logits Tensor, so that is can be loaded from disk after training
logits = tf.identity(logits, name='logits')
# Loss and Optimizer
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y))
optimizer = tf.train.AdamOptimizer().minimize(cost)
# Accuracy
correct_pred = tf.equal(tf.argmax(logits, 1), tf.argmax(y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32), name='accuracy')
tests.test_conv_net(conv_net)
Explanation: Output Layer
Implement the output function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.
Note: Activation, softmax, or cross entropy should not be applied to this.
End of explanation
def train_neural_network(session, optimizer, keep_probability, feature_batch, label_batch):
Optimize the session on a batch of images and labels
: session: Current TensorFlow session
: optimizer: TensorFlow optimizer function
: keep_probability: keep probability
: feature_batch: Batch of Numpy image data
: label_batch: Batch of Numpy label data
session.run(optimizer, feed_dict={x: feature_batch, y: label_batch , keep_prob: keep_probability})
tests.test_train_nn(train_neural_network)
Explanation: Create Convolutional Model
Implement the function conv_net to create a convolutional neural network model. The function takes in a batch of images, x, and outputs logits. Use the layers you created above to create this model:
Apply 1, 2, or 3 Convolution and Max Pool layers
Apply a Flatten Layer
Apply 1, 2, or 3 Fully Connected Layers
Apply an Output Layer
Return the output
Apply TensorFlow's Dropout to one or more layers in the model using keep_prob.
Train the Neural Network
Single Optimization
Implement the function train_neural_network to do a single optimization. The optimization should use optimizer to optimize in session with a feed_dict of the following:
* x for image input
* y for labels
* keep_prob for keep probability for dropout
This function will be called for each batch, so tf.global_variables_initializer() has already been called.
Note: Nothing needs to be returned. This function is only optimizing the neural network.
End of explanation
def print_stats(session, feature_batch, label_batch, cost, accuracy):
Print information about loss and validation accuracy
: session: Current TensorFlow session
: feature_batch: Batch of Numpy image data
: label_batch: Batch of Numpy label data
: cost: TensorFlow cost function
: accuracy: TensorFlow accuracy function
cost_val = session.run(cost, feed_dict={x: feature_batch, y: label_batch , keep_prob: 1})
accuracy_val = session.run(accuracy, feed_dict={x: valid_features, y: valid_labels , keep_prob: 1})
print("Cost: {} Accuracy: {}".format(cost_val, accuracy_val))
Explanation: Show Stats
Implement the function print_stats to print loss and validation accuracy. Use the global variables valid_features and valid_labels to calculate validation accuracy. Use a keep probability of 1.0 to calculate the loss and validation accuracy.
End of explanation
# TODO: Tune Parameters
epochs = 50
batch_size = 256
keep_probability = 0.75
Explanation: Hyperparameters
Tune the following parameters:
* Set epochs to the number of iterations until the network stops learning or start overfitting
* Set batch_size to the highest number that your machine has memory for. Most people set them to common sizes of memory:
* 64
* 128
* 256
* ...
* Set keep_probability to the probability of keeping a node using dropout
End of explanation
print('Checking the Training on a Single Batch...')
with tf.Session() as sess:
# Initializing the variables
sess.run(tf.global_variables_initializer())
# Training cycle
for epoch in range(epochs):
batch_i = 1
for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):
train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)
print('Epoch {:>2}, CIFAR-10 Batch {}: '.format(epoch + 1, batch_i), end='')
print_stats(sess, batch_features, batch_labels, cost, accuracy)
Explanation: Train on a Single CIFAR-10 Batch
Instead of training the neural network on all the CIFAR-10 batches of data, let's use a single batch. This should save time while you iterate on the model to get a better accuracy. Once the final validation accuracy is 50% or greater, run the model on all the data in the next section.
End of explanation
save_model_path = './image_classification'
print('Training...')
with tf.Session() as sess:
# Initializing the variables
sess.run(tf.global_variables_initializer())
# Training cycle
for epoch in range(epochs):
# Loop over all batches
n_batches = 5
for batch_i in range(1, n_batches + 1):
for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):
train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)
print('Epoch {:>2}, CIFAR-10 Batch {}: '.format(epoch + 1, batch_i), end='')
print_stats(sess, batch_features, batch_labels, cost, accuracy)
# Save Model
saver = tf.train.Saver()
save_path = saver.save(sess, save_model_path)
Explanation: Fully Train the Model
Now that you got a good accuracy with a single CIFAR-10 batch, try it with all five batches.
End of explanation
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import tensorflow as tf
import pickle
import helper
import random
# Set batch size if not already set
try:
if batch_size:
pass
except NameError:
batch_size = 64
save_model_path = './image_classification'
n_samples = 4
top_n_predictions = 3
def test_model():
Test the saved model against the test dataset
test_features, test_labels = pickle.load(open('preprocess_test.p', mode='rb'))
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load model
loader = tf.train.import_meta_graph(save_model_path + '.meta')
loader.restore(sess, save_model_path)
# Get Tensors from loaded model
loaded_x = loaded_graph.get_tensor_by_name('x:0')
loaded_y = loaded_graph.get_tensor_by_name('y:0')
loaded_keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')
loaded_logits = loaded_graph.get_tensor_by_name('logits:0')
loaded_acc = loaded_graph.get_tensor_by_name('accuracy:0')
# Get accuracy in batches for memory limitations
test_batch_acc_total = 0
test_batch_count = 0
for test_feature_batch, test_label_batch in helper.batch_features_labels(test_features, test_labels, batch_size):
test_batch_acc_total += sess.run(
loaded_acc,
feed_dict={loaded_x: test_feature_batch, loaded_y: test_label_batch, loaded_keep_prob: 1.0})
test_batch_count += 1
print('Testing Accuracy: {}\n'.format(test_batch_acc_total/test_batch_count))
# Print Random Samples
random_test_features, random_test_labels = tuple(zip(*random.sample(list(zip(test_features, test_labels)), n_samples)))
random_test_predictions = sess.run(
tf.nn.top_k(tf.nn.softmax(loaded_logits), top_n_predictions),
feed_dict={loaded_x: random_test_features, loaded_y: random_test_labels, loaded_keep_prob: 1.0})
helper.display_image_predictions(random_test_features, random_test_labels, random_test_predictions)
test_model()
Explanation: Checkpoint
The model has been saved to disk.
Test Model
Test your model against the test dataset. This will be your final accuracy. You should have an accuracy greater than 50%. If you don't, keep tweaking the model architecture and parameters.
End of explanation |